Cybersecurity and AI intertwined: VPN provider launches AI tool
- NordVPN introduces Sonar, an AI tool that identifies and educates users about phishing, to strengthen cybersecurity habits.
- NordLabs community members get early access to Sonar and contribute to its continuous improvement.
- More tools are coming soon, including one that distinguishes AI-generated images from those that are not.
In an age where our email accounts are not just a means of communication but also a gateway to our personal and professional worlds, understanding how to tell a legitimate email from a phishing attempt has become essential. For most people, emails are a vital link to work activities, social engagements, and financial transactions, making them a prime target for cybercriminals. A single misstep in identifying a phishing email could lead to disastrous consequences that range from financial loss to identity theft.
Phishing attacks are not merely increasing in frequency but also evolving in complexity. Cybercriminals leverage the current dynamic and often confusing digital environment to their advantage. Advances in cybersecurity techniques are being matched and often outpaced by attackers utilizing artificial intelligence (AI) and machine learning to craft more intelligent, believable phishing emails that can easily deceive even a vigilant eye.
Sonar: The phishing attack killer
So, what could be the solution to this growing problem? What if we could employ the same cutting-edge technology used by attackers, but to fortify our digital walls? This is the challenge that NordVPN has chosen to tackle. Within the framework of its NordLabs initiative, the company has developed and launched Sonar, an AI-powered browser extension. The tool is designed to identify phishing scams and educate users about their hazards and telltale markers, thereby turning the tables on would-be attackers.
This initiative is more than just a product launch; it’s a call to arms for all internet users. It’s an invitation to actively participate in a revolutionary effort aimed at fortifying personal cybersecurity and contributing to a larger global initiative to make the digital space safer for everyone, everywhere.
Vykintas Maknickas, the chief strategist for product development at Nord Security, explained the gravity of the situation. He noted that AI technology has significantly amplified the scale and effectiveness of phishing attacks. Without intervention, this trend is set to grow exponentially, creating an untenable cybersecurity risk for individuals and organizations alike.
“Sonar is based on the large language model technology used by ChatGPT so it will help internet users better identify phishing emails in the changing environment of cybercrimes,” said Maknickas.
Crafted by a multidisciplinary team of engineers and developers, Sonar is more than a warning system. It actively analyzes incoming emails to gauge the likelihood that they are phishing attempts, provides a breakdown of its assessment, highlights the particular features of an email that contributed to its risk score, and educates users on spotting those red flags in the future. Sonar is launching initially for Gmail users on the Google Chrome browser, and plans are already in motion to extend its reach to other platforms and email services.
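Sonar's internals have not been published, but the kind of output described above, a risk score plus the features that contributed to it, can be illustrated with a toy rule-based scorer. The rules, weights, and function names below are invented for demonstration and are not NordVPN's code; a real system, such as the LLM-based one Maknickas describes, would learn these signals rather than hard-code them.

```python
import re
from dataclasses import dataclass, field

# Illustrative sketch only: toy rules and weights, not Sonar's actual logic.

@dataclass
class Assessment:
    score: float = 0.0                      # 0.0 (benign) .. 1.0 (very likely phishing)
    red_flags: list[str] = field(default_factory=list)

def assess_email(sender: str, subject: str, body: str) -> Assessment:
    """Score an email and record which red flags drove the score."""
    result = Assessment()
    checks = [
        (0.3, "urgent or threatening language",
         re.search(r"\b(urgent|immediately|suspended|verify now)\b", subject + " " + body, re.I)),
        (0.3, "link text that hides a different destination",
         re.search(r'href="http[^"]+"[^>]*>\s*http', body, re.I)),
        (0.2, "sender domain imitating a known brand",
         re.search(r"@.*(paypa1|g00gle|micros0ft)", sender, re.I)),
        (0.2, "request for credentials or payment details",
         re.search(r"\b(password|credit card|bank details)\b", body, re.I)),
    ]
    for weight, label, hit in checks:
        if hit:
            result.score = min(1.0, result.score + weight)
            result.red_flags.append(label)
    return result

if __name__ == "__main__":
    report = assess_email(
        sender="security@paypa1-support.com",
        subject="URGENT: your account will be suspended",
        body='Verify now or lose access. <a href="http://evil.example">http://paypal.com</a>',
    )
    print(f"risk score: {report.score:.1f}")
    for flag in report.red_flags:
        print(" -", flag)
```

The point of the sketch is the shape of the result, a score accompanied by human-readable explanations, which is what lets a tool like Sonar teach users to spot the same red flags themselves.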
A community approach to cybersecurity
Not a company to rest on its laurels, NordVPN recently inaugurated NordLabs as a specialized research and development platform. It serves as an incubator for exploring emerging technologies like AI and crafting novel tools and services to bolster internet security and user privacy. Being a part of the NordLabs community offers a ‘VIP pass’ to witness and test innovations before they become available to the general public.
NordVPN is gearing up to introduce another project on the NordLabs platform in September. Dubbed Pixray, this AI-powered tool aims to differentiate between images generated by AI and those created through traditional photography or graphic design.
The future of cybersecurity and AI: What lies ahead
The dual nature of AI as both a weapon and a shield in cybersecurity is becoming increasingly apparent. Case in point: there have been documented instances where cybercriminal groups have used AI algorithms to manipulate social media trends and even disseminate disinformation, particularly relating to socially critical issues like the COVID-19 pandemic.
Even more troubling is the advent of AI in more direct criminal activity. In one notable case, AI voice imitation technology was used to mimic a CEO’s voice, convincing an executive to transfer a staggering US$243,000 to fraudsters. It is a chilling reminder of the lengths to which AI can be turned to malicious ends.
The ethical considerations surrounding AI are under intense scrutiny too. Despite being designed to adhere to ethical guidelines, even ChatGPT is not immune to exploitation. Cybersecurity experts have discovered discussion threads on underground hacking forums where hackers claim to manipulate ChatGPT to recreate malware strains.
Given that one such case has come to light, it is reasonable to surmise that several more lurk in the dark web’s recesses. This creates an urgent need for cybersecurity professionals to constantly evolve their tactics and tools to counter increasingly sophisticated, AI-enabled threats.