By Steve Sanders
Artificial intelligence (AI) has dominated headlines and conversations over the last year, with various industries exploring what this technology means for automation, collaboration, communication and more. However, those opportunities to streamline processes come with new challenges, particularly in cybersecurity. Banks should know the following perils and tips to minimize risk when incorporating this technology into their strategies.
What Cybersecurity Risks Does AI Pose?
One of AI’s chief dangers is how bad actors can use it to streamline cyber attacks. According to Darktrace, there was a 135% spike in novel social engineering attacks from January to February 2023, aligning with the adoption of ChatGPT. Other reports reveal that 75% of cybersecurity professionals have seen an increase in AI-powered cyber attacks since 2023.
Here are some of the ways cybercriminals use AI, and the cybersecurity risks they create for your institution.
Increased Speed and Scalability of Attacks: Instead of writing their own phishing emails or scripts, cybercriminals prompt AI to create them in seconds. That same automation makes it easier to launch personalized phishing attacks at greater speed and scale. With targeted ransomware, attackers can use AI and machine learning (ML) to build accurate profiles of their targets.
Deepfake Attacks: Cybercriminals also use deepfakes (AI-generated photos, videos or audio) to carry out identity theft and other social engineering schemes. This includes using fake audio on phone calls to execute account takeovers or request fraudulent wire transfers. Since deepfakes come in varying levels of sophistication, vigilance is critical.
Circumventing Security Protections: Fraudsters leverage AI to navigate around institutions’ security protections. Using AI, malware can adapt and evade detection by identifying patterns in detection systems and bypassing them. AI can also rewrite malicious code to make it more difficult to detect.
Talent Shortage and Skills Gap: Since cybercriminals are often on the cutting edge of technology and tactics, having employees who share that mindset helps institutions elevate defenses. However, talent is expensive, and the market for highly skilled cybersecurity professionals is highly competitive. Many institutions turn to trusted managed cybersecurity providers to help bridge this gap.
How to Use AI for Cybersecurity
Despite the above risks, AI’s positive effects on cybersecurity cannot be denied. AI is accelerating the advancement of tools like automated security operations, malware protection and authentication. Many organizations are automating security operations to increase efficiency, with 51% having expanded automation or AI in their cybersecurity strategy over the last two years. Below are several advantages AI brings to security.
1. Enhance Vulnerability Management
Institutions can use AI to determine which vulnerabilities within their systems are most likely to be exploited, allowing them to prioritize remediation and reduce expenses for incident response.
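As a rough illustration of that prioritization, a team could blend an exploit-likelihood estimate (such as an EPSS score), severity and asset criticality into a single ranking. The sketch below is a minimal example under those assumptions; the CVE identifiers, scores and weights are placeholders, not output from any real scanner.

```python
# Minimal sketch: rank vulnerabilities by exploit likelihood, severity and asset value.
# All CVE IDs, scores and weights are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str               # vulnerability identifier from the scanner
    epss_score: float         # estimated probability of exploitation (0.0-1.0)
    cvss_base: float          # severity score (0.0-10.0)
    asset_criticality: float  # business weight of the affected system (0.0-1.0)

def priority(finding: Finding) -> float:
    """Blend exploit likelihood, severity and asset value into one ranking score."""
    return finding.epss_score * (finding.cvss_base / 10.0) * finding.asset_criticality

# Placeholder findings; real scores would come from a scanner and an EPSS feed.
findings = [
    Finding("CVE-2024-0001", epss_score=0.92, cvss_base=9.8, asset_criticality=1.0),
    Finding("CVE-2024-0002", epss_score=0.03, cvss_base=7.5, asset_criticality=0.4),
    Finding("CVE-2024-0003", epss_score=0.45, cvss_base=6.1, asset_criticality=0.9),
]

# Remediate the highest-priority findings first.
for finding in sorted(findings, key=priority, reverse=True):
    print(f"{finding.cve_id}: priority {priority(finding):.2f}")
```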
2. Analyze Data Efficiently
AI/ML can strengthen threat detection by analyzing large amounts of data such as emails, website links and more to identify patterns and trends. For example, AI can evaluate email content to see if common phishing indicators, such as a sense of urgency, are present.
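As a minimal sketch of what such an indicator check might look for, the example below flags urgency phrases and links that point at bare IP addresses. The keyword list is an assumption for illustration; a production AI filter would learn these signals from data rather than rely on a hand-written list.

```python
# Minimal sketch: flag common phishing indicators in an email body.
# The keyword list and sample message are illustrative, not a production model.
import re

URGENCY_TERMS = ("act now", "immediately", "account suspended",
                 "verify your account", "urgent wire transfer")
IP_URL = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}")  # links pointing at bare IP addresses

def phishing_indicators(body: str) -> list[str]:
    """Return the simple indicators found in an email body."""
    text = body.lower()
    hits = [f"urgency phrase: '{term}'" for term in URGENCY_TERMS if term in text]
    if IP_URL.search(body):
        hits.append("link to a bare IP address")
    return hits

sample = "Your account is suspended. Act now: http://192.0.2.10/reset"
print(phishing_indicators(sample))
```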
3. Automate Manual Tasks
Some routine tasks—such as security log monitoring—can be tedious or time-consuming. Automating these tasks using AI allows employees to work on more rewarding activities. AI also helps close out investigations and support tickets faster, as in the case of spam or phishing emails.
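For example, a routine log review can be scripted so analysts only see the exceptions. The sketch below assumes a simple, hypothetical log format ("FAILED LOGIN ... src=...") and an illustrative alert threshold.

```python
# Minimal sketch: automate a routine log review by flagging repeated failed logins.
# The log format and threshold are assumptions; real environments vary.
from collections import Counter

FAILURE_THRESHOLD = 5  # illustrative cutoff for raising an alert

def flag_suspicious_sources(log_lines: list[str]) -> list[str]:
    """Count failed logins per source address and return the ones worth a look."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line and "src=" in line:
            source = line.split("src=")[1].split()[0]
            failures[source] += 1
    return [source for source, count in failures.items() if count >= FAILURE_THRESHOLD]

# Placeholder log lines in the assumed format.
logs = [f"2024-05-01T08:0{i}:00 FAILED LOGIN user=jdoe src=203.0.113.7" for i in range(6)]
print(flag_suspicious_sources(logs))  # ['203.0.113.7']
```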
4. Increase Accuracy in Detecting Unusual Activity
In some instances, AI can detect abnormal behavior more quickly and accurately than humans, especially when there are high volumes of cases to evaluate. AI can transform an institution’s approach to log reviews, spam emails or anything else that requires quick analysis of a large amount of data.
5. Improve Security against Evolving Threats
Traditional cybersecurity methods may be slow to adapt to new or evolving threats, but AI quickly adapts to identify new threats and elevates threat intelligence. AI can also automate incident response, containing a breach within seconds, and improve network security by locating and securing weak spots. Additionally, AI is strengthening attack prediction with behavioral and predictive analytics, which will likely only grow more accurate.
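To show what automated containment might look like, the sketch below quarantines a host when an alert crosses a severity threshold. The EDR endpoint, token handling and payload are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch: automatically isolate a host when a high-severity alert fires.
# The API endpoint, token and alert fields are hypothetical placeholders.
import requests

EDR_API = "https://edr.example.internal/api/v1/isolate"  # hypothetical endpoint
API_TOKEN = "REDACTED"  # in practice, pulled from a secrets manager
SEVERITY_CUTOFF = 8     # illustrative threshold on a 1-10 scale

def contain_if_critical(alert: dict) -> bool:
    """Quarantine the affected host if the alert meets the severity cutoff."""
    if alert.get("severity", 0) < SEVERITY_CUTOFF:
        return False
    response = requests.post(
        EDR_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"hostname": alert["hostname"], "reason": alert["rule"]},
        timeout=10,
    )
    response.raise_for_status()
    return True

# Example alert as it might arrive from a detection pipeline (illustrative fields).
contain_if_critical({"hostname": "wkstn-042", "rule": "ransomware-behavior", "severity": 9})
```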
6. Alert on Anomalous Behavior
Since AI can analyze enormous amounts of data, it can identify patterns and detect anomalies by distinguishing between normal and unusual behavior without requiring a human to investigate every case. If unusual activity occurs on a network, AI sends real-time alerts. Threat detection and response are already highly accurate, and AI further strengthens post-attack investigations and forensics.
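One common way to implement this kind of anomaly detection is an unsupervised model, such as an isolation forest, trained on features of normal activity. The features and sample values below are assumptions chosen for illustration, not a recommendation of any particular product.

```python
# Minimal sketch: learn "normal" behavior and flag anomalies with an isolation forest.
# Feature choices and sample values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB transferred, failed logins in the session]
normal_activity = np.array([
    [9, 120, 0], [10, 95, 0], [11, 150, 1], [14, 80, 0],
    [15, 110, 0], [16, 130, 0], [9, 100, 0], [13, 90, 1],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# A 3 a.m. session moving far more data with many failed logins should stand out.
new_sessions = np.array([[10, 105, 0], [3, 4000, 7]])
print(model.predict(new_sessions))  # 1 = looks normal, -1 = flag for review
```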
Evaluating Cybersecurity and AI
Before implementing AI in your cybersecurity strategy, there are various considerations to keep in mind. One of the top considerations for your institution should be network security. Review your configurations to ensure no exploitable vulnerabilities exist.
Securing your data is critical to ensure your AI engine is not training on confidential information. Are your files and confidential information protected? If you have a strategic plan or other proprietary documents saved in an area used for training your AI model, that confidential data could surface in the model’s output. Banks should understand their controls around data security before moving forward with an AI engine.
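One simple control is to screen any folder staged for model training before it is used. The sketch below checks filenames and file contents against a list of confidentiality markers; the markers and directory path are illustrative assumptions, not a substitute for a full data classification program.

```python
# Minimal sketch: screen a staging folder for confidential material before it is
# used to train or fine-tune a model. Markers and path are illustrative.
from pathlib import Path

CONFIDENTIAL_MARKERS = ("confidential", "internal use only", "strategic plan")

def files_to_exclude(staging_dir: str) -> list[Path]:
    """Flag text files in the staging folder that look like confidential material."""
    flagged = []
    for path in Path(staging_dir).rglob("*.txt"):
        contents = path.read_text(errors="ignore").lower()
        if any(marker in path.name.lower() or marker in contents
               for marker in CONFIDENTIAL_MARKERS):
            flagged.append(path)
    return flagged

# Review and remove flagged files before any training run (path is a placeholder).
for path in files_to_exclude("./ai_training_staging"):
    print(f"Exclude from training data: {path}")
```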
If your institution uses publicly available tools like ChatGPT, ensure employees understand the risks of uploading sensitive documents or information. Refer to available AI guidance from regulators before encouraging or allowing employees to leverage these tools.
How Should Community Banks Approach AI and Cybersecurity?
Given AI’s prevalence in the industry, it seems this technology is here for the long haul. Community banks should seize the opportunity to educate and train their employees, customers and communities about AI and cybersecurity best practices.
Community banks can directly contribute to a stronger, safer community and customer base by providing education on AI’s perils and pitfalls. Training topics should include educating customers on identifying AI-generated schemes, how to report them and best practices for navigating this new AI landscape.
For instance, as social engineering schemes evolve, banks should ensure that customers know what questions their institution will and won’t ask them via phone, text or email.
Moving Forward with Your AI Strategy
AI stands to revolutionize cyber defense for organizations in various industries, including community banks. And as cybercriminals continue leveraging AI to improve their attacks, institutions must figure out how to use it effectively to beat them at their own game. This means adapting and implementing new tools, all while ensuring proper controls remain in place to mitigate risk.
As institutions develop strategies to supplement existing cybersecurity measures with AI, it’s important to consider the risks and rewards. But one thing is clear: AI is set to keep shaping the technology landscape.
About the Author
Steve Sanders serves as CSI’s chief risk officer and chief information security officer. In his role, Steve leads enterprise risk management and other key components of CSI’s corporate compliance program, including privacy and business continuity. He also oversees threat and vulnerability management as well as information security strategy and awareness programs. With more than 15 years of experience focused on cybersecurity, information security and privacy, he employs his strong background in audit, information security and IT security.