AI is not just reshaping our classrooms; it is also becoming a crucial part of the ever-evolving cybersecurity threats targeting our schools. Hackers and defenders alike are using AI to great effect.
The key question that we consider is: How will this affect the security of PowerSchool’s products?
That is an important question, especially when you consider that the education sector is under siege from hackers. Ransomware attacks were up 70% in 2023, and phishing remains a constant threat, causing 30% of cyber incidents that year.
School administrators are increasingly having to deal with the direct and indirect fallout from these cyber incidents. For those directly involved in a breach, this means working with law enforcement while handling a media circus at the same time. Schools and districts that have not experienced a breach are finding it more difficult to obtain cyber liability insurance. And everyone must deal with worried parents and families.
By understanding both the benefits and risks of AI, we can better prepare ourselves to handle the evolving landscape of modern cybersecurity. This balanced approach will help us leverage AI’s capabilities while minimizing its potential dangers, ensuring a safer and more secure environment for everyone.
How Hackers Use AI
Advanced Social Engineering
Phishing attacks, traditionally based on easily recognizable spam, are becoming more nuanced. AI can craft personalized phishing emails by scraping social media profiles and other online data to tailor messages that resonate with the recipient’s interests or responsibilities. This level of personalization increases the likelihood of success for these attacks.
Once a link is clicked, AI can simulate real-time interactions through chatbots, making it seem as though the communication is coming from a trusted colleague or authority figure. These advanced tactics significantly raise the stakes, requiring heightened awareness and vigilance from all staff members.
Beyond email, AI is drastically enhancing the sophistication of social engineering attacks across the board. Cybercriminals now have the capability to create highly convincing phishing messages, deepfake videos, and voice impersonations. These AI-generated tools make it easier to trick individuals into divulging sensitive information or clicking on malicious links.
For instance, a deepfake video might feature a seemingly genuine message from a school administrator, leading unsuspecting staff to transfer money to an attacker’s bank account. While this might seem far-fetched, it has already happened.
In a recent example, the British engineering firm Arup was defrauded out of $25 million. A finance worker at Arup transferred the money after attending a video call with people he believed were the chief financial officer and other members of the staff. The problem was that all the other attendees on the call were deepfakes.
As the technology becomes more widely available, schemes like this one will become more common.
Automated Vulnerability Discovery
Hackers are increasingly using AI algorithms to scan systems and networks for vulnerabilities. These AI tools can identify weaknesses that might take human hackers far longer to find. Automated vulnerability discovery means that potential entry points can be identified and attacked much faster than before, in some instances before the vulnerabilities are even known to the defenders.
AI-driven tools can perform continuous, real-time assessments of large-scale networks, identifying weak spots in software, hardware, and configurations. They can even prioritize these vulnerabilities based on exploitability and impact, making it easier for hackers to focus on the most critical flaws.
This automation not only increases the efficiency of attacks but also lowers the barrier to entry, enabling less skilled hackers to launch sophisticated attacks.
Overcoming Security Measures
AI can analyze and identify weaknesses in existing security systems, which helps hackers bypass defenses more efficiently. By continuously learning and adapting, AI-driven attacks can evolve to counter new security measures, making traditional defense mechanisms increasingly obsolete.
Hackers use AI to study security protocols and algorithms, identify patterns, and develop strategies to exploit them. This dynamic approach lets hackers stay one step ahead, constantly adapting to and circumventing new security measures.
One way that hackers can do this is by using AI-assisted code-writing tools to produce malware. Some examples are starting to show up, such as this information stealer being tracked by the security company Proofpoint. Additionally, Microsoft is tracking a list of groups that are using AI for a variety of tasks from scripting to malware development to social engineering.
How AI Can Be Used for Defense
Phishing and Fraud Prevention
On the defensive side, AI algorithms are proving invaluable in detecting phishing attempts and fraudulent activities. By analyzing email content, sender behavior, and other indicators, AI can identify and flag suspicious activities that might otherwise go unnoticed.
AI systems can be trained on vast datasets of phishing emails and known fraud patterns, enabling them to recognize subtle cues that indicate a potential threat. For instance, AI can detect anomalies in email metadata, such as unusual sending patterns or discrepancies between the sender’s display name and email address.
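To make this concrete, here is a minimal sketch of one such metadata check: flagging emails where a familiar display name is paired with an unfamiliar address. The directory entries and addresses are hypothetical, and real email security products combine many signals like this one with trained models rather than relying on a single rule.

```python
from email.utils import parseaddr

# Hypothetical directory of known display names and their real addresses.
KNOWN_SENDERS = {
    "Jane Smith": "jsmith@district.example.edu",
}

def display_name_mismatch(from_header: str) -> bool:
    """Flag emails where a trusted display name is paired with an
    unexpected address -- a common phishing tell."""
    display_name, address = parseaddr(from_header)
    expected = KNOWN_SENDERS.get(display_name.strip())
    return expected is not None and address.lower() != expected

# The display name claims to be a known colleague, but the address differs.
print(display_name_mismatch('"Jane Smith" <jane.smith@mail.example.net>'))   # True
print(display_name_mismatch('"Jane Smith" <jsmith@district.example.edu>'))   # False
```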
AI can also analyze the linguistic patterns and writing styles of emails to spot phishing attempts. Machine learning models can be trained to recognize the typical language used in phishing emails, such as urgent or threatening tones, and flag them for further inspection.
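As a rough illustration of the idea, the sketch below trains a tiny text classifier to separate phishing-style language from routine messages. The handful of sample emails stands in for the large labeled corpora real systems train on, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = phishing-style language, 0 = routine.
emails = [
    "URGENT: verify your account now or it will be suspended",
    "Your password expires today, click here immediately",
    "Final warning: confirm your payroll details to avoid penalty",
    "Attached are the meeting notes from Tuesday",
    "Reminder: the staff lunch is at noon on Friday",
    "Here is the updated field trip permission form",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and bigram frequencies feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# A high probability suggests phishing-style urgency in a new message.
prob = model.predict_proba(["Act now to verify your account"])[0][1]
print(f"phishing probability: {prob:.2f}")
```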
Vendors such as Microsoft, Proofpoint, and Abnormal Security offer AI-enabled tools that detect and quarantine malicious emails created by other AIs.
Vulnerability Management
AI can automate the processes of vulnerability scanning, assessment, and prioritization, making it easier for organizations to address security weaknesses. The process is much the same as what hackers do to find vulnerabilities: just as AI makes attackers more efficient, it makes defenders better at what they do.
By continuously monitoring systems, AI can identify and prioritize vulnerabilities based on their potential impact, ensuring that the most critical issues are addressed promptly. This continuous assessment allows for real-time vulnerability management, reducing the window of opportunity for hackers to exploit weaknesses.
The real benefit here is speed. Vulnerabilities can be found, assessed, and fixed much faster than a human could manage manually. Shrinking the window in which a vulnerability can be exploited takes away the advantage hackers gain from using AI to find those vulnerabilities in the first place.
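A simplified sketch of that risk-based prioritization follows. The weighting scheme, sample hosts, and CVE identifiers are made up for illustration; production tools fold CVSS scores, exploit intelligence, and asset criticality together in far more sophisticated ways.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str              # placeholder IDs below, not real CVEs
    cvss: float           # base severity, 0-10
    exploit_public: bool  # is a working exploit circulating?
    asset_critical: bool  # does the host hold sensitive data?

def risk_score(f: Finding) -> float:
    """Weight raw severity by exploitability and asset criticality."""
    score = f.cvss
    if f.exploit_public:
        score *= 1.5
    if f.asset_critical:
        score *= 1.3
    return score

findings = [
    Finding("sis-db-01", "CVE-0000-0001", 7.5, True, True),
    Finding("print-srv", "CVE-0000-0002", 9.8, False, False),
]

# Fix the highest-risk findings first, not merely the highest CVSS.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.host} {f.cve} -> risk {risk_score(f):.1f}")
```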
Advanced Threat Detection
AI-powered systems excel at analyzing large volumes of data to find patterns that may indicate a threat. They operate in real time, providing instant alerts and responses to emerging threats. AI examines network traffic, user behavior, and system logs for anomalies that might suggest a security problem, and by applying machine learning, it learns from past incidents and keeps getting better at spotting threats.
These advanced systems can catch stealthy attacks that might slip past conventional detection methods. For example, AI can notice small changes in user behavior, such as logins at odd hours or access to unusual files, which might indicate a compromised account. It can also spot indicators of compromise, such as unauthorized system changes or known malware. By providing real-time alerts, AI helps organizations respond to security issues quickly, reducing potential damage.
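As a small illustration, the sketch below flags one of those behavioral signals: a login at an hour when the user is rarely active. The threshold and history format are assumptions for demonstration; commercial behavior-analytics tools model many more signals and learn these baselines automatically.

```python
from collections import Counter

def is_odd_hour_login(login_hours: list[int], new_hour: int,
                      min_fraction: float = 0.05) -> bool:
    """Return True if the user has historically logged in at this hour
    less than min_fraction of the time -- a possible account takeover."""
    counts = Counter(login_hours)
    return counts[new_hour] / len(login_hours) < min_fraction

# A staff member who normally signs in during the school day...
history = [8, 8, 9, 9, 10, 14, 15, 8, 9, 16] * 10

print(is_odd_hour_login(history, 3))  # True: a 3 a.m. login is unusual
print(is_odd_hour_login(history, 9))  # False: 9 a.m. is routine
```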
In conclusion, AI is a powerful tool that can greatly enhance cybersecurity efforts, but it also presents new challenges. On one hand, AI can significantly improve our ability to detect and respond to cyber threats by analyzing vast amounts of data quickly and accurately. This means that schools and other organizations can identify and mitigate risks more effectively, protecting sensitive information and ensuring smooth operations.
On the other hand, the same technology can be used by hackers to create more sophisticated and convincing attacks. As AI continues to evolve, it is crucial for school administrators, staff, and students to stay informed and vigilant.