In the face of the digital age’s advancements, AI’s role in cybersecurity presents both innovation and challenges. CloudTweaks welcomes a Q&A with Bobby Cornwell, Vice President of Strategic Partner Enablement & Integration at SonicWall, to discuss the pressing issue of AI-enhanced cyber threats. With Cornwell’s extensive experience in network security, this conversation aims to uncover the complexities of AI-driven attacks, their impact, and defense strategies. We’ll navigate the intricacies of combating AI-enhanced threats, exploring SonicWall’s approaches to safeguarding digital integrity in an era where cyber risks are constantly evolving. Join us for a critical exploration of the future of cybersecurity in an AI-dominated landscape.
RANDY: Can you elaborate on how threat actors are using Large Language Models (LLMs) to compile and utilize breach data in novel ways? How does this differ significantly from traditional methods of cyberattacks?
BOBBY: Converting data of this nature into something readable by an LLM takes a lot of skill and resources. It’s not something that anybody will be doing as a hobby today. However, if you look at the threat industry, you will see that it is a multibillion-dollar industry, and one that has sponsorship from different governments. If you take the elite threat actors and give them the resources they need, this data would be put into an LLM and used in a way that has never been seen in the past. Again, this is not something that I personally feel is being done at a large scale today; it is either in its infancy or still at the concept stage.
Imagine the possibilities if threat actors were able to take the data from this leak, plus data from other leaks, and combine everything into an LLM-readable format. At that point, threat actors could ask a compromised or modified AI platform, pulled from an open-source hub, to find patterns, trends, etc. across all the people in this database. They could potentially ask, “I want to know everything about Bobby Cornwell. I want to know where he lives, where he used to live, where he works, what kind of stuff he buys. Has he ever been to these types of websites? Who is his family? Are they religious? Who are his kids? What school do they go to? What games do they play? How many times have they gone to the doctor? Does he pay his bills? What’s his credit score? How many credit cards does he have? Etc.” And, in an instant, they would have a detailed report of everything I mentioned. From there, they could cross-reference other names and friends, and even look at my LinkedIn profile to see if anybody in any other data breach matched my breach data and whether we had ever been in the same place (like a conference) together. They could then take that information and conduct all manner of nefarious activities. Could they call the electric company and turn off my power? Probably so. Could they turn off my water? Probably so. Could they call my credit card companies and change billing addresses? Probably so. Could they contact me directly, tell me that they know all the bad stuff about me (there isn’t any, by the way 😊), and use that as extortion to gain money? They 100% could.
Now, I’m just a person, but what if they did the same thing to a big political leader? What if they knew of a guy who was designing the next-generation nuclear warhead? What if they targeted him and his family until he gave up secret information, or knew so much about him that, thanks to generative AI, he believed he was talking to his boss or a superior officer? I know this seems surreal, like some Hollywood movie concept, but this, while again not overly easy, is something that is possible today. Can they do this with OpenAI’s platform, or platforms by Google or other mainstream AI? No. These platforms currently have built-in ethical protections. But if you give a highly skilled hacker an open-source software platform, that platform can be modified.
To circle back to traditional methods: when data breaches started happening, they typically involved someone’s credit card info and/or their Social Security number. Credit card data was sold on the dark web in bulk. You could go to certain sites, and they would have credit cards listed in different categories at different prices. There would be “verified” cards, meaning the sellers verified a card worked by charging a few pennies to it to see if it would authorize.
There were “verified high limit” cards, meaning the sellers were able to confirm that the card worked and had a high limit available to charge, and there were the “unverified” ones, which were obviously cheaper to purchase, but you would also get a greater number of card numbers for your money. Those threat actors would then take those credit card numbers and sell them to other people looking to make a quick buck by purchasing things like gift cards. Gift cards were super easy to buy and use because there were websites where you could resell them for less than their face value. Not only was this an easy way to launder money, but it’s an activity that’s almost impossible to track. Threat actors knew banks would simply pay the cardholder back for the lost money, close the account, and write off the loss. Today, this is still done, but now there are more breaches with different kinds of data. For example, medical records are being stolen. These contain your policy information, which can be put into a database and searched. But to be useful, a threat actor needs to either know exactly what they’re searching for or figure out how to build queries for that data. That’s not entirely different from what AI would do, but without AI it’s difficult to cross-reference data. An attacker would need to run different queries, import that data into a single database, then rerun specific queries. This is a time-consuming process, and like everyone else, threat actors are always looking for ways to improve efficiency. AI makes it far easier to cross-reference data, identify patterns, and track individuals and anybody associated with them.
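To make the manual correlation Cornwell describes concrete, here is a minimal, hypothetical sketch: two illustrative breach dumps are loaded into one SQLite database, and a join query surfaces anyone who appears in both. Every table name, column, and record below is invented for illustration; it is not drawn from any real dataset or from SonicWall’s tooling.

```python
# Minimal sketch of manual breach cross-referencing: import each leak into
# its own table in a single database, then re-run specific join queries.
# All names and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table per imported "breach".
cur.execute("CREATE TABLE retail_breach (email TEXT, name TEXT, city TEXT)")
cur.execute("CREATE TABLE medical_breach (email TEXT, policy_id TEXT, provider TEXT)")
cur.executemany("INSERT INTO retail_breach VALUES (?, ?, ?)", [
    ("jane@example.com", "Jane Doe", "Austin"),
    ("sam@example.com", "Sam Lee", "Denver"),
])
cur.executemany("INSERT INTO medical_breach VALUES (?, ?, ?)", [
    ("jane@example.com", "POL-1234", "Acme Health"),
])

# A specific, hand-written question: who shows up in both leaks?
rows = cur.execute("""
    SELECT r.name, r.city, m.policy_id, m.provider
    FROM retail_breach r
    JOIN medical_breach m ON m.email = r.email
""").fetchall()
print(rows)  # [('Jane Doe', 'Austin', 'POL-1234', 'Acme Health')]
```

The limitation is visible in the sketch itself: every new question requires another hand-written query over a hand-assembled database, which is exactly the per-query effort Cornwell says AI removes once the same data is made queryable in natural language.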
RANDY: You mentioned the emergence of companies using aggregated breach databases. How does this change the landscape for both hackers and those looking to protect personal information?
BOBBY: Historically, if I signed up for a “dark web search” of my data, the company providing that service would charge a subscription fee, and anytime their software platform identified anomalous behavior, I would receive an alert. While effective, it takes time and lots of resources to accomplish this task. Having the data in a single place improves speed and efficiency, allowing today’s dark web scanning platforms to instantly tell you how many places your information has been leaked. Imagine I’m speaking with you in person, and I get a real-time alert from a dark web scanner that my company password showed up in a batch of breach data that was just poured into a database like the one we are discussing. I could immediately stop what I’m doing, change my password, and make sure I exit any programs or connections that could allow lateral access to my corporate infrastructure. That would be huge, as the more advance notice we have, the faster we can move to secure our infrastructure from attack.
On the flip side, imagine I’m a threat actor who gets paid based on the quality of the information I provide. Now these dark web scanning platforms are messing me up, because they are tapping into the same resources I use and are notifying people quickly that their accounts have been hacked. I would essentially have stale data and would not get paid. Then again, if hackers have access to a database like the one we are discussing here, either via subscription from a major hacking operation or through state sponsorship, imagine how fast they could target people and exploit them. Imagine how sophisticated and accurate a phishing attack could be when the attacker knows everything about the employees of the company they are targeting. I should note that it was just revealed that North Korea is using AI in advanced cyberattacks.
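As an illustration of the kind of real-time breach alerting Cornwell describes, here is a minimal sketch of a password exposure check built on the publicly documented Pwned Passwords k-anonymity API. It is only an example of the concept; commercial dark-web monitoring platforms rely on their own aggregated sources, and the function name below is hypothetical.

```python
# Minimal sketch of a breach-exposure check using the public Pwned Passwords
# range API (k-anonymity model). Illustrative only; real monitoring platforms
# use their own data sources and alerting pipelines.
import hashlib
import urllib.request

def times_password_was_breached(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the first five hex characters of the hash ever leave the machine.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)
    return 0

count = times_password_was_breached("correct horse battery staple")
if count:
    print(f"This password has appeared in known breaches {count} times -- change it.")
```

Because only a short hash prefix is sent, the service can report how often a password has appeared in known breach data without ever seeing the password itself, which is the same speed-of-notification advantage Cornwell highlights for defenders.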
RANDY: After checking your details on Malwarebytes, you discovered a significant amount of your personal…