September 30

AI tools for writing code do not bode well for cybersecurity


Many cybersecurity risks can be traced to vulnerabilities that were known but left unresolved before an application was deployed to production. Cybercriminals routinely hunt for these flaws precisely because they are so commonly deployed or left unpatched. In fact, the OWASP Foundation's Top 10 list of the vulnerabilities found most often in Web applications rarely changes.

As the amount of AI-generated code continues to increase, it's probably safe to say that the number of vulnerabilities deployed into production environments will grow with it. Most of the large language models (LLMs) used to generate code were trained on samples of code of varying quality scraped from across the Web. As a result, new applications built with that generated code will contain many of the same vulnerabilities that appeared in the original code used to train the LLM.
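To make that inheritance concrete, consider the injection flaw that sits at or near the top of the OWASP Top 10. The sketch below is illustrative only; the table and function names are hypothetical, but the string-interpolation pattern in the first function is exactly the kind of construct that shows up throughout publicly scraped code, and therefore in model output, while the second function shows the parameterized alternative.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern common in code scraped from the Web: building SQL by string
    # interpolation. Input such as "' OR '1'='1" rewrites the query itself,
    # a textbook injection flaw (OWASP Top 10: Injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data,
    # never as part of the SQL statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```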

Many cybersecurity professionals are already aware of the implications. A survey of 800 security decision-makers finds nearly all of them (92%) are concerned about the use of LLM-generated code within applications.

Conducted by Venafi, a provider of a platform for securing machine identities, the survey also finds 63% of respondents have considered banning the use of AI in coding because of security risks. However, 72% also conceded they have no choice but to allow developers to use AI if their organization is to remain competitive.

Nearly two-thirds (63%) of respondents also said it is impossible to govern the safe use of AI in their organization because they lack visibility into where AI is being used. Less than half (47%) said their organization has policies in place to ensure the safe use of AI within development environments.

Ultimately, however, over three-quarters (78%) said AI-developed code will lead to a security reckoning, with 59% already losing sleep over the security implications of AI.

There’s not much cybersecurity teams can do at this point beyond continuing to encourage application development teams to embrace DevSecOps best practices that identify and remediate as many vulnerabilities as possible before software is deployed. In the meantime, it’s all but certain a tsunami of vulnerabilities will show up in production environments in the months ahead. Hopefully, those vulnerabilities will be found and mitigated before they are exploited.
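As a minimal sketch of what such a pre-deployment check can look like, the snippet below wraps Bandit, an open source static analysis (SAST) tool for Python code, in a small script that could run as a CI gate. The script and the `src` directory are illustrative assumptions, and it relies on Bandit's default behavior of returning a non-zero exit code when it reports findings.

```python
import subprocess
import sys


def run_sast_gate(target_dir: str) -> int:
    """Run Bandit recursively over target_dir and return its exit code,
    which is non-zero when findings are reported (Bandit's default)."""
    result = subprocess.run(
        ["bandit", "-r", target_dir],  # recursive static analysis scan
        capture_output=True,
        text=True,
    )
    print(result.stdout)       # surface the findings in the CI log
    return result.returncode   # non-zero fails the pipeline stage


if __name__ == "__main__":
    # "src" is a placeholder for wherever the (generated) code lives.
    sys.exit(run_sast_gate("src"))
```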

As troubling as that is, however, there is cause for optimism. The next generation of LLMs is being trained on code that has been carefully vetted. These models should generate much higher-quality code than general-purpose LLMs such as ChatGPT, which were trained on code collected indiscriminately from across the Web. Rather than telling application development teams they can’t use AI tools, cybersecurity teams would be better served by encouraging those teams to use LLMs that have either been purpose-built for application development or been customized to reduce the number of vulnerabilities they generate.

Ultimately, the generative AI genie is not going back into the bottle. Application developers are going to use these tools with or without permission. What cybersecurity teams need to remember is that not all of these tools are the same. The focus now needs to be on identifying the ones that demonstrate an actual understanding of application security fundamentals.

