AI & Software Security: How to Implement AI Responsibly & Successfully

Devin Maguire
January 11, 2024
AI & Software Security: How to Implement AI Responsibly & Successfully

Generative AI (GenAI) dominated the technology landscape in 2023 prompting many technology companies to formulate an AI strategy - from adopting AI-enabled tools for performance and productivity gains to developing and building upon large language models (LLM) to compete. Reading Gartner Predicts 2024: AI & Cybersecurity - Turning Disruption Into an Opportunity provides compelling insights and recommendations for organizations to consider particularly around the security implications—positive and negative—of AI.

Knowledge is power. Advanced knowledge in practice is leadership.

Lead your organization through the responsible adoption of AI with advanced insights from Gartner.



On one hand, GenAI has emerged as a beacon of hope to address the persistent skills shortage and widening gap between security challenges and the resources and capacity to address them. On the other hand, GenAI introduces additional complexity and risks into environments where security teams already struggle to keep pace. However, organizations that successfully navigate AI adoption - and stay ahead of new AI threats - have the opportunity to reduce business risk, lower costs, and increase productivity. In this post, we will explore causes for caution, cases for optimism, and recommendations to build a strategy for successful and responsible AI adoption.

3 Causes for Caution: AI & Software Security Risks & Costs

Software flaws and vulnerabilities are found, created, and disseminated faster than they can be fixed. The ability to create and distribute software - including insecure software - has increased exponentially. Meanwhile, the capacity to remediate and secure software has only improved incrementally. While there is the potential that GenAI can help security teams increase remediation capacity, the immediate reality is that GenAI tools introduce new risks and layers of complexity requiring additional security effort and spending.

At least initially, it seems new costs will outweigh security efficiencies. According to Gartner Predicts 2024: AI & Cybersecurity - Turning Disruption Into an Opportunity, “Through 2025, generative AI will cause a spike of cybersecurity resources required to secure it, causing more than a 15% incremental spend on application and data security.” Amid the hype and rush to adopt AI here are three cybersecurity cautions to temper that excitement and understand the realities. 

  1. AI capabilities are immature - especially from a security perspective. 2023 saw cybersecurity vendors primarily using AI to add natural language processing interactions - particularly on the SecOps front. These fall short of the hype and promise of automation and can prove a distraction. However, most of the GenAI conversation for software development has revolved around GenAI coding assistants. Here too there is cause for caution. Software generated by and in collaboration with generalist LLMs (i.e. those trained on code generally and not specifically trained for security tasks) is prone to the same errors and insecurities as the code they are trained on. While there is hope tools will emerge, evolve, and improve to generate secure code, in the immediate term GenAI can accelerate the production of insecure code and further widen the security gap.
  1. AI introduces new threats and layers of risk. Every technological leap introduces new challenges. We witnessed this pattern with the proliferation of open-source software, cloud-native development, and now again with AI. The OWASP Top 10 for Large Language Model Applications capture many of these threats and the additional AI attack surfaces security teams need to manage. Beyond additional risk in application development and management, AI also introduces a new adversarial weapon to develop malicious scripts, escalate attacks, and exploit novel attack vectors like prompt injection and model poisoning.
  1. Organizations face additional costs to address AI risk. As mentioned above, adopters of AI face new risks that require additional security coverage and practices. These range from validating the output of GenAI tools to avoid inaccurate or illegal actions (for example, avoiding copyright infringement or referencing inaccurate data) to preventing data leaks and breaches of confidentiality from exposure to external LLMs to weaving AI Trust Risk and Security Management (TRiSM) into AppSec practices. The majority of early adopter implementations of GenAI will likely face increased expenditures to deal with these risks that outweigh the productivity and cost-saving gains. 

When assessing your AI strategy for 2024 and beyond, it is important to sift through the hype to understand the value and benefits against the risks and costs associated with AI adoption. Awareness of these potential pitfalls can help you navigate to more cost-effective and secure implementations of AI. So, where are those opportunities and where are we seeing early signs of how AI can deliver on its promises?

3 Cases for Optimism: Securing AI Applications & Looking to the Next Generation of AI

Amidst the cautions, there are glimpses of success. Specialized AI applications can augment employees with security skills, and the emergence of solutions to secure AI applications will help manage new risks. There are also promising signs multimodal, multiagent, and composite AI could deliver powerful capabilities. Here are three cases for optimism about the near future and next generation of AI capabilities:

  1. AI can augment humans to improve security outcomes. According to Gartner, “Cybersecurity leaders focusing on human augmentation will achieve better results than those jumping too quickly on solutions promising full automation.” An example of this is AI-assisted remediation for insecure code. In isolation, “auto-remediation” capabilities do not account for performance, reliability, and other code quality factors. On the other hand, developers may lack secure code knowledge and struggle to adapt generic remediation examples to their specific code. In tandem, developers with AI-assisted remediation - particularly those that present multiple coding options - retain control of their code while alleviating the burden of writing the secure code fix from scratch. 
  1. Specialized solutions help manage risk.  Two categories of solutions are emerging to help tackle risk. The first covers AI TRISM solutions which extend security and risk management coverage to include the AI attack surface. The second is the emergence of specialized AI security solutions. In contrast to generalist AI tools trained on massive quantities of data, specialized AI solutions are trained on highly curated datasets to excel at a more limited and focused task. Both of these emerging solutions can help manage risk; however, they add additional layers of complexity to the security ecosystem. Already the proliferation of tools to cover everything from code security and dependencies to cloud infrastructure and containers leaves organizations struggling to manage this complexity. It is important to consider how these tools fit into your broader ecosystem to deliver practical benefits.
  1. Multiagent AI improves security and risk management. Between AI-augmented human enablement and specialized solutions, there is evidence the next generation of AI solutions will revolve around facilitating collaboration among people and intelligent solutions in what Gartner calls Multiagent Systems (MAS) - a “type of AI systems composed of multiple, independent but interactive agents.” This could have a big impact on security and risk management. Gartner predicts the use of multiagent AI in threat detection and incident response will reach 70% in 2028 to augment staff.  

Near-term decisions and plans should account for immediate AI risk management needs and consider future capabilities. Among these are strategies that incorporate security into the software development and security ecosystem and facilitate collaboration between employees and MAS solutions. Next, we will cover recommendations you can start implementing now to position your organization for successful AI adoption.

3 Recommendations for Responsible  AI Implementation 

Organizations - and particularly security leaders - face difficult decisions around AI adoption. Moving too fast can introduce significant risks and costs without significant benefits. However, laggards may struggle to integrate and implement future generations of AI capabilities and find themselves at a disadvantage. Ultimately, applying risk management principles will guide leaders to responsible and successful implementations of AI. Here are three recommendations:

  1. Learn from the history lessons of open-source and cloud-native software. While GenAI seems novel, it follows familiar patterns where excitement and adoption initially outpace security concerns but awareness of risk and disillusionment quickly follow. Anticipate threats and proceed responsibly. Understand the GenAI attack surface and implement AI policies and practices like TRISM to prevent the proliferation of unmanaged risk within the organization. 
  1. Prepare your ecosystem and take a multi-year approach to manage AI complexity and leverage next-gen solutions. GenAI tools and TRISM further complicate the security landscape. Solutions like ArmorCode ASPM and RBVM streamline the adoption and risk-based management of additional security tools following Garnter’s advice to start AI security initiatives with application security. They also create an architecture to support MAS solutions that augment teams with AI-enabled solutions. 
  1. Leverage data to prioritize AI initiatives and measure performance. AI adoption will come with additional costs. Organizations need relevant metrics to assess GenAI costs - both direct and indirect - against the benefits. If you have not already, you should invest in baseline visibility to prioritize where to direct your AI investments and then measure and track outcomes over time. 

The most important recommendation is to educate yourself and stay informed. The AI technology and threat landscape is evolving rapidly. Getting advanced insights and predictions from leading analysts will help you make better decisions today and plan for the future. Download a complimentary copy of Gartner Predicts 2024: AI & Cybersecurity - Turning Disruption Into an Opportunity to separate hype from reality and inform your strategic and secure adoption of GenAI in 2024 and beyond.

Devin Maguire
Devin Maguire
Sr. Product Marketing Manager, ArmorCode
January 11, 2024
Devin Maguire
January 11, 2024
Subscribe for Updates
RSS Feed Logo
By clicking “Accept All Cookies”, you agree to the storing of cookies on your device to enhance site navigation, analyze site usage, and assist in our marketing efforts. View our Privacy Policy for more information.