AI technologies are reshaping industries across the globe, yet they come with challenges that can't be ignored. One of the biggest is the growing reliance on Generative AI (GenAI) in software development, which, while offering tremendous benefits, also introduces significant security risks. An alarming 96% of security and software development professionals report that their organizations have embraced GenAI for building or delivering applications, and of those, 79% say their development teams use it regularly. Notably, a larger share of developers (8%) than security professionals (3%) express concern over the potential loss of critical thinking in AI-powered development, signaling a growing unease about outsourcing decision-making to machines.
AI’s learning mechanisms are another area of worry. With 43% of security professionals concerned about the risks of codebase leaks, many fear that AI systems could inadvertently reproduce patterns that contain sensitive information. In addition, 32% highlight the use of hardcoded secrets as a major risk factor within the software supply chain, leading to potential vulnerabilities that could be exploited by malicious actors.
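To make the hardcoded-secrets risk concrete, here is a minimal sketch of the kind of pattern-based scan that tools in this space perform. The patterns, function names, and sample input below are simplified illustrations of the general technique, not the rules of any specific scanner; production tools use far larger rule sets plus entropy analysis.

```python
import re

# Illustrative patterns only; real scanners maintain hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspected hardcoded secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# Hypothetical snippet a scanner might flag before it reaches a repository:
sample = 'db_host = "localhost"\napi_key = "abcd1234efgh5678ijkl"\n'
print(scan_text(sample))  # flags the second line as a generic API key
```

Scans like this are typically wired into pre-commit hooks or CI pipelines, which is one way organizations try to keep hardcoded secrets out of the software supply chain.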
At the same time, strong privacy laws are seen as crucial for maintaining confidence in the use of AI. In one survey, 63% of respondents said they believe AI can positively impact their lives, and regular use of GenAI has nearly doubled from last year. Even so, privacy concerns remain: 30% of users input personal or confidential data, including health and financial information, into GenAI tools, even though 84% are uneasy about the potential for such data to be exposed.
The security risks aren't confined to development teams alone. Hackers, too, are leveraging AI to enhance their malicious activities: while only 21% of hackers saw AI as valuable in 2023, 71% now consider it an indispensable tool in their arsenal for 2024. The sharp increase in cybercriminals' use of GenAI (77%, up from 64% last year) highlights the growing threat posed by AI in the hands of bad actors.
In the workplace, the surge in AI adoption isn’t always accompanied by proper oversight. Ivanti’s research reveals that 15% of office workers are using unsanctioned GenAI tools, and 32% of IT professionals admit there is no documented strategy to address the risks associated with these tools. With the threat of shadow IT expanding the organization’s attack surface, this lack of governance could lead to unaddressed vulnerabilities that increase the likelihood of a breach.
Security leaders are also facing mounting pressure regarding AI. A staggering 92% have expressed concerns about the use of AI-generated code within their organizations, with 66% reporting that their security teams are struggling to keep pace with the speed of AI-powered development. Alarmingly, 78% of security leaders foresee a future reckoning in the form of significant security incidents driven by AI. As AI continues to evolve, it’s clear that its impact on cybersecurity — both as a tool for defense and attack — is profound and multifaceted.
Despite these challenges, AI adoption continues to grow. A significant 67% of organizations report an increase in their investment in GenAI, yet many of these projects are still in the pilot or proof-of-concept phase. While GenAI is proving its worth, particularly in improving efficiency and scaling operations, organizations are still grappling with the complexities of implementing it securely.
The risks associated with GenAI models are substantial. According to cybersecurity experts, 95% express low confidence in the security measures of these models, and 35% are concerned about the reliability and accuracy of large language models (LLMs). With many models easily compromised by bad actors, the fear of data privacy violations remains a major barrier to widespread AI adoption.
For many organizations, security risks related to AI are largely data-driven. An increasing number of companies are adopting multiple AI applications, but this also brings the challenge of maintaining secure data practices. A significant 88% of professionals say they lack visibility into code sent to repositories, while 87% are concerned about files sent to personal cloud accounts and 90% worry about CRM system data being vulnerable to unauthorized access.
Despite the clear security risks, organizations are pressed to adopt AI technologies. A large majority (87%) of C-suite executives report feeling pressure to implement AI solutions quickly to stay competitive. However, with 68% acknowledging the difficulty of identifying genuine innovators in the AI space, the struggle to find reliable and secure AI tools persists.
As AI continues to evolve, cybersecurity professionals are adapting their strategies to combat the rise of AI-powered threats. In fact, 75% of security professionals have had to revise their cybersecurity strategies to counteract the growing volume of AI-driven cyberattacks. However, 66% of them also report that the increase in AI-related threats is causing significant stress and burnout, underscoring the human toll of this technological shift.
The future of AI and cybersecurity is a complex one, with businesses weighing the risks and rewards of embracing these transformative technologies. As AI creates a new generation of cyberattacks, companies are being forced to adapt quickly — or risk falling behind. While 93% of security leaders expect to face daily AI-driven cyberattacks in the near future, many are still adjusting to the dual challenges of defending against AI threats while integrating AI into their own cybersecurity operations.
As organizations continue to expand their use of GenAI tools, the pressure to balance innovation with security will only grow. The rapid pace of AI development, coupled with an ever-expanding attack surface, leaves businesses with little choice but to stay vigilant. With 22% of employees admitting to breaching company rules by using GenAI inappropriately, and with data exfiltration risks mounting, security will remain a top concern for businesses embracing this transformative technology.