
Snyk releases latest research report on companies' use of generative AI: As competition in generative AI intensifies, many companies are neglecting implementation best practices, creating an awareness gap in preparing for AI adoption

Snyk Co., Ltd.
Executives are two to five times less likely than security leaders to be aware of AI-related security concerns
Boston (June 4, 2024): Snyk Inc. (Japan headquarters: Shibuya-ku, Tokyo; CEO: Peter McKay), provider of a developer security platform, announced its research report "Deploying security in the era of generative AI." Many global companies have already adopted generative AI coding tools to speed up application development. However, the findings show that many companies are neglecting implementation best practices in their rush to join the generative AI race. The results also reveal a clear awareness gap around the security of AI-generated code: company executives were more enthusiastic and confident about the technology than security leaders and some developers.
This research found that:
Only 20% of organizations conducted a proof of concept (POC) before implementing AI coding options, and 58% said security was the biggest barrier to adoption.
Less than half (44%) of organizations provide training to developers on AI coding tools.
Corporate chief technology officers (CTOs) and chief information security officers (CISOs) are five times more likely than developers to believe that AI coding tools pose no risk, and twice as likely as developers to consider themselves "very unprepared" to adopt AI coding tools.
Danny Allan, Snyk’s CTO, said:
"We believe it is now incumbent on the cybersecurity industry to recommend clear guidelines that allow us all to benefit from this productivity boost without sacrificing security. The latest research also makes clear that scaling AI coding tools must be a collaborative effort: CTOs should aim to work closely with DevSecOps team leaders to maximize the benefits of generative AI over time."
■Results reveal that those who work closest to the code are more concerned about security
The security of AI-generated code was not a major concern for most organizations surveyed: almost two-thirds (63.3%) of respondents rated it "excellent" or "good," and just 5.9% rated it poor. But a closer look at the numbers shows that those "close to the code" did not share their peers' confidence.
Almost 4 in 10 security professionals (38.3%) say AI coding tools are “very dangerous.” Security respondents also questioned their organizations’ security policies regarding AI coding tools. Almost a third (30.1%) of security team members say their company’s AI security policy is inadequate, compared to 11% of C-suite respondents and 19% of developers/engineers.
Almost one in five (19%) C-suite respondents said AI coding tools pose "no risk at all," compared to just 4.1% of security respondents who agreed with this statement.
■Best practices hold the key as generative AI adoption accelerates
The research shows that top technology decision makers, such as CISOs and CTOs, believe their organizations are already prepared for AI coding tools. In fact, 32% of C-suite respondents said rapid adoption of AI coding tools is important, twice the share of security respondents. This suggests that adoption of these tools will press ahead (and in many cases already is), regardless of security or developer concerns. These organizations must therefore urgently put appropriate security measures in place so they can continue to scale their use of AI coding tools safely.
Based on the survey results, recommended actions for companies include:
Establish a formal proof-of-concept (POC) process for adopting any new AI technology
Weigh and prioritize feedback from the security team regarding generative AI security concerns
Document and audit all use of AI code-generation tools
Invest for the long term in security technologies that provide "AI guardrails" for the adoption of AI-assisted tools
Strengthen and continuously deliver company-wide AI training
Please see the report for full details at the URL below.
About Snyk
Snyk is a developer-first security platform that finds, prioritizes, and fixes vulnerabilities in code, open source dependencies, containers, and infrastructure as code (IaC). It integrates directly into Git workflows, integrated development environments (IDEs), and CI/CD pipelines, making it easy for developers to use. Snyk is used by more than 3,000 customers worldwide, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.