Security issues targeting AI itself and its lifecycle (Security for AI), and the application of AI to existing security problems (AI for Security).
The landscape of AI changed drastically after the introduction of ChatGPT in late 2022. Even during the 2010s, researchers had explored whether AI could surpass human logical thinking, as demonstrated by IBM Watson and Google AlphaGo. Now, only a few years after those milestones, the advent of generative AI lets anyone experience AI's future potential firsthand. Major IT companies and AI startups are actively building environments in which many people can access large language models (LLMs) through a chat interface, and they are launching consumer AI services for interactive communication, programming assistance, and image generation.
Many companies expect the evolution of AI to expand their business opportunities and improve the efficiency of their work. In a 2023 PricewaterhouseCoopers survey of 4,702 CEOs (*3), more than 64% answered that AI would improve their employees' work efficiency, and 59% said it would also improve their own work [1]. At the same time, 59% of CEOs regard cybersecurity as a major risk of generative AI. Another 2023 survey of more than 300 risk and compliance professionals found that 93% of companies recognize the risks posed by generative AI, while only 9% of them are prepared to mitigate those risks [2]. Moreover, a survey of 1,123 security professionals organized by ISC2 (International Information System Security Certification Consortium) (*4) found that, when asked whether AI benefits cybersecurity defenders more than criminals, only 28% agreed and 38% disagreed [3]. In fact, the same report [3] reveals that 12% of respondents had banned all generative AI tools in their business and 32% had banned several of them.
Apart from the development and deployment of AI itself, many people expect that the flexibility of AI can improve the efficiency of security operations centers and automate threat detection and response. On the other hand, AI does not contribute only to cyber defense. While AI services provided by major companies are trained with safeguards to prevent harmful inputs and outputs, AI tools built from scratch for cyberattacks have been found in the hacking community.
In this white paper, we focus mainly on the enterprise usage of AI and discuss two categories of AI security issues: security issues targeting AI itself and its lifecycle (Security for AI), and the application of AI to existing security problems (AI for Security). We also review current efforts by governmental organizations and industry associations to address the risks surrounding AI.