Detroit, Michigan | November 10, 2023 09:00 AM Eastern Standard Time
By Faith Ashmore, Benzinga
What Is Generative AI?
Artificial Intelligence (AI) technology is growing and becoming more prevalent in the business world. The market size of AI is already substantial and projected to keep expanding: by 2027, the AI market is expected to reach $407 billion, up significantly from its current value. One reason for this growth is that companies are turning to AI to improve efficiency or to gain a competitive edge in the marketplace.
Generative AI is a subset of AI in which machines create original content, such as images, text, music or even video. It works by using complex algorithms to analyze and learn patterns in vast amounts of existing data. Companies across industries are increasingly embracing generative AI to deliver novel solutions and improve their business processes. In marketing and advertising, for example, generative AI can produce personalized content and recommendations that target specific customer segments. In the entertainment industry, it can create realistic CGI for movies and games, and in healthcare it can assist with drug discovery and medical image analysis. Overall, companies see generative AI as a powerful tool to automate tasks, foster creativity and innovation, and improve efficiency and effectiveness across domains.
Not Without Risk: AI Integration
With this increased dependence on AI comes the need for companies to be more mindful about how they approach the use of the technology. Most companies do not have the resources to build their proprietary AI technology from the ground up and therefore have to rely on third-party providers. This can create inherent risks related to intellectual property rights, data security, privacy issues, ethics and confidentiality.
Here are a few tips on how companies can protect themselves and mitigate risk in the new world of AI.
1. Check for compliance with data privacy policies
Companies must carefully consider the user agreements they enter into with any AI tool and how those agreements interact with their own data privacy policies. This is especially true for companies that handle sensitive information and are legally bound by confidentiality requirements.
AI breaches are a significant concern in the adoption of AI technology, highlighting the need for caution and protection when integrating AI systems. One example occurred in March 2023, when a bug in an open-source library used by OpenAI's ChatGPT exposed sensitive data: some users could view portions of other users' chat histories, along with information held in the Redis memory cache that OpenAI uses to store user data.
The incident affected a number of ChatGPT Plus subscribers who were active in March 2023. Their payment details, including name, email address, payment address, credit card type and the last four digits of the credit card number, were exposed.
Separately, the account credentials of more than 100,000 OpenAI ChatGPT users are believed to have been compromised and sold on dark web marketplaces between June 2022 and May 2023.
OpenAI acknowledged that the first message of a newly created conversation could be visible in another user's chat history if both users were active at roughly the same time. As a result, it notified affected users that their payment information may have been exposed.
The breach raised concerns because ChatGPT allows users to store conversations, which could grant unauthorized access to proprietary information, internal business strategies, personal communications, software code and other sensitive data. It also underscored the importance of safeguards to prevent unauthorized access to, and manipulation of, AI systems.
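Given these risks, one practical safeguard is to keep sensitive data out of third-party AI tools in the first place. The sketch below is a hypothetical illustration, not any vendor's actual tooling: it redacts common PII patterns from a prompt before the text leaves the company's environment. The regular expressions and placeholder labels are assumptions for the example; a production system would rely on a dedicated PII-detection service rather than hand-written patterns.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# PII-detection tool rather than hand-written regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    is sent to a third-party AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Customer jane.doe@example.com disputed a charge on card 4111 1111 1111 1111."
print(redact(prompt))
# -> Customer [REDACTED-EMAIL] disputed a charge on card [REDACTED-CREDIT_CARD].
```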
2. Build an airtight risk management process
“As more companies embrace the use of AI, it is essential that they have a governance and risk management process in place,” urges Deborah Nitka, Senior Manager focused on AI consulting at CohnReznick. Organizations should leverage risk management frameworks to define their risk appetite before introducing AI into their environments. Similarly, cybersecurity and privacy risk assessments should be undertaken to understand the potential risk impacts on an ongoing basis.
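To make that concrete, here is a minimal sketch of how a risk register with a defined risk appetite might be represented in code. The categories, scoring scale and threshold are hypothetical assumptions for illustration, not a CohnReznick methodology or a prescribed framework.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry; the schema and scoring are illustrative.
@dataclass
class AIRisk:
    name: str
    category: str       # e.g. "privacy", "IP", "bias", "security"
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (severe)
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Prompt data leaves company environment", "privacy", 4, 5,
           "Redact PII; review vendor data-processing agreement"),
    AIRisk("Model output infringes copyright", "IP", 3, 4,
           "Human review before publication"),
]

# Flag anything above the organization's stated risk appetite.
RISK_APPETITE = 12
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "escalate" if risk.score > RISK_APPETITE else "accept/monitor"
    print(f"{risk.score:>2}  {risk.name}: {action}")
```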
3. Regular audits for data bias
The AI models companies use should also be audited regularly to minimize the risk of data bias and to ensure that confidentiality, privacy and intellectual property concerns are appropriately addressed. While AI can provide many benefits, companies should exercise due diligence and take a risk-based approach to adopting and integrating AI systems.
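As a simple illustration of what such an audit can involve, the sketch below computes one common fairness metric, the demographic parity difference between two groups' approval rates. The data and the 0.1 tolerance are hypothetical; a real audit would use domain-appropriate metrics and fuller tooling.

```python
# A minimal fairness check: demographic parity difference between two groups.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = model approved the application, 0 = model declined it (sample data).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.750
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

parity_gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {parity_gap:.3f}")

if parity_gap > 0.1:   # illustrative tolerance for the audit
    print("Potential bias flagged: investigate training data and features.")
```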
The primary risks associated with AI adoption include intellectual property issues, confidentiality breaches and privacy concerns. Addressing them helps organizations maximize the benefits of AI while minimizing the risks, and better ensures that the technology is used appropriately to deliver positive outcomes. Left unaddressed, these risks make AI technology not only potentially dangerous but also unsustainable as a foundation for growth.
CohnReznick is a leading advisory, assurance and tax firm that helps organizations achieve their goals by optimizing performance, maximizing value and managing risk. It offers a comprehensive range of consulting services, and over the past few years it has begun consulting on AI integration.
For more information, please contact Deborah Nitka or Adonye Chamberlin, leaders in consulting on AI integration at CohnReznick LLP.
This post contains sponsored content. This content is for informational purposes only and not intended to be investing advice.
Contact Details
Benzinga
+1 877-440-9464
Company Website