Using Generative AI: Understanding Potential Risks and Developing Responsible Usage
Businesses and consumers alike are eager to explore the potential benefits of generative artificial intelligence (AI), but it’s important to also consider the risks. According to Alex Toh, local principal for Baker McKenzie Wong & Leow’s IP and technology practice, generative AI is still at an experimental stage, so businesses must work out the implications of tapping into this technology. Toh, a Certified Information Privacy Professional and a certified AI Ethics and Governance Professional, suggests that businesses ask critical questions about whether exploring this technology is safe, both legally and from a security standpoint.
As interest in generative AI grows, Toh continues to field frequent questions from clients about copyright implications and the policies they may need to put in place. There is a risk of trademark and copyright infringement if generative AI models create images similar to existing works, particularly when instructed to replicate someone else’s artwork. Organizations want to know what considerations they must take into account when exploring generative AI, so that deploying and using such tools does not lead to legal liabilities and related business risks.
To reduce risks, organizations are putting in place policies, processes, and governance measures. One client, for instance, asked about the liabilities their company could face if a generative AI-powered product it offered malfunctioned. Companies that decide to use tools such as ChatGPT to support customer service via an automated chatbot, for example, will have to assess its ability to provide the answers the public wants. Toh suggests that businesses carry out a risk analysis to identify potential risks and assess whether these can be managed. The assessment should include the use of prompts, a key factor in generative AI.
Countries such as Singapore have released frameworks to guide businesses in any sector through AI adoption, with the main objective of creating a trustworthy ecosystem. Toh says these frameworks should include principles that organizations can adopt easily. Singapore’s AI framework revolves around transparency and explainability, which are critical to establishing consumer trust in the products they use. In a recent parliamentary reply on AI regulatory frameworks, Singapore’s Ministry of Communications and Information pointed to the need for “responsible” development and deployment.
As AI development continues to accelerate, it’s crucial to safeguard the public while embracing technological advances. Until laws catch up with the pace of the technology, the public’s ability to manage their own risks will be paramount. Consumers should choose trusted brands that invest in handling customer data responsibly, including its use in AI deployments.
Editor Notes:
As artificial intelligence continues to transform every sector, it’s the responsibility of businesses and individuals to manage the potential risks effectively. Adopting AI technology without understanding the potential implications can lead to legal liabilities, security threats, and related business risks. At the same time, embracing AI responsibly can create a trustworthy ecosystem that fosters meaningful innovation.