Blue Planet Studio - stock.adobe

Tip

How to craft a responsible generative AI strategy

Generative AI's potential in the enterprise must be balanced with responsible use. Without a clear strategy in place, the technology's risks could outweigh its rewards.

As more organizations introduce generative AI into their workflows, establishing a responsible generative AI strategy has become essential.

Across industries, enterprise functions from marketing and operations to finance and legal are exploring department-specific generative AI use cases. Through proofs of concept and pilot projects, enterprise leaders are discovering the best ways to deploy and scale generative AI.

Compared to other technologies -- including other forms of AI, such as traditional machine learning and deep learning -- generative AI brings new risks and amplifies existing ones. To mitigate these risks and maximize the potential of generative AI applications, organizations must include a strategy for responsible use in their AI roadmap.

Top generative AI concerns

Concerns with generative AI fall into four broad categories: hallucinations and inaccuracies, intellectual property rights violations, data privacy and security concerns, and bias and harmful content.

Hallucinations

Large language models (LLMs) tend to hallucinate, or generate factually incorrect content. This is problematic in both employee-facing and customer-facing use cases, especially when users are not aware of the possibility that generative AI tools will produce false outputs.

Intellectual property rights violations

Two main issues arise regarding generative AI and copyright violations. First, the data used to train generative AI models might be governed by open source licenses or protected by copyrights. This could lead to legal challenges if the creators of the content used as training data did not give permission for such use. Various high-profile lawsuits against AI companies, such as the New York Times' case against OpenAI and Microsoft, are progressing through courts across the world.

The second issue focuses on the content generated by the AI model, rather than the data used to train it -- specifically, whether generative AI outputs might infringe on intellectual property rights. For example, if a generative AI application produces text, images or audio that closely resembles an existing copyrighted work present in its training data, this could lead to potential plagiarism accusations.

Data privacy and security concerns

Introducing generative AI into an organization poses significant data privacy and security risks. Often, the data used to train generative AI models is not publicly disclosed. If private user or enterprise data is included in the training corpus, a model's output could inadvertently reveal this sensitive information. Additionally, shadow AI -- when employees use generative AI applications independently without the organization's explicit knowledge -- can result in the accidental release of private data into a generative AI model, making it accessible to others.

Generative AI applications can be deployed on premises, in the cloud or in hybrid environments. Depending on the deployment type, data privacy might be compromised if input prompts and model outputs are not subject to the same stringent data privacy policies normally applied to enterprise data. Furthermore, generative AI introduces new attack surfaces, and malicious actors can exploit models via techniques such as prompt injection.

Bias and harmful content

AI-generated outputs can exhibit a variety of biases that do not align with antidiscrimination laws or an enterprise's brand values, such as stereotyping or prejudice against marginalized groups. In some cases, generative AI models can produce harmful content. Misleading, offensive or biased outputs can cause direct harm to users and result in increased liability for the organization.

There are also concerns about the energy required to power generative AI, not only during the training phase but also during use at inference time. Generative AI's energy consumption can be significantly higher than traditional enterprise technologies, raising questions about environmental impact and sustainability.

How to create a responsible generative AI strategy

Enterprises need a strategy to ensure responsible use of generative AI while promoting innovation and efficiency. Follow these strategies to get started.

Increase awareness

Many enterprises lack an acceptable use policy for generative AI, with some even imposing blanket bans on using these tools -- an often counterproductive approach, as many employees will simply continue to use the tools anyway.

Thus, the first step to ensuring responsible generative AI use is to create a policy that establishes ground rules and provides practical guidance. Update existing security and vendor policies to include clauses related to generative AI as appropriate. Conduct awareness and training sessions that cover not just a tool's use cases and strengths but also its limitations, enabling employees to become discerning users.

Institute guardrails

Implement guardrails, such as content moderation filters, at both the input and output stages. The goal is not to influence or reshape the output, but rather to ensure that harmful recommendations are not produced.

Strengthen governance

Adopt an AI governance framework. This is an area that requires close monitoring of an ever-changing landscape and responding to new opportunities and threats accordingly. Organizations should stay aware of the regulatory and compliance requirements specific to their industry and geography.

Adopt mitigation techniques and testing

Ensuring generative AI quality requires different skills compared with traditional software quality assurance. While hallucinations cannot be eliminated completely, they can be significantly reduced through techniques like retrieval-augmented generation and fine-tuning, which help align outputs with company values and ethical principles. Security testing, such as red-teaming LLMs, is also critical to generative AI success.

Update procurement standards

Some generative AI vendors offer indemnity clauses for plagiarism claims arising from their models' outputs. Update procurement guidelines to prioritize model transparency and documentation requirements. Consider requesting that AI vendors undergo independent audits of their AI models.

Kashyap Kompella is an industry analyst, author, educator and AI advisor to leading companies and startups across the U.S., Europe and the Asia-Pacific region. Currently, he is the CEO of RPA2AI Research, a global technology industry analyst firm.

Next Steps

 Top generative AI influencers to follow

Dig Deeper on AI business strategies

Business Analytics
CIO
Data Management
ERP
Close