Generative AI Policies: 7 Issues Leaders Should Watch

Dan O'Keefe, Appian
September 28, 2023

Generative artificial intelligence (AI) is a game-changer, bringing with it the promise of unparalleled efficiency and the potential for entrée into new markets. As generative AI continues to soar in popularity, organizations are eager to tap into its transformative power. 

However, this enthusiasm should come with a side of caution. It’s critical that organizations develop a strong generative AI policy so the allure of new technology doesn’t lead to devastating mishaps. Generative AI use policies help prevent mistakes, reduce potential risks, and ensure organizations unlock the clear benefits of AI while limiting the downsides. 

This post will cover seven critical issues to consider as you craft a strong, well-informed generative AI policy.

1. Data privacy.

Your data is gold, and you shouldn't have to share it with a third party. Data privacy might be one of the top issues with generative AI because many companies build large language models (LLMs) and train them on a mix of publicly available information and their customers' data. When you share your data with these models, you could be inadvertently helping your competition or unwittingly sharing trade secrets. 

Beyond that, you could run afoul of compliance regulations. Entering private information that could potentially identify customers violates regulations in tightly controlled industries, leading to the risk of fines. Make sure to set your rules around generative AI usage accordingly.
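
One concrete safeguard a policy can mandate is screening prompts for personally identifiable information (PII) before they ever reach a third-party model. Here is a minimal Python sketch of that idea; the regex patterns and the redact_prompt helper are illustrative assumptions, not from this post, and production systems typically rely on dedicated PII-detection tooling with far broader coverage:

```python
import re

# Illustrative patterns only; real deployments should use a dedicated
# PII-detection service rather than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Replace likely PII with placeholders before a prompt leaves your
    environment. Returns the cleaned text and whether anything was redacted."""
    redacted = False
    for label, pattern in PII_PATTERNS.items():
        prompt, count = pattern.subn(f"[{label.upper()} REDACTED]", prompt)
        redacted = redacted or count > 0
    return prompt, redacted

clean, flagged = redact_prompt("Customer jane@example.com called about SSN 123-45-6789.")
print(clean)    # Customer [EMAIL REDACTED] called about SSN [SSN REDACTED].
print(flagged)  # True
```

A gate like this can sit in a proxy between employees and any external AI service, so the policy is enforced by default rather than by memory.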

[ What differentiates private AI from public AI? Read the post, Private AI vs. Public AI: 4 Key Facts. ]

2. Intellectual property and copyright.

Since many AI models are trained on widely available information and on their customers' data, intellectual property usage and ownership become fuzzy. This is a two-fold problem. First, your own IP could end up being used by a competitor if they use a generative AI model that generates content very similar to yours. Second, you could unwittingly use AI-generated content that's extremely similar to someone else's work, leading to accusations of plagiarism.


3. Information access.

One critical generative AI policy area to get right is governing information access. If an employee asks a question, generative AI will give an answer. But should that employee be able to receive the answer? For example, suppose someone asks, “What is the underlying architecture for our systems?” That information might be helpful for IT or architecture teams, but someone in payroll requesting it could indicate a security risk. 

Information governance is crucial. When you're building your own AI, train it only on information that is safe for anyone who can query it to see. Without that discipline, information access can easily become a nightmare.
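
To make this concrete, here is a minimal sketch of the kind of role-based gate a policy might require in front of an internal assistant. The role names, topic tags, and the query_model stub are hypothetical examples, not any specific product's API:

```python
# Hypothetical role-to-topic permissions; a real deployment would pull
# these from your identity and access management system.
ROLE_PERMISSIONS = {
    "it": {"architecture", "infrastructure", "general"},
    "payroll": {"compensation", "general"},
}

def query_model(question: str) -> str:
    # Placeholder for the actual call to your generative AI model.
    return f"(model answer to: {question})"

def answer(role: str, question: str, topic: str) -> str:
    """Only let the model answer if the user's role is cleared for the topic."""
    allowed = ROLE_PERMISSIONS.get(role, {"general"})
    if topic not in allowed:
        # Deny and log instead of letting the model answer freely.
        return "Access denied for this topic; the request has been logged."
    return query_model(question)

# An architecture question from payroll is blocked, not answered.
print(answer("payroll", "What is the underlying architecture for our systems?", "architecture"))
```

In a real deployment the topic would come from a classifier or from tags on the documents the model retrieves; the essential point is that the access check runs before any answer leaves the system.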

[ Learn more about the differences between generative AI and large language models. ]

4. Traceability and bias.

Most people have heard about AI hallucinations, even if they don't follow AI/ML news every day. Hallucinations can often be silly and inconsequential. For example, ask a model who won the World Series in 1910 and it might tell you the Philadelphia Eagles (right city, wrong sport ... it was the Philadelphia Athletics). 

Hallucinations like these have little impact. But when you’re making critical business decisions, you can’t afford a wrong answer. It’s important to choose AI models that are trained on your specific data (which reduces the chances of inaccuracies) and/or that show how they arrived at an answer. If the model shows its work, experts can validate the output and ensure you’re making decisions based on accurate information. 
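
As an illustration of what “showing its work” can look like in practice, here is a minimal Python sketch of a retrieval-style flow where every answer carries the IDs of the documents it was grounded in, so a subject-matter expert can check the sources. The Answer structure, the toy keyword retrieval, and the placeholder model call are assumptions for the example; real systems typically use vector search and an actual model:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list[str]  # IDs of the documents the answer was grounded in

def retrieve(question: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Toy keyword-overlap retrieval; real systems use vector search."""
    words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda doc_id: -len(words & set(corpus[doc_id].lower().split())))
    return ranked[:k]

def answer_with_sources(question: str, corpus: dict[str, str]) -> Answer:
    doc_ids = retrieve(question, corpus)
    context = " ".join(corpus[d] for d in doc_ids)
    # Placeholder for the model call, constrained to the retrieved context.
    text = f"(answer derived from: {context})"
    return Answer(text=text, sources=doc_ids)

corpus = {
    "policy-001": "Refunds are issued within 30 days of purchase.",
    "policy-002": "Support tickets are answered within one business day.",
}
result = answer_with_sources("How long do refunds take after purchase?", corpus)
print(result.sources)  # ['policy-001'] -- an expert can open this document and verify the claim
```

The design choice is the same whether you build or buy: an answer without its sources can't be validated, so the sources should travel with the answer.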

Traceability is also closely related to bias. AI bias has been a hot topic, too, and if you don’t understand what went into building a model or how it arrived at a result, you could end up institutionalizing biases.

5. Regulations.

AI is under scrutiny from regulators across the world. Nearly everyone recognizes that this technology can be misused, from spreading false information to leaking data to causing copyright and plagiarism issues. The legal details will vary by region, and the jury is still out on how regulations will ultimately take shape. But it's important to realize that the regulatory landscape will continue to evolve.

However, this isn’t just about new laws. Many industries already face existing regulations that could open them to risks when it comes to generative AI. Many public sector organizations, for example, have to be risk-averse in their generative AI use policies. Healthcare organizations may need to be extra cautious as well, especially about where protected health information might be used.

6. Lack of clear use cases.

AI isn’t magic, although it can seem that way. If organizations want to deploy AI effectively, it’s critical that they don’t just use AI for its own sake. It must tie back to their overall corporate strategy. 

Often, companies fear falling behind when it comes to using AI, but they should avoid acting hastily. Companies should think hard about which organizational problems they want AI to help solve. For example, they may want to predict outcomes around customer retention, reduce the cost of processing documents and content, or build a natural language query interface for their own knowledge base to improve question answering. 

Remember to focus on key metrics: Will this reduce costs? Free up resources for other work? Tie into a larger strategic initiative? Often, it helps to look at the problems you’d like to solve, then decide how to apply AI to help—or even decide that it would be better to use another technology, such as robotic process automation (RPA) or traditional analytics. 

[ Curious about more concrete use cases for AI? Read 7 AI/ML Use Cases to Watch. ]

7. The need to reskill the workforce.

The news media emphasizes the need to reskill the workforce, usually in the context of careers that AI may disrupt to some extent. However, AI doesn't replace humans; it augments us. And this means that organizations must focus on reskilling their own workforce to meet the moment and prepare for the future. 

How do you go about doing this? Training and education are critical, but you also have to give employees the time and space to work with these tools. Give them room to experiment with AI in their work, provided they do so safely and don't break governance rules. Employees need to grow comfortable using AI tools before you'll see true long-term gains.

Generative AI policies: Your key to responsible AI usage.

As technology advances at an increasing clip, generative AI promises to reshape industries and prompt organizations to re-evaluate their existing strategies. But there are risks and hurdles to clear. Companies must keep data privacy, intellectual property, bias, governance, and even the application of AI itself in mind. They must have clear use cases that support business initiatives, instead of treating AI as a solution in search of a problem. Doing so can help organizations reap the benefits of AI while still safeguarding their business from risks. 

As mentioned earlier, data privacy is a critical factor when it comes to operationalizing AI in your organization. But you don’t always have to compromise your data privacy. Get the eBook, Implementing Private AI: A Practical Guide, to learn how.