How RevGen Set Up Our Internal AI Governance

How is a data & AI company handling its own AI Governance? This article gives a peek behind the curtain into our own processes and guidelines.


As a data and AI consulting firm, it is our job to give our clients expert advice on strategy, technology, and implementation. Of course, we wouldn’t expect any of our clients to take us at our word—asking for proof before signing a contract is just shrewd business. So, we’re offering a quick peek behind the curtain into RevGen’s approach to one of today’s hottest topics, AI Governance, and how we’ve applied techniques and best practices internally. 

What is AI Governance? Much like Data Governance, it is the set of operating principles, restrictions, and guidelines that maintain secure, high-quality output from AI tools. The two concepts feed into each other as well; AI cannot be properly implemented without quality data, and that quality data must be protected from misuse by AI. 

 

[Read More: Data Governance in the Age of AI]

 

We feel that being trusted advisors to our clients requires us to be leaders in the practices we support, and RevGen has been practicing good AI governance for the past several years, through both policy and technology. We rolled out our first policy for the use of Generative AI in 2023. This policy enforced the strict confidentiality guidelines we use to protect both our clients' data and RevGen's own. Since then, the scope and depth of AI usage have blossomed, with new tools coming online for daily use. 

Pero Dalkovski, our VP of Technology, and Anne Lifton, Principal Architect of AI, with our CISO as executive sponsor, partnered to approve three AI tools for internal use, design a new AI governance program, and implement a roadmap to adoption. This was in addition to our voluntary monthly AI training program, attended by over 40% of the company, which covers AI best practices and teaches core AI skills.

Our internal guidelines aren’t just restrictions. They also provide examples of acceptable use cases for Generative AI, such as drafting PowerPoint content, as well as clarifying how the RevGen-approved technologies are best used, helping our internal teams self-select the correct tool for their needs. 

Another change to the guidance was recognizing that specific tools need specific guidelines. 

Pero and Anne approved the three technologies that would be most helpful for internal use: Microsoft Copilot, Claude by Anthropic, and GitHub Copilot. These technologies were selected using criteria similar to those we apply when helping clients with such assessments: 

  • Uncovering AI use cases across the entire company 
  • Understanding the features of potential tools 
  • Comparing feature sets to costs as well as terms & conditions (T&Cs) 

Ultimately, we chose the technologies that covered the most use cases while staying within our established budget. 


Our new guidance reflects that each of these tools has different terms and conditions and different uses. For instance, HR is unlikely to need the power of Claude, while our developers need more features than Microsoft Copilot can provide. We also required all RevGen employees to score 100% on a test of our guidelines before using LLMs or GitHub Copilot. We then provided how-to guides for each tool, which were required reading before a user could be approved for a license. This let us track and verify that the guidance had been read and understood. 

The final piece of the puzzle was establishing an AI Governance & Enablement committee. The committee audits the terms and conditions of our chosen tools quarterly, as these T&Cs change regularly, often with minimal notice. It is also how we engage Legal and IT to keep every AI implementation consistent with our security protocols and compliant with both governmental and internal restrictions. We also chose champions for each of the three tools to answer questions from internal users, raise governance red flags, and drive overall adoption. 

These champions are key to AI enablement throughout the company, as they are tasked with driving effective usage of the three tools. They help users by providing “tips & tricks”, highlighting recommended trainings, and hosting enablement sessions to talk through specific questions, concerns, and anything else they may want to know. This facilitates knowledge sharing across RevGen, and also provides a face that’s easy to talk to—not every AI beginner is willing to ask a CISO a “stupid” question. 

The small, SME-led committee also has the power to review new and emerging technologies, as well as requests to create internal AI agents. It keeps our executive team apprised of where we are and where we need to be with respect to AI usage and governance. Empowering this smaller group with oversight of internal AI operations allows us to move nimbly in a fast-changing landscape. 

Every organization will harness AI differently. However, no company, RevGen included, should do so without strong AI Governance in place to protect itself from the inherent risks of such a powerful technology. 

To learn more about how we can help your business establish an AI Governance policy, contact us today. 
