
A Survey of the Generative AI Technology Landscape

Finding the right fit between your business needs and the new generative AI services is key to getting the most value from this exciting technology.


 

 

With several new generative AI services announced over the past year, finding the right fit between your business goals and the newest technologies can be an overwhelming process.

You may be weighing multiple factors at once: the cost of the services, the expertise needed to use them, and what is required to integrate them with the tools you are already using. Compound those considerations with a dynamic market where options are constantly evolving, and there may be no clear-cut answer as to what you should do.

Fortunately, RevGen is here to provide you with a better understanding of the generative AI services available today. There are several options you can consider as you begin to investigate how generative AI can accelerate your business.

 

Major Generative AI Services Overview 

If you have limited technical expertise on staff, or simply want to minimize ramp-up time, you may consider services built around pre-built models. These services often support integration with a business's applications for a specific use case.

The following are examples of major AI services that enable businesses to create AI applications:

  • OpenAI’s ChatGPT (recently invested in by Microsoft) is a large language model chatbot. It includes an API that developers can use to integrate ChatGPT requests into their own applications. Additionally, OpenAI’s API includes functionality beyond just large language models.
  • Google PaLM is a large language model developed by Google AI. It includes an API for building generative AI applications. 
  • Amazon Bedrock is broader than the previously mentioned platforms, offering models for both text and image generation. The array of services can be accessed with a single API. 

Note that while all three services involve APIs, the ability to manage the model itself may be limited. ChatGPT and Google PaLM give users limited visibility into the underlying models. Amazon Bedrock, on the other hand, allows users to make API calls to open-source models.

While many aspects of these services are managed for you, you will still need some level of AI expertise to use them effectively. Before fully committing to a service, review its documentation; even cursory research can save a lot of headaches down the road.

 

Using OpenAI’s API 

OpenAI’s API can be accessed through standard HTTP requests from any language. The official API documentation includes cURL (client URL) examples for authorization and requests.

You can also make requests through OpenAI’s Python library, Node.js library, or community-maintained libraries. 

The API supports endpoints for many aspects of generative AI: 

  • Audio transcription (speech to text)
  • Chat responses from the chat models
  • Text embeddings generated from text input for use by machine learning models
  • Image generation from text prompts
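
To make this more concrete, below is a minimal sketch of calling the chat and embeddings endpoints through OpenAI’s Python library. The model names, prompt text, and client interface shown here are illustrative assumptions; the exact syntax depends on the library version you install, so confirm the details against OpenAI’s current documentation.

```python
# Hedged sketch: chat and embeddings requests via OpenAI's Python library
# (v1-style client). Model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Chat endpoint: send a prompt and read back the model's reply
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[{"role": "user", "content": "Summarize this week's top trends in retail technology."}],
)
print(chat.choices[0].message.content)

# Embeddings endpoint: turn text into a numeric vector for downstream ML
embedding = client.embeddings.create(
    model="text-embedding-ada-002",  # example embedding model
    input="Customer churn increased 4% quarter over quarter.",
)
vector = embedding.data[0].embedding  # list of floats
```

The same pattern scales to automated workflows: a script can loop over a list of prompts or documents and collect the responses for later analysis.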

Businesses may consider using OpenAI’s API in workflows that require multiple automated steps: for example, automated prompting of OpenAI’s chat model for industry research purposes.  

Note that OpenAI uses a pay-per-use pricing model, with the price per unit depending on which model you use.

 

 

Using AWS (Amazon Web Services) Bedrock API 

Amazon Bedrock’s uses are quite broad, ranging from text summarization to image generation. A workshop of sample code for different use cases can be found here. 

Users interface with the Amazon Bedrock API via AWS’s Python SDK (software development kit), boto3.

As shown in the Amazon workshop, development is done in Python Jupyter notebooks (a web-based Python environment). In a notebook, the user opens a connection to Amazon Bedrock with Python syntax, chooses the model and its parameters, and prepares input data before sending it as a request. For example, a user may opt to conduct text summarization tasks with the “Claude” model by Anthropic. After connecting to Amazon Bedrock and specifying model parameters, the user prepares a text prompt in the notebook and sends it through an API request.
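
The sketch below illustrates that flow with boto3. The region, model ID, prompt format, and parameters are assumptions made for illustration (they follow the Anthropic Claude text-completion convention); check the Bedrock documentation for the models actually enabled in your account.

```python
# Hedged sketch: a text-summarization request to Amazon Bedrock via boto3.
# Model ID, region, prompt format, and parameters are illustrative only.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-style prompt: the text to summarize is wrapped in a Human/Assistant turn
prompt = ("\n\nHuman: Summarize the following meeting notes in three bullet points:\n"
          "...meeting notes here...\n\nAssistant:")

body = json.dumps({
    "prompt": prompt,
    "max_tokens_to_sample": 300,  # cap on the length of the generated summary
    "temperature": 0.2,           # lower values give more focused output
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # example model identifier
    body=body,
    contentType="application/json",
    accept="application/json",
)

summary = json.loads(response["body"].read())["completion"]
print(summary)
```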

Businesses may consider a service like this when they do not want to develop their own models but still need some customization. For example, a business could use Amazon Bedrock’s text analysis models to analyze customer sentiment across large volumes of product reviews.
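
Building on the invoke_model pattern above, a rough sketch of that sentiment use case might loop a batch of reviews through a Bedrock text model and collect a one-word label for each. Again, the model ID and prompt wording are placeholders, not a prescribed approach.

```python
# Hedged sketch: batch sentiment classification with Amazon Bedrock,
# reusing the boto3 invoke_model call shown above. Placeholder model and prompts.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

reviews = [
    "The checkout process was fast and painless.",
    "My order arrived two weeks late and support never responded.",
]

for review in reviews:
    body = json.dumps({
        "prompt": ("\n\nHuman: Classify the sentiment of this product review as "
                   "Positive, Negative, or Neutral. Reply with one word.\n\n"
                   f"Review: {review}\n\nAssistant:"),
        "max_tokens_to_sample": 10,
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # example model identifier
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    sentiment = json.loads(response["body"].read())["completion"].strip()
    print(f"{sentiment}: {review}")
```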

Amazon Bedrock offers an On-Demand pricing model (pay-per-use), as well as a more committed “Provisioned Throughput” pricing model (charged by the hour). Provisioned Throughput is required for custom models.

 

Open-Source Libraries 

Organizations should consider open-source AI libraries instead of proprietary generative AI platforms for the following reasons: 

  • “Open-source” means that the user can inspect and change the tools to their own needs, allowing for a more adaptable workflow. 
  • Development maintained by a community, rather than a sole intellectual property owner, provides stability and limits the fragility of the user’s workflow. 
  • Free open-source technology saves on licensing costs. 

However, when it comes to open-source, deployment and management can get a bit more complex. Working with open-source libraries often requires development work in building the model, as opposed to just making API requests. 

Some open-source frameworks available include:

  • TensorFlow – An open-source library developed by Google, used primarily for deep neural networks. It can be used from several programming languages, including Python. 
  • Keras – A high-level API, written in Python, that runs on top of TensorFlow. Useful for rapid prototyping. 

Because open-source AI libraries are often tools for directly building models, you will need to understand an algorithm’s specifications at a technical level. For example, to build deep neural networks in TensorFlow, your team should have members who are familiar with concepts like assembling layers, choosing appropriate activation functions, and choosing an optimization algorithm.
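
As a rough illustration of those concepts, the sketch below assembles a small classification network with TensorFlow’s Keras API. The layer sizes, activation functions, optimizer, and training data are all placeholder choices; a real model would be shaped by your data and problem.

```python
# Hedged sketch: assembling and training a small neural network with
# TensorFlow's Keras API. Layer sizes, activations, optimizer, and data
# are placeholders, not recommendations.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,)),  # hidden layer
    keras.layers.Dense(32, activation="relu"),                     # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),                   # binary output
])

# The optimizer and loss function are modeling choices your team must make
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for real features and labels
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=100)
model.fit(X, y, epochs=5, batch_size=16)
```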

 

What Next? 

As you build generative AI tools into your business, keep in mind that different services may be integrated with each other, and you are not stuck with one AI service for all your needs. You might choose to build a custom deep neural network in TensorFlow for a niche engineering application, while also making API calls to a fully managed AI service for more common business applications.

RevGen is here to help you design an effective strategy for capitalizing on the best of the technologies available. We can provide recommendations and support in applying generative AI tools effectively to your business to generate revenue, increase customer satisfaction, and improve maturity of response to complex business scenarios. 

For more information on RevGen’s AI services, visit our site. 

 

 
