Amazon has heard FinOps practitioners’ cries for new AI tools, and its answer is Titan and AWS Bedrock. 

These tools deliver the generative AI capabilities you’d expect: image generation in the vein of DALL-E, Large Language Model (LLM) chat like ChatGPT, and even audio-to-text transcription. 

But how do these new tools compare to pre-existing ones like Azure’s OpenAI? Most importantly, which of these tools is the best financial investment for your organization? 

Take it from the cloud experts. Here’s your complete guide to AWS Bedrock vs Azure OpenAI so you know the strengths and weaknesses of each tool. 

What is AWS Bedrock?

 

Amazon Bedrock Logo

Source: Amazon

First, let’s define some terms. 

AWS Bedrock is one of the newer AI tools on the block. Bedrock is a fully managed, serverless offering: developers can access foundation models (FMs) from Amazon and third-party providers through a single API and customize them to build out the tool of their dreams. 
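If you’re curious what that looks like in practice, here’s a minimal sketch, assuming the boto3 SDK, a region where Bedrock is enabled, and Amazon’s Titan Text Lite model (each provider on Bedrock uses its own request and response JSON schema, so adjust accordingly):

```python
import json
import boto3

# Call a Bedrock-hosted model through the runtime API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",  # Titan Text Lite's model ID
    body=json.dumps({
        "inputText": "Summarize our AWS spend report in three bullet points.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
    }),
)

# The response body is a JSON stream; Titan returns its text under "results".
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```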

Commonly supported Bedrock models include:

Model Family Functionality Max Requests
Titan (Text, Embeddings, Image Generator) Text generation, embeddings for search and recommendations, and image creation. 4k-8k tokens
Claude (Anthropic) Advanced reasoning, dialogue, and code comprehension and generation. Up to 200k tokens
Jurassic-2 (AI21 Labs) Text generation and completion tasks. 8k tokens
Llama 2 and Llama 3 (Meta) Chat, text generation, and code. 4k-8k tokens
Command and Command R+ (Cohere) Text generation, summarization, and chat. 4k-128k tokens
Mistral (Mistral AI) Text generation, chat, and code. 32k tokens
Embed (Cohere) Classifying, clustering, and semantic search. 512 tokens
Stable Diffusion SDXL (Stability AI) Create, modify, or refresh images based on text. Up to 1024x1024 images

 

What is Azure OpenAI?


Azure OpenAI is Azure’s solution to keep up with the booming AI market. This partnership between Microsoft and OpenAI means Azure users can use their Azure cloud account to access OpenAI models through a REST API, a web-based interface, or the Python SDK.

Azure OpenAI offers slightly different features than OpenAI, including private networking, better security, and co-developed APIs.
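Calling a chat model through the Python SDK looks roughly like this. It’s a sketch assuming the openai package v1 or later; the endpoint, key, API version, and deployment name are placeholders for whatever you configured in the Azure portal:

```python
from openai import AzureOpenAI

# Azure OpenAI routes requests to a *deployment* you create for a given model.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt-4o-deployment-name>",  # deployment name, not the raw model name
    messages=[{"role": "user", "content": "Summarize our cloud bill in three bullets."}],
)
print(response.choices[0].message.content)
```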

Model Family Functionality Max Requests
Embeddings (Ada, Text Embedding 3 (Small, Large)) Identifying anomalies, classifying tasks, clustering data, generating recommendations, and conducting searches. 2k-8k tokens
DALL-E (2 and 3) Create, modify, or refresh images based on text. 1k-4k characters
GPT-3.5 Turbo Sophisticated reasoning and conversation, comprehension and generation of code, and conventional completion tasks. 4k and 16k tokens
GPT-4 Advanced reasoning and dialogue, intricate problem-solving, code comprehension and generation, and standard completion tasks. 8k and 32k tokens
GPT-4o Multimodal reasoning over text and images, with faster responses than GPT-4 Turbo. 128k tokens
Whisper Convert audio to text. 25 MB audio size

Azure OpenAI vs AWS Bedrock

 

So, now that you know what Azure OpenAI and AWS Bedrock are, let’s take a look at the real question: which is better? 

Beyond the obvious difference that OpenAI is the much bigger name, the biggest gaps between the two are: 

  1. Services offered
  2. Accessible models
  3. Pricing

That said, as of 2024, many brands have endorsed GPT-4o as the leader in terms of quality. But that doesn’t mean it’s the clear winner. It all depends on the company and what it is trying to accomplish. 

AWS Bedrock vs Azure OpenAI: Services

 

When we take a look at services, we want to keep in mind four things: 

    1. Ease of use: How easy is it for customers to accomplish what they’re trying to do?
    2. Help documents: How does a customer get help if they have questions?
    3. Ability to create: What kind of content can you create? How easy is it to create useful content?
    4. Security: Customers need to give a lot of proprietary data to AI tools. Will that data be kept safe?

Ease of use

Bedrock makes things as simple as possible for users. It’s accessible via API or SDK, and also has plenty of no-code playgrounds, providing a very low barrier to entry. Bedrock is a fully managed service, which means you can count on Amazon to handle the technical details. You’ll have more time to focus on fine-tuning and working on Retrieval Augmented Generation (RAG) techniques. 

Azure OpenAI is also accessible via SDK and API and offers no-code playgrounds similar to Bedrock’s. However, it leaves more of the setup and integration work to you, so expect to be a bit more hands-on. 

Help documents

Customers flourish in cloud environments where they can easily find the tools they need and access help forums or customer support if they get stuck. A good tool should provide ample documentation to pre-empt questions and a community for users to turn to their peers if they get confused. Customer support lines are a must-have, too, of course! 

AWS Bedrock has a growing community and a growing library of help documents. It’s a fairly new service, so this makes sense, but there’s still room for growth. 

Azure OpenAI is similar to AWS Bedrock in terms of community and help documents. Since it is an older service, there are more places to ask for assistance, but it’s still not as robust as our cloud professionals would like. 

Ability to create

OpenAI provides natural language processing, copy and code generation, and can also help create other creative assets like images through its DALL-E integration. Its copy, chat, and code features are powered by GPT-3.5 and GPT-4.

Bedrock, on the other hand, can create the same kinds of content, often at a larger scale. For image creation, you can access Stable Diffusion (SDXL) and Titan Image Generator; for text generation, code, and chat, you can choose from models such as Claude, Jurassic-2, Llama, Command, and Titan Text. 
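As a rough sketch of what image generation looks like on Bedrock, here’s a call to the SDXL model using Stability AI’s request schema as exposed through Bedrock (Titan Image Generator uses a different JSON shape, so treat the fields below as model-specific):

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Generate one image from a text prompt with SDXL 1.0.
response = bedrock.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",
    body=json.dumps({
        "text_prompts": [{"text": "A minimalist dashboard visualizing cloud costs"}],
        "cfg_scale": 10,
        "steps": 30,
    }),
)

# SDXL returns base64-encoded images under "artifacts".
payload = json.loads(response["body"].read())
with open("generated.png", "wb") as f:
    f.write(base64.b64decode(payload["artifacts"][0]["base64"]))
```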

Security

Bedrock prides itself on user data protection. Customer data is held in service-team escrowed accounts, and customers have access to tools like Amazon DataZone, a data management service for cataloging and governing data across AWS. There’s also Guardrails for Amazon Bedrock, a safeguard that can redact or block personally identifiable information (PII) so it isn’t shared with others. 
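As a hypothetical sketch, attaching a guardrail you’ve already created (in the console or via Bedrock’s control-plane API) to an inference call looks roughly like this; the guardrail ID and version are placeholders:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The guardrail intercepts the request and response and redacts or blocks PII
# according to the policy you configured.
response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",
    guardrailIdentifier="<your-guardrail-id>",  # placeholder
    guardrailVersion="1",                       # placeholder
    body=json.dumps({
        "inputText": "Draft a reply to jane.doe@example.com about invoice 4432.",
        "textGenerationConfig": {"maxTokenCount": 256},
    }),
)
print(json.loads(response["body"].read()))
```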

Azure OpenAI is not quite as robust in its data security offerings. User data is kept secure, but those who are especially privacy-conscious should look into its 30-day storage policy.

What do we mean by that? Any data you provide to Azure OpenAI is stored for up to 30 days to detect and prevent behavior that violates its code of conduct. Customers who have been approved to turn off abuse monitoring are exempt from this policy.

 


AWS Bedrock vs Azure OpenAI: Models

 

Keep each tool’s model offerings in mind when picking the best fit for your organization. Make sure to note the following:

  • Supported languages 
  • Supported regions 
  • Max tokens 
  • Training data 

Supported languages

Bedrock’s language support varies from model to model. Jurassic supports seven languages, while most other models only support English.

OpenAI has far better language support. Its speech-to-text (Whisper) supports over 100 languages, from Spanish to French to German. 

Supported regions

Bedrock’s supported regions fall short of OpenAI’s. Here are the regions where you can access Bedrock:

  • Asia Pacific (Singapore, Tokyo) 
  • Europe 
  • US West 
  • US East 

The regions with OpenAI access vary by model, but in general, users can access OpenAI if they’re based in: 

  • Australia East 
  • Canada East 
  • France Central 
  • Japan East 
  • Norway East 
  • South India 
  • Sweden Central 
  • Switzerland North 
  • UK South 
  • West Europe 
  • US

Max tokens

AWS Bedrock’s max token count ranges depending on the model type and category. Take LLMs, for instance: Bedrock’s models start at 4k to 8k tokens and scale as high as 128k tokens, which amounts to about 300 pages of information. And that isn’t even the ceiling, since Claude models on Bedrock go up to 200k tokens, or roughly 500 pages. 

Azure OpenAI, by comparison, tops out at 4,096 tokens on many of its models, which is far smaller. 
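If you’re wondering where those page counts come from, here’s the back-of-the-envelope math, assuming roughly 0.75 English words per token and about 320 words per page (both rough rules of thumb, not official figures):

```python
# Convert a context window size into an approximate page count.
def tokens_to_pages(tokens, words_per_token=0.75, words_per_page=320):
    return round(tokens * words_per_token / words_per_page)

print(tokens_to_pages(128_000))  # ~300 pages
print(tokens_to_pages(200_000))  # ~469 pages, i.e. roughly 500
```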

Training data 

All of OpenAI’s models were trained until September 2021 except for GPT-4 Turbo, which was trained until April 2023. 

Bedrock’s Claude model was trained until December 2022, and Jurassic until about mid-2022. 

AWS Bedrock vs Azure OpenAI: Pricing

 

The ideal pricing plan offers flexibility for those willing to pay higher rates and better rates for those ready to commit. Companies should be able to choose from multiple plans designed to meet their needs, depending on where they are in their AI adoption process.

Overall, Bedrock wins out when it comes to pricing. OpenAI costs a bit more, and its plans are far less flexible. 

But we’re getting ahead of ourselves. Take a look at the payment plan breakdowns for AWS Bedrock and OpenAI below. 

Amazon Bedrock pricing

Bedrock pricing rises and falls depending on: 

  • Model inference 
  • Customization 
  • Region 

Not all models can be customized, and only certain models are available under specific plans. 

There are two plans to pick from: On-Demand or Provisioned Throughput.

On-Demand

On-Demand is a higher-cost, high-flexibility payment option. You pay per use, with charges varying depending on your model of choice. 

For example, an image generation model charges for each image you produce, whereas a text generation model charges for each input token processed and each output token created. 

Here are some of the most common models and their cost per 1,000 input and output tokens, to give you an idea of the numbers you’ll be dealing with:

Model Price per 1000 Input Tokens Price per 1000 Output Tokens
Claude 3.5 Sonnet $0.003 $0.015
Claude 3 Opus $0.015 $0.075
Command R+ $0.0030 $0.0150
Command $0.0015 $0.0020
Embed – English $0.0001 N/A
Embed – Multilingual $0.0001 N/A
Jamba-Instruct $0.0005 $0.0007
Jurassic-2 Ultra $0.0188 $0.0188
Llama 3 70B $0.00265 $0.0035
Llama 3 8B $0.0003 $0.0006
Llama 2 70B $0.00195 $0.00256
Mistral Small $0.001 $0.003
Mistral Large $0.004 $0.012
Titan Text Embeddings $0.0001 N/A
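To put those per-token rates in perspective, here’s a rough sketch of estimating a monthly on-demand bill for a chat workload. The request volume and token counts below are made-up assumptions, and you should plug in current rates from the Bedrock pricing page for your model and region:

```python
# Estimate a monthly on-demand bill from per-1,000-token prices.
def monthly_cost(requests_per_day, input_tokens, output_tokens,
                 price_in_per_1k, price_out_per_1k, days=30):
    per_request = (
        (input_tokens / 1000) * price_in_per_1k
        + (output_tokens / 1000) * price_out_per_1k
    )
    return requests_per_day * days * per_request

# Example: 10,000 requests/day, 1,500 input + 500 output tokens per request,
# at $0.003 in / $0.015 out per 1,000 tokens (Claude 3.5 Sonnet-class rates).
print(f"${monthly_cost(10_000, 1_500, 500, 0.003, 0.015):,.2f}")  # ≈ $3,600.00
```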

 

In comparison, here’s how much it’ll cost you to work with image generation models: 

Model Image Resolution Cost Per Image – Standard Quality (≤50 steps) Cost Per Image – Premium Quality (>50 steps)
SDXL 0.8 (Stable Diffusion) 512X512 or smaller $0.018 $0.036
SDXL 0.8 (Stable Diffusion) Larger than 512X512 $0.036 $0.072
SDXL 1.0 (Stable Diffusion) 1024X1024 or smaller $0.04 $0.08
Titan Image Generator (Standard) 512X512 $0.008 $0.01
Titan Image Generator (Standard) 1024X1024 $0.01 $0.012
Titan Image Generator (Custom Models) 512X512 $0.018 $0.02
Titan Image Generator (Custom Models) 1024X1024 $0.02 $0.022

 

Provisioned Throughput

Provisioned Throughput requires committing to a number of model units, so you’ll need to know what kind of content you’re going to generate and at what volume. You’re charged hourly, either with no commitment (for some models) or under a one- or six-month contract; signing up for a larger, longer workload gets you a lower hourly rate. A quick worked example after the table shows how the commitment tiers change the monthly bill.

Here’s a list of the models you’ll typically be dealing with and the associated costs: 

Model Price per Hour per Model Unit With No Commitment (Max One Custom Model Unit Inference) Price per Hour per Model Unit With a One Month Commitment (Includes Inference) Price per Hour per Model Unit With a Six Month Commitment (Includes Inference)
Claude 2.0/2.1 $70 $63 $35.00
Command $50 $40 $24
Embed – English $7 $7 $6
Embed – Multilingual $7 $7 $6
Llama 2 Pre-Trained and Chat (13B) N/A $21 $13
Llama 2 Pre-Trained (70B) N/A $21.18 $13.08
SDXL 1.0 (Stable Diffusion) N/A $50 $46
Titan Embeddings N/A $6.40 $5.10
Titan Image Generator (Standard) N/A $16.20 $13.00
Titan Multimodal Embeddings $9.38 $8.45 $6.75
Titan Text Lite $7.10 $6.40 $5.10
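To see how the commitment tiers play out, here’s a rough monthly comparison for a single model unit of Titan Text Lite, using the hourly rates from the table above and roughly 730 hours in a month:

```python
# Monthly cost of one Titan Text Lite model unit at each commitment tier.
HOURS_PER_MONTH = 730  # ~24 hours x 30.4 days

no_commitment = 7.10 * HOURS_PER_MONTH  # ≈ $5,183/month
one_month     = 6.40 * HOURS_PER_MONTH  # ≈ $4,672/month
six_month     = 5.10 * HOURS_PER_MONTH  # ≈ $3,723/month

print(f"No commitment: ${no_commitment:,.0f}/month")
print(f"1-month term:  ${one_month:,.0f}/month")
print(f"6-month term:  ${six_month:,.0f}/month")
```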

OpenAI pricing

OpenAI has only one pricing option: pay-as-you-go. There’s ease in simplicity, but it also means no access to the lower prices that come with committing to longer-term usage. Customizing models incurs extra costs, and prices vary depending on the region.

Pay-as-you-go pricing

OpenAI’s pay-as-you-go pricing varies by model. You pay per prompt token and completion token for text generation, per token for embeddings, and per 100 generated images for image models.

Here’s a preview of what OpenAI’s pricing looks like per model: 

Model Price for 1000 Input Tokens Price for 1000 Output Tokens
Ada (Text Embedding) $0.0001 N/A
Text Embedding 3 Large $0.00013 N/A
Text Embedding 3 Small $0.00002 N/A
Babbage-002 (GPT Base) $0.0004 $0.0004
Davinci-002 (GPT Base) $0.002 $0.002
GPT-3.5 Turbo $0.0005 $0.0015
GPT-4 (8k) $0.03 $0.06
GPT-4 (32k) $0.06 $0.12
GPT-4 Turbo $0.01 $0.03
GPT-4o $0.005 $0.015

 

Here’s the cost per image for the most common image generation models:

Models Resolution Price (per 100 Images)
DALL-E 3 (Standard) 1024X1024 $4
DALL-E 3 (HD) 1024X1024 $8
DALL-E 2 1024X1024 $2

 


AWS Bedrock vs OpenAI: Summary

 

Here’s a table that breaks down each feature and how AWS Bedrock and OpenAI compare: 

Feature AWS Bedrock Rating Azure OpenAI Rating Winner
Ease of use API & SDK access 7/10 API & SDK access 6/10 Bedrock
Help documents Some community & documents 7/10 Some community & documents 6/10 Bedrock
Ability to create Access to wide range of creative models 8/10 More limited access to creative models 6/10 Bedrock
Security High security 8/10 Medium security 6/10 Bedrock
Supported languages Varies. Jurassic supports 7 languages 5/10 OpenAI supports 100+ languages 7/10 OpenAI
Supported regions Asia Pacific, Europe, US West & East 5/10 Varies by model but widely available 8/10 OpenAI
Max tokens <=200k tokens 6/10 4,096 tokens 4/10 Bedrock
Training data Latest trained until 12/2022 7/10 Latest trained until 4/2023 8/10 OpenAI
Pricing On-demand and Provisioned Throughput 9/10 Pay-as-you-go 4/10 Bedrock

 

Overall, AWS Bedrock wins, though it depends on what you’re trying to accomplish. Azure OpenAI offers support for more languages and more regions, and its models have been updated more recently. 

But Bedrock can’t be beaten on pricing plans, token limits, and its wide range of models to pick from. 

Our final verdict:

For a flexible, robust tool that works with all kinds of machine learning (ML) frameworks, choose AWS Bedrock. For enterprises looking to connect an AI tool to other Azure services, Azure OpenAI is likely the best option.


Get 100% visibility into your new cloud AI tools

 

If you’re worried about keeping your data secure while you open a new Bedrock or OpenAI account, or worse, you’re unsure how you’ll manage your budget with these new additions, we have just the solution for you. We have experience managing budgets during OpenAI-to-Bedrock migrations, so you know you’ll be in expert hands. 

The good news is that you don’t have to commit to AWS Bedrock’s full cost… not with the right tools. With the right cloud cost management tool, cloud prices don’t need to be a surprise.

Getting 100% visibility into your cloud spending helps enterprises and MSPs reduce costs. Tools like Anodot capture minute data changes down to the hour, for up to two years of history, across an entire multi-cloud environment, with a dashboard that shows all of your cloud spending in one place alongside AI-powered recommendations and forecasting. That’s Anodot. 

And it gets better. Anodot has been demystifying cloud costs since day one. We’ve made it our mission to address poor cloud spend visibility, and by partnering with Automat-it, we’ve created CostGPT: a tool built to surface hidden costs and fix poor cost monitoring and reporting. It uses AI to make sense of your cloud price fluctuations and can help you save up to 30% on spend. 

Want a proof of concept? Talk to us to learn how much you can save with Anodot’s tools.  

Written by Perry Tapiero

Perry Tapiero is an experienced marketer specializing in demand generation across diverse B2B verticals such as AdTech, FinTech, and Cyber. With a focus on driving revenue and growth, Perry excels in developing and executing effective Go-To-Market strategies.

