- Amazon recently launched Cedric, an internal AI chatbot, to increase employee productivity.
- Cedric is designed for safe use, addressing the privacy concerns raised by external AI tools.
- Amazon has repeatedly warned employees not to use third-party AI chatbots, including ChatGPT.
Amazon warns employees not to use third-party AI chatbots like ChatGPT. Instead, staff are now encouraged to use a new internal tool called Cedric, Business Insider has learned.
Cedric is a “general purpose AI chatbot” that is “more secure than ChatGPT,” says an internal document obtained by BI. Amazon employees can “use it to ask questions, summarize documents, and brainstorm new ideas.”
Cedric’s goal is to help Amazon employees increase their productivity and job satisfaction, given that external AI tools are off-limits for security reasons, the document said.
“It has been more than a year since ChatGPT Enterprise and Copilot Enterprise were released, but Amazonians have been left behind due to limited options that are safe for enterprise use,” the document reads. “Without an AI assistant, Amazonians will have lower job satisfaction than employees at companies that adopt these tools. Additionally, companies that leverage AI will make decisions faster and be more productive, so they will be able to serve customers faster and better than Amazon.”
The launch of Cedric highlights the challenges businesses face in trying to use AI tools safely and securely. While AI-powered chatbots have the potential to help workers, the risk of employees sharing confidential company information, whether intentionally or unintentionally, is high. Questions remain about how generative AI companies handle sensitive information coming in and out of their systems, and whether this data is used to train models.
A thorny question
For Amazon, the issue is particularly thorny. Its arch-rival Microsoft has launched AI assistant products and is a close partner and investor in OpenAI, the startup behind ChatGPT. Shortly after ChatGPT’s release in late 2022, Amazon began warning employees not to share confidential information with the chatbot. Earlier this year, Amazon formalized internal guidelines banning external AI tools, including ChatGPT, for commercial purposes.
In recent years, several large companies, including Apple, Samsung, and JPMorgan Chase, have banned their employees from using ChatGPT over privacy concerns. That has created a hidden wave of employees who secretly use such AI tools at work, sometimes called “CheatGPTs,” because the technology can help them do their jobs faster.
In an email to BI, an Amazon spokesperson said the company supports the use of generative AI technology at work, including Cedric, and that internal guidance helps employees “use these services while properly managing confidential information.”
“Amazon employees use internal generative AI tools every day to innovate on behalf of our customers. We have safeguards in place for employee use of these technologies that focus on protecting confidential information, including guidance on accessing third-party generative AI services,” the spokesperson added.
‘Reading and writing companion’
Internally, Amazon calls Cedric “your secure document reading and writing companion,” according to the internal document. It suggests employees use the tool to draft Amazon’s famous six-page memos “in seconds” and to turn meeting notes into email-ready formats “safely and securely.”
The internal document adds that Cedric was trained on conversational text, so employees are encouraged to write prompts in plain English, as if they were speaking. The new tool can also help generate ideas and solve problems. One suggested use case: employees can upload Word documents, PDF files, and Excel spreadsheets and ask what a vice president might say about the content.
All Amazon employees now have access to Cedric. Some employees told BI that Amazon began promoting Cedric more broadly within the company a few weeks ago, after an initial pilot period.
For Cedric, Amazon did not use its internal Titan AI model. Instead, according to the document, Amazon used its Bedrock AI development platform and Anthropic’s Claude large language model. (Amazon has invested heavily in Anthropic.)
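The document does not describe Cedric’s code, but for illustration, a minimal sketch of how an internal tool might call Claude through Bedrock, using the AWS SDK’s bedrock-runtime Converse API, could look like the following. The region, model ID, and prompt are placeholder assumptions, not details from the document.

```python
# Illustrative sketch only — not Cedric's actual implementation.
# Assumes boto3 is installed and AWS credentials with Bedrock access are configured.
import boto3

# The region and Claude model ID below are placeholders, not details from the document.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize these meeting notes into an email-ready format."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant's reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

An internal deployment would add its own access controls and, as the document describes for Cedric, keep chat histories in encrypted storage rather than sending inputs to third parties.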
Approved for “highly confidential data”
Cedric is one of many AI tools that Amazon has recently launched or is building internally. Amazon Q is its flagship AI tool aimed at businesses and developers. Separately, Amazon is working on another AI chatbot code-named Metis and an updated AI-powered Alexa app, BI previously reported.
Unlike these apps, Cedric is for internal use only and its output “may not be used outside of Amazon,” the internal document warns.
Security is a key point for Cedric. Software developers and Amazon Web Services employees are allowed to use Cedric with “highly confidential data,” the document said. Cedric does not feed user inputs into the underlying base model for training purposes or share them with third-party model providers, and chat histories are stored in an encrypted database.
For code generation, however, employees should use Amazon Q rather than Cedric, the document recommends.
Cedric should not be used for any “consequential” decisions that may have a “legal or similarly significant effect on an individual,” the document adds.
However, Amazon seems confident in Cedric’s abilities and encourages employees to use it before meetings with the company’s most senior executives.
“How would a CEO respond?” the internal document suggested asking the AI chatbot.
Do you work at Amazon? Do you have a tip?
Contact the reporter, Eugene Kim, via the encrypted messaging apps Signal or Telegram (+1-650-942-3061) or email (ekim@businessinsider.com). Reach out using a nonwork device. Check out Business Insider’s source guide for more tips on sharing information safely.