Anthropic, a Google-backed artificial intelligence startup, on Tuesday expanded customer access to its chatbot Claude and updated the underlying technology, releasing "Claude 2," a model it says is more adept at tasks such as coding and math.
The San Francisco-based company said its new AI model was also less likely than its previous technology to produce offensive or harmful responses.
The announcement, which comes just a few months after Claude's initial debut, illustrates how quickly companies, including Alphabet's Google, Microsoft, Inflection AI, and many others, are racing for an advantage in AI.
One way Anthropic hopes to differentiate itself from the competition is its system's capacity to read roughly 75,000 words at a time, which the company says allows the AI to work through lengthy business documents.
Meanwhile, interest in regulation has grown significantly around the world following copyright disputes and frequent instances of AI systems producing false or otherwise harmful content.
Anthropic said the upgraded Claude performed twice as well as its predecessor on the company's safety evaluations.
Anthropic, which was founded by former OpenAI employees, reported that Claude 2 scored 76.5% on the multiple-choice section of the bar exam, up from its previous model's 73%.