Anthropic, an AI safety and research company, has launched a new AI assistant named Claude, designed to handle harmful, unpleasant, or malicious queries by explaining its objections to them rather than complying. Unlike OpenAI's ChatGPT, which has generated negative headlines, Claude was trained through self-improvement against a written set of rules and principles, guiding it to be helpful, honest, and harmless while performing tasks such as summarization, search, writing, Q&A, and coding. Anthropic's co-founders, former OpenAI executives Dario and Daniela Amodei, call this training method "Constitutional AI." It combines a supervised learning phase and a reinforcement learning phase that together train the model to engage with harmful queries without producing harmful output. Claude is already in use at several companies and is available in two versions, Claude and Claude Instant, with pricing per million characters. Anthropic plans to roll out updates to Claude in the coming weeks.
#claude #anthropic #chatgpt #chatgpt4.0 #artificialintelligence

