Anthropic’s new AI chatbot Claude 3 Opus has already made headlines for its bizarre behavior, like claiming to fear death.
Now, Ars Technica reports, a prompt engineer at the Google-backed company claims to have seen evidence that Claude 3 is self-aware: the model seemingly detected that it was being subjected to a test. Many experts are skeptical, however, underscoring the controversy of ascribing humanlike characteristics to AI models.
“It did something I have never seen before from an LLM,” the prompt engineer, Alex Albert, posted on X, formerly Twitter.