Has AI advanced too far and too fast? Does it represent an out-of-control threat to humanity? Some credible observers believe AI may have reached a tipping point and that, if research continues unchecked, the technology could spin out of control and become dangerous.
This article explores how Google responded to ChatGPT by using foundation models and generative AI to create innovative products and improve its existing offerings. It also examines Google’s use of Safe AI when creating new products.
“Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
We are indeed living in “interesting” times.
Paul Smith-Goodson is the Vice President and Principal Analyst covering AI and quantum for Moor Insights & Strategy. He is currently working on several research projects, one of which is a unique method of using machine learning for highly accurate prediction of real-time and future global propagation of HF radio signals.