May 24, 2024
The Danger Of Superhuman AI Is Not What You Think
Posted by Kelvin Dafiaghor in category: robotics/AI
The rhetoric over “superhuman” AI implicitly erases what’s most important about being human.
Still, ChatGPT operates in a mostly siloed fashion. It can’t yet venture out “into the wild” to execute online tasks. For example, if you wanted to buy a milk frother on Amazon for under $100, ChatGPT might be able to recommend a product or two, and even provide links, but it can’t actually navigate Amazon and make the purchase.
Why? Besides obvious concerns, like letting a flawed AI model go on a shopping spree with your credit card, one challenge lies in training AI to successfully navigate graphical user interfaces (GUIs), like your laptop or smartphone screen.
But even the current version of GPT-4 seems to grasp the basic steps of online shopping. That’s the takeaway of a recent preprint paper in which AI researchers described how they successfully trained a GPT-4-based agent to “buy” products on Amazon. The agent, dubbed MM-Navigator, did not actually purchase products, but it was able to analyze screenshots of an iOS smartphone screen and specify, with impressive accuracy, the appropriate action and where to click.
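To make the paper's screenshot-in, action-out loop concrete, here is a minimal sketch in Python. The prompt wording, the JSON action schema, and the model name are illustrative assumptions, not MM-Navigator's exact protocol:

```python
# Minimal sketch of one screenshot-in, action-out step of a GUI agent, in the
# spirit of MM-Navigator. The prompt and the JSON action schema below are
# illustrative assumptions; the paper's prompting details differ.
import base64
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def propose_action(screenshot_path: str, goal: str) -> dict:
    """Ask a multimodal model what to do next on a phone screenshot."""
    with open(screenshot_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4V-class model in the paper
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Goal: {goal}\n"
                    "Look at this phone screenshot and reply with JSON: "
                    '{"action": "tap" | "scroll" | "type", '
                    '"target": "<which UI element>", '
                    '"text": "<text to type, if any>"}'
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)


# One step of the milk-frother task from the article (hypothetical file name):
# propose_action("amazon_results.png",
#                "Buy a milk frother on Amazon for under $100")
# -> e.g. {"action": "tap", "target": "first result under $100", "text": ""}
```

Note that the sketch stops at *proposing* an action; actually executing taps against a live storefront is exactly the step that remains off-limits, for the reasons above.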
“The fear is that they can’t afford to let someone else get there first,” said Scott Jenson, a UX designer who left Google last month.
From MIT
Not all language model features are linear.
Recent work has proposed the linear representation hypothesis: that language models perform computation by manipulating one-dimensional representations of concepts (“features”) in activation space.
Continue reading “Not All Language Model Features Are Linear” »
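As a toy illustration of the distinction the paper draws, the sketch below uses synthetic activations rather than real model internals: a “linear” feature varies along a single direction, while a multi-dimensional feature (the paper's example is days of the week) occupies a circular two-dimensional subspace that a variance decomposition can reveal:

```python
# Toy contrast between a one-dimensional ("linear") feature and a circular,
# two-dimensional feature, using synthetic activations (an assumption; a real
# experiment would use actual language model activations).
import numpy as np

rng = np.random.default_rng(0)
d_model = 64
n = 500

# Linear feature: activations vary along a single unit direction v.
v = rng.normal(size=d_model)
v /= np.linalg.norm(v)
linear_acts = np.outer(rng.normal(size=n), v)
linear_acts += 0.05 * rng.normal(size=(n, d_model))

# Circular feature: seven concepts (e.g. weekdays) placed on a circle inside
# a random orthonormal 2-D plane of activation space.
plane = np.linalg.qr(rng.normal(size=(d_model, 2)))[0]
angles = 2 * np.pi * np.arange(7) / 7
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1) @ plane.T
circular_acts = circle[rng.integers(0, 7, size=n)]
circular_acts += 0.05 * rng.normal(size=(n, d_model))

# Variance spectra tell the two cases apart.
for name, acts in [("linear", linear_acts), ("circular", circular_acts)]:
    centered = acts - acts.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s**2 / (s**2).sum()
    print(name, np.round(var[:3], 2))
# linear   -> one dominant component (the single direction v)
# circular -> two roughly equal leading components (the 2-D plane)
```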
⚔️ Closed-source vs. Open-weight LLMs
The gap between closed-source and open-weight models is closing on the MMLU benchmark.
Post-training, model editing, quantization.
Researchers have developed a new technique to view living mammalian cells. The team used a powerful laser, called a soft X-ray free-electron laser, to emit ultrafast pulses of illumination lasting mere femtoseconds, or quadrillionths of a second.
Apptronik, a NASA-backed robotics company, has unveiled Apollo, a humanoid robot that could revolutionize the workforce — because there’s virtually no limit to the number of jobs it can do.
“The focus for Apptronik is to build one robot that can do thousands of different things,” Jeff Cardenas, the company’s co-founder and CEO, told Freethink. “The best way to think of it is kind of like the iPhone of robots.”
Continue reading “NASA partner unveils the ‘iPhone’ of robots” »
The future of space-based UV/optical/IR astronomy requires ever larger telescopes. The highest-priority astrophysics targets, including Earth-like exoplanets, first-generation stars, and early galaxies, are all extremely faint, which presents an ongoing challenge for current missions and defines the opportunity space for next-generation telescopes: larger apertures are the primary way to address this issue.
With mission costs depending strongly on aperture diameter, scaling current space telescope technologies to aperture sizes beyond 10 m does not appear economically viable. Without a breakthrough in scalable technologies for large telescopes, future advances in astrophysics may slow down or even completely stall. Thus, there is a need for cost-effective solutions to scale space telescopes to larger sizes.
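A hedged back-of-the-envelope calculation shows why. If mission cost grows as a power law in aperture diameter, C ∝ D^α, then tripling the aperture multiplies cost by 3^α; the exponent below is an assumption for illustration (published parametric models for space telescopes put it very roughly in the 1.5-2.5 range):

```python
# Back-of-the-envelope cost scaling under an assumed power law C ∝ D**alpha.
# alpha = 2.0 is an illustrative assumption, not a fitted cost model.
def relative_cost(d_new_m: float, d_ref_m: float, alpha: float = 2.0) -> float:
    """Cost of a d_new_m aperture relative to a d_ref_m aperture."""
    return (d_new_m / d_ref_m) ** alpha

# Scaling a 6.5 m (JWST-class) aperture to 20 m at alpha = 2.0:
print(round(relative_cost(20.0, 6.5), 1))  # -> 9.5
```

Under that assumption, a 20 m aperture built with today's technology would cost nearly an order of magnitude more than a 6.5 m one, which is why a fundamentally cheaper way to make large mirrors is attractive.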
The FLUTE project aims to overcome the limitations of current approaches by paving a path towards space observatories with large aperture, unsegmented liquid primary mirrors, suitable for a variety of astronomical applications. Such mirrors would be created in space via a novel approach based on fluidic shaping in microgravity, which has already been successfully demonstrated in a laboratory neutral buoyancy environment, in parabolic microgravity flights, and aboard the International Space Station (ISS).
The boom in AI funding rounds that require large amounts of capital at speed has increased attention to funding via SAFE (simple agreement for future equity) notes.
Although the Human Genome Project announced the completed sequencing of 20,000 human genes more than 20 years ago, scientists are still working to grasp how fully formed beings emerge from basic genetic instructions.