
Microscale light-emitting diodes (micro-LEDs) are emerging as a next-generation display technology for optical communications, augmented and virtual reality, and wearable devices. Metal-halide perovskites offer efficient light emission, long-range carrier transport, and scalable manufacturing, making them strong candidates for bright micro-LED displays.

However, manufacturing thin-film perovskites suitable for micro-LED displays faces serious challenges. For example, thin-film perovskites can exhibit inhomogeneous light emission, and their surfaces can degrade during lithography. Solutions are therefore needed to make thin-film perovskites compatible with micro-LED devices.

Recently, a team of Chinese researchers led by Professor Wu Yuchen at the Technical Institute of Physics and Chemistry of the Chinese Academy of Sciences made significant strides in overcoming these challenges. The team developed a novel method for the remote epitaxial growth of continuous crystalline perovskite thin films. This advance allows seamless integration into ultrahigh-resolution micro-LEDs with pixel sizes below 5 μm.

I had a conversation with NVIDIA CEO Jensen Huang and we spoke about groundbreaking developments in physical AI and other big announcements made at CES. Jensen discusses how NVIDIA Cosmos and Omniverse are revolutionizing robot training, enabling machines to understand the physical world and learn in virtual environments — reducing training time from years to hours.

He shares insights on NVIDIA DRIVE's autonomous vehicle developments, including the company's major partnership with Toyota, and talks about the critical role of safety in NVIDIA's three-computer system approach.

Virtual reality headsets like the Meta Quest or Apple Vision Pro will be a Christmas gift in more than one home this year.

Now mice are getting in on the action.

Researchers have developed a set of VR goggles for lab mice for use in brain studies, according to a report published recently in the journal Nature Methods.

Thanks to their genetic makeup, their ability to navigate mazes and their willingness to work for cheese, mice have long been a go-to model for behavioral and neurological studies.

In recent years, they have entered a new arena—virtual reality—and now Cornell researchers have built miniature VR headsets to immerse them more deeply in it.

The team’s MouseGoggles—yes, they look as cute as they sound—were created using low-cost, off-the-shelf components, such as smartwatch displays and tiny lenses, and offer visual stimulation over a wide field of view while tracking the mouse’s eye movements and changes in pupil size.

We can expect to see more recommendations for VR in catastrophic injury cases.

Immersive Virtual Reality (IVR or VR) as a rehabilitation tool is evolving rapidly and has far-reaching consequences that will increasingly be seen in the claims space.

Combined with AI-powered treatment planning and smart home devices for daily rehabilitation, innovative technologies are now evident in every aspect of rehabilitation.

Just as the metaverse industry finally began its breakthrough, ChatGPT's launch in November 2022 set off a technological avalanche. Amid post-pandemic economic pressures, companies pivoted away from metaverse aspirations to AI adoption, seeking immediate returns through automation and virtualization.

Apple's 2023 entry into the space with the Vision Pro headset has similarly faced challenges. At $3,499, with limited content and a sparse developer ecosystem, Apple has struggled to find its market. Early projections suggest initial production runs of fewer than 400,000 units.

What if we’ve been looking for the metaverse in the wrong places? While Meta, HTC and Sony have so far struggled to establish their vision of a VR-first digital world, gaming platforms like Roblox quietly built what might be the actual metaverse.

Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.

While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.
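To make the Score Distillation idea concrete, here is a deliberately simplified NumPy sketch, not the actual method's code: the "2D prior" is stood in for by a Gaussian centered on a target, the "renderer" is the identity map, and all names (`target`, `predicted_noise`, `theta`) are illustrative. The core mechanic is real, though: noise is added to a rendering, the frozen prior predicts that noise, and the prediction error is pushed back into the scene parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen 2D diffusion prior: a Gaussian centered on a
# target image. Its score points from a noisy sample back toward the target.
target = np.array([1.0, -0.5, 2.0])  # illustrative image the prior prefers

def predicted_noise(x_noisy, sigma):
    """Noise estimate implied by the prior's score: eps_hat = -sigma * score."""
    score = target - x_noisy  # unit-variance Gaussian score (up to scale)
    return -sigma * score

# Toy "differentiable renderer": identity, so d(render)/d(theta) = I.
theta = np.zeros(3)  # parameters of the 3D asset (toy stand-in)
lr, sigma, samples = 0.05, 0.3, 8

for step in range(300):
    grad = np.zeros(3)
    for _ in range(samples):  # average over noise draws to reduce variance
        eps = rng.standard_normal(3)
        x_noisy = theta + sigma * eps  # noise the rendering
        # SDS-style gradient: (predicted noise - injected noise),
        # backpropagated through the renderer (identity here).
        grad += predicted_noise(x_noisy, sigma) - eps
    theta -= lr * grad / samples

print(np.round(theta, 2))  # theta has been pulled toward the prior's target
```

In expectation each step moves `theta` toward whatever the prior scores highly, which is why the technique can sculpt 3D parameters without any 3D training data, and also why its outputs inherit the blur and mode-averaging of the 2D prior's guidance.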

The long-held dream of tasting food through a screen is getting closer.


A team of biomedical engineers and virtual reality experts has developed a groundbreaking lollipop-shaped interface that simulates taste in virtual reality.

The Matrix is a groundbreaking AI model capable of generating infinite, high-quality video worlds in real time, offering unmatched interactivity and adaptability. Developed using advanced techniques like the Video Diffusion Transformer and Swin-DPM, it enables seamless, frame-level precision for creating dynamic, responsive simulations. This innovation surpasses traditional systems, making it a game-changer for gaming, autonomous vehicle testing, and virtual environments.

🔍 Key Topics Covered:
The Matrix AI model and its ability to generate infinite, interactive video worlds.
Real-time applications in gaming, autonomous simulations, and dynamic virtual environments.
Revolutionary AI techniques like Video Diffusion Transformer, Swin-DPM, and Interactive Modules.