Archive for the ‘virtual reality’ category

Sep 24, 2020

Northrop Grumman’s CRS-14 Mission to the International Space Station: What’s on Board

Posted in categories: biotech/medical, virtual reality

On Sept. 29, we are launching science, tech demos, & products to the International Space Station!

🌱 Growing radishes in space
🧬 Cancer therapies
🚽 Space toilet
🌊 Water recovery
🎥 A Felix & Paul Studios Virtual Reality camera
💫 An Estee Lauder serum.

Sep 23, 2020

Remote-control VR robots to start working in Japanese convenience stores this summer

Posted in categories: food, robotics/AI, virtual reality

Family Mart’s robots will still be controlled by human employees.

Hardly a day goes by that we don’t find ourselves stopping into one of Japan’s many convenience stores to grab a bite to eat or something to drink. But while we’ve come to expect tasty onigiri rice balls and tempting dessert beverages when we walk through the door, soon we might be seeing robots.

Sep 23, 2020

Report: Fewer Americans want to work from home

Posted in categories: augmented reality, biotech/medical, business, neuroscience, virtual reality

Before #COVID19, we liked to imagine a #future where we could get and do anything from home, including work, with the help of novel #technologies such as #VR and #AR.

However, the #COVID19 pandemic has shown us something about human nature: “going out” is one of our basic needs!

One revelation here: when we talk about how #technology can change our lives, we often neglect the human factors and focus only on the technical ones. Take #VR as an example. Yes, it can give you a shopping experience similar to (or even better than) shopping in person. But do you really want to stay at home 24/7 and do everything online?

Continue reading “Report: Fewer Americans want to work from home” »

Sep 10, 2020

Catholic university astrophysicist creates black hole simulation in VR

Posted in categories: cosmology, virtual reality

A team of researchers from the Instituto de Astrofísica VR Lab at Pontificia Universidad Católica de Chile has released a virtual simulation of the black hole at the center of our galaxy. Known as “Galactic Center VR,” the short video, released on the Chandra X-ray Observatory YouTube channel, offers a 360-degree view of the center of the Milky Way, taking the viewer through about 500 years of stellar movement.

The simulation puts the viewer in the place of the black hole itself, Sagittarius A*, and allows for full rotation of the camera. The team explains in the video notes that the simulation shows “stellar giants” moving around the galactic center, while “stellar winds” blow off their surfaces to create different colors. Thankfully, they went into a little detail as to what these colors represent. They wrote:

Blue and cyan represent X-ray emission from hot gas with temperatures of tens of millions of degrees, while the red emission shows ultraviolet emission from moderately dense regions of cooler gas with temperatures of tens of thousands of degrees, and yellow shows the cooler gas with the highest densities.

Continue reading “Catholic university astrophysicist creates black hole simulation in VR” »

Sep 8, 2020

Facebook focuses on smart audio for AR glasses

Posted in categories: augmented reality, virtual reality

Inspirational speaker and Amazon best-selling author Sanjo Jendayi once said, “Listening doesn’t always equate to hearing. Hearing doesn’t always lead to understanding, but active listening helps each person truly ‘see’ the other.”

Jendayi was providing a little philosophical advice during a motivational speech, and technology was likely the last thing on her mind. But her words in fact might best describe the notion behind groundbreaking advances by the Facebook Reality Labs Research (FRLR) team’s top scientists, programmers and designers.

A post on the FRLR website last week provided a peek into where the social media giant is heading in the world of augmented reality and virtual reality.

Aug 24, 2020

Scientists Develop Nanophotonic 3D Printing for Virtual Reality Screens

Posted in categories: 3D printing, government, mobile phones, nanotechnology, quantum physics, virtual reality, wearables

In Korea, scientists are looking for better ways to improve our screen time, and that means 3D printing something most of us know little about: quantum dots. Aiming to refine the wonders of virtual reality and other electronic displays even further, researchers from the Nano Hybrid Technology Research Center of the Korea Electrotechnology Research Institute (KERI), a government-funded research institute under the National Research Council of Science & Technology (NST) of the Ministry of Science and ICT (MSIT), have created nanophotonic 3D printing technology for screens. Meant for use with virtual reality as well as TVs, smartphones, and wearables, the technique achieves high resolution through a 3D layout that increases the density and quality of the pixels.

Led by Dr. Jaeyeon Pyo and Dr. Seung Kwon Seol, the team published the results of their research and development in “3D-Printed Quantum Dot Nanopixels.” While pixels represent data in many electronics, they are conventionally created with 2D patterning. To overcome that approach’s limitations in brightness and resolution, the scientists took the technology a step further, 3D printing quantum dots contained within polymer nanowires.

Aug 23, 2020

Facebook is training robot assistants to hear as well as see

Posted in categories: information science, robotics/AI, virtual reality

In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.

In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three additional milestones: two new algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so agents can be trained to hear.

Aug 23, 2020

Stanford Scientists Slow Light Down and Steer It With Resonant Nanoantennas

Posted in categories: augmented reality, biotech/medical, computing, internet, nanotechnology, quantum physics, virtual reality

Researchers have fashioned ultrathin silicon nanoantennas that trap and redirect light, for applications in quantum computing, LIDAR and even the detection of viruses.

Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.

Now, in a paper published on August 17, 2020, in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
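For readers curious what “high-Q” means quantitatively, the standard resonator relation from textbook physics (not a formula taken from the paper itself) links the quality factor to how long light is stored:

```latex
Q \;=\; \omega_0 \,\frac{U_{\text{stored}}}{P_{\text{dissipated}}} \;=\; \omega_0 \tau,
\qquad U(t) \;=\; U_0\, e^{-t/\tau},
```

where \(\omega_0\) is the resonant frequency and \(\tau\) is the energy decay time. A higher Q therefore means a longer photon lifetime in the nanoscale bars before the light is released or redirected.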

Continue reading “Stanford Scientists Slow Light Down and Steer It With Resonant Nanoantennas” »

Aug 18, 2020

Scientists slow and steer light with resonant nanoantennas

Posted in categories: augmented reality, biotech/medical, computing, internet, nanotechnology, quantum physics, virtual reality

Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.

Now, in a paper published on Aug. 17, in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.

“We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent—as is the case with many silicon-based applications.”

Aug 18, 2020

Mix-StAGE: A model that can generate gestures to accompany a virtual agent’s speech

Posted in categories: robotics/AI, space, virtual reality

Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak.

Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual agents and robots communicate with humans by generating gestures to accompany their speech. Their paper, pre-published on arXiv and set to be presented at the European Conference on Computer Vision (ECCV) 2020, introduces Mix-StAGE, a new model that can produce different styles of co-speech gestures that best match a speaker’s voice and what he/she is saying.

“Imagine a situation where you are communicating with a friend in a virtual world through a headset,” Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. “The headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the gestures accompanying the speech.”

Page 1 of 59