
Archive for the ‘robotics/AI’ category: Page 177

Jun 3, 2024

Nvidia unveils next-gen AI chip platform Rubin, set for 2026 release

Posted in category: robotics/AI

Nvidia Corp. has unveiled Rubin, its next-generation artificial intelligence (AI) chip platform, which will be available in 2026, according to CEO Jensen Huang. The announcement was made during a presentation at National Taiwan University in Taipei as part of the Computex trade fair.

Rubin platform to include advanced CPUs, GPUs, and networking chips.

The Rubin chip family will include new GPUs, CPUs, and networking processors. The new CPU, called Vera, will be geared toward improving AI capabilities. The GPUs, which are critical for powering AI applications, will use next-generation high-bandwidth memory from industry giants including SK Hynix, Micron, and Samsung. Despite the enthusiasm surrounding the introduction, Huang revealed only limited information regarding the Rubin platform's specific features and capabilities.

Jun 2, 2024

Understanding Abstractions in Neural Networks

Posted in category: robotics/AI

How thinking machines implement one of the most important functions of cognition.

Jun 2, 2024

Convolutional Neural Networks (CNNs) Explained

Posted in category: robotics/AI

Learn the basics of Convolutional Neural Networks (CNNs) in this beginner-friendly guide. Discover how they work, their applications in image recognition, and how they’re changing the world of AI.
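To make the idea concrete, here is a minimal, hedged sketch of a small CNN for 10-class image recognition in PyTorch; the layer sizes and input shape are illustrative assumptions, not taken from the guide:

```python
# Minimal CNN sketch for 32x32 RGB images and 10 classes (sizes are illustrative).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = SmallCNN()(torch.randn(1, 3, 32, 32))   # one fake image
print(logits.shape)                              # torch.Size([1, 10])
```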

Jun 2, 2024

Machine intelligence accelerated design of conductive MXene aerogels with programmable properties

Posted in categories: bioengineering, nanotechnology, robotics/AI, wearables

Conductive aerogels have gained significant research interest due to their ultralight characteristics, adjustable mechanical properties, and outstanding electrical performance [1–6]. These attributes make them desirable for a range of applications, spanning from pressure sensors [7–10] to electromagnetic interference shielding [11–13], thermal insulation [14–16], and wearable heaters [17–19]. Conventional methods for the fabrication of conductive aerogels involve the preparation of aqueous mixtures of various building blocks, followed by a freeze-drying process [20–23]. Key building blocks include conductive nanomaterials like carbon nanotubes, graphene, and Ti3C2Tx MXene nanosheets [24–30], functional fillers like cellulose nanofibers (CNFs), silk nanofibrils, and chitosan [29,31–34], polymeric binders like gelatin [25,26], and crosslinking agents that include glutaraldehyde (GA) and metal ions [30,35–37]. By adjusting the proportions of these building blocks, one can fine-tune the end properties of the conductive aerogels, such as electrical conductivity and compression resilience [38–41]. However, the correlations between compositions, structures, and properties within conductive aerogels are complex and remain largely unexplored [42–47]. Therefore, to produce a conductive aerogel with user-designated mechanical and electrical properties, labor-intensive and iterative optimization experiments are often required to identify the optimal set of fabrication parameters. Creating a predictive model that can automatically recommend the ideal parameter set for a conductive aerogel with programmable properties would greatly expedite the development process [48].

Machine learning (ML) is a subset of artificial intelligence (AI) that builds models for predictions or recommendations [49–51]. AI/ML methodologies serve as an effective toolbox for unraveling intricate correlations within parameter spaces with multiple degrees of freedom (DOFs) [50,52,53]. The adoption of AI/ML in materials science research has surged, particularly in fields with available simulation programs and high-throughput analytical tools that generate vast amounts of data in shared, open databases [54], including gene editing [55,56], battery electrolyte optimization [57,58], and catalyst discovery [59,60]. However, building a prediction model for conductive aerogels faces significant challenges, primarily due to the lack of high-quality data points. One major root cause is the lack of standardized fabrication protocols for conductive aerogels, as different research laboratories adopt different building blocks [35,40,46]. Additionally, recent studies on conductive aerogels focus on optimizing a single property, such as electrical conductivity or compressive strength, while the complex correlations between these attributes are often neglected [37,42,61–64]. Moreover, because the fabrication of conductive aerogels is labor-intensive and time-consuming, the acquisition rate of training data points is highly limited, making it difficult to construct an accurate prediction model capable of predicting multiple characteristics.

Herein, we developed an integrated platform that combines the capabilities of collaborative robots with AI/ML predictions to accelerate the design of conductive aerogels with programmable mechanical and electrical properties (see Supplementary Fig. 1 for the robot–human teaming workflow). Based on specific property requirements, the robots/ML-integrated platform was able to automatically suggest a tailored parameter set for the fabrication of conductive aerogels, without the need for iterative optimization experiments. To produce various conductive aerogels, four building blocks were selected: MXene nanosheets, CNFs, gelatin, and GA crosslinker (see Supplementary Note 1 and Supplementary Fig. 2 for the selection rationale and model expansion strategy). Initially, an automated pipetting robot (the OT-2 robot) was operated to prepare 264 mixtures with varying MXene/CNF/gelatin ratios and mixture loadings.
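As a hedged sketch of what such a property-prediction and inverse-design loop might look like: the synthetic data, column meanings, model choice, targets, and search ranges below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: learn a composition -> property mapping, then search
# the composition space for a recipe matching target properties. The synthetic
# data, targets, and ranges are assumptions, not the paper's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Stand-in for the 264 robot-prepared samples: columns are MXene, CNF, gelatin
# fractions and total loading; properties are conductivity and max stress.
X = rng.random((264, 4))
y = np.column_stack([
    50 * X[:, 0] * X[:, 3] + rng.normal(0, 1, 264),       # fake conductivity trend
    30 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 1, 264),  # fake stiffness trend
])

model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y)

# Inverse design: brute-force candidate recipes and keep the one whose
# predicted properties are closest to the user-specified targets.
target = np.array([25.0, 20.0])          # desired property values (arbitrary units)
grid = np.linspace(0.05, 0.9, 10)
candidates = np.array([[m, c, g, l] for m in grid for c in grid
                       for g in grid for l in grid])
pred = model.predict(candidates)
best = candidates[np.argmin(np.linalg.norm(pred - target, axis=1))]
print("suggested MXene/CNF/gelatin/loading parameters:", best)
```

A model like this could be refit each time the robot contributes new measured samples, which is the kind of closed robot/ML loop the excerpt describes.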

Jun 2, 2024

Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics

Posted in category: robotics/AI

Brain-inspired spiking neural networks have shown their capability for effective learning; however, current models may not consider realistic heterogeneities present in the brain. The authors propose a neuron model with temporal dendritic heterogeneity for improved neuromorphic computing applications.
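As a rough illustration of the general idea (not the authors' model), a leaky integrate-and-fire neuron can be given several dendritic branches, each low-pass filtering its input with its own time constant before the soma integrates them; all parameter values below are assumptions.

```python
# Minimal sketch: a LIF neuron whose dendritic branches filter input with
# different (heterogeneous) time constants. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                  # ms
tau_dend = np.array([5.0, 20.0, 80.0])    # per-branch time constants (ms): the heterogeneity
tau_mem, v_thresh, v_reset = 10.0, 1.0, 0.0

def simulate(inputs):
    """inputs: array of shape (T, n_branches) giving synaptic drive per branch."""
    dend = np.zeros(inputs.shape[1])
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        # Each branch integrates its input on its own timescale.
        dend += dt / tau_dend * (-dend + x)
        # The soma sums the branch currents and leaks toward rest.
        v += dt / tau_mem * (-v + dend.sum())
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

drive = rng.random((200, 3))
print("spike times (ms):", simulate(drive))
```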

Jun 2, 2024

NVIDIA NIM Revolutionizes Model Deployment, Now Available to Transform World’s Millions of Developers Into Generative AI Developers

Posted in category: robotics/AI

NVIDIA today announced that the world’s 28 million developers can now download NVIDIA NIM™ — inference microservices that provide models as optimized containers — to deploy on clouds, data centers or workstations, giving them the ability to easily build generative AI applications for copilots, chatbots and more, in minutes rather than weeks.
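For context, NIM containers are documented to expose an OpenAI-compatible HTTP API; a minimal sketch of querying one that is already running locally might look like the following, where the port, model identifier, and prompt are assumptions.

```python
# Illustrative sketch: calling a locally deployed NIM LLM container through
# its OpenAI-compatible endpoint. Port, model name, and prompt are assumed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",   # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what an inference microservice is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

The point of the container packaging is that the same client code works whether the microservice runs on a workstation, in a data center, or in the cloud.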

Jun 2, 2024

NVIDIA CEO Jensen Huang Keynote at COMPUTEX 2024

Posted in category: robotics/AI

NVIDIA founder and CEO Jensen Huang will deliver a live keynote address ahead of COMPUTEX 2024 on June 2 at 7 p.m. in Taipei, Taiwan, outlining what’s next for the AI ecosystem. Tune in to watch it live. https://nvda.ws/3UXATRe

Jun 2, 2024

Memristor-based adaptive neuromorphic perception in unstructured environments

Posted in categories: information science, robotics/AI, transportation

Differential neuromorphic computing, as a memristor-assisted perception method, holds the potential to enhance subsequent decision-making and control processes. Although both the conventional PID control approach and the proposed differential neuromorphic computing share the fundamental principle of adjusting outputs in response to feedback, they diverge significantly in how data are manipulated (Supplementary Discussion 12 and Fig. S26): our method leverages the nonlinear characteristics of the memristor and a dynamic selection scheme to execute more complex data manipulation than the linear, coefficient-based error correction in PID. Additionally, the intrinsic memory function of memristors enables our system to adapt in real time to changing environments, a significant advantage over the static parameter configuration of PID systems. To perform similar adaptive control functions in the tactile experiments, a von Neumann architecture follows a multi-step process involving several data movements:

1. Input data about the piezoresistive film state is transferred to system memory via an I/O interface.
2. This sensory data is moved from memory to the cache.
3. It is then forwarded to the arithmetic logic unit (ALU) and waits for processing.
4. Historical tactile information is also transferred from memory to the cache, unless it is already present.
5. This historical data is forwarded to the ALU.
6. The ALU combines the current sensory data with the historical data and returns the updated historical data to the cache.

In contrast, our memristor-based approach simplifies this to three primary steps, sketched in code below:

1. The ADC reads data from the piezoresistive film.
2. The ADC reads the current state of the memristor, which represents the historical tactile stimuli.
3. The DAC, controlled by FPGA logic, updates the memristor state based on these inputs.

This reduces operating costs and enhances data-processing efficiency.
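A hedged sketch of that three-step read-update loop, with the ADC/DAC and the memristor replaced by placeholder functions and a single state variable; the update rule and constants are illustrative assumptions, not the paper's circuit.

```python
# Illustrative sketch of the simplified memristor read-update loop:
# (1) read the pressure sensor, (2) read the memristor state that encodes
# tactile history, (3) write an updated state back through the DAC.
# All hardware calls are placeholders; the update rule is an assumption.
import random

def adc_read_film() -> float:
    """Placeholder for the ADC reading of the piezoresistive film (0..1)."""
    return random.random()

class Memristor:
    """Toy stand-in: one conductance value representing tactile history."""
    def __init__(self):
        self.state = 0.0
    def adc_read(self) -> float:
        return self.state
    def dac_write(self, new_state: float) -> None:
        self.state = min(max(new_state, 0.0), 1.0)   # clamp to device range

mem = Memristor()
ALPHA = 0.2   # assumed blending factor between new stimulus and stored history

for step in range(5):
    pressure = adc_read_film()       # step 1: current tactile stimulus
    history = mem.adc_read()         # step 2: stored tactile history
    mem.dac_write((1 - ALPHA) * history + ALPHA * pressure)   # step 3: update
    print(f"step {step}: pressure={pressure:.2f} history={mem.adc_read():.2f}")
```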

In real-world settings, robotic tactile systems must process large amounts of tactile data and respond as quickly as possible, within less than 100 ms, similar to human tactile systems [58,59]. Current state-of-the-art robotic tactile technologies can detect sudden changes in force, such as slip, at millisecond levels (from 500 μs to 50 ms) [59–62], and the response time of our tactile system also reaches this level. For visual processing, suppose a vehicle travels at 40 km per hour in an urban area and control must be effective every 1 m. Since 40 km/h is roughly 11 m/s, covering 1 m takes about 90 ms, so the requirement translates to a maximum allowable response time of 90 ms for the entire processing pipeline, which includes sensors, operating systems, middleware, and applications such as object detection, prediction, and vehicle control [63,64]. When incorporating our proposed memristor-assisted method with conventional camera systems, the additional time delay comprises the delay from filter circuits (less than 1 ms) and the switching time of the memristor device, which ranges from nanoseconds (ns) down to picoseconds (ps) [21,65–67]. Compared with the required overall response time of the pipeline, these additions are negligible, demonstrating the potential of our method in real-world driving scenarios [68]. Although our memristor-based perception method meets the response-time requirement for the described scenarios, our approach faces several challenges that need to be addressed for real-world applications. Apart from common issues such as variability in device performance and the nonlinear dynamics of memristive responses, our approach needs to overcome the following challenges:

Currently, the modulation voltage applied to memristors is preset based on the external sensory feature, and the control algorithm is based on hard threshold comparison. This setting lacks the flexibility required for diverse real-world environments where sensory inputs and required responses can vary significantly. Therefore, it is crucial to develop a more automatic memristive modulation method along with a control algorithm that can dynamically adjust based on varying application scenarios.

Jun 2, 2024

A 3D ray traced biological neural network learning model

Posted in categories: biological, information science, robotics/AI

In artificial neural networks, many models are trained for a narrow task using a specific dataset. They face difficulties in solving problems that include dynamic input/output data types and changing objective functions. Whenever the input/output tensor dimension or the data type is modified, the machine learning models need to be rebuilt and subsequently retrained from scratch. Furthermore, many machine learning algorithms that are trained for a specific objective, such as classification, may perform poorly at other tasks, such as reinforcement learning or quantification.

Even if the input/output dimensions and the objective functions remain constant, the algorithms do not generalize well across different datasets. For example, a neural network trained to classify cats and dogs does not perform well at classifying humans and horses, despite both datasets having the exact same image input [1]. Moreover, neural networks are highly susceptible to adversarial attacks [2]. A small deviation from the training dataset, such as changing one pixel, could cause the neural network to perform significantly worse. This is known as the generalization problem [3], and the field of transfer learning can help to solve it.

Transfer learning [4–10] solves the problems presented above by allowing knowledge transfer from one neural network to another. A common way to use supervised transfer learning is to obtain a large pre-trained neural network and retrain it for a different but closely related problem. This significantly reduces training time and allows the model to be trained on a less powerful computer. Many researchers have taken pre-trained neural networks such as ResNet-50 [11] and retrained them to classify malicious software [12–15]. Another application of transfer learning is tackling the generalization problem, where the testing dataset is completely different from the training dataset. For example, every human has unique electroencephalography (EEG) signals owing to their distinctive brain structure. Transfer learning addresses this by pretraining on a general-population EEG dataset and retraining the model for a specific patient [16–20]. As a result, the neural network is dynamically tailored to a specific person and can interpret their EEG signals properly. Labeling large datasets by hand is tedious and time-consuming. In semi-supervised transfer learning [21–24], either the source dataset or the target dataset is unlabeled, so the network must learn which pieces of information to extract and process without many labels.
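As a concrete, hedged illustration of the pre-train-then-retrain recipe described above, the following sketch loads a pre-trained ResNet-50 from torchvision, freezes its backbone, and swaps in a new classification head for a hypothetical two-class target task; the class count, fake batch, and hyperparameters are assumptions.

```python
# Illustrative transfer-learning sketch: reuse a pre-trained ResNet-50 and
# retrain only a new final layer for a small, related 2-class task.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new head for the target task
model = model.to(device)

# Fake stand-in batch so the sketch runs without a dataset on disk; in practice
# this would come from a DataLoader over the new task's labeled images.
images = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 2, (8,), device=device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for step in range(3):                            # a few steps stand in for real fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.3f}")
```

Because only the new head is trained, a handful of epochs on modest hardware is usually enough, which is the practical appeal of the recipe.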

Jun 2, 2024

Geoffrey Hinton

Posted in category: robotics/AI

Timestamps:
Early inspirations (00:00:00)
Meeting Ilya Sutskever (00:05:05)
Ilya's intuition (00:06:12)
Understanding of LLMs (00:09:00)
Scaling neural networks (00:15:15)
What is language? (00:18:30)
The GPU revolution (00:21:35)
Human Brain Insights (00:25:05)
Feelings & analogies (00:29:05)
Problem selection (00:32:58)
Gradient processing (00:35:21)
Ethical implications (00:36:52)
Selecting talent (00:40:15)
Developing intuition (00:41:49)
The road to AGI (00:43:50)
Proudest moment (00:45:00)
