
A novel architecture for optical neural networks uses wavefront shaping to precisely control the propagation of ultrashort pulses through multimode fibers, enabling nonlinear optical computation.

Present-day artificial intelligence systems rely on billions of adjustable parameters to accomplish complex objectives. Yet the vast quantity of these parameters comes at a significant cost. Training and deploying such extensive models demands memory and processing power available only in enormous data center facilities, which consume energy on par with the electrical demands of medium-sized cities. In response, researchers are reevaluating both the computing infrastructure and the machine learning algorithms so that artificial intelligence can continue to advance at its current pace sustainably.

Optical implementation of neural network architectures is a promising avenue because the connections between units can be realized at very low power. New research reported in Advanced Photonics combines light propagation inside multimode fibers with a small number of digitally programmable parameters, and achieves the same performance on image classification tasks as fully digital systems that use more than 100 times as many programmable parameters.
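As a rough illustration of this hybrid paradigm, the sketch below treats the fiber as a fixed, untrained nonlinear projection and trains only a small digital readout on top of it. This is a minimal stand-in written for this summary, not the authors' actual model; the random mixing matrix, the intensity-style nonlinearity, and the ridge-regression readout are all assumptions made for illustration.

```python
# Hypothetical sketch: a fixed nonlinear "optical" feature map followed by a
# small trainable digital readout. The random projection plus elementwise
# nonlinearity stands in for pulse propagation through a multimode fiber.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_modes, n_classes = 784, 2000, 10

# Fixed, untrained "fiber": random mode mixing (never adjusted during training).
W_fiber = rng.standard_normal((n_modes, n_pixels)) / np.sqrt(n_pixels)

def optical_features(x):
    """Stand-in for nonlinear propagation: linear mixing, then an intensity-like |.|^2."""
    return np.abs(W_fiber @ x) ** 2

def train_readout(X, y, reg=1e-3):
    """Train the small digital readout (the only trainable parameters) by ridge regression."""
    F = np.stack([optical_features(x) for x in X])   # (n_samples, n_modes) feature matrix
    Y = np.eye(n_classes)[y]                         # one-hot targets; y holds class indices
    A = F.T @ F + reg * np.eye(n_modes)
    return np.linalg.solve(A, F.T @ Y)               # readout weights, shape (n_modes, n_classes)

def predict(W_out, x):
    return int(np.argmax(optical_features(x) @ W_out))
```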

Without sufficiently comprehensive data, AI algorithms are more likely to produce an inaccurate or incomplete model. Insufficient data yields a model that cannot predict outcomes with the accuracy needed in the real world.

Anyone with experience in the art market also knows that markets can fluctuate without any indication as to why, and AI will not have the answer. Tech entrepreneur Boris Pevzner, founder of AI-powered data platform Live Art, asserts that while AI can be used as an indicator, it cannot predict real-world auction prices.

Although AI is becoming increasingly prevalent in the art business, it does not have to be seen as a threat. Rather than a replacement for human expertise, it should be seen as a tool to be used alongside humans to improve the quality of their work.

The rapid advancement of deep learning algorithms and generative models has enabled the automated production of increasingly striking AI-generated artistic content. Most of this AI-generated art, however, is created by algorithms and computational models, rather than by physical robots.

Researchers at Universidad Complutense de Madrid (UCM) and Universidad Carlos III de Madrid (UC3M) recently developed a deep learning-based model that allows a humanoid robot to sketch pictures, similarly to how a human artist would. Their paper, published in Cognitive Systems Research, offers a remarkable demonstration of how robots could actively engage in creative processes.

“Our idea was to propose a robot application that could attract the scientific community and the general public,” Raúl Fernandez-Fernandez, co-author of the paper, told Tech Xplore. “We thought about a task that could be shocking to see a robot performing, and that was how the concept of doing art with a humanoid robot came to us.”

Elon Musk is suing OpenAI and Sam Altman for allegedly abandoning OpenAI’s original mission to develop artificial intelligence to benefit humanity.

“OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawyers wrote in the lawsuit, which was filed late on Thursday in San Francisco.

“Under its new board, it is not just developing but is refining an AGI [Artificial General Intelligence] to maximize profits for Microsoft, rather than for the benefit of humanity,” claims the filing. “On information and belief, GPT-4 is an AGI algorithm.”

The most popular words of 2023 were recently released, with AI Large Language Model (LLM) unquestionably topping the list. As a front-runner, ChatGPT also emerged as one of the international buzzwords of the year. These disruptive innovations in AI owe much to big data, which has played a pivotal role. Yet, AI has simultaneously presented new opportunities and challenges to the development of big data.

High-capacity data storage is indispensable in today’s digital economy. However, major storage devices, including semiconductor flash devices, face limitations in terms of cost-effectiveness, durability, and longevity.

Optical data storage offers a promising green solution for cost-effective, long-term data storage. Nonetheless, it faces a fundamental limit on the spacing of adjacent recorded features, owing to the optical diffraction limit. This physical constraint not only impedes the further development of direct laser writing machines but also constrains optical storage technology more broadly.
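As a point of reference not taken from the article, the diffraction limit in question is conventionally quantified by the Abbe criterion,

d \approx \frac{\lambda}{2\,\mathrm{NA}},

where λ is the writing wavelength and NA is the numerical aperture of the focusing optics; for example, a 405 nm laser focused at NA = 1.4 gives a minimum feature spacing of roughly 145 nm.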

The purpose of this work is to investigate how several inflationary and bouncing scenarios can be realized by imperfect fluids. We shall use two different theoretical frameworks, namely classical cosmology and Loop Quantum Cosmology (LQC) (see the literature in which the LQC Hamiltonian was first derived to yield the modified Friedmann equation, as well as a more recent derivation of the effective LQC Hamiltonian obtained by demanding repulsive gravity, as in Loop Quantum Gravity). In both cases we shall investigate which imperfect fluid can realize various inflationary and bouncing cosmology scenarios. Inflationary and bouncing cosmologies are two alternative scenarios for the evolution of our Universe. In the case of inflation, the Universe starts from an initial singularity and accelerates at early times, while in the case of bouncing cosmology, the Universe initially contracts until it reaches a minimum radius and then expands again. With regard to inflation, we shall be interested in four inflationary scenarios, namely intermediate inflation, Starobinsky inflation, and two constant-roll inflation scenarios. With regard to bouncing cosmologies, we shall be interested in realizing several well-studied bouncing cosmologies, in particular the matter bounce scenario, the superbounce scenario, and the singular bounce.
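For orientation, the modified Friedmann equation referred to here takes, in its standard effective LQC form (a textbook result, not a derivation specific to this paper),

H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),

so the Hubble rate H vanishes when the energy density ρ reaches the critical density ρ_c, replacing the initial singularity with a bounce, while the classical Friedmann equation is recovered for ρ much smaller than ρ_c.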

As we already mentioned, we shall use two theoretical frameworks, that of classical cosmology and that of LQC. After presenting the reconstruction methods for realizing the various cosmologies with imperfect fluids, we proceed to the realization of the cosmologies using these methods. In the case of classical cosmology, we calculate the power spectrum of primordial curvature perturbations, the tensor-to-scalar ratio, and the running of the spectral index for all the aforementioned cosmologies, and we compare the results with the recent Planck data. The main outcome of our work is that, although the cosmological scenarios we study in this paper are viable in other modified gravity frameworks, they are not necessarily viable in all the alternative modified gravity descriptions. As we demonstrate, in some cases the resulting imperfect fluid cosmologies are not compatible with the observational data at all, and in other cases there is only partial compatibility.
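For context, the observables compared with the Planck data are defined in the standard way (these conventions are generic, not specific to this paper): the primordial scalar power spectrum is parameterized as

\mathcal{P}_\zeta(k) = A_s \left(\frac{k}{k_*}\right)^{\,n_s - 1 + \frac{\alpha_s}{2}\ln(k/k_*)},

with spectral index n_s, running \alpha_s = \mathrm{d}n_s/\mathrm{d}\ln k, and tensor-to-scalar ratio r = \mathcal{P}_T(k_*)/\mathcal{P}_\zeta(k_*), all evaluated at a pivot scale k_*.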

We need to note that the treatment of perturbations in LQC is not fully transparent and assumes that there are no non-trivial quantum gravitational modifications arising due to the presence of inhomogeneities. As has been shown in the literature, a consistent Hamiltonian framework does not allow this assumption to hold. The perturbation issues that may arise in the context of the present work are possibly more closely related to some early works in LQC, so any calculation of the primordial power spectrum should be addressed with the above caveat in mind.

Elon Musk claims OpenAI is using GPT-4 to ‘maximize profits’ instead of ‘for the benefit of humanity.’


The lawsuit claims that the GPT-4 model OpenAI released in March 2023 isn’t just capable of reasoning but is also actually “better at reasoning than average humans,” having scored in the 90th percentile on the Uniform Bar Examination for lawyers. The company is rumored to be developing a more advanced model, known as “Q Star,” that has a stronger claim to being true artificial general intelligence (AGI).

Altman was fired (and subsequently rehired five days later) by OpenAI in 2023 over vague claims that his communication with the board was “hindering its ability to exercise its responsibilities.” The lawsuit filed by Musk alleges that in the days following this event, Altman, Brockman, and Microsoft “exploited Microsoft’s significant leverage over OpenAI” to replace board members with handpicked alternatives more acceptable to Microsoft.

“The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI — and hence when it has developed an algorithm that is outside the scope of Microsoft’s license,” claims the lawsuit. The partnership between OpenAI and Microsoft is currently being examined by regulators in the UK, EU, and US to assess whether their relationship harms competition.

Popular Summary.

Unequivocally demonstrating that a quantum computer can significantly outperform any existing classical computer will be a milestone in quantum science and technology. Recently, groups at Google and at the University of Science and Technology of China (USTC) announced that they had achieved such quantum computational advantages. The central quantity of interest behind their claims is the linear cross-entropy benchmark (XEB), which has been claimed to approximate the fidelity of their quantum experiments and used to certify the correctness of their computation results. However, such claims rely on several assumptions, some of which are only implicit. Hence, it is critical to understand when and how XEB can be used for quantum advantage experiments. By combining various tools from computer science, statistical physics, and quantum information, we critically examine the properties of XEB and show that it bears several intrinsic vulnerabilities that limit its utility as a benchmark for quantum advantage.

Concretely, we introduce a novel framework to identify and exploit several vulnerabilities of XEB, leading to an efficient classical algorithm that attains XEB values comparable to those of Google’s and USTC’s quantum devices (2%–12% of theirs) with just one GPU within about 2 seconds. Furthermore, its performance scales better with system size than that of a noisy quantum device. We observe that this is possible because XEB can greatly overestimate the fidelity, which implies the existence of “shortcuts” to achieving high XEB values without simulating the system. This contrasts with the intuition that achieving high XEB values should be hard for all possible classical algorithms.
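For reference, the linear XEB itself is simple to state: for n qubits, F_XEB = 2^n * <p(x)> - 1, where the average of the ideal output probabilities p(x) is taken over the measured bitstrings. The short sketch below computes it on toy data; the exponential "ideal" distribution and the sample counts are placeholders invented for illustration, not values from the experiments discussed above.

```python
# Minimal sketch of the linear cross-entropy benchmark (XEB).
# `ideal_probs` would normally come from a classical simulation of the circuit;
# here it is a stand-in array.
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """
    ideal_probs: length-2**n_qubits array of ideal output probabilities p(x).
    samples: measured bitstrings encoded as integers in [0, 2**n_qubits).
    Returns F_XEB = 2**n * <p(x)>_samples - 1 (~1 for ideal sampling, ~0 for uniform noise).
    """
    mean_p = np.mean(ideal_probs[np.asarray(samples)])
    return (2 ** n_qubits) * mean_p - 1.0

# Toy usage with a made-up Porter-Thomas-like distribution on 10 qubits:
n = 10
rng = np.random.default_rng(1)
p = rng.exponential(size=2 ** n)
p /= p.sum()
ideal_samples = rng.choice(2 ** n, size=50_000, p=p)      # sampled from the ideal distribution
uniform_samples = rng.integers(0, 2 ** n, size=50_000)    # uninformative "noise" samples
print(linear_xeb(p, ideal_samples, n))    # close to 1
print(linear_xeb(p, uniform_samples, n))  # close to 0
```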

In a new study, scientists have leveraged a machine learning algorithm to tackle one of the biggest challenges facing cancer researchers: predicting when cancer will resist chemotherapy.


In what could be a game-changer, scientists at the University of California San Diego School of Medicine reported today in a study that a machine learning tool may be able to determine when a cancer will stop responding to chemotherapy.

Teaming up against cancer

When cells divide, even cancerous ones, they rely on complex molecular machinery to copy their DNA. Chemotherapy drugs usually work by disrupting this DNA-copying machinery, especially in fast-growing tumor cells.