
What came before the Big Bang? Supercomputers may hold the answer

Scientists are rethinking the universe’s deepest mysteries using numerical relativity, complex computer simulations of Einstein’s equations in extreme conditions. This method could help explore what happened before the Big Bang, test theories of cosmic inflation, investigate multiverse collisions, and even model cyclic universes that endlessly bounce through creation and destruction.

A new perspective on how cosmological correlations change based on kinematic parameters

To study the origin and evolution of the universe, physicists rely on theories that describe the statistical relationships between different events or fields in spacetime, broadly referred to as cosmological correlations. Kinematic parameters are essentially the data that specify a cosmological correlation—the positions of particles, or the wavenumbers of cosmological fluctuations.

Changes in cosmological correlations driven by variations in these parameters can be described using differential equations, a type of mathematical equation that relates a function (i.e., a relationship between an input and an output) to its rate of change. Such equations are used extensively in physics because they are well-suited to capturing the universe's highly dynamic nature.
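As a toy illustration only (not the equations from the paper), the snippet below tracks a made-up correlation G(k) that obeys the simple differential equation dG/dk = -2G/k in a kinematic parameter k, integrating it numerically and comparing against the known exact solution G(k) = G0·(k0/k)^2:

```python
# Toy sketch (not the paper's equations): a "correlation" G(k) obeying
#   dG/dk = -2 G / k,  with exact solution G(k) = G0 * (k0 / k)**2.
# Euler integration shows how a differential equation tracks the
# correlation's change as the kinematic parameter k varies smoothly.

def integrate_correlation(G0, k0, k1, steps=100_000):
    """Euler-integrate dG/dk = -2G/k from k0 to k1."""
    dk = (k1 - k0) / steps
    k, G = k0, G0
    for _ in range(steps):
        G += dk * (-2.0 * G / k)
        k += dk
    return G

G_numeric = integrate_correlation(1.0, 1.0, 2.0)
G_exact = 1.0 * (1.0 / 2.0) ** 2   # = 0.25
print(G_numeric, G_exact)
```

The numerical answer converges to the exact one as the step size shrinks, which is the basic trade the text describes: give up closed-form solutions, gain the ability to handle equations with no known exact answer.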

Researchers at Princeton’s Institute for Advanced Study, the Leung Center for Cosmology and Particle Astrophysics in Taipei, Caltech’s Walter Burke Institute for Theoretical Physics, the University of Chicago, and the Scuola Normale Superiore in Pisa recently introduced a new perspective to approach equations describing how cosmological correlations are affected by smooth changes in kinematic parameters.

Relativistic Motion Boosts Engine Efficiency Beyond Limits

The pursuit of more efficient engines continually pushes the boundaries of thermodynamics, and recent work demonstrates that relativistic effects may offer a surprising pathway to surpass conventional limits. Tanmoy Pandit, affiliated with the Leibniz Institute of Hannover and TU Berlin, together with Pritam Chattopadhyay from the Weizmann Institute of Science and colleagues, investigates a novel thermal machine that harnesses the principles of relativity to achieve efficiencies beyond those dictated by the Carnot cycle. Their research reveals that incorporating relativistic motion into the system, specifically the reshaping of energy spectra via the Doppler effect, makes it possible to extract useful work even without a temperature difference, effectively establishing relativistic motion as a valuable resource for energy conversion. This discovery not only challenges established thermodynamic boundaries but also opens exciting possibilities for designing future technologies that leverage the fundamental principles of relativity to enhance performance.


The appendices detail the Lindblad superoperator used to describe the system's dynamics and the transformation to a rotating frame that simplifies the analysis. They show how relativistic motion affects the average number of quanta in the reservoir and the superoperators, and present the detailed derivation of the steady-state density matrix elements for the three-level heat engine, providing equations for power output and efficiency. The supplementary material also describes the Monte Carlo method used to estimate the generalized Carnot-like efficiency bound in relativistic quantum thermal machines, providing pseudocode for the implementation and explaining how the bound is extracted from pairs of efficiency and power values.
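The Monte Carlo procedure can be sketched in outline: sample machine parameters at random, keep the (efficiency, power) pairs with positive power output, and take the maximum observed efficiency as an empirical estimate of the bound. The toy "engine" below is purely illustrative, not the paper's three-level maser model:

```python
import random

# Hypothetical sketch of the Monte Carlo idea described in the text.
# The toy engine reaches Carnot efficiency degraded by a random "leak";
# sampling many parameter draws and keeping the best positive-power
# efficiency approximates the attainable bound empirically.

def toy_engine(Th, Tc, leak):
    """Toy model: Carnot efficiency reduced by a loss term in (0, 1)."""
    eta = (1.0 - Tc / Th) * (1.0 - leak)
    power = eta * (1.0 - leak)      # crude stand-in for output power
    return eta, power

def estimate_bound(samples=20_000, seed=0):
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        Th = rng.uniform(2.0, 4.0)  # hot bath temperature (Tc fixed at 1)
        leak = rng.uniform(0.0, 1.0)
        eta, power = toy_engine(Th, 1.0, leak)
        if power > 0.0:
            best = max(best, eta)
    return best

print(estimate_bound())  # approaches the best Carnot value 1 - 1/4 = 0.75
```

With enough samples the estimate crowds up against the true bound from below, which is the same extraction-from-(efficiency, power)-pairs logic the supplementary material describes, applied here to a deliberately simple model.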

Relativistic Motion Boosts Heat Engine Efficiency

Researchers have demonstrated that relativistic motion can function as a genuine thermodynamic resource, enabling a heat engine to surpass the conventional limits of efficiency. The team investigated a three-level maser, where thermal reservoirs are in constant relativistic motion relative to the working medium, using a model that accurately captures the effects of relativistic motion on energy transfer. The results reveal that the engine’s performance is not solely dictated by temperature differences, but is significantly influenced by the velocity of the thermal reservoirs. Specifically, the engine can operate with greater efficiency than predicted by the Carnot limit, due to the reshaping of the energy spectrum caused by relativistic motion.
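One way to see how spectral reshaping can lift the bound, under the simplifying assumption (ours, not necessarily the paper's exact derivation) that motion rescales a reservoir's thermal spectrum by the longitudinal Doppler factor D = sqrt((1 + β)/(1 − β)), is that the hot bath acquires an effective temperature D·Th, raising the Carnot-like limit above the static one:

```python
import math

# Illustrative assumption: relativistic motion multiplies the hot bath's
# effective temperature by the longitudinal Doppler factor. The resulting
# Carnot-like bound then exceeds the static Carnot bound, and is nonzero
# even when the two baths share the same rest-frame temperature.

def doppler_factor(beta):
    """Longitudinal relativistic Doppler factor for speed beta = v/c."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

def static_carnot(Th, Tc):
    return 1.0 - Tc / Th

def boosted_bound(Th, Tc, beta):
    return 1.0 - Tc / (doppler_factor(beta) * Th)

Th, Tc, beta = 2.0, 1.0, 0.6
print(static_carnot(Th, Tc))        # 0.5
print(boosted_bound(Th, Tc, beta))  # D = sqrt(1.6/0.4) = 2, so bound = 0.75
print(boosted_bound(1.0, 1.0, beta))  # equal temperatures, yet bound > 0
```

The last line captures the headline claim in miniature: with equal bath temperatures the static Carnot efficiency is zero, but a velocity-reshaped spectrum still leaves room to extract work.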

Grok answers my questions about what Elon meant when he said Tesla FSD v14 will seem sentient

Questions to inspire discussion.

Advanced Navigation and Obstacle Recognition.

🛣️ Q: How will FSD v14 handle unique driveway features? A: The improved neural net and higher resolution video processing will help FSD v14 better recognize and navigate features like speed bumps and humps, adjusting speed and steering smoothly based on their shape and height.

🚧 Q: What improvements are expected in distinguishing real obstacles? A: Enhanced object detection, driven by improved algorithms and higher-resolution video inputs, will make FSD v14 better at distinguishing real obstacles from false positives such as tire marks, avoiding abrupt braking and overreaction.

Edge Case Handling and Smooth Operation.

🧩 Q: How will FSD v14 handle complex edge cases? A: The massive jump in parameter count and better video compression will help the AI better understand edge cases, allowing it to reason that non-threatening objects like a stationary hatch in the road aren’t obstacles, maintaining smooth cruising.

What happened before the Big Bang? Computational method may provide answers

We’re often told it is “unscientific” or “meaningless” to ask what happened before the Big Bang. But a new paper by FQxI cosmologist Eugene Lim, of King’s College London, UK, and astrophysicists Katy Clough, of Queen Mary University of London, UK, and Josu Aurrekoetxea, at Oxford University, UK, published in Living Reviews in Relativity, proposes a way forward: using complex computer simulations to numerically (rather than exactly) solve Einstein’s equations for gravity in extreme situations.
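To give a flavor of what "numerically rather than exactly" means here: numerical relativity discretizes spacetime on a grid and steps the field equations forward in time. Einstein's equations are vastly more complicated, so the sketch below evolves a simple 1D wave equation u_tt = u_xx by finite differences, purely as an illustration of the method's spirit:

```python
import math

# Flavor-only sketch: step a relativistic field equation forward on a grid.
# A Gaussian pulse obeying u_tt = u_xx is evolved with the standard
# leapfrog scheme; dt/dx = 0.5 satisfies the CFL stability condition.

N, dx = 200, 0.05
dt = 0.5 * dx
x = [i * dx for i in range(N)]
u_prev = [math.exp(-4.0 * (xi - 5.0) ** 2) for xi in x]  # Gaussian pulse
u = u_prev[:]                                            # starts at rest

for _ in range(400):
    u_next = [0.0] * N                    # fixed (zero) boundaries
    for i in range(1, N - 1):
        lap = u[i - 1] - 2.0 * u[i] + u[i + 1]
        u_next[i] = 2.0 * u[i] - u_prev[i] + (dt / dx) ** 2 * lap
    u_prev, u = u, u_next

amp = max(abs(v) for v in u)
print(amp)   # pulse has split, propagated, and reflected; amplitude stays bounded
```

No closed-form bookkeeping is needed: the computer simply marches the solution forward, which is exactly the trade that lets numerical relativity reach regimes (mergers, bounces, pre-inflationary dynamics) where exact solutions of Einstein's equations are unavailable.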

Ultrathin metasurface enables high-efficiency vectorial holography

Holography—the science of recording and reconstructing light fields—has long been central to imaging, data storage, and encryption. Traditional holographic systems, however, rely on bulky optical setups and interference experiments, making them impractical for compact or integrated devices. Computational methods such as the Gerchberg–Saxton (GS) algorithm have simplified hologram design by eliminating the need for physical interference patterns, but these approaches typically produce scalar holograms with uniform polarization, limiting the amount of information that can be encoded.
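The Gerchberg–Saxton algorithm mentioned above is a short iterative loop: bounce between the hologram plane and the far field, keeping the computed phase at each step while imposing the known amplitude constraint. Here is a minimal textbook version (metasurface-specific design details differ), using a Fourier transform as the propagation model:

```python
import numpy as np

# Minimal Gerchberg-Saxton sketch: find a phase-only hologram whose far
# field (Fourier transform) approximates a target intensity pattern.

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))           # propagate to far field
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)                        # propagate back
        phase = np.angle(near)                          # keep phase only
    return phase

# Target: a single bright off-axis spot on a dark background.
target = np.zeros((32, 32))
target[8, 8] = 1.0
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
recon /= recon.max()
print(np.unravel_index(recon.argmax(), recon.shape))  # brightest pixel at (8, 8)
```

This is also where the limitation noted in the text shows up: the loop optimizes a single scalar phase pattern, with no polarization degree of freedom, which is precisely what vectorial metasurface holography adds.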

A Wearable Robot That Learns

Having lived with an ALS diagnosis since 2018, Kate Nycz can tell you firsthand what it’s like to slowly lose motor function for basic tasks. “My arm can get to maybe 90 degrees, but then it fatigues and falls,” the 39-year-old said. “To eat or do a repetitive motion with my right hand, which was my dominant hand, is difficult. I’ve mainly become left-handed.”

People like Nycz who live with a neurodegenerative disease like ALS or who have had a stroke often suffer from impaired movement of the shoulder, arm or hands, preventing them from performing daily tasks like tooth-brushing, hair-combing or eating.

For the last several years, Harvard bioengineers have been developing a soft, wearable robot that not only provides movement assistance for such individuals but could even augment therapies to help them regain mobility.

But no two people move exactly the same way. Physical motions are highly individualized, especially for the mobility-impaired, making it difficult to design a device that works for many different people.

It turns out advances in machine learning can create a more personal touch. Researchers in the John A. Paulson School of Engineering and Applied Sciences (SEAS), together with physician-scientists at Massachusetts General Hospital and Harvard Medical School, have upgraded their wearable robot to be responsive to an individual user’s exact movements, endowing the device with more personalized assistance that could give users better, more controlled support for daily tasks.


ChatGPT in Your Clinic: Who’s the Expert Now

Patients arriving at appointments with researched information is not new, but artificial intelligence (AI) tools such as ChatGPT are changing the dynamics.

The confident presentation of such AI-generated information can leave physicians feeling that their expertise is being challenged. Kumara Raja Sundar, MD, a family medicine physician at Kaiser Permanente Burien Medical Center in Burien, Washington, highlighted this trend in a recent article published in JAMA.

A patient visited Sundar’s clinic reporting dizziness and described her symptoms with unusual precision: “It’s not vertigo, but more like a presyncope feeling.” She then suggested that the tilt table test might be useful for diagnosis.

Occasionally, patient questions reveal subtle familiarity with medical jargon. This may indicate that they either have relevant training or have studied the subject extensively.

("Artificial intelligence is the science of making machines do things that would require intelligence if done by men" — Marvin Minsky. Google helps you gain information with a search engine; AI helps you gain information through algorithms. It is the same thing. However, people profit from ignorance.)


Patients are showing up with ChatGPT-generated diagnoses, challenging physicians to balance empathy, evidence, and authority in the exam room.

Johns Hopkins APL Takes a Quantum Approach to Tracking Online Trends

Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, have demonstrated that a quantum algorithm can be used to speed up an information analysis task that classical computers struggle to perform.

The innovation tackles a key element of information operations: tracking and attributing topics and narratives as they emerge and evolve online, which can help analysts spot indications of potential terrorist acts, for example. This involves using computers to perform what’s known as semantic text similarity analysis, or comparing the similarities within a textual dataset — not just the similarity of the words, but the meaning behind them, which makes it possible to identify related texts even if they don’t share any common keywords.
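Classically, semantic text similarity works by mapping texts to vectors and comparing the vectors rather than the words, so related texts score as similar even with no shared keywords. The toy below uses tiny hand-made word vectors (invented purely for illustration; real systems use learned embeddings, and APL's contribution is accelerating the comparison step with a quantum algorithm):

```python
import math

# Toy semantic similarity: average made-up word vectors, compare by cosine.
# The three dimensions loosely encode "violence / conflict / weather";
# the vectors and vocabulary are fabricated for this illustration only.

WORD_VECS = {
    "attack":  [0.9, 0.1, 0.0],
    "assault": [0.8, 0.2, 0.0],
    "bomb":    [0.7, 0.3, 0.1],
    "weather": [0.0, 0.1, 0.9],
    "sunny":   [0.1, 0.0, 0.8],
}

def embed(text):
    """Average the vectors of known words; zero vector if none are known."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "assault" shares no keyword with "bomb attack", yet scores far closer
# to it than to "sunny weather" -- similarity of meaning, not of words.
s_related = cosine(embed("assault"), embed("bomb attack"))
s_unrelated = cosine(embed("assault"), embed("sunny weather"))
print(s_related, s_unrelated)
```

The pain point Holden describes is that doing this comparison across every pair in a huge, fast-growing corpus scales badly, which is the step the quantum algorithm targets.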

“The amount of open-source text data online — on social media platforms especially — is growing dramatically, and our ability to analyze all of that data has not kept pace with our ability to collect it,” said Roxy Holden, a mathematician at APL and principal investigator of this effort. “Intelligence analysts have limited resources, so finding better ways to automate this kind of analysis is critical for the military and the intelligence community.”


APL researchers have demonstrated that a quantum algorithm can be used to speed up an information analysis task that classical computers struggle to perform.

This “smart coach” helps LLMs switch between text and code

Large language models (LLMs) excel at using textual reasoning to understand the context of a document and provide a logical answer about its contents. But these same LLMs often struggle to correctly answer even the simplest math problems.

Textual reasoning is usually a less-than-ideal way to deliberate over computational or algorithmic tasks. While some LLMs can generate code like Python to handle symbolic queries, the models don’t always know when to use code, or what kind of code would work best.

LLMs, it seems, may need a coach to steer them toward the best technique.

Enter CodeSteer, a smart assistant developed by MIT researchers that guides an LLM to switch between code and text generation until it correctly answers a query. (Strangely like a text editor “CodeSteer”🤔)


CodeSteer is a smart assistant from MIT that automatically guides a large language model to switch between generating text and code, refining its response until it answers a query correctly.
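The steering idea can be sketched as a small control loop. The mock "LLM" functions below are stand-ins invented for illustration, not MIT's actual system: try a text answer first, check it, and switch the model into code generation (executing the result) when the text route fails:

```python
# Hypothetical CodeSteer-style loop. text_answer and code_answer are mock
# stand-ins for LLM calls; the controller's job is only to decide which
# mode to use and to verify the answer before returning it.

def text_answer(query):
    """Mock LLM text reasoning: fine for lookups, unreliable for math."""
    return "42" if "meaning of life" in query else "not sure"

def code_answer(query):
    """Mock LLM code generation: turn an arithmetic query into Python."""
    expr = query.replace("what is", "").strip(" ?")
    return f"result = {expr}"

def looks_valid(answer):
    """Crude check: accept only numeric-looking answers."""
    return answer.replace(".", "", 1).isdigit()

def steer(query, max_rounds=3):
    answer = text_answer(query)          # round 1: textual reasoning
    for _ in range(max_rounds):
        if looks_valid(answer):
            return answer
        scope = {}
        exec(code_answer(query), scope)  # switch modes; sandboxing omitted
        answer = str(scope.get("result"))
    return answer

print(steer("what is 17 * 3 ?"))            # code path succeeds
print(steer("what is the meaning of life?"))  # text path suffices
```

A real steering model replaces the hard-coded heuristics with a trained checker and prompt rewriting, but the loop structure (generate, verify, switch modes, retry) is the core of the approach described above.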
