
#survival #coronavirus
In light of recent events, it's a good opportunity to go over the basics of pandemic preparedness.

*Correction* I need to correct information provided within this video. The case-fatality rate of the Spanish influenza is often quoted by virologists as 2.5%, but the math on this doesn't add up: the population of the planet at the time doesn't align with that statistic. The figure is misinterpreted to mean the overall case-fatality rate was greater than 2.5%. A safer mortality estimate is presumed to be between 7.5% and 15% at the pandemic's peak wave. The correct statistic is that 2.5%–5% of the WORLD'S population perished as a result. It should be noted that the pandemic came in several waves, hence the greater-than-2.5% figure: the first wave was relatively tame, the second wave was devastating, and the third wave was less severe. https://www.cdc.gov/flu/pandemic-resources/1918-commemoration/three-waves.htm

Similar videos
Hazmat suit

Biohazard bags


Get biohazard bags here
https://amzn.to/30Sx6ab

Biohazard Stickers
https://amzn.to/36pmtgj

The Urban prepper containment video

Complex cognitive dissonance disorder guaranteed. 😬.


Garrett Lisi, the so-called "Surf Bum with a Theory of Everything (or T.O.E.)", is a PhD theoretical physicist who has refused to be captured by the theoretical physics community. By making shrewd investments, he has avoided holding meaningful employment for his entire adult life. Instead, he lives in Maui and travels the world chasing the perfect wave.

In this episode, Garrett and Eric sit down to discuss the current status of Garrett's ideas for a final theory based on a mysterious object called E8, perhaps the oddest of mathematical symmetries to be found in the universe. Garrett and Eric have held each other in mutual "contempt" for over a decade. By vacationing together and staying in each other's homes, they had hoped to hone and deepen their mutual disgust for each other's ideas. However, as the theoretical physics community moved away from actually trying to unify our incompatible models of the physical world, it became intellectually unmoored and drifted toward a culture of performative Cargo Cult Physics. The antagonists were thus forced by necessity to develop a begrudging admiration for each other's iconoclasm and unwillingness to give up on the original dream of Einstein to unify and understand our world.

The discussion is rough but a fairly accurate depiction of scientific relationships belonging to a type that is generally not shown to the public. This may be uncomfortable for those who have been habituated to NOVA, The Elegant Universe, or other shows produced for mass consumption. We apologize in advance.


A team of researchers, two with the French Atomic Energy Commission (AEC) and a third with the Soleil synchrotron, have found evidence of a phase change for hydrogen at a pressure of 425 gigapascals. In their paper published in the journal Nature, Paul Loubeyre, Florent Occelli and Paul Dumas describe testing hydrogen at such a high pressure and what they learned from it.

Researchers long ago theorized that if hydrogen gas were exposed to enough pressure, it would transition into a metal, but the theories could not pin down how much pressure would be required. Doubts about the theories began to arise when scientists developed tools capable of exerting the high pressures that were believed necessary to squeeze hydrogen into a metal, yet the compressed hydrogen remained non-metallic. Theorists simply moved the number higher.

In the past several years, however, theorists have come to a consensus—their math showed that hydrogen should transition at approximately 425 gigapascals—but a way to generate that much pressure did not exist. Then, last year, a team at the AEC improved on the diamond anvil cell, which for years has been used to create intense pressure in experiments. In a diamond anvil cell, two opposing diamonds are used to compress a sample between highly polished tips—the pressure generated is typically measured using a reference material. With the new design, called a toroidal diamond anvil cell, the tip was made into a donut shape with a grooved dome. When in use, the dome deforms but does not break at high pressures. With the new design, the researchers were able to exert pressures up to 600 GPa. That still left the problem of how to test a sample of hydrogen as it was being squeezed.

There’s nothing quite like a maddening math problem, mind-bending optical illusion, or twisty logic puzzle to halt all productivity in the Popular Mechanics office. We’re curious people by nature, but we also collectively share a stubborn insistence that we’re right, dammit, and so we tend to throw work by the wayside whenever we come upon a problem with several seemingly possible solutions.

This triangle brain teaser isn’t new—shoutout to Popsugar for unearthing it a couple years ago—but based on some shady Internet magic, the tweet below reappeared in my feed today and kick-started a new debate on our staff-wide Slack channel, a place traditionally reserved for workshopping ideas, but instead mostly used for yelling about other stuff that we occasionally turn into content.

Not everything is knowable. In a world where it seems like artificial intelligence and machine learning can figure out just about anything, that might seem like heresy – but it’s true.

At least, that’s the case according to a new international study by a team of mathematicians and AI researchers, who discovered that despite the seemingly boundless potential of machine learning, even the cleverest algorithms are nonetheless bound by the constraints of mathematics.

“The advantages of mathematics, however, sometimes come with a cost… in a nutshell… not everything is provable,” the researchers, led by first author and computer scientist Shai Ben-David from the University of Waterloo, write in their paper.

Isaac Newton and other premodern physicists saw space and time as separate, absolute entities — the rigid backdrops against which we move. On the surface, this made the mathematics behind Newton’s 1687 laws of motion look simple. He defined the relationship between force, mass and acceleration, for example, as $latex \vec{F} = m \vec{a}$.

In contrast, when Albert Einstein revealed that space and time are not absolute but relative, the math seemed to get harder. Force, in relativistic terms, is defined by the equation $latex \vec{F} = \gamma(\vec{v})^{3} m_{0}\, \vec{a}_{\parallel} + \gamma(\vec{v})\, m_{0}\, \vec{a}_{\perp}$.

But in a deeper sense, in the ways that truly matter to our fundamental understanding of the universe, Einstein’s theory represented a major simplification of the underlying math.
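The contrast between the two force laws above can be made concrete in a few lines of code. This is a minimal sketch, assuming units where c = 1; the function names (`newtonian_force`, `relativistic_force`) are hypothetical, chosen here for illustration.

```python
import math

def newtonian_force(m, a):
    """Newton's second law, F = m a, applied componentwise."""
    return [m * ai for ai in a]

def relativistic_force(m0, v, a, c=1.0):
    """Relativistic force, splitting the acceleration into components
    parallel and perpendicular to the velocity:
        F = gamma(v)^3 m0 a_par + gamma(v) m0 a_perp
    Uses units where c defaults to 1."""
    speed2 = sum(vi * vi for vi in v)
    if speed2 == 0.0:
        return newtonian_force(m0, a)  # at rest, reduces to Newton's law
    gamma = 1.0 / math.sqrt(1.0 - speed2 / c**2)
    # project a onto the direction of v to get the parallel component
    dot = sum(ai * vi for ai, vi in zip(a, v))
    a_par = [dot / speed2 * vi for vi in v]
    a_perp = [ai - pi for ai, pi in zip(a, a_par)]
    return [gamma**3 * m0 * p + gamma * m0 * q
            for p, q in zip(a_par, a_perp)]
```

At 60% of light speed, gamma is 1.25, so a unit acceleration along the motion costs gamma cubed (about 1.95) times the Newtonian force, while the same acceleration sideways costs only gamma times as much — the asymmetry the parallel/perpendicular split encodes.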

The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder much more computational responsibility than was once imagined.

The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.

“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.

The discovery marks a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.

The Limitations of Dumb Neurons

In the 1940s and '50s, a picture began to dominate neuroscience: that of the "dumb" neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons — some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron would fire a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons.
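That weight-sum-threshold picture fits in a few lines. This is a toy sketch of the historical model, not biological simulation code; the function name `point_neuron` and the particular weights are illustrative assumptions.

```python
def point_neuron(inputs, weights, threshold):
    """The 'dumb' neuron of the 1940s-50s picture: weight every
    incoming signal (positive weight = excitatory, negative =
    inhibitory), sum them, and fire only if the total crosses
    the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# three excitatory inputs outweigh one inhibitory input, so it fires
fires = point_neuron([1, 1, 1, 1], [0.5, 0.5, 0.5, -0.4], threshold=1.0)
```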

At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.
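The logic-gate view, and the limit that makes the dendritic XOR finding notable, can both be sketched with the same threshold unit. The AND gate works directly; for XOR, the formal result is that no single linear threshold unit can compute it (XOR's outputs are not linearly separable), and the small integer grid search below is just an illustrative way to see that — a hedged sketch, with `threshold_neuron` and `single_unit_can_do_xor` as hypothetical names.

```python
import itertools

def threshold_neuron(x, y, w1, w2, t):
    """Two-input threshold unit: fire iff w1*x + w2*y >= t."""
    return 1 if w1 * x + w2 * y >= t else 0

# AND gate: with weights (1, 1) and threshold 2, the unit fires
# only when both inputs arrive.
assert all(threshold_neuron(x, y, 1, 1, 2) == (x and y)
           for x in (0, 1) for y in (0, 1))

def single_unit_can_do_xor(grid=range(-3, 4)):
    """Search every integer weight/threshold combination in the grid
    for one that reproduces XOR. None exists: any unit that fires on
    (0,1) and (1,0) but not (0,0) must have w1 >= t and w2 >= t with
    t > 0, so it necessarily also fires on (1,1)."""
    target = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return any(
        all(threshold_neuron(x, y, w1, w2, t) == out
            for (x, y), out in target.items())
        for w1, w2, t in itertools.product(grid, repeat=3))
```

The search comes back empty, which is the single-neuron limitation that made the Science paper's dendritic XOR result surprising.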