Iakov Sumygin built that browser-based FPS with PlayCanvas, an open source engine. Resources like this strengthen Schindelar’s case, particularly since the engine just introduced SplatTransform 2.0, a tool that offers “fully automated, lightning-fast generation of high-quality collision for your splats.” Without a collision mesh, players would simply phase through the environment, so this is yet another option that streamlines the pipeline from scan to interactive asset.
“Gaussian Splatting training—meaning the reconstruction process after capture—can reproduce real-world appearance in ways that traditional scanning methods struggle with or cannot handle properly,” he tells me. “We can now capture and represent things like hair, semi-transparency, translucency, subsurface scattering, fine foliage, and other complex visual phenomena that are extremely difficult to reconstruct as clean geometry with traditional texture workflows.”
“This direct connection between captured real-world data and a production-ready, real-time representation is what makes Gaussian Splatting so interesting,” Schindelar says. “It is not just a rendering trick—it changes the entire capture-to-delivery pipeline.”
This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson.
Von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan Project in addition to inventing zillions of other things.
I’m posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past’s intellectuals thought of issues related to transformative technology, and how their perspective differs from or resembles ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed).
The threat is to the librarian. The threat is to the small, vanishing population of people who still go into the hexagons. Who still pull a book from the shelf. Who still spend three days reading it. Who still close it and feel changed. That practice is not a hobby. It is a technology. One older than print, older than the codex, possibly older than writing. It is a process of assembly inside one human skull. The kind of patient, sequential, focused and embodied attention that produces what we used to call understanding. AI does not produce that attention. AI produces a feeling that closely resembles attention while being something else, the way saccharin produces a feeling that closely resembles sweetness while being something else.
If this practice disappears, the Library will not notice. The books will not notice. The infinite hexagons will continue to extend in every direction. There will be no one in them. There will only be the queries, falling into the air, decaying into training data, generating fresh continuations for an audience that no longer reads them. Only, occasionally, glances at a summary.
This is the message Borges was telegraphing. This is what he saw, sitting in the National Library of Argentina, going slowly blind, surrounded by more books than any one man could read. He saw that the deepest threat to a literary culture was not the burning of books. It was the rendering of books unnecessary. He saw that a Library of Babel which contained every possible answer was, paradoxically, the most efficient instrument ever conceived for ending the practice of reading. And he saw, finally, that the only response available to a serious person was the response his narrator chose. To stop searching for the catalogue of catalogues. To return to one’s own hexagon. To pick up one particular book. To read it slowly. To die, eventually, a few leagues from where one was born, with one’s body falling through the fathomless air.
AI consciousness, its possibility or probability, has burst into public debate, raising issues ranging from AI ethics and rights to AI going rogue and harming humanity. We explore diverse views; we argue that whether AI can be conscious depends on which theory of consciousness is correct.
Iain McGilchrist FRSA is a British psychiatrist, philosopher and neuroscientist who wrote the 2009 book The Master and His Emissary: The Divided Brain and the Making of the Western World.
Closer To Truth, hosted by Robert Lawrence Kuhn and directed by Peter Getzels, presents the world’s greatest thinkers exploring humanity’s deepest questions. Discover fundamental issues of existence. Engage new and diverse ways of thinking. Appreciate intense debates. Share your own opinions. Seek your own answers.
Further Reading
Brain implants revive cognitive abilities long after traumatic brain injury
https://med.stanford.edu/news/all-new…
Brain implants revive cognitive abilities long after traumatic brain injury
https://www.sciencedirect.com/science…
Neural co-processors for restoring brain function: results from a cortical model of grasping
https://iopscience.iop.org/article/10…
Brain–computer interfaces: the innovative key to unlocking neurological conditions
https://pmc.ncbi.nlm.nih.gov/articles…
MindPilot: Closed-loop Visual Stimulation Optimization for Brain Modulation with EEG-guided Diffusion
https://arxiv.org/abs/2602.
Advancing brain-computer interfaces with generative AI: A review of state-of-the-art and future outlook.
Mathematics is All You Need 2: Sign-Stabilized Behavioral Fibers in Transformer Residual Streams

This volume presents a pre-registered empirical investigation of the residual-stream geometry of frozen transformer language models, anchored by a four-test decision sprint executed on 2026/05/09 and a six-experiment tier-0 lockdown battery, with full reproducibility manifest.

Empirical findings. Cross-architecture transfer of behavioral readouts from Qwen-2.5-7B-Instruct to Hermes-3-Llama-3.1-8B yields mean AUC retention of 0.749 across 75 probe-layer pairs over 10 seeds (BCa bootstrap 95% CI [0.7466, 0.7577] from 10,000 resamples; permutation test, 10,000 permutations, p < 10⁻⁴; significance survives Bonferroni correction at α = 0.05). Causal steering of the target architecture using a probe direction trained on the source architecture produces strictly monotonic probe-output deflection on 29 of 29 held-out prompts (median Spearman ρ = 1.000, intervention range α ∈ [−3, +3]). Gauge-flexibility of the underlying low-rank substrate is established at high statistical power: 100 random orthogonal rotations of the projection basis produce retention standard deviation σ = 0.0096. The intrinsic dimension of the behavioral substrate is shown to be 1–4 for the majority of behavioral traits tested, with single-direction (r = 1) retention of 0.897. The angle between the rank-1 output highway direction and the centroid of trained probe directions at proportional depth is measured as 85.59° on Qwen-2.5-7B-Instruct at layer 13, independently reproducing a prior internal measurement of 85.5° to within 0.1°.

Theoretical synthesis. The Two-Channel theorem: the residual stream of a frozen transformer admits a decomposition into a high-variance rank-1-dominant output channel read by the unembedding head and a low-rank near-orthogonal behavioral channel supporting both readout and causal cross-architecture steering. The architecture-invariant object is established empirically as the sign-stabilized SVD subspace itself rather than any specific basis within it; the canonical-basis specificity hypothesis is formally rejected by pre-registered ablation (T2).

Convergence with prior work. The geometric near-orthogonality result provides a measurement-side mechanism complementary to the training-side finding of Huang, LeCun & Balestriero (LLM-JEPA, arXiv:2509.14252, 2025) that embedding-space training objectives improve LLM performance without altering generative capabilities. The two results describe the same underlying functional separability of latent structure and generation in transformer residual streams via independent methodologies.

Scope and limitations. The empirical foundation is restricted to a single source–target architecture pair (Qwen-2.5-7B-Instruct → Hermes-3-Llama-3.1-8B), both decoder-only instruction-tuned transformers in the 7–8B parameter class. The headline T4 causal steering result is on one probe (language_id) at one layer pair (qL13 → hL15). Cross-family extension (Mistral, Phi, Gemma, Yi, Llama variants), multi-probe causal steering benchmarks, full d-model space angle measurement, and the PLATINUM-probe leakage audit are queued for the cluster reproduction sprint as a 15-pipeline validation matrix. Several claims from the prior volume Mathematics is All You Need (Napolitano 2026) are explicitly retracted or demoted to conjecture in Part VI of this work.

Compute and reproducibility. Total wall time for the empirical foundation: approximately 9 hours on a single NVIDIA RTX 5090. Reproducibility manifest, replication recipes, and full numerical results are included as appendices.

Keywords. Mechanistic interpretability; representation engineering; activation steering; cross-architecture transfer; linear representation hypothesis; transformer residual stream; behavioral probes; gauge invariance; pre-registered evaluation; Joint Embedding Predictive Architectures.

Models and datasets used. Qwen-2.5-7B-Instruct; Hermes-3-Llama-3.1-8B. Datasets: HumanEval, MBPP, MATH, GSM8K, ProofNet, WritingPrompts, ROC stories, Wikipedia.

Companion volume. Integrates and supersedes the unreleased internal report CYGNUS 2: Information Field Theory and the Geometry of Machine Consciousness (April 2026), included as Part II.

Access. Distribution prior to the public-release date is restricted to identified academic reviewers and partner research labs under signed NDA. Public release is scheduled for 30 days after the priority date of associated U.S. provisional patent applications. Source code, model weights, cached residuals, and intermediate artifacts are proprietary property of Proprioceptive AI, Inc.

License. Text under CC-BY 4.0; source code and artifacts proprietary.

ORCID. 0009-0000-1927-8537
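The abstract reports a BCa bootstrap confidence interval over per-pair AUC retention values. As a rough illustration of the resampling idea only, here is a plain percentile bootstrap (the simplest variant; BCa additionally corrects for bias and skew). The retention values below are invented stand-ins for the 75 probe-layer pairs, not the paper's data.

```python
import random
import statistics

def percentile_bootstrap_ci(values, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of `values`.

    Illustrative sketch: resample with replacement, recompute the mean
    each time, and take the empirical alpha/2 and 1 - alpha/2 quantiles.
    """
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        statistics.fmean(rng.choices(values, k=n)) for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-pair retentions standing in for the 75 probe-layer pairs.
retentions = [0.70 + 0.001 * i for i in range(75)]
lo, hi = percentile_bootstrap_ci(retentions)
```

With 10,000 resamples the interval stabilizes to a couple of decimal places; a BCa interval, as used in the paper, would shift these endpoints slightly to account for the skew of the resampled distribution.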
Further Reading
Large Language Models Inference Engines based on Spiking Neural Networks
https://arxiv.org/html/2510.00133v1
CL1_LLM_Encoder
https://github.com/4R7I5T/CL1_LLM_Enc…
Organoid Intelligence: The Dawn of Living AI
/ organoid-intelligence-the-dawn-of-living-ai.
New 3D device harnesses living brain cells for computing
https://bioengineering.princeton.edu/.…
US scientists merge 70,000 live neurons with electronics in hybrid brain chip
https://interestingengineering.com/in…
He discusses what’s changed in the year since he coined “vibe coding,” explains why he’s never felt more behind as a programmer, why agentic engineering is the more serious discipline taking shape on top of vibe coding, and why we should think of LLMs not as animals but as ghosts: jagged, statistical, summoned entities that require a new kind of taste and judgment to direct. He also touches on Software 3.0, the limits of verifiability, and why you can outsource your thinking but never your understanding.