Blog

Page 418

Jul 21, 2024

Blood protein assessment of leading incident diseases and mortality in the UK Biobank

Posted by in categories: biotech/medical, life extension

Identifying individuals who are at a high risk of age-related morbidities may aid in personalized medicine. Circulating proteins can discriminate disease cases from controls and delineate the risk of incident diagnoses [1–8]. While singular protein markers offer insight into the mediators of disease [5,9–11], simultaneously harnessing multiple proteins may improve clinical utility [12]. Clinically available non-omics scores such as QRISK typically profile the 10-year onset risk of a disease [13]. Proteomic scores have recently been trained on diabetes, cardiovascular and lifestyle traits as outcomes in 16,894 individuals [14]. Proteomic and metabolomic scores have also been developed for time-to-event outcomes, including all-cause mortality [6,15–21].

Here, we demonstrate how large-scale proteomic sampling can identify candidate protein targets and facilitate the prediction of leading age-related incident outcomes in mid to later life (see the study design summary in Extended Data Fig. 1). We used 1,468 Olink plasma protein measurements in 47,600 individuals (aged 40–70 years) available as part of the UK Biobank Pharma Proteomics Project (UKB-PPP) [22]. Cox proportional hazards (PH) models were used to characterize associations between each protein and 24 incident outcomes, ascertained through electronic health data linkage. Next, the dataset was randomly split into training and testing subsets to train proteomic scores (ProteinScores) and assess their utility for modeling either the 5- or 10-year onset of the 19 incident outcomes that had a minimum of 150 cases available. We modeled ProteinScores alongside clinical biomarkers, polygenic risk scores (PRS) and metabolomics measures to investigate how these markers may be used to augment risk stratification.
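To make the first analysis step concrete, the per-protein association with an incident outcome can be approximated at β = 0 by the Cox partial-likelihood score test: at each event time, compare the protein level of the subject who had the event against the mean (and variance) of protein levels across everyone still at risk. The sketch below is an illustrative from-scratch implementation in plain Python, assuming no tied event times, no covariates and complete data; it is not the UKB-PPP pipeline, which fits full Cox PH models.

```python
import math

def cox_score_test(x, time, event):
    """Score test for a single covariate in a Cox PH model at beta = 0.

    For each observed event, the subject's covariate is compared with the
    mean of the covariate over the risk set (all subjects still
    event-free at that time). Assumes no tied event times.
    Returns (U, V, z): the score, its variance and the z-statistic.
    """
    order = sorted(range(len(time)), key=lambda i: time[i])
    U, V = 0.0, 0.0
    for k, i in enumerate(order):
        if not event[i]:
            continue  # censored subjects contribute only to risk sets
        risk = [x[j] for j in order[k:]]  # still at risk at time[i]
        n = len(risk)
        mean = sum(risk) / n
        var = sum((v - mean) ** 2 for v in risk) / n
        U += x[i] - mean  # observed minus expected covariate at this event
        V += var
    z = U / math.sqrt(V) if V > 0 else 0.0
    return U, V, z

# Toy data: subjects with the highest protein level fail earliest,
# so the score statistic should be positive (higher level, higher hazard).
x = [5.0, 4.0, 3.0, 2.0, 1.0]
time = [1.0, 2.0, 3.0, 4.0, 5.0]
event = [1, 1, 1, 1, 1]
U, V, z = cox_score_test(x, time, event)
print(round(U, 2), round(z, 2))
```

At β = 0 this score test is equivalent to a log-rank-style statistic, which is why it is a cheap first pass before fitting the full model.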

Jul 21, 2024

A recipe for cooking up more effective artificial neurons

Posted by in category: materials

The comprehensive study details a pathway for developing artificial spiking neurons out of new materials.

Jul 21, 2024

Liquid metal offers fluid replacement of quantum chip interconnects

Posted by in categories: computing, quantum physics

On Jan 1, 2009, Galen Strawson published Realistic Monism: Why Physicalism Entails Panpsychism.

Jul 21, 2024

Frontiers: The purpose of the attention schema theory is to explain how an information-processing device

Posted by in categories: biological, neuroscience, robotics/AI

The brain arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.

This article is part of a special issue on consciousness in humanoid robots. The purpose of this article is to summarize the attention schema theory (AST) of consciousness for those in the engineering or artificial intelligence community who may not have encountered previous papers on the topic, which tended to be in psychology and neuroscience journals. The central claim of this article is that AST is mechanistic, demystifies consciousness and can potentially provide a foundation on which artificial consciousness could be engineered. The theory has been summarized in detail in other articles (e.g., Graziano and Kastner, 2011; Webb and Graziano, 2015) and has been described in depth in a book (Graziano, 2013). The goal here is to briefly introduce the theory to a potentially new audience and to emphasize its possible use for engineering artificial consciousness.

The AST was developed beginning in 2010, drawing on basic research in neuroscience, psychology, and especially on how the brain constructs models of the self (Graziano, 2010, 2013; Graziano and Kastner, 2011; Webb and Graziano, 2015). The main goal of this theory is to explain how the brain, a biological information processor, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is in the realm of science and engineering.
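The mechanistic claim above, that a system reporting on itself only through a simplified internal model will sincerely assert properties the mechanism does not have, can be illustrated with a deliberately toy sketch. Everything here (the class, the schema's contents, the question strings) is invented for illustration and is not from Graziano's papers.

```python
class ToyAttentionSchemaAgent:
    """Toy illustration of AST's core move: the agent answers questions
    about itself by consulting a simplified internal model (the schema),
    not by inspecting the actual mechanism of its attention."""

    def __init__(self):
        # The actual mechanism: a mundane competition among signals.
        self.signals = {"apple": 0.9, "clock": 0.4, "noise": 0.1}

    def attend(self):
        # Attention = whichever signal wins the competition.
        return max(self.signals, key=self.signals.get)

    def attention_schema(self):
        # The schema is a cartoonish, detail-free model of attending:
        # it records the target but omits the physical mechanism,
        # describing the process in non-physical terms instead.
        return {
            "target": self.attend(),
            "description": "a subjective experience, not a physical process",
        }

    def report(self, question):
        # All self-report is routed through the schema. The agent has no
        # other access to its own attention, so it asserts, with full
        # confidence, whatever the schema says.
        schema = self.attention_schema()
        if question == "what are you aware of?":
            return f"I am aware of the {schema['target']}."
        if question == "what is your awareness?":
            return f"My awareness is {schema['description']}."
        return "I don't know."

agent = ToyAttentionSchemaAgent()
print(agent.report("what are you aware of?"))
print(agent.report("what is your awareness?"))
```

The point of the sketch is only that the second report is confidently wrong about the mechanism: the "awareness" it describes is a property of the model, not of the machinery, which is the sense in which AST says the claim of non-physical awareness gets made.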

Jul 21, 2024

OpenAI’s 5 Levels Of ‘Super AI’ (AGI To Outperform Human Capability)

Posted by in category: robotics/AI

OpenAI is reportedly tracking its progress toward building artificial general intelligence (AGI). This is AI that can outperform humans on most tasks. Using a set of five levels, the company can gauge its progress towards its ultimate goal.

According to Bloomberg, OpenAI believes its technology is approaching the second of five levels on the path to artificial general intelligence. Anna Gallotti, co-chair of the International Coaching Federation’s special task force for AI and coaching, called this a “super AI” scale when sharing the news on LinkedIn, pointing to opportunities for entrepreneurs, coaches and consultants.

Axios said that AI experts disagree over whether “today’s large language models, which excel at generating text and images, will ever be capable of broadly understanding the world and flexibly adapting to novel information and circumstances.” Disagreement means blind spots, which lead to opportunity.

Jul 21, 2024

What is AGI and how will we know when it’s been attained?

Posted by in categories: existential risks, robotics/AI

Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It’s also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with “long-term planning” skills could pose an existential risk to humanity.

But what exactly is AGI and how will we know when it’s been attained? Once on the fringe of computer science, it’s now a buzzword that’s being constantly redefined by those trying to make it happen.

Jul 21, 2024

Watch Your Step! There’s AGI Everywhere

Posted by in category: robotics/AI

A world teeming with self-aware brands would be quite hectic. According to Gartner, by 2025, generative A.I. will be a workforce partner within 90 percent of companies worldwide. This doesn’t mean that all of these companies will be surging toward organizational AGI, however. Generative A.I., and LLMs in particular, can’t meet an organization’s automation needs on its own. Giving an entire workforce access to GPTs or Copilot won’t move the needle much in terms of efficiency. It might help people write better emails faster, but it takes a great deal of work to make LLMs reliable resources for user queries.

Their hallucinations have been well documented, and training them to provide trustworthy information is a herculean effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with people at OpenAI. They were able to create a personal assistant that the investment bank’s advisors can chat with, tapping into a large portion of its collective knowledge. “Now you’re talking about wiring it up to every system,” he said, with regard to creating the kinds of ecosystems required for organizational A.I. “I don’t know if that’s five years or three years or 20 years, but what I’m confident of is that that is where this is going.”

Continue reading “Watch Your Step! There’s AGI Everywhere” »

Jul 21, 2024

David Wiltshire | Solution to the Cosmological Constant Problem

Posted by in categories: cosmology, quantum physics

Jul 21, 2024

We are seeing a sign that dark energy is not a cosmological constant

Posted by in categories: cosmology, quantum physics

Image: custom colormap package by cmastro; Claire Lamman / DESI collaboration

On April 4, 2024, the Dark Energy Spectroscopic Instrument (DESI), a collaboration of more than 900 researchers from over 70 institutions around the world, announced that it had made the most precise measurement yet of the expansion of the universe and its acceleration.

Jul 21, 2024

The Donation of Human Biological Material for Brain Organoid Research: The Problems of Consciousness and Consent

Posted by in categories: biotech/medical, ethics, neuroscience

Human brain organoids are three-dimensional masses of tissues derived from human stem cells that partially recapitulate the characteristics of the human brain. They have promising applications in many fields, from basic research to applied medicine. However, ethical concerns have been raised regarding the use of human brain organoids. These concerns primarily relate to the possibility that brain organoids may become conscious in the future. This possibility is associated with uncertainties about whether and in what sense brain organoids could have consciousness and what the moral significance of that would be. These uncertainties raise further concerns regarding consent from stem cell donors who may not be sufficiently informed to provide valid consent to the use of their donated cells in human brain organoid research.

Page 418 of 11,894