
David Chalmers is one of the world’s best-known philosophers of mind and thought leaders on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) who have written and spoken publicly about the Matrix simulation argument and the technological singularity. (See, for example, David’s presentation at the 2009 Singularity Summit, or read his The Singularity: A Philosophical Analysis.)

During our conversation with David, we discuss topics such as: how and why Chalmers got interested in philosophy; his search for answers to what he considers some of the biggest questions, such as the nature of reality, consciousness, and artificial intelligence; the fact that academia in general, and philosophy in particular, doesn’t seem to engage with technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test, and other benchmarks on the way to the singularity; consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking domain of exclusively human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life extension; and reverse engineering long-dead people, such as Ray Kurzweil’s father.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.

00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs. Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI): one that does not depend on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Website: https://singularitynet.io
X: https://x.com/SingularityNET
Instagram: singularitynet.io
Forum: https://community.singularitynet.io
Telegram: https://t.me/singularitynet
WhatsApp: https://whatsapp.com/channel/0029VaM8
Warpcast: https://warpcast.com/singularitynet
Mindplex Social: https://social.mindplex.ai/@Singulari
GitHub: https://github.com/singnet
LinkedIn: singularitynet

Is our brain responsible for how we react to people who are different from us? Why can’t people with autism tell lies? How does the brain produce empathy? Why is imitation a fundamental trait of any social interaction? What are the secret advantages of teamwork? How does the social environment influence the brain? Why is laughter different from any other emotion?

This course is aimed at deepening our understanding of how the brain shapes and is shaped by social behavior, exploring a variety of topics such as the neural mechanisms behind social interactions, social cognition, theory of mind, empathy, imitation, mirror neurons, interacting minds, and the science of laughter.

Serious Science experts from leading universities worldwide answer these and other questions. This course offers a range of scientific perspectives on classical philosophical problems in ethics. It consists of 10 lectures filmed between 2014 and 2020. If you have any questions or comments on the content of this course, please write to us at hello@serious-science.org.


Follow us:
Patreon: seriousscience
Facebook: serious.science.org
Twitter: scienceserious
YouTube: seriousscience
Instagram: serious.science

From the article:

Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.


Based on a talk delivered at the conference Existential Threats and Other Disasters: How Should We Address Them? (May 30–31, 2024, Budva, Montenegro), sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.

For twenty years, I have been talking about old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies to keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors are less expensive and can work longer. UBI is also more intergenerationally equitable, especially if we face technological unemployment.

But as someone anticipating grandchildren, I find the declining-fertility side of the demographic shift more on my mind. It is apparently on the minds of a growing number of people, including folks on the Right, ranging from those worried that feminists are pushing humanity toward suicide, or that there won’t be enough of their kind of people in the future, to those worried about the health of innovation and the economy. The Left’s reluctance to entertain any pronatalism is understandable, given the reactionary ways it has been promoted. But I believe a progressive pro-family agenda is possible.

“We need a defined framework, but instead what we see here is a fairly wild race between labs,” one journal editor told me during the ISSCR meeting. “The overarching question is: How far do they go, and where do we place them in a legal-moral spectrum? How can we endorse working with these models when they are much further along than we were two years ago?”

So where will the race lead? Most scientists say the point of mimicking the embryo is to study it during the period when it would be implanting in the wall of the uterus. In humans, this moment is rarely observed. But stem-cell embryos could let scientists dissect these moments in detail.

Yet it’s also possible that these lab embryos turn out to be the real thing—so real that if they were ever transplanted into a person’s womb, they could develop into a baby.

Want to go on an unforgettable trip? Abstract submission is closing soon! Exciting news from SnT, the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg: we are thrilled to announce the 1st European Interstellar Symposium, in collaboration with esteemed partners such as the Interstellar Research Group, the Initiative & Institute for Interstellar Studies, the Breakthrough Prize Foundation, and the Luxembourg Space Agency. This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence. Don’t miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the “Call for Papers” link in the comment section to secure your spot! Image credit: Maciej Rębisz, Science Now Studio #interstellar #conference #Luxembourg #exoplanet

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized access to the internet and to coding, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionality into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.