
Homo Invocator

We live immersed in a persistent illusion: the idea that consciousness arises from the brain like the flame from a candle. Contemporary science, constrained by the very instruments it creates, proclaims that the mind is merely the result of electrical impulses and chemical reactions — an epiphenomenon of flesh.

Yet a deeper look — one that doesn’t reject science but rather transcends it — reveals a more radical reality: we, living beings, are not the origin of consciousness, but rather its antenna.

We are hardware. Bodies shaped by millions of years of biological evolution, a complex architecture of atoms and molecules organized into a fractal of systems. But this hardware, no matter how sophisticated, is nothing more than a receptacle, a stage, an antenna. What truly moves, creates, and inspires does not reside here, within this tangible three-dimensional realm; it resides in an unlimited field, a divine matrix where everything already exists. Our mind, far from being an original creator, is a channel, a receiver, an interpreter.

The great question of our time — and perhaps of all human history — is this: how can we update the software running on this biological hardware without the hardware itself becoming obsolete? Herein lies the fundamental paradox: we can dream of enlightenment, wisdom, and transcendence, yet if the body does not keep pace — if the physical circuits cannot support the flow — the connection breaks, the signal distorts, and the promise of spiritual evolution stalls.

The human body, a product of Darwinian evolution’s slow dance, is both marvel and prison. Our eyes capture only a minuscule fraction of the electromagnetic spectrum; our ears are limited to a narrow range of frequencies; our brains filter out and discard 99% of the information surrounding us. Human hardware was optimized for survival — not for truth!

This is the first major limitation: if we are receivers of a greater reality, our apparatus is radically constrained. It’s like trying to capture a cosmic symphony with an old radio that only picks up static. We may glimpse flashes — a sudden intuition, an epiphany, a mystical experience — but the signal is almost always imperfect.

Thus, every spiritual tradition in human history — from shamans to mystery schools, from Buddhism to Christian mysticism — has sought ways to expand or “hack” this hardware: fasting, meditation, chanting, ecstatic dance, entheogens. These are, in fact, attempts to temporarily reconfigure the biological antenna to tune into higher frequencies. Yet we remain limited: the body deteriorates, falls ill, ages, and dies.

If the body is hardware, then the mind — or rather, the set of informational patterns running through it — is software: human software (and a limited one at that). This software isn’t born with us; it’s installed through culture, language, education, and experience. We grow up running inherited programs, archaic operating systems that dictate beliefs, prejudices, and identities.

Beneath this cultural software, however, lies a deeper code: access to an unlimited field of possibilities. This field — call it God, Source, Cosmic Consciousness, the Akashic Records, it doesn’t matter — contains everything: all ideas, all equations, all music, all works of art, all solutions to problems not yet conceived. We don’t invent anything; we merely download it.

Great geniuses throughout history — from Nikola Tesla to Mozart, from Leonardo da Vinci to Fernando Pessoa — have testified to this mystery: ideas “came” from outside, as if whispered by an external intelligence. Human software, then, is the interface between biological hardware and this divine ocean. But here lies the crucial question: what good is access to supreme software if the hardware lacks the capacity to run it?

An old computer might receive the latest operating system, but only if its minimum specifications allow it. Otherwise, it crashes, overheats, or freezes. The same happens to us: we may aspire to elevated states of consciousness, but without a prepared body, the system fails. That’s why so many mystical experiences lead to madness or physical collapse.

Thus, we arrive at the heart of the paradox. If the hardware doesn’t evolve, even the most advanced software download is useless. But if the software isn’t updated, the hardware remains a purposeless machine — a biological robot succumbing to entropy.

Contemporary society reflects this tension. On one hand, biotechnology, nanotechnology, and regenerative medicine promise to expand our hardware: stronger, more resilient, longer-lived bodies. On the other, the cultural software governing us remains archaic: nationalism, tribalism, dogma, consumerism. It’s like installing a spacecraft engine onto an ox-drawn cart.

At the opposite end of the spectrum, we find the spiritual movement, which insists on updating the software — through meditation, energy therapies, expanded states of consciousness — but often neglects the hardware. Weakened, neglected bodies, fed with toxins, become incapable of sustaining the frequency they aim to channel. The result is a fragile, disembodied spirituality — out of sync with matter.

Humanity’s challenge in the 21st century and beyond is not to choose between hardware and software, but to unify them. Living longer is meaningless if the mind remains trapped in limiting programs. Aspiring to enlightenment is futile if the body collapses under the intensity of that light.

It’s essential to emphasize: the power does not reside in us (though, truthfully, it does — if we so choose). This isn’t a doctrine of self-deification, but of radical humility. We are merely antennas. True power lies beyond the physical reality we know, in a plane where everything already exists — a divine, unlimited power from which Life itself emerges.

Our role is simple yet grand: to invoke. We don’t create from nothing; we reveal what already is. We don’t invent; we translate. A work of art, a mathematical formula, an act of compassion — all are downloads from a greater source.

Herein lies the beauty: this field is democratic. It belongs to no religion, no elite, no dogma. It’s available to everyone, always, at any moment. The only difference lies in the hardware’s capacity to receive it and the (human) software that interprets it.

But there are dangers. If the hardware is weak or the software corrupted, the divine signal arrives distorted. This is what we see in false prophets, tyrants, and fanatics: they receive fragments of the field, but their mental filters — laden with fear, ego, and the desire for power — twist the message. Instead of compassion, violence emerges; instead of unity, division; instead of wisdom, dogma.

Therefore, conscious evolution demands both purification of the software (clearing toxic beliefs and hate-based programming) and strengthening of the hardware (healthy bodies, resilient nervous systems). Only then can the divine frequency manifest clearly.

If we embrace this vision, humanity’s future will be neither purely biological nor purely spiritual — it will be the fusion of both. The humans of the future won’t merely be smarter or longer-lived; they’ll be more attuned. A Homo Invocator: the one who consciously invokes the divine field and translates it into matter, culture, science, and art.

The initial paradox remains: hardware without software is useless; software without hardware is impossible. But the resolution isn’t in choosing one over the other — it’s in integration. The future belongs to those who understand that we are antennas of a greater power, receivers of an infinite Source, and who accept the task of refining both body and mind to become pure channels of that reality.

If we succeed, perhaps one day we’ll look back and realize that humanity’s destiny was never to conquer Earth or colonize Mars — but to become a conscious vehicle for the divine within the physical world.

And on that day, we’ll understand that we are neither merely hardware nor merely software. We are the bridge.

Deep down, aren’t we just drifting objects after all?
The question is rhetorical, for I don’t believe any of us humans holds the answer.

__
Copyright © 2025, Henrique Jorge (ETER9)

Image by Gerd Altmann from Pixabay

[ This article was originally published in Portuguese in Link to Leaders at: https://linktoleaders.com/o-ser-como-interface-henrique-jorge-eter9/]

The Holy Grail of Technology

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.
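As a toy illustration of this amplification effect (the data and group labels below are hypothetical, not drawn from any real system): a model that simply learns historical decision rates will faithfully reproduce whatever bias those records contain.

```python
# Toy sketch: a "model" that learns historical hiring rates per group
# reproduces whatever bias its training records contain. All data here
# are hypothetical and for illustration only.

# Each record: (group, was_hired) for equally qualified candidates.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_hire_probability(records, group):
    """'Train' by memorising the historical hire rate for a group."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# Equally qualified applicants, yet the learned model scores them very
# differently, because the historical data were biased:
print(learned_hire_probability(history, "A"))  # 0.75
print(learned_hire_probability(history, "B"))  # 0.25
```

Nothing in the code is malicious; the unfairness lives entirely in the data, which is why auditing training data matters as much as auditing the algorithm itself.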

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

The Symphony of Tomorrow (A Poetic Title for a Serious Reflection)

At the threshold of a century poised for unprecedented transformations, we find ourselves at a crossroads unlike any before. The convergence of humanity and technology is no longer a distant possibility; it has become a tangible reality that challenges our most fundamental conceptions of what it means to be human.

This article seeks to explore the implications of this new era, in which Artificial Intelligence (AI) emerges as a central player. Are we truly on the verge of a symbiotic fusion, or is the conflict between the natural and the artificial inevitable?

The prevailing discourse on AI oscillates between two extremes: on one hand, some view this technology as a powerful extension of human capabilities, capable of amplifying our creativity and efficiency. On the other, a more alarmist narrative predicts the decline of human significance in the face of relentless machine advancement. Yet, both perspectives seem overly simplistic when confronted with the intrinsic complexity of this phenomenon. Beyond the dichotomy of utopian optimism and apocalyptic pessimism, it is imperative to critically reflect on AI’s cultural, ethical, and philosophical impact on the social fabric, as well as the redefinition of human identity that this technological revolution demands.

Since the dawn of civilization, humans have sought to transcend their natural limitations through the creation of tools and technologies. From the wheel to the modern computer, every innovation has been seen as a means to overcome the physical and cognitive constraints imposed by biology. However, AI represents something profoundly different: for the first time, we are developing systems that not only execute predefined tasks but also learn, adapt, and, to some extent, think.

This transition should not be underestimated. While previous technologies were primarily instrumental—serving as controlled extensions of human will—AI introduces an element of autonomy that challenges the traditional relationship between subject and object. Machines are no longer merely passive tools; they are becoming active partners in the processes of creation and decision-making. This qualitative leap radically alters the balance of power between humans and machines, raising crucial questions about our position as the dominant species.

But what does it truly mean to “be human” in a world where the boundaries between mind and machine are blurring? Traditionally, humanity has been defined by attributes such as consciousness, emotion, creativity, and moral decision-making. Yet, as AI advances, these uniquely human traits are beginning to be replicated—albeit imperfectly—within algorithms. If a machine can imitate creativity or exhibit convincing emotional behavior, where does our uniqueness lie?

This challenge is not merely technical; it strikes at the core of our collective identity. Throughout history, humanity has constructed cultural and religious narratives that placed us at the center of the cosmos, distinguishing us from animals and the forces of nature. Today, that narrative is being contested by a new technological order that threatens to displace us from our self-imposed pedestal. It is not so much the fear of physical obsolescence that haunts our reflections but rather the anxiety of losing the sense of purpose and meaning derived from our uniqueness.

Despite these concerns, many AI advocates argue that the real opportunity lies in forging a symbiotic partnership between humans and machines. In this vision, technology is not a threat to humanity but an ally that enhances our capabilities. The underlying idea is that AI can take on repetitive or highly complex tasks, freeing humans to engage in activities that truly require creativity, intuition, and—most importantly—emotion.

Concrete examples of this approach can already be seen across various sectors. In medicine, AI-powered diagnostic systems can process vast amounts of clinical data in record time, allowing doctors to focus on more nuanced aspects of patient care. In the creative industry, AI-driven text and image generation software are being used as sources of inspiration, helping artists and writers explore new ideas and perspectives. In both cases, AI acts as a catalyst, amplifying human abilities rather than replacing them.

Furthermore, this collaboration could pave the way for innovative solutions in critical areas such as environmental sustainability, education, and social inclusion. For example, powerful neural networks can analyze global climate patterns, assisting scientists in predicting and mitigating natural disasters. Personalized algorithms can tailor educational content to the specific needs of each student, fostering more effective and inclusive learning. These applications suggest that AI, far from being a destructive force, can serve as a powerful instrument to address some of the greatest challenges of our time.

However, for this vision to become reality, a strategic approach is required—one that goes beyond mere technological implementation. It is crucial to ensure that AI is developed and deployed ethically, respecting fundamental human rights and promoting collective well-being. This involves regulating harmful practices, such as the misuse of personal data or the indiscriminate automation of jobs, as well as investing in training programs that prepare people for the new demands of the labor market.

While the prospect of symbiotic fusion is hopeful, we cannot ignore the inherent risks of AI’s rapid evolution. As these technologies become more sophisticated, so too does the potential for misuse and unforeseen consequences. One of the greatest dangers lies in the concentration of power in the hands of a few entities, whether they be governments, multinational corporations, or criminal organizations.

Recent history has already provided concerning examples of this phenomenon. The manipulation of public opinion through algorithm-driven social media, mass surveillance enabled by facial recognition systems, and the use of AI-controlled military drones illustrate how this technology can be wielded in ways that undermine societal interests.

Another critical risk in AI development is the so-called “alignment problem.” Even if a machine is programmed with good intentions, there is always the possibility that it misinterprets its instructions or prioritizes objectives that conflict with human values. This issue becomes particularly relevant in the context of autonomous systems that make decisions without direct human intervention. Imagine, for instance, a self-driving car forced to choose between saving its passenger or a pedestrian in an unavoidable collision. How should such decisions be made, and who bears responsibility for the outcome?

These uncertainties raise legitimate concerns about humanity’s ability to maintain control over increasingly advanced technologies. The very notion of scientific progress is called into question when we realize that accumulated knowledge can be used both for humanity’s benefit and its detriment. The nuclear arms race during the Cold War serves as a sobering reminder of what can happen when science escapes moral oversight.

Whether the future holds symbiotic fusion or inevitable conflict, one thing is clear: our understanding of human identity must adapt to the new realities imposed by AI. This adjustment will not be easy, as it requires confronting profound questions about free will, the nature of consciousness, and the essence of individuality.

One of the most pressing challenges is reconciling our increasing technological dependence with the preservation of human dignity. While AI can significantly enhance quality of life, there is a risk of reducing humans to mere consumers of automated services. Without a conscious effort to safeguard the emotional and spiritual dimensions of human experience, we may end up creating a society where efficiency outweighs empathy, and interpersonal interactions are replaced by cold, impersonal digital interfaces.

On the other hand, this very transformation offers a unique opportunity to rediscover and redefine what it means to be human. By delegating mechanical and routine tasks to machines, we can focus on activities that truly enrich our existence—art, philosophy, emotional relationships, and civic engagement. AI can serve as a mirror, compelling us to reflect on our values and aspirations, encouraging us to cultivate what is genuinely unique about the human condition.

Ultimately, the fate of our relationship with AI will depend on the choices we make today. We can choose to view it as an existential threat, resisting the inevitable changes it brings, or we can embrace the challenge of reinventing our collective identity in a post-humanist era. The latter, though more daring, offers the possibility of building a future where technology and humanity coexist in harmony, complementing each other.

To achieve this, we must adopt a holistic approach that integrates scientific, ethical, philosophical, and sociological perspectives. It also requires an open, inclusive dialogue involving all sectors of society—from researchers and entrepreneurs to policymakers and ordinary citizens. After all, AI is not merely a technical tool; it is an expression of our collective imagination, a reflection of our ambitions and fears.

As we gaze toward the horizon, we see a world full of uncertainties but also immense possibilities. The future is not predetermined; it will be shaped by the decisions we make today. What kind of social contract do we wish to establish with AI? Will it be one of domination or cooperation? The answer to this question will determine not only the trajectory of technology but the very essence of our existence as a species.

Now is the time to embrace our historical responsibility and embark on this journey with courage, wisdom, and an unwavering commitment to the values that make human life worth living.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/a-sinfonia-do-amanha-tit…exao-seria ]

Dear all polymaths: I will arrange a conference for polymaths like all of you. Please help me organise it.

The conference will be called the Lifeboat Foundation Conference for Polymaths, Futurists, and Visionaries.

The venue will be the Charoen Thani Hotel, Khon Kaen · 260 Sri Chant Rd, Nai Mueang, Mueang Khon Kaen District, Khon Kaen 40000, Thailand

https://maps.app.goo.gl/sdG14SRcrJEJGYGH6

The hotel offers pleasant accommodation with a fitness centre, swimming pool, sauna, Jacuzzi, and restaurant.


We Need a Far Better Plan for Dealing With Existential Threat

Here’s my latest Opinion piece just out for Newsweek. Check it out! Lifeboat Foundation mentioned.


We need to remember the universal distress we all felt when the world started to shut down in March 2020: when not enough ventilators and hospital beds could be found; when store shelves were bare and supplies were scarce; when no COVID-19 vaccines existed. We need to remember because COVID is just one of many existential risks that can appear out of nowhere and halt our lives as we know them.

Naturally, I’m glad that the world has carried on with its head high after the pandemic, but I’m also worried that more people didn’t take to heart a longer-term philosophical view: that human and earthly life is highly tenuous. The best, most practical way to protect ourselves from further existential risks is to prepare ahead of time.

That means creating vaccines for diseases even when no dire need is imminent. That means trying to continue to denuclearize the military regardless of social conflicts. That means granting astronomers billions of dollars to scan the skies for planet-killer asteroids. That means spending time to build safeguards into AI, and keeping it far from military munitions.

If we don’t take these steps now, either via government or private action, it could be far too late when a global threat emerges. We must treat existential risk as the threat it is: a human species and planet killer—the potential end of everything we know.

Effect of exercise for depression: systematic review and network meta-analysis of randomised controlled trials

Cecile G. Tamura (Lifeboat Foundation): an effective treatment for depression, from a systematic review of 200 unique RCTs:

Exercise.


Objective To identify the optimal dose and modality of exercise for treating major depressive disorder, compared with psychotherapy, antidepressants, and control conditions.

Design Systematic review and network meta-analysis.

Data sources Cochrane Library, Medline, Embase, SPORTDiscus, and PsycINFO databases.

Methods Screening, data extraction, coding, and risk-of-bias assessment were performed independently and in duplicate. Bayesian arm-based multilevel network meta-analyses were performed for the primary analyses. The quality of the evidence for each arm was graded using the Confidence in Network Meta-Analysis (CINeMA) online tool.
