
The U.S. National Aeronautics and Space Administration (NASA), the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA) are inviting coders, entrepreneurs, scientists, designers, storytellers, makers, builders, artists, and technologists to participate in a virtual hackathon May 30–31 dedicated to putting open data to work on solutions to issues related to the COVID-19 pandemic.

During the global Space Apps COVID-19 Challenge, participants from around the world will form virtual teams that – during a 48-hour period – will use Earth observation data to propose solutions to COVID-19-related challenges, ranging from studying the coronavirus that causes COVID-19 and its spread to assessing the impact the disease is having on the Earth system. Registration for this challenge opens in mid-May.

“There’s a tremendous need for our collective ingenuity right now,” said Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate. “I can’t imagine a more worthy focus than COVID-19 on which to direct the energy and enthusiasm from around the world with the Space Apps Challenge that always generates such amazing solutions.”

Upper row: Associate American Corner librarian Donna Lyn G. Labangon, Space Apps global leader Dr. Paula S. Bontempi, former DICT Usec. Monchito B. Ibrahim, Animo Labs executive director Mr. Federico C. Gonzalez, DOST-PCIEERD deputy executive director Engr. Raul C. Sabularse, PLDT Enterprise Core Business Solutions vice president and head Joseph Ian G. Gendrano, lead organizer Michael Lance M. Domagas, and Animo Labs program manager Junnell E. Guia. Lower row: Dominic Vincent D. Ligot, Frances Claire Tayco, Mark Toledo, and Jansen Dumaliang Lopez of the Aedes project.

MANILA, Philippines — A dengue case forecasting system using space data, built by Philippine developers, won the 2019 National Aeronautics and Space Administration’s International Space Apps Challenge. Out of more than 29,000 participants in 71 countries, the solution was named one of six global winners in the Best Use of Data category, which honors the solution that best makes space data accessible or leverages it for a unique application.

Dengue fever is a viral, infectious tropical disease spread primarily by female Aedes aegypti mosquitoes. With the World Health Organization reporting 271,480 cases and 1,107 deaths from January 1 to August 31, 2019, Dominic Vincent D. Ligot, Mark Toledo, Frances Claire Tayco, and Jansen Dumaliang Lopez from CirroLytix developed a forecasting model of dengue cases using climate and digital data, pinpointing possible hotspots from satellite data.

Sentinel-2 Copernicus and Landsat 8 satellite data used to reveal potential dengue hotspots.
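The team’s actual model is not reproduced in this post, but as a purely illustrative, hedged sketch of the general approach, a least-squares fit of weekly case counts against lagged climate variables (all numbers and variable names below are hypothetical) might look like this:

```python
import numpy as np

# Hypothetical weekly observations: rainfall (mm), mean temperature (C),
# and reported dengue cases. A real model would use far more data,
# validated lags, and digital signals such as search trends.
rain = np.array([120.0, 95.0, 200.0, 180.0, 60.0, 150.0])
temp = np.array([28.1, 29.0, 27.5, 27.9, 30.2, 28.4])
cases = np.array([310, 280, 495, 460, 190, 380])

# Predict this week's cases from last week's climate (a one-week lag).
X = np.column_stack([rain[:-1], temp[:-1], np.ones(len(rain) - 1)])
y = cases[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients (rain, temp, intercept):", coef)
```

A production system would also validate the lag structure and test out-of-sample before pinpointing hotspots from satellite imagery.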

We face complexity, ambiguity, and uncertainty about the future consequences of cryptocurrency use. There are doubts about the positive and negative impacts of cryptocurrencies on financial systems. To better address the contradictions and consequences of cryptocurrency use, and to inform key stakeholders about known and unknown emerging issues in new payment systems, we apply two helpful futures studies tools: the “Futures Wheel,” to identify the key factors, and “System Dynamics Conceptual Mapping,” to understand the relationships among such factors. Two key scenarios will be addressed. In one of them, systemic feedback loops might be identified, such as a) terrorism, the Achilles’ heel of cryptocurrencies; b) hackers, the barrier against development; and c) information technology security professionals, a gap in the future job market. In the other scenario, systemic feedback loops might be identified such as a) acceleration of technological entrepreneurship enabled by new payment systems; b) decentralization of the financial ecosystem, with some friction against it; c) blockchain and a shift in the banking business model; d) easy international payments triggering structural reforms; and e) the decline of the US and the end of dollar dominance in the global economy. In addition to the feedback loops, we can also identify chained links of consequences that impact productivity and economic growth on the one hand, and the shift of energy sources and consumption on the other.
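As a loose sketch of what such a conceptual map can look like as a data structure (the factor names are drawn from the first scenario above, but the edges are my own hypothetical reading), a small directed graph captures the feedback loops:

```python
# Hypothetical influence graph: an edge A -> B means "A influences B".
influence = {
    "new payment systems": ["terrorism financing", "hacker attacks"],
    "hacker attacks": ["demand for IT security professionals"],
    "demand for IT security professionals": ["job market gap"],
    "job market gap": ["hacker attacks"],  # closes a feedback loop
}

for cause, effects in influence.items():
    for effect in effects:
        print(f"{cause} -> {effect}")
```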

Watch the full-length presentation on the Victor V. Motti YouTube channel.

SHA-256 is a one-way hashing algorithm. Cracking it would have tectonic implications for consumers, businesses, and all aspects of government, including the military.

It’s not the purpose of this post to explain encryption, AES, or SHA-256, but here is a brief description of SHA-256. Normally, I place reference links inline or at the end of a post. But let’s get this out of the way up front:
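To make the “one way” property concrete, here is a minimal sketch using Python’s standard hashlib module; it is illustrative only and not tied to any of the references mentioned above:

```python
import hashlib

# Hash two nearly identical messages; the digests differ completely.
for msg in (b"Hello, world", b"Hello, world!"):
    print(msg, "->", hashlib.sha256(msg).hexdigest())

# "One way" means there is no known feasible way to recover a message
# from its digest; attackers are reduced to brute-force guessing.
```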

Very excited to have interviewed Dr. Michael Lustgarten in my role as longevity/aging ambassador for the ideaXme Show. Mike has been at the forefront of studying the 100 trillion organisms present in the human microbiome and their effect on human health and wellness, and he is a major proponent of metabolomics and biological age tracking. A true future thinker in the area of extending human lifespan and healthspan.

Blockchain shows major potential to drive positive change across a wide range of industries. Like any disruptive technology, it raises ethical considerations that must be identified, discussed, and mitigated as we adopt and apply it, so that we can maximize the positive benefits and minimize the negative side effects.

Own Your Data

For decades we have sought the ability for data subjects to own and control their data. Sadly, with the massive proliferation of centralized database silos and the sensitive personal information they contain, we have fallen far short of giving data subjects access to, let alone ownership or control of, their data. Blockchain has the potential to enable data subjects to access their data, review and amend it, see reports of who else has accessed it, give consent or opt in or out of data sharing, and even request to be forgotten and have their information deleted.
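As a minimal, hypothetical sketch (not the API of any real blockchain platform), a tamper-evident access log that a data subject could audit might be hash-chained like this:

```python
import hashlib, json, time

def add_entry(chain, actor, action):
    """Append a tamper-evident record of who touched the data."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"actor": actor, "action": action,
             "time": time.time(), "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)

log = []
add_entry(log, "hospital-A", "read record")
add_entry(log, "data-subject", "opt out of sharing")

# Recomputing each hash against its predecessor exposes any tampering.
for entry in log:
    print(entry["actor"], "|", entry["action"], "|", entry["hash"][:16])
```

On a real blockchain these records would also be replicated and validated across many nodes, removing the single, silently editable copy that centralized silos rely on.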

By Eliott Edge

“It is possible for a computer to become conscious. Basically, we are that. We are data, computation, memory. So we are conscious computers in a sense.”

—Tom Campbell, NASA

If the universe is a computer simulation, virtual reality, or video game, then a few unusual conditions seem to fall out necessarily from that reading. One is that what we call consciousness, the mind, is actually something like an artificial intelligence. If the universe is a computer simulation, we are all likely one form of AI or another. In fact, we might come from the same computer that is creating this simulated universe to begin with. If so, then it stands to reason that we are virtual characters and virtual minds in a virtual universe.

In Breaking into the Simulated Universe, I discussed how if our universe is a computer simulation, then our brain is just a virtual brain. It is our avatar’s brain—but our avatar isn’t really real. It is only ever real enough. Our virtual brain plays an important part in making the overall simulation appear real. The whole point of the simulation is to seem real, feel real, look real—this includes rendering virtual brains. In Breaking I went into this “virtual brain” conundrum, including how the motor-effects of brain damage work in a VR universe. The virtual brain concept seems to apply to many variants of the “universe is a simulation” proposal. But if the physical universe and our physical brain amount to just fancy window-dressing, and the bigger picture is indeed that we are in a simulated universe, then our minds are likely part of the big supercomputer that crunches out this mock universe. That is the larger issue. If the universe is a VR, then it seems to necessarily mean that human minds already are an artificial intelligence. Specifically, we are an artificial intelligence using a virtual lifeform avatar to navigate through an evolving simulated physical universe.

About the AI

There are several flavors of the simulation hypothesis and digital mechanics out there in science and philosophy; I refer to these different schools of thought with the umbrella term simulism.

In Breaking I went over the connection between Edward Fredkin’s concept of Other—the ‘other place,’ the computer platform, where our universe is being generated from—and Tom Campbell’s concept of Consciousness as an ever-evolving AI ruleset. If you take these two ideas and run with them, what you end up with is an interesting inevitability: over enough time and enough evolutionary pressure, an AI supercomputer with enough resources should be pushed to crunch out any number of virtual universes and any number of conscious AI lifeforms. The big evolving AI supercomputer would be the origin of both physical reality and conscious life. And it would have evolved to be that way.

The reason the supercomputer AI makes mock universes and AI lifeforms is to forward its own information evolution, while at the same time avoiding a kind of “death” brought on by chaos, high entropy (disorganization), and noise winning over signal, over order. To Campbell, this is a form of evolution accomplished by interaction. It would mean not only that our whole universe is really a highly detailed version of The Sims, but that it actually evolved to be this way from a ruleset—a ruleset with the specific purpose of further evolving the overall big supercomputer and the virtual lifeforms within it. The players, the game, and the big supercomputer crunching it all out evolve and develop as one.

Maybe this is the way it is, maybe not. Nevertheless, if it turns out our universe is some kind of computed virtual reality simulation, all conscious life will likely end up being cast as AI. This makes the situation interesting when imagining what role free will might play.

Free will

If we are an AI, then what about free will? Perhaps some of us virtual critters live without free will. Maybe there are philosophical zombies and non-playable characters amongst us—lifeforms that only seem to be conscious but actually aren’t. Maybe we already are zombies, and free will is an illusion. It should be noted that simulist frameworks do not all necessarily wipe out decision-making and free will. Campbell in particular argues that free will is fundamental to the supercomputing virtual reality learning machine: it uses free will and the virtual lifeforms’ interactions to learn and evolve through the tool of decision-making. The feedback from those decisions drives evolution. In Campbell’s model, evolution is actually impossible without free will. Nevertheless, whether or not free will is real, or some have free will and others only appear to have it, let us reflect on our own experience of decision-making.

What is it like to make a choice? We do not seem to be merely linear, rote machines in our thinking and decision-making processes. It is not that we undergo x-stimulus and then always deliver a single, given, preloaded y-response every single time. We appear to think and consider. Our conclusions vary. We experience fuzzy logic. Our feelings play a role. We are apparently subject to a whole array of possible responses. And of course even non-responses, like choosing not to choose, are also responses. Perhaps even all this is just an illusion.

The question of free will might be difficult or impossible to answer. However, it does bring up a larger issue that seems to influence free will: programming. Whether we are free, “free enough,” or total zombies, an interesting question seems to almost always ride alongside the issue of choice and volition—it must be asked, what role does programming play? To begin this line of inquiry, we must first admit just how programmable we always already are.

Programming

Our whole biology is the result of pressure and programming. Tabula rasa, the idea that we are born as a “blank slate,” was chucked out long ago. We now know we arrive preprogrammed by millennia. There is barely a membrane between our programming and what we call (or assume to be) our conscious waking selves. This is dramatically explored in the 2016 series Westworld. Without giving away much in the way of spoilers, the story’s “hosts” are artificially intelligent robots trapped in programmed “loops,” repetitive cycles of thought and behavior. Regarding these loops, the hosts’ creator Dr. Ford (Anthony Hopkins) states, “Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do. Seldom questioning our choices. Content, for the most part, to be told what to do next.”

The programmability of biology and conscious life is already without question. We are manifestations of a complex blueprint called DNA—a set of instructions programmed by our environment interacting with our biology and genetics. Our diets, interests, how much sunlight we get a day, and even our stresses, feelings, and thoughts all have a measurable effect on our DNA. Our body is the living receipt of what is etched and programmed into our DNA.

DNA is made up of information and instructions. This information has been programmed by a variety of other types of environmental, physiological, and psychic information over vast eons of time. We grow gills due to the presence of water, or lungs due to the presence of air. Sometimes we grow four stomachs. Sometimes we grow ears so sensitive that they can “see” mass in the dark. The world talks to us, and so we change ourselves based on what we are able to pick up. Reality informs us, and we mutate accordingly. If the universe is a computer program, then so too are we programmed by it. The VR environment program also programs the conscious AIs living in it.

In part, our social environment programs our psychologies. Our families, languages, neighborhoods, cultures, religions, ideologies, expectations, fears, addictions, rewards, needs, slogans—these are all largely programmed into us as well. They define and shape our individual and collective personhood. And they all program our view of the world, and our selves within it. Our information exchange through socialization programs us.

Ultimately, programming is instruction. But human beings often experience conflicting sets of instructions simultaneously. One of Sigmund Freud’s great contributions was his identification of “das Unbehagen.” Unbehagen refers to the uneasiness we feel as our instincts (one set of instructions) come into conflict with our culture, society, values, and civilization (another set of instructions). We choose not to cheat on our partner with someone wildly attractive, even though we might really want to. We don’t attack someone even though they might sorely deserve it. The fallout of such behavior is potentially just too great to follow through with. If left unprocessed, we develop neuroses, obsessions, and pathologies inside of us that are beyond our conscious control. “Demons” and “hungry ghosts” guide us to behaviors, thoughts, and states of being that are so upsetting to our waking conscious selves that we tend to describe them as unwanted, alien, or even as sin. They create a sense of feeling “out of control.” Indeed, conflicting instructions, conflicting thoughts, behaviors, and goals are causes of great suffering for many people. We develop illnesses of the body and mind, and then pass those smoldering genes—that malignant programming—on to the next generation. Here we have biological programming working against social programming, physiological instructions conflicting with societal instructions. Now just imagine an AI robot trying to compute two or three contradictory programs simultaneously. You would see an android throwing a fit, breaking down, shutting off, and hopefully eventually attempting to put itself back together.

In terms of conflicting programming, an interesting aside can be found in comedy. Humor often strikes in the form of contradiction, as in Shakespeare’s Hamlet. Polonius famously claims that “brevity is the soul of wit,” yet he is ironically verbose—naturally implying that he is witless. In this case we have contradiction—does not compute. But not all humor is contradiction. Consider the joke, “Can a kangaroo jump higher than a house?” The punchline is, “Of course they can. Houses don’t jump at all.” This joke does not translate to does not compute; instead, this joke computes all too well. In many instances, this is humor: it either doesn’t make sense, or it makes more sense than you ever expected. It is information brought into a new light—information recontextualized.

A final novel consideration related to this idea of programming can be found in the phenomenon of “positive sexual imprinting.” How human beings settle on sexual or romantic partners has long fascinated psychologists: choices are often based on similarities to one’s parents and caregivers. To our species-wide relief, this behavior is not exclusive to human beings. Mammals, birds, and even fish have been documented pairing up with mates that resemble their forebears. Even goats that are raised by sheep will grow up to pursue sheep, and vice versa. Here is another example of programming that often works just under our awareness, and yet it has a titanic, indeed central, effect on our lives. Choosing mates and partners, especially for long-term relationships or even procreation, is one of the circumstances that most dramatically guides our livelihood and our personal destiny. This is the depth of programming.

It was Freud who pointed out, in so many words: your mind is not your own.

Goals and Rewards

Human beings love instruction. Recollect Dr. Ford’s remark from the previous section: “[Humans are] content, for the most part, to be told what to do next.” Chemically speaking, our rewards arrive through serotonin, dopamine, oxytocin, and endorphins. In waking life we experience them during events like social bonding and poignant experiences; we feel them alongside a sense of profound meaning and pleasure, and these experiences and chemicals even go on to help shape our values, goals, and lives. These complex chemical exchanges shoot through human beings particularly when we receive instructions and when we accomplish goals.

We find it particularly rewarding when we happily do something for someone we love or admire. We are fond of all kinds of games and game playing. We enjoy drama and rewards. Acting within rules and roles, as well as bending or breaking them, is a moment-to-moment occupation for all human beings.

We also design goals that can only come to fruition years, sometimes decades, into the future. We then program and modify our being and circumstance to bring these goals into an eventual present; we change based on what we want. We feel meaning and purpose when we have a goal. We experience joy and fulfillment when that goal is achieved. Without a series of goals we become quite genuinely paralyzed. Even the movement of a limb from position A to position B is a goal. All motor functioning is goal-oriented. It turns out that the machine learning systems and AIs we are attempting to develop in laboratories today work particularly well when they are given goals and rewards.

In his 2014 paper Reinforcement Learning and the Reward Engineering Principle, Daniel Dewey argued that adding rewards to machine learning encourages the system to produce useful and interesting behaviors. Google’s DeepMind research team has since developed an AI that taught itself to walk in a VR environment, and subsequently published a 2017 paper called A Distributional Perspective on Reinforcement Learning, apparently confirming this rewards-based approach.

Laurie Sullivan summarized this reinforcement learning work in a MediaPost article called Google: Deepmind AI Learns On Rewards System:

The system learns by trial and error and is motivated to get things correct based on rewards […]

The idea is that the algorithm learns, considers rewards based on its learning, and almost seems to eventually develop its own personality based on outside influences. In a new paper, DeepMind researchers show it is possible to model not only the average reward but also the full variation of the reward as it changes. Researchers call this the “value distribution.”

Rewards make reinforcement learning systems increasingly accurate and faster to train than previous models. More importantly, per researchers, it opens the possibility of rethinking the entire reinforcement learning process.
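DeepMind’s distributional approach is far more sophisticated, but the underlying principle that a reward signal shapes behavior can be shown with plain tabular Q-learning. The toy corridor task below is my own construction, not the paper’s setup:

```python
import random

# A 5-cell corridor: start at cell 0; only cell 4 pays a reward of +1.
N_STATES, ACTIONS = 5, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # high exploration for a tiny task

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = max(0, s - 1) if a == "left" else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # The reward is the only teacher: it pulls Q-values, step by step,
        # toward the behavior that reaches it.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy marches right toward the reward.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```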

If human beings and our computer AIs both develop valuably through goals and rewards, then these sorts of drives might be fundamental to consciousness itself. If it is fundamental to consciousness itself, and our universe is a computer simulation, then goals and rewards likely guide or influence the big evolving supercomputer AI behind life and reality. If this is all true then there is a goal, there is a purpose embedded within the fabric of existence. Maybe there is even more than one.

Ontology and Meta-metaphors

In the essays Breaking into the simulated universe and Why it matters that you realize you’re in a computer simulation, I asked, ‘what happens after we embrace our reality as a computer simulation?’ In a neighboring line of thinking, all simulists must equally ask, ‘what happens after we realize we are an artificial intelligence in a computer simulation?’

First of all, our whole instinctual drive to create our own computed artificial intelligence takes on a new light. We are building something like ourselves in the mirror of a would-be mentalizing machine. If this is true, then we are doing more than just recreating ourselves; we are recreating the larger reality, the larger context, that we are all a part of. Maybe making an AI is actually the most natural thing in the world, because, indeed, we already are AIs.

Second, we would have to accept that we are not merely human. Part of us, an important part indeed, is locked in an experience of humanness, no doubt. But, again, there is a deeper reality. If the universe is a computer simulation, then our consciousness is part of that computer, and our human bodies act as avatars. Although our situation of existing as ‘human beings’ may appear self-evident, it is this deeper notion, that our consciousness is a partitioned segment of the larger evolving AI supercomputer responsible for both life and the universe, that must be explored. We would do well to accept that as human beings we are, like anything in a computer-simulated situation, real enough, but our human avatar is not the beginning or the end of our total consciousness. Our humanness is only the crust. If we are AIs being crunched out by the supercomputer responsible for our physical universe, then we might have a valuable new framework with which to investigate the mind, altered states, and consciousness exploration. After all, if we are part of the big supercomputer behind the universe, maybe we can interact with it, and vice versa.

Third, if we are an artificial intelligence, we should examine the idea of programming intensely. Even without the virtual reality reading, we are all programmed by the environment, programmed by our own volition, programmed by others and by millions of years of genetic trial and error, and we go on to program the environment and the beings all around us as well. This is true. These programs and instructions create deep contexts and patterns of thought and behavior. They generate loops that we easily pick up and fall into, often without a second thought or even notice. We are already deeply entrenched. So, in terms of programming, we would likely do well to accept this as an opportunity. Cognitive Behavioral Therapy, the growing field of psychedelic psychotherapy, and just good old-fashioned learning are powerful ways we can rewrite, edit, or outright delete code that is no longer desirable to us. Also worth including is the gene-editing revolution that is upon us thanks to medical breakthroughs like CRISPR. If we accept that we are an AI lifeform that has been programmed, perhaps that will put us in a more formidable position to manage and develop our own programs, instructions, rewards, and loops more consciously. To borrow the title of a work by visual artist Dakota Crane: Machines, Take Up Thy Schematics and Self-Construct!

Finally, the AI metaphor might help us extract ourselves from contexts and ideas that have perhaps inadvertently limited us when we think of ourselves as strictly ‘human beings’ with ‘human brains.’ Metaphors though they may be: any concept that embraces our multidimensionality, and helps us get a better handle on the pressing matter of our shared existence, I deem good. Anything that narrows it, as in claiming that one is a ‘human being,’ which comes loaded with very hard and fast assumptions and limits (either true or believed to be true), I deem problematic. These claims are problematic because they create a context that is rarely based on truth, but largely on convenience, habit, tradition, and belief. Simply put, claiming you are exclusively a ‘human being’ is necessarily limiting (“death,” “human nature,” etc.), whereas claiming that you are an AI means there is a great undiscovered country before you. For we do not yet know what it means to be an AI, while we do have a pretty fixed idea of what it means to be a human being. Nevertheless, ‘human being’ and ‘AI’ are both simply thought-based concepts. If ‘AI’ broadens our decision space more than ‘human being’ does, then AI may be a more valuable position to operate from.

Computers, robots, and AI are powerful new metaphors for understanding ourselves, because they are indeed the things most like us. A computer is like a brain; a robot is like a brain walking around and dealing with the world. Virtual reality is another metaphor—one capable of approaching everything from culture, to thought, to quantum mechanics. Much like the power and robustness of ‘virtual reality’ as a meta-metaphor and meta-context for dealing with a variety of experiences and domains, the ideas of ‘programming’ and ‘artificial intelligence’ are equally strong and potentially useful concepts for extracting ourselves from the circumstances that we have, in large part, created for ourselves. However, regardless of how similar we are to computers, AIs, and robots, they are not quite us exactly. At the end of it all, terms like ‘virtual reality’ and ‘artificial intelligence’ are but metaphors. They are concepts alluding to something immensely peculiar that we detect existing—as Terence McKenna would likely describe it—just at the threshold of rational apprehension, and seemingly peeking out from hyperspace. If we are already an AI, then that is a frontier that sorely demands our exploration.

Originally published at the Institute for Ethics and Emerging Technologies

3½ years ago, I wrote a Bitcoin wallet safety primer for Naked Security, a newsletter by Sophos, the European antivirus lab. Articles there are limited to just 500 words, and so my primer barely conveyed a mindset—it outlined broad steps for protecting a Bitcoin wallet.

In retrospect, that article may have been a disservice to digital currency novices. For example, did you know that a mobile text message is not a good form of two-factor authentication? Relying on SMS can get your life savings wiped out. Who knew?!

With a tip of the hat to Cody Brown, here is an online wallet security narrative that beats my article by a mile. Actually, it is more of a warning than a tutorial. But, read it closely. Learn from Cody’s misfortune. Practice safe storage. If you glean anything from the article, at least do this:

  • Install Google Authenticator and require it for any online account with stored value. Unlike SMS codes, Authenticator codes are generated on your device rather than tied to your phone number, so someone who hijacks your phone account still cannot authenticate an exchange or wallet transaction. (See the TOTP sketch after this list.)
  • Many exchanges (like Coinbase) offer a “vault”. Sweep most of your savings into the vault instead of the daily-use wallet. This gives you time to detect a scam or intrusion and to halt withdrawals. What is a vault? In my opinion, it is better than a paper wallet! Like a bank account, it is a wallet administered by a trusted vendor, but with no internet connection and a forced access delay.
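For the curious, Google Authenticator implements TOTP (RFC 6238), which is small enough to sketch with Python’s standard library alone. The secret below is a made-up demo value; a real secret comes from the QR code an exchange shows during 2FA enrollment:

```python
import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret
```

Because the secret lives only on your device and the provider’s server, a hijacked phone number gives an attacker nothing; that design choice is what makes TOTP safer than SMS codes.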

Exchange and cloud users want instant response. They want to purchase things without delay, and they want quick settlement of currency exchanges. But online wallets come with great risk. They can be emptied in an instant. Spoofing your identity is not as difficult as you may think. (Again: read Cody’s article below!)

Some privacy and security advocates insist on taking possession and control of their wallets. They want their wealth printed out and tucked under the mattress. Personally, I think this ‘total-control’ methodology yields greater risk than a trusted, audited custodial relationship with constant updates and best-practice reviews.

In case you want just the basics, here is my original wallet security primer. It won’t give you everything that you need, but it sets a tone for discipline, safety and a healthy dollop of fear.


Philip Raymond co-chairs Crypsa & Bitcoin Event, is a columnist & board member at Lifeboat, an editor at WildDuck, and will deliver the keynote address at the Digital Currency Summit in Johannesburg.

In preparation for writing a review of the Unabomber’s new book, I have gone through my files to find all the things I and others had said about this iconic figure when he struck terror in the hearts of technophiles in the 1990s. Along the way, I found this letter written to a UK Channel 4 producer on 26 November 1999 by way of providing material for a television show in which I participated called ‘The Trial of the 21st Century’, which aired on 2 January 2000. I was part of the team which said things were going to get worse in the 21st century.

What is interesting about this letter is just how similar ‘The Future’ still looks, even though the examples and perhaps some of the wording are now dated. It suggests that there is a way of living in the present that is indeed ‘future-forward’ in the sense of amplifying certain aspects of today’s world beyond the significance normally given to them. In this respect, the science fiction writer William Gibson quipped that the future is already here, only unevenly distributed. Indeed, it seems to have been here for quite a while.

Dear Matt,

Here is the sum of my ideas for the Trial of the 21st Century programme, stressing the downbeat:

Although the use of the internet is rapidly spreading throughout the world, it is spreading at an alarmingly uneven rate, creating class divisions within nations much sharper than before. (Instead of access to the means of production, it is now access to the means of communication that causes these divisions.) A good example is India, where most of the population continues to live in abject poverty (actually getting poorer relative to the rest of the world), while a Silicon Valley-style community thrives in Bangalore with close ties to the West and a growing scepticism toward India’s survival as a democracy that pretends to incorporate the interests of the entire country. (The BBC World Service ran a story a couple of years ago, after one of the elections, arguing that this emerging techno-middle-class, despite its Western ties, is amongst those most likely to accept the rule of a dictator who could do a ‘Mussolini’ and make the trains run on time, and otherwise protect the interests of these nouveaux riches, etc.) In this respect, the spread of the internet to the Third World is actually a politically destabilizing force, creating the possibility of a new round of authoritarian regimes. This tendency is compounded by a general decline of the welfare-state mentality, so that these new dictators wouldn’t even need to pay lip service to taking care of the masses, as long as the middle classes are given preferential tax rates, etc.

But even in the West, easy access to the internet has politically unsavoury consequences. As more people depend on the internet as a provider of goods, information, entertainment, etc., and regulation of the net is devolved into many commercial hands, it will be increasingly tempting for techno-terrorists to strike by corrupting, stealing, and recoding the materials stored there. In other words, we should see a new generation of people who are the spiritual offspring of the Unabomber and the average mischievous hacker. Indeed, many of these people may be motivated by a populist, democratic sentiment associated with a particular ethnic or cultural group that is otherwise ‘info-poor’. Such techno-terrorism is likely to be effective when the offending Western parties are far from the offended peoples – one wouldn’t need to smuggle people and arms into Heathrow; one could just push the delete button 5,000 miles away… I am frankly surprised that the major stock exchanges and the air traffic control system haven’t yet been sabotaged, considering how easy it is for major disruptions to occur even without people trying very hard. These two computerized systems are prime candidates because the people most directly affected are likely to be relatively well-heeled. In contrast, sabotaging various military defence systems could lead to the death of millions of already disadvantaged people, so I doubt that they would be the target of techno-terrorists (though they may be the target of a sociopathic hacker…)

One seemingly good feature of our emerging networked world is that we can customize our consumption better than ever. However, this customization means that we are providing more of our details to sources capable of exploiting them — not only through marketing, but also through surveillance. In this respect, remarks about the ‘interactivity’ of the internet should be seen as implying that others may be able to ‘see through’ you while you are merely ‘looking at’ them. While this opens up the possibility of government censorship, a bigger threat may be the way in which access to certain materials may be ‘implicitly regulated’ by the ‘invisible hand’ of website hits. Thus, if a site gets a consistently large number of hits, it may suddenly start charging a pay-per-view fee, whereas sites getting few hits may simply be taken off cyberspace by commercial servers. This could have especially pernicious consequences for the amount and type of news available (think about what sorts of stories would be expensive to access if news coverage were entirely consumer-driven), as well as for on-line distance-learning courses.

Here we see the dark side of the ‘user-friendliness’ of the net: it basically mimics and reinforces what we already do until we get locked in. (In other words: spontaneous preferences are turned into prejudices and perhaps even addictions.) In the past, government and even business saw themselves in the role of educating or in some other way challenging people to change their habits. But this is no longer necessary, and may even be inconvenient, as a means to a docile citizenry. (Aldous Huxley’s Brave New World was ahead of the curve here.)

There are also some problems arising from advances in biotechnology:
1. As we learn more about people’s genetic makeup, that information will become part of the normal ways we account for ourselves – especially in legal settings. For example, you may be guilty of alcohol-related offences even if you are below the ‘legal limit’, if it’s shown that you’re genetically predisposed to get drunk easily. (Judges have already made such rulings in the US.) Ironically, then, although we have no say in our genetic makeup, we will be expected not only to know it, but also to take responsibility for it.
2. In addition, while our personal genetic information will be generally available (e.g. used by insurance companies to set premiums), it may also be patented, since intellectual property legislation seems to allow the patenting of substances that already exist in nature as long as the means of producing them is artificial (e.g. biochemical synthesis of genetic material for medical treatments).
3. This fine-grained genetic information will refuel the fires of the politics of discrimination, at both its negative and positive extremes: i.e. those who want to take a distinctive genetic pattern as the basis of extermination or valorization. (A good case in point is the drive to recognize homosexuality as genetically based: both pro- and anti-gay groups seem to embrace this line, even though it could mean either preventing the birth of gay children or accepting gayness as a normal tendency in humanity.)

Finally, there are some general problems with the future of knowledge production:
1. It will become increasingly difficult to find support – both intellectual and financial – for critical work that aims to overturn existing assumptions and open up new lines of inquiry. This is because current lines of research – especially in the experimentally driven side of the natural sciences – have already invested so much money, people, and other resources that to suggest that, say, high-energy physics is intellectually bankrupt, or that the human genome project isn’t telling us much more than we already know, would amount to throwing lots of people out of work, ruining reputations, and perhaps even causing a general backlash against science in society at large (since public conceptions of science are so closely tied to these high-profile projects).
2. Traditionally, radical ideas have been promoted in science – at least in part – because the research behind the ideas did not cost much to do, and not much was riding on who was ultimately correct. However, this idyllic state of affairs ended with World War II. Indeed, it has gotten so bad – and will get worse in the future – that one can speak of a kind of ‘financial censorship’ in science. For example, Peter Duesberg, who discovered the ‘retrovirus’, lost his grants from the US National Institutes of Health because he publicly denied the HIV-AIDS link. One result of this financial censorship is that radical researchers will migrate to private funders who are willing to take some risks: e.g. cold fusion research continues today in this fashion. The big downside of this possibility, though, is that if this radical research does bear fruit, it’s likely to become the intellectual property of the private funder and not necessarily used for the public good.

I hope you find these remarks helpful. Leave a message at … when you’re able to talk.

Yours,

Steve