Lifeboat Foundation InternetShield


By Kemal Akman, Jaan Tallinn, Paul Werbos, and other members of the Lifeboat Foundation Scientific Advisory Board. This is an ongoing program so you may submit suggestions to programs@lifeboat.com.
 
 

Overview

As the Internet grows in importance, an attack on it could cause physical as well as informational damage. An attack today on hospital systems or electric utilities could lead to deaths. In the future, an attack could be used to alter the output produced by nanofactories worldwide, leading to mass casualties. This program looks for solutions to prevent such attacks, or at least reduce the damage caused by them.
 


 

Types of Attacks

There are six types of attacks. They are:
 

1. Distributed Attack

A distributed attack in practice means commandeering botnets of compromised “zombie” machines and directing them at a target. This is the most serious threat that we are currently aware of, and it needs to be addressed.
 
Besides hardening the weakest links (making common operating systems harder to recruit as zombies), the strategy is early detection and blocking at the ISP or backbone tier, with key routers worldwide exchanging information about suspicious events. Because of the financial stakes involved, there is already significant research and practical cooperation in this area (much of it non-public).
 
Important papers on this topic are Inferring Internet denial-of-service activity by David Moore, Geoffrey M. Voelker and Stefan Savage, and Implementing Pushback: Router-Based Defense Against DDoS Attacks by John Ioannidis and Steven M. Bellovin.
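As a rough illustration of the early-detection idea described above, here is a minimal sketch in Python. The sliding-window length, the packet threshold, and the report_suspect() hook are all invented for illustration; real ISP- and router-level detection uses far richer signals and shares them between operators.

```python
# Minimal sketch of a per-source-prefix rate heuristic for DDoS detection.
# Window length, threshold, and report_suspect() are illustrative inventions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
PACKETS_PER_WINDOW_LIMIT = 5000  # arbitrary illustrative threshold

# (source /24 prefix, destination IP) -> deque of packet timestamps
_counters = defaultdict(deque)

def prefix_of(ip):
    """Collapse an IPv4 address to its /24 prefix, e.g. 203.0.113.7 -> 203.0.113.0/24."""
    return ".".join(ip.split(".")[:3]) + ".0/24"

def observe_packet(src_ip, dst_ip, now=None):
    """Record one packet; return True if the (prefix, destination) pair looks suspicious."""
    now = time.time() if now is None else now
    key = (prefix_of(src_ip), dst_ip)
    window = _counters[key]
    window.append(now)
    # Drop timestamps that fell out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > PACKETS_PER_WINDOW_LIMIT:
        report_suspect(key, len(window))
        return True
    return False

def report_suspect(key, count):
    # In a cooperative scheme, this is where a router or ISP would notify its peers.
    print(f"suspicious traffic: {key[0]} -> {key[1]} ({count} packets/{WINDOW_SECONDS}s)")
```

In a cooperative deployment, the interesting design question is not the counter itself but what gets shared: aggregated per-prefix reports can be exchanged between ISPs without exposing individual customers' traffic.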
 
 

2. Phishing Attacks

Phishing attacks are a very real and growing problem, causing billions of dollars in damage each year. These attacks, along with worm and virus threats, can be recognized early and partially mitigated or blocked through global cooperation that makes heuristic comparison and detection of suspicious activity possible.
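As a toy illustration of the kind of heuristic comparison mentioned here, the sketch below flags hostnames that closely resemble, but do not match, well-known domains. The brand list and similarity threshold are invented; real systems combine many more signals (URL reputation, page content, shared blocklists, user reports).

```python
# Toy phishing heuristic: flag hostnames that look like, but are not,
# well-known domains. The domain list and threshold are illustrative only.
from difflib import SequenceMatcher

KNOWN_DOMAINS = ["paypal.com", "google.com", "amazon.com"]  # illustrative list
SIMILARITY_THRESHOLD = 0.85  # invented threshold

def looks_like_phish(hostname):
    host = hostname.lower().rstrip(".")
    for legit in KNOWN_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return False  # the real domain, or a subdomain of it
        similarity = SequenceMatcher(None, host, legit).ratio()
        if similarity >= SIMILARITY_THRESHOLD:
            return True   # e.g. "paypa1.com" scores close to "paypal.com"
    return False

print(looks_like_phish("paypa1.com"))   # True
print(looks_like_phish("paypal.com"))   # False
```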
 
 

3. Attack Against Physical Infrastructure

An attack against the physical infrastructure of the net. Because of its distributed nature, the Internet is rather immune to physical attacks, so this type of attack is not a focus of the InternetShield.
 
Note that the idea of secure DNS could make the Internet *less* secure if it were controlled by just one government. The key infrastructure of the net must be spread out over many countries and, in the long term, should be spread further than its current base in North America and Europe (where most of the root servers are).
 
 

4. Specialized One-On-One Attack

A “specialized one-on-one” attack is directed against a specific node, for example a cracker infiltrating the intranet of a company or government institution. Addressing such specialized attacks with a generic security system is rather ineffective, because specialized attacks can usually afford tailored and/or “offline” vectors such as social engineering. This would be a difficult attack for the InternetShield to handle.
 
 

5. One-To-Many Attack

A “one-to-many” attack is an attack with many targets but with a small number of fixed sources that can be localized.
 
This problem will become increasingly significant as software becomes more intelligent and the strategies used to exploit targets become faster and more efficient. We need better detection of these attacks, and better defenses against worms that fool users with social engineering far more sophisticated than “i love you” emails.
 
The better the strategy of a piece of exploit software, the faster and more reliably it can exploit other hosts, or their users, before the attacking host is shut down. This one-to-many efficiency directly dictates the virulence of an Internet worm, for example. Imagine an artificial intelligence developed a few years from now that specializes in one-to-many attacks and is self-improving within specialized limits, and you have imagined a big problem.
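One simple observable of a one-to-many attack is fan-out: a single source contacting an unusually large number of distinct destinations in a short time behaves like a scanning worm. The sketch below illustrates that heuristic; the window and fan-out limit are invented values, and real detectors would also look at failure rates, port patterns, and payload similarity.

```python
# Sketch of a fan-out heuristic for one-to-many attacks. Thresholds and
# function names are illustrative only.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
DISTINCT_DEST_LIMIT = 100  # arbitrary illustrative fan-out limit

# source IP -> {destination IP: time last seen}
_contacts = defaultdict(dict)

def observe_connection(src_ip, dst_ip, now=None):
    """Record a connection attempt; return True if src_ip shows worm-like fan-out."""
    now = time.time() if now is None else now
    peers = _contacts[src_ip]
    peers[dst_ip] = now
    # Forget destinations last seen outside the window.
    for dst in [d for d, t in peers.items() if now - t > WINDOW_SECONDS]:
        del peers[dst]
    return len(peers) > DISTINCT_DEST_LIMIT
```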
 
 

6. Attack Against Software Components of the Infrastructure

An attack against the software components of the infrastructure, such as router firmware or DNS implementations. If, in the current context, we consider routers, DNS servers, and similar systems as network nodes, we do not really need a special case for them.
 
Note that common Internet software and operating systems should implement user-friendly ways of switching to the existing alternative DNS roots.
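A minimal sketch of what such a switch might look like on a Unix-like host is shown below. The resolver addresses are RFC 5737 documentation placeholders, not real alternative-root servers, and simply rewriting /etc/resolv.conf is simplistic (many systems regenerate that file); a real tool would ship vetted resolver lists and integrate with each platform's network configuration.

```python
# Hypothetical helper for switching between DNS resolver sets.
# Addresses are documentation placeholders; names are invented.
RESOLVER_SETS = {
    "default":  ["192.0.2.1", "192.0.2.2"],
    "alt-root": ["198.51.100.1", "198.51.100.2"],
}

def write_resolv_conf(profile, path="/etc/resolv.conf"):
    """Point the host at the chosen resolver set (requires root)."""
    servers = RESOLVER_SETS[profile]
    lines = [f"nameserver {ip}" for ip in servers]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example: write_resolv_conf("alt-root")
```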
 
 

Social Solutions

The most important part of the InternetShield is organizational: bringing together the interested parties that control various facets of the Internet architecture to share information and respond quickly to reports of malicious activity emanating from their networks. The reason we are in such a bad state today is that too many countries and networks simply take no action when malicious activity is reported to them. Worse still, some are actively complicit in the activity but operate behind plausible deniability and a surface appearance of cooperation, all the while sharing reports with the bad guys.
 
South Korea has the right approach to enforcing shutdowns of malicious activity on the networks it controls: if the South Korean CERT, with proof of malicious activity, tells an ISP to kill an IP address or hostname, or tells a registrar in the country to kill a hostname, the ISP or registrar must comply by law. Compare that to CERT bodies in other countries, which are powerless by comparison and act only as information conduits.
 
We believe that each country in the world that has a connection to the Internet should sign a treaty that mandates it will:
  1. Create a CERT body to deal with reports of malicious activity. This body should be mostly composed of independent professional representatives, especially of the major ISPs, to assure neutrality, competence and efficiency in dealing with these issues.
     
    Reporting of incidents to the responsible CERT should become legally binding. A less bureaucratic solution would be that IT-related insurance companies demand reporting of all incidents in their contracts to mitigate the overall impact and frequency of cyberattacks. (And CERT would require that you have IT insurance.)
  2. Create a separate law-enforcement branch that will follow up on repeated criminal Internet activity by actors within their physical borders (regardless of provable monetary losses).
     
    This branch will need to differentiate between individual financial losses (traditional fraud against companies, persons, copyright) and the more serious: attacking the Internet infrastructure (DDoS, mass phishing, mass intrusions, worms, viruses, attacks against ISPs and real-world infrastructure such as CCTV, hospitals, and power grids).
  3. Require cooperation and information sharing between these bodies inside the country and also with other countries’ CERT/LE bodies.
  4. Give these bodies the authority and power to require ISPs/registrars within their borders to remove malicious sites and hostnames upon a report by the CERT or face monetary penalties for not doing so.
Care must also be taken to limit the scope of the responsibilities of the CERT/LE bodies to widespread and massively impacting malicious activity; it would be unfortunate if all their resources went to appeasing limited third-party interests such as copyright lobbies, or to being called on by countries or companies to help suppress free speech.
 
 

Technical Solutions

Our seven recommended technical solutions are:
 
 

1. Secure Operating Systems

Routers and other core infrastructure are a weak point; they are often still prone to attack. We need microkernel- and capability-based operating systems, better privilege separation (capability-based, not only user-based), and zero-configuration, secure-by-default operating systems. It is still possible, and very common, to use weak or default passwords, even on critical routing and network equipment.
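To illustrate what capability-based privilege separation means in practice, here is a minimal object-capability sketch. The class and function names are invented for illustration, and real capability systems (seL4, Capsicum, and similar) enforce this in the kernel rather than in application code; the point is only that a component receives the narrow authority it needs instead of ambient authority over the whole machine.

```python
# Minimal object-capability sketch: a component gets a handle that can only
# read files under one directory, rather than ambient filesystem authority.
import os

class ReadOnlyDirCap:
    """A capability granting read-only access to files under one directory."""
    def __init__(self, root):
        self._root = os.path.realpath(root)

    def read(self, name):
        path = os.path.realpath(os.path.join(self._root, name))
        if not path.startswith(self._root + os.sep):
            raise PermissionError("path escapes the granted directory")
        with open(path, "rb") as f:
            return f.read()

def process_logs(logs):
    # This function can only do what the capability allows: read files under
    # the granted directory. It cannot write files or open arbitrary paths
    # through this handle.
    return len(logs.read("access.log"))

# Usage: grant the narrowest capability needed.
# total = process_logs(ReadOnlyDirCap("/var/log/myapp"))
```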
 
We all know there are important, fuzzy aspects of cybersecurity that need to be faced up to: insider threats, physical attacks, possible “trap doors” in hardware produced outside the US, Internet governance, and advanced computer systems able to crack public-key encryption, particularly for wireless communication.
 
The first (and maybe last) large time-sharing system based on such principles was the Multics operating system. This operating system (owned at times by GE and Honeywell, and developed at MIT together with Bell Labs) was based on far more rigorous mathematical understanding and clearer foundations than anything before or since. It was cleared to run jobs at different levels of security on the same computer, based in part on the mathematics and in part on the results of a tiger team which spent a year trying to crack it with full access to all the code. It was the core computer of the Pentagon’s World-Wide Military Command and Control System for many years, but sheer horsepower issues and the vagaries of marketing eventually shut it down.
 
The core principle here was “ring bracket design”, as well as the use of the same intelligible language (PL/I) for everything, including the PL/I compiler itself.
 
People at places like Berkeley have studied how to specify MACHINE-VERIFIABLE coding rules that ensure EXACT compliance with ring-bracket-style rules, making it simply impossible for anyone to directly take over an operating system and exceed their proper authority on a computer.
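A simplified model of the ring-bracket rule may help make the idea concrete. In the textbook description, each segment carries three ring numbers, and a process running in ring r may write the segment only in the innermost bracket, read it in a wider bracket, and call into it only through a designated gate. The sketch below models that check; it is a simplification for illustration, not Multics itself.

```python
# Simplified model of Multics-style ring brackets (b1 <= b2 <= b3):
# a process in ring r may write if r <= b1, read if r <= b2, and call
# through a gate if b2 < r <= b3. Illustrative only.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    b1: int  # write bracket
    b2: int  # read/execute bracket
    b3: int  # call (gate) bracket

def can_write(ring, seg):
    return ring <= seg.b1

def can_read(ring, seg):
    return ring <= seg.b2

def can_call_gate(ring, seg):
    return seg.b2 < ring <= seg.b3

kernel = Segment("supervisor", b1=0, b2=0, b3=5)
print(can_write(4, kernel))      # False: user ring cannot modify the supervisor
print(can_call_gate(4, kernel))  # True: user ring may enter only through a gate
```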
 
Governments do have the power to insist that they will use no operating system whose source code does not meet certain publicly announced standards. There are sociological barriers involved, but also a growing need to protect critical information and a growing problem of critical infrastructures (like electric power) coming under attack.
 
 

2. Secure Hardware

A major security hole is the infamous “buffer overflow”. This should be addressed by fixing the age-old flaw in processor design in which subroutine return addresses share space with local data on the stack and thus can be “inadvertently” overwritten (the data-execution-prevention feature in modern processors is an attempted fix).
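One hardware-level remedy is to keep return addresses on a separate, protected “shadow stack” that ordinary data writes cannot reach. The sketch below models that idea in Python purely as a conceptual illustration (the class and addresses are invented); real implementations live in silicon or in compilers, not in application code.

```python
# Conceptual model of a shadow stack: return addresses live in a stack that
# ordinary data writes cannot touch, so overflowing a local buffer cannot
# redirect control flow. A model of the idea only, not real hardware.
class ShadowStackMachine:
    def __init__(self):
        self.data_stack = []     # locals and buffers, writable by the program
        self.return_stack = []   # return addresses, touched only by call/ret

    def call(self, return_address, locals_frame):
        self.return_stack.append(return_address)
        self.data_stack.append(locals_frame)

    def overflow_local_buffer(self, junk):
        # A buggy write can clobber locals in the data stack...
        self.data_stack[-1][:] = junk

    def ret(self):
        self.data_stack.pop()
        # ...but the return address comes from the protected stack, unchanged.
        return self.return_stack.pop()

m = ShadowStackMachine()
m.call(return_address=0x4010, locals_frame=[0] * 8)
m.overflow_local_buffer([0xAAAA] * 64)
print(hex(m.ret()))  # 0x4010: control flow is unaffected by the overflow
```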
 
 

3. Variety of Operating Systems

We recommend an increase in the diversity of network nodes. The bigger the difference between nodes, the more limited the scope of a single attack can be. Actions such as reducing the dominance of popular OSes and applications, virtualization, and sandboxing (i.e., “splitting the nodes”) qualify as countermeasures.
 
 

4. Honeypots

We recommend the development and deployment of “honeypots” that run popular OS/application configurations in virtual machines and are constantly monitored for signs of infection.
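The full proposal involves instrumented virtual machines running popular configurations; as a much smaller illustration of the monitoring side, the sketch below is a low-interaction listener that simply logs every connection attempt to an otherwise unused port. The port number and log format are invented for illustration.

```python
# Minimal low-interaction honeypot sketch: listen on an unused port and log
# every connection attempt. Port and output format are illustrative only.
import socket
import datetime

def run_honeypot(host="0.0.0.0", port=2323):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
                print(f"{stamp} connection attempt from {addr[0]}:{addr[1]}")

# run_honeypot()  # runs forever; point monitoring/alerting at its output
```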
 
 

5. Secure Email

Since email is a vector for attacks, we recommend replacing email (as we know it) with a communication medium that a) authenticates conversation participants, and b) makes it very difficult (or even impossible) to send unsolicited one-to-many messages.
 
The key item holding this development back is a lack of standardization. Secure email developers should be urged to cooperate, and ISO, the IETF, and the IEEE should be supported in standardizing these measures and in encouraging major mail server and client vendors to implement the standards.
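As a small sketch of property a), sender authentication, the example below signs and verifies a message with Ed25519 using the third-party "cryptography" package (this mirrors the general approach of signature schemes such as DKIM, but is not any particular standard). Property b), discouraging unsolicited one-to-many messages, is a protocol and policy question and is not shown.

```python
# Sketch of sender authentication with Ed25519 signatures.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Sender: generate a key pair once, publish the public key, and sign
# every outgoing message.
sender_key = Ed25519PrivateKey.generate()
message = b"From: alice@example.org\r\nSubject: hello\r\n\r\nHi Bob."
signature = sender_key.sign(message)

# Receiver: verify the signature against the sender's published public key.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, message)
    print("message is authentic")
except InvalidSignature:
    print("message was forged or altered")
```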
 
 

6. Untrusted Executables

The threat of untrusted executables can be addressed by disabling “one-click” execution and/or by making sure that untrusted executables are placed in “quarantine”.
 
Additionally, a trusted database of system files and trusted software packages should be maintained by the operating system or by installed security software. The database should include the fingerprint/signature of each program.
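A minimal sketch of the fingerprint check is shown below: compute a file's SHA-256 digest and compare it against a local database of known-good hashes. The database format and file names are invented for illustration; real systems also verify vendor signatures and must protect the database itself from tampering.

```python
# Sketch of a trusted-fingerprint check against a local hash database.
# Database path and format are illustrative only.
import hashlib
import json

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(path, db_path="trusted_hashes.json"):
    with open(db_path) as f:
        trusted = set(json.load(f))  # a JSON list of hex digests
    return sha256_of(path) in trusted

# Example policy: quarantine (refuse to run) anything whose digest is unknown.
# if not is_trusted("/tmp/downloaded_installer"): quarantine_it()
```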
 
 

7. Clean Slate

Ideally, the Internet would be replaced with a clean-slate design that meets today’s needs.
 
Since the Internet took its first baby steps on September 2, 1969, the needs and expectations placed on it have changed greatly. The Internet “works well in many situations but was designed for completely different assumptions,” said Dipankar Raychaudhuri, a Rutgers University professor overseeing three clean-slate projects. “It’s sort of a miracle that it continues to work well today.”
 
The Internet’s early architects built the system on the principle of trust. Researchers largely knew one another, so they kept the shared network open and flexible, but spammers and hackers arrived and were able to roam freely because the Internet doesn’t have built-in mechanisms for knowing with certainty who sent what.
 
Even if the original designers had the benefit of hindsight, they would have had problems incorporating features that are needed today. Computers, for instance, were much slower then, possibly too weak for the computations needed for robust authentication.
 
A new network could run parallel with the current Internet and eventually replace it, or perhaps aspects of the research could go into a major overhaul of the existing architecture.
 
 

Conclusion

Authorities are underestimating the damage that could be caused by an Internet-based attack. The time to develop solutions is now.
 
 
