Following the clickable table of contents, these columns are given in REVERSE CHRONOLOGICAL ORDER.
The text here is not necessarily identical to the printed versions, in which some ACM editing has taken place (for example, because of space limitations). Of particular recent interest, items on computer-related voting can be found in the columns of January 2001, November 2000, and June 2000, as well as in the earlier columns of November 1993, November 1992, and November 1990, appended below in the continuation of the menu. Other columns prior to December 1997 can be added on request.
========================================================
========================================================
Inside Risks 143, CACM 45, 5, May 2002
Scientists and technologists create a variety of impressions in the eyes of society at large, some positive and others negative. In the latter category is the perception (often clearly a mischaracterization) that many individuals in these occupations are not involved with society in positive ways, making them easy to target for many of society's ills.
It's not difficult to see how this simplistic stereotype developed. We technically-oriented folks can easily become so focused on the science and machines that we willingly leave most aspects of the deployment and use of our labors to others, who often don't solicit our advice -- or who may even actively disdain it.
In the broad scope of technology over the centuries, there have been many innovators who lived to have second thoughts about their creations. From the Gatling gun to nuclear bombs and DNA science, the complex nature of the real world can alter inventions and systems in ways that their creators might never have imagined.
It of course would be unrealistic and unwise for us to expect or receive total control over the ways in which society uses the systems we place into its collective hands. However, it is also unreasonable for the technical and scientific minds behind these systems to take passive and detached roles in the decision-making processes relating to the uses of their works.
Within the computer science and software arenas, an array of current issues would be well served by our own direct and sustained inputs. The continuing controversy over the already-enacted Digital Millennium Copyright Act (DMCA) is one obvious example. Even more ominously, the newly proposed Consumer Broadband and Digital Television Promotion Act (CBDTPA), formerly known as the Security Systems Standards and Certification Act (SSSCA), is a draconian measure; it would greatly impact the ways in which our technologies will be exploited, controlled, and in some cases severely hobbled. We never planned for digital systems to create a war among the entertainment industry, the computer industry, and consumers, but in many ways that's what we're now seeing.
Controversies are raging over a vast range of Internet-related issues, from the nuts and bolts of technology to the influence of politics. Concerns about ICANN (the Internet Corporation for Assigned Names and Numbers) -- the ersatz overseer of the Net -- have been rising to a fever pitch.
Throughout all of these areas and many more, critical decisions relating to technology are frequently being made by politicians, corporate executives, and others with limited technical understanding -- frequently without any meaningful technological inputs other than those from paid lobbyists with their own selfish agendas.
The technical and scientific communities do have associations and other groups ostensibly representing their points of view to government and others. But all too often the pronouncements of such groups seem timid and not particularly ``street-savvy'' in their approaches. Fears are often voiced about sounding too un-academic or expressing viewpoints on ethical matters rather than on technology or science itself, even when there is a clear interrelationship between these elements. Meanwhile, the lobbyists, who have the financial resources and what passes for a straight-talking style, have the ears of government firmly at their disposal.
Computers and related digital technologies have become underpinnings of our modern world, and in many ways are no less fundamental than electricity or plumbing. However, it can be devilishly difficult to explain their complex effects clearly and convincingly to the powers-that-be and the world at large.
As individuals, most of us care deeply about many of these issues -- but that is not enough. We must begin taking greater responsibility for the ways in which the fruits of our labors are used. We need to take on significantly more activist roles, and should accept no less from the professional associations and other groups that represent us. If we do not take these steps, we will have ceded any rights to complain.
Lauren Weinstein (lauren@privacy.org) is co-founder of People For Internet Responsibility (www.pfir.org). He is moderator of the Privacy Forum (http://www.vortex.com) and a member of the ACM Committee on Computers and Public Policy.
========================================================
Inside Risks 142, CACM 45, 4, April 2002
Those of you concerned with privacy issues and identity theft will be familiar with the concept of dumpster diving. Trash often reveals the dealings of an individual or a corporation. The risks of revealing private information through the trash have led to a boom in the sale of paper shredders (and awareness of the risks of reassembling shredded documents). However, how many of us take the same diligent steps with our digital information?
The recovery of digital documents in the Enron case and the use of e-mail in the Microsoft anti-trust case have brought these concerns to the fore. For example, we are all more aware of the risks inherent in the efficient (``lazy'') method of deleting files used by modern operating systems, where files are `forgotten about' rather than actually removed from the drive.
There will certainly be an increase in the sales of `wiper' software following this increase in awareness, but that's not the end of the story. Overwriting data merely raises the bar on the sophistication required of the forensic examiner. To ensure reliable data storage, the tracks on hard-drive platters are wider than the influence of the heads, with a gap (albeit small) between tracks. Thus, even after wiper software has been applied, there may still be ghosts of the original data, just partially obscured.
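To make the idea concrete, here is a minimal sketch (in Python, not drawn from any particular wiper product) of the overwrite-before-delete step. As noted above, it only raises the bar: journaling and copy-on-write file systems, remapped sectors, and residual traces on the platter can all preserve ghosts of the data.

import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Best-effort overwrite of a file's contents before unlinking it.

    Caveat: this only raises the bar for a forensic examiner;
    it is not guaranteed erasure.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)      # write in 1 MB chunks
                f.write(secrets.token_bytes(chunk))  # random data, not zeros
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())                     # push the data to the device
    os.remove(path)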
So, what more can we do? Clearly, we are in a tradeoff between the cost to the user and the cost to the investigator. At the far extreme, we could take a hammer to the drive and melt down the resulting fragments, but this is infeasible without a large budget for disks.
One could booby-trap the computer, such that if a certain action isn't taken at boot time, the disk is harmed in some way. Forensics investigators are mindful of this, however, and take care to examine disks in a manner that does not tamper with the evidence. If we're open to custom drives, we could push the booby-trap into the drive hardware, causing it to fail when hooked up to investigative hardware (or, more cunningly, produce a false image of a file-system containing merely innocent data).
Another approach is to consider file recovery as a fait accompli and ensure that the recovered data is not available as evidence. Encryption clearly has a role to play here. An encrypting file-system built into your operating system can be helpful, but may provide only a false sense of security -- unless you have adequate assurance of its cryptanalytic strength (which is likely to be weakened if there is common structure to your data) and the strength of the underlying operating systems. Per-file encryption with a plurality of keys might help, but that raises the question of key management and key storage.
You might consider possible key escrow, backdoors, and poorly implemented cryptographic software to be below your paranoia threshold. Another useful step can be secret sharing (A. Shamir, "How to Share a Secret", Comm. ACM 22, 11, 612--613, November 1979). Spread your data in fragments around the network such that k of the n fragments must be co-located to decipher the original file. In a carefully designed system, any k-1 fragments yield no useful insight into the contents of the file; k and n can be tuned according to the paranoia required, including the placement of no more than k-1 fragments within the jurisdiction of the investigating agency.
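As a toy illustration of the secret-sharing idea (a sketch over a prime field, not production code), the following Python splits an integer secret into n shares so that any k of them reconstruct it and any k-1 reveal nothing:

import secrets

PRIME = 2**127 - 1   # a Mersenne prime large enough for a 16-byte secret

def split(secret: int, k: int, n: int):
    """Return n points on a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret (needs Python 3.8+)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice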
Clearly, there are a number of steps we can take to push the evidence as far as possible beyond the reach of those who might use it to incriminate us. But one question not often raised in this context is: why should we bother? Given the lack of strong authentication in most computing systems, it may not be provable beyond reasonable doubt that the files in question are even yours.
Furthermore, there are many risks of trusting recovered digital evidence, given the ease with which digital documents can be fraudulently created, modified, or accidentally altered, or their time stamps manipulated. Corroboration by independent sources of evidence is usually required to establish a case, even for non-digital evidence, although when all of these corroborating sources of evidence are digital, the risks remain. See, for example, discussion of the potential holes in evidence in the case of the Rome Labs Intrusion in 1994 (Peter Sommer, "Intrusion Detection Systems as Evidence", BCS Legal Affairs Committee, March 2000. http://www.bcs.org.uk/lac/ids.htm).
So, things may not be what they seem. Supposedly deleted information may still be retrievable -- from backup files, as residues on physical media, or from decrypted forms that over time become decryptable by brute force, as computing power increases. Supposedly reliable evidentiary information may have been forged, tampered, or bypassed altogether. Be careful what you believe.
David Stringer-Calvert is a Senior Project Manager in the Computer Science Lab at SRI International.
======================================================================
Inside Risks 141, CACM 45, 3, March 2002
For over half a century we have classified research on a scale from basic to applied. Basic research is a quest for fundamental understanding without regard to potential utility. Applied research is technology development that solves near-term problems. These two models have different diffusion times from research result to practice -- often 20-50 years for basic research and 2-3 years for applied. Because the return on investment of basic research is so far in the future, the Federal government is the main sponsor and university faculty are the main investigators.
For over a generation we have classified software development on a scale from technology-centered to human-centered. Technology-centered work is focused on advancing software technology with new functions, algorithms, protocols, and efficiencies. Human-centered work is focused on making software more useful to those paying for or using it.
These two one-dimensional (linear) scales create false dichotomies, obscure fundamental issues, and encourage tensions that hurt research and software development.
Most of our academic departments place high value on basic and technology-centered work. Faculty who do applied or human-centered projects often find themselves disadvantaged for tenure and promotion and occasionally the object of scorn. Most eventually toe the line or leave the academy. (See National Research Council, Academic careers for experimental computer scientists, NRC Press, 1994.) The resulting bias prevents us from valuing and teaching the full range of vital software development topics. Many of the risks discussed in this forum over the years will never be fully addressed as long as this bias persists.
In 1997, Donald Stokes (Pasteur's Quadrant: Basic Science and
Technological Innovation, Brookings Institution, 1997 http://www.brook.edu/) put the research issue
into a new light. He traced the conceptual problem back to Vannevar Bush, who in
1945 coined the term basic research, characterized it as the pacemaker
of technological progress, and claimed that in mixed settings applied research
will eventually drive out basic. Bush thus put the goals of understanding and
use into opposition, a belief that is at odds with the actual experience of
science. Stokes proposes that we examine research in two dimensions, not
one:
* Inspired by considerations of use?
* Quest for fundamental understanding?
He names the (yes,yes) quadrant Pasteur's, the (no,yes) quadrant Bohr's, and the (yes,no) quadrant Edison's. He did not name the (no,no) quadrant, although some will recognize this quadrant as the home of much junk science.
Those who favor applied research call for greater emphasis on Pasteur's+Edison's quadrants, and those who favor basic, on Bohr's+Pasteur's. In fact, most of the basic-versus-applied protagonists will, if shown the diagram of four quadrants, agree that these three correspond to vital sectors of research, none of which is inherently superior to the others.
A similar model can be applied to software development. Here the common
belief is that the attention of the designer can either be focused on the
technology itself or on the user, or somewhere in between. Michael Dertouzos
(The Unfinished Revolution, Harper Collins, 2001) recently documented
15 chronic design flaws in software and said that they will be eliminated only
when we learn human-centered design, design that seeks software that serves
people and does not debase or subvert them. Dertouzos called for his fellow
academics to teach human-centered design and not to scorn software developers
who interact closely with their customers. Some critics incorrectly concluded
that he therefore also supported reducing attention to the world of software
technology. However, we can view software development in two dimensions, rather
than one:
* Inspired by considerations of utility and value?
* Seeks advancement of software technology?
Three of these quadrants correspond to important software development sectors:
(yes,yes) -- projects to create new technologies in close collaboration with their customers (examples: MIT Multics, AT&T Unix, Xerox PARC Alto, IBM System R, World Wide Web).
(yes,no) -- projects to employ existing knowledge to solve human problems (examples: Harlan Mills' work, CHI, much application development).
(no,yes) -- projects to create new software technologies for their intrinsic interest (examples: many university research projects)
The final (no,no) quadrant is the home of many projects purely for the amusement of the developer. Many software developers will agree that the first three quadrants are all important and that none is inherently superior to the others. Perhaps this two-dimensional interpretation will help unstick our thinking about software development.
Peter Denning (pjd@cs.gmu.edu) has contributed to ACM for many years in many capacities. Jim Horning (horning@acm.org) has been involved in computing research for more than 30 years, and is presently at InterTrust Technologies.
========================================================
Inside Risks 140, CACM 45, 2, February 2002
[NOTE: Choose an appropriate Cyrillic character set for your browser for this column only, if your browser does not recognize the Russian for gazeta.ru and the Russian c and o in microsoft.]
Old-timers remember slashes (/) through zeros [or through the letter O where there was no difference] in program listings to avoid confusing them with the letter O. This practice has long since been made obsolete by advances in editing tools and font differentiation. However, the underlying problem of character resemblance remains, and has now emerged as a security problem.
Let us begin with a risks case. On April 7, 2000, an anonymous site published a bogus story intimating that the company PairGain Technologies (NASDAQ:PAIR) was about to be acquired for approximately twice its market value. The site employed the look and feel of the Bloomberg news service, and thus appeared quite authentic to unsuspecting users. A message containing a link to the story was simultaneously posted to the Yahoo message board dedicated to PairGain. The link referred to the phony site by its numerical IP address rather than by name, and thus obscured its true identity. Many readers were convinced by the Bloomberg look and feel, and accepted the story at face value despite its suspicious address. As a result, PairGain stock first jumped 31%, and then fell drastically, incurring severe losses to investors. A variant of this hoax might have used a domain named BL00MBERG.com, with zeros replacing os. However, forthcoming Internet technologies have the potential to make such attacks much more elusive and devastating.
A new initiative, promoted by a number of Internet standards bodies including the IETF and IANA, allows one to register domain names in national alphabets. This way, for example, the Russian news site gazeta.ru (gazeta means newspaper in Russian) might register a more appealing name in Russian ("газета.ру"). The initiative caters to the genuine needs of non-English-speaking Internet users, who currently find it difficult to access Web sites otherwise. Several alternative implementations are currently being considered, and we can expect the standardization process to be completed soon.
The benefits of this initiative are indisputable. Yet the very idea of such an infrastructure is compromised by the peculiarities of world alphabets. Revisiting our newspaper example, one can observe that Russian letters a,e,p,y are indistinguishable in writing from their English counterparts. Some of the letters (such as a) are close etymologically, while others look similar by sheer coincidence. (As it happens, other Cyrillic languages may cause similar collisions.)
With the proposed infrastructure in place, numerous English domain names may be homographed -- i.e., maliciously misspelled by substitution of non-Latin letters. For example, the Bloomberg attack could have been crafted much more skillfully, by registering a domain name bloomberg.com, where the letters o and e have been faked with Russian substitutes. Without adequate safety mechanisms, this scheme can easily mislead even the most cautious reader.
Sounds frightening? Here is something scarier.
One day John Hacker similarly imitates the name of your bank's Web site. He then uses the newly registered domain to install an eavesdropping proxy, which transparently routes all the incoming traffic to the real site. To make the bank's customers go through his site, John H. hacks several prominent portals which link to the bank, substituting the bogus address for the original one. And now John H. has access to unending streams of passwords to bank accounts. Note that this plot can be in service for years, while customers unfortunate enough to have bookmarked the new link might use it forever.
Several approaches can be employed to guard against this kind of attack. The simplest fix would indiscriminately prohibit domain names that mix letters from different alphabets, but this would block genuinely useful names like CNNenEspanol.com with a tilde over the last n. More practically, the browser could highlight international letters present in domain names with a distinct color, although many users may find this technique overly intrusive. A more user-friendly browser might highlight only truly suspicious names, such as ones that mix letters within a single word. For additional security, the browser could use a map of identical-looking letters to search for collisions between the requested domain and similarly written registered ones.
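A rough sketch of the ``highlight suspicious names'' idea follows (in Python, using a crude script test derived from Unicode character names); it flags any domain label that mixes alphabets, which would catch the Latin/Cyrillic substitutions described above:

import unicodedata

def scripts_in(label: str) -> set:
    """Coarse set of scripts in one domain label, e.g. {'LATIN', 'CYRILLIC'}."""
    scripts = set()
    for ch in label:
        if ch.isalpha():
            # "CYRILLIC SMALL LETTER O" -> "CYRILLIC"; "LATIN SMALL LETTER O" -> "LATIN"
            scripts.add(unicodedata.name(ch).split()[0])
    return scripts

def is_suspicious(domain: str) -> bool:
    """True if any label mixes alphabets -- a candidate homograph."""
    return any(len(scripts_in(label)) > 1 for label in domain.split("."))

print(is_suspicious("microsoft.com"))                 # False: all Latin
print(is_suspicious("mi\u0441r\u043es\u043eft.com"))  # True: Cyrillic es/o among Latin letters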
Caveat: To demonstrate the feasibility of the described attack, we registered a homographed domain name http://www.microsoft.com with corresponding Russian letters instead of c and o: http://www.miсrоsоft.com/ While this name may be tricky to type in, you can conveniently access it from http://www.cs.technion.ac.il/~gabr/papers/homograph.html.
(Predictably, MICR0S0FT.com, MICR0SOFT.com, and MICROS0FT.com are already registered, as is BL00MBERG.com. John H. has not been wasting his time.)
So, next time you see microsoft.com, where does it want to go today?
Evgeniy Gabrilovich (gabr@acm.org) and Alex Gontmakher (gsasha@cs.technion.ac.il) are Ph.D. students in Computer Science at the Technion -- Israel Institute of Technology. Evgeniy is a member of the ACM and the IEEE; his interests involve computational linguistics, information retrieval, and machine learning. Alex's interests include parallel algorithms and constructed languages.
========================================================
Inside Risks 139, CACM 45, 1, January 2002
The software development process can benefit from the use of established standards and procedures to assess compliance with specified objectives, and reduce the risk of undesired behaviors. One such international standard for information security evaluation is the Common Criteria (CC, ISO IS 15408, 1999, http://csrc.nist.gov/cc). Although use of the CC is currently mandated in the United States for government equipment (typically military-related) that processes sensitive information, the ``loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of Federal programs'' (Computer Security Act of 1987), it has been voluntarily applied in other settings (such as health care). In the USA, oversight of CC product certification is provided by the National Institute of Standards and Technology (NIST).
The goal of the CC is to provide security assurances via anticipation and elimination of vulnerabilities in the requirements, construction, and operation of information technology products through testing, design review, and implementation. Assurance is expressed by degrees, as defined by selection of one of seven Evaluation Assurance Levels (EALs), and then derived through assessment of correct implementation of the security functions appropriate to the level selected, and evaluation in order to obtain confidence in their effectiveness.
However, the use of standards is not a panacea, because product specifications may contain simultaneously unresolvable requirements. Even the CC, which is looked upon as a 'state of the art' standard, disclaims its own comprehensiveness, saying that it is ``not meant to be a definitive answer to all the problems of IT security. Rather, the CC offers a set of well understood security functional requirements that can be used to create trusted products or systems reflecting the needs of the market.'' As it turns out, the CC methodology falls short in addressing and detecting all potential design conflicts.
This major flaw of the CC is directly related to its security functional requirement hierarchy. In selecting an EAL appropriate to the product under evaluation, the CC specifies numerous dependencies among the items necessary for implementing a level's criteria of assurance. In essence, it formulates a mapping whereby if you choose to implement X, you are required to implement Y (and perhaps also Z, etc.). But the CC fails to include a similar mapping for counter-indications, and does not show that if you implement J then you cannot implement K (and perhaps also not L, etc.).
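For illustration only (the requirement names below are invented placeholders, not actual CC components), the two kinds of mapping can be modeled in Python as the dependency table the CC does specify plus the counter-indication table it currently lacks:

DEPENDS_ON = {
    "audit_of_individual_actions": {"user_identification"},
    "user_identification": set(),
    "anonymity": set(),
}
CONFLICTS_WITH = {
    ("audit_of_individual_actions", "anonymity"),   # the missing counter-indication
}

def check(selected):
    """Report unmet dependencies and (the CC's missing piece) mutual exclusions."""
    problems = []
    for req in selected:
        for dep in DEPENDS_ON.get(req, set()):
            if dep not in selected:
                problems.append(f"{req} requires {dep}")
    for a, b in CONFLICTS_WITH:
        if a in selected and b in selected:
            problems.append(f"{a} conflicts with {b}")
    return problems

print(check({"audit_of_individual_actions", "anonymity"}))
# -> ['audit_of_individual_actions requires user_identification',
#     'audit_of_individual_actions conflicts with anonymity']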
A good example of how this becomes problematic arises when both anonymity and auditability are required. The archetypical application of such simultaneous needs occurs in off-site election balloting, but one can also find this in such arenas as Swiss-style banking or AIDS test reporting. If the CC process were to be used with voting (to date, no such standards have been mandated, but NIST involvement is now being considered), it would have to assure that each ballot is cast anonymously, unlinkably, and unobservably, protecting the voter's identity from association with the voting selections. Because access to the ballot-casting modules requires prior authentication and authorization, pseudonymity through the use of issued passcodes seems to provide a plausible solution. But the CC does not indicate how it is possible to maintain privacy while also resolving the additional requirement that all aliases must ultimately be traceable back to the individual voters in order to assure validity.
Furthermore, the need for anonymity precludes the use of traditional transaction logging methods for providing access assurances. Randomized audit logs have been proposed by some voting system vendors, but equipment or software malfunction, errors, or corruption can easily render these self-generated trails useless. Multiple electronic backups provide no additional assurances, since if the error occurs between the point of user data entry and the writing of the cast ballot, all trails would contain the same erroneous information. Pure anonymity and unlinkability, then, are possible only if authentication and authorization transactions occur separately from balloting, but this is difficult to achieve in a fully-electronic implementation.
The remedy to this and other such flaws in the CC involves augmentation with extensions that go beyond the current standard. For voting, one solution is to produce voter-verified paper ballots for use in recounts. Thus, the use of the CC in the secure product development cycle is encouraged, but prudent application and consideration of risks imposed by conflicting requirements is also necessary.
Rebecca Mercuri (mercuri@acm.org) is an assistant professor of computer science at Bryn Mawr College with a PhD from the University of Pennsylvania. Her dissertation, Electronic Vote Tabulation Checks and Balances, contains a detailed discussion of the common criteria evaluation process. See http://www.notablesoftware.com/evote.html for further information, including a computer security checklist.
========================================================
Inside Risks 138, CACM 44, 12, December 2001
In the wake of September 11th, the concept of a National Identity (NID) Card system has been getting considerable play, largely promoted by persons who might gain financially or politically from its implementation, or by individuals who simply do not understand the complex implications of such a plan. Authentic unique identifiers do have some potentially useful purposes, such as staving off misidentifications and false arrests. However, there are many less-than-obvious risks and pitfalls to consider relating to the misuse of NID cards.
In particular, we must distinguish between the apparent identity claimed by an NID and the actual identity of an individual, and consider the underlying technology of NID cards and the infrastructures supporting those cards. It's instructive to consider the problems of passports and drivers' licenses. These supposedly unique IDs are often forged. Rings of phony ID creators abound, for purposes including both crime and terrorism. Every attempt thus far at hardening ID cards against forgery has been compromised. Furthermore, insider abuse is a particular risk in any ID infrastructure. One such example occurred in Virginia, where a ring of motor-vehicle department employees was issuing unauthorized drivers' licenses for a modest fee.
The belief that ``smart'' NID cards could provide irrefutable biometric matches without false positives and negatives is fallacious. Also, such systems will still be cracked, and the criminals and terrorists we're most concerned about will find ways to exploit them, using the false sense of security that the cards provide to their own advantage -- making us actually less secure as a result!
Another set of risks arises with respect to the potentials for abuse of the supporting databases and communication complexes that would be necessary to support NIDs -- card readers, real-time networking, monitoring, data mining, aggregation, and probably artificially intelligent inference engines of questionable reliability. The opportunities for overzealous surveillance and serious privacy abuses are almost limitless, as are opportunities for masquerading, identity theft, and draconian social engineering on a grand scale.
The RISKS archives relate numerous examples of misuses of law enforcement, National Crime Information Center (NCIC), motor vehicle, Social Security, and other databases, by authorized insiders as well as total outsiders. RISKS readers may be familiar with the cases of the stalker who murdered the actress Rebecca Schaeffer after using DMV data to find her, and the former Arizona law enforcement officer who tracked and killed an ex-girlfriend aided by insider data. The US General Accounting Office has reported widespread misuse of NCIC and other data. Social Security Number abuse is endemic.
Seemingly high-tech smart-card technology has been compromised with surprisingly little high-tech effort. Public-key infrastructures (PKI) for NID cards are also suspect due to risks in the underlying computer infrastructures themselves, as noted in the January/February 2000 columns on PKI risks. Recall that PKI does not prove the identity of the bearers -- it merely gives some possible credence relating to the certificate issuer. Similar doubts will exist relating to NID cards and their authenticity. The November 2000 RISKS column warned against low-tech subversions of high-tech solutions via human work-arounds, a major and highly likely pitfall for any NID.
The NID card is touted by some as a voluntary measure (at least for U.S. citizens). The discriminatory treatment that non-card-holders would surely undergo makes this an obvious slippery slope -- the cards would likely become effectively mandatory for everyone in short order, and subject to the same abuses as other more conventional IDs. The road to an Orwellian police state of universal tracking, but actually reduced security, could well be paved with hundreds of millions of such NID cards.
We have noted here before that technological solutions entail risks that should be identified and understood in advance of deployment to the greatest extent possible, regardless of any panic of the moment. The purported (yet unproven) ``benefits'' of an NID card system notwithstanding, these risks deserve to be discussed and understood in detail before any decisions regarding its adoption in any form are made.
Peter Neumann (neumann@pfir.org) and Lauren Weinstein (lauren@pfir.org) moderate the ACM RISKS Forum (www.risks.org) and the PRIVACY Forum (www.privacyforum.org), respectively. They are co-founders of People For Internet Responsibility (www.pfir.org).
NOTE: Over 5 years ago, Simon Davies quite rationally addressed many common questions relating to such ID cards. See his Frequently Asked Questions, August 24, 1996: http://www.privacy.org/pi/activities/idcard/idcard_faq.html. See also Chris Hibbert's FAQ on SSNs: http://cpsr.org/cpsr/privacy/ssn/ssn.faq.html.
========================================================
Inside Risks 137 CACM 44, 11, November 2001
The horrific events of September 11, 2001, have brought grief, anger, fear, and many other emotions. As we write these words a few weeks later, risks issues are now squarely on the world's center stage, particularly technological risks relating to security and privacy.
With the nightmare of recent events still in a haze of emotions, now is not the time to delve into the technical details of the many risks involved and their impacts on the overall issues of terrorism. We can only hope that future risks warnings will be given greater credence than has typically been the case in the past.
We all want to prevent future attacks, and see terrorists brought to justice for their heinous actions. But this does not suggest that we should act precipitously without carefully contemplating the potential implications, especially when there has been little (if any) meaningful analysis of such decisions' real utility or effects.
Calls for quick action abound, suggesting technical and non-technical approaches intended to impede future terrorism or to calm an otherwise panicky public. Below is a sampling of some current proposals (all in a state of flux and subject to change by the time you read this) that may have various degrees of appeal at the moment. However, not only is it highly questionable whether these ideas can achieve their ostensible goals, but all of them carry a high risk of significant and long-lasting deleterious effects on important aspects of our lives. While improvements in our intelligence and security systems are clearly needed, we should not even be considering the implementation of any of the items below without extremely careful consideration and soul-searching:
* Increased use of wiretapping, without many existing legal restraints
* Widespread monitoring of e-mail, URLs, and other Internet usage
* Banning strong encryption without ``backdoors'' for government access. (In general, the existence of such backdoors creates a single point of attack likely to be exploitable by unauthorized as well as authorized entities, possibly increasing crime and terrorism risks instead of reducing them [1].)
* Face and fingerprint identification systems
* Arming of pilots; remote-controlled airliners; biometrically-locked airliner controls
* Indefinite detention without trial
* Life in prison without parole for various actions that proposals are broadly interpreting as ``terrorist'' (potentially including some security research, petty computer hacking, and other activities that clearly do not fall under currently established definitions of ``terrorism'')
* National ID cards (such as smartcards or photographic IDs), which have only limited potential to enhance security but also entail an array of serious risks and other negative characteristics.
* Massive interagency data sharing and loosened ``need to know'' restrictions on personal information related to areas such as social security numbers, drivers' license information, educational records, domestic and foreign intelligence data, etc. All such data can lead directly not only to identity theft but also to a wide range of other abuses.
These and many other proposals are being made with little or no evidence that they would have prevented the events of September 11th or that they would deter future, highly adaptable terrorists. Some of these concepts, though their motives may often be laudable, could actually reduce the level of security and increase the risks of terrorist attacks. The details of these effects will be topics for much future discussion, but now is not the time for law-enforcement ``wish lists'' or knee-jerk reactions, including many ideas that have been soundly rejected in the past and which have no greater value, and no fewer risks, than they did prior to September 11th.
We must not obliterate hard-won freedoms through hasty decisions. To do so would be to give the terrorists their ultimate victory.
Our best wishes to you and yours.
Lauren Weinstein (lauren@pfir.org) and Peter Neumann (neumann@pfir.org) moderate the PRIVACY Forum (www.privacyforum.org) and the ACM RISKS Forum (www.risks.org), respectively. They are co-founders of People For Internet Responsibility (www.pfir.org).
1. Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, and Bruce Schneier, The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption, http://www.cdt.org/crypto/risks98/; reprinting an earlier article in the World Wide Web Journal, 2, 3, Summer 1997, with a new preface.
2. J.J. Horning, P.G. Neumann, D.D. Redell, J. Goldman, D.R. Gordon, Computer Professionals for Social Responsibility, A Review of NCIC 2000 (report to the Subcommittee on Civil and Constitutional Rights of the Committee on the Judiciary, United States House of Representatives), February 1989, Palo Alto, California. (This reference discusses among other things some of the privacy and life-critical risks involved in monitoring and tracking within law enforcement.)
3. Also see various Web sites for further background: http://www.acm.org/, http://catless.ncl.ac.uk/Risks/ and http://www.privacyforum.org/, http://www.pfir.org/, http://www.epic.org/, etc.
========================================================
Inside Risks 136 CACM 44, 10, October 2001
In the months that the Code Red worm and its relatives have traveled the Net, they've caused considerable consternation among users of Microsoft's Internet Information Server, and elicited abundant Schadenfreude from unaffected onlookers. Despite the limited havoc that they wrought, the Code Red family highlights a much more pernicious problem: the vulnerability of embedded devices with IP addresses, particularly those with built-in Web servers.
Thus far, the Code Red worms work their way through self-generated lists of IP addresses and contact each address's port 80, the standard HTTP port. If a server answers, the worm sends an HTTP request that forces a buffer overflow on unpatched IIS servers, compromising the entire computer.
Any effect that these worms have on other devices that listen on port 80 appears to be unintended. Cisco has admitted that some of its DSL routers are susceptible to denial of service: when an affected router's embedded Web server is contacted by Code Red, the router goes down. HP print servers and 3Com LANmodems seem to be similarly affected; other network-infrastructure hardware likely suffered, too.
HTTP has become the lingua franca of the Internet. Since Web browsers are effectively ubiquitous, many hardware and software companies can't resist making their products' functions visible -- and often controllable -- from any Web browser. Indeed, it almost seems as if all future devices on the Net will be listening on port 80. This increasing reliance on network-accessible gadgetry will return to haunt us; Code Red is only a harbinger.
Sony cryptically announced in April that it would endow all future products with IP addresses -- a technically implausible claim, but nonetheless a clear statement of intent. Car vendors are experimenting with wirelessly accessible cars that can be interrogated and controlled from a Web browser. The possibilities for nearly untraceable shenanigans perpetrated by the script kiddie next door after working out your car's password are endless. This problem won't be solved by encrypting the Web traffic between car and browser, either.
The rise of HTTP as a communications common denominator comes from ease of use, for programmer and customer alike. All customers need is a Web browser and the device's IP address, and they're set. Creating a lightweight server is trivial for developers, especially since both in- and outbound HTTP data is text.
Even more attractive, HTTP traffic is usually allowed through firewalls and other network traffic barriers. Numerous non-HTTP protocols are tunneled via HTTP in order to ease their passage.
But HTTP isn't the miscreant. The problem is created by the companies that embed network servers into products without making them sufficiently robust. Bullet-proof design and implementation of software -- especially network software -- in embedded devices is no longer an engineering luxury. Customer expectation of reliability for turnkey gadgets is higher than that for PC-based systems. The successful infiltration of the Code Red worms well after the alarm was sounded is eloquent proof that getting it right the first time has become imperative.
Given the ease of implementation and small code size of a lightweight Web server, it's particularly disturbing that such software isn't engineered with greater care. Common errors that cause vulnerabilities -- buffer overflows, poor handling of unexpected types and amounts of data -- are well understood. Unfortunately, features still seem to be valued more highly among manufacturers than reliability. Until that changes, Code Red and its ilk will continue unabated.
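For illustration, here is a minimal sketch (in Python, with invented limits) of the defensive habits being urged on implementors of lightweight embedded servers: bound every read, time out slow clients, and reject oversized or malformed request lines rather than trusting the sender.

import socket

MAX_REQUEST_LINE = 1024   # invented limit, chosen for illustration

def handle(conn: socket.socket) -> None:
    """Serve one request, distrusting everything the client sends."""
    conn.settimeout(5.0)                   # don't let slow clients hold the device
    data = b""
    while b"\r\n" not in data:
        chunk = conn.recv(256)
        if not chunk or len(data) + len(chunk) > MAX_REQUEST_LINE:
            conn.sendall(b"HTTP/1.0 414 Request-URI Too Long\r\n\r\n")
            return
        data += chunk
    method, _, rest = data.split(b"\r\n", 1)[0].partition(b" ")
    if method != b"GET" or b" HTTP/" not in rest:
        conn.sendall(b"HTTP/1.0 400 Bad Request\r\n\r\n")
        return
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 8080))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        try:
            handle(client)
        except socket.timeout:
            pass
        finally:
            client.close()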
One example of doing it right is the OpenBSD project, whose developers have audited its kernel source code since the mid-1990s, and have discovered numerous vulnerabilities, such as buffer overflows, before they were exploited. Such proactive manual scrutiny of code is labor intensive and requires great attention to detail, but its efficacy is irrefutable. OpenBSD's security track record -- no remotely exploitable vulnerabilities found in the past four years -- speaks for itself.
Like sheep, companies and customers have been led along the path of least resistance by the duplicitous guide called convenience. HTTP is easy: easy to implement, easy to use, and easy to co-opt. With a little diligence and forethought, it is also easy to secure, as are other means of remote network access. HTTP wasn't originally designed to be all things to all applications, but its simplicity has made it an understandable favorite. But with this simplicity also comes the responsibility on the part of its implementors to make sure it's not abused.
Stephan Somogyi writes frequently -- and speaks occasionally -- on technology, business, design, and distilled spirits for paper and online publications worldwide. Bruce Schneier is CTO of Counterpane Internet Security, Inc., and publishes the Crypto-Gram Internet security newsletter: http://www.counterpane.com/crypto-gram.html
========================================================
Inside Risks 135, CACM 44, 9, September 2001
Most people have heard about the risks of Web cookies in the context of user
privacy. Advertisers such as DoubleClick use cookies to track users and deliver
targeted advertising, drawing significant media attention [1]. But cookies are
also used to authenticate users to personalized services, which is at least as
risky as using cookies to track users.
A cookie is a key/value pair sent to a browser by a Web server to capture the
current state of a Web session. The browser automatically includes the cookie in
subsequent requests. Servers can specify an expiration date for a cookie, but
the browser is not guaranteed to discard the cookie. Because there are few
restrictions on their contents, cookies are highly flexible and easily misused.
Cookies have been used for tracking and authentication. An advertiser can
track your movements between Web sites because the first banner-ad presented to
you can set a cookie containing a unique identifier. As you read subsequent
advertisements, the advertiser can construct a profile about you based on the
cookies it receives from you. Cookies can also authenticate you for multistep
Web transactions. For example, WSJ.com sets a cookie to identify you after you
log in. This allows you to download content from WSJ.com without having to
re-enter a password. E-commerce sites like Amazon.com use cookies to associate
you with a shopping cart. In all cases, a valid cookie will grant access to data
about you, but the information protected by an authentication cookie is
especially sensitive. Unlike tracking cookies, authentication cookies must be
protected from both exposure and forgery.
Unfortunately, cookies were not designed with these protections in mind. For
example, there is no standard mechanism to establish the integrity of a cookie
returned by a browser, so a server must provide its own method. As might be
expected, some servers use much better methods than others. The cookie
specification also relies heavily on the cooperation of the user and the browser
for correct operation. Despite the lack of security in the design of cookies,
their flexibility makes them highly attractive for authentication. This is
especially true in comparison to mechanisms like HTTP Basic Authentication or
SSL that have fixed requirements, are not extensible, and are confusing to
users. Thus cookie-based authentication is very popular and often insecure,
allowing anything from extension of privileges to the impersonation of users.
Most sites do not use cryptography to prevent forgery of cookie-based
authenticators. The unsafe practice of storing usernames or ID numbers in
cookies illustrates this. In such a scheme, anyone can impersonate a user by
substituting the victim's username or ID number in the cookie. Even schemes that
do use cryptography often crumble under weak cryptanalytic attacks. Designing a
secure cookie-based authentication mechanism is difficult because the cookie
interface is not amenable to strong challenge-response protocols. Thus, many
designers without clear security requirements invent weak, home-brew
authentication schemes [2].
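As a sketch of the kind of design the cited guidance points toward (not the paper's exact scheme), a server can bind the user name and an expiration time together under a keyed MAC, so that neither can be altered without detection:

import hashlib
import hmac
import time

SECRET_KEY = b"change-me"   # hypothetical server-side key, never sent to the browser

def make_cookie(user_id: str, lifetime_s: int = 3600) -> str:
    """Cookie value = user|expiry|MAC(user|expiry); forging it requires the key."""
    expiry = str(int(time.time()) + lifetime_s)
    payload = f"{user_id}|{expiry}"
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def check_cookie(cookie: str):
    """Return the user id if the cookie is authentic and unexpired, else None."""
    try:
        user_id, expiry, tag = cookie.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, f"{user_id}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None              # forged or tampered cookie
    if time.time() > int(expiry):
        return None              # expiration enforced by the server, not the browser
    return user_id

# Note: without SSL, an eavesdropper can still replay a stolen cookie verbatim.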
Many sites also rely on cookie expiration to automatically terminate a login
session. However, you can modify your cookies to extend expiration times.
Further, most HTTP exchanges do not use SSL to protect against eavesdropping:
anyone on the network between the two computers can overhear the traffic. Unless
a server takes stronger precautions, an eavesdropper can steal and reuse a
cookie, impersonating a user indefinitely.
These examples illustrate just a few of the common problems with cookie-based
authentication. Web site designers must bear these risks in mind, especially
when designing privacy policies and implementing Web sites. Although there is
currently no consensus on the best design practices for a cookie authentication
scheme, we offer some guidance [2]. To protect against the exposure of your own
personal data, your best (albeit extreme) defense is to avoid shopping online or
registering with online services. Disabling cookies makes any use of cookies a
conscious decision (you must re-enable cookies) and prevents any implicit data
collection. Unfortunately, today's cookie technology offers no palatable
solution for users to securely access personalized Web sites.
1. Hal Berghel, "Digital Village: Caustic cookies," Communications of the
ACM 44, 5, 19-22, May 2001.
2. Kevin Fu, Emil Sit, Kendra Smith, Nick Feamster, ``Dos and Don'ts of
Client Authentication on the Web'', Proc. of 10th USENIX Security
Symposium, August 2001. [NOTE: This paper won the best student paper award!
Also, see http://cookies.lcs.mit.edu/
PGN]
Emil Sit (sit@mit.edu) and Kevin Fu (fubob@mit.edu) are graduate students at
the MIT Laboratory for Computer Science in Cambridge, MA.
========================================================
Inside Risks 134, CACM 44, 8, August 2001
It is easy to create bogus electronic mail with someone else's e-mail name
and address: SMTP servers don't check sender authenticity. S/MIME
(Secure/Multipurpose Internet Mail Extensions) can help, as can digital
signatures and globally-known trustworthy Certification Authorities (CA) that
issue certificates. The recipient's mail software verifies the sender's
certificate to obtain the sender's public key, which is then used to verify e-mail
signed by the sender. In order to trust the legitimacy of the e-mail signatures,
the recipient must trust the CA's certificate-issuance procedures. There are 3
classes of certificates. The certificate classes and issuance procedures are
more or less the same for all CA companies that directly issue certificates to
individuals, e.g., Verisign, Globalsign, and Thawte.
Class-1 certificates have online processes for enrollment application and
certificate retrieval. There is no real identity check, and it is possible to
use a bogus name -- but the PIN sent by e-mail to complete the application at
least connects the applicant to an e-mail address.
Class-2 certificates are more secure than class-1. CAs issue them after some
online and offline controls. They automatically check the applicant's identity and
address against the database of a third party, such as a credit-card company or
DMV. As Schneier and Ellison note in their column ``Risks of PKI: Secure
E-Mail'' (*Comm. ACM 43,* 1, January 2000), it is possible to create fake
certificates using this online method simply by stealing private information. In
order to reduce the likelihood of impersonation, CAs use a postal service for
identity verification and/or confirmation.
Class-3 certificates require in-person presence for strong identity
control prior to issuance by CAs, so they are still more secure.
As usually used in S/MIME, class-1 certificates can easily mislead users. The
recipient's e-mail program verifies the signature over a signed message using
the sender's class-1 certificate. Because the information in the e-mail message
and in the certificate match, the e-mail client program would accept the
signature as valid, but must take the sender's word. With a dishonest sender,
the spurious verification is garbage-in, gospel-out. The only seeming
assurance the signature gives is that the message might have been sent by a
person who has access to the e-mail address specified in the message, but this
fact isn't made clear by the e-mail programs. An average user thinks that
a class-1 certificate provides identity verification, which is not true. This is
neither a bug nor a one-time security flaw. It is exactly how the system works.
CA companies are, of course, aware of this, and put appropriate disclaimers
within their Certificate Practice Statements (CPS) and class-1 certificates.
However, such disclaimers must be read and interpreted by the verifiers. Who
would spend time reading these details when the e-mail program says that the
message has been signed? The average Internet user isn't an experienced security
technician.
Some CA companies, like Globalsign, don't include the certificate holder's
name in class-1 certificates. This is a good approach, but not sufficient. A
message signed by such a class-1 certificate would also be verified by the
e-mail programs. People who don't read the disclaimers also won't read a
lack-of-identification notice. Worse, this lets a sender use the same
certificate to impersonate multiple persons.
If you receive an e-mail message without a signature, you might be wary --
but are likely to take a signed message at face value. Class-1 certificates, in
that respect, provide vulnerability in the name of security.
The verifier should check the level of assurance given in a certificate.
Perhaps e-mail programs should be designed to help verifiers by giving clear and
direct warnings specifying the exact level of identity validation associated
with the certificate. If a class-1 certificate is used, the program should
display a box saying that the sender's identity hasn't been validated.
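One way a mail client might implement such a warning is sketched below (assuming the third-party Python package ``cryptography'', and using the presence or absence of a subject name only as a rough heuristic; a real client would also examine the issuer's certificate policies):

from cryptography import x509
from cryptography.x509.oid import NameOID

def identity_warning(pem_bytes: bytes) -> str:
    """Crude warning text based on what the certificate's subject actually asserts."""
    cert = x509.load_pem_x509_certificate(pem_bytes)   # cryptography >= 3.1
    names = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
    emails = cert.subject.get_attributes_for_oid(NameOID.EMAIL_ADDRESS)
    if not names and emails:
        return ("Signed, but the sender's real-world identity was NOT verified; "
                "only control of the e-mail address is attested.")
    return "Signed; the certificate names a subject -- check the issuance class."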
Certificate holders as well as verifiers must be aware of the fact that
class-1 certificates don't certify real identities. They have to use class-3
certificates for this.
Certificate classes were invented to serve the security-vs-convenience
tradeoff. Class-3 certificates have a good level of identity check for personal
authentication, but CA companies should still promote class-1 and class-2
certificates for the users who need the convenience of online processing.
Refusing to provide them would lose the CAs too many customers. We believe that
class-1 certificates will gradually disappear as certificate use reaches
maturity and as people become more conscious of the limitations of class-1
certificates.
Albert Levi (levi@ece.orst.edu) is a postdoctoral research associate at
the Information Security Lab, Oregon State University. Çetin Kaya Koç (koc@ece.orst.edu) is a professor of Electrical & Computer Engineering at OSU.
========================================================
Inside Risks 133, CACM 44, 7, July 2001
Despite a half-century of practice, a distressingly large portion of today's
software is over budget, behind schedule, bloated, and buggy. As you
know, all four factors generate risks, and bugs can be life-critical. Our reach
continues to exceed our grasp. While hardware has grown following Moore's Law,
software seems to be stuck with Gresham's Law. Most providers studiously avoid
taking any responsibility for the software they produce.
These observations are not new. They were eloquently presented at the famous
1968 NATO conference for which the term ``software engineering'' was coined. (It
was ``deliberately chosen as being provocative, in implying the need for ... the
types of theoretical foundations and practical disciplines, that are traditional
in the established branches of engineering.'') But many of today's programmers
and managers were not even born in 1968, and most of them probably got their
training after the conference proceedings (Software Engineering: Concepts
and Techniques, P. Naur, B. Randell, and J.N. Buxton (eds.),
Petrocelli/Charter, 1976) went out of print.
For those who care about software, wonder why it's in such bad shape, and
want to do something about it, I prescribe the study of both the current
literature and the classics. It is not enough to learn from your own experience;
you should learn from the experiences of others. ``Those who cannot remember the
past are condemned to repeat it.'' (George Santayana)
I have long recommended the book The Mythical Man-Month, by
Frederick P. Brooks, Jr., Addison-Wesley, 2nd edition, July 1995. It is a
product of both bitter experience (``It is a very humbling experience to make a
multimillion-dollar mistake.'') and careful reflection on that experience. It
distills much of what was learned about management in the first quarter-century
of software development. This book has stayed continuously in print since 1975,
with a new edition in 1995. It is still remarkably relevant to managing software
development.
Now there is another book I would put beside it as a useful source of
time-tested advice. Software Fundamentals: Collected Papers by David L.
Parnas, Daniel M. Hoffman and David M. Weiss (eds.), with a foreword by J.
Bentley, Addison-Wesley, 2001 is more technical and less management-oriented,
but equally thought-provoking. In one volume, it covers in depth many
risks-oriented topics.
Parnas has been writing seminal and provocative papers about software and its
development for more than 30 years, based on original research, observation, and
diligent efforts to put theory into practice, often in risky systems such as
avionics and nuclear reactor control. Software Fundamentals collects 33
of these papers, selected for their enduring messages. It includes such classics
as ``On the Criteria to Be Used in Decomposing Systems into Modules''; ``On the
Design and Development of Program Families''; ``Designing Software for Ease of
Extension and Contraction''; ``A Rational Design Process: How and Why to Fake
It''; and ``Software Engineering: An Unconsummated Marriage''. It also has some
lesser-known gems, such as ``Active Design Reviews: Principles and Practices''
and ``Software Aging''. Even if you remember these papers, it is worth
refreshing your memory.
The papers were written to stand alone. Each has a new introduction,
discussing its historical and modern relevance. Thus, readers can browse the
papers in just about any order, choosing those that catch their interest.
However, this is a book where browsing can easily turn to serious study; the
editors' arrangement provides an orderly sequence for reading.
Whether browsing or studying this book, you'll be struck by how much of
today's ``conventional wisdom'' about software was introduced (or championed
very early) by Parnas. Equally surprising is the number of his good ideas that
have still not made their way into current practice. Anyone who cares about
software and risks should ask, Why?
Parnas is never dull. You won't agree with everything he says, and he'd
probably be disappointed if you did. Pick something he says with which you
disagree (preferably something you think is ``obviously wrong''), and try to
construct a convincing theoretical or practical counter-argument. You'll
probably find it harder than you expect, and you'll almost surely learn
something worthwhile when you discover the source of your disagreement. Then,
pick one of Parnas's good ideas that isn't being used where you work, and try to
figure out why it isn't. That could inspire you to write a new column.
Jim Horning (Horning@acm.org) is Director of the Strategic Technologies and
Architectural Research Laboratory (STAR Lab) of InterTrust Technologies
Corporation. (He wrote introductions for two of the papers in the Parnas
anthology, but doesn't get any royalties.) He started programming in 1959; his
long-term interest is the mastery of complexity.
========================================================
Inside Risks 132, CACM 44, 6, June 2001
On March 22, 2001, Microsoft issued a Security Bulletin (MS01-017) alerting
the Internet community that two digital certificates were issued in Microsoft's
name by VeriSign (the largest Digital Certificate company) to an individual --
an impostor -- not associated with Microsoft. Instantaneously, VeriSign (a
self-proclaimed "Internet Trust Company") and the entire concept of Public Key
Infrastructure (PKI) and digital certificates -- an industry and service based
on implicit trust -- became the focus of an incident that seriously undermined
their trustworthiness. This incident also challenges the overall value of
digital certificates.
In theory, certificates are worthwhile to both businesses and consumers by
providing a measure of confidence regarding whom they are dealing with. For
example, consumers entering a bricks-and-mortar business can look around at the
condition of the store, the people working there, and the merchandise offered.
As desired, they can research various business references to determine the
reliability and legitimacy of the business. Depending on the findings, they
decide whether or not to shop there. However, with an Internet-based business,
there is no easy way to determine with whom one is considering doing business.
The Internet business may be a familiar name (from the "real" business world)
and an Internet consumer might take comfort from that and enter into an
electronic relationship with that site. Without a means to transparently verify
the identity of a given Website (through digital certificates), how will they
really know with whom they are dealing?
Recall the incident involving Microsoft. Potentially, the erroneously-issued
certificates were worth a considerable amount of money should their holders have
attempted to distribute digitally-signed software purporting to be legitimate
products from Microsoft. In fact, these certificates were worth much more than
the "authentic" certificates issued to Microsoft because (as mentioned earlier)
end-users do not have the ability to independently verify that certificates are
valid. Since users can't verify the validity of certificates -- legitimate or
otherwise -- the genuine Microsoft certificates are essentially worthless!
In ``Risks of PKI: Secure E-Mail'' (Comm. ACM 43, 1, January 2000)
[below], cryptanalysts Bruce Schneier and Carl Ellison note that certificates
are an attractive business model with significant income potential, but that
much of the public information regarding PKI's vaunted benefits is developed
(and subsequently hawked) by the PKI vendors. Thus, they are skeptical of the
usefulness and true security of certificates.
As a result of how PKI is currently marketed and implemented, the only value
of digital certificates today is for the PKI vendor who is paid real money when
certificates are issued. For the concept of certificates to have real value for
both purchaser and end-user, there must be real-time, every-time confirmation
that the presented certificate is valid, similar to how credit cards are
authorized in retail stores. Unless a certificate can be verified during each
and every use, its value and trustworthiness are significantly reduced.
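One concrete realization of such per-use confirmation is the Online Certificate Status Protocol (OCSP, RFC 2560), in which the relying party asks the issuing authority, at the moment of use, whether a certificate has been revoked. The minimal sketch below (an illustration added here, not part of the original column) shows such a check in Python, assuming the certificate and its issuer's certificate are available as local PEM files and that the cryptography and requests libraries are installed; error handling is omitted.

  # Minimal sketch of a per-use revocation check via OCSP; error handling omitted.
  import requests
  from cryptography import x509
  from cryptography.x509 import ocsp
  from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID
  from cryptography.hazmat.primitives import hashes, serialization

  def certificate_status(cert_path, issuer_path):
      cert = x509.load_pem_x509_certificate(open(cert_path, "rb").read())
      issuer = x509.load_pem_x509_certificate(open(issuer_path, "rb").read())
      # Find the CA's OCSP responder URL in the certificate's AIA extension.
      aia = cert.extensions.get_extension_for_oid(
          ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
      url = next(d.access_location.value for d in aia
                 if d.access_method == AuthorityInformationAccessOID.OCSP)
      # Ask the issuer, right now, whether this certificate is still good.
      req = ocsp.OCSPRequestBuilder().add_certificate(
          cert, issuer, hashes.SHA1()).build()
      reply = requests.post(url, data=req.public_bytes(serialization.Encoding.DER),
                            headers={"Content-Type": "application/ocsp-request"})
      return ocsp.load_der_ocsp_response(reply.content).certificate_status.name

  # e.g., certificate_status("site.pem", "issuer.pem") -> "GOOD", "REVOKED", or "UNKNOWN"

Of course, such a check is only as trustworthy as the responder answering it, which is precisely the author's point.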
In the real world, when submitted for a purchase, credit cards are subjected
to at least 6 steps of verification. The first is when the Point of Sale (POS)
terminal contacts the credit-card issuer, who verifies that the POS terminal
belongs to an authorized merchant. Then, when the customer's card information is
transmitted, the issuer verifies that the card number is valid, is active (not
revoked, or appearing on a list of stolen or canceled cards), and that the card
balance (including the current purchase) is not over the approved limit. Finally,
the merchant, after receiving approval for the transaction from the credit-card
company, usually (but not always) verifies that the customer's signature on the
receipt matches the signature on the card. If there is no signature on the card,
the merchant may ask for another form of signed identification, sometimes even
asking for photo identification.
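As a rough illustration of that authorization chain (added here for concreteness; the helper checks are hypothetical stubs, not any real issuer's interface), the sale proceeds only if every step passes:

  # Hypothetical sketch of the card-authorization chain described above.
  # Every check is stubbed out; a real system would query the issuing bank.
  def authorize(terminal_id, card_number, amount, receipt_sig, card_sig, has_photo_id):
      def terminal_is_authorized(tid): return True   # stub: merchant/terminal lookup
      def card_number_is_valid(num):   return True   # stub: account exists
      def card_is_active(num):         return True   # stub: not revoked, stolen, or canceled
      def within_limit(num, amt):      return True   # stub: balance plus this purchase

      if not (terminal_is_authorized(terminal_id) and card_number_is_valid(card_number)
              and card_is_active(card_number) and within_limit(card_number, amount)):
          return False                               # the issuer declines
      # Merchant-side checks after the issuer approves:
      if card_sig is None:                           # unsigned card
          return has_photo_id                        # fall back to other identification
      return receipt_sig == card_sig

  # e.g., authorize("T-42", "4111111111111111", 59.95, "J. Doe", "J. Doe", True) -> True

The point of the analogy is that each card presentation triggers a fresh, online decision by the issuer -- exactly what most certificate deployments lack.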
The Schneier-Ellison article and recent real-world events demonstrate that a
system of robust, mutual and automatic authentication, checks-and-balances, and
active, ongoing cross-checks between all parties involved is necessary before
PKI can be considered a secure or "trusted" concept of identification. Without
such features, certificates simply become a few bits of data with absolutely no
value to anyone but the PKI vendor.
Without effective revisions to the current process of generating and
authenticating new and existing certificate holders, the concept of PKI as a
tool providing ``Internet trust'' will continue to be a whiz-bang media buzzword
for the PKI industry, full of the sound and fury of marketing dollars, but, in
reality, securing nothing.
Note: See http://www.infowarrior.org/articles/2001-01.html
for a more detailed discussion: ``A Matter of Trusting Trust: Why Current
Public-Key Infrastructures are a House of Cards''.
========================================================
Inside Risks 131, CACM 44, 5, May 2001
You get up to the turnstile at a sporting event and learn that you won't be
permitted inside unless you provide a blood sample for instant DNA analysis, so
that you can be compared against a wanted criminal database. Thinking of that
long overdue library book, you slink away rather than risk exposure.
Farfetched? Sure, today. But tomorrow, a similar scenario could actually
happen, except that you'll probably never even know that you're being scanned.
True, overdue library books probably won't be a high priority, and we should all
of course obey the rules, return those books, and pay any fines! But there's
actually a range of extremely serious risks from the rapid rise of biometric and
tracking technologies in a near void of laws and regulations controlling their
use and abuse.
There was an outcry when it was revealed that patrons at the 2000 Superbowl
game (some critics have dubbed it the "Snooper Bowl") were unknowingly scanned
by a computerized system that tried matching their faces against those of wanted
criminals, even though this sort of technology has long been used in venues such as
some casinos and ATM machines. The accuracy of these devices appears quite
limited in most cases today, but they will get better. Video cameras are
becoming ubiquitous in public, and the potential of these systems to provide the
basis for detailed individual dossiers is significant and rapidly expanding.
Other technologies will soon provide even better identification and tracking.
We constantly shed skin and other materials that could be subjected to DNA
matching; automated systems to vastly speed this process for immediate use are
under development. Will "planting" someone else's DNA become the future's
version of a criminal "frameup"? DNA concerns have already found their way into
the popular media -- the 1997 film Gattaca postulated a nightmarish
DNA-obsessed society. Even without biometrics, the ability for others to track
our movements is growing with alarming speed. There will be wide use of
cell-phone location data (which is generally available whenever your cell phone
is on, even if not engaged in a call). The availability of this data (originally
mandated by the FCC for laudable 911 purposes) is being rapidly explored by both
government and commercial firms.
It's often argued that there's no expectation of privacy in public places.
But by analogy this suggests that it would be acceptable for every one of us to
be followed around by a snoop with a notepad, who then provides his notes
regarding our movements to the government and/or any commercial parties willing
to pay his fees. As a society, would we put up with this? Should the fact that
technology could allow such mass tracking to be done surreptitiously somehow make
it more acceptable?
Proponents of these systems tend to concentrate on scenarios that most of us
would agree are valuable, like catching child molesters and murderers, or
finding a driver trapped in a blizzard. But the industry shows much less
enthusiasm for possible restrictions to prevent the inappropriate or trivialized
use of such data. An infrastructure that could potentially track the movements
of its citizens, both in realtime and retrospectively via archived data, could
become a powerful tool for oppression by some governments less enlightened than
our own is today. Detailed automated monitoring of the citizenry could probably
result in a dramatic reduction in all manner of infractions, from the most minor
to the very serious. Such monitoring would also fundamentally alter our society
in ways that most of us would find abhorrent.
Even in current civil and commercial contexts the potential for abuse is very
real. Lawyers in divorce cases would love to get hold of data detailing where
that supposedly errant husband has been. Insurance companies could well profit
from knowledge about where their customers go and what sorts of potentially
risky activities they enjoy. Such data in the wrong hands could help enable
identity fraud, or far worse.
We've already seen automated toll collection records (which tend to be kept
long after they're needed for their original purpose) drawn into legal battles
concerning persons' whereabouts. Cell-phone location information (even when
initially collected with the user's consent in some contexts) can become fodder
for all manner of commercial resale, data-matching, and long-term archival
efforts, with few (if any) significant restrictions on such applications or how
the data collected can be later exploited.
It would be wrong to fault technology itself for introducing this array of
risks to privacy. The guilt lies with our willingness to allow technological
developments (and the vested interests behind them in many cases) to skew major
aspects of our society without appropriate consideration being given to
society's larger goals and needs. If we're unwilling to tackle that battle,
we'll indeed get what we deserve.
Lauren Weinstein (lauren@vortex.com) moderates the PRIVACY Forum
(http://www.vortex.com/privacy). He also co-founded People For Internet
Responsibility (http://www.pfir.org).
========================================================
Inside Risks 130, CACM 44, 4, April 2001
Underwriters Laboratories (UL) is an independent testing organization created
in 1893, when William Henry Merrill was called in to find out why the Palace of
Electricity at the Columbian Exposition in Chicago kept catching on fire (which
is not the best way to tout the wonders of electricity). After making the
exhibit safe, he realized he had a business model on his hands. Eventually, if
your electrical equipment wasn't UL certified, you couldn't get insurance.
Today, UL rates all kinds of equipment, not just electrical. Safes, for
example, are rated based on time to crack and strength of materials. A ``TL-15''
rating means that the safe is secure against a burglar who is limited to
safecracking tools and 15 minutes' working time. These ratings are not
theoretical; employed by UL, actual hotshot safecrackers take actual safes and
test them. Applying this sort of thinking to computer networks -- firewalls,
operating systems, Web servers -- is a natural idea. And the newly formed Center
for Internet Security (no relation to UL) plans to implement it.
This is not a good idea, not now, and possibly not ever. First, network
security is too much of a moving target. Safes are easy; safecracking tools
don't change much. Not so with the Internet. There are always new
vulnerabilities, new attacks, new countermeasures; any rating is likely to
become obsolete within months, if not weeks.
Second, network security is much too hard to test. Modern software is
obscenely complex: there is an enormous number of features, configurations,
implementations. And then there are interactions between different products,
different vendors, and different networks. Testing any reasonably sized software
product would cost millions of dollars, and wouldn't guarantee anything at the
end. Testing is inherently incomplete. And if you updated the product, you'd
have to test it all over again.
Third, how would we make security ratings meaningful? Intuitively, I know
what it means to have a safe rated at 30 minutes and another rated at an hour.
But computer attacks don't take time in the same way that safecracking does. The
Center for Internet Security talks about a rating from 1 to 10. What does a 9
mean? What does a 3 mean? How can ratings be anything other than binary: either
there is a vulnerability or there isn't?
The moving-target problem particularly exacerbates this issue. Imagine a
server with a 10 rating; there are no known weaknesses. Someone publishes a
single vulnerability that allows an attacker to easily break in. Once a
sophisticated attack has been discovered, the effort to replicate it is
effectively zero. What is the server's rating then? 9? 1? How does the Center
re-rate the server once it is updated? How are users notified of new ratings? Do
different patch levels have different ratings?
Fourth, how should a rating address context? Network components would be
certified in isolation, but deployed in a complex interacting environment.
Ratings cannot take into account all possible operating environments and
interactions. It is common to have several individual ``secure'' components
completely fail a security requirement when they are forced to interact with one
another.
And fifth, how does this concept combine with security practices? Today the
biggest problem with firewalls is not how they're built, but how they're
configured. How does a security rating take that into account, along with other
people problems: users naively executing e-mail attachments, or resetting
passwords when a stranger calls and asks them to?
This is not to say that there's no hope. Eventually, the insurance industry
will drive network security, and then some sort of independent testing is
inevitable. But providing a rating, or a seal of approval, doesn't have any
meaning right now.
Ideas like this are part of the Citadel model of security, as opposed to the
Insurance model. The Citadel model basically says, ``If you have this stuff and
do these things, then you'll be safe.'' The Insurance model says, ``Inevitably
things will go wrong, so you need to plan for what happens when they do.'' In
theory, the Citadel model is a much better model than the pessimistic,
fatalistic Insurance model. But in practice, no one has ever built a citadel
that is both functional and dependable.
The Center for Internet Security has the potential to become yet another
``extort-a-standard'' body, which charges companies for a seal of approval. This
is not to disparage the motives of those behind the Center; you can be an
ethical extortionist with completely honorable intentions. What makes it
extortion is the detriment from not paying. If you don't have the
``Security Seal of Approval'', then (tsk, tsk) you're just not concerned about
security.
Bruce Schneier, CTO of Counterpane Internet Security, Inc. (a
managed-security monitoring firm), 3031 Tisch Way, 100 Plaza East, San Jose, CA
95128, 1-408-556-2401, is author of Secrets and Lies: Digital Security in a
Networked World (Wiley, 2000). http://www.counterpane.com/. Bruce also
writes a monthly Crypto-Gram newsletter http://www.counterpane.com/crypto-gram.html.
========================================================
Inside Risks 129, CACM 44, 3, March 2001
Predicting the long-term effects of computers is both difficult and easy: we
won't get it right, but we won't see ourselves proven wrong. Rather than try, we
present some alternatives allowing readers to make their own predictions.
* Computers play an increasing role in enabling and mediating communication
between people. They have great potential for improving communication, but there
is a real risk that they will simply overload us, keeping us from really
communicating. We already receive far more information than we can process. A
lot of it is noise. Will computers help us to communicate or will they
interfere?
* Computers play an ever increasing role in our efforts to educate our young.
Many countries want to have computers in every school, or even one on every
desk. Computers can help in certain kinds of learning, but it takes time to
learn the arcane set of conventions that govern their use. Even worse, many
children become so immersed in the cartoon world created by computers that they
accept it as real, losing interest in other things. Will computers really
improve our education, or will children be consumed by them?
* Computers play an ever increasing role in our war-fighting. Most modern
weapons systems depend on computers. Computers also play a central role in
military planning and exercises. Perhaps computers will eventually do the
fighting and protect human beings. We might even hope that wars would be fought
with simulators, not weapons. On the other hand, computers in weapon systems
might simply make us more efficient at killing each other and impoverishing
ourselves. Will computers result in more slaughter or a safer world?
* Information processing can help to create and preserve a healthy
environment. Computers can help to reduce the energy and resources we expend on
such things as transportation and manufacturing, as well as improve the
efficiency of buildings and engines. However, they also use energy and their
production and disposal create pollution. They seem to inspire increased
consumption, creating what some ancient Chinese philosophers called ``artificial
desires''. Will computers eventually improve our environment or make it less
healthy?
* By providing us with computational power and good information, computers
have the potential to help us think more effectively. On the other hand, bad
information can mislead us, irrelevant information can distract us, and
intellectual crutches can cripple our reasoning ability. We may find it easier
to surf the web than to think. Will computers ultimately enhance or reduce our
ability to make good decisions?
* Throughout history, we have tried to eliminate artificial and unneeded
distinctions among people. We have begun to learn that we all have much in
common -- men and women, black and white, Russians and Americans, Serbs and
Croats, .... Computers have the power to make borders irrelevant, to hide
surface differences, and to help us to overcome long-standing prejudices.
However, they also encourage the creation of isolated, antisocial, groups that
may, for example, spread hatred over networks. Will computers ultimately improve
our understanding of other peoples or lead to more misunderstanding and hatred?
* Computers can help us to grow more food, build more houses, invent better
medicines, and satisfy other basic human needs. They can also distract us from
our real needs and make us hunger for more computers and more technology, which
we then produce at the expense of more essential commodities. Will computers
ultimately enrich us or leave us poorer?
* Computers can be used in potentially dangerous systems to make them safer.
They can monitor motorists, nuclear plants, and aircraft. They can control
medical devices and machinery. Because they don't fatigue and are usually
vigilant, they can make our world safer. On the other hand, the software that
controls these systems is notoriously untrustworthy. Bugs are not the exception;
they are the norm. Will computers ultimately make us safer or increase our level
of risk?
Most of us are so busy advancing and applying technology that we don't look
either back or forward. We should look back to recognize what we have learned
about computer-related risks. We must look forward to anticipate the future
effects of our efforts, including unanticipated combinations of apparently
harmless phenomena. Evidence over the past decade of Inside Risks and other
sources suggests that we are not responding adequately to that challenge. We humans
have repeatedly demonstrated our predilection for short-term optimization
without regard for long-term costs. We must strive to make sure that we maximize
the benefits and minimize the harm. Among other things, we must build stronger
and more robust computer systems while remaining acutely aware of the risks
associated with their use.
Professor David Lorge Parnas, P.Eng., is Director of the Software Engineering
Programme, Department of Computing and Software, Faculty of Engineering, McMaster
University, Hamilton, Ontario, Canada L8S 4L7. (PGN is PGN.)
========================================================
Inside Risks 128, CACM 44, 2, February 2001
In this column, we assert that deeper knowledge of fundamental principles of
computer technology and their implications will be increasingly essential in the
future, for a wide spectrum of individuals and groups, each with its own
particular needs. Our lives are becoming ever more dependent on understanding
computer-related systems and the risks involved. Although this may sound like a
motherhood statement, wise implementation of motherhood is decidedly nontrivial
-- especially with regard to risks.
Computer scientists who are active in creating the groundwork for the future
need to understand system issues in the large, including the practical
limitations of theoretical approaches. System designers and developers need
broader and deeper knowledge -- including those responsible for the human
interfaces used in inherently riskful operational environments that must be
trusted; interface design is often critical. Particularly in those
systems that are not wisely conceived and implemented, operators and users of
the resulting systems also need an understanding of certain fundamentals.
Corporate executives need an understanding of various risks and
countermeasures. In each case, our knowledge must increase dramatically over
time, to reflect rapid evolution. Fortunately, the fundamentals do not change as
quickly as the widget of the day, which suggests an important emphasis for
education and ongoing training.
An alternative view suggests that many technologies can be largely hidden
from view, and that people need not understand (or indeed, might prefer not to
know) the inner workings. David Parnas's early papers on abstraction,
encapsulation, and information hiding are important in this regard. Although
masking complexity is certainly possible in theory, in practice we have seen too
many occasions (for examples, see the RISKS archives) in which inadequate
understanding of the exceptional cases resulted in disasters. The complexities
arising in handling exceptions apply ubiquitously, to defense, medical systems,
transportation systems, personal finance, security, to our dependence on
critical infrastructures that can fail -- and to anticipating the effects of
such exceptions in advance.
The importance of understanding the idiosyncrasies of mechanisms and human
interfaces, and indeed the entire process, is illustrated by the 2000
Presidential election -- with respect to hanging chad, dimpled chad (due to
stuffed chad slots), butterfly ballot layouts, inherent statistical realities,
and the human procedures underlying voter registration and balloting. Clearly,
the election process is problematic, including the technology and the
surrounding administration that must be considered as part of the overall
system. Looking into the future, a new educational problem will arise if
preferential balloting becomes more widely adopted, whereby voters rank the
competing candidates and the votes for the candidate currently receiving the fewest
votes are iteratively reallocated according to those rankings. This concept
has many merits, although it would certainly further complicate ballot layouts!
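To make the reallocation concrete, here is a minimal sketch (an illustration added here, not from the column) of one common form of preferential counting, instant-runoff voting, where each ballot lists candidates in order of preference; ties and exhausted ballots are handled naively.

  # Minimal sketch of instant-runoff (preferential) counting.
  from collections import Counter

  def instant_runoff(ballots):
      remaining = {c for b in ballots for c in b}
      while True:
          # Count each ballot for its highest-ranked remaining candidate.
          tally = Counter(next(c for c in b if c in remaining)
                          for b in ballots if any(c in remaining for c in b))
          total = sum(tally.values())
          leader, votes = tally.most_common(1)[0]
          if votes * 2 > total or len(remaining) == 1:
              return leader                      # majority, or the last candidate standing
          # Eliminate the candidate with the fewest votes; those ballots are
          # reallocated to each voter's next surviving preference.
          remaining.discard(min(tally, key=tally.get))

  ballots = [["A","B","C"], ["B","C","A"], ["C","B","A"], ["B","A","C"], ["A","C","B"]]
  # instant_runoff(ballots) -> "B": C is eliminated first, and C's ballot transfers to B.

Even this toy version hints at why ballot design and voter instructions become harder: the voter must express a ranking, not a single mark.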
Thus, computer-related education is vital for everyone. The meaning of the
Latin word ``educere'' (to educate) is literally ``to lead forth''. However, in
general, many people do not have an adequate perception of the risks and their
potential implications. When, for example, the media tell us that air travel is
safer than automobile travel (on a passenger-mile basis, perhaps), the
comparison may be less important than the concept that both could be
significantly improved. When we are told that electronic commerce is secure and
reliable, we need to recognize the cases in which it isn't.
With considerable foresight and wisdom, Vint Cerf has repeatedly said that
``The Internet is for Everyone.'' The Internet can provide a fertile medium for
learning for anyone who wants to learn, but it also creates serious
opportunities for the unchecked perpetuation of misinformation and
counterproductive learning that should eventually be unlearned.
In general, we learn what is most valuable to us from personal experience,
not by being force-fed lowest-common denominator details. In that spirit, it is
important that education, training, and practical experiences provide
motivations for true learning. For technologists, education needs to have a
pervasive systems orientation that encompasses concepts of software and system
engineering, security, and reliability, as well as stressing the importance of
suitable human interfaces. For everyone else, there needs to be much better
appreciation of the sociotechnical and economic implications -- including the
risks issues. Above all, a sense of vision of the bigger picture is what is most
needed.
For previous columns in this space relating to education, see February 1996
(research), August 1998 (computer science and software engineering), and October
1998 (risks of E-education), the first two by Peter Denning, the first and third
by PGN. PGN's open notes for a Fall 1999 University of Maryland course on
survivable, secure, reliable systems and networks, and a supporting report are
on his Web site: http://www.CSL.sri.com/neumann
========================================================
Inside Risks 127, CACM 44, 1, January 2001
In addition, the user interface (which changes periodically) is designed
without ergonomic considerations. Input error rates are typically around 2%,
although experience has indicated errors in excess of 10% under certain
conditions. This is not considered problematic because errors are thought to be
distributed evenly throughout the data. The interface provides essentially no
user feedback as to the content of input selections or to the correctness of the
inputs, even though variation from the proper input sequence will void the user
data.
Furthermore, multiple reads of the same user data set often produce different
results, due to storage media problems. The media contain a physical audit trail
of user activity that can be manually perused. There is an expectation that this
audit trail should provide full recoverability for all data, including
information lost through user error. (In practice, the audit trail is often
disregarded, even when the user error rate could yield a significant difference
in the reported results.)
We have just described the balloting systems used by over a third of the
voters in the United States. For decades, voters have been required to use
inherently flawed punched-card systems, which are misrepresented as providing
100% accuracy (``every vote counts'') -- even though this assertion is widely
known to be patently untrue. Lest you think that other voting approaches are
better, mark-sense systems suffer from many of the same problems described
above. Lever-style voting machines offer more security, auditability, and a
significantly better user interface, but these devices have other drawbacks --
including the fact that no new ones have been manufactured for decades.
Erroneous claims and product failures leading to losses are the basis of many
liability suits, yet (up to now) candidates have been dissuaded from contesting
election results through the legal system. Those who have lost their vote
through faulty equipment also have little or no recourse; there is no recognized
monetary or other value for the right of suffrage in any democracy. With
consumer product failures, many avenues such as recalls and class action suits
are available to ameliorate the situation -- but these are not presently
applicable to the voting process. As recent events have demonstrated, the right
to a properly counted private vote is an ideal rather than a guarantee.
The foreseeable future holds little promise for accurate and secure
elections. Earlier columns here [November 1990, 1992, 1993, 2000, and June 2000]
and Rebecca Mercuri's
doctoral thesis (http://www.notablesoftware.com/evote.html) describe a
multitude of problems with direct electronic balloting (where audit trails
provide no more security than the fox guarding the henhouse) and Internet voting
(which facilitates tampering by anyone on the planet, places trust in the hands
of an insider electronic elite, and increases the likelihood of privacy
violations). Flawed though they may be, the paper-based and lever methods at
least provide a visible auditing mechanism that is absent in fully automated
systems.
In their rush to prevent ``another Florida'' in their own jurisdictions, many
legislators and election officials mistakenly believe that more computerization
offers the solution. All voting products are vulnerable due to the adversarial
nature of the election process, in addition to technical, social, and
sociotechnical risks common to all secure systems. Proposals for universal
voting machines fail to address the sheer impossibility of creating a
ubiquitous system that could conform with each of the varying and often
conflicting election laws of the individual states. Paper-based systems are not
totally bad; some simple fixes (such as printing the candidates' names directly
on the ballot and automated validity checks before ballot deposit) could go a
long way in reducing user error and improving auditability.
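As one concrete example of such a validity check (an illustration added here, not the column's), a precinct scanner can flag an overvoted or blank contest before the ballot is deposited, while the voter can still correct it:

  # Hypothetical sketch of an automated validity check before ballot deposit.
  # 'contests' maps each contest to the number of selections allowed;
  # 'ballot' maps each contest to the selections actually marked.
  def check_ballot(contests, ballot):
      problems = []
      for contest, allowed in contests.items():
          marked = ballot.get(contest, [])
          if len(marked) > allowed:
              problems.append(f"overvote in {contest}: {len(marked)} marks, {allowed} allowed")
          elif not marked:
              problems.append(f"no selection in {contest} (possible undervote)")
      return problems

  contests = {"President": 1, "School Board": 2}
  ballot = {"President": ["Smith", "Jones"], "School Board": ["Lee"]}
  # check_ballot(contests, ballot) flags the presidential overvote, so the voter
  # can re-mark the ballot before it is cast and counted.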
As the saying goes, ``Those who cannot remember the past are condemned to
repeat it.'' If the computer science community remains mute and allows
unauditable and insecure voting systems to be procured by our communities, then
we abdicate what may be our only opportunity to ensure the democratic process in
elections. Government officials need your help in understanding the serious
risks inherent in computer-related election systems. Now is the time for all
good computer scientists to come to the aid of the election process.
Contact us at mercuri@acm.org and pneumann@acm.org.
========================================================
Inside Risks 126, CACM 43, 12, December 2000
On August 25, 2000, Internet Wire received a forged e-mail press release
seemingly from Emulex Corp., saying that the Emulex CEO had resigned and the
company's earnings would be restated. Internet Wire posted the message, without
verifying either its origin or contents. Several financial news services and Web
sites further distributed the false information, and the stock dropped 61% (from
$113 to $43) before the hoax was exposed.
This was a devastating network attack. Despite its amateurish execution (the
perpetrator, trying to make money on the stock movements, was caught in less
than 24 hours), $2.54 billion in market capitalization disappeared, only to
reappear hours later. With better planning, a similar attack could do more
damage and be more difficult to detect. It's an illustration of what I see as
the third wave of network attacks -- which will be much more serious and harder
to defend against than the first two waves.
The first wave is physical: attacks against computers, wires, and
electronics. As defenses, distributed protocols reduce the dependency on any one
computer, and redundancy removes single points of failure. Although physical
outages have caused problems (power, data, etc.), these are problems we
basically know how to solve.
The second wave of attacks is syntactic, attacking vulnerabilities in
software products, problems with cryptographic algorithms and protocols, and
denial-of-service vulnerabilities -- the attacks dominating recent security alerts. We have
a bad track record in protecting against syntactic attacks, as noted in previous
columns here. At least we know what the problem is.
The third wave of network attacks is semantic, targeting the way we assign
meaning to content. In our society, people tend to believe what they read. How
often have you needed the answer to a question and searched for it on the Web?
How often have you taken the time to corroborate the veracity of that
information, by examining the credentials of the site, finding alternate
opinions, and so on? Even if you did, how often do you think writers make things
up, blindly accept ``facts'' from other writers, or make mistakes in
translation? On the political scene, we've seen many examples of false
information being reported, getting amplified by other reporters, and eventually
being believed as true. Someone with malicious intent can do the same thing.
People already take advantage of others' naivete. Many old scams have been
adapted to e-mail and the Web. Unscrupulous stockbrokers use the Internet to
fuel ``pump and dump'' strategies. On September 6, the Securities and Exchange
Commission charged 33 companies and individuals with Internet fraud, many based
on semantic attacks such as posting false information on message boards.
However, changing old information can also have serious consequences. I don't
know of any instance of someone breaking into a newspaper's article database and
rewriting history, but I don't know of any newspaper that checks, either.
Against computers, semantic attacks become even more serious. Computer
processes are rigid in the types of inputs they accept -- and generally see much less
than a human making the same decision would. Falsifying computer input can
be much more far-reaching, simply because the computer cannot demand all the
corroborating input that people have instinctively come to rely on. Indeed,
computers are often incapable of deciding what the ``corroborating input'' would
be, or how to go about using it in any meaningful way. Despite what you see in
movies, real-world software is incredibly primitive when it comes to what we
call ``simple common sense.'' For example, consider how incredibly stupid most
Web filtering software is at deriving meaning from human-targeted content.
Can air-traffic control systems, process-control computers, and ``smart''
cars on ``smart'' highways be fooled by bogus inputs? You once had to buy piles
of books to fake your way onto The New York Times best-seller list;
it's a lot easier to just change a few numbers in booksellers' databases. What
about a successful semantic attack against the NASDAQ or Dow Jones databases?
The people who lost the most in the Emulex hoax were the ones with preprogrammed
sell orders.
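The preprogrammed sell orders are a compact example of a semantic attack aimed at a computer rather than a person: the toy rule below (an illustration added here, not Schneier's) sells on any quote that crosses its threshold, with no way to ask whether the news behind the price is real.

  # Illustrative stop-loss rule: it trusts whatever the quote feed reports.
  def stop_loss(quotes, trigger_price):
      for price in quotes:
          if price <= trigger_price:
              return f"SELL at {price}"   # no corroboration of the price or the story behind it
      return "HOLD"

  # A hoax press release knocks the quoted price from 113 toward 43 within minutes;
  # the rule fires long before any human can check the story.
  # stop_loss([113, 104, 87, 65, 43], trigger_price=100) -> "SELL at 87"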
None of these attacks is new; people have long been the victims of bad
statistics, urban legends, hoaxes, gullibility, and stupidity. Computer networks
make it easier to start attacks and speed their dissemination, and they allow
anonymous individuals to reach vast numbers of people at almost no cost.
In the future, I predict that semantic attacks will be more serious than
physical and syntactic attacks. It's not enough to dismiss them with the
cryptographic magic wands of digital signatures, authentication, and
integrity. Semantic attacks directly target the human/computer
interface, the most insecure interface on the Internet. Amateurs tend to attack
machines, whereas professionals target people. Any solutions will have to target
the people problem, not the math problem.
Bruce Schneier is CTO of Counterpane Internet Security, Inc. References are
included in the archival version of this article at
http://www.csl.sri.com/neumann/insiderisks.html.
NOTE: The conceptualization of physical, syntactic, and semantic attacks is
from an essay by Martin Libicki on the future of warfare.
http://www.ndu.edu/ndu/inss/macnair/mcnair28/m028cont.html
PFIR Statement on Internet hoaxes: http://www.pfir.org/statements/hoaxes
Swedish Lemon Angels recipe: http://www.rkey.demon.co.uk/Lemon_Angels.htm
A version of it hidden among normal recipes (I didn't do it, honest): http://www.cookinclub.com/cookbook/desserts/zestlem.html
Mediocre photos of people making them (note the gunk all over the counter by the end): http://students.washington.edu/aferrel/pnt/lemangl.html
SatireWire: How to Spot a Fake Press Release
http://www.satirewire.com/features/fake_press_release.shtml
Amazingly stupid results from Web content filtering software:
http://dfn.org/Alerts/contest.htm
See also: Bruce Schneier, Secrets and Lies: Digital Security in a Networked
World, Wiley, 2000.
Bruce Schneier, CTO, Counterpane Internet Security, Inc., 3031 Tisch Way, 100
Plaza East, San Jose, CA 95128. Phone: 1-408-556-2401, Fax: 1-408-556-0889. Free
Internet security newsletter. See: http://www.counterpane.com
========================================================
Inside Risks 125, CACM 43, 11, November 2000
Computerization of manual processes often creates opportunities for social
risks, despite decades of experience. This is clear to everyone who has waded
through deeply nested telephone menus and then been disconnected. Electronic
voting is an area where automation seems highly desirable but fails to offer
significant improvements over existing systems, as illustrated by the following
examples.
Back in 1992, when I wrote here [1] about computerized vote tabulation, a
$60M election system intended for purchase by New York City had come under
scrutiny. Although the system had been custom designed to meet the City's
stringent and extensive criteria, numerous major flaws (particularly those
related to secure operations) were noted during acceptance testing and review by
independent examiners. The City withheld its final purchase approval and legal
wranglings ensued. This summer, the contract was finally cancelled, with the
City agreeing to pay for equipment and services they had received; all lawsuits
were dropped, thus ending a long and costly process without replacing the City's
bulky arsenal of mechanical lever machines.
Given NYC's lack of success in obtaining a secure, accurate, reliable voting
system, built from the ground up, operating in a closed network environment,
despite considerable time, resources, expertise and expenditures, it might seem
preposterous to propose the creation of a system that would enable ``the casting
of a secure and secret electronic ballot transmitted to election officials using
the Internet'' [2]. Internet security features are largely add-ons (firewalls,
encryption), and problems are numerous (denial-of-service attacks, spoofing,
monitoring). (See [3,4].) Yet this does not seem to dissuade well-intentioned
officials from promoting the belief that on-line voting is around the corner,
and that it will resolve a wide range of problems from low voter turnout to
access for the disabled.
The recent California Task Force report suggested I-voting could be helpful
to ``the occasional voter who neglects to participate due to a busy schedule and
tight time constraints'' [2]. Its convenient access promise is vacuous, in that
the described authorization process requires pre-election submission of a signed
I-voting request, and subsequent receipt of a password, instructions, and access
software on CD-ROM. Clearly, it would be far easier to mail out a conventional
absentee ballot that could be quickly marked and returned, rather than requiring
each voter to reboot a computer in order to install ``a clean, uncorrupted
operating system and/or a clean Internet browser'' [2].
Countless I-voting dotcoms have materialized recently, each hoping to land
lucrative contracts in various aspects of election automation. Purportedly an
academic project at Rensselaer Polytech, voteauction.com was shut down following
threats of legal action for violating New York State election laws [5]. It has
since been sold and reopened from an off-shore location where prosecution may be
circumventable. Vote-selling combined with Internet balloting provides a
powerful way to throw an election to the highest bidder, but this is probably
not what election boards have in mind for their modernized systems. The
tried-and-true method of showing up to vote where your neighbors can verify your
existence is still best, at least until biometric identification is reliable
and commonplace.
While jurisdictions rush to obtain new voting systems, protective laws have
lagged behind. Neither the Federal Election Commission nor any State agencies
have required that computerized election equipment and software comply with
existing government standards for secure systems. The best of these, the ISO
Common Criteria, addresses matters important to voting such as privacy and
anonymity; although it fails to delineate areas in which satisfaction of some
requirements would preclude implementation of others, its components should not
be ignored by those who are establishing minimum certification benchmarks [6].
Computerization of electronic voting systems can have costly consequences,
not only in time and money, but also in the much grander sense of further
eroding confidence in the democratic process. ``If it ain't broke, don't fix
it'' might be a Luddite battle cry, but it may also be prudent where the
benefits of automation are still outweighed by the risks.
1. R. Mercuri, ``Voting-machine risks,'' CACM 35, 11, November 1992. [Added note: See also ``Corrupted Polling'', Inside Risks 41, CACM
36, 11, November 1993, and ``Voting-Machine Risks'', Inside Risks 29, CACM
35, 11, November 1992, which I have added to the end of this partial
collection in the light of recent election considerations. PGN]
========================================================
Inside Risks 124, CACM 43, 10, October 2000
Readers of this column are familiar with the risks of illegal monitoring of
Internet traffic. Less familiar, but perhaps just as serious, are the risks
introduced when law enforcement taps that same traffic legally.
Ironically, as insecure as the Internet may be in general, monitoring a
particular user's traffic as part of a legal wiretap isn't so simple, with
failure modes that can be surprisingly serious. Packets from one user are
quickly mixed in with those of others; even the closest thing the Internet has
to a telephone number --- the ``IP address'' --- often changes from one session
to the next and is generally not authenticated. An Internet wiretap by its
nature involves complex software that must reliably capture and reassemble the
suspect's packets from a stream shared with many other users. Sometimes an
Internet Service Provider (ISP) is able to provide a properly filtered traffic
stream; more often, there is no mechanism available to separate out the targeted
packets.
Enter Carnivore. If an ISP can't provide exactly the traffic covered
by some court order, the FBI offers its own packet sniffer, a PC running
software designed especially for wiretap interception. The Carnivore computer
(so named, according to press reports, for its ability to ``get to the meat'' of
the traffic) is connected to the ISP's network segment expected to carry the
target's traffic. A dial-up link allows FBI agents to control and configure the
system remotely.
Needless to say, any wiretapping system (whether supplied by an ISP or the
FBI) relied upon to extract legal evidence from a shared, public network link
must be audited for correctness and must employ strong safeguards against
failure and abuse. The stringent requirements for accuracy and operational
robustness provide especially fertile ground for many familiar risks.
First, there is the problem of extracting exactly (no more and no less) the
intended traffic. Standard network monitoring techniques provide only an
approximation of what was actually sent or received by any particular computer.
For wiretaps, the results could be quite misleading. If a single packet is
dropped, repeated, or miscategorized (common occurrences in practice), an
intercepted message could be dramatically misinterpreted. Nor is it always clear
``who said what.'' Dynamic IP addresses make it necessary to capture and
interpret accurately not only user traffic, but also the messages that identify
the address currently in use by the target. Furthermore, it is frequently
possible for a third party to alter, forge, or misroute packets before they
reach the monitoring point; this usually cannot be detected by the monitor.
Correctly reconstructing higher-level transactions, such as electronic mail,
adds still more problems.
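To see why these approximations matter, consider a toy reconstruction of a target's traffic from a shared capture (an illustration added here, not Carnivore's actual design): packets are selected by the target's current IP address and reassembled by offset, and a single dropped or misattributed packet silently changes the ``message'' that becomes evidence.

  # Toy sketch of select-by-address and reassembly; not Carnivore's design.
  # Each captured packet is a tuple: (source_ip, byte_offset, payload).
  def reconstruct(packets, target_ip):
      selected = sorted((off, data) for ip, off, data in packets if ip == target_ip)
      message, expected = "", 0
      for off, data in selected:
          if off != expected:
              message += "[GAP]"          # a dropped packet leaves a silent hole
          message += data
          expected = off + len(data)
      return message

  capture = [("10.0.0.5", 0, "please "), ("10.0.0.9", 0, "wire the "),   # another user
             ("10.0.0.5", 7, "do not "), ("10.0.0.5", 14, "send the files")]
  # reconstruct(capture, "10.0.0.5") -> "please do not send the files"
  # Drop the packet at offset 7 and the "evidence" reads "please [GAP]send the files";
  # if the target's dynamic IP address changes mid-session, later packets are missed entirely.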
The general-purpose nature of Carnivore entails its own risks. ISPs vary
greatly in their architecture and configuration; a new component that works
correctly in one might fail badly --- silently or destructively --- in another.
Carnivore's remote control features are of special concern, given the potential
for damage should a criminal gain control of an installed system. ISPs are
understandably reluctant to allow such devices to be installed deep within their
infrastructures.
Complicating matters further are the various kinds of authorized wiretaps,
with different legal standards for each. Because Carnivore is a general purpose
``black box,'' an ISP (or a court) cannot independently verify that any
particular installation has been configured to collect only the traffic for
which it is legally authorized.
Internet wiretaps raise many difficult questions, both legal and technical.
The legal issues are being debated in Congress, in the courts, and in the press.
The technical issues include the familiar (and tough) problems of software
correctness, complex system robustness, user interfaces, audit, accountability,
and security.
Unfortunately, there's no systematic way to be sure that any system as
complex and sensitive as Carnivore works as it is supposed to. A first step, the
best our community has yet found for this situation, is to subject the source
code and system details to wide scrutiny. Focused reviews by outside experts
should be part of this process, as should opening the code to the public. While
the details of particular wiretaps may properly be kept secret, there's no
reason for the wiretapping mechanism to be concealed. The observation that
sunshine is the best disinfectant applies at least as well to software as it
does to government.
Even if we could guarantee the correctness of software, difficult systems
issues still remain. Software alone cannot ensure that the reviewed code is what
is actually used, that filters and configuration files match court orders, that
evidence is not tampered with, and so on.
Ultimately, it comes down to trust --- of those who operate and control the
system and of the software itself. Trusting a law enforcement agent to be honest
and faithful to duty in a free society is one thing. Trusting complex, black-box
software to be correct and operationally faithful to specifications, however, is
quite another.
Matt Blaze and Steven M. Bellovin are researchers at AT&T Labs in Florham
Park, NJ. This
column is also at http://www.crypto.com/papers/carnivore-risks.html along
with other background information in the /papers directory.
========================================================
Inside Risks 123, CACM 43, 9, September 2000
For evaluating the proposed U.S. national missile-defense shield, President
Clinton has outlined four criteria relating to strategic value, technological
and operational feasibility, cost, and impact on international stability.
Strategic value is difficult to assess without considering the feasibility; if
the desired results are technologically infeasible, then the strategic value may
be minimal. Feasibility remains an open question, in the light of recent test
difficulties and six successive failures in precursor tests of the Army's
Theater High-Altitude Area Defense (THAAD), as well as intrinsic difficulties in
dealing with system complexity. The cost is currently estimated at $60 billion,
but how can any such estimate be realistic with so many unknowns? The impact on
international stability also remains an open question, with considerable
discussion domestically and internationally.
We consider here primarily technological feasibility. One issue of great
concern involves the relative roles of offense and defense, particularly the
ability of the defense to differentiate between real missiles and intelligent
decoys. The failed July 2000 experiment ($100 million) had only one decoy; it
was an order of magnitude brighter than the real missile, to give the computer
analysis a better chance of discriminating between one decoy and the one desired
target. (The test failed because the second stage of the defensive missile never
deployed properly; the decoy also failed to deploy. Thus, the goal of target
discrimination could not be assessed.)
Theodore Postol of MIT has pointed out that this was a very simplistic test.
Realistically, decoy technology is orders of magnitude cheaper than
discrimination technology. It is likely to defeat a defensive system that makes
assumptions about the specific attack types and decoys that might be deployed,
because those assumptions will surely be incomplete and perhaps incorrect.
Furthermore, the testing process is always inconclusive. Complex systems fail
with remarkably high probability, even under controlled conditions and even if
all subsystems work adequately in isolated tests. In Edsger Dijkstra's words,
``Testing can be used to show the presence of bugs, but never to show their
absence.''
David Parnas's 1985 arguments [1] relative to President Reagan's Strategic
Defense Initiative (SDI) are all still valid in the present context, and deserve
to be revisited. Risks in the software development process seem to have gotten worse since
1985. (See our July 2000 column.) Many complex system developments have failed.
Even when systems have emerged from the development process, they have typically
been very late, way over budget, and -- perhaps most importantly -- incapable of
fulfilling their critical requirements for trustworthiness, security, and
reliability. In the case of missile-defense systems, there are far too many
unknowns; significant risks would always remain.
Some people advocate attacking incoming objects in the boost phase -- which
might seem conceptually easier to detect and pinpoint, although it is likely to
inspire earlier deployment of multiple warheads and decoys. Clearly, this
concept also has some serious practical limitations. Other alternative
approaches (diplomatic, international agreements, mandatory inspections, etc.)
also need to be considered, especially if they can result in greater likelihood
of success, lower risks of escalation, and enormous cost savings. The choices
should not be limited to just the currently proposed U.S. approach and a
boost-phase defense; other approaches should be considered as well -- including less
technologically intensive ones.
Important criteria should include honesty and integrity in assessing the past
tests, detailed architectural analyses (currently missing), merits of various
other alternatives, and overall risks. Given all of the unknowns and
uncertainties in technology and the potential social consequences, the decision
process needs to be much more thoughtful, careful, patient, and depoliticized.
It should openly address the issues raised by its critics, rather than
attempting to hide them. It should encompass the difficulties of defending
against unanticipated types of decoys and the likelihood of weapon delivery by
other routes. It should not rely solely on technological solutions to problems
with strong nontechnological components. Some practical realism is essential.
Rushing into a decision to deploy an inherently unworkable concept seems
ludicrous, shameful, and wasteful. The ultimate question is this: Reflecting on
the track record of similar projects in the past and of software in general,
would we trust such a software-intensive system? If we are not willing to trust
it, what benefit would it have?
1. David L. Parnas, Software Aspects of Strategic Defense Systems,
American Scientist, 73, 5, Sep-Oct 1985, 432-440; Comm. ACM
28, 12, Dec 1985, 1326-1335. In Computerization and Controversy: Value
Conflicts and Social Changes (edited by C. Dunlop and R. Kling), Academic
Press, Boston, March 1991; also in other languages. http://www.crl.mcmaster.ca/SERG/parnas.homepg
========================================================
Inside Risks 122, CACM 43, 8, August 2000
Laws relating to computers, software, and the Internet are being proposed and
passed at such a breathless rate that even those of us trying to follow them are
having trouble keeping up. Unfortunately, some bad laws, such as the Uniform
Computer Information Transactions Act (UCITA), are likely to encourage other bad
laws, such as proposals to increase surveillance of the Internet. Yet, few
people have heard of UCITA, an extraordinary example of a legal proposal with
far-reaching consequences. Because commerce is regulated at the state level in
the United States, UCITA is being considered in several states; Virginia and
Maryland have passed it.
UCITA will write into state law some of the most egregious excesses contained
in shrink-wrap software licenses. These include statements that disclaim
liability for any damages caused by the software, regardless of how
irresponsible the software manufacturer might have been. Shrink-wrap licenses
may forbid reverse engineering, even to fix bugs. Manufacturers may prohibit the
non-approved use of proprietary formats. They can prohibit the publication of
benchmarking results. By contrast, software vendors may modify the terms of the
license, with only e-mail notification. They may remotely disable the software
if they decide that the terms of the license have been violated. There is no
need for court approval, and it is unlikely that the manufacturer would be held
liable for any harm created by the shutdown, even if the shutdown was
groundless. (The mere existence of such mechanisms is likely to enable
denial-of-service attacks from anywhere.)
Since a small contractor probably will have a contract that holds him or her
liable for damages, the little guy may be forced to pay for damages resulting
from buggy commercial software. Furthermore, the small business owner may be
unable to sell the software portion of the business to another company, because
most shrink-wrap licenses require the permission of the software vendor before a
transfer of software can occur.
Very few manufacturers of other products have the chutzpah to disclaim all
liability for any damage whatsoever caused by defects in their products, and
most states restrict the effectiveness of such disclaimers. Software vendors
base their non-liability claim on the notion that they are selling only
licenses, not `goods'. Consequently, so the argument goes, U.S. federal and
state consumer protection laws, such as the Magnuson-Moss Warranty Act, do not
apply. The strong anti-consumer component of UCITA resulted in opposition from
twenty-six state attorneys-general, as well as consumer groups and professional
societies such as the IEEE-USA, the U.S. Technology Policy Committee of ACM
(USACM), and the Software Engineering Institute (SEI). (See [1] for more
information about ACM's activities).
When most people learn of UCITA, they assume that the unreasonable components
of software licenses won't survive court challenges. But because there is very
little relevant case law, UCITA could make it difficult for courts to reverse
the terms of a shrink-wrap license.
Quoting from the state attorneys-general letter [2], ``We believe the current
draft puts forward legal rules that thwart the common sense expectations of
buyers and sellers in the real world. We are concerned that the policy choices
embodied in these new rules seem to almost invariably favor a relatively small
number of vendors to the detriment of millions of businesses and consumers who
purchase computer software and subscribe to Internet services. ... [UCITA] rules
deviate substantially from long established norms of consumer expectations. We
are concerned that these deviations will invite overreaching that will
ultimately interfere with the full realization of the potential of e-commerce in
our states.''
We know that it is almost impossible to write bug-free software. But UCITA
will remove any legal incentives to develop trustworthy software, because there
need be no liability. While the software industry is pressuring the states to
pass UCITA, law enforcement is pressuring Congress to enact laws that increase
law enforcement's rights to monitor e-mail and the net. Congress, concerned
about the insecurities of our information infrastructure, is listening. So, in
addition to the risks relating to insecure and non-robust software implied by
UCITA, we also have the risk of increased surveillance and the accompanying
threats to speech and privacy.
If you want to learn about the status of UCITA in your state and how you
might get involved, information is available from a coalition of UCITA opponents
[3].
1. http://www.acm.org/usacm/copyright/
2. http://www.tao.ca/wind/rre/0821.html
Barbara Simons has been President of the ACM for the past two years.
[Added note, not in the CACM: Willis Ware offered the following comments,
which are appended herewith. PGN]
* UCITA acts not only to harm the consumer as pointed out, but it intrudes on
the capability of the industry to build secure software; and hence, directly
opposes federal efforts to protect the information infrastructure.
* The Council of Europe is also a threat with its draft treaty on Cybercrime.
Its provisions run counter to the hard-won practices that the software industry
has learned, with great difficulty, for producing trusted, reliable, and secure software.
Willis Ware, willis@rand.org
========================================================
Inside Risks 121, CACM 43, 7, July 2000
Having now completed ten years of Inside Risks, we reflect here on what has
happened in that time. In short, our basic conclusions have not changed much
over the years -- despite many advances in the technology. Indeed, this lack of
change itself seems like a serious risk. Overall, the potential risks have
monotonously if not monotonically become worse, relative to increased
system/network vulnerabilities and increased threats, and their consequent
domestic and worldwide social implications with respect to national stability,
electronic commerce, personal well-being, and many other factors.
Enormous advances in computing power have diversely challenged our abilities
to use information technology intelligently. Distributed systems and the
Internet have opened up new possibilities. Security, reliability, and
predictability remain seriously inadequate. Privacy, safety, and other socially
significant attributes have suffered. Risks have increased in part because of
greater complexity, worldwide connectivity, and dependence on systems and people
of unknown trustworthiness; vastly many more people are now relying on computers
and the Internet; neophytes are diminishing the median level of risk awareness.
The mass-market software marketplace eagerly creates new functionality, but is
not sufficiently responsive to the needs of critical applications. The
development process is often unmanageable for complex systems, which tend to be
late, over budget, noncompliant, and in some cases cancelled altogether. Much
greater discipline is needed. Many efforts seek quick-and-dirty solutions to
complex problems, and long-time readers of this column realize how
counterproductive that can be in the long run. The electric power industry has
evidently gone from a mentality of ``robust'' to ``just-good-enough
most-of-the-time''. The monocultural mass-market computer industry seems even
less proactive. Off-the-shelf solutions are typically not adequate for
mission-critical systems, and in some cases are questionable even in routine
uses. The U.S. Government and state legislative bodies are struggling to pass
politically appealing measures, but are evidently unable to address most of the
deeper issues.
Distributed and networked systems are inherently risky. Security is a serious
problem, but reliability is also -- systems and networks often tend to fall
apart on their own, without any provocation. In 1980, we had the accidental
complete collapse of the ARPAnet. In 1990, we had the accidental AT&T
long-distance collapse. In 1999, the Melissa virus spread itself widely by
e-mail, infecting Microsoft Outlook users. Just the first few months of 2000 saw
extensive distributed denial-of-service attacks (Inside Risks, April 2000) and
the ILOVEYOU e-mail Trojan horse that again exploited Microsoft Outlook
features, propagating much more widely than Melissa. ILOVEYOU was followed by
numerous copycat clones. The cost estimates of ILOVEYOU alone are already in the
many billions of dollars (Love's Labor Lost?).
Ironically, these rather simple attacks have demonstrated that relatively
minimal technical sophistication can result in far-reaching effects;
furthermore, dramatically less sophistication is required for subsequent copycat
attacks. Filtering out attachments to an e-mail message that might contain
executable content is not nearly enough. Self-propagating Trojan horses and
worms do not require an unsuspecting user to open an attachment -- or even to
read e-mail. Any Web page read on a system without significant security
precautions represents a threat, considering the capabilities of ActiveX, Java,
JavaScript, and PostScript (for example). With many people blindly using
underprotected operating systems, the existing systemic vulnerabilities also
create massive opportunities for direct penetrations and misuse. Thus, the
damage could be much greater than the simple cases thus far. Massive
penetrations, denials of service, system crashes, and network outages are
characteristically easy to perpetrate, and can be parlayed into coordinated
unfriendly-nation attacks on some of our national infrastructures. Much subtler
attacks are also possible that might not be detected until too late, such as
planting Trojan horses capable of remote monitoring, stealing sensitive
information, and systematically compromising backups over a long period of time
-- seriously complicating recovery. However, because such attacks have not
happened with wide-scale devastation, most people seem to be rather complacent
despite their own fundamental lack of adequate information security.
It is clear that much greater effort is needed to improve the security and
robustness of our computer systems. Although many technological advances are
emerging in the research community, those that relate to critical systems seem
to be of less interest to the commercial development community. Warning signs
seem to be largely ignored. Much remains to be done, as has been recommended
here for the past ten years.
Neumann's Website http://www.csl.sri.com/neumann
includes ``Risks in Our Information Infrastructures: The Tip of a Titanic
Iceberg Is Still All That Is Visible,'' testimony for the May 10, 2000 hearing
of the U.S. House Science Committee Subcommittee on Technology 2000, information
on the ACM Risks Forum (which PGN moderates), etc.
========================================================
Inside Risks 120, CACM 43, 6, June 2000
Risks in computer-related voting have been discussed here by PGN in November
1990 and by Rebecca Mercuri in November 1992 and 1993. Recently we've seen the
rise of a new class of likely risks in this area, directly related to the
massive expansion of the Internet and World Wide Web.
This is not a theoretical issue -- the Arizona Democratic Party recently held
their (relatively small) presidential primary, which was reported to be the
first legally-binding U.S. public election allowing Web-based voting. Although
there were problems related to confused voters and overloaded systems, the
supporters of the AZ project (including firms providing the technology) touted
the election as a major success. In their view, the proof was the increased
voter turnout over the party's primary four years earlier (reportedly more than
a six-fold increase). But the comparison is basically meaningless, since the
previous primary involved an unopposed President Clinton -- hardly a
cliffhanger.
Now other states and even the federal government seem to be on the fast track
toward converting every Web browser into a voting machine. In reality, this rush
to permit such voting remains a highly risky proposition, riddled with
serious technical pitfalls that are rarely discussed.
Some of these issues are fairly obvious, such as the need to provide
accurate and verifiable vote counts while simultaneously enforcing rigorous
authentication of voters (while still making it impossible to retroactively
determine how a given person voted). All software involved in the election
process should have its source code subject to inspection by trusted outside
experts -- not always simple with proprietary ``off-the-shelf'' software. But
even with such inspections, these systems are likely to have bugs and problems
of various sorts, some of which will not be found and fixed quickly; it's an
inescapable aspect of complex software systems.
Perhaps of far greater concern is the apparent lack of understanding
suggested by permitting the use of ordinary PC operating systems and standard
Web browsers for Internet voting. The use of digital certificates and ``secure''
Web sites for such voting can help identify connections and protect the
communications between voters and the voting servers, but those are not where
the biggest risks are lurking. In the recent mass releases of credit-card
numbers and other customer information, it was typically the security of the
servers themselves that was at fault, not communications security. The same kinds of
security failures leading to private information disclosure or unauthorized
modifications are possible with Internet voting, just as in the commercial
arena. Trust in the election process is at the very heart of the world's
democracies. Internet voting is a perfect example of an application for which
rushing into deployment could have severe negative risks and repercussions of
enormous importance.
Weinstein (lauren@vortex.com) moderates the PRIVACY Forum (http://www.vortex.com/privacy). He is
Co-Founder of People For Internet Responsibility (PFIR, http://www.pfir.org/), which includes a longer
statement on Internet voting.
========================================================
Inside Risks 119, CACM 43, 5, May 2000
The Internet is expanding at an unprecedented rate. However, along with the
enormous potential benefits, almost all of the risks discussed here in past
columns are relevant, in many cases made worse by the Internet -- for example,
due to widespread remote-access capabilities, ever-increasing communication
speeds, the Net's exponential growth, and weak infrastructure. This month we
summarize some of the risks that are most significant, although we can only skim
the surface.
Internet use is riddled with vulnerabilities relating to security,
reliability, service availability, and overall integrity. As noted last month,
denials of service are easy to perpetrate. But more serious attacks are also
relatively easy, including penetrations, insider misuse, and fraudulent e-mail.
Internet video, audio, and voice are creating huge new bandwidth demands that
risk overloads. Some organizations that have become hooked on Internet
functionality are now incapable of reverting to their previous modes of
operation. We cite just a few examples of risks to personal privacy and
integrity that are intensified by the Internet:
* The Internet's vast communications and powerful search engines enable
large-scale data abuses. Massive data mining efforts intensify many problems,
including identity theft. Cookies are one complex component of Web technology,
and possess both positive and highly negative attributes, depending on how they
are used.
* False information abounds, either accidentally or with evil intent.
* Privacy policies relating to encryption, surveillance, and Net-tapping
raise thorny issues. Digital-certificate infrastructures raise integrity
problems.
* Anonymity and pseudo-anonymity have useful purposes but also can foster
serious abuses.
* Obtrusive advertising, spamming, overzealous filtering, and Internet
gambling (often illegal) are all increasing.
* The many risks involved in Internet voting are not well understood, even as
some jurisdictions rush ahead with fundamentally insecure implementations.
* Nonproprietary free software and open-source software have opened up new
challenges.
The question of who controls the Internet is a tricky matter. In general, the
Internet's lack of central control is both a blessing and a curse! Various
governments seem to desire pervasive Internet monitoring capabilities, and in
some cases also to control access and content. Many corporate interests and
privacy advocates want to avoid such scenarios in most cases. Domain naming is
controversial and exacerbates a number of intellectual property and other issues
that already present problems. Mergers are tending to reduce competition.
The global nature of the Internet intensifies many of the problems that
previously seemed less critical. Local, national, and international
jurisdictional issues are complicated by the lack of geographical boundaries.
Legislatures are rushing to pass new laws, often without understanding
technological realities.
The Uniform Computer Information Transaction Act (UCITA) is currently being
considered by state legislatures. Although championed by proprietary software
concerns, it has received strong opposition from 24 state Attorneys General, the
Bureau of Consumer Protection, the Policy Planning Office of the Federal Trade
Commission, professional and trade associations, and many consumer groups. It
tends to absolve vendors from liability, and could be a serious impediment to
security research. Opposition views from USACM and IEEE-USA are at http://www.acm.org/usacm/copyright/
and http://www.ieeeusa.org/forum/POSITIONS/ucita.html,
respectively.
There are also many social issues, including the so-called digital divide
between the technological haves and have-nots. Educational institutions are
increasingly using the Internet, providing the potential for wonderful
resources, but also frequently as something of a lowest common denominator in
the learning process. Controversies over the mandated use of seriously flawed
filtering technology in Internet environments further muddy the situation.
The potentials of the Internet must be tempered with some common sense
relating to the risks of misuse and abuse. Technological solutions to social
problems have proven to be generally ineffective, as have social solutions to
technological problems. It is crucial that we all become active, as individuals,
organizations, and communities, in efforts to bring some reasonable balance to
these increasingly critical issues. The benefits generally outweigh the risks,
but let's not ignore the risks!
Weinstein (lauren@vortex.com) moderates the PRIVACY Forum
(http://www.vortex.com/privacy) and Neumann (neumann@csl.sri.com) moderates the
ACM Risks Forum (http://catless.ncl.ac.uk/Risks). They are also the founders of
People For Internet Responsibility (PFIR - http://www.pfir.org), which has
assembled a growing enumeration of Internet risks issues as well as position
statements on Internet voting, legislation, hacking, etc. There are of course
many organizations devoted to particular subsets of these important issues. We
hope that PFIR will be an effective resource in working with them on a wide
range of Internet issues.
========================================================
Inside Risks 118, CACM 43, 4, April 2000
WARNING: Although it is April, this is neither an April Fools' column nor a
foolish concern.
A Funny Thing Happened on my Way to the (Risks) Forum this month. I had
planned to write a column on the ever-burgeoning risks of denial-of-service
(DoS) attacks relating to the Internet, private networks, computer systems,
cable modems and DSL (for which spoofing is a serious risk), and the critical
infrastructures that we considered here in January 1998.
DoS threats are rampant, although there are only a few previous cases in the
RISKS archives -- for example, involving attacks on PANIX, WebCom, and
Australian communications. Many types of DoS attacks do not even require
direct access to the computer systems being attacked. Instead, they exploit
fundamental architectural deficiencies external to the systems themselves,
rather than just the widespread weak links that permit internal exploitation.
Well, just as I started to write this column in February 2000, an amazing
thing happened. Within a three-day period, Yahoo, Amazon, eBay, CNN.com,
Buy.com, ZDNet, E*Trade, and Excite.com were all subjected to total or regional
outages of several hours caused by distributed denial-of-service (DDoS) attacks
-- that is, multiple DoS attacks from multiple sources. Media moguls seem to
have been surprised, but the DDoS concepts have been around for many years.
Simple DoS flooding attacks (smurf, syn, ping-of-death) can be carried out
remotely over the Net, without any system penetrations. Other DoS attacks may
exploit security vulnerabilities that permit penetrations, followed by crashes
or resource exhaustion. Some DDoS attack scripts (Trinoo, Tribal Flood Network
TFN and TFN2K, Stacheldraht) combine two modes, using the Internet to install
attack software on multiple unwitting intermediary systems (``zombies''), from
which simultaneous DoS attacks can be launched on target systems without
requiring penetrations. In general, DDoS attacks can cause massive outages, as
well as serious congestion even on unattacked sites.
DoS attacks are somewhat like viruses -- some specific instances can be
detected and blocked, but no general preventive solutions exist today or are
likely in the future. DDoS attacks are even more insidious. They are difficult
to detect because they can come from many sources; trace-back is greatly
complicated when they use spoofed IP addresses.
Common security advice can help a little in combatting DDoS: install and
properly configure firewalls (blocking nasty traffic); isolate machines from the
Net when connections are not needed; demand cryptographic authenticators rather
than reusable fixed passwords, to reduce masqueraders. But those ideas are
clearly not enough. We also need network protocols that are less vulnerable to
attack and that more effectively accommodate emerging applications (interactive
and noninteractive, symmetric and asymmetric, broadcast and point-to-point,
etc.) -- for example, blocking bogus IP addresses. For starters, we need
firewalls and routers that are more defensive; cryptographic authentication
among trustworthy sites; systems with fewer flaws and fewer risky features;
monitoring that enables early warnings and automated reconfiguration;
constraints on Internet service providers to isolate bad traffic; systems and
networks that can be more easily administered; and much greater collaboration
among different system administrations.
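To make the source-address idea concrete, here is a minimal sketch (in Python, with an illustrative documentation prefix and an invented function name) of the check that a border router performing such filtering would apply to outbound traffic; real filters live in router configurations rather than application code.

  import ipaddress

  # A site that owns 192.0.2.0/24 (a reserved documentation prefix, used here
  # purely as an example) should never emit packets whose source address lies
  # outside that prefix; such packets are spoofed and can be dropped at the edge.
  SITE_PREFIX = ipaddress.ip_network("192.0.2.0/24")

  def outbound_source_ok(source_ip: str) -> bool:
      return ipaddress.ip_address(source_ip) in SITE_PREFIX

  print(outbound_source_ok("192.0.2.44"))  # True  -- plausible local source
  print(outbound_source_ok("10.9.8.7"))    # False -- spoofed source, drop it

Wide deployment of such filtering would not stop flooding, but it would make spoofed-source DDoS traffic far easier to trace.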
As attack scripts become increasingly available, DoS and DDoS attacks become
even more trivial to launch. It is probably naive to hope that the novelty of
these attacks might wear off (which is what many people hoped in the early days
of viruses, although today there are reportedly over 50,000 virus types). But if
the attacks were to disappear for a while, the incentives to address the problem
might also diminish.
The FBI and its National Infrastructure Protection Center (NIPC) are taking a
role in trying to track down attackers, but the flakiness of the technology
itself makes tracing difficult. Above all, it is clear that this is a problem in
desperate need of some technological and operational approaches; relying on law
enforcement as a deterrent is not adequate -- especially against attacks mounted
from outside of the U.S. This is not just a national problem: every computerized
nation has similar risks, and attacks on any site can be launched from anywhere
in the world.
The Internet has grown without overall architectural design (as have many of
its applications). Although this may have accelerated expansion, some current
uses vastly exceed what is prudent. We urgently need to launch a concerted
effort to improve the security and robustness of our computer-communication
infrastructures. The recent denial-of-service problems are only a foretaste of
what could happen otherwise.
See Results of the Distributed-Systems Intruder Tools Workshop http://www.cert.org/reports/dsit_workshop.pdf
and some partial antidotes such as for Trinoo http://www.fbi.gov/nipc/trinoo.htm.
Peter Neumann is the Moderator of
the on-line Risks Forum (comp.risks).
========================================================
Inside Risks 117, CACM 43, 3, March 2000
It was the best of times, it was the worst of times, but now it is time to
reflect on the lessons of Y2K. Ironically, if the extensive media hype had not
stimulated significant progress in the past half-year, serious social
disruptions could have occurred. However, the colossal remediation effort is
simultaneously (1) a success story that improved systems and people's technical
knowledge, (2) a wonderful opportunity to have gotten rid of some obsolete
systems (although there were some unnecessary hardware upgrades where software
fixes would have sufficed), and (3) a manifestation of long-term
short-sightedness. After spending billions of dollars worldwide, we must wonder
why a little more foresight did not avoid many of the Y2K problems sooner.
* System development practice. System development should be based on
constructive measures throughout the life-cycle, on well-specified requirements,
on system architectures that are inherently sound, and on intelligently applied
system engineering and software engineering. The Y2K problem is a painful example of
the absence of good practice -- somewhat akin to its much neglected but
long-suffering stepchild, the less glitzy but persistent buffer-overflow
problem. For example, systematic use of concepts such as abstraction,
encapsulation, information hiding, and object-orientation could have allowed the
construction of efficient programs in which the representation of dates could be
changed easily when needed.
* Integrity of remediation. In the rush to remediation, relatively
little attention was paid to the integrity of the process and ensuing risks.
Many would-be fixes introduced new bugs. Windowing deferred some problems until
later. Opportunities existed for theft of proprietary software, blackmail,
financial fraud, and insertion of Trojan horses -- some of which may not be
evident for some time.
* What happened? In addition to various problems triggered before
the new year, there were many Y2K date-time screwups. See the on-line Risks
Forum, volume 20, beginning with issue 71, and http://www.csl.sri.com/neumann/cal.html
for background. The Pentagon had a self-inflicted Y2K mis-fix that resulted in
complete loss of ability to process satellite intelligence data for 2.5 hours at
midnight GMT on the year turnover, with the fix for that leaving only a trickle
of data from 5 satellites for several days afterward. The Pentagon DefenseLINK
site was disabled by a preventive mistake. The Kremlin press office could not
send e-mail. In New Zealand, an automated radio station kept playing the New
Year's Eve 11pm news hour as most recent, because 99 is greater than 00. Toronto
abandoned their non-Y2K-compliant bus schedule information system altogether,
rather than fix it. Birth certificates for British newborns were for 1900. Some
credit-card machines failed, and some banks repeatedly charged for the same
transaction -- once a day until a previously available fix was finally
installed. Various people received bills for cumulative interest since 1900. At
least one person was temporarily rich, for the same reason. In e-mail, Web
sites, and other applications, strange years were observed beginning on New
Year's Day (and continuing until patched), notably the years 100 (99+1), 19100
(19 concatenated with 99+1), 19000 (19 concatenated with 99+1 (mod 100)), 1900,
2100, 3900, and even 20100. Some Compaq sites said it was Jan 2 on Jan 1. U.K.'s
NPL atomic clock read Dec 31 1999 27:00 at 2am GMT on New Year's Day. But all of
these anomalies should be no surprise; as we noted here in January 1991,
calendar arithmetic is a tricky business, even in the hands of expert
programmers. (A short sketch after this list shows how the strangest of these
years arise.)
* Conclusions: Local optimization certainly seems advantageous in
the short term (e.g., to reduce immediate costs), but is often counterproductive
in the long term. The security and safety communities (among others) have long
maintained that trying to retrofit quality into poorly conceived systems is
throwing good money after bad. It is better to do things right from the outset,
with a clear strategy for evolution and analysis -- so that mistakes can be
readily fixed whenever they are recognized. Designing for evolvability,
interoperability, and the desired functional ``-ities'' (such as
security, reliability, survivability in the presence of arbitrary adversities)
is difficult. Perhaps this column should have been entitled ``A Tale of Two
-ities'' -- predictability and dependability, both of which are greatly
simplified when the requirements are defined in advance.
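The strangest of the years reported under ``What happened?'' above can be reproduced in a few lines. This is only a sketch of the general failure pattern, assuming the once-common convention of storing the year as an offset from 1900 (as C's tm_year field does); the actual failing programs naturally differed in detail.

  # A hypothetical program reaching the year 2000 holds offset = 100.
  offset = 100  # years since 1900

  print(offset)              # -> 100    (printing the raw field)
  print("19" + str(offset))  # -> 19100  (prefixing a literal "19")
  print(1900 + offset)       # -> 2000   (the only correct reconstruction)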
Between grumbles about the large cost of Y2K remediation and views on what
might have happened had there not been such an intensive remediation effort, we
still have much to learn. (Will this experience be repeated for Y10K?) Perhaps
the biggest Y2K lessons are simply further reminders that greater foresight
would have been beneficial, that fixes themselves are prone to errors, and that
testing is inherently incomplete (especially merely advancing a clock to New
Year's Eve and observing the rollover). We need better system-oriented education
and training. Maybe it is also time for certification of developers, especially
when dealing with critical systems.
http://catless.ncl.ac.uk/Risks/
and ftp://www.sri.com/risks/ house the official archives for the ACM Risks
Forum, moderated by PGN.
========================================================
Inside Risks 116, CACM 43, 2, February 2000
[*** NOTE *** The next-to-last sentence in the first paragraph of this column
below is the correct version. Somehow the version printed in the CACM was
garbled. While I am editorializing, I might mention Understanding Public-Key
Infrastructure by Carlisle Adams and Steve Lloyd, MacMillan, 1999. PGN]
Open any popular article on public-key infrastructure (PKI) and you're likely
to read that a PKI is desperately needed for E-commerce to flourish. Don't
believe it. E-commerce is flourishing, PKI or no PKI. Web sites are happy to
take your order if you don't have a certificate and even if you don't use a
secure connection. Fortunately, you're protected by credit-card rules.
The main risk in believing this popular falsehood stems from the
cryptographic concept of ``non-repudiation''.
Under old, symmetric-key cryptography, the analog to a digital signature was
a message authentication code (MAC). If Bob received a message with a correct
MAC, he could verify that it hadn't changed since the MAC was computed. If only
he and Alice knew the key needed to compute the MAC and if he didn't compute it,
Alice must have. This is fine for the interaction between them, but if the
message was ``Pay Bob $1,000,000.00, signed Alice'' and Alice denied having sent
it, Bob could not go to a judge and prove that Alice sent it. He could have
computed the MAC himself.
A digital signature does not have this failing. Only Alice could have
computed the signature. Bob and the judge can both verify it without having the
ability to compute it. That is ``non-repudiation'': the signer cannot credibly
deny having made the signature. Since Diffie and Hellman discussed this concept
in their 1976 paper, it has become part of the conventional wisdom of the field
and has made its way into standards documents and various digital signature
laws.
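The asymmetry described in the two preceding paragraphs can be sketched in a few lines of Python. The MAC half uses only the standard library; the signature half assumes the third-party `cryptography' package is available, and any other signature scheme would illustrate the same point.

  import hashlib, hmac

  message = b"Pay Bob $1,000,000.00, signed Alice"

  # Symmetric MAC: Alice and Bob share one key, so either of them can compute
  # a valid tag -- which is exactly why a MAC proves nothing to a judge.
  shared_key = b"key known to both Alice and Bob"
  alices_tag = hmac.new(shared_key, message, hashlib.sha256).digest()
  bobs_tag = hmac.new(shared_key, message, hashlib.sha256).digest()
  assert hmac.compare_digest(alices_tag, bobs_tag)  # Bob can verify -- or forge -- it

  # Digital signature: only the holder of the private key can sign, yet anyone
  # holding the public key (including a judge) can verify.
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  alice_private = Ed25519PrivateKey.generate()
  signature = alice_private.sign(message)
  alice_private.public_key().verify(signature, message)  # raises if forged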
However, practice differs from theory.
Alice's digital signature does not prove that Alice signed the message, only
that her private key did. When writing about non-repudiation, cryptographic
theorists often ignore a messy detail that lies between Alice and her key: her
computer. If her computer were appropriately infected, the malicious code could
use her key to sign documents without her knowledge or permission. Even if she
needed to give explicit approval for each signature (e.g., via a fingerprint
scanner), the malicious code could wait until she approved a signature and sign
its own message instead of hers. If the private key is not in tamper-resistant
hardware, the malicious code can just steal the key as soon as it's used.
While it's legitimate to ignore such details in cryptographic research
papers, it is just plain wrong to assume that real computer systems implement
the theoretical ideal. Our computers may contain viruses. They may be accessible
to passers-by who could plant malicious code or manually sign things with our
keys. Should we then need to deny some signature, we would have the burden of
proving the negative: that we didn't make the signature in question against the
presumption that we did.
Digital signatures are not the first mechanical signatures. There have been
check-writing machines for at least 50 years but in the USA their signatures are
not legally binding without a contract between two parties declaring them
acceptable. Digital signatures are proposed to be binding without such a
contract. Yet, the computers doing digital signatures are harder to secure than
mechanical check-writers that could be locked away between uses.
Other uses of PKI for E-commerce are tamer, but there are risks there too.
A CA signing SSL server certificates may have none of the problems described
above, but that doesn't imply that the lock in the corner of your browser window
means that the web page came from where it says it did. SSL deals with URLs, not
with page contents, but people actually judge where a page came from by the
logos displayed on the page, not by its URL and certainly not by some
certificate they never look at.
Using SSL client certificates as if they carried E-commerce meaning is also
risky. They give a name for the client, but a merchant needs to know if it will
be paid. Client certificates don't speak to that. Digital signatures might be
used with reasonable security for business-to-business transactions. Businesses
can afford to turn signing computers into single-function devices, kept off the
net and physically available only to approved people. Two businesses can sign a
paper contract listing signature keys they will use and declaring that digital
signatures will be accepted. This has reasonable security and reflects business
practices, but it doesn't need any PKI -- and a PKI might actually diminish
security.
Independent of its security problems, it seems that PKI is becoming a big
business. Caveat emptor.
For more details, see http://www.counterpane.com/PKI-risks.html.
Carl Ellison is a security architect at Intel in Hillsboro, Oregon. Bruce
Schneier is the CTO of Counterpane.
[See January 2000: Risks of PKI: Secure E-Mail]
========================================================
Inside Risks 115, CACM 43, 1, January 2000
Public-key infrastructure (PKI), usually meaning digital certificates from a
commercial or corporate certificate authority (CA), is touted as the current
cure-all for security problems.
Certificates provide an attractive business model. They cost almost nothing
to manufacture, and you can dream of selling one a year to everyone on the
Internet. Given that much potential income for CAs, we now see many commercial
CAs, producing literature, press briefings and lobbying. But, what good are
certificates? In particular, are they any good for E-mail? What about free
certificates, as with PGP?
For e-mail, you want to establish whether a given keyholder is the person you
think or want it to be. When you verify signed e-mail, you hope to establish who
sent the message. When you encrypt e-mail to a public key, you need to know who
will be capable of reading it. This is the job certificates claim to do.
An ID certificate is a digitally signed message from the issuer (signer or
CA) to the verifier (user) associating a name with a public key. But, using one
involves risks.
The first risk is that the certificate signer might be compromised, through
theft of signing key or corruption of personnel. Good commercial CAs address
this risk with strong network, physical and personnel security. PGP addresses it
with the ``web of trust'' -- independent signatures on the same certificate.
The next risk is addressed unevenly. How did the signer know the information
being certified? PGP key signers are instructed to know the person whose key is
being signed, personally, but commercial CAs often operate on-line, without
meeting the people whose keys they sign. One CA was started by a credit bureau,
using their existing database for online authentication. Online authentication
works if you have a shared secret, but there are no secrets in a credit bureau's
database because that data is for sale. Therefore, normal identity theft should
be sufficient to get such a certificate. Worse, since credit bureaus are so good
at collecting and selling data, any CA is hard pressed to find data for
authentication that is not already available through some credit bureau.
The next risk is rarely addressed. ID certificates are good only in small
communities. That's because they use people's names. For example, one company
has employees named: john.wilson, john.a.wilson, john.t.wilson, john.h.wilson
and jon.h.wilson. When you met Mr. Wilson, did you ask which one he was? Did you
even know you needed to ask? That's just one company, not the whole Internet.
Name confusion in unsecured e-mail leads to funny stories and maybe
embarrassment. Name confusion in certificates leads to faulty security
decisions.
To a commercial CA, the more clients it has the better. But the more it
succeeds, the less meaningful its certificates become. Addressing this problem
requires work on your part. You need to keep your namespace under control. With
PGP, you could mark keys ``trusted'' (acting as a CA) only if they certify a
small community (e.g., project members), otherwise, you could sign keys
personally, and only when the certified name is meaningful to you. With some
S/MIME mailers, you could disable trust in any CA that has too many (over 500?)
clients and personally mark individual keys trusted instead. Meanwhile, you can
print your public key fingerprint (a hash value, sometimes called a thumbprint)
on your business cards, so that others can certify/trust your key individually.
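A key fingerprint of the sort just mentioned is nothing more than a short hash of the public-key bytes. The sketch below uses a made-up key blob; real fingerprints are computed over a particular encoding of the key, and PGP and S/MIME each define their own conventions.

  import hashlib

  public_key_bytes = b"...encoded public key bytes..."  # placeholder, not a real key
  fingerprint = hashlib.sha256(public_key_bytes).hexdigest()

  # Grouped into short chunks for printing on a business card or reading aloud.
  print(" ".join(fingerprint[i:i + 4] for i in range(0, len(fingerprint), 4)))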
There are other risks, also.
Did the issuer verify that the keyholder controlled the associated private
key? That's what the certificate claims.
Does your mail agent check for key or certificate revocation? Few do.
Finally, how well are the computers at both ends protected? Are private keys
protected by password, and if so, how strong? Are they used in tamper-resistant
hardware or merely in software? Do you have to provide the password for each
operation or is it cached? Is the encryption code itself protected from
tampering? Are public (root) keys protected at all? Usually they aren't but they
need to be to prevent false signature verification or encryption to an
eavesdropper's key. Can a physical passer-by sign something with the signer's
key or tamper with the software or public key storage? Is your machine always
locked?
Real security is hard work. There is no cure-all, especially not PKI.
For more details, see http://www.counterpane.com/PKI-risks.html.
[February 2000: Risks of PKI in electronic commerce.]
========================================================
Inside Risks 114, CACM 42, 12, December 1999
This month we consider some of the risks associated with insiders. For
present purposes, an insider is simply someone who has been (explicitly or
implicitly) granted privileges that authorize him or her to use a particular
system or facility. This concept is clearly relative to virtual space and real
time, because at any given moment a user may be an insider with respect to some
services and an outsider with respect to others, with different degrees of
privilege. In essence, insider misuse involves misuse of authorized privileges.
Recent incidents have heightened awareness of the problems associated with
insider misuse -- such as the Department of Energy's long-term losses of
supposedly protected information within a generally collegial environment, and
the Bank of New York's discovery of the laundering of billions of dollars
involving Russian organized crime. The RISKS archives include many cases of
insider misuse, with an abundance of financial fraud and other cases of
intentional misuse by privileged personnel in law enforcement, intelligence,
government tax agencies, motor-vehicle and medical databases. In addition, there
are many cases of accidental insider screwups in financial services, medical
applications, critical infrastructures, and computer system security
administration. Accidental misuse may be effectively indistinguishable from
intentional misuse, and in some cases has been claimed as a cover-up for
intentional misuse. Related potential risks of insider misuse have been
discussed previously on this page, such as in cryptographic key management and
electronic voting systems.
Although much concern has been devoted in the past to penetrations and other
misuse by outsiders, insider threats have long represented serious problems in
government and private computer-communication systems. However, until recently
the risks have been largely ignored by system developers, application purveyors,
and indeed governments.
Today's operating systems and security-relevant application software
frequently do not provide fine-grained differential access controls that can
distinguish among different trusted users. Furthermore, there are often
all-powerful administrator root privileges that are undifferentiated. In
addition, many systems typically do not provide serious authentication (that is,
something other than fixed reusable passwords flying around unencrypted) and
basic system protection that might otherwise prevent insiders from masquerading
as one another and making subversive alterations of systems and data.
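As a toy illustration of what fine-grained differential access controls might look like (with invented user names and operations), each administrative action below is checked against a per-user permission set rather than a single all-powerful root bit:

  PERMISSIONS = {
      "alice": {"read_audit_log", "restart_service"},
      "bob":   {"read_audit_log"},
  }

  def authorized(user: str, action: str) -> bool:
      # Deny by default; grant only what has been explicitly allocated.
      return action in PERMISSIONS.get(user, set())

  assert authorized("alice", "restart_service")
  assert not authorized("bob", "restart_service")  # trusted, but not for this

Real systems need far more -- protection and auditing of the permission table itself, for a start -- but the contrast with an undifferentiated root privilege is the point.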
Too often it is assumed that once a user has been granted access, that user
should then have widespread access to almost everything. (Furthermore, even when
that assumption is not made, it is often difficult to prevent outsiders from
becoming insiders.) Audit trails are typically inadequate (particularly with
respect to insider misuse), and in some cases compromisable by privileged
insiders. Existing commercial software for detecting misuse is oriented
primarily toward intrusions by outsiders, not misuse by insiders (although a few
ongoing research efforts are not so limited). Even more important, there is
typically not even a definition of what constitutes insider misuse in any given
system or application. Where there is no such definition of misuse, insider
misuse certainly becomes difficult to detect! There are many such reasons why it
is difficult to address the insider misuse problem.
Insiders may have various advantages beyond just allocated privileges and
access, such as better knowledge of system vulnerabilities and the whereabouts
of sensitive information, and the availability of implicitly high human levels
of trust within sensitive enclaves.
We need better definitions of what is meant by insider misuse in specific
applications (accidental and intentional, and in the latter case malicious and
otherwise), better defenses to protect against such misuse, better techniques
for detecting misuse when it cannot be prevented, better techniques for
assessing the damage once misuse has been detected, and then better techniques
for subsequent remediation to whatever extent is possible and prudent --
consistent with the desired security requirements. Techniques such as separation
of duties, two-person controls, encryption with split keys, and enlightened
management can also contribute. A comprehensive approach is essential.
A Workshop on Preventing, Detecting, and Responding to Malicious Insider
Misuse was held in Santa Monica, CA, August 16-18, 1999, sponsored by several
U.S. Government organizations. The purpose of the workshop was to address the
issues outlined above. The report of that workshop is now available on-line
(http://www2.csl.sri.com/insider-misuse/). It surveys the problems presented by
insider misuse and outlines various approaches that were proposed at the
workshop. The report is recommended reading for those of you concerned with
these problems.
Peter Neumann (http://www.csl.sri.com/neumann/) is the Moderator of the
on-line Risks Forum (comp.risks).
========================================================
Inside Risks 113, CACM 42, 11, November 1999
[Note: This is an adaptation of the version originally submitted to ACM. The
quote cited from [2] was omitted from the final version by the CACM editors,
because of space limitations. PGN]
The Internet and World Wide Web may be the ultimate double-edged swords. They
bring diverse opportunities and risks. Just about anything anyone might want is
on the Net, from the sublime to the truly evil. Some categories of information
could induce argument forever, such as what is obscene or
harmful, whereas others may be more easily categorized -- hate
literature, direct misinformation, slander, libel, and other writings or images
that serve no purpose other than to hurt or destroy.
Proposed legal sanctions, social pressures, and technological means to
prevent or limit access to what is considered detrimental all appear to be
inadequate as well as full of risky side-effects.
Web self-rating is a popular notion, and is being promoted by the recent
``Internet Content Summit'' as an alternative to government regulation. The ACLU
believes both government intervention and self-rating are undesirable, because
self-rating schemes will cause controversial speech to be censored, and will be
burdensome and costly. The ACLU also points out that self-rating will encourage
rather than prevent government regulation, by creating the infrastructure
necessary for government-enforced controls. There's also a concern that
self-rating schemes will turn the Internet into a homogenized environment
dominated exclusively by large commercial media operations [1, 1--19].
Furthermore, what happens to sites that refuse to rate themselves (persona
non grata status?), or whose self-ratings are disputed? It seems to be a
no-win situation.
The reliability of third-party filtering is notoriously low. As noted in
RISKS, sites such as ``middlesex.gov'' and ``SuperBowlxxx.com'' were blocked
simply due to their domain names. Commercial site-censoring filters have blocked
NOW, EFF, Mother Jones, HotWired, Planned Parenthood, and many others [1,
29--31]. The PRIVACY Forum was blocked by a popular commercial filter, when one
of their raters equated discussions of cryptography social issues with
prohibited ``criminal skills!'' Sites may not know that they've been blocked
(there usually is no notification), and procedures for appealing blocking are
typically unavailable or inadequate.
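Blocking ``middlesex.gov'' and ``SuperBowlxxx.com'' is exactly what naive substring matching produces. The sketch below uses invented patterns, not any vendor's actual list:

  BLOCKED_SUBSTRINGS = ["sex", "xxx"]  # illustrative only

  def blocked(domain: str) -> bool:
      d = domain.lower()
      return any(s in d for s in BLOCKED_SUBSTRINGS)

  print(blocked("middlesex.gov"))      # True  -- a false positive
  print(blocked("superbowlxxx.com"))   # True  -- another false positive
  print(blocked("example.org"))        # False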
In a survey comparing a traditional search engine with a popular
``family-friendly'' search engine, the Electronic Privacy Information Center
attempted to access such phrases as American Red Cross, San Diego Zoo,
Smithsonian, Christianity, and Bill of Rights. In every case, the
``friendly'' engine prevented access to 90% of the relevant materials, and in
some cases 99% of what would be available without filters [1, 53--66].
Remarkable!
The Utah Education Network (www.uen.org) used filtering software that blocked
public schools and libraries from accessing the Declaration of Independence, the
U.S. Constitution, George Washington's Farewell Address, the Bible, the Book of
Mormon, the Koran, all of Shakespeare's plays, standard literary works, and many
completely noncontroversial Web sites [1, 67--81]. Efforts to link federal
funding to the mandatory use of filters in libraries, schools, and other
organizations are clearly coercive and counterproductive.
With respect to children's use of the Internet, there is no adequate
universal definition of ``harmful to minors,'' nor is such a definition ever
likely to be satisfactory. Attempts to mandate removal of vaguely-defined
``harmful'' materials from the Internet (and perhaps the next step, from
bookstores?) can result only in confusion and the creation of a new class of
forbidden materials which will become even more sought after!
Parents need to reassert guidance roles that they often abdicate. Children
are clearly at risk today, but not always in the manners that some politicians
would have us believe. ``Indeed, perhaps we do the minors of this country harm
if First Amendment protections, which they will with age inherit fully, are
chipped away in the name of their protection'' [2]. Responsible parenting is not
merely plopping kids down alone in front of a computer screen and depending on
inherently defective filtering technology that is touted as both allowing them
to be educated and ``protecting'' them.
As always in considering risks, there are no easy answers -- despite the
continual stampede to implement incomplete solutions addressing only tiny
portions of particular issues, while creating all sorts of new problems. Freedom
of speech matters are particularly thorny, and seemingly among the first to be
sublimated by commercial interests and seekers of simplistic answers. ``Filters
and Freedom,'' an extraordinary collection of information on these topics [1]
should be required reading.
We must seek constructive alternatives, most likely nontechnological in
nature. However, we may ultimately find few, if any, truly workable alternatives
between total freedom of speech (including its dark side) and the specter of
draconian censorship. With the Net still in its infancy, we haven't begun to
understand the ramifications of what will certainly be some of the preeminent
issues of the next century.
References
1. Filters and Freedom: Free Speech Perspectives on Internet Content
Controls, David L. Sobel (Ed.), www.epic.org, ISBN 1-893044-06-8.
http://www.epic.org/filters&freedom/
(See Fahrenheit 451.2: Is Cyberspace Burning? How Rating and Blocking
Proposals May Torch Free Speech on the Internet, ACLU, Reference 1, pages
1--19.)
(See Sites Censored by Censorship Software, Peacefire, Reference 1,
pages 29--31.)
(See Faulty Filters: How Content Filters Block Access to Kid-Friendly
Information on the Internet, EPIC, Reference 1, pages 53--66.)
(See Censored Internet Access in Utah Public Schools and Libraries,
Censorware, Reference 1, pages 67--81.)
2. ACLU v. Reno (``Reno II''), 31 F. Supp. 2d 473 (E.D.Pa. 1999) at 498
(Memorandum Opinion enjoining enforcement of the Child Online Protection Act).
"Harry J. Foxwell" ``My current solution: adult supervision, and no filters. See:
http://mason.gmu.edu/~hfoxwell/fieldtrip.html .''
========================================================
Inside Risks 112, CACM 42, 10, October 1999
Cryptography is often treated as if it were magic security dust: ``sprinkle
some on your system, and it is secure; then, you're secure as long as the key
length is large enough--112 bits, 128 bits, 256 bits'' (I've even seen companies
boast of 16,000 bits.) ``Sure, there are always new developments in
cryptanalysis, but we've never seen an operationally useful cryptanalytic attack
against a standard algorithm. Even the analyses of DES aren't any better than
brute force in most operational situations. As long as you use a conservative
published algorithm, you're secure.''
This just isn't true. Recently we've seen attacks that hack into the
mathematics of cryptography and go beyond traditional cryptanalysis, forcing
cryptography to do something new, different, and unexpected. For example:
* Using information about timing, power consumption, and radiation of a
device when it executes a cryptographic algorithm, cryptanalysts have been able
to break smart cards and other would-be secure tokens. These are called
``side-channel attacks.'' (A small timing example follows this list.)
* By forcing faults during operation, cryptanalysts have been able to break
even more smart cards. This is called ``failure analysis.'' Similarly,
cryptanalysts have been able to break other algorithms based on how systems
respond to legitimate errors.
* One researcher was able to break RSA-signed messages when formatted using
the PKCS standard. He did not break RSA, but rather the way it was used. Just
think of the beauty: we don't know how to factor large numbers effectively, and
we don't know how to break RSA. But if you use RSA in a certain common way, then
in some implementations it is possible to break the security of RSA ... without
breaking RSA.
* Cryptanalysts have analyzed many systems by breaking the pseudorandom
number generators used to supply cryptographic keys. The cryptographic
algorithms might be secure, but the key-generation procedures were not. Again,
think of the beauty: the algorithm is secure, but the method to produce keys for
the algorithm has a weakness, which means that there aren't as many possible
keys as there should be.
* Researchers have broken cryptographic systems by looking at the way
different keys are related to each other. Each key might be secure, but the
combination of several related keys can be enough to cryptanalyze the system.
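The timing variant of the side-channel attacks in the first item above can be seen in a deliberately simplified sketch: a comparison that returns at the first mismatching byte leaks, through its running time, how much of an attacker's guess is correct. (The smart-card attacks measure a device's timing and power at much finer granularity, but the principle is the same.)

  import hmac

  def naive_equal(a: bytes, b: bytes) -> bool:
      # Returns at the first mismatch: the elapsed time reveals how many
      # leading bytes of the attacker's guess were correct.
      if len(a) != len(b):
          return False
      for x, y in zip(a, b):
          if x != y:
              return False
      return True

  def constant_time_equal(a: bytes, b: bytes) -> bool:
      # Examines every byte regardless of where the first mismatch occurs.
      return hmac.compare_digest(a, b)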
The common thread through all of these exploits is that they've all pushed
the envelope of what constitutes cryptanalysis by using out-of-band information
to determine the keys. Before side-channel attacks, the open crypto community
did not think about using information other than the plaintext and the
ciphertext to attack algorithms. After the first paper, researchers began to
look at invasive side channels, attacks based on introducing transient and
permanent faults, and other side channels. Suddenly there was a whole new way to
do cryptanalysis.
Several years ago I was talking with an NSA employee about a particular
exploit. He told me how a system was broken; it was a sneaky attack, one that
I didn't think should even count. ``That's cheating,'' I said. He looked at me
as if I'd just arrived from Neptune.
``Defense against cheating'' (that is, not playing by the assumed
rules) is one of the basic tenets of security engineering. Conventional
engineering is about making things work. It's the genesis of the term ``hack,''
as in ``he worked all night and hacked the code together.'' The code works; it
doesn't matter what it looks like. Security engineering is different; it's about
making sure things don't do something they shouldn't. It's making sure security
isn't broken, even in the presence of a malicious adversary who does everything
in his power to make sure that things don't work in the worst possible way at
the worst possible times. A good attack is one that the engineers never even
thought about.
Defending against these unknown attacks is impossible, but the risk can be
mitigated with good system design. The mantra of any good security engineer is:
``Security is not a product, but a process.'' It's more than designing strong
cryptography into a system; it's designing the entire system such that all
security measures, including cryptography, work together. It's designing the
entire system so that when the unexpected attack comes from nowhere, the system
can be upgraded and resecured. It's never a matter of ``if a security flaw is
found,'' but ``when a security flaw is found.''
This isn't a temporary problem. Cryptanalysts will forever be pushing the
envelope of attacks. And whenever crypto is used to protect massive financial
resources (especially with world-wide master keys), these violations of
designers' assumptions can be expected to be used more aggressively by malicious
attackers. As our society becomes more reliant on a digital infrastructure, the
process of security must be designed in from the beginning.
Bruce Schneier is CTO of Counterpane Internet Security, Inc. You can
subscribe to his free e-mail newsletter, Crypto-Gram, at
http://www.counterpane.com.
=======================================================
Inside Risks 111, CACM 42, 9, September 1999
1999 is a pivotal year for malicious software (malware) such as
viruses, worms, and Trojan horses. Although the problem is not new, Internet
growth and weak system security have evidently increased the risks.
Viruses and worms survive by moving from computer to computer. Prior to the
Internet, computers (and viruses!) communicated relatively slowly, mostly
through floppy disks and bulletin boards. Antivirus programs were initially
fairly effective at blocking known types of malware entering personal computers,
especially when there were only a handful of viruses. But now there are over
10,000 virus types; with e-mail and Internet connectivity, the opportunities and
speed of propagation have increased dramatically.
Things have changed, as in the Melissa virus, the Worm.ExploreZip worm, and
their inevitable variants, which arrive via e-mail and use e-mail software
features to replicate themselves across the network. They mail themselves to
people known to the infected host, enticing the recipients to open or run them.
They propagate almost instantaneously. Antiviral software cannot possibly keep
up. And e-mail is everywhere. It runs over Internet connections that block
everything else. It tunnels through firewalls. Everyone uses it.
Melissa uses features in Microsoft Word (with variants using Excel) to
automatically e-mail itself to others, and Melissa and Worm.ExploreZip make use
of the automatic mail features of Microsoft Outlook. Microsoft is certainly to
blame for creating the powerful macro capabilities of Word and Excel, blurring
the distinction between executable files (which can be dangerous) and data files
(which hitherto seemed safe). They will be to blame when Outlook 2000, which
supports HTML, makes it possible for users to be attacked by HTML-based malware
simply by opening e-mail. DOS set the security state-of-the-art back 25 years,
and MS has continued that legacy to this day. They certainly have a lot to
answer for, but the real cause is more subtle.
It's easy to point fingers, including at virus creators or at the media for
publicity begetting further malware. But a basic problem is the permissive
nature of the Internet and computers attached to it. As long as a program has
the ability to do anything on the computer it is running, malware will be
incredibly dangerous. Just as firewalls protect different computers on the same
network, we're going to need something to protect different processes running on
the same computer.
This malware cannot be stopped at the firewall, because e-mail tunnels it
through a firewall, and then pops up on the inside and does damage. Thus far,
the examples have been mild, but they represent a proof of concept. The
effectiveness of firewalls will diminish as we open up more services (e-mail,
Web, etc.), as we add increasingly complex applications on the internal net, and
as misusers catch on. This ``tunnel-inside-and-play'' technique will only get
worse.
Another problem is rich content. We know we have to make Internet
applications (sendmail, rlogin) more secure. Melissa exploits security problems
in Microsoft Word, others exploit Excel. Suddenly, these are network
applications. Has anyone bothered to check for buffer overflow bugs in PDF
viewers? Now, we must.
Antivirus software can't help much. If Melissa can infect 1.2 million
computers in the hours before a fix is released, that's a lot of damage. What if
the code took pains to hide itself, so that a virus remained hidden? What if a
worm targeted just one individual, deleting itself from any computer whose
userID didn't match a certain reference? How long would it take before that one
was discovered? What if it e-mailed a copy of the user's login script (most
contain passwords) to an anonymous e-mail box before self-erasing? What if it
automatically encrypted outgoing copies of itself with PGP or S/MIME? Or signed
itself? (Signing keys are often left lying around.) What about Back Orifice for
NT? Even a few minutes' thought yields some pretty scary possibilities.
It's impossible to push the problem off onto users with ``do you trust this
message/macro/application?'' confirmations. Sure, it's unwise to run executables
from strangers, but both Melissa and Worm.ExploreZip arrive pretending to be
from friends and associates of the recipient. Worm.ExploreZip even replied to real
subject lines. Users can't make good security decisions under ideal conditions;
they don't stand a chance against malware capable of social engineering.
What we're seeing is the convergence of several problems: the inadequate
security in personal-computer operating systems, the permissiveness of networks,
interconnections between applications on modern operating systems, e-mail as a
vector to tunnel through network defenses and as a means to spread extremely
rapidly, and the traditional naivete of users. Simple patches are inadequate. A
large distributed system communicating at the speed of light is going to have to
accept the reality of infections at the speed of light. Unless security is
designed into the system from the bottom up, we're constantly going to be
swimming against a strong tide.
Bruce Schneier is President of Counterpane Systems. Phone: 612-823-1098.
=======================================================
Inside Risks 110, CACM 42, 8, August 1999
Biometrics are seductive. Your voiceprint unlocks the door of your house.
Your iris scan lets you into the corporate offices. You are your own key.
Unfortunately, the reality isn't that simple.
Biometrics are the oldest form of identification. Dogs have distinctive
barks. Cats spray. Humans recognize faces. On the telephone, your voice
identifies you. Your signature identifies you as the person who signed a
contract.
In order to be useful, biometrics must be stored in a database. Alice's voice
biometric works only if you recognize her voice; it won't help if she is a
stranger. You can verify a signature only if you recognize it. To solve this
problem, banks keep signature cards. Alice signs her name on a card when she
opens the account, and the bank can verify Alice's signature against the stored
signature to ensure that the check was signed by Alice.
There are many different biometrics. In addition to the three mentioned
above, there are hand geometry, fingerprints, iris scans, DNA, typing patterns,
signature geometry (not just the look of the signature, but the pen pressure,
signature speed, etc.). The technologies are different, some are more reliable,
and they'll all improve with time.
Biometrics are hard to forge: it's hard to put a false fingerprint on your
finger, or make your iris look like someone else's. Some people can mimic
others' voices, and Hollywood can make people's faces look like someone else,
but these are specialized or expensive skills. When you see someone sign his
name, you generally know it is he and not someone else.
On the other hand, some biometrics are easy to steal. Imagine a remote system
that uses face recognition as a biometric. ``In order to gain authorization,
take a Polaroid picture of yourself and mail it in. We'll compare the picture
with the one we have on file.'' What are the attacks here?
Take a Polaroid picture of Alice when she's not looking. Then, at some later
date, mail it in and fool the system. The attack works because while it is hard
to make your face look like Alice's, it's easy to get a picture of Alice's face.
And since the system does not verify when and where the picture was taken--only
that it matches the picture of Alice's face on file--we can fool it.
A keyboard fingerprint reader can be similar. If the verification takes place
across a network, the system may be insecure. An attacker won't try to forge
Alice's real thumb, but will instead try to inject her digital thumbprint into
the communications.
The moral is that biometrics work well only if the verifier can verify two
things: one, that the biometric came from the person at the time of
verification, and two, that the biometric matches the master biometric on file.
If the system can't do that, it can't work. Biometrics are unique identifiers,
but they are not secrets. You leave your fingerprints on everything you touch,
and your iris patterns can be observed anywhere you look.
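One way to see the time-of-capture requirement concretely is a minimal challenge-response sketch in Python, assuming a hypothetical trusted reader that shares a device key with the verifier and binds each freshly captured template to a server-supplied nonce. (Exact template equality is a simplification; real biometric matching is fuzzy. The point is that the protection comes from the device key and the fresh nonce, not from the biometric being secret.)

    import hashlib, hmac, secrets

    DEVICE_KEY = secrets.token_bytes(32)   # shared by the trusted reader and the verifier (an assumption)

    def reader_respond(captured_template: bytes, nonce: bytes) -> bytes:
        # The trusted reader binds the fresh capture to the verifier's challenge.
        return hmac.new(DEVICE_KEY, nonce + captured_template, hashlib.sha256).digest()

    def verifier_check(enrolled_template: bytes, nonce: bytes, response: bytes) -> bool:
        expected = hmac.new(DEVICE_KEY, nonce + enrolled_template, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    nonce = secrets.token_bytes(16)             # fresh challenge for this attempt
    response = reader_respond(b"alice-thumb", nonce)
    print(verifier_check(b"alice-thumb", nonce, response))                    # True
    print(verifier_check(b"alice-thumb", secrets.token_bytes(16), response))  # False: a replayed response fails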
Biometrics also don't handle failure well. Imagine that Alice is using her
thumbprint as a biometric, and someone steals the digital file. Now what? This
isn't a digital certificate, where some trusted third party can issue her
another one. This is her thumb. She has only two. Once someone steals your
biometric, it remains stolen for life; there's no getting back to a secure
situation.
And biometrics are necessarily common across different functions. Just as you
should never use the same password on two different systems, the same encryption
key should not be used for two different applications. If my fingerprint is used
to start my car, unlock my medical records, and read my electronic mail, then
it's not hard to imagine some very insecure situations arising.
Biometrics are powerful and useful, but they are not keys. They are not
useful when you need the characteristics of a key: secrecy, randomness, the
ability to update or destroy. They are useful as a replacement for a PIN, or a
replacement for a signature (which is also a biometric). They can sometimes be
used as passwords: users can't choose a weak biometric in the same way they can
choose a weak password.
Biometrics are useful in situations where the connection from the reader to
the verifier is secure: a biometric unlocks a key stored locally on a PCMCIA
card, or unlocks a key used to secure a hard drive. In those cases, all you
really need is a unique hard-to-forge identifier. But always keep in mind that
biometrics are not secrets.
Bruce Schneier
=======================================================
Inside Risks 109, CACM 42, 7, July 1999
As we begin the tenth year of this monthly column, it seems eminently clear
that information technology has enormous benefits, but that it can also be put
to undesirable use. Market forces have produced many wonderful products and
services, but they do not ensure beneficial results. Many systems are
technologically incapable of adequately supporting society-critical uses, and
may further handicap the disadvantaged. Good education and altruism are helpful,
whereas legislation and other forms of regulation have been less successful.
Ultimately, we are all responsible for realistically assessing risks and acting
accordingly.
The rapidly expanding computer-communication age is bringing with it
enormous new opportunities that in many ways outpace the agrarian and industrial
revolutions that preceded it. As with any technology, the potentials for
significant social advances are countered with serious risks of misuse --
including over-aggressive surveillance. Here are four currently relevant
examples.
1. Satellite technology makes possible an amazingly detailed and up-to-date
picture of what is going on almost everywhere on the planet, ostensibly for the
benefit of mankind. However, until now most applications of the imagery have
been for military purposes, with a lurking fear within the U.S. Department of Defense
that the same technology could be used against it. In 1994, the U.S. Government
seemingly relaxed its controls, approving a private satellite to be launched by
a company called Space Imaging -- which expects that its clients will use its
information for urban planning, environmental monitoring, mapping, assessing
natural disasters, resource exploration, and other benevolent purposes. This
opportunity may lead to renewed efforts to restrict the available content --
what can be monitored, where, when, and by whom -- because of the risks of
misuse. In the long run, there are likely to be many such private satellites.
(Unfortunately, the first such satellite, Ikonos 1, with one-square-meter
resolution, disappeared from contact 8 minutes after launch on April 27, 1999,
although we presume Space Imaging will try again.)
2. The Internet has opened up unprecedented new opportunities. But it is also
blamed for pornography, bomb-making recipes, hate-group literature, the
Littleton massacre, spamming, and fraud. Consequently, there are ongoing
attempts to control its use -- especially in repressive nations, but even in
some local constituencies that seek easy technological answers to complex social
problems. In the long run, there are likely to be many private networks.
However, as long as they are implemented with flaky technology and are coupled
to the Internet, their controls will tend to be ineffective. Besides, most
controls on content are misguided and incapable of solving the problems that
they are attempting to solve.
3. Computer systems themselves have created hitherto unbelievable advances in
almost every discipline. Readers of this column realize the extent to which the
risks to the public inherent in computer technologies must also be kept in mind,
especially those involving people (designers, purveyors, users, administrators,
government officials, etc.) who were not adequately aware of the risks.
Furthermore, computers can clearly be used for evil purposes, which again
suggests to some people restrictions on who can have advanced computers. In the
long run, such controls seem unrealistic.
4. Good cryptography that is well implemented can facilitate electronic
commerce, nonspoofable private communications, meaningful authentication, and
the salvation of oppressed individuals in times of crisis. It can of course also
be used to hide criminal or otherwise antisocial behavior -- which has led to
attempts by governments to control its spread. However, obvious risks exist with
the use of weak crypto that can be easily and rapidly broken. The French
government seems to have reversed its course, realizing that its own national
well-being is dependent on the use of strong cryptography that is securely
implemented, with no trapdoors. In the long run, there is likely to be a
plethora of good cryptography freely available worldwide, which suggests that
law enforcement and national intelligence gathering need to seek other
alternatives than export controls and surreptitiously exploitable trap-doored
crypto.
In attempting to control societal behavior, there are always serious risks of
overreacting. About 100 years ago, the Justice Department reportedly proposed in
all seriousness that the general public should not be permitted to have
automobiles -- which would allow criminals to escape from the scene of a crime.
Some of that mentality is still around today. However, the solutions must lie
elsewhere. Let's not bash the Internet and computers for the ways in which they
can be used. Remember that technology is a double-edged sword, and that the
handle is also a weapon.
See the archives of the online Risks Forum (comp.risks) at
http://catless.ncl.ac.uk/Risks/, and the current index at
http://www.csl.sri.com/neumann/illustrative.html, as well as Peter Neumann's ACM
Press/Addison-Wesley book, Computer-Related Risks, for myriad cases of
information systems and people whose behavior was other than what was expected
-- and what might be done about it.
=======================================================
Inside Risks 108, CACM 42, 6, June 1999
As we approach January 1, 2000, it's time to review what progress is being
made and what risks remain. Our conclusion: Considerable uncertainty continues;
optimists predict only minor problems, and pessimists claim that the effects
will be far-reaching. The uncertainty is itself unsettling.
Y2K fixes seem to have accelerated in the months since the Inside Risks
column last September. For example, most U.S. Government agencies and
departments claim they have advanced significantly in the past year, with some
notable exceptions; see http://www.house.gov/reform/gmit and late-breaking
worries (such as the Veterans Administration). However, some agencies have
weakened their definitions of which systems are critical, and government
auditors warn that the success rates are based on self-reported data.
The U.S. Government has recently been exuding a reverse-spin air of
confidence, perhaps in an attempt to stave off panic. However, many states,
local governments, and other countries are lagging. International reliance on
unprepared nations is a serious cause for concern. Some vendor software is yet
to be upgraded. Although many systems may appear to work in isolation, they
depend on computer infrastructures (such as routers, telecommunications, and
power), which must also be Y2K-proof. The uncertainty that results from the
inherent incompleteness of local testing is also a huge factor. Cynics might
even suggest that the federal government's stay-calm message is misleading,
because there is no uniform definition of compliance, no uniform definition of
testing, and little independent validation and verification. And then there are
desires for legislating absolution from Y2K liability.
There is a real risk of popular overreaction. One of the strangest risks is
the possibility of widespread panic inspired by people who fear the worst, even
if the technology works perfectly. Many people are already stockpiling cash,
food supplies, fuel, even guns. Bulk food companies and firearm manufacturers
report record sales. Some Government officials fear that accelerated purchases
in 1999 and reduced demand in early 2000 could spark a classic inventory
recession.
There is also a potential risk of government overreaction. As far back as
June 1998, Robert Bennett, the Utah Republican who chairs the U.S. Senate's Y2K
committee, asked what plans the Pentagon has ``in the event of a Y2K-induced
breakdown of community services that might call for martial law." Y2K fears
prompted city officials in Norfolk, Nebraska to divert funds from a new mug-shot
system to night-vision scopes, flashlights for assault rifles, gas masks, and
riot gear. The Federal Emergency Management Agency and the Canadian government
will have joint military-civilian forces on alert by late December. For the
first time since the end of the Cold War, a Cabinet task force is devising
emergency disaster responses, and thus some concerns about potentially draconian
Government measures arise. Senators Frank Church and Charles Mathias wisely
pointed out in a 1973 report that emergency powers ``remain a potential source
of virtually unlimited power for a President should he choose to activate
them.''
There is also a risk of underreaction and underpreparation. Sensibly
anticipating something like a bad earthquake or massive hurricane seems prudent.
Some people have lived without electricity for prolonged periods of time, for
example, for six weeks in Quebec two winters ago. Water also is a precious
resource, as a million Quebecois who were nearly evacuated learned. However,
fundamental differences exist between Y2K preparedness and hurricane
preparedness. The Y2K transition will occur worldwide (and even in space).
Hurricanes and tornados are localized, and experience over many years has given
us a reasonably accurate picture of the extent of what typically happens. But we
have little past experience with Y2K-like transitions.
It is not uncommon for officials to assure the public that things are under
control. People look to leaders for reassurance, and this is a natural response.
Under normal circumstances, such statements are no more disturbing than any
other law or regulation. However, calling out troops and declaring a national
emergency are plans that deserve additional scrutiny and public debate. In a
worst-case scenario of looting and civil unrest, the involvement of the military
in urban areas could extend to martial law, the suspension of due-process
rights, and seizures of industrial or personal property. U.S. Defense Department
regulations let the military restore ``public order when sudden and unexpected
civil disturbances, disaster, or calamities seriously endanger life and property
and disrupt normal governmental functions."
It might be more reassuring if discussions were happening in public -- but
some critical meetings happen behind closed doors. Increasingly, legislators are
discussing details about Y2K only in classified sessions, and a new law that had
overwhelming bipartisan support in Congress bars the public from attending
meetings of the White House's Y2K council. A partial antidote for uncertainty is
the usual one: increased openness and objective scrutiny. U.S. Supreme Court
Justice Louis Brandeis said it well: ``Sunlight is the best disinfectant."
Declan McCullagh is the Washington bureau chief for Wired News. He writes
frequently about Y2K. PGN is PGN.
=======================================================
Inside Risks 107, CACM 42, 5, May 1999
As I write this, the alarmist reports about Y2K are being replaced with more
comforting statements. Repeatedly, I hear, ``We have met the enemy and fixed the
bugs." I would find such statements comforting if I had not heard them before
when they were untrue. How often have you seen a product, presumably
well-tested, sent to users full of errors? By some estimates, 70% of first fixes
are not correct. Why should these fixes, made to old code by programmers who are
not familiar with the systems, have a better success rate?
The Y2K mistake would never have been made if programmers had been properly
prepared for their profession. There were many ways to avoid the problem without
using more memory. Some of these were taught 30 years ago and are included in
software design textbooks. The programmers who wrote this code do not have my
confidence, but we are now putting a lot of faith in many of the same
programmers. Have they been re-educated? Are they now properly prepared to fix
the bugs or to know if they have fixed them? In discussing this problem with a
variety of programmers and engineers, I have heard a few statements that strike
me as unprofessional ``urban folklore". These statements are false, but I have
heard each of them used to declare victory over a Y2K problem.
Myth 1: ``Y2K is a software problem. If the hardware is not programmable,
there is no problem."
Obviously, hardware that stores dates can have the same problems.
Myth 2: If the system does not have a real-time clock, there is no Y2K
problem.
Systems that simply relay a date from one system with a clock to other systems can have problems.
Myth 3: If the system does not have a battery to maintain date/time during a power outage, there can be no Y2K problem.
Date information may enter the system from other sources and cause problems.
Myth 4: If the software does not process dates, there can be no Y2K problem.
The software may depend for data on software that does process dates such as
the operating system or software in another computer.
Myth 5: Software that does not need to process dates is ``immune" to Y2K
problems.
Software obtained by ``software re-use" may process dates even though it need
not do so.
Myth 6: Systems can be tested one-at-a-time by specialized teams. If each
system is fixed, the combined systems will work correctly.
It is possible to fix two communicating systems for Y2K so that each works
but they are not compatible. Many of the fixes today simply move the 100-year
window. Not only will the problem reappear when people are even less familiar
with the code, but two systems that have been fixed in this way may not be
compatible when they communicate. Where two such systems communicate, each may
pass tests with flying colors, but ...
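To make the incompatibility concrete, here is a small sketch in Python (the pivot years are hypothetical): each system expands 2-digit years against its own 100-year window, each passes its own tests, and the two disagree on every year between the pivots.

    # Two "windowing" fixes with different pivot years -- an illustrative sketch.
    def expand(yy: int, pivot: int) -> int:
        """Expand a 2-digit year: values below the pivot map to 20xx, the rest to 19xx."""
        return 2000 + yy if yy < pivot else 1900 + yy

    def system_a(yy: int) -> int:
        return expand(yy, 30)   # System A's window: 00-29 are in the 2000s

    def system_b(yy: int) -> int:
        return expand(yy, 50)   # System B's window: 00-49 are in the 2000s

    for yy in (10, 30, 49, 75):
        print(yy, system_a(yy), system_b(yy))
    # The years 30-49 come out as 1930-1949 on one system and 2030-2049 on the other,
    # so data exchanged between the two "fixed" systems is silently misinterpreted.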
Myth 7: If no date dependent data flows in or out of a system while it is
running, there is no problem.
Date information may enter the system on EPROMs, diskettes, etc., during a
build.
Myth 8: Date stamps in files don't matter.
Some of the software in the system may process the date stamps, e.g. to make
sure that the latest version of a module is being used, when doing backups, etc.
Myth 9: Planned testing, using ``critical dates" is adequate.
As Harlan Mills used to say, ``Planned testing is a source of anecdotes, not
data". Programmers who overlook a situation or event may also fail to test it.
Myth 10: You can rely on keyword scan lists.
Companies are assembling long lists of words that may be used as identifiers
for date-dependent data. They seem to be built on the assumption that
programmers are monolingual English speakers who never misspell a word.
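As a small illustration of that weakness (the identifier names below are hypothetical), a keyword scan in Python catches only the obvious names; abbreviations, misspellings, and non-English identifiers slip straight through:

    # Illustrative sketch of a keyword scan for date-related identifiers.
    KEYWORDS = ("date", "year")
    identifiers = ["start_date", "birth_year", "expiry_dt", "jahr_von", "daet_fld", "anno"]

    flagged = [name for name in identifiers if any(k in name.lower() for k in KEYWORDS)]
    missed = [name for name in identifiers if name not in flagged]
    print("flagged:", flagged)   # ['start_date', 'birth_year']
    print("missed: ", missed)    # the abbreviation, the misspelling, and the German and Italian names are not caught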
As long as I hear such statements from those who are claiming victory over
Y2K, I remain concerned. I was a sceptic when the gurus were predicting
disaster, and I remain a sceptic now that they are claiming success.
David L. Parnas, P.Eng., holds the NSERC/Bell Industrial Research Chair in
Software Engineering, and is Director of the Software Engineering Programme in
the Department of Computing and Software at McMaster University, Hamilton,
Ontario, Canada - L8S 4L7.
=======================================================
Inside Risks 106, CACM 42, 4, Apr 1999
The previously incomprehensible increases in communication capacities now
appearing almost daily may be enabling a quantum leap in one of the
ultimately most promising, yet underfunded, areas of scientific
research--teleportation. But to an extent even greater than with many other
facets of technology, funding shortfalls in this area can carry with them
serious risks to life, limb, and various other useful body parts.
Teleportation, also known as matter transmission (MT), has a long history of
experimentation, largely by independent researchers (use of pejorative terms
such as ``mad scientists'' in reference to these brilliant early innovators is
usually both unwarranted and unfair). Their pioneering work established the
theoretical underpinnings for matter transmission, and also quickly illustrated
the formidable hurdles and risks associated with the practical implementation of
teleportation systems.
Early studies suggested that physical matter could be teleported between
disparate spatial locations through mechanisms such as enhanced quantum
probability displacement, matter-energy scrambling, or artificial wormholes.
Unfortunately, these techniques proved difficult to control precisely and had
unintended side-effects (see Distant Galactic Detonations from Unbalanced
Space-Time MT Injection Nodes, Exeter and Meacham, 1954).
During this period, a major teleportation system risk factor relating to
portal environmental controls was first clearly delineated, in the now classic
work by the late Canadian MT researcher André Delambre (Pest Control of
Airborne Insects in Avoidance of MT Matrix Reassembly Errors, 1958), later
popularized as the film ``The Fly and I'' (1975).
Problems such as these led to the development of the MT technology still
currently considered to be the most promising, officially referred to as
``Matter Displacement via Dedicated Transmission, Replication, and
Dissolution,'' but more commonly known as ``Copy, Send, and Burn.'' In this
technique, an exact scan of the transmission object (ranging from an inorganic
item to a human subject) records all aspects of that object to the subatomic
level, including all particle positions and charges. The amount of data
generated by this process is vast, so data compression techniques are often
applied at this stage (however, ``lossy'' compression algorithms are to be
avoided in MT applications, particularly when teleporting organic materials).
Next, the data is transmitted to the distant target point for reassembly,
where an exact duplicate of the original object is recreated from locally
available carbon-based or other molecular materials (barbecue charcoal
briquettes have often been used as an MT reconstruction source matrix with
reasonably good results).
After verifying successful reconstruction at the target location, the final
step is to disintegrate the original object, leaving only the newly assembled
duplicate, which is completely indistinguishable from the original in all
respects. It is strongly recommended that the verification step not be shortcut
in any manner. Attempts to use various cyclic-redundancy checks, Reed-Solomon
coding, and other alternatives to (admittedly time-consuming) bit-for-bit
verification of the reassembled objects have yielded some unfortunate
situations, several of which have become all too familiar through tabloid
articles. Some early MT researchers had advocated omission of the final
``dissolution'' step in the teleportation process, citing various metaphysical
concerns. However, the importance of avoiding the long-term continuance of both
the source and target objects was clearly underscored in the infamous ``Thousand
Clowns'' incident at the Bent Fork National Laboratory in 1979. For similar
reasons, use of multicast protocols for teleportation is contraindicated except
in highly specialized (and mostly classified) environments.
The enormous amounts of data involved with MT have always made the
availability and cost of transmission bandwidth a severe limiting factor. But
super-capacity single and multimode fiber systems, the presence of higher speed
routers, and other developments, have rendered these limitations nearly
obsolete.
There are still serious concerns, of course. It is now assumed that
Internet-based TCP/IP protocols will be used for most MT applications, the
protests of the X.400 Teleportation Study Committee notwithstanding. Protocol
design is critical. Packet fragmentation can seriously degrade MT performance
parameters, and UDP protocols are not recommended except where robust error
correction and retransmission processes are in place. Incidents such as running
out of disk spool space or poor backup procedures are intolerable in production
teleportation networks. The impact of web ``mirror sites'' on MT operational
characteristics is still a subject of heated debate.
We've come a long way since the early MT days where 300-bps 103-type modems
would have required centuries to transmit a cotton swab between two locations.
With the communications advances now at our disposal, it appears likely that, so
long as we take due consideration of the significant risks involved, the promise
of practical teleportation may soon be only a phone call away.
Lauren Weinstein (lauren@vortex.com) of Vortex Technology
(http://www.vortex.com) is the Moderator of the PRIVACY Forum. He avoids being a
teleportation test subject.
=======================================================
Inside Risks 105, CACM 42, 3, Mar 1999
It's obvious that our modern society is becoming immensely dependent on
stored digital information, a trend that will only increase dramatically. Ever
more aspects of our culture that have routinely been preserved in one or another
analog form are making transitions into the digital arena. Consumers who have
little or no technical expertise are now using digital systems as replacements
for all manner of traditionally analog storage. Film-based snapshots are
replaced by digital image files. Financial records move from the file cabinet to
the PC file system.
But whereas we now have long experience with the storage characteristics,
lifetimes, and failure modes of traditional media such as newsprint, analog
magnetic tape, and film stock, such is not the case with the dizzying array of
new digital storage technologies that seem to burst upon the scene at an ever
increasing pace. How long will the information we entrust to these systems
really be safe and retrievable in a practical manner? Do the consumer users of
these systems understand their real-world limitations and requirements?
From magnetic disks to CDs, from DVD-ROMs to high density digital tape, we're
faced with the use of media whose long-term reliability can be estimated only
through the use of accelerated testing methodologies, themselves often of
questionable reliability. And before we've even had a chance to really
understand one of these new systems, it's been rendered obsolete by the next
generation with even higher densities and speeds.
Even if we assume the physical media themselves to be reasonably stable over
time, the availability of necessary hardware and software to retrieve
information from media that are no longer considered ``current'' can be very
difficult to assure. Have you tried to get a file from an 8-inch CP/M floppy
recently? There are already CD-ROMs that are very difficult to read because the
necessary operating system support is obsolete and largely unavailable.
Of course technology marches onward, and the capabilities of the new systems
to store ever-increasing amounts of data in less and less space is truly
remarkable. A big advantage of digital systems is that it's possible, at least
theoretically, to copy materials to newer formats as many times as necessary,
without change or loss of data -- a sort of digital immortality.
But such a scenario works only if the users of the systems have the technical
capability to make such copies, and an understanding of the need to do so on an
ongoing basis. While it can be argued that the ultimate responsibility for
keeping tabs on data integrity and retrievability rests on the shoulders of the
user, there has been vastly insufficient effort by the computer industry to
educate consumers regarding the realities of these technologies.
Another issue is that when a digital medium fails, it frequently does so
catastrophically. The odds of retrieving usable audio from a 40-year-old 1/4-inch
analog magnetic tape are sometimes far higher than for a DAT (Digital Audio
Tape) only a few years old stored under suboptimal conditions. Digital systems
have immense capacities, but their tight tolerances present new vulnerabilities
as well, which need to be understood by their often mostly non-technical users.
Many consumers who are now storing their important data in digital form are
completely oblivious to the risks. Many don't even do any routine backups, and
ever increasing disk capacities have tended to exacerbate this trend. The belief
that ``if it's digital, it's reliable'' is taken as an article of faith -- an
attitude reinforced by advertising mantras.
We need to appreciate the viewpoint of the increasing number of persons who
treat PCs as if they were toasters. The design of OS and application software
systems doesn't necessarily help matters. Even moving files from an old PC to a
new one can be a mess for the average consumer under the popular OS
environments. Many manufacturers quickly cease fixing bugs in hardware drivers
and the like after only a few years. It's almost as if they expect consumers to
simply throw out everything and start from scratch every time they upgrade. The
technical support solution of ``reinstall everything from the original
installation disk'' is another indication of the ``disposable'' attitude present
in some quarters of the industry.
If we expect consumers to have faith in digital products, there must be a
concerted effort to understand consumer needs and capabilities. Hardware and
software systems must be designed with due consideration to backwards
compatibility, reliability, and long-term usability by the public at large.
Marketing hype must not be a substitute for honest explanations of the
characteristics of these systems and their proper use. Failure in this regard
puts at risk the good will of the consumers who hold the ultimate power to
control the directions that digital technology will be taking into the future.
Lauren Weinstein (lauren@vortex.com) of Vortex Technology
(http://www.vortex.com) is the Moderator of the PRIVACY Forum.
=======================================================
Inside Risks 104, CACM 42, 2, Feb 1999
Closed-source proprietary software, which is seemingly the lifeblood
of computer system entrepreneurs, tends to have associated risks:
* Unavailability of source code reduces on-site adaptability and
repairability.
* Inscrutability of code prohibits open peer analysis (which otherwise might
improve reliability and security), and masks the reality that state-of-the-art
development methods do not produce adequately robust systems.
* Lack of interoperability and composability often induces inflexible
monolithic solutions.
* Where software bloat exists, it often hinders subsetting.
A well-known (but certainly not the only) illustration of these risk factors
is Windows NT 5.0. It reportedly will have 48 million lines of source code
in the kernel alone, plus 7.5 million lines of associated test code.
Unfortunately, the code on which security, reliability, and survivability of
system applications depend is essentially all 48M lines plus
application code. (Recall the divide-by-zero in an NT application that brought
the Yorktown Aegis missile cruiser to a halt: RISKS, vol. 19, no. 88.) In
critical applications, an enormous amount of untrustworthy code may have to be
taken on faith.
Open-source software offers an opportunity to surmount these risks of
proprietary software. ``Open Source'' is registered as a certification mark,
subject to the conditions of The Open Source Definition
http://www.opensource.org/osd.html, which has various explicit requirements:
unrestricted redistribution; distributability of source code; permission for
derived works; constraints on integrity; nondiscriminatory practices regarding
individuals, groups, and fields of endeavor; transitive licensing of rights;
context-free licensing; and noncontamination of associated software. For
background, see the opensource.org Website, which cites Gnu GPL, BSD Unix, the X
Consortium, MPL, and QPL as conformant examples. Additional useful sources
include the Free Software Foundation (http://www.gnu.org). The Netscape browser
(an example of open, but proprietary software), Perl, Bind, the Gnu system with
Linux, Gnu Emacs, Gnu C, GCC, etc., are further examples of what can be done.
Also, Diffie-Hellman is now in the public domain.
In many critical applications, we desperately need operating systems and
applications that are meaningfully robust, where ``robust'' is an
intentionally inclusive term embracing meaningful security, reliability,
availability, and system survivability, in the face of a wide and realistic
range of potential adversities -- which might in some cases include hardware
faults, software flaws, malicious and accidental exploitation of systemic
vulnerabilities, environmental hazards, unfortunate animal behaviors, etc.
We need significant improvements on today's software, both open-source and
proprietary, in order to overcome myriad risks (see the RISKS archives
(http://catless.ncl.ac.uk/Risks/) or my Illustrative Risks document
(http://www.csl.sri.com/~neumann/)). When commercial systems are not adequately
robust, we should consider how sound open-source components might be composed
into demonstrably robust systems. This requires an international collaborative
process, open-ended, long-term, far-sighted, somewhat altruistic, incremental,
and with diverse participants from different disciplines and past experiences.
Pervasive adherence to good development practice is also necessary (which
suggests better teaching as well). The process also needs some discipline, in
order to avoid rampant proliferation of incompatible variants. Fortunately,
there are already some very substantive efforts to develop, maintain, and
support open-source software systems, with significant momentum. If those
efforts can succeed in producing demonstrably robust systems, they will also
provide an incentive for better commercial systems.
We need techniques that augment the robustness of less robust components,
public-key authentication, cryptographic integrity seals, good cryptography,
trustworthy distribution paths, trustworthy descriptions of the provenance of
individual components and who has modified them. We need detailed evaluations of
components and the effects of their composition (with interesting opportunities
for formal methods). Many problems must be overcome, including defenses against
Trojan horses hidden in systems, compilers, evaluation tools, etc. -- especially
when perpetrated by insiders. We need providers who give real support;
warranties on systems today are mostly very weak. We need serious incentives
including funding for robust open-source efforts. Despite all the challenges,
the potential benefits of robust open-source software are worthy of
considerable collaborative effort.
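As one concrete fragment of that wish list, here is a minimal Python sketch of checking an integrity seal: comparing a component's SHA-256 digest against a digest obtained over a separately trusted path. (The file name and the published digest are placeholders, and a real scheme would add digital signatures and provenance records.)

    import hashlib, sys

    def sha256_of(path: str) -> str:
        """Hash a component file in chunks, so large distributions are handled gracefully."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage (both arguments are hypothetical): python check_seal.py component.tar.gz <published-sha256>
        path, published_digest = sys.argv[1], sys.argv[2]
        if sha256_of(path) == published_digest:
            print("integrity seal matches")
        else:
            print("MISMATCH: do not install")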
========================================================
Inside Risks 103, Comm. ACM 42, 1, Jan 1999
The public telephone network (PTN) in the U.S. is changing---partly in
response to changes in technology and partly due to deregulation. Some changes
are for the better: lower prices with more choices and services for consumers.
But there are other consequences and, in some ways, PTN trustworthiness is
eroding. Moreover, this erosion can have far-reaching consequences. Critical
infrastructures and other networked information systems rely today on the PTN
and will do so for the foreseeable future.
Prior to the 1970's, most of the U.S. telephone network was run by one
company, AT&T. AT&T built and operated a network with considerable
reserve capacity and geographically diverse, redundant routings, often at the
explicit request of the federal government. Many telephone companies compete in
today's market, so cost pressures have become more pronounced. Reserve capacity
and rarely-needed emergency systems are now sacrificed on the altar of cost. And
new dependencies---hence, new vulnerabilities---are introduced because some
services are being imported from other producers.
Desire to attract and retain market share has led telephone companies to
introduce new features and services. Some new functionality (such as voice menus
within the PTN) relies on call-translation databases and programmable adjunct
processors, which introduce new points of access and, therefore, new points of
vulnerability. Other new functionality is intrinsically vulnerable. CallerID,
for example, is increasingly used by PTN customers, even though the underlying
telephone network is unable to provide such information with a high degree of
assurance. Finally, new functionality leads to more-complex systems, which are
liable to behave in unexpected and undesirable ways.
You might expect that having many phone companies would increase the capacity
and diversity of the PTN. It does, but not as much as one would hope. To lower
their own capital costs, telephone companies lease circuits from each other.
Now, a single errant backhoe can knock out service from several different
companies. And there is no increase in diversity for the consumer who buys
service from many providers. Furthermore, the explicit purchase of diverse
routes is more difficult to orchestrate when different companies must cooperate.
First, the need for the many phone companies to interoperate has itself
increased PTN complexity. Second, competition for local phone service has
necessitated creating databases (updated by many different telephone companies)
that must be consulted in processing each call, to determine which local phone
company serves that destination.
The increased number of telephone companies along with an increased
multiplexing of physical resources has other repercussions. The cross connects
and multiplexors used to route calls depend on software running in operations
support systems (OSSs). But information about OSSs is becoming less proprietary,
since today virtually anybody can form a telephone company. The vulnerabilities
of OSSs are thus accessible to ever larger numbers of attackers. Similarly, the
SS7 network used for communication between central office switches was designed
for a small, closed community of telephone companies; deregulation thus
increases the opportunities for insider attacks (because anyone can become an
insider by becoming a telephone company). Security by obscurity is not the
solution: network components must be redesigned to provide more security in this
new environment.
To limit outages, telephone companies have turned to newer technologies.
Synchronous Optical Network (SONET) rings, for example, allow calls to continue
when a fiber is severed. But despite the increased robustness provided by SONET
rings, using high-capacity fiber optic cables leads to greater concentrations of
bandwidth over fewer paths, for economic reasons. Failure (or sabotage) of a
single link is thus likely to disrupt service for many customers---particularly
worrisome, because the single biggest cause of telephone outages is cable cuts.
Today's telephone switches---crucial components of the PTN---are quite
reliable. Indeed, a recent National Security Telecommunications Advisory
Committee study found that procedural errors, hardware faults, and software bugs
were roughly equal in magnitude as causes of switch outages. Reducing software
failure to the level of hardware failures is an impressive achievement. But
switch vendors are coming under considerable competitive pressure, and they,
too, are striving to reduce costs and develop features more rapidly, which could
make matters worse.
Fred B. Schneider (Cornell University) and Steven M. Bellovin (AT&T Labs
Research) served on the NRC Computer Science and Telecommunications Board
committee that authored Trust in Cyberspace. Chapter 2 of that report
(see www2.nas.edu/cstbweb/index.html) discusses the eroding trustworthiness of
the PTN. See Comm. ACM 41, 11, 144, November 1998 for a summary of the
report.
========================================================
Inside Risks 102, Comm. ACM 41, 12, December 1998
Hubris is risky: a tautologous claim. But how to recognize it? Phaethon, the
human child of Phoebus the sun god, fed up with being ridiculed, visited his
father to prove his progeniture. Happy to see his son, Phoebus granted him one
request. Phaethon chose to drive the sun-chariot. In Ted Hughes' vivid rendering
of Ovid, Phoebus, aghast, warns him:

``... Be persuaded ...''

He admonished Phaethon to ... avoid careening. The chariot set off, Phaethon
lost control, scorched the earth and ruined the whole day.
Was that vehicle safe? The sun continues to rise each day, so I guess in
Phoebus's hands it is, and in Phaethon's it isn't. The two contrary answers give
us a clue that the question was misplaced: safety cannot be a property of the
vehicle alone. To proclaim the system `safe', we must include the driver, and
the pathway travelled. Consider the Space Shuttle. Diane Vaughan pointed out in
The Challenger Launch Decision that it flies with aid of an
extraordinary organization devised to reiterate the safety and readiness case in
detail for each mission. Without this organization, few doubt there would be
more failures. Safety involves human affairs as well as hardware and software.
If NASA would be Phoebus, who would be Phaethon? Consider some opportunities.
A car company boasts that its new product has more computational power than
was needed to take Apollo to the moon. (Programmers of a different generation
would be embarrassed by that admission.) We may infer that high performance,
physical or digital, sells cars. Should crashworthiness, physical and digital?
Safe flight is impossible in clouds or at night without reliable information
on attitude, altitude, speed, and position. Commercial aircraft nowadays have
electronic displays, which systems are not considered `safety-critical'. Should
we have expected the recent reports of loss of one or both displays, including
at least two accidents? This failure mode did not occur with non-electronic
displays.
In 1993, Airbus noted that the amount of airborne software in new aircraft
doubled every two years (2MLOC for the A310, 1983; 4M for the A320, 1988; 10M
for the A330/340, 1992). Has the ability to construct adequately safe software
increased by similar exponential leaps? One method, extrapolation from the
reliability of previous versions, does not apply: calculations show that testing
or experience cannot increase one's confidence to the high level required. If
not by this method, then how?
If there's a deadly sin of safety-critical computing, Hubris must be
one. But suppose we get away with it. What then? In Design Paradigms,
Henry Petroski reports a study suggesting that the first failure of a new bridge
type seems to occur some 30 years after its successful introduction. He offers
thereby the second sin, Complacency. It is hard to resist suggesting a
first axiom of safety-critical sinning --
  Hubris & ~Failure ~> Complacency

where ~> denotes the temporal ``leadsto'' operator (usually depicted as a
squiggly arrow). (Compare Vaughan's starker concept, the normalization of deviance.)
Why might engineers used to modern logic look at such classical themes?
Consider what happened to Phaethon. Jove, the lawmaker, acted:
``With a splitting crack of thunder he lifted a bolt ...''

Then as now, although more the thousand cuts than the thunderbolt. Pursuant
to an accident, Boeing is involved in legal proceedings concerning, amongst
other things, error messages displayed by its B757 on-board monitoring systems;
Airbus is similarly involved in Japan concerning a specific design feature of
its A300 autopilot. Whatever the merits of so proceeding, detailed technical
design is coming under increasing legal scrutiny.
But what should we have expected? Recall: safety involves human affairs, of
which the law is an instrument. This much hasn't changed since Ovid. To imagine
otherwise was, perhaps, pure Hubris.
Peter Ladkin (ladkin@rvs.uni-bielefeld.de) is a professor at the University
of Bielefeld, Germany, and a founder of Causalis Limited. Ted Hughes' Tales
from Ovid is published by Faber and Faber.
=========================================================
Inside Risks 101, CACM 41, 11, November 1998
When today's networked information systems (NISs) perform badly or don't work
at all, lives, liberty, and property can be put at risk [1]. Interrupting
service can threaten lives and property; destroying information or changing it
improperly can disrupt the work of governments and corporations; disclosing
secrets can embarrass people or harm organizations.
For us---as individuals or a nation---to become dependent on NISs, we will
want them to be trustworthy. That is, we will want them to be designed
and implemented so that not only do they work but also we have a basis to
believe that they will work, despite environmental disruption, human user and
operator errors, and attacks by hostile parties. Design and implementation
errors must be avoided or eliminated, or else the system must somehow compensate for
them.
Today's NISs are not very trustworthy. A recent National Research Council
CSTB study [2] investigated why and what can be done about it, observing:
* Little is known about the primary causes of NIS outages today or about how
that might change in the future. Moreover, few people are likely to understand
an entire NIS much less have an opportunity to study several, and consequently
there is remarkably poor understanding of what engineering practices actually
contribute to NIS trustworthiness.
* Available knowledge and technologies for improving trustworthiness are
limited and not widely deployed. Creating a broader range of choices and more
robust tools for building trustworthy NISs is essential.
The study offers a detailed research agenda with hopes of advancing the
current discussions about critical infrastructure protection from matters of
policy, procedure, and consequences of vulnerabilities towards questions about
the science and technology needed for implementing more-trustworthy NISs.
Why is it so difficult to build a trustworthy NIS? Beyond well known (to
RISKS readers) difficulties associated with building any large
computing system, there are problems specifically associated with satisfying
trustworthiness requirements. First, the transformation of informal
characterizations of system-level trustworthiness requirements into precise
requirements that can be imposed on system components is beyond the current
state of the art. Second, employing ``separation of concerns'' and using only
trustworthy components are not sufficient for building a trustworthy
NIS---interconnections and interactions of components play a significant role in
NIS trustworthiness.
One might be tempted to employ ``separation of concerns'' and hope to treat
each of the aspects of trustworthiness (e.g., security, reliability, ease of
use) in isolation. But the aspects interact, and care must be taken to ensure
that one is not satisfied at the expense of another. Replication of components,
for example, can enhance reliability but may complicate the operation of the
system (ease of use) and may increase exposure to attack (security) due to the
larger number of sites and the vulnerabilities implicit in the protocols to
coordinate them. Thus, research aimed at enhancing specific individual aspects
of trustworthiness courts irrelevance. And research that is bound by existing
subfield demarcations can actually contribute more to the trustworthiness
problem than to its solution.
Economics dictates the use of commercial off-the-shelf (COTS) components
wherever possible in building an NIS, which means that system developers have
neither control nor detailed information about many of their system's
components. Economics also increasingly dictates the use of system components
whose functionality can be changed remotely while the system is running. These
trends create needs for new science and technology. For example, the substantial
COTS makeup of an NIS, the use of extensible components, the expectation of
growth by accretion, and the likely absence of centralized control, trust, or
authority, demand a new look at security: risk mitigation rather than risk
avoidance, add-on technologies and defense in depth, and relocation of
vulnerabilities rather than their elimination.
Today's systems could surely be improved by using what is already known. But,
according to the CSTB study, doing only that will not be enough. We lack the
necessary science and technology base for building NISs that are sufficiently
trustworthy for controlling critical infrastructures. Therefore, the message of
the CSTB study is a research agenda and technical justifications for studying
those topics.
1. P.G. Neumann, Protecting the Infrastructures, Comm.ACM 41, 1, 128
(Inside Risks) gives a summary of the President's Commission on Critical
Infrastructure Protection (PCCIP) report, which discusses the dependence of
communication, finance, energy distribution, and transportation on NISs.
2. Trust in Cyberspace, the final report for the Information Systems
Trustworthiness study by the Computer Science and Telecommunications Board
(CSTB) of the National Research Council, can be accessed at
http://www2.nas.edu/cstbweb/index.html.
Cornell University Professor Fred B. Schneider chaired the CSTB study
discussed in this column.
=====================================================
Inside Risks 100, CACM 41, 10, October 1998
Some universities and other institutions are offering or contemplating
courses to be taken remotely via Internet, including a few with degree programs.
There are many potential benefits: teachers can reuse collaboratively prepared
course materials; students can schedule their studies at their own convenience;
employees can participate in selected subunits for refreshers; and society might
benefit from an overall increase in literacy -- and perhaps even computer
literacy. On-line education inherits many of the advantages and disadvantages of
textbooks and conventional teaching, but also introduces some of its own.
People involved in course preparation quickly discover that creating
high-quality teaching materials is labor intensive, and very challenging. To be
successful, on-line instruction requires even more organization and forethought
in creating courses than otherwise, because there may be only limited
interactions with students, and it is difficult to anticipate all possible
options. Thoughtful planning and carefully debugged instructions are essential
to make the experience more fulfilling for the students. Furthermore, for many
kinds of courses, on-line materials must be updated regularly to remain timely.
There are major concerns regarding who owns the materials (some universities
claim proprietary rights to all multimedia courseware), with high likelihood
that materials will be purloined or emasculated. Some altruism is desirable in
exactly the same sense that open-source software has become such an important
driving force. Besides, peer review and ongoing collaborations among instructors
could lead to continued improvement of public-domain course materials.
Administrators might seek cost-saving measures in the common quest for easy
answers, less-qualified instructors, mammoth class sizes, and teaching materials
prepared elsewhere.
Loss of interactions among students and instructors is a serious potential
risk, especially if the instructor does not realize that students are not
grasping what is being taught. This can be partially countered by including some
live lectures or videoteleconferenced lectures, and requiring instructors and
teaching assistants to be accessible on a regular basis, at least via e-mail.
Multicast course communications and judicious use of Web sites may be
appropriate for dealing with an entire class. Inter-student contacts can be
aided by chat rooms, with instructors hopefully trying to keep the discussions
on target. Also, students can be required to work in pairs or teams on projects
whose success is more or less self-evident. However, reliability and security
weaknesses in the infrastructure suggest that students will find lots of excuses
such as ``The Internet ate my e-mail'' -- variants on the old ``My dog ate my
homework'' routine.
E-education may be better for older or more disciplined students, and for
students who expect more than just being entertained. It is useful for stressing
fundamentals as well as helping students gain real skills. But only certain
types of courses are suitable for on-line offerings -- unfortunately,
particularly those courses that emphasize memorization and regurgitation, or
that can be easily graded mechanically by evaluation software. Such courses are
highly susceptible to cheating, which can be expected to occur rampantly
whenever grades are the primary goal, used as a primary determinant for jobs and
promotions. Cheating tends to penalize only the honest students. It also
seriously complicates the challenge of meaningful professional certification
based primarily on academic records.
Society may find that electronic teaching loses many of the deeper advantages
of traditional universities -- where smaller classrooms are generally more
effective, and where considerable learning typically takes place outside of
classrooms. But e-education may also force radical transformations on
conventional classrooms. If we are to make the most out of the challenges, the
advice of Brynjolfsson and Hitt (Beyond the Productivity Paradox, CACM
41, 8, 11-12, August 1998) would suggest that new approaches to education
will be required, with a ``painful and time consuming period of reengineering,
restructuring and organization redesign...''
There is still a lack of experience and critical evaluation of the benefits
and risks of such techniques. For example, does electronic education scale well
to large numbers of students in other than rote-learning settings? Can a strong
support staff compensate for many of the potential risks? On the whole, there
are some significant potential benefits, for certain types of courses. I hope
that some of the universities and other institutions already pursuing remote
electronic education will evaluate their progress on the basis of actual student
experiences (rather than just the perceived benefits to the instructors), and
share the results openly. Until then, I vastly prefer in-person teaching coupled
with students who are self-motivated.
=======================================================
Inside Risks 99, CACM 41, 9, September 1998
Somewhere in the wide spectrum from doomsday hype to total disdain lie the
realities of the Year-2000 problem. Some computer systems and infrastructures
will be OK, but others could have major impact on our lives. We won't know until
it happens. At any rate, here is a summary of where we stand with 16 months
left.
Many departments and agencies of the U.S. Government are lagging badly in
their efforts to fix their critical computers. The Departments of
Transportation, Defense, State, Energy, and Health and Human Services are
particularly conspicuous at the bottom of Congressman Stephen Horn's report
card. The Social Security Administration seems to be doing better -- although
its checks are issued by the Treasury Department, whose compliance efforts Horn
labelled ``dismal.''
The critical national infrastructures discussed in our January and June 1998
columns are increasingly dependent on information systems and the Internet.
Public utilities are of concern, particularly among smaller companies. Aviation
is potentially at risk, with its archaic air-traffic control systems. Railway
transportation is also at risk. There are predictions that the U.S. railroad
system will fail; nationwide control is now highly centralized, and manual
backup systems for communications, switching and power have all been discarded.
Financial systems are reportedly in better shape -- perhaps because the risks
are more tangible.
Many smaller corporations are slow in responding, hoping that someone else
will take care of the problem. Developers of many application software packages
and indeed some operating systems are also slow. Replacing old legacy systems
with new systems is no guarantee, as some newer systems are also noncompliant.
In addition, although some systems may survive 1 Jan 2000, they may fail on 29
Feb 2000 or 1 Mar 2000 or 1 Jan 2001, or some other date. Also, even if a system
tests out perfectly when dates are advanced to 1 Jan 2000, there are risks that
it will not work in conjunction with other systems when that date actually
arrives. Even more insidious, some systems that tested successfully with
Y2K-crossing dates subsequently collapsed when the dates were set back to their
correct values, because of the backward discontinuity!
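To make the failure mode concrete, here is a minimal Java sketch (mine, not code from any of the systems discussed, with an invented class name and scenario) of the classic two-digit-year comparison at the heart of such failures:

  // Hypothetical sketch of the two-digit-year comparison bug. Many legacy
  // records stored only the last two digits of the year, so 2000 (stored as 00)
  // compares as *earlier* than 1999 (stored as 99).
  public class TwoDigitYearBug {
      static boolean isExpired(int currentYY, int expiryYY) {
          return currentYY > expiryYY;   // wraps silently at the century boundary
      }
      public static void main(String[] args) {
          // An item that expired in 1999, checked in 2000 (stored as 00):
          System.out.println(isExpired(0, 99));   // prints false -- wrongly "not expired"
          // The same comparison with full four-digit years behaves correctly:
          System.out.println(2000 > 1999);        // prints true
      }
  }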
Estimates are widely heard about the cost of analysis, prevention, and repair
exceeding one trillion dollars. Other estimates suggest that the legal costs
could reach the same rather astounding level -- perhaps merely reflecting the
extent to which we have become a litigious society. There is a risk that some of
the fly-by-night Y2K companies will pack up their tents and vanish immediately
after New Year's Eve 1999, to avoid lawsuits. There are also some efforts to put
caps on liability, in some cases as an incentive to share information.
Although a few hucksters are hawking quick fixes, there are in general no
easy answers. There are also very serious risks to national, corporate, and
personal well-being associated with letting other people fix your
software -- with rampant opportunities for Trojan horses, sloppy fixes, and
theft of proprietary code. Considerable Y2K remediation work is being done abroad.
Ultimately, Y2K is an international problem with a particularly nonnegotiable
deadline and ever increasing interdependence on unpredictable entities. Reports
from many other countries are not encouraging. Overall, any nation or
organization that is not aggressively pursuing its Y2K preparedness is
potentially at risk. Also, where pirated software abounds (as in China and
Russia), official fixes may not be accessible.
One of the strangest risks of all is that even if all of the anticipatory
preventive measures were to work perfectly beyond everyone's expectations,
engendering no adverse Y2K effects, the media hype and general paranoia could
nevertheless result in massive panic and hoarding, including banks running out
of cash reserves.
The Y2K problem is the result of bad software engineering practice and a
serious lack of foresight. Y2K has been largely ignored until recently, despite
having been recognized long ago: the 1965 Multics system design used a 71-bit
microsecond calendar-clock. (Java does even better, running until the year
292271023.) Innovative solutions often stay out of the mainstream unless they
are performance related. For example, Multics contributed some major advances
that would be timely today in other systems, although Ken Thompson carried some
of those concepts into Unix.
Effects of noncompliant systems have the potential of propagating to other
systems, as we have seen here before. Local testing is not adequate, and
pervasive testing is often impossible. There is little room for complacency in
the remaining months. Oddly, the Y2K problem is relatively simple compared to
the ubiquitous security and software engineering problems -- which seem less
pressing because there is no fixed doomsdate. Perhaps when 2000 has passed, we
will be able to focus on the deeper problems.
Peter Neumann
(http://www.csl.sri.com/neumann/) chairs the ACM Committee on Computers and
Public Policy and moderates the on-line Risks Forum.
=====================================================
Inside Risks 98, CACM 41, 8, August 1998
Recent proposals to license software engineers have heightened the uneasy
tension between computer scientists and software engineers. Computer scientists
tend to believe that certification is unnecessary and that licensing would be
harmful because it would lock in minimal standards in a changing field of rising
standards. Software engineers tend to believe that certification is valuable and
licensing is inevitable; they want significant changes in the curriculum for
professional software engineers. Frustrated, a growing number of software
engineers want to split off from computer science and form their own academic
departments and degree programs. Noting other dualities such as chemical
engineering and chemistry, they ask, why not software engineering and computer
science? [1] Must software engineers divorce computer scientists to achieve
this?
No such rift existed in the 1940s and 1950s, when electrical engineers and
mathematicians worked cheek by jowl to build the first computers. In those days,
most of the mathematicians were concerned with correct execution of algorithms
in application domains. A few were concerned with models to define precisely the
design principles and to forecast system behavior.
By the 1960s, computer engineers and programmers were ready for marriage,
which they consummated and called computer science. But it was not an easy
union. Computer scientists, who sought respect from traditional scientists and
engineers for their discipline, loathed a lack of rigor in application
programming and feared a software crisis. Professional programmers found little
in computer science to help them make practical software dependable and easy to
use. Software engineers emerged as the peacemakers, responding to the needs of
professional programming by adapting computer science principles and engineering
design practice to the construction of software systems.
But the software engineers and computer scientists did not separate or
divorce. They needed each other. Technologies and applications were changing too
fast. Unless they communicated and worked together, they could make no progress
at all. Their willingness to experiment helped them bridge a communication gap:
Software engineers validated new programming theories and computer scientists
validated new design principles.
Ah, but that was a long time ago. Hasn't the field matured enough to permit
the two sides to follow separate paths successfully? I think not: the pace of
technological change has accelerated. Even in the traditional technologies such
as CPU, memory, networks, graphics, multimedia, and speech, capacity seems to
double approximately every 18 months while costs decline. Each doubling opens
new markets and applications. New fields form at interdisciplinary boundaries --
examples:
* New computing paradigms with biology and physics including DNA, analog
silicon, nanodevices, organic devices, and quantum devices.
* Internet computations mobilizing hundreds of thousands of computers.
* Neuroscience, cognitive science, psychology, and brain models.
* Large scale computational models for cosmic structure, ocean movements,
global climate, long-range weather, materials properties, flying aircraft,
structural analysis, and economics.
* New theories of physical phenomena by ``mining'' patterns from very large
(multiple) datasets.
It is even more important today than in the past to keep open the lines of
communication among computer scientists, software engineers, and applications
practitioners. Even if they do not like each other, they can work together from
a common interest in innovation, progress, and solution of major problems. The
practices of experimentation are crucial in the communication process. A recent
study suggests that such practices could be significantly improved: Zelkowitz
and Wallace found that fewer than 20% of 600 papers advocating new software
technologies offered any kind of credible experimental evidence in support of
their claims [2]. (See also [3].)
Separation between theory and engineering has succeeded in other
disciplines because they have matured to the point where they communicate well
among their science, engineering, and applications branches. A similar
separation would be a disaster for computer science. Spinning off software
engineers would cause communication between engineers, theorists, and
application specialists to stop. Communication, not divorce, is the answer.
1. Parnas, D., Software Engineering: An Unconsummated Marriage, Communications
of the ACM, September 1997, 128 (Inside Risks).
Peter Denning teaches computer science and helps engineers become better
designers. He is a former President of ACM and recently chaired the Publications
Board while it developed the ACM digital library. (Computer Science Department,
485, George Mason University, Fairfax, VA 22030; 703-993-1525; pjd@gmu.edu.)
============================================================
Inside Risks 97, CACM 41, 7, July 1998
Certain U.S. Senators have strongly resisted efforts to allow laptops on the
Senate floor. Whereas a few Senators and Representatives have a good
understanding of computer-communication technology, many others do not. This
month, we examine some of the benefits and risks that might result from the
presence of laptops on the Senate and House floors and in hearing rooms. In the
following enumeration, ``+'' denotes potential advantages, ``--'' denotes
possible disadvantages or risks, and ``='' denotes situations whose relative
merits depend on various factors. Benefits and risks are both more or less
cumulative as we progress from stand-alone to networked laptops.
The three configurations to consider are isolated individual laptops, locally
networked laptops, and laptops with full Internet access.
Although there are other potential benefits and risks, this summary considers
some of the primary issues. Despite any technopessimism that Inside Risks
readers may have developed over the past 8 years, I believe that many of the
risks can be avoided -- including those that depend on overcoming the human
frailties of Senators and Representatives themselves. (Most of these
observations seem to apply to other democracies as well.)
In conclusion, the benefits of laptops may in the long run outweigh the risks
and other disadvantages. A deeper Congressional awareness of the technological
and social risks of our technology would in itself be enormously beneficial to
the nation as a whole. Better awareness that our infrastructures are not
adequately secure, reliable, and survivable (Inside Risks, May 1992) might also
result in greater emphasis on increasing their robustness, a need that has been
recognized by the President's Commission on Critical Infrastructure Protection
(Inside Risks, January 1998). Most universities (often with government
encouragement) now require computer literacy of all students, because it is
vital to a technically mobile workforce; perhaps the same should be expected of
Congresspersons!
Whereas cellular phones and pagers are in wide use (as are PC-controlled
teleprompters), laptops may be less distracting -- because they do not
necessarily cry out for instant responses. If nothing else, they could help
Congressional staffers. In that the Senate is very tradition-bound, it may be a
long time until we see laptops on the Senate floor. However, an incremental
strategy might be appropriate: begin with laptops only for those who wish to
keep notes and access files, then expand access to private local nets of
individual Congresspersons and their staffers, then migrate to Senate and House
intranets, and then perhaps to a closed Congressnet.
[See http://catless.ncl.ac.uk/Risks/ for the archives of the ACM Risks Forum
(risks-request@CSL.sri.com). Also, see Neumann's Senate and House testimonies
(http://www.csl.sri.com/neumann/).]
============================================================
Inside Risks 96, CACM 41, 6, June 1998
The FBI and US Attorney General Janet Reno recently announced plans to
establish a National Infrastructure Protection Center. Critical processing and
communication structures are to be protected against hackers and criminals.
Given today's IT infrastructure complexity, I am sure that attaining reasonable
effectiveness will require an enormous personnel and equipment investment.
In addition to dealing with malicious acts, infrastructure reliability and
availability are vital. The recent Alan Greenspan disclosure of a New York bank
computer failure a few years ago makes me shudder. The Federal Reserve bailed
the bank out with a loan of $20 billion. Greenspan admitted that if the loan
could not have been supplied, or if other banks simultaneously had the same
problem, the entire banking system could have become unstable. Such
``information outages'' are probably common, but, as with this case, covered up
to avoid public panic.
How far away are we from major catastrophes? Will it be Y2K? What would
happen if, due to outages, international companies started defaulting on debt
payments, resulting in business failures? Is there any protection from the
ensuing cascade? International information outages could make power
and telecommunication outages seem like small inconveniences.
These and many other related risk questions lead me to conclude that a
concerted effort aimed at infrastructure risk must move to the very top of
national and international political and commercial agendas.
While there are many sources of risk, there is an undeniable relationship
between risk and complexity. Thus, a major part of risk mitigation must be aimed
at reducing complexity.
Today's computer-communication-based system structures are laden with
significant unnecessary complexity. This unnecessary complexity is partially,
but significantly, due to the mapping of application functions through various
levels of languages and middleware onto poor or inappropriate platforms of
system software and hardware.
Mainstream microprocessors require large quantities of complex software to be
useful. This is not a new situation. Even the earlier mainstream IBM 360/370
series suffered in this regard. There is a significant semantic gap between
useful higher levels of problem solving (via programming languages) and the
instruction repertoires of these machines.
That current RISC and CISC processors are poor hosts for higher level
languages perpetuates the motivation to widely deploy lower levels of
programming, including C and C++. This adds unnecessary complexity, the cost and
risk of which are borne many times over around the world in developing and
especially in maintaining the growing mountain of software.
In my opinion, complexity and risk reduction must focus on restructuring of
the hardware and system software infrastructure components. Restructuring must
address programming languages, suitable system software functions, and, most
importantly, well defined (verifiable) execution machines for the languages and
functions. Further, robust security mechanisms must be integrated into the
infrastructure backbone. The restructuring must result in publicly available
"standards" that are strictly enforced via independent certification agencies.
The vital IT infrastructure cannot continue to be based upon caveat emptor
(buyer-beware) products. Enforced standards do not eliminate the competitive
nature of supplying infrastructure components; nor do they hinder creativity in
introducing a virtually unlimited number of value-added products. Standards
increase the market potential for good products. For safety-critical
computer-based systems in areas such as nuclear energy, aviation, and medical
instruments, certification against standards is applied. However, even for these
critical embedded systems, there is a pressing need for tougher standards as
well as complexity reduction via appropriate architectural structuring of
hardware and system software.
Given that today's suppliers of critical infrastructure components
swear themselves free of product responsibility, an insurance-related
enforcement solution may be appropriate, analogous to the Underwriters
Laboratory certification of electrical products. Before infrastructure products
are put on the market, they must be certified against standards in order to
limit (but not eliminate) supplier product responsibility and instill public
confidence.
It is time to stop quibbling over trivial issues such as Internet browsers.
When catastrophes occur, browsers will seem like small potatoes. Today, we can
do fantastic things with electronic circuitry. We must tame this potential and
do the right things aimed at reducing infrastructure risk. It is time to take
the bull by the horns and find a political and commercial path leading to
infrastructure restructuring and enforceable standards!!!
Harold W. (Bud) Lawson (bud@lawson.se) is an independent consultant in
Stockholm. A Fellow of ACM and IEEE, he has contributed to several pioneering
hardware, software, and application related endeavors.
======================================================================
Inside Risks 95, CACM 41, 5, May 1998
As the use of computers in scholastic disciplines has grown and matured, so
have many related issues involving academic integrity. Although the now rather
commonplace risks of security breaches (such as falsification of student
records, and access to examination or assignment files) are real and still
occur, this type of violation has become a small part of an insidious spectrum
of creative computer-based student offenses. Academic institutions have
responded to this threat by developing integrity policies that typically use
punitive methods to discourage cheating, plagiarism, and other forms of
misconduct.
For example, in December 1997, the Testing Center staff at Mercer County
Community College (NJ) discovered that eight calculus students had been issued
variants of ``a multi-version multiple-choice test, but submitted responses that
were appropriate for a completely different set of questions. By falsely coding
the test version, they triggered computer scoring of their responses as if they
had been given a version of the test which they in fact had never been given.''
[1] The students were suspended, and the Center restructured its system to
thwart this sort of deception. This particular incident is noteworthy because
it demonstrates the technological savvy used to circumvent the grading in a
course whose tuition was a mere $300, and whose material was essential for
further studies. Tangentially, it also provides an illustration of the
vulnerability of multi-version mark-sense tallying, a system used in an
ever-increasing number of municipalities for voting, a much higher-stakes
application. [2]
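As an illustration (hypothetical Java, not the Testing Center's actual software; the names are invented), a scorer that trusts the version code bubbled in by the student might look like this:

  import java.util.Map;

  // The answer key is chosen solely by the student-supplied version code, so
  // answers copied from version B can be submitted under code B even though
  // the student was actually issued version A.
  public class MarkSenseScorer {
      static int score(char codedVersion, char[] answers, Map<Character, char[]> versionKeys) {
          char[] key = versionKeys.get(codedVersion);  // no check against the version actually issued
          if (key == null) return 0;
          int correct = 0;
          for (int i = 0; i < answers.length && i < key.length; i++) {
              if (answers[i] == key[i]) correct++;
          }
          return correct;
      }
      // A fix is to look up the version actually issued to the student (from the
      // distribution record) rather than trusting the code on the answer sheet.
  }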
The proliferation of affordable computer systems is both a boon and a
headache for educators. The great wealth of information available via Internet
and World Wide Web is a tremendous asset in course preparation and presentation,
but its downside is that teachers need to stay one screen dump ahead of their
students in order to issue projects requiring original solutions. Faculty
members bemoan the accessibility of term-paper banks, where any of thousands of
boilerplate essays can be downloaded for a small fee. For the more affluent (or
desperate) student, there are ``writers'' who will provide a custom work that
conforms to the most stringent of professorial requirements. Although assignment
fraud has always existed, it is now easier and more tempting. In-class writing
projects can establish some level of control, but with networked lab rooms,
individual contributions become difficult to monitor -- as soon as someone
solves a problem, it quickly propagates to the rest of the class. The discreetly
passed slip of paper under the desk is now a broadcast e-mail message or part of
a password-concealed web site!
Creative solutions lead to relevance in learning. As a computer-science
educator, I have begun to phase out the ``write a heap sort'' and other
traditional coding assignments, because so many instances of their solutions
exist. Using the Web, these projects have been transformed into ``download
various heap sort programs and analyze their code'' which encourages individual
exploration of reusable libraries. Perhaps it will not be so long before the ACM
Programming Contest contains a component where contestants ``start their search
engines'' to ferret out adaptable modules instead of just hacking programs from
scratch!
The motivation of assignments and exams should be the reinforcement of
comprehension of the course material and assessment of student progress. Yet,
the best way to know what the students know is to know the students, a task made
more complicated as classes grow in size and expand to remote learning sites.
Ben Shneiderman's Relate-Create-Donate philosophy urges a move to collaborative
and ambitious team projects, solving service-oriented problems, with results
subsequently publicized on the Web, in order to enhance enthusiasm and
understanding. [3] Examination and homework collusion is actually a form of
sharing -- albeit with erroneous goals. Perhaps it is now time to promote
sharing, at least in some components of our coursework, by finding new ways to
encourage group efforts, and monitoring such activities to ensure that learning
is achieved by all of the students. This is a challenging task, but one whose
implementation would be well rewarded.
1. From a publication issued by the Vice President of Academic and Student
Affairs, Mercer County Community College, February 4, 1998.
2. See earlier articles by Rebecca Mercuri in CACM 35:11 (November 1992) and
36:11 (November 1993).
3. Ben Shneiderman, Symposium Luncheon Lecture (preprint), SIGCSE '98,
Atlanta.
Rebecca Mercuri (http://www.mcs.drexel.edu/~rmercuri, mercuri@acm.org) is a
full-time member of the Mathematics and Computer Science faculty at Drexel
University and also appears as a visiting Artist Teacher in the Music Department
at Mercer County Community College.
======================================================================
Inside Risks 94, CACM 41, 4, Apr 1998
Concurrent programs are notoriously difficult to get right. This is as true
today as it was 30 years ago. But 30 years ago, concurrent programs would be
found only in the bowels of operating systems, and these were built by
specialists. The risks were carefully controlled. Today, concurrent programs are
everywhere and are being built by relatively inexperienced programmers:
* All sorts of application programmers write concurrent programs. A
PC freezing when you pull down a menu or click on an icon is likely to be caused
by a concurrent-programming bug in the application.
* Knowledge of operating system routines is no longer required to write a
concurrent program. Java threads enable programmers to write concurrent
programs, whether for spiffy animation on web pages or for applications that
manage multiple activities.
This column discusses simple rules that can go a long way toward eliminating
bugs and reducing risks associated with concurrent programs.
A concurrent program consists of a collection of sequential processes whose
execution is interleaved; the interleaving is the result of choices made by a
scheduler and is not under programmer control. Lots of execution interleavings
are possible, which makes exhaustive testing of all but trivial concurrent
programs infeasible.
To make matters worse, functional specifications for concurrent programs
often concern intermediate steps of the computation. For example, consider a
word-processing program with two processes: one that formats pages and passes
them through a queue to the second process, which controls a printer. The
functional specification might stipulate that the page-formatter process never
deposit a page image into a queue slot that is full and that the printer-control
process never retrieve the contents of an empty or partially filled queue slot.
If contemplating the individual execution interleavings of a concurrent
program is infeasible, then we must seek methods that allow all executions to be
analyzed together. We do have on hand a succinct description of the entire set
of executions: the program text itself. Thus, analysis methods that work
directly on the program text (rather than on the executions it encodes) have the
potential to circumvent problems that limit the effectiveness of testing.
For example, here is a rule for showing that some ``bad thing" doesn't happen
during execution:
Identify a relation between the program variables that is true initially
and is left true by each action of the program. Show that this relation implies
the ``bad thing'' is impossible.
Thus, to show that the printer-control process in the above example never
reads the contents of a partially-filled queue slot (a ``bad thing"), we might
see that the shared queue is implemented in terms of two variables:
* NextFull points to the queue slot that has been full the longest
and is the one that the printer-control process will next read.
* FirstEmpty points to the queue slot that has been empty the
longest and is the one where the page-formatter process will next deposit a page
image.
We would then establish that NextFull ~= FirstEmpty is true
initially and that no action of either process falsifies it. And, from the
variable definitions, we would note that NextFull ~= FirstEmpty implies
that the printer-control process reads the contents of a different queue slot
than the page-formatter process writes, so the ``bad thing" cannot occur.
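As a minimal illustration (my sketch, not the column's program; the names PageQueue, deposit, retrieve, and SLOTS are invented, and an explicit count is added for the full/empty tests), the two-process queue can be written in Java with the ``bad thing'' conditions expressed as executable assertions. The proof obligation described above is to show, by reasoning over the program text, that these assertions can never fail under any interleaving:

  public class PageQueue {
      private static final int SLOTS = 8;
      private final Object[] slot = new Object[SLOTS];
      private int firstEmpty = 0;  // slot the page-formatter process will fill next
      private int nextFull = 0;    // slot the printer-control process will read next
      private int count = 0;       // number of full slots

      // Formatter action: deposit a page image into the first empty slot.
      public synchronized void deposit(Object page) throws InterruptedException {
          while (count == SLOTS) wait();      // never overwrite a full slot
          assert slot[firstEmpty] == null;    // the target slot really is empty
          slot[firstEmpty] = page;
          firstEmpty = (firstEmpty + 1) % SLOTS;
          count++;
          notifyAll();
      }

      // Printer action: retrieve the page that has been queued the longest.
      public synchronized Object retrieve() throws InterruptedException {
          while (count == 0) wait();          // never read an empty slot
          assert slot[nextFull] != null;      // the target slot really is full
          Object page = slot[nextFull];
          slot[nextFull] = null;
          nextFull = (nextFull + 1) % SLOTS;
          count--;
          notifyAll();
          return page;
      }
  }

The assertional argument works directly on this text: the relation that count equals the number of non-null slots (and that the full slots are exactly those starting at nextFull) is true initially and preserved by both actions, so the assertions hold without enumerating interleavings.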
It turns out that all functional specifications for concurrent programs can
be partitioned into ``bad things'' that must not happen and ``good things'' that
must happen. Thus, a rule for such ``good things'' will complete the picture. To
show that some ``good thing'' does happen during execution:
Identify an expression involving the program variables that, when equal to
some minimal value, implies that the ``good thing'' has happened. Show that this
expression is bounded from below, is never increased by any program action, and
is decreased by actions that must eventually occur.
Note that our rules for ``bad things'' and ``good things'' do not require
checking individual process interleavings. They require only effort proportional
to the size of the program being analyzed. Even the size of a large program need
not be an impediment---large concurrent programs are often just small algorithms
in disguise. Such small concurrent algorithms can be programmed and analyzed; we
build a model and analyze it to gain insight about the full-scale artifact.
Writing concurrent programs is indeed difficult, but there exist mental tools
that can help the practicing programmer. The trick is to abandon the habit of
thinking about individual execution interleavings.
Cornell University Professor Fred B. Schneider's textbook On Concurrent
Programming (Springer-Verlag, 1997) discusses how assertional reasoning can
be used in the analysis and development of concurrent programs.
======================================================================
Inside Risks 93, CACM 41, 3, Mar 1998
Last month we considered some of the risks associated with Internet gambling.
While noting that gambling is addictive, and that the Internet can compound the
problems therewith, we left implicit the notion that computer use itself might
have addictive characteristics. This month we consider that notion further,
although we consider primarily compulsive behavior due to psychological and
environmental causes rather than pharmacological and physiological addiction.
To be addicted to something typically means that you have habitually,
obsessively, and perhaps unconsciously surrendered yourself to it. In addition
to Internet gambling, activities that can lend themselves to addictive or
compulsive behavior include playing computer games, being a junkie of
unmoderated newsgroups and chat rooms, surfing the Web, browsing for cool tools,
cracking system security for amusement, and perhaps even programming itself --
which seems to inspire compulsive behavior in certain individuals.
We are immediately confronted with the question as to how computers make
these problems any different from our normal (uncomputerized) lives. The
customary answer seems to be that computers intensify and depersonalize whatever
activity is being done, and enable it to be done remotely, more expeditiously,
less expensively, and perhaps without identification, accountability, or
answerability. Is there more to it than that?
The effects of compulsive computer-related behavior can involve many risks to
individuals and to society, including diminution of social and intellectual
skills, loss of motivation for more constructive activities, loss of jobs and
livelihood, and so on. A reasonable sense of world reality can be lost through
immersion in virtual reality. Similarly, a sense of time reality can be lost
through computer access that is totally encompassing and uninterrupted by
external events.
Computerized games have become a very significant part of the lives of many
youths today. Although personal-computer games have long been popular, multiuser
dungeons (MUDs) and other competitive or collaborative games have emerged. These
are perhaps even more addictive than solitary games. Despite their seemingly
increased interactions with other people, they may serve as a substitute for
meaningful interpersonal communication. This detachment can be amplified further
when the other persons involved are anonymous or pseudonymous, and become
abstractions rather than real people.
Chat rooms, newsgroups, and e-mail also educe compulsive behavior. In
addition, they can be sources of rampant misinformation, disseminated around the
Net with remarkable ease and economy. Compulsive novices seem particularly
vulnerable to believing what they read and spreading it further; consequently,
they may be likely targets for frauds and scams.
``Hackers'' have stereotypically been associated with compulsive behavior --
such as chronically bad eating habits, generally antisocial manners, and in some
cases habitual system penetrations. Even the very best programmers may have a
tendency toward total absorption in writing and debugging code. There's
something about the challenge of pitting yourself against the computer system
that is very compelling.
Most of the extreme cases of pathological Internet use studied by
Kimberly Young (Internet Addiction: The Emergence of a New Clinical Disorder,
American Psychological Association, Chicago, August 14, 1997)
involved people without permanent jobs and newbies (rather than experienced
computer folks), who spent an average of 38 hours a week in cyber-addictive
behavior. Young estimates that 10% of Web users qualify as addicts, which may be
either an astounding factoid or an extreme use of the term ``addict''.
So, do we need to do anything about it, and if so, what? Treatments for
addictions usually involve total abstinence rather than partial withdrawals --
although some people are psychologically able to live under regimens of
moderation. Cybertherapy is apparently booming, with many Internet addicts
ironically turning to Internet counseling sites. Parental oversight of minors
and employer supervision of employees are often appropriate. However, there are
risks of overreacting, such as trying to block Internet sites that offer a
preponderance of addicting opportunities (once again, there are risks in seeking
technological solutions to nontechnological problems), or legal attempts to
outlaw such sites altogether. Whether we are off-line or on-line, we all need to
have real lives beyond computers. Achieving that rests on our educational
system, our childhood environment, and our workplaces -- but ultimately on
ourselves and our associations with other people.
NOTE: Jim Horning suggests you look at Mihaly Csikszentmihalyi's ``Flow: The
Psychology of Optimal Experience'' (Harper-Collins, 1991), which treats
programming as flow and resonates with many of the ideas here. Thanks to Jon
Swartz of the San Francisco Chronicle for the pointer to Young's paper,
noted in his article on August 15, 1997.
======================================================================
Inside Risks 92, CACM 41, 2, Feb 1998
Internet gambling is evidently increasing steadily, with many new on-line
gambling houses operating from countries having little or no regulation.
Attempts to ban or regulate it are likely to inspire more foreign
establishments, including sites outside of territorial waters. Revenues from
Internet gambling operations are estimated to reach $8 billion by the year 2000,
whereas the current total take for all U.S. casinos is $23 billion.
We consider here primarily specific risks associated with Internet gambling.
Generic risks have been raised in earlier Inside Risks columns, such as Webware
security (April 1997), cryptography (August 1997), anonymity (December 1996),
and poor authentication (discussed in part in April/May 1994). For example, how
would you ensure that you are actually connected to the Internet casino of your
choice?
Gambling suffers from well-known risks including disadvantageous odds,
uncertainty of payback, skimming by casinos, personal addiction and ruin.
Internet gambling brings further problems, including lack of positive
identification and authentication of the parties, the remote use of credit
cards, and out-of-jurisdiction casinos. Even if there were some assurance that
you are connected to your desired on-line casino (for example, using some form
of strong cryptographic authentication), how would you know that organization is
reputable? If you are not sure, you are taking an extra gamble -- and technology
cannot help you. Payoffs could be rigged. There could also be fraudulent
collateral activities such as capture and misuse of credit-card numbers,
blackmail, money laundering, and masqueraders using other people's identities --
either winning or racking up huge losses, at relatively little risk to
themselves. Serious addicts who might otherwise be observed could remain
undetected much longer. (On the Internet no one knows you are a gambler --
except maybe for the casino, unless you gamble under many aliases.)
Anonymity of gamblers is a particularly thorny issue. Tax-collecting agencies
that strongly oppose anonymous gambling might lobby to require
recoverable cryptographic keys.
Legislation before the U.S. Congress would prohibit Internet gambling by
bringing it under the Interstate Wire Act, with stiff fines and prison terms for
both operators and gamblers. It would also allow law enforcement to ``pull the
plug'' on illegal Internet sites. It is not clear whether such legislation would
hinder off-shore operations -- where casinos would be beyond legal reach, and
gamblers might use encryption to mask their activities. Legalization is an
alternative; for example, the Australian state of Victoria has decided to
strictly regulate and tax on-line gambling, hoping to drive out illegal
operations.
Although Internet gambling can be outlawed, it cannot be stopped. There are
too many ways around prohibition, including hopping through a multitude of
neutral access sites (for example, service providers), continually changing
Internet addresses on the part of the casinos, anonymous remailers and traffic
redirectors, encryption and steganography, and so on. On-line gambling could
also have harmful legal side-effects, by generating pressure to outlaw good
security. However, legally restricting good system security practices and strong
cryptography would interfere with efforts to better protect our national
infrastructures and with the routine conduct of legitimate Internet commerce.
Thus, Internet gambling represents the tip of a giant iceberg. What happens here
can have major impacts on the rest of our lives, even for those of us who do not
gamble.
One possibility not included in current legislation would be to make
electronic gambling winnings and debts legally uncollectible in the United
States. That would make it more difficult for on-line casinos to collect legally
from customers. However, with increasingly sophisticated Internet tracking
services, it might also inspire some new forms of innovative, unorthodox,
life-threatening illegal collection methods on behalf of the e-casinos. It would
also exacerbate the existing problem that gamblers are required to report
illegal losses if they wish to offset their winnings (legal or otherwise), and
would also bring into question the authenticity of computerized receipts of
losses.
Attempts to ban any human activity will never be 100% effective, no matter
how self-destructive that behavior may be judged by society. In some cases, the
imposition of poorly considered technological ``fixes'' for sociological
problems has the potential of doing more harm than good. For example, requiring
ISPs to block clandestine illegal subscriber activities is problematic. Besides,
the Internet is international. Seemingly easy local answers -- such as outlawing
or regulating Internet gambling -- are themselves full of risks.
The Internet can be addictive, but being hooked into it is different from
being hooked on it. In any event, whether or not you want to bet on the Net,
don't bet on the Net being adequately secure! Whereas you are already gambling
with the weaknesses in our computer-communication infrastructures, Internet
gambling could raise the ante considerably. Caveat aleator. (Let the gambler
beware!)
NOTE: Several members of the ACM Committee on Computers and Public Policy
contributed to this column.
======================================================================
Inside Risks 91, CACM 41, 1, Jan 1998
The President's Commission on Critical Infrastructure Protection (PCCIP) has
completed its investigation, having addressed eight major critical national
infrastructures: telecommunications; generation, transmission, and
distribution of electric power; storage and distribution of gas and oil; water
supplies; transportation; banking and finance; emergency services; and
continuity of government services. The final report (Critical Foundations:
Protecting America's Infrastructures. October 1997) is available on the PCCIP
Web site (http://www.pccip.gov). Additional working papers are included there as
well.
The PCCIP is to be commended for the breadth and scope of its report, which
provides some recommendations for future action that deserve your careful
attention. Here is a brief summary of their findings.
The report identifies pervasive vulnerabilities, a wide spectrum of threats,
and increasing dependence on the national infrastructures. It recognizes a
serious lack of awareness on the part of the general public. It declares a need
for a national focus or advocate for infrastructure protection; although it
observes that no one is in charge, it also recognizes that the situation is such
that no single individual or entity could be in charge. It also makes a
strong case that infrastructure assurance is a shared responsibility among
governmental and private organizations.
The report recommends a broad program of awareness and education,
infrastructure protection through industry cooperation and information sharing,
reconsideration of laws related to infrastructure protection, a revised program
of research and development, and new thinking throughout. It outlines
suggestions for several new national organizations: sector coordinators to
represent the various national infrastructures; lead agencies within the federal
government; a National Infrastructure Assurance Council of CEOs, Cabinet
Secretaries, and representatives of state and local governments; an Information
Sharing and Analysis Center; an Infrastructure Support Office; and an Office of
National Infrastructure Assurance. The report's fundamental conclusion is this:
``Waiting for disaster is a dangerous strategy. Now is the time to act to
protect our future.''
Several conclusions will be of particular interest to readers of the Risks
Forum and the Inside Risks column. First, the report recognizes that very
serious vulnerabilities and threats exist today in each of the national
infrastructures. Second, it recognizes that these national infrastructures are
closely interdependent. Third, it observes that all of the national
infrastructures depend to some extent on underlying computer-communication
information infrastructures, such as computing resources, databases,
private networks, and the Internet. These realizations should come as no
surprise to us. However, it is noteworthy that a high-level White House
commission has made them quite explicit, and also very significant that the
PCCIP has recommended some courses of action that have the potential of
identifying some of the most significant risks -- and perhaps actually helping
to reduce those risks and others that will emerge in the future.
* The Commission's report is almost silent on the subject of cryptography
(see pages 74--75). It recognizes (in two sentences) that strong
cryptography and sound key management are important; it also states (in one
sentence) that key management should include key recovery for business access to
data and court-authorized law-enforcement access, but fails to acknowledge any
of the potential risks associated with key management and key recovery (see this
column, August 1996, January 1997, and August 1997) or any of the controversy
associated with pending legislation in Congress. In essence, the report blurs
the distinction between key management and key recovery.
* The risks arising from chronic system-development woes such as the
Year-2000 problem and rampant fiascoes associated with large systems (e.g.,
Inside Risks, December 1997) are almost completely ignored, although an analysis
of the Y2K problem is apparently forthcoming.
* The chapter on research and development is startlingly skimpy, although a
four-fold increase in funding levels by 2004 is recommended. Again, further
documentation is expected to emerge.
On the whole, this report is an impressive achievement. The Commission has
clearly recognized that protecting the national infrastructures must be a widely
shared responsibility, and also that it is a matter of national security -- not
just in the narrow sense of the U.S. military and national intelligence
services, but in the broader sense of the well-being and perhaps the survival of
the nation and the people of this planet.
NOTE: Peter G. Neumann moderates the ACM Risks Forum (comp.risks). See
http://www.csl.sri.com/neumann/, which includes (along with earlier Senate
testimonies) his November 6, 1997, written testimony, Computer-Related Risks
and the National Infrastructures, for the U.S. House Science Committee
Subcommittee on Technology. The written and oral testimonies for that hearing --
including that of PCCIP Chairman Tom Marsh -- are published by the U.S.
Government Printing Office. See www.pccip.gov.
======================================================================
Inside Risks 90, CACM 40, 12, Dec 1997
Our column of October 1993 (System Development Woes, CACM 36, 10)
considered some system development efforts that were cancelled, seriously late,
overrun, or otherwise unacceptable. In the light of recent fiascos reported in
the Risks Forum, it seems timely to examine recent abandonments and failed
upgrades.
* IRS modernization. In early 1997, after many years, $4 billion spent,
extensive criticism from the General Accounting Office and the National Research
Council, and reevaluation by the National Commission on Restructuring
(``reinventing'') the IRS, the IRS abandoned its Tax Systems Modernization
effort. A system for converting paper returns to electronic form was also
cancelled, along with the Cyberfile system -- which would have enabled direct
electronic taxpayer filing of returns. A GAO report blamed mismanagement and
shoddy contracting practices, and identified security problems for taxpayers and
for the IRS.
* Other government systems. The FBI abandoned development of a $500-million
new fingerprint-on-demand computer system and crime information database. The
State of California spent $1 billion on a nonfunctional welfare database system;
it spent more than $44 million on a new motor vehicles database system that was
never built; the Assembly Information Technology Committee was considering
scrapping California's federally mandated Statewide Automated Child Support
System (SACSS), which had already overrun its $100 million budget by more than
200%.
* The Confirm system. The Intrico consortium's Confirm reservation system
development was abandoned -- after five years, many lawsuits, and millions of
dollars in overruns. Kweku Ewusi-Mensah analyzed the cancellation (Critical
Issues in Abandoned Information Systems Development Projects, Comm.ACM
40, 9, September 1997, pp. 74--80) and gives some important guidelines for
system developers who would like to avoid similar problems.
* Bell Atlantic 411 outage. On November 25, 1996, Bell Atlantic had an outage of
several hours in its telephone directory-assistance service, due apparently to
an errant operating-system upgrade on a database server. The backup system also
failed. The problem -- reportedly the most extensive such failure of
computerized directory assistance -- was resolved by backing out the software
upgrade.
* San Francisco 911 system. San Francisco tried for three years to upgrade
its 911 system, but computer outages and unanswered calls remain rampant. For
example, the dispatch system crashed for over 30 minutes in the midst of a
search for an armed suspect (who escaped). It had been installed as a temporary
fix to recurring problems, but also suffered from unexplained breakdowns and
hundreds of unanswered calls daily.
* Social Security Administration. The SSA botched a software upgrade in 1978
that resulted in almost 700,000 people being underpaid an estimated $850 million
overall, as a result of cutting over from quarterly to annual reporting.
Subsequently, the SSA discovered that its computer systems did not properly
handle certain non-Anglo-Saxon surnames and married women who change their
names. This glitch affected the accumulated wages of $234 billion for 100,000
people, some going back to 1937. The SSA also withdrew its Personal Earnings and
Benefit Estimate Statement (PEBES) Website (see Inside Risks, July 1997) for
further analysis, because of many privacy complaints.
* NY Stock Exchange. The New York Stock Exchange opened late on December 18,
1995 because of communications software problems, after a weekend spent
upgrading the system software. It was the first time since December 27, 1990,
that the exchange had to shut down -- and it affected various other Exchanges as
well.
* Interac. On November 30, 1996, the Canadian Imperial Bank of Commerce
Interac service was halted by an attempted software upgrade, affecting about
half of all would-be transactions across eastern Canada.
* Barclays Bank's successful upgrade. In one of the rare success stories in
the RISKS archives, Barclays Bank shut down its main customer systems for a
weekend to cut over to a new distributed system accommodating 25 million
customer accounts. This system seamlessly replaced three incompatible systems.
It is rumored that Barclays spent at least 100 million pounds on the upgrade.
The causes of these difficulties are very diverse, and not easy to
characterize. It is clear from these examples that deep conceptual understanding
and sensible system- and software-engineering practice are much more important
than merely tossing money and people into system developments. Incidentally, we
have not even mentioned the Year-2000 problem -- primarily because we must wait
until January 2000 to adequately assess the successes and failures of some of
the ongoing efforts. But all of the examples here suggest that we need much
greater sharing of the bad and good experiences.
NOTE: Peter G. Neumann moderates the ACM Risks Forum (comp.risks), which
provides background on all of these cases and which can be searched at
http://catless.ncl.ac.uk/Risks/ .
======================================================================
Inside Risks 84, CACM 40, 6, June 1997
Are you flooded with Internet spams (unsolicited e-mail
advertisements) from hustlers, scammers, and purveyors of smut, net sex,
get-rich-quick schemes, and massive lists of e-mail addresses? (The term derives
from the ubiquitous World-War-II canned-meat product dramatized by Monty
Python.) Some of us -- particularly moderators of major mailing lists --
typically receive dozens of spams each day, often with multiple copies. We tend
to delete replicated items without reading them, even if the subject line is
somewhat intriguing. (Many spammers use deceptive Subject: lines.)
Unmoderated lists are particularly vulnerable to being spammed.
Some spammers offer to remove you from their lists upon request. However,
when you reply, you may discover that their From: and
Reply-to: addresses are bogus and their provided ``sales'' phone number
may be valid only for a few days. Some of them are legitimate, but others may be
attempting credit or identity fraud; it can be hard to tell the difference.
What might you do to stanch the flow? Some folks suggest not posting to
newsgroups or mailing lists -- from which spammers often cull addresses -- but this
throws out the baby with the bathwater. Other folks suggest using the spammer's
trick of a bogus From: address, letting your recipients know how to
generate your real address. But this causes grief for everybody (recipients,
administrators, and even you if the mail is undeliverable), and is a bad idea.
Filtering out messages from specific domains may have some success at the IP
level (e.g., via firewalls and TCP-wrappers) against centralized spammers who
operate their own domains and servers. But filtering based on header
lines is generally not effective, because the headers are subject to forgery and
alterations. Also, many spammers route their junk through large ISPs, or
illicitly through unwitting hosts. Complaining to those site administrators is
of little value. Filtering out messages based on offensive keywords is also
tricky, because it may reject e-mail that you really want.
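To make the point concrete, here is an illustrative Java sketch (invented class name and placeholder domains, not a recommended defense) of the kind of naive header-based filter just described; because the decision rests entirely on a header line the sender can forge, a spammer defeats it simply by lying in the From: field.

  import java.util.Set;

  public class NaiveHeaderFilter {
      private static final Set<String> BLOCKED_DOMAINS =
          Set.of("bulkmail.example", "junkmail.example");

      // Rejects mail whose From: header claims to come from a blocked domain.
      static boolean reject(String fromHeader) {
          int at = fromHeader.lastIndexOf('@');
          if (at < 0) return false;
          String domain = fromHeader.substring(at + 1)
                                    .replaceAll("[>\\s]", "")
                                    .toLowerCase();
          return BLOCKED_DOMAINS.contains(domain);   // takes the header at face value
      }
  }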
A service whereby senders must first acquire an authorized certificate to
send you e-mail would be impractical and undesirable for many individuals. It
would certainly hinder newsgroups that seek worldwide contributions and
subscriptions.
Technical options are of limited value in the real world, tending toward an
offensive-defensive escalation of technical trickery. Alternatively, legislation
might be contemplated, for example, to require an individual's permission for
the release of certain personal information to third parties, and to treat
unsolicited e-mail more like unsolicited junk faxes. On the other hand, there is
a serious risk of legislative overreaction with draconian laws that might kill
the proverbial golden goose.
E-mail spam differs somewhat from postal mail. You must pay (one way or
another) for the storage of e-mail you receive (or else delete it as fast as it
comes in!), whereas the sender pays for postal junk mail. The spam sender pays
almost nothing to transmit, especially when hacking into an unsuspecting
third-party server site (which is increasingly common). Simson Garfinkel
(RISKS vol. 18 no. 79) notes that a spammer recently hacked
vineyard.net, sending about 66,000 messages.
There are other spam-like problems, such as recent forged subscriptions to
automated list servers in the name of unwitting victims such as the White House
and Newt Gingrich. Servers such as majordomo can be used to invoke manual
processing of suspicious would-be subscriptions, particularly when the
From: address and the given address differ.
Many such problems exist because the Internet has cooperative decentralized
control; but that's also its beauty. It has very limited intrinsic security
(although improving), and relies heavily on its constituent systems. In the
absence of meaningful authentication and authorization, clever perpetrators are
not easy to identify or hold accountable. But swinging too far toward forced
authentication impacts privacy and freedom-of-speech issues. What a tangled Web
we weave!
Asking what you can do individually may be the wrong question; the
technical burden must ultimately fall on ISPs and software developers, as they
continue to pursue approaches such as blocking third-party use of SMTP
mail-server ports and requiring authentication for mass mailings. As
RISKS and PRIVACY readers know, fully automated mechanisms
will always have deficiencies, and security is always a weak-link problem.
Spamming will ultimately be dealt with through a combination of legislation,
ISP administrative changes, further technological developments, and individual
efforts. We must find ways to protect ourselves without undermining free
enterprise, freedom of speech rights, and common sense, and without encumbering
our own normal use -- a difficult task indeed! In the meantime, perhaps the best
you can do yourself is to never, ever, respond positively to a spammer's ad!
Lauren Weinstein (lauren@vortex.com) moderates the PRIVACY Forum
(privacy-request@vortex.com; www.vortex.com). Peter Neumann moderates the ACM
Risks Forum (risks-request@csl.sri.com; http://catless.ncl.ac.uk/Risks).
======================================================================
Inside Risks 41, CACM 36, 11, November 1993, p. 122
Traditionally, the November off-year elections draw little attention, as only
a handful of federal positions are filled. Voter turnouts of 30% or less are
common in many municipalities. But these elections are far from insignificant,
because local posts won in odd-numbered years frequently provide office-holders
with the power to make procurements and appointments. Through the
grass-roots election process, Boards of Elections are staffed at city, county
and state levels, and these Board members are currently the key decision-makers
in the ongoing conversion from lever and manual voting systems to electronic
ballot tabulation in the U.S.A.
As vast metropolises adopt computer ballot-counting methods (including
punch-card, mark-sense and direct-entry systems), the question arises whether a
national or local election can be "thrown" via internal or external manipulation
of hardware, software and/or data. Proponents of electronic voting systems say
sufficient controls are being exercised, such that attempts to subvert an
election would be detectable. But speakers at a recent session on security and
auditability of electronic vote-tabulation systems [1] pointed out that the
Federal Election Commission has provided only voluntary voting system standards
that may not be adequate to ensure election integrity. Numerous incidents of
electronic voting difficulties have come to the attention of the press, although
to date there have been no convictions for vote-fraud by computer.
One of the more interesting recent cases occurred during the March 23, 1993,
city election in St. Petersburg, Florida. Two systems for ballot tabulation were
being used on a trial basis. For an industrial precinct in which there were no
registered voters, the vote summary showed 1,429 votes for the incumbent mayor
(who incidentally won the election by 1,425 votes). Officials explained under
oath that this precinct was used to merge regions counted by the two computer
systems, but were unable to identify precisely how the 1,429 vote total was
produced. Investigation by the Pinellas Circuit Court revealed sufficient
procedural anomalies to authorize a costly manual recount, which certified the
results. The Florida Business Council continues to look into this matter.
Equipment-related problems are a source of concern to Election Boards,
especially when time-critical operations must be performed. The Columbus
Dispatch reported (June 12, 1992) that 40 of the 758 electronic machines used in
Franklin County's June primary required service on election day. By comparison,
only 13 of the County's 1,500 older mechanical lever machines needed
repair during the election. Of the defective electronic machines, 7 had voter
ballot cartridges that could not be loaded into the tallying computers, so
those precincts' results had to be hand-keypunched; power boards in 10 of the
machines had blown fuses; 18 had malfunctions with the paper tape on which the
results were printed. Difficulties with the central software for merging the
electronic and mechanical tallies created further delays in reporting results.
Officials decided to withhold the final payment of $1.7M of their $3.82M
contract until greater reliability is assured.
If Franklin County did not have enough trouble already, two electronic ballot
tabulation vendors are presently contesting the contract award. MicroVote
Corporation is suing the R.F. Shoup Corp., Franklin County, and others in U.S.
District Court for the Southern District of Ohio, Columbus Division, for over
$10M in damages, claiming conspiracy and fraud in the bidding process. This
matter is, as yet, unresolved.
In another region of Ohio, in the same primary, the Cleveland Plain Dealer
(June 11, 1992) reported that Kenneth J. Fisher, member of the Cuyahoga County
Board of Elections, allowed an employee to feed a computer a precinct
identification card that was not accompanied by that precinct's ballots, during
the vote tabulation process. Apparently, the ballots cast in the Glenville
region had been inadvertently misplaced, and at 1 A.M. the board members "were tired and wanted to go home," so the election official authorized the bogus procedure, even though doing so might have constituted a violation of
state law. Subsequent inquiry did not lead to any indictments.
Technology alone does not eliminate the possibility of corruption and
incompetence in elections; it merely changes the platform on which they may
occur. The voters and the Election Boards who serve them must be made aware of the risks of adopting electronic vote-tallying systems, and must insist that the checks and balances inherent in our democracy be maintained.
[1] Papers by Saltman, Mercuri, Neumann and Greenhalgh, Proc. 16th
National Computer Security Conf., NIST/NCSC, Baltimore MD, Sep. 20-23, 1993.
Inside Risks columns by Neumann (Nov. 1990) and Mercuri (Nov. 1992) give
further background.
Rebecca Mercuri is a research fellow at the University of
Pennsylvania, where she is completing her dissertation on Computational
Acoustics in the Computer and Information Science Department. She frequently
testifies as an expert witness on computer security and voting systems. E-mail: mercuri@acm.org
======================================================================
Inside Risks 29, CACM 35, 11, November 1992
On July 23, 1992, New York City Mayor Dinkins announced that 7000 Direct
Recording Electronic (DRE) voting machines would be purchased from Sequoia
Pacific, pending the outcome of public hearings. This runs counter to the advice of
the NY Bar Association, independent groups of concerned scientists and citizens
(such as Election Watch, CPSR and NYPIRG), and SRI International (a consultant
to NYC, and the system evaluators), all of whom have indicated that the
equipment is not yet fit for use.
Background. At first glance, most DREs appear similar to mechanical
`lever' voting machines. Because the units lack any visual identification as `computers' (no monitors or keyboards), voters would be unlikely to assume that one or more (in some cases, as many as nine) microprocessors are housed inside. The ballot
is printed on paper which is mounted over a panel of buttons and LEDs. A thin
piece of flexible plastic covers the ballot face, to protect it from damage or
removal. The machine is housed in an impact- and moisture-resistant case,
shielded from EMI, and protected by battery back-up in the event of power loss.
At the start of the election session, poll workers run through a procedure to
make the machine operational, and similarly follow another sequence (which
produces a printed result total) to shut the device down at the end of the day.
A cartridge which contains the record of votes (scrambled for anonymity) is
removed and taken to a central site for vote tallying.
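One control suggested by this procedure is to reconcile, for every machine, the printed end-of-day totals against the counts recomputed from its cartridge at the central site. A minimal sketch follows (in Python; the record format is assumed for illustration and is not Sequoia Pacific's):

    # Reconcile a machine's printed totals with the totals recomputed from
    # its vote cartridge.  Record format is hypothetical.
    from collections import Counter

    def reconcile(machine_id, printed_totals, cartridge_records):
        """cartridge_records: iterable of candidate choices, order-scrambled."""
        recomputed = Counter(cartridge_records)
        if recomputed != Counter(printed_totals):
            raise ValueError(f"machine {machine_id}: cartridge totals "
                             f"{dict(recomputed)} differ from printed totals "
                             f"{printed_totals}")
        return recomputed

    reconcile("DRE-0042",
              {"Smith": 212, "Jones": 187},
              ["Smith"] * 212 + ["Jones"] * 187)

Because both records originate inside the same machine, such a check catches transcription and transport errors but cannot by itself establish that votes were recorded as the voters cast them.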
Risks. The astute reader, having been given this description of the
system, should already have at least a dozen points of entry in mind for system
tampering. Rest assured that all of the obvious ones (and many of the
non-obvious ones) have been brought to the attention of the NYC Board of
Elections. Furthermore, in SRI's latest published evaluation (June 19, 1991), the Sequoia Pacific AVC Advantage (R) systems failed 15 environmental/engineering requirements and 13 functional requirements, including resistance to dropping, temperature, humidity, and vibration. Under the heading of reliability, the
vendor's reply to the testing status report stated: ``SP doesn't know how to
show that [the Electronic Voting Machine and its Programmable Memory Device]
meets requirement -- this depends on poll workers' competence.''
The Pennsylvania Board of Elections examined the system on July 11, 1990, and
rejected it for a number of reasons, including the fact that it ``can be placed
inadvertently in a mode in which the voter is unable to vote for certain
candidates'' and it ``reports straight-party votes in a bizarre and inconsistent
manner.'' When this was brought to the attention of NYCBOE, they replied by
stating that ``the vendor has admitted to us that release 2.04 of their software
used in the Pennsylvania certification process had just been modified and that
it was a mistake to have used it even in a certification demonstration.''
Needless to say, the machines have not yet received certification in
Pennsylvania.
Other problems noted with the system include its lack of a guaranteed audit
trail (see Inside Risks, CACM 33, 11, November 1990), and the presence
of a real-time clock which Pennsylvania examiner Michael Shamos referred to as
``a feature that is of potential use to software intruders.''
Vaporware. Sequoia Pacific has now had nearly four years from when
they were told they would be awarded the contract (following a competitive
evaluation of four systems) if they could bring their machines up to the
specifications stated in the Requirements for Purchase. At an August 20 open
forum, an SP representative stated publicly that no machine presently existed
that could meet those standards. Yet the city intends to award SP the
$60,000,000 contract anyway, giving them 18 months to satisfy the RFP and
deliver a dozen machines for preliminary testing (the remainder to be phased in
over a period of six years).
Conclusions. One might think that the election of our government
officials would be a matter that should be covered by the Computer Security Act
of 1987, but voting machines, being procured by the states and municipalities
(not by the Federal government), do not fall under the auspices of this law, which needs to be broadened. Additionally, no laws in N.Y. State presently preclude convicted felons or foreign nationals from manufacturing, engineering, programming, or servicing voting machines.
This would not be so much of a concern had computer-industry vendors been
able to provide fully auditable, tamper-proof, reliable, and secure systems
capable of handling anonymous transactions. Such products are needed not only in
voting, but in the health field for AIDS test reporting, and in banking for
Swiss-style accounts. It is incumbent upon us to devise methodologies for
designing verifiable systems that meet these stringent criteria, and to demand
that they be implemented where necessary. ``Trust us'' should not be the bottom
line for computer scientists.
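One building block for such verifiable-yet-anonymous systems is an append-only log whose integrity anyone can check without learning who cast which record. The sketch below is illustrative only (it is not a complete voting protocol, and it says nothing about ballot secrecy at capture time or coercion); it chains anonymized records with a cryptographic hash so that later alteration or deletion is detectable:

    # Hash-chained log of anonymized records: tampering with any entry
    # breaks every subsequent link.  Illustrative sketch only.
    import hashlib, json

    def append(log, record):
        prev = log[-1]["link"] if log else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        log.append({"record": record,
                    "link": hashlib.sha256((prev + payload).encode()).hexdigest()})

    def verify(log):
        prev = "0" * 64
        for entry in log:
            payload = json.dumps(entry["record"], sort_keys=True)
            if entry["link"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = entry["link"]
        return True

    log = []
    append(log, {"ballot": {"mayor": "Jones"}})   # no voter identity recorded
    append(log, {"ballot": {"mayor": "Smith"}})
    assert verify(log)
    log[0]["record"]["ballot"]["mayor"] = "Smith"  # tamper with the first entry
    assert not verify(log)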
Rebecca Mercuri (mercuri@acm.org) is a Research Fellow at the University of
Pennsylvania's Moore School of Engineering and a computer consultant with
Notable Software. She has served on the board of the Princeton ACM chapter since
its inception in 1980. Copyright (C) 1992 by Rebecca Mercuri.
======================================================================
Inside Risks 5, CACM 33, 11, p. 170, November 1990
Background. Errors and alleged fraud in computer-based elections have
been recurring Risks Forum themes. The state of the computing art continues to
be primitive. Punch-card systems are seriously flawed and easily tampered with,
and still in widespread use. Direct recording equipment is also suspect, with no
ballots, no guaranteed audit trails, and no real assurances that votes cast are
properly recorded and processed. Computerized elections are being run or
considered in many countries, including some notorious for past riggings; thus
the risks discussed here exist worldwide.
Erroneous results. Computer-related errors occur with alarming
frequency in elections. Last year there were reports of uncounted votes in
Toronto and doubly counted votes in Virginia and in Durham, North Carolina. Even
the U.S. Congress had difficulties when 435 Representatives tallied 595 votes on
a Strategic Defense Initiative measure. An election in Yonkers, NY, was reversed
because of the presence of leftover test data that accumulated into the totals.
Alabama and Georgia also reported irregularities. After a series of mishaps,
Toronto has abandoned computerized elections altogether. Most of these cases
were attributed to ``human error'' and not ``computer error'' (cf. the October
1990 Inside Risks column), and were presumably due to operators and not
programmers; however, in the absence of dependable accountability, who can tell?
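Several of these failures would have been caught by elementary invariant checks applied before any totals were released, for example that tallied votes never exceed eligible voters and that test ballots are kept out of live totals. A minimal sketch (in Python; the record fields are hypothetical):

    # Hypothetical pre-release checks on a tally.
    def check_tally(votes_tallied, eligible_voters, records):
        errors = []
        if votes_tallied > eligible_voters:
            errors.append(f"{votes_tallied} votes tallied from only "
                          f"{eligible_voters} eligible voters")
        leftover = [r for r in records if r.get("test_data")]
        if leftover:
            errors.append(f"{len(leftover)} test record(s) mixed into live totals")
        return errors

    # 435 Representatives somehow produced 595 recorded votes:
    print(check_tally(595, 435, [{"test_data": False}] * 595))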
Fraud. If wrong results can occur accidentally, they can also happen
intentionally. Rigging has been suspected in various elections, but lawsuits
have been unsuccessful, particularly in the absence of incisive audit trails. In
many other cases, fraud could easily have taken place. For many years in
Michigan, manual system overrides were necessary to complete the processing of
noncomputerized precincts, according to Lawrence Kestenbaum. The opportunities
for rigging elections are manifold, including the installation of trapdoors and Trojan horses, which would be child's play for vendors and knowledgeable election officials.
Checks and balances are mostly placebos, and easily subverted. Incidentally, Ken
Thompson's oft-cited Turing lecture, Commun. ACM 27, 8 (August 1984),
761-763, reminds us that tampering can occur even without any source-code
changes; thus, code examination is not enough.
Discussion. The U.S. Congress has the constitutional power to set
mandatory standards for Federal elections, but has not yet acted. Existing
standards for designing, testing, certifying, and operating computerized
vote-counting systems are inadequate and voluntary, and provide few hard
constraints, almost no accountability, and no independent expert evaluations.
Vendors can hide behind a mask of secrecy with regard to their proprietary
programs and practices, especially in the absence of controls. Poor software
engineering is thus easy to hide. Local election officials are typically not
sufficiently computer-literate to fully understand the risks. In many cases, the
vendors run the elections.
Reactions in RISKS. John Board at Duke University expressed surprise
that it took over a day for the doubling of votes to be detected in eight Durham
precincts. Lorenzo Strigini reported last November on a read-ahead
synchronization glitch and an operator pushing for speedier results, which
together caused the computer program to declare the wrong winner in a city
election in Rome, Italy. Many of us have wondered how often errors or frauds
have remained undetected.
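The Rome case is at bottom a synchronization failure: partial results were read before all inputs had been committed. A toy illustration (not the actual Italian tallying program) of the difference between reading ahead and waiting for an explicit completion marker:

    # Toy illustration of declaring a winner from partially loaded results.
    results = {}              # precinct -> (votes_for_A, votes_for_B)
    loading_complete = False

    def load(precinct, votes_a, votes_b, last=False):
        global loading_complete
        results[precinct] = (votes_a, votes_b)
        if last:
            loading_complete = True

    def declare_winner(wait_for_completion):
        if wait_for_completion and not loading_complete:
            return None                     # refuse to report prematurely
        total_a = sum(a for a, _ in results.values())
        total_b = sum(b for _, b in results.values())
        return "A" if total_a > total_b else "B"

    load("P1", 100, 900)
    print(declare_winner(wait_for_completion=False))   # premature answer: B
    load("P2", 2000, 50, last=True)
    print(declare_winner(wait_for_completion=True))    # correct answer: A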
Conclusions. Providing sufficient assurances for computerized election
integrity is a very difficult problem. Serious risks will always remain, and
some elections will be compromised. The alternative of counting paper ballots by
hand is not promising. But we must question more forcefully whether computerized
elections are really worth the risks, and if so, how to impose more meaningful
constraints.
Peter G. Neumann is chairman of the ACM Committee on Computers and Public
Policy, moderator of the ACM Forum on Risks to the Public in the Use of
Computers and Related Systems, and editor of ACM SIGSOFT's Software Engineering
Notes (SEN). Contact risks-request@csl.sri.com for on-line receipt of RISKS.
References. The Virginia, Durham, Rome, Yonkers, and Michigan cases
were discussed in ACM Software Engineering Notes 15, 1 (January 1990),
10-13. Additional cases were discussed in earlier issues. For background, see
Ronnie Dugger's New Yorker article, 7 November 1988, and a report by Roy G.
Saltman, Accuracy, Integrity, and Security in Computerized Vote-Tallying, NIST
(NBS) special publication, 1988. Also, see publications by two nongovernmental
organizations, Computer Professionals for Social Responsibility (P.O. Box 717, Palo
Alto CA 94302) and Election Watch (a project of the Urban Policy Research
Institute, 530 Paseo Miramar, Pacific Palisades CA 90272).
======================================================================
Web Cookies: Not Just a Privacy Risk
Emil Sit and Kevin Fu
Risks in E-mail Security
Albert Levi and Cetin Kaya Koc
Learning from Experience
Jim Horning
PKI: A Question of Trust and Value
Richard Forno and William Feinbloom
Be Seeing You!
Lauren Weinstein
Cyber Underwriters Lab?
Bruce Schneier
Computers: Boon or Bane?
Peter G. Neumann and David L. Parnas
What To Know About Risks
Peter G. Neumann
System Integrity Revisited
Rebecca T. Mercuri and Peter G. Neumann
Consider a computer product specification with data input, tabulation,
reporting, and audit capabilities. The read error must not exceed one in a
million, although the input device is allowed to reject any data that it
considers to be marginal. Although the system is intended for use in secure
applications, only functional (black box) acceptance testing has been performed,
and the system does not conform to even the most minimal security criteria.
Semantic Network Attacks
Bruce Schneier
Voting Automation (Early and Often?)
Rebecca Mercuri
Rebecca Mercuri (mercuri@acm.org) defended her doctoral thesis on this subject at the University of Pennsylvania on 27 October 2000. She is a member of the Computer Science faculty at Bryn Mawr College, and an expert witness in forensic computing.
Tapping On My Network Door
Matt Blaze and Steven M. Bellovin
Missile Defense
Peter G. Neumann
1. Why software is unreliable.
2. Why SDI would be unreliable.
3. Why conventional software development does not produce reliable programs.
4. The limits of software engineering methods.
5. Artificial intelligence and the SDI.
6. Can automatic programming solve the SDI software problem?
7. Can program verification make the SDI software reliable?
8. Is the SDI Office an efficient way to fund worthwhile research?
Shrink-Wrapping Our Rights
Barbara Simons
Risks in Retrospect
Peter G. Neumann
Risks of Internet Voting
Lauren Weinstein
Internet Risks
Lauren Weinstein and Peter G. Neumann
Denial-of-Service Attacks
Peter G. Neumann
A Tale of Two Thousands
Peter G. Neumann
Risks of PKI: Electronic Commerce
Carl Ellison and Bruce Schneier
Risks of PKI: Secure E-Mail
Carl Ellison and Bruce Schneier
Risks of Insiders
Peter G. Neumann
Risks of Content Filtering
Peter G. Neumann and Lauren Weinstein
Risks of Relying on Cryptography
Bruce Schneier
The Trojan Horse Race
Bruce Schneier
Biometrics: Uses and Abuses
Bruce Schneier
Information is a Double-Edged Sword
Peter G. Neumann
Risks of Y2K
Peter G. Neumann and Declan McCullagh
Ten Myths about Y2K Inspections
David Lorge Parnas
A Matter of Bandwidth
Lauren Weinstein
Bit-Rot Roulette
Lauren Weinstein
Robust Open-Source Software
Peter G. Neumann
Our Evolving Public Telephone Networks
Fred B. Schneider and Steven M. Bellovin
The Risks of Hubris
Peter B. Ladkin
The danger of what you ask is infinite --
To yourself, to the whole creation.
...
You are my son, but mortal. No mortal
Could hope to manage those reins.
Not even the gods are allowed to touch them.
...
Over the whole five zones of heaven.
Poised it by his ear,
Then drove the barbed flash point-blank into Phaethon.
The explosion
Snuffed the ball of flame
As it blew the chariot to fragments. Phaethon
Went spinning out of his life.
Towards Trustworthy Networked Information Systems
Fred B. Schneider
Risks of E-Education
Peter G. Neumann
Members of the ACM Committee on Computers and Public Policy and
the Computing Research Association Snowbird workshop provided valuable input to
this column. (As we go to press, I just saw a relevant article by R.B. Ginsberg
and K.R. Foster, ``The Wired Classroom,'' IEEE Spectrum, 34, 8, 44--51,
August 1998.)
Y2K Update
Peter G. Neumann
Computer Science and Software Engineering: Filing for Divorce?
Peter J. Denning
Laptops in Congress?
Peter G. Neumann
+ Note-taking that can be recorded and later turned into memos or even legislation
+ Immediate access to proposed and past legislation
-- Risks of overdependence on laptops (part of a generic risk of technology)
+ Immediate nonintrusive prompting by staffers
+ Immediate access to proposed wording changes
+ Ability for e-mail with traveling colleagues
+ Remote countdowns to impending votes
+ Ability to vote remotely from a hearing room (a real convenience, but apparently anathema to Senators)
+ Greater experience with the benefits and risks of on-line technology; risk awareness might inspire Congress to realize the importance of good nonsubvertible computer-communication security and cryptography (soundly implemented, without key-escrow trapdoors).
= Discovering that Windows (95 or 98) isn't all that's advertised (at the risk of legislation on the structure of operating systems and networks!)
-- Penetrations by reporters, lobbyists, and others, obtaining private information (as in the recent House cell-phone recording and Secret-Service pager interceptions), altering data, etc.
+ Ability to communicate by e-mail with colleagues who are traveling
+ Ability to browse the World Wide Web for timely information (although there are risks of being confused by misinformation)
+ Rapid information dissemination
+ Ability to vote remotely even when travelling (requiring an increase in nonsubvertible computer-communication security), and thereby not being chastised at election time for a poor voting record. (Can you imagine there being no excuse for not voting other than the desire to avoid being on the record?)
= Possibility of receiving unsolicited e-mail spams and denial-of-service attacks -- or improving security!
-- Possibility of being influenced by lobbyists (not really a laptop-specific risk)!
Infrastructure Risk Reduction
Harold W. Lawson
In Search of Academic Integrity
Rebecca Mercuri
On Concurrent Programming
Fred B. Schneider
(a) is decreased by some program actions that must eventually run, and
(b) is not increased by any other program action.
Are Computers Addictive?
Peter G. Neumann
Internet Gambling
Peter G. Neumann
Protecting the Infrastructures
Peter G. Neumann
More System Development Woes
Peter G. Neumann
Spam, Spam, Spam!
Peter G. Neumann and Lauren Weinstein
Corrupted Polling
Rebecca Mercuri
Voting-Machine Risks
Rebecca Mercuri
Risks in Computerized Elections
Peter G. Neumann