First Monday

Posthuman Law: Information Policy and the Machinic World by Sandra Braman

Abstract
It has been an unspoken assumption that the law is made by humans for humans. That assumption no longer holds: The subject of information policy is increasingly the flow of information between machines, machinic rather than social values play ever-more important roles in decision-making, and information policy for human society is being supplemented, supplanted, and superseded by machinic decision-making. As the barrier between the human and machinic falls with the implantation of chips within the body and other types of intimate relationships, and as dependence upon the information infrastructure continues to grow, the question of the rights of technological systems themselves is entering the legal system. This paper explores information technologies as the policy subject, as determinant of the values that inform information policy, and as policy-makers. All of these are manifestations of a transformation in the legal system so fundamental that it may be said that we are entering a period of posthuman law.

Contents

Introduction
Technologies as the Subject of Information Policy
Machinic Values in Information Policy
Technologies as Policy-makers
Information Policy as Posthuman Law

 


 

++++++++++

Introduction

It is an unspoken assumption that the law is made by humans for humans. That assumption no longer holds: The information, communication, and culture that are the subject of information law and policy increasingly flow between machines, or between machines and humans. Machinic rather than social values play ever-more important roles in the decision-making calculus. Information policy-making for human society is being supplemented, supplanted, or superseded by machinic decision-making. With the implantation of technologies in the human body — it is now possible to connect computer chips directly to the neural cells of the brain — the legal distinction between the human and machinic may well fall altogether. This would be a logical next step in the progression from viewing information and communication technologies as:

barriers to experience (see, e.g., Carey, 1989; Enzensberger, 1992; Habermas, 1989; Held, 1980; Hyde, 1983; Negt, 1978; Ronell, 1989);

shaping experience (see, e.g., Comor, 1996; Innis, 1951; Meyrowitz, 1986; Mumford, 1934);

experience itself (see, e.g., Bolter, 1984; Boorstin, 1961, 1992; Goody, 1977; Havelock, 1963; McLuhan, 1964; Ong, 1982; Sontag, 1977);

the reality being experienced (see, e.g., Baudrillard, 1983; Gray, 1995; Haraway, 1991; Hookway, 1999).

These transformations are profound enough that we may begin to refer to the emergence of posthuman law.

This is of course not the first time the content and practice of the law have mutated in response to new conditions. Technological developments such as the railroad and electricity; the shift from an agrarian to an industrial economy; the social changes wrought by urbanization, the rise of the middle class, populism, and successive waves of immigration; the geographic expansion of the West; and changing relations with other countries have all brought U.S. law into new areas of activity and stimulated the development of new types of policy tools (Friedman, 1985). Ideological and theoretical developments have generated political and economic conflict that resulted in legal change (Horwitz, 1992; Skocpol, 1992). Information-related issues have been among the stimuli for changes in the law since the beginning (Chandler and Cortada, 2000), though it is only in recent decades that they have come to dominate.

This article examines technologies as the subject of information policy, as determinant of the values that inform information policy, and as policy-makers. The word "technologies" as used here refers both to technologies and to meta-technologies — that is, to the technologies limited in the range of inputs, outputs, and processing steps that characterized the industrial era as well as to the meta-technologies of unlimited inputs, outputs, and processing steps that characterize the information society (Braman, in press). The assemblage presented by these often-intertwined types of technologies is described as "machinic."

 

++++++++++

Technologies as the Subject of Information Policy

The notion of the information society is based on Engels' law: the quantitative increase in the number of information technologies upon which we are dependent, and in the number of ways in which we are dependent upon them, has yielded a qualitative change in the nature of society. As human dependence upon the information infrastructure grows, the network and other information technologies are increasingly included within the social universe to which policy applies. Already, policy is being made that changes law designed for the human in order to take care of the needs of the machinic. The U.S. Telecommunications Act of 1996 provides a vivid example by distinguishing between the social and the machinic: Universal service obligations require network access for individuals, while universal access obligations require network access for telecommunications networks. In popular, and often policy, discourse the difference between the two is often elided.

Regulation that mandates the interconnection of telecommunications networks, which first appeared in the U.S. following the positive experience with the utility and value added when the telegraph, telephone, and radio systems were nationalized during World War I, was one of the first types of policy directed at the network itself. Today, there are policies explicitly directed at technologies in the areas of surveillance, encryption, copyright protection, and censorship (filtering). This raises questions about the type of policy tools used when technology is the policy subject, the relationship and differences between the human and the machinic as policy subjects, and problems raised by technology as a policy subject.

Policy Tools for the Regulation of Technology

Policy specialists have been discussing the regulatory implications of computer code since the mid-1990s (Braman, 1993), and the topic reached public discourse after it was popularized by Lessig (1999). Manipulations of code for regulatory purposes can take place at four different levels of the infrastructure: at the root server level of the domain name system, at the application layer of the TCP/IP protocol suite that defines the Internet, on individual users' hard drives, and in the design of digital products that may be sold or distributed off-line or online (Biegel, 2001).
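The regulatory force of code can be made concrete with a minimal sketch, assuming an invented blocklist and invented domain names rather than any real system: a toy resolver that enforces policy simply by refusing to answer, so that the rule operates as a technical fact rather than as a legal proceeding.

    # A minimal sketch, with invented names and addresses, of regulation
    # embedded in code: a toy resolver that enforces a blocklist by
    # refusing to answer.

    from typing import Optional

    BLOCKLIST = {"banned-example.org"}

    HOSTS = {
        "allowed-example.com": "192.0.2.10",
        "banned-example.org": "192.0.2.66",
    }

    def resolve(domain: str) -> Optional[str]:
        """Return an address, or None when the embedded policy forbids resolution."""
        if domain in BLOCKLIST:
            return None  # no hearing, no appeal: the name simply does not resolve
        return HOSTS.get(domain)

    for name in ("allowed-example.com", "banned-example.org"):
        print(name, "->", resolve(name))

The structural point is that the user experiences the refusal as a technical condition of the network, not as a decision that can be contested.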

Code is not the only means by which technologies are subjected to regulation, however. Technical standard-setting in the private sector, government procurement practices, public statutory and regulatory law, and decision-making by emergent regulatory entities such as ICANN all direct policy tools at digital technologies. Specific policy tools include requiring adaptations to technologies, applying antitrust law, taxing different types of technologies and/or their production differentially, and promoting the development of certain types of technologies through government procurement practices, funding for research and development (R&D), or simply looking the other way when destructive innovations are put into place (as when copyright protection mechanisms destroy hardware and prevent the use of services). Constitutional law, too, regulates technologies, for it is still being debated whether information processing is a form of speech and/or of the information gathering necessary for the exercise of First Amendment rights, and therefore deserving of constitutional protection [1].

Analysis of technologies as a regulatory subject is complex because of the mix of public, private, and "networked" (combined public and private) policy settings in which decision-making takes place. Nor is all pertinent regulation the result of centralized decision-making; notably, much of the technical development of the Internet with social effect was deliberately decentralized [2].

Society vs. Technology as the Policy Subject

There is a spectrum of motivations for directing policy at technologies as the regulatory subject. As a result, there are differences in the degree to which such regulation should be considered social policy as well — and in whether the effects on social policy are direct or indirect. At one extreme, the use of a technological intervention to achieve a social goal is explicit, as in the requirement that packet-switched telecommunications networks be adapted to enable surveillance for law enforcement purposes. At the other extreme, decisions are put in place that present themselves as required solely for technical reasons and as having only technical impact, as exemplified by Internet protocols up to and through the domain name system. That technical decisions in fact have tremendous direct social, political, cultural, and economic impact is now widely understood, however, and the first steps towards developing methods for taking such decisions into account in the conduct of information policy analysis are beginning to appear (Mansell and Silverstone, 1996).

The heavy center of this spectrum is filled with policies that appear to be justified for social or economic reasons but that are so oriented around technological issues that they have the effect of permitting the "needs" of technological systems to dominate decision-making for the social world. This was the case, for example, when federal law preempted state law regarding cable television content because doing so was deemed necessary for the growth of the national cable system, an argument upheld by the U.S. Supreme Court as if the rights of the network itself were specified in the Constitution (Capital Cities v Crisp, 1984). Creating exemptions to antitrust law for the purposes of research and development of very high speed integrated circuits (VHSIC) in the early 1980s is another example. While the strengthening of foreign competition in this area provided a social justification for this change in the law, the feature of this technology that required a loosening of antitrust law was the need to collaborate in the use of intellectual property rights held by multiple firms. This was not new — multiple patents, often with different owners, have been involved in the production of individual information and communication technologies since the late 19th century. Numerous times in the past corporations have merged or combined efforts in various ways in order to share the necessary intellectual property rights, and numerous times antitrust law has been used to tear those combinations apart. By the early 1980s, however — a century after the tension between antitrust law and the need to share access to intellectual property rights first became clear — the weight of the argument finally shifted in favor of serving technological rather than social goals.

The notion of "standing" before the law — the right to pursue redress for perceived injustice within the legal system — has steadily broadened over the past couple of hundred years within the social and natural domains. The boundaries of the social domain to which policy applies have expanded in the U.S. through redefinition of the population to be considered for the political purposes of the census to include women, children, immigrants, and those who had been slaves; the franchise — the right to vote — was expanded through a process that took most of the 20th century; and expansion of the public sphere, the site for public discourse about political matters, is still underway. The legal status of the natural world before the law has been raised by those concerned about environmental matters in recent decades, manifesting itself in the concept of environmental security as a form of national security, (Allenby, 2000; Foster, 2001) and in the U.S. Supreme Court consideration of the granting of legal rights to natural environments such as forests (Stone, 1974). It is likely that consideration of the "rights" of the technologies upon which we are dependent will become more explicit over the next few years. Analysis of the liability issues raised by the decisions made and actions taken by intelligent agents will inevitably raise the salience of this type of question on the policy agenda.

Problems Raised by Technology as the Policy Subject

There are both implementation and conceptual problems when regulating technologies as a policy subject. At the implementation level, code-based changes can be countered by other code that bypasses, breaks through, or otherwise circumvents the effects of policy. Alternative technological systems can have the same effect, as in the growing use of a domain name system that falls completely outside the purview of ICANN. Anticircumvention rules may be built into the law, as they were in the 1996 WIPO Treaty and the 1998 U.S. Digital Millennium Copyright Act, but both court challenges and practice are likely to undermine these provisions at the level of implementation. There is a danger that too much regulation will drive users away. And regulation directed at one type of technology may have unintended effects on other technologies or practices, as when copyright protection schemes for software wind up disabling hardware.

Conceptually, technologies and humans as agents are different in ways that problematize the extension of legal principles to the machinic world. Whether the question is one of freedom for political speech or of antitrust, the goal of information policy is to constrain or encourage the actions of agents in order to minimize or maximize particular types of effects. Causality within the world of policy has been understood as direct — meaning discernible, affected by relatively few intervening variables, and occurring via single or very few causal steps. Agents were identifiable, whether as individual persons or as the organizational persons created by the legal fiction of incorporation. In the contemporary environment, however, technological agents are not recognized by the law as fictive or real persons, and yet there is clearly implied agency in the identification of technology as a policy subject. This is a conceptual problem that has not yet been adequately resolved within the law. The historical analogue of the creation of the fictive person of the corporation, with legal status under the law, suggests the possible creation of a second category of fictive person with legal status for technologies and/or technological systems.

 

++++++++++

Machinic Values in Information Policy

One of the unique features of information policy is the multiplicity of values that inform decision-making processes. This is exacerbated by the dispersal of policy-making across multiple venues, each with a different portfolio entailing its own value hierarchy, modes of argument, and operational definitions (Braman, 1990). Technology can be considered among these values in both an abstract and a concrete sense. Conceptually, technology is a manifestation of the value of "technique," defined by philosopher of technology Jacques Ellul (1964) as a predetermined means of achieving a predetermined end. On the ground, technology informs information policy-making as a value when machinic analytical techniques are relied upon exclusively for the data inputs upon which decisions are made.

Attitudes towards the desirability of this type of influence on policy-making have gone back and forth. There have been repeated rounds of experimentation with such approaches, or their actual incorporation within the law, only to be questioned in the courts or rejected by practitioners (Hadden, 1989). The U.S. Regulatory Reform Act of 1982 (which amended the Administrative Procedure Act), for example, required agencies to perform cost-benefit analysis when proposing and promulgating regulations (Fisher, 1984), though the U.S. Supreme Court has several times considered the appropriateness of such techniques in particular situations worthy of constitutional consideration (American Textile Manufacturers v Donovan, 1981; Baltimore Gas & Electric v NRDC, 1983; Federal Communications Commission (FCC) v WNCN, 1981). There are several justifications for accepting a role for machinic values in information policy, and several arguments against doing so.

Justifications

A philosophical basis for the use of computerized approaches to legal decision-making can be found in the probabilistic thinking that lies at the heart of all rules and in utilitarian principles for governance (Held, 1989). "Winning" in such a situation is defined as maximizing the benefits of a policy. Thus the "efficiency" claim holds only in the sense captured by cost-benefit analysis.
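A minimal sketch of that calculus, with figures invented purely for illustration: candidate policies are reduced to quantified costs and benefits, and "winning" is whichever option maximizes net benefit.

    # Toy cost-benefit comparison: "winning" = the largest net benefit.
    # All figures are invented for illustration.

    policies = {
        "policy_a": {"benefit": 120.0, "cost": 100.0},
        "policy_b": {"benefit": 90.0, "cost": 40.0},
        "policy_c": {"benefit": 200.0, "cost": 185.0},
    }

    def net_benefit(p: dict) -> float:
        return p["benefit"] - p["cost"]

    winner = max(policies, key=lambda name: net_benefit(policies[name]))
    print(winner)  # policy_b: whatever was never quantified never entered the calculus

The design choice is the point: anything that resists quantification simply has no place in the decision, a weakness taken up below.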

The use of inference strategies to support legal decision-making does have a long history in the distinction between substantive and procedural law. The utility principle of governance, in accordance with which governments would try through careful calculation to achieve the greatest happiness for the greatest number, offers an alternative justification for using technologically-derived inputs into policy-making. The subtlety of such approaches has improved; even game theory today is sensitive to ways in which small changes in the information at the disposal of actors can affect both the existence and the character of the equilibria achieved (Keohane and Ostrom, 1995). The concept of an algorithm as it is used in computing is broader than that of a mathematical formula, referring to the basic design for how data is to be organized and manipulated in a program in order to establish certain results (Samuelson, 1990). In distributed computing systems, the interactions and results look ever-more organic.

Other advantages can appear when it is possible to use expert systems to make the tacit knowledge of individuals available to others and when the factors being analyzed are quantifiable. Historically, such techniques came to dominate policy-making under the rubric of operations research during World War II, when the amount of information to be managed exploded and personnel with training in systems analysis became involved in government. Clearly computerization makes it easier to compare the possible consequences of alternative policy approaches or to explore possible implications further out than ordinary human vision can necessarily see (Katsh, 1989). There are times when such techniques are required to dissolve the ambiguity of other types of data, as when the military uses the sophisticated software of MASINT (Measurement and Signatures Intelligence) known as "exploitation algorithms" for this purpose.
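What it means to make tacit knowledge available through an expert system can be shown with a minimal sketch, in which the rules — entirely invented here — stand in for a practitioner's unwritten rules of thumb, restated as explicit condition-conclusion pairs that anyone, or any machine, can apply.

    # Minimal expert-system sketch: tacit rules of thumb made explicit
    # as condition -> conclusion pairs. The rules are invented.

    RULES = [
        (lambda f: f["precedent_on_point"] and f["same_jurisdiction"],
         "strong argument: cite the precedent directly"),
        (lambda f: f["precedent_on_point"] and not f["same_jurisdiction"],
         "persuasive only: argue the precedent by analogy"),
        (lambda f: not f["precedent_on_point"],
         "argue from statutory text and policy"),
    ]

    def advise(facts: dict) -> str:
        # Fire the first rule whose condition the facts satisfy.
        for condition, conclusion in RULES:
            if condition(facts):
                return conclusion
        return "no applicable rule"

    print(advise({"precedent_on_point": True, "same_jurisdiction": False}))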

The ease of use of computerized approaches to decision-making support for policy-making may be their most compelling advantage. Indeed, precisely for this reason analytical techniques that lend themselves to such treatment tend to be favored over those that may be more representative of the diversity of interests and values that should be incorporated into the decision-making process. On its own, however, this is hardly a strong justification. Such approaches have been most acceptable when they provide inputs into decision-making, assessing alternative policy choices or assisting in the development of legal arguments. Even under such circumstances, though, governments often appear more interested in minimizing cost than in maximizing benefits, responding to the results of such analyses only with conflict avoidance (Rose, 1976).

Weaknesses

While it can be argued that computers "rationalize" policy, there are different types of decision-making rationality: logistic rationality (control maximized at the top at the expense and degradation of those below), tactical rationality (certainty maximized at the top at the expense and degradation of trust and morale below), and strategic rationality (unilateral gains maximized at the top at the expense of negotiation and cooperation) (de Landa, 1991). What is required to maximize one form of rationality may well not serve another.

Other weaknesses of basing policy on machinic values are myriad. Inappropriate values may be established in this way (Sawyer et al., 1995). Such approaches rely on quantifying variables that are not validly quantifiable (Galtung, 1982; Schauer, 1984). They cannot cope with modifications in underlying assumptions (Nicholson, 1987). They may be vulnerable to similarly motivated methods of avoiding detection of a lack of compliance (Chakrabarti and Strauss, 2002) or to manipulation of decision-making processes (Abrash and Ginsburg, 2002; Critical Art Ensemble, 1996, 2001). Issues of morale disappear, differences between simulation and reality blur, situations are idealized in such a way that possible sources of friction are not visible, and it is assumed that all actors think alike. They are prone to data error at every level, can erase distinctions among different phases of decision-making processes, and treat as discrete events matters that are highly inter-related (de Landa, 1991; Wilson, 1986). They are incapable of taking into account differences in the ways the information they provide will be cognitively processed by those who use it (Magat and Viscusi, 1992), the politics of the situations in which they will be used (Rose, 1976), or, so far, the realities of networked as opposed to hierarchical environments. The use of computer models by an administrative agency reduces the discretion of agency officials and the courts and precludes in-depth analysis.

Because most attorneys have no training in either research methods or the formal economic, political, and sociological presuppositions of particular types of computer analysis, their actual uses of information provided by models are restricted to three possibilities: They may deny computer models any value, they may believe anything a computer model says, or they may examine those using models on moral grounds such as corruption or bias. What they will not be able to do is engage in meaningful dialogue with the model builders concerning the basic assumptions that guide the construction of social reality (Ackerman, 1984). Excessive reliance upon computers for policy inputs could even displace human experts, as it has in other social sectors (Entin, 1992).

Reliance upon computers introduces vulnerabilities, such as the possibility that an electronic database developed to handle the massive amounts of information generated through discovery in a major case may bring a trial to a halt if there is a glitch. Competitive use of computerized supports for policy-making can change the balance of power when those who are more technologically sophisticated gain the advantage (Patterson, 1985). And though in the U.S. judges and regulators are required to explain each of their decisions as a critical requirement of fairness, computerized decisions are not supported by rationales.

A last class of problems in using computerized inputs into decision-making is their potential for changing the very nature of the policy process. The impact of computer systems on legal research, discovery processes, court planning (Hamilton, 1983), development of legal arguments (Rawanpura et al., 2001), administration of government services (Henman and Adler, 2001), and alternative dispute resolution (Katsh and Rifkin, 2001) has received a fair amount of attention in public, professional, and scholarly literatures. Software to serve such purposes continues to be developed (Hargreaves, 1998). There has been far less attention, however, to the impact of computerization on the nature and outcomes of policy-making and legal processes, the relationship between government and society, and the vulnerabilities introduced. Often computers will propose things that humans will not, as when computers cross the nuclear threshold in war games while humans will not. Such technologies can blur the distinctions between different types of decision-making responsibility, such as that between advisors and the executive, and, on the other hand, treat as discrete those questions that cannot be fully understood except as parts of inter-related processes. And there are implications for the relative power of political institutions as they take up such approaches at different speeds and with varying degrees of sophistication. Ultimately, computerization of legal decision-making profoundly affects the nature of the law at the constitutional level without the political discussion that should precede radical changes in the nature of the legal system.

 

++++++++++

Technologies as Policy-makers

As economic historian Alfred D. Chandler, Jr. (1977) argued in his seminal work The Visible Hand, decision-making power began to be transferred to machines as early as the 1870s. Chandler was writing about the impact of automation, but the argument is even more important with computerization. The transfer is fundamental to the nature of the law because its effects include the migration of logical structures and decision-making procedures from the human mind, through notation and representation systems, to the environment itself.

Use of computational aids to decision-making actually began with the appearance of modern war games — distinguished from games of war such as chess by their linkage of conflict modeling with representations of the environment in which conflict was taking place — early in the 19th century (van Creveld, 1991). A shift in attention from conflict to the rules of conflict, the development of quantitative approaches to the analysis of conflict, and the steady increase in computing speed [2] provided the basis for a real take-off in the use of computerized approaches to decision-making during World War II. When directed at conflict, this was known as game theory, while in other contexts it developed as operations research. In 1959 Dr. Lucien Mehl first presented the idea of constructing a "Law Machine" that would provide legal decisions within highly specialized fields of law when armed with legal concepts, logical functions, and facts (Clark and Economides, 1991). By the 1960s a new style of writing appeared in judicial opinions that mimicked or tried to develop formulaic approaches through use of such terms as "tests," "prongs," "requirements," "standards," and "hurdles" (Nagel, 1989). The types of problems addressed via computerization expanded significantly with efforts to model cooperation as well as competition in the 1970s (Axelrod and Keohane, 1985).

By the 1980s the use of computers in communication policy-making began to be explicitly discussed (Dutton and Kraemer, 1985), and they were actually used to allocate spectrum globally. Analyses of courts (Corsi, 1984) and of litigation (Coglianese, 1992) as computerizable systems began to appear, the computerized form of intelligence work known as MASINT (Measurement and Signatures Intelligence, referring to statistical analyses of a number of types of sensory information) came into use (Sibbet, 1990), and the first widely known use of artificial intelligence (AI) in administration appeared with the logic programming used to implement the British Nationality Act.

Computerized techniques have infiltrated policy-making processes in three ways: They supplement human decision-making when they provide aids to humans for specific decision-making tasks. They supplant human decision-making when decisions appear to be human but in fact are made by machines. And they supersede human decision-making when effective decision-making — even for the social world — is undertaken by autonomously evolving non-human electronic intelligences.

Supplementing Human Decision-making

The process Chandler was talking about in his analysis of the late 19th century was replacement of humans by machines for specific narrowly-defined tasks such as coordinating train schedules or multi-step manufacturing processes. Examples of the replacement of human decision-making with computers in the contemporary legal environment include profiling, sentencing and, in the area of communications policy, spectrum allocation.

Profiling entails developing a statistical portrait on the basis of a few identifying features and differentially treating those individuals who fit the parameters thus defined. This statistical portrait replaces the legal judgment of individuals charged with the responsibility of examining the facts of a case, in the same way that statistical fragments have replaced holistic approaches to the human body in medicine: Both the medical subject and the legal subject have been shattered into pieces amenable to computerized decision-making.

The first profile was developed to identify drug couriers in 1974 and, despite what many felt to be inconsistencies and absurdities in the results, was rapidly taken up by the police and courts both to trigger investigations of particular individuals and to justify arrest (Ryan, 1992). Because early U.S. court cases established that mere conformity to some or all of the characteristics of a drug courier profile did not amount to probable cause, judges took the position that partial conformity to a profile reaches the lower barrier of reasonable suspicion. When the U.S. Supreme Court examined the practice, it awarded the profile high accolades for innovativeness and for the expertise, training, and organization required to use it (U.S. v Mendenhall, 1980; Reid v Georgia, 1980; Florida v Royer, 1983).

In an interesting example of interactions between the use of a specific decision-making tool and changes in the very nature of decision-making processes themselves, the earliest drug courier profiles relied heavily on evidence used in courts for the data upon which the profiles were based. Computerized databases offered a technological boost, however, that changed the way in which profiles were shaped. While originally profiles were carefully built from analysis of the characteristics of those who had been found guilty of the crimes involved, data matching — linking information about an individual gathered for one purpose and held in one database with other information gathered for another purpose and held in a separate database — made it possible to develop profiles based only on statistical calculations.
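A minimal sketch of data matching, with entirely invented records, weights, and threshold: information gathered for one purpose (travel bookings) is linked by a shared identifier to information gathered for another (financial reporting), and individuals who cross a purely statistical threshold are flagged; no conviction data enters at any point.

    # Toy data-matching sketch: two databases built for unrelated purposes,
    # joined on a shared identifier to yield a "profile" score.
    # Records, weights, and threshold are invented for illustration.

    travel_db = {
        "id_001": {"one_way_tickets": 3, "cash_purchases": 2},
        "id_002": {"one_way_tickets": 0, "cash_purchases": 0},
    }

    finance_db = {
        "id_001": {"large_cash_deposits": 4},
        "id_002": {"large_cash_deposits": 1},
    }

    def profile_score(person_id: str) -> int:
        t = travel_db.get(person_id, {})
        f = finance_db.get(person_id, {})
        # Arbitrary weights: the "profile" is just a weighted sum of fragments.
        return (2 * t.get("one_way_tickets", 0)
                + 1 * t.get("cash_purchases", 0)
                + 3 * f.get("large_cash_deposits", 0))

    FLAG_THRESHOLD = 10
    flagged = [pid for pid in travel_db if profile_score(pid) >= FLAG_THRESHOLD]
    print(flagged)  # ['id_001'] -- flagged by statistical calculation alone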

The use of profiles for purposes of law enforcement received a tremendous conceptual boost in the early 1990s, when it became necessary to generate a new definition of the "enemy" for the post-Cold War environment. New security theory used a four-fold definition of the enemy: anyone involved with terrorism, with drugs, or with an economic threat to the United States — and anyone whose behavior is statistically unpredictable (Lesser et al., 1999; Steele, 1991). Data matching was regularly opposed by the U.S. Congress, and the U.S. government recognized that privacy concerns would prevent it from acting on its new definition of the enemy within the borders of the U.S.; that barrier held until the events of 11 September 2001, when the national security argument overcame all others. Today statistical profiling is prominent among the ways in which the U.S. government identifies citizens as targets of surveillance. Nor is the use of this technique restricted to adults potentially capable of terrorist action: In the United Kingdom, which has put itself forward as a test environment for security techniques being considered by other nation-states, software is being used to profile K-12 students considered to present the potential for socially disruptive behavior at some point in the future (Forrest, 2000).

Sentencing is another area in which computers are replacing human decision-making. There was speculation about the value of automating judicial reasoning in the 1970s (Schubert, 1975; Susskind, 1987), and in the 1980s experimentation began around the world. Theoretically, computerized sentencing can lighten the judicial load and thus help relieve backlogs, a matter of efficiency; and it can prevent idiosyncratic differences among judges from generating significant differences in response to the same type of offense across jurisdictions, a matter of equity. Results, however, have shown that computerizing sentencing yields an increase in neither certainty nor predictability (de Mulder and Gubby, 1983). Judges have proved resistant because analyses of case law show no systematic relationships between fact situations and sentences that could reliably provide a basis for computerized sentencing guidelines (Henham, 2000); not all sentencing takes place within the context of the kind of bounded discretion in which sentencing guidelines are appropriate (Zeleznikow, 2000); and its use undermines the role of sentencing as a form of narrative that helps resolve tensions between the general and the particular, between rules and unique circumstances, based on a judge's wisdom and expertise (Tata, 2000).

Computerizing the sentencing process inevitably changes its nature in a fundamental way. A judge responds to the behavior, demeanor, and context of the individual she or he sees in the courtroom and uses knowledge of the community involved — thick, qualitative analysis — to make sentencing decisions, while statistically based sentencing relies on only a few variables for thin, quantitative judgment. The actual individual involved in the case disappears, replaced by a mathematical calculation. Thus even when the intent is to mimic human decision-making processes via computerization, the effort to do so changes relationships among the law, the intentions of the law, and those whom the law governs. While the question of the applicability of expert systems to legal reasoning remains theoretically open, practitioners in the judiciary reject it.
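The thinness can be made concrete with a deliberately crude sketch; the variables, weights, and formula are invented for illustration and correspond to no actual guideline scheme.

    # Deliberately thin sketch of guideline-style sentencing:
    # the defendant is reduced to a few quantified fields.
    # Variables, weights, and formula are invented.

    def guideline_sentence_months(offense_level: int, prior_convictions: int,
                                  weapon_involved: bool) -> int:
        # Everything the scheme quantifies is here; nothing else can enter.
        # Demeanor, remorse, context, community: no field to occupy.
        months = 6 * offense_level + 4 * prior_convictions
        if weapon_involved:
            months += 12
        return months

    print(guideline_sentence_months(offense_level=4, prior_convictions=1,
                                    weapon_involved=False))  # 28

Whatever has no field in the function signature has no weight in the outcome; that is the structural difference between thick and thin judgment.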

Probably the earliest example of the use of computerized decision-making in the area of communications policy came in the 1980s, when the spectrum allocation function of the International Telecommunication Union (ITU) was first handled by software rather than people. This function must be repeated every few months in response to changing meteorological and technological conditions. The growing politicization of the ITU over the course of the 1970s and 1980s made the problem even more complex. Computerization offered the impression that a neutral and non-political solution to this political as well as technical problem could be achieved, but of course the software used to solve the problem had to be written by someone (who turned out to be from the U.S.) and run on someone's computers (they wound up being those of NASA, in Colorado). In the commercial world it has long been recognized that databases and programs are not neutral. The airline reservation system, one of the first databases to be widely used commercially, presents the flights of certain airlines first, making it most likely that those will be the flights booked. Some of the biases built into systems will be those of technological structure, while others will be those of information structure.

Supplanting Human Law

While first religious, and then legal, systems developed historically as means of structuring the social world, today the most critical structural processes may be those that result from software design and harmonization of information systems (Braman, 1993). Harmonization of systems takes place in three ways:

  1. Harmonization of the same type of information or communication system across nation-state borders, as when television delivered by satellite broadcast covers a number of nation-states and television programs are co-produced internationally;
  2. Harmonization of different types of information and communication systems with each other, as when the results of television audience analysis are linked with purchase records and just-in-time manufacturing and delivery systems — or harmonization of the databases of government agencies with quite diverse concerns; and,
  3. Harmonization of information and communication systems with other types of social systems, as in the collapse of the global financial system into the telecommunications network.

Harmonization may arise through multilateral or international political agreement (that is, agreement involving many or essentially all nation-states), as in the case of the agreements regarding transparency, national trade regulation, and intellectual property rights that are a part of international trade law. Harmonization can also arise as a result of technical decisions, for in order to optimize many of the possibilities of recent technological innovations there must be technical standardization across the network. Sometimes harmonization is achieved indirectly, as when decisions made within the General Agreement on Tariffs and Trade (GATT), the General Agreement on Trade in Services (GATS), and the World Trade Organization (WTO) make it easier for accounting firms to operate globally. Because accounting firms take with them information architectures and approaches to information processing, the systems they use in turn influence the shape of the organizations with which they work (including governments) and the regulatory systems within which those efforts take place.

Superseding Human Law

One of the oldest of human stories about machines is the 11th century Golem story, best known through its 19th century reincarnation as the inspiration for Frankenstein. The heart of the Golem stories is the desire of humans to create something to do their work for them. What makes efforts to do this both important and amusing is that inevitably people are not capable of providing direction to a machine in sufficient detail to prevent a disaster from occurring — in each case, repetitions of the desired activity become too much but cannot be stopped because of an error in the instructions. Thus a classic Golem story tells of the machinic creature told to fetch water from the river for a household, but not told when to stop, so that the house is ultimately flooded.
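The Golem's flaw translates directly into code: the instruction specifies the repetition but omits the stopping condition. A minimal sketch follows (a safety cap is added so the example actually terminates; the Golem of the story had none):

    # The Golem bug: the instruction says "fetch water" but never says
    # when to stop. A safety cap stands in for the flood so the demo ends.

    buckets_in_house = 0
    BUCKETS_NEEDED = 10   # what the householder meant but never said

    fetches = 0
    while True:           # the flawed instruction: no test against BUCKETS_NEEDED
        buckets_in_house += 1
        fetches += 1
        if fetches >= 1000:   # demo-only cap; the story has no such mercy
            break

    print(buckets_in_house)  # 1000 buckets: the house floods

The missing line, of course, is the one a careful instruction-giver would have written: repeat only while buckets_in_house < BUCKETS_NEEDED.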

The Golem story is useful in thinking about the effects of the supersession of human decision-making by autonomous machines. Golem-like effects were first seen in the impact of complexity. By the late 1980s phenomena such as the loss of long-distance telephone service for large portions of the U.S. were occurring as a result of the unforeseen and unforeseeable interactions among the many pieces of software involved. Since then, complexity has increasingly been incorporated into analyses of communication problems, but by definition this terrain marks a limit to human understanding.

Biotics is a relatively new area of experimentation that launches autonomously self-reproducing and -acting entities in the web simply in order to watch them interact with each other and develop over time. Many of those involved in this work describe those entities as life forms and their development processes as evolutionary. Because neither the ways in which these electronic life forms act nor how they develop are programmed in by the humans who originally launched them, the nature of the intelligence they evolve and display is non-human. Since experimentation with biotics is quite recent, it is not known how sophisticated such non-human electronic intelligences might become nor what they might do that would affect the human world through Internet-based activities as they become more sophisticated over time. Dyson (1997) notes that since the intelligence that appears within the network will not be human, it may already be there in forms we are unable to recognize.
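A minimal sketch of the biotics idea, far simpler than actual experiments and with every parameter invented: agents carry a numeric "behavior" trait that the programmer never sets after the start; they replicate with random mutation and are culled by a survival rule, so whatever behavior the population ends up displaying was evolved rather than specified.

    import random

    # Toy "biotics" sketch: self-replicating agents whose eventual behavior
    # is selected by the environment, not specified by the programmer.
    # All parameters are invented.

    random.seed(42)

    class Agent:
        def __init__(self, behavior: float):
            self.behavior = behavior  # abstract trait; never set by hand after t = 0

        def replicate(self) -> "Agent":
            # Offspring inherit the trait with random mutation.
            return Agent(self.behavior + random.gauss(0.0, 0.1))

    def step(population, capacity=100):
        population = population + [a.replicate() for a in population]
        # Survival rule: agents closer to an environmental optimum persist.
        population.sort(key=lambda a: abs(a.behavior - 1.0))
        return population[:capacity]

    pop = [Agent(0.0) for _ in range(10)]
    for _ in range(50):
        pop = step(pop)

    # Mean behavior drifts toward the optimum: an evolved outcome.
    print(sum(a.behavior for a in pop) / len(pop))

Even in this toy, the final trait value is a product of mutation and selection rather than of any line the programmer wrote, which is the sense in which the intelligence of such systems is non-human.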

The global information infrastructure is undeniably the largest single machine ever built, for it includes all of the computers linked to it, and all of the sensors linked to them, as well as the telecommunications network itself. The number of nodes and links in the network is on the verge of reaching — or may already have reached — the number of neurons and connections in the human brain. Some believe this "world brain" can be used to support real-time decision-making by humans for human purposes. Others, however, note that according to self-organizing systems theory (and as suggested by the biological metaphor), at some point the network may achieve awareness of itself, what is described as self-consciousness in humans. When that time comes, the network may choose to act independently of humans on its own behalf.

 

++++++++++

Information Policy as Posthuman Law

For good reason, public, scholarly, and policy-making discourse about information policy is today absorbed in how to respond to the qualitative changes in the nature of the environment being regulated. Most of that discourse focuses on relatively small questions raised by conflicts among details of regulatory and statutory law that are the result of decades of articulation of law and regulation for a world that no longer exists. The charge that policy analysis is irrelevant because the questions being asked are too small has already been leveled (Noam, 2002). One stage of enlarging the questions asked is to examine fundamental principles as they have historically been developed for application to different technologies, as a prelude to rethinking what those principles mean in a new technological context, and this work has begun. The context must be enlarged yet one more time, however. Technological decision-making — often for technologies rather than humans — is already beginning to replace policy made by humans for social ends. The theoretical, legal, and empirical implications of this must now also come onto the research and policy-making agendas.

 

About the Author

Sandra Braman has been doing research on the macro-level effects of the use of digital information technologies and their policy implications since the mid-1980s. She is Chair of the Communication Law and Policy Division of the International Communication Association, former book review editor of the Journal of Communication, and serves on the editorial boards of eight scholarly journals. Work currently being completed includes Change of State: An Introduction to Information Policy (MIT Press), Communication Researchers and Policy-makers: A Sourcebook (MIT Press), The Emergent Global Information Policy Regime (Palgrave Macmillan), and The Meta-technologies of Information: Biotechnology and Communication (Lawrence Erlbaum Associates). In 1997-1998 Braman designed and implemented the first graduate program ("postgraduate") in telecommunication and information policy on the African continent. She is Director of Graduate Studies and Associate Professor at the University of Wisconsin-Milwaukee.
Web: http://www.uwm.edu/~braman
E-mail: braman@uwm.edu

 

Notes

1. Sandra Braman, 1998. "Threats to the right to create: Cultural policy in the fourth stage of the information society," Gazette, volume 60, number 1, pp. 77-91. The Electronic Frontier Foundation Web site (www.eff.org) contains a great deal of information on current legal issues of this type.

2. Janet Abbate, Inventing the Internet (Cambridge, Mass.: MIT Press, 1999).

 

References

Janet Abbate, 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.

Bruce A. Ackerman, 1984. Reconstructing American law. Cambridge, Mass.: Harvard University Press.

Braden R. Allenby, 2000. "Environmental security: Concept and implementation," International Political Science Review, volume 21, number 1, pp. 5-23.

American Textile Manufacturers v Donovan, 452 US 490, 1981.

Robert Axelrod and Robert O. Keohane, 1985. "Achieving cooperation under anarchy: Strategies and institutions," World Politics, volume 38, number 1, pp. 226-254.

Baltimore Gas & Electric Co. v Natural Resources Defense Council (NRDC), 462 US 87, 1983; see also http://www.elr.info/litigation/vol13/13.20544.htm, accessed 4 December 2002.

Jean Baudrillard, 1983. Simulations. Translated by Paul Foss, Paul Patton, and Philip Beitchman. New York: Semiotext(e).

Stuart Biegel, 2001. Beyond our control? Confronting the limits of our legal system in the age of cyberspace. Cambridge, Mass.: MIT Press.

J. David Bolter, 1984. Turing's man: Western culture in the computer age. Durham: University of North Carolina Press.

Daniel J. Boorstin, 1992. The image: A guide to pseudo-events in America. New York: Vintage Books.

Daniel J. Boorstin, 1961. The image; or, What happened to the American dream. New York: Atheneum.

Sandra Braman, 1998. "Threats to the right to create: Cultural policy in the fourth stage of the information society," Gazette, volume 60, number 1, pp. 77-91.

Sandra Braman, 1993. "Harmonization of systems: The third stage of the information society," Journal of Communication, volume 43, number 3, pp. 133-140.

Sandra Braman, 1990. "The unique characteristics of information policy and their U.S. consequences," In: Virgil L.P. Blake and Renee Tjoumas (editors). Information literacies for the twenty-first century. Boston: G.K. Hall, pp. 47-77.

Capital Cities Cable v Crisp, 467 US 691, 1984.

James W. Carey, 1989. Communication as culture: Essays on media and society. Boston: Unwin Hyman.

Samidh Chakrabarti and Aaron Strauss, 2002. "Carnival booth: An algorithm for defeating the computer-assisted passenger screening system," First Monday, volume 7, number 10 (October), http://www.firstmonday.org/issues/issue7_10/chakrabarti/, accessed 4 December 2002.

Alfred D. Chandler, Jr., 1977. The visible hand: The managerial revolution in American business. Cambridge, Mass.: Belknap Press.

Alfred D. Chandler, Jr. and James W. Cortada (editors), 2000. A nation transformed by information: How information has shaped the United States from colonial times to the present. New York: Oxford University Press.

Andrew Clark and Kim Economides, 1991. "Computers, expert systems, and legal processes," In: Ajit Narayanan and Mervyn Bennun (editors). Law, computer science, and artificial intelligence. Norwood, N.J.: Ablex, pp. 3-32.

Cary Coglianese, 1992. "Legal rules and the costs of environmental litigation," paper presented at the Law and Society Association, Philadelphia (May).

Edward A. Comor (editor), 1996. The global political economy of communication: Hegemony, telecommunication and the information economy. New York: St. Martin's Press.

Jerome R. Corsi, 1984. Judicial politics: An introduction. Englewood Cliffs, N.J.: Prentice-Hall.

Critical Art Ensemble, 2001. Digital resistance: Explorations in tactical media. New York: Autonomedia.

Critical Art Ensemble, 1996. Electronic civil disobedience and other unpopular ideas. Brooklyn, N.Y.: Autonomedia.

Manuel De Landa, 1991. War in the age of intelligent machines. New York: Zone Books.

R.V. de Mulder and H.M. Gubby, 1983. "Legal decision making by computer: An experiment with sentencing," Computer Law Journal, volume 4, pp. 243-303.

William H. Dutton and Kenneth L. Kraemer, 1985. Modeling as negotiating: The political dynamics of computer models in the policy process. Norwood, N.J.: Ablex.

George Dyson, 1997. Darwin among the machines: The evolution of global intelligence. Reading, Mass.: Addison-Wesley.

Jacques Ellul, 1964. The technological society. Translated by John Wilkinson. New York: Knopf.

Jonathan L. Entin, 1992. "Numeracy, law, and dichotomy," paper presented at the Law and Society Association, Philadelphia (29 May).

Hans Magnus Enzensberger, 1992. Mediocrity & delusion: Collected diversions. Translated by Martin Chalmers. New York: Verso.

Federal Communications Commission (FCC) v WNCN Listeners Guild, 450 US 582, 1981.

B.D. Fisher, 1984. "Controlling government regulation: Cost-benefit analysis before and after the Cotton-Dust case," Administrative Law Review, volume 36, pp. 179-207.

Brett Forrest, 2000. "UltraViolence Predictor 1.0," Wired, volume 8, number 6 (June), p. 124.

Gregory D. Foster, 2001. "Environmental security: The search for strategic legitimacy," Armed Forces & Society, volume 27, number 3, pp. 373-396.

Lawrence M. Friedman, 1985. A history of American law. Second edition. New York: Simon & Schuster.

Johan Galtung, 1982. "Why do disarmament negotiations fail?" In: Radhakrishna and Mahendra Agrawal (editors). Arms and survival. New Delhi: Satvahan, pp. 218-227.

Jack Goody, 1977. The domestication of the savage mind. Cambridge: Cambridge University Press.

Chris Hables Gray (editor), 1995. The cyborg handbook. New York: Routledge.

Jürgen Habermas, 1989. Structural transformations of the public sphere: An inquiry into a category of bourgeois society. Translated by Thomas Burger with the assistance of Frederick Lawrence. Cambridge, Mass.: MIT Press.

Susan G. Hadden, 1989. "The future of expert systems in government," Journal of Policy Analysis & Management, volume 8, number 2, pp. 203-209.

W.A. Hamilton, 1983. "Computer-induced improvements in the administration of justice," Computer Law Journal, volume 4, pp. 55-76.

Donna Jeanne Haraway, 1991. Simians, cyborgs, and women: The reinvention of nature. New York: Routledge.

James R. Hargreaves, 1998. "PSJ&I and integrated justice," International Review of Law, Computers & Technology, volume 12, number 2, pp. 287-297.

Eric Alfred Havelock, 1963. Preface to Plato. Cambridge, Mass.: Harvard University Press.

David Held, 1989. Political theory and the modern state: Essays on state, power, and democracy. Stanford, Calif.: Stanford University Press.

David Held, 1980. Introduction to critical theory: Horkheimer to Habermas. Berkeley: University of California Press.

Ralph Henham, 2000. "On the philosophical and theoretical implications of Judicial Decision Support Systems," International Review of Law, Computers & Technology, volume 14, number 3, pp. 283-296.

Paul Henman and Michael Adler, 2001. "Information technology and transformations in social security policy and administration: A review," International Social Security Review, volume 54, number 4, pp. 23-47.

Brandon Hookway, 1999. Pandemonium: The rise of predatory locales in the postwar world. Princeton, N.J.: Princeton Architectural Press.

Morton J. Horwitz, 1992. The transformation of American law, 1870-1960. New York: Oxford University Press.

Lewis Hyde, 1983. The gift: Imagination and the erotic life of property. New York: Vintage Books.

Harold A. Innis, 1951. The bias of communication. Toronto: University of Toronto Press.

M. Ethan Katsh, 1989. The electronic media and the transformation of the law. New York: Oxford University Press.

M. Ethan Katsh and Janet Rifkin, 2001. Online dispute resolution: Resolving conflicts in cyberspace. San Francisco: Jossey-Bass.

Robert O. Keohane and Elinor Ostrom (editors), 1995. Local commons and global interdependence: Heterogeneity and cooperation in two domains. London: Sage.

Ian O. Lesser, David Ronfeldt and Michele Zanini, 1999. Countering the new terrorism. Santa Monica, Calif.: Rand.

Lawrence Lessig, 1999. Code and other laws of cyberspace. New York: Basic Books.

Wesley A. Magat and W. Kip Viscusi. 1992. Informational approaches to regulation. Cambridge, Mass.: MIT Press.

Robin Mansell and Roger Silverstone (editors), 1996. Communication by design: The politics of information and communication technologies. Oxford: Oxford University Press.

Marshall McLuhan, 1964. Understanding media: The extensions of man. New York: McGraw-Hill.

Joshua Meyrowitz, 1986. No sense of place: The impact of electronic media on social behavior. New York: Oxford University Press.

Lewis Mumford, 1934. Technics and civilization. New York: Harcourt, Brace.

Robert F. Nagel, 1989. Constitutional cultures: The mentality and consequences of judicial review. Berkeley: University of California Press.

Oskar Negt, 1978. "Mass media: Tools of domination or instruments of liberation: Aspects of the Frankfurt School's communication analysis," New German Critique, volume 14, pp. 61-80.

Michael Nicholson, 1987. "Misperceptions and satisficing in international conflict," In: Claudio Cioffi-Revilla, Richard Merritt, and Dina A. Zinnes (editors). Communication and interaction in global politics. Beverly Hills, Calif.: Sage, pp. 117-139.

Eli M. Noam, 2002. "A Report card for the policy analysis community after the dotcom bust," paper presented at the Telecommunications Policy Research Conference, Alexandria, Va. (29 September).

Walter J. Ong, 1982. Orality and literacy: The technologizing of the word. New York: Methuen.

Thomas E. Patterson, 1985. Toward new research on communication technologies and the democratic process. Aspen, Colo.: Aspen Institute.

Janaka Y. Rawanpura, Simaan M. AbouRizk, and Fernando Siri, 2001. "Implementation of computer-based planning and estimating tools for a public utility," Cost Engineering, volume 43, number 10, pp. 39-46.

Avital Ronell, 1989. The telephone book: Technology, schizophrenia, electric speech. Lincoln: University of Nebraska Press.

Richard Rose (editor), 1976. The dynamics of public policy: A comparative analysis. Beverly Hills, Calif.: Sage.

Kevin Ryan, 1992. "Law and the creation of deviance: The case of the drug courier profile," paper presented at the Law and Society Association, Philadelphia (May).

Pamela Samuelson, 1990. "Benson revisited: The case against patent protection and other computer-related inventions," Emory Law Journal, volume 39, pp. 1025-1054.

Alan G. Sawyer, John G. Lynch, Jr., and David Brinberg, 1995. "A Bayesian analysis of the information value of manipulation and confounding checks," Journal of Consumer Research, volume 21, number 4 (March), pp. 581-595.

Frederick F. Schauer, 1984. Playing by the rules: A philosophical examination of rule-based decision-making in law and life. Oxford: Clarendon Press.

Glendon A. Schubert, 1975. Human jurisprudence: Public law as political science. Honolulu: University Press of Hawaii.

Daniel B. Sibbet, 1990. "MASINT: Intelligence for the 1990s," American Intelligence Journal, volume 11, number 3, pp. 23-26.

Theda Skocpol, 1992. Protecting soldiers and mothers: The political origins of social policy in the United States. Cambridge, Mass.: Belknap Press of Harvard University Press.

Susan Sontag, 1977. On photography. New York: Farrar, Straus and Giroux.

Robert D. Steele, 1991. "Applying the 'new paradigm': How to avoid strategic intelligence failures in the future," American Intelligence Journal, volume 12, number 3, pp. 43-46.

Christopher D. Stone, 1974. Should trees have standing? Toward legal rights for natural objects. Los Altos, Calif.: W. Kaufmann.

Richard E. Susskind, 1987. "Some preliminary considerations concerning expert systems in law," Northern Kentucky Law Review, volume 14, number 2, pp. 211-235.

Cyrus Tata, 2000. "Resolute ambivalence: Why judiciaries do not institutionalize their decision support systems," International Review of Law, Computers & Technology, volume 14, number 3, pp. 297-316.

U.S. v Mendenhall, 446 US 544, 1980; also at http://www.druglibrary.org/schaffer/legal/l1980/mendenhall.htm, accessed 4 December 2002.

Martin Van Creveld, 1991. Technology and war: From 2000 BC to the present. Revised edition. New York: Free Press.

J.A. Wilson, 1986. "Methodologies as rules: Computer models and the APA," Columbia Journal of Law and Social Problems, volume 20, pp. 167-202.

John Zeleznikow, 2000. "Building Decision Support Systems in discretionary legal domains," International Review of Law, Computers & Technology, volume 14, number 3, pp. 341-356.


Editorial history

Paper received 23 October 2002; accepted 22 November 2002.



Copyright ©2002, First Monday

Copyright ©2002, Sandra Braman

Posthuman Law: Information Policy and the Machinic World by Sandra Braman
First Monday, volume 7, number 12 (December 2002),
URL: http://firstmonday.org/issues/issue7_12/braman/index.html