NOTE TO READERS:

This file contains a working DRAFT of the copyrighted thesis I am currently

preparing in partial fulfillment of requirements for the degree Master of Arts

in the Andover Newton Theological School. Some parts of the text remain

to be supplied.

TITLE: Experimental Theology, Using Artificially Intelligent Agents.

AUTHOR: Theodore Metzler

THESIS ADVISER: Professor Mark Heim

EXPECTED COMPLETION: December, 2001

 

 

 

 

INTRODUCTION

 

[Text to be supplied here]

 

 

CHAPTER I

AGENT-BASED COMPUTER SIMULATION TOOLS

 

Introduction

Prior to any assessment of possible applications in Christian theology for agent-based computer simulation tools (the concern of Chapter II), some introduction to the nature of these tools should be useful. Accordingly, this initial chapter will introduce them in terms of their system capabilities, agent capabilities, variables and user interfaces, review representative uses of agent-based simulation, and offer a brief evaluation of the technology.

Although Chapter II will identify ways in which current resources of agent-based computer simulation would need to be extended to be useful in the theological community, it will also become evident that Chapter I describes a pertinent and solid technological and scientific base for the extensions.

System Capabilities

As the branch of computer science known as artificial intelligence (AI) has expanded its ability to mimic intelligent behavior of biological creatures—particularly, humans—it has spawned a special area of research called distributed artificial intelligence (DAI). Within DAI research, investigation of multi-agent systems (MAS) has contributed to development of computer simulation tools that "distribute" capability for intelligent behavior among numbers of relatively autonomous—but interactive—agents. For the purposes of initial description, it should be acceptable to think of agents simply as software models of intelligent creatures that interact with simulated environments containing active models of other intelligent creatures. (A more comprehensive account of agents will presently be given.) The functional capabilities of such agent-based computer simulation systems typically are determined by the so-called "application areas" they serve. Existent applications already embrace a wide range of areas—such as entertainment and education—and some examples will be reviewed later in this chapter. One specific application area, however, seems to promise special relevance for the work of Christian theologians—viz., research in the social sciences. Accordingly, it should be useful at this point to become acquainted with some of the characteristic capabilities of this class of currently available agent-based computer simulation tools.

A particularly clear and concise introduction to this class of tools has been posted on the World Wide Web site maintained (at the present time of writing) by the Centre for Research on Simulation in the Social Sciences (CRESS) at the Department of Sociology, University of Surrey (Guildford, UK). Titled Computer Simulation of Societies, the site supplies links to information about "simulation toolkits" that support development of "multi-agent simulation" systems, as well as an Overview containing several important observations about the capabilities of such systems. First, it is pointed out that when experiments cannot be conducted with populations of live people (for practical and/or moral reasons), the tools make it "possible to build artificial societies of computational agents and carry out experiments under laboratory conditions, trying out different configurations and observing the consequences." Second, the authors suggest use of such computer simulations can inspire "new, process-oriented theories of society." Third, they observe that it is a common and difficult problem in social sciences to "clarify the relationship between large-scale, societal (or ‘macro’) phenomena and small-scale features observable at the level of the individual." In this latter case, it is noted the simulation tools often help researchers explore "the ‘emergence’ of macro properties from micro-level interactions." The manner in which typical agent-based computer simulation systems actually enable such functions might now be illuminated by a brief illustration of their characteristic use in conducting an experiment.

One might imagine that a social scientist has formed a hypothesis (about persons in a particular society) regarding the dependence of variables Y and Z upon levels of some independent variable, X. Working with a so-called user interface (not unlike those supplied for word processors and other software tools), the scientist "sets up" the conditions for a simulation experiment by specifying—among other things—initial levels of the independent variable, X, within several groups of individuals comprising the society to be simulated. For example, if agent capabilities in the simulation system equip the agents for learning, X might be a variable specifying their learning rates. The simulation is then "run" (i.e., its software components "execute"), allowing software agents representing individuals in the (human) society to interact through a dynamic flow of "discrete events" (e.g., sales transactions). In a properly prepared simulation system, the agents perform these transactions approximately as the humans they model would perform them if they each had the values of X specified by the scientist. During the simulation, the scientist might elect to monitor changing values of the dependent variables Y and Z (e.g., profit and number of sales transactions) through the use of graphical displays assembled and presented by the user interface. Even if results of the simulation fail to satisfy expectations of the scientist’s hypothesis, it may be the case that subsequent inspection of values of other variables will reveal unexpected dependencies—or, in all fairness, sometimes disclose flaws in the simulation tool itself or in the experimental design. It is important, in fact, to recognize that simulation tools of this class are not "magic bullets," mysteriously delivering results that instantly "kill" or "prove" hypotheses. Rather, they are interactive tools that facilitate the experimenter’s creative and disciplined exploration of relationships and processes plausibly occurring in the complex "real world" systems being investigated. As engineer-turned-sociologist Nigel Gilbert has observed, "Computer simulation is not just a new method to add to the social researcher’s armoury, but a new way of thinking about society, and especially social processes" [emphasis added] (Gilbert "Abstract"). Indeed, if an agent-based computer simulation tool does not help the experimenter effectively "think aloud" (in a virtual world), it fails to deliver what is arguably the most important system capability of tools in its class.
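The workflow just described can be made somewhat more concrete with a deliberately simplified sketch (written here in Python merely as a convenient notation; no particular toolkit is implied). The class names, parameter values and the toy "market" itself are illustrative assumptions of the present author, not features of any existing system:

    import random

    class Trader:
        """A minimal software agent: it 'learns' a price expectation at a given rate."""
        def __init__(self, learning_rate):
            self.learning_rate = learning_rate   # independent variable X
            self.expected_price = 10.0
            self.profit = 0.0                    # dependent variable Y
            self.sales = 0                       # dependent variable Z

        def trade(self, market_price):
            # A discrete event: sell if the market price meets this agent's expectation.
            if market_price >= self.expected_price:
                self.profit += market_price - self.expected_price
                self.sales += 1
            # Adjust the expectation toward the observed price at the agent's learning rate.
            self.expected_price += self.learning_rate * (market_price - self.expected_price)

    def run_experiment(learning_rate, n_agents=50, n_events=200, seed=0):
        """Set up, run, and summarize one simulation under a chosen value of X."""
        random.seed(seed)
        society = [Trader(learning_rate) for _ in range(n_agents)]
        for _ in range(n_events):
            price = random.gauss(10.0, 2.0)      # stochastic market price per event
            for agent in society:
                agent.trade(price)
        mean_profit = sum(a.profit for a in society) / n_agents
        mean_sales = sum(a.sales for a in society) / n_agents
        return mean_profit, mean_sales

    # "What if" comparison across settings of the independent variable:
    for x in (0.05, 0.2, 0.8):
        print(x, run_experiment(x))

Re-running such an experiment under different settings of the independent variable, and inspecting the recorded outcomes, is precisely the kind of disciplined "what if" exploration described above.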

Agent Capabilities

The promised "more comprehensive account of agents" must now be presented, for much of the following discussion will assume a richer understanding of "agent" than the foregoing "initial" definition supplies. Although the class of agent-based simulation tools already described (particularly, those serving social science research) certainly does employ agents one may characterize as "software models of intelligent creatures that interact with simulated environments," the software models in some systems are considerably more sophisticated than they are in others. In the simpler cases, one encounters "fine grained agents" which, as Nicholas Avouris observes, are agents "with reactive behaviour, i.e. agents with no complex reasoning capabilities, not owning representations of themselves, other agents or the environment in which they exist" (145). Agents of this kind have been used, for example, to simulate behavior within populations of fairly simple organisms such as ants (Avouris 146). Toward the other end of the spectrum of complexity, one finds systems with so-called "coarse grained" agents that incorporate more impressive capabilities. In their contribution to "Socially Intelligent Agents," a Technical Report of the American Association for Artificial Intelligence (AAAI), Petra Funk and Jürgen Lind offer definitions of coarse grained agents that they consider "more appropriate for agents in a multi-agent system" (47). The agents they describe are undoubtedly more appropriate as models of humans, for they exhibit a broad repertoire of the capabilities we like to believe are best represented by our species. They learn, deliberate, behave proactively and reactively, possess knowledge about other agents, cooperate with other agents, act autonomously on behalf of other agents, communicate with other agents and incorporate some form of self-awareness (47). Members of the family of coarse grained agents, then, appear to display at least the basic kinds of agent capabilities potentially of most interest to Christian theologians. It should be worthwhile at this point, therefore, to examine them somewhat more closely.
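For readers who find code more suggestive than prose, the capability list just paraphrased might be organized along the following lines. This is only a skeletal sketch by the present author—the class name, method names and internal data structures are assumptions, not the formal definitions Funk and Lind give:

    class CoarseGrainedAgent:
        """Skeleton of a 'coarse grained' agent. Each method marks one capability
        from the list paraphrased above; the bodies are intentionally left empty."""

        def __init__(self, name):
            self.name = name
            self.acquaintances = {}   # knowledge about other agents
            self.self_model = {}      # some form of self-awareness
            self.goals = []           # the basis of proactive behavior

        def react(self, event):
            """Reactive behavior: respond to an event in the environment."""

        def deliberate(self):
            """Weigh goals and plans before committing to an action."""

        def learn(self, experience):
            """Update internal models (of self, others, environment) from experience."""

        def communicate(self, other, message):
            """Exchange messages with another agent."""

        def cooperate(self, others, task):
            """Act autonomously on behalf of, or jointly with, other agents."""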

Software agents can be equipped to learn by using a number of existent technical approaches. Thomas Haynes and Sandip Sen (University of Tulsa) describe one example of these approaches in their 1996 paper, "Learning Cases to Resolve Conflicts and Improve Group Behavior." Adapting a fairly mature AI technique known as case-based reasoning (CBR), the authors prepared agents for conducting standard DAI experiments in which several "predators" must work together to capture a "prey" agent. Results of the experiments displayed significant improvement in performance of agent groups through use of their "multiagent case-based learning" (MCBL) technique, which incorporated the following technical approach:

Agents find out through interacting with other agents that their behavior is not appropriate in certain situations. In those situations, they learn exceptions to their behavioral rules. They still follow their behavioral rules except when a learned case guides them to act otherwise. (47)
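In outline, the kind of exception learning just quoted might be rendered as follows; the rule and case representations below are crude simplifications by the present author and do not reproduce Haynes and Sen's actual encoding:

    class CaseLearningAgent:
        """Follows a default behavioral rule, except where a learned case says otherwise."""

        def __init__(self):
            self.exception_cases = {}   # situation -> corrective action, learned from conflicts

        def default_rule(self, situation):
            # Default behavior, e.g. a predator moving straight toward the prey.
            return "move_toward_prey"

        def act(self, situation):
            # A learned case overrides the rule only in the specific situations
            # where following the rule previously led to conflict.
            return self.exception_cases.get(situation, self.default_rule(situation))

        def learn_exception(self, situation, better_action):
            # Called when interaction with other agents shows the default was inappropriate.
            self.exception_cases[situation] = better_action

    agent = CaseLearningAgent()
    agent.learn_exception("square_occupied_by_teammate", "move_around")
    print(agent.act("open_field"))                     # -> move_toward_prey
    print(agent.act("square_occupied_by_teammate"))    # -> move_around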

With their 1998 book, Reinforcement Learning: An Introduction, Richard Sutton and Andrew Barto (University of Massachusetts at Amherst) represent another approach to learning that has become a research area in itself. The basic concept of reinforcement learning, as the authors explain in the Preface of their book, is "simply the idea of a learning system that wants something, that adapts its behavior in order to maximize a special signal from its environment." Much in the manner of a human, then, the learning system tries a variety of behaviors until it discovers how it can obtain some "reward" that it values. This approach differs from so-called "supervised training" techniques (e.g., backpropagation training of artificial neural networks) in which the learning system is supplied (during learning sessions) with "correct" responses to given inputs—the system employing reinforcement learning seeks, instead, to maximize long-term measures of reward evoked by its responses. Although this approach has generated substantial active research in AI and a number of related areas (such as control theory and operations research), its specific application to agents has also been demonstrated (for example, by Crites and Barto). Inasmuch as the method also suggests possibilities of modeling "grace-like loops" of reward and learning, it will—not surprisingly—be discussed again in Chapter II.
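A minimal tabular sketch may help convey the idea: the agent below is never told the "correct" action, but gradually comes to prefer whichever action its environment tends to reward. The toy environment, the action names and the numerical settings are all assumptions made purely for illustration:

    import random

    # Toy environment: two actions, one of which is rewarded more often.
    def reward(action):
        return 1.0 if (action == "generous" and random.random() < 0.8) else 0.0

    values = {"generous": 0.0, "hoard": 0.0}   # the agent's estimate of each action's worth
    alpha, epsilon = 0.1, 0.1                  # learning rate and exploration rate

    random.seed(1)
    for step in range(1000):
        # Explore occasionally; otherwise exploit the currently best-valued action.
        if random.random() < epsilon:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        r = reward(action)
        # Move the estimate toward the observed reward (no "correct answer" is ever supplied).
        values[action] += alpha * (r - values[action])

    print(values)   # the agent has learned to value the action its environment rewards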

However they are implemented, learning capabilities of agents typically serve some other capabilities that warrant mention. Nick Jennings, in his 1994 book titled Cooperation in Industrial Multi-Agent Systems, offers a diagram of main components of an agent that includes modules commonly associated with two of these capabilities. In particular, he shows an "acquaintance model" and a "self model" (32)—both of which represent kinds of software models a coarse grained agent normally constructs for itself in the course of simulation executions, using its learning capabilities. An agent may learn distinct acquaintance models for each of the other agents it encounters (knows through "acquaintance") during a simulation. The value of such models to the agent is succinctly described by Castelfranchi, et al., in their 1997 paper titled "Social Attitudes and Personalities in Agents": "In order to be really helpful and, in general, to better interact with other agents, an agent needs some representation of the other agent’s goals, intentions and know-how" (20). As the authors immediately proceed to explain, an agent might construct these useful models of other agents from such sources as memory of interactions with them, their reputations, and their "self-presentation." In some cases, models might even be drawn logically from bits of evidence supplied by behavior of the other agents (20). All of these sources should seem familiar—after all, they correspond to common methods we humans also use to form our "acquaintance models" of other persons. (Since Christian theologians might readily consider extending the meaning of "other persons," in this context, to include a personal God, it is clear this is another topic that will be revisited in Chapter II.)
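A schematic rendering of such acquaintance models (together with the rudiments of a "self model," taken up next) might look as follows. The data structures and update rules are merely the present author's assumptions, chosen to echo the sources Castelfranchi et al. list:

    class SocialAgent:
        """Keeps a simple acquaintance model for every agent it has met, built from
        memory of past interactions, reputation reports, and others' self-presentation."""

        def __init__(self, name):
            self.name = name
            self.acquaintance_models = {}   # other's name -> dict of believed traits
            self.self_model = {"name": name, "history": []}

        def _model_of(self, other_name):
            return self.acquaintance_models.setdefault(
                other_name, {"interactions": 0, "trust": 0.5, "stated_goals": []})

        def remember_interaction(self, other_name, cooperative):
            model = self._model_of(other_name)
            model["interactions"] += 1
            # Trust drifts up after cooperative encounters, down after uncooperative ones.
            model["trust"] += 0.1 if cooperative else -0.1
            self.self_model["history"].append((other_name, cooperative))

        def hear_reputation(self, other_name, reported_trust):
            model = self._model_of(other_name)
            # Blend second-hand reputation with first-hand experience.
            model["trust"] = 0.7 * model["trust"] + 0.3 * reported_trust

        def receive_self_presentation(self, other_name, goals):
            self._model_of(other_name)["stated_goals"] = list(goals)

    a = SocialAgent("A")
    a.remember_interaction("B", cooperative=True)
    a.hear_reputation("B", reported_trust=0.9)
    a.receive_self_presentation("B", goals=["sell grain"])
    print(a.acquaintance_models["B"])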

We humans apparently form "self models" as well. Neurologist Antonio Damasio has argued our brains construct neural patterns he collectively designates the "autobiographical self," which he defines in the following terms:

The autobiographical self is based on autobiographical memory which is constituted by implicit memories of multiple instances of individual experience of the past and of the anticipated future. The invariant aspects of an individual’s biography form the basis for autobiographical memory. Autobiographical memory grows continuously with life experience but can be partly remodeled to reflect new experiences. (174)

Not surprisingly, one finds clear echoes of Damasio’s views in the AI literature of multi-agent systems. Kerstin Dautenhahn, for example, has offered the following parallel kinds of observations in her 1997 paper, "Ants don’t have Friends – Thoughts on Socially Intelligent Agents":

Humans are autobiographic agents. The way how humans remember and understand the world seems to be consist [sic] of constructive remembering and re-collection processes in a life-long perspective, i.e. referring to the autobiographic aspect as an ongoing re-construction of the own history and creating the concept of individual personality. (24)

Such, at least, are some of the kinds of elements that can compose what Jennings (reflecting common terminology in multi-agent research) has called the "self model," formed internally by software agents.

Understandably, any artificially intelligent, coarse grained software agent that has been equipped with capabilities to learn and to form models of other agents (and of itself) will—ipso facto—have a respectable basis for a number of additional capabilities dealing with social behavior. Researchers in AI have explored a broad range of such behaviors. The Multi-Agent Systems Laboratory directed by Victor Lesser (University of Massachusetts at Amherst) illustrates this range, presenting a list of current research areas on its web site that includes the following: "Generic Coordination Strategies for Agents," "Automated Contracting," "Coalition Formation," "Cooperation Among Heterogeneous Agents (TEAM)," "Negotiation among Computationally Bounded Self-interested Agents," and "Cooperative Information Gathering." Two brief examples of work in this field, however, may help illuminate its relations with agent modeling.

In his contribution to the 1996 AAAI Technical Report titled Agent Modeling, Amol Mali has proposed use of "social laws" in multi-agent systems, which "allow agents to develop models of potential interactions among their actions and those of other agents" (53). For a "society" of robots engaged in painting cans, Mali illustrates his concept of a social law with the (commendable) imperative, "do not grab an unpainted can that is being painted" (54). In this case, the recommended capability progression is generally from agent modeling to representation of allowable social interactions among the agents modeled. Bruce Edmonds introduces a somewhat different progression in his 1997 paper, "Modelling Socially Intelligent Agents in Organisations." According to Edmonds, "agents must be able to distinguish, identify, model and address other agents, either individually or in groups" [emphasis added] (37). This capability to interact with other agents as members of groups is a reasonable extension to more primitive agent modeling that Edmonds assumes throughout his paper.
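One simple way a "social law" of Mali's kind might be encoded is as a predicate that filters an agent's candidate actions. The representation below is an illustrative assumption of the present author, not Mali's formalism:

    # One possible encoding of a "social law": a predicate that vetoes actions
    # which would interfere with actions already undertaken by other agents.
    def violates_social_law(action, world_state):
        # Mali's example for can-painting robots: do not grab an unpainted can
        # that another robot is currently painting.
        return (action["verb"] == "grab"
                and world_state["being_painted"].get(action["object"], False))

    def lawful_actions(candidate_actions, world_state):
        return [a for a in candidate_actions if not violates_social_law(a, world_state)]

    world = {"being_painted": {"can_7": True, "can_8": False}}
    candidates = [{"verb": "grab", "object": "can_7"},
                  {"verb": "grab", "object": "can_8"}]
    print(lawful_actions(candidates, world))   # only the grab of can_8 survives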

Variables

At the level of software implementation, all agent-based computer simulation tools employ numerous named factors that receive changing values—i.e., they rely upon many so-called "variables." Only some of these factors are normally of interest at the level of users’ concerns, however, and the present discussion will focus even more specifically upon illustrations of variables in simulation systems that do (or could) serve existent applications in social science research.

One class of variables one should expect readily to find in software agents that model people is a class describing human personalities. Indeed, D. Christopher Dryer (IBM’s Almaden Research Center) has argued that "while there are thousands of different facets to personalities, there [are] only a few things that really matter at a more abstract level" (32). The "five important factors" he proceeds to identify furnish clear examples of variables (and their appropriate kinds of values) suitable for software agents representing human individuals in social science applications:

(1) Agreeable (cooperative to competitive);

(2) Extroverted (outgoing to withdrawn);

(3) Neurotic (anxious to calm);

(4) Conscientious (organized to lax);

(5) Open (curious to closed-minded). (32)

Dryer urges that although "there are other things to know about a partner, […] nearly all of them covary with one of these five things or some combination of them" (32).
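As a brief illustration, the five factors could appear in an agent as a small record of variables assigned at simulation "set-up" time. The numeric scaling from 0.0 to 1.0 used below is an assumption of this sketch, not part of Dryer's account:

    from dataclasses import dataclass

    @dataclass
    class Personality:
        """Dryer's five factors, each scaled here from 0.0 to 1.0."""
        agreeable: float       # cooperative (1.0) to competitive (0.0)
        extroverted: float     # outgoing (1.0) to withdrawn (0.0)
        neurotic: float        # anxious (1.0) to calm (0.0)
        conscientious: float   # organized (1.0) to lax (0.0)
        open: float            # curious (1.0) to closed-minded (0.0)

    # An experimenter could assign such values during simulation "set-up":
    alice = Personality(agreeable=0.9, extroverted=0.4, neurotic=0.2,
                        conscientious=0.8, open=0.7)
    print(alice)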

Another class of variables one might naturally anticipate in simulations of human social behavior is a class representing emotions. Alexander Staller and Paolo Petta (Austrian Research Institute for Artificial Intelligence) provide a fairly recent (January 2001) report of work in this area with their contribution to the online publication, Journal of Artificial Societies and Social Simulation. Using a process representation of such emotions as guilt, shame, contempt and anger (par. 2.1), the authors have added them to software agents for the purpose of exploring interrelations between emotions and social norms. Some plausible relevance of their concerns to the domain of Christian theology is evident in the following representative observations:

Social norms are not only sustained by the sanctions of others, but also by emotions. The violation of a social norm can trigger negative emotions such as shame or guilt in the norm violator, even if nobody can observe the norm violation. (par. 1.3)
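A toy sketch of this idea—emotion variables that respond to an agent's own norm violations even when unobserved—might take the following form. The particular norms, emotions and increments are illustrative assumptions only, not Staller and Petta's process model:

    class NormSensitiveAgent:
        """Carries emotion variables that respond to social-norm violations:
        guilt for one's own violations, anger at the violations of others."""

        def __init__(self, norms):
            self.norms = set(norms)                       # e.g. {"keep_promises"}
            self.emotions = {"guilt": 0.0, "anger": 0.0}

        def violate(self, norm, observed_by_others):
            """Returns True if the violation is open to external sanction."""
            if norm in self.norms:
                # Guilt is an internal sanction: it rises whether or not anyone saw.
                self.emotions["guilt"] += 0.5
            # External sanctions by other agents are possible only for observed acts.
            return observed_by_others

        def witness_violation(self, norm):
            if norm in self.norms:
                # Anger at another's violation is what sustains external sanctions.
                self.emotions["anger"] += 0.5

    a = NormSensitiveAgent(norms=["keep_promises"])
    a.violate("keep_promises", observed_by_others=False)
    print(a.emotions)   # guilt has risen although nobody observed the violation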

Other variable properties of agents promising at least as much relevance to the interests of theologians have appeared in AI research. For example, the degree to which an agent constrains pursuit of its own goals by consideration of the needs of other agents has repeatedly received attention. Nick Jennings illustrates his sustained research concern with this variable in a 1997 paper, co-authored with L. M. Hogg, titled "Socially Rational Agents." For design of multi-agent systems, the authors recommend a "Principle of Social Rationality," according to which an agent "can perform an action whose joint benefit is greater than its joint loss" (61). Their concept of "joint benefit," in turn, introduces "a combined measure which incorporates the benefit provided to the individual and the benefit afforded to the overall system as a result of an action" (61). The function they define for expressing this measure balances "individual utility" against "social utility," thereby determining the degree to which an agent is either "selfish" (placing "more emphasis on its individual utility") or "altruistic" (placing "greater emphasis on its social utility") (62). Needless to say, the authors describe a technical approach that invites comparison with some teachings in the New Testament—and Chapter II will not neglect the invitation.
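The balance the authors describe lends itself readily to a small worked example. The linear weighting used below is the present author's assumption, introduced only to show how a single "selfishness" parameter could move an agent between the selfish and altruistic poles:

    def joint_benefit(individual_utility, social_utility, selfishness):
        """One way to combine the two utilities Hogg and Jennings distinguish.
        selfishness = 1.0 gives a purely 'selfish' agent, 0.0 a purely 'altruistic' one."""
        return selfishness * individual_utility + (1.0 - selfishness) * social_utility

    def socially_rational_choice(actions, selfishness):
        # Each action is (name, benefit to the individual, benefit to the overall system).
        return max(actions, key=lambda a: joint_benefit(a[1], a[2], selfishness))[0]

    actions = [("hoard_resource", 5.0, -2.0), ("share_resource", 1.0, 4.0)]
    print(socially_rational_choice(actions, selfishness=0.9))   # hoard_resource
    print(socially_rational_choice(actions, selfishness=0.3))   # share_resource

Under a high selfishness setting the same agent hoards; under a low one it shares—exactly the kind of experimenter-controlled variable the present discussion has in view.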

Again, it is pertinent to observe the variable that was the focus of research at a 1999 AAAI Spring Symposium titled "Agents with Adjustable Autonomy." Autonomy can be understood as independence or freedom of the will (and "adjustable" freedom of the will does seem to suggest a variable agent property that might also be useful for simulation systems serving theologians). In any event, the Abstract of the AAAI Technical Report from that symposium provides the following summary of its subject:

The adjustable autonomy concept includes the ability for humans to adjust the autonomy of agents, for agents to adjust their own autonomy, and for a group of agents to adjust the autonomy relationships within the group. ("Agents" Abstract)

For purposes of subsequent discussion, special attention is directed to the foregoing notion of agents who may "adjust their own autonomy."
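A minimal sketch of an agent with an explicit, adjustable autonomy variable—including the possibility of the agent lowering that variable itself—might look as follows. The thresholds and adjustments are purely illustrative assumptions:

    class AdjustableAutonomyAgent:
        """Carries an explicit autonomy level: above a threshold it decides alone,
        below it defers the decision to a human (or to another agent)."""

        def __init__(self, autonomy=0.8):
            self.autonomy = autonomy          # 0.0 = always defer, 1.0 = fully autonomous

        def set_autonomy(self, level):
            """A human supervisor, the group, or the agent itself adjusts autonomy."""
            self.autonomy = max(0.0, min(1.0, level))

        def decide(self, situation_risk):
            # The agent may adjust its own autonomy, e.g. lowering it when it
            # judges a situation too risky to decide unaided.
            if situation_risk > 0.7:
                self.set_autonomy(self.autonomy - 0.2)
            if self.autonomy >= 0.5:
                return "act_independently"
            return "defer_to_human"

    agent = AdjustableAutonomyAgent(autonomy=0.6)
    print(agent.decide(situation_risk=0.9))   # agent lowers its own autonomy and defers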

 

User Interfaces

There is probably a sense in which nearly all computer software can be regarded as serving "user interface" purposes. Users of the earliest general-purpose electronic computer, the ENIAC, "programmed" it to solve specific problems by manually altering its physical circuitry on so-called "patchboards"—a tedious and time-consuming process. The entire structure of software languages, compilers, etc., that has evolved in subsequent decades may be viewed as a convenient (and welcome) "bridge" for mapping more natural expressions of user concerns (e.g., diagrams and statements in natural languages) onto the incredibly complex flows of hardware events in our machines. Indeed, artificial intelligence research has begun to contribute interfaces that allow us simply to talk with our computers nearly as comfortably as we do with other humans. Clearly, resources serving this general "user interface" function are no less valuable and important in agent-based simulation systems than they are for many other computer applications.

Not surprisingly, therefore, the subject has received attention in artificial intelligence research. Nicholas Avouris, for example, contributes a full chapter of Distributed Artificial Intelligence: Theory and Praxis to user interface design for DAI applications (such as "multi-agent simulators") especially requiring "user interaction" (141). The level of user interaction, he observes, is typically more intense in "coarse grain" systems employing "complex agents," for "user interaction and understanding depends equally on the reasoning of the individual agents and on their cooperation" (142). In the foregoing discussion of agents it has been noted that the more complex "coarse grained" agents tend to display "basic kinds of agent capabilities potentially of most interest to Christian theologians." Evidently, then, it is an implication of Avouris’ observations that design of user interfaces should be particularly important in any applications of these systems to theology.

One design feature that should be significant for such applications (and has already been addressed in DAI research) involves the capability of software agents to supply human users with explanations of their behavior. As Avouris points out, "In multi-agent systems single node reasoning and cooperative behaviour have to be explained" (144). Software resources allowing single agents to explain their (rule-based) reasoning have been commonly available for decades in AI technology. For the more difficult task of explaining collective (sometimes cooperative) behavior of multiple agents, Avouris recommends provision of a "dedicated node that builds this distributed explanation" (144).

In addition to reporting to users with displays and explanations, good interfaces need to support a variety of "authoring" tasks that allow users to define the data elements and controls they desire for each simulation. Within the entertainment domain, a number of games already furnish supports of this kind, and some review of them will presently be furnished. To serve educational and research applications as well, Alexander Repenning and his colleagues (with funding assistance from the National Science Foundation) have developed an interestingly versatile system answering the requirements. They characterize their product, AgentSheets, as "an agent-based simulation-authoring tool for end-users" (Repenning "Abstract"). End-users who have actually defined and executed simulations with AgentSheets include high school students—demonstrating that agent-based computer simulation tools (with appropriate user interfaces) need not be confined to scientific laboratories. Moreover, AgentSheets exhibits capabilities that warrant some mention at this point, anticipating Chapter II consideration of possible simulation applications in Christian theology.

First, the kinds of simulations the high school students were able to construct represented historical events associated with "protest movements and efforts at social or political reform"—e.g., the California Grape Boycott led by Cesar Chavez and the Montgomery Bus Boycott (par. 5.5). One might reasonably suppose a user interface permitting teenagers to define and simulate events of this sort could be adapted in ways of interest to theologians. Second, AgentSheets enables users to constrain behaviors of agents through prescription of rules—or, by merely supplying "Analogous Examples" (par. 2.5). That is to say, users may also "create new behaviors by defining analogical relationships between agents," such as "cars move on streets like trains move on tracks" (par. 2.5). A user interface capable of shaping agent behavior with analogies of this kind readily invites speculation about accommodating more abstract teachings (e.g., "Again, the kingdom of heaven is like a merchant in search of fine pearls; on finding one pearl of great value, he went and sold all that he had and bought it" (Matthew 13.45-46)).
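The analogy mechanism can be suggested, very schematically, by the following fragment. It is not AgentSheets' actual implementation—only the present author's sketch of the idea of copying a known (agent, substrate) behavior to a new pair:

    # Known behavior: trains move along tracks.
    behaviors = {("train", "track"): "move_along"}

    def define_by_analogy(new_pair, known_pair):
        """'Cars move on streets like trains move on tracks': copy the behavior
        associated with the known (agent, substrate) pair to the new pair."""
        behaviors[new_pair] = behaviors[known_pair]

    define_by_analogy(("car", "street"), ("train", "track"))
    print(behaviors[("car", "street")])   # -> move_along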

Uses of Agent-based Simulation

An overview of agent-based computer simulation systems has been presented in terms of system capabilities, agent capabilities, variables and user interfaces. Some additional review of how systems of this kind actually have been used should complete a balanced introductory account of their nature.

One area for extensive application of these systems has been the training of armed forces of the United States (and other countries). Military training often calls for large-scale exercises involving personnel and equipment at installations distributed over sizable geographical areas. Although communication networks can affordably link all participants in these exercises, costs of performing them with realistic numbers of live personnel are frequently prohibitive. To relieve this problem, a technology has been developed to simulate the behavior of real personnel and weapons platforms by substituting computer software entities comprising so-called "computer-generated forces" (CGF). In real-time computerized map displays of battlefield exercise activity, for example, some of the "tank" icons may represent actual tanks, while others are just (artificially intelligent) CGF agents that behave (singly and in groups) approximately like actual tanks. (Having personally worked in this technology, the present author must modestly write "approximately," for he has witnessed such occasional embarrassments as platoons of CGF tanks that were "smart" enough to use a bridge to cross a river—but not quite smart enough to know they could not all enter the bridge at the same time. Introduction of more sophisticated AI methods, however, continues to improve agent performance.) In any event, CGF systems have already made useful simulation contributions to large military training exercises, and have even served—as described, for example, by Metzler and Nordyke (LB&M)—to support testing in other AI development projects.

The popularity of PC-executable computer games involving simulations of "agent" behaviors is well known. As pointed out in mid-2001 by John Laird and Michael van Lent (The University of Michigan), however, "current emphasis in computer game AI is on the illusion of humanlike behavior for limited situations," encouraging software engineering techniques such as use of "big C functions or finite-state machines" (17). Although these are methods not fully representative of the class of agent-based simulation systems being addressed in the present essay, the authors add "we expect that game developers will be forced into more and more realistic modeling of human characters" (17). Moreover, existent entertainment applications of this kind present at least two features of potential significance for possible theological applications. First, they typically incorporate flexible and convenient user interface features resembling those of AgentSheets, noted previously. In fact, Alexander Repenning and his colleagues explicitly recognize that "games such as SimCityTM" illustrate systems serving "end-user programming needs" (par. 1.2). Second, Laird and van Lent note the appearance of what they call "God games," which "give the player godlike control over a simulated world" (21). Indeed, a striking example of this genre—promised for release in April, 2001—is the game known as "Black & White." On a pertinent web site, pcgame.com, Black & White is described in the following terms:

You play the role of a deity in a land where the surroundings are yours to shape and its people are yours to lord over. Be an evil, malevolent god or play as a kind, benevolent god … your actions decide whether you create a heaven or hell for your worshipers. ("Overview")

David Smith, in his online review of this curious "game," explains that one rules the people of the simulated land "by either solving their problems for them or squashing them when they hassle you." Whatever one might think of the "entertainment" value of such a simulation system, it represents the current state of the art in its application domain, and it illustrates a characteristic limitation of such "God games" that will be further discussed in Chapter II. Specifically, one may notice that interactions between the "deity" and (simulated) humans uniformly involve "external" relationships. Although the user of each game enjoys "godlike" powers to influence simulated people by issuing thunderbolts and the like, there is no mention of any more subtle access to their "internal" cognitive and affective structures.

Applications of agent-based computer simulation in education and training have already received some attention in the foregoing discussion (e.g., high school students building simulations of historical events representing "social or political reform" and military forces training with the assistance of CGF systems). Another interesting illustration from this application area is a commercial software product, EcoBeaker 2.0. Used in universities since 1996, this agent-based instructional tool is described on its web site by the producer, BeakerWare, as a "program for teaching ecology, conservation biology, and now evolutionary biology." The feature of this product that is particularly interesting in the present context is its ability to simulate individual elements of our natural environment, for one should expect theological adaptations of such technology to be concerned with ecological issues, as well. BeakerWare explains that the tool "lets students see population and community level dynamics on screen while performing the kinds of experiments done by practicing ecologists."

 

In fact, as a preamble to consideration of research applications, it is helpful to observe how broadly "agent" is actually construed in existent agent-based simulation systems. Thus far in the discussion, "agent" has served in descriptions of systems simulating processes of interaction among humans, the natural world and God (albeit, with human role-playing required in the latter case). Craig Reynolds (Reynolds Engineering & Design) maintains a web site on which he introduces the expression "Individual-Based Models" to describe systems in which "individuals might represent plants and animals in ecosystems, vehicles in traffic, people in crowds, or autonomous characters in animation and games." Although it is rather a nonstandard neologism, Reynolds’ "individual" does manage admirably to capture the broad range of applications for "agent" that actual practice reflects (and the present essay assumes). Possible extensions of that range to include more serious religious understandings of God than current computer games countenance will, of course, receive attention in Chapter II.

The previously-mentioned web site by Craig Reynolds also happens to contain an impressive list of hyperlinks to sites reporting various "applications of individual-based models" that principally involve research enterprises. Even when the list has been edited (as in the following—removing all of the specific links and some of the sub-headings), its remarkable scope of topic areas offers an instructive overview of the breadth of this technology:

Ecology and Biology:

Mixed ecosystems

Fish

Mammals

Birds

Insects

Forests

[…]

Modeling Humans (and Artificial Societies)

Human Crowds: motion and psychology

Anthropology

Artificial Societies

Sociology

Interpersonal Communication

Emotion

[…]

Economics

Traffic and vehicle simulations

[…] (Reynolds "Individual-Based Models: an annotated list of links")

Considered at the more specific level of particular research reports, this technology also displays substantial promise for applications in Christian theology. Ana Bazzan, et al., in their 1997 paper, "Agents with Moral Sentiments in an Iterated Prisoner’s Dilemma Exercise," provide a clear illustration of the promise. The simulation experiments they report employed "agents with moral sentiments (altruistic agents)" as well as "rational fools (egoistic agents)"—all of whom could recognize the group to which other agents belonged (5). Involved in simulated games that could reward or punish trusting behavior with "points" (according to behavioral rules including consideration of group memberships), the agents were divided into three groups (all egoists, all altruists or equally mixed). Some of the behavioral rules modeled possession of altruistic moral sentiments regarding relative wealth of the agents. Finding that the results demonstrated a clear point advantage for "homogeneous groups of altruistic agents" (5), the authors conclude that "in a society where agents have emotions, to behave rationally (in the classical sense in Game Theory) may not be the best attitude in the long run" (6). One might reasonably suppose experimental work of this sort could be of interest to Christian theologians.
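The flavor of such an experiment can be conveyed by a toy iterated prisoner's dilemma in which "egoists" always defect while "altruists" cooperate, tempered by a crude sentiment about relative wealth. The payoff matrix, group sizes and behavioral rules below are the present author's assumptions and do not reproduce Bazzan et al.'s actual parameters, though the qualitative outcome (homogeneous altruists accumulate more points) is the same:

    import random

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def move(kind, my_points, partner_points):
        if kind == "egoist":
            return "D"                      # the 'rational fool' always defects
        # Altruist: a crude 'moral sentiment' about relative wealth --
        # always cooperate with a poorer partner, nearly always otherwise.
        if partner_points <= my_points:
            return "C"
        return "C" if random.random() < 0.9 else "D"

    def play(group, rounds=200, seed=0):
        random.seed(seed)
        points = [0] * len(group)
        for _ in range(rounds):
            i, j = random.sample(range(len(group)), 2)
            mi = move(group[i], points[i], points[j])
            mj = move(group[j], points[j], points[i])
            pi, pj = PAYOFF[(mi, mj)]
            points[i] += pi
            points[j] += pj
        return sum(points) / len(points)

    print("all egoists:  ", play(["egoist"] * 10))
    print("all altruists:", play(["altruist"] * 10))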

Evaluation of Agent-based Simulation

Research applications of agent-based simulation tools do present features that, as Nicholas Avouris has observed, "make them particularly difficult to test" (151). Specific kinds of problems he proceeds to identify include (1) numerous "loci of control" created by (often large) populations of relatively autonomous agents, (2) "communication delays" among the many elements of the systems and (3) the fact that "monitoring a distributed system alters its behaviour" (151). Indeed, these conditions can interfere, respectively, with attempts to modify selected activity, define the state of the system and inspect its momentary conditions. In fairness, however, one may reply that even quite traditional business data processing systems can pose combinatorial explosions of possible states that make exhaustive testing altogether impracticable. In fact, application software development experience teaches that testing—in actual commercial practice—tends to be a relatively neglected part of overall system development precisely because it is normally difficult and time-intensive.

Another type of objection that has commonly been associated with computer simulation tools in general is the complaint reported by economist William Brian Arthur in M. Mitchell Waldrop’s Complexity: the emerging science at the edge of order and chaos—viz., the recalled critical conclusion "that you could prove anything you wanted by tweaking the assumptions deep in your model" (268). This conclusion undoubtedly contains an element of truth. In a typical agent-based social science simulation, for example, behavioral results certainly are affected by properties the experimenter assigns to its agents (from the user interface) during simulation "set-up." Complaining about this, however, can betray blindness to the whole point of using this class of computer tools. In any responsibly designed research tool, after all, properties assigned during set-up for any given simulation are recorded and will be available with its results as a matter of public record. That is exactly what permits the social scientist (or theologian?) to conduct "what if" experiments with the tool and to share the comparative results afterward with others—together with the set-up assumptions made in each case. At the risk of some repetition for the sake of emphasis, it has already been observed in the foregoing discussion that an agent-based computer simulation tool that "does not help the experimenter effectively ‘think aloud’ (in a virtual world)" simply "fails to deliver what is arguably the most important system capability of tools in its class."

It is also worthy of notice that the subfield of computer science and technology reviewed in this chapter is sufficiently mature to provide comprehensive resources of support for development of new agent-based simulation tools. A working image of the scope of this infrastructure can be obtained by considering even a brief sample of the "Offline resources" contained in the web-based compendium by Craig Reynolds, mentioned previously. His list, in this category, includes books (Simulating Organizations: Computational Models of Institutions and Groups. Ed. Michael J. Prietula, Kathleen M. Carley, and Les Gasser. Cambridge: The MIT Press, 1998.), journals (Journal of Artificial Societies and Social Simulation), academic laboratories (Wildlife Habitat Analysis Lab of the Department of Wildlife & Fisheries Sciences at Texas A&M University), commercial laboratories (the Emergent Systems Group of PricewaterhouseCoopers) and conferences (First International Conference on Virtual Worlds, July 1-3, 1998, International Institute of Multimedia, Paris, France). The science and technology of agent-based computer simulation tools is extensive and robust. A general case remains to be made for adapting its resources to work in the domain of Christian theology; this will be the burden assumed in Chapter II.

 

CHAPTER II

ARE THEY APPLICABLE TO CHRISTIAN THEOLOGY?

 

Introduction

Chapter I supplied a review of agent-based computer simulation tools. To establish a general case for adapting the resources of this science and technology for work in the domain of Christian theology, Chapter II will identify needs in contemporary Christian theology that such tools could serve and will recommend the principal system and agent capabilities required of tools extended to serve them.

It is important to observe that it is not the enterprise of Chapter II to give reasons simply for using existent agent-based simulation tools in Christian theology, but to recommend feasible extensions of such tools to serve that purpose more effectively. It is also not the aim of the present chapter to give a detailed account of extensions required—its general case for adapting current simulation resources will be supplemented in additional chapters with a more complete exploration of functional requirements. However, it emphatically is the objective throughout this essay to foster development of simulation systems capable of delivering real benefits in science and religion—no technological development is to be recommended simply because it can be done.

System Capabilities for Applications in Christian Theology

Chapter I introduced the topic of agent-based computer simulation systems by locating it—with progressive specificity—in the research contexts of computer science, artificial intelligence, distributed artificial intelligence and multi-agent systems. Similar introductory care seems appropriate in the present chapter, for "Christian theology" marks a topic area at least as broad as "computer science."

For a first approximation of definition, one might profitably consider the following comments from the opening chapter of theologian Daniel Migliore’s book, Faith Seeking Understanding: An Introduction to Christian Theology, which respond to the question, "What is theology?":

It is faith asking questions, seeking understanding. It is disciplined yet bold reflection on Christian faith in the God of the gospel. It is willingness to take rational trouble over the mystery of God revealed in Jesus Christ as attested by Scripture. It is inquiry yoked to prayer. (17-18)

Although it would be inappropriate and unnecessary for the author of the present essay to detail his personal theological positions as a Unitarian Universalist Christian, readers deserve the courtesy of being generally informed regarding the religious perspective from which he is writing. I feel obliged, therefore, to acknowledge that the foregoing remarks by Migliore adequately summarize the basic meaning I associate with the expression "Christian theology." Moreover, my adoption of the perspective those remarks describe holds several implications for the scope of this essay that seem worthy of notice.

First, the perspective entails that it is not a concern of this essay to investigate application of agent-based computer simulation systems to Christian apologetics. I proceed from beliefs in the existence and nature of God (and a critically important relationship of that God with the historical Jesus) which have been articulated historically and publicly within the Christian faith tradition—accordingly, this essay is indifferent toward any projects attempting to buttress such beliefs with computer-based "proofs." Second, this essay is not intended to foster development of computer tools for examining and judging the Christian tradition "from the outside" (as in sociology-of-religion projects, for example). Rather, it is directed toward creation of new tools that Christian theologians might use "from the inside," as it were, to articulate, communicate and reflect upon views they hold as Christians. Third, while the importance of interfaith dialogue will be recognized and addressed in this essay, it will be approached from a Christian perspective. In particular, the analysis will not presume it enjoys any "super-alpha" perspective with pretensions of incorporating all major religious traditions (although system capabilities to support quality dialogue among them will certainly not be ignored).

Some additional refinement of possible meanings for the expression "Christian theology" is also available in Migliore’s writing. At a level of greater specificity, for example, he distinguishes biblical theology, historical theology, philosophical theology, practical theology and systematic theology (9). Without prejudging potential applications in any of the other areas, I should suggest his characterization of systematic theology best captures the main concerns of the present essay: "[…] its particular task is to venture a faithful, coherent, timely, and responsible articulation of Christian faith" (9).

Perhaps the foregoing description immediately invites a somewhat embarrassing question—viz., "Who needs theology?" After all, straightforward proclamation of Gospel text would probably seem to many Christians to satisfy Migliore’s requirements for "faithful, coherent, timely, and responsible articulation of Christian faith." Closer inspection of today’s theological landscape, however, can reveal conditions that generate needs for less simplistic methods. First, one apparent legacy of the Enlightenment is a current need for the Christian community to be capable of responsible and meaningful dialogue with the sciences. In fact, dialogue counts as an element in the basic typology Ian Barbour (representing both domains) has proposed for "ways of relating science and religion"; reviewing patterns of conflict, independence, dialogue and integration, Barbour principally finds reasons to support the dialogue and ("with some qualifications") integration options he defines (77). Authentic dialogue between theologians and scientists, moreover, calls for a "responsible articulation of Christian faith" that is somewhat richer than repeated citation of scripture. Second, the emergence of various forms of "liberation theology" on the contemporary theological landscape (e.g., Latin American, feminist, womanist and black theologies) introduces additional needs for more robust "dialogue" resources. Representing the feminist segment of this broad movement, for example, Rosemary Ruether (in Sexism and God-Talk: Toward a Feminist Theology) regretfully observes "The patriarchal theology that has prevailed throughout most of Christian history in most Christian traditions has rigidly barred women from ministry" (194). Nevertheless, Ruether—having endorsed the construct of a "feminist base community" (205)—also recognizes that a "dialectical relationship between base community and historical institution" is required if one is serious about communicating and historically transmitting feminist options (206). Third, it is clear that ecumenical and interfaith projects of contemporary Christian theology encounter complex demands upon dialogue. Representing current work in this area, theologian Mark Heim’s Salvations: Truth and Difference in Religion repeatedly displays evidence of such requirements. Regarding interfaith dialogue on social justice, for example, he argues "if we are serious about an inclusive dialogue, we must recognize that ‘justice’ is already a significantly exclusivistic way of framing the question" (208). In sum, therefore, it appears that numerous kinds of dialogues create a recurrent need in today’s Christian theology for resources that can represent and communicate complex realities. The need must be satisfied if the central messages of Christian scripture are to become fully meaningful in the current world through a "faithful, coherent, timely, and responsible articulation of Christian faith."

Having identified an important need in Christian theology—for ways to represent and communicate complex realities—we are justified in viewing it also as a basic system capability that should be required of agent-based computer simulation tools for the theological application area. Although numerous existent systems of this class already satisfy such a broadly formulated requirement, closer consideration of the "complex reality" actually addressed in Christian theology reveals at least one feature suggesting a call for careful, innovative and non-trivial extension of available simulation technology.

In particular, it is no secret that the Christian worldview is theistic. Beyond simulated behavior of humans and of nature (which Chapter I has shown is within the purview of current computer science and technology), any system capable of expressing distinctly Christian perspectives should also be able to simulate God’s presence in that world. "Worse yet" (if the phrase may be allowed), this God must in some sense be personal, purposive and interactive with the world. That is to say, the first specific system capability required of simulation systems serving Christian theology appears, unavoidably, to be that God be simulated functionally (in software) as an agent.

At once, some sincere people within the Christian faith, and possibly other religious traditions as well, might view this suggestion as a patent example of idolatry or blasphemy (or both). Neither is intended. Rather, it is the modest aim of this essay to explore a vision of supplying the Christian community with a new medium for expressing its understandings of what God’s presence in this world could mean. To be sure, those understandings have already found good expression in language, in art and—most importantly—in acts of love. To support the kinds of theological dialogue previously mentioned, however, new resources for expressing precise, dynamic and repeatable ideas of God’s significance—for situated humans in a complex web of society and nature—seem to represent possibilities at least worth examining. Hopefully, suggesting a step from writing or saying "God works in wonderful ways" to showing it with a temporally unfolding computer simulation need not reawaken another Iconoclastic Controversy. As John of Damascus inquired, "Is not the ink in the most holy Gospel-book matter?" (286).

Philosophical objections might also be provoked by the very suggestion of incorporating agent modeling of deity in a computer simulation system. Indeed, when one apparently proposes mixing software representations of natural and supernatural agents in a virtual world, some complaints are to be expected. If the supernatural agent, for example, were allowed arbitrarily to perform "miracles" during simulations (interrupting rule-based behavior of human and/or environmental agents), would the overall system not become inconsistent—hence, capable of displaying any representable sequences of behavior? With the possible exception of entertainment applications, the purpose of such a system seems difficult to imagine. Again, what conceivable epistemic basis might there be for design of the "divine" agent? Would software engineers be obliged to derive programs from specific revelation? At first blush, then, any system capability to include agents representing nature, humanity and God may appear to be a philosophical "non-starter."

The latter of the objections just mentioned, however, is surely misguided. It would not be the burden of a simulation system developed for theological use to incorporate in its design any epistemically unassailable account of God’s nature. Instead, it is (even now) regularly the point of such computer-based tools to help experimenters identify implications of various hypotheses about such things as agent properties—hypotheses that are user-specified at "set-up" time. Software engineers would never be responsible for "designing God"; they would be engaging the much more modest task of designing tools for expressing various human conceptions of God.

The first of the foregoing objections also becomes less formidable upon closer inspection. Process theologian David Griffin suggests a way toward philosophical resolution with his statement of the following opinions in Religion and Scientific Naturalism: Overcoming the Conflicts:

[…] theism, with its notion that a divine reality exerts variable influence in the world, is true, but it is a falsifying exaggeration to think that this influence can be all-determining, so that it could interrupt the causal powers and principles of the world. (12)

Although details of software design are beyond the scope of the present essay, it is pertinent at this point generally to observe the following: a computer tool permitting its users the capability to specify "variable influence in the world" for its "divine" agent would be no more restricted to simulating tenets of process theology than those of, say, Calvinism. At the same time, though, it could reap substantial benefits by explicitly reflecting in its architecture some of the "process philosophy" of Alfred North Whitehead that shapes David Griffin’s views. First, as Ian Barbour has observed regarding dialogue between science and religion, "Process philosophy is a promising candidate for a mediating role today because it was itself formulated under the influence of both scientific and religious thought […]" (104). Accordingly, it is reasonable to expect theologians equipped with a computer simulation tool acknowledging its incorporation of Whitehead’s process metaphysics could capitalize upon a valuable "bridge" between Christian theology and the sciences that is presently available. Parenthetically, similar benefits might also be forthcoming for interfaith dialogue with Buddhism. As Lynn de Silva notes in The Problem of the Self in Buddhism and Christianity, "Buddhism is thus seen to be a process philosophy and has affinities with the process philosophy of A. N. Whitehead and Charles Hartshorne" (45). A second benefit of patterning the architecture of a simulation tool to reflect process philosophy deserves special attention, for it directly concerns another basic system capability needed in such tools for successful application in Christian theology.

In their introductory exposition of process theology, John Cobb, Jr., and David Griffin record the following view shared by Whitehead and theologian Paul Tillich:

[…] Whitehead affirms that we exist first of all in community and establish relative independence within it. With Tillich he holds that participation and individuality are polar, so that the more we participate with others in community the more we can become individuals, and the more we become individuals, the more richly we participate in community. (82)

Similar recognition of the fundamentally social nature of the human individual is given even broader Christian significance in the following observation by theologian Daniel Migliore: "Trinitarian theology, when it rightly understands its own depth grammar, offers a profoundly relational and communal view both of God and of life created and redeemed by God" (70). In fact, modeling of a rich communal web interrelating humans, their natural world and God appears to constitute a second specific system capability required of any simulation tool equipped to serve Christian theology. This requirement appears to be satisfied by process metaphysics and—at least partially, as demonstrated in Chapter I—by work already in progress using agent-based computer simulation systems for social science and environmental research applications.

The complex reality addressed in Christian theology, then, implies needs for system capabilities to (1) simulate God functionally as an agent and (2) model a rich communal web interrelating humans, their natural world and God. Moreover, an agent-based computer simulation system satisfying needs for these capabilities could make useful contributions to theological dialogue about that complex reality in areas previously identified, such as science and religion, liberation theology and ecumenical and interfaith projects. (Subsequent portions of this essay will illustrate possibilities for such uses.) Thus far, however, a number of traditional topics in Christian theology may seem to have been neglected. After all, has the "complex reality" addressed in this field not also generated eschatological discussions of "afterlife" and disputes about doctrines such as the Trinity?

Undoubtedly, it has. The reality countenanced by Christian theology has exceeded—and still does exceed—the scope of our ordinary experience of the natural world. Furthermore, the author of this essay respects as meaningful various poetic and artistic attempts that historically have been made to point toward less commonly accessible levels of "what there is"—I do share the belief immortalized in Michael Polanyi’s assertion (in The Tacit Dimension) that "we can know more than we can tell" (4). The very fact that everyday natural language has so often been left behind in such contexts, however, appears relevant to the present discussion. On one hand, it is interesting to conjecture that—while natural languages may not be expressively complete, relative to the entire domain of human knowledge—a future agent-based computer simulation tool might allow theologians to show more than they presently can say. On the other hand, it would be a technically irresponsible non sequitur to claim such discrete event simulations could (even in principle) show anything humanly knowable. Theologians might be well served, in the not-so-distant future, by computer tools that simulate complex temporal processes in a natural world (including "variable influences" of a deity with properties experimentally specified by the user), for the phenomena described are all within the expressive capabilities of such systems. St. Paul’s mystical experience on the road to Damascus, however, represents a different class of phenomena—a class that must honestly and responsibly be acknowledged as exceeding foreseeable capabilities of the subject technology. System capabilities for expressing such aspects of the Christian faith could be stated, but they would be fanciful.

Agent Capabilities for Applications in Christian Theology

A general plan for examining the topic of agent capabilities needed in Christian theological applications is suggested by the foregoing observation that three basic types of agents (representing humanity, God and nature) would be required. In the successive parts of this subsection, each of these agent types will be considered, with occasional references to relevant points that have been introduced in Chapter I.

Software agents representing humans in theological applications would evidently need to be coarse grained agents capable of learning, agent modeling and fairly sophisticated social behavior. Relative to current technology, however, there should be a conspicuous need for extensions in each of these capabilities to accommodate interaction with a "new" type of agent (representing God). Immediately, some comment seems in order regarding the innovation just noted, for it marks another among numerous ways in which application of agent-based simulation to Christian theology could have implications not only for religion but for science as well. In particular, behavioral properties of software agents representing human individuals who can learn about, model and interact with a "deity" agent are likely to be interestingly different from those previously encountered in most artificial intelligence research. An example of this possibility has already been suggested in Chapter I regarding applications of established techniques such as reinforcement learning to Christian notions of grace.

The Christian theological tradition has formulated a number of distinct accounts of grace, and the present essay could not appropriately undertake reviewing all of them. Contributions to the subject by St. Augustine, however, have been very important historically and happen to be especially pertinent to the current discussion. His views regarding the relationships of "law" with grace are particularly relevant. It should not be difficult to imagine the software agents representing humans in a simulation system exhibiting moral behavior constrained by "laws" (i.e., "rule-based" behavior, in the parlance of artificial intelligence). To the extent an agent may learn such laws in the course of a simulation (say, from another agent, with some social authority, pronouncing "Thou shalt not kill"), the learning might be described as illustrating what has been mentioned in Chapter I as an "external" type of interagent relation. The following comments by Augustine "on the grace of Christ," however, introduce the need for a different type of interagent relation:

Law and grace are so different that, although there is no doubt that the law comes from God, still the righteousness which comes from the law is not from God, while the righteousness perfected through grace is from God. The righteousness which is maintained because of the law’s curse is attributed to the law; the righteousness attributed to God comes through the blessing of grace which makes the command attractive rather than threatening. (71-71)

Mapped back into the language of AI, what Augustine is saying might have the following form:

A human-type agent can learn rules from other human-type agents (through external relations) and progressively begin constraining its behavior accordingly. If the agent incorporates reinforcement learning, however, it can also receive—from the God-type agent (through an internal relation)—modifications to what it considers rewarding (reinforcing).

Not only is the foregoing a preliminary description of some kinds of learning human agents would need to be capable of performing in a simulation system extended for service to Christian theology—it is also a description of an interesting (and possibly fairly novel) form of reinforcement learning in which one agent can directly alter what counts as reinforcing among certain other agents during a simulation. The dynamics of simulations executed by agent-based systems incorporating such learning capabilities could be instructive to theologians as well as to members of AAAI.
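
For readers with technical interests, the kind of learning just described may be indicated with a brief sketch in the Python programming language. The sketch is purely illustrative: the agent classes, the rule set and the numeric reward values are hypothetical assumptions of the author, not components of any existing system. It suggests, however, how an "external" relation (rule learning from another human-type agent) might coexist with an "internal" relation in which a God-type agent directly alters what the learner experiences as reinforcing.

    # Hypothetical sketch: a human-type agent that (1) learns behavioral rules
    # from other human-type agents (an external relation) and (2) can have its
    # reward function altered by a God-type agent (an internal relation).
    # All names and numeric values are illustrative assumptions.

    class HumanAgent:
        def __init__(self, name):
            self.name = name
            self.rules = set()                      # prohibitions learned externally
            self.rewards = {"wealth": 1.0, "help_neighbor": 0.1}

        def learn_rule(self, rule):
            # External relation: another agent pronounces a law.
            self.rules.add(rule)

        def receive_grace(self, new_rewards):
            # Internal relation: the God-type agent alters what the agent
            # experiences as rewarding (the command becomes attractive).
            self.rewards.update(new_rewards)

        def choose_action(self, candidates):
            permitted = [a for a in candidates if a not in self.rules]
            return max(permitted, key=lambda a: self.rewards.get(a, 0.0))

    class GodAgent:
        def offer_grace(self, agent):
            agent.receive_grace({"help_neighbor": 2.0})

    person, god = HumanAgent("A"), GodAgent()
    person.learn_rule("kill")                                         # "Thou shalt not kill"
    print(person.choose_action(["wealth", "help_neighbor", "kill"]))  # wealth
    god.offer_grace(person)
    print(person.choose_action(["wealth", "help_neighbor", "kill"]))  # help_neighbor

In a fuller treatment, the reward signal would shape an evolving policy over many trials, in the manner surveyed by Sutton and Barto; the point of the sketch is only that one agent may, through an internal relation, modify what counts as reinforcing for another.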

Another class of capabilities often served by agent learning involves the forms of "agent modeling" discussed in Chapter I. With respect to acquaintance models, one might reasonably expect the human-type agents in a simulation system adapted for theological application to be capable of building models of the historical Jesus, using information they receive through external relations (such as church teachings of other human-type agents). From both external and internal relations, they should also be capable of learning some acquaintance model of God—indeed, this standard DAI concept is arguably an appropriate metaphor for the experiences of real human individuals as they—according to common church vernacular—"get to know God." Moreover, some frankly rather fascinating possibilities are suggested if one adds the reasonable requirement that the human-type agents be adaptively capable of constructing self models. One might anticipate, for example, interesting effects upon their behavior if such coarse grained and sophisticated human-type agents were additionally capable of adapting their own self models to incorporate features of their acquaintance models of God.

The capability of human-type agents to build self models during simulation execution should be an especially important requirement for theological applications. Although it has been observed in Chapter I that DAI research has already begun to experiment with Antonio Damasio’s concept of "autobiographical self," substantial extensions of such work should be expected from development of agent-based simulation systems to serve theology. As Paul Tillich has observed in Morality and Beyond, "Every moral act is an act in which an individual self establishes itself as a person" (20). This fundamental notion of growth of a person is ubiquitous in Christian thinking. One finds theologian Daniel Migliore, for example, urging that "our knowledge of persons requires attention to persistent patterns in their actions that manifest, as we might say, who they really are, what is in their heart, what their true character is" (31). In the same context, he recommends viewing our selves as "agents who manifest their identity and intentions in actions" (31). In fact, this human long-term functional capability to build self models would ideally be represented in a theological agent-based system even to a level of sophistication supporting the emergence of meaning. As Migliore also points out, humans "search not only for physical and emotional satisfaction but for a meaning in life that is very difficult to define or pin down" (128). It has long been a staple of DAI research to think of agents as possessing (often, in fact, simply being assigned) "goals" that influence their behavior. Agents capable of building autobiographical self models that they would concurrently "mine" for patterns defining goals (behaviorally interpreted as personal meanings) would require some challenging extensions of prior work. Indeed, it is likely that some of the functional capabilities identified in this essay could entail significant research and development efforts. This confession, however, is a double-edged sword, for it also reveals that—proceeding from the perspective of Christian theology—we may discover some tasks of potential interest and value as artificial intelligence projects.
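
A rough indication of what "mining" an autobiographical self model for goal-defining patterns might involve is sketched below, again in Python and again under frankly hypothetical assumptions: the record structure, the frequency criterion and all of the names are illustrative inventions of the author.

    # Hypothetical sketch: an agent keeps an autobiographical record of its
    # actions and periodically "mines" that record for recurring patterns,
    # promoting the most frequent kinds of action to explicit goals
    # (behaviorally interpreted here as "personal meanings").
    from collections import Counter

    class AutobiographicalSelf:
        def __init__(self):
            self.history = []                # (time_step, action) pairs
            self.goals = set()

        def record(self, t, action):
            self.history.append((t, action))

        def mine_for_meaning(self, threshold=3):
            counts = Counter(action for _, action in self.history)
            for action, n in counts.items():
                if n >= threshold:
                    self.goals.add(action)   # a persistent pattern becomes a goal
            return self.goals

    self_model = AutobiographicalSelf()
    for t, act in enumerate(["pray", "trade", "pray", "help", "pray", "help", "help"]):
        self_model.record(t, act)
    print(self_model.mine_for_meaning())     # {'pray', 'help'}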

Besides capabilities for learning and constructing agent models, the coarse grained software agents representing humans in theological applications would also need to be capable of fairly sophisticated social behavior. In reviewing some of the social behaviors that already have received attention in agent-based simulation research, Chapter I identified several (e.g., coordination and coalition formation) that generally involve capabilities of agents to recognize and interact with other agents as members of groups. Although development of such systems for service in theology might not require radically new methods, it is likely to demand numerous "application-specific" extensions. This is particularly true for simulation systems that might serve the needs of liberation theologians. A clear illustration of this specific class of capability requirements is evident in the following comments about "racism as a social structure" by Susan E. Davies:

Racism is a socially constructed relationship between groups within any given society. The dominant cultural, political, and economic group defines one or more other groups as inherently of lesser human value based on the others’ racial or ethnic origin and enforces that definition with its social, political, and economic power. (35)

Undoubtedly, development of a simulation system with human-type agents capable of modeling recognizable racist behavior patterns (and perhaps even capable of learning how to abandon them) would be at once a worthwhile artificial intelligence research challenge and a potential source of useful tools for liberation theologians (e.g., for research, or possibly for instructional applications).

As a first step toward identification of some representative functional capabilities needed for a "God-type" agent, it should probably be prudent immediately to re-emphasize an important observation recorded previously in this chapter. Specifically, it is not "the burden of a simulation system developed for theological use to incorporate in its design any epistemically unassailable account of God’s nature." That is to say, the task being addressed here is emphatically not the task of determining God’s nature—so it can be permanently "programmed" into a software agent (or "hard coded," as software engineers used to say). Instead, the more modest present objective is to identify some representative capabilities that future "experimental theologians" might plausibly wish to assign to a God-type agent at "set-up" time (for purposes of a simulation about to be executed).

For Christian applications one should anticipate a need for at least a set of "default" capabilities resembling those possessed by human-type agents in the simulation system (e.g., capabilities for social behavior, such as group recognition). This much would reflect a fairly indisputable characterization voiced by Daniel Migliore: "The God of the biblical witness is not impersonal but personal reality and enters into living relationship with creatures" (67), the latter description also being identified by church consensus with the Holy Spirit (169). Although it has been urged in the foregoing discussion of system capabilities that the envisioned computer tools could not responsibly model the Trinity explicitly, God’s "living relationship with creatures" could reasonably be interpreted tacitly as activity of the Holy Spirit and architecturally implemented in a manner prescribed by process philosophy (e.g., provision of "initial aim" possibilities by the God-type agent to human-type and nature-type agents). More attention will presently be given to this suggested implementation of process philosophy’s constructs. With respect to capabilities of a God-type agent, however, it immediately invites mention of pertinent observations by process theologians Cobb and Griffin. First, it seems likely that individuals representing this relatively new subfield of Christian theology would wish to assign the God-type agent the capability to "lure" other types of agents in a simulation toward "more complex actualities" (64) through provision of initial aims, for Whiteheadian process philosophy would normally ascribe this level of "purpose" to such an agent. Second, one might expect process theological simulation experiments of this kind to produce results of potential interest in multi-agent systems research, for the authors note "God, far from being the Sanctioner of the Status Quo, is the source of some of the chaos in the world" (60).
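
To make the suggested implementation slightly more concrete, the provision of "initial aims" could be given a first, admittedly crude, computational reading along the following lines. The sketch is an assumption-laden illustration (the option values, the receptivity parameter and the "lure" heuristic are all hypothetical), not a proposal for how a production system would actually implement Whiteheadian categories.

    # Hypothetical sketch: at each simulated "occasion" the God-type agent
    # offers a creaturely agent an initial aim (a suggested action), which the
    # agent is free to adopt or to ignore according to its receptivity.
    import random

    class GodAgent:
        def provide_initial_aim(self, agent, possibilities):
            # A crude "lure": prefer the option the agent itself values least
            # in purely self-interested terms.
            return min(possibilities, key=agent.self_interest)

    class CreatureAgent:
        def __init__(self, receptivity):
            self.receptivity = receptivity   # between 0.0 and 1.0

        def self_interest(self, option):
            return {"hoard": 2.0, "share": 0.5, "rest": 1.0}.get(option, 0.0)

        def act(self, possibilities, aim):
            if random.random() < self.receptivity:
                return aim                   # adopt the initial aim
            return max(possibilities, key=self.self_interest)

    god, creature = GodAgent(), CreatureAgent(receptivity=0.7)
    options = ["hoard", "share", "rest"]
    aim = god.provide_initial_aim(creature, options)
    print(aim, creature.act(options, aim))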

A broad class of divine capabilities that theologians have associated with "economic" qualities of God—omniscience, omnipotence, omnipresence and omnibenevolence—would need to be adjustable (despite their "omni" prefixes) if the envisioned simulation tools were intended to accommodate different views historically found in Christian theology. In addition, they exhibit some functional interdependencies that could strongly constrain capabilities of the God-type agent as well as the dynamics of interagent relations. For example, a minimal-value setting for omnipresence would effectively remove the God-type agent from simulation activity, perhaps representing Deism. If the foregoing suggested use of process philosophy’s constructs were acceptable, omnipotence could readily be adjusted at set-up time by specifying variable levels of "potency" for the initial aims to be furnished by the God-type agent to other agents during simulation execution. Historical theodicy problems amply suggest that initial specification of absolute omnipotence and omnibenevolence could be difficult to reconcile with levels of world evil. Contemporary process theologians would avoid this embarrassment by specifying (terminology notwithstanding) more modest levels for "omnipotence"—as Cobb and Griffin explain, "Since God is not in complete control, the divine love is not contradicted by the great amount of intrinsic evil, or ‘disenjoyment,’ in the world" (56). On the other hand, the authors acknowledge that relaxing assumed levels of God’s power can impact notions of omniscience, for it implies "God’s concrete knowledge is dependent upon the decisions made by the worldly actualities" (47). In any case, development of systems containing a God-type agent with user-specifiable levels of the "economic" qualities should at least entail some extensions of current agent-based simulation technology, and almost certainly reveal some interesting new system dynamics.
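
One simple way the "economic" qualities just discussed could be made user-adjustable at set-up time is sketched below. The parameter names, their numeric ranges and the interpretations attached to them are hypothetical assumptions, offered only to make the idea of adjustable divine attributes concrete.

    # Hypothetical sketch: set-up parameters for a God-type agent, each
    # adjustable between 0.0 and 1.0 so that different theological positions
    # (e.g., Deism, classical theism, process theism) can be approximated.
    from dataclasses import dataclass

    @dataclass
    class GodAgentConfig:
        omnipresence: float = 1.0     # 0.0 effectively removes the agent (cf. Deism)
        omnipotence: float = 0.5      # potency of the initial aims it furnishes
        omniscience: float = 0.8      # fraction of the world state visible to it
        omnibenevolence: float = 1.0  # weight given to creaturely well-being in its aims

        def validate(self):
            for name, value in vars(self).items():
                if not 0.0 <= value <= 1.0:
                    raise ValueError(f"{name} must lie in [0.0, 1.0]")

    # A "process" style set-up: modest potency, knowledge dependent on the world.
    process_setup = GodAgentConfig(omnipotence=0.3, omniscience=0.6)
    process_setup.validate()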

Capabilities that should be required of nature-type agents, in simulation systems for theological applications, appear in many respects to be located well within the scope of current DAI science and technology. As illustrated in Chapter I, mixed ecosystems (including human presence, forests, birds and the like) have already been simulated. However, the requirement that agents of this class be normally capable of interaction with a God-type agent undoubtedly introduces a general need for extension of existent resources.

Variables for Applications in Christian Theology

Chapter I introduced some illustrations of variables currently used in simulation systems applicable to social science research. Consideration of variables that appear suitable for applications in Christian theology indicates a number of ways in which the existent resources would need to be extended.

Variables involving agents that model humans should especially require attention. First, theologians could be expected to be interested in a triad of ethical dispositions (Aquinas’ "supernatural" virtues)—faith, hope and love. Migliore, in fact, devotes an entire final chapter of his introduction to Christian theology to hope, explaining that "apart from hope, every Christian doctrine becomes distorted" (231). To the extent that Christian understandings of these virtues tend to view God as their source, it is unlikely existent human-type agents model them in ways that would be adequately meaningful to theologians. On the other hand, current agent-based research on representation of human emotions could at least form a potentially useful starting point for the needed extensions.

Second, varying degrees of altruism (versus egoism) in the social behavior of human agents should apparently be a subject of concern in theological applications. Again, Chapter I supplied evidence this already has become a topic of research in social science uses of agent-based computer simulation systems. Approached from the perspective of Christian theology, however, such work should be likely to introduce needs for a number of additional variables. A representative short list of such agent-related variables might include church membership, propositional beliefs, scripture interpretation and "social location." The latter expression has been used by Anthony Ceresko to describe what he views as an "important consideration" for understanding "liberation theology and the Bible"—as he observes, "People in different social and political contexts bring to the Bible quite different questions" (9). A related complication that could recommend some form of user-specified "social condition" variable has also been suggested in observations by Mark Heim regarding "economic autonomy for individual women" (206). Although this topic certainly should be a likely concern in simulation experiments serving feminist (liberation) theology, Heim points out that it is strongly affected by social preconditions such as "a system of private property ownership, an exchange economy, a legal structure prioritizing the individual over communal or family groups, and so on" (206). Parenthetically, such recognition of the complexity of actual social systems is also a reminder of the importance of modesty in defining scope of intended application for future agent-based simulation tools. Although the present chapter has been explicitly introduced as aiming to present a "general case for adapting current simulation resources," attention will be returned at its conclusion to the need for sharper focus when formulations of more specific requirement definitions are undertaken.

Third, an agent-related variable shown in Chapter I to be a subject of research attention for current agent-based systems was generally identified as "adjustable autonomy." The additional observation that this may be interpreted as adjustable "freedom of the will" may elicit some philosophical objections claiming "freedom" marks an irreducible difference between human persons and machines. I have argued elsewhere (Metzler 2001) that a precisely defined conception of personal freedom is consistent with functional capabilities already realizable in the science and technology of artificial intelligence. To this extent, at least, there is warrant for considering possible needs in theological applications for human-type agents with varying levels of autonomy. At once, it should seem that agents capable of adaptively adjusting their own autonomy might permit modeling of some Christian notions of evolving relationships in which an individual "grows closer to God." Interpreted in the kinds of process philosophical terms previously recommended, this could also be described as agents learning to become increasingly receptive to God’s "initial aims" (or—in more traditional terms—to God’s grace). In addition, it seems reasonable to suppose theologians should wish to experiment with behavioral implications of varying user-specified levels of autonomy for human-type agents (either collectively or selectively, to allow comparisons).
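
The notion of an agent "growing closer to God" by adaptively adjusting its own receptivity to initial aims might be prototyped roughly as follows; the update rule and its constants are, once more, hypothetical assumptions chosen only for illustration.

    # Hypothetical sketch: a human-type agent that raises or lowers its own
    # receptivity to initial aims according to how satisfying adopting them
    # proves to be (a crude form of adjustable autonomy).
    class AdaptiveAgent:
        def __init__(self, receptivity=0.2, learning_rate=0.1):
            self.receptivity = receptivity
            self.learning_rate = learning_rate

        def update(self, adopted_aim, satisfaction):
            if adopted_aim:
                # Move receptivity toward 1.0 or 0.0 depending on the outcome.
                target = 1.0 if satisfaction > 0 else 0.0
                self.receptivity += self.learning_rate * (target - self.receptivity)
                self.receptivity = min(1.0, max(0.0, self.receptivity))

    agent = AdaptiveAgent()
    for _ in range(10):
        agent.update(adopted_aim=True, satisfaction=1.0)
    print(round(agent.receptivity, 2))       # drifts upward toward 1.0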

Finally, the very presence of user-specified variables allowing theologians to "experiment" with different agent-related "set-up" conditions (such as differences in church membership, propositional beliefs, social location, personal autonomy and the like) presumes another class of variables needed to report "results." Variables of this class would display values of what engineers sometimes call "objective functions," which—in the application domain of theology—might receive some interestingly novel names (such as "worldly realization of the Kingdom of God" or "level of racist behavior within the Church").
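
Result-reporting variables of this class could be implemented as ordinary objective functions computed over the simulated population. The following sketch, whose measure of exclusionary behavior is an entirely hypothetical stand-in for something like "level of racist behavior within the Church," is offered only to display the mechanism.

    # Hypothetical sketch: an objective function summarizing one "result"
    # variable across a population of human-type agents.
    def level_of_exclusionary_behavior(agents):
        """Fraction of recorded interactions in which an agent refused to
        cooperate with a member of another group (an illustrative measure only)."""
        refusals = total = 0
        for agent in agents:
            for other_group, cooperated in agent["interactions"]:
                if other_group != agent["group"]:
                    total += 1
                    refusals += 0 if cooperated else 1
        return refusals / total if total else 0.0

    population = [
        {"group": "A", "interactions": [("B", False), ("B", True), ("A", True)]},
        {"group": "B", "interactions": [("A", True), ("A", False)]},
    ]
    print(level_of_exclusionary_behavior(population))   # 0.5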

User Interfaces for Applications in Christian Theology

With respect to user interfaces, the theological application area is likely to demand full exploitation of available technical methods because it presents conspicuous needs for strong interaction between the human user and the computer. As this essay has consistently urged, the greatest value of an agent-based computer simulation system in theological applications would plausibly center upon capabilities that allowed it to serve as a "thinking tool." Perhaps it should be explicitly noted that this is a role not uniformly important in all applications of computer simulation systems. One might consider, for example, an agent-based tool that determines optimal assignments of aircraft to docking locations at a busy airport. A tool of this kind would normally serve its users more clearly by producing fast and accurate answers to a well-defined problem than by helping them explore and understand the dynamics of interactions within a complex system. Accordingly, airport managers would typically be satisfied if the tool’s user interface supported quick specification of parameters for a current situation and useful display of a solution. In contrast, users of a tool serving the latter kind of purpose (exploration and understanding) would normally be more interested, say, in capabilities of its user interface to display trajectories of multiple interacting variables for the course of the simulation period, explain and/or facilitate analysis of patterns revealed in those trajectories, or track adaptive behavior exhibited by individual agents during the simulation. These latter kinds of capabilities more appropriately describe those one might expect to encounter in theological applications.

An additional type of user interface capability that should be important in theological applications may be derived from one general system capability mentioned previously—viz., the capability to support dialogue. Unlike the foregoing hypothetical airport manager (who might consult the planning tool and share nothing more with other personnel than its prescribed allocations of aircraft to gates), a characteristic Christian theological user would probably wish to discuss details of given simulations with others (representing, for example, the scientific community, other Christian traditions or even separate faiths). Accordingly, agent-based simulation tools in this application area would particularly need user interface capabilities to record and display all set-up specifications employed for each completed simulation and to furnish clear and flexible graphical displays of results.
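
The requirement to record and re-display complete set-up specifications could be met with quite standard techniques. The following minimal sketch simply writes a hypothetical set-up and its resulting variable trajectories to a file, so that dialogue partners could inspect, dispute and re-run exactly what was assumed; the variable names are illustrative only.

    # Hypothetical sketch: persisting a simulation's set-up and results so
    # that they can be shared, inspected and re-run by dialogue partners.
    import json, time

    def save_run(setup, trajectories, path="run_record.json"):
        record = {
            "saved_at": time.strftime("%Y-%m-%d %H:%M:%S"),
            "setup": setup,                  # every user-specified variable
            "trajectories": trajectories,    # variable values per time step
        }
        with open(path, "w") as f:
            json.dump(record, f, indent=2)

    setup = {"omnipotence": 0.3, "population": 100, "social_location": "urban poor"}
    trajectories = {"solidarity_index": [0.20, 0.25, 0.31], "mean_wealth": [10, 11, 9]}
    save_run(setup, trajectories)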

Uses of Agent-based Simulation for Applications in Christian Theology

How might an agent-based computer simulation system—developed with attention to the kinds of capabilities and extensions indicated in the preceding subsections of this chapter—actually be used by Christian theologians? As Thomas Aquinas might have said, the answer may be given in two parts.

First, any answer offered at this time is likely to be fairly inaccurate and incomplete. According to a common (albeit possibly apocryphal) anecdote in the computer business, several founders of the IBM Corporation anticipated a total market for as many as ten of their machines. The potential utility of innovative artifacts can initially be unclear, as the present author also has learned by contributing to development of some computer simulation systems for governmental clients. The full usefulness of such tools typically is appreciated only after an elementary prototype has been placed in the hands of prospective users (who are then able quickly to begin imagining what one "could do" with it).

Second, it is nevertheless likely that the foregoing material in this essay has established a background context sufficient for supporting some plausible conjectures. In Chapter I it was noted that current research applications of agent-based computer simulation in the social sciences seemed to "promise special relevance for the work of Christian theologians." Chapter II has already specifically marked the field of liberation theology as a candidate for such relevance. Historian Edwin Gaustad, in A Religious History of America, characterizes the "recent intellectual development known as liberation theology" in the following terms:

Poverty, hunger, injustice, and oppression were seen as theological problems, not simply political or social inequities; the theology that failed to address such problems, that failed to concern itself with discrimination and domination, was a theology irrelevant at best, wicked and exploitive at worst. (344)

From this description, one should reasonably expect that some liberation theologians have strong interests in the social sciences. Gustavo Gutierrez, a recognized leader of Latin American liberation theology, confirms this expectation promptly in A Theology of Liberation:

The social sciences, for example, are extremely important for theological reflection in Latin America. Theological thought not characterized by such a rationality and disinterestedness would not be truly faithful to an understanding of the faith. (5)

Subsequently in his book, one finds Gutierrez furnishing more specific illustrations of his generalization; he mentions, for example, the following interesting claim, issued by a group of Chilean priests:

We do not believe persons will automatically become less selfish, but we do maintain that where a socio-economic foundation for equality has been established, it is more possible to work realistically toward human solidarity than it is in a society torn asunder by inequity. (66)

This claim—in an agent-based computer simulation system suitably equipped with the sorts of agent capabilities and variables previously discussed—plausibly could be expressed more rigorously as a repeatable (hence, sharable) experiment, bearing explicitly recorded assumptions and quantified results. That is to say, Latin American liberation theologians apparently could make worthwhile use of appropriately developed agent-based simulation tools.
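
To make the notion of a repeatable, sharable experiment concrete, the priests' claim might be cast roughly as a paired simulation run under two set-up conditions. Everything in the sketch below (the variable names, the toy dynamics, the numbers) is a hypothetical assumption; the point is only the experimental form, with explicit assumptions and quantified results.

    # Hypothetical sketch: a paired "experiment" comparing simulated solidarity
    # under two set-up conditions (with and without a socio-economic foundation
    # for equality). The dynamics are toy dynamics, stated explicitly so that a
    # dialogue partner can dispute or alter them.
    import random

    def run(equality_foundation, steps=50, seed=0):
        rng = random.Random(seed)
        solidarity = 0.1
        for _ in range(steps):
            inequality_pressure = 0.0 if equality_foundation else 0.02
            solidarity += rng.uniform(0.0, 0.01) - inequality_pressure
            solidarity = min(1.0, max(0.0, solidarity))
        return round(solidarity, 3)

    print("with foundation:   ", run(equality_foundation=True))
    print("without foundation:", run(equality_foundation=False))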

Dialogue partners for liberation theologians so equipped could include governments and the business world, even in matters concerning current science and technology. Biologist Stephen Nottingham, in his recent book, Eat Your Genes: How Genetically Modified Food is Entering Our Diet, issues the following claim regarding applications of genetic engineering in agriculture:

[…] the technology as it currently stands is not decreasing the gap between the rich and poor, and is not being adapted to suit the local conditions found in developing countries. (164)

In this context, one may be tempted to imagine liberation theologians demonstrating back-to-back socio-economic-ecological computer simulations—one of which employs business and government groups disposed to implement Christian ethical teachings, while the other features them simply in secular pursuit of wealth and power (including, for each case, some appropriate objective measures of resulting differences in overall social well-being). Admittedly, the theologians’ dialogue partners in business and government might find this new medium of expression somewhat provocative and disturbing—but they almost certainly would pay attention. In fact, criticisms of the tool (e.g., "Your simulation assumes X, but X is not the case") should generally be welcomed by the religious community as entrees to constructive dialogue. It is, after all, a characteristic strength of such systems that they permit rapid and flexible "what if" simulations (e.g., "Thank you – but look what happens when we assume X is not the case").

Expected Benefits of Applying Agent-based Simulation in Christian Theology

Using existent agent-based computer simulation science and technology as a basis for development of tools specifically tailored to applications in Christian theology could yield significant benefits in both religion and science. The envisioned class of tools would model a complex reality composed of interactive agents representing humanity, God and nature to support dialogue of several kinds. For intra-faith dialogue and for interfaces with the secular world, such systems could especially improve the capabilities of process theologians and liberation theologians to articulate their views precisely and flexibly. They would also offer theologians a new medium of expression that should be readily understood and appreciated in the scientific community. By allowing a broad range of cultural conditions to be represented explicitly, they could be expected, moreover, to improve interfaith dialogue. Within computer science (particularly, the research fields of distributed artificial intelligence and multi-agent systems), many of the extensions of current methods made during development of theological applications could be expected to reveal new dynamical properties of theoretical interest.

More generally, the potential role of the recommended innovations as media of expression and communication may warrant attention. Historically, such new resources as movable type, radio and television have been employed by the Christian church in myriad enterprises of considerable social significance. It does not seem irresponsibly ambitious to rank among such resources the emerging capability of artificially intelligent computer systems to simulate complex interactions of agents.

Conclusion

Chapter I furnished an introductory overview of agent-based computer simulation systems, and Chapter II has developed a general case for adapting resources of this type for use in Christian theology. The next step toward actual development of such tools would prudently be formulation of specific functional requirements for a prototype system of manageable scope to serve a particular Christian theological application. Chapter III will initiate a formulation of that kind.

CHAPTER III

ENABLING EXPERIMENTAL THEOLOGY

 

Introduction

Establishing a general case for introducing agent-based computer simulation in Christian theology does not immediately recommend development of an omnibus system to serve all users and purposes. Successful introduction of simulation resources in the theological community will more likely be achieved through development of systems focused upon specific application areas. Several considerations support this judgment. First, software tools of less ambitious scope—being relatively less complex—tend to require smaller investments of learning time (a feature always welcome among busy users). Second, more modest development costs can be realized for systems incorporating artificial intelligence software by concentrating upon particular topic domains. Third, tools constructed for specific applications are more likely to be executable with commonly available computer hardware resources, making them more widely accessible to users.

This chapter will explain why investigation of altruism offers an excellent specific application topic for introducing agent-based computer simulation in Christian theology. The explanation will proceed from review of some useful background information to description of issues in contemporary investigation of altruism—issues, in fact, that already have engaged substantial numbers of scientists, philosophers and theologians in dialogue. Finally, it will be argued that equipping theologians with a new class of agent-based simulation tools—a class explicitly intended to support expression of their views of altruistic behavior—should be undertaken. Stated more dramatically, this concluding segment of the chapter will propose enabling an innovative enterprise ("experimental theology") to facilitate emergence of a fundamentally new approach to the investigation of altruism.

Investigating Altruism: Definitions and Background

Coined in the nineteenth century by pioneer sociologist Auguste Comte, the term "altruism" has been defined somewhat differently in various times and disciplines. Nevertheless, the semantic core of the concept has remained fairly stable. The 1971 Unabridged Edition of The Random House Dictionary of the English Language defines "altruism" as "the principle or practice of unselfish concern for or devotion to the welfare of others." This definition appears to retain essentially the meaning with which "altruism" was coined; psychologists Samuel and Pearl Oliner report that "Comte conceived of altruism as devotion to the welfare of others, based in selflessness" (4). The term is regularly contrasted with its opposite notion, "egoism," which the previously-mentioned dictionary takes to comprise "the habit of valuing everything only in reference to one’s personal interest; selfishness." Both terms are further distinguished as having psychological and ethical meanings—the former being descriptive and the latter normative.

Any society of personal agents capable of action representing only one of these options (altruism or egoism) would necessarily be one in which meaning could no longer be assigned to the distinction marked by both terms. This observation warrants mention only because one occasionally encounters the peculiar argument that—since every act by an agent reflects what the agent desires to do—all acts must be egoistic (by definition). The economist Henry Hazlitt is among the writers who have convincingly challenged this semantic argument with a common sense observation that we sometimes simply desire to do things motivated by concern for others. Accordingly, he concludes the contrasting "essence of egoism" is to be more precisely defined as "the pursuit of personal ends at the cost of those of others" [emphasis added] (par. 4).

Although the term "altruism" first appeared in modern times, theologian Colin Grant points out that a distinctively Christian background for the concept it denotes can hardly be overlooked (167). Indeed, even passages from the Gospel of Luke that have passed the Jesus Seminar’s critical scrutiny furnish ample display of unselfish concern for the welfare of others: "love your enemies" (Lk 6:27), "give to everyone who begs from you" (Lk 6:30), "If you love those who love you, what merit is there in that?" (Lk 6:32) (Funk 291, 294, 296). Another prominent element of the background reflected in contemporary thinking about altruism is undoubtedly the legacy of Darwinism. For reasons of the sort suggested in the following comments by process theologian David Griffin, apparent evidence of altruistic conduct has been—and remains—a source of difficulty for the entire family of Darwinian schools of thought:

The problem is that the theory of natural selection entails that all habits must provide a survival advantage for the organisms (or their genes), and it is hard to see how altruism, which involves self-sacrificial behavior, can be so explained. (267)

As the discussion now turns attention to issues in the contemporary investigation of altruism, Christian theology and Darwinism will regularly be evident as the most prominent background axes along which the issues are articulated.

Investigating Altruism: Contemporary Activity in Science and Theology

Chapter I supplied some illustrations of computer science research in which altruistic behavior has been recognized as a capability of artificially intelligent agents. Paola Rizzo and colleagues furnish an additional and particularly explicit example of such recognition in their 1997 paper titled "Personality-Driven Social Behaviors in Believable Agents." They define personalities for AI agents that include the "altruist," which "sincerely cares about others, and is willing to help them, even at its own disadvantage" (112). A strikingly less dispassionate reference to such behavior has been attributed to AI pioneer Herbert Simon, who—according to theologian Colin Grant—once proposed "activities that aid others are so foreign to the foundational predilection to self-interest that they can only be attributed to docility and stupidity" (72). The more recent work of L. M. Hogg and Nick Jennings, described in Chapter I, does appear less strident in expressing its assumptions concerning self-interest—indeed, it has already been noted these authors envision a choice function for agents that balances "selfish" against "altruistic" utility assessments in determining actions. That their approach also marks a fairly recent divergence from prevailing work, however, is evident in the following introductory comments they offer:

The exact nature and the underpinning principles of an agent’s decision making function have been studied in a range of disciplines including philosophy, economics and sociology. [ … One notices "theology" is absent from this list. …] The current predominant view is to equate rational decision making with that of maximizing the expected utility of actions as dictated by decision theory […]. (61)

Some departure from this predominant view was also displayed in the Chapter I description of recent computer simulation experiments by Bazzan and colleagues—in fact, it was noted their results suggested conditions under which decisions made according to classical Game Theory "may not be the best attitude in the long run." Nevertheless, even the approaches of Hogg, Jennings and Bazzan still reflect tacit naturalistic (specifically, non-theistic) assumptions about the human agents they model. If there has been some divergence from assumptions of purely self-interested agents, the agents remain essentially natural creatures making choices on utilitarian grounds.
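
The contrast between the predominant view (pure expected-utility maximization) and a choice function that also weighs the utilities of others, in the general spirit of the socially rational agents of Hogg and Jennings though not their actual formulation, may be indicated with one further short sketch. The weighting scheme and the numbers are hypothetical simplifications.

    # Hypothetical sketch: a decision rule that balances "selfish" against
    # "altruistic" utility assessments, contrasted with pure self-interest.
    # (A simplification in the spirit of socially rational agents; not the
    # formulation given by Hogg and Jennings.)
    def choose(actions, weight_others=0.5):
        # actions: {name: (utility_to_self, utility_to_others)}
        def value(name):
            u_self, u_others = actions[name]
            return (1 - weight_others) * u_self + weight_others * u_others
        return max(actions, key=value)

    actions = {"keep": (10, 0), "share": (6, 8), "donate": (2, 11)}
    print(choose(actions, weight_others=0.0))   # keep   (purely self-interested)
    print(choose(actions, weight_others=0.5))   # share  (balanced)
    print(choose(actions, weight_others=0.9))   # donate (strongly other-regarding)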

In the field of economics, a similar range of assumptions may be found regarding the nature of human agents and their decision-making processes. Although contemporary economists such as William Brian Arthur endorse the methodology of computer simulations (Waldrop 269), philosopher Joseph Des Jardins characterizes the discipline’s basic view of human agency in the following terms:

The fundamental assumption about human nature is that human beings act, primarily if not solely, on the basis of self-interest. Self-interest then is understood in the classic utilitarian sense of maximizing our own satisfactions or ‘utilities.’ Altruism or acting for the best interests of others would require that human nature be ‘rewired.’ (53)

Alfie Kohn, in The Brighter Side of Human Nature, corroborates Des Jardins’ assessment somewhat more bluntly: "Egoism is not an assumption but the assumption underlying neoclassical economics, which is, in turn, the dominant approach to the discipline in this country" (185). Again, philosopher Nicholas Rescher refers to the literature of "Prisoner’s Dilemma" research in an equally direct fashion:

Virtually all writers on the economic aspect of the subject unblushingly identify rationality with what is, in effect, simply self-interested prudence. Without further ado, they assume the stance that any reference to the interests of others would be discounted altogether by the rational man. (38)

In fairness, at least one dissenting voice from the economics community—that of Henry Hazlitt—deserves to be recognized. Consistently with his common sense approach noted previously, Hazlitt has observed that a society comprised entirely of either altruistic or egoistic agents would not be "workable" (par. 5). Nevertheless, one can generally expect social simulations in the field of economics not to incorporate the kinds of assumptions about human potential for altruistic behavior that Christian theologians might propose.

For the discipline of sociobiology, this expectation apparently may be promoted to certainty—biologist Jeffrey Schloss illustrates a common complaint with his simple declaration that "sociobiology remains committed to seeing altruism as self-interest by another name" (248). Moreover, the seriousness with which this discipline has worked to explain altruistic behavior in such terms is reflected in E. O. Wilson’s oft-quoted description of altruism as "the central theoretical problem of sociobiology" (Sober and Wilson 18; Kohn 183; Grant 6). The zeal with which self-interest has been molded to solve this problem is impressive. When birds save their flocks from predators by drawing attention to themselves (as they often do, at substantial individual risk), they are not really exhibiting altruistic behavior—they are merely acting to preserve their own genes (via survival of their kin). If a human risks her life to save a drowning stranger (and the "kin selection" explanation seems unconvincing), we have a clear case of so-called "reciprocal altruism." According to this explanation, the rescuer has (again) acted in self-interest, since burnishing her reputation as a rescuer increases the probability she will benefit from someone’s "reciprocal" heroism at some future time when she is in danger. Not surprisingly, "explanations" of this kind have managed to generate some dialogue between the biologists and members of the Christian theological community.

Ian Barbour illustrates this development, expressing a number of misgivings in his book, Religion and Science, regarding the research approach of sociobiologist E. O. Wilson. The following comments are representative of his critiques:

Wilson’s writing has received criticism from several quarters. For example, anthropologists have replied that most systems of human kinship are not organized in accord with coefficients of genetic similarity and that Wilson does not even consider cultural explanations. (81)

Wilson does acknowledge the plasticity of human behavior and the possibility of change. Yet there is no place for real freedom in his analysis. (256)

Throughout is an implicit metaphysics of materialism and occasionally an explicit advocacy of what he calls "scientific materialism." All of his explanations are on one level only—the action of genes. (257)

Similarly, theologian Colin Grant’s Altruism and Christian Ethics repeatedly challenges aspects of the sociobiological research of Richard Dawkins. Some of Grant’s critiques concern methodology, as represented in the following:

No amount of empirical evidence would convince the hardline sociobiologist of the reality of altruism, because their position is not an empirical one. The self-interest vision, with its allowance only for kin and reciprocal altruism, is a comprehensive paradigm that processes all empirical evidence in its own terms. (80)

In other cases, Grant charges Dawkins with logical inconsistency. He notes, for example, Dawkins’ Preface for The Selfish Gene insists that "we are survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes," while his Conclusion surprisingly announces "We have the power to defy the selfish genes of our birth" (97). Although he generally is somewhat more sympathetic in assessing sociobiological work, theologian Stephen Pope also complains (in The Evolution of Altruism and the Ordering of Love) that "[Martin] Buber’s ‘I-Thou’ relations transcend the simple exchange model common to reciprocity theories" (119). On the other hand, theology professor Thomas Hosinski, reviewing a recent lecture by biologist Jeffrey Schloss, has reported scientists may be moving toward acknowledgment that "reductionist approaches to understanding altruism are not sufficient to account for observations"; he concludes "the most recent developments open the possibility of a constructive conversation between science and religion on this topic of altruism."

Progress toward the sort of "constructive conversation" Hosinski suggests must engage issues of theoretical differences distinguishing religious from scientific perspectives on altruism research. The foregoing review of altruism investigations in computer science, economics and sociobiology indicates that they have been characterized by naturalistic, non-theistic, utilitarian and egoistic theoretical assumptions about human agents, assumptions consistently reflected in their computer simulations of social behavior. Philosophers and theologians need comparable simulation tools specifically designed to reflect their own alternative theoretical assumptions as well, if the dialogue is seriously to become constructive. I shall now propose that they be so equipped.

Investigating Altruism: Enabling Experimental Theology for a New Approach

David Hume—celebrated philosophical champion of tough-minded empiricists everywhere—posts the following acknowledgment in his Conclusion to An Enquiry Concerning the Principles of Morals:

It is sufficient for our present purpose, if it be allowed, what surely, without the greatest absurdity cannot be disputed, that there is some benevolence, however small, infused into our bosom; some spark of friendship for human kind; some particle of the dove kneaded into our frame, along with the elements of the wolf and serpent. (227)

Why should contemporary Christian theologians not be supplied with state-of-the-art computer tools capable of adding some representation of the dove to those elements of the wolf and serpent already so widely assumed by scientists in their social simulations investigating altruistic behavior?

Objection could hardly be scientific. Neurobiologist William Newsome, in one of his contributions to the 2001 Science and the Spiritual Quest Boston Conference, has commented eloquently on "Assumptions of Science" that dismiss "the existence of God, the possibility of divine revelation to humanity, any notion of universal grounding for right action (ethics), or any possibility that humanity can participate in a reality that transcends itself" (Newsome 5). Characterizing such assumptions collectively as a "radical materialist assumption," Newsome properly reminds us it is neither "a finding of science" nor "logically necessary to the scientific process"—instead, it is simply "an ideological position that individual academics frequently choose to adopt for their own reasons" (Newsome 5). Indeed, there is absolutely no scientific reason agent-based computer simulation tools should not incorporate representation of theoretical constructs such as divine grace.

Furthermore, objection need not be theological. On the contrary, Philip Clayton and Steven Knapp raise the following pertinent and reasonable question in one of their contributions to a recent religion and science anthology:

After all, what better way to justify the inclusion of Christian theological beliefs in the Western scientific web than to show (if this is indeed possible) that a certain Christian belief does the best job of explaining some set of scientific data? (164)

Could agent-based computer simulation tools incorporating representation of theoretical constructs such as divine grace support more adequate explanations of certain altruistic behaviors than comparable tools restricted to (equally extrascientific) non-theistic theories? Perhaps they could not—but fairness recommends we take scientific objectivity seriously and give them at least an opportunity to do so. Moreover, representatives of the contemporary theological community have issued unmistakable calls for innovative directions in Christian theology that would benefit from provision of exactly such tools. The Revd. Canon Dr. Arthur Peacocke, for example, boldly asserts "We require an open, revisable, exploratory theology in all religions" (Peacocke 5). Again, Sallie McFague describes what she calls "heuristic theology"—a theology "that experiments and tests, that thinks in an as-if fashion, that imagines possibilities that are novel, that dares to think differently" (251). The discussion in Chapter I has shown that thinking "in an as-if fashion" is virtually a hallmark of research with computer simulation tools. They are tools ideal for supporting the types of theological innovation recommended by Peacocke and McFague, which shall be comprehensively identified in the remaining sections of this essay as experimental theology.

Finally, there are no strong technological obstacles to development of agent-based computer simulation tools that would permit theologians to join altruism investigation with modes of expression and experimentation already understood and employed by their scientific colleagues. Ample relevant resources have been identified in Chapters I and II for undertaking a project of this sort. Standard systems engineering practice would now recommend the following steps: definition of functional requirements for the tool, formulation of its general and detail design, and implementation of a "proof of concept" version for testing. Subsequent chapters of this essay will complete the first step, identifying in some detail just what functional capabilities Christian theologians would need in an agent-based computer simulation tool to allow expression of their perspective on the subject of altruistic behavior. A final chapter will illustrate how the tool could be employed to mutual benefit in religion and science dialogue.

 

CHAPTER IV

REQUIREMENTS – SYSTEM CAPABILITIES

 

Introduction

[Text to be supplied here]

CHAPTER V

REQUIREMENTS – AGENT CAPABILITIES

 

Introduction

[Text to be supplied here]

 

 

CHAPTER VI

REQUIREMENTS – VARIABLES

 

Introduction

[Text to be supplied here]

 

CHAPTER VII

REQUIREMENTS – USER INTERFACE

 

Introduction

[Text to be supplied here]

CHAPTER VIII

EXPECTED BENEFITS FOR RELIGION AND SCIENCE

 

Introduction

[Text to be supplied here]

 

Works Consulted

Agents with Adjustable Autonomy: Papers from 1999 AAAI Spring Symposium, David Musliner and Barney Pell, Cochairs. Abstract and Contents of Technical Report SS-99-06, supplied with order forms by The AAAI Press. <https://www.aaai.org/Press/Reports/Symposia/Spring/ss-99-06.html>

"Altruism." Def. The Random House Dictionary of the English Language. The Unabridged Edition. 1971.

Augustine. "On the Grace of Christ." Theological Anthropology. Trans. and ed. J. Patout Burns. Philadelphia: Fortress Press, 1981.

Avouris, N. M. "User Interface Design for DAI Applications: An Overview." Distributed Artificial Intelligence: Theory and Praxis. Ed. Nicholas M. Avouris and Les Gasser. The Netherlands: Kluwer Academic Publishers, 1992. 141-162.

Barbour, Ian G. Religion and Science: Historical and Contemporary Issues. New York: HarperCollins, 1997.

Bazzan, Ana L. C., Rafael H. Bordini, and John A. Campbell. "Agents with Moral Sentiments in an Iterated Prisoner’s Dilemma Exercise." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 4-6.

Castelfranchi, Cristiano, Fiorella de Rosis, and Rino Falcone. "Social Attitudes and Personalities in Agents." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 16-21.

Ceresko, Anthony R. Introduction to the Old Testament: A Liberation Perspective. Maryknoll: Orbis Books, 1992.

Clayton, Philip, and Steven Knapp. "Is Holistic Justification Enough?" Religion and Science: History, Method, Dialogue. Ed. W. Mark Richardson and Wesley J. Wildman. New York: Routledge, 1996. 161-167.

Cobb, John B. Jr., and David Ray Griffin. Process Theology: an Introductory Exposition. Philadelphia: The Westminster Press, 1976.

Computer Simulation of Societies. Dept. of Sociology, University of Surrey: Centre for Research on Social Simulation, 1998. 8 Aug. 2001 <http://alife.ccp14.ac.uk/cress/research/simsoc/simsoc.html>

Crites, R., and A. Barto. "Elevator Group Control Using Multiple Reinforcement Learning Agents." Machine Learning 33, 1998. 235-262.

Damasio, Antonio R. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Orlando: Harcourt, Inc., 1999.

Dautenhahn, Kerstin. "Ants don’t have Friends – Thoughts on Socially Intelligent Agents." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 22-27.

Davies, Susan E. "Combating Racism in Church and Seminary." Ending Racism in the Church. Ed. Susan E. Davies and Sister Paul Teresa Hennessee. Cleveland: United Church Press, 1998.

de Silva, Lynn. The Problem of the Self in Buddhism and Christianity. London: The Macmillan Press Ltd., 1979.

Des Jardins, Joseph R. Environmental Ethics: An Introduction to Environmental Philosophy. Belmont: Wadsworth/Thomson Learning, 2001.

Dryer, D. Christopher. "Ghosts in the Machine: Personalities for Socially Adroit Software Agents." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 31-36.

"EcoBeaker 2.0." BeakerWare (2001). 12 Aug. 2001 <http://www.ecobeaker.com/>

Edmonds, Bruce. "Modelling Socially Intelligent Agents in Organisations." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 37-42.

"Egoism." Def. 1. The Random House Dictionary of the English Language. The Unabridged Edition. 1971.

Funk, Petra, and Jürgen Lind. "What is a Friendly Agent?" Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 46-48.

Funk, Robert W., Roy W. Hoover, and The Jesus Seminar. The Five Gospels: The Search for the Authentic Words of Jesus. New York: HarperCollins Publishers, 1997.

Gaustad, Edwin Scott. A Religious History of America. New York: HarperCollins Publishers, 1990.

Gilbert, Nigel. "Simulation: an emergent perspective." Dept. of Sociology, University of Surrey: Centre for Research on Social Simulation, 1998. 8 Aug. 2001 <http://alife.ccp14.ac.uk/cress/research/simsoc/simsoc.html>

Grant, Colin. Altruism and Christian Ethics. Cambridge: Cambridge University Press, 2001.

Griffin, David Ray. Religion and Scientific Naturalism: Overcoming the Conflicts. Albany: State University of New York Press, 2000.

Gutierrez, Gustavo. A Theology of Liberation: History, Politics, and Salvation. Trans. and Ed. Sister Caridad Inda and John Eagleson. Maryknoll: Orbis Books, 1999.

Haynes, Thomas, and Sandip Sen. "Learning Cases to Resolve Conflicts and Improve Group Behavior." Agent Modeling: Papers from the 1996 AAAI Workshop. Technical Report WS-96-02. Menlo Park: AAAI Press, 1996. 46-52.

Hazlitt, Henry. Chapter 13. The Foundations of Morality. Los Angeles: Nash Publishing, 1972. 1 Oct. 2001 <http://www.hazlitt.org/e-texts/morality/>

Heim, S. Mark. Salvations: Truth and Difference in Religion. Maryknoll: Orbis Books, 1997.

Hogg, L. M., and N. R. Jennings. "Socially Rational Agents." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 61-63.

Hosinski, Thomas E. "The Mystery of Altruism: Jeffrey Schloss on the relationship between evolution and Christian love." Research News & Opportunities in Science and Theology 1.10 (2001): 9.

Hume, David. "An Enquiry Concerning the Principles of Morals." Hume: Selections. Ed. Charles W. Hendel. New York: Charles Scribner’s Sons, 1955. 194-252.

Jennings, Nick. Cooperation in Industrial Multi-Agent Systems. River Edge: World Scientific Publishing Co., 1994.

John of Damascus. "First Apology Against those who Attack the Divine Images." Inquiring after God: Classic and Contemporary Readings. Ed. Ellen T. Charry. Oxford: Blackwell Publishers Ltd., 2000. 285-86.

Kohn, Alfie. The Brighter Side of Human Nature: Altruism and Empathy in Everyday Life. New York: Basic Books, Inc., Publishers, 1990.

Laird, John E., and Michael van Lent. "Human-Level AI’s Killer Application: Interactive Computer Games." AI Magazine Summer, 2001. 15-25.

Mali, Amol Dattatraya. "Social Laws for Agent Modeling." Agent Modeling: Papers from the 1996 AAAI Workshop. Technical Report WS-96-02. Menlo Park: AAAI Press, 1996. 53-60.

McFague, Sallie. "Models of God for an Ecological, Evolutionary Era: God as Mother of the Universe." Physics, Philosophy, and Theology: A Common Quest for Understanding. Ed. Robert John Russell, William R. Stoeger, S. J., and George V. Coyne, S. J. Vatican City State: Vatican Observatory, 1997. 249-271.

Metzler, Theodore. "And the Robot Asked ‘What do you say I am?’: Can Artificial Intelligence Help Theologians and Scientists Understand Free Moral Agency?" The Journal of Faith and Science Exchange IV (2000): 37-48.

Metzler, Theodore, and John Nordyke. "Use of ModSAF in Development of an Automated Training Analysis and Feedback System." Proceedings of 6th Conference on Computer Generated Forces and Behavior Representation. Orlando: Institute for Simulation and Training, 1996. 151-155.

Migliore, Daniel L. Faith Seeking Understanding: An Introduction to Christian Theology. Grand Rapids: William B. Eerdmans Publishing Company, 1991.

Multi-Agent Systems Laboratory. University of Massachusetts at Amherst: Department of Computer Science, 2001. 8 Aug. 2001 <http://mas.cs.umass.edu/index.shtml>

Newsome, William T. "Life of Faith, Life of Science." The Quest for Knowledge, Truth, and Values in Science & Religion: Science and the Spiritual Quest Boston Conference, held at The Memorial Church of Harvard University, October 21-23, 2001. Berkeley: Center for Theology and the Natural Sciences, 2001. Newsome 1-15.

Nottingham, Stephen. Eat Your Genes: How Genetically Modified Food is Entering Our Diet. Rondebosch: University of Cape Town Press, 1998.

Oliner, Samuel P., and Pearl M. Oliner. The Altruistic Personality: Rescuers of Jews in Nazi Europe. New York: The Free Press, 1988.

"Overview of Black and White." pcgame.com. 12 Aug. 2001 <wysiwyg://66/http://www.pcgame.com/title/black_and_white/main.htm>

Peacocke, Arthur. "Science and the Spiritual Quest: the Intersections Today." The Quest for Knowledge, Truth, and Values in Science & Religion: Science and the Spiritual Quest Boston Conference, held at The Memorial Church of Harvard University, October 21-23, 2001. Berkeley: Center for Theology and the Natural Sciences, 2001. Peacocke 1-7.

Polanyi, Michael. The Tacit Dimension. Gloucester: Peter Smith, 1983.

Pope, Stephen J. The Evolution of Altruism and the Ordering of Love. Washington, D. C.: Georgetown University Press, 1994.

Repenning, Alexander, Andri Ioannidou, and John Zola. "AgentSheets: End-User Programmable Simulations." Journal of Artificial Societies and Social Simulation 3.3 (Jun. 2000). 11 Aug. 2001 <http://jasss.soc.surrey.ac.uk/3/3/forum/1.html>

Rescher, Nicholas. Unselfishness: The Role of the Vicarious Affects in Moral Philosophy and Social Theory. Pittsburgh: University of Pittsburgh Press, 1975.

Reynolds, Craig W. "Individual-Based Models: an annotated list of links." Reynolds Engineering & Design (22 Oct. 1999). 12 Aug. 2001 <http://www.red3d.com/cwr/ibm/html>

Rizzo, Paola, Manuela Veloso, Maria Miceli, and Amedeo Cesta. "Personality-Driven Social Behaviors in Believable Agents." Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium. Technical Report FS-97-02. Menlo Park: AAAI Press, 1997. 109-114.

Ruether, Rosemary Radford. Sexism and God-Talk: Toward a Feminist Theology. Boston: Beacon Press, 1993.

Schloss, Jeffrey P. "Evolutionary Accounts of Altruism & the Problem of Goodness by Design." Mere Creation: Science, Faith & Intelligent Design. Ed. William A. Dembski. Downers Grove: InterVarsity Press, 1998. 236-261.

Smith, David. "Black & White." ignpsx (16 Jan. 2001). 12 Aug. 2001 <http://psx.ign.com/previews/15892.html>

Sober, Elliott, and David Sloan Wilson. Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge: Harvard University Press, 1998.

Staller, Alexander, and Paolo Petta. "Introducing Emotions into the Computational Study of Social Norms: A First Evaluation." Journal of Artificial Societies and Social Simulation 4.1 (2001). 10 Aug. 2001 <http://jasss.soc.surrey.ac.uk/4/1/2.html>

Sutton, R., and A. Barto. Reinforcement Learning: An Introduction. Cambridge: MIT Press, 1998.

Tillich, Paul. Morality and Beyond. Louisville: Westminster John Knox Press, 1963.

Waldrop, M. Mitchell. Complexity: the emerging science at the edge of order and chaos. New York: Touchstone, 1992.