Design of A DIS Agent, the AISim System:
A progress report
Sakir Kocabas*, Ercan Oztemel**
1. Abstract
An intelligent system, AISim, is being developed by our group at MRC within the framework of the EUCLID RTP 11.3 battlefield simulation project. AISim is being developed to enable a simulated air target (an F-16 plane) to behave intelligently in cooperation with other computer-generated and man-controlled air targets, in the tasks and activities of CAP and Escort missions in defensive and offensive scenarios. The system's tasks include Navigation, Patrol, Escort, BVR and WVR Engagement, Air-to-Air Refuelling, Disengage and Return-to-Base.
2. Introduction
The study of intelligent agents in real-time simulation systems has been one of the most challenging research topics in artificial intelligence (see, e.g., Jones et al., 1994). The primary purpose of such studies is to examine agent behavior in real-time environments and scenarios, and to build more realistic systems for training human operators in certain skills. Recently, extensive research has been carried out on intelligent agents operating in distributed interactive simulation (DIS) environments. DIS environments make it possible to use a number of agents with different goals and behavior patterns in real-time scenarios (see, e.g., Oztemel & Kocabas, 1996; Laird et al., 1995; Tambe et al., 1995). DIS is mainly concerned with the time- and space-coherent synthetic representation of real-world environments and the interactions of operational entities in them.
The synthetic environment is created through real-time exchange of data units between distributed and computationally autonomous simulation applications in the form of simulations, simulators and instrumented equipment interconnected through standard computer communication services. The computational entities can be in one location or distributed geographically. A DIS system has the following characteristics:
- No central computer is used for event scheduling or conflict resolution.
- Autonomous simulation stations are responsible for maintaining the state of one or more simulation elements.
- There is a standard protocol for communicating ground-truth data.
- Receiving stations are responsible for determining what is to be perceived.
- Simulation stations communicate only changes in their state.
- "Dead-reckoning" algorithms are used to reduce overloads in processing communication data.
An intelligent agent consists mainly of three components: perception, cognition and action. Memory, reasoning, learning, understanding, planning, scheduling, and control are some of the basic characteristics of intelligent behavior. An agent equipped with these capabilities can receive information from its environment, organize its knowledge about the environment, evaluate situations, deduce conclusions, solve problems, and generate actions.
The cooperation of DIS agents depends on the kind of tasks and activities they are expected to do, and the environment in which they operate. There may be three different types of tasks: 1) Agents may perform problem solving in a common domain, 2) agents may be working together to improve their individual performance, and 3) agents may be working together to improve the performance of the overall system they are designed for.
In DIS systems the third type of cooperation is important, as it concerns the question of dependency between agents. If an agent needs to communicate with other agents, it has to know the underlying model of these agents. Additionally, there has to be a standard data communication protocol accessible by every entity within the overall system. Some data communication problems are solved by "dead reckoning" algorithms. Such algorithms estimate future situations in the temporary absence of situational data, making the system somewhat fault tolerant with respect to temporary communication failures.
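The two ideas above can be illustrated together: each station extrapolates remote entities by dead reckoning, and a station broadcasts a fresh state update only when its true state has drifted too far from what other stations would extrapolate. A minimal one-dimensional sketch, with all names and threshold values invented for illustration (not taken from any DIS standard):

```python
# Sketch: first-order dead reckoning plus a drift threshold that
# decides when a new state PDU should be broadcast. All names and
# numeric values here are illustrative assumptions.

def dead_reckon(last_pos, last_vel, dt):
    """Extrapolate position from the last transmitted position and
    velocity (first-order dead reckoning)."""
    return last_pos + last_vel * dt

def needs_update(true_pos, last_pos, last_vel, dt, threshold=1.0):
    """True if the dead-reckoned estimate has drifted from the true
    state by more than the threshold, so an update is due."""
    estimate = dead_reckon(last_pos, last_vel, dt)
    return abs(true_pos - estimate) > threshold

# A plane reported position 0.0 with velocity 100.0; after 2 s it is
# actually at 250.0 (it accelerated), so an update is due.
assert needs_update(250.0, 0.0, 100.0, 2.0)
# At 200.5 the estimate (200.0) is still within the threshold.
assert not needs_update(200.5, 0.0, 100.0, 2.0)
```

Between threshold crossings, receiving stations simply keep extrapolating, which is what makes the system tolerant of short communication gaps.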
In a complex environment, the knowledge used by an agent can be incomplete, and the goals of the agents might be conflicting (Jones et al., 1994). If an agent has conflicting goals, a set of heuristics or a classifier can be used to deal with the conflict. However, if different agents have conflicting goals, then there is a need for a negotiator to deal with this problem. The negotiator is an agent which defines the authority of information.
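For the single-agent case, the heuristic approach mentioned above can be as simple as a fixed priority ordering over goals. A minimal sketch, where the goal names and priorities are illustrative assumptions rather than AISim's actual rule set:

```python
# Sketch: resolving a single agent's conflicting goals with a fixed
# priority heuristic. Goal names and priorities are assumptions.

GOAL_PRIORITY = {
    "evade_missile": 3,   # survival first
    "engage_target": 2,
    "hold_patrol":   1,
}

def resolve(goals):
    """Pick the goal with the highest priority; ties are broken by
    name so the choice is deterministic."""
    return max(goals, key=lambda g: (GOAL_PRIORITY.get(g, 0), g))

assert resolve(["hold_patrol", "engage_target"]) == "engage_target"
assert resolve(["engage_target", "evade_missile"]) == "evade_missile"
```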
This paper describes the design of an intelligent agent, AISim, operating in a DIS environment. Our study focuses on the following problems in designing such agents:
- Rationality of agent behavior
- Agent cooperation and coordination
- Resolution of conflicts in agent goals and tasks
- Agent situation and behavior explanations
- Agent reusability.
The design history of AISim goes back to its prototype, RSIM (Kocabas et al., 1995). RSIM was a simple model operating in a 2-D space, with capabilities of learning its rules of behavior and explaining its behavior. AISim is a much more developed version, with the capabilities of detailed situation assessment, action management and behavior explanation. In the following sections, first a summary of the design history of AISim is provided. Then the system is described in terms of its hardware and software structure. Next, AISim's methods and capabilities are discussed in comparison with other related systems. Finally, the paper concludes with a summary of the results.
3. System Development
The following procedure is employed in the development of AISim:
- Domain analysis, to define the activities to be simulated in the application.
- Requirements analysis, to define the system's goals and functions.
- Global design analysis, to ensure that each specified goal is achieved by a set of functions.
- Detailed design, to guide the software engineers to code the system in accordance with the specified requirements.
- Software development, which is the actual code generation process.
- Testing, verification and integration into the DIS system.
Currently, our work has passed the prototype and design stages, and is now in the software development stage, in which AISim has been integrated with the underlying simulation system.
4. System Description
The basic hardware structure of the DIS system on which AISim runs is shown in Figure 1. The system operates networked to the simulation system in a DIS environment: AISim runs on the AI station and controls its agent(s) on a simulation station connected to the same DIS network. The simulation station runs the ITEMS* simulation system. The communication between the workstations is carried out by exchanging standard data units on the network under InterSIM**, a DIS network software package.
        AI Station                Simulation Station
     ---------------             --------------------
     |    AISim    |             |    simulation    |
     |             |             |      system      |
     ---------------             --------------------
     |  interface  |             |    interface     |
     ---------------             --------------------
            |        <- PDUs ->           |
     ------------------------------------------------ ... DIS Network
Figure 1. Hardware structure of the DIS system on which AISim runs.
* ITEMS is a product of CAE Electronics.
** InterSIM is a product of TTS.
As to the software architecture of the system, we have selected a hierarchical approach for the design of AISim, in which the system has four levels of goals:
1) Mission goals
2) Task goals
3) Subtask goals
4) Activity and action goals.
DIS scenarios require the definition of mission goals such as air interception and tactical air support. AISim has been designed for two different mission goals: Combat Air Patrol (CAP), and Escort to bombers. When the system's mission goal is defined as CAP, it is divided into a set of task goals such as navigation, patrol, and BVR combat. These task goals are further divided into a set of subtask goals such as trajectory guidance, weapons management, and evasion. The subtask goals, in turn, are divided into activities such as firing and guiding a missile, performing an evasion maneuver, and turning towards a target. Activities are themselves divided into a set of simple actions such as changing heading, speed and altitude.
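The four-level decomposition above can be held as a nested mapping from missions down to action lists. A minimal sketch using names from Figure 2; only a fragment of the real hierarchy is shown, since the structure rather than the coverage is the point:

```python
# Sketch: the mission -> task -> subtask -> activity -> action
# decomposition as a nested mapping. Only a fragment is populated.

GOAL_TREE = {
    "CAP": {                                  # mission goal
        "BVR Engage": {                       # task goal
            "BVR Attack": {                   # subtask goal
                "Missile Launch": [           # activity goal
                    "Launch Missile",         # simple actions
                    "Guide Missile",
                ],
            },
        },
        "Patrol": {},
    },
}

def actions_for(mission, task, subtask, activity):
    """Walk the tree down to the action list of a given activity."""
    return GOAL_TREE[mission][task][subtask][activity]

assert actions_for("CAP", "BVR Engage", "BVR Attack",
                   "Missile Launch") == ["Launch Missile", "Guide Missile"]
```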
AISim's control structure supports the goal hierarchy described above. The system has two modules: Situation Assessment (SA) and Action Management (AM). The SA module monitors the situational parameters about 10 times a second: it first selects a set of situational parameters, then calculates the situation, and sends a reduced set of situational indicators in the form of signals to the AM module.
The AM module itself consists of a set of operators in a hierarchy. On the top of this hierarchy is the Task Control Operator (TCO), which controls a set of task operators by deciding which task operator is to be activated under the current situation. Once a task operator is activated, this in turn, fires subtask and activity operators and rules. In this way, AISim directs its agent in the scenario in accordance with its assessment of the current situation.
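The activation cascade described above can be sketched as a tree of operators in which activating a task operator fires its subtask and activity operators in turn. The operator names follow Figure 2; the wiring itself is an illustrative assumption, not AISim's actual rule base:

```python
# Sketch: hierarchical operator activation. The TCO activates one task
# operator; activation cascades down through subtask and activity
# operators. Names follow Figure 2; the wiring is illustrative.

class Operator:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def activate(self, trace):
        """Record this operator firing, then cascade to children."""
        trace.append(self.name)
        for child in self.children:
            child.activate(trace)
        return trace

launch = Operator("Missile Launch")
attack = Operator("BVR Attack", [launch])
engage = Operator("BVR Engage", [attack])

# The TCO decides "BVR Engage" suits the situation; the activation
# then cascades down the hierarchy.
assert engage.activate([]) == ["BVR Engage", "BVR Attack", "Missile Launch"]
```

Partitioning control this way means the TCO only ever reasons about which task operator to run, never about individual actions.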
AISim's TCO has the following operators which can become active in a CAP mission: Takeoff, Navigate, Patrol, BVR Engage, WVR Engage, Disengage, Air-to-Air Refuelling, Return to Base, and Land. Each of these operators has a set of subtask operators, which in turn have a set of activity operators; finally, each activity operator has a set of action rules. The task control operator of AISim currently has 23 rules for selecting task operators for CAP missions. The total number of sub-operators in these task operators is 52; these in turn have a small set of action rules and procedures. Figure 2 shows a section of AISim's mission, task, subtask and activity hierarchy. With this hierarchic control structure, AISim supports the following intelligent agent characteristics: situation assessment, action management and explanation.
---------------------------------------------------------
Mission                       CAP | Escort
Task      (under CAP)         Navigate, Patrol, BVR Engage,
                              WVR Engage, Disengage, ...
Subtask   (under BVR Engage)  BVR Approach, BVR Attack,
                              BVR Evade, BVR Escape
Activity  (under BVR Attack)  Maintain Angle of Attack,
                              Check Missile Envelope,
                              Missile Launch, ...
Action    (under Missile      Launch Missile, Perform f-pole,
           Launch)            Guide Missile, ...
---------------------------------------------------------
Figure 2. AISim's hierarchy of operators for
mission tasks, subtasks, activities and actions.
5. Discussion
In this section we discuss AISim and compare it with other related systems in terms of:
- Domain tasks
- System architecture (knowledge organization)
- Intelligent agent features:
  . Situation assessment (perception, cognition)
  . Action management (cognition, action)
  . Robustness
  . Timeliness
  . Flexibility (e.g., reusability)
  . Learning
  . Explanation
- Performance in mission scenarios.
AISim has been tested in controlling an F-16 against ITEMS- and man-controlled MiG-29s and F-16s in various CAP scenarios. Tests of the system in escort scenarios are continuing. In CAP scenarios, the AISim agent (AIT) takes off, navigates to a patrol waypoint in a predefined desired engagement zone (DEZ), and performs patrol in an elliptical orbit towards a given threat direction. When a threat approaches within a certain distance, AISim's TCO passes control to the BVR Engage operator, and this in turn to the BVR Approach sub-operator, so AIT leaves patrol and approaches its target at a certain angle. Within a certain range, the BVR Attack sub-operator takes control of AIT, guiding it into its own missile envelope while securing and maintaining radar lock until a certain range. This sub-operator is also responsible for launching and guiding BVR missiles. Meanwhile, if a radar lock comes from the opponent within a certain range, control passes to the BVR Evade sub-operator, which in turn guides AIT into evasive maneuvers. Chaff releases and radar jamming are automatically taken care of by the simulation system ITEMS. During BVR Attack or BVR Evade, if AIT enters WVR engagement range, then TCO passes control to the WVR Engage operator, which directs AIT in WVR attack, evade and escape maneuvers.
At all times, TCO checks the fuel and missile stocks of AIT. When AIT runs out of BVR and/or WVR missiles, control passes to the Disengage and RTB operators, depending on the tactical situation. When the fuel level of AIT falls below a predefined level while the mission is still on, TCO passes control to the Escape and/or AAR operators, and AIT is directed towards an AAR point where it refuels.
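The range and stores logic narrated above amounts to a small decision function over threat range, missile stocks and fuel. A sketch with invented thresholds (AISim's actual values and rule set are not published here):

```python
# Sketch of TCO's task handover logic: which task operator takes
# control given threat range, missile stocks and fuel. All thresholds
# are invented for illustration.

BVR_RANGE = 40.0   # assumed BVR engagement range, nautical miles
WVR_RANGE = 8.0    # assumed WVR engagement range
FUEL_MIN  = 0.15   # assumed fuel fraction below which AAR is sought

def next_task(threat_range, bvr_missiles, wvr_missiles, fuel):
    if fuel < FUEL_MIN:
        return "Air-to-Air Refuelling"
    if bvr_missiles == 0 and wvr_missiles == 0:
        return "Disengage"
    if threat_range <= WVR_RANGE:
        return "WVR Engage"
    if threat_range <= BVR_RANGE:
        return "BVR Engage"
    return "Patrol"

assert next_task(60.0, 4, 2, 0.9) == "Patrol"
assert next_task(30.0, 4, 2, 0.9) == "BVR Engage"
assert next_task(5.0, 4, 2, 0.9) == "WVR Engage"
assert next_task(30.0, 0, 0, 0.9) == "Disengage"
assert next_task(30.0, 4, 2, 0.05) == "Air-to-Air Refuelling"
```

Because the checks are ordered, survival and resource constraints override engagement opportunities, which matches the narrative: fuel and stores are checked at all times.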
The above is a brief description of AIT's behavior, with a good deal of detail omitted for reasons of space.
The knowledge organization and control structure of AISim are based on the hierarchic homuncular control (HH) architecture (Kocabas, 1991). Unlike the operator sequences of Soar-IFOR, in this architecture AISim's operators are systematically divided into mission, task, subtask and activity operators, as shown in Figure 2. This architecture provides effective search control in real-time behavior. Accordingly, at any moment in its activity, the AISim agent can pass from one task (such as BVR Engage) to another task (such as Disengage).
The number of operators and rules of AISim are small, compared to the variety of tasks and activities performed by its agents in a scenario. There are two reasons for this:
1) AISim's HH control architecture has proved to be effective in partitioning the control of agent activities.
2) Many of the low-level activities, such as navigation to a waypoint and radar lock, are carried out by the ITEMS simulation system.
Like Air-IFOR agents (Laird et al., 1995), AISim agents are isolated from the details of the underlying simulation environment, such as missile and plane dynamics and sensor simulation. However, unlike Air-IFOR agents, AISim controls its agents, created on a simulation station in the DIS environment, from a separate workstation connected to the same environment, using the data protocols of the DIS network software InterSIM. In other words, as opposed to Air-Soar systems, which run in direct communication with their simulation system ModSAF on the same workstation, AISim runs independently on a separate workstation. Its configuration is therefore more general in terms of data communication and control than that of Air-Soar.
As to the intelligent agent features of the system, AISim's SA module reads the set of data on the dynamic and static simulation elements, computes the parameters of the tactical situation from some of these data, and sends the relevant attribute-values to a message list to be read by the system's TCO operator. AISim reads about 60 different types of data (which are grouped in themselves), and sends about 15 types of data to the DIS network. The whole system's clock cycle is 20. AISim's action management operators, as described above, are capable of guiding its agent in different tasks and activities. The current version performs well in 1-v-1 engagements, and has a simple set of prime opponent selection rules to deal with more than one opponent at a time. However, unlike Air-IFOR agents, the system has not yet been developed for 1-v-2 and 2-v-2 air combat scenarios.
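A prime-opponent selection rule of the simple kind mentioned above might prefer the nearest opponent that is closing on the agent, falling back to the nearest overall. This scoring is an illustrative assumption, not AISim's actual rule set:

```python
# Sketch: simple prime-opponent selection for multi-opponent
# situations. The preference order (closing opponents first, then by
# range) is an illustrative assumption.

def prime_opponent(opponents):
    """opponents: list of (name, range, is_closing) tuples. Prefer
    the nearest closing opponent; if none are closing, the nearest."""
    closing = [o for o in opponents if o[2]]
    pool = closing if closing else opponents
    return min(pool, key=lambda o: o[1])[0]

# A farther but closing opponent outranks a nearer receding one.
assert prime_opponent([("mig-1", 30.0, False),
                       ("mig-2", 45.0, True)]) == "mig-2"
# With no one closing, plain range decides.
assert prime_opponent([("mig-1", 30.0, False),
                       ("mig-2", 45.0, False)]) == "mig-1"
```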
AISim tests show that the system is robust, in the sense that it performs reasonably in different 1-v-1 and 1-v-2 engagement scenarios. The system has also passed the timeliness criterion in its current form.
As to the flexibility criterion, the AISim architecture has proved flexible enough to adapt to other missions (e.g., from CAP to Escort missions) simply by adding new task operators and a small set of task control rules in TCO. Unlike Air/Soar, which uses a decision procedure with a rule set to select operators according to the current situation, AISim performs task selection in its task control operator. One advantage of this architecture is that it makes it easier to change the doctrines of the AIT.
We had tested learning methods on our earlier model RSIM (Kocabas et al., 1995), which learns action rules to perform meaningful maneuvers in 1-v-1 engagements. Learning methods have been applied in limited activities, such as learning pure pursuit (Hommertzheim et al., 1991) and certain close combat maneuvers (Crowe, 1990). AISim's architecture allows it to learn task control and activity rules, but the system's search space is too large for learning effective control and action rules. For this reason, we have not yet implemented learning methods in AISim. On the other hand, many military missions and tasks are taught by instruction, and air combat maneuvers are well defined both in tactics and in geometrical paths and trajectories. However, this does not mean that learning is not feasible in such systems, particularly because of the use of new technologies in missiles and planes.
Behavior explanation is an important feature for computer generated agents, as it is useful to know, both for development and training purposes, what the agent has been doing at a particular moment during its activities. Behavior explanations can be in the form of post-mission explanations (Johnson, 1994) or in real time (Kocabas et al., 1995). Like its predecessor RSIM, AISim explains its agent's behavior in real time. The system's knowledge organization, particularly its task-based hierarchy of operators into tasks, subtasks, activities and actions, facilitates the detailed explanation of its agent's behavior in real time. Air-Soar agents also have an explanation capability, but as post-flight explanations (Johnson, 1994).
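Because the operator hierarchy is explicit, a real-time explanation can be read straight off the chain of currently active operators. A minimal sketch; the phrasing template is an assumption, not AISim's actual output format:

```python
# Sketch: generating a real-time behavior explanation from the chain
# of currently active operators (task down to activity). The sentence
# template is an illustrative assumption.

def explain(active):
    """active: operator names ordered from task down to activity."""
    return "Agent is performing " + ", within ".join(reversed(active))

assert explain(["BVR Engage", "BVR Attack", "Missile Launch"]) == \
    "Agent is performing Missile Launch, within BVR Attack, within BVR Engage"
```

No extra bookkeeping is needed: the same structure that drives control also carries the explanation, which is the point made above.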
The same knowledge organization also makes it possible to include descriptions of agent goals and intentions beside simple behavior explanations. Goal-directed explanations can be useful in monitoring agent behavior more closely, particularly the agent's situation assessment capabilities. We are currently in the process of implementing this feature in AISim. Under these considerations, we believe that AISim has a more flexible knowledge organization scheme and control architecture than that of Soar, which provides the basic knowledge organization scheme of the Air-Soar systems.
As opposed to Tac-Air-Soar, AISim can in principle deal with multiple independent goals simultaneously. We are in the process of implementing this feature in the system. AISim can control more than one AI target in a scenario from one station, although we have tried and tested only one so far.
Like the Air-IFOR agents of Air/Soar, AISim provides the following capabilities to AIT: situation assessment, following flight plans, performing patrol in reference to a certain waypoint and opponent direction, prime opponent selection, attack and missile management, evasion and escape, escort behavior and tactics, fuel management, disengagement, and coordinating with other agents in escort tasks. To these capabilities, explanation of its own behavior and interpretation of target behavior must be added.
On the other hand, compared with Air-IFOR agents, AISim agents have a limited range of mission simulations, confined to CAP and Escort. Additionally, the current version of AISim agents has limited capabilities for 1-v-2 air combat.
6. Summary
In this paper we described the design and architecture of an intelligent system, AISim, capable of performing tasks and activities in CAP and Escort missions. We also discussed the system's knowledge organization and control architecture in comparison with other related systems. AISim's architecture supports intelligent agent requirements such as situation assessment, action management, timeliness, flexibility and behavior explanation.
7. References
Crowe, M.X. (1990). "The application of artificial neural systems to the training of air combat decision-making skills". In Proceedings of the 12th ITSC, pp. 302-312.

Hommertzheim, D., Huffman, J., and Sabuncuoglu, I. (1991). "Training an artificial neural network the pure pursuit maneuver". Computers & Operations Research, 18(4), pp. 343-353.

Johnson, W.L. (1994). "Agents that explain their own actions". In Proceedings of the 4th Conference on Computer Generated Forces and Behavioral Representation, May 1994, Orlando, Florida.

Jones, R.M., Laird, J.E., Tambe, M., and Rosenbloom, P.S. (1994). "Generating goals in response to interacting goals". In Proceedings of the 4th Conference on Computer Generated Forces and Behavioral Representation.

Kocabas, S. (1991). "Homuncular learning and rule parallelism: An application to BACON". In Proceedings of the International Conference on Control '91, pp. 950-954.

Kocabas, S., Oztemel, E., Uludag, M., and Koc, N. (1995). "Automated agents that learn and explain their own actions: A progress report". In Proceedings of the 5th Conference on Computer Generated Forces and Behavioral Representation, pp. 63-68.

Laird, J.E., Johnson, W.L., Jones, R.M., Koss, F., Lehman, J.F., Nielsen, P.E., Rosenbloom, P.S., Rubinoff, R., Schwamb, K.B., Tambe, M., Van Dyke, J., van Lent, E., and Wray, R.E. (1995). "Simulated intelligent forces for air: The Soar/IFOR project 1995". In Proceedings of the 5th Conference on Computer Generated Forces and Behavioral Representation, pp. 27-36.

Oztemel, E. and Kocabas, S. (1996). "Design principles for intelligent agents in distributed interactive simulation". In Proceedings of SimTect-96, 25-26 March 1996, pp. 103-106.

Tambe, M., Johnson, W.L., Jones, R.M., Koss, F., Laird, J.E., Rosenbloom, P.S., and Schwamb, K.B. (1995). "Intelligent agents for interactive simulation environments". AI Magazine, Spring 1995, pp. 15-39.
8. Authors' Biographies
Sakir Kocabas is the head of the AI Department at MRC and the project manager for EUCLID RTP 11.3 WP2. Dr. Kocabas has a PhD degree in Information Engineering. His research interests are in the areas of Machine Learning and Discovery.
Ercan Oztemel is a researcher at the AI Department of MRC. Dr. Oztemel has a PhD degree in Artificial Intelligence. His research interests are Real-Time Knowledge Based Systems, Inductive Learning and Neural Networks.
Mahmut Uludag is a researcher at the AI Department of MRC. Mr. Uludag has a Master of Science degree in Mechanical Engineering, and is a PhD student at ITU. His research interests are AI Applications in Real-Time Simulation.
Nazim Koc is a researcher at the AI Department of MRC. He has a Master of Science degree in Symbolic Computation, and is a PhD student at ITU. His research interests are Symbolic Computation, Parallel Logic Programming and Machine Learning.