Smarter Simulation with LLM-Powered AI Agents in VR-Forces
RTDynamics has developed a prototype AI agent based on large language models (LLMs) and integrated it into the VR-Forces simulation environment, a widely used computer-generated forces (CGF) platform for military training, research, and operational analysis.
This prototype demonstrates how LLM-based agents can enhance simulation interactivity: by incorporating LLM-powered AI agents into VR-Forces, users gain more intuitive, natural-language control of simulated entities.
See the Prototype in Action
Watch a demonstration of the prototype here:
How Does It Work?
Each simulated entity in VR-Forces is managed by an AI agent that communicates with a large language model (LLM). When a user command is received, the agent sends the current simulation context (position, altitude, speed, orientation, sensor outputs, loadout, etc.) to the LLM, which then generates a tailored response in real time.
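To make this concrete, here is a minimal sketch of how such a context payload might be assembled before being sent to the LLM. All field names and values are illustrative assumptions, not the actual VR-Forces or RTDynamics API:

```python
import json

def build_context_payload(entity):
    """Collect the entity's current simulation state for the LLM prompt.

    Field names are hypothetical; a real agent would read these values
    from the simulation engine rather than from a plain dict.
    """
    return {
        "position": entity["position"],              # (lat, lon) in degrees
        "altitude_m": entity["altitude_m"],          # barometric altitude
        "speed_mps": entity["speed_mps"],            # true airspeed
        "heading_deg": entity["heading_deg"],        # current heading
        "sensor_contacts": entity["sensor_contacts"],# tracks from onboard sensors
        "loadout": entity["loadout"],                # remaining weapons/stores
    }

# Example entity state (entirely fabricated for illustration)
entity = {
    "position": (36.9, -115.8),
    "altitude_m": 7500.0,
    "speed_mps": 250.0,
    "heading_deg": 90.0,
    "sensor_contacts": [{"id": "bogey-1", "bearing_deg": 45.0, "range_km": 60.0}],
    "loadout": ["medium-range missile", "short-range missile"],
}

# Serialized context that would accompany the user's command in the prompt
prompt_context = json.dumps(build_context_payload(entity))
```

Serializing the state to JSON (or a similar structured text format) lets the LLM reason over concrete numbers when generating its response.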
We have implemented a special task system that supports two modes of agent behavior:
- Sequential Task Execution: For straightforward actions, the LLM returns a simple sequence of tasks or actions. The agent compiles and executes these steps in order; once one completes, the next begins.
- Finite State Machine (FSM) Generation: For more complex behaviors—such as those involving conditional logic or branching—the LLM generates the definition of a finite state machine (FSM). The agent compiles and runs this FSM, enabling more sophisticated, context-dependent behavior.
In both cases, all code or behavioral logic is generated dynamically at runtime based on the current context and the user’s command, allowing for highly flexible and adaptive entity control within the simulation.
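The two modes above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not RTDynamics' implementation: the LLM is assumed to return either an ordered task list or an FSM definition as plain data, which the agent then "compiles" into runnable form:

```python
from dataclasses import dataclass

# Mode 1 (sequential): the LLM returns an ordered list of tasks;
# the agent runs each to completion before starting the next.
def run_sequential(tasks, world):
    for task in tasks:
        task(world)

# Mode 2 (FSM): the LLM returns a state-machine definition as data;
# the agent compiles it into a machine ticked by the simulation loop.
@dataclass
class FSM:
    states: dict        # state name -> action callable (runs each tick)
    transitions: dict   # state name -> list of (condition, next state)
    current: str = "start"

    def tick(self, world):
        self.states[self.current](world)
        for cond, nxt in self.transitions.get(self.current, []):
            if cond(world):
                self.current = nxt
                break

# Toy behavior: patrol until a contact closes within 40 km, then engage.
world = {"range_km": 55.0, "log": []}

fsm = FSM(
    states={
        "start":  lambda w: w["log"].append("patrol"),
        "engage": lambda w: w["log"].append("engage"),
    },
    transitions={
        "start": [(lambda w: w["range_km"] < 40.0, "engage")],
    },
)

for _ in range(4):
    fsm.tick(world)
    world["range_km"] -= 10.0   # contact closing each tick
```

Representing the FSM as data (states plus guarded transitions) is what makes runtime generation practical: the LLM only has to emit a structured description, which the agent validates and executes, rather than arbitrary executable code.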
Technology Stack
The prototype brings together several of RTDynamics’ high-fidelity simulation modules:
- FixedWingLib CGF – Physics-based flight dynamics model with a virtual pilot
- CML – Air combat maneuver logic (CAP, BVR engagement, etc.)
- EWAWS – Realistic sensors, weapons, and countermeasures simulation
All these are seamlessly integrated into the VR-Forces simulation platform through the RTDynamics plugin.
Why AI Agents in Simulation?
AI agents can make interactions with simulation entities far more intuitive by letting users issue complex commands, including multiple tasks and conditions, in everyday language. Users also receive feedback about current decisions and system states in clear, human-readable form. Because the agents understand natural language, much of the software's underlying complexity stays hidden, so users can focus on their objectives rather than technical details.
This approach offers several key advantages. Creating and running training scenarios becomes much simpler, and users such as instructors and operators can be trained more efficiently and quickly. Additionally, because giving commands during simulations is easier and more flexible, each user can effectively control a larger number of entities within the simulation.
See also Jim Kogler's Post on LinkedIn.
Interested in AI topics? Join the RTDynamics AI Newsletter here