Smarter Formation Control with LLM-Powered AI Agents in VR-Forces
RTDynamics has taken its prototype integration of large language model (LLM)-based AI agents in the VR-Forces simulation environment to the next level. In this latest demonstration, our AI agents go beyond understanding and executing natural language commands for individual entities — they can now coordinate and control entire formations of aircraft.
From Single Entities to Coordinated Formations
Previously, each simulation entity was managed by its own AI agent, complete with an individual LLM connection and context. This enabled intuitive tasking of individual aircraft via natural language. Now, these agents can act in concert, allowing users to direct complex, formation-level maneuvers with a single prompt.
Each agent remains aware of its own state (flight dynamics, sensor outputs, weapons loadout) while also maintaining shared context at the formation level. Importantly, agents retain conversational history: once enemies or flight task parameters have been specified, they do not need to be repeated in subsequent instructions. This persistence makes interactions faster, more natural, and closer to real-world mission control dynamics.
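The split between per-agent state and shared formation context can be pictured with a short sketch. This is purely illustrative: names such as `AgentState` and `FormationContext` are hypothetical and are not part of the VR-Forces or RTDynamics APIs.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Per-entity state visible only to this agent's own LLM context."""
    callsign: str
    altitude_ft: float
    heading_deg: float
    weapons: list[str] = field(default_factory=list)

@dataclass
class FormationContext:
    """Shared context: every agent in the formation reads and writes here."""
    designated_targets: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)  # conversational memory

    def remember(self, instruction: str) -> None:
        # Instructions persist, so later prompts need not repeat them.
        self.history.append(instruction)

# Each agent keeps its own state but points at the same shared context.
shared = FormationContext()
lead = AgentState("Viper 1", 30_000, 80, ["AIM-120"])
wing = AgentState("Viper 2", 30_000, 80, ["AIM-120"])

shared.remember("engage bandits bearing 080")
# Both agents now see the instruction without it being restated to each one.
```

The key design point is that the shared context is referenced, not copied, so an instruction given once is immediately visible to the whole formation.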
Why Formation-Level AI Matters
Formation-level engagements are difficult to script due to their highly dynamic nature. Traditionally, instructors and operators must manage a patchwork of user interface dialogs to assign tasks such as:
- Combat Air Patrol (CAP)
- Cranking maneuvers
- Commit and intercept sequences
- Target sorting
- Sensor management
- Weapon launches
Each task requires careful parameter selection through different dialog types, increasing complexity.
By contrast, our LLM-powered AI agents unify all tasking into a single, natural-language interface. Instead of juggling multiple UIs, operators can simply issue instructions through the prompt. Because the agents share entity and formation-level context, they automatically remember recent activities such as CAP assignments or previously designated targets. This dramatically reduces repetitive input and streamlines tactical decision-making.
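One common way to realize such a unified interface (shown here as a generic pattern, not necessarily RTDynamics' implementation) is LLM function calling: each legacy tasking dialog becomes a callable task, and the model's structured output is dispatched to it. All task names and parameters below are hypothetical.

```python
def combat_air_patrol(waypoint: str, threat_axis_deg: int, altitude_ft: int) -> str:
    # Stand-in for a CGF CAP tasking call.
    return f"CAP at {waypoint}, axis {threat_axis_deg:03d}, {altitude_ft} ft"

def commit(target: str) -> str:
    # Stand-in for a commit/intercept tasking call.
    return f"committing on {target}"

# Registry mapping tool names (as exposed to the LLM) to task functions.
TASKS = {"combat_air_patrol": combat_air_patrol, "commit": commit}

def dispatch(tool_call: dict) -> str:
    """Route a structured tool call, as an LLM would emit it, to a task."""
    return TASKS[tool_call["name"]](**tool_call["arguments"])

result = dispatch({
    "name": "combat_air_patrol",
    "arguments": {"waypoint": "A", "threat_axis_deg": 80, "altitude_ft": 30_000},
})
```

With this pattern, every parameter dialog collapses into one prompt-to-tool-call pipeline, which is what lets a single text interface replace many UI dialogs.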
For example, suppose a formation is flying a Combat Air Patrol (CAP) at waypoint A, oriented toward threat direction 080 degrees and maintaining an altitude of 30,000 ft. If this formation engages incoming bandits, the system remembers the CAP parameters in its shared context. After the fight, the user can simply issue the command “resume CAP”, and the formation will automatically return to its previous CAP configuration without requiring the user to restate the waypoint, threat direction, or altitude.
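The "resume CAP" behavior boils down to persisting the last CAP parameters in the shared context and replaying them on request. The sketch below illustrates this idea with hypothetical names; it is not the product's actual code.

```python
class FormationMemory:
    """Minimal shared memory that lets 'resume CAP' work without arguments."""

    def __init__(self):
        self.last_cap = None

    def start_cap(self, waypoint: str, threat_axis_deg: int, altitude_ft: int) -> str:
        # Store the parameters so a later "resume CAP" needs no arguments.
        self.last_cap = dict(waypoint=waypoint,
                             threat_axis_deg=threat_axis_deg,
                             altitude_ft=altitude_ft)
        return f"CAP established at {waypoint}"

    def resume_cap(self) -> str:
        if self.last_cap is None:
            raise ValueError("no prior CAP to resume")
        p = self.last_cap
        return (f"resuming CAP at {p['waypoint']}, "
                f"axis {p['threat_axis_deg']:03d}, {p['altitude_ft']} ft")

mem = FormationMemory()
mem.start_cap("A", 80, 30_000)
# ... engagement with the bandits happens here ...
status = mem.resume_cap()  # -> "resuming CAP at A, axis 080, 30000 ft"
```

Because the parameters live in formation-level memory rather than in a one-shot dialog, the engagement in between does not erase them.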
Technology Under the Hood
The prototype integrates RTDynamics’ high-fidelity simulation modules into VR-Forces, a widely adopted CGF environment for military training, research, and operational analysis:
- FixedWingLib CGF – Physics-based flight dynamics with a virtual pilot
- CML Add-on – Air combat maneuver logic (CAP, BFM) for BVR/WVR engagements
- Formation Add-on – Coordinated formation flight and formation-level tasking
- EWAWS – Physics-based sensors, weapons, and countermeasures
Together, these modules create a rich simulation environment where AI-driven formations behave in smarter, more realistic ways.
Watch the Demonstration
Our latest video showcases how these technologies come together in action. The scenario, which runs in VR-Forces, demonstrates a tactical engagement between formations: a two-ship enemy formation, controlled by the autonomous agents of FixedWingLib CGF's CML add-on, enters the area of responsibility of a four-ship formation under human operator control. Using natural language prompts, the operator directs the friendly formation to execute a pincer maneuver to engage the bandits.
The demonstration highlights:
- Formation-level control through a single natural language interface
- Context-aware decision-making, where agents remember prior instructions
- Dynamic tactical execution that would be difficult or impossible to script manually
Watch the video to experience how natural language commands, advanced AI behavior, and high-fidelity simulation merge to create the next step in human-AI teaming.
What's Next
The current prototype demonstrates how natural language can simplify and enhance control of both individual entities and formations in VR-Forces. Building on this foundation, RTDynamics is exploring new extensions to make human-AI teaming even more natural and immersive.
One key area of development is speech recognition for voice control. Future enhancements to the operator interface will incorporate voice input in the style of radio communications, making it easier to issue commands naturally while selecting recipients and designating targets. This approach mirrors real-world tactical communication and further reduces operator workload.
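A radio-style command has a fairly regular shape (recipient first, then the order), which makes the recognized transcript easy to route to the right agents. The sketch below is a hypothetical illustration of that parsing step; the callsigns and grammar are invented for the example and do not reflect the planned interface.

```python
import re

# Hypothetical set of friendly flight callsigns known to the interface.
CALLSIGNS = {"viper", "eagle"}

def parse_radio_call(transcript: str):
    """Split e.g. 'Viper flight, commit on lead bandit' into (recipient, order).

    Returns None when the transcript does not start with a known callsign.
    """
    match = re.match(r"\s*(\w+)(?:\s+flight)?\s*,\s*(.+)", transcript, re.I)
    if not match or match.group(1).lower() not in CALLSIGNS:
        return None
    return match.group(1).title(), match.group(2).strip()

parsed = parse_radio_call("Viper flight, commit on lead bandit")
```

Addressing the recipient up front, as in real radio procedure, doubles as entity selection: the parser picks the formation before the order ever reaches the LLM.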
Interested in AI topics? Join the RTDynamics AI Newsletter here