Smarter Communication with AI Agents – Voice-Controlled Command & Control in VR-Forces
In our previous work, we focused on integrating Large Language Models (LLMs) into VR-Forces to make simulation entities smarter and more autonomous. We explored how LLM-powered agents could interpret human intent, generate Lua code, and execute meaningful actions inside complex simulation environments. While these early results were promising, one challenge remained: the user interface.
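To make that recap concrete, the core loop looked roughly like the Python sketch below: a plain-English order goes to an LLM, which returns a Lua snippet for the simulation to execute. The model name, prompt, and execute_lua_in_vrforces hook are simplified placeholders for illustration, not the production VR-Forces integration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You translate military-style orders into Lua code for a simulation. "
    "Respond with Lua code only, no explanations."
)

def command_to_lua(command: str) -> str:
    """Ask the LLM to turn a plain-English order into a Lua snippet."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": command},
        ],
    )
    return response.choices[0].message.content

def execute_lua_in_vrforces(lua_code: str) -> None:
    """Placeholder hook: hand the generated Lua to the simulation backend."""
    print("Would execute:\n" + lua_code)

lua = command_to_lua("Move Tank Platoon 1 to checkpoint Alpha.")
execute_lua_in_vrforces(lua)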
Each natural-language command still required multiple clicks through menus and windows, followed by manual text entry. Communicating with AI agents was powerful, but not effortless.
Our latest update changes that.
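Conceptually, the new workflow collapses all of that into a single spoken order. Here is a minimal sketch, using the SpeechRecognition Python library purely as a stand-in for whichever speech-to-text engine is actually used:

import speech_recognition as sr

def listen_for_command() -> str:
    """Capture one utterance from the microphone and return its transcript."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
        audio = recognizer.listen(source)
    # Google's free web recognizer is used here only for illustration.
    return recognizer.recognize_google(audio)

command = listen_for_command()
print("Heard:", command)
# The transcript would then feed the same LLM-to-Lua pipeline sketched above.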