A Sandia-led team, in partnership with Idaho National Laboratory, recently took a significant step toward secure and efficient inter-organizational collaboration using advanced generative AI technologies.
Large Language Models (LLMs) are artificial intelligence systems trained on massive datasets to understand and generate human language. LLMs use patterns and context learned from those datasets to perform tasks such as summarization, translation, and generating novel content. However, using these models for communication between different organizations can be tricky, especially when it comes to privacy. Organizations often seek to share data and leverage insights from partners to enhance decision-making, but sensitive information embedded in otherwise valuable documents can impede that collaboration.
The Sandia-led paper introduces a novel decentralized-inference meta-agent chatbot that leverages privacy-aware, Retrieval-Augmented Generation (RAG)-enabled LLMs for collaborative AI communication across organizations. The platform is built on Microsoft's AutoGen framework and allows the LLM agents to iteratively refine their own responses, making them more accurate and relevant.
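The core RAG idea named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a toy keyword-overlap retriever in place of a real embedding model, and a hypothetical corpus of grid-related snippets.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# Assumption: a toy word-overlap retriever stands in for a real
# embedding-based vector search; the corpus below is hypothetical.

def score(query: str, doc: str) -> float:
    """Jaccard word overlap between query and document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the LLM grounds its answer in it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Substation maintenance is scheduled quarterly.",
    "Grid load peaks in late afternoon during summer.",
    "The cafeteria menu rotates weekly.",
]
print(build_prompt("When does grid load peak?", corpus))
```

In a real RAG pipeline the retriever searches an organization's document store and the augmented prompt is sent to the LLM; here the final LLM call is omitted to keep the sketch self-contained.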
To protect sensitive information without disrupting communication between organizations, the tool combines error-reduction methods with a privacy-focused approach that generates synthetic documents: versions that exclude sensitive information while preserving valuable non-sensitive content. Each organization can specify which attributes in its data should be included or excluded when synthetic documents are generated, in effect defining its own security protocol for data sharing.
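The attribute include/exclude policy described above can be illustrated with a short sketch. This is not Sandia's implementation; the field names and the policy sets are hypothetical, chosen only to show how a record is reduced to its shareable attributes.

```python
# Illustrative sketch of attribute-based filtering for synthetic documents.
# Assumption: the policy sets and record fields below are hypothetical,
# not drawn from the actual system described in the article.

SHAREABLE = {"site", "asset_type", "outage_duration_hours"}  # org-defined allow-list

def synthesize(record: dict) -> dict:
    """Build a synthetic record containing only policy-approved attributes."""
    return {k: v for k, v in record.items() if k in SHAREABLE}

record = {
    "site": "Station A",
    "asset_type": "transformer",
    "outage_duration_hours": 3,
    "operator_name": "J. Doe",   # sensitive: withheld from partners
    "ip_address": "10.0.0.5",    # sensitive: withheld from partners
}
print(synthesize(record))
# {'site': 'Station A', 'asset_type': 'transformer', 'outage_duration_hours': 3}
```

In the described platform, a generative model would additionally rewrite the retained content into a fluent synthetic document rather than emitting raw fields, but the policy step (deciding which attributes may leave the organization) is the part this sketch captures.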
Initial tests show that the platform keeps conversations relevant while maintaining exceptionally high privacy standards. Overall, this work is an important step toward secure and efficient AI-mediated communication between organizations.
For additional information, visit our AI for the grid webpage. A detailed paper on these findings is set for publication in mid-November.
September 30, 2025