Beyond Tech: Soft Skills Will Define AI Success in the DoD and IC

In the rush to harness the power of artificial intelligence and data, much of the focus has been on acquiring the right technical capabilities—building machine learning models, refining algorithms and integrating vast data sets. But here’s the reality: the best AI and data solutions won’t mean much if the workforce lacks the soft skills to interpret, communicate and act on these insights. 

For the U.S. Department of Defense and Intelligence Community, AI is not just about automating tasks—it’s about augmenting human decision making. And that requires a workforce that can think critically, collaborate effectively and navigate uncertainty with agility. Let’s break down the essential soft skills needed to truly leverage AI and data in high-stakes national security environments. 

Critical Thinking: Questioning the Algorithm 

AI systems can crunch numbers at scale, but they don’t think critically. Analysts and decision makers must be able to assess AI-driven insights, ask the right questions, and spot potential biases or errors.

  • Why it matters. AI models can reinforce biases present in their training data. Without critical thinking, the DoD and IC risk blindly trusting flawed outputs, leading to poor decisions.
  • What it looks like in action. A military analyst reviewing an AI-generated threat assessment should be able to question anomalies, seek alternative data sources and challenge assumptions rather than accepting the AI’s conclusions at face value.

Creative Thinking: Beyond the Obvious Insights 

While AI excels at pattern recognition, it doesn’t innovate or think “outside the box.” Creativity is key to using AI-generated insights in unexpected ways and finding solutions beyond conventional approaches.

  • Why it matters. In the IC, adversaries adapt quickly. Relying solely on past patterns identified by AI could mean missing emerging threats.
  • What it looks like in action. Cybersecurity analysts might use AI to detect attack patterns but need creative thinking to anticipate novel tactics that AI hasn’t yet seen.

Effective Communication: Translating AI into Actionable Intelligence

It’s not enough for AI to generate insights; leaders and operators must understand them to act decisively. Therefore, communicating AI-driven insights in clear, concise and meaningful ways is critical.

  • Why it matters. Misinterpreted AI outputs can lead to operational failures. A complex machine learning model is useless if its insights are not communicated to humans effectively.
  • What it looks like in action. An intelligence officer briefing a commander on AI-driven battlefield risk assessments must be able to explain the findings in simple, actionable terms—without technical jargon that muddies the message.

Collaboration and Cross-Disciplinary Teamwork

AI-driven solutions often require input from data scientists, intelligence analysts, policymakers, military leaders and others. Collaboration ensures that AI outputs are interpreted and applied effectively.

  • Why it matters. AI solutions built in silos often fail to address operational needs. Effective teams must bridge technical expertise and mission experience.
  • What it looks like in action. A data scientist working on predictive maintenance for military aircraft must collaborate with logisticians and mechanics to ensure the AI model aligns with real-world maintenance cycles.

Adaptability and Resilience in an AI-Augmented Battlefield

AI is evolving rapidly, and so are the challenges it presents. The ability to adapt to new tools, workflows and AI-driven insights is essential for staying ahead of adversaries.

  • Why it matters. The DoD and IC cannot afford a rigid workforce that struggles to integrate AI advancements into operations. Adaptability ensures mission effectiveness.
  • What it looks like in action. An intelligence analyst who has spent years relying on manual geospatial analysis must be open to learning how AI-driven geospatial tools can enhance their work rather than resisting change.

Ethical Decision Making: The Human Oversight AI Needs

AI can make recommendations, but it cannot make ethical decisions. Human operators must consider the ethical and legal implications of AI-driven actions, particularly in warfare and intelligence operations.

  • Why it matters. AI applications in defense—from autonomous weapons to surveillance—raise complex ethical concerns. Human judgment must remain at the center.
  • What it looks like in action. A drone operator using AI-assisted targeting must ensure compliance with rules of engagement and international law, recognizing when human intervention is necessary.

The Bottom Line

AI and data solutions are powerful force multipliers, but they do not replace the need for human expertise. The DoD and IC must prioritize training and developing a workforce that not only understands AI but also possesses the soft skills needed to wield it responsibly and effectively.

In an era where AI capabilities will define military and intelligence superiority, the real advantage won’t just come from better algorithms—it will come from better people using them.

How is your organization preparing its workforce for an AI-driven future? Let’s start the conversation.

Dr. J. Keith Dunbar
Founder and Chief Executive Officer
FedLearn