Humans& Aims to Pioneer Socially Intelligent AI

With AI assistants becoming commonplace, the focus has shifted from single-user tasks to the complex, messy world of team collaboration. One new startup, humans&, backed by a staggering $480 million seed round and a team of alumni from the world’s top AI labs, is betting that the future isn’t just about smarter chatbots, but about socially intelligent models designed to coordinate human effort. We sat down with our SaaS and Software expert, Vijay Raina, to dissect this ambitious venture. We’ll explore the limitations of current AI in collaborative settings, the novel training methods required to teach an AI social intelligence, and the immense risks of challenging established giants by attempting to build an entirely new “collaboration layer” for the modern workforce.

You’ve noted that current AI assistants aren’t designed for the messy work of collaboration. Can you walk us through a specific scenario, like coordinating a team with competing priorities, and explain step-by-step how a socially intelligent model would facilitate a better outcome than today’s tools?

Absolutely. Imagine a product team where engineering wants to focus on refactoring tech debt for long-term stability, while marketing is pushing for a new feature launch to hit quarterly targets. A standard AI assistant, if you ask it for a plan, might just spit out a generic project timeline that doesn’t resolve the core conflict. It operates on a one-shot, ‘correct answer’ basis. A socially intelligent model, however, would act as a facilitator. It would first access its memory of past projects, understanding that engineering’s concerns are valid based on previous outages, while also recognizing the pressure on the marketing team. Instead of giving an answer, it would start a dialogue, asking questions like, “Marketing, what is the minimum viable version of this feature that would satisfy the Q3 goal? Engineering, what is a realistic timeline to address the most critical tech debt before tackling this new feature?” It would then help mediate a compromise, tracking the decision and ensuring follow-through, becoming the connective tissue for that long-running decision rather than just a glorified calculator.
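To make that facilitation loop concrete, here is a minimal sketch of the pattern Raina describes: a durable record of the decision, one clarifying question per stakeholder, and tracked follow-through. Every class and method name below is our own illustration, not humans&'s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A long-running decision the facilitator tracks to completion."""
    topic: str
    positions: dict[str, str] = field(default_factory=dict)  # stakeholder -> stance
    agreement: str | None = None
    action_items: list[str] = field(default_factory=list)

class Facilitator:
    """Toy facilitator: gathers stances, asks a narrowing question each, records the outcome."""

    def __init__(self, project_memory: list[str]):
        # e.g., notes on past outages that make engineering's concerns credible
        self.memory = project_memory

    def clarifying_question(self, stakeholder: str, stance: str) -> str:
        # A real model would condition on self.memory and team history.
        return f"{stakeholder}, what is the minimum version of '{stance}' that still meets your goal?"

    def mediate(self, decision: Decision) -> Decision:
        for who, stance in decision.positions.items():
            print(self.clarifying_question(who, stance))
        # Placeholder compromise: sequence the priorities instead of picking a winner.
        decision.agreement = ", then ".join(decision.positions.values())
        decision.action_items = [f"{who}: confirm reduced scope" for who in decision.positions]
        return decision

d = Decision(topic="Q3 roadmap",
             positions={"Engineering": "pay down tech debt",
                        "Marketing": "ship the new feature"})
print(Facilitator(["March outage traced to legacy queue"]).mediate(d))
```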

This model is trained for social intelligence using long-horizon and multi-agent reinforcement learning. How does this training approach differ from that of a standard LLM, and how does it enable your AI to understand group dynamics, memory, and motivation over an extended project?

The difference is fundamental. A standard Large Language Model is primarily trained to predict the next word. It’s optimized for two things: how much you immediately like its response and its statistical likelihood of being correct in that single interaction. It’s like a sprinter trained for a 100-meter dash. In contrast, long-horizon and multi-agent reinforcement learning is like training for a marathon, or even coaching a whole team for a season. Long-horizon RL teaches the model to plan, act, and revise its strategy over many steps, rewarding it not for a single good answer but for achieving a complex goal over time. The multi-agent aspect trains it in environments with multiple AIs and humans, forcing it to learn the art of negotiation, influence, and shared context. This is how it develops a memory of you, your team, and the project’s history, allowing it to understand motivations and dynamics that evolve over weeks, not just within a single chat window.
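The distinction between the two objectives can be captured in a few lines. The sketch below is purely illustrative (the episode length, rewards, and discount factor are our assumptions, not details of humans&'s training recipe), but it shows why a long-horizon learner still gets credit for early steps that only pay off at the end of a negotiation.

```python
def single_turn_objective(response_rating: float) -> float:
    """Standard assistant tuning: the reward is immediate and per-interaction."""
    return response_rating

def long_horizon_return(step_rewards: list[float], gamma: float = 0.99) -> float:
    """Long-horizon RL: credit an entire plan/act/revise trajectory.
    Most steps earn nothing; the payoff arrives when the group goal lands."""
    return sum(gamma ** t * r for t, r in enumerate(step_rewards))

# A 20-step negotiation where only the final, mediated agreement is rewarded:
episode = [0.0] * 19 + [1.0]
print(long_horizon_return(episode))  # ~0.83 -- early steps share credit via discounting
```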

You aim to own the entire “collaboration layer” rather than plug into existing apps. What core limitations do you see in adding AI to current platforms like Slack or Google Docs, and how will your product’s interface be fundamentally different to overcome them?

Plugging AI into existing tools is like putting a jet engine on a horse-drawn carriage. Slack is built for ephemeral, real-time messaging, and Google Docs is a digital piece of paper. Their interfaces aren’t designed for an agent that needs to understand long-term context, team dynamics, and individual motivations. Adding an AI ‘bot’ to a Slack channel just creates more noise; it can summarize a conversation, but it can’t truly steer it or remember the nuances from three weeks ago. The humans& approach is to co-evolve the interface with the model. This means the product won’t just be a chat window. It might look more like a dynamic dashboard that visualizes team alignment, tracks the history of key decisions, and proactively surfaces potential conflicts. The interface itself will be a tool for the AI to interact with the group, making the model’s capabilities clear and accessible in a way that a simple plugin just can’t.
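As a rough illustration of what a decision-centric interface could sit on top of, here is a hypothetical data model for durable decision records, with a simple check the agent could use to surface weak alignment proactively. None of these names or thresholds come from humans&.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """Durable history the interface can render, unlike an ephemeral chat log."""
    summary: str
    decided_on: date
    owner: str
    supporters: set[str]
    dissenters: set[str]

def surface_conflicts(records: list[DecisionRecord], quorum: float = 0.75) -> list[str]:
    """Flag past decisions whose alignment is weak enough to revisit proactively."""
    flags = []
    for r in records:
        voters = len(r.supporters) + len(r.dissenters)
        if voters and len(r.supporters) / voters < quorum:
            flags.append(f"Revisit '{r.summary}' (alignment {len(r.supporters)}/{voters})")
    return flags

history = [DecisionRecord("Delay feature launch to Q4", date(2025, 7, 2), owner="PM",
                          supporters={"eng-1", "eng-2"}, dissenters={"mkt-1", "mkt-2"})]
print(surface_conflicts(history))  # ["Revisit 'Delay feature launch to Q4' (alignment 2/4)"]
```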

Established AI labs are already integrating collaboration features, like Gemini in Workspace. What is the primary disadvantage of their approach, and what specific, practical advantage does building a new model from the ground up for coordination offer to an average team?

The primary disadvantage is that these big players are bolting collaboration features onto models fundamentally architected for information retrieval. Gemini is brilliant at answering questions, but its core DNA isn’t social intelligence. It’s an afterthought. For an average team, the practical advantage of a ground-up model is profound. Think about planning a team offsite. Using Gemini in Docs might help you draft an itinerary. A model built for coordination would start by understanding the goal—is it for brainstorming, team bonding, or strategic planning? It would then query team members about their preferences, budget constraints, and availability in a conversational way. It would mediate disagreements, track action items, and remember who was responsible for booking the flights. It’s the difference between a tool that helps you write about collaboration and an active participant that is the collaboration.

Consider the common challenge of getting a team to agree on a new logo. How exactly would your AI act as a facilitator in that process? Please describe the types of questions it would ask to understand individual preferences and guide the group toward a consensus.

This is a perfect example of where a socially intelligent model would shine. Instead of just presenting a poll, it would engage the team like a skilled human facilitator. It would start with open-ended, exploratory questions to understand the underlying values, not just the surface-level preferences. It might ask, “When you think about our company, what are three words that come to mind?” or “What feeling do we want our customers to have when they see our brand?” It would then probe individuals: “Sarah, you mentioned ‘bold.’ Can you show me an example of a logo you feel is bold? John, you said ‘trustworthy.’ What visual elements make a logo feel trustworthy to you?” By asking questions that feel like a friend getting to know you, it avoids the sterile, robotic nature of current bots. It would then synthesize this qualitative data, identify areas of common ground, and propose solutions that aren’t just a mathematical average of everyone’s input, but a true consensus built from shared understanding.

A $480 million seed round provides a significant runway but also highlights the scale of your ambition. Beyond the immense cost of compute, what is the single greatest technical risk you face in developing this new model, and how does your team’s background prepare you for it?

Beyond the astronomical cost of compute, the single greatest technical risk is the data problem for training. You can’t just scrape the internet to teach a model social intelligence and collaboration. That data doesn’t exist in a clean, structured format. The risk lies in creating a novel training methodology and environment where the model can learn these complex, multi-agent, long-horizon skills effectively. It requires building sophisticated simulations and feedback loops involving both humans and AIs. This is where the team’s pedigree becomes their biggest asset. With founders from Anthropic, Meta, OpenAI, xAI, and Google DeepMind, they have firsthand experience from virtually every major effort in scaling foundation models. They have seen what works and, more importantly, what fails when you push models beyond simple text prediction. They possess the rare, collective expertise needed to pioneer the new training paradigms this ambitious goal demands.
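To give a flavor of what such a simulated environment might look like at its absolute simplest, here is a toy two-agent negotiation where the reward depends on the joint outcome rather than on any single reply. Everything here, from the environment to the reward function, is an assumption for illustration only.

```python
class NegotiationEnv:
    """Two simulated agents with conflicting targets must converge on one number.
    The episode reward depends on the joint outcome, not on any single reply."""

    def __init__(self, target_a: float, target_b: float, max_steps: int = 10):
        self.targets = (target_a, target_b)
        self.max_steps = max_steps

    def rollout(self, policy_a, policy_b) -> float:
        offer = 0.0
        for _ in range(self.max_steps):
            offer = policy_b(policy_a(offer))
        # Squared distance from both goals, so the fair midpoint scores best.
        return -((offer - self.targets[0]) ** 2 + (offer - self.targets[1]) ** 2)

# Naive concession policies standing in for trained models:
def concede_a(offer: float) -> float:
    return offer + 0.1 * (8.0 - offer)  # agent A wants 8

def concede_b(offer: float) -> float:
    return offer + 0.1 * (4.0 - offer)  # agent B wants 4

print(NegotiationEnv(8.0, 4.0).rollout(concede_a, concede_b))  # ~ -9.3; optimum is -8.0 at 6.0
```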

What is your forecast for the future of AI-driven collaboration?

I forecast a fundamental shift away from AI as a personal assistant and toward AI as a central, integrated team member. In the next five to ten years, the most effective teams won’t just use AI to automate tasks; they will partner with AI to manage the very fabric of their collaboration. We will see AI facilitators running meetings, mediating disputes, and maintaining organizational memory in a way that prevents knowledge from being lost when an employee leaves. The focus will move from workflow automation to workflow intelligence, where the AI understands the ‘why’ behind the work, not just the ‘what.’ This will make teams more aligned, efficient, and innovative, but it will also require a new set of skills for humans—learning how to collaborate with an intelligent, non-human agent as a peer.
