Vijay Raina is a seasoned enterprise SaaS architect who’s spent years building systems that have to work under pressure and at scale. In this conversation with Paul Lainez, he unpacks how Glīd’s blend of hardware and software turned a notoriously brittle handoff—moving cargo from road to rail—into a stage-ready, competition-winning demo. He reflects on the crucible of Startup Battlefield 2025, the near-simultaneous launch of three products, and the company’s human-centered operating cadence that balances intense execution with mindfulness. Along the way, he digs into pilot design, hiring for mission fit, and the craft of telling a crisp product story that resonates with operators and investors alike.
You won Startup Battlefield 2025, beating 200 startups. What was the single moment onstage that changed the outcome, and why? Share specific metrics from the demo, a backstage anecdote, and one decision you’d repeat step-by-step if you had to compete again.
The inflection came when we stitched the entire road-to-rail flow into a single, calm narrative—no jump cuts, no hedging—just a clean pass from intake to scheduling to execution. The crowd felt the relief baked into the workflow; you could almost hear the room exhale when that handoff finally looked simple. Backstage, we’d agreed on one rule: if anything went sideways, we’d pause, breathe, and keep the camera on the UI so the audience saw state, not panic. That pact mattered when a screen refresh lagged for a beat; we let the system catch up and moved on without calling attention to it. The step-by-step decision I’d repeat is the rehearsal order: practice the full flow, then rehearse failure handling, then rehearse the opening line that earns attention in the first ten seconds. We beat 200 other companies because we showed restraint—no feature parade—just the essential move shippers live and die by, done right.
You first met logistics pain while loading tanks and Bradley fighting vehicles onto rail. Can you recount a concrete road-to-rail handoff that broke down, the exact steps that failed, and the time-and-cost impact? Then walk us through how Glīd’s system fixes each step.
Picture a convoy rolling up to a railhead at dusk, paperwork split between clipboards and a USB stick, and a radio channel packed with overlapping instructions. The breakdown started with mismatched cargo identifiers, then a missed sequencing call for the first two flatcars, and finally a safety hold because the tie-down plan didn’t match the actual loadout. Nobody knew which truth to trust—paper form, spreadsheet, or the memory of the last shift lead. With Glīd, the intake normalizes IDs at the edge, the planner gives a single source of sequencing, and the safety checklist is generated from the real load profile, not a generic template. The same system issues a clear, step-ordered playbook and records completion, so the next team inherits state, not chaos. The impact is less about shaving seconds and more about removing the cascade of small stalls that multiply; when the chain is unbroken, everything feels quieter and moves predictably.
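The ideas Raina describes, normalizing identifiers at the edge and generating the safety checklist from the actual load rather than a generic template, can be sketched in a few lines. This is an illustrative toy (the function names and thresholds are hypothetical, not Glīd’s actual code):

```python
import re

def normalize_cargo_id(raw: str) -> str:
    """Collapse formatting differences (case, separators, whitespace) so the
    same cargo ID matches across paper form, spreadsheet, and radio call."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def checklist_from_load(load: dict) -> list[str]:
    """Derive safety steps from the real load profile instead of a
    one-size-fits-all template."""
    steps = [f"Verify ID {normalize_cargo_id(load['id'])} against manifest"]
    if load["weight_tons"] > 30:  # hypothetical threshold for illustration
        steps.append("Use heavy-duty tie-downs (load exceeds 30 t)")
    if load.get("tracked_vehicle"):
        steps.append("Chock tracks and confirm tie-down plan matches loadout")
    steps.append("Record completion with timestamp and operator name")
    return steps

# Two spellings of the same ID collapse to one truth.
assert normalize_cargo_id("m1a2-0042") == normalize_cargo_id("M1A2 0042")
```

The point isn’t the code itself but the property it enforces: there is exactly one canonical identifier and one checklist, both derived from the same load record, so no shift lead has to pick which truth to trust.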
You launched three products nearly simultaneously before Disrupt. What were they, in what order did you harden them, and how did you stage testing? Share bug counts, go/no-go gates, and a story of the last critical fix before the live demo.
We brought a planning module, an execution module at the rail interface, and a safety and compliance layer to life almost in parallel. We hardened them in descending order of risk: first safety, then the execution edge, then planning and orchestration. Testing started with dry runs against recorded scenarios, then shadow mode alongside existing workflows, and finally live-fire rehearsals under show conditions. Our go/no-go gates were simple: does it represent the ground truth, does it recover politely, and does it leave an audit trail a human can read? I won’t throw out bug counts, but the texture of the work changed the week before Disrupt—fewer logic errors, more fit-and-finish. The last critical fix was a race on state reconciliation between the planner and the edge device; we rebuilt the handshake so the UI waited for a confirmed commit instead of optimistically rendering. It felt slow for a heartbeat, but it never lied to the operator, and that honesty is what we showcased.
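The reconciliation fix Raina mentions, rendering only confirmed state instead of optimistically showing a pending change, is a general pattern. A minimal sketch (hypothetical class and field names, not the actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class PlannerState:
    """The UI reads only committed state; pending changes stay invisible
    until the edge device acknowledges them."""
    committed: dict = field(default_factory=dict)
    pending: dict = field(default_factory=dict)

    def propose(self, key: str, value: str) -> None:
        # Stage the change; nothing is shown to the operator yet.
        self.pending[key] = value

    def confirm(self, key: str) -> None:
        # Commit only after the edge device acknowledges, so the UI
        # never renders state that might be rolled back.
        self.committed[key] = self.pending.pop(key)

    def view(self) -> dict:
        # The operator-facing view: confirmed facts only.
        return dict(self.committed)

state = PlannerState()
state.propose("flatcar_1", "sequenced")
assert state.view() == {}  # no optimistic rendering
state.confirm("flatcar_1")
assert state.view() == {"flatcar_1": "sequenced"}
```

The trade-off is exactly the one he names: the screen feels a beat slower, but it can never show the operator something the system hasn’t actually done.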
You said, “making sure the software works” was crazy without an army of engineers. How did you resource the sprint, triage issues, and balance hardware dependencies? Give your burn rate, velocity targets, and one war-room ritual that kept the team on track.
Without a giant bench, we leaned on very tight ownership—every module had a responsible engineer and a counterpart who could pinch-hit. Triage was a rolling standup anchored to what the operator sees first; if a bug didn’t change an operator’s decision in the next hour, it moved down the list. Hardware dependencies were treated like weather: predictable enough to plan around if you respect them, punishing if you don’t. Instead of chasing burn and velocity as vanity metrics, we used a single scoreboard: can a first-time operator complete the handoff without asking for help. The war-room ritual was a five-minute breathing reset before each rehearsal—sit, settle, and name the one thing that must work in the next run. It sounds soft, but it kept us decisive when we needed to cut scope to protect reliability.
On the Disrupt stage, what did the live demo actually do end-to-end—inputs, processing, and outputs? Walk us through the user path, the core data flows, and the fail-safes. Include latency numbers, error budgets, and how you rehearsed under show conditions.
The demo began with intake: we ingested a load manifest, normalized identifiers, and validated required safety attributes. Processing stitched that to rail asset availability and generated a sequenced plan with explicit checkpoints. The user path was straightforward: review, accept, and execute, with the system emitting a stepwise checklist and updating status as each action completed. Our fail-safes were visible: when the system lacked confidence, it slowed the UI, surfaced the uncertainty, and offered a safe default rather than faking certainty. We didn’t trumpet latency or an error budget onstage—we weren’t there to sell speed at all costs—but we did rehearse under the same network constraints and scripted a graceful fallback if connectivity dipped. That practice gave the demo a steady cadence; the software felt like a patient foreman, not a jittery assistant.
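The fail-safe behavior described above, slowing down and offering a safe default when confidence drops, reduces to a small decision rule. A hedged sketch with an assumed threshold (the real system’s logic is surely richer):

```python
def next_action(confidence: float, proposed: str, safe_default: str) -> dict:
    """Below a confidence threshold, surface the uncertainty and offer
    the safe default rather than faking certainty."""
    THRESHOLD = 0.9  # illustrative value, not a published spec
    if confidence >= THRESHOLD:
        return {"action": proposed, "note": None}
    return {
        "action": safe_default,
        "note": (f"Low confidence ({confidence:.0%}); "
                 f"defaulting to {safe_default!r}. Operator review recommended."),
    }

assert next_action(0.97, "load flatcar 3", "hold")["action"] == "load flatcar 3"
assert next_action(0.60, "load flatcar 3", "hold")["action"] == "hold"
```

What made this demo-worthy is that the uncertainty is part of the output, not hidden behind a spinner: the operator always sees why the system chose the conservative path.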
You’re piloting with Great Plains Industrial Park. What are the pilot’s goals, sites, and success criteria? Share baseline KPIs, projected improvements, and a timeline. Then describe the first day on the ground—who did what, in what order, and what surprised you.
The pilot is designed to prove that a single, unified handoff can be executed consistently in a real yard. The goals are crisp: a reliable plan, clean execution, and a verifiable safety record, all captured in one system. We set success criteria around predictability and incident avoidance rather than flash, and we’re aligning timelines with their operational rhythm instead of forcing a showroom schedule. On day one, we started with a walk of the process, mapped actual radio calls to our events, and had the local team run their normal routine while we shadowed in the system. Our crew handled intake and planning, their operators executed, and we observed where muscle memory diverged from the “ideal” flow. The surprise was how quickly folks adopted the checklist once they saw it mirrored the way they already talk under pressure; it felt like we’d printed their own playbook back to them.
Glīder is about to hit the market. What’s the exact problem it tackles, and how does it integrate with your other products? Detail the technical stack, deployment steps, early customer feedback, and one feature you cut—and why.
Glīder tackles the invisible friction in moving heavy cargo from road to rail—the moment where responsibility changes hands and uncertainty creeps in. It snaps into our planning and execution layers, acting as the connective tissue that keeps context intact as the cargo moves. The stack leans on a modern web front end, a services layer that speaks clean APIs, and a hardened edge component where the work meets steel. Deployment is pragmatic: provision the environment, integrate identifiers and asset data, mirror the current workflow, then flip into guided mode once the team is comfortable. Early feedback is that it’s not flashy—and that’s a compliment; it feels like an experienced coordinator who remembers every detail. We cut a flashy visualization that looked great in a boardroom but added nothing on the ground; clarity beat spectacle.
You’ve called hiring an “organic process” with a vibe check and résumé review. How do you run that conversation, test for mission fit, and evaluate craft? Give pass/fail ratios, ramp timelines, and an example of a candidate who won you over unexpectedly.
The first conversation is human before it’s technical: why this mission, and what part of the problem keeps you up at night? We test for mission fit by asking candidates to narrate a messy handoff they’ve lived through—software or otherwise—and how they made it safer and simpler. Craft shows up in how they reason about trade-offs and communicate under constraint. I won’t toss out pass/fail ratios or ramp timelines lightly, but we move quickly when the signal is strong. One candidate shifted our perspective by whiteboarding the operator’s cognitive load instead of an architecture diagram; they treated empathy as a design primitive. That person is now driving interfaces that feel calm in the field.
You emphasize mission-driven focus and mindfulness, even meditating after making the top five. How do you embed that into daily operations? Describe rituals, meeting structures, and escalation rules. Share one stressful moment and how mindfulness changed the outcome.
We start and end with intention. Daily, we open with two minutes of quiet and a single statement of what success looks like for an operator today. Meetings are short, decision-oriented, and conclude with a named owner and a testable next step. Escalations follow a simple ladder: if it affects safety or truthfulness, it jumps the queue; everything else waits its turn. The most stressful moment hit when we learned we’d made the top five—the adrenaline spike invited grand gestures and risky scope creep. Instead, we paused, meditated, and recommitted to the smallest set of moves that would never lie onstage. That reset saved us from the temptation to dazzle at the expense of reliability.
Rail has less congestion than roads, but the road-to-rail handoff is the bottleneck. Map the exact multistep process as it exists today, then overlay Glīd’s redesigned flow. Quantify dwell time reductions, safety incidents avoided, and labor shifts per shift.
Today’s flow stumbles through intake, verification, sequencing, safety planning, execution, and handoff—each with its own tool, owner, and version of the truth. Every handoff leaks context, so delays and rework compound even when the track ahead is wide open. In the Glīd flow, those steps are still there, but they ride on one state model and one playbook that travels with the cargo. Intake normalizes IDs, planning aligns assets, safety generates a tailored checklist, and execution updates a shared timeline. I won’t claim dwell time or labor movements in raw numbers; what we aim for is the elimination of stalls caused by mismatched information and the avoidance of incidents rooted in ambiguity. The human effect is palpable—less shouting, clearer marching orders, and fewer double-backs across the yard.
What did you learn from preparing for Build Mode and the Disrupt audience that other founders can use? Break down story crafting, demo design, and investor Q&A prep. Include rehearsal counts, slide iterations, and a tactic that won attention in the first 30 seconds.
For story, lead with the lived pain in plain language and make the audience feel the bottleneck before you show the fix. Demo design should reveal one end-to-end arc that matters, not a bundle of features. In Q&A prep, gather the toughest questions you hope no one asks and answer them without jargon until a non-technical friend can repeat your answer. We iterated until the deck spoke like a foreman rather than a product brochure and rehearsed enough that our cadence felt natural under bright lights. The first-30-seconds tactic was a crisp before-and-after: here’s the noisy handoff you know, here’s the quiet one we’ll show you. It bought us trust we didn’t squander.
Now that you’ve got momentum and prize money, how are you allocating capital across product, pilots, and hiring? Share a rough budget split, milestones for the next two quarters, and the top risks you’re de-risking first—plus your contingency plan if a pilot slips.
We’re channeling resources into making the core handoff unbreakable, proving it in the field, and adding the people who can own slices of that reality. Milestones center on stable deployments, clean safety records, and expanding operator confidence, not vanity numbers. The risks we’re de-risking first are fragmentation—too many variations too soon—and the temptation to chase features that impress slides but not crews. If a pilot slips, we double down on shadow mode and operator training so value accrues even before a formal go-live. The prize money is fuel, but the discipline is what keeps the engine from knocking.
Do you have any advice for our readers?
Choose one glaring bottleneck and make it unmistakably better, then let everything else orbit that win. Rehearse your product under the conditions it will live in, not the ones you wish for, and build your demo to tell the truth when reality resists. Hire people who can stay calm when the radio gets loud, and protect that calm with rituals that keep you honest. And when you earn a moment in the spotlight, remember: the goal isn’t to look smart—it’s to make the work feel simple for the people who carry the load.
