March 17, 2026

Build robots the way you write code: Swap hardware without rewrites

The same abstraction principles that make software flexible apply to hardware too. Here's what that looks like in practice.
Wofai Ewa
Technical Product Marketer

If you're a software engineer, you already know how to build systems that survive change. You swap database drivers without rewriting queries. You switch cloud providers without rebuilding your application. You update dependencies through a config file, not a code rewrite. This is just good architecture—separating what changes (implementation details) from what stays stable (your application logic).

Inside traditional robotics development workflows, hardware choices feel more permanent. Changing a camera often means rewriting vision processing code. Upgrading a motor controller turns into rebuilding motion logic. Swap sensors, and you need to refactor integration layers. Hardware becomes architecture—and that's the problem.

Hardware agnosticism offers a different approach. It's the idea that hardware should be the replaceable layer, not the permanent foundation of your robotic system. In practice, it means that your robot's intelligence—its perception, decision-making, and behavior—can outlive any individual component.

For software engineers exploring robotics, this isn't a new concept. It's the same abstraction principles you already know, applied to physical devices, and it fundamentally changes what's possible to build.

The problem: when hardware becomes architecture

Here's the pattern most robotics development follows:

You choose a camera and write vision code specifically for that camera's SDK. You pick a motor controller and build motion logic around that vendor's protocol. You select sensors and integrate using vendor-specific drivers.

This is like writing database queries directly in your application code instead of using an ORM. Or building AWS-specific logic throughout your app instead of using cloud-agnostic abstractions. You can do it—but you're creating tight coupling that'll hurt later.

The consequences compound quickly:

  • Hardware changes require cascading software rewrites
  • Component upgrades become full system redesigns
  • Technical debt accumulates with each hardware-specific integration
  • Teams get locked into vendors not by choice, but by code dependency

This happens because there's no abstraction layer between hardware and application logic—a problem software engineering solved decades ago with interfaces, ORMs, and standardized APIs.
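To make the coupling concrete, here is a minimal Python sketch of application logic written directly against one camera's SDK. The vendor class and its method names are hypothetical, standing in for any real driver:

```python
# Hypothetical vendor SDK -- stands in for a real camera driver.
class AcmeDepthCamSDK:
    def open_stream(self) -> None:
        # Vendor-specific initialization sequence.
        pass

    def capture_frame_raw(self) -> bytes:
        # Vendor-specific capture call with a vendor-specific return format.
        return b"\x00" * 16


def scan_surface() -> int:
    # Application logic is welded to AcmeDepthCamSDK: switching cameras
    # means finding and rewriting every call site like this one.
    cam = AcmeDepthCamSDK()
    cam.open_stream()
    frame = cam.capture_frame_raw()
    return len(frame)


if __name__ == "__main__":
    print(scan_surface())  # 16
```

Every function that names `AcmeDepthCamSDK` becomes a rewrite site the day the hardware changes—exactly the cascading cost described above.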

Hardware longevity: what it means and why it matters

Building robots that outlive their parts is about architectural resilience. Your robot's intelligence—its perception, decision-making, and behavior—should be the durable layer. Hardware should be the replaceable layer. The system you’re building on should be able to absorb hardware changes without cascading rewrites. 

Consider the difference in architectural structures:

Rigid, tightly coupled: Hardware → Vendor SDK → Application Logic

Resilient, loosely coupled: Hardware → Standardized Interface → Application Logic

This should feel familiar. It's the same principle you apply when you:

  • Use database interfaces (JDBC, SQLAlchemy) instead of vendor-specific drivers
  • Use HTTP/REST instead of writing raw TCP/IP
  • Use cloud abstraction layers instead of AWS/GCP-specific code
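The loosely coupled structure can be sketched with a Python `Protocol`. The driver classes and method names below are illustrative, not any vendor's real SDK:

```python
from typing import Protocol


class Camera(Protocol):
    """Standardized interface: application code depends only on this."""

    def get_image(self) -> bytes: ...


# Two hypothetical drivers, each hiding its own vendor details.
class AcmeDepthCam:
    def get_image(self) -> bytes:
        return b"acme-frame"


class OrbitDepthCam:
    def get_image(self) -> bytes:
        return b"orbit-frame"


def scan_surface(cam: Camera) -> int:
    # Depends only on the interface -- any conforming driver works.
    return len(cam.get_image())


# Swapping hardware is a one-line change at the composition root:
print(scan_surface(AcmeDepthCam()))   # 10
print(scan_surface(OrbitDepthCam()))  # 11
```

`scan_surface` never learns which driver it received, which is precisely what keeps the application layer stable when the hardware layer changes.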

Real-world example: Same logic, better hardware 

When we set out to build the SurfaceAI fiberglass sanding solution, we started with a popular off-the-shelf depth camera. Mounted to the robot arm, it scanned boat surfaces to create 3D models for sanding plans. But as we pushed toward production-quality sanding, we discovered its limitations: noisy point clouds, poor performance beyond 50cm distance, and susceptibility to surface glare—all critical issues when scanning the complex curves of fiberglass boat parts.

In traditional robotics development, this would have forced us to make a difficult choice: make do with suboptimal hardware, or spend weeks rewriting drivers and control logic to switch cameras. With Viam's hardware abstraction layer, we had a third option: evaluate alternatives without touching application code.

We identified the Orbbec Astra 2 as a promising candidate and ran side-by-side tests with both cameras. The results were dramatic. At 80cm distance—critical for capturing larger surface areas in fewer scans—the Astra 2 delivered smoother, more accurate point clouds, and produced less noise than the original camera. Most importantly for our reflective fiberglass surfaces, the Astra 2's depth sensing remained unaffected by glare that created large holes in the original camera's point cloud.

Here's what didn't change: The perception code. The sanding logic. The motion planning algorithms. All of the control logic and planning algorithms we wrote with the first depth camera worked out of the box with the Orbbec because they interacted with Viam's camera API that both cameras implement. 

This rapid testing and data-driven decision-making would've been impossible if we'd been locked into our initial camera choice by integration complexity. Instead, we made the switch and moved on to the next challenge.


The principles: building for hardware longevity

1. Standardize at the interface, not the implementation

Your code should talk to "a camera," not "this specific camera model."

In software terms: you call database.query(), not mysql.execute_raw_query(). The same principle should apply to hardware. In Viam, all cameras expose GetImage(), all motors respond to SetPower(), all sensors provide GetReadings()—regardless of manufacturer.

Abstraction layers make hardware pluggable instead of structural.

2. Separate intelligence from hardware

Perception logic ≠ sensor driver code. Motion planning ≠ motor controller code.

This is clean separation of concerns, the same way you separate business logic from data access layers. You should keep application logic hardware-agnostic, and let the platform handle vendor-specific details.
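One way to keep that separation honest, sketched below: perception logic takes plain data (here, a list of depth readings in millimeters) rather than a device handle, so it can be unit-tested with synthetic data and no hardware attached. The function name and threshold are illustrative:

```python
def filter_noise(depth_mm: list[float], max_depth_mm: float = 800.0) -> list[float]:
    # Perception logic: a pure function of data, with no sensor driver
    # imported. It neither knows nor cares which camera produced the
    # readings -- invalid (non-positive) and out-of-range values are dropped.
    return [d for d in depth_mm if 0.0 < d <= max_depth_mm]


# Testable with synthetic data -- no robot required.
readings = [120.0, -1.0, 450.0, 3000.0, 799.9]
print(filter_noise(readings))  # [120.0, 450.0, 799.9]
```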

3. Make hardware decisions reversible

Ask yourself: "If I need to swap this component in 6 months, what breaks?"

Your architecture should assume hardware will evolve. Your prototype hardware won't be your production hardware, and that's okay. Start with affordable components for prototyping, then upgrade to industrial-grade hardware for production—your code investment stays protected. Design decisions should limit how far changes propagate when requirements shift.

4. Think in lifecycles

Components have shorter lifecycles than systems. Your robot should outlive any single part.

In software, your application outlives any specific dependency or service provider. The same should be true for robotics: replaceability is resilience.

The mechanism: how hardware agnosticism works

With Viam, hardware swapping is a configuration change:

```json
{
  "components": [
    {
      "name": "my_camera",
      "model": "viam:camera:realsense",
      "type": "camera",
      "attributes": {...}
    }
  ]
}
```

Change "viam:camera:realsense" to "viam:orbbec:astra2", and your application code keeps working. The standardized camera API—GetImage(), GetPointCloud(), GetProperties()—stays identical. Your perception code, motion planning logic, and application behavior remain unchanged.

When you make that config change, viam-server—the runtime managing your robot components—does the work. It pulls the appropriate driver from the Viam Registry (a central repository of hardware modules), initializes the new device, and exposes it through the same consistent interface. You don't write device drivers. You don't worry about protocol differences. You don't rewrite application logic. More on Viam's architecture.

This is the development workflow software engineers are used to—config-based dependency management, standardized APIs, separation of concerns—applied to physical devices.

Architecture for the inevitable

Hardware requirements are bound to change. Better sensors will emerge, suppliers will evolve, and components will fail. What matters is whether your system can adapt without rewrites. Viam applies the software engineering principles you already know to robotics hardware: consistent APIs across manufacturers, config-based hardware changes, and loose coupling between components and application logic. The result is marine manufacturing robots that can handle a mid-deployment camera swap without touching application code, and systems that scale from prototype to production without rewrites.

Whether you use Viam or build your own abstractions, the principle is universal: hardware agnosticism isn't optional for production robotics—it's foundational. Your robot's intelligence should outlive its parts.

Software engineering solved this problem decades ago with abstraction layers and standardized interfaces. Robotics is now applying those same principles to hardware. The development velocity, flexibility, and architectural patterns you're used to in software work for robots too.

