Syntax Watch
A 2.06" AMOLED touchscreen AI companion on your wrist. Always on, always listening for your next command — with a notification drawer for delegated AI task results.
ReSono Labs Syntax is a two-device AI voice terminal ecosystem. A touchscreen watch on your wrist and a companion terminal on your desk — both powered by the same OpenClaw brain, both speaking with one voice.
A 1.85" round-display desktop companion terminal. Sleek, always-plugged-in, always-connected. The desk anchor for your ambient AI workflow.
Syntax Watch and Syntax share the same UI framework, the same OpenClaw voice protocol, and the same notification inbox. Start a conversation on the desktop, walk away, and continue it on your wrist. Or delegate a heavy task from the watch and get the spoken result when you're back at your desk.
They look different because they are different hardware. But underneath the shell, they're the same system — running the same hardware-agnostic ESP32-S3 firmware.
Neither the watch nor the desktop terminal runs an LLM. They're thin clients — they capture your voice, stream it to OpenClaw over WebSocket, receive the AI's audio response, and play it back. The heavy lifting (transcription, inference, text-to-speech) all happens on the server.
This design means no AI API keys ever touch the device. No credentials in firmware, no secrets in NVS. The ESP32 just streams PCM audio and paints pixels. Everything sensitive lives in OpenClaw.
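On the device, that streaming loop reduces to framing mic PCM and handing fixed-size frames to the socket. A host-testable sketch of the chunking logic — the frame size and `ws_send_pcm` are illustrative stand-ins, not the actual firmware API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame size: 20 ms of 16 kHz mono PCM = 320 samples. */
#define FRAME_SAMPLES 320

/* Stand-in for the WebSocket send call; counts frames for this demo. */
int frames_sent = 0;
void ws_send_pcm(const int16_t *frame, size_t n) {
    (void)frame; (void)n;
    frames_sent++;
}

/* Drain a mic buffer in fixed-size frames; return leftover sample count. */
size_t stream_pcm(const int16_t *buf, size_t n) {
    size_t off = 0;
    while (n - off >= FRAME_SAMPLES) {
        ws_send_pcm(buf + off, FRAME_SAMPLES);
        off += FRAME_SAMPLES;
    }
    return n - off;  /* partial frame stays buffered until more audio arrives */
}
```

The same loop runs in reverse for playback: received audio frames go straight to the speaker's I2S write, so the device never needs to understand what was said.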
Switching between Gemini Live and OpenAI Realtime takes one CLI command — for example, to select Gemini: openclaw config set liveProvider google
The agnostic architecture uses driver tables (ops structs) — the firmware doesn't hardcode any specific screen, speaker, or codec. It dispatches through function pointers defined per board profile. Add a new ESP32-S3 form factor? Implement the driver table. Everything else — the AI, the UI framework, the voicemail system — works out of the box.
Init, flush, sleep, wake, brightness — all dispatched through board-specific function pointers. Works with rectangular AMOLED or round LCD.
I2S mic start/stop, speaker PCM write — via ES8311/ES7210 codecs with automatic 24k→16k resampling for the AI stream.
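The 24k→16k step is a 3:2 ratio, so a naive decimator emits two output samples for every three input samples. A minimal sketch — illustrative only; a production resampler would low-pass filter before decimating:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Naive 24 kHz -> 16 kHz downsample: 2 output samples per 3 input samples.
 * out[2k] keeps in[3k]; out[2k+1] averages the two samples it replaces. */
size_t resample_24k_to_16k(const int16_t *in, size_t n, int16_t *out) {
    size_t o = 0;
    for (size_t i = 0; i + 2 < n; i += 3) {
        out[o++] = in[i];
        out[o++] = (int16_t)(((int32_t)in[i + 1] + in[i + 2]) / 2);
    }
    return o;  /* number of output samples written */
}
```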
Touch, tap, and gesture events routed through a board-specific input handler — whatever input hardware the device has.
Battery monitoring, charging state, AXP2101 PMU integration — per-board power profile with shared battery logic.
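The ops-struct pattern behind all of these can be sketched in plain C — `display_ops_t` and the stub driver below are hypothetical names for illustration, not the actual firmware symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical display driver table: each board profile fills in its own
 * function pointers; higher layers only ever call through the struct. */
typedef struct {
    int  (*init)(void);
    void (*set_brightness)(uint8_t level);
    void (*sleep)(void);
    void (*wake)(void);
} display_ops_t;

/* Example board profile with stub implementations. */
uint8_t last_level;
int  amoled_init(void)            { return 0; }
void amoled_brightness(uint8_t l) { last_level = l; }
void amoled_sleep(void)           {}
void amoled_wake(void)            {}

const display_ops_t watch_display_ops = {
    .init = amoled_init,
    .set_brightness = amoled_brightness,
    .sleep = amoled_sleep,
    .wake = amoled_wake,
};

/* Board-agnostic code: works for any profile that fills the table. */
int display_start(const display_ops_t *ops, uint8_t brightness) {
    if (ops->init() != 0) return -1;
    ops->set_brightness(brightness);
    return 0;
}
```

Porting to a new board means writing one table per subsystem; the AI, UI, and voicemail layers never touch hardware registers directly.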
Captive portal for initial Wi-Fi setup. Credentials never hardcoded — securely provisioned once, stored in NVS.
Custom layered LVGL interface with a top System Drawer, bottom Results Drawer, and a center Orb Service visualizer.
When you ask Syntax to do something complex — research a topic, draft a report, analyze a codebase — it doesn't block the conversation. It delegates to a background text worker (an OpenClaw subagent) and goes silent.
When the result is ready, the device gets a deskbot.results.ready event. A notification badge appears on the watch or desktop screen. Tap it, and Gemini is briefed on the result and speaks a concise natural-language summary back to you.
The voicemail broker is durable JSON state — tasks survive network blips, device reboots, and context resumptions.
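A durable task record in that broker might look like the following — every field name here is illustrative; only the deskbot.results.ready event name comes from the text above:

```json
{
  "taskId": "a1b2c3",
  "status": "ready",
  "prompt": "Summarize the open issues in the tracker",
  "result": "There are 4 open issues; two are blocked on review.",
  "createdAt": "2025-01-15T09:30:00Z",
  "event": "deskbot.results.ready"
}
```

Because the record is plain JSON on disk, a reboot or dropped connection only delays delivery — the badge fires as soon as the device reconnects.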
Up to 240 recent conversation turns stored in a fast-access JSON file. Always in context, never re-fetched.
Older turns automatically moved to archive — keeps active snapshots lightning fast without losing any conversation history.
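The rotation policy reduces to: archive the oldest turns whenever the active snapshot exceeds 240. A host-testable sketch, with `archive_turn` as a stand-in for the real file append:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ACTIVE_TURNS 240  /* from the spec: 240 turns stay hot */

/* Stand-in archive sink; real firmware would append to an archive file. */
size_t archived = 0;
void archive_turn(int turn_id) { (void)turn_id; archived++; }

/* Keep the active snapshot at or below MAX_ACTIVE_TURNS by archiving
 * the oldest turns first. `turns` is oldest-first; returns new count. */
size_t trim_snapshot(int *turns, size_t count) {
    size_t excess = count > MAX_ACTIVE_TURNS ? count - MAX_ACTIVE_TURNS : 0;
    for (size_t i = 0; i < excess; i++)
        archive_turn(turns[i]);
    for (size_t i = excess; i < count; i++)
        turns[i - excess] = turns[i];   /* shift survivors down */
    return count - excess;
}
```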
Every Syntax conversation mirrors directly into your main OpenClaw agent. Talk to your desk bot, and your primary AI remembers it on your phone.
Resumption-handle logic sustains context across network interruptions and device reboots — the AI picks up where it left off instead of repeating itself.
The Silence Guard architecture monitors AI and user audio state in real time. If you're speaking or Gemini is mid-sentence, background polling and worker spawns are suspended completely. No audio stuttering. No mid-word interruptions.
After you stop speaking, the system waits exactly 1 second before resuming background work — ensuring no "clipping" at the end of your turn.
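That guard condition is easy to express as a pure predicate over the audio state plus the 1-second grace window — struct and function names here are illustrative, not the actual firmware symbols:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RESUME_GRACE_MS 1000  /* wait 1 s after speech ends, per the spec */

typedef struct {
    bool     user_speaking;
    bool     ai_speaking;
    uint32_t last_speech_end_ms;  /* timestamp when speech last stopped */
} silence_guard_t;

/* Background polling and worker spawns are allowed only when nobody is
 * speaking and the post-speech grace period has elapsed. */
bool background_allowed(const silence_guard_t *g, uint32_t now_ms) {
    if (g->user_speaking || g->ai_speaking) return false;
    return (now_ms - g->last_speech_end_ms) >= RESUME_GRACE_MS;
}
```

Keeping the check side-effect free means it can be evaluated on every scheduler tick without touching the audio path itself.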
The Orb Service visualizes AI state on both devices — its animation speed and color theme track whether the system is Idle, Listening, Thinking, or Speaking.
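The state-to-theme mapping can be a simple lookup table keyed by the four states named above — the colors and animation periods below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* The four orb states from the text; themes are illustrative values. */
typedef enum { ORB_IDLE, ORB_LISTENING, ORB_THINKING, ORB_SPEAKING } orb_state_t;

typedef struct {
    uint32_t color_rgb;   /* hypothetical theme color */
    uint16_t period_ms;   /* one animation cycle; lower = faster */
} orb_theme_t;

const orb_theme_t orb_themes[] = {
    [ORB_IDLE]      = { 0x334455, 4000 },
    [ORB_LISTENING] = { 0x22cc88, 1500 },
    [ORB_THINKING]  = { 0xffaa00,  800 },
    [ORB_SPEAKING]  = { 0x3399ff,  500 },
};

const orb_theme_t *orb_theme_for(orb_state_t s) {
    return &orb_themes[s];
}
```

Because both devices share the UI framework, the same table drives the round LCD and the rectangular AMOLED alike.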
All AI API keys live strictly inside OpenClaw. Gemini and OpenAI credentials never reach the ESP32, never touch NVS, never appear in firmware.
Each device generates a random 6-digit PIN stored in NVS. That PIN is the bearer token for the local web interface — revocable at any time from OpenClaw.
Web sessions expire after 30 minutes of inactivity. PIN re-entry required. Clean kill switch for any lost or compromised device.
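The session rule reduces to a TTL check against the last-activity timestamp, plus a modulo to derive a 6-digit PIN from a hardware entropy word. Names below are illustrative; on the ESP32 the entropy would come from the hardware RNG:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SESSION_TTL_MS (30u * 60u * 1000u)  /* 30 minutes, per the spec */

typedef struct {
    bool     active;
    uint32_t last_activity_ms;
} web_session_t;

/* A session is valid only while the inactivity window has not expired. */
bool session_valid(const web_session_t *s, uint32_t now_ms) {
    return s->active && (now_ms - s->last_activity_ms) < SESSION_TTL_MS;
}

/* Derive a 6-digit PIN from a random word. Leading zeros are possible,
 * so the PIN should be stored and displayed zero-padded. */
uint32_t pin_from_entropy(uint32_t entropy) {
    return entropy % 1000000u;
}
```

Revocation is then just clearing the stored PIN in NVS from OpenClaw — every outstanding session fails its next check.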
Built-in filtering ensures Gemini's internal model reasoning (thoughts) never leaks to the user interface. Only final outputs are surfaced.
Both Syntax devices run on the ESP32-S3, chosen for its dual-core Xtensa architecture, Wi-Fi + BLE 5.0 radio, and broad ecosystem support. The firmware is built on ESP-IDF v5.4.2 with LVGL for the UI framework.
ReSono Labs Syntax is an open-architecture ESP32-S3 platform. Talk to us about custom hardware form factors, white-label deployments, or integrating Syntax into your own product line.