Give Claude, GPT, Gemini, or any AI agent sub-10ms control over Android. One HTTP call. No middleware.
Conventional tools route through middleware, ADB, and serialization layers. NeuralBridge puts the server inside the app.
```
Agent
 ↓ HTTP Request
 ↓ Middleware Server
 ↓ ADB Bridge
 ↓ UIAutomator2
 ↓ Device
```

```
Agent
 ↓ HTTP
 ↓ NeuralBridge App
   (in-process execution)
```
Measured on Pixel 7, Android 14, WiFi. 100 runs per operation.
| Operation | NeuralBridge | Others (Typical) |
|---|---|---|
| Tap | ~2ms | 300–1000ms |
| Swipe | ~2ms | 300–1000ms |
| Text Input | ~1.4ms | 500–3000ms |
| UI Tree | 18–33ms | 500ms–5s |
| Screenshot | ~60ms | 300–500ms |
| Average | ~6.4ms | ~800ms–1.5s |
Say `tap(text="Login")`, not pixel coordinates. A six-step resolution chain with fuzzy matching finds the element.
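The six steps of the actual chain are not spelled out here, so the following is a hypothetical sketch of how such a resolver might look: progressively looser matchers over a UI node list, ending in a fuzzy similarity check. Field names and the step order are assumptions, not the NeuralBridge implementation.

```python
from difflib import SequenceMatcher

def resolve_target(nodes, query):
    """Return the first node matching `query`, trying progressively
    looser strategies: exact text, case-insensitive text, substring,
    content description, resource id, then fuzzy similarity."""
    strategies = [
        lambda n: n.get("text") == query,
        lambda n: (n.get("text") or "").lower() == query.lower(),
        lambda n: query.lower() in (n.get("text") or "").lower(),
        lambda n: query.lower() in (n.get("content_desc") or "").lower(),
        lambda n: query.lower() in (n.get("resource_id") or "").lower(),
        lambda n: SequenceMatcher(None, query.lower(),
                                  (n.get("text") or "").lower()).ratio() > 0.8,
    ]
    for match in strategies:
        for node in nodes:
            if match(node):
                return node
    return None

ui = [
    {"text": "Sign up", "resource_id": "btn_signup"},
    {"text": "Log In", "resource_id": "btn_login"},
]
# "Login" misses the exact and substring steps ("Log In" has a space)
# but is caught by the resource-id step.
print(resolve_target(ui, "Login")["resource_id"])
```

Falling through strategies in order keeps exact matches fast while still recovering from the small text mismatches agents routinely produce.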
73% fewer tokens for UI trees, via a compact format, interactive-only filtering, and omission of default attributes.
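To make the idea concrete, here is an illustrative sketch of that style of compaction: drop non-interactive nodes and strip attributes that carry their default values. The real wire format is not documented in this README, so the field names and defaults below are assumptions.

```python
import json

# Assumed default attribute values; anything matching these is omitted.
DEFAULTS = {"enabled": True, "checked": False, "text": ""}

def compact(nodes):
    out = []
    for n in nodes:
        if not n.get("clickable"):          # interactive-only filtering
            continue
        slim = {k: v for k, v in n.items()
                if k != "clickable" and DEFAULTS.get(k) != v}
        out.append(slim)                    # smart omission of defaults
    return out

raw = [
    {"text": "", "clickable": False, "enabled": True, "checked": False},
    {"text": "Login", "clickable": True, "enabled": True, "checked": False},
]
print(json.dumps(compact(raw)))
```

Every dropped attribute is a few tokens the model never has to read, which is where savings on the order of the quoted 73% come from on deep view hierarchies.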
Screenshot + semantic UI tree in one ~70ms call. Complete situational awareness.
`scroll_to_element` finds elements automatically; no guessing scroll counts.
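A scroll-until-found loop of this kind can be sketched as follows. The swipe and tree-dump calls here are simulated stand-ins, not real NeuralBridge endpoints; only the loop shape is the point.

```python
def scroll_to_element(get_visible_nodes, swipe_up, target, max_swipes=10):
    """Scan the visible tree, swipe, repeat until `target` appears
    or the list stops scrolling."""
    for _ in range(max_swipes + 1):
        for node in get_visible_nodes():
            if node.get("text") == target:
                return node
        if not swipe_up():        # stop when the list can't scroll further
            break
    return None

# Simulated scrollable list: three "screens" of items.
screens = [["Inbox", "Sent"], ["Drafts", "Spam"], ["Settings"]]
pos = 0

def get_visible_nodes():
    return [{"text": t} for t in screens[pos]]

def swipe_up():
    global pos
    if pos + 1 < len(screens):
        pos += 1
        return True
    return False

print(scroll_to_element(get_visible_nodes, swipe_up, "Settings")["text"])
```

Because the loop checks the tree after every swipe, the caller never has to guess how many swipes a given list needs.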
Check touch targets, content descriptions, and contrast in under 50ms.
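The contrast part of such a check almost certainly rests on the standard WCAG 2.x contrast-ratio math, shown below; how NeuralBridge exposes it (tool names, thresholds) is not stated here.

```python
def _linear(c8):
    """Linearize one 8-bit sRGB channel (WCAG relative-luminance formula)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255))))
```

WCAG AA requires at least 4.5:1 for normal text, which is the usual pass/fail threshold for an automated check like this.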
Supports nine major Android versions, with automatic fallbacks for version-specific restrictions.
No root. No middleware server. No ADB forwarding.
1. Install the companion APK
2. Enable the AccessibilityService
3. Grant screen capture permission
4. Point your agent at the device
```json
{
  "mcpServers": {
    "neuralbridge": {
      "type": "http",
      "url": "http://<device-ip>:7474/mcp"
    }
  }
}
```
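On the wire, a tool invocation against that endpoint is a JSON-RPC 2.0 request with the MCP `tools/call` method. The tool name `tap` and its arguments below are assumptions for illustration; the actual tool names come from the server's tool list.

```python
import json

def tool_call(tool, arguments, req_id=1):
    """Build an MCP tools/call request body (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        # "tap" and its arguments are hypothetical examples.
        "params": {"name": tool, "arguments": arguments},
    }

body = tool_call("tap", {"text": "Login"})
print(json.dumps(body, sort_keys=True))
# POST this to http://<device-ip>:7474/mcp with
# Content-Type: application/json (e.g. via requests.post).
```

MCP clients such as Claude Desktop generate these requests for you from the config above; the sketch just shows what crosses the network.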
32 tools. Sub-10ms latency. Open source.