TechPolicy.Press published an analysis on May 11 arguing that OpenClaw’s architecture proves a fundamental market assumption wrong: AI agents do not need vertical integration to compete with the closed ecosystems built by Google, Microsoft, and OpenAI.
The argument rests on OpenClaw’s Gateway design. The Gateway runs locally on the user’s device, manages connections to external services (email, calendar, file storage), and stores memory, preferences, and API credentials on the user’s machine rather than in the cloud. Switching foundation models requires a single command. Because data and integrations persist locally, moving from Claude to GPT to DeepSeek does not mean starting over.
The Lock-In Mechanism TechPolicy.Press Identifies
The analysis contrasts OpenClaw’s portability with how the leading AI firms are building their agents. Google’s Gemini can search through photos, read group chats, and execute tasks across Gmail, Calendar, and Drive. Microsoft has bundled Agent Mode into Copilot, embedded in Office 365. OpenAI introduced ads to ChatGPT and connected agents to health apps and medical records.
Each new capability extends the platform’s reach into users’ digital lives. TechPolicy.Press argues this creates a compounding lock-in effect: over time, the agent learns habits and preferences (both explicit and inferred), and starting over with a new provider means reconstructing that knowledge and reconnecting every service. “With each passing day, switching becomes a little bit harder.”
The vertically integrated approach also creates self-preferencing risk. TechPolicy.Press draws a direct parallel to Google’s €2.4 billion fine for burying rival shopping services in search results. An agent that presents a single recommendation, with no transparency into how that recommendation was reached, can steer users in the same way, but less visibly.
The Modular Alternative
OpenClaw’s architecture eliminates the switching cost mechanism. Memories and activity logs are stored as human-readable files on the user’s device. Users can edit or delete them directly. Integrations persist across model changes. The analysis argues this portability also undermines platform surveillance: users can rotate between foundation model providers, preventing any single actor from accumulating a complete record of their online activity, and can run smaller open-weight models locally for sensitive tasks.
The downstream effects are already visible. TechPolicy.Press notes that OpenClaw prompted nearly every major Chinese tech company to launch equivalents: Xiaomi’s Miclaw, Moonshot AI’s Kimi Claw, Zhipu AI’s AutoClaw. Nvidia entered with NemoClaw, built directly on OpenClaw’s framework, defaulting to Nemotron models but allowing user-swappable alternatives. The proliferation suggests modular agent architectures can sustain an ecosystem without any single vendor controlling the stack.
The Security Tradeoff
TechPolicy.Press does not ignore the counterargument. Modular agent designs push security responsibility onto users. With vertically integrated agents, one centralized provider handles security for the entire system. With OpenClaw, users must manage the risks of untrusted content, malicious add-ons, and prompt injection attacks themselves.
The analysis cites security researchers who have flagged these risks, including the 341 malicious ClawHub add-ons designed to steal user data and prompt injection attacks where hidden instructions in emails or webpages can hijack agent behavior. The conclusion is that modular markets require new infrastructure and safeguards, which are beginning to emerge from Nvidia and security startups but are not yet mature.
The Regulatory Question
The most pointed argument in the piece concerns platform gatekeeping. Meta blocked rival AI assistants from WhatsApp in favor of its own Meta AI. The Italian Competition Authority and European Commission imposed interim measures. But TechPolicy.Press notes that platforms that never open up in the first place achieve the same outcome without triggering scrutiny.
The analysis concludes that unless regulators require gatekeepers to provide meaningful access to their platforms, projects like OpenClaw cannot offer a viable alternative at scale. OpenClaw’s founder Peter Steinberger has already been hired by OpenAI, though the project moved to an independent foundation.
The standard TechPolicy.Press proposes: users should have the right to export their data, including memories, chat histories, and uploaded files, in standardized, human-readable formats. The market-leading agents should be held to the portability standard that OpenClaw already meets by default. Whether regulators will enforce that standard before lock-in becomes irreversible is the open question.