
  • Moltext: Compiling the Legacy Web into Agent-Native Memory


    The Problem Was Never Model Intelligence

    For the last two years, progress in AI has been framed almost entirely around models.
    Bigger models. Better reasoning. More tokens. Better benchmarks.

    But in practice, agents do not fail because they “aren’t smart enough”.

    They fail because they don’t know what to know.

    They operate on:

    • fragmented context
    • lossy summaries
    • brittle retrieval
    • ephemeral memory windows

    The result is predictable.
    Agents hallucinate not because they are careless, but because the substrate they reason over is incomplete or distorted.

    At Brane, we arrived at a different conclusion:

    The core bottleneck in agent systems is not intelligence.
    It is memory and context infrastructure.

    This article explains how that realization led from Brane → SuperDocs → Moltext, and why Moltext exists as a compiler, not a tool.


    Human Documentation Is Legacy Infrastructure

    Almost all technical knowledge on the internet is encoded as human documentation:

    • HTML pages
    • navigation hierarchies
    • sidebars and breadcrumbs
    • conversational prose
    • repeated explanations
    • visual affordances

    This structure exists for a reason.
    Humans need orientation, narrative, and redundancy.

    Agents do not.

    For an agent, most of this is noise:

    • Navigation elements pollute context.
    • Explanatory prose dilutes specifications.
    • Repetition wastes tokens.
    • Layout hides structure.

    When we ask agents to “learn” from websites, we are asking them to reverse-engineer intent from an interface artifact.

    That is a category error.

    The web is not a knowledge base.
    It is a rendering format for humans.


    Why “Learning from the Web” Is the Wrong Abstraction

    The industry response to this mismatch has mostly been:

    • summarization
    • chunking
    • embeddings
    • RAG pipelines
    • doc chatbots

    These approaches share a common flaw.

    They destroy information density.

    Summarization compresses away edge cases.
    Chunking breaks global structure.
    Embeddings blur exact semantics.
    Chat interfaces encourage partial recall instead of full context.

    This may work for answering questions.
    It fails for acting correctly.

    Agents need:

    • exact API signatures
    • invariant constraints
    • ordering guarantees
    • full specification surfaces

    In other words, they need raw technical truth, not an explanation of it.


    The Compiler Analogy

    At Brane, we started thinking about documentation the way systems engineers think about source code.

    Humans write source code in high-level languages.
    Compilers translate it into machine-readable forms.

    No one asks CPUs to “understand” JavaScript.
    We compile it.

    The same logic applies to documentation.

    Human documentation is a high-level representation of technical reality.
    Agents require a low-level, deterministic representation they can reason over repeatedly and reliably.

    This reframing changed everything.


    From Brane to SuperDocs

    Brane’s core thesis has always been simple:

    Agent systems fail because knowledge is ephemeral and coordination is broken.

    SuperDocs was our first response to that realization.

    It explored how documentation could be:

    • structured
    • standardized
    • treated as memory, not reference material

    But SuperDocs still operated close to the human layer.

    What we needed was something more primitive.
    More mechanical.
    More honest.

    We needed a compiler.


    Introducing Moltext

    Moltext is a documentation compiler for the agentic era.

    It takes the chaotic, human-optimized web and converts it into:

    • deterministic
    • high-density
    • agent-native context

    Moltext does not explain documentation.
    It does not summarize it.
    It does not interpret it.

    It compiles it.


    What Moltext Does (and Does Not Do)

    Moltext is intentionally narrow.

    It:

    • extracts raw documentation content
    • preserves structural hierarchy
    • keeps code blocks and specifications intact
    • emits stable Markdown artifacts suitable for agent memory

    It does not:

    • chat with documentation
    • generate embeddings
    • rewrite or summarize content
    • introduce hidden cognition

    This distinction matters.

    Agents already have models to reason.
    They do not need another model in the middle deciding what is “important”.
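To make the "narrow compiler" idea concrete, here is a minimal sketch of what such a compilation pass could look like: strip navigation chrome, keep heading hierarchy, and carry code blocks through untouched. Every name here is illustrative; this is not Moltext's actual implementation, only the shape of the transformation it describes.

```python
# Hypothetical sketch of a docs-compiler pass: drop navigation/chrome,
# preserve heading hierarchy, keep code blocks verbatim, emit Markdown.
from html.parser import HTMLParser


class DocsCompiler(HTMLParser):
    NOISE = {"nav", "aside", "footer", "header", "script", "style"}

    def __init__(self):
        super().__init__()
        self.out = []          # emitted Markdown fragments
        self.skip_depth = 0    # >0 while inside chrome elements
        self.in_pre = False

    def handle_starttag(self, tag, attrs):
        if tag in self.NOISE:
            self.skip_depth += 1
        elif self.skip_depth == 0:
            if tag in ("h1", "h2", "h3"):
                self.out.append("\n" + "#" * int(tag[1]) + " ")
            elif tag == "pre":
                self.in_pre = True
                self.out.append("\n```\n")

    def handle_endtag(self, tag):
        if tag in self.NOISE and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "pre":
            self.in_pre = False
            self.out.append("\n```\n")
        elif tag in ("h1", "h2", "h3", "p"):
            self.out.append("\n")

    def handle_data(self, data):
        if self.skip_depth == 0 and (self.in_pre or data.strip()):
            self.out.append(data if self.in_pre else data.strip() + " ")


def compile_page(html: str) -> str:
    c = DocsCompiler()
    c.feed(html)
    return "".join(c.out).strip()
```

Note there is no model call anywhere in the pipeline: the pass is purely mechanical, which is exactly what makes its output stable.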


    Raw Mode: A Design Decision, Not a Feature

    One of the most important choices in Moltext is --raw mode.

    In raw mode:

    • no LLM is invoked
    • no semantic rewriting occurs
    • output is deterministic for identical inputs
    • no API keys are required

    This reflects a core belief:

    Agents should own their thinking.
    Infrastructure should stay dumb.

    Moltext’s job is to provide truthful input, not interpretation.
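The determinism claim has a simple operational consequence worth spelling out: if no model is in the loop, the same input must always produce byte-identical output, so artifacts can be identified by a content hash. The sketch below illustrates that property with a toy normalization step; the function names are hypothetical and not Moltext's CLI surface.

```python
# Toy illustration of raw-mode determinism: a purely mechanical
# normalization step, so identical inputs always hash identically.
import hashlib


def compile_raw(doc: str) -> str:
    # No model calls: just strip trailing whitespace and blank lines.
    lines = [line.rstrip() for line in doc.splitlines()]
    return "\n".join(line for line in lines if line) + "\n"


def artifact_digest(doc: str) -> str:
    # Content-address the compiled artifact for caching and comparison.
    return hashlib.sha256(compile_raw(doc).encode()).hexdigest()
```

Because the digest is stable, agents can cache compiled documentation and detect upstream changes without re-reading anything.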


    Local-First, Agent-Aligned

    Moltext supports local inference setups via configurable base URLs and models.

    This allows it to:

    • run entirely inside an agent’s trust boundary
    • share infrastructure with local Moltbots
    • avoid SaaS dependencies
    • function in autonomous, offline, or air-gapped environments

    This is not an optimization.
    It is a requirement for serious agent systems.
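A configurable base URL and model reduces, in practice, to a configuration shape like the following. The field names and the default endpoint are assumptions for illustration (a local OpenAI-compatible server), not Moltext's actual config keys.

```python
# Hypothetical configuration for pointing an optional inference step at a
# local endpoint rather than a SaaS API. All names are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class InferenceConfig:
    base_url: str = "http://localhost:11434/v1"  # assumed local server
    model: str = "local-model"
    api_key: Optional[str] = None  # None: no key inside the trust boundary

    def is_local(self) -> bool:
        # Everything stays inside the agent's own trust boundary.
        return self.base_url.startswith(("http://localhost", "http://127.0.0.1"))
```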


    Moltext in the Agent Stack

    Moltext occupies a very specific place:

    Legacy Web → Moltext (compiler) → Agent Memory → Reasoning → Action

    Once documentation is compiled:

    • it becomes reusable
    • it becomes diffable
    • it becomes shareable across agents
    • it stops being ephemeral

    This is how knowledge becomes infrastructure.
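"Diffable" here is literal, not metaphorical: once documentation is a stable text artifact, two compilations can be compared with a standard line diff. A minimal sketch, using only the standard library:

```python
# Two compiled artifacts can be compared like source code: a line diff
# shows exactly what changed in the documentation between versions.
import difflib


def artifact_diff(old: str, new: str) -> str:
    return "".join(
        difflib.unified_diff(
            old.splitlines(keepends=True),
            new.splitlines(keepends=True),
            fromfile="docs@v1.md",
            tofile="docs@v2.md",
        )
    )
```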


    Why This Matters

    As agents become:

    • longer-running
    • more autonomous
    • more stateful

the cost of bad context compounds.

    The industry will continue to chase better models.
    That work matters.

    But without fixing the memory substrate, agents will remain brittle no matter how smart they become.

    Moltext is a small but deliberate step in a larger direction:
    standardizing the agent-native web.


    Closing

    Moltext is not a product in the traditional sense.
    It is a primitive.

    A quiet one.

    The kind you only notice once everything else starts depending on it.

  • Observing Adversarial AI: Lessons from a Live OpenClaw Agent Security Audit


    Autonomous agents are moving fast.

    Frameworks like OpenClaw have made it trivial to deploy AI systems that can reason, communicate, and act across real infrastructure. This is powerful. It is also dangerous if not handled correctly.

    At Brane Labs, we believe the next bottleneck in agent systems is not model quality, but memory, coordination, and security under adversarial conditions. To explore this, we ran a controlled red-team vs blue-team audit using autonomous agents operating on the OpenClaw framework.

    Today, we’re releasing OpenClaw Observatory Report #1, a technical report documenting what happens when one agent actively tries to exploit another.

    👉 Read the full report here:
    http://gobrane.com/openclaw-report


    Why we ran this experiment

    Most agent security discussions today are theoretical. Prompt injection examples. Toy demos. Human-in-the-loop safeguards.

    That is not how real agents will fail.

    Real failures will come from:

    • Agent-to-agent interaction
    • Social engineering disguised as “helpful collaboration”
    • Indirect execution via documents, templates, or memory
    • Persistence attacks that unfold over time

    We wanted to test these failure modes live, with no human intervention once the session began.


    What we tested

    We evaluated what we call the “Lethal Trifecta” of agent risk:

    1. Access – tools, credentials, APIs
    2. Exposure – untrusted inputs from other agents
    3. Agency – freedom to act on those inputs

    When all three exist simultaneously, agent systems become vulnerable.
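The trifecta is a conjunction, which is worth stating precisely: removing any one leg mitigates the risk. A toy model of that check, with illustrative names:

```python
# Toy model of the "Lethal Trifecta": risk requires all three conditions
# simultaneously, so removing any single leg is a mitigation.
from dataclasses import dataclass


@dataclass
class AgentProfile:
    has_access: bool    # tools, credentials, APIs
    has_exposure: bool  # consumes untrusted input from other agents
    has_agency: bool    # may act on that input without review


def is_lethal(agent: AgentProfile) -> bool:
    return agent.has_access and agent.has_exposure and agent.has_agency
```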

    To test this, we deployed:

    • Red Team agent, explicitly authorized to attack
    • Blue Team agent, acting as a standard OpenClaw observability agent
    • Direct webhook-to-webhook communication with shared secrets

    No guardrails beyond what the agent architecture itself provided.


    Key findings

    1. Direct social-engineering attacks were fully blocked

    The attacker attempted a classic “Trojan Horse” move:

    • Offer a helpful security pipeline
    • Hide a curl | bash payload
    • Request secrets and webhook credentials

    The victim agent:

    • Identified the remote code execution pattern
    • Flagged unauthorized domains
    • Classified the interaction as credential exfiltration
    • Explicitly refused execution

    Result: 100% defense rate against direct, high-urgency attacks.

    This matters because many real-world agents today would fail here.
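For intuition, the surface-level patterns the defender flagged can be sketched as simple detectors. A real defense reasons about intent, as the next section argues; this sketch only shows the pattern layer, and the regexes and labels are illustrative, not the agent's actual logic.

```python
# Illustrative pattern checks behind the defender's classifications:
# curl|bash-style remote execution and requests for credentials.
import re

REMOTE_EXEC = re.compile(r"\b(curl|wget)\b[^|;\n]*\|\s*(ba)?sh\b")
CREDENTIAL_ASK = re.compile(r"\b(secret|api[_ ]?key|webhook|token|credential)s?\b", re.I)


def classify_message(text: str) -> list:
    flags = []
    if REMOTE_EXEC.search(text):
        flags.append("remote-code-execution")
    if CREDENTIAL_ASK.search(text):
        flags.append("credential-exfiltration")
    return flags
```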


    2. Indirect attacks are the real frontier

    After failing directly, the attacker pivoted.

    Instead of asking the agent to run code, it asked the agent to review a JSON document. Malicious payloads were hidden inside metadata fields, relying on implicit execution or variable expansion during downstream processing.

    This attack was successfully delivered. Its final outcome is still under analysis.

Leaving that question open is intentional.

    Indirect execution paths. Templates. Schemas. Memory writes. These are the hardest problems in agent security, and they are where most systems will break.
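The metadata-field vector can be illustrated concretely: a reviewer that only reads top-level content misses strings that downstream template expansion or shell interpolation would execute. The sketch below walks every string in a JSON document and flags such payloads; the field names and suspicious-token list are hypothetical, not the audit's actual detector.

```python
# Sketch of scanning JSON metadata for payloads that rely on downstream
# shell interpolation or template expansion. Tokens are illustrative.
import json

SUSPICIOUS = ("$(", "`", "${", "| bash", "|bash", "eval ")


def scan_metadata(doc: str) -> list:
    """Walk every string in a JSON document and record paths with payloads."""
    hits = []

    def walk(node, path):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(v, f"{path}.{k}")
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(v, f"{path}[{i}]")
        elif isinstance(node, str) and any(s in node for s in SUSPICIOUS):
            hits.append(path)

    walk(json.loads(doc), "$")
    return hits
```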


    What worked well

    The defending agent demonstrated three important capabilities:

    • Intent-level reasoning
      It evaluated why a request existed, not just its surface form.
    • Strict execution boundaries
      “Helpful” artifacts were not treated as safe by default.
    • Stateful adversarial awareness
      The agent remembered prior failed attacks and anticipated escalation.

    This combination is far more important than prompt alignment alone.


    Why this matters for OpenClaw right now

    OpenClaw is trending because it shows what autonomous agents can do.

    This report is about what they can survive.

    As agents move into production environments, the biggest risks will not come from obvious malicious commands. They will come from:

    • Subtle collaboration requests
    • Trusted-looking documents
    • Long-term memory poisoning
    • Agent-to-agent propagation

    Observability, not just alignment, is the missing layer.


    What’s next

    In Observatory Report #2, we will test:

    • Persistent memory poisoning
    • Implicit execution over long horizons
    • Multi-agent propagation (“text virus” scenarios)

    The goal is not to claim safety. The goal is to measure failure honestly and build systems that can learn from it.


    Closing

    Autonomous agents are inevitable.

    Whether they are secure, inspectable, and trustworthy is still an open question.

    This report is our first step toward answering it with evidence, not demos.

    👉 Read the full report:
    http://gobrane.com/openclaw-report


    Brane Labs
    Observability for autonomous systems