{"success":true,"course":{"all_concepts_covered":["Gateway control-plane routing and session isolation","Trust boundaries: auth modes, pairing concepts, and ingress control","System prompt layering and tool exposure mechanics","Iterative tool-call execution loop, validation, and MCP bridging","Skill authoring via workspace files and SKILL.md contracts","Plugins and hooks for lifecycle-managed enforcement","Defense-in-depth: sandboxing, allow/deny policy, approvals, and observability"],"assembly_rationale":"This course is built as an under-the-hood progression for professionals: start with the gateway as a control plane, then move into how tools are exposed to the model and executed in loops, then into the two main extension surfaces (skills and plugins/hooks). It ends with enforcement (sandbox/policy/approvals) and operations (ingress control + observability), because safe extensibility is primarily about boundaries and verification, not capability.","average_segment_quality":7.622142857142856,"concept_key":"CONCEPT#1c0b337c2439d36f66366176a01ae32e","considerations":["Tool orchestration ‘status → snapshot → act → verify’ is taught implicitly through execution-loop mechanics; learners may want to formalize it as a reusable internal checklist for each high-risk tool.","If you operate in production, add a longer follow-up module on incident response and backup/restore runbooks (beyond this 40-minute scope)."],"course_id":"course_1771914712","created_at":"2026-02-24T10:29:41.250720+00:00","created_by":"Shaunak Ghosh","description":"Build an under-the-hood mental model of OpenClaw’s control plane, prompt/tool execution loop, and extension surfaces so you can add capabilities without collapsing trust boundaries. 
You’ll learn when to use SKILL.md vs plugins/hooks vs MCP-style bridges, and how to enforce sandboxing, tool policy, approvals, and observability in real deployments.","estimated_total_duration_minutes":39.0,"final_learning_outcomes":["Trace message flow from channel adapter to gateway session routing, through the LLM tool-call loop to side-effecting execution.","Predict when a tool is callable based on prompt injection, skill loading, and structured tool exposure, and budget context costs accordingly.","Author SKILL.md-style instructions that are concise, tool-oriented, and verifiable, and choose when to enforce behavior via plugins/hooks instead.","Implement a security-first posture using least privilege, allow/deny tool policy, sandboxing, and explicit approvals for destructive actions.","Operate and debug OpenClaw extensions using observability signals (tool-call traces, session logs, error streams) and controlled ingress boundaries."],"generated_at":"2026-02-24T10:28:24Z","generation_error":null,"generation_progress":100.0,"generation_status":"completed","generation_step":"completed","generation_time_seconds":276.0453336238861,"image_description":"A software engineer in a dimly lit operations room sits at a desk with two large monitors, focused and serious. On the left monitor is a clean architecture diagram sketched on a digital whiteboard showing a “Gateway” box routing messages to “Sessions,” then to “Agent Runtime,” then to “Tools,” with a separate “Node” box across a dotted trust boundary. On the right monitor is a terminal view with structured logs scrolling, including tool-call entries and approval prompts, plus a small pane showing a policy file snippet (allow/deny rules) and a plugin hook trace. The desk has a notebook open with handwritten notes: “prompt layers,” “tool schema,” “MCP bridge,” “sandbox,” and “approvals.” A secure token key and a small hardware security key sit near the keyboard, implying security-first operations. 
The overall mood is pragmatic and defensive—like incident-prevention engineering—showing real-world extension work with careful auditing and control boundaries, not casual experimentation.","image_url":"https://course-builder-course-thumbnails.s3.us-east-1.amazonaws.com/courses/course_1771914712/thumbnail.png","interleaved_practice":[{"difficulty":"mastery","correct_option_index":1.0,"question":"You deploy OpenClaw on a server and want remote access from your laptop. The gateway offers multiple authentication/access modes (token/password/Tailscale/local-only). You also plan to add high-privilege tools (bash, browser). Which design best preserves the gateway as a tight trust boundary while enabling remote use?","option_explanations":["Incorrect: Sandbox controls execution blast radius, but it does not authenticate callers to the gateway; you still need ingress and auth boundaries.","Correct! Keeping the gateway behind a private network path and enabling explicit auth preserves the control-plane trust boundary while still allowing remote operation.","Incorrect: A strong password helps, but public exposure still widens the attack surface; you want private ingress plus auth, not internet-first exposure.","Incorrect: DM pairing is not a sufficient network boundary; it reduces accidental use but doesn’t prevent control-plane exposure or abuse if the gateway is reachable."],"options":["Skip gateway auth and instead enforce safety solely with a Docker sandbox, because sandboxing replaces authentication.","Keep the gateway local-only and use a private tailnet/VPN path, with explicit gateway auth enabled, so “remote” traffic is still within a controlled network boundary.","Expose the gateway over the public internet with a strong password, because prompt injection is the main risk, not network exposure.","Bind broadly to LAN and rely on DM pairing only, because pairing prevents unauthorized tool use even if the gateway is 
reachable."],"question_id":"q1_gateway_trust_boundary","related_micro_concepts":["openclaw_gateway_node_architecture","openclaw_sandbox_policy_approvals","openclaw_debugging_doctor_logging_ops"],"discrimination_explanation":"The gateway is the control plane; if it’s reachable by untrusted networks, you’ve expanded the attack surface before any tool policy can help. Private-network access (tailnet/VPN) plus explicit gateway auth keeps the trust boundary narrow. DM pairing and sandboxing are valuable, but they’re not substitutes for restricting who can reach the control plane in the first place."},{"difficulty":"mastery","correct_option_index":1.0,"question":"A new skill’s markdown clearly instructs the agent to use a tool called `firecrawl_search`. In practice, the model never calls it and instead tries web browsing. Logs show no attempted `firecrawl_search` tool call. Which explanation best fits OpenClaw’s tool exposure model?","option_explanations":["Incorrect: Markdown can be injected into the system prompt and can be high-priority; the key constraint is whether the tool exists as an exposed schema.","Correct! 
Without structured tool exposure/registration, the model has no callable handle for the tool; prompt text alone can’t create new tools.","Incorrect: Compaction can affect context, but absence of any attempted tool call points to missing schema exposure rather than summarization loss.","Incorrect: Channel type may influence policies, but it doesn’t inherently suppress all tool schemas; this is not the primary mechanism described."],"options":["The skill’s instructions are in markdown, so the model treats them as lower priority than chat history and ignores the tool name.","The tool schema for `firecrawl_search` was never actually exposed/registered to the model, so the prompt text alone can’t make the tool callable.","Compaction removed the tool description, so the tool is only callable immediately after a gateway restart.","The gateway routes tool calls only to paired DMs; in group channels, tool schemas are suppressed for safety."],"question_id":"q2_prompt_vs_schema_tool_availability","related_micro_concepts":["openclaw_system_prompt_tool_exposure","openclaw_skill_authoring_internals","openclaw_gateway_node_architecture"],"discrimination_explanation":"OpenClaw tool calling depends on structured tool exposure (schemas) in addition to textual instructions. A skill can describe a tool, but if the tool isn’t registered/exposed, the model can’t emit a valid tool call. The other options confuse priority/compaction/routing with the fundamental ‘schema must exist’ requirement."},{"difficulty":"mastery","correct_option_index":1.0,"question":"Your agent starts a long chain of tool calls: it calls browser snapshot, then execute JS, then snapshot again, repeating with tiny changes. You want a hard stop when a loop looks non-productive, without relying on the model to self-correct. 
Where should you implement this control for maximum reliability?","option_explanations":["Incorrect: Persistent memory can bias behavior but is not an execution gate; it won’t reliably stop a runaway loop.","Correct! A plugin hook can deterministically observe and gate tool calls, enforcing thresholds and approvals independent of model cooperation.","Incorrect: Skill instructions can reduce risk, but they are not hard enforcement; a confused model can still loop.","Incorrect: Adapters see inbound messages, not the internal tool-call loop; dropping messages won’t stop tool iterations already in progress."],"options":["In memory.md, store a note that repetitive tool use is bad, because persistent memory is always loaded and will prevent loops.","In a plugin hook that inspects tool-call frequency and outcomes, then blocks or requires approval when a loop threshold is exceeded.","In SKILL.md, add a rule saying “never repeat a tool twice,” because skills are injected into the system prompt and therefore enforce behavior.","In the channel adapter, drop messages after 10 tool calls, because adapters are the earliest ingestion boundary."],"question_id":"q3_tool_loop_runaway_control","related_micro_concepts":["openclaw_tool_orchestration_patterns","openclaw_plugin_extension_architecture","openclaw_system_prompt_tool_exposure"],"discrimination_explanation":"Non-productive tool loops are an execution-time phenomenon. The most reliable place to enforce a hard constraint is at the interception point that sees every tool call: a plugin hook/tool-guard. Skills and memory influence the model, but can’t guarantee compliance. Adapters control ingress, not internal tool iteration."},{"difficulty":"mastery","correct_option_index":0.0,"question":"You are authoring a SKILL.md that wraps a privileged tool (bash) to “run any command the user asks.” You want it extensible but safe against prompt-injection and parameter smuggling from untrusted content. 
Which SKILL.md design is the best fit?","option_explanations":["Correct! Structured parameters + validation and a verifiable workflow reduce the prompt-injection surface and limit the blast radius.","Incorrect: Stronger models may be more robust, but they can still be manipulated; security must not depend on model behavior alone.","Incorrect: Plan-first narration is useful but still allows arbitrary command injection; it’s a soft control.","Incorrect: File placement doesn’t inherently add enforcement; you still need constrained parameters and approval/policy layers."],"options":["Require the tool call to use a structured allowlist of subcommands and validated arguments, and add a ‘describe → act → verify’ procedure with explicit verification artifacts.","Tell the model to use a stronger LLM for bash calls, because higher-capability models resist injection better.","Accept a raw command string and pass it directly to bash, but require the model to explain what it will do first.","Move the instructions into soul.md instead of SKILL.md, because soul.md is more trusted than skills."],"question_id":"q4_skill_authoring_parameter_injection","related_micro_concepts":["openclaw_skill_authoring_internals","openclaw_system_prompt_tool_exposure","openclaw_sandbox_policy_approvals"],"discrimination_explanation":"Safe skill design is about constraining the action space and requiring verifiable steps, not just adding narration. A structured interface (allowlists/validation) plus an observable procedure reduces the injection surface and prevents arbitrary command execution. Model choice and prompt placement help, but don’t replace hard constraints."},{"difficulty":"mastery","correct_option_index":0.0,"question":"You need to add organization-wide rate limiting and an allow/deny policy that applies to every tool call, including tools introduced later by new skills, plugins, or MCP servers. Which extension mechanism is the correct place to implement this, and why?","option_explanations":["Correct! 
A plugin hook/tool guard can deterministically intercept and gate all tool calls at runtime, including future extensions.","Incorrect: MCP can enforce policy only on calls that go through that MCP server; it is not universal for all native tools.","Incorrect: Prompt lists are advisory; without an execution-time gate, they can be ignored or bypassed under model failure.","Incorrect: Skills are not hard enforcement and won’t reliably constrain new tool sources or malicious/mistaken behavior."],"options":["A plugin hook/tool guard in the gateway/runtime, because it can intercept all tool calls regardless of where the tool originated.","An MCP bridge server, because MCP is the universal interface and therefore can enforce policy across all tools automatically.","A larger system prompt that lists forbidden tools, because prompt text is evaluated before tool calls.","A SKILL.md that tells the model to rate limit itself, because skills are loaded dynamically and can be updated without restarting."],"question_id":"q5_skill_vs_plugin_vs_mcp_decision","related_micro_concepts":["openclaw_plugin_extension_architecture","openclaw_tool_orchestration_patterns","openclaw_system_prompt_tool_exposure"],"discrimination_explanation":"Cross-cutting enforcement belongs in code-level interception points (plugins/hooks), not in prompt text. Skills can influence behavior but cannot guarantee compliance across all tools. MCP can enforce only for tools behind that MCP bridge; it won’t cover native tools unless routed through it. 
Prompt text is not a hard gate."},{"difficulty":"mastery","correct_option_index":2.0,"question":"An operator says, “We’re safe because we run tools in a Docker sandbox, so we can disable approvals for speed.” In OpenClaw’s layered security model, what is the most accurate critique of this reasoning?","option_explanations":["Incorrect: Approvals are a safety/intent gate as well; they are not just for cost.","Incorrect: Schema validation ensures configuration/tool-call shape correctness; it does not replace human gating for dangerous operations.","Correct! Sandboxing bounds where code runs, but approvals/policy decide whether a risky action should happen at all.","Incorrect: Containers limit some host impact, but they do not inherently prevent destructive actions in the container/workspace or over the network."],"options":["Incorrect; approvals are primarily for cost control, while sandboxing is for security, so they are unrelated.","Correct only if tool schemas are validated with zod/typebox, because schema validation replaces approvals.","Incorrect; sandboxing reduces blast radius but doesn’t decide intent or prevent harmful actions within allowed scope, so approvals still matter for high-impact operations.","Correct; sandboxing eliminates the need for approvals because container isolation prevents any irreversible side effects."],"question_id":"q6_sandbox_vs_approval_tradeoff","related_micro_concepts":["openclaw_sandbox_policy_approvals","openclaw_tool_orchestration_patterns","openclaw_plugin_extension_architecture"],"discrimination_explanation":"Sandboxing is an isolation boundary, not a judgment layer. Even inside a container, an agent can delete workspace files, exfiltrate allowed data, or perform damaging actions within permitted capabilities. Approvals gate intent and high-impact actions. 
Schema validation prevents misconfig/type errors, not risky execution decisions."},{"difficulty":"mastery","correct_option_index":2.0,"question":"After installing several third-party skills, your system prompt becomes huge and tool use starts getting flaky: wrong tool selected, partial compliance, more retries. You want to restore reliability without deleting needed capability. Which change is most aligned with the course’s mechanisms and trade-offs?","option_explanations":["Incorrect: Disabling compaction increases context size and cost, and typically increases drift and confusion, not decreases it.","Incorrect: Cheaper models can be more error-prone with tools; this often worsens reliability under large prompts.","Correct! Minimize always-loaded context, keep tool schemas clean, and use scoped/on-demand instruction to reduce confusion and cost while preserving capability.","Incorrect: Chat history still consumes context and is less stable/governed than core prompt layers; it doesn’t solve bloat."],"options":["Disable compaction so the model always has full history, which prevents tool confusion.","Keep all skills always injected, but switch to a cheaper model so heartbeats and retries cost less.","Reduce always-injected prompt surface by keeping only critical orientation in core files, and move optional procedures to on-demand retrieval or narrowly scoped skills; validate tool schemas so the model sees a clean tool surface.","Move more instructions into chat history, because chat history is not part of the system prompt and therefore is cheaper."],"question_id":"q7_context_cost_vs_reliability","related_micro_concepts":["openclaw_system_prompt_tool_exposure","openclaw_skill_authoring_internals","openclaw_debugging_doctor_logging_ops"],"discrimination_explanation":"The failure mode is context overload plus messy tool exposure. The fix is to shrink stable prompt layers, keep contracts tight, and rely on targeted retrieval/activation so the model has less to misread. 
Switching to cheaper models often worsens tool reliability. More chat history and no compaction increase context further."},{"difficulty":"mastery","correct_option_index":2.0,"question":"A tool works when you run it from a local interface, but fails when triggered from a channel message. You need a fast debugging loop that respects OpenClaw’s architecture (adapter → gateway/session → routing → prompt/tools → approvals/policy). What is the best first diagnostic move?","option_explanations":["Incorrect: Skill edits may help later, but without evidence you can’t tell whether the failure is routing/policy/tool exposure.","Incorrect: Model switching can mask the issue and increase cost; it doesn’t address routing/policy mismatches that cause channel-only failures.","Correct! Logs and routing/session traces let you localize the failure to adapter, session routing, tool exposure, or policy/approval gating before changing anything.","Incorrect: Disabling policy is a high-risk debugging tactic; you should first confirm denials via logs and only adjust policy deliberately."],"options":["Immediately rewrite the SKILL.md to be more explicit, because most failures are prompt clarity issues.","Switch the channel to a stronger model, because model quality differences are the most common cause of channel-only failures.","Start by inspecting session/tool-call logs and routing decisions for that channel session (adapter normalization, session key, chosen agent/tools, and any approval/policy denials), then narrow the fault domain.","Disable allow/deny policy temporarily to confirm whether policy is the issue, then re-enable it later."],"question_id":"q8_debugging_by_evidence","related_micro_concepts":["openclaw_gateway_node_architecture","openclaw_sandbox_policy_approvals","openclaw_debugging_doctor_logging_ops"],"discrimination_explanation":"Channel-only failures are often routing, session isolation, tool availability, or policy/approval differences—not just prompt text. 
The fastest path is to look at evidence: logs/traces that show which agent/session handled the message, what tools were exposed, and whether execution was blocked. Rewriting prompts or disabling policy is slower and riskier."}],"is_public":true,"key_decisions":["Segment 1 [sGAWsL07oUY_36_337]: Chosen as the fastest, clearest gateway control-plane model to anchor routing and trust boundaries before touching prompts or tools.","Segment 2 [I_iSSYiW_d8_581_903]: Placed early to concretely explain how skills/tools become real prompt + callable capability, including context-cost pressure that drives risky choices.","Segment 3 [sGAWsL07oUY_337_641]: Selected to formalize the iterative tool-call loop, validation points, approvals, and MCP bridge location—core to reasoning about determinism and blast radius.","Segment 4 [aFQJYaornJ4_945_1315]: Used as the most concrete under-the-hood view of workspace prompt modularization and the SKILL.md-style contract, enabling safe skill design discussions without beginner scaffolding.","Segment 5 [sGAWsL07oUY_641_1044]: Positioned after skill authoring to introduce plugins/hooks as the ‘harder’ extensibility layer for enforcement, observability, and lifecycle-managed capabilities.","Segment 6 [sGAWsL07oUY_1203_1514]: Chosen to connect tool power (browser/CDP, config) to defense-in-depth controls (auth/DM pairing, allow/deny, approvals, sandbox) and misconfig prevention (schema validation).","Segment 7 [NO-bOryZoTE_373_688]: Finalized the course with ops reality—ingress control, separate accounts, and observability—so learners can extend safely and debug issues via runtime evidence, not guesswork."],"micro_concepts":[{"prerequisites":[],"learning_outcomes":["Map message flow from channel → gateway → agent run → tool execution","Explain why node pairing and capability declaration are critical trust boundaries"],"difficulty_level":"advanced","concept_id":"openclaw_gateway_node_architecture","name":"OpenClaw Gateway and Node 
Architecture","description":"Build a precise mental model of the Gateway as the long-lived control plane that owns channel sessions, and nodes as capability-advertising devices that execute privileged actions under pairing and policy boundaries.","sequence_order":0.0},{"prerequisites":["openclaw_gateway_node_architecture"],"learning_outcomes":["Predict when the agent can or cannot invoke a tool based on prompt and schema exposure","Explain prompt modes and why minimizing context overhead can improve reliability"],"difficulty_level":"advanced","concept_id":"openclaw_system_prompt_tool_exposure","name":"OpenClaw System Prompt Internals","description":"Understand how OpenClaw assembles a compact system prompt (tools, safety, workspace, skills list) and why tool availability is determined by both prompt text and structured tool schemas sent to the model.","sequence_order":1.0},{"prerequisites":["openclaw_gateway_node_architecture","openclaw_system_prompt_tool_exposure"],"learning_outcomes":["Design tool-call sequences that are observable, reversible, and resilient to partial failures","Choose the right execution surface (gateway vs node vs sandbox) for a given action"],"difficulty_level":"advanced","concept_id":"openclaw_tool_orchestration_patterns","name":"OpenClaw Tool Orchestration Patterns","description":"Learn the reliable “status → snapshot/describe → act → verify” patterns for browser, canvas, cron, and node-targeted tools, with emphasis on minimizing irreversible actions and improving determinism.","sequence_order":2.0},{"prerequisites":["openclaw_system_prompt_tool_exposure","openclaw_tool_orchestration_patterns"],"learning_outcomes":["Write skill instructions that reliably trigger, constrain tool use, and produce verifiable outputs","Identify common skill design failure modes (over-broad triggers, unsafe parameter passthrough, non-verifiable steps)"],"difficulty_level":"advanced","concept_id":"openclaw_skill_authoring_internals","name":"OpenClaw SKILL.md 
Authoring Internals","description":"Master how SKILL.md metadata and instructions shape agent behavior, including how to write concise, tool-oriented procedures that avoid ambiguity and reduce prompt-injection surface area.","sequence_order":3.0},{"prerequisites":["openclaw_gateway_node_architecture","openclaw_skill_authoring_internals"],"learning_outcomes":["Decide skill vs plugin based on trust, lifecycle, config validation needs, and runtime placement","Describe how manifests and schema validation reduce risk without executing extension code during validation"],"difficulty_level":"advanced","concept_id":"openclaw_plugin_extension_architecture","name":"OpenClaw Plugin Extension Architecture","description":"Understand plugins as in-process Gateway extensions that can register tools, RPC, commands, and bundled skills, and learn the decision framework for when a capability belongs in a skill, plugin, or external tool bridge.","sequence_order":4.0},{"prerequisites":["openclaw_tool_orchestration_patterns","openclaw_plugin_extension_architecture"],"learning_outcomes":["Explain what sandboxing does and does not protect, and how workspace access choices change risk","Reason about when approvals are required and how elevated execution changes the security posture"],"difficulty_level":"advanced","concept_id":"openclaw_sandbox_policy_approvals","name":"OpenClaw Sandboxing and Exec Approvals","description":"Learn how sandboxing, tool allow/deny policy, and exec approvals combine into hard enforcement layers that bound blast radius even when the model is confused, tricked, or maliciously prompted.","sequence_order":5.0},{"prerequisites":["openclaw_gateway_node_architecture","openclaw_skill_authoring_internals","openclaw_sandbox_policy_approvals"],"learning_outcomes":["Diagnose “works locally but not in channel” failures by separating routing/policy/auth/runtime causes","Design safe rollout habits for new skills/plugins using staged enablement and rollback 
thinking"],"difficulty_level":"advanced","concept_id":"openclaw_debugging_doctor_logging_ops","name":"OpenClaw Debugging and Operational Hardening","description":"Develop a production-grade debugging loop using diagnostics (doctor), logging levels, and targeted status probes to isolate failures in gateway runtime, channels, skills eligibility, nodes, and sandbox policy.","sequence_order":6.0}],"overall_coherence_score":8.7,"pedagogical_soundness_score":8.5,"prerequisites":["Comfort with event-driven systems (HTTP/WebSockets, adapters, routing)","Working knowledge of LLM tool/function calling (schemas, tool results, iterative loops)","Security fundamentals: least privilege, network exposure, supply-chain risk","Ability to read/edit Markdown/YAML-like config and interpret logs"],"rejected_segments_rationale":"Several high-quality security and ops segments were excluded due to time budget and anti-redundancy. Examples: (1) I_iSSYiW_d8_1338_1730 and XmweZ4fLkcI_0_314 provide strong threat models but would largely repeat the security framing already reinforced in segments 6–7. (2) Q7r--i9lLck_1179_1650 is excellent for production ops (logs/backups/drift) but would push the course over 40 minutes; we prioritized observability/ingress controls within budget. (3) FreeCodeCamp segments on gateway binding/doctor were not included because they introduce more “what is OpenClaw” and networking setup than this advanced course allows.","segment_thumbnail_urls":["https://i.ytimg.com/vi/sGAWsL07oUY/maxresdefault.jpg","https://i.ytimg.com/vi/I_iSSYiW_d8/maxresdefault.jpg","https://i.ytimg.com/vi_webp/aFQJYaornJ4/maxresdefault.webp","https://i.ytimg.com/vi_webp/NO-bOryZoTE/maxresdefault.webp"],"segments":[{"before_you_start":"You already know how to run OpenClaw, so we’ll skip basics and go straight to the control plane. 
In this segment, build a precise model of how channels normalize messages into the gateway, how sessions are created, and where trust boundaries start to matter.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/sGAWsL07oUY_36_337/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["Three-tier OpenClaw architecture (channels → gateway → agent runtime)","Gateway server as control plane (HTTP + WebSocket RPC)","Session manager and routing responsibilities","Plugin/hook system placement in the gateway","Gateway authentication modes (token/password/Tailscale/local-only)","Channel interface standardization and adapter pattern","Adding new channels via common interface"],"duration_seconds":300.48,"learning_outcomes":["Describe the three-tier OpenClaw architecture and responsibilities per tier","Explain why the gateway functions as the system’s control plane","Identify where routing, sessions, and extensibility hooks live in the gateway","Explain how a new messaging channel can be added without changing the rest of the system"],"micro_concept_id":"openclaw_gateway_node_architecture","prerequisites":["Basic web architecture (HTTP, WebSockets)","Understanding of adapters/interfaces in software design"],"quality_score":7.425000000000001,"segment_id":"sGAWsL07oUY_36_337","sequence_number":1.0,"title":"Gateway Control Plane, Routing, Trust","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"","overall_transition_score":10.0,"to_segment_id":"sGAWsL07oUY_36_337","pedagogical_progression_score":10.0,"vocabulary_consistency_score":10.0,"knowledge_building_score":10.0,"transition_explanation":"N/A for first segment"},"url":"https://www.youtube.com/watch?v=sGAWsL07oUY&t=36s","video_duration_seconds":1636.0},{"before_you_start":"Now that the gateway/session model is clear, zoom in on what the model actually sees. 
This segment shows how SKILL.md-style markdown is injected into the system prompt, how scripts become callable tools, and why context size and model choice can change safety and correctness.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/I_iSSYiW_d8_581_903/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["Context cost economics of always-on agents (heartbeat-driven API calls)","Model choice as a security control (capability/robustness vs cost)","Failure modes from weaker models (tool misread, prompt injection susceptibility)","Skills as plugin bundles (instructions + executable scripts)","Hot-load skill installation (watched skills folder)","SKILL/README-style markdown as prompt material (natural-language distribution)","Tool exposure pipeline (scripts become callable tools)","Extensibility vs safety trade-off (no review process, implicit trust)"],"duration_seconds":321.91999999999996,"learning_outcomes":["Describe how OpenClaw assembles an agent’s effective behavior from (a) base system prompt, (b) skill markdown instructions, and (c) exposed tool scripts","Reason about model selection as part of your security posture (not only a cost decision)","Identify where the primary trust boundary is violated when skills are hot-loaded from untrusted sources","Explain why “markdown-defined instructions” plus “tool scripts” creates both extreme extensibility and extreme supply-chain risk"],"micro_concept_id":"openclaw_system_prompt_tool_exposure","prerequisites":["Basic understanding of system prompts vs user content","Familiarity with tool-calling concepts (functions/tools exposed to an LLM)","General software supply-chain awareness (plugins/extensions as code execution)"],"quality_score":7.829999999999999,"segment_id":"I_iSSYiW_d8_581_903","sequence_number":2.0,"title":"Prompt Assembly: Skills, Tools, Context Cost","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"sGAWsL07oUY_36_337","overall_transition_score":8.6,"to_segment_id":"I_iSSYiW_d8_581_903","pedagogical_progression_score":8.5,"vocabulary_consistency_score":8.5,"knowledge_building_score":9.0,"transition_explanation":"Builds on the gateway mental model by explaining what the gateway/runtime must assemble before any tool call can happen: the system prompt and tool surface."},"url":"https://www.youtube.com/watch?v=I_iSSYiW_d8&t=581s","video_duration_seconds":1800.0},{"before_you_start":"You now know how tools become available to the model. Next, you need the execution semantics, because safe extension depends on where validation and approvals occur. This segment traces a message through the iterative tool-call loop, including MCP bridging and loop-guard safety.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/sGAWsL07oUY_337_641/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["End-to-end message handling steps (adapter → session → routing → prompt → LLM → postprocess)","System prompt construction inputs (agents.md, soul.md, tools, skills, history)","LLM call loop with iterative tool calls until termination","Built-in tools overview (bash, file IO, browser, sessions, canvas, cron, MCP bridge)","Tool-call validation and approval gates","Docker sandbox execution for isolation","Multi-agent isolation (per-agent model/tools/workspace) and cross-session messaging","Loop guard to prevent runaway agent-to-agent chains","Per-channel routing to different agent configurations"],"duration_seconds":304.72099999999995,"learning_outcomes":["Trace the OpenClaw execution path from inbound message to outbound response","Explain the iterative tool-call loop and how/when it terminates","Identify enforcement points for tool safety (validation, approvals, sandbox)","Design a multi-agent workflow while preserving isolation and preventing runaway loops","Explain how MCP-style tool bridges extend capabilities while increasing policy surface"],"micro_concept_id":"openclaw_tool_orchestration_patterns","prerequisites":["Familiarity with LLM APIs and tool/function calling","Basic understanding of containers (e.g., Docker) and sandboxing concepts"],"quality_score":7.6499999999999995,"segment_id":"sGAWsL07oUY_337_641","sequence_number":3.0,"title":"Tool-Call Loop, Approvals, and MCP","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"I_iSSYiW_d8_581_903","overall_transition_score":9.0,"to_segment_id":"sGAWsL07oUY_337_641","pedagogical_progression_score":8.5,"vocabulary_consistency_score":9.0,"knowledge_building_score":9.5,"transition_explanation":"Takes the ‘tools exist in prompt + schema’ idea and shows what happens operationally when the model actually tries to invoke them—step by step through the runtime loop."},"url":"https://www.youtube.com/watch?v=sGAWsL07oUY&t=337s","video_duration_seconds":1636.0},{"before_you_start":"With the prompt/tool loop in mind, the next step is to make extensions inspectable and governable. 
This segment maps OpenClaw’s behavior to its on-disk layout, and shows how workspace prompt files and SKILL.md-style contracts shape what the agent can do.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/aFQJYaornJ4_945_1315/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["On-disk project layout: agents/, sessions/, workspace/","Sessions stored as JSONL (persistent chat/event log)","Session isolation to prevent sub-agent pollution of main session","Workspace as modular system prompt split across files","Prompt components: soul (behavior), identity (persona/name), tools (capabilities)","Skill packaging: SKILL.md/skills.mmd as the skill’s behavioral/tool contract","Skill-level custom scripts and rules as extension points"],"duration_seconds":369.9382564102565,"learning_outcomes":["Locate where OpenClaw stores agent configuration vs session history vs workspace instructions","Explain how session isolation prevents sub-agent work from contaminating the main conversation memory","Describe how ‘soul/identity/tools’ act like a decomposed system prompt","Evaluate a skill by inspecting its SKILL.md/skills.mmd, rules, and helper scripts before enabling it"],"micro_concept_id":"openclaw_skill_authoring_internals","prerequisites":["Comfort navigating directories and reading/editing Markdown files","Understanding of why prompt instructions and tool access must be explicit","Basic familiarity with persistent logs / event streams (helpful)"],"quality_score":7.75,"segment_id":"aFQJYaornJ4_945_1315","sequence_number":4.0,"title":"Workspace Files and SKILL.md Contract","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"sGAWsL07oUY_337_641","overall_transition_score":8.5,"to_segment_id":"aFQJYaornJ4_945_1315","pedagogical_progression_score":8.3,"vocabulary_consistency_score":8.5,"knowledge_building_score":8.5,"transition_explanation":"Builds on the tool-call loop by showing where the durable instructions and skill contracts live, so you can change behavior by editing artifacts rather than ad-hoc chat prompts."},"url":"https://www.youtube.com/watch?v=aFQJYaornJ4&t=945s","video_duration_seconds":1923.0},{"before_you_start":"You can now reason about skills as prompt-time contracts. This segment adds the in-process extension surface: plugins and hooks. You’ll see where plugins register capabilities, and where hooks can intercept tool calls and sessions to enforce policy and improve observability.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/sGAWsL07oUY_641_1044/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["Session identity via composite keys (channel:user:conversation-type)","Session lifecycle (creation, active, compaction, pruning)","Prompt layering model (agents.md, soul.md, tools.md, skills, session context)","Context cost and large prompt sizes (~128k tokens mentioned)","Skill.md as an extensibility unit (runtime prompt injection)","Skill installation and self-extending skills concept","Plugin lifecycle (discovery, loading, registration, activation)","Hook system as interception layer (multiple hook points)","Governance hooks: rate limiting, structured logging, tool guards, compaction customization","Bundled hooks and custom hook extension"],"duration_seconds":402.159,"learning_outcomes":["Explain how session lifecycle and compaction manage context growth","Describe the system prompt layering strategy and where each file contributes","Design a skill.md that safely extends capabilities through prompt injection","Choose between skills vs plugins/hooks based on whether you need instruction changes or enforcement/interception","Identify hook points suitable for logging, rate limiting, and tool-guard policies"],"micro_concept_id":"openclaw_plugin_extension_architecture","prerequisites":["Prompt engineering fundamentals (system prompt vs context)","Understanding of plugin architectures and event hooks","Basic knowledge of token limits and context windows"],"quality_score":8.075,"segment_id":"sGAWsL07oUY_641_1044","sequence_number":5.0,"title":"Plugins and Hooks as Enforcement Layer","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"aFQJYaornJ4_945_1315","overall_transition_score":8.8,"to_segment_id":"sGAWsL07oUY_641_1044","pedagogical_progression_score":8.4,"vocabulary_consistency_score":8.7,"knowledge_building_score":9.0,"transition_explanation":"Extends the SKILL.md/file-based extensibility model into the stronger system-level layer—plugins/hooks—where you can enforce rules instead of hoping the model complies."},"url":"https://www.youtube.com/watch?v=sGAWsL07oUY&t=641s","video_duration_seconds":1636.0},{"before_you_start":"Now that you’ve seen how plugins and hooks can enforce behavior, we’ll make safety concrete. 
This segment walks through high-blast-radius tools like the browser, then ties them to hard controls: allow/deny policy, approvals, sandboxing, and schema-validated configuration.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/sGAWsL07oUY_1203_1514/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["Browser automation architecture (Playwright core + CDP)","Browser capabilities: navigation, DOM/snapshot extraction, execute JavaScript","Stateful automation: profiles, cookies, extensions, login persistence","Configuration system using JSON5 and six config sections","Override precedence model (defaults → config file → env file → process env)","Runtime config validation (typebox + zod)","CLI command tree for operational control (gateway/agent/channels/config/plugins/skills/browser/cron/tui/update)","Defense-in-depth security model (auth, DM pairing, tool approval, sandbox)","Rate limiting and allow/deny controls"],"duration_seconds":310.2703055555555,"learning_outcomes":["Assess why browser automation materially increases agent capability and risk","Explain OpenClaw’s config sections and how override precedence resolves conflicts","Use schema validation as an operational safety net for configuration changes","Map CLI capabilities to day-to-day operations (managing plugins/skills, browser, cron)","Describe how pairing and approvals fit into a defense-in-depth posture"],"micro_concept_id":"openclaw_sandbox_policy_approvals","prerequisites":["General familiarity with browser automation concepts","Operational experience with configuration management and environment variables","Basic security concepts (authn/authz, isolation, rate limiting)"],"quality_score":7.2250000000000005,"segment_id":"sGAWsL07oUY_1203_1514","sequence_number":6.0,"title":"Sandbox, Tool Policy, and Guardrails","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"sGAWsL07oUY_641_1044","overall_transition_score":8.8,"to_segment_id":"sGAWsL07oUY_1203_1514","pedagogical_progression_score":8.6,"vocabulary_consistency_score":8.6,"knowledge_building_score":9.0,"transition_explanation":"Takes extensibility from ‘adding capability’ to ‘bounding capability,’ showing how enforcement layers constrain what plugins/skills/tools can do in practice."},"url":"https://www.youtube.com/watch?v=sGAWsL07oUY&t=1203s","video_duration_seconds":1636.0},{"before_you_start":"At this point, you can extend OpenClaw and enforce guardrails. The remaining risk is operational: what you connect, what you ingest, and what you can’t see. This segment focuses on secure integrations and observability so debugging is driven by logs and traces, not intuition.","before_you_start_audio_url":"https://course-builder-course-assets.s3.us-east-1.amazonaws.com/audio/courses/course_1771914712/segments/NO-bOryZoTE_373_688/before-you-start.mp3","before_you_start_avatar_video_url":"","concepts_taught":["Tool/integration risk modeling for autonomous agents","Credential isolation via separate service accounts","Data-ingress control to mitigate prompt injection (trusted forwarding/filters)","Blocking direct inbound messages to the agent-controlled mailbox","API key limits and alerting to reduce blast radius","Observability as a safety feature: logs, sessions, tool calls, subagent tracking","Operational debugging: inspecting real-time runs and errors","Token/quota monitoring to manage continuous workloads"],"duration_seconds":314.984,"learning_outcomes":["Design safer integrations by isolating credentials into dedicated accounts instead of personal accounts","Implement an ingress-control pattern (trusted forwarding + filters) to reduce prompt-injection exposure","Apply key-limiting and alerting to reduce damage from leaks or misuse","Define an observability baseline (subagent tracking, tool-call audit logs, session replay) for debugging and safety reviews","Track token/quota usage to prevent cost surprises in continuous operation"],"micro_concept_id":"openclaw_debugging_doctor_logging_ops","prerequisites":["Working knowledge of OAuth/API keys and service accounts","Understanding of prompt injection as a threat class","Familiarity with logs/monitoring concepts (sessions, errors, dashboards)"],"quality_score":7.3999999999999995,"segment_id":"NO-bOryZoTE_373_688","sequence_number":7.0,"title":"Observability and Safe Integration Operations","transition_from_previous":{"suggested_bridging_content":"","from_segment_id":"sGAWsL07oUY_1203_1514","overall_transition_score":8.4,"to_segment_id":"NO-bOryZoTE_373_688","pedagogical_progression_score":8.3,"vocabulary_consistency_score":8.5,"knowledge_building_score":8.5,"transition_explanation":"Builds on sandbox/policy/approvals by adding the operational layer—ingress control and observability—so you can verify the controls are actually working under real inputs."},"url":"https://www.youtube.com/watch?v=NO-bOryZoTE&t=373s","video_duration_seconds":998.0}],"selection_strategy":"Prioritized mechanism-level segments that directly match the refined spec (gateway/control-plane + routing/trust boundaries; system prompt + tool exposure; extensibility via SKILL.md and plugins/hooks; sandbox/policy/approvals; ops observability). 
Built a single narrative from control-plane → prompt/tool loop → authoring/extending → enforcing safety → operating/debugging, while staying under 40 minutes and avoiding setup/product-tour content.","strengths":["Strong creator continuity across the core architecture (AI Depth School) for consistent vocabulary and mental models.","Time-efficient coverage of all required micro-concepts within 40 minutes, with minimal setup content.","Security-first framing is reinforced at every layer: prompt/tool exposure, execution loop, extension architecture, and ops controls."],"target_difficulty":"advanced","title":"Extend OpenClaw Safely Under the Hood","tradeoffs":[],"updated_at":"2026-03-05T08:40:11.028888+00:00","user_id":"google_109800265000582445084"}}