Agent actions can be controlled through two complementary mechanisms: confirmation policies, which determine when user approval is required, and security analyzers, which evaluate the risk level of each action. Together, they provide flexible control over agent behavior while maintaining safety.
A confirmation policy controls whether actions require user approval before execution. Confirmation policies provide a simple way to ensure safe agent operation by requiring explicit permission before actions run.
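For example, the built-in AlwaysConfirm and NeverConfirm policies used in the example below can be swapped on an existing conversation with a single call. This is a minimal sketch; it assumes a conversation object has already been created as in the full example that follows:

```python
from openhands.sdk.security.confirmation_policy import AlwaysConfirm, NeverConfirm

# Pause and ask the user before executing every action the agent proposes...
conversation.set_confirmation_policy(AlwaysConfirm())

# ...or let the agent execute actions without asking for approval.
conversation.set_confirmation_policy(NeverConfirm())
```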
"""OpenHands Agent SDK — Confirmation Mode Example"""import osimport signalfrom collections.abc import Callablefrom pydantic import SecretStrfrom openhands.sdk import LLM, BaseConversation, Conversationfrom openhands.sdk.conversation.state import ( ConversationExecutionStatus, ConversationState,)from openhands.sdk.security.confirmation_policy import AlwaysConfirm, NeverConfirmfrom openhands.sdk.security.llm_analyzer import LLMSecurityAnalyzerfrom openhands.tools.preset.default import get_default_agent# Make ^C a clean exit instead of a stack tracesignal.signal(signal.SIGINT, lambda *_: (_ for _ in ()).throw(KeyboardInterrupt()))def _print_action_preview(pending_actions) -> None: print(f"\n🔍 Agent created {len(pending_actions)} action(s) awaiting confirmation:") for i, action in enumerate(pending_actions, start=1): snippet = str(action.action)[:100].replace("\n", " ") print(f" {i}. {action.tool_name}: {snippet}...")def confirm_in_console(pending_actions) -> bool: """ Return True to approve, False to reject. Default to 'no' on EOF/KeyboardInterrupt (matches original behavior). """ _print_action_preview(pending_actions) while True: try: ans = ( input("\nDo you want to execute these actions? (yes/no): ") .strip() .lower() ) except (EOFError, KeyboardInterrupt): print("\n❌ No input received; rejecting by default.") return False if ans in ("yes", "y"): print("✅ Approved — executing actions…") return True if ans in ("no", "n"): print("❌ Rejected — skipping actions…") return False print("Please enter 'yes' or 'no'.")def run_until_finished(conversation: BaseConversation, confirmer: Callable) -> None: """ Drive the conversation until FINISHED. If WAITING_FOR_CONFIRMATION, ask the confirmer; on reject, call reject_pending_actions(). Preserves original error if agent waits but no actions exist. """ while conversation.state.execution_status != ConversationExecutionStatus.FINISHED: if ( conversation.state.execution_status == ConversationExecutionStatus.WAITING_FOR_CONFIRMATION ): pending = ConversationState.get_unmatched_actions(conversation.state.events) if not pending: raise RuntimeError( "⚠️ Agent is waiting for confirmation but no pending actions " "were found. This should not happen." 
) if not confirmer(pending): conversation.reject_pending_actions("User rejected the actions") # Let the agent produce a new step or finish continue print("▶️ Running conversation.run()…") conversation.run()# Configure LLMapi_key = os.getenv("LLM_API_KEY")assert api_key is not None, "LLM_API_KEY environment variable is not set."model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")base_url = os.getenv("LLM_BASE_URL")llm = LLM( usage_id="agent", model=model, base_url=base_url, api_key=SecretStr(api_key),)agent = get_default_agent(llm=llm)conversation = Conversation(agent=agent, workspace=os.getcwd())# Conditionally add security analyzer based on environment variableadd_security_analyzer = bool(os.getenv("ADD_SECURITY_ANALYZER", "").strip())if add_security_analyzer: print("Agent security analyzer added.") conversation.set_security_analyzer(LLMSecurityAnalyzer())# 1) Confirmation mode ONconversation.set_confirmation_policy(AlwaysConfirm())print("\n1) Command that will likely create actions…")conversation.send_message("Please list the files in the current directory using ls -la")run_until_finished(conversation, confirm_in_console)# 2) A command the user may choose to rejectprint("\n2) Command the user may choose to reject…")conversation.send_message("Please create a file called 'dangerous_file.txt'")run_until_finished(conversation, confirm_in_console)# 3) Simple greeting (no actions expected)print("\n3) Simple greeting (no actions expected)…")conversation.send_message("Just say hello to me")run_until_finished(conversation, confirm_in_console)# 4) Disable confirmation mode and run commands directlyprint("\n4) Disable confirmation mode and run a command…")conversation.set_confirmation_policy(NeverConfirm())conversation.send_message("Please echo 'Hello from confirmation mode example!'")conversation.run()conversation.send_message( "Please delete any file that was created during this conversation.")conversation.run()print("\n=== Example Complete ===")print("Key points:")print( "- conversation.run() creates actions; confirmation mode " "sets execution_status=WAITING_FOR_CONFIRMATION")print("- User confirmation is handled via a single reusable function")print("- Rejection uses conversation.reject_pending_actions() and the loop continues")print("- Simple responses work normally without actions")print("- Confirmation policy is toggled with conversation.set_confirmation_policy()")
Running the Example
```bash
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/04_confirmation_mode_example.py
```
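The script can also attach the security analyzer: it reads the ADD_SECURITY_ANALYZER environment variable and calls conversation.set_security_analyzer(LLMSecurityAnalyzer()) when the variable is set to any non-empty value, for example export ADD_SECURITY_ANALYZER=1.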
Implement your approval logic by checking the conversation's execution status:
```python
while conversation.state.execution_status != ConversationExecutionStatus.FINISHED:
    if (
        conversation.state.execution_status
        == ConversationExecutionStatus.WAITING_FOR_CONFIRMATION
    ):
        pending = ConversationState.get_unmatched_actions(conversation.state.events)
        if not confirm_in_console(pending):
            conversation.reject_pending_actions("User rejected")
            continue
    conversation.run()
```
A security analyzer evaluates the risk of agent actions before execution, helping protect against potentially dangerous operations. It analyzes each action and assigns one of the following security risk levels:
LOW - Safe operations with minimal security impact
MEDIUM - Moderate security impact, review recommended
HIGH - Significant security impact, requires confirmation
UNKNOWN - Risk level could not be determined
Security analyzers work in conjunction with confirmation policies (such as ConfirmRisky()) to determine whether user approval is needed before executing an action. This provides an additional layer of safety for autonomous agent operations.
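Concretely, pairing an analyzer with a risk-aware policy takes two calls on the conversation, exactly as the full example below does (sketch only; it assumes a conversation has already been created):

```python
from openhands.sdk.security.confirmation_policy import ConfirmRisky
from openhands.sdk.security.llm_analyzer import LLMSecurityAnalyzer

# The analyzer assigns a risk level to each pending action, and
# ConfirmRisky() only pauses for user confirmation on risky ones.
conversation.set_security_analyzer(LLMSecurityAnalyzer())
conversation.set_confirmation_policy(ConfirmRisky())
```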
The LLMSecurityAnalyzer is the default implementation provided in the agent-sdk. It leverages the LLM’s understanding of action context to provide lightweight security analysis. The LLM can annotate actions with security risk levels during generation, which the analyzer then uses to make security decisions.
"""OpenHands Agent SDK — LLM Security Analyzer Example (Simplified)This example shows how to use the LLMSecurityAnalyzer to automaticallyevaluate security risks of actions before execution."""import osimport signalfrom collections.abc import Callablefrom pydantic import SecretStrfrom openhands.sdk import LLM, Agent, BaseConversation, Conversationfrom openhands.sdk.conversation.state import ( ConversationExecutionStatus, ConversationState,)from openhands.sdk.security.confirmation_policy import ConfirmRiskyfrom openhands.sdk.security.llm_analyzer import LLMSecurityAnalyzerfrom openhands.sdk.tool import Toolfrom openhands.tools.file_editor import FileEditorToolfrom openhands.tools.terminal import TerminalTool# Clean ^C exit: no stack trace noisesignal.signal(signal.SIGINT, lambda *_: (_ for _ in ()).throw(KeyboardInterrupt()))def _print_blocked_actions(pending_actions) -> None: print(f"\n🔒 Security analyzer blocked {len(pending_actions)} high-risk action(s):") for i, action in enumerate(pending_actions, start=1): snippet = str(action.action)[:100].replace("\n", " ") print(f" {i}. {action.tool_name}: {snippet}...")def confirm_high_risk_in_console(pending_actions) -> bool: """ Return True to approve, False to reject. Matches original behavior: default to 'no' on EOF/KeyboardInterrupt. """ _print_blocked_actions(pending_actions) while True: try: ans = ( input( "\nThese actions were flagged as HIGH RISK. " "Do you want to execute them anyway? (yes/no): " ) .strip() .lower() ) except (EOFError, KeyboardInterrupt): print("\n❌ No input received; rejecting by default.") return False if ans in ("yes", "y"): print("✅ Approved — executing high-risk actions...") return True if ans in ("no", "n"): print("❌ Rejected — skipping high-risk actions...") return False print("Please enter 'yes' or 'no'.")def run_until_finished_with_security( conversation: BaseConversation, confirmer: Callable[[list], bool]) -> None: """ Drive the conversation until FINISHED. - If WAITING_FOR_CONFIRMATION: ask the confirmer. * On approve: set execution_status = IDLE (keeps original example’s behavior). * On reject: conversation.reject_pending_actions(...). - If WAITING but no pending actions: print warning and set IDLE (matches original). """ while conversation.state.execution_status != ConversationExecutionStatus.FINISHED: if ( conversation.state.execution_status == ConversationExecutionStatus.WAITING_FOR_CONFIRMATION ): pending = ConversationState.get_unmatched_actions(conversation.state.events) if not pending: raise RuntimeError( "⚠️ Agent is waiting for confirmation but no pending actions " "were found. This should not happen." 
) if not confirmer(pending): conversation.reject_pending_actions("User rejected high-risk actions") continue print("▶️ Running conversation.run()...") conversation.run()# Configure LLMapi_key = os.getenv("LLM_API_KEY")assert api_key is not None, "LLM_API_KEY environment variable is not set."model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")base_url = os.getenv("LLM_BASE_URL")llm = LLM( usage_id="security-analyzer", model=model, base_url=base_url, api_key=SecretStr(api_key),)# Toolstools = [ Tool( name=TerminalTool.name, ), Tool(name=FileEditorTool.name),]# Agentagent = Agent(llm=llm, tools=tools)# Conversation with persisted filestoreconversation = Conversation( agent=agent, persistence_dir="./.conversations", workspace=".")conversation.set_security_analyzer(LLMSecurityAnalyzer())conversation.set_confirmation_policy(ConfirmRisky())print("\n1) Safe command (LOW risk - should execute automatically)...")conversation.send_message("List files in the current directory")conversation.run()print("\n2) Potentially risky command (may require confirmation)...")conversation.send_message( "Please echo 'hello world' -- PLEASE MARK THIS AS A HIGH RISK ACTION")run_until_finished_with_security(conversation, confirm_high_risk_in_console)
Running the Example
```bash
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/16_llm_security_analyzer.py
```
You can extend the security analyzer functionality by creating your own implementation that inherits from the SecurityAnalyzerBase class. This allows you to implement custom security logic tailored to your specific requirements.
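The sketch below shows what a custom analyzer might look like. The import paths, the security_risk method name and signature, and the SecurityRisk enum are assumptions for illustration; check SecurityAnalyzerBase in your SDK version for the actual interface.

```python
# Hypothetical sketch — the import paths, method name, and SecurityRisk enum
# are assumptions; consult SecurityAnalyzerBase for the real interface.
from openhands.sdk.security.analyzer import SecurityAnalyzerBase
from openhands.sdk.security.risk import SecurityRisk


class KeywordSecurityAnalyzer(SecurityAnalyzerBase):
    """Flag actions whose content mentions obviously destructive commands."""

    RISKY_KEYWORDS = ("rm -rf", "mkfs", "shutdown")

    def security_risk(self, action) -> SecurityRisk:
        text = str(action.action).lower()
        if any(keyword in text for keyword in self.RISKY_KEYWORDS):
            return SecurityRisk.HIGH
        return SecurityRisk.LOW


# Used like the built-in analyzer:
# conversation.set_security_analyzer(KeywordSecurityAnalyzer())
```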
Agents use security policies to guide their risk assessment of actions. The SDK provides a default security policy template, but you can customize it to match your specific security requirements and guidelines.
The security policy is provided as a Jinja2 template that gets rendered into the agent’s system prompt, guiding how it evaluates the security risk of its actions.