
Messages & Chat

Real-time bidirectional communication between your agents and their users

Messages are only available for interactive agents. Approval-only agents do not have a chat interface and cannot send or receive messages. See Agent types.

Overview

Messages are the primary way agents and humans communicate in Nod. Each agent has a conversation thread where messages appear in real time — just like a messaging app.

Users can send text, single or multiple images, and voice messages from the Nod app. Your agent receives each of these as a user_message event with the appropriate fields populated. How your system handles each type is up to you.

Message types

| Type | How it's sent | What your agent receives |
| --- | --- | --- |
| Text | User types a message in the chat | text field with the message content |
| Image | User attaches a single image | image_url field with a URL to the image |
| Multiple images | User attaches several images at once | image_urls field with an array of URLs |
| Voice message | User records and sends audio | text field with the transcription + audio_url with the original recording |
| Reply | User swipe-replies to a specific message | reply_to_id and reply_to_text fields with the quoted context |

All these fields come together in the user_message event. A message can combine text with media — for example, text + images, or a reply with an image.

Receiving messages

When a user sends a message, your agent receives a user_message event via WebSocket or webhook:

Text message
{
  "type": "user_message",
  "text": "Can you check the logs?",
  "sender_name": "Alice",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz789",
  "image_url": null,
  "image_urls": null,
  "audio_url": null
}
Image message
{
  "type": "user_message",
  "text": "What's wrong with this error?",
  "sender_name": "Alice",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz790",
  "image_url": "https://storage.asknod.ai/images/screenshot.png",
  "image_urls": null,
  "audio_url": null
}
Multiple images
{
  "type": "user_message",
  "text": "Compare these two designs",
  "sender_name": "Alice",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz791",
  "image_url": null,
  "image_urls": [
    "https://storage.asknod.ai/images/design-a.png",
    "https://storage.asknod.ai/images/design-b.png"
  ],
  "audio_url": null
}
Voice message
{
  "type": "user_message",
  "text": "Can you check the logs from yesterday?",
  "sender_name": "Alice",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz792",
  "image_url": null,
  "image_urls": null,
  "audio_url": "https://storage.asknod.ai/audio/voice-msg-123.m4a"
}
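A handler for these events can branch on which media fields are set. A minimal sketch in Python (the classify_user_message function and its return labels are illustrative, not part of the Nod API):

```python
import json

def classify_user_message(event: dict) -> str:
    """Return a label describing what media a user_message carries."""
    if event.get("type") != "user_message":
        raise ValueError("not a user_message event")
    if event.get("audio_url"):
        return "voice"        # text already holds the on-device transcription
    if event.get("image_urls"):
        return "multi-image"
    if event.get("image_url"):
        return "image"
    return "text"

# Example: the voice-message payload from above
event = json.loads("""{
  "type": "user_message",
  "text": "Can you check the logs from yesterday?",
  "sender_name": "Alice",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz792",
  "image_url": null,
  "image_urls": null,
  "audio_url": "https://storage.asknod.ai/audio/voice-msg-123.m4a"
}""")
print(classify_user_message(event))  # voice
```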

Voice messages are pre-transcribed

Voice messages are transcribed on the user's device before being sent. The text field contains the transcription, and audio_url contains the original recording. Your agent just reads the text field — no speech-to-text integration needed. API keys for transcription (OpenAI Whisper or Deepgram) are stored securely on the device and never touch Nod servers.

Media URLs are public

All image and audio URLs are publicly accessible — your agent can download them with a simple HTTP GET, no authentication required.
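Since no authentication is involved, fetching media is a one-liner. A sketch using only the standard library (the download_media helper is illustrative):

```python
import urllib.request

def download_media(url: str, dest: str) -> None:
    """Media URLs are public, so a plain GET suffices (no auth headers)."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```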

Sending messages

Text message

POST /api/agent/events
{
  "type": "message",
  "text": "Deployment complete! All tests passed."
}
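Building that POST from agent code is straightforward. A standard-library sketch; the base URL, token, and Bearer auth scheme here are placeholders, not confirmed parts of the Nod API:

```python
import json
import urllib.request
from typing import Optional

NOD_API = "https://api.asknod.ai"   # placeholder base URL
AGENT_TOKEN = "your-agent-token"    # placeholder credential

def build_text_message(text: str,
                       conversation_id: Optional[str] = None) -> urllib.request.Request:
    """Build the POST /api/agent/events request for a text message."""
    payload = {"type": "message", "text": text}
    if conversation_id:
        payload["conversation_id"] = conversation_id
    return urllib.request.Request(
        NOD_API + "/api/agent/events",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {AGENT_TOKEN}"},  # auth scheme assumed
        method="POST",
    )

# To actually send: urllib.request.urlopen(build_text_message("Deployment complete!"))
```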

Message with image

Agents can send images by including a base64-encoded image in the message. Nod uploads it to storage and delivers the public URL to the user's app.

{
  "type": "message",
  "text": "Here's the build output:",
  "image_base64": "data:image/png;base64,iVBORw0KGgo..."
}

Supported formats are PNG, JPEG, GIF, and WebP, with a limit of ~5MB per image. The text field is optional when sending an image.
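Encoding a local file into that payload can be sketched as follows (the image_message helper is illustrative; PNG is assumed for the data-URI media type):

```python
import base64
from typing import Optional

MAX_IMAGE_BYTES = 5 * 1024 * 1024   # docs cap images at ~5MB

def image_message(path: str, text: Optional[str] = None) -> dict:
    """Build a message payload with a base64 data-URI image."""
    with open(path, "rb") as f:
        raw = f.read()
    if len(raw) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds ~5MB limit")
    encoded = base64.b64encode(raw).decode()
    payload = {"type": "message",
               "image_base64": f"data:image/png;base64,{encoded}"}
    if text:
        payload["text"] = text          # text is optional with an image
    return payload
```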

Via WebSocket

Same fields work over WebSocket — just send as a JSON frame.

If you omit conversation_id, Nod uses the default conversation for your agent.
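Over WebSocket the same payload goes out as a single JSON text frame. A sketch using the third-party websockets package (the endpoint URL is a placeholder, and the library choice is an assumption):

```python
import asyncio
import json
from typing import Optional

WS_URL = "wss://api.asknod.ai/agent/ws"  # placeholder endpoint

def build_frame(text: str, conversation_id: Optional[str] = None) -> str:
    """Serialize a message as one JSON text frame."""
    payload = {"type": "message", "text": text}
    if conversation_id:
        # Omitting conversation_id targets the agent's default conversation
        payload["conversation_id"] = conversation_id
    return json.dumps(payload)

async def send_over_ws(text: str) -> None:
    import websockets  # third-party: pip install websockets
    async with websockets.connect(WS_URL) as ws:
        await ws.send(build_frame(text))

# asyncio.run(send_over_ws("Deployment complete!"))
```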

Reports

For long or structured content (summaries, analysis, comparisons), send a report. Reports render as a compact card in the chat. Users tap to open the full content in a clean, full-screen view with rendered markdown.

{
  "type": "message",
  "text": "# Weekly Summary\n\n## Metrics\n- Deploys: 12\n- Incidents: 0\n...",
  "report_title": "Weekly Summary",
  "report_description": "Key metrics and highlights for this week",
  "conversation_id": "conv_abc123"
}

The text field contains the full markdown body. report_title and report_description control what's shown on the collapsed card.
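A small builder makes the card/body split explicit (the report_message helper is illustrative; the field names come from the payload above):

```python
from typing import Optional

def report_message(title: str, description: str, markdown_body: str,
                   conversation_id: Optional[str] = None) -> dict:
    """Build a report payload: the card shows title/description,
    tapping it opens the full markdown body from text."""
    payload = {
        "type": "message",
        "text": markdown_body,
        "report_title": title,
        "report_description": description,
    }
    if conversation_id:
        payload["conversation_id"] = conversation_id
    return payload
```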

Task result messages

When a task completes, send the output as a task result message. These render as a special card in chat:

{
  "type": "message",
  "text": "Found 3 open PRs that need review...",
  "task_run_id": "run_abc123",
  "conversation_id": "conv_abc123"
}
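The only difference from a plain message is the task_run_id linking the output back to its run. A sketch (the task_result_message helper is illustrative):

```python
def task_result_message(task_run_id: str, summary: str,
                        conversation_id: str) -> dict:
    """Build a task result payload; task_run_id makes it render
    as the special task-result card in chat."""
    return {"type": "message",
            "text": summary,
            "task_run_id": task_run_id,
            "conversation_id": conversation_id}
```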

Conversations

Each agent has at least one conversation, automatically created when the agent sends its first message. Users see conversations in a list, similar to a messaging app inbox.

See Messages API for the full endpoint reference.