
Tools, Resources, and Prompts

The three building blocks of every App. Learn what tools, resources, and prompts are through practical examples you can copy and run.

Need help? Visit help.mctx.ai for instant answers.

Every App is built from three kinds of building blocks:

  • A tool is something an AI can do -- search your database, call an API, send an email
  • A resource is something an AI can read -- a document, a config file, live data
  • A prompt is a conversation starter -- a code review template, a debug workflow, a report generator

You can use any combination of these. Most Apps start with tools. This page shows you how to build each one, with examples you can copy into your project and run.

Tools

A tool is a function that an AI assistant can call. The AI sees your tool's name, description, and input schema, then decides when to use it based on what the user asks for.

Your first tool

import { createServer, T } from "@mctx-ai/app";

const app = createServer();

const searchDocs = ({ query }) => {
  // Your logic: search a database, call an API, process files...
  return `Found 3 results for "${query}"`;
};
searchDocs.description = "Search the documentation for a topic";
searchDocs.input = {
  query: T.string({ required: true, description: "What to search for" }),
};

app.tool("search_docs", searchDocs);

export default { fetch: app.fetch };

Three things make a tool work:

  1. The function -- receives validated input, returns a result. The return value can be a string, an object (auto-serialized to JSON), or an MCP content array.
  2. .description -- tells the AI what this tool does. Write it like you are explaining to a colleague: "Search the documentation for a topic" is better than "Executes a search query against the doc index."
  3. .input -- describes the parameters using the T type system. The framework validates input before your function runs, so you never have to check types yourself.

Binary content types (ImageContent, AudioContent per MCP spec) are planned for a future release.
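For reference, here is the same result expressed in each of the three supported return shapes (the content-array item follows the MCP text content shape; the values are illustrative):

```javascript
// One tool result, three equivalent return shapes.
const asString = 'Found 3 results for "hooks"';

// An object is auto-serialized to JSON before being sent to the client.
const asObject = { count: 3, query: "hooks" };

// An MCP content array gives you full control over the content items.
const asContentArray = [
  { type: "text", text: 'Found 3 results for "hooks"' },
];
```

Return a string for simple messages, an object when the AI should reason over structured data, and a content array when you need explicit control over the MCP payload.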

Tools with multiple parameters

Tools can accept any number of parameters. Use T to describe each one:

const createIssue = ({ title, body, priority }) => {
  // Create the issue in your system
  return { id: 42, title, status: "open" };
};
createIssue.description = "Create a new issue in the project tracker";
createIssue.input = {
  title: T.string({ required: true, description: "Issue title" }),
  body: T.string({ description: "Detailed description" }),
  priority: T.string({
    enum: ["low", "medium", "high"],
    default: "medium",
    description: "How urgent this is",
  }),
};

app.tool("create_issue", createIssue);

The T type system supports strings, numbers, booleans, arrays, and objects. Each type accepts options like required, default, description, and type-specific constraints. See the Framework API Reference for every option.
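Conceptually, the validation the framework performs before your function runs resembles this sketch (a hypothetical illustration of required checks, defaults, and enums; not the framework's actual code):

```javascript
// Minimal sketch of pre-call validation against a T-style schema.
const schema = {
  title: { type: "string", required: true },
  priority: { type: "string", enum: ["low", "medium", "high"], default: "medium" },
};

function validate(schema, input) {
  const out = {};
  for (const [key, spec] of Object.entries(schema)) {
    let value = input[key];
    if (value === undefined) {
      if (spec.required) throw new Error(`Missing required parameter: ${key}`);
      value = spec.default; // fall back to the declared default, if any
    }
    if (value !== undefined && typeof value !== spec.type)
      throw new Error(`${key} must be a ${spec.type}`);
    if (spec.enum && value !== undefined && !spec.enum.includes(value))
      throw new Error(`${key} must be one of ${spec.enum.join(", ")}`);
    out[key] = value;
  }
  return out;
}

// validate(schema, { title: "Fix login bug" })
// → { title: "Fix login bug", priority: "medium" }
```

Because this runs before your function is called, your tool body can safely destructure its parameters without defensive checks.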

Async tools

If your tool calls an external API or does anything asynchronous, use an async function:

const getWeather = async ({ city }) => {
  const apiKey = process.env.WEATHER_API_KEY; // Set in the mctx dashboard under environment variables
  const response = await fetch(`https://api.weather.com/current?city=${city}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const data = await response.json();
  return { temperature: data.temp, conditions: data.description };
};
getWeather.description = "Get current weather for a city";
getWeather.input = {
  city: T.string({ required: true, description: "City name" }),
};

app.tool("get_weather", getWeather);

Objects returned from tools are automatically serialized to JSON. The AI receives structured data it can reason about.

Tool annotations

Annotations are hints you attach to a tool to tell AI clients how safe and consequential it is. Clients use them to decide whether to ask the user for permission before calling the tool, and to show appropriate safety UI.

const getWeather = async ({ city }) => {
  const response = await fetch(`https://api.weather.com/current?city=${city}`);
  const data = await response.json();
  return { temperature: data.temp, conditions: data.description };
};
getWeather.description = "Get current weather for a city";
getWeather.input = {
  city: T.string({ required: true, description: "City name" }),
};
getWeather.annotations = {
  readOnlyHint: true, // only reads data, no side effects
  destructiveHint: false, // cannot destroy anything
  openWorldHint: true, // calls an external HTTP API
};

app.tool("get_weather", getWeather);

The four hints:

| Hint | Type | Default | Meaning |
| --- | --- | --- | --- |
| readOnlyHint | boolean | false | Tool only reads data -- no writes, creates, or deletes |
| destructiveHint | boolean | true | Tool can permanently destroy data |
| openWorldHint | boolean | true | Tool calls external systems (HTTP APIs, databases, files) |
| idempotentHint | boolean | false | Calling the tool multiple times with the same input produces the same result |

Defaults are pessimistic. If you do not set an annotation, clients assume the worst case: writes are possible, data could be destroyed, external services are involved. Always set all four explicitly.

Decision checklist for each tool:

  1. Does it write, create, update, or delete anything? If no, set readOnlyHint: true
  2. Can it permanently destroy data (delete records, drop tables, overwrite files)? If no, set destructiveHint: false
  3. Does it call external services (APIs, databases, file systems)? If yes, set openWorldHint: true
  4. Does calling it multiple times with the same input always produce the same result? If yes, set idempotentHint: true
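Walking the create_issue tool from earlier through this checklist gives the following annotations (the reasoning is in the comments):

```javascript
// create_issue: creates records (1: not read-only), cannot permanently
// destroy existing data (2), calls an external tracker over HTTP (3),
// and calling it twice creates two issues (4: not idempotent).
const createIssueAnnotations = {
  readOnlyHint: false,
  destructiveHint: false,
  openWorldHint: true,
  idempotentHint: false,
};
```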

Common patterns:

| Tool type | readOnlyHint | destructiveHint | openWorldHint | idempotentHint |
| --- | --- | --- | --- | --- |
| Read-only API query (weather, search) | true | false | true | true |
| Read-only local computation | true | false | false | true |
| Creates a resource (issue, record) | false | false | true | false |
| Deletes or modifies data | false | true | true | false |

Annotations are advisory -- they help clients make better decisions, but clients are not required to enforce them. A readOnlyHint: true tool is still responsible for not writing data.

Long-running tools with progress

Some operations take time. You can report progress so the AI client can show a status indicator:

import { createProgress } from "@mctx-ai/app";

const analyzeRepo = function* ({ repoUrl }) {
  const step = createProgress(3);

  yield step(); // 1/3 complete
  // ... clone and scan the repo

  yield step(); // 2/3 complete
  // ... analyze code patterns

  yield step(); // 3/3 complete
  // ... generate summary

  return "Analysis complete: 47 files, 12 potential improvements found.";
};
analyzeRepo.description = "Analyze a GitHub repository for code quality";
analyzeRepo.input = {
  repoUrl: T.string({ required: true, description: "GitHub repository URL" }),
};

app.tool("analyze_repo", analyzeRepo);

Use a generator function (function*) and yield progress steps. The framework tracks progress as the generator yields. Note: in the current HTTP transport, progress steps are tracked internally but not streamed mid-request -- the final result is returned when all steps complete.
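A helper like createProgress can be pictured as follows -- this is a guess at its observable behavior based on the example above, not the library's real source:

```javascript
// Sketch: createProgress(total) returns a step function that advances an
// internal counter each call and reports it against the total.
function createProgress(total) {
  let completed = 0;
  return () => ({ progress: ++completed, total }); // e.g. { progress: 1, total: 3 }
}
```

Each yield in the generator hands one of these step objects to the framework, which is how it knows how far along the operation is.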

Resources

A resource is data that an AI assistant can read. Unlike tools (which the AI calls), resources are pulled in as context -- the AI client decides when to include them in a conversation.

Think of resources as files the AI can open: documentation, configs, database schemas, live dashboards.

Static resources

A static resource has a fixed URI and always returns the same kind of content:

const readme = () => "# My Project\n\nThis project does amazing things.";
readme.mimeType = "text/plain";

app.resource("docs://readme", readme);

The URI (docs://readme) is how the AI client refers to this resource. The mimeType tells the client how to interpret the content.

Dynamic resources

Use a URI template when the resource content depends on a parameter:

const userProfile = ({ userId }) => {
  return JSON.stringify({
    id: userId,
    name: "Jane Smith",
    role: "engineer",
  });
};
userProfile.mimeType = "application/json";

app.resource("users://{userId}/profile", userProfile);

The {userId} placeholder follows RFC 6570 URI template syntax. When an AI client requests users://42/profile, the framework extracts 42 as the userId and passes it to your function.
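The extraction step can be sketched for simple single-segment variables like this (real RFC 6570 matching supports more operators; this hypothetical helper is not the framework's implementation):

```javascript
// Match a URI against a {name}-style template and extract the variables.
function matchTemplate(template, uri) {
  const names = [];
  const pattern = template.replace(/\{(\w+)\}/g, (_, name) => {
    names.push(name); // remember variable names in order of appearance
    return "([^/]+)"; // each variable matches one path segment
  });
  const match = new RegExp(`^${pattern}$`).exec(uri);
  if (!match) return null;
  return Object.fromEntries(names.map((name, i) => [name, match[i + 1]]));
}

// matchTemplate("users://{userId}/profile", "users://42/profile")
// → { userId: "42" }
```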

When to use resources vs tools

| Use a resource when... | Use a tool when... |
| --- | --- |
| The AI needs background context | The AI needs to take an action |
| The data is read-only | The operation has side effects |
| You are exposing documents or schemas | You are searching, creating, or modifying something |

Prompts

A prompt is a pre-built conversation template. Users invoke prompts explicitly (through slash commands or menu options in their AI client), making them great for common workflows like code reviews, debugging sessions, or report generation.

Single-message prompts

The simplest prompt returns a string:

const codeReview = ({ code, language }) =>
  `Review this ${language} code for bugs, performance issues, and style:\n\n${code}`;

codeReview.description = "Review code for quality issues";
codeReview.input = {
  code: T.string({ required: true, description: "The code to review" }),
  language: T.string({
    default: "JavaScript",
    description: "Programming language",
  }),
};

app.prompt("code-review", codeReview);

When a user invokes this prompt, the AI receives your message as the starting context for the conversation.
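Concretely, invoking code-review yields a prompt result along these lines (the message shape follows the MCP spec; the exact text and framework output here are illustrative assumptions):

```javascript
// Hypothetical prompts/get result for the code-review prompt.
const promptResult = {
  description: "Review code for quality issues",
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: "Review this JavaScript code for bugs, performance issues, and style:\n\nfunction add(a, b) { return a + b; }",
      },
    },
  ],
};
```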

Multi-message prompts

For more complex workflows, use conversation() to build a multi-message exchange:

import { conversation } from "@mctx-ai/app";

const debugSession = ({ error, stackTrace }) =>
  conversation(({ user, ai }) => [
    user.say(`I am seeing this error:\n\n${error}\n\nStack trace:\n${stackTrace}`),
    ai.say(
      "I will analyze the error and stack trace to identify the root cause. Let me start by examining the error message and working through the call stack.",
    ),
  ]);

debugSession.description = "Start a guided debugging session";
debugSession.input = {
  error: T.string({ required: true, description: "The error message" }),
  stackTrace: T.string({ description: "Full stack trace if available" }),
};

app.prompt("debug", debugSession);

The conversation() builder gives you user and ai roles. You can also attach data with user.attach(data, mimeType) or embed resources with user.embed("resource://uri").

Putting it all together

Here is a server that uses all three building blocks:

import { createServer, T, conversation } from "@mctx-ai/app";

const app = createServer({
  instructions:
    "A project management server. Use search_tasks to find work items, read the project roadmap for context, and use the standup prompt template for daily updates.",
});

// Tool: search and create tasks
const searchTasks = ({ query, status }) => {
  return [
    { id: 1, title: "Fix login bug", status: "in_progress" },
    { id: 2, title: "Add dark mode", status: "backlog" },
  ];
};
searchTasks.description = "Search for tasks by keyword and status";
searchTasks.input = {
  query: T.string({ required: true, description: "Search keywords" }),
  status: T.string({
    enum: ["backlog", "in_progress", "done"],
    description: "Filter by status",
  }),
};
app.tool("search_tasks", searchTasks);

// Resource: project roadmap
const roadmap = () => "## Q1 Goals\n- Ship v2.0\n- Reach 1000 users";
roadmap.mimeType = "text/plain";
app.resource("project://roadmap", roadmap);

// Prompt: daily standup template
const standup = ({ yesterday, today, blockers }) =>
  conversation(({ user }) => [
    user.say(
      `Generate a standup summary:\n- Yesterday: ${yesterday}\n- Today: ${today}\n- Blockers: ${blockers || "None"}`,
    ),
  ]);
standup.description = "Generate a formatted daily standup update";
standup.input = {
  yesterday: T.string({
    required: true,
    description: "What you did yesterday",
  }),
  today: T.string({ required: true, description: "What you plan to do today" }),
  blockers: T.string({ description: "Anything blocking your progress" }),
};
app.prompt("standup", standup);

export default { fetch: app.fetch };

When this App deploys, mctx automatically detects that it has tools, resources, and prompts, and advertises all three capabilities to AI clients. No configuration needed.

Debugging your App

Structured logging

Use console.* methods to trace what your server is doing. Logs appear in real-time in your server's dashboard:

const searchDocs = ({ query }) => {
  console.log("[INFO] Searching for: " + query);
  const results = runSearch(query); // your search logic
  console.log("[INFO] Found " + results.length + " results");
  return results;
};

Tip: View logs in real-time from your server's dashboard page -- open the logs modal and trigger a request. See Server Logs for logging best practices, how to choose log levels, and how to send logs to external services for persistent storage.

Local development

Use the built-in dev server while building:

npx mctx-dev index.js

Or test with the MCP Inspector to see exactly what your server sends and receives:

npx @modelcontextprotocol/inspector

Example server

The example-app is a template repository on GitHub that demonstrates all of these patterns in a single working project. Use it as a template for your own App: run the interactive setup.sh script to customize the project, then deploy.
