The App SDK was designed from the ground up to enable AI-assisted development. It uses React and TypeScript, tools which LLMs understand well. Documentation is available in numerous LLM-friendly formats. Additionally, we provide several other tools to make building with AI easier. This guide provides an overview of how to use these tools, as well as our advice on how to build the best possible apps with AI.

AGENTS.md and CLAUDE.md

AGENTS.md is a standard format for providing instructions to code-writing AI agents. It is supported by most of the major AI agents and IDEs, including Cursor, VSCode, Devin, Codex, Jules, Copilot and Zed. Every new app comes bootstrapped with a default AGENTS.md file. We additionally supply a CLAUDE.md file to provide instructions for Claude Code users. These files are pre-populated with best practices and context on the App SDK, and help prevent a range of common issues that can arise when building apps with AI. We also recommend modifying these files to provide additional context about your specific project.

Providing documentation context to AI

The documentation you are currently reading comes with numerous features to ensure that AI can read it too.

Markdown URLs

By appending .md to the end of any URL, you will be redirected to a Markdown version of the page in question. Try it on this page. This can be useful when you want to quickly provide a specific page of documentation to an AI agent. For example:
Update this action so that we display an error toast if the user is not subscribed to a premium plan.
Lookup how to use toasts here: https://docs.attio.com/sdk/notifications/show-toast.md

Context menu

At the top-right of every page, we provide a context menu with various options for reading the content with AI.
  • Copy page - Copies the content of the current page to your clipboard as Markdown.
  • View as Markdown - Redirects you to the Markdown URL for the current page.
  • Open in ChatGPT - Opens the current page in ChatGPT so that you can ask questions about the content.
  • Open in Claude - Opens the current page in Claude so that you can ask questions about the content.
  • Copy MCP Server - Copies the URL for the docs MCP server to your clipboard (see below).
  • Connect to Cursor - Opens Cursor settings to connect to the docs MCP server (see below).
  • Connect to VSCode - Opens VSCode settings to connect to the docs MCP server (see below).

The Attio Documentation MCP

Attio provides an MCP server so that agents may autonomously search our documentation. The URL for the MCP server is: https://docs.attio.com/mcp.
Through experimentation, we have found that the Documentation MCP is the most effective way to provide the content of this site to AI agents.

Enabling for Claude Code

By default, new apps are pre-configured to use the MCP server in Claude Code via a .mcp.json file. To enable it, please start a new Claude Code session and approve the server when prompted. For existing apps, add the following to a new or existing .mcp.json file:
{
  "mcpServers": {
    "attio-docs": {
      "type": "http",
      "url": "https://docs.attio.com/mcp"
    }
  }
}
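
Alternatively, you may be able to register the server from the command line. Flag names can vary between Claude Code versions, but recent versions accept:

claude mcp add --transport http attio-docs https://docs.attio.com/mcp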

Enabling in Cursor

By default, new apps are pre-configured to use the MCP server in Cursor via a .cursor/mcp.json file. To enable this MCP server, please head to Cursor Settings > Tools & MCP and then enable the Attio server. To enable the MCP server in Cursor for pre-existing apps, please click the “Connect to Cursor” button in the context menu and follow the instructions. You may also add or modify a .cursor/mcp.json file manually.
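
If you configure the file manually, the entry mirrors the .mcp.json shown above (this assumes Cursor's current mcpServers schema, which may change between versions):

{
  "mcpServers": {
    "attio-docs": {
      "url": "https://docs.attio.com/mcp"
    }
  }
}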

Enabling in VSCode

To enable the MCP server in VSCode, please click the “Connect to VSCode” button in the context menu and follow the instructions.
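
If you prefer to configure it by hand, VSCode reads MCP servers from a .vscode/mcp.json file. Assuming the current schema (a top-level "servers" object), a minimal entry looks like:

{
  "servers": {
    "attio-docs": {
      "type": "http",
      "url": "https://docs.attio.com/mcp"
    }
  }
}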

Context7

Context7 is an AI-first documentation repository and search tool. If you are already using the Context7 MCP to provide your agents with documentation from a variety of sources, note that Attio's docs are also well-indexed there. You can find the latest indexed version of the documentation at https://context7.com/websites/attio.

llms.txt and llms-full.txt

llms.txt is a proposed standard for providing information from the web to LLMs. Attio's docs provide two llms.txt files:
  • llms.txt - A concise summary of the content of the docs
  • llms-full.txt - The full content of the docs in a single file

The AI docs agent

docs.attio.com comes with a built-in AI agent to respond to your queries. To use the agent:
  1. Open the search menu by clicking the search field at the top of the page, or by using the keyboard shortcut Command + K (Mac) or Ctrl + K (Windows).
  2. Type your query into the search bar.
  3. Select the “Ask AI assistant” option from the menu.

Writing effective prompts

Attio’s AGENTS.md and CLAUDE.md files provide a baseline level of context and best practices for your agents. However, you will also need to write specific prompts for building the features particular to your app. The following tips will help you write these prompts effectively.

Describe features in terms of App SDK functionality

Before writing your prompt, make sure you understand the general functionality provided by the App SDK, and frame your prompt in terms of that functionality. For example, rather than asking for a feature that adds people to a sequence in your mail sequencing service, specify that you want a record action with an ‘Add to Sequence’ button.

Starter template context

The default template used when you bootstrap a new app comes with many useful code examples. Try explicitly asking your agent to look at any examples that are similar to the feature you are building.

Utilize a planning phase

Before committing to an implementation, ask your agent to write a plan. Many agents now include a built-in planning feature, but asking your agent to populate a PLAN.md or similar file can be just as effective, if not more so. Once you have a plan, you can modify it and provide feedback. When you are happy, ask your agent to implement the plan as a separate step.

Specify edge cases

An important part of building high-quality apps is ensuring they handle edge cases and errors. Call this out explicitly in your prompt, especially if you are aware of particular error cases in your design.

Provide especially clear context when communicating with external APIs

We have found that LLMs are particularly prone to hallucinating specific API interfaces. When calling external APIs, we always recommend providing exact API documentation in the form of examples, Markdown descriptions and OpenAPI specs. Additionally, we recommend reviewing all AI-generated API calls in detail to ensure correctness.

Rely on automated checks

New apps come with a range of commands for automatic linting, type checking and validation. We recommend encouraging your agents to run these checks so that they may fix their own mistakes.

Always review your work

AI can make mistakes. Always review your code carefully and run manual tests to ensure it works as expected. AI can be helpful here too, and we have found a lot of success in using AI not just as a code author, but also as a reviewer.