How Do You Integrate with the Salesforce API Using Agent Connectors?

Salesforce agent integrations often fail on infrastructure details. External AI agents built with off-the-shelf agent frameworks or custom code need API access, OAuth 2.0 handling, schema normalization, and controlled data access. 

A managed connector covers much of that work, so your agent reads and writes Salesforce records without custom API plumbing.

TL;DR

  • Managed agent connectors absorb the OAuth 2.0, pagination, and schema normalization work, so an external AI agent can read and write Salesforce without custom API plumbing.

  • Three REST-layer traps cause most first-integration failures: Salesforce omits expires_in from OAuth token responses, SOQL responses come back in 2,000-record batches that require nextRecordsUrl pagination, and writes need required-field validation before submission.

  • Permission enforcement depends on the path: standard REST calls run as the authenticated user and respect their profile and FLS automatically, but custom Apex the agent invokes runs in system mode by default — opt into enforcement with WITH USER_MODE or Security.stripInaccessible().

  • Build toward production with a dedicated connected app and service account, narrow scopes (api plus refresh_token/offline_access), and a scoped read against a single object before expanding to writes.

What Are the Main Ways to Connect an External AI Agent to Salesforce?

External AI agents usually connect to Salesforce through a few common approaches. Each one comes with different setup speed, maintenance demands, and long-term tradeoffs.

| Pathway | How It Works | Best For | Key Limitation |
| --- | --- | --- | --- |
| Agent connector (connector-managed infrastructure) | A platform handles OAuth 2.0, schema normalization, and API calls; your agent uses a simple software development kit (SDK) or connection abstraction | Teams that want to reduce custom Salesforce API work | Depends on the connector platform's Salesforce object coverage |
| Direct framework tool | A framework-specific tool wrapper makes REST calls to Salesforce on behalf of the agent | Prototypes and single-agent setups where the framework handles orchestration | No built-in rate limiting, pagination, or permission enforcement; production setup is manual |
| Model Context Protocol (MCP) server | An MCP server exposes Salesforce operations as tools that any MCP-compatible agent can discover and call | Multi-agent systems or IDE-integrated AI assistants needing standardized tool discovery | Field-Level Security (FLS) enforcement through MCP can vary by implementation; governance tooling is still evolving |

Salesforce's Agentforce is for building agents inside the Salesforce ecosystem. Agents built outside Salesforce with external frameworks or custom code connect through agent connectors, direct API tools, or MCP servers.

Agent Connectors vs. Direct API Integration

When you call Salesforce's REST API directly, you own every layer of the integration. A managed connector takes most of that layer off your plate. The tradeoffs break down cleanly:

| Concern | Direct API Integration | Managed Agent Connector |
| --- | --- | --- |
| OAuth 2.0 | You implement the handshake, store refresh tokens, and write refresh logic. Because Salesforce's token response does not include `expires_in`, time-based refresh needs a timeout value configured out of band. | The connector completes OAuth once and manages token refresh after that, typically via error-driven refresh on `INVALID_SESSION_ID` 401s. |
| Pagination | You detect `done: false`, follow each `nextRecordsUrl`, and assemble results across batches. Miss this and you silently return partial data with a 200 status. | The connector traverses cursors and returns complete result sets to the agent. |
| Rate limits | You read `Retry-After` headers and implement exponential backoff per endpoint. | Backoff and retry are built in, with visibility into when limits are hit. |
| Schema | You normalize `__c` suffixes on custom fields and relationship traversal paths (e.g., `Contact.Account.Owner.Email`) yourself. | The connector exposes a normalized schema or relays describe metadata to the agent. |
| Error surface | A 401 mid-run without refresh logic can surface to the agent as an authorization denial rather than an infrastructure failure, polluting its reasoning. | Infrastructure failures stay at the connector layer and don't contaminate agent reasoning. |

The failure mode to watch for with direct integration is quiet: the agent's reasoning quality degrades when the connection layer returns partial, stale, or over-permissioned data, even when the prompt and orchestration are correct. Context engineering at the connection layer matters as much as model choice.

What Do You Need Before Connecting an Agent to Salesforce?

Before you write agent code, make sure your Salesforce setup includes these requirements: 

  1. A Salesforce org with API access.
  2. A connected app (or external client app) configured for OAuth 2.0.
  3. A dedicated service account for the agent, with permissions separated from human user accounts.
  4. The required OAuth 2.0 scopes, verified against your specific Salesforce or Agentforce integration requirements.

In production, avoid requesting broad scopes like `full` unless they are strictly necessary.

Setting Up A Dedicated Connected App For Agent Access

When you create the connected app, pay attention to these configuration choices:

  • OAuth scopes: Select api plus the "Perform requests at any time" scope, which grants both refresh_token and offline_access. When constructing the authorization URL directly, pass them space-separated in the scope parameter.
  • IP restrictions: After creating the connected app, open it in App Manager and go to Manage → Edit Policies → IP Relaxation to restrict access to your agent's deployment IP range. This prevents token use from unauthorized locations.
  • Permitted Users: Set this to "Admin approved users are pre-authorized" to prevent unauthorized OAuth consent.

The scope parameter in your authorization request must be allowed by the connected app and properly approved. If the requested scopes are not allowed or approved, Salesforce can return an OAUTH_APPROVAL_ERROR_GENERIC error.
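As a minimal sketch of the space-separated scope requirement above, the helper below builds the authorization URL against Salesforce's standard `/services/oauth2/authorize` endpoint; the consumer key and callback URL are hypothetical placeholders:

```python
from urllib.parse import urlencode

def build_auth_url(login_host, client_id, redirect_uri):
    """Build the OAuth 2.0 authorization URL. Scopes are passed
    space-separated in a single scope parameter; urlencode turns the
    spaces into '+' which is valid form encoding."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        # "api" for REST access; "refresh_token offline_access" so the
        # integration can refresh tokens without re-prompting the user
        "scope": "api refresh_token offline_access",
    }
    return f"{login_host}/services/oauth2/authorize?{urlencode(params)}"

url = build_auth_url("https://login.salesforce.com",
                     "MY_CONSUMER_KEY",            # placeholder consumer key
                     "https://example.com/callback")  # placeholder callback
```

If the connected app does not allow one of these scopes, this is the request that triggers the OAUTH_APPROVAL_ERROR_GENERIC response described above.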

How Do You Authenticate an AI Agent with Salesforce Through a Connector?

Through a managed connector, authentication usually means completing the OAuth 2.0 handshake and any required consent, storing the returned connection identifier, and verifying the setup with a test call.

For server-to-server agent access, auth flows vary by platform and integration pattern. The JWT Bearer flow, or JSON Web Token Bearer flow, requires you to generate an RSA key pair, upload the public X.509 certificate to the connected app, and sign each token request with the private key. It is more complex, but it avoids user interaction entirely. Other flows use connected app credentials and platform-specific configuration.
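To make the JWT Bearer flow concrete, the sketch below builds the base64url-encoded `header.claims` signing input with only the standard library. A real assertion appends an RS256 signature over this string, computed with the connected app's private key via a JWT library (not shown here):

```python
import base64
import json
import time

def jwt_signing_input(consumer_key, username,
                      audience="https://login.salesforce.com"):
    """Build the header.claims portion of a JWT bearer assertion.
    The claims follow Salesforce's JWT Bearer flow: iss is the connected
    app's consumer key, sub the Salesforce username to act as, aud the
    login host, and exp a short-lived expiration."""
    def b64url(obj):
        raw = json.dumps(obj, separators=(",", ":")).encode()
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

    header = {"alg": "RS256"}
    claims = {
        "iss": consumer_key,
        "sub": username,
        "aud": audience,              # test.salesforce.com for sandboxes
        "exp": int(time.time()) + 300,  # keep the assertion short-lived
    }
    return f"{b64url(header)}.{b64url(claims)}"
```

The signed result is posted to the token endpoint with `grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer`.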

A managed connector can manage authentication through a unified configuration or connection reference. You provide credentials or complete OAuth consent once, and the connector manages the token lifecycle after that.

Handling Salesforce's Non-Standard Token Expiration

Salesforce does not return expires_in in the token response. Session timeout is governed by the org's session settings rather than the token payload. Standard OAuth client libraries expect to calculate expiry_time = issued_at + expires_in and trigger a proactive refresh. That pattern does not always work with Salesforce.

Instead of relying on a fixed time-to-live from the token payload, many integrations refresh Salesforce access tokens based on API errors or org-specific session behavior. Managed connectors use error-driven refresh. They try the API call, catch a 401 with INVALID_SESSION_ID, refresh the token, and retry. Custom integrations need to implement that pattern themselves or use proactive time-based refresh with the timeout value configured out of band.
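The error-driven pattern can be sketched as below; the HTTP call and the refresh step are injected as plain callables, so no particular HTTP library is assumed:

```python
def call_with_refresh(api_call, refresh_token_fn, max_retries=1):
    """Error-driven refresh: attempt the API call, and on a 401
    (Salesforce's INVALID_SESSION_ID case) refresh the access token
    and retry. Any further 401 is returned to the caller rather than
    looping forever."""
    for attempt in range(max_retries + 1):
        status, body = api_call()
        if status == 401 and attempt < max_retries:
            refresh_token_fn()  # obtain a fresh access token
            continue
        return status, body

# Simulated usage: the first call fails with a stale token, the
# refresh succeeds, and the retry returns normally.
state = {"refreshed": False}

def fake_call():
    return (200, "ok") if state["refreshed"] else (401, "INVALID_SESSION_ID")

def fake_refresh():
    state["refreshed"] = True

result = call_with_refresh(fake_call, fake_refresh)
```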

How Do You Execute Read and Write Operations on Salesforce Data?

The REST API is the primary interface for both. At the connector layer, these operations usually reduce to an entity such as Account or Contact, an action such as list, get, create, or update, and a set of parameters.

Read Operations And Schema Discovery

A first read action usually queries Contacts or Accounts with SOQL. Through a connector, you specify the object type and filter criteria, and the connector handles the request.

If an agent needs to discover available objects and fields at runtime, the describe operation becomes critical. The describe endpoint (/services/data/vXX.0/sobjects/{SObjectName}/describe/) returns metadata that includes which fields are filterable, createable, updateable, and their data types. An agent can use that metadata to build valid queries dynamically instead of relying on hardcoded field lists.
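One way to use that describe metadata, sketched here with a hypothetical `build_soql` helper: select only the field names the describe response actually reports, instead of a hardcoded list.

```python
def build_soql(describe_metadata, object_name, where=None, limit=200):
    """Build a SOQL query from describe metadata. describe_metadata is
    the parsed JSON of /services/data/vXX.0/sobjects/{name}/describe/,
    whose 'fields' array carries one entry per field."""
    fields = [f["name"] for f in describe_metadata["fields"]]
    soql = f"SELECT {', '.join(fields)} FROM {object_name}"
    if where:
        soql += f" WHERE {where}"
    return soql + f" LIMIT {limit}"
```

A fuller version might also filter on each field's `filterable` flag before using it in a WHERE clause.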

Pagination matters here. The REST API returns up to 2,000 records per batch (the default batch size, which you can lower but not raise). When more records match the query, the response includes a nextRecordsUrl field and done: false, and the agent has to follow each cursor until done: true. Separately, the SOQL OFFSET clause is capped at 2,000 rows, so offset-based pagination fails for larger datasets. Use cursor-based pagination through nextRecordsUrl or keyset pagination instead.
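The cursor-following loop reduces to a few lines; the initial query and the cursor fetch are injected as callables returning parsed JSON, so no particular HTTP library is assumed:

```python
def fetch_all_records(query_fn, follow_fn):
    """Cursor-based pagination: collect records across batches by
    following nextRecordsUrl until the response reports done: true.
    query_fn runs the initial SOQL query; follow_fn fetches a given
    nextRecordsUrl. Both return the parsed response body."""
    page = query_fn()
    records = list(page["records"])
    while not page["done"]:
        page = follow_fn(page["nextRecordsUrl"])
        records.extend(page["records"])
    return records
```

Skipping the `while` loop is exactly the silent-partial-data failure described above: the first batch arrives with HTTP 200 and the rest is never fetched.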

Schema and retrieval quality matter at the read layer. Agents need complete record sets and usable field metadata before orchestration logic becomes reliable.

Write Operations And Data Integrity

To create or update a record, you need required metadata. The describe operation's createable and updateable field properties tell the agent what it can write. Before any write, check that all required fields have values. Salesforce rejects the operation if they are missing.
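A sketch of that pre-write check, using a common reading of describe metadata: a field is required on create when it is createable, not nillable, and has no default.

```python
def missing_required_fields(describe_metadata, record):
    """Return the names of required createable fields the candidate
    record leaves unset, so the agent can fail fast before Salesforce
    rejects the write."""
    missing = []
    for f in describe_metadata["fields"]:
        required = (f.get("createable")
                    and not f.get("nillable")
                    and not f.get("defaultedOnCreate"))
        if required and not record.get(f["name"]):
            missing.append(f["name"])
    return missing
```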

After a write, verify success. Check the response for the new record ID or for a successful HTTP status code. This step matters because write failures are easy to misread as agent logic problems when the real issue is missing metadata or permissions.

How Do You Enforce Salesforce Permissions for Agent Data Access?

How Salesforce enforces Field-Level Security (FLS), object permissions, and sharing rules depends on how your integration talks to the platform. Standard REST API calls — for example, /services/data/vXX.0/sobjects/Account — run as the authenticated user and automatically respect that user's profile and permission sets. 

The rest of this section applies when your integration invokes custom Apex: custom REST endpoints (@RestResource), invocable methods called by Flows or Agentforce, or Apex triggers that fire during an agent's write. Apex runs in system mode by default, which bypasses object permissions, FLS, and sharing rules. An agent operating through system-mode Apex can read and modify fields the connected user's profile restricts unless you explicitly enforce those restrictions in code. Most framework documentation does not address this directly.

System Mode Vs. User Mode: The Default That Breaks Security

The with sharing keyword enforces sharing rules, but it does not enforce create, read, update, and delete (CRUD) permissions or FLS. To enforce all three, use the WITH USER_MODE keyword at the query and Data Manipulation Language (DML) operation level:

List<Account> accounts = [SELECT Id, Name FROM Account WITH USER_MODE];

If you want graceful degradation instead of hard exceptions, Security.stripInaccessible() strips inaccessible fields from results before data reaches the agent.

Apex running in system mode ignores the running user's FLS and CRUD settings at the code level, so an over-permissioned agent can read or modify fields the user's profile would otherwise restrict. The running user's profile applies only on the code paths you explicitly guard with WITH USER_MODE or Security.stripInaccessible().

Why Connector-Level Permission Enforcement Matters

Connector platforms with built-in row-level and user-level access controls filter results before data reaches the agent's context window. Even if Salesforce-side configuration is over-permissioned, the connector layer still restricts what the agent sees.

MCP servers and other agent interfaces still need strong data controls underneath them. Standardized tool calling does not remove the need for permission-aware retrieval and filtering.

What Are the Most Common Mistakes When Setting Up a Salesforce Agent Connector?

Early integrations usually fail in a few predictable ways. The table below shows what breaks and how to avoid it.

| Failure | What Happens | How to Avoid It |
| --- | --- | --- |
| Token expiration with no refresh logic | Agent works for a period, then silently fails when the access token expires; Salesforce does not always return `expires_in` in token responses | Use a connector with managed authentication, or implement error-driven refresh that catches 401 responses. |
| SOQL query hits 2,000-record cap | Agent retrieves partial data with HTTP 200 and no error indicator, leading to incomplete reasoning | Implement `nextRecordsUrl` pagination in custom integrations, or verify your connector handles pagination automatically. |
| System mode bypasses Field-Level Security | Agent accesses all fields regardless of the connected user's Salesforce permissions, exposing sensitive data | Set `WITH USER_MODE` in SOQL queries and DML operations, or use a connector with built-in access controls. Never assign System Administrator to an integration user. |
| Connected app scoped too broadly | The OAuth connected app has `full` scope, which grants broad API endpoint access; actual Salesforce operations remain limited by the user's profiles, permission sets, and sharing settings | Create a dedicated connected app with only the OAuth scopes your integration needs, typically `api` plus `refresh_token`/`offline_access`. Verify scopes after configuration changes and during release validation. |
| Sandbox config doesn't match production | Integration works in sandbox but breaks in production due to domain, OAuth endpoint, or schema differences | Externalize all environment-specific config into environment variables. Confirm login URLs switch from `test.salesforce.com` (sandbox) to `login.salesforce.com` (production), or to each environment's My Domain URL if you've enforced My Domain–only login. |
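The sandbox-versus-production advice can be sketched as a small helper; `SF_ENV` and `SF_MY_DOMAIN` are illustrative variable names, not a Salesforce convention:

```python
import os

def login_base_url():
    """Pick the OAuth login host from environment variables so sandbox
    and production configs never diverge in code. A configured My Domain
    takes precedence; otherwise SF_ENV selects the generic host."""
    my_domain = os.environ.get("SF_MY_DOMAIN")
    if my_domain:
        return f"https://{my_domain}.my.salesforce.com"
    env = os.environ.get("SF_ENV", "production")
    return ("https://test.salesforce.com" if env == "sandbox"
            else "https://login.salesforce.com")
```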

These failure modes are operational, not theoretical. If a team tests auth, pagination, and permission enforcement early, later debugging becomes much narrower and easier to reason about.

What Is the Fastest Way to Get an AI Agent Reading Salesforce Data?

Start with a scoped read against a single Salesforce object (Accounts or Contacts) to verify the connection works. Then expand to writes, validating required fields before each one. That sequence keeps the integration narrow while you prove out authentication, pagination, schema handling, and permissions one layer at a time.

The pattern to notice: most first integrations don't break on agent logic. They break on API plumbing. Authentication edge cases, pagination limits, and permission gaps eat more debugging time than the framework code itself. And every runtime API call the agent makes is one more point of failure in production.

Airbyte's Agent Engine is built to take that layer off the table. Managed agent connectors handle OAuth, pagination, and schema normalization. Row-level and user-level access controls filter data before it reaches the agent's context window. And the Context Store pre-materializes unified Salesforce context alongside your other sources, so the agent queries what it needs in under a second instead of stitching it together through live API calls at runtime.

For teams that need replicated Salesforce data in RAG pipelines, replication connectors handle batch movement alongside the Agent Engine's access layer. Same connector foundation. Same permissioned data surface.

Get a demo to see how Airbyte powers production AI agents with unified, permission-aware business context.

You build the agent. We'll bring the data.

Authenticate once. Fetch, search, and write in real-time.

Try Agent Engine →


Frequently Asked Questions

Can you connect an AI agent to Salesforce without Agentforce?

Yes. Agentforce is for building agents inside Salesforce's ecosystem. AI agents built with external frameworks or custom code can connect to Salesforce data through agent connectors, direct API tools, or MCP servers without requiring Agentforce.

How do you handle OAuth token refresh for a Salesforce integration?

For production agents, verify authentication behavior in your current version and integration setup. If your stack does not handle OAuth token refresh for you, use a connector platform with managed authentication or implement error-driven refresh logic that catches 401 responses and re-authenticates.

How do you handle SOQL queries above 2,000 records?

Use nextRecordsUrl pagination when result sets exceed 2,000 records. Agents that do not implement pagination get incomplete data without explicit error indicators. That leads to unreliable reasoning and outputs.

How is Field-Level Security enforced in Salesforce MCP setups?

MCP actions execute on behalf of the authenticated user, but FLS enforcement varies across MCP implementations and lacks a consistent specification. Verify FLS enforcement in your specific setup. Connector-level filtering and permission-aware retrieval still matter underneath the agent interface.

Should you use live access or a replicated data layer?

Live API calls work for conversational agents that need current records on demand. Replicated data suits retrieval-augmented generation and semantic search where sub-second freshness is less critical. Many production deployments combine live access for lookups and writes with replicated data for indexed retrieval.

