What Is OAuth 2.0 and How Is It Used?

Most teams building AI agents treat authorization as a solved problem until their first token expires at 2 AM on a Sunday. OAuth 2.0 is the open standard that governs how apps request access to user data without ever handling passwords, and getting it right is harder than most agent architectures account for. 

It powers "Sign in with Google," third-party SaaS connections, and AI agents that need permissioned access to tools like Salesforce or Google Drive. It has been the industry standard since 2012 and now shows up directly in agent-tool standards like the MCP spec and related Internet Engineering Task Force (IETF) drafts.

TL;DR

  • OAuth 2.0 lets apps (including AI agents) access user data through tokens with bounded permissions, expiration, and user-controlled revocation instead of passwords.
  • The default flow is Authorization Code with PKCE: user consents → app gets a code → exchanges it for access and refresh tokens.
  • OAuth 2.1 tightens the spec for agents (mandatory PKCE, no Implicit or ROPC grants, strict redirect URIs) and is already required by MCP.
  • The hard part in production is operating OAuth at scale: token lifecycles, scope drift, and revocation across many providers and tenants.


What Is OAuth 2.0?

OAuth 2.0 (Open Authorization 2.0) is an open-standard authorization framework defined in RFC 6749. Applications use it to request limited access to a user's data on another service without the user sharing their password. 

The user explicitly approves what data the application can access (scopes) and for how long. The application receives a token, a credential that represents this approved access, and uses that token to make API requests. 

When the token expires, the application requests a new one using a refresh token without requiring the user to log in again. The user stays in control of access without being interrupted every time the application needs data.

OAuth 2.0 defines four roles:

  • The resource owner is the user who owns the data: an employee with a Salesforce account
  • The client is the application requesting access: your AI agent
  • The authorization server verifies the user's identity and issues tokens: Salesforce's OAuth server
  • The resource server hosts the protected data and accepts valid tokens: Salesforce's API

Separating authorization server and resource server is what makes OAuth work across organizational boundaries, because your application never touches the user's password. For agent builders, this separation is what allows a single agent to access data across dozens of services without managing credentials for any of them.

The key concept is delegated access. The user delegates specific permissions to the application without giving the application their credentials. This delegation is scoped (the application can only access what the user approved), time-limited (tokens expire), and revocable (see RFC 7009). That delegation model is why OAuth became the standard for API access across the internet, and why it is now the foundation for agent data access. 

Delegation only works, though, when every participant in the chain follows the same flow correctly, and that is where most agent teams run into trouble.

How Does OAuth 2.0 Work?

The Authorization Code flow, the most common OAuth flow, works like this:

  1. Your application redirects the user to the authorization server (for example, Salesforce's login page)
  2. The user logs in and sees a consent screen listing what your application is requesting access to: "Read your contacts" and "Read your opportunities" 
  3. The user approves, and the authorization server redirects them back to your application with a temporary authorization code
  4. Your application exchanges this code (plus its own credentials) for an access token and a refresh token
  5. The access token authorizes API requests; the refresh token lets the application get new access tokens when the current one expires without prompting the user again
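The steps above can be sketched in Python. The endpoint, client ID, redirect URI, and scope names here are hypothetical stand-ins; real providers publish their own authorize and token endpoints, and you would POST the exchange body to the provider's token endpoint over HTTPS:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_authorize_url(auth_endpoint: str, client_id: str, redirect_uri: str,
                        scopes: list[str], state: str, challenge: str) -> str:
    """Step 1: the URL the user is redirected to for login and consent."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,  # CSRF protection: verify this value on the redirect back
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return f"{auth_endpoint}?{urlencode(params)}"

def code_exchange_request(code: str, redirect_uri: str, client_id: str,
                          verifier: str) -> dict:
    """Step 4: the form body POSTed to the token endpoint to redeem the code."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "code_verifier": verifier,  # proves this client started the flow
    }

verifier, challenge = make_pkce_pair()
url = build_authorize_url(
    "https://login.example.com/oauth/authorize",  # hypothetical endpoint
    "my-agent-client-id",
    "https://agent.example.com/callback",
    ["contacts.read", "opportunities.read"],
    secrets.token_urlsafe(16),
    challenge,
)
print(url)
```

The verifier stays on the client; only its hash travels in the front-channel redirect, so an intercepted authorization code is useless without it.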

OAuth 2.0 defines several grant types, each designed for a different trust relationship between the user, the application, and the authorization server. For AI agents, three grant types cover the majority of production use cases.

Authorization Code (with PKCE)

  • How it works: The user is redirected to the authorization server, consents to specific scopes, and the app receives an authorization code that it exchanges for an access token. PKCE (Proof Key for Code Exchange) prevents code interception.
  • Agent use case: User-delegated agent access: an end user connects their Salesforce, Google Drive, or Slack account so the agent can access their data with their permissions.
  • When to use: Any time an agent accesses data on behalf of a specific end user who must consent to the access. This is the most common flow for multi-tenant AI applications with embedded data connections.
  • Key consideration: Requires user interaction for initial consent. After consent, the agent uses refresh tokens to maintain access without further user involvement. Authentication complexity (including OAuth token refresh, credential isolation, and rate limiting) is a dominant operational challenge when managing hundreds of users.

Client Credentials

  • How it works: The application authenticates directly with the authorization server using its own client ID and secret. No user is involved; the token represents the application's own authority.
  • Agent use case: System-level agent operations: background data sync across an entire organization's workspace, scheduled analytics aggregation, compliance scanning that operates at the tenant level rather than per user.
  • When to use: When the agent operates on behalf of an organization rather than a specific user, and data access is not scoped to individual user permissions.
  • Key consideration: The token represents broad organizational access and requires careful scope definition to avoid over-permissioning. Not appropriate when the agent needs to respect per-user data visibility.

Token Exchange (RFC 8693)

  • How it works: An agent with one token exchanges it for a different token with different scopes or audience, allowing delegation chains: user token → agent token → downstream service token.
  • Agent use case: Multi-agent and multi-service architectures: a front-end agent receives a user-scoped token and needs to pass delegated authority to a back-end service that accesses a different API on the user's behalf.
  • When to use: When agents interact with multiple downstream services and need to propagate user identity and permissions through the chain without each service requiring its own user consent flow.
  • Key consideration: Adds complexity but maintains the delegation chain. Critical for audit trails in enterprise environments where every data access must be traceable to the originating user.

The grant type you choose determines the security boundary of every API call your agent makes. Get it wrong and you either over-permission an agent with organizational access it does not need, or force unnecessary consent flows that degrade the user experience.
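The two non-interactive grants differ mainly in the form body sent to the token endpoint. A minimal sketch of both request shapes, with hypothetical client IDs and audience values (real providers may require additional parameters such as client authentication headers or `requested_token_type`):

```python
def client_credentials_request(client_id: str, client_secret: str,
                               scopes: list[str]) -> dict:
    """Form body for a client_credentials token request (RFC 6749 section 4.4).
    POSTed to the provider's token endpoint; no user is involved."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),
    }

def token_exchange_request(subject_token: str, audience: str) -> dict:
    """Form body for an RFC 8693 token exchange: trade a user-scoped token
    for one targeting a downstream service, preserving the delegation chain."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,
    }

org_body = client_credentials_request("svc-agent", "s3cret", ["tenant.read"])
chain_body = token_exchange_request("user-access-token", "https://api.example.com")
```

Note that the client credentials body carries the application's own secret and no user identity, while the token exchange body carries an existing user token as its `subject_token`, which is what keeps the resulting token traceable to the originating user.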

What Is OAuth 2.1 and Why Does It Matter for AI Agents?

OAuth 2.1 is not a new protocol. It consolidates security best practices that emerged over a decade of OAuth 2.0 deployment into a single, cleaner spec (see the OAuth 2.1 draft). PKCE becomes mandatory for all clients (see the PKCE requirement). This closes a class of authorization code interception attacks that are especially dangerous when agents operate unattended. 

The spec removes the Implicit Grant and Resource Owner Password Credentials (ROPC) flows (see removed flows) because they expose tokens or credentials in ways that are unacceptable for autonomous agents. Redirect URI matching becomes strict (see redirect URI matching): exact string matching with no wildcards or pattern matching. This prevents redirect manipulation attacks.
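Exact matching is simple to implement and leaves no room for the prefix and subdomain tricks that wildcard matching invites. A minimal sketch, with hypothetical URIs:

```python
def redirect_uri_allowed(candidate: str, registered: list[str]) -> bool:
    """OAuth 2.1 redirect URI validation: exact string comparison only.
    No wildcards, no prefix matching, no host pattern matching."""
    return candidate in registered

registered = ["https://agent.example.com/callback"]

# The registered URI matches; anything appended or lookalike hosts do not.
assert redirect_uri_allowed("https://agent.example.com/callback", registered)
assert not redirect_uri_allowed("https://agent.example.com/callback/extra", registered)
assert not redirect_uri_allowed("https://agent.example.com.evil.com/callback", registered)
```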

Model Context Protocol (MCP) adopted OAuth 2.1 as its auth standard, which means any agent connecting to MCP servers uses the tightened 2.1 rules by default. Any OAuth implementation for agents should follow 2.1 requirements now, even if the libraries you use still reference "OAuth 2.0." Waiting to adopt 2.1 means accumulating security debt that gets harder to unwind as your agent's integration surface grows.

Why Is OAuth Hard to Manage for AI Agents?

Understanding the protocol is straightforward, but operating it across dozens of providers, hundreds of tenants, and thousands of tokens is where agent teams burn engineering time they did not budget for.

Token Lifecycle at Multi-Tenant Scale

Each provider has different token lifetimes. Salesforce OAuth access tokens have a default lifetime of about 2 hours. HubSpot cut access token lifetimes from 6 hours to 30 minutes in November 2021 with minimal warning. Refresh tokens have their own expiration and rotation policies: Google refresh quotas can invalidate older refresh tokens when a per-client limit is hit, while HubSpot refresh tokens can last indefinitely.

Proactive refresh orchestration is necessary because waiting for a 401 error before refreshing creates race conditions, cascading retries, and gaps in data freshness. The most common production failure looks like this: two processes detect the same expired token simultaneously and both attempt a refresh. One overwrites the other's new refresh token with a stale one, and subsequent refreshes fail with invalid_grant errors. By the time the team notices, the agent has been silently disconnected from the customer's data for hours.
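One common mitigation is single-flight refresh: serialize refreshes behind a lock and re-check expiry before calling the token endpoint, so concurrent callers cannot clobber each other's rotated refresh token. A sketch under the assumption of a single process (a multi-process deployment would need a distributed lock or database-level compare-and-swap instead); `refresh_fn` is a stand-in for the provider's token endpoint call:

```python
import threading
import time

class TokenStore:
    """Single-flight token refresh: only one caller refreshes at a time,
    and everyone else re-checks expiry under the lock before retrying."""

    def __init__(self, access_token, expires_at, refresh_token, refresh_fn):
        self._lock = threading.Lock()
        self.access_token = access_token
        self.expires_at = expires_at
        self.refresh_token = refresh_token
        self._refresh_fn = refresh_fn

    def get(self, skew: float = 300.0) -> str:
        # Refresh proactively, `skew` seconds before expiry,
        # instead of waiting for a 401 from the resource server.
        if time.time() < self.expires_at - skew:
            return self.access_token
        with self._lock:
            # Re-check under the lock: another thread may have
            # already refreshed while we were waiting.
            if time.time() < self.expires_at - skew:
                return self.access_token
            new = self._refresh_fn(self.refresh_token)
            self.access_token = new["access_token"]
            self.expires_at = time.time() + new["expires_in"]
            # Providers that rotate refresh tokens return a new one;
            # keep the old one only if none is returned.
            self.refresh_token = new.get("refresh_token", self.refresh_token)
            return self.access_token
```

The double-check under the lock is what prevents the overwrite: the second thread sees the updated expiry and returns the fresh token rather than issuing a second refresh with a now-stale refresh token.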

Scope Management Across Providers

Scope formats differ by provider:

  • Salesforce scopes are simple strings (api, cdp_ingest_api, refresh_token)
  • Google scopes are URL-based (https://www.googleapis.com/auth/drive.readonly)
  • Slack scopes map to specific API methods (users:read, chat:write)

No shared convention exists, which means your agent needs provider-specific logic for every scope interaction.

When providers update their scope definitions, the agent's existing tokens may silently lose access to data they previously could reach. Salesforce, for example, began enforcing valid scopes for the client credentials flow in 2025, causing Connected Apps that had been working with unsupported scopes to immediately return 401 errors. Detecting and handling scope drift across many providers requires monitoring infrastructure that most teams do not build until something breaks.
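A basic form of that monitoring is a diff between the scopes an integration needs and the space-delimited `scope` value the provider returns from the token endpoint or an RFC 7662 introspection response. A minimal sketch (the required-scope sets here are illustrative):

```python
def missing_scopes(required: set[str], granted: str) -> set[str]:
    """Compare required scopes against the space-delimited `scope` string
    returned by a token response or introspection endpoint (RFC 7662)."""
    return required - set(granted.split())

# Token still carries everything this connector needs: nothing missing.
print(missing_scopes({"api", "refresh_token"}, "api refresh_token id"))

# A provider-side change dropped a scope: flag it before API calls start failing.
print(missing_scopes({"api", "cdp_ingest_api"}, "api"))
```

Running this check at every refresh turns a silent 401 hours later into an immediate, attributable alert.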

Consent and Revocation in Multi-Tenant Environments

Users and administrators can revoke OAuth access at any time. Auth0's Global Token Revocation feature, for example, automatically revokes all of a user's sessions and tokens across all applications when its Identity Threat Protection detects malicious behavior.

The agent must detect these revocations, surface them to the right tenant, and re-initiate the consent flow. With JSON Web Token (JWT) access tokens validated locally, revocation at the authorization server does not immediately propagate. Services validating JWTs offline will not see the revocation until the token expires (see RFC 9700). In the gap between revocation and token expiry, the agent continues operating on credentials the user has already withdrawn.
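The size of that gap can be bounded from the token's `exp` claim. A sketch that decodes a JWT payload to compute the worst-case window; note the decode here skips signature verification purely for illustration, since a real resource server must verify signatures before trusting any claim:

```python
import base64
import json
import time

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.
    Illustration only: production validators must verify signatures first."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def revocation_gap_seconds(token: str) -> float:
    """Worst case, an offline validator keeps honoring this token until `exp`,
    even if the authorization server revoked it moments after issuance."""
    return max(0.0, jwt_claims(token)["exp"] - time.time())
```

Keeping access token lifetimes short directly shrinks this window, which is one reason providers have been cutting default lifetimes.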

How Does Airbyte's Agent Engine Handle OAuth?

The complexity above is exactly what Airbyte's Agent Engine absorbs. The embeddable widget manages the entire OAuth flow for end users: consent screens, token exchange, and credential storage across 600+ connectors. 

The platform handles token refresh, rotation, and revocation detection, so the engineering team does not have to, and it isolates each customer's credentials per tenant. When providers change their APIs or scope definitions, Airbyte updates the connector so the agent team does not maintain OAuth infrastructure per provider. 

That means every operational failure described in the previous section becomes Airbyte's problem instead of yours.

Where Is OAuth Heading for AI Agents?

OAuth's role in agent infrastructure is expanding beyond traditional user-to-app delegation. The IETF agent auth drafts are defining how autonomous agents identify themselves and propagate permissions across multi-hop service chains, where the user who originally consented may be several layers removed from the final API call. 

As agent-to-agent communication becomes more common, the authorization layer will need to support full delegation chains: "this user approved this agent, which delegated to that agent, which is now calling this SaaS tool on the user's behalf," with a complete audit trail at every hop.

Teams building agents today should treat OAuth infrastructure as a long-term operational concern that compounds in complexity with every new provider and tenant. 

Get a demo to see how Airbyte Agent Engine handles that lifecycle across 600+ connectors so your team builds agents, not auth infrastructure.



Frequently Asked Questions

Can I use OAuth 2.0 libraries for OAuth 2.1 requirements?

Yes. OAuth 2.1 narrows the set of allowed configurations rather than introducing new protocol mechanics, so most mature OAuth 2.0 libraries already support PKCE and strict redirect URI matching. The migration path is configuration: enable PKCE for all clients, remove any Implicit or ROPC flows, and switch redirect URI validation to exact-match.

What is the difference between authentication and authorization?

Authentication verifies identity: who is making the request. Authorization determines permissions: what the requester is allowed to do. OAuth 2.0 is an authorization framework that grants scoped access to resources, often used alongside OpenID Connect (OIDC), which adds an authentication layer by returning an ID token with user identity claims.

Why do AI agents use OAuth instead of API keys?

API keys are static secrets with no built-in expiration, no user-facing consent, and no standard revocation mechanism. OAuth provides delegated, user-approved access that can be withdrawn at any time without rotating infrastructure secrets, which is the delegation model that API keys lack for agents accessing customer data. MCP supports OAuth among several possible authentication mechanisms but does not mandate it.

How do I choose between Authorization Code, Client Credentials, and Token Exchange?

Start with one question: is a specific user consenting? If yes, use Authorization Code with PKCE; if the agent operates at the organization level with no individual user context (batch syncs, compliance scans), use Client Credentials; if the agent already holds a user-scoped token but needs to call a different downstream service on that user's behalf, use Token Exchange. Most teams start with Authorization Code with PKCE alone and add the others only when a concrete use case demands it.
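The decision sequence above can be written down as a toy helper; the flags and return labels are illustrative, not API names:

```python
def choose_grant(user_consents: bool, has_user_token: bool,
                 needs_downstream: bool) -> str:
    """Toy decision helper mirroring the question sequence above."""
    if has_user_token and needs_downstream:
        # Already holding a user-scoped token, calling another service on
        # that user's behalf: exchange it rather than re-consenting.
        return "token_exchange"
    if user_consents:
        # A specific user is consenting: the default flow.
        return "authorization_code_pkce"
    # Org-level operation with no individual user context.
    return "client_credentials"
```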

When does building OAuth in-house make sense vs. using a managed platform?

Building in-house is reasonable when you integrate with one or two providers, your user base is small, and your team has capacity to monitor token health and provider API changes. The calculus shifts when you cross roughly 5-10 providers or 100+ tenant connections: at that point, the combinatorial complexity of differing token lifetimes, rotation policies, scope formats, and breaking API changes turns OAuth maintenance into a standing engineering cost. Managed connector platforms absorb that cost as part of the integration infrastructure, freeing the agent team to focus on agent logic rather than credential plumbing.
