Overview
This guide covers AWS-specific integration for CrewAI Platform deployments on Amazon EKS. It focuses on how CrewAI uses AWS services and the platform-specific configuration required, rather than general AWS setup.
This guide assumes you have:
- An EKS cluster running Kubernetes 1.32.0+
- AWS CLI and kubectl configured
- Helm 3.10+ installed
- Basic familiarity with AWS services (RDS, S3, ALB)
Prerequisites
Before configuring CrewAI Platform, ensure these AWS components are in place:
Required AWS Infrastructure
| Component | Documentation Link | Notes |
|---|---|---|
| EKS Cluster | AWS EKS Getting Started | Version 1.32.0 or later |
| AWS Load Balancer Controller | AWS LBC Installation | Required for ALB ingress |
| VPC and Subnets | EKS VPC Requirements | Public and private subnets recommended |
Do not proceed with CrewAI installation until these prerequisites are met. The Helm chart will fail to deploy without them.
Amazon Aurora for PostgreSQL
CrewAI Platform requires PostgreSQL 16.8+ for production deployments. This section covers RDS-specific requirements for CrewAI.
Aurora Instance Sizing
Minimum recommended specifications based on CrewAI workload characteristics:
| Deployment Size | RDS Instance Class | vCPU | RAM | Storage |
|---|---|---|---|---|
| Development | db.t3.medium | 2 | 4 GiB | 50 GiB gp3 |
| Small Production | db.r6g.large | 2 | 16 GiB | 100 GiB gp3 |
| Medium Production | db.r6g.xlarge | 4 | 32 GiB | 250 GiB gp3 |
| Large Production | db.r6g.2xlarge | 8 | 64 GiB | 500 GiB gp3 |
CrewAI’s Rails-based architecture benefits from memory-optimized instances (R6g family). Use gp3 storage with minimum 3000 IOPS for production workloads.
Network Connectivity
CrewAI pods must reach your RDS instance. Two options:
**Option 1: RDS in Private Subnet (Recommended)**
- Place RDS in private subnets within your EKS VPC
- No internet exposure
- Security group allows PostgreSQL (5432) from the EKS node security group (see the example rule below)

**Option 2: RDS with Public Access**
- Enable public accessibility on the RDS instance
- Configure the security group to allow the EKS NAT gateway IPs
- Requires SSL/TLS (enforce `sslmode=require`)
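For Option 1, the required rule can be added with a single CLI call. This is a minimal sketch; the two security group IDs are placeholders for your RDS and EKS node security groups:

```bash
# Allow PostgreSQL (5432) from the EKS node security group into the RDS
# security group. sg-0rds... and sg-0eksnodes... are placeholder IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0rds1234567890abcd \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0eksnodes123456789
```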
Helm Configuration
```yaml
# Disable internal PostgreSQL
postgres:
  enabled: false

envVars:
  DB_HOST: "crewai-prod.cluster-abc123.us-east-1.rds.amazonaws.com"
  DB_PORT: "5432"
  DB_USER: "crewai"

secrets:
  DB_PASSWORD: "your-secure-password"
```
Amazon S3 for Object Storage
CrewAI Platform uses S3 for storing crew artifacts, tool outputs, and user uploads. This section covers S3 integration and authentication.
S3 Bucket Configuration
```bash
# Create S3 bucket for CrewAI
aws s3api create-bucket \
  --bucket crewai-prod-storage \
  --region us-east-1

# Enable versioning for data protection
aws s3api put-bucket-versioning \
  --bucket crewai-prod-storage \
  --versioning-configuration Status=Enabled

# Enable default encryption
aws s3api put-bucket-encryption \
  --bucket crewai-prod-storage \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'
```
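To confirm the versioning and encryption settings took effect, you can read them back:

```bash
# Both calls should echo the configuration applied above
aws s3api get-bucket-versioning --bucket crewai-prod-storage
aws s3api get-bucket-encryption --bucket crewai-prod-storage
```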
S3 Authentication Options
CrewAI supports three authentication methods for S3. Choose based on your security requirements:
Option 1: Pod Identity (Recommended - Newest)
Best for: New EKS deployments (EKS 1.24+), highest security
Pod Identity provides credentials without OIDC configuration or static keys.
Benefits:
- No long-lived credentials
- Simpler setup than IRSA
- Automatic credential rotation
- Native EKS integration
Setup Steps:
1. Create an IAM policy for S3 access:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::crewai-prod-storage",
        "arn:aws:s3:::crewai-prod-storage/*"
      ]
    }
  ]
}
```
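The `attach-role-policy` call in the next step references a policy named `CrewAIS3Access`. Assuming you saved the JSON above as `s3-policy.json`, one way to create it:

```bash
# Create the managed policy referenced as CrewAIS3Access below
aws iam create-policy \
  --policy-name CrewAIS3Access \
  --policy-document file://s3-policy.json
```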
2. Create the IAM role and associate it with Pod Identity:
```bash
# Create IAM role with the EKS Pod Identity trust policy
aws iam create-role \
  --role-name CrewAIPodIdentityRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }]
  }'

# Attach S3 policy
aws iam attach-role-policy \
  --role-name CrewAIPodIdentityRole \
  --policy-arn arn:aws:iam::ACCOUNT:policy/CrewAIS3Access

# Create Pod Identity association
aws eks create-pod-identity-association \
  --cluster-name your-cluster \
  --namespace crewai \
  --service-account crewai-sa \
  --role-arn arn:aws:iam::ACCOUNT:role/CrewAIPodIdentityRole
```
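To verify the association was created for the `crewai` namespace:

```bash
# Should list the association created above
aws eks list-pod-identity-associations \
  --cluster-name your-cluster \
  --namespace crewai
```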
Helm Configuration:
```yaml
envVars:
  STORAGE_SERVICE: "amazon"
  AWS_REGION: "us-east-1"
  AWS_BUCKET: "crewai-prod-storage"

serviceAccount: "crewai-sa"
rbac:
  create: true
```
Option 2: Static Access Keys
Best for: Development environments, non-EKS Kubernetes clusters
Not recommended for production. Use Pod Identity or IRSA instead.
```yaml
envVars:
  STORAGE_SERVICE: "amazon"
  AWS_REGION: "us-east-1"
  AWS_BUCKET: "crewai-prod-storage"

secrets:
  AWS_ACCESS_KEY_ID: "AKIAIOSFODNN7EXAMPLE"
  AWS_SECRET_ACCESS_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```
Option 3: IAM Roles for Service Accounts (IRSA)
See AWS IRSA Setup for instructions.
Application Load Balancer (ALB)
CrewAI Platform requires specific ALB configuration to support long-running crew executions and WebSocket connections.
CrewAI-Specific ALB Requirements
CrewAI’s architecture has specific needs:
- Long-running requests: Crew executions can take 5+ minutes
- WebSocket support: ActionCable requires persistent connections
- Session affinity: Not required (stateless application)
ALB Security Group Configuration
The ALB security group should allow:
- Inbound: HTTPS (443) from your allowed CIDR ranges (e.g., `0.0.0.0/0` for public access)
- Outbound: HTTP to EKS worker node security group on NodePort range
EKS worker node security group should allow:
- Inbound: HTTP from ALB security group
ACM Certificate
CrewAI requires a valid SSL certificate:
```bash
# Request certificate (DNS validation recommended)
aws acm request-certificate \
  --domain-name crewai.your-company.com \
  --validation-method DNS \
  --region us-east-1

# Note the certificate ARN for use in Helm values
```
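With DNS validation, ACM issues the certificate only after you create the validation CNAME record. The record to create can be read back from the certificate (the ARN below is a placeholder):

```bash
# Prints the CNAME name/value pairs to add to your DNS zone
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/abc-123 \
  --query 'Certificate.DomainValidationOptions'
```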
Amazon ECR for Container Images
CrewAI Platform requires Amazon ECR for storing crew automation container images. When users create and deploy crews, CrewAI builds container images and pushes them to ECR.
ECR Repository Requirements
Critical Requirements:
- Repository URI must end in `/crewai-enterprise`
- Immutable tags must be disabled (CrewAI overwrites tags for crew versions)
- Lifecycle policies recommended to manage old images
Create ECR Repository
```bash
# Create ECR repository with correct naming
aws ecr create-repository \
  --repository-name your-org/crewai-enterprise \
  --region us-east-1 \
  --image-scanning-configuration scanOnPush=true

# Disable immutable tags (required for CrewAI)
aws ecr put-image-tag-mutability \
  --repository-name your-org/crewai-enterprise \
  --image-tag-mutability MUTABLE \
  --region us-east-1

# Optional: Set lifecycle policy to clean up untagged images
aws ecr put-lifecycle-policy \
  --repository-name your-org/crewai-enterprise \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Remove untagged images after 7 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 7
      },
      "action": {"type": "expire"}
    }]
  }'
```
Valid repository URIs:
- ✅ `123456789012.dkr.ecr.us-east-1.amazonaws.com/crewai-enterprise`
- ✅ `123456789012.dkr.ecr.us-east-1.amazonaws.com/my-org/crewai-enterprise`
- ✅ `123456789012.dkr.ecr.us-east-1.amazonaws.com/prod/crewai-enterprise`
- ❌ `123456789012.dkr.ecr.us-east-1.amazonaws.com/crewai` (must end in `/crewai-enterprise`)
- ❌ `123456789012.dkr.ecr.us-east-1.amazonaws.com/crewai-platform` (wrong suffix)
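To double-check the URI your repository actually exposes (the repository name below is a placeholder):

```bash
# Output should end in /crewai-enterprise
aws ecr describe-repositories \
  --repository-names your-org/crewai-enterprise \
  --query 'repositories[0].repositoryUri' \
  --output text
```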
ECR Authentication with Pod Identity
CrewAI pods require ECR push and pull permissions for building and deploying crew images.
Create IAM policy for ECR access:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:us-east-1:ACCOUNT:repository/*/crewai-enterprise"
    }
  ]
}
```
Attach ECR policy to Pod Identity role:
```bash
# Create ECR policy
aws iam create-policy \
  --policy-name CrewAIECRAccess \
  --policy-document file://ecr-policy.json

# Attach to existing Pod Identity role
aws iam attach-role-policy \
  --role-name CrewAIPodIdentityRole \
  --policy-arn arn:aws:iam::ACCOUNT:policy/CrewAIECRAccess
```
Combined IAM Policy (S3 + ECR)
For production deployments using Pod Identity, combine S3 and ECR permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::crewai-prod-storage",
        "arn:aws:s3:::crewai-prod-storage/*"
      ]
    },
    {
      "Sid": "ECRAuthToken",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ECRPushPull",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:us-east-1:ACCOUNT:repository/*/crewai-enterprise"
    }
  ]
}
```
Helm Configuration for ECR
```yaml
envVars:
  # ECR registry configuration (REQUIRED - deployment will fail if not set)
  CREW_IMAGE_REGISTRY_OVERRIDE: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-org"
  # Note: The /crewai-enterprise suffix is added automatically by CrewAI Platform
  # The Helm chart validates this field is set before deployment

  # S3 configuration
  STORAGE_SERVICE: "amazon"
  AWS_REGION: "us-east-1"
  AWS_BUCKET: "crewai-prod-storage"

serviceAccount: "crewai-sa" # Associated with Pod Identity
rbac:
  create: true
```
Verifying ECR Access
Test ECR authentication from CrewAI pods:
```bash
# Check that the pod can authenticate to ECR
kubectl exec -it deploy/crewai-web -- aws ecr get-login-password --region us-east-1

# Check that the BuildKit workers used for crew image builds are up
kubectl exec -it deploy/crewai-buildkit -- buildctl debug workers
```
AWS Secrets Manager Integration
AWS Secrets Manager provides centralized secret management with automatic rotation for CrewAI Platform.
Which Secrets to Store
Store in AWS Secrets Manager (sensitive, need rotation):
- `DB_PASSWORD` - Database credentials
- `SECRET_KEY_BASE` - Rails secret key
- `ENTRA_ID_CLIENT_SECRET` / `OKTA_CLIENT_SECRET` - OAuth secrets
- `AWS_SECRET_ACCESS_KEY` - If using static S3 credentials
- `GITHUB_TOKEN` - For private repository access

Keep in values.yaml (configuration, not secrets):
- `DB_HOST`, `DB_PORT`, `DB_USER`, `POSTGRES_DB`, `POSTGRES_CABLE_DB`
- `AWS_REGION`, `AWS_BUCKET`
- `APPLICATION_HOST`
- `AUTH_PROVIDER`
Secret Structure in Secrets Manager
CrewAI expects secrets in specific formats. Two options:
Option 1: Single Secret with Multiple Keys
Create one secret `crewai/platform` with this JSON structure:

```json
{
  "DB_PASSWORD": "your-db-password",
  "SECRET_KEY_BASE": "your-secret-key-base",
  "ENTRA_ID_CLIENT_ID": "your-client-id",
  "ENTRA_ID_CLIENT_SECRET": "your-client-secret",
  "ENTRA_ID_TENANT_ID": "your-tenant-id"
}
```
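Assuming the JSON above is saved as `crewai-platform-secret.json`, the secret can be created with:

```bash
# Create the crewai/platform secret from the JSON file
aws secretsmanager create-secret \
  --name crewai/platform \
  --secret-string file://crewai-platform-secret.json
```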
Option 2: Separate Secrets
Create individual secrets:
- `crewai/db-password`
- `crewai/secret-key-base`
- `crewai/entra-id-credentials` (JSON with client_id, client_secret, tenant_id)
External Secrets Operator Setup
CrewAI uses External Secrets Operator (ESO) to sync secrets from AWS Secrets Manager to Kubernetes.
Install ESO (if not already installed):
```bash
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets \
  external-secrets/external-secrets \
  --namespace external-secrets-operator \
  --create-namespace
```
Create IAM Policy for ESO:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:ACCOUNT:secret:crewai/*"
    }
  ]
}
```
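One way to wire this policy to the `crewai-secrets-reader` service account used below is IRSA via eksctl. This sketch assumes your cluster has an OIDC provider configured and that the policy above was created as `CrewAISecretsRead` (role and policy names are placeholders); `--role-only` creates just the IAM role, since the chart creates the service account itself:

```bash
# Create only the IAM role; annotate the service account afterwards
# as shown in the Helm values below
eksctl create iamserviceaccount \
  --cluster your-cluster \
  --namespace crewai \
  --name crewai-secrets-reader \
  --role-name CrewAISecretsReader \
  --attach-policy-arn arn:aws:iam::ACCOUNT:policy/CrewAISecretsRead \
  --role-only \
  --approve
```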
Helm Configuration:
```yaml
# Enable external secret store
externalSecret:
  enabled: true
  secretStore: "crewai-secret-store"
  secretPath: "crewai/platform" # Path to your Secrets Manager secret

  # Control which secrets to sync
  includes_aws_credentials: false # Set true if S3 credentials are in Secrets Manager
  includes_azure_credentials: false

# Configure SecretStore resource
secretStore:
  enabled: true
  provider: "aws"
  aws:
    region: "us-east-1"
  # Use IRSA for ESO authentication (recommended)
  auth:
    serviceAccount:
      enabled: true
      name: "crewai-secrets-reader"
      # Annotate with the IAM role ARN after deployment:
      # kubectl annotate serviceaccount crewai-secrets-reader \
      #   eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT:role/CrewAISecretsReader
```
Secret Mapping Example
If using single secret with JSON structure:
```yaml
externalSecret:
  enabled: true
  secretStore: "crewai-secret-store"
  secretPath: "crewai/platform"
  # CrewAI will automatically map these from JSON keys:
  #   DB_PASSWORD        -> crewai/platform:DB_PASSWORD
  #   SECRET_KEY_BASE    -> crewai/platform:SECRET_KEY_BASE
  #   ENTRA_ID_CLIENT_ID -> crewai/platform:ENTRA_ID_CLIENT_ID
  #   etc.
```
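Once the ExternalSecret reconciles, you can inspect which keys were synced into the resulting Kubernetes secret. The secret name depends on your release; `crewai-secrets` is assumed here, and `jq` is required:

```bash
# List the keys ESO synced from Secrets Manager
kubectl get secret crewai-secrets --namespace crewai \
  -o jsonpath='{.data}' | jq 'keys'
```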
Complete AWS Deployment Example
Here’s a complete production configuration for AWS:
```yaml
# values-aws-production.yaml

# Disable internal services (use AWS managed services)
postgres:
  enabled: false
minio:
  enabled: false

# Database configuration (RDS)
envVars:
  DB_HOST: "crewai-prod.cluster-abc123.us-east-1.rds.amazonaws.com"
  DB_PORT: "5432"
  DB_USER: "crewai"
  POSTGRES_DB: "crewai_plus_production"
  POSTGRES_CABLE_DB: "crewai_plus_cable_production"
  RAILS_MAX_THREADS: "5"
  DB_POOL: "5"

  # S3 configuration (using Pod Identity, no credentials needed)
  STORAGE_SERVICE: "amazon"
  AWS_REGION: "us-east-1"
  AWS_BUCKET: "crewai-prod-storage"

  # ECR configuration (REQUIRED - using Pod Identity, no credentials needed)
  CREW_IMAGE_REGISTRY_OVERRIDE: "123456789012.dkr.ecr.us-east-1.amazonaws.com/production"
  # Note: /crewai-enterprise suffix is added automatically
  # Chart validates this field is set before deployment

  # Application configuration
  APPLICATION_HOST: "crewai.company.com"
  AUTH_PROVIDER: "entra_id"
  RAILS_ENV: "production"
  RAILS_LOG_LEVEL: "info"

# External secrets from AWS Secrets Manager
externalSecret:
  enabled: true
  secretStore: "crewai-secret-store"
  secretPath: "crewai/platform"
  includes_aws_credentials: false # Using Pod Identity

secretStore:
  enabled: true
  provider: "aws"
  aws:
    region: "us-east-1"
  auth:
    serviceAccount:
      enabled: true
      name: "crewai-secrets-reader"

# Web application configuration
web:
  replicaCount: 3 # HA deployment
  resources:
    requests:
      cpu: "1000m"
      memory: "6Gi"
    limits:
      cpu: "6"
      memory: "12Gi"

# ALB ingress
ingress:
  enabled: true
  className: "alb"
  host: "crewai.company.com"
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:123456789012:certificate/abc-123"
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/target-group-attributes: idle_timeout.timeout_seconds=300
    alb.ingress.kubernetes.io/healthcheck-path: /up
    alb.ingress.kubernetes.io/tags: Environment=production,Application=crewai
  alb:
    scheme: "internet-facing"
    targetType: "ip"
    certificateArn: "arn:aws:acm:us-east-1:123456789012:certificate/abc-123"

# Worker configuration
worker:
  replicaCount: 3
  resources:
    requests:
      cpu: "1000m"
      memory: "6Gi"
    limits:
      cpu: "6"
      memory: "12Gi"

# BuildKit for crew builds
buildkit:
  enabled: true
  replicaCount: 1
  resources:
    requests:
      cpu: "500m"
      memory: "2Gi"
    limits:
      cpu: "4"
      memory: "8Gi"

# RBAC for service accounts
rbac:
  create: true
serviceAccount: "crewai-sa"
```
Deploy:
```bash
# Deploy CrewAI Platform
helm install crewai-platform \
  oci://registry.crewai.com/crewai/stable/crewai-platform \
  --values values-aws-production.yaml \
  --namespace crewai
```
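Then watch the rollout and confirm the release:

```bash
# Pods should reach Running/Ready; Ctrl-C to stop watching
kubectl get pods --namespace crewai -w

# Release should report STATUS: deployed
helm status crewai-platform --namespace crewai
```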
Troubleshooting AWS-Specific Issues
ALB Not Provisioning
Symptoms: Ingress shows no `ADDRESS` after several minutes
```bash
kubectl get ingress --namespace crewai
# NAME             CLASS   HOSTS                ADDRESS   PORTS     AGE
# crewai-ingress   alb     crewai.company.com             80, 443   5m
```
Common causes:
- AWS Load Balancer Controller not installed or not running
- Insufficient IAM permissions for LBC
- Subnet tags missing for ALB discovery
Check LBC status:
```bash
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller
```
Verify subnet tags (required for ALB):
- Public subnets: `kubernetes.io/role/elb=1`
- Private subnets: `kubernetes.io/role/internal-elb=1`
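A quick way to audit those tags (the VPC ID below is a placeholder for your EKS VPC):

```bash
# Print subnet IDs alongside their tags for inspection
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query 'Subnets[].{ID:SubnetId,Tags:Tags}'
```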
RDS Connection Timeout
Symptoms: Pods show `could not connect to server: Connection timed out`
Check security groups:
```bash
# Verify the RDS security group allows inbound from EKS worker nodes
aws ec2 describe-security-groups --group-ids sg-xxxxx

# Check the EKS node security group
aws ec2 describe-security-groups --filters "Name=tag:aws:eks:cluster-name,Values=your-cluster"
```
Test connectivity from pod:
```bash
kubectl run -it --namespace crewai --rm debug --image=postgres:16 --restart=Never -- \
  psql -h crewai-prod.cluster-abc123.us-east-1.rds.amazonaws.com -U crewai -d crewai_plus_production
```
S3 Access Denied
Symptoms: Logs show `Access Denied` or 403 errors for S3 operations
Verify authentication method:
For Pod Identity:
```bash
# Check the Pod Identity Agent is running
kubectl get daemonset -n kube-system eks-pod-identity-agent

# List associations
aws eks list-pod-identity-associations --cluster-name your-cluster
```
For IRSA:
```bash
# Verify the service account annotation
kubectl get serviceaccount crewai-sa -o yaml | grep eks.amazonaws.com/role-arn

# Test from a pod
kubectl exec -it deploy/crewai-web -- aws sts get-caller-identity
kubectl exec -it deploy/crewai-web -- aws s3 ls s3://crewai-prod-storage/
```
For Static Keys:
```bash
# Verify the secrets exist
kubectl get secret crewai-secrets -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
```
Secrets Manager Access Denied
Symptoms: ExternalSecret shows `SecretSyncedError`
```bash
# Check ExternalSecret status
kubectl get externalsecret
kubectl describe externalsecret crewai-external-secret

# Check SecretStore status
kubectl get secretstore
kubectl describe secretstore crewai-secret-store

# Verify ESO can assume its IAM role
kubectl logs -n external-secrets-operator deployment/external-secrets
```