AWS SAM with Python: Real-World Serverless Examples

AWS SAM paired with Python is a powerful combination for building serverless applications. This guide demonstrates practical implementations with complete code examples for common serverless patterns using the Python runtime.

Fig 1. Serverless architecture using AWS SAM with Python: API Gateway, Lambda, and DynamoDB

Why Python for Serverless?

Python’s strengths in serverless environments:

  • Rapid development with rich ecosystem (boto3, pandas, numpy)
  • Excellent AWS SDK (boto3) integration
  • Support for ML/AI workloads with Lambda layers
  • Smaller deployment packages than JVM or .NET runtimes
  • Mature testing frameworks (pytest, unittest)

For beginners, see our AWS SAM introduction guide.

Example 1: REST API with CRUD Operations

Architecture

API Gateway → Lambda → DynamoDB

template.yaml

Resources:
  ProductsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Products
      AttributeDefinitions:
        - AttributeName: product_id
          AttributeType: S
      KeySchema:
        - AttributeName: product_id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

  ProductsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: python3.12
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref ProductsTable
      Environment:
        Variables:
          TABLE_NAME: !Ref ProductsTable
      Events:
        Api:
          Type: Api
          Properties:
            Path: /products/{product_id}
            Method: ANY

app.py

import json
import os

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

def handler(event, context):
    # REST API (Type: Api) proxy events expose the method at the top level
    http_method = event['httpMethod']
    product_id = event['pathParameters']['product_id']
    
    if http_method == 'GET':
        response = table.get_item(Key={'product_id': product_id})
        return {'statusCode': 200, 'body': json.dumps(response.get('Item', {}))}
    
    elif http_method == 'PUT':
        item = json.loads(event['body'])
        item['product_id'] = product_id  # keep the key in sync with the path
        table.put_item(Item=item)
        return {'statusCode': 200, 'body': 'Item updated'}
    
    elif http_method == 'DELETE':
        table.delete_item(Key={'product_id': product_id})
        return {'statusCode': 204}
    
    return {'statusCode': 400}

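Because the handler is a plain function, it can be unit-tested with pytest without deploying anything. A minimal sketch (the table dependency is swapped out with monkeypatch; FakeTable and the sample values are illustrative, not part of the original guide):

test_app.py

import json
import os

# app.py creates its DynamoDB resource at import time, so provide the
# environment it expects first (dummy values are fine for unit tests)
os.environ.setdefault('TABLE_NAME', 'Products')
os.environ.setdefault('AWS_DEFAULT_REGION', 'us-east-1')

import app

class FakeTable:
    def get_item(self, Key):
        return {'Item': {'product_id': Key['product_id'], 'name': 'Widget'}}

def test_get_product(monkeypatch):
    monkeypatch.setattr(app, 'table', FakeTable())
    event = {
        'httpMethod': 'GET',
        'pathParameters': {'product_id': '123'},
    }
    result = app.handler(event, None)
    assert result['statusCode'] == 200
    assert json.loads(result['body'])['name'] == 'Widget'
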
Example 2: Event-Driven File Processing

Architecture

S3 → Lambda → SQS → Lambda → DynamoDB

template.yaml

Resources:
  SourceBucket:
    Type: AWS::S3::Bucket

  FileProcessor:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: processor.handler
      Runtime: python3.12
      Environment:
        Variables:
          QUEUE_URL: !Ref ProcessingQueue
      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref SourceBucket
            Events: s3:ObjectCreated:*
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref SourceBucket
        - SQSSendMessagePolicy:
            QueueName: !GetAtt ProcessingQueue.QueueName

  ProcessingQueue:
    Type: AWS::SQS::Queue

  DBWriter:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: writer.handler
      Runtime: python3.12
      Environment:
        Variables:
          TABLE_NAME: !Ref ProcessedItemsTable
      Events:
        SQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt ProcessingQueue.Arn
            BatchSize: 10
      Policies:
        - DynamoDBWritePolicy:
            TableName: !Ref ProcessedItemsTable

  ProcessedItemsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id  # assumed key; adjust to your data model
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

processor.py

import json
import os

import boto3

s3 = boto3.client('s3')
sqs = boto3.client('sqs')
queue_url = os.environ['QUEUE_URL']

def process_file(bucket, key):
    # Placeholder transformation: read the object and extract the fields
    # you care about; replace with your real parsing logic
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    return {'bucket': bucket, 'key': key, 'size': len(body)}

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Process file and extract data
        processed_data = process_file(bucket, key)

        # Send to queue
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps(processed_data)
        )

    return {'statusCode': 200}

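The template wires DBWriter to writer.handler, but the original guide does not show that file. A minimal sketch, assuming the id partition key defined in the template above and the payload shape produced by processor.py:

writer.py

import json
import os

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

def handler(event, context):
    # Each SQS record body is the JSON payload sent by processor.py
    for record in event['Records']:
        item = json.loads(record['body'])
        # Derive the (assumed) partition key from the source object
        item.setdefault('id', f"{item['bucket']}/{item['key']}")
        table.put_item(Item=item)
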
Example 3: Scheduled Data Aggregation

Architecture

Amazon EventBridge (formerly CloudWatch Events) → Lambda → S3

template.yaml

Resources:
  DataAggregator:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: aggregator.handler
      Runtime: python3.12
      Timeout: 300
      Environment:
        Variables:
          REPORTS_BUCKET: !Ref ReportsBucket
      Events:
        DailyTrigger:
          Type: Schedule
          Properties:
            Schedule: cron(0 12 * * ? *)  # Daily at 12:00 UTC
      Policies:
        - S3WritePolicy:
            BucketName: !Ref ReportsBucket
        - CloudWatchReadOnlyAccess

  ReportsBucket:
    Type: AWS::S3::Bucket

aggregator.py

import os
from datetime import datetime, timedelta, timezone

import boto3
import pandas as pd

s3 = boto3.client('s3')
cloudwatch = boto3.client('cloudwatch')

def handler(event, context):
    now = datetime.now(timezone.utc)

    # Fetch metrics from CloudWatch
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[...],
        StartTime=now - timedelta(days=1),
        EndTime=now
    )

    # Flatten each result's values, then summarize per metric with pandas
    # (MetricDataResults rows carry Id, Label, Timestamps, and Values)
    df = pd.DataFrame([
        {'Label': result['Label'], 'Value': value}
        for result in response['MetricDataResults']
        for value in result['Values']
    ])
    report = df.groupby('Label')['Value'].agg(['mean', 'max', 'min'])

    # Save to S3
    csv_buffer = report.to_csv().encode()
    s3.put_object(
        Bucket=os.environ['REPORTS_BUCKET'],
        Key=f"reports/{now.date()}.csv",
        Body=csv_buffer
    )

Python-Specific SAM Features

Layer Management

Resources:
  PyMysqlLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: pymysql-layer
      ContentUri: layers/pymysql/  # must contain a python/ directory with the package
      CompatibleRuntimes:
        - python3.12
      LicenseInfo: MIT
      RetentionPolicy: Retain

  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: python3.12
      Layers:
        - !Ref PyMysqlLayer

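A function with the layer attached can then import pymysql as if it were bundled. Here is a minimal sketch; the DB_HOST, DB_USER, DB_PASS, and DB_NAME environment variables are assumptions, not defined in the template above. It also demonstrates reusing the connection across warm invocations:

import os

import pymysql  # resolved from the attached layer at runtime

# Create the connection outside the handler so warm invocations reuse it
connection = pymysql.connect(
    host=os.environ['DB_HOST'],
    user=os.environ['DB_USER'],
    password=os.environ['DB_PASS'],
    database=os.environ['DB_NAME'],
    autocommit=True,
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute('SELECT COUNT(*) FROM products')
        (count,) = cursor.fetchone()
    return {'count': count}
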
Local Testing

# Test locally with SAM CLI
sam local start-api
sam local invoke "MyFunction" -e event.json

# Generate a sample payload for the invoke above
sam local generate-event s3 put > event.json

Learn more about local testing techniques.

Best Practices for Python in SAM

  • Dependency Management: Use requirements.txt with sam build
  • Cold Start Optimization:
    • Keep deployment package under 50MB
    • Use Provisioned Concurrency for critical functions
  • Error Handling: Implement structured logging (see the sketch after this list)
  • Security:
    • Use IAM roles with least privilege
    • Encrypt environment variables
  • Performance:
    • Reuse database connections
    • Initialize SDK clients outside handler

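A minimal structured-logging sketch using only the standard library (the log_event helper is illustrative; AWS Lambda Powertools provides a ready-made Logger if you prefer a library):

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_event(message, **fields):
    # One JSON object per line makes logs queryable in CloudWatch Logs Insights
    logger.info(json.dumps({'message': message, **fields}))

def handler(event, context):
    log_event('request received',
              request_id=context.aws_request_id,
              path=event.get('path'))
    return {'statusCode': 200}
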
For security best practices, see our serverless security guide.

Advanced Pattern: ML Inference Pipeline

Resources:
  InferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: predictor.handler
      Runtime: python3.12
      MemorySize: 3008
      Timeout: 30
      Layers:
        - arn:aws:lambda:us-east-1:123456789012:layer:ScikitLearn:1
      Environment:
        Variables:
          MODEL_BUCKET: !Ref ModelBucket
          MODEL_KEY: "models/v1/model.pkl"
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref ModelBucket

  ModelBucket:
    Type: AWS::S3::Bucket

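The template leaves predictor.py to the reader. A minimal sketch, assuming a pickled scikit-learn model stored at MODEL_KEY and a JSON request body like {"features": [...]} (both are assumptions, not part of the original guide):

predictor.py

import json
import os
import pickle

import boto3

s3 = boto3.client('s3')

# Download and unpickle the model once per container, outside the handler,
# so the cost is paid only on cold starts
model_bytes = s3.get_object(
    Bucket=os.environ['MODEL_BUCKET'],
    Key=os.environ['MODEL_KEY']
)['Body'].read()
model = pickle.loads(model_bytes)

def handler(event, context):
    payload = json.loads(event['body'])
    prediction = model.predict([payload['features']])
    return {
        'statusCode': 200,
        'body': json.dumps({'prediction': prediction.tolist()})
    }
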
Conclusion

These AWS SAM with Python examples demonstrate how to implement common serverless patterns using Python’s powerful ecosystem. By leveraging SAM’s infrastructure-as-code capabilities, developers can build scalable, maintainable applications while focusing on business logic rather than infrastructure management.
