Marco Marassi

Run a serverless Laravel app with queue workers on AWS Lambda using Bref


Have you ever wanted to leverage the power of serverless computing but always felt stuck by AWS Lambda not providing native support for PHP? Read along!

I started messing around with AWS Lambda at my current job for a NodeJS project, and I wondered why a scripting language as popular as PHP, which seems made exactly for the job of a Lambda function, is not supported natively.

Luckily someone had thought of this before me, and set up a project called Bref.

Bref provides PHP runtimes for AWS Lambda and abstracts away the complexity of Lambda layers, while also leveraging the Serverless framework to package the application and upload it to your AWS account.

If you are wondering how that is going to integrate with a framework like Laravel, the folks at Bref have thought about that too and provide a handy guide.

Unfortunately there isn't quite native support yet for running queue workers like queue:work, for example to send emails asynchronously from your application. A Lambda function is short-lived and terminates once execution finishes, so Lambda is not really made for long-lived worker processes.

But I still like the idea of having my whole application in a single service (Lambda) without having to summon EC2, Fargate, or ECS, which can definitely get more expensive than Lambda.

We won't be defeated that easily! There are ways around it; here's how.

Step-by-step Guide

  1. Install the Serverless framework npm package globally:
npm i -g serverless
  2. Install the AWS CLI on your development machine; on macOS you can use Homebrew:
brew install awscli

For other operating systems, make sure to check the AWS CLI documentation.

  3. Log in to your AWS account and create a user with programmatic access, so the Serverless framework is able to create and upload resources. While you can assign the AdministratorAccess policy to the user for testing purposes, I'd recommend using a stricter policy that grants only the permissions Serverless needs. Their documentation recommends creating a policy with the following configuration:
{
  "Statement": [{
    "Action": [
      "apigateway:*",
      "cloudformation:CancelUpdateStack",
      "cloudformation:ContinueUpdateRollback",
      "cloudformation:CreateChangeSet",
      "cloudformation:CreateStack",
      "cloudformation:CreateUploadBucket",
      "cloudformation:DeleteStack",
      "cloudformation:Describe*",
      "cloudformation:EstimateTemplateCost",
      "cloudformation:ExecuteChangeSet",
      "cloudformation:Get*",
      "cloudformation:List*",
      "cloudformation:UpdateStack",
      "cloudformation:UpdateTerminationProtection",
      "cloudformation:ValidateTemplate",
      "dynamodb:CreateTable",
      "dynamodb:DeleteTable",
      "dynamodb:DescribeTable",
      "dynamodb:DescribeTimeToLive",
      "dynamodb:UpdateTimeToLive",
      "ec2:AttachInternetGateway",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CreateInternetGateway",
      "ec2:CreateNetworkAcl",
      "ec2:CreateNetworkAclEntry",
      "ec2:CreateRouteTable",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSubnet",
      "ec2:CreateTags",
      "ec2:CreateVpc",
      "ec2:DeleteInternetGateway",
      "ec2:DeleteNetworkAcl",
      "ec2:DeleteNetworkAclEntry",
      "ec2:DeleteRouteTable",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSubnet",
      "ec2:DeleteVpc",
      "ec2:Describe*",
      "ec2:DetachInternetGateway",
      "ec2:ModifyVpcAttribute",
      "events:DeleteRule",
      "events:DescribeRule",
      "events:ListRuleNamesByTarget",
      "events:ListRules",
      "events:ListTargetsByRule",
      "events:PutRule",
      "events:PutTargets",
      "events:RemoveTargets",
      "iam:AttachRolePolicy",
      "iam:CreateRole",
      "iam:DeleteRole",
      "iam:DeleteRolePolicy",
      "iam:DetachRolePolicy",
      "iam:GetRole",
      "iam:PassRole",
      "iam:PutRolePolicy",
      "iot:CreateTopicRule",
      "iot:DeleteTopicRule",
      "iot:DisableTopicRule",
      "iot:EnableTopicRule",
      "iot:ReplaceTopicRule",
      "kinesis:CreateStream",
      "kinesis:DeleteStream",
      "kinesis:DescribeStream",
      "lambda:*",
      "logs:CreateLogGroup",
      "logs:DeleteLogGroup",
      "logs:DescribeLogGroups",
      "logs:DescribeLogStreams",
      "logs:FilterLogEvents",
      "logs:GetLogEvents",
      "logs:PutSubscriptionFilter",
      "s3:CreateBucket",
      "s3:DeleteBucket",
      "s3:DeleteBucketPolicy",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:ListAllMyBuckets",
      "s3:ListBucket",
      "s3:PutBucketNotification",
      "s3:PutBucketPolicy",
      "s3:PutBucketTagging",
      "s3:PutBucketWebsite",
      "s3:PutEncryptionConfiguration",
      "s3:PutObject",
      "sns:CreateTopic",
      "sns:DeleteTopic",
      "sns:GetSubscriptionAttributes",
      "sns:GetTopicAttributes",
      "sns:ListSubscriptions",
      "sns:ListSubscriptionsByTopic",
      "sns:ListTopics",
      "sns:SetSubscriptionAttributes",
      "sns:SetTopicAttributes",
      "sns:Subscribe",
      "sns:Unsubscribe",
      "sqs:CreateQueue",
      "sqs:DeleteQueue",
      "sqs:GetQueueAttributes",
      "states:CreateStateMachine",
      "states:DeleteStateMachine"
    ],
    "Effect": "Allow",
    "Resource": "*"
  }],
  "Version": "2012-10-17"
}

You can find the above policy in a GitHub Gist.

Once the user is created with the attached policy, copy their access key ID and secret access key.

  4. Configure your AWS CLI to use the credentials created above with:
aws configure

You will be prompted for the above user's access key ID, secret access key, the region where you plan to place your assets and Lambda, and the default output format. Feel free to leave the last one empty.

  5. Now it's time to finally move to our project! Start by installing a few packages:
composer require aws/aws-sdk-php bref/bref bref/laravel-bridge

What is going on here? We are installing the AWS SDK for PHP to be able to push jobs onto the SQS queue, Bref for our serverless PHP bindings, and the Bref Laravel Bridge to have a worker run on Lambda.

  6. Create a serverless.yml file in your Laravel project's root folder, and copy the following content into it:
service: your-app-name

provider:
  name: aws
  region: us-west-1 # Make sure this matches the region of your SQS queue and the region you set when you did `aws configure`
  runtime: provided
  environment:
    APP_DEBUG: false
    APP_ENVIRONMENT: production
    # Logging to stderr allows the logs to end up in Cloudwatch
    LOG_CHANNEL: stderr
    # We cannot store sessions to disk: if you don't need sessions (e.g. API) then use `array`
    # If you write a website, use `cookie` or store sessions in database.
    SESSION_DRIVER: array
    SQS_QUEUE:
      Ref: AlertQueue
    VIEW_COMPILED_PATH: /tmp/storage/framework/views
  iamRoleStatements:
    # Allows our code to interact with SQS
    - Effect: Allow
      Action: [sqs:SendMessage, sqs:DeleteMessage]
      Resource:
        Fn::GetAtt: [ AlertQueue, Arn ]

plugins:
  - ./vendor/bref/bref

package:
  exclude:
  - node_modules/**
  - public/storage
  - resources/assets/**
  - storage/**
  - tests/**

functions:
  website:
    handler: public/index.php
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    layers:
      - ${bref:layer.php-74-fpm}
    events:
      - http: 'ANY /'
      - http: 'ANY /{proxy+}'
  artisan:
    handler: artisan
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-74} # PHP
      - ${bref:layer.console} # The "console" layer
  worker:
    handler: worker.php
    layers:
      - ${bref:layer.php-74}
    events:
      # Declares that our worker is triggered by jobs in SQS
      - sqs:
          arn:
            Fn::GetAtt: [ AlertQueue, Arn ]
          # If you create the queue manually, the line above could be:
          # arn: 'arn:aws:sqs:us-east-1:1234567890:my_sqs_queue'
          # Only 1 item at a time to simplify error handling
          batchSize: 1

resources:
  Resources:
    # Failed jobs will go into that SQS queue to be stored, until a developer looks at these errors
    DeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        MessageRetentionPeriod: 1209600 # maximum retention: 14 days
    # The SQS queue
    AlertQueue:
      Type: AWS::SQS::Queue
      Properties:
        RedrivePolicy:
          maxReceiveCount: 3 # jobs will be retried up to 3 times
          # Failed jobs (after the retries) will be moved to the other queue for storage
          deadLetterTargetArn:
            Fn::GetAtt: [ DeadLetterQueue, Arn ]

Feel free to add to the environment section any additional environment variables that your application needs and that differ from the ones in your dev .env file.
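For instance, extra variables can sit right alongside the existing entries under provider.environment. The names below are purely illustrative; use whatever your own application reads:

```yaml
provider:
  environment:
    # ...existing entries from above...
    APP_NAME: your-app-name       # illustrative
    MAIL_FROM_ADDRESS: no-reply@example.com  # illustrative; keep real secrets out of version control
```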

Be careful not to check any secrets like API keys or passwords into version control!

Also be aware that some environment variable names are reserved by AWS Lambda, as they are set by its runtime environment. These are:

  • _HANDLER: The handler location configured on the function.
  • AWS_REGION: The AWS Region where the Lambda function is executed.
  • AWS_EXECUTION_ENV: The runtime identifier, prefixed by AWS_Lambda_ (for example, AWS_Lambda_java8).
  • AWS_LAMBDA_FUNCTION_NAME: The name of the function.
  • AWS_LAMBDA_FUNCTION_MEMORY_SIZE: The amount of memory available to the function in MB.
  • AWS_LAMBDA_FUNCTION_VERSION: The version of the function being executed.
  • AWS_LAMBDA_LOG_GROUP_NAME, AWS_LAMBDA_LOG_STREAM_NAME: The name of the Amazon CloudWatch Logs group and stream for the function.
  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN: The access keys obtained from the function's execution role.
  • AWS_LAMBDA_RUNTIME_API: (Custom runtime) The host and port of the runtime API.
  • LAMBDA_TASK_ROOT: The path to your Lambda function code.
  • LAMBDA_RUNTIME_DIR: The path to runtime libraries.
  • TZ: The environment's time zone (UTC). The execution environment uses NTP to synchronize the system clock.

See the AWS docs for an updated list.
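In practice this means that inside the Lambda environment these variables are already populated and readable with getenv(), while locally they are simply absent (getenv() returns false for unset variables). A minimal, self-contained sketch of a fallback helper (the function name is my own, for illustration only):

```php
<?php
declare(strict_types=1);

// Read a Lambda-provided variable, falling back to a default when we are
// not running on Lambda (getenv() returns false for unset variables).
function lambdaEnv(string $name, string $default): string
{
    $value = getenv($name);
    return $value === false ? $default : $value;
}

// On Lambda this prints the function name; locally it prints the fallback.
echo lambdaEnv('AWS_LAMBDA_FUNCTION_NAME', 'not-on-lambda'), PHP_EOL;
```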

  7. Create a worker.php file in your project's root directory and paste in the following content:
<?php declare(strict_types=1);

use Bref\LaravelBridge\Queue\LaravelSqsHandler;
use Illuminate\Contracts\Console\Kernel;
use Illuminate\Foundation\Application;

require __DIR__ . '/vendor/autoload.php';
/** @var Application $app */
$app = require __DIR__ . '/bootstrap/app.php';

$kernel = $app->make(Kernel::class);
$kernel->bootstrap();

return $app->makeWith(LaravelSqsHandler::class, [
  'connection' => 'sqs', // this is the Laravel Queue connection
  'queue' => getenv('SQS_QUEUE'),
]);

This will be the entry-point for our worker function.
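Under the hood, Lambda invokes this handler with an SQS event: a JSON document containing a Records array, where each record's body carries the queued job payload (with batchSize: 1, there is exactly one record per invocation). A rough, self-contained sketch of that shape, with illustrative values:

```php
<?php
declare(strict_types=1);

// Simplified shape of the SQS event that Lambda passes to the worker.
// Real records also carry fields like receiptHandle and attributes.
$event = [
    'Records' => [
        [
            'messageId' => 'example-message-id',          // illustrative
            'body'      => '{"job":"...","data":"..."}',  // the serialized Laravel job payload
        ],
    ],
];

// With batchSize: 1 there is a single record to process per invocation.
$body = $event['Records'][0]['body'];
echo $body, PHP_EOL;
```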

  8. Make sure to edit app/Providers/AppServiceProvider.php so that the compiled views directory (the VIEW_COMPILED_PATH set above) gets created, as Laravel does not create it automatically:
public function boot()
{
  // Make sure the directory for compiled views exists
  if (! is_dir(config('view.compiled'))) {
    mkdir(config('view.compiled'), 0755, true);
  }
}
  9. Configure your SQS queue. Assuming you have already created a queue in your AWS SQS dashboard, create a programmatic access user with SQS full access, download their credentials, and add them to your .env file:
QUEUE_CONNECTION=sqs
AWS_SQS_ACCESS_KEY_ID=changeme
AWS_SQS_SECRET_ACCESS_KEY=changeme
SQS_PREFIX=https://sqs.us-east-1.amazonaws.com/your-account-id
AWS_SQS_DEFAULT_REGION=your-region-name

And change your config/queue.php in the sqs driver section:

'sqs' => [
  'driver' => 'sqs',
  'key' => env('AWS_SQS_ACCESS_KEY_ID'),
  'secret' => env('AWS_SQS_SECRET_ACCESS_KEY'),
  'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
  'queue' => env('SQS_QUEUE', 'your-queue-name'),
  'suffix' => env('SQS_SUFFIX'),
  'region' => env('AWS_SQS_DEFAULT_REGION', 'us-east-1'),
],

We changed the default AWS key env var names, as the original ones are reserved by AWS Lambda and would conflict with ours.
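To see how the prefix and queue settings fit together: Laravel's SQS driver resolves the full queue URL by joining the two. A simplified, self-contained sketch of that composition (the real driver also handles the suffix option and full-URL queue names):

```php
<?php
declare(strict_types=1);

// Simplified version of how the configured SQS prefix and queue name
// combine into the full queue URL used by the driver.
function queueUrl(string $prefix, string $queue): string
{
    return rtrim($prefix, '/') . '/' . $queue;
}

echo queueUrl('https://sqs.us-east-1.amazonaws.com/123456789012', 'my_sqs_queue'), PHP_EOL;
// https://sqs.us-east-1.amazonaws.com/123456789012/my_sqs_queue
```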

  10. Now it's finally time to deploy your application! Run:
serverless deploy

This will create all the necessary AWS resources for you and should work straight out of the box.

Happy coding! :)

Further reading