Serverless pattern example code
General Information
- By default, a Lambda function's timeout is 3 seconds; in the function settings it can be increased to 15 minutes. A Lambda invoked via API Gateway is limited by API Gateway's 29-second integration timeout, which cannot be overridden in CloudFormation code.
- By default, a function gets 128 MB of RAM; this can be increased to 10,240 MB (10 GB)
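Both settings map directly onto the function definition; a minimal serverless.yml sketch (the function name and handler are placeholders):

```yaml
functions:
  longReport:
    handler: handler.longReport
    timeout: 900      # seconds; 15-minute maximum
    memorySize: 1024  # MB
```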
- stateless – no state persists between invocations
- cold start – a new execution environment must be created and initialized before the function can run
- warm start – an existing execution environment is already up and is reused
- Comparing Lambda invocation modes
- In synchronous invocations, if the Lambda function fails, retries are the responsibility of the trigger.
- Synchronous invocations are well suited for short-lived Lambda functions
- For asynchronous invocations, Lambda places events on an internal queue and scales up the concurrency of the processing function as this queue grows.
- If an error occurs in the Lambda function, the retry behavior is determined by the Lambda service, not the caller
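The caller picks the mode through the `InvocationType` parameter of the Lambda Invoke API. A minimal boto3 sketch (function name and payload are placeholders; the client is injectable so the logic can be exercised without AWS):

```python
import json

def invoke(function_name, payload, asynchronous=False, client=None):
    """Invoke a Lambda synchronously ("RequestResponse", returns the result)
    or asynchronously ("Event", which only queues the event and returns 202)."""
    if client is None:
        import boto3  # assumed available in the deployment environment
        client = boto3.client("lambda")
    return client.invoke(
        FunctionName=function_name,
        InvocationType="Event" if asynchronous else "RequestResponse",
        Payload=json.dumps(payload).encode(),
    )
```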
Asynchronous invocation
- Destination:

```yaml
functions:
  asyncHello:
    handler: handler.asyncHello
    destinations:
      onSuccess: otherFunctionInService
      onFailure: arn:aws:sns:us-east-1:xxxx:some-topic-name
```
- Stream-based Lambdas configure the onFailure destination in an entirely different way. You do not set it on the function's destination configuration; instead, you set it on the AWS::Lambda::EventSourceMapping. In fact, you cannot even set an onSuccess destination there. Presumably, for scale reasons, AWS doesn't want massively scaled stream infrastructure to overwhelm the destination services.
- Your EventSourceMapping onFailure destination can only be one of SNS or SQS
- For testing, you cannot use the console or CLI; the event must come from the actual stream source. You'll need to change or create records in DynamoDB or push records through Kinesis.
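A hedged CloudFormation sketch of such a mapping (stream, function, and queue names are placeholders); note that the failure destination hangs off the mapping, not the function:

```yaml
Resources:
  OrdersStreamMapping:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !GetAtt OrdersStream.Arn   # Kinesis or DynamoDB stream
      FunctionName: !Ref ProcessorFunction
      StartingPosition: TRIM_HORIZON
      DestinationConfig:
        OnFailure:                               # no OnSuccess equivalent here
          Destination: !GetAtt FailureQueue.Arn  # must be SQS or SNS
```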
- When a function is asynchronously invoked, Lambda sends the event to a queue before it is processed by your function. Invocations that result in an exception in the function code are retried twice with a delay of one minute before the first retry and two minutes before the second. Some invocations may never run due to a throttle or service fault: these are retried with exponential back off until the invocation is six hours old.
- New AWS Lambda controls for stream processing and asynchronous invocations
- When processing data from event sources such as Amazon Kinesis Data Streams and Amazon DynamoDB Streams, Lambda reads records in batches from shards (a shard is a uniquely identified sequence of data records). Your function is then invoked to process records from the batch in order. If an error is returned, Lambda retries the batch until processing succeeds or the data expires.
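The stream-processing controls from that announcement can be set on the event source mapping; a hypothetical serverless.yml sketch (ARN and values are placeholders):

```yaml
functions:
  processOrders:
    handler: handler.processOrders
    events:
      - stream:
          arn: arn:aws:kinesis:us-east-1:xxxx:stream/orders
          batchSize: 100
          maximumRetryAttempts: 3            # give up on a batch after 3 retries
          maximumRecordAgeInSeconds: 3600    # skip records older than 1 hour
          bisectBatchOnFunctionError: true   # split failing batches to isolate bad records
          parallelizationFactor: 2           # concurrent batches per shard
```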
- Lambda invocation event filtering
- CloudFormation example
- supported since November 2021
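A hypothetical serverless.yml sketch of event filtering, so that only INSERT events from a DynamoDB stream reach the function (the ARN is a placeholder):

```yaml
functions:
  onNewOrder:
    handler: handler.onNewOrder
    events:
      - stream:
          arn: arn:aws:dynamodb:us-east-1:xxxx:table/orders/stream/2021-11-01T00:00:00.000
          filterPatterns:
            - eventName: [INSERT]
```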
- Lambda dead letter queue
- for asynchronous invocations only
- Documentation
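A minimal sketch of a dead letter queue for asynchronous invocations, assuming the serverless framework's onError shortcut (which takes an SNS topic ARN; plain CloudFormation DeadLetterConfig also accepts SQS):

```yaml
functions:
  asyncWorker:
    handler: handler.asyncWorker
    onError: arn:aws:sns:us-east-1:xxxx:failed-invocations  # events that exhaust retries land here
```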
- Concurrency:
- Reserved concurrency – Reserved concurrency guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. There is no charge for configuring reserved concurrency for a function.
- Reserving concurrency has the following effects.
- Other functions can’t prevent your function from scaling – All of your account’s functions in the same Region without reserved concurrency share the pool of unreserved concurrency. Without reserved concurrency, other functions can use up all of the available concurrency. This prevents your function from scaling up when needed.
- Your function can’t scale out of control – Reserved concurrency also limits your function from using concurrency from the unreserved pool, which caps its maximum concurrency. You can reserve concurrency to prevent your function from using all the available concurrency in the Region, or from overloading downstream resources.
- Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function’s invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
- Provisioned concurrency counts towards a function’s reserved concurrency and Regional quotas. If the amount of provisioned concurrency on a function’s versions and aliases adds up to the function’s reserved concurrency, all invocations run on provisioned concurrency. This configuration also has the effect of throttling the unpublished version of the function ($LATEST), which prevents it from executing. You can’t allocate more provisioned concurrency than reserved concurrency for a function.
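Both knobs can be set per function; a hypothetical serverless.yml sketch (name and values are placeholders):

```yaml
functions:
  checkout:
    handler: handler.checkout
    reservedConcurrency: 20     # hard cap, carved out of the account's pool
    provisionedConcurrency: 5   # pre-initialized environments (incurs charges)
```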
- Build a simple scheduled task with AWS Lambda and a CloudWatch Events rule
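The scheduled-task pattern needs nothing more than a schedule event; a minimal sketch (function name is a placeholder):

```yaml
functions:
  nightlyReport:
    handler: handler.nightlyReport
    events:
      - schedule: cron(0 2 * * ? *)   # every day at 02:00 UTC
```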
- Use Lambda to schedule another Lambda by creating EventBridge rules via API call
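A hedged boto3 sketch of that idea: the running function creates an EventBridge rule pointing at another Lambda. Names and ARNs are placeholders, and the target function also needs a resource-based policy allowing events.amazonaws.com to invoke it (omitted here). The client is injectable so the logic can be exercised without AWS:

```python
def schedule_lambda(target_arn, rate_minutes, rule_name="scheduled-callback", client=None):
    """Create (or update) an EventBridge rule that invokes target_arn
    every rate_minutes minutes."""
    if client is None:
        import boto3  # assumed available in the deployment environment
        client = boto3.client("events")
    client.put_rule(
        Name=rule_name,
        ScheduleExpression=f"rate({rate_minutes} minutes)",
        State="ENABLED",
    )
    client.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "target", "Arn": target_arn}],
    )
    return rule_name
```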
- Lambda with CloudWatch Events failure handling
- Lambda concurrency
- In general, each instance of your execution environment can handle at most 10 requests per second. This limit applies to synchronous on-demand functions, as well as functions that use provisioned concurrency.
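The back-of-the-envelope sizing that follows from this: required concurrency is roughly request rate times average duration. A tiny sketch:

```python
def estimated_concurrency(requests_per_second, avg_duration_seconds):
    """Approximate number of execution environments kept busy:
    concurrency ~= requests/s * average duration (s)."""
    return requests_per_second * avg_duration_seconds

# e.g. 100 req/s at 500 ms each keeps roughly 50 environments busy
```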