Log Routing in ECS with AWS FireLens

AWS FireLens is an AWS-provided container log router for Amazon Elastic Container Service (ECS), including tasks running on AWS Fargate. With FireLens, you can easily redirect container logs to multiple destinations, enabling storage and analysis capabilities. FireLens supports both Fluentd and Fluent Bit; AWS provides the AWS for Fluent Bit image, and if you want, you can even use your own Fluentd output plugin.

In this blog, we will use AWS FireLens (with the Fluent Bit plugin) as a sidecar container for a Node.js application running on ECS. FireLens will route the various log types our application emits and store them in CloudWatch and S3 according to their tags.

What is Fluent Bit?

Fluent Bit is an open-source and lightweight data collector and forwarder that is designed to process and route log data. It acts as a unified logging layer, helping to collect, filter, and forward logs from various sources to different destinations.

To learn more about Fluent Bit, see Fluent Bit: A Brief Introduction or the Fluent Bit manual.

Fluent Bit configuration file

Create a Fluent Bit configuration file, “fluent.conf,” and store it in an S3 bucket.

In this configuration, we route the logs to three different destinations based on the log type.

  • Name: The destination (output plugin) to which the matching logs are routed.
  • Match: A pattern matched against each incoming record’s tag.
  • region: The AWS region of the destination.
S3
  • bucket: The name of the S3 bucket in which the log files are stored.
  • total_file_size: Logs are buffered into a single file until it reaches this size.
  • json_date_key: The key name under which the record’s timestamp is stored; set it to false to disable.
  • upload_timeout: The maximum time spent gathering logs into a single file. Whichever of total_file_size or upload_timeout is reached first triggers the creation of a file in S3.
  • compression: Enables compression (for example, gzip) of the uploaded files.
  • s3_key_format: The folder path and key format under which the file is stored in S3. To improve Athena performance, we create a folder per log type, allowing us to store different types of logs in different folders and partition them by year, month, and day.
CloudWatch
  • log_group_name: The name of the CloudWatch log group to which the logs are sent.
  • log_stream_prefix: Prefix for the log stream name.
  • auto_create_group: Automatically creates the log group if it does not already exist.
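As an illustration of how these options fit together, a fluent.conf along these lines could be used — the region, bucket, log group, and tag names below are placeholders, not values from the original post; an additional [OUTPUT] section per log type routes each tag to its own S3 folder:

```ini
# Send the container's stdout/stderr (tagged <container>-firelens-<task-id>
# by FireLens) to CloudWatch Logs.
[OUTPUT]
    Name              cloudwatch_logs
    Match             *-firelens-*
    region            us-east-1
    log_group_name    /ecs/my-app
    log_stream_prefix ecs-
    auto_create_group true

# Route audit logs (emitted by the app under the tag app.audit) to S3,
# partitioned by year/month/day for Athena.
[OUTPUT]
    Name            s3
    Match           app.audit*
    region          us-east-1
    bucket          my-log-bucket
    total_file_size 50M
    upload_timeout  10m
    compression     gzip
    json_date_key   false
    s3_key_format   /audit/year=%Y/month=%m/day=%d/%H-%M-%S-$UUID.gz
```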

Here, CloudWatch receives the application logs, while Fluent Bit routes the other log types to different S3 folders according to their tags.

FireLens Task Definition

With the fluent.conf file in an S3 bucket, create a FireLens container definition using the aws-for-fluent-bit:init image, and add the config file’s S3 ARN to the container’s environment variables.

Add the FireLens container definition to “containerDefinitions” alongside the application container definition.

Update the application container definition’s log driver to “awsfirelens”.
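A minimal sketch of the resulting task definition fragment — image tags, names, and the bucket ARN are illustrative; the init image reads the config location from an aws_fluent_bit_init_s3_* environment variable:

```json
{
  "containerDefinitions": [
    {
      "name": "log_router",
      "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:init-latest",
      "essential": true,
      "firelensConfiguration": { "type": "fluentbit" },
      "environment": [
        {
          "name": "aws_fluent_bit_init_s3_1",
          "value": "arn:aws:s3:::my-log-bucket/fluent.conf"
        }
      ]
    },
    {
      "name": "app",
      "image": "my-node-app:latest",
      "essential": true,
      "logConfiguration": { "logDriver": "awsfirelens" }
    }
  ]
}
```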

Update Task Role

You need to grant the task role s3:GetObject and s3:GetBucketLocation permissions to read the fluent.conf file, as well as s3:PutObject permission to upload log files to the specified folders.
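A sketch of such a policy, assuming the same placeholder bucket as above (note that s3:GetBucketLocation applies to the bucket ARN, while the object actions apply to object ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetBucketLocation",
      "Resource": "arn:aws:s3:::my-log-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-log-bucket/fluent.conf"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-log-bucket/*"
    }
  ]
}
```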

Update Application Code

Once the FireLens container is running, ECS automatically adds FLUENT_HOST and FLUENT_PORT to the application container’s environment.

The fluent-logger package allows us to send logs with various tags to the FireLens container, which then processes and stores them in the corresponding S3 folders according to the tags specified in fluent.conf, while CloudWatch receives all other application logs.
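A sketch of such a logging utility, assuming the fluent-logger npm package; the "app" tag prefix and log-type names are illustrative and must line up with the Match patterns in fluent.conf:

```javascript
// Build the tag/record pair for a given log type. Tags like "app.audit"
// (prefix "app" + type) are what the Match patterns in fluent.conf select on.
function makeLog(type, message) {
  return {
    tag: type, // emitted under the "app" prefix configured below
    record: { level: type, message: message, ts: new Date().toISOString() },
  };
}

// Send a record to the FireLens sidecar. FireLens injects FLUENT_HOST and
// FLUENT_PORT into the application container's environment.
function sendLog(type, message) {
  const logger = require('fluent-logger'); // lazy-require: only needed here
  logger.configure('app', {
    host: process.env.FLUENT_HOST || 'localhost',
    port: Number(process.env.FLUENT_PORT) || 24224,
    timeout: 3.0,
  });
  const { tag, record } = makeLog(type, message);
  logger.emit(tag, record); // final tag becomes "app.<type>", e.g. "app.audit"
}
```

Calling, say, sendLog('audit', 'user signed in') would then land in the S3 audit folder, while plain console output still flows to CloudWatch via the awsfirelens driver.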

Query the log files with Athena

Once the application sends the various log types through the logging utility, the log files appear in their respective S3 paths, and the remaining logs appear in the CloudWatch log group specified in the configuration file.
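Because the s3_key_format above writes year=/month=/day= folders, Athena can treat them as partitions. A hypothetical table over the audit logs might look like this — bucket, columns, and dates are illustrative:

```sql
-- External table over the JSON records the S3 output wrote.
CREATE EXTERNAL TABLE audit_logs (
  level   string,
  message string,
  ts      string
)
PARTITIONED BY (year string, month string, day string)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-log-bucket/audit/';

-- Discover the year=/month=/day= partitions, then query a single day.
MSCK REPAIR TABLE audit_logs;

SELECT ts, message
FROM audit_logs
WHERE year = '2023' AND month = '09' AND day = '15';
```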

In conclusion, using FireLens with the Fluent Bit plugin, we can process, filter, and route logs to various destinations, including but not limited to Amazon Kinesis Data Firehose, Azure Blob Storage, Datadog, OpenSearch, and Google Chronicle, without any extra work in our application container. With these processed log files, we can use Athena to query specific data for audit or analytical purposes.

Vijay Akash