Logrotate your logfiles and upload to Amazon S3

This is a simple tutorial on how to archive and compress log files using the logrotate command and upload the archived logs to an AWS S3 Bucket.

Why archive and keep log files at all? Common reasons include troubleshooting, compliance, and legal retention requirements.


Prerequisites:

  • Linux server
  • AWS account
  • AWS S3 bucket
  • Python

Step 1

Determine the logs you want to archive. On most Linux systems, log files live under /var/log; list them with

cd /var/log
ls -l
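If you are unsure which logs are worth archiving, a quick way to spot the largest ones is to sort the directory entries by size (a sketch; adjust the path as needed):

```shell
# Show the five largest entries under /var/log, with human-readable sizes
du -sh /var/log/* 2>/dev/null | sort -h | tail -5
```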

Step 2

Check the logrotate configuration by running cat /etc/logrotate.conf. This file contains the default logrotate settings. Ensure that it contains the line include /etc/logrotate.d. That directory holds the per-application logrotate config files, including those for system log files.
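For reference, on Debian/Ubuntu the default file typically looks something like this (the exact directives vary by distribution):

```
# /etc/logrotate.conf - global defaults, overridden per application
weekly                     # rotate log files weekly
rotate 4                   # keep 4 weeks of rotated logs
create                     # create a new (empty) log file after rotating
include /etc/logrotate.d   # pull in the per-application configs
```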

Step 3

In this tutorial, I will be compressing the nginx logs and archiving the compressed logs to an AWS S3 bucket. The rotation config for nginx lives in the /etc/logrotate.d directory.

Edit the nginx logrotate config file by running the following command: sudo nano /etc/logrotate.d/nginx

My nginx log config file looks like this

/var/log/nginx/*.log {
    weekly
    rotate 5
    compress
    missingok
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
    endscript
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
        /bin/bash /home/ubuntu/upload_logs.sh
    endscript
}

This configuration rotates and compresses the nginx log files weekly, keeps five rotated copies, and runs the upload_logs.sh script after each rotation.
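Before forcing a real rotation, you can ask logrotate to print what it would do with this config. Debug mode implies a dry run, so nothing is actually rotated:

```shell
# -d (debug) parses the config and reports planned actions without rotating
sudo logrotate -d /etc/logrotate.d/nginx
```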

Step 4 : Configure S3cmd

S3cmd describes itself as a free command-line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage providers. You can install it by running:

sudo apt-get update && sudo apt-get install s3cmd

Note that s3cmd is written in Python, so Python must be present; on Debian/Ubuntu, apt will install it as a dependency if it is missing.

Confirm that S3cmd is installed by running s3cmd --version

Configure s3cmd by running the command

s3cmd configure

When running the script for the first time, you will be prompted to enter your AWS Access Key ID, Secret Access Key, and a few other settings.
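The upload script in the next step assumes the target bucket already exists. Once s3cmd is configured, you can create it from the same tool (the bucket name logrotate matches the one used in this tutorial; S3 bucket names are globally unique, so yours will likely differ):

```shell
# Create the bucket, then confirm it is visible to these credentials
s3cmd mb s3://logrotate
s3cmd ls
```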

Step 5: Create a bash script

We will create a bash script that will run when we invoke the logrotate command. Ensure that you have created an S3 bucket before creating this script. Create a file /home/ubuntu/upload_logs.sh (the path referenced in the logrotate config) and copy in the following code.


#!/bin/bash
# Keep a local copy of the compressed logs
cp /var/log/nginx/*.gz /tmp/

# Upload the compressed logs to S3 under a timestamped prefix
s3cmd sync /var/log/nginx/*.gz "s3://logrotate/logs/$(date +%Y-%m-%dT%H:%M:%SZ)/"
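Make the script executable and give it a manual test run before wiring it into logrotate:

```shell
chmod +x /home/ubuntu/upload_logs.sh
sudo /home/ubuntu/upload_logs.sh
```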

The compressed logs will be uploaded to the logrotate bucket under a prefix named after the timestamp at which the script ran.

Step 6: Run the logrotate command

Now, we can test the configuration by running the command

sudo logrotate /etc/logrotate.conf --verbose --force


Confirm that the logs were archived by checking your AWS S3 bucket.
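You can check from the terminal as well; listing the prefix recursively should show the freshly uploaded archives:

```shell
# List everything under the logs/ prefix in the logrotate bucket
s3cmd ls --recursive s3://logrotate/logs/
```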

If you liked this article, like, share and comment. Follow me on Twitter @devylawyer
