Amazon Web Services (AWS) has become a cost-effective solution for many companies' IT infrastructure needs. AWS gives its users the flexibility to choose the development platform and programming model that best suits their requirements.
S3 is the storage service of AWS, where we can store our files and access them via a URL; it can also be fronted by a CDN (Content Delivery Network, such as Amazon CloudFront) for easier and faster access to the files stored on S3.
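For illustration, the URL of an object stored in S3 generally follows the virtual-hosted-style pattern sketched below. The bucket name, key, and region here are placeholder values, and this assumes the object is publicly readable (or accessed via a presigned URL instead):

```python
def s3_object_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    # Virtual-hosted-style S3 URL: https://<bucket>.s3.<region>.amazonaws.com/<key>
    # Assumes the object is publicly accessible; private objects need presigned URLs.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(s3_object_url("demo-bucket", "files.zip"))
```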
I got a chance to work on a project where we were using AWS services such as EC2, S3, and SES (Simple Email Service) to provide a free file-transfer service.
Since users were not required to log in to access the site, we created uniquely named buckets on S3 to store each user's uploaded files, so that the receiver could then download the files from a given URL directly from S3. But I ran into a problem while creating buckets on S3, and after investigating I found that:
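A minimal sketch of generating a unique bucket name per upload, as described above. The `transfer` prefix is a made-up example; S3 bucket names must be globally unique, lowercase, and 3–63 characters long, which the UUID suffix and truncation account for:

```python
import uuid

def unique_bucket_name(prefix: str = "transfer") -> str:
    # uuid4().hex gives 32 lowercase hex chars, making collisions practically
    # impossible; truncate to S3's 63-character bucket-name limit.
    return f"{prefix}-{uuid.uuid4().hex}"[:63]

print(unique_bucket_name())
```

With a live AWS account, the resulting name would then be passed to the S3 `create_bucket` call (e.g. via boto3).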
– A maximum of 100 buckets can be created per account on S3 (this is the default service limit).
– Inside a bucket, you can keep any number of files (objects).
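The constraint above can be checked before attempting bucket creation. This is a sketch under the assumption of the default 100-bucket quota; `remaining_bucket_slots` is a hypothetical helper, and the commented lines show how it could be fed from a live account with boto3 (which requires configured AWS credentials):

```python
DEFAULT_BUCKET_LIMIT = 100  # S3's default per-account bucket quota

def remaining_bucket_slots(bucket_names, limit=DEFAULT_BUCKET_LIMIT):
    # How many more buckets this account can create before hitting the quota.
    return max(0, limit - len(bucket_names))

# Against a live account (assumes boto3 is installed and credentials are set up):
# import boto3
# s3 = boto3.client("s3")
# names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
# print(remaining_bucket_slots(names))
```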
So to work around this limit, I wrote a cron script that continuously processes the buckets that have been created: it zips up the files stored in each bucket, moves the zipped file to a common bucket from which the receiver can download it, and then deletes the processed bucket so that new buckets can be created.
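The zipping step of that cron job can be sketched as below. This is only an illustration, not the original script: `zip_files` is a hypothetical helper that bundles object contents (keyed by object name) into an in-memory zip archive, and the commented steps outline the surrounding boto3 workflow (download objects, upload the archive, delete the source bucket):

```python
import io
import zipfile

def zip_files(files):
    # files: dict mapping object key -> object bytes (e.g. fetched from a bucket).
    # Returns the bytes of a zip archive containing those files.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

# Surrounding cron workflow with boto3 (assumes credentials; bucket names are
# placeholders):
# import boto3
# s3 = boto3.client("s3")
# keys = [o["Key"] for o in s3.list_objects_v2(Bucket=src)["Contents"]]
# files = {k: s3.get_object(Bucket=src, Key=k)["Body"].read() for k in keys}
# s3.put_object(Bucket=archive_bucket, Key=src + ".zip", Body=zip_files(files))
# for k in keys:
#     s3.delete_object(Bucket=src, Key=k)   # bucket must be empty before deletion
# s3.delete_bucket(Bucket=src)
```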
I hope this information is helpful for others working with AWS S3 bucket creation and similar scenarios.