
Maximizing Your AWS Savings: Tips for Optimizing Workload Costs


Public cloud providers, like AWS, promise the convenience of on-demand resources and paying only for what you use. However, many organizations provision resources and leave them running continuously, resulting in unexpectedly high bills and frustration. This can lead to questioning the value of cloud computing and a desire to return to on-premises operations.

While right-sizing EC2 instances and purchasing Reserved or Spot Instances can help lower the cost of running cloud resources, it’s important to remember that the responsibility for managing those resources lies with the consumer.

Public cloud providers offer immense power and convenience in spinning up resources on demand, but without proper management and cleanup, unnecessary workloads can accumulate and drive up costs. To optimize your AWS environment and keep costs down, it’s crucial to implement effective cost optimization strategies that also prioritize a clean and well-maintained environment. 

Strategies for Cost Optimization 

One way to reduce costs is to leverage the Instance Scheduler, a Lambda-based solution introduced by AWS in 2018 that automates stopping and starting EC2 instances and RDS databases on a schedule.

For example, in a non-production environment where developers work 12 hours a day, Monday to Friday, but resources run 24x7x365, using the Instance Scheduler to run workloads only during the required hours can reduce AWS costs by up to 65%. 
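
To make this concrete, the stop half of such a schedule can be a small Lambda triggered by an EventBridge rule at the end of the working day. The sketch below is only an illustration, not the actual Instance Scheduler solution: it covers EC2 only and assumes instances carry a hypothetical Schedule=office-hours tag; the real solution also handles RDS and the morning start schedule.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical tag marking instances that should only run during office hours.
SCHEDULE_TAG = "Schedule"
SCHEDULE_VALUE = "office-hours"


def lambda_handler(event, context):
    """Stop every running EC2 instance tagged Schedule=office-hours.

    Triggered by an EventBridge rule at the end of the working day,
    e.g. cron(0 19 ? * MON-FRI *); a matching morning rule would call a
    similar function that starts the instances again.
    """
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": f"tag:{SCHEDULE_TAG}", "Values": [SCHEDULE_VALUE]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )

    instance_ids = [
        instance["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

    return {"stopped": instance_ids}
```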

The Instance Scheduler is a sample Lambda that can be customized to meet your specific needs, or you can write your own from scratch to handle more advanced scenarios. For instance, when using an Auto Scaling group, setting the min and max values to zero stops all running instances. This also works with container solutions such as ECS and EKS, allowing services to be paused without uninstalling them. A custom Lambda can automate this process easily.
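
A minimal sketch of that kind of custom Lambda follows. The Auto Scaling group, ECS cluster, and service names are placeholders: it scales the group to zero and drops the ECS service’s desired task count to zero, which stops the running capacity without deleting anything.

```python
import boto3

autoscaling = boto3.client("autoscaling")
ecs = boto3.client("ecs")


def pause_asg(asg_name: str) -> None:
    # Setting min/max/desired to zero terminates the group's instances
    # while keeping the group itself, so it can be scaled back up later.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=0,
        MaxSize=0,
        DesiredCapacity=0,
    )


def pause_ecs_service(cluster: str, service: str) -> None:
    # A desired count of zero stops the service's tasks without
    # removing the service or its task definition.
    ecs.update_service(cluster=cluster, service=service, desiredCount=0)


def lambda_handler(event, context):
    # Placeholder names; in practice these could come from tags or the event.
    pause_asg("dev-workers")
    pause_ecs_service("dev-cluster", "api")
```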

Another best practice is to delete temporary resources that aren’t intended to be long-lived.  

A similar solution to the Instance Scheduler can be developed using tagging policies to define when instances should be terminated. It’s recommended to tag the stack during deployment and automatically delete it based on a tag containing a date. 
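
One possible shape for that cleanup job, assuming stacks are deployed with a hypothetical DeleteAfter tag holding a YYYY-MM-DD date: a scheduled Lambda scans CloudFormation stacks and deletes any whose date has passed.

```python
from datetime import datetime, timezone

import boto3

cloudformation = boto3.client("cloudformation")

# Hypothetical tag applied at deployment time, e.g. DeleteAfter=2024-06-30.
EXPIRY_TAG = "DeleteAfter"


def lambda_handler(event, context):
    """Delete any CloudFormation stack whose DeleteAfter tag is in the past."""
    today = datetime.now(timezone.utc).date()
    deleted = []

    for page in cloudformation.get_paginator("describe_stacks").paginate():
        for stack in page["Stacks"]:
            tags = {t["Key"]: t["Value"] for t in stack.get("Tags", [])}
            expiry = tags.get(EXPIRY_TAG)
            if not expiry:
                continue
            if datetime.strptime(expiry, "%Y-%m-%d").date() <= today:
                cloudformation.delete_stack(StackName=stack["StackName"])
                deleted.append(stack["StackName"])

    return {"deleted": deleted}
```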

In production, a best practice is to use blue/green deployments, which involve standing up a new version of an app, certifying it post-deployment, redirecting traffic to the new version, and deleting the old version after an acceptable period.

Sometimes, however, we forget to delete the old version, leading to unnecessary costs. By defining an expiration tag on the old application during the deployment of the new version, we can automate the deletion process in a timely and efficient manner.
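
One way to wire that up, as a sketch rather than a prescribed implementation: have the cut-over script stamp the retiring stack with the same hypothetical DeleteAfter tag used above, so the scheduled cleanup Lambda removes it once the retention window has passed. The stack name, retention window, and tag key are placeholders, and this tag-only update assumes the old stack’s template stays unchanged.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder retention window after the new version takes traffic.
RETENTION_DAYS = 7


def schedule_old_stack_for_deletion(old_stack_name: str) -> str:
    """Stamp the retiring (old) stack with a DeleteAfter tag."""
    delete_after = (
        datetime.now(timezone.utc) + timedelta(days=RETENTION_DAYS)
    ).strftime("%Y-%m-%d")

    cloudformation.update_stack(
        StackName=old_stack_name,
        UsePreviousTemplate=True,  # re-use the existing template unchanged
        # If the stack declares parameters, pass them with UsePreviousValue=True.
        Tags=[{"Key": "DeleteAfter", "Value": delete_after}],
        # Only needed if the stack contains IAM resources.
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    return delete_after
```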

By implementing these cost optimization practices in your AWS environment, you can ensure that resources are used efficiently and costs are kept under control.

Larry Grant
