I am attending AWS re:Invent. The conference has grown to about 35,000 attendees, and the sessions span not only the Venetian hotel but also the Palazzo, Mirage, and Encore.
re:Invent offers an amazing opportunity to ask AWS engineers questions about your particular use cases. Often, it's the minutiae separating a couple of options that make all the difference in production.
What is the latency for resizing and rescaling an Elastic Load Balancer (ELB)?
Under the hood, an AWS ELB is essentially an auto-scaling group of instances that receive incoming requests and forward them to an attached, healthy backend instance.
According to the Amazon engineers I spoke with, the scaling happens within minutes of consistently high traffic.
After the number of instances running your ELB crosses some threshold (the exact value is proprietary), AWS starts terminating those instances in favor of larger instance sizes.
Is it possible to resize an RDS-managed database without downtime?
When you set up an RDS database, you choose not only the underlying database engine (Postgres, SQL Server, etc.) but also the size of the instance it runs on, which determines the database's performance.
Over time, you may decide that the database needs faster hardware to keep up with growing demand. According to the Amazon engineers I spoke to, databases cannot be resized without downtime. You will need to allot enough time for a snapshot of your database to be taken and then restored to a new instance.
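As a rough sketch of that snapshot-then-restore flow, here is what it might look like with boto3. This is a hypothetical illustration, not an official recipe: the identifiers and the `resize_via_snapshot` helper are my own placeholders, and you would still need to handle cutting over traffic and DNS yourself.

```python
def resize_via_snapshot(rds, src_id, new_id, snapshot_id, instance_class):
    """Snapshot an RDS instance, then restore it onto a larger instance class.

    `rds` is a boto3 RDS client. All identifiers are placeholders for this
    sketch. Downtime spans roughly from when you stop writes to the source
    until the restored instance is available and traffic is cut over.
    """
    # 1. Take a snapshot of the source database and wait for it to complete.
    rds.create_db_snapshot(DBInstanceIdentifier=src_id,
                           DBSnapshotIdentifier=snapshot_id)
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier=snapshot_id)

    # 2. Restore the snapshot onto a new, larger instance class and wait
    #    until the new instance is available.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=new_id,
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceClass=instance_class,
    )
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=new_id)

# Usage (requires boto3 and configured AWS credentials):
#   import boto3
#   resize_via_snapshot(boto3.client("rds"), "prod-db", "prod-db-large",
#                       "prod-db-resize-snap", "db.m5.xlarge")
```

Passing the client in as a parameter keeps the sketch easy to test and swap out; the waiters are what turn this from a fire-and-forget script into something you can sequence safely.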
Is it possible to use CloudFront in front of a web service as a caching layer?
This is actually one of the most common use cases. Some teams have used CloudFront but set the TTL to 0, meaning every request to CloudFront is forwarded on to the origin, defeating the purpose. If you are using CloudFront as an edge server for your web services, make sure to set your TTL long enough to reduce calls to your service but short enough to keep your data fresh for your business needs and SLAs with your clients.
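To make that concrete, here is a hedged boto3 sketch for adjusting a distribution's default TTL. The helper name and distribution ID are placeholders; this uses the legacy per-behavior TTL fields, so if your distribution is attached to a cache policy, you would set TTLs on the policy instead.

```python
def set_default_ttl(cloudfront, distribution_id, ttl_seconds):
    """Set the default TTL on a distribution's default cache behavior.

    `cloudfront` is a boto3 CloudFront client. Updating a distribution
    requires round-tripping the full config along with its ETag.
    """
    # Fetch the current config and its ETag (an optimistic-locking token).
    resp = cloudfront.get_distribution_config(Id=distribution_id)
    config = resp["DistributionConfig"]

    # Raise the TTL so edge locations actually absorb traffic; a TTL of 0
    # forwards every request to the origin.
    config["DefaultCacheBehavior"]["DefaultTTL"] = ttl_seconds

    cloudfront.update_distribution(
        DistributionConfig=config,
        Id=distribution_id,
        IfMatch=resp["ETag"],  # required so concurrent edits are rejected
    )

# Usage (requires boto3 and configured AWS credentials):
#   import boto3
#   set_default_ttl(boto3.client("cloudfront"), "EDFDVBD6EXAMPLE", 300)
```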
Is there a scenario where Lambda functions are more expensive than running auto-scaled instances to serve your web services?
This is a tougher question that doesn't have a clear or easy answer. I need to work through the math with some of the engineers here and figure out the break-even point where actual EC2 instances become cheaper.
Speaking with a couple of AWS solutions architects, the general guidance is that if your EC2 instance consistently averages over 40% CPU utilization, it will be cheaper to run your API on a standard EC2 instance. Below that, you'll save money running Lambda functions.
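The break-even math above can be sketched as a back-of-the-envelope calculation. All the prices below are illustrative assumptions (check current AWS pricing for your region), and the request duration and memory figures are placeholders for a typical lightweight API call.

```python
# Assumed prices -- illustrative only, not current AWS pricing.
EC2_HOURLY = 0.0416              # assumed on-demand rate for one instance, USD/hr
LAMBDA_PER_REQUEST = 0.0000002   # assumed USD per Lambda invocation
LAMBDA_GB_SECOND = 0.0000166667  # assumed USD per GB-second of Lambda compute

HOURS_PER_MONTH = 730


def ec2_monthly_cost(instances=1):
    """Cost of keeping fixed EC2 instances running all month."""
    return EC2_HOURLY * HOURS_PER_MONTH * instances


def lambda_monthly_cost(requests_per_month, avg_duration_s=0.1, memory_gb=0.5):
    """Cost of serving the same traffic with Lambda (requests + compute)."""
    request_cost = requests_per_month * LAMBDA_PER_REQUEST
    compute_cost = (requests_per_month * avg_duration_s
                    * memory_gb * LAMBDA_GB_SECOND)
    return request_cost + compute_cost


# Break-even request volume: where Lambda's per-request cost adds up to
# the flat monthly cost of a single always-on instance.
per_request = LAMBDA_PER_REQUEST + 0.1 * 0.5 * LAMBDA_GB_SECOND
break_even = ec2_monthly_cost() / per_request
print(f"Break-even: ~{break_even:,.0f} requests/month")
```

The shape of the result matters more than the exact numbers: below the break-even volume (or at low average CPU utilization), Lambda's pay-per-request model wins; above it, the flat hourly instance cost wins.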
Why are we limited to 5 security groups per instance?
Performance in AWS degrades as a function of the number of security groups attached to an instance. AWS set the initial limit at 5 to keep the multiplier low enough to maintain performance. If you are chasing micro-optimizations in your AWS architecture, you can also try consolidating some of your security groups for an uptick in performance.
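Before consolidating, it helps to know which instances are actually up against the limit. Here is a small boto3 audit sketch; the helper name is my own, and the default of 5 reflects the per-interface default discussed above (the actual limit is adjustable per account).

```python
def instances_at_sg_limit(ec2, limit=5):
    """Return {instance_id: security_group_count} for instances at or
    above `limit` attached security groups.

    `ec2` is a boto3 EC2 client. These are candidates for consolidating
    several narrow groups into one broader group.
    """
    counts = {}
    # Paginate so accounts with many instances are handled correctly.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                counts[instance["InstanceId"]] = len(
                    instance.get("SecurityGroups", []))
    return {i: n for i, n in counts.items() if n >= limit}

# Usage (requires boto3 and configured AWS credentials):
#   import boto3
#   print(instances_at_sg_limit(boto3.client("ec2")))
```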