Maintaining High Availability with Auto Scaling (for Linux)

1 hour 55 minutes · 10 credits

SPL-04 - Version 4.3.10

© 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited.

Errors or corrections? Email us at aws-course-feedback@amazon.com.

Other questions? Contact us at https://aws.amazon.com/contact-us/aws-training/

Overview

Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using increases seamlessly during demand spikes to maintain performance and decreases automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage.

But Auto Scaling represents more than a way to add and subtract servers. It is also a mechanism to handle failures, much as load balancing handles unresponsive servers. This lab demonstrates how to configure Auto Scaling to automatically launch and monitor Amazon Elastic Compute Cloud (EC2) instances and to keep the associated load balancer updated as instances are added and removed.

There are two important things to know about Auto Scaling. First, Auto Scaling is a way to set the “cloud temperature.” You use policies to “set the thermostat,” and under the hood, Auto Scaling controls the heat by adding and subtracting Amazon EC2 resources on an as-needed basis in order to maintain the “temperature” (capacity).

An Auto Scaling policy consists of:

  • A launch configuration that defines the Amazon EC2 instances (the AMI, instance type, and related settings) that are launched in response to increased demand.

  • An Auto Scaling group that defines when a launch configuration is used to create new instances, and in which Availability Zones and behind which load balancer those instances are placed.

Second, Auto Scaling assumes a set of homogeneous servers. That is, Auto Scaling does not know that Server A is a 64-bit extra-large instance and more capable than a 32-bit small instance. In fact, this is a core tenet of cloud computing: scale horizontally using a fleet of fungible resources; individual resources are secondary to the fleet itself.
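
The two pieces map naturally onto two AWS CLI calls. The commands below are only a rough sketch, not the exact steps used later in this lab; the configuration name, AMI ID, security group, Availability Zones, and load balancer name are placeholders:

    # Launch configuration: the template for the instances Auto Scaling creates
    aws autoscaling create-launch-configuration \
        --launch-configuration-name my-launch-config \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --security-groups my-security-group

    # Auto Scaling group: where those instances run, within what size limits,
    # and behind which load balancer
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --launch-configuration-name my-launch-config \
        --min-size 2 --max-size 4 --desired-capacity 2 \
        --availability-zones us-west-2a us-west-2b \
        --load-balancer-names my-load-balancer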

Topics covered

By the end of this lab, you will be able to:

  • Create a new launch configuration using command-line tools
  • Create a new Auto Scaling group using command-line tools
  • Configure Auto Scaling notifications that are triggered when instance resource utilization becomes too high or too low
  • Create policies to scale the number of running instances up or down in response to changes in resource utilization, as sketched in the commands below
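
A scaling policy by itself only defines the adjustment; a CloudWatch alarm decides when to invoke it, and an optional SNS topic can receive notifications when the group launches or terminates instances. The commands below are a rough AWS CLI sketch; the group name, thresholds, and ARNs are placeholders and may differ from what the lab uses:

    # Scale-out policy: add one instance whenever the policy is triggered
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name my-asg \
        --policy-name scale-up \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 1

    # CloudWatch alarm that triggers the policy when average CPU stays above 80%
    aws cloudwatch put-metric-alarm \
        --alarm-name my-asg-high-cpu \
        --namespace AWS/EC2 --metric-name CPUUtilization \
        --dimensions Name=AutoScalingGroupName,Value=my-asg \
        --statistic Average --period 300 --evaluation-periods 2 \
        --threshold 80 --comparison-operator GreaterThanThreshold \
        --alarm-actions <PolicyARN returned by put-scaling-policy>

    # Optional: notify an SNS topic when instances are launched or terminated
    aws autoscaling put-notification-configuration \
        --auto-scaling-group-name my-asg \
        --topic-arn <SNS topic ARN> \
        --notification-types autoscaling:EC2_INSTANCE_LAUNCH autoscaling:EC2_INSTANCE_TERMINATE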
