Architecting for the Cloud: AWS Best Practices


Infrastructure as Code

The application of the principles we have discussed does not have to be limited to the individual resource level. Since AWS assets are programmable, you can apply techniques, practices, and tools from software development to make your whole infrastructure reusable, maintainable, extensible, and testable. AWS CloudFormation templates give developers and systems administrators an easy way to create and manage a collection of related AWS resources, and provision and update them in an orderly and predictable fashion. You can describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. Your CloudFormation templates can live with your application in your version control repository, allowing architectures to be reused and production environments to be reliably cloned for testing.
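As a sketch of this practice in Python with boto3, a minimal template can live next to the application code and be provisioned through CloudFormation's CreateStack API. The template, stack, and parameter names below are invented for the example, not taken from the whitepaper:

```python
import json

# Hypothetical minimal CloudFormation template expressed as a Python dict;
# the bucket resource and "Suffix" parameter are illustrative.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one S3 bucket, parameterized by a name suffix.",
    "Parameters": {
        "Suffix": {"Type": "String", "Description": "Bucket name suffix"}
    },
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": {"Fn::Sub": "my-app-assets-${Suffix}"}},
        }
    },
}


def template_body() -> str:
    """Serialize the template so it can be checked into version control
    and passed to CloudFormation."""
    return json.dumps(TEMPLATE, indent=2)


def create_stack(cfn_client, stack_name: str, suffix: str):
    """Provision the stack; cfn_client is a boto3 CloudFormation client."""
    return cfn_client.create_stack(
        StackName=stack_name,
        TemplateBody=template_body(),
        Parameters=[{"ParameterKey": "Suffix", "ParameterValue": suffix}],
    )
```

Because the template is plain text under version control, the same definition can be deployed repeatedly to clone a production environment for testing.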

In a traditional IT infrastructure, you would often have to manually react to a
variety of events. When deploying on AWS there is a lot of opportunity for
automation, so that you improve both your system’s stability and the efficiency of
your organization.

Loose Coupling
As application complexity increases, a desirable attribute of an IT system is that it
can be broken into smaller, loosely coupled components. This means that IT
systems should be designed in a way that reduces interdependencies—a change
or a failure in one component should not cascade to other components.
Well-Defined Interfaces
A way to reduce interdependencies in a system is to allow the various
components to interact with each other only through specific, technology-agnostic
interfaces (e.g., RESTful APIs). Amazon API Gateway can be used to build and expose such interfaces.

Service Discovery
Applications that are deployed as a set of smaller services will depend on the
ability of those services to interact with each other. Because each of those services
could be running across multiple compute resources there needs to be a way for
each service to be addressed. For example, in a traditional infrastructure if your
front end web service needed to connect with your back end web service, you
could hardcode the IP address of the compute resource where this service was
running. Although this approach can still work in cloud computing, if those services are meant to be loosely coupled, they should be consumable
without prior knowledge of their network topology details. Apart from hiding
complexity, this also allows infrastructure details to change at any time. Loose
coupling is a crucial element if you want to take advantage of the elasticity of
cloud computing where new resources can be launched or terminated at any
point in time. In order to achieve that you will need some way of implementing
service discovery.
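One lightweight way to implement service discovery is DNS: each service is addressed by a stable name (for example, an Amazon Route 53 record) that resolves to its current endpoints at call time. A minimal Python sketch, where the backend hostname is an invented example:

```python
import socket


def resolve_service(hostname: str, port: int) -> list:
    """Look up a service's current endpoints by DNS name at call time,
    rather than hardcoding IP addresses. With a DNS-based registry
    (e.g., Route 53 records), the answers change as resources are
    launched or terminated, and callers pick up the change."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr tuple starts with the resolved IP address.
    return sorted({info[4][0] for info in infos})


# Illustrative call; "backend.internal.example.com" is an invented name:
# endpoints = resolve_service("backend.internal.example.com", 8080)
```

The caller never learns, or depends on, which compute resources currently back the service, so instances can come and go freely.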

Asynchronous Integration
Asynchronous integration is another form of loose coupling between services. This model is suitable for any interaction that does not need an immediate response and where an acknowledgement that a request has been registered will suffice. It involves one component that generates events and another that consumes them. The two components do not integrate through direct point-to-point interaction but usually through an intermediate durable storage layer (e.g., an Amazon SQS queue or a streaming data platform like Amazon Kinesis).
This approach decouples the two components and introduces additional resiliency. So, for example, if a process that is reading messages from the queue fails, messages can still be added to the queue to be processed when the system recovers. It also allows you to protect a less scalable back end service from front end spikes and find the right tradeoff between cost and processing lag. For example, you can decide that you don’t need to scale your database to accommodate for an occasional peak of write queries as long as you eventually process those queries asynchronously with some delay. Finally, by moving slow operations off of interactive request paths you can also improve the end-user experience.
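A minimal sketch of this pattern with Amazon SQS via boto3 follows; the queue URL, message format, and handler are illustrative assumptions. Note that the consumer deletes a message only after processing succeeds, so a consumer failure leaves the message on the queue for a retry:

```python
import json


def make_task_message(task_id: str, payload: dict) -> str:
    """Build the message body the producer enqueues; the consumer
    processes it later, decoupled from the producer's request path."""
    return json.dumps({"task_id": task_id, "payload": payload})


def enqueue(sqs_client, queue_url: str, task_id: str, payload: dict):
    """Producer side: register the request and return immediately.
    sqs_client is a boto3 SQS client; queue_url is illustrative."""
    return sqs_client.send_message(
        QueueUrl=queue_url, MessageBody=make_task_message(task_id, payload)
    )


def drain(sqs_client, queue_url: str, handler):
    """Consumer side: long-poll, process, and delete only on success,
    so a failed handler leaves the message visible for a retry."""
    resp = sqs_client.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        handler(json.loads(msg["Body"]))
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
```

The consumer fleet can then be scaled on queue depth, independently of the front end that produces the messages.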

Services, Not Servers
Developing, managing, and operating applications—especially at scale—requires a wide variety of underlying technology components. With traditional IT infrastructure, companies would have to build and operate all those components.
AWS offers a broad set of compute, storage, database, analytics, application, and deployment services that help organizations move faster and lower IT costs.

Architectures that do not leverage that breadth (e.g., if they use only Amazon EC2) might not be making the most of cloud computing and might be missing an opportunity to increase developer productivity and operational efficiency.

Managed Services
On AWS, there is a set of services that provide building blocks that developers can consume to power their applications. These managed services include databases, machine learning, analytics, queuing, search, email, notifications, and more. Examples include Amazon DynamoDB for NoSQL databases, Amazon CloudSearch for search workloads, Amazon Elastic Transcoder for video encoding, and Amazon Simple Email Service (Amazon SES) for sending email.

Serverless Architectures
Another approach that can reduce the operational complexity of running applications is that of serverless architectures. It is possible to build both event-driven and synchronous services for mobile, web, analytics, and the Internet of Things (IoT) without managing any server infrastructure. These architectures can reduce costs because you are not paying for underutilized servers, nor are you provisioning redundant infrastructure to implement high availability.

You can upload your code to the AWS Lambda compute service and the service can run the code on your behalf using AWS infrastructure. With AWS Lambda, you are charged for every 100ms your code executes and the number of times your code is triggered. By using Amazon API Gateway, you can develop virtually infinitely scalable synchronous APIs powered by AWS Lambda. When combined with Amazon S3 for serving static content assets, this pattern can deliver a complete web application.
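As a sketch, a minimal Python Lambda handler for an API Gateway proxy integration might look like the following; the `name` query parameter and the response payload are invented for the example:

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.
    API Gateway passes the HTTP request as `event`, and the returned
    dict (statusCode/headers/body) is mapped back to an HTTP response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

No server is provisioned for this code; Lambda runs it per request, and API Gateway scales the synchronous front door.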

When it comes to mobile apps, there is one more way to reduce the surface of a server-based infrastructure. You can utilize Amazon Cognito, so that you don’t have to manage a back end solution to handle user authentication, network state, storage, and sync

NoSQL Databases
NoSQL is a term used to describe databases that trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases utilize a variety of data models, including graphs, key-value pairs, and JSON documents. NoSQL databases are widely recognized for ease of development, scalable performance, high availability, and resilience. Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.
NoSQL database engines will typically perform data partitioning and replication to scale both the reads and the writes in a horizontal fashion. They do this transparently without the need of having the data partitioning logic implemented in the data access layer of your application. Amazon DynamoDB in particular manages table partitioning for you automatically, adding new partitions as your table grows in size or as read- and write-provisioned capacity changes.
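The sketch below illustrates this in Python with boto3: the application supplies only a partition key (here a hypothetical `user_id` attribute, in DynamoDB's attribute-value format), and the service hashes it to choose the partition; no partitioning logic appears in the data access layer. The table and attribute names are illustrative:

```python
def build_item(user_id: str, attrs: dict) -> dict:
    """Marshal plain values into DynamoDB's attribute-value format
    (string attributes only, for brevity). The partition key is
    `user_id`; DynamoDB alone decides which partition stores it."""
    item = {"user_id": {"S": user_id}}
    item.update({k: {"S": str(v)} for k, v in attrs.items()})
    return item


def put_user(dynamodb_client, table: str, user_id: str, attrs: dict):
    """Write an item; dynamodb_client is a boto3 DynamoDB client."""
    return dynamodb_client.put_item(TableName=table, Item=build_item(user_id, attrs))


def get_user(dynamodb_client, table: str, user_id: str):
    """Read it back by the same partition key."""
    resp = dynamodb_client.get_item(
        TableName=table, Key={"user_id": {"S": user_id}}
    )
    return resp.get("Item")
```

Because the key determines placement, reads and writes spread across partitions automatically as the table grows.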
In order to make the most of Amazon DynamoDB scalability when designing your application, refer to the Amazon DynamoDB best practices section of the documentation.
High Availability
The Amazon DynamoDB service synchronously replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone disruption.

Data Warehouse
A data warehouse is a specialized type of relational database, optimized for analysis and reporting of large amounts of data. It can be used to combine transactional data from disparate sources (e.g., user behavior in a web application, data from your finance and billing system, CRM, etc.) making them available for analysis and decision-making.
Traditionally, setting up, running, and scaling a data warehouse has been complicated and expensive. On AWS, you can leverage Amazon Redshift, a managed data warehouse service that is designed to operate at less than a tenth the cost of traditional solutions.

Search
Applications that require sophisticated search functionality will typically outgrow the capabilities of relational or NoSQL databases. A search service can be used to index and search both structured and free text format and can support functionality that is not available in other databases, such as customizable result ranking, faceting for filtering, synonyms, stemming, etc.
On AWS, you have the choice between Amazon CloudSearch and Amazon Elasticsearch Service (Amazon ES). On the one hand, Amazon CloudSearch is a managed service that requires little configuration and will scale automatically. On the other hand, Amazon ES offers an open source API and gives you more control over the configuration details. Amazon ES has also evolved to become a lot more than just a search solution. It is often used as an analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

Detect Failure
You should aim to build as much automation as possible in both detecting and reacting to failure. You can use services like ELB and Amazon Route 53 to configure health checks and mask failure by routing traffic to healthy endpoints. In addition, Auto Scaling can be configured to automatically replace unhealthy nodes. You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature or services such as AWS OpsWorks and AWS Elastic Beanstalk.
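One way to feed application-level signals into this automation is sketched below in Python with boto3: a hypothetical check classifies a response, and the result is reported to Auto Scaling via `set_instance_health` so the group replaces instances marked unhealthy. The status codes and latency threshold are illustrative assumptions:

```python
def classify(status_code: int, latency_ms: float,
             max_latency_ms: float = 500.0) -> str:
    """Hypothetical application-level health rule: treat non-2xx or
    slow responses as unhealthy so the node can be replaced. The
    500 ms threshold is an invented example value."""
    healthy = 200 <= status_code < 300 and latency_ms <= max_latency_ms
    return "Healthy" if healthy else "Unhealthy"


def report(autoscaling_client, instance_id: str, status: str):
    """Feed the result back to Auto Scaling; an 'Unhealthy' instance
    is terminated and replaced by its group. autoscaling_client is a
    boto3 'autoscaling' client."""
    autoscaling_client.set_instance_health(
        InstanceId=instance_id, HealthStatus=status
    )
```

This complements load-balancer health checks: ELB stops routing to a failing node, while Auto Scaling replaces it.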

Security as Code
Traditional security frameworks, regulations, and organizational policies define security requirements related to things such as firewall rules, network access controls, internal/external subnets, and operating system hardening. You can implement these in an AWS environment as well, but you now have the opportunity to capture them all in a script that defines a “Golden Environment.” This means you can create an AWS CloudFormation script that captures your security policy and reliably deploys it. Security best practices can now be reused among multiple projects and become part of your continuous integration pipeline. You can perform security testing as part of your release cycle, and automatically discover application gaps and drift from your security policy.
Additionally, for greater control and security, AWS CloudFormation templates can be imported as “products” into AWS Service Catalog. This enables centralized management of resources to support consistent governance, security, and compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
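As an example of such automated security testing in a release cycle, the sketch below (Python; the policy rule itself is an invented example, not an AWS API) scans a CloudFormation template for security groups that expose SSH to the whole internet:

```python
def find_open_ssh(template: dict) -> list:
    """CI-pipeline policy check: flag every AWS::EC2::SecurityGroup in
    a CloudFormation template (parsed to a dict) whose ingress rules
    allow port 22 from 0.0.0.0/0. Assumes SecurityGroupIngress is a
    list of rule dicts, which is the common template shape."""
    violations = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            open_world = rule.get("CidrIp") == "0.0.0.0/0"
            covers_ssh = rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 0)
            if open_world and covers_ssh:
                violations.append(name)
    return violations
```

Run against every template before deployment, a check like this turns the written security policy into an automatically enforced gate rather than a manual review step.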


