Given the requirements in The Cloud Resume Challenge - AWS, the StocksCloud website was built with several architectural principles in mind, based on my interpretation of the design spec and on the six pillars of the AWS Well-Architected Framework.
I prioritised the requirements using the MoSCoW method (MUST, SHOULD, COULD, WON'T) as described below. Although every requirement could arguably be judged a MUST, where appropriate I chose tooling which improves on the original design spec.
MUST implement Security best practices: I've applied the principle of least privilege to all IAM users and policies. Data is encrypted in transit (HTTPS with TLSv1.3) and at rest (DynamoDB table items using an AWS-owned KMS key, S3 bucket objects using SSE-S3). I've added a bucket policy which only allows the CloudFront distribution to access the S3 origin, using Origin Access Control (OAC).
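For illustration, here's a minimal CDK (TypeScript) sketch of that pattern. The construct names are mine rather than the repo's, and `S3BucketOrigin.withOriginAccessControl` needs a recent aws-cdk-lib v2 release:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

export class WebsiteStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Private bucket: objects encrypted at rest with SSE-S3,
    // all public access blocked, and HTTPS-only access enforced.
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      encryption: s3.BucketEncryption.S3_MANAGED,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      enforceSSL: true,
    });

    // Origin Access Control: CDK generates the bucket policy that
    // restricts reads on the S3 origin to this distribution only.
    const distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: {
        origin: origins.S3BucketOrigin.withOriginAccessControl(siteBucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      },
      // The TLSv1.2_2021 security policy also negotiates TLSv1.3
      // with capable clients.
      minimumProtocolVersion: cloudfront.SecurityPolicyProtocol.TLS_V1_2_2021,
      defaultRootObject: 'index.html',
    });
  }
}
```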
MUST utilise Cost Optimization and Performance Efficiency: The design is serverless: an on-demand (pay-per-request) DynamoDB table and an API Gateway fronting a Lambda function which updates the site hit counter, rather than long-running EC2 instances or a container platform for compute. Lambda's always-free tier covers up to 1 million invocations per month :)
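A hedged sketch of how those pieces fit together in CDK; the table key, handler path and construct names here are illustrative rather than the repo's actual values:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

export class BackendStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // On-demand table: no provisioned capacity, billed per request.
    const hitsTable = new dynamodb.Table(this, 'HitsTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      encryption: dynamodb.TableEncryption.DEFAULT, // AWS-owned KMS key
    });

    // Python 3.9 handler that increments the counter; the handler
    // name and asset path are placeholders for the real ones.
    const counterFn = new lambda.Function(this, 'HitCounterFn', {
      runtime: lambda.Runtime.PYTHON_3_9,
      handler: 'hit_counter.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: { TABLE_NAME: hitsTable.tableName },
    });
    hitsTable.grantReadWriteData(counterFn); // least-privilege grant

    // REST API fronting the function; invoked per page view.
    new apigateway.LambdaRestApi(this, 'HitCounterApi', {
      handler: counterFn,
    });
  }
}
```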
MUST apply Operational Excellence: Using DevOps automation and event-driven architectures, the website's backend and frontend are deployed through GitHub Actions upon a git push to each repo's main branch. Monitoring for failed GitHub pipelines and CloudWatch monitoring are both available out-of-the-box, further improving operational responsiveness.
MUST perform Reliably: By leveraging AWS-managed services, the site is highly available within a single region, which comfortably meets the reliability needs of my static website use case.
MUST consider Sustainability: Employing a Serverless architecture ensures that workloads only run for as long as necessary. Using CDK templates and CloudFormation stacks means that infrastructure is declarative and can be quickly disposed of when no longer required.
SHOULD use CI/CD, HTML, CSS and a JavaScript website hit counter for the frontend and store code in source control: I hand-crafted much of the HTML but found that I couldn't write a JavaScript hit counter function! Here, I leant on Claude AI to assist me with writing the JavaScript function which interfaces with a REST API Gateway endpoint. The API invokes a Python 3.9 Lambda function to increment the page hit count, which is persisted in a DynamoDB table. CSS is used to style the HTML body elements of each page.
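The real function lives in the website repo as plain JavaScript; the TypeScript-flavoured sketch below shows its general shape, assuming the endpoint increments the count on each call and returns the new total. The URL, response field and element id are placeholders:

```typescript
// Placeholder endpoint; the real URL comes from the deployed API Gateway.
const COUNTER_API_URL = 'https://example.execute-api.eu-west-2.amazonaws.com/prod/hits';

async function updateHitCounter(): Promise<void> {
  try {
    const response = await fetch(COUNTER_API_URL);
    if (!response.ok) throw new Error(`Counter API returned ${response.status}`);
    // Assumes the Lambda responds with a JSON body like { "hits": 42 }.
    const data: { hits: number } = await response.json();
    const el = document.getElementById('hit-counter');
    if (el) el.textContent = String(data.hits);
  } catch (err) {
    // Fail quietly: the résumé page should still render without a counter.
    console.error('Hit counter unavailable:', err);
  }
}

document.addEventListener('DOMContentLoaded', () => void updateHitCounter());
```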
SHOULD use an API, Python (including Tests), a Database and CI/CD for the backend: I utilised GitHub Actions, enabling the backend components to be created as a single CloudFormation stack using cdk deploy within the GitHub Actions workflow. The CDK template was written in TypeScript; coding was again ably assisted by Claude AI, which wrote much of the template.
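The spec's tests target the Python Lambda; as a complementary TypeScript-side illustration, aws-cdk-lib's assertions module can also unit-test the synthesized stack. A hedged sketch, assuming a Jest-style setup and the BackendStack sketched earlier:

```typescript
import * as cdk from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import { BackendStack } from '../lib/backend-stack'; // hypothetical path

test('hit counter table is on-demand', () => {
  const app = new cdk.App();
  const stack = new BackendStack(app, 'TestStack');
  const template = Template.fromStack(stack);

  // Assert the synthesized CloudFormation uses pay-per-request billing.
  template.hasResourceProperties('AWS::DynamoDB::Table', {
    BillingMode: 'PAY_PER_REQUEST',
  });

  // Exactly one function should back the counter API.
  template.resourceCountIs('AWS::Lambda::Function', 1);
});
```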
COULD use SAM (Serverless Application Model) templates to define the backend infrastructure and store code in source control: Here I deviated from the design spec by opting to use CDK (Cloud Development Kit), a well-supported AWS framework for defining infrastructure-as-code in a number of popular languages. I opted for TypeScript due to its support by AWS and usage within my own business environments, allowing me to upskill in CDK. I covered the CI/CD requirements by implementing GitHub Actions workflows for both the frontend (website) and backend (infrastructure) repos and set the workflows to run upon a git push to the main branch.
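With everything in one stack, cdk deploy inside the workflow synthesizes and deploys a single CloudFormation stack. A minimal app entry point might look like this; the file layout and stack name are illustrative:

```typescript
#!/usr/bin/env node
// bin/app.ts (hypothetical path): the app that `cdk deploy` synthesizes
// into one CloudFormation stack when run from the GitHub Actions workflow.
import * as cdk from 'aws-cdk-lib';
import { BackendStack } from '../lib/backend-stack';

const app = new cdk.App();
new BackendStack(app, 'CloudResumeBackendStack', {
  // Resolve account/region from the credentials the workflow provides.
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
```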
COULD write a blog post about my experience: I did!
Architecture 🏗
Fig 1: Architecture Diagram
The diagram shows the CDK Infrastructure repo (GitHub name: cloud_resume_challenge_python_backend) and the Website repo (GitHub name: cloud_resume_challenge_website) interacting with AWS via GitHub Actions workflows (one workflow per repo) and an AWS IAM user with an inline policy.
Clients (Users) access the website via Route 53 (DNS) and CloudFront (Content Delivery Network).
The GitHub Actions workflows are triggered via 'git push' to the main branch of each repo. They can also be triggered manually from the GitHub Actions UI, useful for troubleshooting failed workflows.
Future Improvements 📈
While good security practices have generally been followed, the implementation of the StocksCloud website includes some IAM issues which can be addressed. Encrypted secrets used by GitHub Actions are placed into environment variables and work in concert with least-privileged AWS users/policies to perform tasks such as updating the CloudFormation stack, PUTting files to S3, or invalidating the CloudFront cache. These rely on long-lived access keys which, although stored as encrypted secrets, are inadvisable due to the potential for a credentials leak; both AWS and GitHub recommend moving away from long-lived credentials.
AWS recommend implementing IAM Roles and issuing STS (Security Token Service) calls such as sts:AssumeRoleWithWebIdentity, which provide temporary AWS access credentials by issuing the requestor with short-lived security tokens. The time-bound nature of these tokens makes them inherently more secure than long-lived credentials. Creating IAM Roles and integrating them with my GitHub Actions workflows is the main future enhancement I plan for the website.
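A sketch of that planned enhancement in CDK, assuming the standard GitHub OIDC provider pattern; the repo owner/name and the policy's actions and resources are placeholders that would need narrowing to the real repos and ARNs:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class GithubOidcStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // One OIDC provider per account for GitHub's token issuer.
    const provider = new iam.OpenIdConnectProvider(this, 'GithubProvider', {
      url: 'https://token.actions.githubusercontent.com',
      clientIds: ['sts.amazonaws.com'],
    });

    // Role assumable only by workflows on the main branch of the named
    // repo ('my-github-user' below is a placeholder owner).
    const deployRole = new iam.Role(this, 'GithubDeployRole', {
      assumedBy: new iam.OpenIdConnectPrincipal(provider).withConditions({
        StringEquals: {
          'token.actions.githubusercontent.com:aud': 'sts.amazonaws.com',
        },
        StringLike: {
          'token.actions.githubusercontent.com:sub':
            'repo:my-github-user/cloud_resume_challenge_website:ref:refs/heads/main',
        },
      }),
      maxSessionDuration: cdk.Duration.hours(1), // short-lived credentials
    });

    // Scope permissions tightly, e.g. S3 uploads and cache invalidation.
    deployRole.addToPolicy(new iam.PolicyStatement({
      actions: ['s3:PutObject', 's3:ListBucket', 'cloudfront:CreateInvalidation'],
      resources: ['*'], // narrow to the site bucket/distribution ARNs in practice
    }));
  }
}
```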
Final Thoughts 🤔
You might be thinking "It's a static website, you could have hosted it in S3!" Yes, I could have, but there is justification for taking my approach:
Principally, I would have failed to meet the design spec of the Cloud Resume Challenge; reason enough to steer away from the S3 web hosting solution.
The site is served over HTTPS: the CloudFront distribution presents an ACM (AWS Certificate Manager) certificate, issued by the Amazon CA, for the custom domain in the Route 53 hosted zone. This is more secure than a plain HTTP site served from an S3 website endpoint (or indeed anywhere which does not offer HTTPS).
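Continuing the WebsiteStack sketch from the Requirements section (and assuming it is deployed in us-east-1, where CloudFront requires its ACM certificates), the certificate and DNS wiring might look like this; the domain names are placeholders:

```typescript
// Additions to the WebsiteStack sketched earlier. Imports go at the
// top of the file:
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

// ...and inside the constructor, alongside `distribution`:
const zone = route53.HostedZone.fromLookup(this, 'Zone', {
  domainName: 'example.com', // placeholder apex domain
});

const certificate = new acm.Certificate(this, 'SiteCert', {
  domainName: 'resume.example.com', // placeholder site domain
  validation: acm.CertificateValidation.fromDns(zone), // auto DNS validation
});

// The distribution takes `domainNames: ['resume.example.com']` and
// `certificate`, and an alias record then routes DNS to CloudFront.
new route53.ARecord(this, 'SiteAlias', {
  zone,
  recordName: 'resume',
  target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
});
```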
The inclusion of a CloudFront distribution provides the advantage of a CDN (edge and regional edge caches) to serve the site pages and static assets. This makes site content available to a global user base with reduced latency and jitter, regardless of the user's location; direct S3 bucket access cannot offer a globally-distributed caching layer. As mentioned in the Requirements section, only the distribution can access the S3 origin, strengthening the security posture of the site. The globally-distributed CloudFront edge locations may also provide some mitigation against DDoS attacks. For production workloads, WAF and Shield Advanced should also be implemented for improved web and DDoS protection, but that is beyond the scope of a small-scale development website.
Finally, in the S3 bucket properties, AWS themselves recommend using Amplify, rather than S3, for static web hosting.