Ah, the inner machinations of performance testing are an enigma.
Attending FOSDEM 2024 was an eye-opening experience, especially the lecture on performance testing, which underscored its vital role in software development.
It’s incredible how something seemingly technical can have such a profound impact on the reliability, speed, and scalability of applications. In this article, we’ll explore the critical role of performance testing in software development. We’ll discuss its importance in identifying bottlenecks and enhancing user experience, and explain why it’s essential for businesses. Additionally, we’ll address the challenges of performance testing and introduce Gatling as a tool for database performance monitoring. Let’s dive in!
Performance testing is crucial in guaranteeing that applications not only meet but surpass the expectations of both users and businesses. It transcends basic functionality testing, delving deeply into the nuances of how an application operates under diverse conditions and workloads.
One of the primary objectives of performance testing is to identify bottlenecks within an application. These bottlenecks could manifest as slow response times, excessive resource consumption, or scalability limitations. By pinpointing these bottlenecks early in the development cycle, developers can take proactive measures to address them, thereby optimizing the application’s overall performance.
In today’s digital landscape, user experience reigns supreme. Poor performance can lead to frustration, dissatisfaction, and ultimately, abandonment of the application. Performance testing ensures that the application delivers a seamless and responsive user experience, regardless of the user’s device, location, or network conditions. By optimizing performance, businesses can enhance user satisfaction, loyalty, and retention.
Here are several compelling reasons for prioritizing performance testing:
In essence, performance testing is not just a technical necessity but a strategic imperative for businesses looking to deliver high-quality, high-performing applications that delight users and drive success in today’s hyper-competitive digital landscape.
Acknowledging the imperfections inherent in performance testing is crucial for understanding the challenges faced in this domain. Despite these challenges, it’s imperative to recognize the importance of imperfect performance testing and how it continues to provide value in software development:
Even in its imperfect state, performance testing is vital for early issue detection and continuous improvement. By uncovering critical issues and bottlenecks early in the development cycle, developers can take corrective action before they escalate into more significant problems. This iterative refinement process fosters a culture of continuous improvement, leading to incremental performance gains over time.
Imperfect performance testing provides invaluable insights into how applications behave under real-world conditions, including stress, resource limitations, and unexpected usage patterns. However, testing such complex systems requires a deep understanding of their inner workings, posing a significant challenge. Additionally, the dynamic nature of software environments introduces further complexity, making it impossible to replicate every possible scenario in controlled testing environments.
Performance testing often demands substantial resources, including hardware, software, and human expertise. Not all organizations have access to such resources, making comprehensive testing challenging. Moreover, keeping pace with evolving technology landscapes and ever-changing requirements adds another layer of complexity. However, despite these challenges, performance testing remains indispensable for ensuring application reliability, scalability, and efficiency.
Despite these challenges and recognizing the imperfect nature of performance testing, its value remains undeniable. It acts as a proactive safeguard, enabling developers to identify and address issues early in the development cycle, paving the way for continuous improvement and optimization. Even in its imperfect state, performance testing provides invaluable real-world insights into application behavior across diverse conditions, empowering developers to make informed decisions and prioritize enhancements that resonate with end users.
When undertaking performance testing, it’s vital to consider various components to ensure a thorough evaluation:
Load testing involves simulating real-world loads on web services to assess their performance under normal conditions, while stress testing pushes the system to its limits to identify its breaking point. Tools like Gatling offer scripting capabilities to define complex load scenarios and analyze performance under varying loads. Load testing helps gauge how the application handles typical user loads, while stress testing identifies weaknesses and potential failure points under extreme conditions.
Endurance testing evaluates the application’s performance over prolonged periods, ensuring stability without memory leaks or performance degradation. Continuous load tests running for several hours or days monitor metrics like memory usage, CPU utilization, and other system parameters. Endurance testing is crucial for identifying any degradation in performance over time, ensuring that the application remains stable and reliable during extended usage.
Scalability testing assesses how well the application can handle increased load by measuring metrics like throughput, response time, and resource utilization. Tools like Gatling can simulate gradual increases in the number of users or transactions to observe how the system scales horizontally or vertically. Scalability testing helps identify performance bottlenecks and capacity limits before deploying the application to production, ensuring that it can accommodate growth without sacrificing performance.
APIs are critical in modern web applications, facilitating communication between different components. API performance testing, using tools like Apache JMeter, involves sending HTTP requests and measuring response times to ensure that APIs meet performance requirements. Simulating various API calls with different payloads helps analyze how backend services respond under different load conditions. API performance testing ensures that APIs can handle expected traffic volumes without degradation, maintaining optimal performance for end users.
With the adoption of DevOps practices, automation and integration are essential for streamlining the performance testing process. Tools like Apache Maven enable the automation of performance tests as part of the continuous integration pipeline. Integrating performance testing tools with build servers like Jenkins or CI/CD platforms like GitHub ensures that performance tests are run automatically on code changes. Additionally, open-source automation frameworks like Selenium can be used to automate web application tests, complementing performance testing efforts and ensuring comprehensive test coverage across the application stack. Automation and integration help improve efficiency, consistency, and reliability in performance testing processes, enabling faster feedback and quicker resolution of performance issues.
Gatling is a powerful open-source tool designed for performance testing, renowned for its efficiency and flexibility. It allows developers to simulate real-world scenarios and assess the performance of their applications under various load conditions. Gatling uses a scenario-based approach, where users define test scenarios using a simple yet expressive DSL (Domain-Specific Language). These scenarios can simulate user interactions, such as browsing web pages, submitting forms, or making API calls.
During our devcase, we came into contact with Gatling for the first time. The task was to build an application that could retrieve data from an intelligent electricity meter and translate it into something understandable for people. So, data had to be stored and processed. We used a Lambda function to convert the raw data and store it in a Timestream database.
It was essential to perform performance testing to avoid the application crashing when more than five users simultaneously send the data from their meter. It was important to know approximately how many simultaneous users the system could handle before it failed. For this, we used Gatling.
The Gatling code is structured as follows:
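The full simulation from our devcase isn’t reproduced here, but a minimal sketch using Gatling’s Java DSL gives an idea of the structure. The base URL, the /meter endpoint, and the JSON payload below are hypothetical placeholders; the injection profile matches the one discussed further on.

```java
import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

public class MeterSimulation extends Simulation {

    // Protocol configuration: base URL and headers shared by all requests
    HttpProtocolBuilder httpProtocol = http
            .baseUrl("https://example.com/api") // hypothetical base URL
            .acceptHeader("application/json")
            .contentTypeHeader("application/json");

    // Scenario: every virtual user posts a single meter reading
    ScenarioBuilder scn = scenario("Send meter data")
            .exec(http("post meter reading")
                    .post("/meter") // hypothetical endpoint
                    .body(StringBody("{\"reading\": 42}")) // hypothetical payload
                    .check(status().is(200)));

    {
        // Injection profile: start 100 virtual users, spread evenly over 100 seconds
        setUp(scn.injectOpen(rampUsers(100).during(100)))
                .protocols(httpProtocol);
    }
}
```

The individual building blocks of such a simulation are described below.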
The httpProtocolBuilder is a fundamental component in Gatling scripts, responsible for defining the configuration of the HTTP protocol used in our performance tests. It allows us to set various parameters and characteristics related to HTTP communication, ensuring accurate simulation of real-world scenarios.
The configuration options provided by the HTTPProtocolBuilder
enable precise control over aspects such as:
By customizing these settings within the httpProtocolBuilder, we can create test scenarios that closely resemble the behavior of real users interacting with our application or API.
In Gatling, the ScenarioBuilder is crucial for crafting realistic user interactions with the web application under test. It acts as the blueprint for defining various user journeys or workflows, detailing the sequence of HTTP requests to make and the order in which they should be executed.
A ScenarioBuilder
typically includes the following key elements:
The ScenarioBuilder enables the creation of diverse and intricate scenarios that mirror the complexity of user interactions in production environments. Accurately replicating user behavior allows for assessing application performance under different usage scenarios and identifying any performance bottlenecks or issues.
The setUp method in Gatling lets you set up the test scenarios and configure the simulation before executing it. It accepts one or more scenarios and executes them. A scenario consists of 2 parts:
scn.injectOpen(…): This segment of the code is responsible for configuring the injection of user behavior into the scenario (scn). Here, we define the pattern or strategy for simulating user interactions within our test scenarios. For instance, we can specify how users are injected into the scenario over time, whether it’s a gradual ramp-up, a constant load, or a spike in user activity.
rampUsers(100).during(100): This part of the setup specifies the injection pattern for users. This example indicates that the number of virtual users will gradually increase from 0 to 100 over 100 seconds. In simpler terms, with each passing second, Gatling will introduce an additional virtual user into the scenario until reaching a total of 100 concurrent users. This gradual ramp-up helps simulate realistic user load patterns and allows us to observe how the system performs under increasing stress levels.
Following the completion of Gatling tests, a comprehensive report is generated, providing valuable insights into the performance of the tested application. Here are some key observations typically found in the report:
Response Times: The report includes data on various metrics related to response times, such as average, minimum, maximum, and 95th percentile response times. These metrics indicate how quickly the application responds to different types of requests. Lower response times are generally preferable as they signify faster application responsiveness.
Errors: The report documents information regarding any errors encountered during the test. These errors may include server errors, timeouts, or incorrect responses. Identifying and addressing these errors is crucial for improving application reliability and user experience.
Based on the results, we infer that the tested lambda function and database can handle requests from 100 concurrent users. However, we note that the average response time for these requests is around 1200 ms. Although the system functions, this response time is relatively long, indicating potential for optimization to improve overall system performance.
In software development, performance testing is the backbone, ensuring applications stand firm regarding reliability, scalability, and efficiency. While the pursuit of perfection may seem daunting, the quirks and challenges in the process remind us of the complex nature of modern software and users’ evolving expectations.
This analysis highlights the critical importance of performance testing. We observed that, although the application could handle the requests, processing them took a long time. This underscores the significance of optimizing performance. Addressing these issues and implementing necessary improvements can enhance application efficiency, scalability, and security, ultimately delivering a superior user experience.
The key takeaway from the FOSDEM 2024 lecture is crystal clear: although performance testing isn’t flawless, its value is immeasurable. It acts as a safety net, capturing issues early in the development cycle. By embracing these imperfections, developers pave the path for continuous improvement, making their applications more resilient and effective in the long run.
The power of Open Source is the power of the people. The people rule.
Terraform, an industry-leading infrastructure as code (IaC) tool, has been a cornerstone of cloud provisioning and infrastructure management since its inception in 2014. Over the years, it has fostered a vibrant community comprising thousands of users, contributors, vendors and an extensive ecosystem of open source modules and extensions.
August 10th 2023 marked a significant shift in Terraform’s trajectory when HashiCorp, its stewarding organization, changed the license from the Mozilla Public License (v2.0) to the Business Source License (v1.1). This abrupt change sent ripples across the Terraform community, sparking concerns about its future.
In response, the open source community initiated the OpenTofu project, which was adopted by the Linux Foundation on September 20th. OpenTofu, starting off as a fork from Terraform 1.5.6, aims to create a truly open source alternative to Terraform.
On January 10, 2024, OpenTofu achieved a pivotal milestone by releasing its first production-stable version, OpenTofu 1.6.0, promising a seamless migration for current Terraform users.
In this article we’ll delve into a comparative analysis of OpenTofu and Terraform. We’ll also discuss the impact of the license change on future development and outline steps required to migrate between both tools. We hope this can help you make an informed decision when choosing the right tool for your orchestration needs.
In terms of functionality, both tools are currently equivalent. The OpenTofu community has aimed to maintain feature parity with Terraform for the time being. As such, both currently offer the same large set of features making them well suited tools for IaC workloads.
Some small features were added with the release of OpenTofu 1.6.0 however, including:
The upcoming version 1.7 aims to introduce more community requested features not available in Terraform, including:
On Terraform’s side, version 1.6 implemented performance improvements to the ‘terraform test’ command, as well as changes to the S3 backend.
These changes signal the end of feature parity between both tools. As for compatibility, as both tools evolve separately, it remains to be seen whether the OpenTofu community will stay committed to incorporating Terraform’s development in its own tool.
OpenTofu and Terraform both use Hashicorp Configuration Language (HCL) to define their resources. HCL is declarative, describing an intended goal rather than the steps to reach that goal.
In terms of configuration options, both are also similar.
Both Terraform and OpenTofu are free to use, but their licenses differ.
As said, OpenTofu uses the Mozilla Public License (MPL 2.0), which allows developers to freely use, modify, and distribute software for both commercial and non-commercial use.
Terraform has now shifted to the Business Source License (BUSL 1.1). This also permits free use of source code, including commercial use, except when it’s used to provide an offering that competes with HashiCorp. An example of this could be running Terraform in a hosted way in CI/CD and offering this as a production service.
So while Terraform remains free to use in your projects, there can be an issue when incorporating Terraform in a service offering.
In terms of pricing, both tools are free to use, with some limitations for commercial use when it comes to Terraform.
Hashicorp does offer paid solutions, starting with Terraform Cloud, a centralized platform for managing Terraform code. Key features include version control integration, secure variable storage, remote state management, and policy enforcement, enabling organizations to efficiently maintain control over their cloud infrastructure.
Pricing for Terraform Cloud follows a Resources Under Management (RUM) model, where Terraform counts the number of objects it manages and calculates cost accordingly. The Standard Tier offering is billed at $0.00014 per hour per resource. For the Plus and Enterprise tiers, pricing is negotiated directly with Hashicorp sales.
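To put that rate in perspective: at the quoted Standard Tier price, an organization with 500 managed resources would pay roughly 500 × 730 hours × $0.00014 ≈ $51 per month, ignoring any included allowances.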
Take Terraform Enterprise as an example. This is a self-hosted solution offered as a private installation rather than a SaaS solution. Judging from the AWS Marketplace and Azure Marketplace offerings of Terraform Enterprise, pricing starts from $15,000/year. This includes just five workspaces, likely to be insufficient for a large enterprise.
Terraform currently holds the advantage when it comes to community support. Backed by Hashicorp and benefiting from years of community-wide support as an open-source project before the license change, it offers a large breadth of resources for users to take advantage of. The impact of the license change is starting to show, however, with community contributions to Terraform drying up almost completely in the last couple of months. Time will tell what this shift in community support will mean for users.
OpenTofu, on the other hand, is a growing tool with an expanding community. Although not as large as Terraform’s, its community is rapidly expanding as more users move to OpenTofu. It is backed by the Linux Foundation as well as companies such as Gruntwork, env0, Scalr and our very own Sopra Steria AS. This growing support will provide ever increasing resources for users of the tool in the future.
Terraform is proven as a mature, stable and extremely popular orchestration tool capable of handling enterprise-grade infrastructure deployments. OpenTofu is a new tool. However, as a fork of Terraform, it stems from the same code base and thus is expected to perform similarly in terms of stability.
To summarize, the following table captures the comparison between the two tools.
| Feature | OpenTofu | Terraform |
|---|---|---|
| Features | Similar to Terraform, with some improvements | Large feature set |
| Configuration | Declarative | Declarative |
| Ownership | Part of the Linux Foundation | Owned by Hashicorp |
| Open Source | Yes | Source-available |
| Licensing | Mozilla Public License (MPL 2.0), open-source | Business Source License (BUSL 1.1), source-available |
| Pricing | Free to use | Free to use, with restrictions on commercial use. Paid offerings available. |
| Community | Smaller, but expanding | Large, but support is shrinking due to sentiment |
| Stability | Similar to Terraform | Proven stability and robustness |
OpenTofu promises a seamless migration for Terraform users. As both OpenTofu 1.6 and Terraform 1.6 are compatible, the migration process is relatively straightforward. However, it is advised to have a disaster recovery plan in place as the migration still poses a non-trivial change to the architecture. The migration process is detailed in the OpenTofu Migration Guide.
In summary, the process looks like this:
1. Apply all changes with Terraform: `terraform apply`
2. Test if you can successfully execute the `tofu` command: `tofu --version`
3. Back up your state file. If you are using a local state file, you can simply make a copy of your `terraform.tfstate` file in your project directory. If you are using a remote backend such as an S3 bucket, make sure that you follow the backup procedures for the backend and that you exercise the restore procedure at least once. The `terraform state pull` command can be used to easily back up remote state to your local machine.
4. Initialize OpenTofu: `tofu init`
5. Inspect the plan: `tofu plan`
6. Test out a small change: `tofu apply`
And that’s it. Migration complete!
In order to roll back, it suffices to create a backup of the new state file and run `terraform init` followed by `terraform plan` to verify that everything is working correctly. Do note that a rollback is only possible between compatible versions (1.6.0).
Despite the emergence of OpenTofu, Terraform remains a stalwart in the IaC realm, with a vast user community ensuring its relevance. HashiCorp’s reputation, however, has taken a hit with the sudden shift in its licensing policy, and it remains to be seen how this strategic shift will affect users long-term.
For OpenTofu, at least for now, the future looks bright. The community’s rapid growth, the Linux Foundation’s stewardship, and the release of version 1.6.0 are signals of the community’s commitment to keeping orchestration truly open-source. New features like advanced testing, an enhanced S3 state backend, and the prospect of a host of other community-requested features being added show that OpenTofu is capable of delivering on its promise and evolving according to its users’ needs.
When you are a service provider looking to incorporate Terraform in one of your offerings, it can be worthwhile to look at OpenTofu as a truly open-source alternative to avoid licensing costs or legal trouble.
For regular users, not much has changed on the surface. Terraform is still a very robust and well-supported tool capable of handling your orchestration needs. OpenTofu does offer some benefits in ease of use and configuration, and as both tools are currently compatible and functionally equivalent, it may be interesting to try out OpenTofu and see what it has to offer.
The open-source community supporting OpenTofu will decide the future of the project. While the early signs are positive, it remains to be seen whether the tool can keep up in functionality and compatibility with Terraform, or even surpass it in some areas. At Jworks, we are certainly interested in its future development.
In a follow-up post, we will attempt to migrate some of our projects from Terraform to OpenTofu. There we will go through the migration process step by step and compare the performance of the two tools in a production setting.
Stay tuned for more!
What’s a common vulnerability in applications? Isn’t it the traditional username & password login? Perhaps it’s time to embrace FIDO & Passkeys for a more secure login method.
Every week, we encounter articles detailing stolen user login credentials, hacked databases with compromised usernames and passwords (Car maintenance company leaks 12.7k US phone numbers, emails and MD5 unsalted passwords, COMB: largest breach of all time leaked online with 3.2 billion records). Users often reuse the same password across multiple platforms, fail to rotate passwords, or use weak ones. Have you explored the website ‘have i been pwned?’? Take a look; you’ll be surprised at how many of your passwords have already been leaked.
It appears to be high time to address the vulnerabilities associated with traditional username and password logins and transition towards the use of FIDO.
FIDO (Fast IDentity Online) is a set of open authentication standards that aims to replace traditional passwords with more secure and convenient methods.
Key benefits of FIDO authentication:
To understand how passkeys work, let’s take a step back and delve into the realm of asymmetric cryptography. In asymmetric cryptography, we deal with a key pair consisting of a private key, which must remain private and undisclosed, and a public key, which can be shared openly.
Through asymmetric encryption, a message encrypted with the public key can only be deciphered by the possessor of the private key. Conversely, if a message is encrypted with the private key, it can be decrypted using the public key. This method is used in signatures, to validate the identity of the sender of the information.
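To make the signing idea concrete, here is a small, self-contained Java sketch using the standard JCA APIs (this is plain asymmetric cryptography, not the FIDO protocol itself): it generates a key pair, signs a challenge with the private key, and verifies the signature with the public key.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Generate an EC key pair: the private key stays on the device,
        // the public key can be shared openly with the server
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        generator.initialize(256);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] challenge = "random-server-challenge".getBytes();

        // Sign the challenge with the private key
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(challenge);
        byte[] signature = signer.sign();

        // Verify the signature with the public key, as the server would
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(challenge);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```

A FIDO authenticator performs the equivalent of the signing step on the device itself, so the private key never leaves it.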
So, how is this technology employed in FIDO? During registration, your device generates a unique passkey, which contains a public key and a private key. The public key is stored on the service provider’s server, while the private key remains securely on your device (smartphone, tablet, or a dedicated FIDO security key).
To register as a new user, you must send your public key to the server for validation. The server will return a challenge that you must sign using your private key. If the server can verify the signature using your public key, the registration is complete.
To authenticate, you send your username to the server, and the server answers with a challenge encrypted with your public key. Your device decrypts the challenge with your private key, solves it, and returns the response encrypted with your private key. The server then validates the response using your public key. If the challenge is correct, the server approves the login.
In this way we no longer store the user’s passwords, but only their public key which can be shared openly.
For a deeper understanding and visual examples, check out the video below.
In our Spring Boot application, we are going to handle the registration and login.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>com.yubico</groupId>
<artifactId>webauthn-server-core</artifactId>
<version>2.5.0</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
</dependency>
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>
We use Flyway to set up our database; we need to keep track of our users and their passkeys.
We need to save the following data for the user.
The username is the identifier specified by the user, and the user handle is the unique identifier generated by the system, used to streamline communication between the FIDO server and the authenticator.
The assertion contains information such as user identification, the authentication method used, cryptographic key information, and challenge to ensure the authenticity of the user. It is rebuilt by the server when a user attempts to log in, and it will expire after a short time.
The authenticator on the client uses this information to validate the server and to solve the challenge with the private key.
The public key JSON is generated on the server side and will be used during the registration process. It contains all the parameters and options necessary for generating a new public key for the respective user. The authenticator on the client will use these options to generate the new key.
We keep a boolean registration complete, indicating whether the registration has been successfully completed.
We need to save the following data for the passkey.
The key id is a unique identifier assigned to a registered key and is used to identify that specific key.
The public key is the user’s public key generated during the registration process with an authenticator.
The signature count refers to a counter indicating the number of signatures with a specific FIDO authenticator. With this value, the server can verify if the counter has been correctly updated compared to the previous value. If the value is not correctly updated, it may indicate the possibility of a replay or an attack.
Transport refers to the manner in which communication occurs between the FIDO client (for example, an authenticator like a USB security key) and the FIDO server during the authentication process. The transport mechanism determines how data is exchanged between the authenticator and the server.
The type refers to the type of authenticator used to perform FIDO-based authentication.
There are various transport mechanisms defined in FIDO, including:
Create a SQL file V1__init.sql
.
CREATE TABLE users
(
id CHAR(36) NOT NULL,
username CHARACTER VARYING(255) NOT NULL,
user_handle VARBINARY(255) NOT NULL,
assertion MEDIUMTEXT,
public_key_json MEDIUMTEXT,
registration_complete BIT NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE passkey
(
id CHAR(36) NOT NULL,
key_id VARBINARY(255) NOT NULL,
public_key VARBINARY(255) NOT NULL,
signature_count INTEGER NOT NULL,
transport CHARACTER VARYING(255) NOT NULL,
type CHARACTER VARYING(255) NOT NULL,
user_handle VARBINARY(255) NOT NULL,
PRIMARY KEY (id)
);
And configure the h2 database in the application.properties
file.
spring.datasource.url=jdbc:h2:mem:db;DB_CLOSE_DELAY=-1
spring.datasource.driverClassName=org.h2.Driver
Because Yubico’s dependency works with their own object ByteArray instead of byte[], we create a small class to easily convert between them.
Create a ByteArrayUtils
class.
@NoArgsConstructor(access = AccessLevel.PRIVATE)
public class ByteArrayUtils {
public static byte[] byteArrayToBytes(ByteArray byteArray) {
return byteArray.getBytes();
}
public static ByteArray bytesToByteArray(byte[] bytes) {
return new ByteArray(bytes);
}
}
Create a model User
and a model Passkey
.
@Getter
@Setter
@Entity
@Table(name = "users")
public class User {
@Id
@GeneratedValue(generator = "UUID")
@JdbcTypeCode(java.sql.Types.VARCHAR)
@Column(name = "id")
private UUID id;
private String username;
@Column(length = 1000000)
@Lob
private String publicKeyJson;
@Column(length = 1000000)
@Lob
private String assertion;
private byte[] userHandle;
private boolean registrationComplete;
}
@Getter
@Setter
@Entity
public class Passkey {
@Id
@GeneratedValue(generator = "UUID")
@JdbcTypeCode(java.sql.Types.VARCHAR)
@Column(name = "id")
private UUID id;
private byte[] userHandle;
private byte[] publicKey;
private byte[] keyId;
private String type;
private String transport;
private long signatureCount;
}
Create a repository UserRepository
, PasskeyRepository
and MyCredentialRepository
.
public interface UserRepository extends JpaRepository<User, UUID> {
Optional<User> findByUsername(String username);
Optional<User> findByUserHandle(byte[] userHandle);
}
public interface PasskeyRepository extends JpaRepository<Passkey, UUID> {
List<Passkey> findAllByUserHandle(byte[] userHandle);
List<Passkey> findAllByKeyId(byte[] keyId);
Optional<Passkey> findByUserHandleAndKeyId(byte[] userHandle, byte[] keyId);
}
In addition to the repositories that interact with the database, we also have an implementation of the CredentialRepository from Yubico.
This is used by RelyingParty to look up credentials, usernames and user handles from usernames, user handles and credential ids.
@Slf4j
@RequiredArgsConstructor
public class MyCredentialRepository implements CredentialRepository {
private final UserRepository userRepository;
private final PasskeyRepository passkeyRepository;
@Override
public Set<PublicKeyCredentialDescriptor> getCredentialIdsForUsername(String username) {
log.info("Get credentials id's for {}", username);
Optional<User> user = userRepository.findByUsername(username);
if (user.isPresent()) {
log.info("Username {} found", username);
Set<PublicKeyCredentialDescriptor> descriptors = new HashSet<>();
passkeyRepository.findAllByUserHandle(user.get().getUserHandle())
.forEach(descriptor -> {
log.info("Found credential for {}", username);
descriptors.add(PublicKeyCredentialDescriptor.builder()
.id(bytesToByteArray(descriptor.getKeyId()))
.transports(getTransports(descriptor.getTransport()))
.build());
});
return descriptors;
}
return Collections.emptySet();
}
@Override
public Optional<ByteArray> getUserHandleForUsername(String username) {
log.info("Get user handle for {}", username);
Optional<User> user = userRepository.findByUsername(username);
if (user.isPresent()) {
log.info("User handle found for {}", username);
return Optional.of(bytesToByteArray(user.get().getUserHandle()));
}
return Optional.empty();
}
@Override
public Optional<String> getUsernameForUserHandle(ByteArray userHandle) {
log.info("Get username for user handle");
Optional<User> user = userRepository.findByUserHandle(byteArrayToBytes(userHandle));
if (user.isPresent()) {
log.info("Username: {} found for user handle", user.get().getUsername());
return Optional.of(user.get().getUsername());
}
return Optional.empty();
}
@Override
public Optional<RegisteredCredential> lookup(ByteArray credentialId, ByteArray userHandle) {
log.info("Get key for credential id and user handle");
Optional<Passkey> key = passkeyRepository
.findByUserHandleAndKeyId(byteArrayToBytes(userHandle), byteArrayToBytes(credentialId));
if (key.isPresent()) {
log.info("Key found for credential id and user handle");
RegisteredCredential db = RegisteredCredential.builder()
.credentialId(bytesToByteArray(key.get().getKeyId()))
.userHandle(bytesToByteArray(key.get().getUserHandle()))
.publicKeyCose(bytesToByteArray(key.get().getPublicKey()))
.signatureCount(key.get().getSignatureCount())
.build();
return Optional.of(db);
}
return Optional.empty();
}
@Override
public Set<RegisteredCredential> lookupAll(ByteArray credentialId) {
log.info("Get keys for credential id");
List<Passkey> passkeys = passkeyRepository.findAllByKeyId(byteArrayToBytes(credentialId));
if (passkeys.isEmpty()) {
log.info("No keys found for credential id");
} else {
log.info("Keys found for credential id");
}
Set<RegisteredCredential> registeredCredentials = new HashSet<>();
passkeys.forEach(passkey -> {
RegisteredCredential db = RegisteredCredential.builder()
.credentialId(bytesToByteArray(passkey.getKeyId()))
.userHandle(bytesToByteArray(passkey.getUserHandle()))
.publicKeyCose(bytesToByteArray(passkey.getPublicKey()))
.signatureCount(passkey.getSignatureCount())
.build();
registeredCredentials.add(db);
});
return registeredCredentials;
}
private Set<AuthenticatorTransport> getTransports(String transport) {
Set<AuthenticatorTransport> transports = new HashSet<>();
String[] transportAsArray = transport.split(",");
Arrays.stream(transportAsArray).toList().forEach(t -> {
transports.add(AuthenticatorTransport.of(t));
});
return transports;
}
}
Create a configuration class ServerConfiguration
.
The term “Relying Party Identity” refers to the identification information of the relying party.
A “Relying Party” (RP) is an entity, typically a website or an online service, that relies on FIDO-based authentication to verify the identity of users. The relying party initiates and manages the FIDO authentication process to ensure secure and user-friendly login experiences.
The AuthenticatorSelectionCriteria is a set of criteria used to specify preferences for the characteristics of the authenticator that should be used during the credential creation process.

The AuthenticatorAttachment enumeration’s values describe authenticators’ attachment modalities:

- CROSS_PLATFORM: the passkey will be stored on another device (a hardware key, or a smartphone if you are working on a computer).
- PLATFORM: you need to register with a built-in biometric (fingerprint, Face ID).

The ResidentKeyRequirement enumeration’s values describe the Relying Party’s requirements for client-side discoverable credentials:

- DISCOURAGED: the client and authenticator will try to create a server-side credential if possible, and a discoverable credential otherwise.
- PREFERRED: the client and authenticator will try to create a discoverable credential if possible, and a server-side credential otherwise.
- REQUIRED: the client and authenticator will try to create a discoverable credential, and fail the registration if that is not possible.

With the UserVerificationRequirement, a WebAuthn Relying Party may require user verification for some of its operations:

- DISCOURAGED: the Relying Party does not want user verification.
- PREFERRED: the Relying Party prefers user verification for the operation if possible, but will not fail if verification isn’t available.
- REQUIRED: the Relying Party requires user verification for the operation and will fail if verification isn’t available.

The PublicKeyCredentialParameters is a data structure used to specify the cryptographic algorithms and key types that a relying party (or website) is willing to accept during the credential creation process.
@RequiredArgsConstructor
@Configuration
public class ServerConfiguration {
private final UserRepository userRepository;
private final PasskeyRepository passkeyRepository;
@Bean
public RelyingPartyIdentity relyingPartyIdentity() {
RelyingPartyIdentity rpIdentity = RelyingPartyIdentity.builder()
.id("localhost")
.name("WebAuthn - Nicholas Meyers")
.build();
return rpIdentity;
}
@Bean
public RelyingParty relyingParty() {
RelyingParty rp = RelyingParty.builder()
.identity(relyingPartyIdentity())
.credentialRepository(new MyCredentialRepository(userRepository, passkeyRepository))
.allowOriginPort(true)
.build();
return rp;
}
@Bean
public AuthenticatorSelectionCriteria authenticatorSelectionCriteria() {
AuthenticatorAttachment authenticatorAttachment = AuthenticatorAttachment.CROSS_PLATFORM;
ResidentKeyRequirement residentKeyRequirement = ResidentKeyRequirement.PREFERRED;
UserVerificationRequirement userVerificationRequirement = UserVerificationRequirement.PREFERRED;
return AuthenticatorSelectionCriteria.builder()
.authenticatorAttachment(authenticatorAttachment)
.residentKey(residentKeyRequirement)
.userVerification(userVerificationRequirement)
.build();
}
@Bean
public List<PublicKeyCredentialParameters> publicKeyCredentialParameters() {
List<PublicKeyCredentialParameters> pubKeyCredParams = new ArrayList<>();
PublicKeyCredentialParameters param1 = PublicKeyCredentialParameters.ES256;
PublicKeyCredentialParameters param2 = PublicKeyCredentialParameters.RS256;
pubKeyCredParams.add(param1);
pubKeyCredParams.add(param2);
return pubKeyCredParams;
}
}
The ClientExtensionOutputs
refers to the output of client extensions during the WebAuthn process,
such as creating a public key credential or authentication. This output may contain information specific to the used extensions.
public class CustomClientExtensionOutput implements ClientExtensionOutputs {
@Override
public Set<String> getExtensionIds() {
return Collections.emptySet();
}
}
public record StartRegisterCredentialResponseResource(ByteArray id, String type, String[] transports) {
}
public record StartRegisterRequestResource(String username) {
}
public record StartRegisterResponseResource(String challenge, RelyingPartyIdentity rp, UserIdentity user,
List<PublicKeyCredentialParameters> pubKeyCredParams,
long timeout, String attestation,
List<StartRegisterCredentialResponseResource> excludeCredentials,
AuthenticatorSelectionCriteria authenticatorSelection) {
}
public record VerifyClientRequestResource(ByteArray attestationObject, ByteArray clientDataJSON,
List<String> transports) {
}
public record VerifyAttestationRequestResource(ByteArray id, ByteArray rawId,
VerifyClientRequestResource response, String type,
Object clientExtensionResults, String authenticatorAttachment) {
}
public record VerifyRegistrationRequestResource(String username, VerifyAttestationRequestResource response) {
}
public record VerifyRegistrationResponseResource(boolean verified) {
}
Create a service StartRegistrationService
.
In the start registration service, we prepare everything for the registration of a new user.
We create a user, challenge, registration options, and send them back in the response with the information of our application.
@RequiredArgsConstructor
@Service
public class StartRegistrationService {
private final UserRepository userRepository;
private final AuthenticatorSelectionCriteria authenticatorSelection;
private final RelyingPartyIdentity relyingPartyIdentity;
private final RelyingParty relyingParty;
private final List<PublicKeyCredentialParameters> publicKeyCredentialParameters;
private final Random random = new Random();
public StartRegisterResponseResource startRegistration(StartRegisterRequestResource resource) throws JsonProcessingException {
UUID userId = UUID.randomUUID();
byte[] userHandle = new byte[36];
random.nextBytes(userHandle);
UserIdentity userIdentity = createUserIdentity(resource.username(), bytesToByteArray(userHandle));
StartRegistrationOptions startRegistrationOptions = createStartRegistrationOptions(userIdentity);
PublicKeyCredentialCreationOptions pbOptions = relyingParty.startRegistration(startRegistrationOptions);
User user = createUser(userId, resource.username(), pbOptions.toJson(), userHandle);
userRepository.save(user);
return new StartRegisterResponseResource(pbOptions.getChallenge().getBase64Url(), relyingPartyIdentity,
userIdentity, publicKeyCredentialParameters, 60000, "none", Collections.emptyList(), authenticatorSelection);
}
private UserIdentity createUserIdentity(String username, ByteArray userHandle) {
return UserIdentity.builder()
.name(username)
.displayName(username)
.id(userHandle)
.build();
}
private StartRegistrationOptions createStartRegistrationOptions(UserIdentity userIdentity) {
return StartRegistrationOptions.builder()
.user(userIdentity)
.timeout(60000)
.authenticatorSelection(authenticatorSelection)
.build();
}
private User createUser(UUID userId, String username, String publicKey, byte[] userHandle) {
User user = new User();
user.setId(userId);
user.setUsername(username);
user.setPublicKeyJson(publicKey);
user.setUserHandle(userHandle);
user.setRegistrationComplete(false);
return user;
}
}
Create a service VerifyRegistrationService
.
In the verify registration service, we will verify the registration.
The frontend application has processed the response from the start registration service and sends the result back to the verify service.
If the verification is successful, the user is registered.
@Slf4j
@RequiredArgsConstructor
@Service
public class VerifyRegistrationService {
private final RelyingParty relyingParty;
private final UserRepository userRepository;
private final PasskeyRepository passkeyRepository;
public VerifyRegistrationResponseResource verify(VerifyRegistrationRequestResource resource) throws Base64UrlException, IOException {
Optional<User> user = userRepository.findByUsername(resource.username());
if (user.isEmpty()) {
throw new RuntimeException("User not found");
}
AuthenticatorAttestationResponse authenticatorAttestationResponse
= createAuthenticatorAttestationResponse(resource.response().response());
PublicKeyCredentialCreationOptions publicKeyCredentials = createPublicKeyCredentialCreationOptions(user.get().getPublicKeyJson());
PublicKeyCredential publicKeyCredential = createPublicKeyCredential(resource.response().id(), authenticatorAttestationResponse);
FinishRegistrationOptions finishRegistrationOptions = createFinishRegistrationOptions(publicKeyCredentials, publicKeyCredential);
RegistrationResult registrationResult;
try {
registrationResult = relyingParty.finishRegistration(finishRegistrationOptions);
} catch (RegistrationFailedException e) {
user.get().setPublicKeyJson(null);
user.get().setRegistrationComplete(false);
userRepository.save(user.get());
return new VerifyRegistrationResponseResource(false);
}
user.get().setPublicKeyJson(null);
user.get().setRegistrationComplete(true);
userRepository.save(user.get());
byte[] publicKey = byteArrayToBytes(registrationResult.getPublicKeyCose());
byte[] keyId = byteArrayToBytes(registrationResult.getKeyId().getId());
String type = registrationResult.getKeyId().getType().getId();
String transport = "";
long signatureCount = registrationResult.getSignatureCount();
Optional<SortedSet<AuthenticatorTransport>> transports = registrationResult.getKeyId().getTransports();
if (transports.isPresent()) {
List<String> transportList = transports.get().stream().map(AuthenticatorTransport::getId).toList();
transport = String.join(",", transportList);
}
Passkey passkey = new Passkey();
passkey.setId(UUID.randomUUID());
passkey.setUserHandle(user.get().getUserHandle());
passkey.setPublicKey(publicKey);
passkey.setKeyId(keyId);
passkey.setType(type);
passkey.setTransport(transport);
passkey.setSignatureCount(signatureCount);
log.info("Save passkey for transports {}", transport);
passkeyRepository.save(passkey);
return new VerifyRegistrationResponseResource(true);
}
private AuthenticatorAttestationResponse createAuthenticatorAttestationResponse(VerifyClientRequestResource client) throws Base64UrlException, IOException {
Set<AuthenticatorTransport> transports = new HashSet<>();
client.transports().forEach(transport -> {
transports.add(AuthenticatorTransport.of(transport));
});
return AuthenticatorAttestationResponse.builder()
.attestationObject(client.attestationObject())
.clientDataJSON(client.clientDataJSON())
.transports(transports)
.build();
}
private PublicKeyCredentialCreationOptions createPublicKeyCredentialCreationOptions(String json) throws JsonProcessingException {
return PublicKeyCredentialCreationOptions.fromJson(json);
}
private PublicKeyCredential createPublicKeyCredential(ByteArray id, AuthenticatorAttestationResponse authenticator) {
CustomClientExtensionOutput extensionOutput = new CustomClientExtensionOutput();
return PublicKeyCredential.builder()
.id(id)
.response(authenticator)
.clientExtensionResults(extensionOutput)
.build();
}
private FinishRegistrationOptions createFinishRegistrationOptions(PublicKeyCredentialCreationOptions publicKey, PublicKeyCredential credential) {
return FinishRegistrationOptions.builder()
.request(publicKey)
.response(credential)
.build();
}
}
Create a controller RegistrationController
with 2 endpoints, the start registration
and verify registration
.
@RequiredArgsConstructor
@RestController
@RequestMapping("/register")
@CrossOrigin("http://localhost:4200")
public class RegistrationController {
private final StartRegistrationService startRegistrationService;
private final VerifyRegistrationService verifyRegistrationService;
@PostMapping("/start")
public ResponseEntity<StartRegisterResponseResource> startRegistration(@RequestBody StartRegisterRequestResource resource) throws JsonProcessingException {
return ResponseEntity.ok(startRegistrationService.startRegistration(resource));
}
@PostMapping("/verify")
public ResponseEntity<VerifyRegistrationResponseResource> verifyRegistration(@RequestBody VerifyRegistrationRequestResource resource) throws Base64UrlException, IOException {
return ResponseEntity.ok(verifyRegistrationService.verify(resource));
}
}
public record AllowCredentialsResponseResource(ByteArray id, String type,
Set<AuthenticatorTransport> transports) {
}
public record AssertionRequestResource(ByteArray id, ByteArray rawId, AssertionResource response,
String type, Object clientExtensionResults,
String authenticatorAttachment) {
}
public record AssertionResource(ByteArray authenticatorData, ByteArray clientDataJSON, ByteArray signature) {
}
public record LoginRequestResource(String username) {
}
public record LoginResponseResource(String challenge, List<AllowCredentialsResponseResource> allowCredentials,
int timeout, String userVerification, String rpId) {
}
public record VerifyLoginRequestResource(String username, AssertionRequestResource response) {
}
public record VerifyLoginResponseResource(boolean verified) {
}
Create a service StartLoginService
.
In the start login service, we will check if the user is registered and prepare everything for the login.
We create a challenge and send it back in the response along with the information of our application and the details of
the key used in the challenge.
@Slf4j
@RequiredArgsConstructor
@Service
public class StartLoginService {
private final RelyingParty relyingParty;
private final UserRepository userRepository;
public LoginResponseResource startLogin(LoginRequestResource resource) throws JsonProcessingException {
User user = getUser(resource.username());
StartAssertionOptions assertionOptions = createStartAssertionOptions(user.getUsername());
AssertionRequest assertionRequest = relyingParty.startAssertion(assertionOptions);
List<AllowCredentialsResponseResource> credentials = getAllowCredentials(assertionRequest);
user.setAssertion(assertionRequest.toJson());
userRepository.save(user);
return new LoginResponseResource(assertionRequest.getPublicKeyCredentialRequestOptions().getChallenge().getBase64Url(),
credentials, 60000, "preferred", relyingParty.getIdentity().getId());
}
private User getUser(String username) {
Optional<User> user = userRepository.findByUsername(username);
if (user.isEmpty() || !user.get().isRegistrationComplete()) {
if (user.isEmpty()) {
log.error("User with username {} not found", username);
} else {
log.error("Registration for username {} is not complete", username);
}
throw new RuntimeException(String.format("User with username %s not registered", username));
}
return user.get();
}
private StartAssertionOptions createStartAssertionOptions(String username) {
return StartAssertionOptions.builder()
.timeout(60000)
.username(username)
.userVerification(UserVerificationRequirement.PREFERRED)
.build();
}
private List<AllowCredentialsResponseResource> getAllowCredentials(AssertionRequest assertionRequest) {
List<AllowCredentialsResponseResource> allowCredentialsList = new ArrayList<>();
Optional<List<PublicKeyCredentialDescriptor>> keys = assertionRequest.getPublicKeyCredentialRequestOptions().getAllowCredentials();
if (keys.isPresent()) {
keys.get().forEach(key -> {
if (key.getTransports().isPresent()) {
log.info("Transports found");
Set<AuthenticatorTransport> transports = key.getTransports().get();
allowCredentialsList.add(new AllowCredentialsResponseResource(key.getId(), key.getType().getId(), transports));
} else {
log.error("Transports not found");
}
});
}
return allowCredentialsList;
}
}
Create a service VerifyLoginService
.
In the verify login service, we will verify the login. The frontend application has processed the response from the start login service
and sends the result back to the verify service. If the verification is successful, the user is logged in.
@Slf4j
@RequiredArgsConstructor
@Service
public class VerifyLoginService {
private final RelyingParty relyingParty;
private final UserRepository userRepository;
public VerifyLoginResponseResource verify(VerifyLoginRequestResource resource) throws IOException, Base64UrlException {
User user = getUser(resource.username());
AssertionRequest assertionRequest = createAssertionRequest(user.getAssertion());
AuthenticatorAssertionResponse assertionResponse = createAuthenticatorAssertionResponse(resource);
PublicKeyCredential publicKeyCredential = createPublicKeyCredential(resource, assertionResponse);
FinishAssertionOptions finishAssertionOptions = createFinishAssertionOptions(assertionRequest, publicKeyCredential);
AssertionResult result;
try {
result = relyingParty.finishAssertion(finishAssertionOptions);
} catch (AssertionFailedException e) {
user.setAssertion(null);
userRepository.save(user);
return new VerifyLoginResponseResource(false);
}
user.setAssertion(null);
userRepository.save(user);
if (result.isSuccess()) {
return new VerifyLoginResponseResource(true);
} else {
return new VerifyLoginResponseResource(false);
}
}
private User getUser(String username) {
Optional<User> user = userRepository.findByUsername(username);
if (user.isEmpty() || !user.get().isRegistrationComplete()) {
if (user.isEmpty()) {
log.error("User with username {} not found", username);
} else {
log.error("Registration for username {} is not complete", username);
}
throw new RuntimeException(String.format("User with username %s not registered", username));
}
return user.get();
}
private AssertionRequest createAssertionRequest(String json) throws JsonProcessingException {
return AssertionRequest.fromJson(json);
}
private AuthenticatorAssertionResponse createAuthenticatorAssertionResponse(VerifyLoginRequestResource resource) throws Base64UrlException, IOException {
return AuthenticatorAssertionResponse.builder()
.authenticatorData(resource.response().response().authenticatorData())
.clientDataJSON(resource.response().response().clientDataJSON())
.signature(resource.response().response().signature())
.build();
}
private PublicKeyCredential createPublicKeyCredential(VerifyLoginRequestResource resource, AuthenticatorAssertionResponse response) {
CustomClientExtensionOutput customClientExtensionOutput = new CustomClientExtensionOutput();
return PublicKeyCredential.builder()
.id(resource.response().id())
.response(response)
.clientExtensionResults(customClientExtensionOutput)
.build();
}
private FinishAssertionOptions createFinishAssertionOptions(AssertionRequest assertionRequest, PublicKeyCredential publicKeyCredential) {
return FinishAssertionOptions.builder()
.request(assertionRequest)
.response(publicKeyCredential)
.build();
}
}
Create a controller LoginController
with 2 endpoints, the start login
and verify login
.
@RequiredArgsConstructor
@RestController
@RequestMapping("/login")
@CrossOrigin("http://localhost:4200")
public class LoginController {
private final StartLoginService startLoginService;
private final VerifyLoginService verifyLoginService;
@PostMapping("/start")
public ResponseEntity<LoginResponseResource> startLogin(@RequestBody LoginRequestResource resource) throws JsonProcessingException {
return ResponseEntity.ok(startLoginService.startLogin(resource));
}
@PostMapping("/verify")
public ResponseEntity<VerifyLoginResponseResource> verifyLogin(@RequestBody VerifyLoginRequestResource resource) throws Base64UrlException, IOException {
return ResponseEntity.ok(verifyLoginService.verify(resource));
}
}
Create a new Angular project with routing enabled.
ng new frontend --routing true --style css
Install the needed packages.
The @simplewebauthn/browser
package in Angular provides a convenient and simplified way to integrate WebAuthn (Web Authentication) functionality into your Angular applications.
By using this package, you can streamline the implementation of WebAuthn features such as secure and passwordless authentication,
enhancing the overall security of your application. This package abstracts the complexities of the WebAuthn API,
making it easier for developers to incorporate modern authentication methods without delving into intricate details,
saving time and effort in the development process.
The @ng-bootstrap/ng-bootstrap
package in Angular provides a set of native Angular directives for Bootstrap components.
npm i @simplewebauthn/browser
ng add @ng-bootstrap/ng-bootstrap
Complete the app modules in the app.module.ts
file.
Add the FormsModule
and HttpClientModule
.
The FormsModule
is a module that provides support for two-way data binding through the ngModel directive.
This means that changes to the model in the component are automatically reflected in the associated view, and vice versa.
The HttpClientModule
is a module that provides the HttpClient service, which is a powerful and feature-rich HTTP client for making requests to a server.
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
AppRoutingModule,
NgbModule,
FormsModule,
HttpClientModule,
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
We are going to create 2 services, the RegisterService
and LoginService
.
ng generate service service/register
ng generate service service/login
Paste the following code in the register.service.ts
file.
In the registration service, we will be constructing a client that can send requests to our backend service for registration.
export class RegisterService {
constructor(private http: HttpClient) { }
register(username: string): Observable<PublicKeyCredentialCreationOptionsJSON> {
const httpOptions = {
headers: new HttpHeaders({
'Accept': 'application/json',
'Content-Type': 'application/json'
})
};
const body = {
'username': username
};
return this.http.post<PublicKeyCredentialCreationOptionsJSON>('http://localhost:8080/register/start', body, httpOptions).pipe();
}
verify(username: any, attestationResponse: any) {
const httpOptions = {
headers: new HttpHeaders({
'Accept': 'application/json',
'Content-Type': 'application/json'
})
};
const body = {
'username': username,
'response': attestationResponse
};
return this.http.post('http://localhost:8080/register/verify', body, httpOptions).pipe();
}
}
Paste the following code in the login.service.ts
file.
In the login service, we will be constructing a client that can send requests to our backend service for authentication.
import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs';
// Note: the PublicKeyCredentialRequestOptionsJSON type ships with SimpleWebAuthn's
// typings package (called @simplewebauthn/typescript-types in older releases,
// @simplewebauthn/types in newer ones).
import { PublicKeyCredentialRequestOptionsJSON } from '@simplewebauthn/typescript-types';

@Injectable({ providedIn: 'root' })
export class LoginService {
constructor(private http: HttpClient) { }
login(username: string): Observable<PublicKeyCredentialRequestOptionsJSON> {
const httpOptions = {
headers: new HttpHeaders({
'Accept': 'application/json',
'Content-Type': 'application/json'
})
};
const body = {
'username': username
};
return this.http.post<PublicKeyCredentialRequestOptionsJSON>('http://localhost:8080/login/start', body, httpOptions).pipe();
}
verify(username: string, assertionResponse: any) {
const httpOptions = {
headers: new HttpHeaders({
'Accept': 'application/json',
'Content-Type': 'application/json'
})
};
const body = {
'username': username,
'response': assertionResponse
};
return this.http.post('http://localhost:8080/login/verify', body, httpOptions).pipe();
}
}
ng generate component component/home -s
ng generate component component/user -s
Delete the existing content in the app.component.html
file and fill in the following code.
In Angular, <router-outlet></router-outlet>
is a directive that plays a crucial role in managing the routing of your application.
<router-outlet></router-outlet>
Complete the routing configuration in the app-routing.module.ts file.
const routes: Routes = [
{path: 'home', component: HomeComponent},
{path: 'user', component: UserComponent},
{path: '', component: HomeComponent},
{path: '**', redirectTo: 'home'}
];
Paste the following code in the home.component.ts
file.
In the register
function, we send a request to our backend service with a username for which we want to create a new account.
If everything is okay, the response will contain the necessary result to initiate our registration with WebAuthn.
WebAuthn will then display the appropriate screens to create the passkeys.
The result of the WebAuthn action is then sent back to the backend service to complete the registration.
In the login
function, we send a request to our backend service with a username for which we want to initiate a login.
If everything is okay, the response will contain the necessary result to start the login process with WebAuthn.
WebAuthn will then display the screens to solve the challenge with the previously created passkey.
The result of the WebAuthn action is then sent back to the backend service to complete the authentication.
import {Component} from '@angular/core';
import {LoginService} from "../../service/login.service";
import {RegisterService} from "../../service/register.service";
import {startAuthentication, startRegistration} from "@simplewebauthn/browser";
import {Router} from "@angular/router";
@Component({
selector: 'app-home',
templateUrl: './home.component.html',
styles: [
]
})
export class HomeComponent {
username: string;
constructor(private registerService: RegisterService, private loginService: LoginService, private router: Router) {
this.username = '';
}
register() {
this.registerService.register(this.username).subscribe(result => {
startRegistration(result).then(result => {
this.registerService.verify(this.username, result).subscribe(result => {
type ObjectKey = keyof typeof result;
const verifiedVar = 'verified' as ObjectKey;
const verified = result[verifiedVar].toString();
if (verified === 'true') {
alert('Registration success.');
} else {
alert('Registration failed.');
}
});
}, error => {
console.error(error);
alert('Registration failed.');
});
});
}
login() {
this.loginService.login(this.username).subscribe(result => {
startAuthentication(result).then(result => {
this.loginService.verify(this.username, result).subscribe(result => {
type ObjectKey = keyof typeof result;
const verifiedVar = 'verified' as ObjectKey;
const verified = result[verifiedVar].toString();
if (verified === 'false') {
alert('Login failed.');
} else {
this.router.navigateByUrl('/user');
}
});
}, error => {
console.error(error);
alert('Login failed.');
});
});
}
}
Paste the following code in the home.component.html
file.
On this page, we display an input text field for the username, a button for registration, and a button for login.
<div class="container mt-5">
<div class="row">
<div class="col-sm-12 col-md-8 info-text mt-5">
<p>Experience a new era of online security with FIDO Passwordless Authentication. Say goodbye to the hassle of remembering and managing passwords.
Our cutting-edge technology ensures a seamless and secure login process for users, making online interactions effortless and worry-free.</p>
</div>
<div class="col-sm-12 col-md-4">
<form>
<div class="mb-3">
<label for="username" class="form-label">Username</label>
<input type="text" class="form-control" id="username" aria-describedby="usernameHelp"
[(ngModel)]="username" [ngModelOptions]="{standalone: true}">
<div id="usernameHelp" class="form-text">We'll never share your info with anyone else.</div>
</div>
<button type="submit" class="btn btn-primary" (click)="register()">Register</button>
<button type="submit" class="btn btn-primary ms-2" (click)="login()">Login</button>
</form>
</div>
</div>
</div>
Paste the following code in the user.component.html
file.
We show this welcome screen for the user when the login is successful.
<p>Welcome</p>
Embracing FIDO and passkeys represents a leap forward in security and user-friendliness. By shifting away from traditional password-based authentication methods, users benefit from heightened security measures while enjoying a more seamless experience. The combination of FIDO’s robust security protocols and the convenience of passkeys not only enhances protection against cyber threats but also simplifies the user authentication process, contributing to a more secure and user-centric online environment. Embracing these advancements aligns with the evolving landscape of digital security and empowers users with a more efficient and trustworthy means of safeguarding their online identities. It’s a win-win that positions FIDO and passkeys as a compelling choice for the future of secure and user-friendly authentication. Curious which companies have already switched to passwordless authentication? Check out this website.
If you’re interested in exploring the implementation details, you can access the frontend and backend code on GitHub.
If you want to start the application locally, you can start the Docker containers and visit the project locally.
docker run -p 8080:8080 nicholas95/passwordless-backend:v1
docker run -p 4200:80 nicholas95/passwordless-frontend:v1
Note: I encountered issues with local development when testing in a browser with the Bitwarden extension enabled.
During Ordina’s Young Professionals Program of 2022, we worked on this spectacular dev case. As a software developer, you must integrate different technologies to build a functional and efficient system. This dev case required us to use various tools and technologies, from Spring Boot to AWS and Helm. This blog post will share our experience working with these technologies (Spring Boot, Terraform, Google Assistant, GitHub Actions, etc.) to build a scalable and reliable system that meets the project’s requirements. In addition, we will discuss the main problems we encountered during the dev case and how we overcame them.
Our mission? Develop two distinct applications harnessing the prowess of Tesla Cars.
Application One: Tesla Rock-Paper-Scissors Duel
Imagine two Teslas engaging in a classic game of Rock-Paper-Scissors. This was what our first application sought to achieve. Here’s how the game was mapped:
Two dedicated users, each connected to their own Tesla, would choose an action equivalent to a Tesla command. Beyond the sheer fun of it, we envisioned this application as a hit at conferences, designed to turn heads and pique curiosity.
Application Two: Voice-Controlled Tesla
Our second application was an integration with Google Home. The aim? To control a Tesla using simple voice commands. Whether it was initiating or halting charging, or inquiring about the battery’s status, we felt the thrill of integrating with a third-party tool. This app was not just about functionality but also about the joy of innovation.
To kick things off, both car owners need to sign in using their Tesla accounts and submit their tokens to our application, which subsequently stores them securely in a vault.
The stage is set. Players pick their moves, which the system then translates into respective Tesla commands.
The chosen commands spring into action on the Teslas.
Users connect our Tesla Application to their Google Home accounts.
This section is about the steps we’ve taken to accomplish our results and some challenges we faced.
Before we started the project, we researched the mobile Tesla App and the corresponding API. For this, we made use of unofficial Tesla API Documentation.
As direct requests to the authentic Tesla API are not always feasible, we opted to create a Stub API that emulates the functionality of the actual API but with dummy data. This enabled us to locally test all components, systematically implement features, and receive better error messages.
We then developed a Proxy API that can be used by any of our applications that interact with Teslas. The proxy is responsible for manipulating all incoming requests in a way acceptable to Tesla. This includes the addition of tokens, object alterations, parameter additions, etc.
To simplify our development process, we worked with three separate profiles within the proxy: one that interacts directly with the authentic Tesla API, another with our own Stub API, and a final profile that returns dummy data without an external service. This way, we could switch easily and validate our code quickly.
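To make this concrete, here is a minimal sketch of how such profile-based wiring could look in Spring Boot. The interface, class names, URLs and port below are illustrative assumptions rather than the project’s actual code; the point is that the rest of the proxy depends only on the interface, and the active Spring profile (for example spring.profiles.active=stub) decides which implementation is wired in.

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Illustrative sketch only: the interface name, endpoints and port are assumptions,
// not the actual project code.
interface VehicleStateClient {
    String vehicleState(String vehicleId);
}

@Component
@Profile("tesla") // profile that talks to the authentic Tesla API
class TeslaVehicleStateClient implements VehicleStateClient {
    private final RestTemplate rest = new RestTemplate();

    @Override
    public String vehicleState(String vehicleId) {
        return rest.getForObject(
                "https://owner-api.teslamotors.com/api/1/vehicles/{id}/vehicle_data",
                String.class, vehicleId);
    }
}

@Component
@Profile("stub") // profile that talks to our own Stub API
class StubVehicleStateClient implements VehicleStateClient {
    private final RestTemplate rest = new RestTemplate();

    @Override
    public String vehicleState(String vehicleId) {
        return rest.getForObject(
                "http://localhost:9090/api/1/vehicles/{id}/vehicle_data",
                String.class, vehicleId);
    }
}

@Component
@Profile("dummy") // profile that returns canned data without calling any external service
class DummyVehicleStateClient implements VehicleStateClient {
    @Override
    public String vehicleState(String vehicleId) {
        return "{\"state\":\"online\",\"battery_level\":80}";
    }
}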
A Tesla vehicle goes into a sleeping state at random moments, and it can take several seconds before the vehicle is online and ready to receive commands again. To wake the vehicle, we need to call a wake-up endpoint, so we created a retry mechanism with a timeout.
Diagram Explanation: When the vehicle is in sleeping mode, the Tesla API returns a 408 error while trying to execute a command. Our proxy will catch this error, send a wake-up request, and try to execute the command again after 10 seconds. This will be done a maximum of five times.
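As an illustration of this flow, a simplified retry wrapper in the Spring-based proxy could look like the sketch below. The endpoints follow the unofficial Tesla API documentation, and the class and method names are assumptions, not the project’s actual code.

import org.springframework.stereotype.Component;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

// Illustrative sketch of the wake-up/retry flow described in the diagram.
@Component
class WakeUpRetryingCommandExecutor {

    private static final int MAX_ATTEMPTS = 5;
    private static final long WAIT_BETWEEN_ATTEMPTS_MS = 10_000;

    private final RestTemplate rest = new RestTemplate();

    String execute(String vehicleId, String command) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return rest.postForObject(
                        "https://owner-api.teslamotors.com/api/1/vehicles/{id}/command/{command}",
                        null, String.class, vehicleId, command);
            } catch (HttpClientErrorException e) {
                if (e.getStatusCode().value() != 408) {
                    throw e; // only a 408 means the vehicle is asleep
                }
                // The vehicle is asleep: send a wake-up request and retry after 10 seconds.
                rest.postForObject(
                        "https://owner-api.teslamotors.com/api/1/vehicles/{id}/wake_up",
                        null, String.class, vehicleId);
                Thread.sleep(WAIT_BETWEEN_ATTEMPTS_MS);
            }
        }
        throw new IllegalStateException("Vehicle did not wake up after " + MAX_ATTEMPTS + " attempts");
    }
}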
Another challenge we faced developing the Proxy API was authentication. Because someone must trust us with their Tesla vehicle (and tokens), we aimed to use an identity management solution that we have complete control over, like AWS Cognito, to link users to a Tesla account and manage access. This way, there are no Tesla tokens on the client side, and we can manage the tokens in a secret vault on the server side.
Tesla tokens expire every eight hours, so we need to refresh them automatically to keep using the vehicles in our applications. We managed to do this on AWS using the Secrets Manager and Lambda services.
Diagram Explanation: The owner of the Tesla must log in with their credentials and send their tokens directly to our Proxy application, which will save them in AWS Secrets Manager. A Lambda is executed every seven hours to refresh the access token.
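For illustration, a stripped-down version of such a refresh Lambda could look like the sketch below. The secret names, the Tesla token endpoint and the request body are assumptions, not the project’s actual implementation; in our setup the function is triggered on a seven-hour schedule.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.PutSecretValueRequest;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of a scheduled refresh Lambda. Secret names, the token
// endpoint and the request body are assumptions, not the project's actual code.
public class RefreshTeslaTokenHandler implements RequestHandler<Object, String> {

    private static final String REFRESH_TOKEN_SECRET = "tesla/refresh-token"; // hypothetical secret names
    private static final String ACCESS_TOKEN_SECRET = "tesla/access-token";

    private final SecretsManagerClient secretsManager = SecretsManagerClient.create();
    private final HttpClient httpClient = HttpClient.newHttpClient();

    @Override
    public String handleRequest(Object event, Context context) {
        // 1. Read the current refresh token from Secrets Manager.
        String refreshToken = secretsManager.getSecretValue(
                GetSecretValueRequest.builder().secretId(REFRESH_TOKEN_SECRET).build()).secretString();

        // 2. Exchange it for a fresh token pair at Tesla's token endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.tesla.com/oauth2/v3/token"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"grant_type\":\"refresh_token\",\"refresh_token\":\"" + refreshToken + "\"}"))
                .build();
        String body;
        try {
            body = httpClient.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException("Refreshing the Tesla token failed", e);
        }

        // 3. Store the new tokens as new secret versions so the proxy always reads fresh values.
        secretsManager.putSecretValue(PutSecretValueRequest.builder()
                .secretId(ACCESS_TOKEN_SECRET).secretString(field(body, "access_token")).build());
        secretsManager.putSecretValue(PutSecretValueRequest.builder()
                .secretId(REFRESH_TOKEN_SECRET).secretString(field(body, "refresh_token")).build());
        return "refreshed";
    }

    private static String field(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        if (!m.find()) {
            throw new IllegalStateException("Field not found in token response: " + name);
        }
        return m.group(1);
    }
}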
Now that our “foundation” was ready, we started creating our rock-paper-scissors application, where we wanted to implement the business logic to play it between two Teslas.
Finally, we added the Google Home integration consulting its documentation. We wanted to use Tesla Authentication directly so that Google could manage the tokens. The only problem was that we couldn’t directly set the redirect URL from the Tesla client to our Google Application.
We developed a Custom Auth API with a simple process to resolve this issue. First, users log in through our Auth API to add a device to the Google Home App, which redirects them to the Tesla login page. After logging in, they copy the authorization code and send it to our application. Our Auth API handles the rest, and the user is logged in successfully.
To summarize, we researched the Tesla App and API, created a Stub API, developed a Proxy API, implemented the rock-paper-scissors game, and integrated Google Home. Despite challenges, we completed the project.
This section is about the architecture that is used. We start with the global architecture and then zoom in on the architecture of the two applications we built: Rock-Paper-Scissors and the Google integration.
Above, you can see a complete diagram of the architecture we’ve used to accomplish our results. Everything within the black-lined square represents services running on AWS. The purple square represents our applications running on Kubernetes in the AWS EKS-Cluster.
We’ve used GitHub and GitHub actions combined with Terraform, Helm, and Docker for deploying and CI/CD.
In the context of testing, we’ve used Postman to test the REST APIs and Gatling for performance testing.
A user will first need to authenticate themselves using Cognito before sending requests.
An API Gateway was not strictly necessary but was added for educational purposes. As a result, we needed to link it to our RPS (Rock-Paper-Scissors) application via a VPC link to a Network Load Balancer (NLB), alongside the Application Load Balancer (ALB) that is connected to the RPS application.
The RPS application then communicates with our Proxy-API that will (based on the active profile) communicate with one of the Tesla-APIs.
Finally, the key rotation mentioned in the previous section is also included in the diagram.
The applications are deployed on Kubernetes and made accessible through Ingress and Route53. The Google Nest Speaker, integrated with Google Assistant, is linked to a Google Home account, which, in turn, is connected to our applications. The Proxy API communicates with the Tesla APIs.
The development of our project required the use of a variety of technologies to ensure a successful outcome. Here’s a brief overview of each technology and why it was chosen.
AWS: AWS was chosen as the cloud platform for our project because of its scalability, efficiency, and security benefits. By leveraging the resources offered by AWS, we were able to take advantage of the many benefits of cloud computing.
Terraform: Terraform is a tool used for infrastructure as code. This tool allowed us to automate the provisioning and management of our infrastructure on AWS, making it easier for us to manage and maintain our infrastructure over time. The whole infrastructure will be deployed on Monday morning and destroyed on Friday night using a cronjob.
GitHub + GitHub Actions: GitHub is a version control platform that we use to store and manage our code. GitHub Actions is a continuous integration and deployment (CI/CD) platform that allows us to automate the build, test, and deployment of our application. The CI/CD pipeline runs every time a pull request is created and validates that the application can be built using Maven. When code changes are pushed directly to the “develop” branch, this serves as the second trigger for the CI/CD pipeline. In addition to running the build and test processes, this trigger also initiates a Terraform deployment to our development environment. This workflow ensures that not only is the code validated, but the underlying infrastructure in the Dev environment is also modified or extended as required.
Java Spring Boot: Java Spring Boot is a framework used for building microservices and web applications. In our project, we utilized the power of Java Spring Boot to build the back end of our application. Our project consisted of several different microservices, including the RPS proxy, RPS backend, stub Tesla API, Google application, and Tesla authenticator. Each of these microservices played a crucial role in the functionality and performance of our overall application.
Google Actions: Google Actions is a platform for building conversational experiences for Google Assistant. We used Google Actions to build and integrate conversational interfaces into our application. For authenticating our users, Google Actions is linked to our AWS Cognito. Google Actions sends different requests, such as a synchronization request, to the webhook of our Google Application microservice, which we developed.
Docker: Docker is a containerization technology used for deploying and managing applications. We used Docker to containerize our application, which allowed us to easily deploy and manage our application.
Kubernetes + Helm: Kubernetes is a container orchestration platform used for automating the deployment, scaling, and management of containerized applications. We used Kubernetes to manage our Docker containers and Helm to simplify the deployment and management of our application.
Renovate: Renovate is a tool used for automating the updating of dependencies in our application. We used Renovate to ensure that our dependencies were always up-to-date and secure. It automatically opens a pull request when a new version of a dependency is found. We’ve set a limit of two open pull requests from Renovate so that we can maintain a good overview.
Postman: Postman is a tool used for testing and documenting APIs. We used Postman to test our APIs, which allowed us to ensure that our APIs were working correctly and that they were easy to use for other developers.
Gatling: Gatling is a load-testing tool used to test the performance of our application. We used Gatling to ensure that our application could handle high volumes of traffic and remain performant under heavy load.
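To give an idea of what such a test looks like, below is a minimal simulation using Gatling’s Java DSL. The endpoint and load profile are illustrative assumptions, not the actual performance test we ran.

import static io.gatling.javaapi.core.CoreDsl.rampUsers;
import static io.gatling.javaapi.core.CoreDsl.scenario;
import static io.gatling.javaapi.http.HttpDsl.http;
import static io.gatling.javaapi.http.HttpDsl.status;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;
import java.time.Duration;

// Illustrative sketch: the endpoint and numbers are assumptions.
public class RpsLoadSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

    ScenarioBuilder scn = scenario("Vehicle state under load")
            .exec(http("get vehicle state")
                    .get("/api/vehicles/1/state")
                    .check(status().is(200)));

    {
        // Inject 50 users in total, ramped linearly over 30 seconds.
        setUp(scn.injectOpen(rampUsers(50).during(Duration.ofSeconds(30))))
                .protocols(httpProtocol);
    }
}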
Our final product consists of two applications designed to offer a unique and engaging user experience. The first application is a fun and interactive game allowing users to play classic rock-paper-scissors with their Tesla car.
The second application is a Google Home app that enables users to control a Tesla car using Google Assistant. With this application, users can quickly check their vehicle’s temperature and battery level and start and stop charging with just a few simple voice commands. Users can connect Google Assistant to their Tesla car quickly and easily, offering a fun solution to manage their vehicle.
Developing an application can be complex, especially when dealing with unfamiliar technologies. Our project encountered several challenges, but our most significant achievement was learning and growing our skills.
Communication was one of the significant challenges we faced during the project. It was not always clear who was responsible for which task, which resulted in duplicated functions and conflicting efforts. To address this issue, we had to improve our communication and agile way of working and ensure that all team members were aware of each other’s progress.
Deploying a complex application in the cloud can also present several challenges, especially when dealing with unfamiliar platforms like AWS. To set up all the necessary resources in AWS, we had to learn how to work with Terraform, which required attending workshops, consulting documentation, and utilizing tutorials to learn this essential skill.
Furthermore, deploying the application using Kubernetes proved incredibly challenging, as we had little prior experience with this technology. This required us to explore various solutions, test different approaches, and collaborate closely with our team members to overcome this challenge.
In the end, the challenges we faced proved to be valuable lessons. We completed our project by improving our communication and acquiring new skills. Our experience has taught us the importance of collaboration and adaptability when working with unfamiliar technologies. These lessons will prove beneficial in future projects and enable us to tackle even more complex challenges in application development.
Like stars in the night sky, managed Kubernetes services vary in brilliance; not all shine equally, but the right one can illuminate your journey to cloud-native excellence. - ChatGPT 2023
Kubernetes has been the de-facto standard for cloud-native container orchestration for some years now. Organizations that rely on a microservices architecture, often rely on Kubernetes to be their platform to deploy to. One of the biggest selling points, especially in recent years, is that Kubernetes can provide a very similar user experience for developers across different on-premise and cloud environments. It can provide a common abstraction from the underlying infrastructure through its API. This also allows solutions built on top of Kubernetes to be easily portable across different environments. Or at least, that’s the promise that is being made by Kubernetes platform teams across the industry.
Running Kubernetes yourself can be very hard to do properly. Managing all the moving parts, especially when the workloads change frequently, can be challenging, to say the least. Maintaining the hardware that runs the cluster, maintaining the hypervisor on top of that, and maintaining, updating and configuring all the components of a Kubernetes control plane (etcd, schedulers, kubelets, …) is hard. A team of highly skilled engineers with different expertise (networking, deep OS knowledge, storage, …) is needed to do this on-premise. There are distributions available that help by automating a lot of the configuration and providing a paved road to set it up, but those come at a significant premium. Mix in the usage of more advanced features like a service mesh, custom operators or multi-cloud clusters and it can become a real challenge to maintain. All hyper-scale (and more and more smaller-scale) cloud providers have managed Kubernetes offerings for customers who want to leverage the power of Kubernetes but don’t want the hassle of managing all the different pieces of the solution. Google has its Google Kubernetes Engine, GKE for short, Azure has its Azure Kubernetes Service, AKS, and lastly, Amazon has its Elastic Kubernetes Service, EKS for short. Although all of these services deliver a Kubernetes cluster, they aren’t all created equally, as will become clear further in this blog post.
For most deployments of Kubernetes, Kubernetes itself is only part of the software stack of a software project.
A plethora of extensions, add-ons and so-called operators are used to extend the default functionality and integrate external services into Kubernetes for easier management of those resources.
Some common examples are the external-dns operator, cert-manager and the cluster autoscaler projects.
Each of these components extends a cluster’s functionality by providing DNS management, certificate management, and auto-scaling capabilities, respectively.
Many articles have been written about how the different offerings compare w.r.t. speed, scaling and default feature set. This blog post aims to provide a comparison from a practical, end-user perspective when installing all the bells and whistles needed to use the platform to host all required components to build and run a web application in a cloud-native way.
This section will go into detail about the software stack used for the comparison and how that stack was selected. If you’re just interested in the comparison, feel free to skip this section.
The platform used in the comparison is based on a real-world use case for a customer in the financial services sector. They operate the platform to build and host their core application, a microservices architecture consisting of 35+ services including third-party services like Apache Kafka, PostgreSQL and Elasticsearch just to name a few.
The design principles can be summarized into the following:
This has led to the following architecture:
This setup can be reviewed in the demo repositories for this blog post: Azure, AWS (https://github.com/pietervincken/k8s-on-aws) and common (https://github.com/pietervincken/renovate-tekton-argo-talk).
The platform was originally built to run on top of Azure AKS, with the mindset that it should be easy to migrate to another cloud provider at some point. For this comparison, the setup was re-platformed on top of AWS EKS with the same design principles in mind. Although careful thought has gone into making the comparison as fair as possible, some decisions might have been influenced due to this history, so keep that in mind when reading the comparison later in this post. This comparison will only consider AWS and Azure cloud platforms.
This post will provide a high-level comparison between the two cloud providers AWS and Azure. The stack that was described above will be deployed to both clouds according to the design principles. In line with the design principles, choices were made to integrate with different managed services, as long as the same developer experience in the cluster is maintained.
From an architectural perspective, both setups are very similar and highly specific to the cloud of choice at the same time.
Let’s start with the Kubernetes control plane. This is very similar for both AWS and Azure as these are both fully managed services that are deployed and maintained by the provider. In both cases, no access to the actual control plane nodes is available, nor are they even visible. The etcd is not exposed to the customer in either of the provider’s managed services. They both support public (default) and private control plane deployments.
With regards to the data plane, the setups are similar as well. On AWS, the data plane runs on AWS EC2 nodes using Auto-Scaling Groups (ASG). On Azure, it runs on top of Azure Virtual Machines using VM Scale-Sets. Out of the box, these nodes are deployed into their equivalent of a private network, VPC or VNET, and a group of nodes is assigned to a subnet of that network.
AWS has the option to not run directly on top of EC2 instances, but to use AWS Fargate instead. This service allows EKS to deploy containers without having to spin up and maintain nodes. A similar option is available on Azure, namely Azure Container Instances. Due to some caveats with these integrations that hurt portability, they were excluded from this comparison.
One of the big advantages of using a public cloud provider is that your “hardware” footprint can easily scale up and down, the so-called elasticity of the cloud. It’s such a big selling point for the public cloud that AWS used it for their service naming scheme.
It’s no wonder that it’s also a key feature for the managed Kubernetes offerings for the different cloud providers. Both AWS and Azure support using the standard Kubernetes autoscaler project to automatically provision and destroy the additional capacity for the data plane. However, their support and documentation are significantly different.
Enabling autoscaling on AKS is as easy as enabling the option during cluster creation. When adding the default (or any other) nodepool, the autoscaling option can be enabled. This option will make sure that all required components, resources and configurations are created to support autoscaling of the data plane nodes. This includes installing the cluster auto scaler component into the cluster and setting up the required roles and rights for it to manage the node pool(s). The Azure AKS documentation shows how it can be enabled. There is some configuration possible, but the cluster auto-scaling configuration will be managed by Azure and isn’t available to the user to change.
On EKS, the journey is a bit more hands-on. The AWS EKS documentation lists two default options to enable auto-scaling of the data plane nodes: Karpenter and the cluster autoscaler. Enabling these options is completely up to the infrastructure maintainer, and the resources required to enable the integration need to be deployed by the infrastructure maintainer; it’s not a single “add-on” that can be enabled. Installing the cluster autoscaler into EKS requires setting up an IAM role for the autoscaler to access the AWS EC2 Auto Scaling Groups, deploying the cluster autoscaler itself into the cluster, and making sure it’s configured to update the correct cluster (some auto-discovery is possible). There are Helm charts, Terraform modules and good documentation available to set all of it up easily. The advantage of this approach is that it’s highly configurable, but it’s more work to configure.
This blog post won’t go into the discussion of whether running stateful workloads on Kubernetes is a good or bad idea. It will however discuss how persistent storage can be used and integrated into the Kubernetes clusters. For the use case discussed earlier, persistent storage is needed for the CI tool: Tekton. For disk integration, only integrations with Kubernetes Persistent Volumes are considered.
Both AWS and Azure have support for integrating the disk-based storage solutions directly into the cluster: Elastic Block Storage (EBS) for AWS and Managed Disks for Azure. This allows a developer to specify a Kubernetes Persistent Volume Claim and the cloud provider will automatically provision the required disks, attach it to the node running the workload and make it available to the pod. For both cloud providers, most of their disk types are supported.
Support for persistent volumes is enabled by default for an AKS cluster. A standard deployment of an AKS cluster includes the deployment of the required Azure Disks CSI driver into the cluster. The Azure Disks CSI driver is the default storage class used in the cluster. Disks created through storage classes enabled by this driver will be backed by Azure Managed Disks automatically.
On AWS’s EKS, persistent storage support needs to be explicitly enabled. AWS provides an add-on that can be enabled to provide support for EBS on EKS. The EBS CSI driver add-on provides support for using EBS volumes in the cluster. Any persistent volumes created in the cluster will trigger the creation of an EBS volume and the required actions to make the volume available to the pod.
From an application developer’s perspective, both solutions are identical and provide a similar level of support. As of the time of writing, both support more advanced features like encryption, snapshotting and resizing of the volumes as part of the CSI driver implementation.
Other storage solutions are available on both providers: Azure Files, AWS EFS, AWS FSx, and AWS File Cache. Since these aren’t needed to support the use case, they are not included in the comparison and are only mentioned for completeness.
Encryption for a cluster needs to be considered in multiple locations: (node) disk encryption, ETCD (secrets) encryption and API access encryption. By default, all Kubernetes API access uses TLS in-flight on both AKS and EKS.
ETCD secrets encryption is available on both EKS and AKS, but not enabled by default.
For EKS, an option can be enabled by providing a Key Management Service (KMS) Key. A key has to be created upfront and passed to the EKS service during cluster creation or during an update after the cluster is created. The role associated with the cluster needs to have the appropriate rights to access the key. The action of encrypting the secrets is irreversible once enabled. Rotation of the key is done automatically by AWS in the KMS service, yearly by default.
For AKS, a similar setup is available. A key needs to be created in an Azure Key Vault and access to the key needs to be provided to the identity associated with the AKS cluster. Rotation of the keys is supported but is a manual activity on AKS. All secrets need to be updated during the rollout of a new key. AKS does support disabling the encryption if that’s ever desired.
Encryption of persistent volume disks is enabled by default on EKS. A custom key can be used if desired by providing a reference to the key during volume creation (through PVs or PVCs) or by adding it to the configuration of a storage class. If no key is specified, the default EBS encryption key for the account is used to encrypt the volume. EKS supports using different keys for different disks. The root disks attached to the data plane EKS nodes aren’t encrypted by default; a custom launch template is needed to enable this, or encryption needs to be enabled across the entire AWS account.
On AKS, all volumes, both persistent volumes and data plane node root volumes are encrypted by default using Microsoft-managed keys. AKS supports customer-provided keys through the usage of a Disk Encryption Set backed by an Azure Key Vault key. Enabling root volume encryption can be configured in the cluster configuration by providing the required references to the disk encryption set. To encrypt persistent volumes automatically using a customer-provided key, the storage class can be adapted. This means that all disks created using that storage class will have the same encryption key associated with it.
The following topics are very similar across both services or don’t have a fundamental impact on the experience, but are included for completeness.
Supported by both services:
Unique on AWS:
Unique on Azure:
AWS EKS and Azure AKS both provide a set of add-ons on top of their standard Kubernetes offering. Both add-on lists are focused on integrating their own services into the cluster. A few add-ons worth investigating are AWS Distro for OpenTelemetry (EKS), Open Service Mesh (AKS) and KEDA (AKS).
The out-of-the-box experience for both services is quite different but in line with the general experience on the platforms.
AWS’s EKS implementation feels focused on providing the right building blocks to create a great platform. It supports a high level of customization and provides many integration points with other AWS services. The integration with Key Management Service for the encryption of disks is a good example. The fact that AWS automatically rotates the keys means that operators can configure this once and forget about it. Another point where the customization helps is the ability to encrypt the node pool disks. It’s possible, but the necessity to provide a complete launch template just to enable encryption on the nodes feels a bit like overkill. The freedom to configure almost anything comes at the cost of an easy getting-started experience. Getting all the i’s dotted and t’s crossed can be tricky, and debugging it can be hard.
Azure’s AKS feels focused on a decent “end-user” experience. There are a lot of sane defaults that make sense for most deployments. Customization is possible in some locations, but if it’s not supported out of the box, there is no way to work around it like on AWS. The integration with other Azure services is also a hit or miss. If it’s natively integrated, it tends to work well. If not, don’t hold your breath for the AKS team to implement support for it any time soon.
Both AWS and Azure are quite public about their roadmap and suggestions can be raised by customers easily through their respective GitHub projects. From experience, it also doesn’t hurt to raise requests through support or your representatives for Azure or AWS.
This blog post discussed the findings based on the experience that was gathered from the use case mentioned at the start. This means that this is not an exhaustive list of pros and cons for both platforms and their respective Kubernetes implementations. This blog post is the first in a series of blog posts about this comparison.
Later blog posts will be created that zoom in on the following topics:
If you have any other topics you’d like to see discussed, you want help with your own Kubernetes journey or you have feedback about this blog post, feel free to contact Pieter on LinkedIn!
In today’s interconnected digital landscape, enterprises are increasingly relying on a mix of cloud solutions to meet their diverse needs. One common challenge is efficiently managing user identities and authentication across these platforms. But what happens when you’re already invested in Azure Active Directory as your primary identity provider?
While many companies utilize Azure AD, they may not fully adopt the entire Azure ecosystem. This is especially noteworthy since a significant number of enterprises rely on the Microsoft ecosystem, including the widely-used Office 365 suite. Implementing Single Sign-On is considered a security best practice because it simplifies the process of onboarding and offboarding employees, facilitates access management, and enables policy enforcement for specific users. Due to the widespread use of Office 365, Azure AD is commonly employed by enterprises for managing Single Sign-On.
However, if your applications are hosted on AWS, it may be more beneficial to opt for AWS Cognito to oversee the Single Sign-On for these applications. Although an AWS Lambda function could authenticate with Azure AD, AWS Cognito provides a convenient approach to incorporate authentication within AWS services like API Gateway and CloudFront.
In this article, we’ll explore how to efficiently integrate users from your Azure AD environment into AWS Cognito. Can these two seemingly distinct systems work together to provide a unified authentication experience? Let’s delve into the details and discover how this integration can be seamlessly achieved.
To keep things straightforward, we will utilize Terraform to establish the infrastructure. This setup can be easily replicated across various AWS and Azure accounts for convenience.
The setup process will span across the two specified cloud platforms: Azure and AWS. The initial phase involves establishing an App Registration within Azure AD, followed by the creation of a user pool in AWS Cognito. These components will then be synchronized together.
Optionally, a Lambda function can be added to AWS Cognito that synchronizes groups from Azure AD into AWS Cognito. This becomes significant when users within Azure AD are organized into groups and you want to manage permissions based on those groups.
We should create an App Registration within Azure AD which will be utilized in a later stage by AWS Cognito. Every authentication request should redirect back with a response to our Cognito domain.
Optionally, you can synchronize Azure AD groups with AWS Cognito. This synchronization involves utilizing the required_resource_access block and can be valuable for permissions administration linked to distinct groups like MANAGEMENT.
More information about the different resource apps and which IDs to use can be found here. For Microsoft Graph, the ID that we should use is 00000003-0000-0000-c000-000000000000. More information about the different roles for Microsoft Graph and which IDs to use can be found here. For the Directory.Read.All role, the ID should be 7ab1d382-f21e-4acd-a863-ba3e13f7da61.
Please note, you need to give consent manually to the API permissions of your App Registration. This can be done in the API permissions section of your App Registration. Additionally, it’s essential to generate a client secret for the application with the azuread_application_password resource. This secret facilitates Cognito’s authentication with Azure AD and enables configuring your identity provider within the Cognito environment.
resource "azuread_application" "application" {
display_name = "example-application-for-aws"
web {
redirect_uris = [
"https://example-application.auth.eu-west-1.amazoncognito.com/oauth2/idpresponse"
]
}
required_resource_access {
resource_app_id = "00000003-0000-0000-c000-000000000000" # Microsoft Graph
resource_access {
id = "7ab1d382-f21e-4acd-a863-ba3e13f7da61" # Directory.Read.All
type = "Role"
}
}
}
resource "azuread_application_password" "application_password" {
application_object_id = azuread_application.application.object_id
}
Screenshots: App Registration overview, redirect URL, and client secret.
Should you choose to synchronize your Azure AD user groups with AWS Cognito, it is mandatory to generate this Lambda function. This Lambda function will be triggered following the confirmation of your account and after each instance of account authentication. The source code of the Lambda can be found here.
This provided Terraform code enables you to deploy the Lambda function within your AWS account with the correct set of permissions. Terraform was selected for deployment due to its widespread use throughout the entire project; however, it’s worth noting that alternatives like AWS SAM and Serverless are also available.
data "archive_file" "user_to_group_lambda_file" {
type = "zip"
source_dir = "<path_to_source_code>"
output_path = "lambda-add-user-to-groups.zip"
}
data "aws_iam_policy_document" "user_to_group_lambda_iam_policy_document" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "user_to_group_lambda_iam_role" {
name = "lambda-add-user-to-groups-role"
assume_role_policy = data.aws_iam_policy_document.user_to_group_lambda_iam_policy_document.json
}
resource "aws_lambda_function" "user_to_group_lambda_function" {
filename = "lambda-add-user-to-groups.zip"
function_name = "lambda-add-user-to-groups"
handler = "index.handler"
role = aws_iam_role.user_to_group_lambda_iam_role.arn
source_code_hash = data.archive_file.user_to_group_lambda_file.output_base64sha256
runtime = "nodejs18.x"
}
resource "aws_lambda_permission" "user_to_group_lambda_permission" {
statement_id = "AllowExecutionFromCognito"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.user_to_group_lambda_function.function_name
principal = "cognito-idp.amazonaws.com"
source_arn = aws_cognito_user_pool.cognito_user_pool.arn
}
data "aws_iam_policy_document" "user_to_group_lambda_cognito_iam_policy_document" {
statement {
effect = "Allow"
actions = [
"cognito-idp:AdminAddUserToGroup",
"cognito-idp:CreateGroup"
]
resources = [aws_cognito_user_pool.cognito_user_pool.arn]
}
}
resource "aws_iam_policy" "user_to_group_lambda_cognito_iam_role" {
name = "lambda-user-to-group-cognito-policy"
description = "IAM policy for add user to cognito user group from a lambda"
policy = data.aws_iam_policy_document.user_to_group_lambda_cognito_iam_policy_document.json
}
resource "aws_iam_role_policy_attachment" "user_to_group_lambda_cognito_iam_role_policy_attachment" {
role = aws_iam_role.user_to_group_lambda_iam_role.name
policy_arn = aws_iam_policy.user_to_group_lambda_cognito_iam_role.arn
}
The initial step involves setting up a Cognito user pool. If you opt to synchronize user groups from Azure AD to Cognito, it’s necessary to fill in the lambda_config block. The post_confirmation trigger is invoked by AWS Cognito once a newly registered user confirms their user account. Similarly, the post_authentication trigger is activated after a user successfully signs in. Both triggers invoke the same Lambda.
resource "aws_cognito_user_pool" "cognito_user_pool" {
name = "example-application-for-aws-user-pool"
lambda_config {
post_confirmation = aws_lambda_function.user_to_group_lambda_function.arn
post_authentication = aws_lambda_function.user_to_group_lambda_function.arn
}
}
User pool properties
A redirect URL was previously mentioned in the Azure AD App Registration. The following block will then generate the associated domain URL within AWS Cognito.
resource "aws_cognito_user_pool_domain" "cognito_user_pool_domain" {
domain = "example-application"
user_pool_id = aws_cognito_user_pool.cognito_user_pool.id
}
Domain
To utilize users from our Azure AD, we must integrate an identity provider rather than generating new users within Cognito. In this integration, we’ll reference the previously created client_id and client_secret from Azure AD.
Additionally, as we aim to synchronize all useful user attributes from Azure AD to AWS Cognito, we’re required to complete an attribute mapping.
The data in the response body of the attributes_url endpoint is mapped to the corresponding attributes in AWS Cognito.
Please note: Don’t forget to fill in your tenant ID.
resource "aws_cognito_identity_provider" "cognito_identity_provider" {
user_pool_id = aws_cognito_user_pool.cognito_user_pool.id
provider_name = "AzureAD"
provider_type = "OIDC"
provider_details = {
authorize_scopes = "email profile openid"
client_id = azuread_application.application.application_id
client_secret = azuread_application_password.application_password.value
oidc_issuer = "https://login.microsoftonline.com/${var.tenant_id}/v2.0"
authorize_url = "https://login.microsoftonline.com/${var.tenant_id}/oauth2/v2.0/authorize"
token_url = "https://login.microsoftonline.com/${var.tenant_id}/oauth2/v2.0/token"
attributes_request_method = "GET"
attributes_url = "https://graph.microsoft.com/oidc/userinfo"
jwks_uri = "https://login.microsoftonline.com/${var.tenant_id}/discovery/v2.0/keys"
}
attribute_mapping = {
email = "email"
name = "name"
given_name = "given_name"
family_name = "family_name"
picture = "picture"
email_verified = "email_verified"
}
}
Azure AD Identity Provider
To integrate the authentication process with services such as API Gateway, we need the user pool ID. This is why we store the ID in the Parameter Store, enabling us to retrieve it whenever needed.
resource "aws_ssm_parameter" "ssm_parameter" {
name = "example-application-for-aws-user-pool-id"
type = "String"
value = aws_cognito_user_pool.cognito_user_pool.id
}
If you already have an established AD setup and wish to synchronize it with Cognito, this is a recommended approach. However, you do have the option to authenticate your users directly within your existing AD setup without involving Cognito. In doing so, though, you give up the advantages of the seamless integration that AWS and Cognito offer for their services. It’s worth noting that using Keycloak with AWS is also discouraged, as it requires manual management and lacks native integration with AWS services.
By synchronizing AWS Cognito with Azure AD, we can seamlessly authenticate our enterprise users across AWS services. Leveraging two managed services from both clouds, the majority of the authentication process is abstracted away. Consequently, there’s no need to manually make calls to Azure AD for authentication within our AWS-hosted applications. Instead, we can effortlessly integrate Cognito’s native authentication mechanism into services such as API Gateway, CloudFront, and other services.
In this example, we used Terraform to deploy our Lambda function on AWS. While this method is functional, it is advisable to consider using a framework such as Serverless or AWS SAM. These frameworks offer a more managed and streamlined approach for deploying Lambda functions on AWS.
SPIFFE is an open-source project maintained by the CNCF. It operates as a standardized framework dedicated to the secure identification and fortification of communication channels among diverse application services.
In an era where data breaches and cyberattacks dominate headlines (Outdated password exposed, leaked passwords on GitHub), traditional password-based authentication methods increasingly fail to provide the security and convenience that today’s dynamic, multi-cloud environments demand. Enter SPIFFE, the Secure Production Identity Framework for Everyone, a revolutionary solution that not only addresses the shortcomings of passwords but also unlocks a new era of identity management in complex, distributed systems.
As businesses expand their operations across multiple cloud platforms, maintaining consistent and robust authentication mechanisms becomes all the more daunting. The complexities of managing access across various clouds, services, and applications while safeguarding sensitive data demand a paradigm shift. This is where going “passwordless” emerges as a beacon of innovation, promising enhanced security, streamlined user experiences, and simplified administration.
However, the current approach of relying on passwords and keys for authentication presents its own set of challenges. Rotating passwords and keys in a timely manner often becomes an overlooked practice due to factors like limited time, budget constraints, lack of established procedures, or simply being forgotten. Moreover, the usage of weak passwords is all too common, undermining the very foundation of security.
What’s more, passwords and keys are frequently shared through emails, chat platforms, and even unintentionally committed to version control systems. They might find their way onto employees’ devices, creating vulnerabilities that can be exploited by malicious actors. The need for a more secure, systematic, and foolproof identity management solution is evident.
In the dynamic landscape of distributed systems, SPIFFE stands as a powerful solution that revolutionizes identity management. By discarding the vulnerabilities associated with conventional password-based methods, SPIFFE offers a robust and secure approach. It ensures reliable authentication without the risks of easily compromised passwords and keys. In doing so, SPIFFE not only bridges security gaps but also propels us towards an authentication landscape that is more efficient, dependable, and poised for the future.
This blog post will explain how to install SPIFFE on your on-premises Kubernetes cluster. Additionally, we will deploy an application that will establish a connection with an S3 bucket.
Authentication with SPIFFE relies on mutual Transport Layer Security (mTLS) protocol. mTLS is a form of secure communication that ensures both parties—client and server—authenticate each other using digital certificates, enhancing the security of data exchanged over the network.
So, SPIFFE will generate a CA certificate that we’ll utilize to establish trust and verify the certificates created by SPIFFE. It’s crucial to upload this CA certificate to our AWS account, ensuring that AWS recognizes and trusts all incoming requests from our cluster.
SPIFFE generates short-lived certificates upon application startup for making requests. To maintain connection stability, SPIFFE periodically renews these certificates before expiration, ensuring seamless communication.
As our pod initializes, SPIFFE comes into play by mounting certificates within the pod. These certificates serve a pivotal role in enabling secure calls to AWS via mutual TLS (mTLS). During these calls, our pod aims to acquire AWS credentials, which it can access internally. However, just like the certificates, these AWS credentials follow a short-lived lifecycle. The remarkable aspect here is that SPIFFE is responsible for automatically renewing these credentials. This autonomous renewal process ensures that our AWS credentials remain up-to-date and effective, seamlessly aligning with SPIFFE’s ethos of continual security enhancement.
An example of a certificate.
Installing cert-manager on your Kubernetes cluster before installing SPIFFE is a crucial step. cert-manager is a widely used tool that automates the management of TLS certificates within Kubernetes.
Integrating SPIFFE into your Kubernetes cluster relies on secure communication channels established through TLS certificates. cert-manager simplifies the management and issuance of these certificates, ensuring that the identities and connections established by SPIFFE are trustworthy and adequately encrypted.
We begin by creating a namespace where we will deploy cert-manager and SPIFFE.
Use the following two commands to create the namespaces and switch to the desired namespace:
kubectl create namespace cert-manager
kubectl config set-context --current --namespace=cert-manager
In order to install cert-manager and SPIFFE, we can make use of the Helm package manager and install these Helm charts on our cluster. To achieve this, we need to fetch the Jetstack repository locally.
Use the following commands to achieve this:
helm repo add jetstack https://charts.jetstack.io
helm repo update
Now that we’ve added Jetstack as a helm repository, we can proceed to install cert-manager on our Kubernetes cluster.
Use the following commands to install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
helm upgrade --install cert-manager jetstack/cert-manager --version v1.11.0 --set prometheus.enabled=false
Now that cert-manager is installed, we can proceed to the installation of SPIFFE.
To do this, use the following command:
Please note: for the trusted domain, you could use a different name; I suggest using the name of your Kubernetes cluster. You will need this domain name later when setting up AWS policies.
helm upgrade --install cert-manager-csi-driver-spiffe jetstack/cert-manager-csi-driver-spiffe --wait \
--set image.tag=aws \
--set image.repository.driver=ghcr.io/joshvanl/cert-manager-csi-driver \
--set image.repository.approver=ghcr.io/joshvanl/cert-manager-csi-driver-approver \
--set "app.logLevel=1" \
--set "app.trustDomain=demo.home.cluster" \
--set "app.approver.signerName=clusterissuers.cert-manager.io/csi-driver-spiffe-ca" \
--set "app.issuer.name=csi-driver-spiffe-ca" \
--set "app.issuer.kind=ClusterIssuer" \
--set "app.issuer.group=cert-manager.io"
To obtain and renew certificates automatically, we use the following ClusterIssuer.
Deploy it using the following command:
kubectl apply -f https://raw.githubusercontent.com/nicholasM95/spiffe-demo/main/setup/clusterissuer.yaml
Verify if all pods have started and check whether the secret csi-driver-spiffe-ca
has been created.
Create an S3 bucket with a relevant, easy-to-remember name and upload a text file named ‘hello.txt’. Our application will read this file later.
In the context of AWS, a trusted anchor might involve setting up a trust relationship with an external identity provider or certificate authority. This ensures that the SPIFFE identities generated within the Kubernetes cluster can be verified and trusted by AWS services when defining policies, access controls, or other security-related configurations.
To achieve this, we need a CA certificate that is utilized by SPIFFE on our Kubernetes cluster. Use the following command to obtain the CA certificate:
kubectl get secret csi-driver-spiffe-ca -o jsonpath='{.data}'
Copy the value of ca.crt and decode it using base64. Example:
echo 'copied-value' | base64 -d
Navigate to the IAM section within AWS and access the “Roles” tab.
Scroll down and choose “Roles Anywhere.”
Ensure that you are still within the desired region.
Generate a trust anchor.
Opt for the external certificate bundle option and paste the decoded value of ca.crt from the secret.
A trusted profile establishes trust, defines access controls, and enables secure interaction between SPIFFE-issued identities within a Kubernetes cluster and AWS resources. It ensures a smooth and secure integration between these two environments while maintaining a solid security posture.
We require a policy that allows our application, operating in the ‘spiffe’ namespace with service account ‘file-read-app’, to gain read access to files within our designated bucket, all within the ‘demo.home.cluster’ cluster.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetObject",
"Condition": {
"StringLike": {
"aws:PrincipalTag/x509SAN/URI": "spiffe://demo.home.cluster/ns/spiffe/sa/file-read-app"
}
},
"Effect": "Allow",
"Resource": "arn:aws:s3:::<BUCKET_NAME>/*",
"Sid": ""
}
]
}
Create a new role named demo_k8s_read, establish a trust relationship, and attach the previously mentioned policy as the permission policy. Don’t forget to change the ARN source to your own trust anchor.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "rolesanywhere.amazonaws.com"
},
"Action": [
"sts:TagSession",
"sts:SetSourceIdentity",
"sts:AssumeRole"
],
"Condition": {
"StringLike": {
"aws:PrincipalTag/x509SAN/URI": "spiffe://demo.home.cluster/ns/*/sa/*"
},
"ArnEquals": {
"aws:SourceArn": "arn:aws:rolesanywhere:eu-west-1:930970667460:trust-anchor/6eac6a56-97c3-439f-9074-f33c576ab08d"
}
}
}
]
}
Go back to the “Roles Anywhere” page and create a new trust profile, again check your region. Use the role you created before.
We begin by creating a namespace where we will deploy our application.
Use the following two commands to create the namespaces and switch to the desired namespace:
kubectl create namespace spiffe
kubectl config set-context --current --namespace=spiffe
We require a service account named ‘file-read-app’ for our application. Create a new file named ‘serviceaccount.yaml’ with the following content.
apiVersion: v1
kind: ServiceAccount
metadata:
name: file-read-app
Deploy service account.
kubectl apply -f serviceaccount.yaml
We require a role named ‘file-read-app’ for our application. Create a new file named ‘role.yaml’ with the following content.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: file-read-app
rules:
- apiGroups: ["cert-manager.io"]
  resources: ["certificaterequests"]
  verbs: ["create"]
Deploy role.
kubectl apply -f role.yaml
We require a role binding named ‘file-read-app’ for our application. Create a new file named ‘rolebinding.yaml’ with the following content.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: file-read-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: file-read-app
subjects:
- kind: ServiceAccount
  name: file-read-app
Deploy role binding.
kubectl apply -f rolebinding.yaml
Create a new file named ‘deployment.yaml’ with the following content. If necessary, update the environment variables. Fill in the three AWS resource ARNs with the ones you have generated.
SPIFFE will mount files into the volume we define here: it first creates a certificate to communicate with AWS, and once that certificate is generated, it uses it to retrieve AWS credentials and mounts them as well.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-spiffe
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: demospiffe
  template:
    metadata:
      labels:
        app.kubernetes.io/name: demospiffe
    spec:
      serviceAccount: file-read-app
      containers:
      - name: demospiffe
        image: nicholas95/spiffedemo:dev-1
        imagePullPolicy: Always
        env:
        - name: AWS_REGION
          value: eu-west-1
        - name: AWS_BUCKET
          value: <BUCKET_NAME>
        volumeMounts:
        - mountPath: /root/.aws
          name: spiffe
      volumes:
      - csi:
          driver: spiffe.csi.cert-manager.io
          readOnly: true
          volumeAttributes:
            aws.spiffe.csi.cert-manager.io/enable: "true"
            aws.spiffe.csi.cert-manager.io/role: arn:aws:iam::930970667460:role/demo_k8s_read
            aws.spiffe.csi.cert-manager.io/trust-anchor: arn:aws:rolesanywhere:eu-west-1:930970667460:trust-anchor/1bf58d4e-35c6-465c-a05c-1bdcce05054b
            aws.spiffe.csi.cert-manager.io/trust-profile: arn:aws:rolesanywhere:eu-west-1:930970667460:profile/fa6c37f9-f64a-4bdf-b0f6-c39ec5282aa4
        name: spiffe
Deploy application.
kubectl apply -f deployment.yaml
When your pod has started, you can open a shell into it using the following command and install curl. Once curl is installed, you can send a GET request to ‘/file/hello.txt’; the response will be the content of the file in the S3 bucket.
kubectl exec -it <pod-name> -- /bin/sh
apk add curl
curl http://localhost:8080/file/hello.txt
exit
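To make this flow more concrete, here is a minimal, hypothetical sketch of what such an endpoint could look like in a Spring Boot service using the AWS SDK for Java v2. This is not the actual code of the demo image; names such as FileController are made up. The key point is that the SDK’s default credential chain simply picks up the credentials that the SPIFFE CSI driver mounts under /root/.aws, so no keys appear anywhere in the code.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

@RestController
public class FileController {

    // Region and credentials are resolved from the environment (AWS_REGION)
    // and the mounted ~/.aws files, so nothing sensitive lives in the code.
    private final S3Client s3 = S3Client.create();

    // AWS_BUCKET is injected through the deployment's environment variables.
    private final String bucket = System.getenv("AWS_BUCKET");

    @GetMapping("/file/{name}")
    public String readFile(@PathVariable String name) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(bucket)
                .key(name)
                .build();
        // Returns the object's content as a UTF-8 string, e.g. the contents of hello.txt.
        return s3.getObjectAsBytes(request).asUtf8String();
    }
}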
We’ve also included a file watcher to monitor the renewal of credentials and certificates.
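As an illustration of that idea only (the demo may implement it differently), a watcher along these lines can be built with java.nio’s WatchService, observing the directory where the CSI driver rewrites the credential and certificate files:
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class CredentialWatcher implements Runnable {

    // Directory where the SPIFFE CSI driver mounts and renews credentials/certificates.
    private final Path awsDir = Paths.get("/root/.aws");

    @Override
    public void run() {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            awsDir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY);

            while (true) {
                WatchKey key = watcher.take(); // blocks until a file changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    // Log the renewal; a real application could refresh cached clients here.
                    System.out.println("Credential file changed: " + event.context());
                }
                key.reset();
            }
        } catch (java.io.IOException e) {
            throw new IllegalStateException("Credential watcher failed", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}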
AWS’s IRSA (IAM Roles for Service Accounts) has been the go-to method for granting permissions to Kubernetes pods running on Amazon EKS. However, as the landscape of distributed systems evolves, SPIFFE offers a fresh perspective on securing these connections.
By setting up our connection with S3 in this manner, we eliminate the need to manage access keys and secret keys ourselves. As a result, the likelihood of our keys being leaked is minimized. Additionally, both our certificates and AWS credentials have a short lifespan.
Furthermore, we are relieved from the task of key rotation as this process occurs automatically.
In essence, SPIFFE’s focus on secure identity issuance and management can be applied to a variety of scenarios where authenticated and secure communication is crucial. Its flexibility and adaptability make it valuable across a spectrum of modern distributed and cloud-native computing environments.
SPIFFE becomes especially valuable when dealing with complex scenarios such as hybrid and multi-cloud setups. In the realm of security, SPIFFE shines by offering a passwordless solution that not only enhances protection but also eliminates the vulnerabilities and complexities associated with traditional passwords and keys.
Curious about establishing a connection to an AWS database without using passwords? This post explains how to connect to an RDS from EKS without needing a password. Since we’ve already set up SPIFFE, you can also achieve this from your on-premises, Azure, Google Cloud, … environment.
In this repository, you’ll find all the resources that were used to create this blog post.
]]>In today’s world, there are numerous navigation options available, with every mobile phone equipped with GPS functionality. However, what if there was an even easier way? Our project offers a solution where you can embark on a seamless walk, bike ride, or any other journey without the need to constantly retrieve your phone from your pocket. By following our app, you can effortlessly navigate your chosen route while remaining informed of crucial information such as direction, speed, distance, and potentially more. The beauty lies in our app’s ability to project all this data onto your eyewear, enhancing your overall experience.
Simply upload your routes using GPX files, select the desired route, and you’re ready to begin your adventure!
The project we were about to embark on involved building an Android application that runs on the Vuzix Blade AR glasses and communicates with an API running in a cloud environment.
In today’s market, there is a considerable variety of “smart glasses” available. However, when compared to other smart eyewear options, the Vuzix Blade stands out due to its subtle design, making it particularly well-suited for outdoor use. Consequently, our app is specifically tailored to function seamlessly with this particular eyewear. The only drawback worth mentioning is its limited battery life, which lasts for approximately one hour. As a result, extended excursions would require the use of a power bank to ensure uninterrupted functionality.
If you want to use these glasses yourself, here are some introductory steps to get you started on utilizing the full potential of the Vuzix Blade.
The Vuzix Blade uses the Android 5 OS, so we developed our app using Android, utilizing the Java programming language. While both of us were new to Android development, we already had experience working with mobile applications and Java, which allowed us to quickly grasp the necessary concepts and tools. There were some difficulties since we were limited to features from Android API 22, but we still managed to bring the application to a desirable outcome.
Throughout the development process, we explored various features and functionalities that could enhance the user experience. For instance, we implemented real-time GPS tracking to accurately monitor the user’s location and provide precise navigational instructions. This also let us show the user the speed at which they were travelling. We use the magnetic field sensor in the glasses to determine the user’s heading and make sure they are always pointed in the right direction.
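As a simplified, hypothetical sketch of that heading calculation on Android (API 22) — the production app wires this into its navigation screen differently — the standard approach combines the accelerometer and magnetometer readings into a compass azimuth:
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class HeadingTracker implements SensorEventListener {

    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    public HeadingTracker(SensorManager sensorManager) {
        // Listen to both sensors needed to derive a compass heading.
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_UI);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Keep the latest accelerometer and magnetometer readings.
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
    }

    /** Returns the current compass heading in degrees (0 = north), or -1 if unknown. */
    public float currentHeading() {
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        if (!SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            return -1f;
        }
        SensorManager.getOrientation(rotation, orientation);
        float azimuthDegrees = (float) Math.toDegrees(orientation[0]);
        return (azimuthDegrees + 360f) % 360f;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Accuracy changes are ignored in this sketch.
    }
}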
By utilising the Vuzix Blade’s display, we successfully projected relevant data, such as distance, speed and direction, directly onto the user’s field of view, ensuring an intuitive and hands-free experience. With a simple interface, the user can easily select the route they want to traverse; since the app is displayed on smart glasses, the black background is rendered as transparent, so the user keeps optimal visibility while traversing their path. Furthermore, after selecting a route, the user sees information about the previous times they travelled it, as shown in the image on the right.
The user also has several inputs available to improve the experience. They can jump forward or back a point, since GPX coordinates are often rounded, which can make marked points occasionally appear inside buildings, even on user-generated routes where the user never came within a 10-metre radius of that building. To help with this, and for people who don’t want to begin at the start of their route or who simply got lost, there is a rebase function that puts you on the point closest to where you are, regardless of where in your route you were. The user can also reload the statistics for the route they are travelling at any point, or start a new route.
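The rebase idea boils down to picking the route point nearest to the user’s current position, regardless of progress so far. Here is a hypothetical illustration of that logic; the names and types are made up and the real app’s implementation may differ:
import java.util.List;

public final class Rebase {

    /** Simple latitude/longitude pair, as parsed from the GPX file. */
    public static class Point {
        final double lat;
        final double lon;
        public Point(double lat, double lon) { this.lat = lat; this.lon = lon; }
    }

    /** Returns the index of the route point nearest to the current location. */
    public static int nearestPointIndex(List<Point> route, Point current) {
        int best = 0;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < route.size(); i++) {
            double d = distanceMeters(route.get(i), current);
            if (d < bestDistance) {
                bestDistance = d;
                best = i;
            }
        }
        return best;
    }

    /** Haversine distance between two coordinates, in meters. */
    private static double distanceMeters(Point a, Point b) {
        double earthRadius = 6_371_000d;
        double dLat = Math.toRadians(b.lat - a.lat);
        double dLon = Math.toRadians(b.lon - a.lon);
        double h = Math.pow(Math.sin(dLat / 2), 2)
                + Math.cos(Math.toRadians(a.lat)) * Math.cos(Math.toRadians(b.lat))
                * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * earthRadius * Math.asin(Math.sqrt(h));
    }
}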
By incorporating these enhancements, we have created an Android app that provides users with a seamless and intuitive navigation experience on the Vuzix Blade, leveraging real-time GPS tracking, user-friendly inputs, and optimized performance.
Our application leverages the power of a Spring Boot API integrated with a Postgres database. This robust combination enables us to effortlessly retrieve routes from the cloud, providing users with easy access to their desired routes anytime, anywhere.
By utilizing Spring Boot as a middleware, our API serves as a bridge between our application and the Postgres database. Spring Boot’s extensive set of modules and libraries provide a cohesive framework for building RESTful services, ensuring smooth and reliable communication between the application and the database.
The Postgres database acts as a centralized repository, storing and managing all the route data and statistics. With this centralized approach, we ensure data integrity and efficient querying capabilities, allowing users to access and analyze their route information effectively. We save each GPX file as a string in the SQL database under a unique name, so users won’t confuse their files and the API can query by name. Route statistics are saved in the database as well, establishing a many-to-one relation with the GPX model. These statistics capture essential information such as start and end times, enabling our application to calculate the duration of routes and average speeds accurately.
The GPX object
{
"id": 0,
"name": "string",
"gpx": "string"
}
The RouteStatistic object
{
"id": 0,
"startDate": "string",
"endDate": "string",
"gpx": {}
}
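A minimal sketch of how these two objects could be mapped as JPA entities follows; it assumes a Spring Boot 2.x / javax.persistence setup, shows both entities together for brevity, and is not necessarily the project’s real model:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.ManyToOne;
import java.time.LocalDateTime;

@Entity
class Gpx {
    @Id
    @GeneratedValue
    private Long id;

    // Unique name so users cannot confuse their files and the API can query by name.
    @Column(unique = true, nullable = false)
    private String name;

    // The raw GPX file content stored as a string.
    @Lob
    private String gpx;
}

@Entity
class RouteStatistic {
    @Id
    @GeneratedValue
    private Long id;

    private LocalDateTime startDate;
    private LocalDateTime endDate;

    // Many statistics belong to one route: the many-to-one relation described above.
    @ManyToOne
    private Gpx gpx;
}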
Experience hassle-free route management with our integrated API and Postgres database solution. Seamlessly retrieve and manage routes with ease, enhancing your navigation experience. Our user-friendly approach prioritizes data integrity, delivering a streamlined and intuitive route management solution.
As the API took shape, it was time to start thinking about what our cloud infrastructure would look like. To manage and provision our infrastructure we used a nifty little tool called Terraform.
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It enables the automation of provisioning and managing infrastructure resources across various cloud providers and on-premises environments. With Terraform, you define your desired infrastructure configuration using a declarative language called HashiCorp Configuration Language (HCL). These configurations, known as Terraform code, describe the desired state of your infrastructure, specifying the resources, dependencies, and configurations needed. When you execute Terraform commands, Terraform compares the current state of the infrastructure with the desired state and determines the necessary actions to bring the infrastructure into that state. Terraform’s ability to manage infrastructure as code allows for version control, collaboration, and reproducibility, making it an efficient and scalable solution for infrastructure management.
To begin our quest for provisioned infrastructure we needed a place to store the state of our configuration. With Terraform on AWS, an S3 bucket is used for exactly this. Another resource necessary for setting up the S3 backend is a DynamoDB table. This single table stores the state lock, so that the state cannot be modified by multiple people at the same time, which could otherwise lead to configuration drift.
backend "s3" {
  bucket         = "vuzix-blade-intership-tfstate"
  key            = "terraform/terraform.tfstate"
  region         = "eu-west-1"
  profile        = "vuzix-blade-internship"
  dynamodb_table = "vuzix-blade-internship-state-lock"
}
The backend "s3" {} block provides the bucket to store the state; it resides inside the main terraform {} block in the main.tf file. You can also see the dynamodb_table parameter, which references the table for the state lock.
To start deploying our API to the cloud we’re going to need a place to store our built images, which the runner will use to launch the API. Remember that we don’t even have a database and runner in the cloud yet. The repository also has a lifecycle policy, which means the repository only keeps the 5 latest images, to keep it from overcrowding.
resource "aws_ecr_repository" "vuzix-blade-internship-container-repository" {
  name = "vuzix-blade-internship-container-repository"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = var.resource_tags
}
To push an image to the repository we could push it manually with a couple of Docker commands, but it seemed more efficient to use GitHub Actions for this. We can add a workflow file to our project that runs whenever a pull request is merged into the main branch. For now, we’ll take a break from Terraform to focus on the workflow, which is written in YAML.
This is the first and probably most important part of the workflow: defining a trigger. In our case, the trigger is a push to the main branch.
name: build/push workflow vuzix-blade-internship
on:
  push:
    branches:
      - main
When the trigger is set, we’re ready to start defining jobs. In our case we have two jobs: the deployment has been split into CI and CD respectively.
Inside this CI job, we are going to run our integration tests of the API to make sure that all the logic works to our liking before we build and push the image to the repository.
ci:
  name: CI
  runs-on: [self-hosted, Linux]
  steps:
    - uses: actions/checkout@v2
    - name: Set up Java
      uses: actions/setup-java@v2
      with:
        distribution: "zulu"
        java-version: "17"
    - name: "Run tests"
      run: |
        chmod +x mvnw
        ./mvnw test
The CD job only starts when the CI job has succeeded. Inside the CD job, we need to set up a couple of preparations before we can build and push to the repository. We need permission to push to AWS; to get this permission we made a custom IAM role (this role was created manually) that we can assume in the runner.
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v2
  with:
    role-to-assume: $
    aws-region: eu-west-1
- name: Login To Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1
role-to-assume lets us specify a role to assume by its ARN; this role must have the necessary permissions to perform all the actions inside the workflow.
Once the login attempt is successfully authenticated, the associated role will be assumed, granting the necessary permissions for ECR (Elastic Container Registry).
- name: Build Docker image
  env:
    ECR_REGISTRY: $
  run: |
    chmod +x mvnw
    ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=$ECR_REGISTRY/$ECR_REPO_NAME:latest -DskipTests
- name: Push docker image
  env:
    ECR_REGISTRY: $
  run: |
    docker push $ECR_REGISTRY/$ECR_REPO_NAME:latest
We utilize Spring Boot’s built-in build-image support, which uses Paketo buildpacks.
The command spring-boot:build-image mentioned above follows the prescribed naming convention for tagging the image before pushing it to an AWS repository.
The docker push command automatically recognizes that it needs to push to the specified repository, hence the naming convention.
If done correctly we can see that the repository in AWS receives and stores our image.
The database consists of an RDS resource in AWS which is running a Postgres 15 engine.
Before we can begin setting up our RDS instance we have to decide how we’re going to handle the network situation.
We want to put our RDS in a VPC (Virtual Private Cloud) to keep it secure inside our public cloud; we can specify this using an aws_db_subnet_group.
To ensure an optimal setup for the RDS (Relational Database Service), we require a couple of additional resources: an aws_db_parameter_group and an aws_security_group.
These resources play a crucial role in configuring and securing the RDS environment.
In the aws_db_subnet_group we just need to define our subnets and AWS will automatically recognize the correct VPC. The aws_db_parameter_group is a default parameter group for RDS with the corresponding engine (Postgres 15). Our aws_security_group, which is specifically for our RDS, opens the necessary ports in the VPC for inbound and outbound traffic from and to our App Runner (more on the App Runner later).
resource "aws_db_instance" "dev-vuzix-blade-internship-db" {
  allocated_storage      = 10
  instance_class         = "db.t3.micro"
  db_name                = "blade"
  identifier             = "dev-vuzix-blade-internship-db"
  engine                 = "postgres"
  engine_version         = "15.2"
  username               = var.db_username
  password               = var.db_password
  parameter_group_name   = "dev-vuzix-blade-internship-db"
  vpc_security_group_ids = [aws_security_group.vuzix-blade-internship-security-group.id]
  db_subnet_group_name   = aws_db_subnet_group.vuzix-blade-internship-db-subnet-group.name
  skip_final_snapshot    = true
  tags                   = var.resource_tags

  depends_on = [aws_db_parameter_group.dev-vuzix-blade-internship-db, aws_db_subnet_group.vuzix-blade-internship-db-subnet-group]
}
You can see in our aws_db_instance where we have defined the security group, VPC and parameter group via the parameters vpc_security_group_ids, db_subnet_group_name and parameter_group_name.
Now for the fun part: finally running our API in the cloud. To achieve this, we’re using the App Runner resource. There are a couple of things you need to take care of before trying to run your app in an App Runner. The first requirement is for the resource to reside within the same Virtual Private Cloud (VPC) as the RDS, enabling connectivity between them. The second requirement entails the need for a custom role assigned to the runner, granting permissions to fetch the image from the ECR (Elastic Container Registry). Additionally, the runner must have a security group configured to establish a connection with the RDS (Relational Database Service).
In our aws_security_group that is specifically for our App Runner, we specify the ports in the VPC for inbound and outbound traffic from and to the RDS.
The endpoint of our App Runner, however, is public; because of this, the endpoint of the App Runner needs to be handled like a secret.
Similar to the aws_db_subnet_group, the aws_apprunner_vpc_connector, which puts the App Runner in the desired VPC, takes an array of subnets as a parameter and automatically recognizes the VPC.
In the aws_apprunner_service you have to specify a role that has access to the image repository for pulling the image into the App Runner. The role needs permissions to be assumed by the App Runner and to perform actions on our image repository. Specifying a role is mandatory if the App Runner needs to access a private image repository.
resource "aws_apprunner_service" "vuzix-blade-internship-apprunner-service" {
  service_name = "vuzix-blade-internship-apprunner-service"
  tags         = var.resource_tags

  network_configuration {
    egress_configuration {
      egress_type       = "VPC"
      vpc_connector_arn = aws_apprunner_vpc_connector.vuzix-blade-internship-vpc-connector.arn
    }
  }

  source_configuration {
    authentication_configuration {
      access_role_arn = aws_iam_role.vuzix-blade-internship-apprunner-service-role.arn
    }
    image_repository {
      image_identifier      = "930970667460.dkr.ecr.eu-west-1.amazonaws.com/vuzix-blade-internship-container-repository:latest"
      image_repository_type = "ECR"
      image_configuration {
        port = 8080
        runtime_environment_variables = {
          ENV               = "prod"
          DATABASE_ENDPOINT = aws_db_instance.dev-vuzix-blade-internship-db.endpoint
          DATABASE_NAME     = aws_db_instance.dev-vuzix-blade-internship-db.db_name
          POSTGRES_USER     = var.db_username
          POSTGRES_PASSWORD = aws_secretsmanager_secret_version.vuziz-blade-internship-db-pwd-v1.secret_string
        }
      }
    }
  }

  depends_on = [aws_iam_role.vuzix-blade-internship-apprunner-service-role]
}
In the aws_apprunner_service above you can see the network_configuration {} block, where the aws_apprunner_vpc_connector is specified, and the source_configuration {} block, where we specify the role for access to the ECR, the image identifier, and how we want to configure the application.
After the App Runner was set up we wanted a way to automatically apply and destroy our Terraform configuration. Because we do not want to lose any previously built images or data inside the database, the only resource being automatically destroyed and set up again is the App Runner.
name: Terraform Apply On Schedule
on:
  schedule:
    - cron: '0 6 * * 1-5'
The trigger for our Terraform Apply workflow. This workflow runs every day from Monday to Friday at 6 am (UTC), and the Terraform Destroy workflow runs at 4 pm (UTC).
- name: Terraform Init
  run: |
    terraform init -input=false
- name: Terraform Apply
  env:
    DB_USERNAME: $
    DB_PASSWORD: $
  run: |
    terraform apply -var="db_username=$DB_USERNAME" -var="db_password=$DB_PASSWORD" --auto-approve
We initialize our Terraform and apply the configuration; the terraform destroy workflow is very similar, except that in the destroy we use the -target flag to target our App Runner resource.
Displayed below you will find two images, one representing the project as a whole and how it intertwines with our cloud infrastructure. The second picture represents the entire Terraform state and the dependencies of each resource.
Project Visual
Terraform Graphical
Our project encompasses various technologies and tools to enhance the navigation experience. We leverage the Vuzix Blade smart glasses, utilizing their subtle design for outdoor use. Our app, developed using Android and Java, seamlessly integrates with the Vuzix Blade, projecting useful information onto the user’s eyewear without hindering their sight.
Furthermore, our system incorporates a Spring Boot REST API integrated with a Postgres database. This combination enables us to effortlessly send or retrieve routes and their analytics from the cloud, offering efficient data management and reliable access to essential navigation information. To efficiently manage our cloud resources, we employ Terraform as our infrastructure-as-code tool. With Terraform’s configuration files, we define and provision the required cloud resources, ensuring consistent and tailored infrastructure setups. A pipeline automatically builds and destroys cloud resources to maintain optimal availability and low costs.
Overall, our project integrates different technologies and tools, providing users with a convenient, hands-free navigation experience while optimizing resource management and leveraging cloud capabilities.
We also created this visual to represent all the technologies and tools we used to bring this project to fruition.
Last but not least a special thanks to our mentors Jeroen Vereecken and Stijn Geerts for guiding us through our internship and helping us along the way. And a big thanks to Nils Devos for giving us this opportunity.
]]>After an absence of 3 years, AWS was happy to announce that it was reviving its annual in-person conference in Amsterdam. The summit brought industry leaders, cloud enthusiasts and tech idealists together. With a packed schedule of an overwhelming keynote and inspiring sessions, it proved to be a very exciting day. If you want to learn more about AWS, from introductory to advanced sessions and topics, then this day is a great opportunity to do so. You will be able to meet all kinds of people: AWS employees, partners, other companies, sponsors and, if you have any questions, you can talk to an AWS expert. An app, available through the App Store or the Play Store, contained a detailed schedule of all the sessions taking place that day. It’s a must-have when you’re attending this event.
AWS Summit Amsterdam 2023 entrance
The keynote was opened by Marielle Lindgren, General Manager of the Nordics, Baltics and the Benelux at AWS. She explained what the day would look like, who was present and what you could do that day at the RAI. Marielle named the biggest sponsors of the event, and every sponsor, big or small, was shown in the presentation. She talked about the Benelux, the impact that AWS has on the region, and the investments AWS is making in order to become carbon-neutral in the future.
Marielle Lindgren - carbon neutrality
Andy Warfield, VP & Distinguished Engineer at AWS, continued the keynote and spoke about the newest features and services that AWS has implemented over the last year.
Andy Warfield - the newest features and services
Andy also talked about Artificial Intelligence and Machine Learning, which is a clear trend in IT and on AWS. Amazon Bedrock, which Andy presented, allows you to create generative AI applications using foundation models. Their main mission, as always, is that clients should be able to focus on innovation and the purpose of their applications, not on the surrounding infrastructure.
With these kinds of topics, privacy is often a concern. Users want to know how their data is used and what measures are taken to keep it safe. AWS continues to assure that the data of their users remains safe. They always design their newest features and services with privacy as a primary concern, and they will continue to invest in ways to ensure that they live up to their own standards.
In addition to product announcements, the keynote featured customer success stories, highlighting how organizations from various sectors have leveraged AWS to achieve significant business outcomes. They spoke of companies that had successfully migrated their infrastructure to the cloud.
Some of these companies shared their experience on stage. One of the best examples was the Efteling (an amusement park in The Netherlands), presented by Jonas Rietbergen. They added a functionality to their application that estimates how occupied the park is at any point in time. Thanks to these estimations, they were able to predict waiting times and recommend alternatives when the current ride was too busy. Efteling did this by creating a data lake, integrating it with a SageMaker model and saving predictions to Aurora every 30 minutes. These predictions could then be fetched by Lambda and exposed through API Gateway.
Efteling - predictions explained by Jonas Rietbergen
Andy also presented other companies, such as HEMA, PostNL and Epic Games, and how they used the newest services and features. For example, PostNL was able to speed up the start time of their Lambdas by 10x using the Lambda SnapStart feature.
Takeaway (Just Eat) also gave a small presentation on how they use AWS to improve customer experience in their application. Their application needs to be safe, secure, stable and scalable at all times, as demand fluctuates throughout the day. Making sure that the application can handle high amounts of load during lunch and dinner times is a top priority, all while making sure that cost and customer experience stay optimized. This is why Takeaway partnered with AWS, as they were able to meet these demands easily.
There were sessions throughout the day from both AWS and partners. Community sessions by other partners were mostly held in the community lounge. However, the most interesting sessions by far were the breakout sessions, which were held in the same room as the keynote. There were multiple sessions in the room, so AWS provided headsets to avoid the speakers talking over each other.
The most interesting session for me was the Serverlesspresso talk, presented by Marcia Villalba. Serverlesspresso is a serverless application that handles multiple coffee orders a day from customers. The espresso bar was present at the summit and was able to handle production-like traffic without delay, all thanks to its serverless design and the surrounding architecture. The presenter also gave a breakdown of the cost, where most of the services used were covered by the AWS free tier. Thanks to this presentation, I came across this website where they explain a lot of information, concepts and ideas related to serverless architecture. It’s definitely worth checking out.
This summit was definitely a fun experience and I will join the edition in Amsterdam next year. If you are near Amsterdam, or want to visit Amsterdam and combine it with the AWS summit, I can highly recommend attending the next edition. See you next year!
]]>Introducing Cloud Kickstart Components: Simplifying Application Deployment on AWS (Internship Project)
We are excited to introduce Cloud Kickstart Components, an internship project aimed at simplifying the process of deploying applications on Amazon Web Services (AWS). As interns, we have developed a template that enables the automatic deployment of applications to AWS, harnessing the power of multiple AWS services effortlessly.
Our internship project focuses on automating the deployment process, eliminating the need for manual configurations and reducing the chances of errors. With this template, developers can seamlessly deploy their applications to AWS, allowing them to concentrate on their core application logic and development tasks.
Through Cloud Kickstart Components, we provide an easy-to-use template that integrates with various AWS services, including EKS and ECR instances, Amazon CloudWatch, Amazon S3 for storage and Amazon IAM. This integration empowers developers to take advantage of multiple AWS services without individual setup and configuration complexities.
The first step in our deployment process is for the developer to push their code to the designated repository, such as GitHub. This ensures that the latest changes and updates are available for deployment. Once the code is pushed, our deployment pipeline kicks into action. The Spring Boot application is built, packaged, and transformed into a Docker image. This image encapsulates the application and its dependencies, making it portable and ready for deployment. The Docker image is then stored in a Docker registry, such as Docker Hub. This step ensures the image is prepared and available for deployment across different environments.
We use the power of GitHub Actions, a workflow automation tool, to streamline the deployment process. Using predefined workflows, we configure GitHub Actions to automatically trigger the deployment process whenever a new Docker image is available. GitHub Actions pulls the Docker image from the registry, fetching the latest version of the Spring Boot application built in the previous steps. This ensures that the deployment uses the most up-to-date version of the application.
With the Docker image available, we utilize AWS services to deploy the application. Depending on the specific requirements, it can automatically provision resources such as Amazon Elastic Kubernetes Service (EKS) instances, Amazon Elastic Container Registry (ECR), Amazon Simple Storage Service (S3) buckets and many more. After deployment, we implement continuous monitoring and testing mechanisms using CloudWatch. We set up custom metrics, create dashboards for visualization, and define alarms to detect anomalies or performance issues. Additionally, we leverage CloudWatch Logs to collect application logs for troubleshooting and analysis.
Docker Hub is a central repository allowing us to store, manage, and distribute our Docker images. At the same time, AWS provides the ideal platform for deploying and running our containerized applications. To begin the deployment process, we build and package our application code into a Docker image, encapsulating all the necessary dependencies and configurations. This Docker image acts as a self-contained unit, ensuring consistent deployment across various environments. We leverage the power of Docker Hub, a trusted and widely used container registry, to store and manage our Docker images. By utilizing Docker Hub, we can easily version our images, making tracking and managing changes over time simple. This ensures that our Docker images are always up to date, incorporating the latest changes and enhancements. Once our Docker images are prepared on Docker Hub, we will deploy them to AWS.
We use the capabilities of GitHub Actions, a workflow automation tool. We have automated various tasks with GitHub Actions, including deploying to Docker Hub and seamlessly integrating our applications with multiple AWS services.
One of the key benefits of GitHub Actions is its ability to automate the deployment of applications using Docker images. Using a simple configuration, we have set up workflows that automatically build our applications, package them into Docker images, and push those images to a Docker registry. This automation saves us valuable time and effort, ensuring our applications are always up-to-date and readily available for deployment.
This automation allows us to seamlessly deploy our applications, configure the necessary settings, and utilize the full capabilities of AWS without manual intervention. For example, when triggering a deployment workflow, GitHub Actions pulls the Docker image from the registry, distributes it to ECR and deploys it to an Elastic Kubernetes Service (EKS) cluster. Simultaneously, it can create S3 buckets for storage, create/add some CloudWatch metrics, and set up the alarms and necessary permissions and configurations, all in an automated and reliable manner. This level of automation significantly reduces the complexity and time required to deploy and integrate our applications with AWS services.
In conclusion, GitHub Actions has become an invaluable tool, empowering us to automate the deployment to Docker and seamlessly utilize multiple AWS services.
We have chosen Amazon Web Services (AWS) as our preferred cloud computing platform. With AWS, we have access to a comprehensive suite of cloud services that enable us to build, deploy, and manage our applications and infrastructure with flexibility, scalability, and reliability.
By leveraging AWS as our cloud computing platform, we can take advantage of a vast array of services and features that enable us to build and scale our applications efficiently. AWS’s flexibility, scalability, and reliability empower us to focus on innovation and deliver exceptional experiences to our users while benefiting from the robust infrastructure and services AWS provides.
We utilize AWS Elastic Container Registry (ECR) as a pivotal component in our deployment and containerization strategy. AWS ECR is a secure and fully managed container registry, enabling us to store, manage, and deploy container images effortlessly. With AWS ECR, we can securely store our Docker container images, ensuring their availability for deployment across various environments. The integration of ECR within the AWS ecosystem allows us to seamlessly incorporate it into our deployment pipelines, simplifying the process of deploying containerized applications.
Moreover, AWS ECR provides powerful monitoring and management capabilities. We can track image usage, monitor repository activity, and gain insights into resource utilization through integration with AWS CloudWatch. This allows us to monitor the performance of our container images and repositories, enabling proactive management and optimization.
To deploy these containers effectively, we have adopted Amazon Elastic Kubernetes Service (EKS) from Amazon Web Services (AWS). AWS EKS provides a robust and reliable platform to deploy, scale, and easily manage our containerized applications. AWS EKS is a fully managed Kubernetes service that simplifies the deployment and management of containerized applications. This allows us to streamline our development and operations processes, enabling faster time-to-market and improved agility.
In Amazon Elastic Kubernetes Service (EKS), a pod is the smallest and most basic unit of deployment within a Kubernetes cluster. A pod represents one or more instances of a running application workload within the cluster. It encapsulates one or more tightly coupled containers that share the same network namespace, IP address, and storage volumes. These containers within a pod often work together to fulfill a specific task or service.
AWS CloudWatch plays a vital role in our operations by providing us with real-time insights into the performance and health of our AWS resources and applications. With AWS CloudWatch, we can collect and analyze a wide range of metrics across various AWS services, including compute instances, databases, storage, and networking. This comprehensive monitoring solution allows us to gain deep visibility into the performance and utilization of our resources, enabling us to make informed decisions and optimize our infrastructure.
Additionally, AWS CloudWatch provides us with the flexibility to create customized dashboards. These dashboards offer a consolidated view of our key metrics, allowing us to monitor and analyze critical aspects of our infrastructure and applications in a centralized and intuitive manner.
The example image above shows the different metrics we included, such as CPU usage, incoming log events, …
AWS CloudWatch also supports log management and analysis through CloudWatch Logs. This feature enables us to centralize and collect logs generated by our applications and services. We can then search, filter, and analyze these logs, making troubleshooting and debugging more efficient. CloudWatch Logs simplifies investigating issues and monitoring application behavior by consolidating logs in a single location.
As shown in the image above, different logs are displayed inside CloudWatch Logs. Every endpoint inside our application sends the following logs:
Depending on the endpoint itself, the values are going to be different.
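Purely as an illustration of where such log lines come from (the real application’s log fields may differ), an endpoint can emit structured entries via SLF4J; once container output is shipped to CloudWatch Logs, those entries become searchable there. The GreetingController below is a made-up example:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final Logger log = LoggerFactory.getLogger(GreetingController.class);

    @GetMapping("/greet/{name}")
    public String greet(@PathVariable String name) {
        // Example of the kind of per-request information an endpoint could log.
        log.info("endpoint=/greet method=GET name={} status={}", name, 200);
        return "Hello, " + name;
    }
}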
One of the key features we leverage in AWS CloudWatch is the ability to set up custom alarms. These alarms enable us to define specific thresholds and conditions for our metrics. When a metric breaches a threshold for a specific timeframe or meets a predefined condition, CloudWatch triggers an alarm, notifying us of potential issues or deviations from expected behavior. This proactive monitoring approach empowers us to address potential problems before they impact our applications or services. In the example image above, the CPU usage is being monitored. The red line represents the upper CPU usage limit, meaning the maximum value that can be reached. If the CPU surpasses the maximum value indicated by the red line, as shown in the image, a notification is sent to a specific email address.
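Conceptually, such a CPU alarm boils down to a definition like the following sketch using the AWS SDK for Java v2. Our template provisions alarms through its own automation, so this is an illustration only; the alarm name, namespace, threshold and SNS topic are assumptions:
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class CpuAlarmExample {
    public static void main(String[] args) {
        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {
            PutMetricAlarmRequest alarm = PutMetricAlarmRequest.builder()
                    .alarmName("high-cpu-usage")                       // hypothetical name
                    .alarmDescription("CPU usage above the upper limit")
                    .namespace("AWS/EC2")                              // assumed metric namespace
                    .metricName("CPUUtilization")
                    .statistic(Statistic.AVERAGE)
                    .period(300)                                       // evaluate over 5-minute windows
                    .evaluationPeriods(1)
                    .threshold(80.0)                                   // the "red line" from the dashboard
                    .comparisonOperator(ComparisonOperator.GREATER_THAN_THRESHOLD)
                    // Hypothetical SNS topic that forwards the alarm to an email address.
                    .alarmActions("arn:aws:sns:eu-west-1:123456789012:alerts")
                    .build();
            cloudWatch.putMetricAlarm(alarm);
        }
    }
}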
Container Insights, powered by Amazon CloudWatch, offers real-time monitoring and deep visibility into the performance and health of your containers. Integrating Container Insights into our deployment template enables developers to gain valuable insights and make data-driven decisions to optimize their containerized applications. With Container Insights, you can effortlessly monitor crucial metrics such as CPU and memory utilization, network performance, disk I/O, and container-level resource allocation. This level of observability empowers you to identify performance bottlenecks, proactively troubleshoot issues, and optimize resource allocation for better efficiency. CloudWatch automatically collects metrics for many resources, such as CPU, memory, disk, and network. At the same time, Container Insights supports collecting metrics from clusters deployed on Fargate for both Amazon ECS and Amazon EKS.
For example, we utilize Container Insights to monitor our EKS cluster.
In our project, we use the capabilities of Terraform to perform a wide range of tasks automatically. Terraform is a powerful infrastructure as code (IaC) tool that allows us to define, provision, and manage our project resources seamlessly and efficiently. With Terraform, we can customize our infrastructure requirements and represent them in a declarative configuration language. This enables us to specify the desired state of our infrastructure, including the resources, dependencies, and configurations needed for our project.
One of the key advantages of using Terraform is its ability to automate the provisioning and management of resources across various cloud providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). This eliminates manual interventions, reduces human error, and ensures consistent deployments across environments.
The snippet shown above is a piece of Terraform code from our project. It configures the required provider and backend settings, ensuring that the project can interact with AWS resources using the specified provider and that the state of the infrastructure is stored in an S3 bucket.
Our cloud kickstart project has been an extraordinary journey, and we express deep appreciation for the exceptional support and guidance offered by our mentors, Pieter Vincken and Sigriet Van Breusegem. This experience has truly been transformative, allowing us to acquire invaluable knowledge and skills, significantly enhancing our proficiency in cloud computing and automation.
Through this project, we have developed a comprehensive understanding of various aspects, including:
Undoubtedly, this Cloud Kickstart project has been instrumental in broadening our horizons and equipping us with the expertise needed to thrive in cloud computing and automation. We are deeply grateful for the opportunity and look forward to applying our newfound knowledge to future endeavors.
]]>