The Importance of Certificates in Modern Application Development

By: Jeff Sugden, Heber Lemus

Certificates are used every day by our computers. They function much like ID cards: they provide proof of who we are, backed by technology that weeds out fakes. Certificates are how computers and servers announce who they are and establish trust between a client and its target. They let us know that we are connecting to the server we intended and not to an impostor. They also let us seal files in their current state – like important documents in PDF form – indicating that they are not to be tampered with. Rendering software can use the certificate to detect tampering and display a warning that the file is no longer legitimate.

You can very easily create your own certificate. This is most commonly done with OpenSSL on a Linux machine, but any computer with OpenSSL will do, and just about any guide on creating a self-signed certificate found through your preferred search engine will lead you in the right direction. Excellent! You have your certificate, and it works, too! You can install it in your computer’s certificate manager and use it to digitally sign a file, and your programs will recognize the certificate as trusted.
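If you want to try it, a minimal sketch with OpenSSL (the file names and subject below are placeholders) looks like this:

# Generate a new RSA key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout mykey.pem -out mycert.pem \
  -days 365 -subj "/CN=My Test Certificate"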

But what if you sent the same file to a friend? Well, it wouldn’t work: your friend’s PC doesn’t recognize the certificate. This is similar to the problem of paper money in the early days of banking, when notes from one bank might not be accepted by another for lack of trust. In our scenario, your friend would need to install your certificate for their PC to trust the file, and this would need to happen for every person you give your file to, because the programs running the verifications (the file hashes) don’t recognize the certificate. I’m sure you’re thinking this sounds like a very cumbersome process. What if there were a way to have people trust your file without manually distributing your specific certificate? This is the purpose of the Certificate Authority.

Certificate Authorities, or CAs for short, operate on the same concept as government-issued IDs. Because of the trust in the government that produces the ID, other parties will accept it as proof of your identity. When a certificate is signed with the backing of a CA, you can be assured that other people who receive your files or connect to your server will do so without problems. CA chains are typically two to four certificates deep – a root, one or more intermediates, and the end-entity certificate – in order to spread out the trust levels, speed up the process of issuing certificates, and protect the root certificate. If the root CA is ever exposed, the entire chain and everyone who depends on it is compromised and must be thrown out.

Making certificate signing requests (CSRs) against the root certificate all the time also exposes it; the more offline the root certificate stays, the more secure it is. To alleviate this potential issue we make our CSRs against intermediate certificates instead. Most computers ship with a set of root and intermediate certificates from the major CAs that are marked as trusted, which is why you can connect to most websites without having to install any certificates after installing your operating system.
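As a sketch of how a chain is checked, OpenSSL can verify a certificate against its intermediate and root by hand (root.pem, intermediate.pem, and server.pem are hypothetical file names):

# Verify that server.pem chains up to the trusted root via the intermediate
openssl verify -CAfile root.pem -untrusted intermediate.pem server.pem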

Some of you may be asking yourself: how does a certificate do what it does? How do programs verify a certificate? We can start with a basic concept: file hashing. The first example of a hash I came across in my actual work experience was during an upgrade of our reporting software. The client I was working with asked if the files I needed to download from the software company had a hash that could be verified. I had no clue what he was talking about, but upon further investigation, the answer was a resounding yes – the software company’s download site published hash information for each file.

This is when I first learned how security-minded companies can verify that downloaded files are legitimate and secure. For further information and a much more technical deep dive on how that hash value can be used, howtogeek.com has an excellent article.
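On most Linux systems, checking a download against a published hash is a one-liner (the file name here is hypothetical):

# Compute the SHA-256 hash of the download, then compare it to the value on the vendor's site
sha256sum reporting-software-upgrade.zip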

Certificates operate on a similar principle: the signature on a file or request, created with the private key, can be verified using the hash and the public key. Public keys are like the seals and special effects on your ID card – anyone can see them and use them to verify that the ID is legitimate. The private key is like the method used to create those seals and special effects: only select people or entities have access to it, and you do not want that secret method to become public. Similarly, you never want your private keys exposed.
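A minimal sketch of that sign-and-verify flow with OpenSSL, reusing the key generated earlier (file names are placeholders, and real certificate-based signatures carry more metadata than this):

# Derive the public key from the private key
openssl rsa -in mykey.pem -pubout -out mypubkey.pem

# Sign the document's hash with the private key
openssl dgst -sha256 -sign mykey.pem -out document.pdf.sig document.pdf

# Anyone holding the public key can verify the signature
openssl dgst -sha256 -verify mypubkey.pem -signature document.pdf.sig document.pdf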

This is also what keeps your browsing safe and secure. When you go to ‘chase.com’, you trust that you are connecting to and interacting with the Chase Bank servers. If we operated on faith alone, it would open the door to man-in-the-middle attacks, where a website posing as Chase could intercept traffic and send information to your browser, giving a false and potentially dangerous appearance of communicating directly with the Chase Bank servers. When you connect to most websites you should see a lock symbol, typically on the left side of the address bar; this is your browser telling you whether you are in fact connecting to the correct website and servers and not a fake copy of the site.
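You can inspect the certificate a site presents yourself; one quick way with OpenSSL:

# Fetch the certificate chase.com presents and print who it belongs to,
# who issued it, and when it expires
openssl s_client -connect chase.com:443 -servername chase.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates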

Wrap Up

The internet wouldn’t be able to function as it does without certificates. Anything involving personal information would be highly risky to use, as malicious actors could intercept your data as it travels between you and the servers. This is why the indicators in your browser that the certificate checks passed are so important, and why you should appreciate that your browser stops you from connecting to a page when there is a certificate error. As you develop your applications, incorporate certificates as a means of providing your users with a secure experience. And if you have questions as you do so, don’t hesitate to reach out to the Elevate Team.


Dockerizing a Ruby on Rails Application

By: Alex English, Jaehyuk Lee

What are Containers?

Docker containers are lightweight packages that include a minimal operating system (much like a virtual machine) and any associated software serving a specific purpose. Typical containers include API servers, frontend (web) servers, databases, cache servers, logging servers, etc. These containers can be packaged up, stored in a repository, and deployed to a hosting environment where they are exposed to the internet or internal networks. Typically we start with a base image (node.js, etc.), add all of our code and resources (images, CSS, etc.), and put the result into a repository. That image can then be loaded onto whichever environment we desire.

Why Do We Care?

One of the most important advantages of containerization is repeatability. Here are some prime examples that illustrate the utility of containers in maintaining repeatability across environments and situations:

  1. We have a Test Server and a Production Server set up to host our node.js application. We copy our code onto the Test Server, test it, and decide that it’s working properly. We then upload that same code to the Production Server, where it explodes in a spectacular and unforeseen fashion. Why? The Test and Production Servers use slightly different versions of node.js.
  2. Same situation as #1. We’ve updated Production to use the same version of node.js, yet the Production Environment still fails using the same code that works on the Test Server. This time it’s due to a difference in the way the operating system handles file permissions (e.g., Ubuntu vs. Red Hat Linux).
  3. We go through the trouble of perfectly aligning our Test and Production Servers, and now it’s time to scale out. We have the same problem, but at a much higher scale: we now have to ensure that all of our Production Servers use the same OS, OS version, and node.js version.

You can see why this becomes exponentially more difficult as we scale out and/or add different environments. In a container, our code is packaged along with its server AND the underlying OS into a lightweight package that we can then put wherever we want. If it works in the Test Environment we can be much more confident that it will work in Production. Problem solved.

Containerization Tools

Now that we have a baseline understanding of containers and why we would wish to utilize them, let’s look at a couple of the most commonly used containerization tools. Below we’ll walk through the aspects of Docker and Kubernetes with a brief example of how they’re used in practice.

Docker

One of the most commonly used containerization tools is Docker. Docker lets us run “containers” – self-contained, portable mini-copies of an operating system and associated software – much like virtual machines. Docker containers can be downloaded from a repository and started quickly with a single command (see the example after this list). Some examples of common software run in containers include:

  • Databases (mysql, postgresql, mongo)
  • Cache Servers (memcache, redis)
  • Web Applications (Angular, React, etc)
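For example, a disposable postgresql container can be pulled from a registry and started with a single command (the container name and password below are placeholders):

# Download and start postgresql 14 in the background, exposed on its default port
docker run -d --name demo-postgres -e POSTGRES_PASSWORD=example -p 5432:5432 postgres:14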

Docker is widely used and comes in both open source and proprietary versions. Various other options are available should they be of interest, but Docker’s community and support avenues quite often make it the preferred choice.

Kubernetes

Now that we have our application put into containers, what can we do with them? Today’s applications are complicated, involving many containers (frontend, backend, databases, etc.), often from different teams, all working together. We need a way to ‘orchestrate’ these containers and define relationships between them. Say we have an application composed of a frontend Angular application hosted by nginx, a backend application hosted in node.js, and an additional API server required by the backend server. We have all of these containerized, and now we need to make them work together.

Kubernetes (often abbreviated as k8s) is a container orchestration framework. This means that it stands at a layer above Docker, and coordinates the activity of different containers, allowing them to talk to each other and access the outside world. Kubernetes runs on a few basic concepts:

    • Nodes – Nodes represent the physical layer of Kubernetes. They are the actual hardware that containers run on. They can themselves be virtual machines in a cloud environment (like EC2 in AWS) or a container service like Fargate on AWS.
    • Pods – A Pod represents one or more containers that run together on a Node; many Pods can run on a single Node. A Pod defines shared storage and networking for its containers and is the basic deployable unit in Kubernetes.
    • Deployments – A Deployment represents a set of identical Pods that can be scaled out.
    • DaemonSets – A DaemonSet is a Pod that runs on every Node. These are great for cross-cutting concerns like logging and monitoring.
    • Services – A Service represents a logical unit of access to a load-balanced resource, typically a Deployment or DaemonSet.
    • Ingresses – Internet-facing applications use Ingresses to allow access from the internet into an application. Depending on where it’s running (AWS, Azure, a datacenter), the Ingress will have different implementations (AWS ALB, Azure Load Balancer, or nginx).
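Assuming kubectl is configured against a running cluster, each of these objects can be listed directly:

# List the cluster's physical layer and the workloads running on it
kubectl get nodes
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get ingresses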

Why use Docker and Kubernetes?

Docker and Kubernetes are used widely in a number of different applications and environments. The technology has been hardened over the years and has a robust community of support. Below we detail some specific advantages to their use:

  • Easily Manageable Images – Docker registries make for easy storage of built containers, and containers can be rolled back to specific versions in the event of a bad deployment.
  • Scalability – Deployment instances can be easily scaled up, with Kubernetes managing the load balancing of the Pods.
  • Portability – Docker containers and Kubernetes implementations are consistent across many different cloud providers and setups, from AWS to Azure to Google and even local data centers.
  • Application Architecture – Larger applications can be split up into smaller Docker containers, allowing organizations to adopt a microservices-oriented approach to application development and deployment.
  • Predictability – Docker containers that run in a Test Environment run exactly the same way in Production.

As an example app we’re using “Chili Friends”, an application that matches people together based on their chili sauce preferences. This is a Ruby-on-Rails (RoR) application available here.

This application runs on RoR with a SQL database (postgresql). To function properly, we’ll have to set up an Ingress so internet users can connect. For Chili Friends we’ll set up the following:

  • A Deployment for our RoR application using our dockerized application
  • A Deployment for our postgresql database
  • A Service that allows load-balanced access to our RoR deployment
  • An Ingress that can bridge access from the internet into our service

We’re going to deploy this application on AWS’s managed Kubernetes service, the Elastic Kubernetes Service (EKS). We’ll start with the Dockerfile.

The App

To get started, the demo application we’re using was created with Rails’ basic scaffolding to provide sign-in, sign-up, and sign-out functionality, as well as the default welcome screen to make sure the app is up and running.

From my local computer, I can start the local dev server using rails server and am presented with the default welcome screen.

To test the database, I sign in; once I do, I’m redirected to the default welcome page.

Now that I know the app works on my local machine, I’ll create a docker image using this Dockerfile:

FROM ruby:2.6.6-alpine

ENV BUNDLER_VERSION=2.2.17

ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache curl py-pip
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools


RUN apk add --update --no-cache \
binutils-gold \
build-base \
curl \
file \
g++ \
gcc \
git \
less \
libstdc++ \
libffi-dev \
libc-dev \
linux-headers \
libxml2-dev \
libxslt-dev \
libgcrypt-dev \
make \
netcat-openbsd \
nodejs \
openssl \
pkgconfig \
postgresql-dev \
sqlite-dev \
tzdata \
yarn

RUN gem install bundler -v "$BUNDLER_VERSION"

WORKDIR /app

COPY Gemfile Gemfile.lock ./

RUN bundle config build.nokogiri --use-system-libraries

RUN bundle check || bundle install

COPY package.json yarn.lock ./

RUN yarn install --check-files

COPY . ./

ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]

Some explanation about what these commands do:

FROM ruby:2.6.6-alpine

This means we’ll build our docker image on top of an existing one. In this case, we’re using a stripped-down version of Linux (Alpine Linux) with Ruby version 2.6.6 installed. This keeps everything lightweight.

The subsequent commands work as if you were installing the app on a fresh machine: we install some compiler utilities, database libraries, and the like. The ENV command creates environment variables, and the RUN command executes shell instructions as if we had ssh’d into a machine or were sitting at a terminal. We use COPY to move files from our local machine into the docker image at build time, and WORKDIR to set the directory inside the image where files are copied and commands are executed.
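Assuming the image name and tag used in the Kubernetes manifests below, a local build-and-test cycle might look like this:

# Build the image from the project root and tag it
docker build -t chili-friends:1.0 .

# Smoke-test it locally before pushing it to a registry
docker run -p 3000:3000 chili-friends:1.0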

Now that that’s set up, let’s look at some of the Kubernetes configurations:

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chili-friends-deployment
  labels:
    app: chili-friends
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chili-friends
  template:
    metadata:
      labels:
        app: chili-friends
    spec:
      containers:
      - name: chili-friends
        image: chili-friends:1.0
        ports:
        - containerPort: 3000

This YAML file configures our Deployment in Kubernetes. In a Kubernetes cluster with many applications running, we use the label feature to define the different pieces that work together (in our case, chili-friends). Some of the important parts of this are:

  • replicas: 3
    • This means that we’ll load-balance across 3 containers. By increasing this number we can scale up the application.
  • spec: containers:
    • This defines which containers we want to use. In our case, we’ve uploaded our container image to Docker Hub and referenced it here.
  • ports: containerPort:
    • Here we say that we’re exposing port 3000 on the container to the cluster. Note that this is different from exposing a port inside the container to one outside the container. For our sanity, we’ll always use port 3000.

Now let’s configure the service:

apiVersion: v1
kind: Service
metadata:
  name: chili-friends-service
spec:
  selector:
    app: chili-friends
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

Now we’re up one level of abstraction. Much as in software development, a Service defines a logical resource that we can reach at an address (‘chili-friends-service’) and a port (80). The Service manages the load-balancing between the different containers it points to. The important parts:

  • selector: app: chili-friends
    • In our Deployment we set the app label to ‘chili-friends’; we map this Service to that Deployment by using that same label. This points our Service at that Deployment.
  • ports
    • In our Deployment we set the containerPort to 3000. Here in the Service, we map TCP port 80 to port 3000 on the containers. When network requests come to our Service on port 80, they’ll hit port 3000 on the containers.
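As a quick sanity check before adding the Ingress, kubectl can forward a local port to the Service (port 8080 here is arbitrary):

# Forward local port 8080 to port 80 on the service
kubectl port-forward service/chili-friends-service 8080:80

# In a second terminal, hit the forwarded port
curl http://localhost:8080/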

Now we’ve got all of the internal parts of our Kubernetes application going: our app container is deployed to the cluster, and a Service points at that Deployment. So far, all of these network paths are contained within the cluster and are not accessible from the outside. For that we need an Ingress. An Ingress acts as a gateway to the internet, connecting aspects of a request (the HTTP URL, etc.) to a particular Service. Here’s the Ingress we can use for Chili Friends:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: chili-friends-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:3723842:certificate/23423-3746-345-234234-34534
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - host: www.my-chili-friends.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: chili-friends-service
          servicePort: 80

There’s a fair amount going on here. Let’s start with the annotations. In AWS, Kubernetes Ingresses are implemented as Application Load Balancers, and that’s what each of the annotations is for: they describe the open ports, which certificate in AWS Certificate Manager to use, and how to redirect plain HTTP traffic to HTTPS. Note that we’re delegating all SSL functions to the load balancer: the connection between the client and the Ingress over the internet is encrypted via SSL, and our internal server then deals with raw HTTP. This saves us the hassle of managing certificates inside our app. Additionally, keep in mind that a DNS record must be created with a DNS provider (or, for testing, using /etc/hosts or similar methods on Windows) in order to actually hit this address.
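Assuming the manifests above are saved to files (the file names below are placeholders), deploying everything comes down to a few kubectl commands:

# Apply each manifest to the cluster
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

# Confirm the ingress received an external address from the ALB
kubectl get ingress chili-friends-ingress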

Conclusion

Once all of these components are deployed to Kubernetes, we should be able to see our application. We’re now able to scale up the number of application pods with just one command. Typically as an organization grows, we might start with one application deployed to a server without a container. As the application gets more complex and more services and dependencies are added, we containerize that application so we can move it from one environment to another. As the application base and organization grows even further, we can use Kubernetes to scale the application and its dependencies out.
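For example, scaling our Deployment from three Pods to five is a single command (a sketch, assuming kubectl is configured for the cluster):

# Bump the replica count; Kubernetes schedules the extra Pods and balances traffic across them
kubectl scale deployment chili-friends-deployment --replicas=5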
