
Web App Development Commonly Asked Questions

Web App Development Commonly Asked Questions 2240 1260 ELVT Consulting

By: Kevin Schreck

One of the great things about being in the tech development business is that we get the opportunity to work with a wide array of industries and an even wider array of people. It’s an exciting process when a new client approaches us with an idea that they’ve been shaping for months or even years. While it is exciting, we also recognize that there’s a certain amount of fear that can delay or, at times, completely stop the development process from beginning. As clients try to make sense of technology, we’ve noticed some commonality in the questions they ask. In this blog, we’ll be answering those questions, including:

Can I Learn to Develop an App on my Own?
How Much Will the Project Cost?
Should I Outsource Overseas?
Is a Cheaper Rate Better?

Let’s dive in…

Can I learn to develop an app on my own?

Sure. But to that I'd ask: do you have multiple years to dedicate to the craft of software development? I'm guessing the answer is no. You've got a business to run and goals to attain. Trying to build your own application will undoubtedly increase your time to market, decrease the stability of your application, and take you away from the things you need to do (raising capital, marketing, road shows, etc.). In fact, it's not uncommon for our clients to have a background in development and still trust their development to the Elevate team. Having people with decades of experience and hundreds of applications built on your side is only going to supercharge your development and allow you to focus on the things that drive your business forward.

How much will the project cost?

The short and honest answer is: it depends. If someone quotes you a number in your first conversation, be very suspicious. A whole host of factors come into play when evaluating the scope of an application. The level of customization, number of integrations, number of users, acceptable level of downtime, and many other factors must be considered. Some applications can be built in a matter of days by a single developer. Others may take months to years with an expansive team at your disposal. The important thing is to know the order of magnitude that you can afford, identify your near-term vs. long-term targets, and be prepared to prioritize features should everything you want not be feasible within your budget.

Should I outsource overseas?

There are many perceived benefits of offshoring your development, and in some instances those benefits are achieved. We recommend that you analyze the complexity of what you're looking to do, your ability to overcome time zone differences, whether the application will be used by the US government, and, probably most importantly, your ability or capacity to oversee an offshore team. It's a regular occurrence for prospective clients to reach out to us after a failed experience overseas. Oftentimes it has little to do with the capability of the offshore team but rather is tied to the friction created by offshoring itself. Be introspective in analyzing offshore options and think not only of your short-term objectives but also your long-term objectives. Lower rates may seem like an instant win, but the total cost of ownership is more often than not higher in an offshoring arrangement. Which leads us to our next question.

Is a cheaper rate better?

Much like the overseas question, this often comes down to what it is you’re trying to accomplish. If you have very simple needs that require little ongoing effort, then the cheapest option shows a lot of promise. We’re guessing that’s not your case, however, as software is inherently complex and you likely have big goals you’re looking to accomplish. We recommend that in your initial review of development options you remove rates from the equation unless they are wildly out of sync with your budgets. Focus on capability, responsiveness, and ability to understand your vision. The right development team is going to be able to steer you in the right direction and, regardless of rates, reduce your total cost of ownership by selecting a technology stack that fits your individual situation, minimizing rework, and building scalable solutions that meet your short term and long term needs.

We get it. When it comes time to bring your idea to life, the process of launching your development can be very daunting. Just know that this isn’t uncommon and that with the right team in play, you can meet all your goals and more.

Have more questions? Or just want to discuss how this applies to your specific situation? Feel free to reach out to me directly at or go straight to my calendar here.

How to Design a Robust API


By: Daniel Ors, Gabe Martinez

In this guide, we will walk through how to go from the need for an API, through its design and documentation, to its subsequent implementation. This is an entry in our API Design and Documentation Series. If you haven't read our Attributes of a Quality API installment, we recommend you start there.

Designing and implementing an API can be a daunting task. Our goal is to provide you with a comprehensive guide on how to approach designing an effective API and its documentation. To do so, we will break the process down into four steps. The first step in your API's design and development is to precisely define what is needed out of your API.


Your API needs to provide the right functionality for its consumers, which means you need to rigorously define what that functionality should be. Accurately defining the purpose and scope of your API will provide crucial guidance on its design and implementation. The purpose of your API should be determined by considering the problem you are trying to solve. To elaborate, consider who will use the API (who needs the solution) and what they want to do with it. Ensure that you have acquired the necessary domain knowledge for the problem space, and communicate with the parties who will utilize your design as well as the parties who are affected by it. These are the consumers of your API, and they will be able to assist you in collecting and clarifying its use cases and requirements. This communication is key to gaining perspective on what your design should offer. We also strongly recommend that you research existing solutions and learn from their strengths and weaknesses. Rigorously collecting and detailing this information will give you what you need to design your API in a way that is both comprehensive and effective.


Now that you have determined the purpose of your API and its requirements, you can start to transform this list into your design specification. To begin, filter and refine the list of requirements and use cases that you want to cater to. Incorporate initial features and potential future enhancements into your considerations. Detail the overarching workflows and usages. Outline the expected behaviors and business logic of the API. Understand and document potential dependencies and interactions, both internal and external. From here, define models, their relationships, and how the API will interact with them. Important: Document this information for future reference. Utilize the tools available to you. Entity Relationship diagrams, flow charts, and other visual aids are invaluable.
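To make the modeling step concrete, here is a minimal sketch of how models and their relationships can be captured directly in code before any implementation exists. The entities (Customer, Order, OrderItem) and their fields are hypothetical, chosen purely for illustration:

```typescript
// Hypothetical domain models for an order-management API.
// The entities and fields are illustrative, not prescriptive.

interface Customer {
  id: string;
  name: string;
  email: string;
}

interface Order {
  id: string;
  customerId: string; // relationship: each Order belongs to one Customer
  items: OrderItem[]; // relationship: an Order contains many OrderItems
  createdAt: Date;
}

interface OrderItem {
  sku: string;
  quantity: number;
  unitPriceCents: number;
}

// A pure helper that documents a piece of business logic alongside the models.
function orderTotalCents(order: Order): number {
  return order.items.reduce((sum, i) => sum + i.quantity * i.unitPriceCents, 0);
}
```

Writing the models down this way forces the same questions an Entity Relationship diagram does: which entity owns which field, and how the pieces reference each other.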

With this information in hand, you will find that your specification starts to construct itself. Start to document your design and specifications with best practices in mind. Check out our API Design Best Practices guide for a refresher. Once you have a detailed specification, refer back to the information you have organized and adjust if necessary. Important: Your design will change over time as you document and implement the API. Use your knowledge base and an Agile methodology to keep it flexible enough to cater to changing requirements.


Some may wonder why the documentation comes before the implementation, but this is a crucial step. Documenting your API specification will more often than not bring your attention to design modifications that can or should be made. Furthermore, it will help guide your implementation to be effective and efficient. Once again, tools like API description languages (OpenAPI/Swagger, RAML, etc.) or API development platforms like Postman can be quite beneficial. Not only will they increase the efficiency of your documentation process, but they can also help you identify these modifications. Many of these tools include a suite of useful capabilities: helping you publish your documentation, providing consumers with test beds, building automated test cases, or even generating code as a baseline for your implementation. Which tool to use varies by project, but we highly recommend you research how they can benefit your documentation process.


At this point you have your API specification thoroughly defined and well thought out. It's time for the implementation. Of course, the details of your implementation are specific to your API. However, there are some common best practices when it comes to implementing it. Consider implementing contract testing. Throughout your development, contract testing will help you catch and handle any inconsistencies between your implementation and your design. APIs will almost always change over time. Keeping your design patterns consistent as your API evolves is vital for the health of your API specification and corresponding documentation. Similar to a code style guide, you can create a design style guide to protect the longevity of your API. In addition to consistency, a style guide will also ensure future design decisions and development are unambiguous and smooth. Refer back to our API Design Best Practices guide for more examples of what you can include in a design style guide.
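To illustrate the idea behind contract testing, here is a minimal, dependency-free sketch. Real projects more commonly reach for dedicated tooling (e.g. Pact or JSON Schema validators), and the field names below are invented for the example, but the core check looks like this:

```typescript
// Minimal contract check: verify a response object matches the shape
// promised in the API specification. Field names are illustrative.

type FieldType = "string" | "number" | "boolean";

interface Contract {
  [field: string]: FieldType;
}

// Returns the list of contract violations (empty means the response conforms).
function checkContract(response: Record<string, unknown>, contract: Contract): string[] {
  const violations: string[] = [];
  for (const [field, expected] of Object.entries(contract)) {
    if (!(field in response)) {
      violations.push(`missing field: ${field}`);
    } else if (typeof response[field] !== expected) {
      violations.push(`field ${field}: expected ${expected}, got ${typeof response[field]}`);
    }
  }
  return violations;
}

// The contract your documentation promises for a hypothetical user resource.
const userContract: Contract = { id: "string", age: "number", active: "boolean" };
```

Running checks like this in CI against real responses is what catches the drift between specification and implementation as the API evolves.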

Final Thoughts

Defining an API specification is no simple task. It is for this reason that designing one will surely not be concluded once you have published your API for its initial consumers. Much like all software development, it is an iterative, Agile process, and your API will be refined and augmented over time. Maintaining your API is the next part of the design journey. Be open to feedback from your consumers and refer back to the process and design standards you set out for your organization.

This series will be continued in next week’s installment, API Documentation.

API Design – Attributes of a Quality API


By: Daniel Ors, Gabe Martinez

Series Introduction

An essential aspect of our massive software world is collaboration. Everything from open source communities to product integrations to microservice communication requires collaboration to be successful. Software, and its continued development, relies on the collaborative efforts of those generating it to truly scale into large user adoption. When it comes to communication about software and its development, effective APIs (Application Programming Interfaces) and their corresponding documentation are crucial to the development of a successful application. In this series we will cover the core attributes of a quality API as well as how to construct effective APIs that support efficient adoption both internally and by external 3rd parties.

An API is a defined method of interacting with a specific software application, describing standardized request formats and what to expect in return for each request type. Since the request format is frequently strictly defined, it is essential that the documentation is clear, unambiguous, and accurately updated over time. This allows developers to work with unfamiliar systems in a standardized way with zero to minimal involvement from the creator of the API itself. If you're interested in reading more about the basics of APIs in plain English, FreeCodeCamp has an excellent blog post on the subject.
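To see what a "standardized request format" looks like in practice, here is a small sketch using the standard URL API to assemble an HTTP request URL. The host name and query parameters are made up for the example:

```typescript
// Build a standardized API request URL with the built-in URL class.
// The endpoint and parameter names are hypothetical.

function buildRequestUrl(base: string, path: string, params: Record<string, string>): string {
  const url = new URL(path, base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value); // encodes each parameter safely
  }
  return url.toString();
}
```

Because the format is standardized, a developer who has never seen this particular API can still predict what a valid request looks like, which is exactly the property good documentation preserves.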

Over the coming weeks we’ll be taking a deep dive into the key components of designing a successful API — from how specific API attributes make them compelling and easy to work with, to creating your own starting point for an API, as well as an all comers’ guide to how to use an API. We will begin with an examination of the Attributes of a Quality API, then the ins and outs of API Design, and conclude with a deep dive into how to Document your API to maximize its potential and usage.

Attributes of a Quality API

Quality APIs are identified by several notable attributes:

  • Clear Purpose
  • Strong Documentation that is Easy to Understand
  • Well-defined and Discrete Endpoints
  • Rich Data that Presents Significant Value to the Developer and End User
  • Potential for Extensibility in the Open Source Community
  • Conform to an Established Conventional API Architectural Style (e.g. REST, GraphQL, RPC, Falcor)
  • Strong Community Supporting its Development via Active Repository Management
  • Standards for Maintaining
  • Graceful Error Handling
  • Solid Security Practices

An API does not require all of these attributes to possess quality or potential, but the very best APIs all adhere to these principles when building out their functionality. If you are getting started with your API, these are clear goals to strive for in getting it off the ground.

Clear Purpose

Good APIs will have a clear mission statement that outlines the goals and objectives of their functionality. Without standards for maintenance and a strong community understanding of the API's purpose, its long-term dependability will be very low, as it will appear that the core maintainers are not strongly invested in the success of their API. In addition, the intended audience for the API should only influence who it is made available to; all standards for quality mentioned here apply to any of the most common types of APIs: private, public, or commercial (also known as partner APIs).

Strong Documentation

Documentation of functionality that is easy to understand is critical to the success of any API. It is essential that the documentation is not overly succinct, but rather balances detail with clarity. Without documentation, an API will be nearly impossible to access, and it will be difficult to parse what data, endpoints, or feature frameworks are available to use. This is a frequent issue with closed APIs, where the core users are a limited group of developers that hold all the keys to the castle in terms of knowledge. In this scenario, when new members of the team are introduced to the API, it is unlikely that quality contributions will be produced unless the API is well-documented. Person-to-person knowledge transfer is a poor substitute for clear documentation, as clear documentation will always communicate a more transparent and complete picture and provides a persistent resource available for reference.

Well-Defined, Discrete Endpoints

Many APIs allow significant overlap in their data and endpoints, which, depending on the subject matter, may be appropriate. However, distinguishing between data areas has value in charting out your API. Discrete endpoints will prevent developers from getting bogged down in individual features, digging down into infinite JSON soup for applicable data for their use case. This improves usability significantly for your audience.

It is important to design and define these endpoints so that developers will have clear expectations for access and delivery of data. For example, if you have an API that routinely updates current information about a company in its own endpoint, and has a separate data endpoint for employees, it is better to only allow end users to request employee data from the top-level employee endpoint (over which you will have more fine-tuned throttling control), rather than having unrestricted access to a separate api/company/employees endpoint. Restricting access to specific endpoints and resources prevents misuse of your dataset and API, and reduces the cost of hosting and maintaining it. This can be achieved by including clear rate limits for each endpoint in your documentation along with your authentication protocol for accessing the API.
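The per-endpoint throttling mentioned above is often implemented with a token bucket. Here is a minimal in-memory sketch of the idea; a production deployment would typically delegate this to an API gateway or a shared store such as Redis, and the capacity and refill numbers are arbitrary:

```typescript
// Minimal token bucket for per-endpoint rate limiting.
// Each call to allow() consumes one token; tokens refill over time.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,        // max burst size
    private refillPerSecond: number, // sustained request rate
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request permitted
    }
    return false;  // caller should respond with HTTP 429 Too Many Requests
  }
}
```

Giving each endpoint its own bucket is what makes the "fine-tuned throttling control" above possible: the employee endpoint can be limited independently of the company endpoint.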

Rich Data that Presents Significant Value

Without data that presents value to the developer and the user, an API will surely wallow in obscurity, whether it is being built as a personal project or for an organizational purpose. If the API you are thinking about serves a single use case that clearly does not yet exist, it is far more likely that a parallel or adjacent API is already available to support development of your idea. Extending that API's functionality to include yours may also present a more significant value-add to your own resume or your organization's reputation in the marketplace.

Potential for Extensibility in the Open Source Community

Without future potential for innovation, APIs will likely become largely stagnant and more focused on issue handling rather than providing room for growth. It is important to keep an eye on the horizon for the roadmap you envision for your API, or what the community suggests would represent quality additions to the feature set. Your stakeholders — whether they are private clients or the open source community — will have a vested interest in contributing to your API’s improvements and future viability.

Conform to an Established Conventional API Architectural Style

There are several standard API architectural patterns that have emerged over the years, from REST and SOAP to GraphQL and RPC. It is important which pattern you pick for modeling your API, and it is also critical that you make it clear in your documentation that the API follows that convention. This will aid developers significantly in picking up your API and understanding its design, expectations, and, of course, quirks!
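As a concrete illustration of what following a convention buys the reader: in REST, the HTTP method plus a noun-based path tells a developer what an endpoint does before any documentation is opened. The "projects" resource below is hypothetical:

```typescript
// Conventional REST routes for a hypothetical "projects" resource.
// Method + path alone conveys the operation, which is the point of the convention.

interface Route {
  method: "GET" | "POST" | "PUT" | "DELETE";
  path: string;
  action: string;
}

const projectRoutes: Route[] = [
  { method: "GET",    path: "/projects",     action: "list all projects" },
  { method: "POST",   path: "/projects",     action: "create a project" },
  { method: "GET",    path: "/projects/:id", action: "fetch one project" },
  { method: "PUT",    path: "/projects/:id", action: "replace a project" },
  { method: "DELETE", path: "/projects/:id", action: "delete a project" },
];
```

A developer who knows REST can guess this entire table from the resource name alone; that predictability is what the convention provides.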

When you invest in following a particular API architectural pattern, it also improves the ability of your own engineers, as well as any potential open source developers, to maintain and extend the functionality of your application. This presents a very strong value-add for your organization’s product and its viability as a solution in the long-term.

Using accepted conventions of API style will also aid in its long-term viability and maintainability. In conforming to a pattern, a wider range of developers and engineers will quickly be able to jump in, identify solutions, and become key contributors. It will aid in developer retention and overall productivity; the easier and more rewarding you make it for individuals to contribute and be active members of the community, whether open or closed source, the more likely you are to reap exponential rewards.

Strong Community Supporting Development in GitHub/GitLab

Hosting and managing your API via active repository management and diligent issue documentation and management is crucial. This practice allows active developers, internal and external, to contribute to open issues and gain increased familiarity with your technology and expectations. This also allows you to better manage releases of your API, protecting your stable code branch while enabling feature enhancements and extensions of functionality to be tested in less stable branches.

Using a heavily trafficked versioning tool such as GitHub or GitLab will also encourage growth opportunities, as these platforms increasingly move to a more sophisticated social format for hosting code repositories, particularly open source repositories.

Standards for Maintaining

Setting baseline standards for your API is key to ensuring quality maintenance and fulfillment of the product roadmap. It will also make it easier for developers to contribute, as clear expectations will be present for issue resolution as well as feature requests.

Your standards should include several key points — a style guide for code, expectations for contributing, resources required, and rules for communication along with key maintainers.

A style guide will make it clear which specific conventions should be followed when submitting contributions to the codebase. This should include basics such as standard naming conventions, giving priority to commenting new code with sufficient detail, and best practices for specific syntax within the primary programming language used in your API. Airbnb's style guide for JavaScript is an excellent example of a strong syntactic standard.

Your section on expectations for contributing should demonstrate what a proper contribution process looks like: how to identify a suitable open issue to address, whether tests are required, and the general timeframe for approvals or responses from key maintainers. (e.g. "Our team spends about 10 hours per week on this project; average response time is 5-7 days.")

A section covering the resources required to get started will also encourage developers to invest their time, as it will significantly lower the bar to contributing. By documenting your processes and the tools used by the core team, you make it easier for new contributors to quickly get comfortable with the code and complete their submissions. This is also an opportunity to document any known issues with the project's standard resources, allowing others to suggest new solutions from their own experience.

Rules for communication will provide clear guidelines for what should be communicated in the contribution process, from reasonable pull requests to what is allowed in the Slack/Gitter channel for the project. One key requirement common to most shared projects is that all decisions, support requests, or feature requests should be submitted in public channels, so that communication is transparent to all involved. These standards will streamline moderation and management of project channels. Along with these guidelines, a list of key maintainers and project managers should be included. This adds points of contact to relieve pain points if the standard contribution process does not proceed as expected.

Homebrew’s open source repository follows these principles and provides excellent examples with their Code of Conduct and Contributing Guide.

Graceful Error Handling

As your primary audience and user base will be developers, it is important that you provide detailed error handling along with graceful exiting. It is a far more dev-friendly experience to end a failed process cleanly when it reaches an error state, using conventions such as try/catch blocks to end the connection or request/response operation. It is also key to use error messages that are informative, concise, and provide the right level of detail to enable successful troubleshooting.

Solid Security Practices

In order to protect developers, users, and yourself from exposed, non-encrypted connections where API keys and other secrets could easily be intercepted, you should require security practices such as SSL/TLS-only connections and secure authentication methods such as OAuth to verify access.

Wrapping Up

If you’ve followed these key steps in building your API, congratulations! You’ve very likely created a strong application that will encourage future development, extension, and user growth. The most critical next step to take is to continue to invest time in cultivating your API’s contributions and release roadmap. Focusing on maintaining the release cycle will ensure continually increased quality of service for both developers and end users.

This series will be continued in next week’s installment, How to Design a Robust API.


Designing with Grids


By: Dimitri Tenke Fokoua, Siva Nammi

Designing a whole web page, or sometimes just an input field, can be very challenging depending on the approach we take. In today's development world, and on the frontend in particular, everything can be its own component. Keeping in mind that everything can be considered a table or container can make your life as a developer a whole lot easier! In this blog we will go over some examples of how we can design just about anything with this approach and use flex to style it.

Grid Layout

Everything is a table! When designing a component, a tab, or a whole page, a very useful approach is to divide it into multiple tables. This provides a flexible structure for your user interface and ultimately results in a clean experience for the user. In Angular, for example, a very easy way to accomplish such a task is the Angular Flex Layout library. Let's try to build a very simple top bar menu component with dropdowns.

First we need to define the main direction we want to give our component. In our case, the menu goes left-to-right, and we will also have dropdowns that show under the menu. In such a situation, our direction will be column, or top-to-bottom, using the Angular directive fxLayout.


<div fxLayout="column"></div>

In this main container, every included container will appear top-to-bottom. But the main menu bar we want should show left-to-right! To solve this, we can nest containers and give them different directions as we wish.


<div id="main-container" fxLayout="column">
     <div id="row-container" fxLayout="row">
     </div>
</div>

In the above example the nested container 'row-container' has a row direction, meaning left-to-right, so everything within this container will appear left-to-right. Pretty easy for the menu bar, which just contains items laid out left-to-right.

What about the dropdown, which will not only go left-to-right for each menu item, but may also contain multiple items going top-to-bottom? For this case we can take advantage of the Angular Grid System.

Continuing to build off of our above example, in the main container we want the second row to be the dropdown and to only show upon click. We can think of it like a big table that is divided into rows and columns, then allocate the appropriate space to any part of it.


<div id="main-container" fxLayout="column">
     <div id="row-container" fxLayout="row">
     </div>
     <div id="dropdown-container" gdColumns="1fr 1fr 1fr 1fr 1fr 1fr" gdRows="1fr 1fr 2fr">
     </div>
</div>
In the 'dropdown-container' we defined a set of 6 columns that are all equal in width and a set of 3 rows where the first and second rows each take 1 fraction of the space and the third takes 2 fractions. In this grid, you can position anything anywhere by giving it an axis position coordinate using gdColumn and gdRow.


<div id="dropdown-container" gdColumns="1fr 1fr 1fr 1fr 1fr 1fr" gdRows="1fr 1fr 2fr">
     <div gdColumn="1" gdRow="1">dropdown1</div>
     <div gdColumn="2" gdRow="2">dropdown2</div>
     <div gdColumn="3" gdRow="2">dropdown3</div>
     <div gdColumn="4" gdRow="1/3">dropdown4</div>
     <div gdColumn="5" gdRow="2">dropdown5</div>
     <div gdColumn="6" gdRow="1/4">dropdown6</div>
</div>


In this example the first element (container) will take column 1 and row 1 position (*Picture-1).


The second element will be located in the second column second row (*Picture-2).


The fourth element will be in the fourth column and span grid lines 1 to 3, which in this case means the first two rows (*Picture-3).


The sixth element here will take the last column and all available rows (*Picture-4).


With this method, you can pretty much design everything on a page and place elements anywhere you want in a structured and orderly manner. Now you can start with styling it and hiding/showing it appropriately.
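For readers not using Angular: the gd* directives above are a thin wrapper over native CSS Grid (gdColumns and gdRows correspond to grid-template-columns and grid-template-rows), so the same dropdown container can be sketched in plain CSS, assuming the same id:

```css
/* Plain-CSS equivalent of the gdColumns / gdRows setup above:
   six equal-width columns; three rows, the third twice as tall. */
#dropdown-container {
  display: grid;
  grid-template-columns: repeat(6, 1fr); /* same as "1fr 1fr 1fr 1fr 1fr 1fr" */
  grid-template-rows: 1fr 1fr 2fr;
}
```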

Now we’ll take a look at what can be done using Flexbox.


As discussed above, Grid Layout has been designed to work alongside other parts of CSS, as part of a complete system for doing layout. In this section, I will explain how Grid fits together with Flexbox.

Grid and Flexbox:

The basic difference between CSS Grid Layout and CSS Flexbox Layout is that Flexbox was designed for layout in one dimension: either a row or a column. Grid was designed for two-dimensional layout: rows and columns at the same time.

One-Dimensional Versus Two-Dimensional Layout:

A simple example can demonstrate the difference between one- and two-dimensional layouts.

In this first example, I am using flexbox to lay out a set of boxes. I have five child items in my container, and I have given the flex properties values so that they can grow and shrink from a flex-basis of 150 pixels.

I have also set the flex-wrap property to wrap, so that if the space in the container becomes too narrow to maintain the flex basis, the items will wrap onto a new row.

<div class="wrapper">
  <div>One</div>
  <div>Two</div>
  <div>Three</div>
  <div>Four</div>
  <div>Five</div>
</div>

.wrapper {
  width: 500px;
  display: flex;
  flex-wrap: wrap;
}

.wrapper > div {
  flex: 1 1 150px;
}
In the image, you can see that two items have wrapped onto a new line. These items are sharing the available space and not lining up underneath the items above. This is because when you wrap flex items each new row (or column when working by column) is an independent flex line in the flex container. Space distribution happens across the flex line.

A common question then is how to make those items line up? This is where you want a two-dimensional layout method: You want to control the alignment by row and column, and this is where grid comes in.
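For comparison, here is a sketch of the same wrapper converted to Grid. With explicit column tracks, wrapped items fall into the same columns and line up under one another:

```css
/* Grid version of the wrapper: items wrap into new rows AND line up,
   because every row shares the same column tracks (min 150px each). */
.wrapper {
  width: 500px;
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
}
```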

A couple of simple questions to ask yourself when deciding between Grid and Flexbox:

  • Do I only need to control the layout by row or column?
    – use Flexbox
  • Do I need to control the layout by row and column?
    – use Grid

Content Out or Layout In?

In addition to the one-dimensional versus two-dimensional distinction, there is another way to decide if you should use flexbox or grid for a layout. Flexbox works from the content out. An ideal use case for flexbox is when you have a set of items and want to space them out evenly in a container. You let the size of the content decide how much individual space each item takes up. If the items wrap onto a new line, they will work out their spacing based on their size and the available space on that line.

Grid works from the layout in. When you use CSS Grid Layout you create a layout and then you place items into it, or you allow the auto-placement rules to place the items into the grid cells according to that strict grid. It is possible to create tracks that respond to the size of the content, however, they will also change the entire track.

If you are using flexbox and find yourself disabling some of its flexibility, you probably need CSS Grid Layout. An example would be setting a percentage width on a flex item to make it line up with other items in the row above. In that case, grid is likely to be a better choice.
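
For instance (a sketch with invented class names), the percentage-width workaround and its grid replacement look like this:

```css
/* The workaround: fighting flexbox to force three equal columns. */
.cards { display: flex; flex-wrap: wrap; }
.cards > .card { flex: 0 0 33.333%; }

/* The better fit: let grid own both dimensions. */
.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}
```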

Now that we have a better understanding of when to choose grid over flexbox and vice versa, let’s dive deeper into what flexbox offers us to make our lives easier. First, some terminology:


  • Main Axis – The main axis of a flex container is the primary axis along which flex items are laid out. Beware: it is not necessarily horizontal; it depends on the flex-direction property (see below).
  • Main-Start | Main-End – The flex items are placed within the container starting from main-start and going to main-end.
  • Main Size – A flex item’s width or height, whichever is in the main dimension, is the item’s main size.
  • Cross Axis – The axis perpendicular to the main axis is called the cross axis. Its direction depends on the main axis direction.
  • Cross-Start | Cross-End – Flex lines are filled with items and placed into the container starting on the cross-start side of the flex container and going toward the cross-end side.
  • Cross Size – The width or height of a flex item, whichever is in the cross dimension, is the item’s cross size.

Flex Box Properties:

Some properties apply to the flex container (the parent) and others to the flex items (the children).

Parent Properties:

  • display:  Setting display: flex enables a flex context for all of the container’s direct children.
  • flex-direction:  This establishes the main axis, thus defining the direction flex items are placed in the flex container.
  • justify-content:  This defines the alignment along the main axis.
  • align-items:  This defines the default behavior for how flex items are laid out along the cross axis on the current line.
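
Put together, a container rule using these four parent properties might look like this (the values are chosen purely for illustration):

```css
.container {
  display: flex;            /* establish a flex context for direct children */
  flex-direction: row;      /* main axis runs left to right */
  justify-content: center;  /* pack items toward the center of the main axis */
  align-items: stretch;     /* items fill the container along the cross axis */
}
```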

Code Samples

Let’s revisit the dropdown container example covered in the earlier Grid Layout section. The dropdown at option 6 is a container with three items in it. Let’s look at different layouts.

<div class="container" fxLayout="row" fxLayoutAlign="start start">
  <div class="child-1">1. One</div>
  <div class="child-2">2. Two</div>
  <div class="child-3">3. Three</div>
</div>


Children Properties:

  • order:  By default, flex items are laid out in the source order. The order property controls the order in which they appear in the flex container.
  • flex-grow:  This defines the ability for a flex item to grow if necessary.
  • flex-shrink:  This defines the ability for a flex item to shrink if necessary.
  • flex-basis:  This defines the default size of an element before the remaining space is distributed.

Code Samples

<div class="container" fxLayout="row" fxLayoutAlign="center stretch">
  <div class="child-1" style="order: 3;">1. One</div>
  <div class="child-2" style="order: 2;">2. Two</div>
  <div class="child-3" style="order: 1;">3. Three</div>
</div>


<div class="container" fxLayout="row" fxLayoutAlign="start stretch">
  <div class="child-1" fxFlex="auto">1. One</div>
  <div class="child-2" fxFlex="30">2. Two</div>
  <div class="child-3" fxFlex="20">3. Three</div>
</div>

The Conclusion

Flexbox and CSS Grid are both great, but neither one is a replacement for the other. They are best used together to create clean, manageable, and flexible web pages. Use the bullets below as a guide for when to use each:

  • CSS Grid is best used for two-dimensional layouts, meaning columns and rows. Think big picture, and overall layout of the page.
  • Flexbox works best in one dimension (columns or rows). Think of individual content areas, such as a menu or sidebar.

With almost full browser support, CSS Grid will be a necessary skill for front-end developers to learn and master.

By: Dimitri Tenke Fokoua, Siva Nammi

The Importance of Certificates in Modern Application Development

The Importance of Certificates in Modern Application Development 1500 844 ELVT Consulting

By: Jeff Sugden, Heber Lemus

Certificates are used every day by our computers. They function much like our ID cards, providing proof of who we are, backed by technology that weeds out fakes. Certificates are the way computers and servers announce who they are and establish trust between a connection and its target. They allow us to know that we are connecting to the server we intended and not to an impostor. They also allow us to seal files in their current state – like important documents in PDF form – indicating that they are not to be tampered with. Rendering software can use certificates to detect tampering and display a warning that the file is no longer legitimate.

You can very easily create your own certificate. This is best done on a Linux machine with OpenSSL, but any computer with OpenSSL will do. Looking up just about any guide on creating a self-signed certificate via your preferred search engine will lead you in the right direction. Excellent! You have your certificate! And it works, too! You can now install it into your computer’s certificate manager and use it to digitally sign a file. Programs on your machine will now recognize the certificate as trusted.
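
As a minimal sketch of what those guides will tell you (file names and the certificate subject here are placeholders – substitute your own):

```shell
# Generate a new 2048-bit RSA key and a self-signed certificate in one step.
# The key is left unencrypted (-nodes) and the cert is valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout selfsigned.key \
  -out selfsigned.crt \
  -days 365 \
  -subj "/CN=my-test-cert"

# Print the subject and validity window to confirm it worked.
openssl x509 -in selfsigned.crt -noout -subject -dates
```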

But what if you sent the same file to a friend? Well, it wouldn’t work. Your friend’s PC doesn’t recognize the certificate. This is similar to the problem of paper money in the early days of banking: notes from one bank might not be accepted by other banks due to a lack of trust. In our scenario, your friend would need to install your certificate in order for their PC to trust the file. This would need to happen for every person you give your file to, because the programs running the verifications (checking the file hashes) don’t recognize the certificate. I’m sure you’re thinking this sounds like a very cumbersome process. What if there were a way to have people trust your file without having to manually distribute your specific certificate? This is the purpose of the Certificate Authority.

Certificate Authorities, or CAs for short, operate on the same concept as government-issued IDs. Because of the trust in the government that produces the ID, other parties will accept your ID as proof of your identity. When a certificate is signed with the backing of a CA, you can be assured that other people who receive your files or connect to your server will do so without problems. CA certificates are issued in chains – a root signs one or more intermediates, which in turn sign the certificates handed out to end users – in order to spread out the trust levels, speed up the process of issuing certificates, and protect the root certificate. If the root CA is ever exposed, the entire chain and everyone who depends on it is compromised and must be thrown out.

Making certificate signing requests (CSRs) against the root certificate all the time also exposes the root certificate. The more offline it is, the more secure it will be. To alleviate this potential issue, we rely on intermediate certificates to make our CSRs against. Most CAs chain their issuing certificates back to a small set of widely trusted roots, and most computers come with a set of root and intermediate certificates that are marked as trusted. This is why you can connect to most websites without having to install any certificates after installing your operating system.

Some of you may be asking yourself: how does a certificate do what it does? How do programs verify a certificate? We can start with a basic concept: file hashing. The first example of a hash I came across in my actual work experience came while I was doing an upgrade on our reporting software. The client I was working with asked if the downloaded files I needed from the software company had a hash that could be verified. I had no clue what he was talking about, but upon further investigation, the answer was a resounding yes! The picture below shows the hash information available from the software company’s download site.

This is when I first learned how security-minded companies can verify that downloaded files are legitimate and secure. For further information and a much more technical deep dive on how that hash value can be used, there are excellent articles available online.
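
To make that verification step concrete, here is a minimal sketch; the file and the “published” hash below are stand-ins for a real vendor download and the value posted on its download page:

```shell
# Stand-in for a downloaded file (in practice this comes from the vendor).
echo "hello" > download.bin

# The hash the vendor published for the file (here, precomputed for "hello").
published="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"

# Hash what we actually received and compare.
actual=$(sha256sum download.bin | awk '{print $1}')
if [ "$actual" = "$published" ]; then
  echo "hash matches - file is intact"
else
  echo "hash MISMATCH - do not trust this file"
fi
```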

Certificates operate on a similar principle – that the signature on the file or request created via the private key can be verified via the hash and the public key. Public keys are like the seals and special effects on your ID card. Anyone can see them and use them to verify the ID is legitimate. The private key is like the method to create those seals and special effects. Only select people or entities can create it and have access to the method. You do not want that secret method to become public. Similarly, you never want your private keys exposed.
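
A minimal sketch of that sign-and-verify round trip with OpenSSL (the key and file names are made up for the example):

```shell
# Create a private key and derive its public key.
openssl genrsa -out demo_priv.pem 2048
openssl rsa -in demo_priv.pem -pubout -out demo_pub.pem

# Sign a file: hash it, then sign the hash with the private key.
echo "important document" > doc.txt
openssl dgst -sha256 -sign demo_priv.pem -out doc.sig doc.txt

# Anyone holding only the public key can now verify the signature.
openssl dgst -sha256 -verify demo_pub.pem -signature doc.sig doc.txt
```

If the file is altered by even one byte after signing, the final command reports a verification failure instead of success.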

This is also what keeps your browsing safe and secure. When you visit your bank’s website – say, Chase – you trust that you are connecting to and interacting with the Chase Bank servers. If we were to operate only on faith, it would open the door to man-in-the-middle attacks, where a website posing as Chase would intercept and relay information to your browser, giving a false and potentially dangerous appearance of communicating directly with the Chase Bank servers. When you connect to a website you should see a lock symbol, typically on the left side of the address bar. This is your browser telling you whether you are in fact connecting to the correct website and servers and not a fake copy of the site.

Wrap Up

The internet wouldn’t be able to function as it does without certificates. Anything involving personal information would be highly risky to use, as malicious actors could intercept your data as it travels between you and the servers. This is why the indicators in your browser that the certificate checks passed are so important, and also why you should appreciate that your browser stops you from connecting to a page when there is a certificate error. As you develop your applications, incorporate certificates as a means of providing your users with a secure experience. And if you have questions as you do so, don’t hesitate to reach out to the Elevate Team.

By: Jeff Sugden, Heber Lemus

Dockerizing a Ruby on Rails Application

Dockerizing a Ruby on Rails Application 1500 844 ELVT Consulting

By: Alex English, Jaehyuk Lee

What are Containers?

Docker containers are lightweight packages that include a minimal operating system (much like a virtual machine) and any associated software that has a specific purpose. Typical containers include API Servers, Frontend (web) Servers, Databases, Cache Servers, Logging Servers, etc. These containers can be packaged up, stored in a repository, and deployed to a hosting environment where they get exposed to the internet or internal networks. Typically we start with a base image (node.js, etc), add all of our code and resources (images, css, etc), and put that into a repository. That image can then be loaded onto whichever environment we desire.

Why Do We Care?

One of the most important advantages of containerization is repeatability. Here are some prime examples that illustrate the utility of containers in maintaining repeatability across environments and situations:

  1. We have a Test Server and a Production Server set up to host our node.js application. We copy our code onto the Test Server, test it, and decide that it’s working properly. We take that same code then upload it to the Production Server and it explodes in a spectacular and unforeseen fashion. Why? The Test and Production Servers use slightly different versions of node.js. 
  2. Same situation as #1. We’ve updated Production to use the same version of node.js, yet the Production Environment still fails using the same code that works on the Test Server. This time it is due to a difference in the way the operating system handles file permissions (e.g., Ubuntu vs. Red Hat Linux).
  3. We go through the trouble of perfectly aligning our Test and Production server and now it’s time to scale out. Now we have the same problem, but at a much higher scale. We now have to ensure that all of our Production Servers are updated to use the same OS, OS version, and node.js version.

You can see why this becomes exponentially more difficult as we scale out and/or add different environments. In a container, our code is packaged along with its server AND the underlying OS into a lightweight package that we can then put wherever we want. If it works in the Test Environment we can be much more confident that it will work in Production. Problem solved.

Containerization Tools

Now that we have a baseline understanding of containers and why we would wish to utilize them, let’s look at a couple of the most commonly used containerization tools. Below we’ll walk through the aspects of Docker and Kubernetes with a brief example of how they’re used in practice.


Docker

One of the most commonly used containerization tools is Docker. Docker lets us run “containers” – self-contained, portable mini-copies of an operating system and associated software – much like virtual machines. Docker containers can be downloaded from a repository and started quickly with a simple command. Some examples of common software run on containers include:

  • Databases (mysql, postgresql, mongo)
  • Cache Servers (memcache, redis)
  • Web Applications (Angular, React, etc)

Docker is widely used with both an open source and proprietary version. There are various other options available should they be of interest, however quite often the network and support avenues for Docker make it the preferred choice.


Kubernetes

Now that we have our application put into containers, what can we do with them? Today’s applications are complicated and involve many containers (Frontend, Backend, Databases, etc.) all from different teams working together. We have to have a way to ‘orchestrate’ these containers and define relationships between them. Say we have an application that is composed of a frontend Angular application hosted by nginx, a backend application hosted in node.js, and an additional API server required by the backend server. We have all of these containerized and now we need to make them work together.

Kubernetes (often abbreviated as k8s) is a container orchestration framework. This means that it stands at a layer above Docker, and coordinates the activity of different containers, allowing them to talk to each other and access the outside world. Kubernetes runs on a few basic concepts:

    • Nodes Nodes represent the physical layer of Kubernetes. They are the actual hardware that containers run on. They can themselves be virtual machines in a cloud environment (like EC2 in AWS) or a container service like Fargate on AWS
    • Pods A Pod represents one or more containers that can run on a Node. Many Pods can run on a Node. A Pod defines shared storage and networking for these containers and represents the basic deployable unit in Kubernetes
    • Deployment A Deployment represents a set of Pods that scale out. A Deployment contains a set of identical Pods
    • DaemonSet A DaemonSet is a Pod that runs on every Node. These are great for cross-cutting concerns like logging and monitoring.
    • Services A Service in Kubernetes represents a logical unit of access to a load-balanced resource. Typically this is a Deployment or DaemonSet. 
    • Ingress Internet-facing applications use Ingresses to allow access from the internet into an application. Depending on where it’s running (AWS, Azure, Datacenter), the Ingress will have different implementations (AWS ALB, Azure Load Balancer, or Nginx)

Why use Docker and Kubernetes?

Docker and Kubernetes are used widely in a number of different applications and environments. The technology has been hardened over the years and has a robust community of support. Below we detail some specific advantages to their use:

  • Easily Manageable Images Docker registries make for easy storage of built containers. Containers can be rolled back to specific versions in the event of a bad deployment
  • Scalability Deployment instances can be easily scaled up with Kubernetes managing the load balancing of the Pods
  • Portability Docker containers and Kubernetes implementations are consistent across many different cloud providers and setups from AWS to Azure to Google and even Local Data Centers
  • Application Architecture Larger applications can be split up into smaller Docker containers allowing organizations to adopt a microservices-oriented approach to application Development and Deployment
  • Predictability Docker containers that run in a Test Environment run exactly the same way in Production

As an example app we’re using “Chili Friends”, an application that matches people together based on their chili sauce preferences. This is a Ruby-on-Rails (RoR) application available here.

This application runs on RoR using a SQL database (postgresql). To function properly we’ll have to set up an Ingress for internet users to connect. For Chili Friends we’ll set up the following:

  • A Deployment for our RoR application using our dockerized application
  • A Deployment for our postgresql database
  • A Service that allows load-balanced access to our RoR deployment
  • An Ingress that can bridge access from the internet into our service

We’re going to deploy this application on AWS’s Kubernetes service, the Elastic Kubernetes Service (EKS). We’ll start with the Dockerfile:

The App

To get started, the demo application we’re using was created with Rails’ basic scaffolding to provide sign-in, sign-up, and sign-out functionality, as well as the default welcome screen to make sure the app is up and running.

From my local computer, I can start the local dev server using rails server and am presented with this screen:

To test the database, I’m going to sign in here.

Once I do, I’m redirected to the default welcome page. Now that I know it works on my local machine, I’ll create a docker image using this Dockerfile:

FROM ruby:2.6.6-alpine

RUN apk add --update --no-cache curl py-pip
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools

RUN apk add --update --no-cache \
    binutils-gold \
    build-base \
    curl \
    file \
    g++ \
    gcc \
    git \
    less \
    libstdc++ \
    libffi-dev \
    libc-dev \
    linux-headers \
    libxml2-dev \
    libxslt-dev \
    libgcrypt-dev \
    make \
    netcat-openbsd \
    nodejs \
    openssl \
    pkgconfig \
    postgresql-dev \
    sqlite-dev \
    tzdata \
    yarn

RUN gem install bundler -v 2.2.17

# Work from a dedicated application directory inside the image (path assumed).
WORKDIR /app

COPY Gemfile Gemfile.lock ./

RUN bundle config build.nokogiri --use-system-libraries

RUN bundle check || bundle install

COPY package.json yarn.lock ./

RUN yarn install --check-files

COPY . ./

ENTRYPOINT ["./entrypoints/"]

Some explanation about what these commands do:

FROM ruby:2.6.6-alpine

This means we’ll build our docker image on top of an existing one. In this case, we’re using a stripped down version of linux (alpine linux) with ruby version 2.6.6 installed. This keeps everything lightweight.

The subsequent commands run as if you were installing the app on a fresh machine. We’re installing some compiler utilities, database libraries, and the like. The ‘ENV’ command creates environment variables, and the ‘RUN’ command executes shell instructions as if we were to ssh into a machine or sit at a terminal. We use ‘COPY’ to move files from our local machine into the docker image when building, and ‘WORKDIR’ to set the directory inside the image where files are copied and commands are executed.
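
With the Dockerfile in place, building and test-driving the image locally comes down to two commands (this is a sketch that assumes a local Docker daemon; the tag matches the image name used in the Kubernetes configuration):

```shell
# Build the image from the Dockerfile in the current directory
# and tag it with the name the deployment will reference.
docker build -t chili-friends:1.0 .

# Run it locally, mapping the Rails port 3000 to the host for a quick check.
docker run --rm -p 3000:3000 chili-friends:1.0
```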

Now that that’s set up, let’s look at some of the Kubernetes configurations:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: chili-friends-deployment
  labels:
    app: chili-friends
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chili-friends
  template:
    metadata:
      labels:
        app: chili-friends
    spec:
      containers:
        - name: nginx
          image: chili-friends:1.0
          ports:
            - containerPort: 3000

This yaml file configures our deployment in Kubernetes. In a Kubernetes cluster with many applications going, we use the ‘label’ feature to define different areas that work together (in our case chili-friends). Some of the important parts of this are:

  • Replicas: 3
    • This means that we’ll load-balance across 3 containers. By increasing this number we can scale up the application.
  • Spec: Containers:
    • This defines which containers we want to use. In our case, we’ve uploaded our container to Docker Hub and referenced the image here
  • Ports: ContainerPort:
    • Here we say that we’re exposing port 3000 on the container to the cluster. Note that this is different from exposing a port inside the container to one outside the container. For our sanity, we’ll always use port 3000

Now let’s configure the service:

apiVersion: v1
kind: Service
metadata:
  name: chili-friends-service
spec:
  selector:
    app: chili-friends
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

Now we’re up one level of abstraction. Much as in software development, a Service defines a logical resource that we can address by name – ‘chili-friends-service’ – on port 80. The service manages the load-balancing between the different containers it points to. The important parts:

  • Selector: App: Chili-Friends
    • In our deployment we set the selector app label to ‘chili-friends’ and the way we map this service to that deployment is by using that same label. This points our service to that deployment
  • Ports
    • In our deployment we set the containerPort to 3000. Here in the service, we map TCP port 80 to port 3000 on the containers. When network requests go against our service on port 80, they’ll hit port 3000 on the containers

Now we’ve got all of the internal parts of our Kubernetes application going: our app container is deployed to the cluster, and a service points to that deployment. So far, all of these network paths are contained within the cluster and not accessible from the outside. Now we need an Ingress. An Ingress acts as a gateway to the internet, connecting aspects of a request (the HTTP URL, etc.) to a particular service. Here’s the Ingress we can use for Chili Friends:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: chili-friends-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:3723842:certificate/23423-3746-345-234234-34534
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host:
      http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: chili-friends-service
              servicePort: 80

There’s a fair amount going on here. Let’s start with the annotations. In AWS, Kubernetes Ingresses are implemented as Application Load Balancers, and that’s what each of the annotations is for: they describe the open ports, which certificate in AWS Certificate Manager to use, and how to redirect plain HTTP to SSL. It should be noted that we’re delegating all SSL functions to the load balancer. The connection between the client and the ingress over the internet is encrypted via SSL; our internal server then deals with raw HTTP. This saves us the hassle of having to deal with certificate management inside of our app. Additionally, keep in mind that a DNS record must be created with a DNS provider (or, for testing, using /etc/hosts or similar methods on Windows) in order to actually hit this address.


Once all of these components are deployed to Kubernetes, we should be able to see our application. We’re now able to scale up the number of application pods with just one command. Typically as an organization grows, we might start with one application deployed to a server without a container. As the application gets more complex and more services and dependencies are added, we containerize that application so we can move it from one environment to another. As the application base and organization grows even further, we can use Kubernetes to scale the application and its dependencies out.
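
Assuming kubectl is pointed at the cluster and the manifests above are saved to files (the file names here are illustrative), deploying and scaling each become one-liners:

```shell
# Deploy (or update) each component from its manifest file.
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# Scale the application out by raising the replica count.
kubectl scale deployment chili-friends-deployment --replicas=5

# Watch the new pods come up.
kubectl get pods -l app=chili-friends
```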

By: Alex English, Jaehyuk Lee