How Serverless Will Change DevOps

This post on serverless is from guest author Limor Wainstein (limor@agileseo.co.il).

During the past decade, both DevOps and serverless have become quite popular in software development. They have not only challenged the way software is developed but also affected the entire software development lifecycle. First of all, let's clarify what serverless and DevOps are.

Serverless and DevOps

Serverless is commonly used in conjunction with terms such as architecture, computing, and services. In brief, serverless architecture refers to the solution architecture of software applications built using fully managed third-party services (serverless services) as core dependencies. At its core, serverless computing provides runtimes to execute code; these runtimes are also known as function as a service (FaaS) platforms.

DevOps is a collection of practices and tools that increase the ability to deliver (build, test, and deploy) applications and services more efficiently. DevOps relies on automation to eliminate overhead in setting up infrastructure as well as in building and deploying code. DevOps does not only affect technology; it also targets people and team structures.

Continuous integration (CI) and continuous delivery (CD) tools are used to build, test, and deploy code, and they often rely on build automation. Tools that provide fast resolution of errors and bugs are equally crucial.

In addition to FaaS, fully managed Docker containers as a service are also evolving. For example, technologies such as AWS Fargate make it possible to run and scale Docker containers without provisioning or managing any underlying infrastructure. This is useful for DevOps, since the containers can also be used to build, test, and deploy applications.

Fully Managed DevOps Services

Fully managed DevOps services are beginning to emerge for CI and CD. They are offered as software as a service (SaaS) or hosted services, such as Travis CI and CircleCI, where the third-party provider manages the underlying infrastructure and services. These offerings range from semi-managed to fully managed serverless services that form the core of a DevOps toolchain.
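
For instance, hosted CI services like CircleCI are typically driven by a small configuration file kept in the repository. The following is only a minimal sketch; the executor image and the Node.js commands are illustrative assumptions rather than a recommended setup:

```yaml
# .circleci/config.yml -- minimal sketch; image and commands are illustrative assumptions
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/node:18.17   # hosted executor image managed by the provider
    steps:
      - checkout                 # pull the code for this commit
      - run: npm ci              # install dependencies reproducibly
      - run: npm test            # run the test suite
workflows:
  main:
    jobs:
      - build-and-test
```

Travis CI works along the same lines, with a .travis.yml at the repository root.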

Docker containers underlie many of these managed services and are used to provide isolated, customized build environments. Some services, such as AWS CodeBuild, let you define the Docker image together with the required libraries and tools, which allows a build job to execute directly without reinstalling dependencies for each build. These managed services use various orchestration tools, such as Kubernetes, Docker Swarm, and AWS ECS, to control their fleets of containers.
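
As a rough sketch of this idea, an AWS CodeBuild project reads its build definition from a buildspec.yml executed inside the chosen Docker image; the runtime and commands below are assumptions for a hypothetical Node.js project:

```yaml
# buildspec.yml -- a minimal AWS CodeBuild sketch; runtime and commands are illustrative assumptions
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18          # runtime already present in the managed build image
  build:
    commands:
      - npm ci            # install dependencies
      - npm test          # run the test suite
artifacts:
  files:
    - '**/*'              # package the workspace as the build artifact
```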

DevOps Tools with Serverless

There is also a tendency toward implementing DevOps itself on serverless architectures. The ability to build a unique, efficient CI/CD pipeline tailored to the application's requirements, consumption-based cost models, and supportive services (e.g., AWS CodeBuild and AWS CodePipeline) make this approach attractive. Using serverless services, it is possible to implement an entire build, test, and deploy pipeline by writing glue code, without using any hosted solutions.

Open Source and Serverless

Over the past few years, many open source DevOps tools have emerged. One of the most popular is the Serverless Framework.

The Serverless Framework simplifies serverless DevOps by (see the configuration sketch after this list):

  • Declaring microservice infrastructure and endpoints and mapping them to execution logic (for example, a Lambda function).
  • Building artifacts, archiving and deploying to infrastructure.
  • Extending DevOps capabilities using plugins.
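
A minimal serverless.yml sketch illustrating the first two capabilities might look like the following; the service name, handler, and endpoint are hypothetical:

```yaml
# serverless.yml -- a minimal sketch; service name, handler, and region are illustrative assumptions
service: orders-api

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

functions:
  createOrder:
    handler: handler.createOrder   # maps the endpoint to its execution logic
    events:
      - http:                      # declares an API Gateway endpoint for the function
          path: orders
          method: post
```

Running serverless deploy then packages the code, uploads the artifact, and provisions the declared infrastructure; plugins hook into the same lifecycle to extend it.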

Cloud providers are also beginning to develop similar tools. For example, AWS recently introduced its AWS Serverless Application Model (SAM), which could potentially replace the Serverless Framework's functionality when it comes to declaring serverless APIs. Contributions from the open source community keep the Serverless Framework up to date with newly introduced AWS services as they appear.
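
For comparison, and purely as a sketch, a roughly equivalent declaration in a SAM template could look like this (the resource name, handler, and code location are assumptions):

```yaml
# template.yaml -- a minimal AWS SAM sketch; names, runtime, and CodeUri are illustrative assumptions
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src               # location of the function code
      Handler: handler.createOrder
      Runtime: nodejs18.x
      Events:
        CreateOrder:
          Type: Api                # implicit API Gateway endpoint
          Properties:
            Path: /orders
            Method: post
```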

In addition, open source security is becoming a challenge when using third-party libraries. On the other hand, vulnerability scanning is becoming a commodity, especially with the introduction of npm's built-in vulnerability scanning. This has helped reduce the development risks in building applications.

Infrastructure as Code

With the adoption of serverless architectures, infrastructure provisioning has become a significant part of the deployment process. Applications use different serverless services as building blocks, so it is crucial to be sure that these services are correctly configured and connected. One best practice is to define the infrastructure as code using declarative tools and languages such as Terraform. This allows the infrastructure to be versioned alongside the application code, reducing manual work in the DevOps process.

Immutable Infrastructure

Another effect of serverless on software application development is how we treat modifications to serverless services. It is often possible to provision new instances of serverless services while keeping the existing instances running, deploy to the new instances, and switch traffic to the newer version once it is verified. Serverless has made blue-green and canary deployments both more practical and more affordable for many applications.
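
As a hedged illustration, AWS SAM can declare such a rollout for a Lambda function; the fragment below reuses the hypothetical function from the earlier sketch and shifts 10% of traffic to the new version before promoting the rest:

```yaml
# Fragment of a SAM template (under Resources) -- a sketch of a canary rollout
  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src
      Handler: handler.createOrder
      Runtime: nodejs18.x
      AutoPublishAlias: live               # publish each deployment as a new version behind an alias
      DeploymentPreference:
        Type: Canary10Percent5Minutes      # send 10% of traffic to the new version, then the rest after 5 minutes
```

Under the hood this uses CodeDeploy, which can automatically roll back to the previous version if configured alarms fire during the canary window.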

Distributed DevOps

Using serverless technologies for application development means that DevOps has to deal with distributed systems. While it's possible to use serverless services from different providers, configuring a CI/CD pipeline means coordinating operations across all of them, which adds further complexity to the DevOps process. It is important to build detailed monitoring and visibility into each serverless service so that the effect of every change can be observed after it is provisioned.

Summary

Overall, we’ve outlined a few ways in which serverless is changing DevOps practices used for software development. To wrap up, let’s look at several best practices for serverless and DevOps.

  • Implement automation for CI/CD.
  • Make checks such as code style, unit tests, integration tests, and UI tests part of the routine development workflow.
  • Run code style checks at commit time using git lifecycle hooks where possible (see the sketch after this list).
  • Use a CI tool to automate building the artifacts, running all the test cases, and sending the results back to the pull request.
  • Rerun the tests before deployments, preferably through an appropriate deployment trigger.
  • Define the infrastructure as code.
  • Use managed services for CI/CD whenever possible to reduce the overhead of infrastructure management.
  • Design the application to support rolling back both previous versions and database migrations.
  • If it’s a database change, create scripts that can both apply and revert the changes.
  • Version deployment artifacts and store them in a durable place.
  • Allow deploying a specific version of the application on demand.
  • Choose tools that help streamline the debug and monitoring cycle.
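
For the git-hook item above, one lightweight option is the pre-commit framework, configured through a YAML file in the repository; this is only a sketch under that assumption, and the hooks and versions shown are illustrative:

```yaml
# .pre-commit-config.yaml -- a minimal sketch; hook choices and versions are illustrative assumptions
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace    # basic hygiene checks on every commit
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black                  # code style check for Python sources, if any
```

Running pre-commit install registers these hooks with git, so the style checks run on every commit before the code ever reaches CI.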

Separating the deployment cycles of serverless projects allows teams to make changes without redeploying the entire application. It remains vital to learn and understand the changes introduced by DevOps and serverless processes and technologies, and to adapt accordingly so that your DevOps processes are aligned to support serverless applications.

Bio:
Limor is a technical writer and editor at Agile SEO, a boutique digital marketing agency focused on technology and SaaS markets. She has over 10 years’ experience writing technical articles and documentation for various audiences, including technical on-site content, software documentation, and dev guides. She specializes in big data analytics, computer/network security, middleware, software development and APIs.
Twitter: @LimiMaayan

The 10 Commandments of Logging

When you search for things on the internet, sometimes you find treasures, like this post on logging and creating meaningful logs.

This post, from a few years back, is authored by Brice Figureau (found on Twitter as @_masterzen_). His blog clearly shows he understands the many aspects of DevOps and is worth a visit.

In recent discussions with Brice, he said he believes the rules are still quite valid, with some tweaks needed for containers and Kubernetes, and that in today's world of GDPR, logging sensitive data is a non-starter. But these rules are informative and worth consideration.

Our thanks to Brice for letting us repost his blog under Creative Commons CC-BY.

Guest author Brice Figureau

2 stone tablets with Roman numerals from 1 to 10
Copyright: moises / 123RF Stock Photo

After writing an answer to a thread regarding monitoring and log monitoring on the Paris DevOps mailing list, I thought back about a blog post project I had in mind for a long time.

I wrote this blog post while wearing my Ops hat and this is mostly addressed to developers.

Knowing how and what to log is, to me, one of the hardest tasks a software engineer will have to solve. Mostly because this task is akin to divination. It’s very hard to know what information you’ll need during troubleshooting… That’s the reason I hope those 10 commandments will help you enhance your application logging for the great benefit of the ops engineers 🙂

1. Thou shalt not write log by yourself

Read More

CI/CD Tools: How to Choose Yours Wisely

Continuous integration (CI) and continuous deployment (CD) tools allow teams to merge, build, test, and deploy code branches automatically. Implementing them along with conventions like “commit frequently” forces developers to test their code when it’s combined with other people’s work. Results include shorter development cycles and better visibility of code evolution among different teams.

Once you commit to using CI/CD in your software development cycle, you’re immediately faced with options galore: Travis, Jenkins, GitLab, CodeShip, TeamCity, and CircleCI, among others. Their names are catchy, but they hardly describe what the tools do. So here’s a roadmap for choosing the right tool for your needs.

CI/CD Tools: Choosing Wisely

Read More

Five To-Dos When Monitoring Your Kubernetes Environment

If you’re on the DevOps front line, Kubernetes is fast becoming an essential element of your production cloud environment. Since container orchestration is critical to deploying, scaling, and managing your containerized applications, monitoring Kubernetes needs to be a big part of your monitoring strategy.

Container environments don’t operate like traditional ones. So, if you are monitoring your applications and infrastructure, you need to be thoughtful about how you monitor the container environment in which they run. Here are five best practices to inform your strategy:

  1. Centralize your logs and metrics. Orchestrating your containerized services and workloads through Kubernetes brings order to the chaos, but remember that your environment is still decentralized. You will give yourself a fighting chance if you centralize your logs and metrics.
  2. Account for ephemeral containers. The beauty of container orchestration is it’s easy to start, stop, kill, and clean up your containers in short order. However, monitoring them may not be so easy. You still need to debug problems and monitor cluster activity, even when services are coming and going. The trick is to grab the logs and metrics before they’re gone. If you don’t, your metrics will look more like the graph on the left than the one on the right.
    log files examples for transient containers
  3. Simplify, simplify, simplify. With all of the moving pieces in your container environment (services, APIs, containers, orchestration tool), you need to monitor without introducing unneeded complexity. Rather than bloating your container with various monitoring agents, each requiring updates on unique schedules, abstract your monitoring and management tools from what you’re monitoring and managing. This will also help your engineers focus on building and delivering software, not operating the delivery platform.
  4. Monitor each layer explicitly. You will need to collect logs and monitor for errors, failures, and performance issues at each layer – the pod, the container, and the controller manager – of your environment. For example, you’ll need to be able to troubleshoot pod issues, ensure the container is working, and collect runtime metrics in the controller manager.
  5. Ensure data consistency across layers. For fast, accurate debugging, you need to ensure data consistency across all the layers in your container environment. Things like accurate timestamps, consistent units of measurement (such as milliseconds vs. seconds), and collecting a common set of metrics and logs across applications and components will help you troubleshoot and debug quickly and accurately across all of your layers.

One best practice for accomplishing these to-dos in a simple, straightforward manner is to monitor the containers in your Kubernetes environment without touching your application containers. Do this by introducing a DaemonSet, or alternatively a sidecar, into your Kubernetes environment(s) that sits alongside your containerized services and includes your logging and metrics collection agent. Deploying this way will ensure consistent data collection, minimize the changes required to your application containers, and, most importantly, eliminate the possibility of selective blindness in your production environment.

A few ways to implement this include:

  • Introduce a DaemonSet with the Fluentd logging agent (this will give you logging but not metrics); a minimal manifest sketch follows this list. If you already have an ELK cluster configured, this is probably the option for you. Learn more here.
  • Introduce a DaemonSet or sidecar with the Prometheus metrics agent (CoreOS has done an excellent job of integrating Prometheus and Kubernetes). Running Prometheus on your Kubernetes cluster will give you metrics instrumentation, querying, and alerting. Learn more here.
  • A variety of metrics and performance monitoring tools, including Heapster, DataDog, cAdvisor, New Relic, Weave/VMware, and several others, also offer DaemonSet or sidecar options for Kubernetes monitoring.
  • Scalyr, log management for the DevOps front line, has a preconfigured DaemonSet containing the open source Scalyr agent available for download and use. The Scalyr DaemonSet natively supports both Kubernetes logging and metrics. You can download the YAML file for deploying the containerized Scalyr agent from GitHub here. Note that you also can download the full open-source Scalyr agent from GitHub here.
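
As a hedged illustration of the DaemonSet approach described in the list above, a minimal manifest might look like the following; the namespace, image tag, and volume paths are assumptions, and a real Fluentd deployment also needs RBAC and an output configuration:

```yaml
# fluentd-daemonset.yaml -- a minimal sketch; a production setup needs RBAC and a Fluentd config
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1    # logging agent; one copy runs on every node
          volumeMounts:
            - name: varlog
              mountPath: /var/log          # read container logs from the node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

Because the agent lives in its own DaemonSet, the application containers stay untouched, which is the point made above about ensuring consistent collection without bloating your workloads.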

 

Scalyr in the dark

Sometimes you want your dashboards to be dark.

And by dark, we mean dark backgrounds with light text and discernible colors. For many people who watch dashboards, looking for alerts and concerns, a darkened screen is more restful. And while research may have concluded that dark letters on a white background are easier to process, it also points out that white backgrounds in a darkened environment may disrupt low-light vision adjustments.

Scalyr dashboards are white, with dark text and colors used to indicate the desired metrics. These dashboards are easy to read and understand, and they clearly present the tracked data.

Scalyr dashboard for Linux Processes with white background

However, it is reasonably easy to change this view by making use of the accessibility features in Chrome and Firefox. This reversed-colors hack is not perfect, as it impacts all of the sessions in the respective browser, but it is quite suitable for a long-lived view or a stable operations center.

Scalyr dashboard for Linux processes with black background

This hack also does not use the system-wide accessibility features, so only the browser is affected. In short, the rest of your applications and background are viewable as usual, but the view in your browser has changed. Just be aware that other activity on your tabs may have rather wild results.

Chrome offers several accessibility extensions that allow some control over the look of a page presented in the browser. These extensions are available in the Chrome Web Store and are easily found via the Preferences (or Settings) page. The extension allows easy toggling of the visual effect.

Firefox has built-in preferences that allow you to create the dark scheme. While this is easier than Chrome, requiring no extension installation, it does not provide an easy way to toggle the scheme on and off. It does have additional personalization features and does not change the data representation colors. (Note: an add-on exists that allowed toggling, but it is not supported in Firefox Quantum at this time.)

A similar capability exists on Safari, Windows Internet Explorer, and Edge but requires a CSS stylesheet insertion. I will cover those in a later blog.

The following directions are from my MacBook; however, the same workflow exists for all Firefox or Chrome browsers regardless of operating system. If you are having difficulty with your version, drop me a comment and I will try to help out.

Come to the Dashboard Dark Side. 

Step by step for Chrome:
  1. Open your Chrome browser.
  2. Click on the menu button.
  3. Choose the Settings or Preference menu item (alternatively, you could enter “chrome://settings” directly into the address bar).
  4. Scroll down the page and select Advanced.
  5. Scroll down to Accessibility and click on Add Accessibility Features. This click will open a new tab (or window).
  6. In the Chrome store, find High Contrast.
  7. Click on Add to Chrome. This click will open a pop-up window. 
  8. Click Add Extension. A small High Contrast icon will appear in the upper right corner of Chrome.
  9. Now the fun stuff. Click on the icon and enable the extension. In some cases, installing the extension automatically enables it; if so, just click the icon to bring up the selections.
  10. Now that the extension is enabled, click on Inverted Colors.
  11. To toggle back and forth, either make use of the keyboard accelerators or click the appropriate choice in the extension.
Step by step for Firefox:
  1. Open the Firefox browser
  2. Click the Menu button.
  3. Select Preferences or Settings (alternatively, you could enter “about:preferences” directly into the address bar).
  4. Scroll to Fonts & Colors (under Language and Appearance).
  5. Click on Colors.
  6. In the Colors menu, change the associated colors.
    1. Text to White.
    2. Background to Black.
    3. Unvisited Links to Yellow (Optional).
    4. Visited Links to Light Blue (Optional).
      Please note that in Firefox, you can use what colors you would like. The above choices are offered as a starting point.
  7. In the Override the Colors box select Always.
  8. Click OK.

Reversing this choice requires entering preferences again and setting the Override the Colors choice to Never. As noted, the extension that offered a toggle button is not supported in Firefox Quantum.

So there you have it, a quick and dirty approach to getting your dashboards in the dark.

However, note that these changes will impact all tabs and windows of the browser, not just the dashboard views. For that reason, you may want to limit this use to a long-lived display or make use of the toggle capability of Chrome.

To learn more about accessibility in Chrome, check out Use Chrome with accessibility extensions – Google Chrome Help. For information about accessibility in Firefox, take a look at Accessibility features in Firefox. And to learn more about dashboards in Scalyr, check out how log analysis can lead you to needed actions.

If you try it out, tell me about your experiences in the comments.

 

The Build vs Buy Decision Tree

Chocolate or vanilla? Pancakes or waffles? Coke or Pepsi? We decide between similar choices every day. Some of us have preferences, and other times it’s just a feeling in the moment. A common decision in the IT world is the “build vs buy” decision. Sometimes this decision is not so cut and dried.

Can the decision to build or buy paralyze us with fear? Certainly. Do some have preferences? Definitely. However, all is not lost. There can be a logical system to decide whether to build or buy when it comes to software.

 

The Build vs Buy Decision Tree

 

Read More

DevOps Security Means Moving Fast, Securely

In this world of lightning-fast development cycles, MVPs, and DevOps, it may intuitively feel like security gets left behind. You might be thinking, “Aren’t the security guys the ones who want to stop everything and look at our code to tell us how broken it is right before we try to deliver it?” Many feel that DevOps security is a pipe dream.

Is it possible to be fast and secure? Lately, I’ve been drooling over a sports car—namely, the Alfa Romeo Giulia Quadrifoglio. Long name, fast car. It holds some impressive racing records and sports 505 horsepower but also is a Motor Trend Car of the Year and an IIHS Top Safety Pick. These awards are due to automatic braking technology, forward-collision warning, lane-keeping assistance, blind-spot monitoring, and rear cross-traffic alert. It is possible to be fast and safe.

The key to DevOps security is to move forward with development. Security teams need to understand why DevOps practices are so effective and learn to adopt them.

Man Running Fast with Scalyr Colors

Read More

Verbose Logging: Your Magnifying Glass for Bad Application Behavior

You probably don’t think of verbose logging as the stuff that hackathons and startups are made of.  Nor would most programmers consider it an especially advanced technique.  But it is important, and enough people ask about it that it’s worth covering.

Part of the reason that so many people inquire about the subject of verbose logging is that it’s kind of general in the same way that searching for “logging” is general.  So let’s start by at least getting more specific with a definition.

Chat bubbles with Scalyr colors

Read More

Java Exceptions and How to Log Them Securely

As a security consultant, I perform assessments across a wide variety of applications. Throughout the applications I’ve tested, I’ve found it’s common for them to suffer from some form of inadequate exception handling and logging. Logging and monitoring are often-overlooked areas, and due to increased threats against web applications, they’ve been added to the OWASP Top 10 as the new number ten issue, under the name “Insufficient Logging and Monitoring.”

So what’s the problem here? Well, let’s take a look.

Java Exceptions alert sign
Read More

Common Ways People Destroy Their Log Files

For this article, I’m going to set up a hypothetical scenario (but based on reality) that needs logging. We’re writing an application that automates part of a steel factory. In our application, we need to calculate the temperature to which the steel must be heated. This is the responsibility of the TemperatureCalculator class.

The class is fed a lot of parameters that come from external sensors (like current temperature of the furnace, weight of the steel, chemical composition of the steel, etc.). The sensors sometimes provide invalid values, forcing us to be creative. The engineers said that, in such a case, we should use the previous value. This isn’t something that crashes our application, but we do want to log such an event.

So the team has set up a simple logging system, and the following line is appended to the log file:

An invalid value was provided. Using previous value.

Let’s explore how this well-meant log message doesn’t actually help. In fact, combined with similar messages, our log file ends up being a giant, useless mess.

 

Trash Fire Depicting Way People Destroy Log Files

 

Read More