How Serverless will Change DevOps

This guest post on serverless is by Limor Wainstein (limor@agileseo.co.il).

During the past decade, both DevOps and serverless have become quite popular in software development. They have not only challenged the way software is developed but also affected the entire software development lifecycle. First of all, let's try to understand what serverless and DevOps are.

Serverless and DevOps

Serverless is commonly used in conjunction with terms such as architecture, computing, and services. In brief, serverless architecture refers to the solution architecture of software applications built using fully managed third-party services (serverless services) as core dependencies. At its core, serverless computing provides runtimes to execute code; these are also known as function as a service (FaaS) platforms.
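As a concrete illustration of the FaaS model, here is a minimal sketch of a handler in the style AWS Lambda's Python runtime expects. The event shape and field names here are illustrative assumptions; the point is that the platform supplies the runtime and our code expresses only business logic.

```python
import json

def handler(event, context):
    """Minimal FaaS-style handler: the platform provides the runtime,
    scaling, and invocation; the function holds only business logic."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In a FaaS platform, this function would be invoked per request or per event, with no server for the team to provision or patch.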

DevOps is a collection of practices and tools that increase the ability to deliver (build, test, and deploy) applications and services more efficiently. DevOps calls for automation and for eliminating overhead both in setting up infrastructure and in building and deploying code. DevOps does not only affect technology; it is also targeted at people and team structures.

Continuous Integration (CI) and Continuous Delivery (CD) tools are used to build, test, and deploy code, and these steps are often driven by build automation. Tools that provide fast resolution of errors and bugs are equally crucial.

In addition to FaaS, fully managed Docker containers as a service are also evolving. For example, technologies such as AWS Fargate allow you to run Docker containers and scale them without needing to provision or manage any underlying infrastructure. This is useful for DevOps, since the containers can also be used to build, test, and deploy applications.

Fully Managed DevOps Services

Fully managed DevOps services are beginning to emerge for CI and CD. These are offered as software as a service (SaaS) or hosted services, such as Travis CI and CircleCI, where the third-party provider manages the underlying infrastructure. Such offerings range from semi-managed to fully managed serverless services that form the core of a DevOps pipeline.

Docker containers underlie many of these managed services and provide isolated, customized build environments. Some services, such as AWS CodeBuild, let you define the Docker image together with the required libraries and tools, so a build job can run directly without reinstalling dependencies for each build. These managed services use orchestration tools such as Kubernetes, Docker Swarm, and AWS ECS to control their fleets of containers.

DevOps Tools with Serverless

There is also a trend toward implementing DevOps itself on serverless architectures. The ability to build efficient CI/CD pipelines tailored to an application's requirements, consumption-based cost models, and supportive services (e.g., AWS CodeBuild, AWS CodePipeline) make this approach attractive. Using serverless services, it is possible to implement an entire build, test, and deploy pipeline by writing glue code, without any hosted solutions.
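As a sketch of such glue code, the function below assembles the parameters for triggering a build of a specific commit. The dictionary shape mirrors what AWS CodeBuild's start_build API accepts, but the project name and environment variable are hypothetical placeholders:

```python
def build_request(repo_url, commit_sha, project="my-app-build"):
    """Assemble parameters for triggering a build of one specific commit.
    In an AWS setup, this dict maps onto codebuild.start_build(**req);
    the project name and variables here are illustrative only."""
    return {
        "projectName": project,
        "sourceVersion": commit_sha,  # pin the build to an exact commit
        "environmentVariablesOverride": [
            {"name": "REPO_URL", "value": repo_url, "type": "PLAINTEXT"},
        ],
    }
```

A small Lambda function wired to a repository webhook could build this request and submit it, forming one serverless step of the pipeline.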

Open Source and Serverless

Over the past few years, many open source DevOps tools started emerging. One of the most popular open source DevOps tools is the Serverless Framework.

The Serverless Framework allows simplification by:

  • Declaring the microservice infrastructure and endpoints and mapping them to execution logic (for example, a Lambda function).
  • Building artifacts, archiving and deploying to infrastructure.
  • Extending DevOps capabilities using plugins.

Cloud providers are also beginning to develop similar tools. For example, AWS recently introduced its Serverless Application Model (SAM), which could potentially replace the Serverless Framework's functionality for declaring serverless APIs. Contributions from the open source community keep the Serverless Framework up to date with newly introduced AWS services as they appear.

In addition, open source security is becoming a challenge when using third-party libraries. At the same time, vulnerability scanning is becoming a commodity, especially with the introduction of npm's built-in vulnerability auditing. This has helped reduce the risks of building applications on open source dependencies.

Infrastructure as Code

With the adoption of serverless architectures, infrastructure provisioning has become a significant part of the deployment process. Applications use different serverless services as building blocks, so it is crucial to ensure these services are correctly configured and connected. A best practice is to define the infrastructure as code using declarative tools and languages such as Terraform. This enables versioning the infrastructure along with the application code, reducing the manual work in the DevOps process.
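A minimal sketch of the idea in Python, with the resource names invented for illustration and the template shape loosely following CloudFormation: the infrastructure is declared as data that lives in version control, and a stable fingerprint lets a pipeline detect whether the committed definition has changed.

```python
import hashlib
import json

def render_template(stage):
    """Declare infrastructure as data. Committed alongside the
    application code, this definition is versioned with it."""
    return {
        "Resources": {
            "UploadsBucket": {  # illustrative resource name
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": f"my-app-uploads-{stage}"},
            }
        }
    }

def template_fingerprint(template):
    """Stable hash of the definition, so a pipeline can decide whether
    the infrastructure actually changed between deployments."""
    body = json.dumps(template, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()
```

Tools like Terraform and CloudFormation do this at much greater depth (dependency graphs, drift detection, planned changes), but the core principle is the same: the environment is derived from code, not hand-built.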

Immutable Infrastructure

Another effect of serverless on software development is how we approach modifications to running services. It is often possible to provision new instances of serverless services while keeping the existing instances running. This means you can deploy new instances, verify them, and switch over to the newer version upon success. Serverless has made Blue-Green and Canary deployments both more practical and more affordable for many applications.
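As one concrete mechanism, AWS Lambda supports weighted alias routing for canary releases. The helper below builds the routing configuration that shifts a fraction of traffic to a new function version; this is a sketch, and in practice you would pass the resulting dict as the RoutingConfig of an UpdateAlias call:

```python
def canary_routing(new_version, weight):
    """Routing config that sends `weight` of traffic to `new_version`
    while the alias's primary version keeps the remainder. Mirrors the
    shape of Lambda's alias RoutingConfig."""
    if not 0.0 <= weight < 1.0:
        raise ValueError("canary weight must be in [0, 1)")
    return {"AdditionalVersionWeights": {str(new_version): weight}}
```

Start with a small weight, watch error rates and latency, then raise the weight or roll back by dropping the routing config entirely.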

Distributed DevOps

Using serverless technologies for application development means that DevOps has to deal with distributed systems. While it is possible to use serverless services from different providers, configuring a CI/CD pipeline means coordinating operations across all of them, which adds further complexity to the DevOps process. It is important to build detailed monitoring and visibility into each serverless service, so that newly provisioned services and every change made to them can be observed.

Summary

Overall, we’ve outlined a few ways in which serverless is changing DevOps practices used for software development. To wrap up, let’s look at several best practices for serverless and DevOps.

  • Implement automation for CI/CD.
  • Implement tests such as code style checks, unit tests, integration tests, and UI tests, and make them part of routine development checks.
  • On commit, run code style checks using git lifecycle hooks where possible.
  • Use a CI tool to automate building the artifacts, running all the test cases, and sending the results back to the pull request.
  • Rerun the tests before deployments, preferably through an appropriate deployment trigger.
  • Include the infrastructure in the code.
  • Use managed services for CI/CD whenever possible to reduce the overhead of infrastructure management.
  • Design the application to support rollback, covering both previous application versions and database migrations.
  • For database changes, create scripts that can both apply and revert each change.
  • Version deployment artifacts and store them in a durable place.
  • Allow deploying a specific version of the application.
  • Choose tools that help streamline the debug and monitoring cycle.
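As one way to implement the commit-time style check from the list above, here is a sketch of a git pre-commit hook written in Python. The file layout and the choice of flake8 as the checker are assumptions; adapt them to your team's tooling.

```python
#!/usr/bin/env python3
"""Sketch of a git pre-commit hook: reject commits whose staged Python
files fail a style check. Save as .git/hooks/pre-commit and make it
executable. flake8 is one example checker; substitute your own."""
import subprocess
import sys

def python_files(changed):
    """Keep only Python sources from a list of changed paths."""
    return [f for f in changed if f.endswith(".py")]

def staged_files():
    """Ask git for the files staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def main():
    files = python_files(staged_files())
    if not files:
        return 0  # nothing to check, allow the commit
    result = subprocess.run([sys.executable, "-m", "flake8", *files])
    return result.returncode  # nonzero blocks the commit

# Installed as a hook, the script would end with: sys.exit(main())
```

A nonzero exit code from the hook aborts the commit, so style problems are caught before they ever reach the CI pipeline.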

Separating the deployment cycles of serverless projects allows teams to make changes without redeploying the entire application. It remains vital to learn and understand the changes that DevOps and serverless processes and technologies introduce, and to adapt accordingly so that your DevOps processes are aligned to support serverless applications.

Bio:
Limor is a technical writer and editor at Agile SEO, a boutique digital marketing agency focused on technology and SaaS markets. She has over 10 years’ experience writing technical articles and documentation for various audiences, including technical on-site content, software documentation, and dev guides. She specializes in big data analytics, computer/network security, middleware, software development and APIs.
Twitter: @LimiMaayan

The 10 Commandments of Logging

When you search for things on the internet, sometimes you find treasures like this post on creating meaningful logs.

This post, from a few years back, is authored by Brice Figureau (found on Twitter as @_masterzen_). His blog clearly shows he understands the many aspects of DevOps and is worth a visit.

In recent discussions with Brice, he stated that he believes the rules are still quite valid, with some tweaks needed for containers and Kubernetes, and that in today's world of GDPR, logging sensitive data is a non-starter. These rules remain informative and worth considering.

Our thanks to Brice for letting us repost his blog under Creative Commons CC-BY.

Guest author Brice Figureau

2 stone tablets with Roman numerals from 1 to 10
Copyright: moises / 123RF Stock Photo

After writing an answer to a thread regarding monitoring and log monitoring on the Paris DevOps mailing list, I thought back about a blog post project I had in mind for a long time.

I wrote this blog post while wearing my Ops hat and this is mostly addressed to developers.

Knowing how and what to log is, to me, one of the hardest tasks a software engineer will have to solve. Mostly because this task is akin to divination. It’s very hard to know what information you’ll need during troubleshooting… That’s the reason I hope those 10 commandments will help you enhance your application logging for the great benefits of the ops engineers 🙂

1. Thou shalt not write log by yourself

Read More

How are you doing “observability”?

In today’s world of complex code and deployment, people on the DevOps front line face challenges in monitoring, alerting, tracing distributed systems and log aggregation/analytics. Jointly, these are often called “observability” (reference: Twitter blog).

hand drawn checklist with yes and no choices

These are challenges faced by multiple groups like DevOps, core engineering teams and Web 2.0 developers. We see concerns in web applications and traditional enterprise applications. We foresee even more issues in emerging spaces like IoT, event-driven design and microservices.

And as is usual with complexity-bound problems, there are many ways to address these challenges, including discrete tools and procedural methods. Such approaches often leave gaps and edges uncovered. After all, the matrix of product types and use cases is large and growing, and the need and scale are increasing too.

So, what do you think about observability? Scalyr is hosting a short survey to find out what the current state of observability is. The survey looks at several areas, including tools and related issues, and we invite you to chime in with your thoughts. While the survey is the best place to express your views, feel free to leave us a comment on this topic.

Please take the survey (at https://bit.ly/2JKezDb), and you can opt-in to get a copy of the results.

DevOps Security Means Moving Fast, Securely

In this world of lightning-fast development cycles, MVPs, and DevOps, it may intuitively feel like security gets left behind. You might be thinking, “Aren’t the security guys the ones who want to stop everything and look at our code to tell us how broken it is right before we try to deliver it?” Many feel that DevOps security is a pipe dream.

Is it possible to be fast and secure? Lately, I’ve been drooling over a sports car—namely, the Alfa Romeo Giulia Quadrifoglio. Long name, fast car. It holds some impressive racing records and sports 505 horsepower but also is a Motor Trend Car of the Year and an IIHS Top Safety Pick. These awards are due to automatic braking technology, forward-collision warning, lane-keeping assistance, blind-spot monitoring, and rear cross-traffic alert. It is possible to be fast and safe.

The key to DevOps security is to move forward with development. Security teams need to understand why DevOps practices are so effective and learn to adopt them.

Man Running Fast with Scalyr Colors

Read More

DevOps Engineer: What Does It Take to Land the Job?

Every day, more and more companies are looking for people with DevOps knowledge. As I’m writing this post, there are 49,000 results on LinkedIn for DevOps jobs. That’s a lot of job openings.

The high demand for DevOps engineers must be for a reason, right? Well, according to devops.com’s “The State of DevOps Adoption and Trends in 2017” report, DevOps adoption has increased in the last couple of years, especially since 2016, and it won’t stop. The problem is that some organizations are finding it hard to start the journey.

Although starting a DevOps initiative is not an easy task, finding people who have the necessary skills is even harder. Whatever your background, developer or sysadmin, you'll need to improve or acquire some new skills and knowledge to succeed in your day-to-day job.

Let’s explore what you need to land a DevOps job.

Microscope over resume with Scalyr colors

Read More

Growing a High-Performance DevOps Culture

Culture is one of those things where we all know what it is but can’t explain it. Well, according to Wikipedia, culture is “the social behavior and norms found in human societies.” But in simple words, it’s all about people: how they interact, how they behave, how they talk, and what they practice. And culture is the foundation of a successful implementation of DevOps.

John Willis, an established speaker and writer on the subject of DevOps, coined the term CAMS (culture, automation, measurement, sharing) at a talk where he explained that DevOps culture is about breaking down silos. But what I find most striking about his discussion of culture, as summarized in the DevOps Dictionary, is the observation that “fostering a safe environment for innovation and productivity is a key challenge for leadership and directly opposes our tribal managerial instincts.” So the starting point for your DevOps journey is good leadership. After that, it’s just about how to grow your team to become a high-performing one.

A high-performing team in DevOps, according to recent research, is one that

  • Does deployments often, meaning several times a day.
  • Delivers a change with a fast lead time (minutes) after it’s been pushed to a shared repository.
  • Has a short (again, minutes) mean time to recover (MTTR).
  • Has a small change failure rate (described here).

So how do you grow a high-performance DevOps culture? You create a culture that will produce a team that delivers on time with confidence in a predictable manner. Here are the things that will help you get there.

High performance gauge with Scalyr colors

Read More

But I’m a Dev, Not a DevOps!

My experience with DevOps began before I even knew there was a name for the approach, when my boss asked me for some help in operations. The company I worked for was small at that time, so I always had the opportunity to get my hands dirty in the release automation process. I knew a few things about servers and Linux, so I was up for the challenge. To my surprise, I loved it. I knew it wasn’t the classic way of doing operations by manually managing physical servers, firewalls, virtual machines, and the like. We were using a cloud vendor. This meant that to spin up a new server, it wasn’t necessary to know which buttons to click.

The cloud vendor had its own API and SDKs for several languages, so I never really felt like I had stopped programming. Of course, that was just the tip of the iceberg, because systems administration is not just about spinning up new servers, adding storage, or rebooting machines. I had to take care of the architecture and decide which cloud services were needed for the job. But I was sure I could apply some development skills to operations, and I did. I created scripts that launched a new environment from scratch, made backups, and restored databases.

Then I found out about DevOps and all its practices. And because my background was in development, I was able to work with developers and explain, in their language, how they could be destroying our log files and why it was important.

So if you’re a developer new to this DevOps world, trust me. You’ll like this new way of working.

Developer with a tie considering DevOps

 

Read More

Sexy But Useless DevOps Trends

What’s sexy but useless? A Ferrari in a traffic jam. It’s beautiful, but all that power means nothing. When trapped in traffic, it can’t live up to its full potential.

Same with DevOps. While there are some critical DevOps functions that you absolutely need, there are some sexy but useless DevOps trends that are good to be aware of. Truth be told, there’s no recipe that will tell you how to succeed in DevOps. Everyone will have different opinions, and what worked for others might not work for you. But you can trust one thing: there are some actions that will guide you directly to frustration with DevOps.

With the amount of information out there about DevOps, you might get overwhelmed and think it’s not for you. You also might think the learning curve is too steep—that you need to change too many things before you get started. Maybe you’ll need a new team, new tools, more metrics, more time… you name it.

My advice is this: don't get distracted by all the things people say about DevOps. The trends I'm going to talk about here, for instance, are all style and no substance.

 

Like this Ferrari if it were stuck in a traffic jam, some DevOps trends are sexy but useless.

Read More

5 Critical DevOps Practices

DevOps is like pizza. We can’t think of pizza without considering critical ingredients: dough, sauce, cheese, and your preferred choice for vegetables and proteins. Everyone likes different toppings. In my case, I can’t think about pizza without extra cheese and meat. You might choose differently, but I think we can agree there are some ingredients that are critical for this food to be called pizza. Quality and ingredients will vary, but some things will always remain true.

Well, it’s the same with DevOps practices. There are some critical practices, and you can’t think about DevOps without considering them. Everyone will have preferred choices regarding the tools and the process, but the practice will remain and each practice complements the other.

Every critical DevOps practice takes time to get down, but the end result will be magnificent. So, let’s discuss what they are and how to implement them.

Pizza with Scalyr Colors

Read More

DevOps: Past, Present, and Future

While DevOps is no longer a brand-new field or movement, there continues to be rapid innovation in the space. Every day new tools are announced, new technologies are created, and teams around the world have to sort through the noise to figure out what’s actually important to their team and their environment. Some tools are transformative (VMWare), some are useful but quickly supplanted (LXC), some prove to be neat technology but never find wide adoption (VRML).

In this post I explore DevOps’ past, present, and future, reflecting on 2017 and looking to 2018 and beyond. What once was revolutionary is now commonplace, and the future holds the promise of much greater efficiencies, albeit with a significant amount of retooling to get there.

DevOps Adoption is: The Past

Today, DevOps is the de facto standard for modern IT and operations. You are no longer an early adopter if you roll out DevOps. In 2017, only ~25% of companies hadn't started down the DevOps path. In 2018, most companies will complete the journey, leaving those without DevOps in the minority. Is DevOps for everyone? I'd argue that the core concepts of DevOps (collaboration, communication, and efficiency) are beneficial to any company, in any industry. The benefits that companies receive are material and increase over time.

Through 2018 and beyond, DevOps will continue to entrench itself as the new normal in how companies build and run large-scale software systems. The business world will continue its adoption of the DevOps mindset and tools, occasionally crossing over from traditional software engineering into other practices, much as Toyota’s Lean manufacturing inspired the creation of Lean Startups and forms the foundation for next-generation hospital operations. The maturation of the DevOps space will drive the development of new terminology—for good reasons (more precise language and descriptions) and bad (our product is different because we target DevSecAIFooOps). These new labels (and inevitable jargon) will muddy the waters around DevOps a bit, mostly due to vendors fighting to stand out from the crowd and avoid becoming a casualty of industry consolidation.

The next phase of DevOps evolution involves marrying Agile product development processes with DevOps-centric production release environments. This workflow evolution promises to dramatically increase knowledge worker efficiency—but while very early implementers will start this process in 2018, the practice won’t become widespread until 2019 or later.

Containers are: The Present

Containers as a concept (be they Kubernetes, Docker, or some other new format) will start to “cross the chasm” and see much more widespread adoption in a variety of use cases. Most early adoptions will focus on enabling engineer productivity and software distribution. More evolved environments will use containers to incrementally deploy their microservices and ensure consistency among dev, test, and production environments.

Where the first wave of virtualization was all around efficient hardware utilization, the wave of containerized virtualization is about enabling consistent software environments. Expect to see more tools enabling software release orchestration using containers as the deployable code artifact. This will have benefits in terms of testability and reproducibility but will require teams to invest in new tools and modes of operation in order to make full use of the potential.

Increased adoption of containers will lead to a dilution of the core message and some industry pushback due to:

  • Fear of change
  • Valid attempts by engineers to separate fluff from reality
  • Lack of good tools and example use cases

Ultimately, containers will evolve into a core component of modern IT infrastructure much as virtualization did 15 years ago.

Unikernels are: The Future

Unikernels are a very promising bit of technology, but they are still a couple of years away from widespread adoption. For those not familiar with them, unikernels are a newer type of container technology that embeds the operating system in your application, the reverse of the usual arrangement in which applications run on top of a general-purpose OS. A typical operating system includes support for multiple users and multiple applications and, in most cases, a user interface. Unikernels ditch all of that overhead and pare the OS down to the absolute minimum. This is good from a security and reliability standpoint, but it will require engineering and operations teams to retool their monitoring, debugging, and deployment practices to support unikernel applications.

Unikernels have the potential to dramatically change the paradigms for production software environments (primarily due to dramatically altering the security landscape). In 2018 you’ll hear a lot more about them and vendors will start to tout their support—but expect to see only limited production adoption. My hope is that Unikernel standards will start to emerge, and at least one or two will be ready for early production deployments of common app platforms (UniK? IncludeOS? Xen?).

Hybrid Clouds are: Still to be defined

“Hybrid cloud” means many different things to many different people. To some it means running your environment across your physical datacenter and AWS/Azure; to others it means stitching together SaaS offerings from multiple providers; to still others it means only mixing of IaaS providers. The hybrid cloud story has been touted since the very early days of AWS with Microsoft, VMWare, IBM/Softlayer, Rackspace, and others all making their pitch to the market about what a hybrid cloud should look like. Often more than once.

The industry demand for hybrid cloud functionality continues to grow. Engineering and operations teams must be able to build, deploy, and run applications across multiple providers, and have those providers interoperate securely and efficiently. They need the ability to migrate selected IT workloads to cloud providers without undertaking massive retooling efforts. And yet there still doesn’t seem to be an agreed-upon set of solutions. Too many vendors are building walled gardens to keep customers in instead of building tools that allow seamless, secure, cross-platform communication. But, Microsoft, VMWare, Google, and others continue to invest in this area, so I’m hopeful we’ll start to see some type of consensus and standards developed over the next few years.  

I’ll be tracking the success of my predictions throughout 2018…and invite you to do the same. Commend me, call me out, or leave your own predictions in the comments below.