Real-World Applications of Increased Visibility

What can change in an organization when you increase visibility? A lot.

Previously I wrote about how providing visibility to key information is a core enabler of high-functioning, high-speed teams. When put into practice, increased information visibility can lead to transformative results. In this post I’ll draw on a mix of Scalyr customers and other companies I’ve worked with over my couple of decades in Silicon Valley to show you concrete examples of companies realizing these benefits.

Common to all of these use cases are the elimination of “middlemen” and a dramatic reduction in the latency of information retrieval. Giving employees direct, rapid access to the information they need to make effective decisions facilitates decentralized decision-making and chips away at organizational silos. Enhancing knowledge worker productivity this way is not new: Harvard Business School analyzed the implications of decentralized decision-making, and GE conceptualized its path to eliminating silos more than 25 years ago. Unsurprisingly, in both cases the benefits far outweighed the costs.

Whether we’re talking about engineers or customer service specialists (and we’ll cover both), remember that Data != Information. Simply having access to data—even if it represents every event happening everywhere in your environment—isn’t enough. It takes care and effort to ensure that data is processed and organized so it’s immediately consumable by the intended audience.

As a general rule of thumb, figure that half of the work will go into gathering, storing, and computing the raw data. The other half will go into organizing and presenting that data as consumable information.

Engineering and SaaS Use Cases

These next examples walk through the benefits of giving engineers increased visibility into production environments. Similar impacts can be seen with visibility into Dev/Test environments, CI/CD pipelines, testing status, and related systems. In short, any situation with multiple teams and a potential “black box” is a candidate to reap the benefits of increased transparency.

Shortening the Product Defect Lifecycle

This is such a common—and important—use case for increased visibility that we wrote an entire post on it. Visibility is the first step in the process: Is the Customer Support team immediately alerted to issues? Can your CS and Dev teams get direct access to logs when troubleshooting? Do all of your teams have clear visibility into the same data? Answer no to any of those and your teams are wasting valuable time because they lack the visibility required to shorten the defect lifecycle.

Our customers report that their internal latency around bug triage, inter-team escalations, and root cause analysis typically decreases by a factor of 5-10 when using Scalyr. Interestingly, Scalyr customers have told us that this change matters less over time, because increased visibility into log data doesn’t just shorten the product defect lifecycle—it actually decreases the number of product defects. They attribute this decrease to individual engineers engaging deeply with the log data and consequently catching a greater percentage of issues earlier in the development process.

Next Generation Deployment Techniques

Imagine, if you will, a traditional code deployment pipeline: the engineering team hands a release over to Ops, Ops deploys it during a specific window within which QA tests, and both Ops and Customer Support stand by post-deployment to verify the health of the running system. But if your goal is to deploy continuously, with multiple releases per week (or per day!) or partial releases via feature flags, blue/green deployments, or similar incremental deployment strategies, the traditional process quickly breaks down.

Why? In traditional environments, engineers monitor releases with prebuilt dashboards and tools (like daily email reports) but cannot access individual server logs or system/application performance metrics for the full stack. As companies move to a more integrated code release pipeline, developers need a more granular and up-to-date view of their code operating in production.

The continuous delivery model can only succeed if engineers have easy access to:

  • The current state of production systems
  • The detailed state of their code (dashboards aren’t enough)
  • All relevant log files (and when in doubt, let them see the data)
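
As a concrete illustration of the incremental strategies mentioned above, here’s a minimal sketch of a percentage-based feature flag. Everything in it is a hypothetical assumption (the flag name, the rollout percentage, the logging format), and a production system would typically use a dedicated flag service backed by a config store rather than a hard-coded table.

```python
"""Minimal sketch of a percentage-based feature flag for incremental rollout.

All names and values are illustrative assumptions, not any real product's API.
"""
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("rollout")

# Hypothetical flag currently exposed to 10% of users.
ROLLOUT_PERCENT = {"new_checkout_flow": 10}

def flag_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket users so each one gets a stable experience."""
    pct = ROLLOUT_PERCENT.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    enabled = bucket < pct
    # Log every decision so engineers can trace the rollout in production logs.
    logger.info("flag=%s user=%s bucket=%d enabled=%s", flag, user_id, bucket, enabled)
    return enabled

if __name__ == "__main__":
    print(flag_enabled("new_checkout_flow", "user-42"))
```

Deterministic hashing, rather than random sampling, keeps each user’s experience stable across requests, and logging every decision gives engineers exactly the granular production view this section calls for.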

Logs as Primary Data

This next use case is slightly different: not only do employees need access to logs, they also need that access fast enough to fit into their typical decision-making workflow. Once you have that in place, something magical happens… your logs become a primary information source, not one of last resort. The specific implications of this are pretty wide-ranging, but among Scalyr customers, the most common benefits are:

  • Better logging. Once developers know they can get to the logs for real debugging, they start putting more, and cleaner, logging events in their code (a sketch of this style of logging follows this list).
  • Democratized access to logs. When engineers can freely explore how applications are running in production, more eyes are on the lookout for problems, engineers build code for “what is” vs. how things were described to them, and teams operate more asynchronously.
  • Better tools. Knowledge that the data you need is reliably in a central location allows enterprising teams to build specific tools to assist with team-specific issues. This is particularly powerful as over time teams build numerous small tools that would never make the official roadmaps but still provide tangible benefits.
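
What “better logging” looks like varies by team, but a common pattern is structured, machine-parseable events. Here’s a minimal sketch; the event and field names are purely illustrative assumptions, not any specific product’s format.

```python
"""Minimal sketch of structured logging; all field names are illustrative.

One JSON object per event keeps log lines trivially searchable and easy to
aggregate once they reach a central log store.
"""
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("app")

def log_event(event: str, **fields) -> None:
    """Emit a single machine-parseable log line."""
    record = {"ts": time.time(), "event": event, **fields}
    logger.info(json.dumps(record))

# Example: an email delivery attempt, logged with enough context that a
# support rep can later answer "what happened to message X?" directly.
log_event("email_delivery", message_id="msg-1234", status="deferred",
          smtp_code=451, retry_in_seconds=300)
```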

The exact implications for you will depend on how your teams decide to make use of this new power. As the saying goes, “Garbage in, garbage out,” but clean and descriptive logs can transform a business, as I’ll show in the next use cases.

From Engineering and SaaS to Customer Service

Visibility is not just a high-leverage tool for teams reporting to the CIO or VP of Engineering. Any team working to decentralize decision-making or increase organizational efficiency can benefit. The next two examples highlight how non-technical customer-facing teams made transformative changes by enabling employee visibility into operational metrics and data.

Improving Customer Support

Recently, Return Path, a leading provider of outbound email services, granted all of its Tier 1 customer support employees direct access to the production application logs. This simple but dramatic shift reduced ticket turnaround times from three business days to about five minutes for customer issues like the following.

Previously, when a support rep received a ticket from a customer complaining that an email wasn’t delivered, the three-day investigation process went something like this:

  1. Work with the customer to verify common email client or other end-user issues weren’t to blame.
  2. Contact Ops to verify that no known issues for the application were to blame.
  3. Create a ticket for the Ops team to pull the relevant logs.
  4. Receive the logs and review the delivery status of the email(s) in question.
  5. Get back to the customer and, if required, open a second ticket with Ops or Engineering for any application issues found.

Not the best experience for the customer…

Fast-forward to today, and the same ticket is handled very differently. While on the phone or chat with the customer, the support rep:

  1. Gets the customer’s message ID.
  2. Queries the application logs for the full status of that message (or any other potentially relevant messages) to identify the issue (see the sketch after this list).
  3. Gives the customer an immediate answer and, if required, creates a ticket for Ops or Engineering.
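
To make step 2 concrete, here’s a minimal sketch of the kind of lookup a support rep’s tooling might run. The log location, message ID, and field names are all hypothetical stand-ins matching the structured events sketched earlier, not any real log-management API.

```python
"""Minimal sketch of a support rep's message lookup over structured logs.

The log path and field names are hypothetical assumptions standing in for
a real log store query.
"""
import json

def message_history(message_id: str, log_path: str = "/var/log/app/events.log") -> list:
    """Return every logged event for a given message ID."""
    events = []
    with open(log_path) as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip any non-JSON lines
            if record.get("message_id") == message_id:
                events.append(record)
    return events

# While still on the phone, the rep pulls the message's full delivery history:
for event in message_history("msg-1234"):
    print(event.get("ts"), event.get("event"), event.get("status"))
```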

Not only is the customer experience dramatically improved, but both the customer support and Ops teams can spend more time on actual work and less time passing tickets around.

Contact Center Employee Optimization

My last example veers off the standard software development and SaaS path to a very different type of organization: contact centers. For those of you not familiar with the space, contact centers include inbound customer support centers, inbound and outbound sales teams, and medium- to large-scale call centers. Contact centers have long tracked a multitude of performance metrics, used above all to manage the center’s financial and employee performance.

A startup I once worked with, Merced Systems, stepped into the contact center space with a fairly simple proposition: if employees, frontline managers, and company executives had timely access to key metrics through a user interface that let them understand the raw data, they could use that information to drive more efficient and successful customer engagements. In other words, they built a product that enabled employee visibility into contact center operational metrics and allowed their customers to operate more efficiently.

Customers realized these efficiency gains in several key areas:

  • Employees could self-optimize their actions to meet real-time goals.
  • Managers could evaluate employee performance based on actual vs. perceived performance.
  • Executives could analyze contact center performance along various dimensions.

Net result? Extremely happy customers like T-Mobile, Coca-Cola, EchoStar, and many others—and Merced Systems going from idea to a $170M acquisition in less than 10 years. All from the simple idea that granting everyone visibility into key information leads to more efficient operations.

These examples give you some ideas on where, and how, you can apply increased visibility in your own environment. If you have a story about how visibility into the right information transformed your environment, we’d love to hear about it in the comments below!

Next time I’ll be talking about the nuts and bolts of enabling visibility in SaaS environments and where we’ve seen the biggest bang for the buck.

DevOps: Past, Present, and Future

While DevOps is no longer a brand-new field or movement, there continues to be rapid innovation in the space. Every day new tools are announced and new technologies created, and teams around the world have to sort through the noise to figure out what actually matters for their team and environment. Some tools are transformative (VMWare), some are useful but quickly supplanted (LXC), and some prove to be neat technology that never finds wide adoption (VRML).

In this post I explore DevOps’ past, present, and future, reflecting on 2017 and looking to 2018 and beyond. What once was revolutionary is now commonplace, and the future holds the promise of much greater efficiencies, albeit with a significant amount of retooling to get there.

DevOps Adoption is: The Past

Today, DevOps is the de facto standard for modern IT and Operations. You are no longer an early adopter if you roll out DevOps: in 2017 only ~25% of companies hadn’t started down the DevOps path, and in 2018 most will complete the journey, leaving those without DevOps in the minority. Is DevOps for everyone? I’d argue that the core concepts of DevOps—collaboration, communication, and efficiency—are beneficial to any company, in any industry. The benefits companies receive are material, and they compound over time.

Through 2018 and beyond, DevOps will continue to entrench itself as the new normal in how companies build and run large-scale software systems. The business world will continue its adoption of the DevOps mindset and tools, occasionally crossing over from traditional software engineering into other practices, much as Toyota’s Lean manufacturing inspired the creation of Lean Startups and forms the foundation for next-generation hospital operations. The maturation of the DevOps space will drive the development of new terminology—for good reasons (more precise language and descriptions) and bad (our product is different because we target DevSecAIFooOps). These new labels (and inevitable jargon) will muddy the waters around DevOps a bit, mostly due to vendors fighting to stand out from the crowd and avoid becoming a casualty of industry consolidation.

The next phase of DevOps evolution involves marrying Agile product development processes with DevOps-centric production release environments. This workflow evolution promises to dramatically increase knowledge worker efficiency—but while very early implementers will start this process in 2018, the practice won’t become widespread until 2019 or later.

Containers are: The Present

Containers as a concept (whether Docker, Kubernetes-orchestrated, or some newer technology) will start to “cross the chasm” and see much more widespread adoption across a variety of use cases. Most early adopters will focus on enabling engineer productivity and software distribution. More evolved environments will use containers to incrementally deploy their microservices and to ensure consistency among dev, test, and production environments.

Where the first wave of virtualization was all about efficient hardware utilization, this wave of containerized virtualization is about enabling consistent software environments. Expect to see more tools enabling software release orchestration with containers as the deployable code artifact (a minimal sketch follows below). This will bring benefits in testability and reproducibility but will require teams to invest in new tools and modes of operation to realize the full potential.
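
To make “containers as the deployable code artifact” concrete, here’s a minimal sketch of one common pattern: promoting the exact image digest that passed testing rather than rebuilding per environment. The registry name and digest are hypothetical placeholders, and the sketch assumes the standard Docker CLI is installed.

```python
"""Minimal sketch of treating the container image as the deployable artifact.

The registry name and digest below are hypothetical placeholders.
"""
import subprocess

REPO = "registry.example.com/myapp"   # hypothetical image repository
DIGEST = "sha256:0123abcd..."         # digest recorded by CI (placeholder)

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# Pull the exact bits that passed testing, addressed by immutable digest.
run("docker", "pull", f"{REPO}@{DIGEST}")

# Re-tag and push; the :production tag now points at the identical image.
run("docker", "tag", f"{REPO}@{DIGEST}", f"{REPO}:production")
run("docker", "push", f"{REPO}:production")
```

Because a digest uniquely identifies the image contents, dev, test, and production are guaranteed to run identical bits, which is precisely the consistency benefit described above.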

Increased adoption of containers will lead to a dilution of the core message and some industry pushback due to:

  • Fear of change
  • Valid attempts by engineers to separate fluff from reality
  • Lack of good tools and example use cases

Ultimately, containers will evolve into a core component of modern IT infrastructure much as virtualization did 15 years ago.

Unikernels are: The Future

Unikernels are a very promising bit of technology but are still a couple of years away from widespread adoption. For those not familiar with them, unikernels are a newer approach that embeds a minimal operating system directly in your application (i.e., the reverse of the typical model, where your application runs on top of a general-purpose OS). Most operating systems carry support for multiple users and applications and, in most cases, a user interface. Unikernels ditch all of that overhead and pare the OS down to the absolute minimum. This is good from a security and reliability standpoint, but it will require engineering and operations teams to retool to support monitoring, debugging, and deployment of unikernel applications.

Unikernels have the potential to dramatically change the paradigms for production software environments (primarily by dramatically altering the security landscape). In 2018 you’ll hear a lot more about them, and vendors will start to tout their support—but expect to see only limited production adoption. My hope is that unikernel standards will start to emerge, and at least one or two will be ready for early production deployments of common app platforms (UniK? IncludeOS? Xen?).

Hybrid Clouds are: Still to be defined

“Hybrid cloud” means many different things to many different people. To some it means running your environment across your physical datacenter and AWS/Azure; to others it means stitching together SaaS offerings from multiple providers; to still others it means mixing multiple IaaS providers. The hybrid cloud story has been touted since the very early days of AWS, with Microsoft, VMWare, IBM/SoftLayer, Rackspace, and others all making their pitch to the market about what a hybrid cloud should look like. Often more than once.

The industry demand for hybrid cloud functionality continues to grow. Engineering and operations teams must be able to build, deploy, and run applications across multiple providers, and have those providers interoperate securely and efficiently. They need the ability to migrate selected IT workloads to cloud providers without undertaking massive retooling efforts. And yet there still doesn’t seem to be an agreed-upon set of solutions: too many vendors are building walled gardens to keep customers in instead of building tools that allow seamless, secure, cross-platform communication. But Microsoft, VMWare, Google, and others continue to invest in this area, so I’m hopeful we’ll start to see some type of consensus and standards develop over the next few years.

I’ll be tracking the success of my predictions throughout 2018…and invite you to do the same. Commend me, call me out, or leave your own predictions in the comments below.

Visibility = Speed

Waiting … to … find … out … something … breaks … everything.

If you found yourself wanting to skip over that sentence, you’re not alone.

For engineers, and knowledge workers in general, milliseconds can mark the difference between a person’s willingness to wait for information and their need to take action. If they wait, they risk falling behind. If they act on incomplete information, they make suboptimal decisions.

As business trends—and the release cycles they drive—speed up and companies struggle to fill engineering roles, this tradeoff becomes even more important. If your teams are chronically understaffed by 10-20%, can you afford to have existing staff executing at anything less than 100% efficiency?

Rapid information flow is key to ensuring that employees have maximum visibility into the information they need, when they need it. In an ideal world, teams use that visibility to move with speed AND accuracy—even Facebook realized that a maturing company can’t just move fast and break things. But given that the faster you move, the higher the probability of breaking something, navigating the speed vs. accuracy conundrum becomes paramount. Giving employees a complete view of the environment and the results of their actions is the single biggest thing you can do to enable success. Put simply:

Maximum visibility depends on knowing four key things:

  1. What to do
  2. When to do it
  3. The starting state of the system
  4. What actually happened/is happening

Effective information flow for the first two is a core tenet of the Agile movement. Done right, Agile makes it clear to both engineers and project managers what needs to be done, and when. Engineers no longer need to wait to learn (or guess at) what a product manager intended, and product managers no longer have to guess how far along a project is, or whether it can be built as desired. This increase in visibility between product and engineering forms the basis of many of Agile’s advantages.

Numbers 3 and 4 might lack their own manifesto, but seasoned developers and ops engineers instinctively understand how critical they are. The methods and tools deployed to gain visibility into an environment fall broadly into five categories:

  • Application Performance Monitoring (APM)
  • Systems and Network Monitoring
  • Metrics Dashboards
  • Log Aggregation
  • Configuration Management

Collectively these categories represent a market worth more than $15 billion, and that’s not accounting for dominant open-source players in the space like Nagios, Grafana, ELK, and Ansible (among many, many others).

Why are so many resources aimed at solving this visibility issue?

The Benefits of Increased Visibility

Let’s use two fictitious organizations, Acme Corp and Nadir Corp, to explore how visibility impacts behavior and execution speed. In both companies any employee can access any piece of information—but the method and speed of access differ greatly.

Acme Corp has built a culture of radical transparency where every employee has immediate access to every piece of company information through a lightning-fast application accessible from anywhere in the world on any device. Employees have a top-level view of key information and can do ad hoc data exploration, giving them near-perfect visibility into the operation of the system at all times.

At Nadir Corp, every request for information goes through a rigorous process, occasionally with hard-copy sign-offs, before being granted. Employees must find out where the data is stored and whom to request it from, justify their request, and wait for approval. Once all of that work is complete, they can finally try to answer their question using the data they received.

In practice, of course, no company is as open as Acme (for very good security reasons!) and very few are as convoluted as Nadir. But from this example it’s brutally apparent which company will be able to investigate, reach decisions, and execute faster.

Employees at Nadir either 1) won’t bother trying to get data unless they absolutely have to, or 2) will look for shortcuts that allow quicker access to a slice of the data. Both behaviors perpetuate the speed vs. accuracy conundrum mentioned above: employees at Nadir are forced either to wait for key information before acting, or to act with limited information.

Teams or individuals who take the first option get left behind; those who take the second make more than their share of errors.

Every company has elements of Nadir Corp in it. Sometimes for good reasons (HR records), sometimes for no good reason (lack of priority/time), and sometimes for bad ones (silo building).

Companies that aspire to be more like Acme Corp and invest in finding and eliminating silos and legacy barriers to data will quickly realize the gains of increased visibility:

  • Increased visibility drives use of optimal data sources
  • Fast access to optimal data leads to more efficient work
  • More efficient work equals faster execution

In the age-old debate of good vs. fast vs. cheap, what should you do if you want good and fast but don’t have an unlimited budget? Invest in tools that allow employees to quickly get to key information, rapidly assess the results of their work, and continually refine their actions. Do that and those chronically overworked engineers and operations staff will be able to operate faster and with fewer errors. And isn’t that what we’re all building toward?

In my next posts, I’ll delve into the practical implications of increased visibility and common tools of the trade that promote visibility.