Log File Too Big — What Should I Do?

You have a problem. But you don’t just have an ordinary problem. You have one of the most frustrating kinds of problems in the technical world. In the most basic terms, you’re trying to open a log file that’s too big to open. But “log file too big” doesn’t fully capture the frustration or the problem.

You need something out of your log file, so you go to open it. Then you wait. And wait. And wait.

After some amount of time, your text editor just crashes. Hoping it’s a fluke, you try again, waiting 15 minutes before another crash. So you’re half an hour in and not only have you not solved your actual problem — you haven’t even successfully taken what should be the simplest imaginable step toward solving it.

This combination of a long feedback loop with a non-deterministic outcome is what makes this so maddeningly frustrating. But fear not. Let’s take a look at how you can solve this, starting with the quickest and most superficial route and working toward root cause analysis.

Pick a Different Tool at Your Disposal

If opening this log file is crashing an editor, like, say, Notepad, then your easiest step is to use a different editor. At least that way you can know that fate will reward your waiting with an opened file rather than with a crash.

Your path of least resistance here is to use something you already have installed. So consider the following utilities, platform by platform.

  • For Windows, you can use WordPad. If you have enough memory to cover the size of the file you want to edit, WordPad will load it. So these days, that’s quite likely to apply to files even topping a gig in size.
  • For Mac, use Vim. It should be able to handle as big a file as you have memory, and with good search besides.
  • There are a lot of different flavors of Linux out there, so it’s a little harder to talk about default installations. But if you have it, you can also use Vim here. If not, you can install it easily, and you can use tail -n X at the command line, where X is the number of lines from the end of the file you’d like to see.

That should at least get you started. You should be able to see your file without needing to wait for something to maybe or maybe not crash.
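
If you find yourself doing this often enough to script it, the tail idea is easy to reproduce in a few lines. Here’s a minimal Python sketch (the file name is a placeholder) that reads only the end of the file, so memory use stays flat no matter how large the log grows:

    # Print the last n lines of a large file without loading it all.
    # Seeks backward from the end in fixed-size chunks.
    def tail(path, n=100, chunk_size=64 * 1024):
        with open(path, "rb") as f:
            f.seek(0, 2)              # 2 = os.SEEK_END; jump to end of file
            end = f.tell()
            pos = end
            data = b""
            while pos > 0 and data.count(b"\n") <= n:
                pos = max(0, pos - chunk_size)
                f.seek(pos)
                data = f.read(end - pos)
            return [line.decode("utf-8", errors="replace")
                    for line in data.splitlines()[-n:]]

    if __name__ == "__main__":
        for line in tail("application.log", n=50):  # hypothetical file name
            print(line)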

Download and Use a Text Editor Meant for This

If you have a little more patience, you should ask yourself whether your current need is a one-time-only situation or if you’ll be viewing and editing a lot of large files. If the latter, you’ll want to get more deliberate about the tools in your toolbox. I might suggest this even if you think this is a one-time need. Familiarizing yourself with a new, powerful text editor can’t hurt anything.

The number of text editors available to you is FAR too large to enumerate here. But Wikipedia has an extensive page on them, including specific information about file size.

If the problem solved by opening your large file isn’t too pressing, you could always engage in some yak-shaving. But you should probably solve that problem first, using a tool at your disposal. Then come back and spend some time evaluating your text editor options. Find one that can open large files and that has other features you like besides. I’d say even try out a few of them.

If large files figure to be part of your life going forward, you should have a plan of attack for them.

How You Can Shorten the Defect Life Cycle

Ah, the software defect. It’s the bane of our collective existence, and it also seems unavoidable. Okay, frankly, it probably is unavoidable, for all intents and purposes. But that doesn’t mean we’re powerless to do anything about it. We can chip away at its impact by reducing its severity and by shortening the defect life cycle.

What Is the Defect Life Cycle?

Muting the impact of defects is a self-explanatory endeavor, but what do I mean by “defect life cycle”? Well, first, consider the word “cycle.” This borrows from the lean idea of cycle time, which is fancy Six-Sigma speak for “How long does it take from start to finish?”

Okay, so why not just call it “defect lifetime”? I suppose I could have used that term. But it omits a subtle yet crucial consideration. A defect in our software moves through a series of phases and steps as people work to correct it.

“Lifetime” makes it sound like fate gives birth to the defect, and then it simply exists until it naturally expires. But that’s not at all what happens. Instead, the team collaborates methodically to track down the defect, assess it, address it, and roll out a fix of some kind. So we think about a defect life cycle rather than a defect just living its life quietly out in the country somewhere.

But terminology and philosophy aside, how do you shorten the defect life cycle?

Production defects tend to generate stress and keep people up at night. From the moment a user reports it until the moment someone resolves the problem, tensions run higher. Let’s take a look at how to reduce the length of that tense time.

Log Management: What Is It and Why You Need It

To understand log management, you first need to understand what problem it solves. Once you see that, you’ll know both what it is and why you need it.

Software these days involves a lot of complexity that didn’t exist once upon a time. We’ve moved things into the cloud, created software/platforms/infrastructure as services, and embraced distributed computing.

That’s a sea change from the good ol’ days of the 1990s. Back then, you’d write a bunch of code, build it, put it on CDs or floppy disks, and mail it to people. It’s even a sea change from the 2000s, when the web application took over. Instead of CDs, you’d set up a web server, deploy your software to that, and let users and their browsers have at it.

But today, we have containers and microservices. We have software intelligence distributed around the globe, spinning up and down on demand, collaborating and orchestrating. We’ve traded the simplicity of the historical monolith for the flexibility and complexity of distributed intelligence.

Log Files in a Distributed World

Think about the change I’ve just described. And now imagine what that means for the existence of a log file.

In the 1990s, you’d add code to your application that dumped information to a single log file. If your users had problems, they could zip up that log file, along with an OS log file for good measure, and send those to you for troubleshooting. With 2000s web applications, that same application log file, along with the web server log file and the database log file, did the trick.

But now? Good luck. Your production operations include six RESTful microservices on six different servers, a bunch of on-demand containers, a few miscellaneous web apps, a service bus, and who knows what else? Each of those concerns is contained, isolated, simple, and useful.

But troubleshooting across those concerns, when the issue happens in the gaps, can be a mess. And gathering 20 different log files that you attempt to reassemble into some facsimile of order doesn’t help matters at all.
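
To make the pain concrete, the manual version of that reassembly usually means merging every file by timestamp. A rough Python sketch of the chore, assuming each line starts with a sortable ISO-8601 timestamp (the file names are placeholders):

    import heapq

    # Interleave several already-sorted log files into one chronological
    # stream. A timestamp prefix makes plain string comparison work.
    def merge_logs(paths):
        files = [open(p, encoding="utf-8", errors="replace") for p in paths]
        try:
            for line in heapq.merge(*files):
                yield line
        finally:
            for f in files:
                f.close()

    if __name__ == "__main__":
        # Hypothetical per-service logs gathered from each host.
        for line in merge_logs(["auth.log", "cart.log", "payments.log"]):
            print(line, end="")

And that sketch assumes every service agrees on a timestamp format, which, in practice, they rarely do.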

Log Management to the Rescue

That is where the idea of log management as a first-class need enters the picture. If you have a desktop app or a simple web app, you can probably get by with grep, text editors, and elbow grease. But as soon as you grow beyond that, you’re going to need a better approach.

Log management is that better approach. Instead of regarding your applications’ logs as separate, unrelated entities, you conceive of them as parts of a whole. You weave them together and then use them to paint a dynamic, intelligent, and visual picture of the health of all your systems.

If that sounds daunting, don’t worry. You don’t need to implement all of this yourself. In fact, you definitely shouldn’t do it yourself any more than you should write your own source control. A lot of talented toolmakers have invested significant effort in helping you with your log management.

But rather than focus on specific tools, let’s take a look at log management as a function of its components. What does a good log management scheme involve, and what should you expect out of it?

Zalando Engineering Team Standardizes on Scalyr for Log Management   

Overview 

Zalando, Europe’s leading online fashion platform, made the transition to the cloud two years ago. As part of the move to AWS, they were looking for a log management tool that was flexible enough to fit their agile engineering culture, powerful enough to scale, and fast enough to allow them to investigate incidents. After evaluating several solutions, they standardized on Scalyr as their log management solution across their entire engineering team.

About Zalando

Zalando is Europe’s leading online fashion platform for women, men, and children. They offer their customers a one-stop, convenient shopping experience with an extensive selection of fashion articles including shoes, apparel, and accessories, with free delivery and returns. Their assortment of almost 2,000 international brands ranges from popular global brands and fast fashion to local labels, complemented by their private-label products. Their localized offering addresses the distinct preferences of their customers in each of the 15 European markets they serve: Austria, Belgium, Denmark, Finland, France, Germany, Italy, Luxembourg, the Netherlands, Norway, Spain, Sweden, Switzerland, Poland, and the United Kingdom.

Customer Challenges

Zalando transitioned to the cloud two years ago. They went from a monolith code base to microservices in the cloud, which changed their log management needs. They evaluated Scalyr along with three other solutions.

Their evaluation criteria required:

  • An agent that can collect all the logs on every service
  • A UI where engineers can search logs
  • The ability to search specific applications
  • The ability to see every single log in the UI
  • The ability to scale
  • A fit with their engineering culture of Radical Agility

After evaluating the four solutions, they narrowed the field to two and let the teams decide. With Scalyr, they liked how easy it was to implement the agent and roll it out onto EC2 instances, and they were able to define custom parsers for log lines.

The engineering culture at Zalando is built on Radical Agility. In order to empower their teams with autonomy, they need to automate everything around how they provision machines. This includes giving people the tools they need to do everything in a compliant way in their accounts. They found that the custom parsers were particularly important in giving each team flexibility to do things in their own way, which is a key pillar of the success of the engineering team.

Results of Using Scalyr

Scalyr is now deployed across the entire engineering team at Zalando. The main ways the team uses Scalyr are:

  • Incident response and mitigation
  • Analysis of what’s happening on a service
  • Metrics for monitoring
  • Proactive investigations

They were able to get Scalyr up and running very fast. Once it was set up, their teams had immediate access to their logs, with no agent configuration required.

Given the number of autonomous services Zalando runs, they needed a coherent way to get at the logs.

When asked how Scalyr has helped them, Tim Kröger, Head of Engineering – Visibility, and Andreas Pfeiffer, Cloud and Network Architect, responded, “It feels like asking how breathing helped you with your life.”

Before Scalyr, when an application crashed, the developer had to go to the log server, find the host where the app was running, and grab all the logs. That would take at least 10 minutes. With Scalyr, developers can deploy an application and, when an error occurs, see the logs immediately: they log into Scalyr, give the app ID, and see all the logs from the deployment. They were able to go from 10 minutes of work to 13 seconds (which includes logging into Scalyr!).

Overall, Scalyr has helped Zalando make the transition to the cloud and mitigated the risk of increasing errors while moving to AWS.

Be Kind to Your Log File (And Those Reading It)

The log file has existed since programmer time immemorial. The software world moves really fast, but it’s not hard to imagine someone decades ago poring over a log file. This is, perhaps, as iconic as programming itself.

But sadly, for many shops, the approach hasn’t really evolved in all of that time. “Dump everything to a file and sort it out later, if you ever need it” was the original idea. And that holds up to this day.

Put another way, we tend to view the log file as a dumping ground for any whim we have while writing code. When in doubt, log it. And, while that’s a good impulse, it tends to lead to a “write once, read never” outcome. If you’ve ever started to troubleshoot some issue you have via OS logs and then given up due to being overwhelmed, you know what I mean.

But, even if you don’t, you can picture it. A log file with gigabytes of cryptic, similar-looking text, page after page, discourages meaningful troubleshooting. Reading it is so boring that you find yourself hypnotized until, eventually, you blink, decide this isn’t going to help, and move on to other means of troubleshooting.

Rethinking the Log File

So, with that in mind, let’s rethink the log file. People focus an awful lot on intuitive user experiences these days, and we can draw from that. Think of the log file not as some dusty dumping ground for every errant thought your production code has, but as something meant to be consumed.

Who will read your log file? And for what purpose? And, with the answers to those questions, how can we make it easier for them?

This is an important enough consideration that products have emerged to help facilitate consuming log files, making troubleshooting and data gathering easier. People are buying these products to make their lives easier, so that should tell you something. They desperately want to filter the signal from the noise and get valuable data out of their logs.

So let’s take a look at how you can help with that when writing to log files. These tools have sophisticated parsing capabilities, but that doesn’t mean you shouldn’t do your part to help consumers of your log files. Here’s how you can do that.
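
To give a taste of what “doing your part” looks like at the source, here’s a minimal Python sketch using the standard logging module: every line gets a timestamp, a severity, and searchable key=value context (the file name, logger name, and fields are illustrative):

    import logging

    # One consistent, parseable line format: timestamp, level, name, message.
    logging.basicConfig(
        filename="orders.log",  # hypothetical destination
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("orders")

    def place_order(order_id, user_id):
        # key=value pairs keep context greppable: grep "order_id=1042"
        log.info("order received order_id=%s user_id=%s", order_id, user_id)
        try:
            ...  # business logic would go here
        except Exception:
            # exc_info=True records the stack trace for whoever reads this later.
            log.error("order failed order_id=%s", order_id, exc_info=True)
            raise

Notice that a reader (or a parsing tool) can pick out when, how severe, where, and what from every single line.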

Calculating the ROI of Log Analysis Tools

Anyone in a technology organization can relate to a certain frustration. You know that adopting a certain tool or practice would help you. So you charge forward with the initiative, looking for approval. But then someone — a superior, most likely — asks you to justify it. “Give me the business case for it,” they say. And then, a little flummoxed, you feel the gears start turning in your head. Today, I’d like to talk about that very issue in the specific context of log analysis tools.

If you have significant operations of any kind in production, you’re almost certainly generating logs. If not, you should be. You’re also probably monitoring those logs in some fashion, and if you’re consuming them, you’re analyzing them one way or another. But maybe you’re doing this manually, and you’d rather use a tool for log analysis. How do you justify adopting that tool? How do you justify paying for it?

ROI: The Basic Idea

To do this, you have to veer into the world of business and entrepreneurship for a moment. But don’t worry — you’re not veering too far into that world. Just far enough to acquire a skill that any technologist ought to have.

I’m talking about understanding the idea of return on investment (ROI). The formula is just (value gained − cost) / cost, but the idea is really dead simple. If you’re going to pay for something, will that something bring you as much or more value than what you paid? When the answer is “yes,” then you have a justifiable decision. If the answer is “no,” then you can’t make a good case for the investment.

So, for log analysis tools, the question becomes a pretty straightforward one. Will your group realize enough cost savings or additional revenue generation from the tool to justify its cost?

Employing Back-of-the-Napkin Math

When you’re asked to justify purchasing a tool, you might wonder how much rigor you must bring to bear. People working with technology tend to have an appreciation for objective, empirical data.

When making a business case, if you can back it with objective, empirical data, that’s great. You should absolutely do so. But that’s often hard because it involves making projections and generally reasoning about the future. We humans like to believe we’re good at this, but if that were true, we’d all be rich from playing the stock market.

So you need to make some assumptions and build your case on the back of those assumptions. People sometimes refer to that as “back-of-the-napkin math” and it’s a perfectly fine way to build a business case, provided you highlight the assumptions that you make.

For instance, let’s say that I wanted to spend $50 on a text editor. I might project that its feature set would save me 20 minutes per day of brainless typing. I’d highlight that assumption and say that, if true, the investment would pay off after less than a week, given my current salary. These are the sorts of arguments that bosses and business folks appreciate.
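
If you want that argument to be repeatable, the arithmetic is trivial to script. A back-of-the-napkin sketch in Python, where every number is an assumption you’d state out loud:

    # Back-of-the-napkin payback period for a tool purchase.
    # Every input is an assumption; highlight them when you make the case.
    HOURLY_RATE = 50.0           # assumed loaded cost of your time, $/hour
    MINUTES_SAVED_PER_DAY = 20   # assumed daily time savings
    TOOL_COST = 50.0             # sticker price, $

    daily_savings = (MINUTES_SAVED_PER_DAY / 60.0) * HOURLY_RATE
    payback_days = TOOL_COST / daily_savings
    first_year_roi = (daily_savings * 250 - TOOL_COST) / TOOL_COST  # ~250 workdays

    print(f"Pays for itself in {payback_days:.1f} working days")
    print(f"First-year ROI: {first_year_roi:.0%}")

With those particular assumptions, the tool pays for itself in three working days, which is exactly the kind of concrete claim that survives a meeting.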

First, the Cost of Log Analysis Tools

To make a business case and a credible projection of ROI, you need two projected pieces of data: the cost (i.e., the amount of the investment you’re looking for a return on) and the savings or revenue benefit. I’ll dedicate the rest of this post to talking about how log analysis tools can save companies money or even add to their bottom line. But first, let’s take a look at their costs.

The most obvious cost is the sticker price of the tool. That might be an initial lump sum, but in this day and age, it’s usually going to be a recurring monthly subscription cost. So when making your case, be sure to take that into account.

There’s also a second, subtler cost that you should prepare yourself to address. Installing, learning, and managing the tools takes time from someone in the IT organization. You can (and should) argue that it winds up saving time in the end, but you also must acknowledge that investing employee time (and thus salary) is required.

Once you have those costs established, you can start to reason about the benefits.

Scalyr October Product Updates

The engineering team has been hard at work on new features and updates over the last few weeks. We are excited to share these changes with you and would love to hear your feedback.

Agent

  • By popular demand, we’ve raised the limit on log messages to 10,000 bytes (from 3,500). If you’re using the Scalyr Agent, please upgrade to version 2.0.30 (available by the end of next week) to take advantage of this change: https://www.scalyr.com/help/scalyr-agent#upgrades.

  • The Scalyr Agent now supports Amazon EC2 Container Service (ECS).
  • The Scalyr Agent can now rename log files before uploading them to Scalyr.
  • Improved support for Kubernetes logs. You can now configure the Scalyr Agent to parse the JSON records generated by Kubernetes and extract the original log text (see the sketch just after this list).
  • The Scalyr Agent can now redact sensitive data (e.g. user email addresses) by replacing it with a hash, allowing you to see patterns without revealing the raw data.
  • The Scalyr Agent’s URL monitor plugin can now generate POST and PUT requests, and can specify HTTP headers and a request body.
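
For context on the Kubernetes item above: container runtimes such as Docker typically write each log line as a small JSON record wrapping the original text. Here’s a sketch of what extraction involves (field names follow the common Docker json-file format; treat the specifics as illustrative, not as how the agent is implemented):

    import json

    # A typical Kubernetes/Docker json-file log record looks like:
    #   {"log": "GET /health 200\n", "stream": "stdout", "time": "..."}
    def extract_original(line):
        record = json.loads(line)
        return record["time"], record["stream"], record["log"].rstrip("\n")

    sample = '{"log": "GET /health 200\\n", "stream": "stdout", "time": "2018-10-03T14:07:21Z"}'
    print(extract_original(sample))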

Improved Search and Team Settings

  • Improved bookmark and link-sharing support for multiple teams. Search URLs now record the team being viewed. When you (or another user) later open the URL, if you’re not linked to the correct team, you’ll be prompted to switch.

  • When you customize the fields shown in the log view, your settings will now persist until you change them. (If you work with multiple teams, your settings are saved separately for each team.)
  • You can now customize the default search time period (normally 4 hours). See “Set Default Search Time Span” on the Tips & Tricks page.

Simplified Pricing

  • We are making some changes to our pricing to provide additional flexibility and to make the full Scalyr feature set available to everyone. You can view the new options at scalyr.com/pricing. All affected customers have received an email with more details. These changes will go live November 1st.

Application Logging Practices You Should Adopt

When talking about logging practices, you could segment the topic into three main areas of concern. Those would include, in chronological order, application logging, log aggregation, and log consumption. In plain English, you set up logging in your source code, collect the results in production, and then monitor or search those results as needed.

For aggregation and consumption, you will almost certainly rely on third-party tools (an advisable course of action). But the first concern, application logging, is really up to you as an application developer. Sure, logging frameworks exist. But, ultimately, you’re writing the code that uses them, and you’re dictating what goes into the log files.

And not all logging code is created equal. Your logging code can make everyone’s lives easier, or it can really make a mess. Let’s take a look at some techniques for avoiding the mess situation.

What Do You Mean by Application Logging Practices?

Before diving into the practices themselves, let’s briefly clarify what application logging practices are. In the context of this post, I’m talking specifically about things that you do in your application’s source code.

In practice, this means how you go about using the logging API. How do you fit it into your source code? How do you use it? And what kinds of messages do you log?
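
To ground those questions, here’s one common shape the answers take in Python (a sketch, not a prescription): a module-level logger named after the module, with the level of each message chosen by audience:

    import logging

    # One logger per module, named after it, so every message
    # reveals exactly where it came from.
    logger = logging.getLogger(__name__)

    def sync_inventory(items):
        logger.debug("starting sync of %d items", len(items))  # for developers
        for item in items:
            if item.get("count", 0) < 0:
                # Surprising but recoverable: warn the operators.
                logger.warning("negative count for sku=%s", item.get("sku"))
        logger.info("inventory sync complete")  # for everyone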

OkCupid Falls For Scalyr

OkCupid has a long history of building things in-house. They have always been a deeply technical company that doesn’t settle for what’s already available and isn’t afraid to build something new. In fact, OkCupid doesn’t use Apache or Nginx — they built their own web server. So it isn’t surprising that when it came time to reconsider how they manage logs, they considered both existing solutions and the possibility of building something in-house. In the end, they found that Scalyr both met their log management needs for today and has the potential to grow as a solution for them to solve a number of other challenges they face. I recently had a chance to catch up with Alex Dumitriu, CIO at OkCupid, about their early experiences with Scalyr.

Introduction to Continuous Integration Tools

In a sense, you could call continuous integration the lifeblood of modern software development. So it stands to reason that you’d want to avail yourself of continuous integration tools. But before we get to the continuous integration tools themselves, let me explain why I just made the claim that I did. How do I justify calling continuous integration the lifeblood of software development? Well, the practice of continuous integration has given rise to modern standards for how we collaborate around and deploy software.

What Is Continuous Integration?

If you don’t know what continuous integration is, don’t worry. You’re not alone. For plenty of people, it’s a vague industry buzzword. Even some people who think they know what it means have the definition a little muddled. The reason? Well, the way the industry defines it is a little muddled.

The industry mixes up the definitions of continuous integration itself, and the continuous integration tools that enable it. To understand this, imagine if you asked someone to define software testing, and they responded with, “oh, that’s Selenium.” “No,” you’d say, “that’s a tool that helps you with software testing — it isn’t software testing itself.” So it goes with continuous integration.

Continuous integration is conceptually simple. It’s a practice. Specifically, it’s the practice of a development team syncing its code quite regularly. Several times per day, at least.

Merge Parties: The Continuous Integration Origin Story

To understand why this matters, let me explain something about the bad old days. Twenty years ago, teams used rudimentary source control tools, if any at all. Concepts like branching and merging didn’t really exist in any meaningful sense.

So, here’s how software development would go. First, everyone would start with the same basic codebase. Then, management would assign features to developers. From there, each developer would go back to their desk, code for months, and then declare their features done. And, finally, at the end, you’d have a merge party.

What’s a merge party? It’s what happens when a bunch of software developers all slam months’ worth of changes into the codebase at the same time. You’re probably wondering why anyone would call this a party when it sounds awful. Well, it got this name because it was so awful and time-consuming that teams made it into an event. They’d stay into the evenings, ordering pizza and soda, and work on this for days at a time. The “party” was partially ironic and partially a weird form of team building.

But whatever you called it, party favors or not, it was really, really inefficient. So some forward-thinking teams started to address the problem.

“What if,” they wondered, “instead of doing this all at once in the end, with a big bang, we did it much sooner?” This line of thinking led to an important conclusion. If you integrated changes all of the time, it added a little friction each day, but it saved monumental pain at the end. And it made the whole process more efficient.
