Getting Started Quickly With Django Logging

Django is one of the most popular Python web application frameworks, used by numerous organizations. Since it employs the built-in Python logging facility, logging in Django is a snap.

Still, there are a few specific Django configuration options to keep in mind. In this article, we’ll

  • create a Python virtual environment,
  • set up a small Django project to work with,
  • write a basic logging example,
  • configure the Django logger, and
  • explore the Django logging extensions.

If you want to skip writing code, you can find all of the code in this article over on GitHub.

Also, note that this article won’t go into details on the specifics of Python logging. If you want a primer on those, check out this previous post, Get Started Quickly With Python Logging, which covers the basics.
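
For a taste of what’s ahead, Django routes everything through the standard LOGGING dictionary in settings.py. Here’s a minimal sketch; the handler choice, log level, and example view are illustrative assumptions, not the article’s exact setup:

    # settings.py -- a minimal console logger (illustrative values only)
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "console": {"class": "logging.StreamHandler"},
        },
        "root": {"handlers": ["console"], "level": "INFO"},
    }

    # views.py -- grab a module-level logger the standard Python way
    import logging
    from django.http import HttpResponse

    logger = logging.getLogger(__name__)

    def index(request):
        logger.info("Index page requested")
        return HttpResponse("Hello, logs!")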

Now ready, set, log!

Read More

Why Distributed Tracing Will Be So Important in 2019

As we round the bend into 2019, it’s worth thinking about where our industry is headed. There are many exciting and challenging developments ahead: blockchain scalability, functions as a service, databases as a service—the list goes on. We’re also moving more and more into an increasingly complex, distributed world. This means distributed tracing will become especially important.

Read More

AWS Lambda Use Cases: 7 Reasons Devs Should Use Lambdas

AWS Lambda is Amazon’s serverless compute service. You can run your code on it without having to manage servers or even containers. It’ll automatically scale depending on how much work you feed into it. You can use it in data pipelines or to respond to web requests or even to compose and send emails. It’s the jack-of-all-trades way to execute code in the AWS cloud.

Although Lambda is intended for use in the cloud, you can absolutely run Lambdas locally during development. When you do run Lambdas in the cloud, you’ll only pay for what you use. In some cases, that can save significant sums of money compared to the cost of running VMs or containers. While I’ve already touched on some of the reasons you’ll benefit from using AWS Lambdas, I want to focus on seven specific reasons in more detail.
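
To make that concrete, a Lambda is just a handler function that receives an event and a context. Here’s a minimal sketch; the event fields and the response shape (API Gateway’s proxy format) are assumptions for illustration:

    # handler.py -- a minimal AWS Lambda handler
    import json

    def lambda_handler(event, context):
        # 'event' carries whatever the trigger sends: an API Gateway request,
        # an S3 notification, a scheduled invocation, and so on.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

Running it locally is plain Python: print(lambda_handler({"name": "dev"}, None)).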

Read More

A Guide to Container Lifecycle Management

Containers have changed the way we develop and maintain applications. One of the main promises of containers is that you’ll be able to ship software faster, but how that happens can seem a bit obscure. If you want to understand the benefits of containers, you first need to understand container lifecycle management. Once you do, it’s much easier to connect the dots, and the aha moment will come naturally.

In this guide, I’ll use the Docker container engine, since that makes the lifecycle management behind it easier to follow. Commands may differ in other container engines, but the concepts still apply. I’ll start with application development and finish with how to ship application changes. A container’s lifecycle takes only minutes to complete and is a repeatable process.
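
As a rough sketch of that loop, here’s build, run, stop, and remove expressed with the Docker SDK for Python rather than the CLI the guide works with; the image tag and build path are hypothetical:

    # lifecycle.py -- one pass through a container's lifecycle with docker-py
    import docker

    client = docker.from_env()

    # Build an image from the Dockerfile in the current directory.
    image, build_logs = client.images.build(path=".", tag="myapp:dev")

    # Run a container from it in the background.
    container = client.containers.run("myapp:dev", detach=True)

    # Stop and remove it when you're done; shipping a change is just
    # rebuilding with a new tag and repeating the loop.
    container.stop()
    container.remove()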

Read More

Introducing deeper support for Kubernetes, collaboration features and stack tracing for easier troubleshooting

As the move to the cloud and containers continues to make software delivery faster and easier, environments are getting more complex. At Scalyr, we believe that observability solutions need to help engineering and operations teams get access to all the data they need, fast. Along those lines, we are announcing new features to help teams support the latest container and orchestration technologies, improve collaboration, and streamline workflows for faster and easier issue identification and resolution.

Kubernetes Cluster-Level Logging

Our new Kubernetes cluster-level logging enables engineering teams to effectively monitor and troubleshoot Kubernetes environments by centralizing and visualizing logs by deployment. Regardless of source, Scalyr intelligently ingests, parses and organizes logs to give developers an application-level view for faster issue resolution in complex container environments.

Users are presented with log summaries by deployment rather than by each individual container. The automatic grouping of logs pertaining to a deployment gives developers a more holistic view of each application or service that may be running on multiple containers and pods. Insight into individual pods and nodes is also available, but that level of detail is abstracted by default so developers can focus on their application right away.

Read More

Get Started Quickly With Scala Logging

We’ve covered how to log in eight different languages so far: C#, Java, Python, Ruby, Go, JavaScript, PHP, and Swift. We’ve also included a few libraries and platforms, like Log4J, Node.js, Spring Boot, and Rails. Now, it’s time to show how you can get started quickly with Scala logging.

First, I’ll walk you through a quick example of manual logging in Scala. I’ll use IntelliJ IDEA to create and run a Scala project, using sbt to build the code. That way, you can use the application to get started on any platform that supports Scala and Java.

Then, I’ll discuss the details of how and why logging matters. Finally, I’ll move on to using the Scala Logging wrapper in an application and show how it can improve your ability to monitor your applications and track down issues.

Let’s get started!

Read More

Everyone has a part in making the planes fly

Waking up at 0445 to the sound of Reveille blaring across the Lackland Air Force Base Giant Voice System, rushing down the dormitory stairs just so my training instructors could bark orders at me and the other Airmen while we ran miles around the outside track until the sun came up, I wondered if I was going to be able to make it. I wondered what the Air Force was going to be like. I wondered how things were going to change.

Read More

Microservices Communication: How to Share Data Between Microservices

For some, the ideal picture of a modern application is a collection of microservices that stand alone. The design isolates each service with a unique set of messages and operations. They have a discrete code base, an independent release schedule, and no overlapping dependencies.

As far as I know, this type of system is rare, if it exists at all. It might seem ideal from an architectural perspective, but clients might not feel that way. There’s no guarantee that an application made up of independently developed services will share a cohesive API. Regardless of how you think about microservices vs. SOA, services should share a standard grammar, and microservices communication is not always a design flaw.

The fact is, in most systems you need to share data to a certain degree. In an online store, billing and authentication services need user profile data. The order entry and portfolio services in an online trading system both need market data. Without some degree of sharing, you end up duplicating data and effort. This creates a risk of race conditions and data consistency issues.

At the same time, how do you share data without turning your microservices into a distributed monolith? What’s an effective and safe way to implement microservice communication? Let’s take a look at a few different mechanisms.

First, we’ll go over different sharing scenarios. Depending on how you use the data, you can share it via events, feeds, or request/response mechanisms. We’ll take a look at the implications of each scenario.

Then, we’ll cover several different mechanisms for microservice communication, along with an overview of how to use them.
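
As a toy illustration of the event-driven flavor, one service publishes a change and another keeps its own copy of the data it cares about; the in-process bus, topic name, and payload below are all hypothetical:

    # events.py -- a toy in-process event bus to illustrate event-based sharing
    from collections import defaultdict

    subscribers = defaultdict(list)

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, payload):
        for handler in subscribers[topic]:
            handler(payload)

    # The billing service caches just the profile fields it needs.
    billing_addresses = {}

    def on_profile_updated(profile):
        billing_addresses[profile["user_id"]] = profile["billing_address"]

    subscribe("user.profile.updated", on_profile_updated)

    # The profile service announces the change; it never calls billing directly.
    publish("user.profile.updated",
            {"user_id": 42, "billing_address": "1 Main St"})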

Read More

How to Redirect Docker Logs to a Single File

Sometimes, when troubleshooting or monitoring a Docker container, we need to see the application’s output streams. Containerized applications generate standard output (stdout) and standard error (stderr) like any other software. The Docker daemon merges these streams and directs them to one of several locations, depending on which logging driver is installed. The default driver makes the container output easy to access, and if we want to copy the information to another location, we can redirect docker logs to a file.

Let’s take a look at how we can manipulate logs generated by the default json-file log driver. This driver places the log output in JSON-formatted files in a system directory but provides a command-line tool for displaying log contents in their original format.
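
As a quick preview of where we’re headed, here’s the same idea expressed with the Docker SDK for Python instead of the docker logs CLI command the article refers to; the container name and output path are hypothetical:

    # redirect_logs.py -- copy a container's merged output into one file
    import docker

    client = docker.from_env()
    container = client.containers.get("my-container")  # hypothetical name

    with open("container.log", "wb") as log_file:
        # logs() returns the stdout/stderr stream captured by the json-file
        # driver, decoded back to plain output.
        log_file.write(container.logs(stdout=True, stderr=True))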

Read More

Top 4 Ways to Make Your Microservices Not Actually Microservices

Microservices have gained such a level of popularity these days that they’re often touted as “the way” to build software. That’s all well and good, except there are a lot of microservice anti-patterns flying around.

The vast majority of microservice systems I’ve seen aren’t really microservices at all. They’re separately deployable artifacts, sure. But they’re built in a way that causes developers more pain than it solves problems.

So let’s dive a little more into this. First, we’ll talk about what a microservice was intended to be. Then we’ll get into some of these anti-patterns.

Read More