It’s been over three years since Google open-sourced the Kubernetes project. Even so, you might still be wondering what Kubernetes is and how you can get started using it. Well, you’re in the right place! In this post, I’m going to explain the basics you need to know to get started with Kubernetes. And I won’t just be throwing concepts at you—I’ll give you real code examples that will help you get a better idea of why you might need to use Kubernetes if you’re thinking about using containers.
Still, there are a few specific Django configuration options to keep in mind. In this article, we’ll
- create a Python virtual environment,
- set up a small Django project to work with,
- write a basic logging example,
- configure the Django logger, and
- explore the Django logging extensions.
If you want to skip writing code, you can find all of the code in this article over on GitHub.
Also, note that this article won’t go into details on the specifics of Python logging. If you want a primer on those, check out this previous post, Get Started Quickly With Python Logging, which covers the basics.
Now ready, set, log!
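As a preview of the configuration step, here’s a minimal sketch of the kind of LOGGING dictionary you’d place in settings.py. The formatter and handler names below are illustrative choices, not names Django prescribes; outside a Django project you can apply the same dictionary with the standard library’s dictConfig, as shown here:

```python
import logging
import logging.config

# A minimal LOGGING dict of the kind you would place in settings.py.
# The "verbose" and "console" names are illustrative, not required.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "{levelname} {asctime} {module} {message}",
            "style": "{",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        },
    },
    "loggers": {
        # Django routes its own messages through the "django" logger.
        "django": {
            "handlers": ["console"],
            "level": "INFO",
        },
    },
}

# In a Django project, settings.py just defines LOGGING and Django
# applies it for you; dictConfig does the same thing explicitly.
logging.config.dictConfig(LOGGING)
logger = logging.getLogger("django")
logger.info("Django logger configured")
```

Django applies this dictionary at startup, so any module can then call `logging.getLogger(__name__)` and inherit the configuration.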
As we round the bend into 2019, it’s worth thinking about where our industry is headed. There are many exciting and challenging developments ahead: blockchain scalability, functions as a service, databases as a service—the list goes on. We’re also moving more and more into an increasingly complex, distributed world. This means distributed tracing will become especially important.
AWS Lambda is Amazon’s serverless compute service. You can run your code on it without having to manage servers or even containers. It’ll automatically scale depending on how much work you feed into it. You can use it in data pipelines or to respond to web requests or even to compose and send emails. It’s the jack-of-all-trades way to execute code in the AWS cloud.
Although intended for use in the cloud, you can absolutely run Lambdas locally during development. When you do run Lambdas in the cloud, you’ll only pay for what you use. In some cases, this can save significant sums of money compared to the cost of running VMs or containers. While I’ve already touched on some of the reasons you’ll benefit from using AWS Lambdas, I want to focus on seven specific reasons in more detail.
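To make the “respond to web requests” case concrete, here’s a minimal sketch of a Python Lambda handler. The event shape follows API Gateway’s proxy integration, and the greeting logic is purely illustrative; as the paragraph above notes, you can invoke the handler directly with a fake event during local development:

```python
import json

# A minimal Lambda handler responding to a web request. The event
# shape assumes API Gateway's proxy integration; the greeting logic
# is illustrative only.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, there's no need for real infrastructure: call the handler
# with a hand-built event and a None context.
response = handler({"queryStringParameters": {"name": "Lambda"}}, None)
```

In the cloud, AWS calls `handler` for you and scales the number of concurrent invocations with the incoming traffic.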
Containers have changed the way we develop and maintain applications. One of the main promises of containers is that you’ll be able to ship software faster, but how that happens can seem a bit obscure. If you want to understand the benefits of containers, you first need to understand lifecycle management. Once you do, it’s easier to connect all the dots; then the aha moment will come naturally.
In this guide, I’ll use the Docker container engine—that way it’s easier to understand the lifecycle management behind it. Commands might differ in other container engines, but the concept is still valid. I’ll start with application development and finish with how to ship application changes. A container’s lifecycle only takes minutes to complete and is a reusable process.
As the move to the cloud and containers continues to make software delivery faster and easier, environments are getting more complex. At Scalyr, we believe that observability solutions need to help engineering and operations teams get access to all the data they need, fast. Along those lines, we are announcing new features to help teams support the latest container and orchestration technologies, improve collaboration, and streamline workflows for faster and easier issue identification and resolution.
Kubernetes Cluster-Level Logging
Our new Kubernetes cluster-level logging enables engineering teams to effectively monitor and troubleshoot Kubernetes environments by centralizing and visualizing logs by deployment. Regardless of source, Scalyr intelligently ingests, parses and organizes logs to give developers an application-level view for faster issue resolution in complex container environments.
With the new view, users are presented with log summaries by deployment rather than by each individual container. The automatic grouping of logs pertaining to a deployment gives developers a more holistic view of each application or service that may be running on multiple containers and pods. Insight into individual pods and nodes is also available, but that level of detail is abstracted by default so developers can focus on their application right away.
For many of the complex software issues faced by engineers, increased collaboration is key to finding solutions quickly. To improve collaboration between engineering and operations and between different engineering teams, our new chart annotations provide a way for users to call attention to specific points or windows of time with markers and customizable notes. Annotations can be manually added to dashboard charts to highlight potential issues and shared with any team member. This improves communication and productivity among engineering team members, giving them additional context to more quickly home in on the specific logs related to the problem at hand.
We’ve extended our integration with Slack to provide more native interaction within the Scalyr UI. When viewing a chart, users can select “Share with Slack” from the Share dropdown menu and immediately send the chart to another user or channel in Slack.
Stack Trace Linking
Scalyr now makes it possible to jump directly from log events with stack traces into the reference source code in your repositories. This streamlines your debugging workflow, making it faster and simpler to get under the hood, make tweaks, test hypotheses, and ultimately solve problems. Investigating exceptions is now as easy as a single mouse click. Scalyr supports any web-accessible repository such as GitHub.
We’ve improved our support for AWS CloudWatch to provide a simpler and more reliable way to import AWS logs. By importing your AWS logs into Scalyr, you’ll get a centralized and more holistic view of all of your services, including serverless AWS Lambda functions and other AWS services.
We will be releasing these new features in the coming weeks. If you’ll be at AWS
First, I’ll show you a quick example of manual logging with Scala. I’ll use IntelliJ IDEA to create and run a Scala project, using sbt to build the code, so you can follow along on any platform that supports Scala and Java.
Then, I’ll discuss details of how and why logging matters. Finally, I’ll move on to using the Scala Logging wrapper in an application and how it can improve your ability to monitor your applications and issues.
Let’s get started!
Waking up at 0445 to the sound of Reveille blaring across the Lackland Air Force Base Giant Voice System, rushing down the dormitory stairs just so my training instructors could bark orders at myself and the other Airmen while we ran miles around the outside track until the sun came up, I wondered if I was going to be able to make it. I wondered what the Air Force was going to be like. I wondered how things were going to change.
I thought basic training would be the most difficult part of the military because that’s what movies had depicted, but that was not the reality. After graduation, the real work began. Life was filled with on-the-job training, thick volumes of professional development courses that I had to memorize and be tested on, not to mention having to perform my job 10 hours each day, maintain physical fitness standards, go to college on my personal time, and somewhere in between all of that, find the time to sleep. Every day was insanely busy and challenging, but still rewarding. The folks I was lucky enough to work with were (and still are) some of the most inspiring people I have ever met, and they made even the most challenging days seem easy. It was all worth it. To say that I am fortunate to have spent 10 years in the United States Air Force is an understatement; I am extremely grateful. All of these experiences have given me the tools necessary to navigate the civilian world, especially my first job as the Office Manager here at Scalyr, but how do these experiences translate?
A Sense of Urgency
Both the military and tech environments require a sense of urgency, regardless of what your job title is. In my previous profession, a sense of urgency revolved around things like performing complex data analysis, drafting hundreds of pay orders for folks traveling all over the world, or even providing critical support to our Crisis Action Team. Nowadays, my sense of urgency has shifted to planning fun events that continue building upon our company’s culture, making sure our employees have the necessary equipment to perform their jobs effectively, and alerting the office when our daily lunch delivery might be running late! My job duties may have changed, but without a sense of urgency, nothing would be accomplished.
Service Before Self
Even though this is one of the Air Force’s core values, I believe this to be a universal value which everyone benefits from. Having the ability to put personal desires aside and function for the greater good of the team should be something that everyone works towards in a professional setting. Just like each position is vital to the success of the mission regardless of how big or small, I have discovered that the same applies to
Being flexible and accepting of change was as important in the military as it is in the tech world, and more specifically at a tech startup. “Aren’t you bored?” is a question I am asked frequently. A lot of people may think I had more work to do in the military than I do now, but the answer is no. I still have a never-ending list of things to do, and every day is impossible to plan because something inevitably comes up that alters its course. But all of that is good. Change is good. Expecting things to change and not getting comfortable is a part of life.
So what does all of this have to do with log management? Absolutely nothing! I am just here to give a different perspective and hope that
For some, the ideal picture of a modern application is a collection of microservices that stand alone. The design isolates each service with a unique set of messages and operations. They have a discrete code base, an independent release schedule, and no overlapping dependencies.
As far as I know, this type of system is rare, if it exists at all. It might seem ideal from an architectural perspective, but clients might not feel that way. There’s no guarantee that an application made up of independently developed services will share a cohesive API. Regardless of how you think about microservices vs. SOA, services should share a standard grammar, and communication between microservices is not always a design flaw.
The fact is, in most systems you need to share data to a certain degree. In an online store, billing and authentication services need user profile data. The order entry and portfolio services in an online trading system both need market data. Without some degree of sharing, you end up duplicating data and effort. This creates a risk of race conditions and data consistency issues.
At the same time, how do you share data without turning your microservices into a distributed monolith? What’s an effective and safe way to implement microservice communication? Let’s take a look at a few different mechanisms.
First, we’ll go over different sharing scenarios. Depending on how you use the data, you can share it via events, feeds, or request/response mechanisms. We’ll take a look at the implications of each scenario.
Then, we’ll cover several different mechanisms for microservice communication, along with an overview of how to use them.
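To preview the event scenario, here’s a minimal in-process sketch of event-based data sharing between services. The bus, topic name, and services are purely illustrative stand-ins; in production you’d publish through a broker such as Kafka or RabbitMQ rather than an in-memory object:

```python
from collections import defaultdict

# An in-process stand-in for a message broker. Illustrative only; a
# real system would use Kafka, RabbitMQ, or similar.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

class BillingService:
    """Keeps its own copy of profile data instead of querying the
    authentication service on every request."""
    def __init__(self, bus):
        self.profiles = {}
        bus.subscribe("user.profile.updated", self.on_profile_updated)

    def on_profile_updated(self, event):
        self.profiles[event["user_id"]] = event["email"]

bus = EventBus()
billing = BillingService(bus)

# The authentication service publishes a change; billing stays in
# sync without a direct request/response call.
bus.publish("user.profile.updated", {"user_id": 42, "email": "a@example.com"})
```

The trade-off: billing reads its local copy with no network hop, at the cost of eventual (rather than immediate) consistency with the publisher.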
Sometimes, when troubleshooting or monitoring a Docker container, we need to see the application’s output streams. Containerized applications generate standard output (stdout) and standard error (stderr) like any other software. The Docker daemon merges these streams and directs them to one of several locations, depending on which logging driver is installed. The default driver makes the container output easy to access, and if we want to copy the information to another location, we can redirect docker logs to a file.
Let’s take a look at how we can manipulate logs generated by the default json-file logging driver. This driver writes container output to JSON-formatted files in a system directory but provides a command-line tool for displaying log contents in their original format.
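As a quick sketch of what the json-file driver actually stores, each line of its log files is a JSON object with “log”, “stream”, and “time” keys. The sample lines below are illustrative; on a real host the files live under /var/lib/docker/containers/ and usually require root access, which is why `docker logs` is the more convenient front end:

```python
import json

# Illustrative sample of json-file driver output: one JSON object per
# line, with the raw output in "log" and its origin in "stream".
sample = """\
{"log":"starting server\\n","stream":"stdout","time":"2019-01-01T00:00:00Z"}
{"log":"bind failed\\n","stream":"stderr","time":"2019-01-01T00:00:01Z"}
"""

entries = [json.loads(line) for line in sample.splitlines()]

# Because stdout and stderr are tagged per entry, we can separate the
# merged streams again, e.g. to pull out only the errors.
stderr_lines = [e["log"] for e in entries if e["stream"] == "stderr"]
```

This per-entry tagging is what lets `docker logs` reproduce the container’s output in its original form even though the daemon stores both streams in one file.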