Expert Guide to IntelliJ License Server (https://lightrun.com/expert-guide-to-intellij-license-server/, Wed, 16 Dec 2020)

JetBrains is a world-class vendor of developer tools that are loved by millions of geeks. IntelliJ IDEA, ReSharper, PhpStorm, PyCharm, and WebStorm are all JetBrains products that have become household names in their respective developer communities.

As development teams grow and get more diverse, companies start to purchase more subscriptions to JetBrains tools. However, buying subscriptions is just the first step. Engineering teams need to distribute licenses among existing developers, provide licenses to new developers as they come on board, and revoke licenses from developers as they leave or switch to a different technology stack.

License distribution takes time and effort, and neglecting license management leads to confusion, downtime, and overspending.
Fortunately, JetBrains provides a set of tools that take the pain away from license management. One of these tools is JetBrains Floating License Server.

If you are an engineering manager or CTO in an organization using JetBrains tools, this guide is for you. Here’s what we’ll cover in this comprehensive guide to JetBrains License Server:

  • A quick summary of what JetBrains License Server is
  • Reasons to consider using License Server
  • Common development team scenarios that License Server addresses
  • Your next steps, if you decide that your company needs to use License Server

What Is JetBrains License Server?

JetBrains Floating License Server (sometimes called “IntelliJ License Server” or just “License Server”) helps dynamically distribute licenses between JetBrains products used in your company. This means you do not need to issue and revoke licenses manually.

License Server gives you better control over product usage with features such as whitelists, blacklists, and priority lists. Additionally, it monitors the adoption of JetBrains tools in your company, letting you know how many licenses are currently in use, and how many are available.

JetBrains License Server management portal

License Server is a free on-premises application that you can install in your company’s internal network. It’s available to companies that have 50+ commercial subscriptions to any JetBrains products that are part of All Products Pack, namely:

  • Integrated development environments: IntelliJ IDEA Ultimate, WebStorm, PhpStorm, PyCharm, Rider, CLion, GoLand, DataGrip, RubyMine, and AppCode.
  • Visual Studio extensions: ReSharper, ReSharper C++, and dotCover.
  • Profilers: dotTrace and dotMemory.

Finally, License Server has recently started to support licenses for third-party plugins for JetBrains tools that are distributed via the JetBrains Marketplace. However, it does not support JetBrains team tools such as YouTrack, TeamCity, or Space.

If you work in a company with fewer than 50 commercial subscriptions, you'll need to assign and revoke licenses manually. In that case, take a look at JetBrains Account and see what it can do in terms of license management.

Why Would Your Company Need License Server?

If your company has a stable in-house developer workforce where each developer uses only one JetBrains product, you may not need License Server at all. However, License Server gets increasingly useful when:

  • Demand for licensed JetBrains tools changes over time
  • Many users only need part-time access to JetBrains tools
  • Developers often switch between different code bases
  • Your company uses contractors, consultants, or temporary employees

Let’s consider a few typical scenarios for License Server usage.

Use Case 1: One JetBrains Product Is Used Every Day, Other Products Are Used as Needed

Let’s say that each developer on your team uses one JetBrains tool as their primary development environment but occasionally uses different JetBrains tools. For example, on .NET development teams, it’s common for developers to use the following setup that includes multiple JetBrains tools:

  • JetBrains ReSharper is used with Microsoft Visual Studio for regular development activities like reading, writing, debugging, and refactoring code.
  • JetBrains dotTrace (performance profiler) and dotMemory (memory profiler) are used sparingly to investigate occasional performance problems or memory leaks.

If all developers on a team use All Products Pack or dotUltimate subscriptions, they’re covered for both ReSharper and profilers. But if the company’s procurement practices are focused on cost savings, this may not be the case. For example, if a team has 70 developers, it may have:

  • 65 ReSharper subscriptions (ReSharper is included, profilers are not).
  • 5 dotUltimate subscriptions (both ReSharper and profilers are included).

In a setup like this, how can you make sure that when a performance problem or memory leak occurs, any developer can use dotTrace or dotMemory?

  • Without License Server: licenses that include the profilers would be provided to a fixed group of 5 developers, and you’d have to manually reassign licenses using JetBrains Account.
  • With License Server: any developer who wants to use a profiler to investigate a performance problem can do it without any administrative overhead. They simply launch dotTrace or dotMemory, and License Server takes care of selecting the license that includes the profiler.

This setup only works as long as profilers are used by no more than five developers at the same time. But if concurrent profiler usage grows, you can buy additional dotUltimate subscriptions.

Use Case 2: Splitting Time Between New and Old Codebases

A related scenario is when a team writes a new application while maintaining a legacy application, and the two applications use different tech stacks. For example, let’s say the new application is developed in Node.js and React, and the legacy application uses Spring Boot.

Developers spend most of their time developing the new application in JetBrains WebStorm. From time to time, they need to update the legacy application’s codebase for maintenance purposes, and they use IntelliJ IDEA for this. Do all developers on the team need both WebStorm and IntelliJ licenses? Probably not. Instead, you can:

  • Provide all developers with WebStorm licenses for their main line of work.
  • Add a few IntelliJ licenses to maintain the legacy application.
  • Make the licenses available via IntelliJ License Server so that whoever occasionally needs IntelliJ IDEA can automagically obtain a license.

As soon as a maintenance task on the legacy codebase is completed and the developer closes IntelliJ IDEA, the license returns to the license pool and becomes available for use by any other developer.

Use Case 3: Providing Licenses to Part-Time Users

Another related scenario is provisioning licenses to employees who aren’t exactly full-time developers. For example, your team might have:

  • DevOps engineers who use a repository of automation scripts that are stable and don’t need regular maintenance.
  • Product Marketing Managers who change the copy on your website once in a while.
  • Security experts who are mostly focused on code review but sometimes need to refactor pieces of vulnerable code.

Having a License Server with a few JetBrains All Products Pack licenses available to these employees will help you reduce costs. Rather than purchasing licenses for each of them, you reassign the existing licenses to team members as they need them.

Use Case 4: Providing Licenses to Contractors, Consultants, or Interns

Even engineering teams with in-house developers might send some development tasks to freelance contractors. This is quite common when you need temporary help on a project or extra hands to help get a product to launch faster.

When you find a contractor, you may want to give them access to your VCS repository and provide them with JetBrains tools so they can effectively work with the rest of your team. If you take this approach, give them a link to your IntelliJ License Server so that they can use your company’s licenses.

When their contract expires, you simply revoke access to your License Server (along with any other internal resources that you make available to contractors). The license then goes back to the pool, ready to be used by the next freelancer you bring on board.

Similarly, you can use License Server to provide licenses to students who join your company for summer internships. In this scenario, the students come in batches and spend a few months on their projects. Using IntelliJ License Server takes some of that administrative burden off you.

Common Questions About JetBrains Floating License Server

Q. What impact does remote work have on using License Server? Is it more or less important for distributed teams and why?

A. Remote work is unlikely to be a factor in your decision to use License Server. Whether developers are co-located or distributed, they usually connect to a corporate VPN anyway – which is where License Server is usually set up for them.

Q. Is there a SaaS version of License Server?

A. Not right now, although based on this job opening, JetBrains is considering a SaaS version of License Server in the future. Integrating its functionality into JetBrains Account feels like the most natural path forward.

Q. My company uses IntelliJ License Server to manage pre-subscription JetBrains licenses purchased before 2015. Is this the same thing?

A. No, the current version of License Server is very different. See this deprecation notice for more details.

Make sure to also check out the FAQ section in JetBrains License Server docs.

Setting Up JetBrains Floating License Server

JetBrains does a great job documenting how to install, configure, and maintain License Server. Instead of diving deep into this, I'll give you a quick checklist of the major steps needed to make IntelliJ License Server available in your company.

  1. Contact the JetBrains Sales team to enable License Server for your company’s JetBrains Account.
  2. If your company’s hardware meets system requirements (make sure you check out both Windows and Linux/macOS requirements), install and start License Server.
  3. Register your License Server with your company’s JetBrains Account.
  4. Add licenses from JetBrains Account to License Server.
  5. Configure usage reporting and notifications.
  6. Let your developers know that they can now use License Server.

Once developers have access to your server, they’ll need to configure each of their JetBrains products with License Server.

License Server Configuration

Anyone on your team can now start a supported JetBrains product and configure it to fetch licenses from your License Server installation.

Let’s see how to do this in PyCharm, IntelliJ IDEA, or any other JetBrains IDE:

  1. On the main menu, select Help | Register.
  2. Set Get license from: to License server.
  3. If you know the URL of your company’s License Server, paste it into the Server address: field. If you don’t, click Discover Server to make PyCharm look up License Server on your local network.
  4. Click Activate.

Configuring License Server access from PyCharm

For a more detailed procedure, see the Register section of your JetBrains IDE documentation. This page also shows you how to configure License Server discovery for silent IDE installations.

The process is slightly different for other JetBrains tools. Here’s how you make ReSharper, dotTrace, or dotCover connect to JetBrains License Server:

  1. On the main menu, select ReSharper | Help | License Information.
  2. In the License Information dialog, select Use license server.
  3. ReSharper will try to auto-detect your License Server, but if it fails, click the + icon to specify your License Server’s URL.

License Server configuration in ReSharper

For more details, see License Information dialog and Specify License Information in ReSharper docs. Among other things, the latter article describes differences between floating and permanent license tickets. Permanent tickets are useful if you need to use JetBrains products without a steady connection to your License Server (for example, if you’re about to catch a flight).

Wrapping Up

JetBrains Floating License Server is a great free tool for streamlining license distribution in large development teams.

License Server is worth exploring if:

Your company uses 50 or more subscriptions to JetBrains IDEs, extensions, profilers, or third-party plugins.

And

You want to stop thinking about provisioning licenses every time someone joins or leaves a team.

Even if your company doesn’t qualify for License Server yet, don’t forget to check out what JetBrains Account can do for you. It can help you with other license-management tasks like bulk-assigning licenses, revoking and reassigning licenses, and managing auto renewals.

8 Debugging Tips for IntelliJ IDEA Users You Never Knew Existed (https://lightrun.com/eight-debugging-tips-for-intellijidea-users-you-never-knew-existed/, Sun, 14 Jun 2020)

As developers, we’re all familiar with debuggers. We use debugging tools on a daily basis – they’re an essential part of programming. But let’s be honest. Usually, we only use the breakpoint option. If we’re feeling frisky, we might use a conditional breakpoint.

But guess what, the IntelliJ IDEA debugger has many powerful and cutting-edge features that are useful for debugging more easily and efficiently. To help, we’ve compiled a list of tips and tricks from our very own developers here at Lightrun. We hope these tips will help you find and resolve bugs faster.

Let’s get started.

1. Use an Exception Breakpoint

Breakpoints are places in the code where the program suspends to enable debugging. They let you inspect the code's behavior and state to identify the error. IntelliJ offers a wide variety of breakpoints, including line breakpoints, method breakpoints, and exception breakpoints.

We recommend using the exception breakpoint. This breakpoint type suspends the program according to an exception type, and not at a pre-defined place. We especially recommend the IntelliJ Exception breakpoint because you can also filter the class or package the exceptions are a part of.

So you can define a breakpoint that will stop on a line that throws NullPointerException and ignore the exceptions that are thrown from files that belong to other libraries. All you have to do is define the package that has your project’s files. This will help you focus the analysis of your code behavior.

Exception breakpoint in IntelliJ IDEA
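Here is a sketch of the kind of code where such a package-filtered breakpoint pays off (the OrderService class and its fields are hypothetical): the debugger suspends on the line in your own code that throws the NullPointerException, while NPEs raised inside third-party libraries are ignored.

```java
import java.util.HashMap;
import java.util.Map;

public class OrderService {
    private final Map<String, String> customers = new HashMap<>();

    String customerName(String orderId) {
        // customers.get(...) returns null for unknown ids; calling a method
        // on that null result throws NullPointerException right here. An
        // exception breakpoint filtered to this package suspends on this line.
        return customers.get(orderId).toUpperCase();
    }

    public static void main(String[] args) {
        OrderService service = new OrderService();
        try {
            service.customerName("missing-order");
        } catch (NullPointerException e) {
            System.out.println("NPE thrown from: " + e.getStackTrace()[0].getMethodName());
        }
    }
}
```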

Lightrun offers snapshots – breakpoints that do not stop the program from running. Learn more here.

2. Use Conditions in Your Breakpoints

This is one of the most under-utilized tools in debuggers and possibly one of the most effective ones. Use conditions to narrow down issues far more easily, saving yourself the time and effort of hunting for them. For example, in a loop you can define a breakpoint that will only stop on the actual bug, sparing you from manually stepping through iterations until you run into the issue!

In the loop below, you can see the breakpoint will suspend the program when the agent id value is null. So instead of throwing a NullPointerException, we'll be able to inspect the current state of the VM (virtual machine) before it does.

Notice that a condition can be very elaborate and can even invoke methods as part of the condition.

Breakpoint condition in IntelliJ IDEA
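A breakpoint condition is just a Java expression evaluated in the suspended frame. Here is a self-contained sketch of the scenario above (the Agent class and its getId() method are hypothetical stand-ins); the expression you would type into the breakpoint's condition field appears as a comment.

```java
import java.util.Arrays;
import java.util.List;

public class AgentScan {
    static class Agent {
        private final String id;
        Agent(String id) { this.id = id; }
        String getId() { return id; }
    }

    public static void main(String[] args) {
        List<Agent> agents = Arrays.asList(new Agent("a-1"), new Agent(null), new Agent("a-3"));
        for (Agent agent : agents) {
            // Breakpoint on the next line with condition:  agent.getId() == null
            // The debugger then suspends only for the faulty element, skipping
            // all the healthy iterations you would otherwise step through.
            process(agent);
        }
    }

    static void process(Agent agent) {
        System.out.println("processing " + agent.getId());
    }
}
```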

Lightrun offers conditions for all its actions: snapshots, logs etc. Learn more here.

3. Enable the “Internal Actions” Menu for Custom Plugin Development  

If you’re writing a custom IntelliJ IDEA plugin, enable Internal Actions (Tools -> Internal Actions) for easy debugging. This feature includes a lot of convenient options, like a component inspector and a UI debugger. It’s always handy to have a wide set of tools at your disposal, providing you with options you may have never thought of yourself.

To enable Internal Actions select Help -> Edit Custom Properties. Then type in

idea.is.internal=true

and save. Upon restart you should see the new option under the Tools menu.

Internal Actions menu for custom plugin development in IntelliJ IDEA

4. Use the “Analyze Thread Dump” Feature

A thread dump is a snapshot that shows what each thread is doing at a specific time. Thread dumps are used to diagnose system and performance issues. Analyzing thread dumps will enable you to identify deadlocks or contention issues.

We recommend using IntelliJ’s “Analyze Thread Dump” feature because of its convenient browsing capabilities that make the dump easy to analyze. “Analyze Thread Dump” automatically detects a stack trace in the clipboard and instantly displays it with links to your source code. This capability is very useful when traversing stack dumps from server logs, because you can instantly jump to the relevant files, just like you can with a local stack trace.

To access the feature, go to the Analyze menu. The IDE can also activate this feature automatically when it detects a stack trace in the clipboard.

5. Use the Stream Debugger

Java 8 streams are very cool to use but notoriously hard to debug. Streams condense multiple functions into a single statement, so simply stepping over the statements with a debugger is impractical. Instead, you need a tool that can help you analyze what’s going on inside the stream.

IntelliJ has a cool new tool: the stream debugger. You can use it to inspect the results of stream operations visually. When you hit a breakpoint on a stream, press the stream debugger icon in the debugger. You will see a UI that maps the values of the stream elements at each stage of the stream. Each step is visualized, so you can follow the operations in the stream and detect the problem.

Stream debugger in IntelliJ IDEA (1)

Stream debugger in IntelliJ IDEA (2)

Stream debugger in IntelliJ IDEA (3)
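Outside the IDE, a common low-tech stand-in for the stream debugger is peek(), which logs the elements flowing through each stage without changing the pipeline's result. A minimal sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamInspect {
    // Keep even numbers and multiply them by ten. The peek() calls print each
    // element as it passes through, which approximates what the stream
    // debugger visualizes stage by stage.
    static List<Integer> evenTimesTen(List<Integer> input) {
        return input.stream()
                .peek(n -> System.out.println("before filter: " + n))
                .filter(n -> n % 2 == 0)
                .peek(n -> System.out.println("after filter:  " + n))
                .map(n -> n * 10)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(evenTimesTen(Arrays.asList(1, 2, 3, 4, 5)));  // [20, 40]
    }
}
```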

6. Use Field Watchpoints

The field watchpoint is a type of breakpoint that suspends the program when the defined field is accessed or modified. This can be very helpful when you investigate and find that a field has a wrong value and you don't know why. Watching the field can help you find the fault's origin.

To set this breakpoint, simply add it at the line of the desired field. The program will suspend when, for example, the field is modified:

Field watchpoints in IntelliJ IDEA
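A sketch of where a field watchpoint helps (the Inventory class is hypothetical): set the watchpoint on the field declaration, and the debugger suspends at every read or write, wherever in the codebase the access happens.

```java
public class Inventory {
    // Set a field watchpoint on this declaration line: the debugger suspends
    // every time `count` is read or written, no matter which method does it.
    private int count = 0;

    void receive(int n) { count += n; }
    void ship(int n)    { count -= n; }  // a bug here would be caught in the act

    int count() { return count; }

    public static void main(String[] args) {
        Inventory inv = new Inventory();
        inv.receive(10);
        inv.ship(3);
        System.out.println(inv.count());  // 7
    }
}
```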

7. Debug Microservices with the Lightrun Plugin

Lightrun’s IntelliJ plugin enables adding logs, snapshots, and performance metrics, even while the service is running. This means you can add instrumentation to production and staging environments. You can debug monoliths, microservices, Kubernetes, Docker Swarm, ECS, Big Data workers, serverless, and more. Multi-instance support is available through a tagging mechanism.

The Lightrun plugin saves time: instead of going through multiple iterations of locally reproducing environments, restarting, and redeploying, you can debug straight in production.

Lightrun plugin for IntelliJ IDEA

Want to learn more? Request a demo.

8. Use a Friend – Real or Imaginary

When it comes to brainstorming, 1+1=3. And when it comes to dealing with complex debugging issues, you are going to need all the brainpower you can get. Working with someone provides a fresh set of eyes that views the problem in a different manner and might identify details you missed. Or you both complement each other until you reach the solution. Just by asking each other questions and undermining some of each other’s assumptions, you will reach new conclusions that will help you find the problem. You can also use each other for “Rubber Duck Debugging”, or as we like to call it, “Cheetah debugging”.

Cheetah debugging

We hope these tips by our own developers will help you with your debugging needs. Feel free to share your debugging tips and best practices with us and to share this blog post to help others.

As we mentioned in tip no. 7, Lightrun’s IntelliJ plugin enables developers to debug live microservices without interrupting them. You can securely add logs and performance metrics to production and staging in real-time, on-demand. Start using Lightrun today, or request a demo to learn more.

4 Tools Every Java Programmer Should Know (https://lightrun.com/4-tools-every-java-programmer-should-know/, Tue, 18 Aug 2020)

The Java tooling ecosystem is pretty wide. Read this post to learn about the key tools we use to ramp up our development efforts.

Java is one of the most popular programming languages. As such, it's no surprise there are many tools whose primary function is to assist the day-to-day work of Java programmers. In this blog post I will introduce some open-source (and free!) tools and platforms that I've used personally for years and can warmly recommend.

The following tools immensely improve the coding, building, testing, and profiling of Java applications, and I think every Java programmer will benefit from familiarity with them. I built this list based on my experience as a professional Java developer in recent years, and I hope you find these tools as helpful as I did.

Mockito

Mockito is an open-source Java mock library with a simple API and large community support, often regarded as the foremost tool in its space. Mockito helps you create, verify and stub mocks – which are objects that simulate (i.e. mock) complex production objects, without fully creating them in practice. 

Therefore, mocks in general (and Mockito specifically) are very useful in the context of unit testing – they allow for proper, isolated, single-unit tests: we can mock the dependencies of each unit and focus on the behavior we want to test.

You have two choices when working with Mockito – you can either use the library’s API methods manually, or you can use the annotations the library provides.

A lot of developers choose to use annotations since they reduce a significant amount of repetitive, boilerplate code you have to write to use the library. The following is a short list of annotations I use the most:

  • @Spy – Spy the real object. Note that you can use @SpyBean for resolving Spring-specific dependencies.
  • @Mock – Create and inject mocked instances. In Spring, the corresponding annotation is @MockBean.
  • @InjectMocks – Automatically inject instances of your mocks and spies into the annotated class.

You can see more available annotations and usage examples here.

One comment before we go to the next part – note that mocking should not be overused.

A test that includes many mocks – 10 for example – usually indicates that your class has too many dependencies (i.e. it is responsible for too much). In addition, the more mocks you use, the less you’re testing the real environment – so use them wisely!
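To make the idea concrete without pulling in the library, here is a hand-rolled stub of a dependency (all class and method names here are hypothetical). Mockito's @Mock annotation and when(...) stubbing generate the equivalent test double in a line or two, plus interaction verification on top:

```java
public class GreeterTest {
    // The unit under test depends on this interface...
    interface UserRepository {
        String findName(long id);
    }

    // ...so the test substitutes a stub for a real database-backed repository.
    static class Greeter {
        private final UserRepository repo;
        Greeter(UserRepository repo) { this.repo = repo; }
        String greet(long id) { return "Hello, " + repo.findName(id) + "!"; }
    }

    public static void main(String[] args) {
        // Hand-rolled stub. With Mockito this would roughly be:
        //   UserRepository repo = mock(UserRepository.class);
        //   when(repo.findName(42L)).thenReturn("Ada");
        UserRepository stub = id -> id == 42L ? "Ada" : "unknown";
        Greeter greeter = new Greeter(stub);
        System.out.println(greeter.greet(42L));  // Hello, Ada!
    }
}
```

The point is isolation: the Greeter logic is exercised without any real UserRepository implementation existing at all.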

Sonar

Sonar is the leading automated service for detecting bugs, code smells, and security vulnerabilities in your pull requests. By letting Sonar analyze your code, you can identify vulnerabilities and issues before deployment.

Sonar integrates with Github, Bitbucket, Azure and GitLab and fits snugly into most teams’ development processes. That means that every time you open a Pull Request, you can also get a review from Sonar.

Consider the following example of a sonar report result in a Github PR:

SonarCloud in action

As you can see, Sonar found 26 code smells in this PR's code and measured 52.3% test coverage. We can drill down further by clicking on one of the issues, which redirects us to Sonar for a breakdown of the selected issue.

In Sonar, you can choose between SonarCloud and SonarQube. SonarQube is meant to be integrated with on-premises solutions, like GitHub Enterprise or Bitbucket Server, while SonarCloud is for cloud solutions like GitHub or Bitbucket Cloud. SonarQube is open-source and free, and SonarCloud is free for public projects.

I recommend adding a Sonar analysis step every time your code is ready to be merged or released, to ensure maximum code quality. You can also integrate Sonar directly into your favorite CI/CD integration (Jenkins, Azure DevOps, etc.) to have it triggered automatically.

By the way, you can still use Sonar even if you’re not a Java developer. Sonar also supports C#, C/C++, Objective-C, TypeScript, Python, PLSQL and a host of other languages.
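As a small illustration of the kind of issue such analyzers flag (the rule wording below is my paraphrase, not Sonar's exact message), consider a reader that would leak if an exception were thrown mid-read; try-with-resources closes it on every path:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResourceExample {
    // Analyzer-style finding: a Reader opened without try-with-resources leaks
    // if readLine() throws. The idiomatic fix closes it on every code path:
    static String firstLine(String text) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("alpha\nbeta"));  // alpha
    }
}
```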

IntelliJ IDEA

If you’re a Java developer and you’re one of the 38% who haven’t experienced IntelliJ IDEA – it’s about time you do!

With smart code completion, refactoring and debugging, open-source IntelliJ IDEA leads developers to efficient and convenient coding. It is your armchair assistant, there to aid and redirect you to the correct path at every step of the way.

Compared to other Java IDEs, IntelliJ IDEA’s strength is its intelligence. IntelliJ IDEA understands what you want to do and does it. For example, let’s say there is a method that accepts a User and a property of that User, and you want to simplify it by passing just the User.

IntelliJ in action!

With the Inline shortcut (Ctrl+Alt+N) on the “activated” parameter, IntelliJ IDEA understands that you want to use user.getActivated(), removes the parameter from the method, and updates its usages! No manual changes are needed. The result:

IntelliJ in action - again!
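Roughly what that inline refactoring does to the code, sketched with a hypothetical User class and the before and after versions of the method side by side:

```java
public class UserRefactoring {
    static class User {
        private final boolean activated;
        User(boolean activated) { this.activated = activated; }
        boolean getActivated() { return activated; }
    }

    // Before: every caller extracts the property and passes it alongside the object.
    static String describeBefore(User user, boolean activated) {
        return activated ? "active" : "inactive";
    }

    // After Inline (Ctrl+Alt+N) on the `activated` parameter: the method reads
    // the property itself, and every call site shrinks to a single argument.
    static String describeAfter(User user) {
        return user.getActivated() ? "active" : "inactive";
    }

    public static void main(String[] args) {
        User user = new User(true);
        System.out.println(describeBefore(user, user.getActivated()));  // active
        System.out.println(describeAfter(user));                        // active
    }
}
```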

For more refactoring tips with IntelliJ IDEA, take a look at the refactoring guide right here, or learn some new debugging tricks with IntelliJ IDEA.

And a quick plug – if you’re already an IntelliJ IDEA user (or a soon-to-be-convert!), we offer a plugin for real-time production debugging, right from inside your IDE. Schedule a demo!

VisualVM

VisualVM is a powerful open-source profiling tool for Java applications. It supports local and remote profiling, memory and CPU profiling, thread monitoring, thread dumps and heap dumps. VisualVM displays a monitoring and performance analysis of your app, so you can fix your code prior to real-time crashes.

VisualVM displays the running Java applications on the left pane:

All running Java apps in VisualVM

After selecting a Java application, we can see CPU usage, heap space, classes and threads in the monitor tab:

VisualVM Monitoring Window

This enables you to understand, for example, whether your application takes too much CPU or memory. Moreover, you can detect memory leaks using a heap dump – a snapshot of the current Java objects and classes in the heap. See all the possible options and some usage examples here.
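A heap dump is most often how you catch the classic leak: a static collection that only grows, keeping everything it has ever held reachable. A minimal, bounded sketch (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // The textbook leak a heap dump exposes: a static, ever-growing
    // collection. Nothing added here is ever garbage-collected, and in
    // VisualVM the dump shows this list dominating retained size.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]);  // "cached" but never evicted
    }

    static int cacheSize() { return CACHE.size(); }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("entries retained: " + cacheSize());  // entries retained: 1000
    }
}
```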

In Summary 

There are many Java tools and platforms available that can help you code faster and smarter, by streamlining various coding-adjacent activities – like testing, analyzing, profiling, building and releasing. 

These tools will make your coding more productive and efficient, and reduce many of the mundane activities we deal with when we program. There are also many Java communities that can provide assistance and consultation. Let me know in the comments section if you have any other favorite tools you like to use, and I’ll try to incorporate them into the next article! 

When Disaster Strikes: Production Troubleshooting (https://lightrun.com/when-disaster-strikes-production-troubleshooting/, Wed, 04 May 2022)

Tom Granot and I have had the privilege of Vlad Mihalcea's online company for a while now. As a result, we decided to do a workshop together, talking about a lot of the things we learned in the process. This workshop will be pretty informal and ad hoc: just a bunch of guys chatting and showing off what we can do with tooling.

In celebration of that, I thought I'd write about some of the tricks we've discussed amongst ourselves in the past. This should give you a sense of what to expect when joining us for the workshop, but it's also useful in its own right.

The Problem

Before we begin I’d like to take a moment to talk about production and the role of developers within a production environment. As a hacker I often do everything. That’s OK for a small company but as companies grow we add processes.

Production doesn’t go down in flames as much. Thanks to staging, QA, CI/CD and DevOps who rein in people like me…

So we have all of these things in place. We passed QA, staging and everything’s perfect. Right?

All good, right? Right???

Well… Not exactly.

Sure. Modern DevOps made a huge difference to production quality, monitoring and performance. No doubt. But bugs are inevitable. The ones that slither through are the worst types of vermin. They’re hard to detect and often only happen on scale.

Some problems, like performance issues, are only noticeable in production against a production database. Staging or dev environments can't completely replicate modern complex deployments. Infrastructure as Code (IaC) helps a lot with that, but even with such solutions, production is at a different scale.

It’s the One Place that REALLY Matters

Everything that isn’t production is in place to facilitate production. That’s it. We can have the best and most extensive tests, with 100% coverage in our local environments. But when our system is running in production, behavior is different. We can’t control it completely.

A knee-jerk reaction is “more testing”. I see that a lot. If only we had a test for that… The solution is to somehow think of every possible mistake we can make and build a test for it. That’s insane. If we know the mistake, we can just avoid it. The idea that a different team member will have that insight is again wrong. People make similar mistakes, and while we can eliminate some bugs this way, more tests create more problems: CI/CD becomes MUCH slower and results in longer deploy times to production.

That means that when we do have a production bug, it will take much longer to fix because of redundant tests. It means that the whole CI quality process we need to go through will take longer. It also means we'll need to spend more on CI resources…

Logging

Logging solves some of the problems. It’s an important part of any server infrastructure. But the problems are similar to the ones we run into with testing.

We don't know what will be important when we write a log, and then in production we might find it's missing. Overlogging is a huge problem in the opposite direction. It can:

  • Demolish performance & caching
  • Incur huge costs due to log retention
  • Make debugging harder due to verbosity that's hard to wade through

It might still be missing the information we need…

I recently posted to a reddit thread where this comment was also present:

“A team at my company accidentally blew ~100k on Azure Log Analytics during the span of a few days. They set the logging verbosity to a hitherto untested level and threw in some extra replicas as well. When they announced their mistake on Slack, I learned that yes, there is such a thing as too much logging.”  – full thread here.

Again, logging is great. But it doesn’t solve the core problem.

Agility

Our development team needs to be fast and responsive. We need to respond quickly to issues. Sure, we need to try and prevent them in the first place… But like most things in life the law of diminishing returns is in effect here too. There are limits to tests, logs, etc.

For that we need to fully understand the bug fast. Going through the process of reproducing something locally based on hunches is problematic at best. We need a way to observe the problem.

This isn't new. There are plenty of solutions for looking at issues in production. APM tools, for example, provide invaluable insight into our performance in production. They don't replace profilers; they provide the one data point that matters: how fast is the application that our customers are using!

But most of these tools are geared towards DevOps. It makes sense. DevOps are the people responsible for production, so naturally the monitoring tools were built for them. But DevOps shouldn’t be responsible for fixing R&D bugs or even understanding them… There’s a disconnect here.

Enter Developer Observability

Developer observability is a pillar of observability targeted at developers instead of DevOps. With tools in this field, we can instantly get feedback tailored to our needs and reduce the churn of discovering the problem. Before these tools, if a log didn't exist in production and we didn't understand the problem… we had to redeploy our product with "more logs" and cross our fingers…

In Practice and The Workshop…

I got a bit ahead of myself, spending longer explaining the problem than I will explaining the solution. I tend to think that's because the solution is so darn obvious once we "get it". It's mostly a matter of details.

Like we all know: the devil is in the details…

Developer observability tools can be very familiar to developers who are used to working with debuggers and IDEs. But they are still pretty different. One example is breakpoints.

It’s Snapshots Now

We all know this drill. Set a breakpoint in the code that doesn’t work and step over until you find the problem. This is so ingrained into our process that we rarely stop to think about this at all.

But if we do this in a production environment, the server will be stuck while waiting for us to step over. This might impact all users of the server, and I won't even discuss the security/stability implications (you might as well take a hammer to the server; it's that bad).

Snapshots do everything a breakpoint does. They can be conditional, like a conditional breakpoint. They contain the stack trace, and you can click on elements in the stack; each frame includes the values of the variables in that specific frame. But here's the thing: they don't stop.

So you don’t have “step over” as an option. That part is unavoidable since we don’t stop. You need to rethink the process of debugging errors.

currentTimeMillis()

I love profilers. But when I need to really understand the cost of a method, I go to my trusted old currentTimeMillis() call. There's just no other way to get accurate, consistent performance metrics on small blocks of code.

But as I said before, production is where it's at. I can't just stick micro-measurements all over the code and review them later.

So developer observability tools added the ability to measure things: count the number of times a line of code was reached, or literally perform a tictoc measurement, which is equivalent to the currentTimeMillis approach.
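
In code, that tictoc idea is just two timestamps around the block you care about. Here's a minimal sketch of the manual approach (the class and method names are mine, purely for illustration):

```java
public class TicToc {
    // Measure wall-clock time spent executing a block of code.
    static long measure(Runnable block) {
        long start = System.currentTimeMillis(); // "tic"
        block.run();
        return System.currentTimeMillis() - start; // "toc"
    }

    public static void main(String[] args) {
        long elapsed = measure(() -> {
            try {
                Thread.sleep(50); // stand-in for the code under measurement
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("elapsed ms: " + elapsed);
    }
}
```

The difference is that a developer observability tool lets you attach this kind of measurement to a running production process, without editing, rebuilding, and redeploying the code.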

See You There

“Only when the tide goes out do you discover who’s been swimming naked.” –   Warren Buffett

I love that quote. We need to be prepared at all times, to move fast and be ready for the worst. But we also need practicality. We aren't original: there are common bugs that we run into left and right. We might notice them faster, but mistakes aren't original.

In the workshop we'll focus on some of the most common mistakes and demonstrate how we can track them using developer observability. We'll give real-world examples of failures and problems we've run into in the past and as part of our work. I'm very excited about this and hope to see you all there!

 

The post When Disaster Strikes: Production Troubleshooting appeared first on Lightrun.

]]>
Top 10 Java Linters https://lightrun.com/top-10-java-linters/ Tue, 06 Jul 2021 13:29:29 +0000 https://lightrun.com/?p=6081 Java Linters make an awesome addition to your development environment. Check out our top Java linters and SAST solutions.

The post Top 10 Java Linters appeared first on Lightrun.

]]>
If you want to ensure code maintainability over the long term, you should follow best coding practices and style guide rules. One of the best ways to achieve this, while also potentially finding bugs and other issues with your code, is to use a linter.

Linters are best described as static code analyzers because they check your code before it even runs. They can work inside your IDE, run as part of your build process, or be inserted into your workflow anywhere in between. While the use cases for linters can be rather varied, their utility usually focuses on code cleanup and standardization. In other words, using a linter helps make your code less sloppy and more maintainable.

Check out the below example for a demonstration of how a linter works, from Checkstyle:

Before:

public abstract class Plant {
  private String roots;
  private String trunk;

  protected void validate() {
    if (roots == null) throw new IllegalArgumentException("No roots!");
    if (trunk == null) throw new IllegalArgumentException("No trunk!");
  }

  public abstract void grow();
}

public class Tree extends Plant {
  private List leaves;

  @Override
  protected void validate() {
    super.validate();
    if (leaves == null) throw new IllegalArgumentException("No leaves!");
  }

  public void grow() {
    validate();
  }
}

After:

public abstract class Plant {
  private String roots;
  private String trunk;

  private void validate() {
    if (roots == null) throw new IllegalArgumentException("No roots!");
    if (trunk == null) throw new IllegalArgumentException("No trunk!");
    validateEx();
  }

  protected void validateEx() { }

  public abstract void grow();
}

In this article, I’ll examine ten of the best linters for Java. You’ll find that while most linters aren’t “better” or “worse” than others, there are certainly some that come with a wider breadth of features, making them more powerful or flexible than some of their niche counterparts.

Ultimately, it’s best to choose a linter that works best for your specific business use case and workflow.

1. Checkstyle

checkstyle

Checkstyle is one of the most popular linters available. With this popularity comes regular updates, thorough documentation, and ample community support. Checkstyle works natively with Ant and the CLI. It is also available as a plugin for a wide variety of IDEs and toolsets, including Eclipse, Codacy, Maven, and Gradle, although these plugins are managed by third parties, so there's no guarantee of long-term support.

Checkstyle comes with pre-made config files that support both Sun Code Conventions and Google Java Style, but because these files are XML, they are highly configurable to support your workflow and production needs.

It is also worth mentioning that a project with Checkstyle built into its build process will fail to build even if minor errors are present. This might be a problem if you’re only looking to catch larger errors and don’t have the resources to fix tiny errors that don’t have a perceptible impact.

2. Lightrun

The second entry on this list is not actually a linter per se, but it will help you improve your code quality and prevent bugs before they become serious problems. Whereas the other tools on this list are static code analyzers, Lightrun is a runtime debugger. At the end of the day, static code analysis and linting can only get you so far, so if you need a little more, Lightrun is worth adding to your workflow.

Production is the ultimate stress test for any codebase, especially in the age of cloud computing. Lightrun allows you to insert logs, metrics, and snapshots into your code, even at runtime, directly from your IDE or CLI. Lightrun lets you debug issues in production and run data analysis on your code without slowing down or interrupting your application.

3. PMD

pmd

What do the PMD initials stand for? It seems even the developers don’t know…

Like Checkstyle, PMD is a popular static code analyzer with an emphasis on Java. Unlike Checkstyle, PMD supports multi-language analysis, including JavaScript, Salesforce.com Apex, and Visualforce. This could be helpful if you'd like to use a single linter for a frontend and backend codebase.

In the developers’ own words, “[PMD] finds common programming flaws like unused variables, empty catch blocks, unnecessary object creation, and so forth.” In addition, it comes with a copy-paste-detector (CPD) to find duplicated code in a myriad of languages, so it is easier to find code that could be refactored.
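
For illustration, here's the sort of Java those rules flag: an unused local variable and an empty catch block that silently swallows a failure. The snippet is mine, not from the PMD docs, but UnusedLocalVariable and EmptyCatchBlock are real PMD rule names:

```java
public class PmdFindings {
    static int parseOrZero(String input) {
        int unused = 42; // PMD rule: UnusedLocalVariable
        try {
            return Integer.parseInt(input);
        } catch (NumberFormatException e) {
            // PMD rule: EmptyCatchBlock (the failure is silently swallowed)
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(parseOrZero("7"));    // prints 7
        System.out.println(parseOrZero("oops")); // prints 0
    }
}
```

The code compiles and runs fine, which is exactly why a static analyzer is needed to catch issues like these.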

4. Uncrustify

uncrustify

Uncrustify diverges from the previous linters in that Java is not its primary focus. Instead, it is a “code beautifier” made for C and C-like languages, including Java. On the one hand, Uncrustify is great for projects with a C-based or C-analogue-based workflow. On the other hand, its feature list begins and ends with simply making your code look nicer.

Uncrustify works by running through your code and automatically updating its white space, bracketing, and other formatting conventions to match a ruleset. Because this is an automated process, the developers themselves caution against running Uncrustify on an entire project without checking the changes afterward.

Uncrustify is best used in conjunction with other linters and dev tools. It isn’t particularly powerful on its own but could come in handy for niche workflows that involve multiple C-based languages.

5. Error Prone

Error Prone

Error Prone is an error finder for your code builds, specifically built for Java. It is designed to supplement your compiler’s static type checker, and find potential runtime errors without running the code. The example provided on their website seems trivial, especially to any developers who’ve been working out of an IDE for most of their careers.

But for codebases where the compile and run process can stretch into hours or even days, having that extra check can save a lot of time and headaches, especially if a particular bug might be in an uncommonly accessed block of code.

6. Tattletale

Tattletale

Tattletale might not be considered a linter in the traditional sense. While it does analyze your static code like the other linters on this list, it is less concerned with individual blocks of code or particular development standards, and it is more focused on finding package and library redundancies in your project.

Not only will Tattletale identify different dependencies within your JAR files, but it will also suss out duplicate JAR files, find duplicate or missing classes, and check similar JARs with different version numbers. Long-term, not only will this keep your project size slimmer, but it will also help prevent head-scratching errors where you’re calling two different versions of the same package and getting different results because of changes between versions. All of this information is put into an HTML report for easy viewing.

Because of the high-level intention of this tool, it won’t help much with line-to-line code edits. But with that said, if you’re running a purely Java codebase, Tattletale is a tool worth adding to your arsenal.

7. UCDetector

ucdetector

UCDetector, short for Unnecessary Code Detector, does exactly what its name implies. In addition to finding “dead” code, it also marks classes whose privacy modifier could be changed from public to something more restricted and methods or fields which can be set to final.

One of the earliest OOP concepts taught in school is that programmers should only set classes, methods, and data fields to public if they explicitly know those elements will be accessed or modified by external classes. However, when in the thick of coding, even with the best-defined UMLs, it can sometimes be difficult to determine when a class or method should be public, private, or protected.

After your code is completed and debugged, a pass through the UCDetector will help you catch any code blocks you missed or mistakenly set to the wrong privacy modifier, potentially saving you headaches down the road and preventing sensitive field members from being unintentionally exposed to clients.

8. linter for Scala

Linter for Scala

So, let's say that you're looking to move on from Java. You want something familiar that will integrate well with your current back end. Instead of going with C#, you decide to upgrade to Scala. Not coincidentally, the more niche a language is, the more difficult it is to find linting tools for it. That's not to say that Scala is terribly niche, just that it can be a little harder to find support for it than for vanilla Java.

With that said, a nice starting point would be this toolset, simply titled linter Compiler Plugin. According to the GitHub page, "Linter is a Scala static analysis compiler plugin which adds compile-time checks for various possible bugs, inefficiencies, and style problems." Not only is it written for Scala, it is also written almost exclusively in Scala.

Unfortunately, the last commit on the project was in 2016, so it might not be well-equipped for any new features introduced to the language in the past five years.

9. Scalastyle

scalastyle

The developers of Scalastyle describe it thusly: “Scalastyle examines your Scala code and indicates potential problems with it. If you have come across Checkstyle for Java, then you’ll have a good idea of what Scalastyle is. Except that it’s for Scala obviously.”

So for those of you who loved Checkstyle but are moving to a Scala workflow, rejoice, for Scalastyle is here. It's kept up to date and is likely a better option than the linter for Scala above.

10. Coala

coala

Of all the linters on this list, Coala aims for the most flexibility. It claims to work by "linting and fixing code for all languages." While Fortran is located nowhere on their list of supported languages, Coala does support quite an extensive list, including (of course) Java.

All of these languages can be linted using a single config file, so if you’re working in a multi-language web environment (is there any other kind of web environment?) you’ll find Coala is well-suited to your needs.

Final Thoughts

Linters make an awesome addition to just about any Java development environment. Debugging, error-checking, and issue prevention are all multi-step procedures that should take place in every phase of development. A linter is a great tool for when you want to analyze code without having to run it, maintain coding best practices, and ensure long-term code maintainability. However, each linter’s usefulness only extends so far and should thus be supplemented by other tools, including runtime debugging software like Lightrun.

The post Top 10 Java Linters appeared first on Lightrun.

]]>
Top 8 IntelliJ Debug Shortcuts https://lightrun.com/intellij-debug-shortcuts/ Mon, 06 Jun 2022 16:52:53 +0000 https://lightrun.com/?p=7360 Let’s get real – as developers, we spend a significant amount of time staring at a screen and trying to figure out why our code isn’t working. According to Coralogix, there are an average of 70 bugs per 1000 lines of code. That’s a solid 7% worth of blimps, bumps, and bugs. In addition to […]

The post Top 8 IntelliJ Debug Shortcuts appeared first on Lightrun.

]]>
Let's get real – as developers, we spend a significant amount of time staring at a screen and trying to figure out why our code isn't working. According to Coralogix, there are an average of 70 bugs per 1000 lines of code. That's a solid 7% worth of blips, bumps, and bugs. In addition to this, fixing a bug can take 30 times longer than writing an actual line of code. But it doesn't have to be this way. If you're using IntelliJ (or are thinking about making the switch to it), the built-in debugger and its shortcuts can help speed up the process. But first, what is IntelliJ?

What is IntelliJ?

If you’re looking for a great Java IDE, you should check out IntelliJ IDEA. It’s a robust, feature-rich IDE perfect for developing Java applications. While VSCode is excellent in many situations, IntelliJ is designed for Java applications. Here’s a quick overview of IntelliJ IDEA and why it’s so great.

IntelliJ IDEA is a Java IDE developed by JetBrains. It’s a commercial product, but a free community edition is available. Some of the features include:

  • Intelligent code completion
  • Refactoring
  • Code analysis
  • Support for various frameworks and libraries
  • Great debugger

Debugging code in IntelliJ

If you’re a developer, you will have to debug code sooner or later. But what exactly is debugging? And why do we do it?

Debugging is the process of identifying and removing errors from a computer program. Errors can be caused by incorrect code, hardware faults, or software bugs. When you find a bug, the first thing you need to do is try to reproduce it so you can narrow down the problem and identify the root cause. Once you've reproduced the bug, you can then start to debug the code.

Debugging is typically done by running a program in a debugger, which is a tool that allows the programmer to step through the code, line by line. The debugger will show the values of variables and allow programmers to change them so they can find errors and fix them.

The general process of debugging follows this flow:

  • identify the bug
  • reproduce the bug
  • narrow down where in the code the bug is occurring
  • understand why the bug exists
  • fix the bug

Debugging process

More often than not, we spend our time on the second and third steps. Statistically, we spend approximately 75% of our time just debugging code. In the US, $113B is spent on developers trying to figure out the what, where, why, and how of existing bugs. Leveraging the IDE's built-in features will allow you to condense the debugging process.

Sure, using a debugger will slow down the execution of the code, but most of the time, you don't need it to run at a snail's pace through the entire process. The shortcut controls allow you to observe the inner workings of your code at the rate you need them to.

Without further ado – here are the top 8 IntelliJ debug shortcuts, what they do and how they can help speed up the debugging process.

Top 8 IntelliJ Debug Shortcuts

1. Step Over (F8)

Stepping is the process of executing a program one line at a time. Stepping helps with the debugging process by allowing the programmer to see the effects of each line of code as it is executed. Stepping can be done manually by setting breakpoints and running the program one line at a time or automatically by using a debugger tool that will execute the program one line at a time.

Step over (F8) takes you to the next line without going into the method, if one exists. This step can be helpful if you need to quickly pass through the code, hunt down specific variables, and figure out at what point the code exhibits undesired behavior.

Step Over (F8)

2. Step into (F7)

Step into (F7) will take the debugger inside the method to demonstrate what gets executed and how variables change throughout the process.

This functionality is helpful if you want to narrow down your code during the transformation process.

Step into (F7)

3. Smart step into (Shift + F7)

Sometimes multiple methods are called on the same line. Smart step into (Shift + F7) lets you decide which one to invoke, which is helpful as it enables you to target potentially problematic methods or go through a clear process of elimination.

Smart step into (Shift + F7)

4. Step out (Shift + F8)

At some point, you will want to exit the method. The step out (Shift + F8) functionality will take you back to the calling method, up the hierarchy branch of your code.

Step out (Shift + F8)

5. Run to cursor (Alt + F9)

Alternative to setting manual breakpoints, you can also use your cursor as the marker for your debugger.

Run to cursor (Alt + F9) will let the debugger run until it reaches where your cursor is pointing. This step can be helpful when you are scrolling through code and want to quickly pinpoint issues without the need to set a manual breakpoint.

6. Evaluate expression (Alt + F8)

It’s one thing to run your code at the speed you need; it’s another to see what’s happening at each step. Under normal circumstances, hovering your cursor over the expression will give you a tooltip.

But sometimes, you just need more details. Using the evaluate expression shortcut (Alt + F8) will reveal the child elements of the object, which can help obtain state transparency.

Evaluate expression (Alt + F8)

7. Resume program (F9)

Debugging is a constant stop and start process. The ability to toggle this process is achievable through F9. This shortcut will kickstart the debugger back into gear and get it moving to the next breakpoint.

On a Mac, the key combination Cmd + Alt + R is used to resume the program.

8. Toggle (Ctrl + F8) & view breakpoints (Ctrl + Shift + F8)

Breakpoints can get nested inside methods – which can be a hassle to look at if you want to step out and see the bigger picture. This is where the ability to toggle breakpoints comes in.

You can toggle line breakpoints with Ctrl+F8. Alternatively, if you want to view and set exception breakpoints, you can use Ctrl+Shift+F8.

For macOS, the key combinations are:

  • Toggle – Cmd + F8
  • View breakpoints – Cmd + Shift + F8

Toggle (Ctrl + F8) & view breakpoints (Ctrl + Shift + F8)

Improving the debugging process

If you’re a software engineer, you know that debugging is essential for the development process. It can be time-consuming and frustrating, but it’s necessary to ensure that your code is working correctly.

Fortunately, there are ways to improve the debugging process, and one of them is by using Lightrun. Lightrun is a cloud-based debugging platform you can use to debug code in real-time. It is designed to make the debugging process easier and more efficient, and it can be used with any programming language.

One of the great things about Lightrun is that you can use it to debug code in production, which means that you can find and fix bugs in your code before your users do. Lightrun can also provide a visual representation of the code being debugged. This can help understand what is going on and identify the root cause of the problem. Start using Lightrun today!

The post Top 8 IntelliJ Debug Shortcuts appeared first on Lightrun.

]]>
Spring Transaction Debugging in Production with Lightrun https://lightrun.com/spring-transaction-debugging-in-production-with-lightrun/ Mon, 18 Apr 2022 18:23:26 +0000 https://lightrun.com/?p=7205 Spring makes building a reliable application much easier thanks to its declarative transaction management. It also supports programmatic transaction management, but that’s not as common. In this article, I want to focus on the declarative transaction management angle, since it seems much harder to debug compared to the programmatic approach. This is partially true. We […]

The post Spring Transaction Debugging in Production with Lightrun appeared first on Lightrun.

]]>
Spring makes building a reliable application much easier thanks to its declarative transaction management. It also supports programmatic transaction management, but that’s not as common. In this article, I want to focus on the declarative transaction management angle, since it seems much harder to debug compared to the programmatic approach.

This is partially true. We can’t put a breakpoint on a transactional annotation. But I’m getting ahead of myself.

What is Spring’s Method Declarative Transaction Management?

When writing a Spring method or class, we can use annotations to declare that a method or a bean (class) is transactional. This annotation lets us tune transactional semantics using attributes, allowing us to define behavior such as:

  • Transaction isolation levels – let us address issues such as dirty reads, non-repeatable reads, phantom reads, etc.
  • Transaction Manager
  • Propagation behavior – we can define whether the transaction is mandatory, required, etc. This shows whether the method expects to receive a transaction and how it behaves
  • readOnly attribute – the DB does not always support a read-only transaction. But when it is supported, it’s an excellent performance/reliability tuning feature

And much more.

Isn’t the Transaction Related to the Database Driver?

The concept of transactional methods is very confusing to new Spring developers. Transactions are a feature of the database driver/JDBC connection, not of a method. So why declare them on the method?

There's more to it. Other features, such as message queues, are also transactional. We might work with multiple databases. In those cases, if one transaction is rolled back, we need to roll back all the underlying transactions. As a result, we do the transaction management in user code, and Spring seamlessly propagates it into the various underlying transactional resources.

How can we Write Programmatic Transaction Management if we don’t use the Database API?

Spring includes a transaction manager that exposes the APIs we typically expect to see: begin, commit, and rollback. This manager includes all the logic to orchestrate the various resources.

You can inject that manager into a typical Spring class, but it's much easier to just write declarative transaction management, like this Java code:

@Transactional
public void myMethod() {
    // ...
}

I used the annotation at the method level, but I could have placed it at the class level. The class defines the default, and the method can override it.

This allows for extreme flexibility and is great for separating business code from low level JDBC transaction details.

Dynamic Proxy, Aspect Oriented Programming and Annotations

The key to debugging transactions is the way Spring implements this logic. Spring uses a proxy mechanism to implement its declarative, aspect-oriented programming capabilities. Effectively, this means that when you invoke myMethod on MyObject or MyClass, Spring creates a proxy class and a proxy object instance between them.

Spring routes your invocation through the proxy types which implement all the declarative annotations. As such, a transactional proxy takes care of validating the transaction status and enforcing it.
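
To make the mechanism concrete, here's a rough sketch using a plain JDK dynamic proxy. Spring's actual proxies (JDK- or CGLIB-based) are far more sophisticated; this only illustrates where the interception happens, and the interface and "transaction log" here are invented for the example:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface UserService {
        String findById(long id);
    }

    // Wrap a target in a proxy that simulates begin/commit/rollback around each call.
    static UserService transactional(UserService target, StringBuilder txLog) {
        return (UserService) Proxy.newProxyInstance(
            UserService.class.getClassLoader(),
            new Class<?>[] { UserService.class },
            (proxy, method, args) -> {
                txLog.append("begin;"); // stand-in for starting a transaction
                try {
                    Object result = method.invoke(target, args);
                    txLog.append("commit;"); // commit on success
                    return result;
                } catch (Exception e) {
                    txLog.append("rollback;"); // roll back on failure
                    throw e;
                }
            });
    }

    public static void main(String[] args) {
        StringBuilder txLog = new StringBuilder();
        UserService service = transactional(id -> "user-" + id, txLog);
        System.out.println(service.findById(42)); // prints user-42
        System.out.println(txLog);                // prints begin;commit;
    }
}
```

The caller only ever sees the UserService interface; the transactional bookkeeping lives in the generated proxy, which is why there's no proxy source code to set a breakpoint in.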

Debugging a Spring Transaction Management using Lightrun

IMPORTANT: I assume you’re familiar with Lightrun basics. If not, please read this.

Programmatic transaction management is trivial. We can just place a snapshot where it begins or is rolled back to get the status.

But if an annotation fails, the method won’t be invoked and we won’t get a callback.

Annotations aren't magic, though. Spring uses a proxy object, as we discussed above. That proxy mechanism invokes generic code, to which we can bind a snapshot. Once we bind a snapshot there, we can detect the proxy types in the stack. Unfortunately, debugging proxying mechanisms is problematic, since there's no physical code to debug: everything in a proxying mechanism is generated dynamically at runtime. Fortunately, this isn't a big deal; we have enough hooks for debugging without it.

Finding the Actual Transaction Class

The first thing we need to do is look for the class that implements the transaction functionality. Opening the IntelliJ IDEA class view (Command-O or CTRL-O) lets us locate a class by name. Typing in "Transaction" resulted in the following view:

Looking for the TransactionAspectSupport class

This might seem like a lot, but we need a concrete public class, so annotations and interfaces can be ignored. Since we only care about Spring classes, we can ignore other packages. Still, the class we were looking for was relatively low on the list, so it took me some time to find it.

In this case, the interesting class is TransactionAspectSupport. Once we open the class, we need to select the option to download the class source code.

Once this is done, we can look for an applicable public method. getTransactionManager seemed perfect, but it’s a bit too bare. Placing a snapshot there provided me a hint:

Placing a snapshot in getTransactionManager

I don’t have much information here but the invokeWithinTransaction method up the stack is perfect!

Moving on to that method, I would like to track information specific to a transaction on the findById method:

Creating a conditional snapshot

To limit the scope only to findById, we add the condition:

method.getName().equals("findById")

Once the method is hit, we can see the details of the transaction in the stack.

If you scroll further in the method, you can see ideal locations to set snapshots in case of an exception in thread, etc. This is a great central point to debug transaction failures.

One of the nice things about snapshots is that they can easily debug concurrent transactions. Their non-blocking nature makes them the ideal tool for that.

Summary

Declarative configuration in Spring makes transactional operations much easier. This significantly simplifies the development of applications and separates the object logic from low-level transactional behavior details.

Spring uses class-based proxies to implement annotations. Because they are generated, we can't really debug them directly, but we can debug the classes they use internally. Specifically, TransactionAspectSupport is a great example.

An immense advantage of Lightrun is that it doesn’t suspend the current thread. This means issues related to concurrency can be reproduced in Lightrun.

You can start using Lightrun today, or request a demo to learn more.

The post Spring Transaction Debugging in Production with Lightrun appeared first on Lightrun.

]]>
Spring Boot Performance Workshop with Vlad Mihalcea https://lightrun.com/spring-boot-performance-workshop-with-vlad-mihalcea/ Wed, 08 Jun 2022 15:07:49 +0000 https://lightrun.com/?p=7377 A couple of weeks ago, we had a great time hosting the workshop you can see below with Vlad Mihalcea. It was loads of fun and I hope to do this again soon! In this workshop we focused on Spring Boot performance but most importantly on Hibernate performance, which is a common issue in production […]

The post Spring Boot Performance Workshop with Vlad Mihalcea appeared first on Lightrun.

]]>
A couple of weeks ago, we had a great time hosting the workshop you can see below with Vlad Mihalcea. It was loads of fun and I hope to do this again soon!

In this workshop we focused on Spring Boot performance but most importantly on Hibernate performance, which is a common issue in production environments. It’s especially hard to track since issues related to data are often hard to perceive when debugging locally. When we have “real world” data at scale, they suddenly balloon and become major issues.

I’ll start this post by recapping many of the highlights in the talk and conclude by answering some questions we missed. We plan to do a second part of this talk because there were so many things we never got around to covering!

The Problem with show-sql

After the brief introduction, we dove right into the problem with show-sql. It’s pretty common for developers to enable the spring.jpa.show-sql setting in the configuration file. By setting this to true, we will see all SQL statements performed by Hibernate printed on the console. This seems very helpful for debugging performance issues, as we can see exactly what’s going on in the database.

But it doesn’t log the SQL query. It prints it on the console!
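For reference, the setting in question – and the logger-based alternative that routes the same statements through the logging framework – look like this in application.properties (both are standard Spring Boot / Hibernate properties):

```properties
# show-sql: prints every statement via System.out – no levels, no metadata
spring.jpa.show-sql=true

# Logger-based alternative: the same SQL statements go through the logging
# framework, with level filtering, piping, and context
logging.level.org.hibernate.SQL=DEBUG
```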

Why do we Use Loggers?

This triggered the question to the audience: why does it matter if we use a logger and not System.out?

Common answers in the chat included:

  • System.out is slow – it has a performance overhead, but so does logging
  • System.out is blocking – so are most logging implementations, but yes, you could use an asynchronous logger
  • No persistence – you can redirect the output of a process to a file

The reason is the fine-grained control and metadata that loggers provide. Loggers let us filter logs based on log level, packages, etc.

They let us attach metadata to a request using tools like MDC, which are absolutely amazing. You can also pipe logs to multiple destinations and output them in ingestible formats such as JSON, so they include proper metadata when you view all the logs from all the servers (e.g. on Elastic).
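The level-filtering point can be sketched with plain java.util.logging – a minimal, self-contained illustration rather than the SLF4J/Logback stack a Spring app would normally use. A message below the logger’s threshold is never even built, something System.out can’t do:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

class LoggerFiltering {
    private static final Logger LOG = Logger.getLogger(LoggerFiltering.class.getName());

    static int messageBuilds = 0;

    // Stands in for an expensive message, e.g. rendering a full SQL statement
    static String expensiveMessage() {
        messageBuilds++;
        return "SELECT ... with all bind parameter values rendered";
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);

        // FINE is below INFO, so the Supplier is never invoked:
        // the expensive message is never built, let alone printed.
        LOG.log(Level.FINE, (Supplier<String>) LoggerFiltering::expensiveMessage);
        System.out.println("message built " + messageBuilds + " times"); // prints "message built 0 times"
    }
}
```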

Show-sql is Just System Output

It includes no context. It might not make it into your Elastic output, and even if it does, you will have no context. It will be impossible to tell whether a query was triggered because of request X or Y.

Another problem here is the question marks in the SQL. There’s very limited context to work with. We want to see the variable values, not question marks.

Adding a Log with Lightrun

Lightrun lets you add a new log to a production application without changing the source code. We can just open the Hibernate file “Loader.java” and add a new log to executeQueryStatement.

Adding a log with Lightrun

We can fill out the log statements in the dialog that prompts us. Notice we can use curly braces to write Java expressions, e.g. variable names, method calls, etc.

These expressions execute in a sandbox which guarantees that they will not affect the application state. The sandbox guarantees read only state!

Once we click OK, we can see the log appear in the IDE. Notice that no code changed, but this will act as if you wrote a logger statement in that line. So logs will be integrated with other logs.

Lightrun log annotation in the editor

Notice that we print both the statement and the arguments so the log output will include everything we need. You might be concerned that this weighs too heavily on the CPU and you would be right. Lightrun detects overuse of the CPU and suspends expensive operations temporarily to keep execution time in check. This prevents you from accidentally performing an overly expensive operation.

Logpoints and quotas

You can see the log was printed with the full content on top but then suspended to prevent CPU overhead. This means you won’t have a performance problem when investigating performance issues…

You still get to see the query, and values sent to the database server.

Log Piping

One of the biggest benefits of Lightrun’s logging capability is its ability to integrate with the log statements already written in the code. When you look at the log file, the Lightrun-added statements will appear “in-order” with them.

As if you wrote the statement, recompiled and uploaded a new version. But this isn’t what you want in all cases.

If there are many people working on the source code and you want to investigate an issue, extra logging might be a problem. You might not want to pollute the main log file with your “debug prints”. This is the case for which we have Log Piping.

Log piping lets us determine where we want the log to go. We can choose to pipe logs to the plugin and in such a case, the log won’t appear with the other application logs. This way, a developer can track an issue without polluting the sanctity of the log.

Spring Boot Connection Acquisition

Spring Boot connection acquisition

Ideally, we should establish the relational database connection at the very last moment and release it as soon as possible to increase database throughput. In JDBC, connections are on auto-commit by default, and this doesn’t work well with the JPA transactions in Spring Boot.

Unfortunately, we have a chicken-and-egg problem: Spring Boot needs to disable auto-commit, and in order to do that, it needs a database connection. So it connects to the database just to turn off a flag that should have been off to begin with.

This can seriously affect performance and throughput, as some requests might be blocked waiting for a database connection from the pool.

Logging eager connections

If this log is printed, we have a problem in our auto-commit configuration. Once we know that, the rest is pretty easy. We need to add the two settings shown below, which both disable auto-commit and tell Hibernate that we disabled it. Once those are set, performance should improve.

DB connection configuration
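For a HikariCP-based Spring Boot setup, the two settings typically look like this (the property names are standard Spring Boot / Hibernate configuration; adjust the pool prefix if you use a different connection pool):

```properties
# Tell the connection pool to hand out connections with auto-commit already off
spring.datasource.hikari.auto-commit=false

# Tell Hibernate the pool already disabled auto-commit, so it can skip
# the eager "connect just to turn the flag off" step
spring.jpa.properties.hibernate.connection.provider_disables_autocommit=true
```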

Query Plan Cache

About compiling JPQL

Compiling JPQL to native SQL code takes time. Hibernate caches the results to save CPU time.

A cache miss in this case has an enormous impact on performance, as evidenced by the chart below:

How a cache miss impacts performance

This can seriously affect the query execution time and the response time of the whole service.
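If the plan cache turns out to be the culprit, Hibernate exposes settings to tune its size. A hedged sketch for application.properties – the sizes below are illustrative, not recommendations, and the defaults cited are Hibernate 5’s:

```properties
# Maximum number of compiled JPQL/Criteria plans kept in the cache (default: 2048)
spring.jpa.properties.hibernate.query.plan_cache_max_size=4096

# Cache size for native-query parameter metadata (default: 128)
spring.jpa.properties.hibernate.query.plan_parameter_metadata_max_size=256
```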

Hibernate has a statistics class which collects all of this information. We can use it to detect problematic areas and, in this case, add a snapshot into the class.

Snapshots

A Snapshot (AKA Non-breaking breakpoint or Capture) is a breakpoint that doesn’t stop the program execution. It includes the stack trace, variable values in every stack frame, etc. It then presents these details to us in a UI very similar to the IDE breakpoint UI.

We can traverse the source code by clicking the stack frames and see the variable values. We can add watch entries and most importantly: we can create conditional snapshots (this also applies to logs and metrics).

Conditional snapshots let us trigger the snapshot only if a particular condition is met. A common problem is when a bug in a system is experienced by a specific user only. We can use a conditional snapshot to get stack information only for that specific user.

Eager Fetch

When we look at logs for SQL queries, we can often see that the database fetches a lot more than what we initially asked for. That’s because of the default setting of JPA relations, which is EAGER. This is a problem in the specification itself. We can achieve a significant performance improvement by explicitly setting the fetch type to LAZY.

About eager fetching
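As a sketch, the fix is to state the fetch type explicitly on the to-one relations, which the JPA spec defaults to EAGER (the entity names here are hypothetical, for illustration only):

```java
// Hypothetical entities – only the fetch attribute matters here.
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Comment {
    @Id
    @GeneratedValue
    private Long id;

    // @ManyToOne (and @OneToOne) default to EAGER in the JPA spec;
    // override it explicitly to avoid fetching the parent on every load.
    @ManyToOne(fetch = FetchType.LAZY)
    private Post post;
}
```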

We can detect these problems by placing a snapshot in the loadFromDatasource() method of DefaultLoadEventListener.

In this case, we use a conditional snapshot with the condition: event.isAssociationFetch().

Creating a conditional snapshot in Lightrun

As a result, the snapshot will only trigger when we have an eager association, which is usually a bug. It means we forgot to include the LAZY argument to the annotation.

As you can see, this got triggered with a full stack trace and the information about the entity that has such a relation.

Stack frames shown in Lightrun Snapshots view

You can use this approach to detect incorrect lazy fetches as well. Multiple lazy fetches can be worse than a single eager fetch, so we need to be vigilant.

Open Session in View Anti-Pattern

Open session in view

On the surface, it doesn’t seem like we’re doing anything wrong. We’re just fetching data from the database and returning it to the client. But the transaction context finished when the controller returned, and as a result we’re fetching from the database all over again. We need to do an additional query, as the data might be stale. The isolation level might be broken, and many bugs other than performance issues might arise.

This creates an N+1 problem of unnecessary queries!

We can detect this problem by placing a snapshot on the onInitializeCollection call and seeing the open session:

Lightrun snapshot reveals an open session

Now that we can see the problem happening, we can solve it by defining spring.jpa.open-in-view=false.

This setting will block you from using this approach.

Q&A

There were many brilliant questions as part of the session. Here are the answers.

Could you please describe a little bit about Lightrun?

Lightrun is a developer observability platform. As such, it lets you debug production safely and securely while keeping a tight lid on CPU usage. It includes the following pieces:

  • Client – IDE Plugin/Command Line
  • Management Server
  • Agent – running on your server to enable the capabilities

I wrote about it in depth here.

Could Lightrun Work Offline?

Since you’re debugging production, we assume your server isn’t offline.

However, Lightrun can be deployed on-premise, which removes the need for an environment that is open to the Internet.

Wondering about this sample, will this be available for our reference?

The code is all here.

As the Instrumentation/manipulation happens via a Server, given that I do not host the instrumentation server myself, what kind and what amount of data is being transmitted? Is the data secured or encrypted in any way?

The instrumentation happens on your server using the agent.

The Lightrun server has no access to your source code or bytecode!

Source code or bytecode never goes on the wire at any stage and Lightrun is never exposed to it.

All transmissions are secured and encrypted. Certificates are pinned to avoid man-in-the-middle attacks. The Lightrun architecture has received multiple rounds of deep security reviews and is running in multiple Fortune 500 companies.

Finally, all operations in Lightrun are logged in an administrator log, which means you can track every operation that was performed and have a full post mortem trail.

You can read more about Lightrun security here.

As mentioned, these logs are aged out in 1 hr. Is it possible to save those and re-use them for later use rather than creating log entries manually every time?

Lightrun actions default to expire after 1 hour to remove any potential unintentional overhead. You can set this number much higher, which is useful for hard to reproduce bugs.

Notice that when an action is expired, you can just click it and re-create it. It will appear in red within the IDE and can still be used for reference.

Is IntelliJ IDEA the only way to add breakpoints/logging? Or how is debugging with Lightrun done in production?

You can use IntelliJ (also PyCharm and WebStorm) as well as VSCode, VSCode.dev and the command line.

These connect to production through the Lightrun server. The goal is to make you feel as if you’re debugging a local app while extracting production data, without the implied risks.

Is there any case where eager loading should be configured always for One-to-Many or Many-to-Many or Many-to-One relations? I always configure lazy loading for the above relations. Is it okay?

Yes. If you see that you keep fetching the other entity, then eager loading for this case makes sense. Having eagerness as the default makes little sense for most cases.

Do we need to restart an application with the javaagent?

The agent runs in the background constantly. It’s secure and has no overhead when it isn’t used.

If we are using other instrumentation tools like say AppDynamics or dynatrace …… does this work alongside?

This varies based on the tool. Most APMs work fine alongside Lightrun because they hook into different capabilities of the JVM.

Does this work with GraalVM?

Not at this time since GraalVM doesn’t support the javaagent argument. We’re looking for alternative approaches, but hopefully the GraalVM team will have some solutions.

Is it free to use?

Using Lightrun comes at a cost, but a free trial is available to everyone.

Does it impact app performance?

Yes, but it’s minimal. Under 0.5% when no actions are used, and under 8% with multiple actions. Notice you can tune the amount of overhead in the agent configuration.

Does it work for Scala and Kotlin?

Yes.

How to use it in production without IDE?

The IDE will work even for production, since you don’t connect directly to the production servers and don’t have access to them. The IDE connects to the Lightrun management server only. This lets your production servers remain segregated.

Having said that, you can still use the command-line interface to get all the features discussed here and much more.

Apart from injecting loggers, what other stuff can we do?

The snapshot lets you get full stack traces with the values of all the variables in the stack and object instance state. You can also include custom watch expressions as part of the snapshot.

Metrics let you add counters (how many times did we reach this line), tictocs (how much time did it take to perform this block), method duration (similar to tictocs but for the whole method) and custom metrics.

You can also add conditions to each one of those to narrowly segment the data.

How do we hide sensitive properties from beans? Say Credit card number of user?

Lightrun supports PII Reduction, which lets you define a mask (e.g. credit card) that would be removed before going into the logs. This lets you block an inadvertent injection into the logs.

It also supports blocklists, which let you block a file/class/group from actions. This means a developer won’t be able to place a log or snapshot there.

How can we use it for performance testing?

I made a tutorial on this here.

When working air gapped on prem is required, how do you provide the Server, as a jar or docker…?

This is something our team helps you set up.

Will it consume much more memory if we run with the Lightrun agent?

This is minimal. Running the petclinic demo on my Mac with no agent produces this in the system monitor:

With the agent, we have this:

At these scales, a difference of 17 MB is practically within the margin of error. It’s unclear whether the agent has any memory overhead at all.

Finally

This has been so much fun and we can’t wait to do it again. Please follow Vlad, Tom, and me for updates on all of this.

There are so many things we didn’t have time to cover that go well beyond slow queries and spring data nuances. We had a really cool demo of piping metrics to Grafana that we’d love to show you next time around.

The post Spring Boot Performance Workshop with Vlad Mihalcea appeared first on Lightrun.

]]>
Debugging the Java Message Service (JMS) API using Lightrun https://lightrun.com/debugging-the-java-message-service-jms-api-using-lightrun/ Mon, 25 Apr 2022 10:24:14 +0000 https://lightrun.com/?p=7216 The Java Message Service API (JMS) was developed by Sun Microsystems in the days of Java EE. The JMS API provides us with simple messaging abstractions including Message Producer, Message Consumer, etc. Messaging APIs let us place a message on a “queue” and consume messages placed into said queue. This is immensely useful for high […]

The post Debugging the Java Message Service (JMS) API using Lightrun appeared first on Lightrun.

]]>
The Java Message Service API (JMS) was developed by Sun Microsystems in the days of Java EE. The JMS API provides us with simple messaging abstractions including Message Producer, Message Consumer, etc. Messaging APIs let us place a message on a “queue” and consume messages placed into said queue. This is immensely useful for high throughput systems – instead of wasting user time by performing a slow operation in real-time, an enterprise application can send a message. This non-blocking approach enables extremely high throughput, while maintaining reliability at scale.

The message carries a transactional context which provides some guarantees on deliverability and reliability. As a result, we can post a message in a method and then just return, which provides similar guarantees to the ones we have when writing to an ACID database.

We can think of messaging somewhat like a community mailing list. You send a message to an email address which represents a specific list. Everyone who subscribes to that list receives that message. In this case, the message topic represents the community mailing list address. You can post a message to it, and the Java Message Service handler can use a message listener to receive said event.

It’s important to note that there are two messaging models in JMS: the publish-and-subscribe model (which we discussed here) and also point-to-point messaging, which lets you send a message to a specific destination.
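The demo below uses Spring’s JmsTemplate, but the raw JMS API for the point-to-point model looks roughly like this – a sketch only, assuming a ConnectionFactory supplied by your JMS provider (e.g. ActiveMQ) and the JMS 2.0 javax.jms API:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class PointToPointSketch {
    public void send(ConnectionFactory factory, String text) throws JMSException {
        // Connection is AutoCloseable as of JMS 2.0
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // A Queue is a point-to-point destination: exactly one consumer
            // receives each message, unlike a Topic's publish-and-subscribe.
            Queue queue = session.createQueue("event");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        }
    }
}
```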

Let’s go over a quick demo.

A Simple Demo

In order to debug the Java Message Service calls, I’ve created a simple demo application, whose source code can be found here.

This JMS demo is a simple database log API – a microservice that you can use to post a log entry, which is then written to the database asynchronously. RESTful applications can then use this database log API to add a database log entry without the overhead of direct database access.

This code implements the main web service:

@RestController
@RequiredArgsConstructor
public class EventRequest {
   private final JmsTemplate jmsTemplate;
   private final EventService eventService;
   private final Moshi moshi = new Moshi.Builder().build();

   @PostMapping("/add")
   public void event(@RequestBody EventDTO event) {
       String json = moshi.adapter(EventDTO.class).toJson(event);
       jmsTemplate.send("event", session ->
               session.createTextMessage(json));
   }

   @GetMapping("/list")
   public List<EventDTO> listEvents() {
       return eventService.listEvents();
   }
}

Notice the event() method that posts a message to the event topic. I didn’t discuss message bodies before to keep things simple, but note that in this case I just pass a JSON string as the body. While JMS supports object serialization, using that capability has its own complexities and I want to keep the code simple.

To complement the main web service, we’d need to build a listener that handles the incoming message:

@Component
@RequiredArgsConstructor
public class EventListener {
   private final EventService eventService;

   private final Moshi moshi = new Moshi.Builder().build();

   @JmsListener(destination = "event")
   public void handleMessage(String eventDTOJSON) throws IOException {
       eventService.storeEvent(moshi.adapter(EventDTO.class).fromJson(eventDTOJSON));
   }
}

The listener is invoked with the JSON string that was sent, which we parse and pass on to the service.

Debugging the Hidden Code

The great thing about abstractions like Spring and JMS is that you don’t need to write a lot of boilerplate code. Unfortunately, message-oriented middleware of this type hides a lot of fragile implementation details that can fail along the way.

This is especially painful in a production scenario where it’s hard to know whether the problem occurred because a message wasn’t sent properly. This is where Lightrun comes in.

You can place Lightrun actions (snapshots, logs etc.) directly into the platform APIs and implementations of messaging services. This lets us determine if message selectors are working as expected and whether the message listener is indeed triggered.

With Spring’s JMS support, as shown above, we can open the JmsTemplate and add a snapshot to the execute method:

Adding a snapshot to the execute method

As you can see, the action is invoked when sending to a topic. We can review the stack frame to see the topic that receives the message and use conditions to narrow down the right handler for messages.

We can place a matching snapshot in the source of the message so we can track the flow. E.g. a snapshot in EventRequest can provide us with some insight. We can dig in the other direction too.

In the stack above, you can see that the execute method is invoked by the method send at line 584. The execute method wraps the caller so the operation will be asynchronous. We can go further down the stack by going to the closure and placing a snapshot there:

Placing another snapshot

Notice that here we can place a condition on the specific topic and narrow things down.

Summary

We pick messaging systems to make our applications reliable. However, enterprise messaging systems are very hard to debug in production, which works against that reliability. We can see logs in the target of the messages, but what happens if a message never reaches it?

With Lightrun, we can place actions in all the different layers of messaging-based applications. This helps us narrow down the problem regardless of the messaging standard or platform.

The post Debugging the Java Message Service (JMS) API using Lightrun appeared first on Lightrun.

]]>
Debugging jsoup Java Code in Production Using Lightrun https://lightrun.com/debugging-jsoup-java-code-in-production-using-lightrun/ Mon, 11 Apr 2022 12:24:11 +0000 https://lightrun.com/?p=7156 Scraping websites built for modern browsers is far more challenging than it was a decade ago. jsoup is a convenient API that makes scraping websites trivial via DOM traversal, CSS Selectors, JQuery-Like methods and more. But it isn’t without its caveat. Every scraping API is a ticking time bomb. Real-world HTML is flaky. It changes […]

The post Debugging jsoup Java Code in Production Using Lightrun appeared first on Lightrun.

]]>
Scraping websites built for modern browsers is far more challenging than it was a decade ago. jsoup is a convenient API that makes scraping websites trivial via DOM traversal, CSS Selectors, JQuery-Like methods and more. But it isn’t without its caveat. Every scraping API is a ticking time bomb.

Real-world HTML is flaky. It changes without notice since it isn’t a documented API. When our Java program fails to scrape, we’re suddenly stuck with a ticking time bomb. In some cases, this is a simple issue that we can reproduce locally and deploy a fix for. But some nuanced changes in the DOM tree might be harder to observe in a local test case. In those cases, we need to understand the problem in the parse tree before pushing an update. Otherwise, we might ship a broken product to production.

What is jsoup? The Java HTML Parser

Before we go into the nuts and bolts of debugging jsoup, let’s first answer the question above and discuss the core concepts behind jsoup.

The jsoup website defines it as:

jsoup is a Java library for working with real-world HTML. It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors.

jsoup implements the WHATWG HTML5 specification and parses HTML to the same DOM as modern browsers do.

With that in mind, let’s go directly to a simple sample also from the same website:

Document doc = Jsoup.connect("https://en.wikipedia.org/").get();
log(doc.title());
Elements newsHeadlines = doc.select("#mp-itn b a");
for (Element headline : newsHeadlines) {
  log("%s\n\t%s",
    headline.attr("title"), headline.absUrl("href"));
}

This code snippet fetches headlines from Wikipedia. In the code above, you can see several interesting features:

  • Connecting to a URL is practically seamless – just pass a string URL to the connect method
  • There are special cases for some element children, e.g. the title is exposed as a simple method that returns a string without selecting from the DOM tree
  • We can also select entries using a pretty elaborate selector syntax

If you’re looking at that and thinking “that looks fragile”: yes, it is.

Simple jsoup Test

To demonstrate debugging, I created a simple demo that you can download here.

You can use the following Maven dependency to install jsoup into any Java program. Maven will download the jsoup jar seamlessly:

<dependency>
  <groupId>org.jsoup</groupId>
  <artifactId>jsoup</artifactId>
  <version>1.14.3</version>
</dependency>

This demo is a trivial Java app that returns a complete list of external links and elements with src attributes on a page. This is based on the code from here, converted to a Spring Boot Java program. The jsoup-related code is relatively short:

public Set<String> listLinks(String url, boolean includeMedia) throws IOException {
   Document doc = Jsoup.connect(url).get();
   Elements links = doc.select("a[href]");
   Elements imports = doc.select("link[href]");

   Set<String> result = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
   if(includeMedia) {
       Elements media = doc.select("[src]");
       for (Element src : media) {
           result.add(src.absUrl("src"));
           //result.add(src.attr("abs:src"));
       }
   }

   for (Element link : imports) {
        result.add(link.absUrl("href")); // absUrl takes the raw attribute name
   }

   for (Element link : links) {
        result.add(link.absUrl("href"));
   }

   return result;
}

As you can see, we fetch the input String URL. We can also use input streams, but this makes things slightly more complicated when parsing relative URLs (we need a base URL anyway). We then search for links and objects that have an src attribute. The code then adds all of them into a set to keep the entries sorted and unique.

We expose this as a web service using the following code:

@RestController
public class ParseLinksWS {
   private final ParseLinks parseLinks;

   public ParseLinksWS(ParseLinks parseLinks) {
       this.parseLinks = parseLinks;
   }

   @GetMapping("/parseLinks")
   public Set<String> listLinks(@RequestParam String url, @RequestParam(required = false) Boolean includeMedia) throws IOException {
       return parseLinks.listLinks(url, includeMedia == null ? true : includeMedia);
   }
}

Once we run the application, we can use it with a simple curl command:

curl -H "Content-Type: application/json" "http://localhost:8080/parseLinks?url=https%3A%2F%2Flightrun.com"

This prints out the list of URLs referred to in the Lightrun home page.

Debugging Content Failures

Typical scraping issues occur when an element changes. E.g. Wikipedia can change the structure of its pages, and the select method above can suddenly fail. This is often a nuanced failure, e.g. a missing DOM element in the Java object hierarchy, which can trigger a failure of the select method.

Unfortunately, this can be a subtle failure, especially when dealing with nested node elements and inter-document dependencies. Most developers solve this by logging a huge amount of data. This can be a problem for two big reasons:

  • Huge logs – they are both hard to read and very expensive to ingest
  • Privacy/GDPR violations – a scraped site might include user-specific private information. Worse, the site might change to include private information after scraping was initially implemented. Logging this private information might violate various laws.

If we don’t log enough and can’t reproduce the issue locally, things can become difficult. We’re stuck in the add-logs, build, test, deploy, reproduce – rinse, repeat loop.

Lightrun offers a better way. Just track the specific failure directly in production, verify the problem, and create a fix that will work with one deployment.

NOTE: This tutorial assumes you installed Lightrun and understand the basic concepts behind it. If not, please check out the docs.

Finding Your Way in the Browser DOM

Assuming you don’t know where to look, a good place to start is inside the jsoup API. This can lead you back to user code. The cool thing is that this works regardless of your code. We can find the right line/file for the snapshot by digging into the API call.

I ctrl-clicked (on Mac use Meta-click) the select method call here:

Elements links = doc.select("a[href]");

That led me to the Element class. In it, I ctrl-clicked the Selector “select” method and got to the “interesting” place.

Here, I could place a conditional snapshot to see every case where an “a[href]” query is made:

Conditional snapshot with Lightrun

This can show me the methods/lines that perform that query:

Seeing methods that perform a query

This can help a lot in narrowing down the general problematic area in the document object hierarchy.

Sometimes, a snapshot might not be enough. We might need to use a log. The advantage of logging is that we can produce a lot of information, but only for a specific case and on-demand.

The value of logs is that they can follow an issue in a way that’s very similar to stepping over code. The point where we placed the snapshot is problematic for logs: we know the query that was sent, but we don’t yet have the value that’s returned. We can solve this easily with logs. First, we add a log with the following text:

"Executing query {query}"

Formatting a virtual log in Lightrun

Then, to find out how many entries we returned, we just go to the caller (which we know thanks to the stack in the snapshot) and add the following log there:

Links query returned {links.size()}

Logging returned queries

This produces the following log, which lets us see that we had 147 “a[href]” links. The beauty of this is that the additional logs are interlaced with the pre-existing logs, in context:

Feb 02, 2022 11:25:27 AM org.jsoup.select.Selector select
INFO: LOGPOINT: Executing query a[href]
Feb 02, 2022 11:25:27 AM com.lightrun.demo.jsoupdemo.service.ParseLinks listLinks
INFO: LOGPOINT: Links query returned 147
Feb 02, 2022 11:25:27 AM org.jsoup.select.Selector select
INFO: LOGPOINT: Executing query link[href]
Feb 02, 2022 11:25:27 AM org.jsoup.select.Selector select
INFO: LOGPOINT: Executing query [src]

Avoid Security and GDPR Issues

Leaking user information into the logs can create GDPR and security issues. This can be a major problem, and Lightrun helps you reduce that risk significantly.

Lightrun offers two potential solutions that can be used in tandem when applicable.

Log Piping

The big problem with GDPR is the log ingestion. If you log private user data and then send it to the cloud, it’s there for a long time. It’s hard to find after the fact and it’s very hard to fix.

Lightrun provides the ability to pipe all of Lightrun’s injected logging directly to the IDE. This has the advantage of removing noise for other developers who might work with the logs. It can also optionally skip ingestion.

To send logs only to the plugin, select the piping mode as “plugin”.

Sending logs to Lightrun's IDE plugin only

PII Reduction/Blocklists

Personally Identifiable Information (PII) is at the core of GDPR and is also a major security risk. A malicious developer in your organization might want to use Lightrun to siphon user information. Blocklists prevent developers from placing actions in specific files.

PII reduction lets us hide information matching specific patterns from the logs (e.g. credit card format etc). This can be defined in the Lightrun web interface by a manager role.

TL;DR

For content scraping in Java, jsoup is the obvious leader. Development with jsoup involves far more than string operations or even handling the connection aspects. Besides getting the document object, it also handles the complex aspects of DOM element traversal and selection.

Scraping is a risky business. It might break in the blink of an eye when a website changes slightly.

Worse, it can break for some users in odd ways that are impossible to reproduce locally.

Thanks to Lightrun, we can debug such failures directly in the production environment and publish a working version swiftly. You can start using Lightrun today, or request a demo to learn more.

The post Debugging jsoup Java Code in Production Using Lightrun appeared first on Lightrun.

]]>