Reduce 60% of your Logging Volume, and Save 40% of your Logging Costs with Lightrun Log Optimizer

Published March 1, 2023 on the Lightrun blog (https://lightrun.com/reduce-60-of-your-logging-volume-and-save-40-of-your-logging-costs-with-lightrun-log-optimizer/)
As organizations adopt FinOps Foundation practices and try to optimize their cloud-computing costs, engineering plays an imperative role in that maturity. Traditional application troubleshooting relies heavily on static logs and legacy telemetry that developers added either when first writing their applications, or during ad-hoc troubleshooting sessions where they lacked telemetry and needed more logs. Other observability solutions only mirror the logs that are in the running code; they do not scan for, or reflect, the effectiveness of existing and legacy log statements. To modify or add new, more effective logs to an application, existing observability tools rely on developers to add ad-hoc logs through an iterative, time-consuming, and costly redeployment process. Furthermore, a great number of these logs will most likely never be consumed by developers; they pile up, create more noise than value in the code, and are costly.

To get engineers to act upon cloud cost optimization, they need shared knowledge and visibility into what contributes to the rising costs, and they need this knowledge accessible from their native environment (their IDEs).

Without actionable data and visibility, engineers cannot be aware of cost related attributes, therefore, cannot address such objectives.

To solve these problems and help developers better understand how static logs correlate with overall cloud costs, Lightrun is happy to introduce the Log Optimizer: an automated log-optimization and logging cost-reduction solution that is part of the Lightrun IDE plugins. It allows developers to scan their source code (a single file or a complete project) for log waste and get, within seconds, the log lines that can be replaced by Lightrun's dynamic logs.

With the new offering, developers can maintain cleaner code with fewer static and inefficient log statements, while engineering managers and business leadership benefit from a lower cloud bill associated with logging costs. Such collaborative solutions contribute to a better culture of cost awareness within the engineering group and support the adoption of FinOps practices throughout the organization.

In this post, you will learn how, by adopting the Log Optimizer, you can gain visibility into redundant logs, embed a continuous log optimization practice within your engineering team, and establish a culture of cost awareness.

Gaining Visibility Into Redundant Logs and Eliminating Noise in your Code

The new Log Optimizer solution comes pre-packaged with the Lightrun IDE plugins. It is an automated log-optimization solution that enables developers to scan their source code for log "waste".

Developers can run a scan of a single source file or an entire project written in Java, JavaScript, TypeScript, or Node.js, and get immediate output listing the log lines recommended for removal.

To get started with the Log Optimizer, please ensure you get the latest version of the Lightrun platform, and follow the instructions on the setup page.

Lightrun IDE Plugin Installation

Upon completion of the IDE plugin installation and the authentication steps, you should be ready to go. 

From your project folder in the IDE (e.g., IntelliJ IDEA), run a Log Optimizer scan on either a specific Java source file or the entire project directory (see the illustration of the supported scan types).

Keep in mind that to run the solution, the machine that scans the source code needs the Docker client running, with the ability to pull Docker images from Docker Hub.

Accessing Log Optimizer in the Lightrun plugin

This add-on to the Lightrun IDE plugin allows you to continuously scan your projects as you add more logs and code. The scan produces detailed output that you can review and analyze in the dedicated Log Optimizer IDE console (see the examples below).

When running a Log Optimizer scan on a single Java source file, a developer will get output like the following in the dedicated IDE console.

Log Optimizer output

As you can see, the single Java source file "OwnerController.java" has 5 logs that can be omitted from the source code, saving the team money on redundant logs and contributing to cleaner code. In this class, the Log Optimizer detected on line 85 a marker log that only marks that execution reached a certain point in the code. This log provides no new information or insight during troubleshooting, and can therefore be removed. Other logs spotted in the same Java class that can be improved, or replaced with dynamic ones, are logs that span multiple instructions and are not merged into a single log statement.
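To make the idea of a "marker log" concrete, here is a minimal, self-contained sketch, in Python and purely illustrative (this is not Lightrun's actual detection algorithm), that flags log calls whose message is a constant string with no interpolated variables; such statements only record that execution reached a line:

```python
# Illustrative sketch only: this is NOT Lightrun's actual algorithm.
# A "marker" log is a call whose message is a constant string with no
# interpolated variables; it only records that execution reached a line.
import re

LOG_CALL = re.compile(r'log(?:ger)?\.(?:info|debug|trace)\(\s*"([^"{%]*)"\s*\)')

def find_marker_logs(source: str):
    """Return (line_number, message) pairs for constant-only log statements."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = LOG_CALL.search(line)
        if match:
            hits.append((lineno, match.group(1)))
    return hits

sample = '''logger.info("entered initCreationForm")
logger.info("owner id={}", ownerId)
logger.debug("done")'''

# Lines 1 and 3 carry no variable data, so they are marker-log candidates.
print(find_marker_logs(sample))
```

A real scanner would of course parse the syntax tree rather than match lines with a regex, but the classification idea is the same.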

With the above in mind, when the Log Optimizer™ is executed on the full Java project, the output shows complete scan results with potentially more opportunities for log optimization. As can be observed in the screenshot below, a complete scan of the same PetClinic project within the IntelliJ IDE uncovered 9 unique logs that are candidates for exclusion from the project.

The wider scan identified more cases for log optimization, such as the one in "PetController.java", where the tool detected a marker log within the method "initCreationForm", just before the return statement.

More log optimization opportunities discovered

Now that the value of log optimization, both in cost savings and in cleaner source code, is clear, the next step is to decide which log lines should be removed (and replaced, as needed, with dynamic logs), and which ones are justified to remain in the project. Scanning should be an ongoing, continuous process, since the code keeps changing on a constant basis.

Embed Continuous Log Optimization Practice within the Engineering Team 

As mentioned above, making engineering aware of and accountable for logging costs, and giving them the right tools to optimize these recurring expenses, starts with solutions that are accessible from developers' native IDEs and that surface concrete, actionable data.

Showing developers the aggregated redundant log lines in their project triggers action that results in optimization. The best and most recommended practice for getting engineering to adopt such log optimization tools and techniques is to make this activity part of the CI/CD pipeline for every software iteration or release.

Through continuous execution of a log optimization tool, developers get used to the practice and realize the value of logging only when, and where, needed within their source code.
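One way to embed the scan in CI is a small build step that parses a scan report and fails when log waste exceeds a budget. This is a hypothetical sketch: the JSON report shape and the threshold are assumptions for illustration, not a documented Lightrun interface.

```python
# Hypothetical CI gate. The report format ("findings" list) is an assumed
# shape for illustration, not Lightrun's documented output.
import json

def gate(report_json: str, max_findings: int = 0) -> int:
    """Return 0 (pass) or 1 (fail) based on the number of redundant-log findings."""
    findings = json.loads(report_json).get("findings", [])
    for f in findings:
        print(f"redundant log: {f['file']}:{f['line']} ({f['reason']})")
    return 0 if len(findings) <= max_findings else 1

sample_report = '{"findings": [{"file": "OwnerController.java", "line": 85, "reason": "marker log"}]}'
exit_code = gate(sample_report, max_findings=0)
print(exit_code)  # non-zero, so the CI step would fail the build
```

Returning the count as an exit code lets any CI system (GitHub Actions, Jenkins, etc.) treat log waste like a failing test.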

Establish a Culture of Cost Awareness within Engineering and Shift Left FinOps Practices

Engineering feels more empowered and accountable for cost efficiency when they get clear visibility into, and information about, their software deliverables within their comfort-zone environment.

That is why we've built the Log Optimizer solution on top of our IDE plugins, so it fully aligns with how developers already troubleshoot and monitor their code. With the ability to run an on-demand logging scan on a single source file or on the entire code repository, developers adopt a culture of cost optimization and accountability in a frictionless, natural manner. That step of awareness and accountability falls under the "Inform" phase of the FinOps framework methodology. With this information, engineering and FinOps teams can move toward the higher maturity phases, Optimize and Operate, and find the right paths toward cost optimization.

Log optimization workflow
It is an ongoing process; however, as engineers practice it and develop a culture of cost awareness, the entire organization becomes better aligned, aware, and accountable around FinOps practices and opportunities for cost savings.

Bottom Line

Lightrun's Log Optimizer solution aims to enable organizations to cut logging costs through a unique visualization of redundant logging data directly within the developer's IDE. It is a great step toward enabling engineering and FinOps teams to break down the silos between them, act on logging-cost optimization based on real data, and make this practice a continuous, automated part of developing and delivering new software.

Learn more from our documentation on how to get started with the Log Optimizer.

Maximizing CI/CD Pipeline Efficiency: How to Optimize your Production Pipeline Debugging?

Published May 15, 2023 on the Lightrun blog (https://lightrun.com/maximizing-ci-cd-pipeline-efficiency-how-to-optimize-your-production-pipeline-debugging/)
Introduction

There was a time when a developer would spend a few months building a new feature, then go through the tedious, soul-crushing effort of "integration": merging their changes into an upstream code repository, which had inevitably changed since they started their work. Integration would often introduce bugs and, in some cases, might even prove impossible or irrelevant, leading to months of lost work.

Hence Continuous Integration and Continuous Deployment (CI/CD) were born, enabling teams to build and deploy software at a much faster pace and allowing for extremely rapid release cycles.

Continuous Integration aims to keep team members in sync through automated testing, validation, and immediate feedback. Done correctly, it will instill confidence in knowing that the code shipped adheres to the standards required to be production ready. 

However, despite the many benefits derived from a CI/CD pipeline, it has evolved into a complex puzzle of moving parts and steps in which problems occur frequently.

Usually, errors in the pipeline surface after the fact. There are N pieces of the puzzle that could fail, and even if you can resolve some of these issues by piping your logs to a centralized logging service and tracing from there, you are not able to replay the issue.

You may argue for static debugging. In this process, one usually traces an error via a stack trace or exception, then makes calculated guesses about where the issue may have happened.

This is usually followed by code changes and local testing to simulate the issue, then by deploying the code and going through a vicious cat-and-mouse cycle to identify issues.

Issues with CI/CD Pipelines and Debugging 

Let's break down some fundamental issues plaguing most CI/CD pipelines. CI/CD builds and production deployments rely on testing and performance criteria. Functional and validation testing can be automated, but this is challenging due to the scope of the different scenarios in place.

Identifying the root cause of the issue

It can be challenging to determine the exact cause of a failure within a CI/CD pipeline. When debugging complex pipelines consisting of many stages and interdependent processes, it is difficult to comprehend what went wrong and how to fix it.

At its core, a lack of observability and limited access to logs (or a lack of relevant information) can make it challenging to diagnose issues; at other times the inverse, excessive logging and saturation, causes tunnel vision.

Low code coverage is another contributing factor: edge-case scenarios that could be breaking your pipeline are hard to discover. For those working in a monorepo environment, issues are exacerbated where shared dependencies and configurations originate from multiple teams, and developers who push code without verification cause a dependency elsewhere to break the build-and-deploy pipeline.

How to Optimize your CI/CD Pipeline?

There will be times when you believe you've done everything correctly, but something is still broken.

  • Your pipeline should have a structured review process. 
  • You need to ensure the pipeline supports automated tests. 
  • Parallelization should be part of your design, with caching of artifacts where applicable. 
  • The pipeline should be built to fail fast, with a feedback loop. 
  • Monitoring should be by design. 
  • Keep builds and tests lean. 

All these tips won’t help much if you don’t have a way to observe your pipeline.

Why Should Your CI/CD Pipeline be Observable?

A consistent, reliable pipeline helps teams to integrate and deploy changes more frequently. If any part of this pipeline fails, the ability to release new features and bug fixes grinds to a halt.

An observed pipeline helps you stay on top of any problems or risks to your CI/CD pipeline. 

An observed pipeline provides developers with visibility into their tests, and they will finally know whether the build process they triggered was successful. If it fails, the "where did it fail" question is answered immediately.

Not knowing what's going on in the overall CI/CD process, or lacking the visibility to see how it's performing, is no longer a topic of discussion.

Tracing issues across different interconnected services, and understanding the processing they undergo end to end, can be difficult, especially when reproducing the same problem in the same environment is complex or restricted.

Usually, DevOps engineers and developers try to reproduce issues in their local environment to understand the root cause, which brings the additional complexities of local replication.

Architecture CI/CD pipeline

To put things into context, let's work through an example of a typical production CI/CD pipeline.



CI/CD pipeline with GitHub Actions, AWS CDK, and AWS Elastic Beanstalk



The CODE

The CI/CD pipeline starts with the source code on GitHub, with GitHub Actions triggering the pipeline. GitHub provides ways to version code, but does not track the impact of the commits developers make to the repository. For example:

  • What if a certain change introduces a bug? 
  • What if a branch merge resulted in a successful build but failed deployment? 
  • What if the deployment was successful, then a user received an exception, and it’s already live in production?

BUILD Process

The build process, with test cases for code coverage, is a critical point of failure for most deployments. If a build fails, the team needs to be notified immediately in order to identify and resolve the problem quickly. You may say there are options, like alerts to your Slack channel or configurable email notifications.

Those triggers can alert you, but they do not provide the ability to trace and debug the issues in a timely manner, as one still needs to dig into the code. Failure may be due to more elusive problems, such as missing dependencies.

Unit & Integration TESTS

It's not enough to know that your build was successful; it also has to pass tests to ensure changes don't introduce bugs or regressions. Frameworks such as JUnit, NUnit, and pytest generate test result reports, though these reports output the failed cases, not the "how".

Deploy Application

Most pipelines have pivoted to infrastructure as code, where code dictates how infrastructure provisioning is done. In our example, AWS CDK lets you manage the infrastructure as code using Python. While this empowers developers, it adds code that itself becomes hard to debug.

Post-Deploy Health Checks

Most deployments have an extra step to verify health, as illustrated in our pipeline. Such checks may include Redis health and database health. Since these checks are driven by code, we have yet another opportunity for failure that may hinder our success metric.
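Such code-driven health checks often look like the following sketch. The hostnames and ports below are placeholders, not part of the example pipeline:

```python
# Minimal post-deploy health gate sketch. Hosts and ports are placeholders.
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds (e.g. Redis, a database)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_health_checks(checks: dict) -> dict:
    """Run every named check and collect the results."""
    return {name: check() for name, check in checks.items()}

results = run_health_checks({
    "redis": lambda: tcp_check("localhost", 6379),
    "database": lambda: tcp_check("localhost", 5432),
})
print(results)
deploy_ok = all(results.values())  # the pipeline proceeds only if this is True
```

Every `False` here is a point of failure in the deploy stage, which is exactly where runtime visibility into the check code pays off.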

Visually Illustrating Points of Failure in the CI/CD Pipeline

The illustration below shows places that can potentially go wrong, i.e., points of failure. This gets exponentially more complex depending on how your CI/CD pipeline has been developed.




Points of failure in our example CI/CD pipeline

Dynamic Debugging and Logging to Remedy your Pipeline Ailments 

Let's take a look at how we can quickly figure out what is going on in our complex pipeline. A new approach is shift-left observability: the practice of incorporating observability into the early stages of the software development lifecycle, applied here via the Lightrun CI/CD pipeline observability pattern.

Lightrun takes a developer-native, observability-first approach. With the platform, we can strategically add agent libraries to each component in our CI/CD pipeline, as illustrated below.


Lightrun CI/CD pipeline pattern

Each agent can observe and introspect your code at runtime, allowing you to hook into your pipeline directly from your IDE via a Lightrun plugin or the CLI.

This allows you to add virtual breakpoints, with logging expressions of your choosing, to your code in real time directly from your IDE: in essence, remote debugging and remote logging, just as you would do in your local environment, but linked directly into production.

Virtual breakpoints are non-intrusive and capture the application's context, such as variables and the stack trace, when they're hit. This means no interruptions to code executing in the pipeline, and no further redeployments are required to optimize your pipeline.
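A rough Python analogue of what a non-breaking snapshot captures is shown below. This sketch is purely conceptual; the Lightrun agent does this at runtime without any such helper in your code:

```python
# Conceptual analogue only: capture the caller's local variables and stack
# at a point in the code without pausing execution.
import inspect
import traceback

def take_snapshot():
    """Return the calling frame's locals and a formatted stack trace."""
    frame = inspect.currentframe().f_back
    return {"locals": dict(frame.f_locals), "stack": traceback.format_stack(frame)}

def compute(order_id: int):
    total = order_id * 3
    snapshot = take_snapshot()  # execution continues normally afterwards
    return total, snapshot

total, snapshot = compute(14)
print(total)                        # 42
print(snapshot["locals"]["total"])  # 42
```

Note that `compute` never stops: the snapshot is a copy of state taken in passing, which is the property that makes virtual breakpoints safe in a live pipeline.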

Lightrun Agents can be baked into Docker images as part of the build cycles. This pattern can be further extended by making a base Docker image that has your Lightrun unified configurations inherited by all microservices as part of the build, forming a chain of agents for tracking. 

Placing logs in parts of the test and deploy build pipeline, paired with real-time alerting when log points are reached, can minimize troubleshooting challenges without redeployments.

For parts of the code that lack code coverage, all we need to do is add a Lightrun counter metric to bridge the gap, forming a coverage tree of dependencies that helps trace and scope what has been executed and how frequently.

Additional metrics are available via the Tic & Toc metric, which measures the elapsed time between two selected lines of code within a function or method, for measuring performance. 

Customized metrics can further be added using custom parameters, with simple or complex expressions that return a long integer result.
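Plain-Python analogues of these metric types (counters, Tic & Toc timing, and custom integer expressions) are sketched below. Lightrun injects these at runtime from the IDE, so this sketch only illustrates the concepts, not the mechanism:

```python
# Conceptual sketch only; Lightrun adds these metrics at runtime, not in code.
import time
from collections import Counter

counters = Counter()

def count_hit(name: str):
    """Counter metric analogue: how often a line or branch is reached."""
    counters[name] += 1

def process(items):
    tic = time.perf_counter()              # "tic": start of the measured span
    for item in items:
        if item % 2 == 0:
            count_hit("process.even_branch")
    toc = time.perf_counter()              # "toc": end of the measured span
    elapsed_ms = (toc - tic) * 1000
    custom_metric = len(items)             # custom metric: an integer expression
    return custom_metric, elapsed_ms

custom_metric, elapsed_ms = process([1, 2, 3, 4])
print(counters["process.even_branch"])  # 2
print(custom_metric)                    # 4
```

The counter here plays the coverage-bridging role described above: a branch with a zero count was never executed.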

Log output will immediately be available for analysis via either your IDE or Lightrun Management Portal. By eliminating arduous and time-consuming CI/CD cycles, developers can quickly drill down into their application’s state anywhere in the code to determine the root cause of errors.

How to Inject Agents into your CI/CD pipeline?

Below we illustrate the steps using Python; you're free to replicate the same with other supported languages.

  1. Install the Lightrun plugin.
  2. Authenticate your PyCharm IDE with your Lightrun account.
  3. Install the Python agent by running python -m pip install lightrun.
  4. Add the following code to the beginning of your entrypoint function:
import os

LIGHTRUN_KEY = os.environ.get('YOUR_LIGHTRUN_KEY')
LIGHTRUN_SERVER = os.environ.get('YOUR_LIGHTRUN_SERVER_URL')

def import_lightrun():
    try:
        import lightrun
        lightrun.enable(com_lightrun_server=LIGHTRUN_SERVER,
                        company_key=LIGHTRUN_KEY,
                        lightrun_wait_for_init=True,
                        lightrun_init_wait_time_ms=10000,
                        metadata_registration_tags='[{"name": "<app-name>"}]')
    except ImportError as e:
        print("Error importing Lightrun: ", e)

As part of the enable function call, you can specify lightrun_wait_for_init=True and lightrun_init_wait_time_ms=10000 in the Python agent configuration. 

These two parameters ensure that the Lightrun agent starts up fast enough to work within short-running service functions, applying a wait time of about 10,000 milliseconds before fetching Lightrun actions from the management portal. Note that these are optional parameters that can be ignored if they don't make sense for long-lived code execution cycles, e.g., running a Django project or FastAPI microservice applications. If you're using another language, like Java, the same principles apply.

Once your agent is configured, call the import_lightrun() function from the __init__.py of your pipeline code to ensure the agent is invoked when the pipeline starts.

Deploy your code, and open your IDE with access to all your code, including your deployment code. 

Select the lines of code you wish to trace, and open the Lightrun terminal and console output window shipped with the agent plugin. 

Adding logging to live executing code with Lightrun directly from your IDE

 

Achieving Unified output to your favorite centralized logging service

If you wish to pipe out logs instead of using the IDE, you can tap into third-party integrations to consolidate the CI/CD pipeline, as illustrated below.

If you notice an unusual event, you can drill down to the relevant log messages to determine the root cause of the problem and begin planning for a permanent fix in the next triggered deploy cycle.

Validation of CI/CD Pipeline Code State

One of the benefits of an observed pipeline is that we can fix pipeline versioning issues. Without correct tagging, how do you know your builds contain the expected commits? It gets hard to tell the difference without QA effort.

By adding dynamic log entries at strategic points in the code, we can validate new features and committed code in the pipeline that was introduced into the platform by examining dynamic log output before it reaches production. 

This becomes very practical if you work in an environment with a lot of guardrails and security lockdowns on production servers: you don't have to contend with incomplete local replications.

Final thoughts 

A shift-left observability approach to CI/CD pipeline optimization can reduce your MTTR, the average time it takes to recover from production pipeline failures, which can otherwise lead to critical bugs being deployed to production.

You can start using Lightrun today, or request a demo to learn more.

Lightrun Launches New .NET Production Troubleshooting Solution: Revolutionizing Runtime Debugging

Published April 24, 2023 on the Lightrun blog (https://lightrun.com/lightrun-launches-new-net-production-troubleshooting-solution-revolutionizing-runtime-debugging/)
Introduction and Background

Lightrun, the leading Developer Observability Platform for production environments, announced today that it has extended its support to include C# on its plugins for JetBrains Rider, VSCode, and VSCode.dev. With this new runtime support, .NET developers can troubleshoot their apps against .NET Framework 4.6.1+, .NET Core 2.0+, and .NET 5.0+ technologies.

This new runtime support enables developers to seamlessly integrate Lightrun’s dynamic instrumentation and live debugging capabilities into their .NET applications running on these popular development platforms without requiring any code changes or redeployments.

It's important to understand that Microsoft's .NET technology has undergone a significant evolution in recent years, from being a closed development technology until the .NET 5.0 release in late 2020, when it became open source and publicly available on other operating systems like Linux.

With the above transformation and growing adoption of .NET, development teams are faced with the ongoing task of delivering code at an increasingly rapid pace without sacrificing the critical components of quality and security. This challenge is exacerbated by the intricate and elaborate nature of distributed architecture. While this architecture allows for ease of development and scalability, it also presents a unique challenge in terms of identifying and resolving issues that may arise. 

Discovering an issue within your code, whether through personal observation or client feedback, can be an unpleasant experience. The cloud environment, particularly within a distributed and serverless context, can make it challenging to understand the root cause of the problem. As multiple microservices communicate with each other, it is difficult to gain a comprehensive view of the entire application. Moreover, recreating the same situation locally is almost impossible. The situation is compounded by the dynamic nature of the environment, where instances are deployed and removed unpredictably, making it even more challenging to gain visibility into the issue at hand.

Troubleshooting .NET – Current Options

As mentioned earlier, troubleshooting .NET applications can be a very complex task for developers. Within the native IDEs, .NET developers have various options to address issues encountered during the production phase. Here are some of them:

  • Crash dumps can provide insight into why code crashed, but analyzing them requires specific skills, and they cannot identify the root cause of the issue that led to the crash.
  • Metrics and counters collect and visualize data from remotely executing code and can provide better insight into the system. However, there are limits to dynamically adjusting the scope of focus to specific variables or methods.
  • Logging is a common and straightforward technique to help developers identify issues in running code. It provides a wealth of information, but irrelevant data can be stored, particularly when logging at scale. Consolidating logs into a single service, such as Application Insights, requires accurate filtering to separate important data from the noise. Additionally, static logging does not provide flexibility in what to log or the duration of the logging session.
  • Snapshot debugging allows for remote debugging of code within the IDE. When exceptions are thrown, a debug snapshot is collected and stored in Application Insights, including the stack trace. A minidump can also be obtained in Visual Studio to enhance visibility into the issue’s root cause. However, this is still a reactive solution that requires a more proactive approach to problem-solving. It also only works on specific ASP.NET applications.
  • Native IDE Debuggers – built-in debuggers like the one that comes with the JetBrains Rider IDE are good for local debugging; however, they have a few drawbacks. They stop the running app for breakpoints and other telemetry collection, and cannot really help recreate complex production issues.

Live Debugging of .NET Applications with Lightrun!

Lightrun provides a unique solution for developers who want to improve their code's observability without code modifications at runtime. The magic ingredient is the Lightrun agent, which enables dynamic logging and observability and is added to the solution as a NuGet package. Once included, it is initialized in your application's startup code, and that is it. From there, the Lightrun agent takes care of everything else: connecting to your application at any time to retrieve logs and snapshots (virtual breakpoints) on an ad-hoc basis, without unnecessary code modification or instrumentation, and without going through the rigorous code-change approval, testing, and deployment process.

Getting Started with Lightrun for .NET within VSCode and Rider IDEs

As shown in the workflow diagram below, .NET developers can get started with Lightrun by following a few simple steps.

Detailed Step by Step Instructions: VS Code IDE

STEP 1: Onboarding and Installation

To get started with Lightrun for .NET in VS Code IDE, create a Lightrun account.

Then follow the below steps to prepare your C# application to work with Lightrun (Code sample is available in this Github repository).

Once you’ve obtained your Lightrun secret key from the Lightrun management portal (https://app.lightrun.com), please install the VS Code IDE plugin from the extensions marketplace as shown in the below screenshot.

After the IDE plugin installation and user authentication, you should be able to start troubleshooting your C# application directly from your IDE, running it from your IDE, the command line, or GitHub Actions. 

STEP 3: Snapshots

From the VS Code IDE, run your C# application. Once the app is running, developers can add virtual breakpoints (called Lightrun snapshots) without stopping the running app, as well as add dynamic and conditional logs. The screenshot below shows how a Lightrun snapshot looks inside the VS Code plugin, and how it provides developers with full variable information and the call stack so they can quickly understand what's going on and resolve the issue at hand.

The above example demonstrates a Lightrun snapshot hit on the C# Fibonacci application (line 42). 

STEP 4: Conditional Snapshots 

In the same way developers use the Lightrun plugin to add snapshots, they can also include conditions to validate specific use cases throughout the troubleshooting workflow. As seen in the example below, developers can add a conditional snapshot that only captures Fibonacci numbers that are divisible by 5 (line 51 in the Fibonacci code sample).

 

STEP 5: Logs 

The screenshot below shows how a log could be used within the VS Code IDE plugin to gather specific variable outputs, piped to the Lightrun IDE console or to stdout. Such troubleshooting telemetry can also easily be shared from within the plugin via Slack, Jira, or a simple URL, facilitating cross-team collaboration.

STEP 5: Conditional Logs

To add a condition to the troubleshooting log output, use the conditional logs field in the IDE plugin, as shown below. For the Fibonacci code sample, we pipe to the Lightrun console only the numbers (n) that are divisible by 5 (n % 5 == 0).
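A conditional dynamic log behaves like a log statement guarded by that same boolean expression, except that it is added and removed at runtime from the IDE rather than compiled in. The sketch below shows the output such a log would produce (the message text here is hypothetical):

```csharp
using System;
using System.Collections.Generic;

public class ConditionalLogDemo
{
    // Builds the lines a dynamic log with condition "n % 5 == 0"
    // would emit over one pass of the loop.
    public static List<string> EmittedLogLines(int upTo)
    {
        var lines = new List<string>();
        for (int n = 1; n <= upTo; n++)
        {
            if (n % 5 == 0) // the log's condition
                lines.Add($"fib loop reached n = {n}");
        }
        return lines;
    }

    public static void Main()
    {
        foreach (var line in EmittedLogLines(15))
            Console.WriteLine(line); // piped to the Lightrun console or stdout
    }
}
```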



Detailed Step by Step Instructions: Rider IDE

In a similar way to the VS Code IDE, developers can follow the steps below within the Rider IDE to troubleshoot their C# applications.

STEP 1: Onboarding and Installation

To get started with Lightrun for .NET in Rider IDE, create a Lightrun account.

Then follow the steps below to prepare your C# application to work with Lightrun.

Once you’ve obtained your Lightrun secret key from the Lightrun management portal as noted above, install the Rider plugin from the extensions marketplace, as shown in the screenshot below.

Once the plugin is installed and you have authenticated, you can start troubleshooting your C# application directly from your IDE, whether the app is launched from the IDE, the command line, or GitHub Actions.

STEP 2: Snapshots

With the application running in the Rider IDE, developers can add virtual breakpoints (called Lightrun snapshots) without stopping the running app, as well as dynamic and conditional logs. The screenshot below shows how a Lightrun snapshot looks inside the Rider plugin: it provides full variable information and the call stack, so developers can quickly understand what’s going on and resolve the issue at hand.

The above example demonstrates a Lightrun snapshot hit on the C# Fibonacci application (line 52). 

STEP 3: Conditional Snapshots

In the same way that developers use the Lightrun plugin to add snapshots, they can attach conditions to validate specific use cases throughout the troubleshooting workflow. As seen in the example below, a conditional snapshot can capture only the Fibonacci numbers that are divisible by 10 (line 52 in the Fibonacci code sample).

STEP 4: Logs

The screenshot below shows how a log can be used within the Rider plugin to gather specific variable outputs, which can be piped to the Lightrun IDE console or to stdout. This troubleshooting telemetry can also easily be shared from within the plugin via Slack, Jira, or a simple URL, facilitating cross-team collaboration.



The above example demonstrates a Lightrun logpoint that was added to the C# Fibonacci application (line 31) and the log output displayed in the dedicated Lightrun console. 

STEP 5: Conditional Logs

To add a condition to the troubleshooting log output, use the conditional logs field in the IDE plugin, as shown below. For the Fibonacci code sample, we pipe to the Lightrun console only the numbers (n) that are divisible by 10 (n % 10 == 0).

Bottom Line

With Lightrun, developers can identify and resolve issues proactively, reducing downtime and enhancing the user experience while also reducing overall logging costs. If you’re a .NET developer looking to simplify your debugging process, give Lightrun a try and experience the difference it can make in your development workflow.

To see it live, please book a demo here!

The post Lightrun Launches New .NET Production Troubleshooting Solution: Revolutionizing Runtime Debugging appeared first on Lightrun.

Mastering Complex Progressive Delivery Challenges with Lightrun
Introduction

Progressive delivery is a modification of continuous delivery that allows developers to release new features to users in a gradual, controlled fashion. 

It does this in two ways. 

Firstly, by using feature flags to turn specific features ‘on’ or ‘off’ in production, based on certain conditions, such as specific subsets of users. This lets developers deploy rapidly to production and perform testing there before turning a feature on. 

Secondly, they roll out new features to users in a gradual way using canary releases, which involves making certain features available to only a small percentage of the user base for testing before releasing them to all users.
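Under the hood, a canary rollout is commonly implemented by hashing each user into a fixed bucket and enabling the feature for a configurable percentage. A minimal sketch (assumed details: string user ids and a stable FNV-1a-style hash, since .NET's `string.GetHashCode` is randomized per process):

```csharp
using System;

public static class CanaryRollout
{
    // Stable hash into a bucket 0-99, so a given user keeps the same
    // bucket across processes and deployments.
    public static int Bucket(string userId)
    {
        uint h = 2166136261;
        foreach (char c in userId)
            h = (h ^ c) * 16777619;
        return (int)(h % 100);
    }

    // True if this user falls inside the canary percentage.
    public static bool InCanary(string userId, int percent) =>
        Bucket(userId) < percent;

    public static void Main() =>
        Console.WriteLine(InCanary("user-42", 10)); // deterministic across runs
}
```

The stable hash matters: it keeps each user's experience consistent as the percentage is dialed up from, say, 1% to 100%.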

Practices like these allow developers to get incredibly granular with how they release new features to their user base in order to minimize risk.

But there are downsides to progressive delivery: it can create very complex code that is challenging to troubleshoot. 

Troubleshooting Complex Code 

Code written for progressive delivery is highly conditional. It can contain many feature flag branches that respond differently to different subsets of users based on their data profile or specific configuration. You can easily end up with hard-to-follow code composed of complex flows with many conditional statements.
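As a concrete illustration (hypothetical flag names and pricing paths, not taken from any particular flag SDK), a single routine can quickly become a lattice of branches whose outcome depends on both flag state and user data:

```csharp
using System;
using System.Collections.Generic;

public record User(string Id, string Region, string Plan);

public class CheckoutService
{
    private readonly HashSet<string> _enabledFlags;

    public CheckoutService(IEnumerable<string> enabledFlags) =>
        _enabledFlags = new HashSet<string>(enabledFlags);

    private bool IsEnabled(string flag) => _enabledFlags.Contains(flag);

    // Which pricing path runs depends on flag state *and* the user's
    // data profile, which is hard to predict by reading the code statically.
    public string PricingPath(User user)
    {
        if (IsEnabled("new-pricing-engine"))
        {
            if (user.Region == "EU" && IsEnabled("vat-rework"))
                return "new-engine+vat-rework";
            if (user.Plan == "enterprise" && IsEnabled("volume-discounts"))
                return "new-engine+volume-discounts";
            return "new-engine";
        }
        return "legacy-engine";
    }
}
```

With just three flags and two user attributes there are already more paths than most developers will enumerate by hand, which is exactly where static reasoning about the executed path starts to break down.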

This means that your code becomes very unpredictable. It’s not always clear which code path will be invoked as this is highly dependent on which user is active and in what circumstances. 

The difficulty comes when you discover an issue, vulnerability or bug that is related to one of these complex branches and only occurs under certain very specific conditions. It becomes very difficult to determine which code path contains the bug and what information you need to gather to fix it. 

This becomes a major barrier to identifying any problems and resolving them effectively. 

The Barriers To Resolving Issues In Progressive Delivery

When a problem arises in this complex progressive delivery context, your developers can spend a huge amount of time trying to discern the location and nature of the actual problem amidst all the complexity. 

There are three main ways this barrier manifests:

  • Parsing conditional statements in the code path

Developers have to determine the actual code path being executed when the problem arises, a non-trivial task when many different feature flags are conditionally triggered by different users in unpredictable ways. 

Among all these different possibilities it is very hard to determine which conditional statements will run and therefore to statically analyze the code path that will be executed. 

Developers have to add new logs to track the flow of code, forcing them to redeploy the application. Sometimes many rounds of logging and redeployment are required before they get the information they need, which is incredibly time-consuming.  

  • Emulating the production environment locally

Secondly, once the right code path has been isolated, they have to replicate that complex, conditional code on their local machine to test potential fixes. 

But if there are many feature flags and conditional statements, it is very hard to emulate that locally to reproduce and assess the problem given the complexity of the production environment. 

A huge amount of time and energy is needed to do this, with no guarantee that you will be able to perfectly replicate the production environment.

  • Generating synthetic traffic that matches the user profiles

Thirdly, when the code path that is executed is highly dependent on specific data (e.g. user data), it is hard to simulate the workloads synthetically in order to properly test the solution in a way that accurately mirrors the production environment. 

Yet more time and energy must be expended to trigger the issue in the test environment in a way that gives developers the information they need to properly resolve the issue.

Using Lightrun to Troubleshoot Progressive Delivery

Developer time is extremely valuable, yet developers can waste a lot of it dealing with these niggling hurdles to remediation, time that could be spent creating valuable new features.

But there is a new approach that can overcome these barriers: dynamic observability. 

Lightrun is a dynamic observability platform that enables developers to add logs, metrics and snapshots to live applications—without having to release a new version or even stop the running process.

In the context of progressive delivery, Lightrun enables you to use real-time dynamic instrumentation to:

  • Identify the actual workflow affected by the issue
  • Capture the relevant information from that workflow

This means that you can identify and understand your bug or vulnerability without having to redeploy the application or recreate the issue locally, regardless of the complexity of the code.

There are two features of Lightrun that are particularly potent in this regard: Lightrun dynamic logs and snapshots. 

Dynamic Logs

You can deploy dynamic logs within each feature flag branch in real-time, providing full visibility into the progressive delivery process without having to redeploy the application.

Unlike regular logging statements, which are printed for all requests served by the system, dynamic logs can target specific users or user segments using conditions, making them more precise and much less noisy.

If there’s a new issue you want to track or a new feature flag branch you want to start logging, you can just add it on the fly. Then you can flow through the execution and watch your application’s state change with every flag flip using real user data, right from the IDE, without having to add endless ‘if/else’ statements in the process.
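For instance, a dynamic log placed inside a flag branch with the condition `user.Plan == "beta"` fires only for beta users. Its effect is equivalent to the filter below (hypothetical field and message names), with the crucial difference that no `if` statement is ever added to the codebase:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class SegmentedLogDemo
{
    // Returns the lines a dynamic log with condition user.Plan == "beta"
    // would emit for a batch of requests.
    public static List<string> LogLinesFor(IEnumerable<(string Id, string Plan)> users) =>
        users.Where(u => u.Plan == "beta")                      // the log's condition
             .Select(u => $"[lightrun] flag branch hit by {u.Id}")
             .ToList();

    public static void Main()
    {
        var requests = new[] { ("u1", "beta"), ("u2", "ga"), ("u3", "beta") };
        foreach (var line in LogLinesFor(requests))
            Console.WriteLine(line);
    }
}
```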

Granular Snapshots

Similarly, you can place Snapshots – essentially virtual breakpoints – inside any flag-created branch, giving you debugger-grade visibility into each rollout. This gives your developers on-demand access to whatever information they need about the code flows that are affected by your issue. 

All the classic features you know from traditional debugging tools, plus many more, are available in every snapshot, which:

  • Can be added to multiple instances, simultaneously
  • Can be added conditionally, without writing new code
  • Provides full-syntax expression evaluation
  • Is safe, read-only and performant
  • Can be placed, viewed and edited right from the IDE

Enabling your developers to track issues through complex code and gather intelligence on demand, all without having to redeploy the app or even leave the IDE, makes troubleshooting progressive delivery codebases much easier.

Developer Benefits Of Using Lightrun

  • Determine which workflow is being executed during a given issue

Developers can identify exactly which workflow is relevant. They no longer waste time troubleshooting sections of code that are not affected because they are never executed, or redeploying the application just to insert log messages that track the code flow. 

  • No need to locally reproduce issues

By dynamically observing the application pathways at runtime, you avoid the need to invest significant time and energy into reproducing your production environment locally along with all the complexity of feature flags and config options. 

  • No need to create highly-specific synthetic traffic

Similarly, there is no need to emulate customer workloads by creating highly conditional synthetic traffic to trigger the particular code path in question.

Overall, developers can save a huge amount of time and energy that was previously being sunk into investigating complexity in different ways. 

Final Thoughts

Dynamic observability gives you much deeper and faster insights into what’s going on in your code. 

With Lightrun’s revolutionary developer-first approach, we enable engineering teams to connect to their live applications and continuously identify critical issues without hotfixes, redeployments or restarts. 

If you need a hand troubleshooting complex code flows or dealing with highly conditional progressive delivery scenarios, get in touch.



The post Mastering Complex Progressive Delivery Challenges with Lightrun appeared first on Lightrun.
