Implement Logging Effectively

A practical guide to designing, implementing, and maintaining reliable application logging across platforms.

Why logging matters

Logging is the primary way applications communicate their internal state to developers and operators. Good logging helps with debugging, monitoring, security auditing, and capacity planning. Poor logging — or no logging — makes incidents harder to detect and recover from, increases time-to-resolution, and can hide security breaches.

Core principles

  1. Be consistent: use structured logs with a common schema (timestamp, level, service, trace id, message, fields); see the schema sketch after this list.
  2. Log levels: DEBUG, INFO, WARN, ERROR, FATAL (use them deliberately).
  3. Don’t log secrets: redact or avoid PII and credentials.
  4. Contextual information: include request ids, user ids, and other context to correlate events.
  5. Structured logs: prefer JSON or key-value over plain text for machine parsing.
  6. Sampling & volume control: avoid log storms; sample verbose logs or use rate limits.
  7. Centralize: ship logs to a central system (ELK, Loki, Datadog, Splunk) for search & alerting.
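
For concreteness, here is one possible shape for such a shared schema, sketched in TypeScript; the field names are illustrative, not a standard.

// logschema.ts: one possible shared record shape (field names are illustrative)
interface LogRecord {
  ts: string;                        // ISO-8601 timestamp, UTC
  level: 'debug' | 'info' | 'warn' | 'error' | 'fatal';
  service: string;                   // name of the emitting service
  trace_id: string;                  // correlation id propagated across services
  msg: string;                       // human-readable message
  fields?: Record<string, unknown>;  // structured, query-friendly extras
}

const record: LogRecord = {
  ts: new Date().toISOString(),
  level: 'info',
  service: 'checkout',
  trace_id: 'abc123',
  msg: 'order placed',
  fields: { order_id: 42 },
};

console.log(JSON.stringify(record)); // one JSON object per line is easy to ship and parse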

Examples & code snippets

Below are compact examples showing good logging patterns in popular stacks. Each snippet uses structured logging where practical and includes a short explanation.

1) ASP.NET Core (C#)

// Program.cs (minimal hosting model)
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Configure logging providers: simple console with timestamps and scopes
builder.Logging.ClearProviders();
builder.Logging.AddSimpleConsole(options =>
{
    options.TimestampFormat = "yyyy-MM-ddTHH:mm:ss.fffZ ";
    options.UseUtcTimestamp = true; // the trailing "Z" is only accurate for UTC timestamps
    options.IncludeScopes = true;   // include scopes for contextual data
});

var app = builder.Build();

app.MapGet("/hello", (ILogger<Program> logger) =>
{
    using (logger.BeginScope(new Dictionary<string, object> { ["trace_id"] = Guid.NewGuid() }))
    {
        logger.LogInformation("Hello endpoint hit");
    }

    return Results.Ok(new { message = "Hello" });
});

app.Run();

Explanation: Uses built-in ILogger with scopes to attach contextual fields. Configure console output to include timestamps. For production, replace console with structured sinks (e.g., Seq, Elasticsearch, Application Insights).
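
If you want strict JSON on stdout without a third-party sink, the built-in JSON console formatter is one option. A minimal sketch, swapping out the AddSimpleConsole call above:

// Alternative: built-in JSON formatter, one JSON object per log event
builder.Logging.ClearProviders();
builder.Logging.AddJsonConsole(options =>
{
    options.IncludeScopes = true; // scope fields appear as properties in the JSON output
});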

2) Spring Boot (Java)

// build.gradle dependencies (snippet)
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'net.logstash.logback:logstash-logback-encoder:7.4'
}

// src/main/resources/logback-spring.xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp />
        <pattern>
          <pattern>{"level":"%level"}</pattern>
        </pattern>
        <loggerName />
        <message />
        <mdc />
      </providers>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

Explanation: Use Logstash Logback encoder to emit JSON logs. Add contextual data using MDC (Mapped Diagnostic Context) — useful for request ids and user ids. Centralized JSON logs are easy to ingest into ELK or similar systems.
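
The MDC fields picked up by the <mdc /> provider have to be populated somewhere, typically once per request. A minimal sketch of a servlet filter that does this, assuming Spring Boot 3 (jakarta.servlet imports) and an X-Request-Id header from your gateway; adjust both to your setup:

// RequestIdFilter.java: a sketch, not the only way to populate MDC
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;
import java.util.UUID;

@Component
public class RequestIdFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String requestId = request.getHeader("X-Request-Id");
        if (requestId == null) {
            requestId = UUID.randomUUID().toString();
        }
        MDC.put("request_id", requestId); // emitted by the <mdc /> provider above
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("request_id"); // avoid leaking context across pooled threads
        }
    }
}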

3) Express (Node.js)

// app.js
const express = require('express');
const pino = require('pino');
const pinoHttp = require('pino-http');

const logger = pino({ level: process.env.LOG_LEVEL || 'info' });
const app = express();

app.use(pinoHttp({ logger }));

app.get('/items/:id', (req, res) => {
  req.log.info({ route: '/items/:id', id: req.params.id }, 'fetching item');
  // ...handle request
  res.json({ id: req.params.id });
});

app.listen(3000);

Explanation: Pino is a fast JSON logger for Node. pino-http attaches a logger to each request (req.log) with useful request-scoped fields. Logs are structured and efficient for high-throughput apps.
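
Pino can also redact sensitive fields at the source via its redact option. A short sketch, swapping the pino() call above; the paths shown are examples, adjust them to your payloads:

// Redaction at source: matched paths are replaced before the line is written
const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  redact: {
    paths: ['req.headers.authorization', 'password', '*.token'],
    censor: '[REDACTED]'
  }
});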

4) Next.js (TypeScript) — server and client considerations

// server/logger.ts
import pino from 'pino';

export const serverLogger = pino({
  level: process.env.LOG_LEVEL || 'info'
});

// pages/api/hello.ts
import type { NextApiRequest, NextApiResponse } from 'next';
import { serverLogger } from '../../server/logger';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  serverLogger.info({ path: req.url, method: req.method }, 'api hit');
  res.status(200).json({ name: 'John Doe' });
}

// Note: client-side logs should never include secrets — send only non-sensitive events from the browser.

Explanation: Keep server-side logging separate from client telemetry. Use a server logger for backend traces; for client telemetry, use analytics or an event pipeline that strips sensitive fields.
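
One way to keep browser events safe is an explicit allow-list applied before anything is sent. A sketch; the /api/telemetry endpoint and the field list are hypothetical:

// client/telemetry.ts: allow-list sketch (endpoint and field names are hypothetical)
const ALLOWED_FIELDS = new Set(['event', 'page', 'duration_ms']);

export function sendEvent(payload: Record<string, unknown>): void {
  // drop everything that is not explicitly allow-listed before it leaves the browser
  const safe = Object.fromEntries(
    Object.entries(payload).filter(([key]) => ALLOWED_FIELDS.has(key))
  );
  navigator.sendBeacon('/api/telemetry', JSON.stringify(safe));
}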

5) Flask (Python)

# app.py
import logging
from uuid import uuid4

from flask import Flask, g, request

app = Flask(__name__)

# Configure structured logging (simple example)
handler = logging.StreamHandler()
formatter = logging.Formatter('{"ts":"%(asctime)s","level":"%(levelname)s","msg":"%(message)s","path":"%(pathname)s"}')
handler.setFormatter(formatter)
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

@app.before_request
def attach_request_context():
    # attach a request id (inbound header if present, else a fresh one) for correlation
    g.request_id = request.headers.get("X-Request-ID", uuid4().hex)

@app.route('/ping')
def ping():
    app.logger.info(f"ping from {request.remote_addr} request_id={g.request_id}")
    return {"pong": True}

Explanation: The example emits JSON-like strings (simple formatter). For production, use a structured logger like python-json-logger or logging configuration that outputs strict JSON, and include correlation ids via Flask request hooks.
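
A minimal sketch of the python-json-logger variant, continuing the app.py above (assumes pip install python-json-logger):

# Strict JSON output; the library escapes message content for you
from pythonjsonlogger import jsonlogger

json_handler = logging.StreamHandler()
json_handler.setFormatter(jsonlogger.JsonFormatter('%(asctime)s %(levelname)s %(message)s'))
app.logger.addHandler(json_handler)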

6) Laravel (PHP)

// config/logging.php (excerpt)
'channels' => [
    'stack' => [
        'driver' => 'stack',
        'channels' => ['daily', 'papertrail'],
    ],

    'papertrail' => [
        'driver' => 'monolog',
        'handler' => Monolog\Handler\SyslogUdpHandler::class,
        'handler_with' => [
            'host' => env('PAPERTRAIL_URL'),
            'port' => env('PAPERTRAIL_PORT'),
        ],
    ],
],

// usage in code
Log::info('Order processed', ['order_id' => $order->id, 'user_id' => $user->id]);

Explanation: Laravel uses Monolog under the hood. Configure channels (daily files, remote sinks). Use the context array parameter to attach structured fields, which Monolog can render as JSON or key-value pairs.
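
Two small additions along those lines, sketched for a recent Laravel (withContext needs 8.49+); the channel excerpt extends the config/logging.php shown above:

// config/logging.php: render the daily file channel as JSON
'daily' => [
    'driver' => 'daily',
    'path' => storage_path('logs/laravel.log'),
    'days' => 14,
    'formatter' => Monolog\Formatter\JsonFormatter::class,
],

// e.g. in a middleware: context attached here is merged into every later log call
Log::withContext(['request_id' => (string) Str::uuid()]);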

Common patterns across examples

  • Prefer structured output (JSON) for machines and humans.
  • Include trace/request ids for cross-service correlation.
  • Use appropriate log levels and avoid over-logging in hot loops (a sampling sketch follows this list).
  • Protect sensitive information by redaction and filtering.
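
For the hot-loop point, a minimal probabilistic sampler sketched in TypeScript; real deployments often sample or rate-limit in the collector instead:

// sampledLog.ts: keep roughly `rate` of calls, tagging each record with the
// sample rate so counts can be scaled back up at query time
function sampledLog(
  logger: { info: (obj: object, msg: string) => void },
  rate: number,
  obj: object,
  msg: string
): void {
  if (Math.random() < rate) {
    logger.info({ ...obj, sample_rate: rate }, msg);
  }
}

// usage in a hot loop: keep about 1% of these records
// sampledLog(logger, 0.01, { item_id: 42 }, 'cache miss');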

Conclusion

Effective logging is not an afterthought — it is an essential part of system design. When implemented well, logging shortens debugging time, enables better monitoring and alerting, and supports security and compliance needs. If logging is ignored or done inconsistently, teams face longer incident response times, missed alerts, poor observability, and increased risk of data leakage or regulatory non-compliance.

Key takeaways

  • Design a consistent, structured logging schema across services.
  • Attach context (trace ids, user ids) to make logs actionable.
  • Protect secrets — never log passwords, tokens, or sensitive PII.
  • Centralize and monitor logs; set alerts on important signals.
  • Control volume with sampling and rate limits to keep costs manageable.

Use this article as a checklist when improving logging in your applications — small improvements (a trace id here, a structured field there) compound into much better observability and reliability.