Structured Logging with Tethys.Logging: JSON, Context, and Correlation IDs

Structured logging transforms plain-text log lines into machine-readable records—typically JSON—that carry both a human-friendly message and discrete fields you can query, filter, and analyze. For systems that need observability, auditability, and reliable troubleshooting at scale, structured logs are essential. This article explains how to adopt structured logging with Tethys.Logging, covering configuration, JSON formatting, contextual enrichment, and managing correlation IDs for distributed tracing.
Why structured logging matters
- Searchable fields: You can filter logs by userId, requestId, statusCode, etc., instead of relying on fragile string searches.
- Better dashboards and alerts: Tools like Kibana, Grafana Loki, or Datadog can aggregate numeric fields and build meaningful metrics.
- Easier troubleshooting: Contextual fields let you quickly correlate related events across services.
- Compliance and auditing: Structured output simplifies record retention, export, and analysis.
What is Tethys.Logging?
Tethys.Logging is a .NET-centric logging library (or wrapper/abstraction) designed to integrate with common sinks and provide flexible enrichment. It exposes configuration options for formatters, sinks (console, file, HTTP), and middleware/enrichers that attach context to each log entry. The examples in this article assume you are using .NET (Core or later) and are familiar with dependency injection and middleware pipelines.
JSON formatting: configuration and examples
JSON is the canonical format for structured logs. Tethys.Logging includes a JSON formatter that emits a compact, parseable object per log entry.
Example minimal JSON schema:
- timestamp: ISO 8601 UTC time
- level: log level (Debug, Information, Warning, Error, Critical)
- message: human-readable message
- logger: source/class generating the log
- exception: serialized exception info (if any)
- fields: object with arbitrary key/value pairs (userId, orderId, etc.)
Example configuration (C#):
// Startup.cs or Program.cs
using Tethys.Logging;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Configure Tethys.Logging
builder.Logging.ClearProviders();
builder.Logging.AddTethys(options =>
{
    options.Formatter = new JsonLogFormatter();
    options.Sinks.Add(new ConsoleSink());
    options.Sinks.Add(new FileSink("logs/app.log"));
    options.Enrichers.Add(new EnvironmentEnricher());
});

var app = builder.Build();
Example JSON log line:
{
  "timestamp": "2025-08-29T12:34:56.789Z",
  "level": "Information",
  "message": "User login succeeded",
  "logger": "MyApp.AuthService",
  "exception": null,
  "fields": {
    "userId": "u-12345",
    "ip": "203.0.113.42",
    "method": "POST",
    "path": "/api/login",
    "durationMs": 120
  }
}
Tips:
- Use ISO 8601 UTC timestamps for consistency.
- Keep messages concise; put searchable data in fields.
- Avoid logging sensitive data (PII, secrets) unless masked/encrypted.
Contextual enrichment: enriching each log with useful metadata
Contextual enrichers automatically attach environment and runtime data to every log entry. Common enrichers:
- Environment (env name, region)
- Host (hostname, instance id)
- Application (version, build)
- Thread and process ids
- User identity (if available)
- Request/HTTP context: method, path, statusCode, duration
- Custom business fields: tenantId, orderId, correlationId
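As a sketch, a custom enricher for the application fields above might look like the following. The ILogEnricher interface and LogEntry type are assumptions about the Tethys.Logging API, not confirmed signatures; adapt them to the actual enricher contract:

```csharp
// Hypothetical enricher that stamps every entry with application metadata.
// ILogEnricher and LogEntry are assumed shapes for illustration.
public class ApplicationEnricher : ILogEnricher
{
    private readonly string _version;

    public ApplicationEnricher(string version) => _version = version;

    public void Enrich(LogEntry entry)
    {
        entry.Fields["appVersion"] = _version;
        entry.Fields["host"] = Environment.MachineName;
        entry.Fields["processId"] = Environment.ProcessId;
    }
}
```

It would then be registered alongside the built-in enrichers, e.g. `options.Enrichers.Add(new ApplicationEnricher("1.4.2"));`.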
Example request middleware (ASP.NET Core):
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        var sw = Stopwatch.StartNew();

        // Add request-scoped properties; keep the scope open until after the
        // summary log so it also carries these fields.
        using (_logger.BeginScope(new Dictionary<string, object>
        {
            ["traceId"] = context.TraceIdentifier,
            ["method"] = context.Request.Method,
            ["path"] = context.Request.Path
        }))
        {
            try
            {
                await _next(context);
            }
            finally
            {
                sw.Stop();
                _logger.LogInformation(
                    "Request handled in {DurationMs} ms with status {StatusCode}",
                    sw.ElapsedMilliseconds, context.Response.StatusCode);
            }
        }
    }
}
BeginScope makes these properties available to the formatter so they appear under “fields” in each JSON event.
Correlation IDs: design and propagation
A correlation ID is a unique identifier assigned to a request (or transaction) that travels across services, enabling you to stitch together logs from multiple components.
Strategy:
- Generate a correlation ID at the edge (API gateway, load balancer, or first service receiving the request). Use UUID v4 or a shorter base62 token.
- Accept incoming correlation IDs via a header (commonly X-Request-ID or X-Correlation-ID). If present, use it; otherwise generate a new one.
- Inject the correlation ID into outgoing requests’ headers so downstream services can continue the chain.
- Log the correlation ID in every log entry (via enricher or scope).
Example middleware that ensures correlation ID:
public class CorrelationIdMiddleware
{
    private const string HeaderName = "X-Correlation-ID";
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        if (!context.Request.Headers.TryGetValue(HeaderName, out var cid)
            || string.IsNullOrWhiteSpace(cid))
        {
            cid = Guid.NewGuid().ToString("N");
            context.Request.Headers[HeaderName] = cid;
        }

        var logger = context.RequestServices
            .GetRequiredService<ILogger<CorrelationIdMiddleware>>();

        using (logger.BeginScope(new Dictionary<string, object>
        {
            ["correlationId"] = cid.ToString()
        }))
        {
            context.Response.Headers[HeaderName] = cid;
            await _next(context);
        }
    }
}
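Assuming the standard ASP.NET Core pipeline, the middleware is registered early so every downstream component (including the request logger) runs inside the correlation scope:

```csharp
var app = builder.Build();

// Order matters: the correlation ID must be assigned before request logging
// so the scope it opens covers the rest of the pipeline.
app.UseMiddleware<CorrelationIdMiddleware>();
app.UseMiddleware<RequestLoggingMiddleware>();

app.Run();
```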
Downstream HTTP clients should copy the header:
var request = new HttpRequestMessage(HttpMethod.Get, url);
request.Headers.Add("X-Correlation-ID", correlationId);
await httpClient.SendAsync(request);
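Rather than copying the header by hand at every call site, a DelegatingHandler can propagate it for all requests made through a configured HttpClient. The sketch below assumes the current correlation ID is reachable via IHttpContextAccessor:

```csharp
public class CorrelationIdHandler : DelegatingHandler
{
    private const string HeaderName = "X-Correlation-ID";
    private readonly IHttpContextAccessor _accessor;

    public CorrelationIdHandler(IHttpContextAccessor accessor) => _accessor = accessor;

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Forward the inbound correlation ID, if one exists for this request.
        var cid = _accessor.HttpContext?.Request.Headers[HeaderName].ToString();
        if (!string.IsNullOrWhiteSpace(cid) && !request.Headers.Contains(HeaderName))
        {
            request.Headers.Add(HeaderName, cid);
        }
        return base.SendAsync(request, cancellationToken);
    }
}

// Registration with IHttpClientFactory:
// builder.Services.AddHttpContextAccessor();
// builder.Services.AddTransient<CorrelationIdHandler>();
// builder.Services.AddHttpClient("downstream")
//     .AddHttpMessageHandler<CorrelationIdHandler>();
```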
Log levels and when to use them
- Debug: detailed diagnostic information for developers. Not usually enabled in production.
- Information: high-level events (startup, shutdown, user actions).
- Warning: unexpected situations that aren’t errors but may need attention.
- Error: recoverable failures; include exception details.
- Critical: catastrophic failures requiring immediate action.
Include structured fields that provide remediation context (e.g., userId, endpoint, stack trace).
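For instance, a recoverable failure logged at Error with structured fields might look like this (the payment gateway, exception type, and field names are illustrative):

```csharp
try
{
    await paymentGateway.ChargeAsync(order);
}
catch (PaymentDeclinedException ex)
{
    // Error, not Critical: this request failed, but the service is healthy.
    _logger.LogError(ex,
        "Payment declined for order {OrderId} (user {UserId})",
        order.Id, order.UserId);
}
```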
Working with downstream log stores and observability tools
- Elastic Stack: use Filebeat or Logstash to parse JSON and map fields.
- Grafana Loki: push JSON lines; reserve labels for low-cardinality fields and keep high-cardinality values in the log body.
- Datadog/Seq/Splunk: ingest JSON directly; map fields to attributes for dashboards and monitors.
Best practices:
- Keep high-cardinality fields out of labels (in Loki) or indexed fields to avoid performance issues.
- Standardize field names (snake_case or lowerCamelCase across services).
- Version your log schema when adding/removing fields.
Performance considerations
- Avoid allocating large objects inside hot logging paths. Use message templates and structured parameters rather than string concatenation.
- Use sampling for noisy debug logs.
- Buffer writes to disk or network sinks. Configure batching to reduce overhead.
- Be mindful of synchronous I/O in logging sinks—prefer async or background workers.
Example of efficient logging:
_logger.LogInformation("Order processed {@OrderSummary}", orderSummary);
The serializer will expand orderSummary into fields rather than preformatting a big string.
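On hot paths, the LoggerMessage pattern from Microsoft.Extensions.Logging goes further by caching the parsed template and skipping argument boxing when the level is disabled. This is standard .NET (the source-generator form requires .NET 6+) and works with any provider:

```csharp
public static partial class Log
{
    // Source-generated, allocation-free logging method.
    [LoggerMessage(Level = LogLevel.Information,
        Message = "Order {OrderId} processed in {DurationMs} ms")]
    public static partial void OrderProcessed(
        ILogger logger, string orderId, long durationMs);
}

// Usage: Log.OrderProcessed(_logger, order.Id, sw.ElapsedMilliseconds);
```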
Security and privacy
- Mask or redact sensitive fields (SSNs, credit card numbers, passwords) before logging.
- Use access controls on log storage.
- Consider field-level encryption for highly sensitive attributes.
- Retention policies: keep logs only as long as needed for compliance and debugging.
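A simple redaction step can be sketched as a helper applied to the fields dictionary before the entry is written (the field names and masking rule here are illustrative, not a complete PII policy):

```csharp
public static class LogRedaction
{
    // Field names considered sensitive in this example.
    private static readonly HashSet<string> Sensitive =
        new(StringComparer.OrdinalIgnoreCase) { "password", "ssn", "cardNumber" };

    public static IDictionary<string, object> Redact(IDictionary<string, object> fields)
    {
        foreach (var key in fields.Keys.ToList())
        {
            if (Sensitive.Contains(key))
            {
                fields[key] = "***REDACTED***";
            }
        }
        return fields;
    }
}
```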
Example: Putting it all together
- Configure Tethys.Logging with JsonLogFormatter and console/file sinks.
- Add enrichers for environment, host, and application version.
- Add CorrelationIdMiddleware and RequestLoggingMiddleware.
- Ensure outgoing HTTP clients propagate X-Correlation-ID.
- Send logs to your centralized store and build dashboards on correlationId and request duration.
Checklist for adoption
- JSON formatter enabled
- Correlation ID generated and propagated
- Request-scoped fields (method, path, statusCode, duration)
- Standardized field names
- Sensitive data redaction
- Appropriate log levels & sampling
- Integration with log store & dashboards