Production-Grade Logging in Node.js with Winston
When you're building a side project, console.log is perfectly fine. But the moment you deploy a distributed system, with multiple instances of an app running behind a load balancer, console.log becomes an absolute liability.
In production, logs are not text files for humans to read. They are structured telemetry data meant to be indexed, searched, and aggregated by machines. I was re-exploring Winston today, and I want to break down how to actually set this up for a real-world Node.js environment.
The Architecture of Winston
Winston has survived this long in the Node ecosystem because it fundamentally gets the abstraction right. It separates what you are logging from where it is going. It does this using three concepts:
Levels - The severity of the log (e.g., error, info, debug).
Formats - The shape of the log (e.g., raw text vs. structured JSON).
Transports - The destination of the log (e.g., the console, a file, or an external API).
By decoupling these, you can say, "Send all info logs to the console as text, but send all error logs to an external service as JSON."
1. Setting Up the Baseline
First, install the package: npm install winston.
When setting up your logger, the non-negotiable rule for production is JSON formatting. If your logs are JSON, your logging platform can index every field, so you can filter, search, and aggregate on any of them instead of grepping raw text.
const winston = require('winston');
const { combine, timestamp, json, errors } = winston.format;

const logger = winston.createLogger({
  level: 'info',
  format: combine(
    errors({ stack: true }),
    timestamp(),
    json()
  ),
  defaultMeta: { service: 'payment-service', env: process.env.NODE_ENV },
  transports: [
    new winston.transports.Console()
  ]
});
By default, if you pass a JS Error object to a logger, it often just prints the message ("Database connection failed"). In an outage, you need the stack trace to find the exact file and line number. This format rule ensures the stack trace is preserved in the JSON output.
2. The Real Power: Context via Child Loggers
A log that says logger.info("Fetching user cart") is useless if you have 1,000 requests per second. Which user? Which request?
You need context. But passing userId and requestId to every single logger.info() call throughout your codebase is messy and prone to errors. Instead, you use Child Loggers.
// Middleware: attach a request-scoped child logger to every request
app.use((req, res, next) => {
  req.logger = logger.child({
    requestId: req.headers['x-request-id'] || generateId(),
    userId: req.user?.id || 'anonymous'
  });
  next();
});

// Anything logged through req.logger now carries requestId and userId
function processPayment(req) {
  req.logger.info("Starting payment processing");
}
3. Quick Profiling
If you don't have a heavy Application Performance Monitoring (APM) tool set up yet, you can measure how long bottlenecks take directly through your logs.
const profiler = req.logger.startTimer();
await database.query('...');
profiler.done({ message: 'Database query executed', queryName: 'fetch_user_cart' });
Because your logs are in JSON, this outputs a key like "durationMs": 342. You can then go to your logging platform and write a query like durationMs > 500 to instantly find all slow database queries.
4. Shipping Logs Off-Server
If your application is running in a container, the local filesystem is ephemeral. If the app crashes and the container restarts, any .log files stored locally are permanently deleted—along with the exact error logs you need to figure out why it crashed.
const { Logtail } = require("@logtail/node");
const { LogtailTransport } = require("@logtail/winston");
const logtail = new Logtail(process.env.LOGTAIL_TOKEN);
logger.add(new LogtailTransport(logtail));
The logger just starts quietly streaming the data over the network in the background.