Logging
What is Logging? #
Logging in Orchesty provides structured, contextual information about what happens during integration execution. Every log entry automatically includes metadata like correlation IDs, topology context, user information, and timestamps, making it easy to trace data flow and debug issues.
Key concepts:
- Structured logging: JSON-formatted logs with consistent fields
- Automatic context: Correlation IDs, topology info added automatically
- Log levels: debug, info, warning, error for different severities
- UI visibility: Logs appear in Orchesty Admin for each execution
- Pino-based: Built on high-performance Pino logger
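Conceptually, every call to the logger produces one structured JSON record. The sketch below is illustrative only, not the SDK's actual implementation; the real logger derives these fields from ProcessDto (see Automatic Context), and the `LogRecord` shape here is a simplified assumption:

```typescript
// Illustrative only: the minimal shape of a structured log record.
// The real SDK logger adds many more context fields from ProcessDto.
interface LogRecord {
    timestamp: number;
    levelName: string;
    message: string;
    correlationId?: string;
}

function buildRecord(levelName: string, message: string, correlationId?: string): LogRecord {
    // Each record serializes to a single JSON line for downstream search/filtering
    return { timestamp: Date.now(), levelName, message, correlationId };
}
```

Because every record carries the same fields, logs can be filtered and correlated mechanically rather than parsed out of free-form text.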
Why Logging Matters #
Debugging #
- Trace execution: Follow data through connectors and nodes
- Find errors: See exactly where and why failures occur
- Understand behavior: Know what your code is doing
- Reproduce issues: Replay scenarios using log context
Monitoring #
- Track performance: Measure execution times
- Detect patterns: Identify recurring issues
- Monitor health: Watch for errors and warnings
- Audit trail: Record all operations
Production Support #
- Troubleshoot user issues: Search by user or correlation ID
- Understand failures: See full context of errors
- Verify operations: Confirm actions were taken
- Track integrations: Monitor API calls and responses
How Logging Works #
Logging Architecture #
graph LR
Code[Your Code] -->|logger.info| Logger[Logger Service]
Logger -->|Enrich with context| Context[Add ProcessDto metadata]
Context --> Console[Console Output]
Context --> WorkerAPI[Worker API]
WorkerAPI --> UI[Orchesty Admin UI]
style Logger fill:#e1f5ff
style UI fill:#e8f5e8
Log Context Flow #
sequenceDiagram
participant C as Connector
participant L as Logger
participant P as ProcessDto
participant W as Worker API
participant UI as Admin UI
participant Console
C->>L: logger.info("Fetching data", dto)
L->>P: Extract context (correlation ID, etc.)
L->>L: Format structured log
L->>Console: Write to console
L->>W: Send to Worker API
W->>UI: Display in UI
Using the Logger #
Basic Logging #
import logger from '@orchesty/nodejs-sdk/lib/Logger/Logger';
import AConnector from '@orchesty/nodejs-sdk/lib/Connector/AConnector';
import ProcessDto from '@orchesty/nodejs-sdk/lib/Utils/ProcessDto';
export default class MyConnector extends AConnector {
public getName(): string {
return 'my-connector';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
// Debug logging
logger.debug('Starting data processing', dto);
// Info logging
logger.info('Fetching customer data', dto);
const customer = await this.fetchCustomer();
// Info with details
logger.info(`Customer fetched: ${customer.id}`, dto);
// Warning
if (!customer.email) {
logger.warning('Customer has no email address', dto);
}
dto.setJsonData(customer);
return dto;
}
}
Log Levels #
Debug #
For detailed diagnostic information:
logger.debug('Input data validation passed', dto);
logger.debug(`Processing ${items.length} items`, dto);
logger.debug('Cache hit for key: user-123', dto);
Info #
For general informational messages:
logger.info('API request successful', dto);
logger.info('Data transformation complete', dto);
logger.info(`Processed order ${orderId}`, dto);
Warning #
For potentially problematic situations:
logger.warning('Rate limit approaching (90% used)', dto);
logger.warning('Deprecated field used in request', dto);
logger.warning('Fallback to cached data', dto);
Error #
For error conditions:
logger.error('API request failed', dto);
logger.error('Database connection lost', dto);
logger.error(`Invalid data format: ${error.message}`, dto);
Logging with Error Objects #
try {
const result = await this.callAPI();
dto.setJsonData(result);
} catch (error) {
// Log error with stack trace
logger.error(
'Failed to call API',
dto,
error instanceof Error ? error : new Error(String(error))
);
throw error;
}
Automatic Context #
Every log entry automatically includes:
Standard Context Fields #
{
"timestamp": 1704123456789,
"service": "sdk",
"levelName": "info",
"message": "Fetching customer data",
// Topology context
"topologyId": "topo-abc-123",
"topologyName": "order-processing",
"nodeId": "node-xyz-456",
"nodeName": "get-customer",
// Tracking IDs
"correlationId": "corr-789-def",
"previousCorrelationId": "corr-456-abc",
"processId": "proc-123-456",
"parentId": "parent-789",
"sequenceId": "seq-001",
// User context
"userId": "user@example.com",
"applications": "shopify",
// Result (if present)
"resultCode": "success",
"resultMessage": "Data processed successfully"
}
Accessing Context #
The logger automatically extracts context from ProcessDto:
// Just pass dto - context is extracted automatically
logger.info('Processing data', dto);
// Context includes:
// - Correlation ID for tracing
// - Topology and node names
// - User information
// - Previous node information
Logging Patterns #
Pattern 1: Logging API Calls #
export default class ApiCallConnector extends AConnector {
public getName(): string {
return 'api-call';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const { customerId } = dto.getJsonData();
logger.info(`Fetching customer ${customerId} from API`, dto);
try {
const requestDto = new RequestDto(
`https://api.example.com/customers/${customerId}`,
HttpMethods.GET,
dto
);
// API calls are automatically logged by CurlSender
const response = await this.getSender().send(requestDto);
logger.info(
`Successfully fetched customer ${customerId}`,
dto
);
dto.setData(response.getBody());
return dto;
} catch (error) {
logger.error(
`Failed to fetch customer ${customerId}`,
dto,
error instanceof Error ? error : new Error(String(error))
);
throw error;
}
}
}
Pattern 2: Logging Progress #
export default class BatchProcessorConnector extends AConnector {
public getName(): string {
return 'batch-processor';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const items = dto.getJsonData().items;
logger.info(`Starting batch processing of ${items.length} items`, dto);
let processed = 0;
let failed = 0;
for (const item of items) {
try {
await this.processItem(item);
processed++;
// Log progress every 100 items
if (processed % 100 === 0) {
logger.info(
`Progress: ${processed}/${items.length} items processed`,
dto
);
}
} catch (error) {
failed++;
logger.warning(
`Failed to process item ${item.id}: ${error instanceof Error ? error.message : String(error)}`,
dto
);
}
}
logger.info(
`Batch complete: ${processed} processed, ${failed} failed`,
dto
);
dto.setJsonData({ processed, failed });
return dto;
}
}
Pattern 3: Conditional Logging #
export default class ConditionalLogConnector extends AConnector {
private readonly DEBUG = process.env.DEBUG === 'true';
public getName(): string {
return 'conditional-log';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const input = dto.getJsonData();
if (this.DEBUG) {
logger.debug(`Raw input: ${JSON.stringify(input)}`, dto);
}
logger.info('Processing data', dto);
const result = await this.processData(input);
if (this.DEBUG) {
logger.debug(`Raw output: ${JSON.stringify(result)}`, dto);
}
logger.info('Processing complete', dto);
dto.setJsonData(result);
return dto;
}
}
Pattern 4: Performance Logging #
export default class PerformanceLogConnector extends AConnector {
public getName(): string {
return 'performance-log';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const startTime = Date.now();
logger.info('Starting expensive operation', dto);
const result = await this.expensiveOperation();
const duration = Date.now() - startTime;
if (duration > 5000) {
logger.warning(
`Slow operation completed in ${duration}ms (threshold: 5000ms)`,
dto
);
} else {
logger.info(
`Operation completed in ${duration}ms`,
dto
);
}
dto.setJsonData(result);
return dto;
}
}
Pattern 5: Structured Data Logging #
export default class StructuredLogConnector extends AConnector {
public getName(): string {
return 'structured-log';
}
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const order = dto.getJsonData();
// Log structured information
logger.info(
`Order received: ID=${order.id}, Total=$${order.total}, Items=${order.items.length}`,
dto
);
// Process order
const result = await this.processOrder(order);
logger.info(
`Order processed: ID=${order.id}, Status=${result.status}, ProcessTime=${result.duration}ms`,
dto
);
dto.setJsonData(result);
return dto;
}
}
HTTP Request Logging #
HTTP requests made via CurlSender are automatically logged:
// When you call:
const response = await this.getSender().send(requestDto);
// Automatically logs:
// - Request method, URL, headers, body
// - Response status, headers, body
// - Request duration
// - Success (status < 300) or error (status >= 300)
Example HTTP Log Output #
{
"levelName": "info",
"message": "Request success. Method: GET, Url: https://api.example.com/users, Response: Code: 200, Reason: OK",
"correlationId": "corr-123",
"reqBody": {},
"resBody": {"users": [...]},
"resHeaders": {"content-type": "application/json"}
}
Debugging with Logs #
Tracing Data Flow #
Use correlation IDs to follow messages through topologies:
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
const correlationId = dto.getHeader('correlation-id');
logger.info(`[${correlationId}] Processing started`, dto);
// Do work...
logger.info(`[${correlationId}] Processing complete`, dto);
return dto;
}
Then search logs in Orchesty Admin by correlation ID to see the entire flow.
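When debugging locally against exported log lines, the same correlation-ID search can be done with a small filter. This helper is a hypothetical sketch, not an SDK API; it assumes each exported line parses to a JSON object with the `correlationId` field shown in Automatic Context:

```typescript
// Local-debugging sketch: filter parsed JSON log lines by correlation ID.
// LogLine is a simplified assumption of the exported record shape.
interface LogLine {
    correlationId?: string;
    message: string;
}

function filterByCorrelation(lines: LogLine[], correlationId: string): LogLine[] {
    // Keeps only the entries belonging to one message's journey,
    // preserving their original order
    return lines.filter((line) => line.correlationId === correlationId);
}
```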
Finding Errors #
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
try {
const result = await this.complexOperation();
dto.setJsonData(result);
return dto;
} catch (error) {
// Log detailed error information
logger.error(
`Complex operation failed at step: ${this.getCurrentStep()}`,
dto,
error instanceof Error ? error : new Error(String(error))
);
// Log additional context
logger.error(
`Error context: ${JSON.stringify(this.getErrorContext())}`,
dto
);
throw error;
}
}
Reproducing Issues #
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
// Log input for reproduction
logger.debug(
`Input data: ${JSON.stringify(dto.getJsonData())}`,
dto
);
// Log configuration
logger.debug(
`Config: ${JSON.stringify(this.getConfig())}`,
dto
);
// Process...
// Log output
logger.debug(
`Output data: ${JSON.stringify(dto.getJsonData())}`,
dto
);
return dto;
}
UI Logs #
Viewing Logs in Orchesty Admin #
Logs appear in two places:
- Process Execution View: See logs for specific execution
- Node Detail View: See logs for specific node
Log Filtering #
In Orchesty Admin, you can filter logs by:
- Level: Show only errors, warnings, etc.
- Time range: Show logs from specific period
- Node: Show logs from specific node
- Correlation ID: Show all logs for one message
- User: Show logs for specific user
UI-Specific Logs #
Send logs specifically for UI visibility:
// Third parameter: isForUi = true
logger.info('User-facing message', dto, true);
This ensures the log appears prominently in the UI.
Best Practices #
1. Log at Appropriate Levels #
// Good - appropriate levels
logger.debug('Cache lookup result: hit', dto);
logger.info('Customer order created', dto);
logger.warning('API rate limit approaching', dto);
logger.error('Database connection failed', dto);
// Bad - wrong levels
logger.error('Customer order created', dto); // Not an error!
logger.debug('Database connection failed', dto); // Too quiet for errors!
2. Include Relevant Context #
// Good - includes IDs and relevant data
logger.info(`Processing order ${orderId} for customer ${customerId}`, dto);
// Bad - vague
logger.info('Processing order', dto);
3. Don't Log Sensitive Data #
// Bad - logs credentials!
logger.info(`API Key: ${apiKey}`, dto);
// Good - indicate auth used without exposing secrets
logger.info('Using API key authentication', dto);
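When a payload must be logged but may contain credentials, the sensitive fields can be masked first. The helper below is a minimal sketch, not part of the SDK; the field list is an assumption and should be extended to match your own credential names:

```typescript
// Hypothetical helper: masks well-known secret fields before logging.
// SENSITIVE_KEYS is an assumption - adjust it to your payloads.
const SENSITIVE_KEYS = ['apiKey', 'password', 'token', 'authorization'];

function redactSecrets(payload: Record<string, unknown>): Record<string, unknown> {
    const redacted: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(payload)) {
        // Replace secret values with a fixed mask, copy everything else as-is
        redacted[key] = SENSITIVE_KEYS.includes(key) ? '***' : value;
    }
    return redacted;
}

// Usage sketch: logger.debug(`Request payload: ${JSON.stringify(redactSecrets(body))}`, dto);
```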
4. Use Consistent Format #
// Good - consistent format
logger.info(`Customer ${id}: status=${status}, total=${total}`, dto);
logger.info(`Order ${id}: status=${status}, items=${count}`, dto);
// Bad - inconsistent
logger.info(`Customer id is ${id} and status: ${status}`, dto);
logger.info(`Order: ${id} (${status}) - ${count} items`, dto);
5. Log Decisions #
if (order.total > 1000) {
logger.info(`Order ${order.id} flagged as high-value (${order.total})`, dto);
dto.setForceFollowers('high-value-handler');
} else {
logger.info(`Order ${order.id} routed to standard handler`, dto);
dto.setForceFollowers('standard-handler');
}
6. Don't Over-Log #
// Bad - too verbose
for (const item of items) {
logger.debug(`Processing item ${item.id}`, dto);
logger.debug(`Item ${item.id} has ${item.count} units`, dto);
logger.debug(`Item ${item.id} costs ${item.price}`, dto);
await this.processItem(item);
logger.debug(`Item ${item.id} processed`, dto);
}
// Good - summary logging
logger.info(`Processing ${items.length} items`, dto);
await Promise.all(items.map(item => this.processItem(item)));
logger.info(`All items processed successfully`, dto);
7. Log Entry and Exit #
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
logger.info('Connector started', dto);
try {
const result = await this.doWork(dto);
logger.info('Connector completed successfully', dto);
return result;
} catch (error) {
logger.error('Connector failed', dto, error instanceof Error ? error : new Error(String(error)));
throw error;
}
}
Common Mistakes #
Mistake 1: Not Passing ProcessDto #
// Bad - loses context!
logger.info('Processing data');
// Good - includes context
logger.info('Processing data', dto);
Mistake 2: Logging Too Much #
// Bad - every loop iteration
items.forEach(item => {
logger.info(`Processing item ${item.id}`, dto);
});
// Good - summary
logger.info(`Processing ${items.length} items`, dto);
Mistake 3: Not Logging Errors #
// Bad - silent failure
try {
await this.doSomething();
} catch (error) {
// Error not logged!
}
// Good - log the error
try {
await this.doSomething();
} catch (error) {
logger.error('Operation failed', dto, error instanceof Error ? error : new Error(String(error)));
throw error;
}
Mistake 4: String Concatenation #
// Bad - expensive even when not logged
logger.debug('Data: ' + JSON.stringify(largeObject), dto);
// Good - only stringifies if debug enabled
if (process.env.DEBUG) {
logger.debug(`Data: ${JSON.stringify(largeObject)}`, dto);
}
Performance Considerations #
Avoid Expensive Operations #
// Bad - always executes expensive operation
logger.debug(`Complex calc: ${this.complexCalculation()}`, dto);
// Good - calculate only if needed
if (process.env.DEBUG === 'true') {
logger.debug(`Complex calc: ${this.complexCalculation()}`, dto);
}
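Another way to avoid paying the cost unconditionally is to pass a message factory that is only evaluated when debug output is enabled. This wrapper is a hypothetical sketch, not an SDK API; `enabled` would typically come from an environment flag:

```typescript
// Hypothetical wrapper: the message factory runs only when debugging is
// enabled, so expensive JSON.stringify calls or calculations are skipped
// entirely in production.
function debugLazy(
    enabled: boolean,
    makeMessage: () => string,
    log: (message: string) => void,
): void {
    if (enabled) {
        log(makeMessage());
    }
}

// Usage sketch:
// debugLazy(process.env.DEBUG === 'true',
//     () => `Complex calc: ${complexCalculation()}`,
//     (msg) => logger.debug(msg, dto));
```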
Use Debug Level Appropriately #
Debug logs may be disabled in production for performance.
// Use info for important messages (always logged)
logger.info('Order processed', dto);
// Use debug for detailed diagnostics (may be disabled)
logger.debug('Detailed processing steps...', dto);
Related Concepts #
- Data Flow - Understanding ProcessDto context
- Error Handling - Logging errors effectively
- Connector - Where to add logging
- Retry Policy - Logging retry attempts
- Debugging - General debugging strategies
API References #
- Logger - Logger service
- ProcessDto - Data transfer object with context
- CurlSender - Automatic HTTP logging
Next Steps #
- Review Logger source code for advanced features
- Learn about Error Handling for logging errors
- Understand Data Flow to see what context is available
- Explore Connectors to see where logging fits in your code