Log Event
Achieve true end-to-end visibility for your integrations with Nodinite Log Events. This page explains how Log Events work, the benefits of synchronous and asynchronous logging, and how to leverage JSON-based events for robust, decoupled, and future-proof solutions.
What you'll find on this page:
✅ How Log Events enable end-to-end logging across any platform
✅ Synchronous vs. asynchronous logging patterns explained
✅ JSON-based Log Events for flexible, vendor-neutral integration
✅ Best practices for decoupled, reliable event delivery
✅ Context options and repository auto-mapping
Nodinite enables End-to-End Logging using either "out of the box" Log Agents or your own custom code. You can use both synchronous and asynchronous message exchange logging patterns to fit your architecture and business needs.
Architectural examples with synchronous and asynchronous logging.
We recommend the asynchronous logging pattern for maximum reliability and scalability.
What is a Log Event?
A Nodinite Log Event is a JSON object that represents a business or technical event. When a Log Event arrives, Nodinite processes it through several stages that enable powerful search and correlation capabilities.
Delivery methods:
You can send Log Events directly to the REST-based Log API or persist them for the Nodinite Pickup Log Events Service Logging Agent to fetch asynchronously.
Asynchronous delivery benefits:
- Your logging solutions have no dependency on the Log API (reboot/update whenever you like)
- No need to implement custom retry logic
- Improved concurrency and reliability — Nodinite Pickup Log Events Service Logging Agent manages delivery and prevents overload
- Flexible, decoupled architecture for future-proof integrations
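For the decoupled pattern, a common approach is to write each Log Event as a JSON file to a folder (or other intermediate storage) that the Nodinite Pickup Log Events Service Logging Agent monitors. The sketch below illustrates the idea in Python; the folder path and file naming are assumptions for this example, and the event fields are simplified (the JSON Log Event article documents the full schema).

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# A minimal, simplified Log Event. Field names mirror this page
# (LogDateTime, EndPoint, MessageType); see the JSON Log Event
# article for the authoritative schema.
log_event = {
    "LogDateTime": datetime.now(timezone.utc).isoformat(),
    "EndPoint": "https://erp.example.com/orders",  # where the event happened
    "MessageType": "http://example.com/schemas/order/1.0#Order",
    "Body": "<Order><InvoiceId>INV-1001</InvoiceId></Order>",
    "Context": {"CorrelationId": str(uuid.uuid4())},
}

# The pickup folder is an assumption for this sketch; configure the real
# location in the Pickup Log Events Service Logging Agent settings.
pickup_folder = Path(r"C:\Nodinite\PickupFolder")
pickup_folder.mkdir(parents=True, exist_ok=True)

# Write atomically: create under a temporary name, then rename, so the
# Pickup Service never reads a half-written file.
tmp_file = pickup_folder / f"{uuid.uuid4()}.tmp"
tmp_file.write_text(json.dumps(log_event), encoding="utf-8")
tmp_file.rename(tmp_file.with_suffix(".json"))
```

Because the file system (or a queue) sits between your code and Nodinite, the Log API can be offline while you keep writing events, which is exactly the decoupling the list above describes.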
A Log Event consists of three parts:
Log Event Details (Mandatory) – Generic information about the event – WHEN, HOW
- `LogDateTime` – When the event occurred
- `EndPoint` – Where the event happened
- `MessageType` – What format/schema the message follows
- Event direction, status, and processing details
Payload/Body (Optional) – The actual business transaction data – WHAT
- Usually XML, JSON, flat files (EDI/X12), or zip files
- Format identified by the Message Type
- The Logging Service extracts Search Field values from this payload
Context Properties (Optional) – Key-value pairs for additional metadata – WHAT
- Business data: `InvoiceId`, `CustomerNumber`, `OrderNumber`
- Technical data: `CorrelationId`, `SessionId`, `RetryCount`
- Search Fields can extract values from context properties too
- Use Context Options to control how Nodinite processes events
The architectural layout of a Log Event
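To make the three parts concrete, here is a simplified Log Event sketched as a Python dictionary. The field names mirror the terms used on this page and all values are invented; the JSON Log Event article documents the exact field names and complete structure.

```python
# A simplified Log Event showing all three parts. Values are illustrative.
log_event = {
    # 1. Log Event Details (mandatory) - WHEN, HOW
    "LogDateTime": "2024-01-15T09:30:00Z",
    "EndPoint": "https://erp.example.com/orders",
    "MessageType": "http://example.com/schemas/order/1.0#Order",

    # 2. Payload/Body (optional) - WHAT, the business transaction itself
    "Body": (
        "<Order xmlns='http://example.com/schemas/order/1.0'>"
        "<InvoiceId>INV-1001</InvoiceId>"
        "<CustomerNumber>C-7</CustomerNumber>"
        "</Order>"
    ),

    # 3. Context Properties (optional) - key-value metadata
    "Context": {
        "InvoiceId": "INV-1001",
        "CorrelationId": "0f8fad5b-d9cb-469f-a165-70867728950e",
        "RetryCount": "0",
    },
}
```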
What Happens After You Log an Event?
When you send a Log Event to Nodinite, a powerful processing pipeline activates to enable search and correlation:
1. Event Received (Unprocessed)
- Log Event arrives via Log API (REST) or Pickup Service (asynchronous)
- Stored in Log Database with status "Unprocessed"
- Awaits processing by the Logging Service
2. Message Type Determination
- The Logging Service determines the Message Type:
  - From the `MessageType` field in the Log Event (if provided)
  - Extracted from the XML payload (namespace + root node; see the sketch after this list)
  - Extracted from the JSON schema property
  - Extracted from the EDIFACT UNA field
  - Overridden by Context Options (e.g., `ExtendedProperties/1.0#MessageTypeName`)
- The Message Type identifies what format/schema the message follows
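As a rough illustration of the XML case, the sketch below derives a name from the payload's namespace and root node. The `namespace#rootnode` naming used here is an assumption for this example; see Message Types Overview for the conventions Nodinite actually applies.

```python
import xml.etree.ElementTree as ET

def message_type_from_xml(payload: str) -> str:
    """Derive a Message Type name from an XML payload's namespace and
    root node. A sketch only: the Logging Service performs the real
    determination server-side."""
    root = ET.fromstring(payload)
    # ElementTree encodes a namespaced tag as '{namespace}localname'.
    if root.tag.startswith("{"):
        namespace, local = root.tag[1:].split("}", 1)
        return f"{namespace}#{local}"
    return root.tag  # no namespace: just the root node name

payload = "<Order xmlns='http://example.com/schemas/order/1.0'/>"
print(message_type_from_xml(payload))
# -> http://example.com/schemas/order/1.0#Order
```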
3. Search Field Extraction
- The Logging Service finds all Search Field Expressions assigned to this Message Type
- Each expression extracts values from payload or context (see the sketch after this list):
  - XPath expressions extract from XML payloads
  - JSON Path expressions extract from JSON payloads
  - RegEx expressions extract from EDI/text formats
  - Message Context Key expressions extract from context properties
  - Formula expressions combine and transform values
- Extracted values become searchable in Log Views
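The extraction itself happens server-side in the Logging Service, but the sketch below mimics three of the expression styles listed above to show what kind of values become searchable. All payloads, field names, and expressions here are invented for the example.

```python
import json
import xml.etree.ElementTree as ET

xml_payload = ("<Order xmlns='http://example.com/schemas/order/1.0'>"
               "<InvoiceId>INV-1001</InvoiceId></Order>")
json_payload = '{"order": {"customerNumber": "C-7"}}'
context = {"CorrelationId": "0f8fad5b-d9cb-469f-a165-70867728950e"}

ns = {"o": "http://example.com/schemas/order/1.0"}
order_root = ET.fromstring(xml_payload)

search_fields = {
    # XPath-style expression against the XML payload
    "Invoice Number": order_root.findtext("o:InvoiceId", namespaces=ns),
    # JSON Path-style lookup against the JSON payload
    "Customer Number": json.loads(json_payload)["order"]["customerNumber"],
    # Message Context Key expression against the context properties
    "Correlation Id": context.get("CorrelationId"),
}
print(search_fields)
# {'Invoice Number': 'INV-1001', 'Customer Number': 'C-7',
#  'Correlation Id': '0f8fad5b-d9cb-469f-a165-70867728950e'}
```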
4. Event Processed
- Status changes from "Unprocessed" to "Processed"
- Message Type and Search Field values are now indexed
- Event appears in Log Views with full search capabilities
- Business users can find it using familiar terms (Order Number, Customer ID, etc.)
The Magic: This pipeline transforms raw integration events into business-friendly, searchable data—automatically. No custom queries, no database knowledge required!
Example flow: a Log Event arrives (Unprocessed) → the Message Type is determined → Search Field values are extracted → the event becomes Processed and searchable in Log Views.
Learn More: See Message Types Overview to understand the binding between formats and extraction, and Search Fields to learn about expression plugins.
Models
The Log API includes details from the following Model entities:
- Log Event – JSON Object
How do I send Log Events using REST?
Follow the 'Log using REST' article to learn more about sending Log Events via REST.
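As a minimal sketch, the snippet below posts a simplified Log Event with Python's `requests` library. The URL is a placeholder (the real address depends on your installation), and the fields are simplified; the 'Log using REST' article describes the authoritative request format.

```python
import requests  # third-party: pip install requests

log_event = {
    "LogDateTime": "2024-01-15T09:30:00Z",
    "EndPoint": "https://erp.example.com/orders",
    "MessageType": "http://example.com/schemas/order/1.0#Order",
    "Body": "<Order xmlns='http://example.com/schemas/order/1.0'/>",
    "Context": {"OrderNumber": "PO-2024-001"},
}

# Placeholder URL: substitute your environment's Log API address.
LOG_API_URL = "https://nodinite.example.com/LogAPI/api/LogEvent"

response = requests.post(LOG_API_URL, json=log_event, timeout=10)
response.raise_for_status()  # with synchronous delivery, retries are your job
```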
Repository
Log Events can "auto-populate" the Repository Model with special content. This drastically reduces administration and automatically transfers know-how from developers to business and IT operations teams.
Important
You must enable the 'AllowRepositoryAutomapping' System Parameter for the Logging Service to honor the Repository Model-related data provided in your Log Events.
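As a hedged illustration of the mechanism, Context Options travel as ordinary context properties with reserved key names. The only reserved key shown below, `ExtendedProperties/1.0#MessageTypeName`, is the override mentioned earlier on this page; the Repository auto-mapping key names are documented in the Context Options article and are deliberately not guessed at here.

```python
# A Log Event that relies on a Context Option rather than the MessageType
# field. 'ExtendedProperties/1.0#MessageTypeName' is the override named on
# this page; look up the Repository auto-mapping keys in the Context
# Options article before relying on them.
log_event = {
    "LogDateTime": "2024-01-15T09:30:00Z",
    "EndPoint": "https://erp.example.com/orders",
    "Context": {
        "ExtendedProperties/1.0#MessageTypeName":
            "http://example.com/schemas/order/1.0#Order",
    },
}
```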
Next Steps
Start logging:
- Log API – REST-based API for sending Log Events
- JSON Log Event – Understand the Log Event structure and fields
- Context Options – Control Message Type, Search Field extraction, and Repository binding
- Asynchronous Logging – Best practice for reliable, decoupled logging
Understanding the Processing Pipeline
- Message Types Overview – Learn how Message Types identify formats and enable extraction
- Search Fields – Understand how Search Field Expressions extract searchable values
- Logging Service – The engine that processes Log Events and extracts Search Field values
- Log Databases – Where processed events and extracted values are stored
Related Topics
- Repository Model – Auto-populate with Context Options
- Log Views – Search and filter processed events
- Log Agents – Out-of-the-box logging for BizTalk, Logic Apps, Mule, etc.
- Endpoint Types – Types of integration endpoints
- Endpoint Directions – Direction semantics for events