
Log Event

Achieve true end-to-end visibility for your integrations with Nodinite Log Events. This page explains how Log Events work, the benefits of synchronous and asynchronous logging, and how to leverage JSON-based events for robust, decoupled, and future-proof solutions.

What you'll find on this page:

✅ How Log Events enable end-to-end logging across any platform
✅ Synchronous vs. asynchronous logging patterns explained
✅ JSON-based Log Events for flexible, vendor-neutral integration
✅ Best practices for decoupled, reliable event delivery
✅ Context options and repository auto-mapping

Nodinite enables End-to-End Logging using either "out of the box" Log Agents or your own custom code. You can use both synchronous and asynchronous message exchange logging patterns to fit your architecture and business needs.

```mermaid
graph LR
  subgraph "Log Monitoring Agent Configuration"
    roMessageBroker[fal:fa-archive Nodinite Log Agents<br/>BizTalk, Mule, Logic Apps, ...]
    roLogSink("fal:fa-bolt Custom Logging Solution") --> roId1["fal:fa-list Queue<br/>fal:fa-folder Folder<br/>..."]
  end
  subgraph "Web Server"
    roLogAPI(fal:fa-cloud-arrow-down Log API)
    roPS(fal:fa-truck-pickup Pickup Service) --> roLogAPI
    roId1 -.Log Event.-x roPS
    roLogSink --> |Log Event| roLogAPI
    roMessageBroker --> |Log Event| roLogAPI
  end
```

Architectural examples with synchronous and asynchronous logging.

We recommend the asynchronous logging pattern for maximum reliability and scalability.

What is a Log Event?

A Nodinite Log Event is a JSON object that represents a business or technical event. When a Log Event arrives, Nodinite processes it through several stages that enable powerful search and correlation capabilities.

Delivery methods:

You can send Log Events directly to the REST-based Log API or persist them for the Nodinite Pickup Log Events Service Logging Agent to fetch asynchronously.

Asynchronous delivery benefits:

  • Your logging solutions have no dependency on the Log API (reboot/update whenever you like)
  • No need to implement custom retry logic
  • Improved concurrency and reliability — the Nodinite Pickup Log Events Service Logging Agent manages delivery and prevents overload
  • Flexible, decoupled architecture for future-proof integrations
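The asynchronous pattern can be sketched as follows: instead of calling the Log API directly, the application serializes each Log Event as JSON to a pickup location (a folder here; a queue works the same way), where the Nodinite Pickup Log Events Service Logging Agent later collects and delivers it. The folder path and event field names below are illustrative assumptions, not the exact configuration.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def persist_log_event(event: dict, folder: Path = Path("pickup")) -> Path:
    """Write a Log Event to disk so the Pickup Service can deliver it later.

    The application has no dependency on the Log API being online, and no
    custom retry logic is needed - the Pickup Service handles delivery.
    """
    folder.mkdir(parents=True, exist_ok=True)
    # One file per event; a unique name avoids collisions under concurrency.
    target = folder / f"{uuid.uuid4()}.json"
    target.write_text(json.dumps(event), encoding="utf-8")
    return target

# Minimal illustrative event (see the JSON Log Event article for the schema).
event = {
    "LogDateTime": datetime.now(timezone.utc).isoformat(),
    "EndPoint": "https://example.test/orders",  # assumed example endpoint
    "MessageType": "Order/1.0",
}
written = persist_log_event(event)
```

Because each event is its own file, the writer and the Pickup Service never contend for the same resource, which is what makes this pattern scale.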

A Log Event consists of three parts:

  1. Log Event Details (Mandatory) – Generic information about the event – WHEN, HOW

    • LogDateTime – When the event occurred
    • EndPoint – Where the event happened
    • MessageType – What format/schema the message follows
    • Event direction, status, and processing details
  2. Payload/Body (Optional) – The actual business transaction data – WHAT

  3. Context Properties (Optional) – Key-value pairs for additional metadata – WHAT

    • Business data: InvoiceId, CustomerNumber, OrderNumber
    • Technical data: CorrelationId, SessionId, RetryCount
    • Search Fields can extract values from context properties too
    • Use Context Options to control how Nodinite processes events

```mermaid
graph TD
  subgraph "Event"
    subgraph "Details"
      roED[fal:fa-bolt Event Details<br/>LogDateTime = 2018-05-03 13:37:00+02:00<br/>EndPoint = https://api.nodinite.com/...<br/>MessageType=Invoice<br/>...]
    end
    subgraph "Payload"
      ro[fal:fa-envelope Message<br/>base64EncodedMessage]
    end
    subgraph "Context Properties"
      roKey[fal:fa-key Key Values<br/>InvoiceNo = 123<br/>CorrelationId=456<br/>...]
    end
  end
```

The architectural layout of a Log Event
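The three-part layout in the figure can be expressed as a single JSON object. The `Body` and `Context` property names below are illustrative sketches of the payload and context-property parts; the exact property names are defined in the JSON Log Event article.

```python
import base64
import json

invoice_xml = (
    '<Invoice xmlns="http://example.test/invoice/1.0">'
    "<InvoiceNo>123</InvoiceNo></Invoice>"
)

log_event = {
    # 1. Log Event Details (mandatory) - WHEN, HOW
    "LogDateTime": "2018-05-03T13:37:00+02:00",
    "EndPoint": "https://api.nodinite.com/...",
    "MessageType": "Invoice",
    # 2. Payload/Body (optional) - WHAT, base64 encoded as in the figure
    "Body": base64.b64encode(invoice_xml.encode("utf-8")).decode("ascii"),
    # 3. Context Properties (optional) - WHAT, key-value metadata
    "Context": {
        "InvoiceNo": "123",
        "CorrelationId": "456",
    },
}
serialized = json.dumps(log_event)
```

Base64-encoding the payload keeps the event valid JSON regardless of the business message's own format (XML, EDIFACT, binary, and so on).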

What Happens After You Log an Event?

When you send a Log Event to Nodinite, a powerful processing pipeline activates to enable search and correlation:

1. Event Received (Unprocessed)

2. Message Type Determination

  • The Logging Service determines the Message Type:
    • From the MessageType field in the Log Event (if provided)
    • Extracted from XML payload (namespace + root node)
    • Extracted from JSON schema property
    • Extracted from EDIFACT UNA field
    • Overridden by Context Options (e.g., ExtendedProperties/1.0#MessageTypeName)
  • The Message Type identifies what format/schema the message follows
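For XML payloads without an explicit `MessageType` field, the "namespace + root node" rule above can be illustrated like this. The `namespace#RootNode` join format is an assumption made for the sketch, not the documented naming convention.

```python
import xml.etree.ElementTree as ET

def message_type_from_xml(payload: str) -> str:
    """Derive a Message Type candidate from an XML payload.

    ElementTree exposes a namespaced root tag as '{namespace}Local';
    here we join the two parts as 'namespace#Local' for illustration.
    """
    root = ET.fromstring(payload)
    if root.tag.startswith("{"):
        namespace, local = root.tag[1:].split("}", 1)
        return f"{namespace}#{local}"
    return root.tag  # no namespace: the root node name alone

mt = message_type_from_xml('<Invoice xmlns="http://example.test/invoice/1.0"/>')
# mt combines the namespace and the root node name
```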

3. Search Field Extraction

  • The Logging Service finds all Search Field Expressions assigned to this Message Type
  • Each expression extracts values from payload or context:
    • XPath expressions extract from XML payloads
    • JSON Path expressions extract from JSON payloads
    • RegEx expressions extract from EDI/text formats
    • Message Context Key expressions extract from context properties
    • Formula expressions combine and transform values
  • Extracted values become searchable in Log Views

4. Event Processed

  • Status changes from "Unprocessed" to "Processed"
  • Message Type and Search Field values are now indexed
  • Event appears in Log Views with full search capabilities
  • Business users can find it using familiar terms (Order Number, Customer ID, etc.)

The Magic: This pipeline transforms raw integration events into business-friendly, searchable data—automatically. No custom queries, no database knowledge required!

Example flow:

```mermaid
graph LR
  A[XML Invoice arrives] --> B[Message Type: Invoice/1.0]
  B --> C[Extract InvoiceNumber via XPath]
  B --> D[Extract CustomerID via XPath]
  B --> E[Extract Amount via XPath]
  C --> F[Searchable in Log Views]
  D --> F
  E --> F
  F --> G[Business users search:<br/>'InvoiceNumber = 12345']
```
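The XPath steps in this flow can be sketched as follows: each Search Field Expression pairs a field name with an XPath, and the extracted values become the searchable data. The field names, namespace, and paths here are illustrative, not real expressions from a Nodinite configuration.

```python
import xml.etree.ElementTree as ET

invoice = """<Invoice xmlns="http://example.test/invoice/1.0">
  <InvoiceNumber>12345</InvoiceNumber>
  <CustomerID>C-42</CustomerID>
  <Amount>199.50</Amount>
</Invoice>"""

NS = {"inv": "http://example.test/invoice/1.0"}

# Each entry mimics a Search Field Expression: name -> XPath (illustrative).
expressions = {
    "InvoiceNumber": "inv:InvoiceNumber",
    "CustomerID": "inv:CustomerID",
    "Amount": "inv:Amount",
}

root = ET.fromstring(invoice)
# Evaluate every expression against the payload; the results are what a
# business user would later search on in a Log View.
search_fields = {
    name: root.findtext(path, namespaces=NS)
    for name, path in expressions.items()
}
```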

Learn More: See Message Types Overview to understand the binding between formats and extraction, and Search Fields to learn about expression plugins.

Models

The Log API includes details from the related Model entities.

How do I send Log Events using REST?

Follow the 'Log using REST' article to learn more about sending Log Events via REST.
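A minimal sketch of the direct (synchronous) pattern: POST the Log Event as JSON to the Log API. The URL is a hypothetical placeholder; the exact endpoint path, authentication, and body schema are described in the 'Log using REST' article.

```python
import json
import urllib.request

def send_log_event(event: dict, api_url: str) -> int:
    """POST a single Log Event as JSON to the Log API (synchronous pattern).

    With direct calls the sender must handle retries itself when the Log API
    is unavailable - one reason the asynchronous pattern is recommended.
    """
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        api_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # raises on HTTP errors
        return resp.status

event = {
    "LogDateTime": "2024-01-01T00:00:00Z",
    "EndPoint": "https://example.test/orders",  # assumed example endpoint
    "MessageType": "Order/1.0",
}
# send_log_event(event, "https://nodinite.example.test/LogAPI/...")  # hypothetical URL
```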

Repository

Log Events can "auto-populate" the Repository Model with special content. This drastically reduces administration and automatically transfers know-how from developers to business and IT operations teams.

Important

You must enable the 'AllowRepositoryAutomapping' System Parameter for the Logging Service to honor the provided Repository Model related data.


Next Steps

Start logging:

  1. Log API – REST-based API for sending Log Events
  2. JSON Log Event – Understand the Log Event structure and fields
  3. Context Options – Control Message Type, Search Field extraction, and Repository binding
  4. Asynchronous Logging – Best practice for reliable, decoupled logging

Understanding the Processing Pipeline

  • Message Types Overview – Learn how Message Types identify formats and enable extraction
  • Search Fields – Understand how Search Field Expressions extract searchable values
  • Logging Service – The engine that processes Log Events and extracts Search Field values
  • Log Databases – Where processed events and extracted values are stored

  • Repository Model – Auto-populate with Context Options
  • Log Views – Search and filter processed events
  • Log Agents – Out-of-the-box logging for BizTalk, Logic Apps, Mule, etc.
  • Endpoint Types – Types of integration endpoints
  • Endpoint Directions – Direction semantics for events