How do I detect memory leaks in XSLT transformations?
Memory leaks in DataPower XSLT transformations cause memory consumption to grow gradually until an OutOfMemoryError crashes the service. Proactive detection using memory trend analysis prevents unexpected service outages.
Memory Leak Detection Strategy
Combine memory trend analysis with correlation against service restarts to identify memory leaks before crashes occur.
Step 1: Configure Memory Monitoring
- Create a Memory resource for the DataPower appliance:
  - Navigate: Nodinite Web Client → Repository → Monitoring Resources
  - Resource type: Memory
  - DataPower appliance: Prod-Primary
  - Poll interval: 5 minutes (288 data points/day for granular trend analysis)
  - Historical data retention: 90 days minimum (long-term history is required to detect memory leak trends)
Step 2: Analyze Memory Trends
The Monitor View "DataPower Memory Trends - 7 Days" visualizes memory usage patterns over time.
Normal Memory Pattern (No Memory Leak):
```
Memory Usage (GB)
4.0 GB ┤
3.5 GB ┤
3.0 GB ┤
2.5 GB ┤
2.0 GB ┤    ╭─╮     ╭─╮     ╭─╮     ╭─╮     ╭─╮
1.5 GB ┤   ╭╯ ╰─╮  ╭╯ ╰─╮  ╭╯ ╰─╮  ╭╯ ╰─╮  ╭╯ ╰╮
1.0 GB ┤  ╭╯    ╰──╯    ╰──╯    ╰──╯    ╰──╯   ╰╮
0.5 GB ┼──╯─────────────────────────────────────╰──
        Day1    Day2    Day3    Day4    Day5    Day6    Day7
```
- Baseline: 512 MB
- Business hours: 512-750 MB (processing load fluctuation)
- Overnight: Returns to 512 MB baseline
- Pattern: Stable, predictable, no memory growth trend
Memory Leak Pattern (Linear Growth):
```
Memory Usage (GB)
4.0 GB ┤                             ╭──
3.8 GB ┤                         ╭───╯
3.5 GB ┤                     ╭───╯
3.0 GB ┤                 ╭───╯
2.5 GB ┤             ╭───╯
2.0 GB ┤         ╭───╯
1.5 GB ┤     ╭───╯
1.0 GB ┤  ╭──╯
0.5 GB ┼──╯─────────────────────────────────
        Day1   Day2   Day3   Day4   Day5   Day6   Day7
```
- Baseline growth: Linear increase over time
- Day 1: 512 MB → Day 3: 1.2 GB → Day 5: 2.4 GB → Day 7: 3.8 GB
- Approaching the 4 GB heap limit (crash imminent)
- Pattern: Continuous growth, never returns to baseline
Step 3: Correlate with Service Status
If a memory leak is detected, check the Service Health resource history for the Multi-Protocol Gateway.
Memory Leak Symptom Cycle:
- Memory grows linearly over days (512 MB → 1.2 GB → 2.4 GB → 3.8 GB)
- Java heap exhaustion occurs as memory approaches the 4 GB heap limit
- OutOfMemoryError thrown by DataPower JVM
- Service crashes → Service status changes from "up" to "down"
- Automatic restart triggered by DataPower watchdog process
- Memory resets to baseline (512 MB) after restart
- Cycle repeats → Service crashes every 48-72 hours
Example Correlation Timeline:
| Date/Time | Memory Usage | Service Status | Event |
|---|---|---|---|
| Oct 10, 6 AM | 512 MB | up (running) | Service healthy, baseline memory |
| Oct 11, 6 PM | 1.2 GB | up (running) | Memory growing (~700 MB increase in 36 hours) |
| Oct 13, 2 AM | 2.4 GB | up (running) | Memory leak evident (linear growth pattern) |
| Oct 14, 8 PM | 3.8 GB | up (running) | Approaching heap limit (4 GB), crash imminent |
| Oct 15, 6 AM | 4.0 GB | down (crashed) | OutOfMemoryError → Service crashed |
| Oct 15, 6:02 AM | 512 MB | up (restarted) | Automatic restart, memory reset to baseline |
| Oct 17, 2 PM | 1.5 GB | up (running) | Memory growing again (leak not fixed, cycle repeating) |
Step 4: Root Cause Investigation
Memory leaks in DataPower typically occur in custom XSLT transformations processing large XML payloads.
Common Memory Leak Causes:
Large XML variables not released:
```xml
<!-- Variable holds 500 KB in memory and is not garbage collected until the transformation completes -->
<xsl:variable name="largeDocument"
              select="document('http://backend.com/500KB-response.xml')"/>
```
Recursive templates without exit condition:
```xml
<xsl:template name="processNode">
  <xsl:param name="node"/>
  <xsl:call-template name="processNode">
    <!-- Recursive call without base case → infinite recursion → stack overflow → memory exhaustion -->
  </xsl:call-template>
</xsl:template>
```
Excessive string concatenation:
```xml
<xsl:variable name="output">
  <xsl:for-each select="//record">  <!-- 10,000 records -->
    <!-- Creates 10,000 string copies in memory -->
    <xsl:value-of select="concat($output, ., ',')"/>
  </xsl:for-each>
</xsl:variable>
```
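For contrast, the concatenation example can be rewritten so that each record is written directly into the variable's result tree instead of re-copying an ever-growing string. This is a minimal sketch assuming the same //record input; it is illustrative rather than taken from a real DataPower policy:

```xml
<!-- Sketch: each record is written once into the result tree, so no
     intermediate string copies accumulate -->
<xsl:variable name="output">
  <xsl:for-each select="//record">
    <xsl:value-of select="."/>
    <!-- Emit the separator only between records -->
    <xsl:if test="position() != last()">
      <xsl:text>,</xsl:text>
    </xsl:if>
  </xsl:for-each>
</xsl:variable>
```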
DataPower Logs Showing OutOfMemoryError:
Navigate to DataPower WebGUI → Troubleshooting → Logs → Service Error Log (or use Nodinite Remote Action "View Service Logs"):
```
[Oct 15, 2024 06:00:47] [mpgw][error] service 'TradingPartner-MPG': XSLT transformation failed
[Oct 15, 2024 06:00:48] [mpgw][error] java.lang.OutOfMemoryError: Java heap space
[Oct 15, 2024 06:00:48] [mpgw][error] at com.ibm.datapower.xslt.Processor.transform(Processor.java:423)
[Oct 15, 2024 06:00:49] [mpgw][error] service 'TradingPartner-MPG' stopping due to fatal error
```
Step 5: Proactive Alerting
Set a Memory Warning threshold at 85% usage to detect memory leaks before the service crashes.
Threshold Configuration:
- Memory baseline: 512 MB (normal idle state)
- Java heap limit: 4 GB (DataPower firmware configuration)
- Warning threshold: 85% (3.4 GB) → Early signal before crash
- Error threshold: 95% (3.8 GB) → Immediate action required
- Critical threshold: 98% (3.92 GB) → Service crash imminent (<10 minutes)
Proactive Alert Timeline:
| Date/Time | Memory Usage | Threshold | Alert | Action |
|---|---|---|---|---|
| Oct 14, 2 PM | 3.2 GB | 80% | Info | Monitor trends, no action required |
| Oct 14, 8 PM | 3.5 GB | 85% Warning | Warning alert fires | Operations team investigates proactively |
| Oct 14, 9 PM | N/A | N/A | Scheduled maintenance | Operations team schedules service restart during the next maintenance window (Saturday 2-6 AM) |
| Oct 15, 2 AM | 512 MB | 12% | Service restarted | Memory reset to baseline, crash prevented |
Benefits: the service crash is prevented (proactive restart before OutOfMemoryError), the restart happens in a planned maintenance window (no business impact), and the development team is notified to investigate the XSLT code inefficiency.
Example: Insurance Company Memory Leak Detection
Challenge: An insurance company processes large HL7 medical claims (500+ KB XML payloads) via a DataPower Multi-Protocol Gateway. An XSLT transformation converts HL7 to a proprietary format for the backend claims processing system.
Problem:
- Service crashed every 48-72 hours (OutOfMemoryError)
- Manual restarts required (on-call engineer paged at 2 AM)
- No visibility into memory trends (only reactive alerts after crash)
Solution:
- Configured memory monitoring with 5-minute polling, 90-day historical retention
- Created Monitor View "DataPower Memory Trends - 7 Days" with linear regression trend line
- Set Warning threshold 85% (3.4 GB), Error threshold 95% (3.8 GB)
Results:
- Memory leak detected proactively (Day 6: Warning alert fired at 85% memory, service not yet crashed)
- Planned maintenance restart (Saturday 2 AM restart during maintenance window, zero business impact)
- Root cause identified (XSLT code review found recursive template processing HL7 OBX segments without exit condition)
- XSLT code fixed (development team implemented a proper base case, tested in the Dev environment, deployed to Prod; a sketch of this pattern follows after this list)
- Zero service crashes in the 9 months since the XSLT fix was deployed (compared with 12 crashes in the preceding 9 months)
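To illustrate the kind of fix applied above, a recursive template with an explicit base case over HL7 OBX segments might look like the sketch below. The template and parameter names are hypothetical; the actual production transformation is not reproduced in this FAQ:

```xml
<!-- Hypothetical sketch: recursion over a shrinking node-set with an explicit exit condition -->
<xsl:template name="processOBXSegments">
  <xsl:param name="segments"/>
  <!-- Base case: stop when no OBX segments remain -->
  <xsl:if test="count($segments) &gt; 0">
    <xsl:apply-templates select="$segments[1]"/>
    <!-- Recurse only on the remaining segments, so the recursion terminates -->
    <xsl:call-template name="processOBXSegments">
      <xsl:with-param name="segments" select="$segments[position() &gt; 1]"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>
```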
Next Steps
After detecting a memory leak in DataPower XSLT transformations, follow these steps to resolve the issue:
Immediate Action - Schedule a proactive service restart during the next planned maintenance window (before the leak reaches critical levels). This prevents unplanned outages during business hours.
Enable Detailed Logging - Use Remote Action "View Service Logs" to capture the exact OutOfMemoryError stack trace, which pinpoints the failing XSLT transformation and line number.
Review XSLT Code - Examine the transformation identified in the error logs, focusing on (a) large variable assignments that aren't released, (b) recursive templates without exit conditions, and (c) string concatenation loops with many iterations.
Optimize the Transformation - Common fixes include using xsl:for-each instead of recursion, implementing select expressions to minimize variable scope, and using streaming for large payloads (see the sketch after this list).
Test Before Production - Deploy the optimized XSLT to Dev/Test environments, process sample payloads, and verify memory usage returns to normal baseline levels before deploying to Production.
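Where the recursion adds no value, it can often be replaced outright with xsl:for-each, as suggested in the optimization step above. The sketch below assumes an HL7-as-XML payload containing OBX segments; the claims and observation element names are placeholders, not the real backend format:

```xml
<!-- Hedged sketch: iterate OBX segments once with xsl:for-each instead of recursing -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <claims>
      <xsl:for-each select="//OBX">
        <!-- Each segment is visited exactly once; nothing is retained between iterations -->
        <observation>
          <xsl:value-of select="normalize-space(.)"/>
        </observation>
      </xsl:for-each>
    </claims>
  </xsl:template>
</xsl:stylesheet>
```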
Related Topics
- Memory Monitoring Overview - Configure memory resources, historical trend charts, threshold alerts
- Service Health FAQ - Correlate memory usage with service crashes and analyze crash patterns
- Remote Actions for Service Logs - View Service Logs to investigate OutOfMemoryError stack traces