Guidance: Best Approach for Handling Large-Scale Deletions in Nextworld
1. Why Data Purge Is the Recommended Solution
For deleting hundreds of thousands of records, we strongly recommend using the Data Purge framework over Logic Block implementations. Here’s why:
Data Purge Benefits:
• Chunked deletion with automatic commits - Deletes in 10,000-record batches with commits between chunks
• Built-in dead tuple management - Prevents table bloat by releasing locks between batches
• Automatic cascade handling - Manages header-detail relationships correctly
• Progress tracking and preview mode - See the impact before execution
• No code required - Configure through metadata
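To make "chunked deletion with commits between chunks" concrete, here is a minimal, platform-neutral sketch of the idea in Python, using the standard-library sqlite3 module as a stand-in database. This illustrates the technique only; it is not the Data Purge implementation, and the table and filter passed in are hypothetical.

import sqlite3

BATCH_SIZE = 10_000  # Data Purge's chunk size, per the list above

def purge_in_chunks(conn: sqlite3.Connection, table: str, where: str) -> int:
    """Delete matching rows in fixed-size batches, committing after each
    batch so locks are released and dead tuples can be reclaimed."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE {where} LIMIT ?)",
            (BATCH_SIZE,),
        )
        conn.commit()  # committing between chunks releases locks
        total += cur.rowcount
        if cur.rowcount < BATCH_SIZE:
            return total  # final (partial) batch processed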
2. How to Configure Data Purge
To set up a Data Purge:
Step 1: Create a Purge Configuration record with:
• Table: (Your specific table)
• Filter criteria: (Your business rules)
• Purge frequency: How often to run (can be triggered on-demand)
• Age threshold: Optional minimum age of records before deletion
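As a purely illustrative example, the four values above might look like this for a hypothetical closed-orders cleanup. The table name, filter, and field spellings are invented for illustration; the real record is created through Nextworld metadata, not code.

# Hypothetical values for the Step 1 fields; not actual Nextworld metadata.
purge_config = {
    "table": "SalesOrderHistory",            # your specific table (example name)
    "filter_criteria": "status = 'CLOSED'",  # your business rules (example)
    "purge_frequency": "on-demand",          # or a recurring schedule
    "age_threshold_days": 365,               # optional minimum record age
}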
Step 2: Use the Data Purge App to:
• Run in preview mode first to validate impact
• Review estimated deletion counts
• Execute the actual purge during off-peak hours
Step 3: For dynamic filters, you can:
• Update the purge configuration programmatically before execution
• Use the DataPurgeAppService API to trigger with updated criteria
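A minimal sketch of that flow follows. DataPurgeAppService is the API named above, but the method names used here (update_filter_criteria, trigger_purge) are hypothetical placeholders, not confirmed signatures; check the actual API surface before relying on them.

# Hedged sketch only: method names are hypothetical stand-ins for
# whatever the real DataPurgeAppService API exposes.
def purge_with_dynamic_filter(service, config_id: str, criteria: str):
    service.update_filter_criteria(config_id, criteria)  # hypothetical call
    return service.trigger_purge(config_id)              # hypothetical call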
3. If Data Purge Cannot Be Used
If business requirements absolutely prevent using Data Purge (e.g., real-time user-triggered deletes), here’s the least problematic Logic Block approach:
Critical Implementation Requirements:
- Create a BACK (background) type Logic Block
- Fetch records in small batches (500-1000 max)
- Delete each batch in a separate transaction
- Add a delay between batches (100-500 ms)
- Track progress for user visibility
Why This Will Still Be Problematic:
• No bulk delete capability - Each record is deleted individually
• Long execution times - 100k records could take hours
• Database impact - Continuous load on the database
• No automatic cascade handling - Related records must be handled manually
• Transaction log growth - Many small transactions instead of one set-based operation
• Difficult rollback - No built-in recovery mechanism
Sample Pattern (conceptual):
Logic Block: DeleteRecordsFromTable
Type: BACK (Background)
Actions:
1. Query matching records with limit 500
2. For each record: Delete
3. Commit the transaction
4. Update the progress counter
5. Pause 100-500 ms
6. If more records exist: Schedule the next batch
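For readers outside Nextworld tooling, here is the same pattern as a runnable, platform-neutral Python sketch, again using sqlite3 as a stand-in database. It illustrates the batching discipline, not Logic Block code; the table and filter are hypothetical.

import sqlite3
import time

BATCH_SIZE = 500   # small batches (500-1000 max)
DELAY_S = 0.25     # 100-500 ms pause between batches

def background_delete(conn: sqlite3.Connection, table: str, where: str) -> int:
    """Delete matching rows batch by batch: one transaction per batch,
    row-by-row deletes inside it, a pause between batches, and a running
    progress count for user visibility."""
    deleted = 0
    while True:
        rows = conn.execute(
            f"SELECT rowid FROM {table} WHERE {where} LIMIT ?", (BATCH_SIZE,)
        ).fetchall()
        for (rowid,) in rows:  # no bulk delete: each record deleted individually
            conn.execute(f"DELETE FROM {table} WHERE rowid = ?", (rowid,))
        conn.commit()          # each batch is a separate transaction
        deleted += len(rows)
        print(f"Progress: {deleted} records deleted")
        if len(rows) < BATCH_SIZE:
            return deleted     # nothing left to schedule
        time.sleep(DELAY_S)    # ease continuous load on the database

Note how the row-by-row inner loop makes this inherently slower than the set-based chunk deletes sketched earlier, which is exactly the trade-off the list above warns about.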
Recommendation
Given the volume (hundreds of thousands of records) and the performance implications, we strongly recommend working with your Solution Architect to implement Data Purge rather than Logic Blocks. The infrastructure impact of row-by-row deletion at this scale could affect system performance for all users.