Category: Salesforce

  • Beyond “Try-Catch”: Building Self-Healing Apex with Transaction Finalizers

    We’ve all been there as developers. You build a complex Queueable job. You bulk-test it in the sandbox. Everything looks perfect. Then, production reality hits. A row lock here, a CPU timeout there, and suddenly your process dies a silent death.

    As an Architect, my nightmare is the “silent failure.” In the past, we tried to wrap everything in try-catch blocks, but let’s be honest—you can’t try-catch a LimitException. When the CPU clock runs out (10 seconds for synchronous Apex, 60 seconds for asynchronous), the transaction just… ends.

    That’s why I’ve become an advocate for the System.Finalizer interface. It’s the closest thing we have to a “safety net” for the asynchronous world.

    The Architecture: A “Manager-Worker” Relationship

    Think of a Finalizer as a supervisor who stands outside the factory floor. Even if the factory (your Queueable) collapses, the supervisor is still standing there with a clipboard, ready to log the incident and call for help.

    The Glue: The IRetryable Interface

    To ensure our Finalizer can talk to any Queueable job without knowing its specific business logic, we define an interface. This allows the Finalizer to ask the job, “Are you allowed to try again?” and “What is your current retry count?”

    The Implementation

    Here is how I structure this pattern to ensure resiliency. We are going to build a Self-Healing Worker that can detect its own failure and attempt a retry.

    Architect’s Warning: Salesforce limits successive re-queuing from a Finalizer to 5 consecutive attempts. If the job fails 5 times in a row, the chain stops to prevent infinite loops.

    1. The Interface

    /**
     * @description Interface to enable self-healing capabilities.
     */
    public interface IRetryable {
        Boolean canRetry();
        void incrementRetryCount();
        Integer getRetryCount();
    }

    2. The Supervisor (The Finalizer)

    /**
     * @description Architect Pattern: Transactional Safety Net
     */
    public class QueueableSafetyNet implements System.Finalizer {
        private Object parentJob; 
    
        public QueueableSafetyNet(Object job) {
            this.parentJob = job;
        }
    
        public void execute(System.FinalizerContext ctx) {
            if (ctx.getResult() != ParentJobResult.SUCCESS) {
                handleFailure(ctx);
            }
        }
    
        private void handleFailure(System.FinalizerContext ctx) {
            Exception ex = ctx.getException();
            System.debug('Async failure detected: ' + ex?.getMessage());
            // 1. Log to your custom error framework
            // insert new Error_Log__c(...);
    
            if (parentJob instanceof IRetryable) {
                IRetryable retryableJob = (IRetryable)parentJob;
                
                if (retryableJob.canRetry()) {
                    retryableJob.incrementRetryCount();
                    System.debug('Self-healing: Retry #' + retryableJob.getRetryCount());
                    System.enqueueJob((Queueable) parentJob); // Cast required: enqueueJob expects a Queueable
                }
            }
        }
    }

    3. The Worker (The Queueable)

    public class DataSyncJob implements Queueable, IRetryable {
        private List<Id> recordIds;
        private Integer retryCount = 0;
        private static final Integer MAX_RETRIES = 3;
    
        public DataSyncJob(List<Id> ids) { this.recordIds = ids; }
    
        public void execute(QueueableContext qbc) {
            // ATTACH FIRST: Ensure the net is under you before you start walking the wire
            System.attachFinalizer(new QueueableSafetyNet(this));
    
            // Business Logic: High-risk processing goes here
        }
    
        public Boolean canRetry() { return retryCount < MAX_RETRIES; }
        public void incrementRetryCount() { this.retryCount++; }
        public Integer getRetryCount() { return this.retryCount; }
    }
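
    Wiring it together is a one-liner. The snippet below is a minimal sketch of how you might kick off the worker from Anonymous Apex or a service class; the record scope (technology Accounts) is purely illustrative.

    ```apex
    // Gather an illustrative scope of record IDs to process.
    List<Id> accountIds = new List<Id>(new Map<Id, Account>(
        [SELECT Id FROM Account WHERE Industry = 'Technology' LIMIT 200]
    ).keySet());

    // Enqueue the self-healing worker. The Finalizer is attached inside
    // execute(), so the safety net is in place before any business logic runs.
    Id jobId = System.enqueueJob(new DataSyncJob(accountIds));
    System.debug('Enqueued DataSyncJob: ' + jobId);
    ```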

    Comparison: Traditional Try-Catch vs. Finalizers

    | Scenario | Try-Catch Block | Transaction Finalizer |
    | --- | --- | --- |
    | Logic errors (null pointer, etc.) | ✅ Can catch | ✅ Runs after failure |
    | Governor limits (CPU/heap) | ❌ Cannot catch | ✅ Runs after failure |
    | Assertion failures | ❌ Cannot catch | ✅ Runs after failure |
    | Scope | Only the code inside the block | The entire execute method |

    Why this changes your “Architectural DNA”

    • Resiliency over Rigidity: Instead of just failing on a row lock, your code now says, “I’ll try again in a minute.”
    • True Error Visibility: You can finally report on why things failed in the background without digging through raw Trace Logs.
    • Governance: You’re respecting the platform. Finalizers allow you to fail gracefully rather than leaving data in a partial or “zombie” state.

    The Trade-offs (Architect’s Reality Check)

    • Retry Limits: A Finalizer can re-enqueue a failed job at most five consecutive times. If your job is fundamentally broken (a logic error), retrying won’t help. Use your retry count wisely.
    • State Management: Ensure your Queueable class is serializable. Everything you need to “restart” the job must be stored in the class variables.

    Final Thought

    We’re moving toward a world of “Autonomous Salesforce.” Our systems should be smart enough to detect a hiccup. They should correct it without an admin having to manually click a button. Transaction Finalizers are the foundation of that autonomy.

  • How to Implement CDC Event Filtering in High-Traffic Systems

    The “Event Storm” Problem

    We’ve all been there. You enable Change Data Capture (CDC) on a high-traffic object and suddenly your downstream systems—MuleSoft, Heroku, or AWS—are drowning.

    By default, CDC publishes an event for every field change. If a batch job updates 50,000 records to fix a typo, you just burned 50,000 events from your daily quota. If that change didn’t matter to your integration, you’ve wasted resources and hit limits for nothing.

    This is the “Event Storm.” It kills scalability.

    The Solution: Stream Filtering

    Architects must “shift left.” Don’t make subscribers filter the noise; prevent the noise from ever reaching the bus. Platform Event Channel Filtering turns a high-volume firehose into a high-signal notification service.

    How to Implement (4 Quick Steps)

    Filtering CDC events isn’t (yet) a “point-and-click” journey in the Setup menu. It requires a bit of Metadata/Tooling API work.

    • Create a Custom Channel: You cannot filter the standard ChangeEvents channel. Create a custom one via the PlatformEventChannel object.
    // Tooling API: PlatformEventChannel
    POST /services/data/v63.0/tooling/sobjects/PlatformEventChannel
    {
      "FullName": "HighValueAccount_Chn__chn",
      "Metadata": {
        "channelType": "Event",
        "label": "High Value Account Changes"
      }
    }
    • Add a Channel Member: Bind your object (e.g., AccountChangeEvent) to your new custom channel.
    • Define the Filter: This is where the logic lives. Set the filterExpression field on the PlatformEventChannelMember record. Note that this is a predicate only—the WHERE-clause portion, not a full SOQL statement.
    Example filter expression: Industry = 'Technology' AND AnnualRevenue > 1000000
    // Tooling API: PlatformEventChannelMember
    POST /services/data/v63.0/tooling/sobjects/PlatformEventChannelMember
    {
      "FullName": "HighValueAccount_Chn_chn_AccountChangeEvent",
      "Metadata": {
        "eventChannel": "HighValueAccount_Chn__chn",
        "selectedEntity": "AccountChangeEvent",
        "filterExpression": "Industry = 'Technology' AND AnnualRevenue > 1000000"
      }
    }
    • Deploy: Use your CI/CD pipeline or CLI to push the metadata.
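
    If you manage this through source control rather than raw Tooling API calls, the same member can be expressed as Metadata API XML. The file path in the comment is an illustrative assumption following the standard source format; note that > must be escaped as &gt; inside XML.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Assumed path: force-app/main/default/platformEventChannelMembers/
         HighValueAccount_Chn_chn_AccountChangeEvent.platformEventChannelMember-meta.xml -->
    <PlatformEventChannelMember xmlns="http://soap.sforce.com/2006/04/metadata">
        <eventChannel>HighValueAccount_Chn__chn</eventChannel>
        <filterExpression>Industry = 'Technology' AND AnnualRevenue &gt; 1000000</filterExpression>
        <selectedEntity>AccountChangeEvent</selectedEntity>
    </PlatformEventChannelMember>
    ```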

    Trade-offs at a Glance

    | Advantage | Disadvantage |
    | --- | --- |
    | Protects Quotas: Stops draining your 24-hour delivery limits. | Simple Logic Only: No cross-object formulas or complex logic allowed. |
    | Consumer Efficiency: Middleware stops processing “junk” events. | No UI: Must be managed via API/CLI and Git. |
    | Lower Latency: Less traffic on the bus means faster delivery. | Harder to Debug: You can’t easily “see” what was filtered. |

    The Bottom Line

    Efficiency isn’t just about fast code; it’s about doing less unnecessary work. Filtering CDC streams is the best way to keep your event-driven architecture lean, cheap, and fast.

  • Beyond SOQL101: Mastering the Stateful Selector Pattern in Apex

    In high-scale Salesforce environments, resource conservation is the ultimate design goal. Without a dedicated data strategy, redundant queries within a single transaction don’t just waste CPU time. They also risk hitting the hard wall of Governor Limits.

    The Problem: Transactional Redundancy

    In complex transactions, the same record is often requested by multiple independent components:

    • Triggers checking record status.
    • Service Classes calculating SLA details.
    • Validation Handlers verifying ownership.

    Without a strategy, each call initiates a fresh database round-trip. This “fragmented querying” leads to System.LimitException: Too many SOQL queries: 101.

    The Solution: The Stateful Selector Pattern

    By centralizing data access and implementing Memoization (Static Caching), we ensure that once a record is fetched, it resides in memory for the duration of the execution context.

    The Core Implementation Steps:

    1. Encapsulate: Use inherited sharing to ensure the selector respects the caller’s security context.
    2. Define a Transaction Cache: Use a private static Map<Id, SObject> as an in-memory buffer.
    3. Apply “Delta” Logic: Identify only the IDs missing from the cache before querying.
    4. Enforce Security: Always use WITH USER_MODE for native FLS and CRUD enforcement.
    5. Serve & Hydrate: Bulk-fetch missing records, update the cache, and return the result set.

    The Pattern in Practice

    Below is a refined implementation of a Stateful Account Selector:

    /**
     * @description Account Selector with Transactional Caching 
     * @author John Dove
     */
    public inherited sharing class AccountSelector {
        
        // Internal cache to store records retrieved during the transaction
        private static Map<Id, Account> accountCache = new Map<Id, Account>();
    
        /**
         * @description Returns a Map of Accounts for the provided IDs.
         * Only queries the database for IDs not already present in the cache.
         */
        public static Map<Id, Account> getAccountsById(Set<Id> accountIds) {
            if (accountIds == null || accountIds.isEmpty()) {
                return new Map<Id, Account>();
            }
    
            // 1. Identify IDs not yet cached
            Set<Id> idsToQuery = new Set<Id>();
            for (Id accId : accountIds) {
                if (!accountCache.containsKey(accId)) {
                    idsToQuery.add(accId);
                }
            }
    
            // 2. Perform bulkified, secured query for the "Delta"
            if (!idsToQuery.isEmpty()) {
                List<Account> queriedRecords = [
                    SELECT Id, Name, Industry, AnnualRevenue, (SELECT Id FROM Contacts)
                    FROM Account
                    WHERE Id IN :idsToQuery
                    WITH USER_MODE
                ];
                
                // 3. Hydrate the cache
                accountCache.putAll(queriedRecords);
            }
    
            // 4. Extract and return the requested subset from the cache
            Map<Id, Account> results = new Map<Id, Account>();
            for (Id accId : accountIds) {
                if (accountCache.containsKey(accId)) {
                    results.put(accId, accountCache.get(accId));
                }
            }
            return results;
        }
    
        /**
         * @description Invalidation method to be called after DML 
         * to ensure the cache doesn't serve stale data.
         */
        public static void invalidateCache(Set<Id> idsToRemove) {
            // Note: in Apex, Map.keySet() returns a copy, so removing from
            // that set would not touch the map. Remove keys directly instead.
            for (Id staleId : idsToRemove) {
                accountCache.remove(staleId);
            }
        }
    }
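
    A quick Anonymous Apex sanity check (assuming a few Account records exist in the org) shows the delta logic at work—the second call is served entirely from the cache:

    ```apex
    Set<Id> ids = new Map<Id, Account>([SELECT Id FROM Account LIMIT 50]).keySet();

    Integer queriesBefore = Limits.getQueries();
    AccountSelector.getAccountsById(ids); // Cache miss: one SOQL round-trip
    AccountSelector.getAccountsById(ids); // Cache hit: zero additional queries
    System.debug('SOQL consumed by selector: ' + (Limits.getQueries() - queriesBefore));

    // After DML against these records, evict them so the next read is fresh.
    AccountSelector.invalidateCache(ids);
    ```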

    Why This Scales

    • Reduced DB Contention: Minimizing SOQL round-trips frees up database resources for concurrent requests.
    • Idempotency: You can call the selector 50 times in a recursive trigger flow, and it will only hit the database once.
    • Clean Maintenance: Global filters (like IsActive = true) are updated in one method, not across dozens of classes.

    Trade-offs: Advantages & Disadvantages

    | Feature | Advantage | Disadvantage |
    | --- | --- | --- |
    | Governor Limits | Drastically reduces SOQL query count. | Can lead to heap limit exceptions if caching thousands of large records. |
    | Performance | Sub-millisecond retrieval for cached records. | Increased complexity in handling cache invalidation after DML. |
    | Maintenance | Single source of truth for query logic/security. | Risk of “stale data” if the record is updated but the cache isn’t refreshed. |

    Conclusion

    The Stateful Selector pattern is a fundamental building block for enterprise-grade Salesforce architecture. It transforms your data layer from a performance bottleneck into a high-speed, secure, and predictable asset.