Category: Salesforce

  • How to Implement CDC Event Filtering in High-Traffic Systems

    The “Event Storm” Problem

    We’ve all been there. You enable Change Data Capture (CDC) on a high-traffic object and suddenly your downstream systems—MuleSoft, Heroku, or AWS—are drowning.

    By default, CDC publishes an event for every field change. If a batch job updates 50,000 records to fix a typo, you just burned 50,000 events from your daily quota. If that change didn’t matter to your integration, you’ve wasted resources and hit limits for nothing.

    This is the “Event Storm.” It kills scalability.

    The Solution: Stream Filtering

    Architects must “shift left.” Don’t make subscribers filter the noise; prevent the noise from ever reaching the bus. Platform Event Channel Filtering turns a high-volume firehose into a high-signal notification service.

    How to Implement (4 Quick Steps)

    Filtering CDC events isn’t (yet) a “point-and-click” journey in the Setup menu. It requires a bit of Metadata/Tooling API work.

    • Create a Custom Channel: You cannot filter the standard ChangeEvents channel. Create a custom one via the PlatformEventChannel object.
    // Tooling API: PlatformEventChannel
    POST /services/data/v63.0/tooling/sobjects/PlatformEventChannel
    {
      "FullName": "HighValueAccount_Chn__chn",
      "Metadata": {
        "channelType": "Event",
        "label": "High Value Account Changes"
      }
    }
    • Add a Channel Member: Bind your object (e.g., AccountChangeEvent) to your new custom channel via the PlatformEventChannelMember object.
    • Define the Filter: The filter lives on the channel member itself, in its filterExpression metadata field. The expression is a SOQL-style predicate (the part that would follow WHERE), not a full query.
    Example Filter Expression: Industry = 'Technology' AND AnnualRevenue > 1000000
    // Tooling API: PlatformEventChannelMember
    POST /services/data/v63.0/tooling/sobjects/PlatformEventChannelMember
    {
      "FullName": "HighValueAccount_Chn_chn_AccountChangeEvent",
      "Metadata": {
        "eventChannel": "HighValueAccount_Chn__chn",
        "selectedEntity": "AccountChangeEvent",
        "filterExpression": "Industry = 'Technology' AND AnnualRevenue > 1000000"
      }
    }
    • Deploy: Use your CI/CD pipeline or CLI to push the metadata.
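    If you prefer scripting the calls over raw REST, the steps above can be sketched in Python. This is a minimal sketch, not a client library: the instance URL, access token, and the channel/entity names are placeholder assumptions, and the member FullName convention (`<channel>_chn_<entity>`) should be checked against your API version.

```python
import json
import urllib.request

API_VERSION = "v63.0"  # assumed; match your org's API version

def channel_payload(dev_name, label):
    """Body for the PlatformEventChannel create call (step 1)."""
    return {
        "FullName": dev_name + "__chn",
        # CDC channels use channelType "data"; platform-event channels use "event"
        "Metadata": {"channelType": "data", "label": label},
    }

def member_payload(channel_dev_name, entity, filter_expr):
    """Body for the PlatformEventChannelMember create call (steps 2-3):
    binds the change event entity to the channel and attaches the filter."""
    return {
        "FullName": f"{channel_dev_name}_chn_{entity}",
        "Metadata": {
            "eventChannel": channel_dev_name + "__chn",
            "selectedEntity": entity,
            "filterExpression": filter_expr,
        },
    }

def create(instance_url, token, sobject, body):
    """POST one Tooling API record (requires a live org and a valid token)."""
    req = urllib.request.Request(
        f"{instance_url}/services/data/{API_VERSION}/tooling/sobjects/{sobject}",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

    In practice you would call `create(...)` once for the channel and once for the member, then verify the filter by subscribing to `/data/HighValueAccount_Chn__chn` from a test client.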

    Trade-offs at a Glance

    Advantages:
    • Protects Quotas: Stops draining your 24-hour delivery limits.
    • Consumer Efficiency: Middleware stops processing “junk” events.
    • Lower Latency: Less traffic on the bus means faster delivery.

    Disadvantages:
    • Simple Logic Only: No cross-object formulas or complex logic allowed.
    • No UI: Must be managed via API/CLI and Git.
    • Harder to Debug: You can’t easily “see” what was filtered.

    The Bottom Line

    Efficiency isn’t just about fast code; it’s about doing less unnecessary work. Filtering CDC streams is the best way to keep your event-driven architecture lean, cheap, and fast.

  • Beyond SOQL101: Mastering the Stateful Selector Pattern in Apex

    In high-scale Salesforce environments, resource conservation is the ultimate design goal. Without a dedicated data strategy, redundant queries within a single transaction don’t just waste CPU time. They also risk hitting the hard wall of Governor Limits.

    The Problem: Transactional Redundancy

    In complex transactions, the same record is often requested by multiple independent components:

    • Triggers checking record status.
    • Service Classes calculating SLA details.
    • Validation Handlers verifying ownership.

    Without a strategy, each call initiates a fresh database round-trip. This “fragmented querying” leads to System.LimitException: Too many SOQL queries: 101.

    The Solution: The Stateful Selector Pattern

    By centralizing data access and implementing Memoization (Static Caching), we ensure that once a record is fetched, it resides in memory for the duration of the execution context.

    The Core Implementation Steps:

    1. Encapsulate: Use inherited sharing to ensure the selector respects the caller’s security context.
    2. Define a Transaction Cache: Use a private static Map<Id, SObject> as an in-memory buffer.
    3. Apply “Delta” Logic: Identify only the IDs missing from the cache before querying.
    4. Enforce Security: Always use WITH USER_MODE for native FLS and CRUD enforcement.
    5. Serve & Hydrate: Bulk-fetch missing records, update the cache, and return the result set.

    The Pattern in Practice

    Below is a refined implementation of a Stateful Account Selector:

    /**
     * @description Account Selector with Transactional Caching 
     * @author John Dove
     */
    public inherited sharing class AccountSelector {
        
        // Internal cache to store records retrieved during the transaction
        private static Map<Id, Account> accountCache = new Map<Id, Account>();
    
        /**
         * @description Returns a Map of Accounts for the provided IDs.
         * Only queries the database for IDs not already present in the cache.
         */
        public static Map<Id, Account> getAccountsById(Set<Id> accountIds) {
            if (accountIds == null || accountIds.isEmpty()) {
                return new Map<Id, Account>();
            }
    
            // 1. Identify IDs not yet cached
            Set<Id> idsToQuery = new Set<Id>();
            for (Id accId : accountIds) {
                if (!accountCache.containsKey(accId)) {
                    idsToQuery.add(accId);
                }
            }
    
            // 2. Perform bulkified, secured query for the "Delta"
            if (!idsToQuery.isEmpty()) {
                List<Account> queriedRecords = [
                    SELECT Id, Name, Industry, AnnualRevenue, (SELECT Id FROM Contacts)
                    FROM Account
                    WHERE Id IN :idsToQuery
                    WITH USER_MODE
                ];
                
                // 3. Hydrate the cache
                accountCache.putAll(queriedRecords);
            }
    
            // 4. Extract and return the requested subset from the cache
            Map<Id, Account> results = new Map<Id, Account>();
            for (Id accId : accountIds) {
                if (accountCache.containsKey(accId)) {
                    results.put(accId, accountCache.get(accId));
                }
            }
            return results;
        }
    
        /**
         * @description Invalidation method to be called after DML 
         * to ensure the cache doesn't serve stale data.
         */
        public static void invalidateCache(Set<Id> idsToRemove) {
            // Note: Map.keySet() returns a copy in Apex, so removing keys
            // from it would not affect the map; remove entries directly.
            for (Id staleId : idsToRemove) {
                accountCache.remove(staleId);
            }
        }
    }

    Why This Scales

    • Reduced DB Contention: Minimizing SOQL round-trips frees up database resources for concurrent requests.
    • Idempotency: You can call the selector 50 times in a recursive trigger flow, and it will only hit the database once.
    • Clean Maintenance: Global filters (like IsActive = true) are updated in one method, not across dozens of classes.

    Trade-offs: Advantages & Disadvantages

    • Governor Limits: drastically reduces SOQL query count, but can lead to Heap Limit exceptions if you cache thousands of large records.
    • Performance: sub-millisecond retrieval for cached records, at the cost of extra complexity in handling cache invalidation after DML.
    • Maintenance: a single source of truth for query logic and security, with a risk of serving “stale data” if a record is updated but the cache isn’t refreshed.

    Conclusion

    The Stateful Selector pattern is a fundamental building block for enterprise-grade Salesforce architecture. It transforms your data layer from a performance bottleneck into a high-speed, secure, and predictable asset.

  • Evolution or Extinction? My Salesforce Toolkit: A 2018 vs. 2026 Reality Check

    Imagine waking up today from a coma that began in 2018: your Salesforce blog is now a historical artifact. I looked back at my old posts and realized my “modern” toolkit from back then belongs in a museum. Here is how the world changed while we were busy debugging.

    1. The “Flow-pocalypse”

    Back in 2018, we used Process Builder. It looked like a friendly flowchart but secretly loved hitting CPU limits. Today, Salesforce sent Process Builder and Workflow Rules to a farm upstate. Flow is the undisputed heavyweight champion now. It handles tasks that once required 200 lines of Apex. Become a “Flownatic” now, or the platform will leave you behind.

    2. Aura vs. LWC: The Breakup

    Remember Aura? The framework made you feel like you were writing code in a dark room wearing oven mitts. Lightning Web Components (LWC) finally showed up and brought modern JavaScript with it. We went from asking “Why is this so slow?” to realizing it works like the rest of the internet.

    3. Change Sets: The Final Boss

    In 2018, “Deployment” meant clicking through a 500-item list. We prayed to the gods that we didn’t forget a single Permission Set. Today, you must use SFDX, VS Code, and DevOps Center. Anything else is essentially tech-support penance. We traded “Refresh and Pray” for “Commit and Deploy,” and my blood pressure is much lower.

    4. From “What’s an API?” to “Everything is an API”

    We used to treat integrations like scary special projects. Now, MuleSoft is baked in, and Data Cloud ingests every data byte. The “siloed” Salesforce org is a myth. You are no longer just a Salesforce dev. You are a data plumber wrangling a firehose.

    5. AI (The Buzzword that Ate the World)

    In 2018, “Einstein” was mostly cute icons and some predictive lead scoring we occasionally trusted. Now, you need Agentforce or an AI assistant writing your unit tests. Otherwise, are you even working? We moved from “The computer might help” to “The computer will write the boilerplate first.”

    The Verdict:

    The 2018 toolkit was a Swiss Army knife with rusted blades. The 2026 version is a lightsaber. It is way more powerful. Just try not to cut your own arm off.