CPU limit errors on the Salesforce platform happen when a single transaction uses more than 10 seconds (10,000 ms) of CPU time; asynchronous transactions get 60 seconds (60,000 ms). A single transaction consists of all the items executing (called executions) for a particular context: managed package code, workflows, processes, validation rules, unmanaged code, and so on. Salesforce keeps track of how much CPU time the transaction has used, and when the limit is reached the platform raises a CPU limit error. It's important to note that there is no way to control the order of executions; Salesforce controls that. So the fact that the error happens during a particular trigger does not mean that something is wrong with that trigger specifically, but rather that too much CPU is being used during the transaction as a whole. Oftentimes there are too many triggers, workflow rules, and processes doing work within a single transaction, and the combination of executions uses up the CPU budget. There's nothing specifically that Full Circle can do to fix this. It's an issue for the org's Salesforce admin to investigate and analyze.
The most useful way to analyze CPU usage for a given transaction in an org is a debug log. Debug logs contain limit usage information for all namespaces across the entire transaction. Capture a debug log during the transaction in question, then browse it to find the limit usage information. Below is an example of limit usage info from a debug log for one namespace; items that are not part of a managed package are categorized under the "default" namespace.
Number of SOQL queries: 2 out of 200
Number of query rows: 6 out of 50000
Number of SOSL queries: 0 out of 20
Number of DML statements: 0 out of 150
Number of DML rows: 0 out of 10000
Maximum CPU time: 0 out of 60000
Maximum heap size: 0 out of 12000000
Number of callouts: 0 out of 0
Number of Email Invocations: 0 out of 10
Number of future calls: 0 out of 0
Number of queueable jobs added to the queue: 0 out of 1
Number of Mobile Apex push calls: 0 out of 10
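As a rough illustration, limit lines like those above can be pulled out of a saved debug log with a short script. This is only a sketch, not a Full Circle tool; the regular expression and sample text are assumptions based on the format shown above:

```python
import re

# Matches lines like "Number of SOQL queries: 2 out of 200"
# or "Maximum CPU time: 4912 out of 10000" from a saved debug log.
LIMIT_LINE = re.compile(r"^\s*(?P<name>[A-Za-z ]+?):\s*(?P<used>\d+) out of (?P<cap>\d+)")

def parse_limit_usage(log_text):
    """Return {limit name: (used, cap)} for every limit line found."""
    usage = {}
    for line in log_text.splitlines():
        m = LIMIT_LINE.match(line)
        if m:
            usage[m.group("name").strip()] = (int(m.group("used")), int(m.group("cap")))
    return usage

sample = """Number of SOQL queries: 2 out of 200
Maximum CPU time: 4912 out of 10000"""
print(parse_limit_usage(sample)["Maximum CPU time"])  # (4912, 10000)
```

Running this over each namespace's limit section lets you compare usage side by side instead of scanning the raw log by eye.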
While this can help you determine the CPU usage of your own Apex code, it is unfortunately not currently a reliable way to determine the CPU time used by Apex classes in managed packages.
For more info on Salesforce governor limits, check out this resource:
We recommend setting up a debug log with these debug settings as a starting point:
- Workflow: Info (or None if the debug log is too large)
- Apex Code: None
- Apex Profiling: Info (this is the most important one)
Watch a Video
Watch 'The Dark Art of CPU Benchmarking', a Dreamforce 2016 session, to learn more about which processes consume the most CPU.
A Tale of CPU Time Limits
Once upon a time...
Usage limits were measured by the number of lines of Apex code executed. These were called "Script Limits".
Each package was allowed 200,000 lines of Apex code in a trigger execution context, as was your own code.
Workflows, processes and formulas had no usage limit at all.
If a workflow updated a field and fired a trigger again, each package still had its full allowance of 200,000 lines of Apex code. And workflows and processes still had no usage limit.
Then things changed...
Usage limits were changed to calculate CPU time usage.
Packages no longer had their own usage limits. Everyone shared the same CPU time limit.
And now, workflows, processes and formulas are counted as well.
If a workflow updates a field and fires a trigger again, you may double your CPU time usage! A third time may triple it! So field updates are very costly.
You can't tell from the error which code is truly at fault!
In this example, even though most of the time was used up by local code and workflows, the package will get blamed for the exception, because its code was running when the CPU limit was hit!
On a system with multiple packages, it’s difficult to know which package is actually using most of the time – again, the code that happens to be running when the limit is hit will appear in the exception message.
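To make the attribution problem concrete, here is a small, purely illustrative simulation (not Salesforce code; the execution list and timings are invented): whichever execution happens to be running when the shared budget runs out is the one named in the error, regardless of who used the most time.

```python
CPU_LIMIT_MS = 10_000  # synchronous Apex CPU time limit

# Invented executions in the order the platform happens to run them:
# (name, CPU ms consumed). Local code and workflows use most of the time.
executions = [
    ("local trigger", 4_000),
    ("workflow field updates", 3_500),
    ("local trigger (re-fired by workflow)", 2_000),
    ("managed package trigger", 1_000),
]

used = 0
blamed = None
for name, cost in executions:
    used += cost
    if used > CPU_LIMIT_MS:  # limit hit mid-execution
        blamed = name        # this execution appears in the error message
        break

print(f"Total before error: {used} ms; blamed: {blamed}")
# → Total before error: 10500 ms; blamed: managed package trigger
```

The managed package used only 1 second of the 10.5 consumed, yet its name appears in the exception, which is exactly why the error message alone can't identify the real culprit.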
How can you tell where time is being used?
- Capture the error in debug logs
  - This may take multiple attempts at different logging levels
- Interpret the debug logs
  - This is tricky: depending on the logging level, the debug logs may not attribute CPU time accurately
What else can you do?
- In your own development:
  - Move operations into future or other asynchronous calls
  - Detect trigger reentrancy caused by workflows and avoid reentrant processing if possible
  - Follow best practices to optimize Apex code
  - Avoid workflows and processes that reevaluate after processing. Here is a helpful article on why 'Evaluate Next Criteria' in the Lightning Process Builder is so CPU intensive.
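The reentrancy check mentioned above is typically built in Apex with a static flag that lives for the duration of one transaction. Here is the same idea sketched in Python, for illustration only (the class and names are invented; in Apex you would use a static Boolean on a helper class):

```python
class TriggerGuard:
    """Transaction-scoped flag: the first firing does the work; re-entrant
    firings (e.g. the trigger re-fired by a workflow field update) skip it,
    saving the CPU time a duplicate pass would burn."""
    _already_ran = False

    @classmethod
    def run_once(cls, work):
        if cls._already_ran:
            return "skipped (reentrant)"
        cls._already_ran = True
        return work()

results = [
    TriggerGuard.run_once(lambda: "processed"),  # first firing
    TriggerGuard.run_once(lambda: "processed"),  # re-fired by a workflow
]
print(results)  # ['processed', 'skipped (reentrant)']
```

Because a workflow field update can double or triple total CPU usage, short-circuiting the second and third passes like this is one of the cheapest ways to stay under the limit.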