The Ultimate Guide to Flow Best Practices and Standards

Editor’s note: This post was updated on January 31, 2024, with the latest information and resources. 

There’s no way around it: Salesforce Flow is the automation tool of the future. Flow is not just an ‘admin tool’ — it’s the holy grail of declarative development that unites developers AND admins by allowing the use of Lightning Web Components (LWC) and Apex, and letting the admin orchestrate all of it in one place. We’re starting to see a unique collaboration between admins and developers, with both sides learning a little something about Development and Administration.

In this blog, we’ll discuss best practices, ‘gotchas,’ and design tips to make sure your flows scale with your organization.

1. Document your flows!

Documenting your flow allows the next person, or the forgetful future version of yourself, to understand the overall flow’s objective. Flow designers don’t create solutions out of thin air — we need a business case to solve hard problems, and having those breadcrumbs is critical for maintaining the automation long term.

Outline the flow’s purpose

Fill in that description field! Which problem is your flow solving? Be sure to include what your flow does, the objects your flow touches, and where it’s invoked from (such as which page layout it lives on if it’s a screen flow, or which Process Builder process or parent flow calls it if it’s an autolaunched flow). Even better if you can mention where this flow hooks into the business process and which groups it touches, so the next person can go to them with questions. Have a Jira ticket or story ID? Stick it in the description!

Ensure consistent naming across elements and variables

Stick to naming conventions when creating variables and elements in Flow. Include in the variable description what you’re capturing. A little bit of work upfront will go a long way for future ‘you’ or somebody else who inherits the flow. There’s no right or wrong way to do this; just keep it consistent within the flow. One popular naming convention is CamelCase. Check out this nifty Wiki article from the Salesforce Exchange Discord Server for suggestions on flow naming.

Document every step

Write a short blurb in each element’s description explaining what it is and why it’s there. This ensures any member of the team can pick up the work if needed. It’s especially critical when you’re using a workaround to address a Flow limitation, performing a more advanced function, or calling an Apex invocable.

2. Harness the power of invoked actions

Clean up inefficient flows with invoked actions — don’t be scared of using some reusable code to make nice, clean, presentable flows. In the old days, you could invoke Apex from Flow, but the Apex typically had to be written for a specific object or data type. Those days are gone: Flow now supports generic (SObject) inputs and outputs. Generic code used in invocable actions will amplify your Flow capabilities across the board — use one action in as many flows as you want!

How invoked actions can clean up an inefficient flow.

Flow is fantastic but has its limitations, especially around queries, large data volumes, and working with collections. Find yourself creating loops upon loops and then more nested loops, or hitting element execution limits? It’s time for reusable Apex to do some heavy lifting for you.
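To make that idea concrete, here’s a minimal sketch of a reusable invocable action; the class name, labels, and field names are illustrative, not from an official library. Because it accepts a generic record collection plus a field API name, any flow working with any object can call it:

```apex
public with sharing class FlowCollectionHelpers {

    // One generic action: return the distinct values of a given field
    // from any record collection a flow passes in.
    @InvocableMethod(label='Get Unique Field Values'
                     description='Returns the distinct values of a field from a record collection.')
    public static List<Result> getUniqueValues(List<Request> requests) {
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            Set<String> unique = new Set<String>();
            for (SObject rec : req.records) {
                Object value = rec.get(req.fieldApiName);
                if (value != null) {
                    unique.add(String.valueOf(value));
                }
            }
            Result res = new Result();
            res.values = new List<String>(unique);
            results.add(res);
        }
        return results;
    }

    public class Request {
        @InvocableVariable(required=true)
        public List<SObject> records;       // generic: works for any object
        @InvocableVariable(required=true)
        public String fieldApiName;         // e.g., 'Email' or 'StageName'
    }

    public class Result {
        @InvocableVariable
        public List<String> values;
    }
}
```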

There’s a great repository out there for Flow-invoked actions called the Automation Component Library — check it out!

3. Utilize subflows for cleaner, reusable, scalable ways to manage flows

Take this flow as an example of when you should start asking yourself, “Should I use a subflow?”

An inefficient flow that could use a subflow.

Here are some classic use cases for when you should consider a subflow.

  • Re-use: If you’re doing the same thing in your flow multiple times, or doing the same thing you did with another flow, call a subflow to do it for you. The development world calls these ‘Helper’ classes.
  • Complex processes/subprocesses: If your flow involves multiple processes and branching logic, make use of a main flow that launches other secondary flows. For example:
    • ‘Manage Contact Data’ — Main screen flow that launches various disconnected processes:
      • Associate Contact to Companies
      • Check Contact Data
      • Manage Contact’s Associations
  • Organizational chaos: If your flow looks like the one above, you probably need a subflow (or many) just to understand how everything connects and to make sense of the bigger process.
  • Permission handling: Let’s say you have a screen flow running in user context but you need to grab some system object that the user doesn’t have access to. Use a subflow! By using a subflow with elevated permissions, you’re able to temporarily grant that user what they need to continue the flow.

Benefits of subflows:

  • Make changes once instead of in 10 different places.
  • Take advantage of clean, concise, and more organized flows.
  • Maintain a single place for org-wide behaviors, like sending a consistent error email or showing the same error screen across flows (when passing in the $Flow.FaultMessage variable).

Considerations with subflows:

  • Requires more collaboration and testing between groups if changes need to be made to the subflow.
  • Debugging can be tricky with subflows. When you start using Lightning components and Apex actions, you don’t always get detailed errors if the error occurred in a subflow.
  • There is some UX overhead associated with going overboard with subflows — don’t go making too many.

4. Don’t overload flows with hard-coded logic

Logic abstraction

A great way to slow down your development process and reduce your team’s agility is to hard code all of your logic inside flows. When possible, store your logic in one place so that Apex, validation rules, and other flows can benefit from it as well. Consider using Custom Metadata, Custom Settings, or Custom Labels in your flows when you need to:

  • Consolidate application or organization data that is referenced in multiple places.
  • Manage mappings (for example, mapping a state to a tax rate or mapping a record type to a queue).
  • Manage information that’s subject to change or will change frequently.
  • Store frequently used text such as task descriptions, Chatter descriptions, or notification content.
  • Store environment variables (URLs or values specific to each Salesforce environment).

You can use things like Custom Labels if you want to store simple values like ‘X Days’, record owner IDs, or values you expect might change in the future.

To give you an idea of how much cleaner Custom Metadata-driven flows are, take a look at the before and after of this after-save Case flow that maps record types to queues. The solution uses Custom Metadata records that store a Queue DeveloperName, a RecordType DeveloperName, and the Department value to set on the Case, so the routing happens dynamically.

Before Custom Metadata-driven logic

What a flow looks like before Custom Metadata-driven logic

After Custom Metadata-driven logic

What a flow looks like after Custom Metadata-driven logic
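A nice side effect: a mapping stored in Custom Metadata is just as reachable from code as it is from a flow’s Get Records element, so the logic truly lives in one place. Here’s a hedged sketch assuming a hypothetical Case_Routing_Rule__mdt type with Record_Type_Developer_Name__c, Queue_Developer_Name__c, and Department__c fields:

```apex
public with sharing class CaseRoutingRules {
    // Returns the routing rule for a given Case record type, or null if none exists.
    // The metadata type and field names are hypothetical, for illustration only.
    public static Case_Routing_Rule__mdt findRule(String recordTypeDeveloperName) {
        for (Case_Routing_Rule__mdt rule : Case_Routing_Rule__mdt.getAll().values()) {
            if (rule.Record_Type_Developer_Name__c == recordTypeDeveloperName) {
                return rule;
            }
        }
        return null;
    }
}
```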

I would also strongly recommend you read Jennifer Lee’s post on avoiding hard coding on the Salesforce Admin Blog.

Data-driven screen flows

If you find your team is building 20 to 30 (or more) screens with constantly changing logic and content, then you may need a dynamic screen design pattern based on record data.

Check out this great article on UnofficialSF written by the VP of Product at Salesforce, Alex Edelstein, on how you can build a 500-screen equivalent flow using only custom object records (DiagnosticNode/DiagnosticOutcome) and standard Flow functionality.

5. Avoid these common ‘builder’ mistakes

Not checking for gaps in your logic

Flow is essentially declarative coding, which means the guard rails are off! You need to account for every scenario when building your flow. This means planning for cases where what you’re looking for might not exist!

Always have a Decision element after a Lookup/Get Records element to check for no results if you plan on using the results later in your flow. Directly after the Lookup, add an ‘Is null EQUALS False’ decision check for the variable created in the Get Records element. If you’re a coder, imagine this is your ‘null’ check.

Why do we want to do this? Imagine your entire flow is based on a single assumption — a record you’re looking for actually exists in your org. If something happens and we don’t actually find that record early in the flow, your flow will attempt all kinds of operations and you may end up with unintended consequences like errors, bad data, or a poor user experience.

Empty Collections: Some invocable actions or screen components will output an ‘empty’ collection, which is not considered ‘null’. A classic example is the out-of-the-box ‘File Upload’ component, which returns an empty text collection if nothing is uploaded. If you encounter this, the easiest approach is to assign the collection’s item count to a number variable with an Assignment element (the Equals Count operator) and base your Decision on that number.
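If you’re more comfortable reading code, here’s a rough Apex analogy of those two checks. The class and parameters are purely illustrative stand-ins for a single-record Get Records result and a screen component’s collection output:

```apex
public with sharing class NullAndEmptyChecks {
    // acct stands in for a single-record Get Records result; uploadedIds stands in
    // for a screen component output such as File Upload's content document IDs.
    public static void handle(Account acct, List<String> uploadedIds) {
        if (acct == null) {
            // 'No results' path: the record you assumed exists isn't there,
            // so branch, show a message, or create it instead of proceeding.
            return;
        }
        if (uploadedIds == null || uploadedIds.isEmpty()) {
            // An empty collection is not null, so check its size (in Flow,
            // assign the count to a number variable) before looping over it.
            return;
        }
        // Safe to reference acct and uploadedIds from here on.
    }
}
```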

Hard coding IDs

Flow does not yet let you reference the Developer Name of things like Record Types, Queues, or Public Groups in certain parts of the UI, but that doesn’t mean you should hard code the ID.

Building a record-triggered flow? Using a formula for your entry conditions will let you reference a Record Type’s DeveloperName in your triggering conditions.

Set Record Type Name with a formula.

In scenarios where you aren’t able to directly reference a DeveloperName, use the results of a Get Records element, a Custom Label, or Custom Metadata. This will save you headaches when deploying through your environments, since Record Type IDs and other unique identifiers might differ between environments.

As an example, create a Get Records lookup on the RecordType object. In your conditions, filter on the DeveloperName and the object (SobjectType) field, then store the returned record ID (the Record Type ID) for later use in your flow.

Need to reference a queue or a public group? Do a Get Records using the DeveloperName of the queue on the Group object instead of hard coding the ID.
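For reference, those two lookups are roughly equivalent to the queries below. The developer names are hypothetical examples, not values from this post:

```apex
// Get Records on RecordType: resolve the ID from a DeveloperName plus the object name.
RecordType supportRecordType = [
    SELECT Id
    FROM RecordType
    WHERE DeveloperName = 'Support_Case'   // hypothetical developer name
      AND SobjectType = 'Case'
    LIMIT 1
];

// Get Records on Group: resolve a queue (or public group) by DeveloperName.
Group supportQueue = [
    SELECT Id
    FROM Group
    WHERE DeveloperName = 'Support_Queue'  // hypothetical developer name
      AND Type = 'Queue'
    LIMIT 1
];
```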

Learn to get comfortable with the rich documentation Salesforce provides around setup objects like Group, RecordType, and ContentDocumentLink. Understanding the Salesforce data model will make you an infinitely more powerful administrator and Flow designer.

Being careless when looping

There are three main concerns when looping: element execution limits, DML and SOQL limits inside loops, and complex formula variables.

[UPDATED GUIDANCE, February 2023] 

[Note: The element iteration limit was removed in the Spring ’23 release, but this requires flows to run on API version 57.0 or greater. Although we removed the element limit, you still need to be aware of general governor limits like CPU timeouts and SOQL limits.]

1. Beware of the ‘executed element’ limit — Every time Flow hits an element, this counts as an element execution. As of Spring ’21, 2,000 element executions are allowed per flow interview. Consider a scenario where you are looping over more than 1,500 contacts.

Within your loop, you have your Loop element (1), a Decision element (2), an Assignment step to set some variables in your looped record (3), and a step in which you add that record to a collection to update, create, or delete later in the flow (4). Each of those four elements within the loop will count toward your execution limit. This means your loop over 1,500 records will have 6,000 executed elements, which will far exceed the iteration limit.

When approaching this limit, you’ll likely need to either get creative with forcing a ‘stop’ in the transaction or make your flow more efficient by using invoked actions. Keep in mind that the transaction ends when a Pause element is hit or, in screen flows, when a screen or local action is shown.

2. Do not put data elements (that is, Get Records, Create, Update, or Delete) inside of a loop unless you’re 100% sure the scope will not trigger any governor limits. Usually, this is only the case in screen flows where you’re looping over a small subset of records. Instead, use invoked Apex if you need to do complicated logic over a collection.

In Winter ’23, we introduced the new ‘In’ and ‘Not In’ operators so that you can build more performant, scalable flows to avoid those queries within loops that lead to governor limits.
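Whether you stay declarative (Assignment elements inside the loop, a single data element after it) or hand the heavy lifting to invoked Apex, the underlying pattern is the same: collect changes while looping, then commit them once. Here’s a minimal Apex sketch of that pattern, with an illustrative default value:

```apex
// Collect changes while looping, then perform one DML call after the loop.
List<Contact> contactsToUpdate = new List<Contact>();
for (Contact c : [SELECT Id, MailingCountry FROM Contact WHERE MailingCountry = null LIMIT 200]) {
    c.MailingCountry = 'United States'; // hypothetical default value
    contactsToUpdate.add(c);            // no DML inside the loop
}
update contactsToUpdate;                // a single Update for the whole collection
```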

3. Be careful with complex formula variables. Per the Record-Triggered Automation Decision Guide, “Flow’s formula engine sporadically exhibits poor performance when resolving extremely complex formulas. This issue is exacerbated in batch use cases because formulas are currently both compiled and resolved serially during runtime. We’re actively assessing batch-friendly formula compilation options, but formula resolution will always be serial. We have not yet identified the root cause of the poor formula resolution performance.”

Not creating fault paths

Before you build your flow, think about what should happen if an error occurs. Who should be notified? Should it generate a log record?

One of the most common mistakes an up-and-coming Flow designer makes is not building in fault paths. A fault path defines what your flow should do when it encounters an error — think of it as exception handling. The two most common uses are showing an error screen in screen-based flows or sending an email alert containing the error message to a group of people.

I typically like passing off errors to subflows that handle errors for me; that way, if I ever need to change an aspect of the error path, I only need to do it in one place for all of my flows.

For enterprise-grade implementations, consider using a comprehensive logging strategy in which flows log errors in the same place as your Apex code. You can use an open source tool like Nebula Logger to write to a custom Log object when your flow fails, or just have an elevated-permission subflow create log records for you if you don’t need anything fancy.
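If you don’t need anything fancy, that log writer can be a small invocable action your fault paths (or a shared error-handling subflow) call with $Flow.FaultMessage. This sketch assumes a hypothetical Flow_Error_Log__c custom object and fields:

```apex
public with sharing class FlowErrorLogger {
    // Called from a fault path or error-handling subflow with the flow's name
    // and $Flow.FaultMessage; writes one log record per request.
    @InvocableMethod(label='Log Flow Error')
    public static void log(List<Request> requests) {
        List<Flow_Error_Log__c> logs = new List<Flow_Error_Log__c>();
        for (Request req : requests) {
            logs.add(new Flow_Error_Log__c(
                Flow_Name__c = req.flowName,          // hypothetical object and fields
                Fault_Message__c = req.faultMessage
            ));
        }
        insert logs;
    }

    public class Request {
        @InvocableVariable(required=true)
        public String flowName;
        @InvocableVariable(required=true)
        public String faultMessage;
    }
}
```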

6. Screen flows: Pay attention to the flow context

System context risks

Always pay attention to the context your flow runs under when building a screen flow. To protect your data, be extremely careful when using system context in screen flows run by external users from Experience Cloud sites, especially guest users. Without proper care, you could unintentionally leak data.

Below is some general guidance for screen flows run on Experience Cloud sites:

  • Instead of applying system context to the entire flow, create system context-enabled subflows between screens to retrieve the data or perform the operations you need.
    • In these elevated subflows, avoid using the ‘Store all fields’ setting in any Get Records elements that feed data into screen components, as it could lead to data leakage within the browser’s developer tools.
  • Never leave system mode enabled while an Experience Cloud (external) user is on a screen.
  • When updating data in an Update element using outputs from screen components, ensure you update specific fields and not entire record collections from screen components, as they can be manipulated. For example, if you have a Data Table component that can edit records and it outputs those edited records to a collection, do not use that output directly as your data source in your Update element.

Consider the Target User

If you’re making a screen flow run under user context, remember that your user may not have access to the objects in your flow. You don’t want users to have a rough experience in a flow that wasn’t meant for them. Need to control access to the whole flow? Use granular flow permissions. Need to control access to specific pieces of a single flow? Reference a custom permission assigned to the running user using the $Permission variable in a Decision element or in conditional field visibility criteria.

Lastly, not everything respects system context! Lightning components like Lookup and File Upload, and record fields from Dynamic Forms for Flow, do not respect system context. Some actions, such as ‘Post to Chatter’, will also need the running user to have access to the related record even though Flow is running in system mode.

Learn more about flow context here.

7. Evaluate your triggered automation strategy

Every object should have an automation strategy based on the needs of the business and the Salesforce team supporting it. In general, you should choose one automation tool per object. One of the inevitable realities of older orgs is Apex triggers mixed in with autolaunched flows and processes or, more recently, Process Builder processes mixed in with record-triggered flows. This can lead to a variety of issues, including:

  1. Poor performance
  2. Unexpected results due to the inability to control the order of operations in the ‘stack’
  3. Increase in technical debt with admins/devs not collaborating
  4. Documentation debt

One common approach is to separate DML activity and offload it to Apex, and let declarative tools handle non-DML activities like email alerts and in-app alerts — just be careful that none of your Apex conflicts with your flows. We’re still very much in the ‘wild wild west’ phase of record-triggered flows as architects and admins figure out the best way to incorporate them into their systems.

I’ve seen some more adventurous orgs use trigger handlers to trigger autolaunched flows (see Mitch Spano’s framework as an example). This is a fantastic way to get both admins and developers to collaborate.
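For context, handing off from a trigger handler to an autolaunched flow comes down to a few lines of Apex. In this sketch, the flow API name and its ‘records’ input variable are hypothetical:

```apex
public with sharing class AccountTriggerHandler {
    // The handler passes the records it received to an autolaunched flow,
    // so admins own the flow logic while developers own the trigger framework.
    public static void afterInsert(List<Account> newRecords) {
        Map<String, Object> inputs = new Map<String, Object>{ 'records' => newRecords };
        Flow.Interview handlerFlow = Flow.Interview.createInterview('Account_After_Insert_Handler', inputs);
        handlerFlow.start();
    }
}
```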

Here’s a great article on the pitfalls of mixing various automations by Mehdi Maujood.

In general, you should be moving away from Process Builder and especially Workflow Rules, as both will be retired. Remember that as of Winter ’23 you can no longer create new workflow rules.

Structure the number of flows on an object based on your business needs

Gone is the ‘One Process Builder Per Object’ guidance from the Process Builder days; however, that doesn’t mean you should be creating hundreds of flows on an object either. Keep the number of record-triggered flows to a reasonable level with your business needs in mind. While there isn’t really a golden number, a good rule of thumb is to separate your flows by business function or role so that there’s little to no chance of a conflict. You may also want to factor in the number of admins or developers who have to maintain your flows. Big, monolithic flows are historically difficult to maintain across many people, so you may find it easier to build multiple smaller, more maintainable flows with fine-grained entry conditions, ordered by Flow Trigger Explorer.

Refer to the wonderful Automate This! session in the related links section at the bottom of this blog post where we dive into a variety of design patterns for record-triggered flows.

Use entry criteria

Be specific with your entry criteria — you don’t want to run automation on record changes that your flows won’t use! The Flow team has vastly reduced the computational cost of flows that don’t meet their entry criteria, which was a major challenge for Process Builder. This, along with better control over the order of execution, removes one of the remaining barriers to having more automation on an object.

Speaking of benchmarks and performance, always use Before-Save flows whenever you update the same record that triggered the automation. As per the Architect’s Guide to Record-Triggered Automation, Before-Save flows are SIGNIFICANTLY faster than After-Save and are nearly as performant as Apex.

8. Build a bypass in your flows for data loads and sandbox seeding

This isn’t a Flow-specific best practice, but it’s a good idea to include a bypass in your triggers and declarative automation. With such a bypass strategy, you can disable any and all automations for an object (or globally) if you ever need to bulk import data. This is a common way to avoid governor limits when seeding a new sandbox or loading large amounts of data into your org.

There are many ways of doing this — just be consistent across your automations and ensure your bypasses are all in one place (like a Custom Metadata type or custom permission).
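One lightweight pattern is to key the bypass off a custom permission so Apex and Flow check the same switch. The permission name below is hypothetical; a flow can reference the same permission through the $Permission global variable in its entry conditions or Decision elements:

```apex
public with sharing class AutomationBypass {
    // Returns true when the running user has the (hypothetical) bypass custom permission.
    // Apex triggers call this method; flows check $Permission.Bypass_Automation instead.
    public static Boolean isActive() {
        return FeatureManagement.checkPermission('Bypass_Automation');
    }
}
```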

9. Understand how scheduled flows affect governor limits

Selecting your record scope at the start of setting up a scheduled flow will have huge governor limit ramifications for said flow. When we say ‘setting up the scope,’ we refer to this screen where we select the sObject and filter conditions:

Option on screen to select the Object and Filter conditions.

When specifying the record scope in the Flow Start (above)

One flow interview is created for each record retrieved by the scheduled flow’s query.

The maximum number of scheduled flow interviews per 24 hours is 250,000, or the number of user licenses in your org multiplied by 200, whichever is greater.

This means you cannot act on more than 250,000 records (or whatever your license-based limit is) per 24 hours using the above method. If you need to work with more, we recommend going the route of an invocable action and not specifying your scope here. You may need to look into Batch Apex and asynchronous processing — ask a developer for help in these scenarios.
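If you do land in Batch Apex territory, the skeleton looks roughly like this; the query and field are hypothetical, and a developer can tune the batch size when executing it:

```apex
public with sharing class StaleContactCleanupBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can feed up to 50 million records into the batch framework.
        return Database.getQueryLocator('SELECT Id FROM Contact WHERE Is_Stale__c = true'); // hypothetical field
    }
    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Each chunk (up to 200 records by default) runs in its own transaction
        // with a fresh set of governor limits.
    }
    public void finish(Database.BatchableContext bc) {
        // Send a summary notification or chain follow-up work here.
    }
}
// Run it ad hoc or from a schedulable class:
// Database.executeBatch(new StaleContactCleanupBatch(), 200);
```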

When specifying the scope within the flow (Invoked Action, Get Records)

Limits will be more aligned with what you’re used to for a single flow, meaning one interview will be created for the flow instead of one per record. If you’re going this route, do not also specify a record scope for the same set of records in the Start element (see the screenshot above)! If you do, the flow will effectively process N² records, hitting limits quickly.

Go this route when you need to have more control over limits and you want to invoke Apex that might involve SOQL or complex processing.

In this scenario, if you have an initial Get Records that returns 800 records, Flow will not try to batch those records. Keep in mind the running user is still the Automated Process user, which comes with certain quirks, like not being able to view all CollaborationGroups (Chatter groups) or Libraries.

Double dipping: Again, DO NOT select a record scope and perform a Get Records step for the same set of records; you’ll effectively square the amount of work your flow has to do (N × N records) and will hit limits quickly.

Which path do I choose?

There’s no right or wrong answer as to which path to choose. You have fewer ways of controlling limits with the first route — you cannot specify your batch size or cap the total records in the scope. So, if you need stricter control around limits, it might be best to create an invocable action and specify your scope that way. Or, skip the scheduled flow entirely and use a scheduled Apex job instead.

There are also some use cases involving record dates where you could configure a scheduled path on a record-triggered flow instead. Scheduled paths have more flexible governor limits and also allow for configurable batch sizes, so that may offer a middle ground between Apex and schedule-triggered flows. 

For more on scheduled flow considerations, check out the official documentation: Schedule-Triggered Flow Considerations.

10. Consult the checklist!

You’re ready to go build great flows! Here’s a handy checklist that applies to every flow.

1. Documented elements and descriptions: Make sure your flow has a solid description, a decent naming convention for variables, and descriptions for Flow elements that might not make sense if you revisit them in 6 months.

2. Null/Empty checks: Don’t forget to check for Null or Empty results in Decision elements before acting on a set of records. Don’t assume a happy path — think of all possible scenarios!

3. Hard-coded IDs: Don’t hard code IDs for things like Owner IDs, Queue IDs, Groups, or Record Type IDs. Do a Get Records for the respective object using the DeveloperName. If you’re creating criteria in an entry condition, you can reference DeveloperName (API Name) fields with a formula.

4. Hard-coded logic: Don’t hard code logic that could change frequently, such as Queue IDs, Owner IDs, or numbers (like a discount percentage), into your Decision elements. Utilize Custom Labels and Custom Metadata!

5. Excessive nested loops & ‘hacks’: Are you stretching Flow’s performance to its limit when code could be better suited? Use generic Apex invocables that your developers build, or utilize components from the Automation Component library or other open source contributions.

6. Looping over large data volumes: Don’t loop over large collections of records that could hit the Flow element execution limit (2,000 for flows on API versions below 57.0) or Apex CPU limits.

7. Check record and field security & flow context: Don’t assume your user can see and do everything you designed if you’re building a screen flow. Test your flows as both your intended and unintended audiences.

8. Data manipulation in a loop: Don’t put Create/Update/Delete elements inside of a loop in an autolaunched flow or record-triggered flow. Use the new In/Not In operators where you can!

9. Flow errors: What do you want to happen when the flow hits an error? Who should be notified? Use those fault paths!

10. Automation bypass: Does your autolaunched or record-triggered flow have a Custom Setting/Custom Metadata-based bypass in place?

Get building!

If you’ve made it this far, congratulations! You’re well on your way to being a pro Flow builder, and you aren’t alone. Join the Salesforce Automation Trailblazer Community to connect with other Salesforce Admins all over the world. The community is a great place to learn more about the flows other admins are building, hear the latest updates from product managers, and ask questions about Flow. I hope you found this guide helpful, and I can’t wait to see all of the flows you build!

Resources

  • Use Permission Sets To Overcome Common Access Dilemmas
  • Advance Your Admin Career With Dev Fundamentals