Salesforce Data Architecture

This diagram provides a high-level overview of Salesforce Data Architecture concepts and platform capabilities. This is the first in a five-part Salesforce Architecture reference series covering core application and system architecture topics. I hope this series provides a useful reference for Salesforce Architects or those on the journey toward this role.

 

Salesforce Data Architecture (PDF)

Salesforce Marketing Cloud Developer

The document linked below provides a high-level overview of the Marketing Cloud data modelling and management tools, programming languages, and APIs. For architects and developers taking the Salesforce Marketing Cloud Developer certification, the document provides a useful technical reference.

 

Salesforce Marketing Cloud Developer

Salesforce Summer ’20 Architect Highlights

I first published a release highlights post on this blog over 8 years ago; since that time the series has been renamed a few times but has retained a focus on summarising the key technical aspects delivered with each release. I’ve always found preparing the content for the post a useful way to organise my own release readiness, and feedback has indicated that amongst the plethora of release-related blog posts available today there remains space for one with a technical slant. I hope so.

This post marks something of a relaunch to the series, this time with a strong focus on concerns significant to the Salesforce architect community (i.e. technical, solution or integration architects).

Release Timeline

Given the current global situation with COVID-19, the release timeline for Summer ’20 is approximately 1 month later than normal for a summer release. Each Salesforce release represents potential technical disruption for customers, and managing this in parallel with the ongoing impact of the pandemic is a situation best avoided. The Summer ’20 sandbox preview is now scheduled for May 20-30th and the Summer ’20 production release is scheduled for June 12th (1st release window), July 10th (2nd) and July 17-18th (3rd and final). As ever the trust site provides the full detail of the release timeline.

In advance of the sandbox preview window Summer ’20 pre-release orgs can be requested via the pre-release sign-up page. During the sandbox preview window Summer ’20 preview Scratch Orgs can be created by adding the release option (“release” : “Preview”) to the scratch org definition file.
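
For reference, a minimal scratch org definition file with the release option added might look as follows (illustrative values only; the option name and value casing should be confirmed against the current Salesforce DX documentation).

{
    "orgName": "Summer 20 Preview",
    "edition": "Developer",
    "release": "Preview"
}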

The Salesforce Summer ’20 release notes are available from today at https://releasenotes.docs.salesforce.com/en-us/summer20/release-notes

 

Architect Highlights (in no particular order)

Application Architecture

Record-Changed Flows

Previously, Before-Save Flows could be implemented to efficiently update record field values during the before-commit phase of the record save transaction; historically this has been a strong ApexTrigger use case. For many implementations a significant improvement in record-save performance could be achieved by consolidating Processes and before-event ApexTrigger logic into a single Flow. Refactoring Process Builder Processes that only populate field values on the same record could achieve a remarkable reduction in save time, which in turn delivers a more responsive end-user experience and a potential reduction in CPU timeout errors. With Summer ’20, Before-Save Flows become Record-Changed Flows which support before-save and after-save events, equating to before and after ApexTrigger events. In the former case the Flow is restricted to field value updates (via $Record); the latter enables a wider range of Flow actions to be utilised.

This new capability enables Workflows and Processes to be reimplemented following a single Flow pattern which should achieve the best performance outcome.
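
For comparison, the sketch below shows the kind of before-save ApexTrigger logic that a before-save Record-Changed Flow can now replace declaratively; Region__c is a hypothetical custom field used purely for illustration.

trigger CaseBeforeSave on Case (before insert, before update) {
    for (Case record : Trigger.new) {
        // Field values assigned in the before-save phase are persisted with the
        // record itself, so no explicit DML (and no extra save cycle) is required.
        if (record.Region__c == null) {
            record.Region__c = 'EMEA';
        }
    }
}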

Platform Event Flows

Process automation initiated via Platform Event has been possible with Process Builder up to now; with Summer ’20 this is extended to Flow Builder. Platform Event Flows represent an obvious next step in the emergence of Flow Builder as the hub for all things process automation.

With this model it is possible to build out complex business logic (declaratively) in Flow Builder which fires in response to on-platform or off-platform asynchronous events; a powerful combination, particularly in the data integration context.
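
As a minimal sketch of the publishing side (assuming a hypothetical Order_Event__e Platform Event with two custom fields), Apex code can raise the event that a subscribed Platform Event Flow would then process asynchronously.

// Publish a Platform Event; a Platform Event Flow subscribed to Order_Event__e
// runs asynchronously once the event is delivered by the event bus.
Order_Event__e orderEvent = new Order_Event__e(
    Order_Number__c = 'ORD-0001',
    Status__c = 'Shipped'
);
Database.SaveResult result = EventBus.publish(orderEvent);
if (!result.isSuccess()) {
    for (Database.Error error : result.getErrors()) {
        System.debug('Publish failed: ' + error.getMessage());
    }
}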

Flow System Context

Flows which are specified to run in System Context (that bypass the permissions of the running user) can now run outside of the Sharing Model (effectively Without Sharing in Apex terms). Previously System Context was limited to removing Object and Field Access permission checks with the Sharing Model record-level access checks enforced by the Flow execution.

Apex-Defined Type Flow Variables

An Apex-Defined Type is simply an Apex object, typically representing a deserialised JSON response in a data integration scenario. Apex-Defined Type variables could historically be utilised within Flows but could not be passed into the Flow from the calling context. With Summer ’20 it is now possible to hand off callout response processing to a Flow, as one example use case. Whilst a seemingly minor change, this opens up Flow as a viable option for defining business logic for key use cases that would otherwise have required Apex code.
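
As a minimal sketch, an Apex-Defined Type is a plain Apex class; member variables exposed to Flow require the @AuraEnabled annotation (the class and field names below are illustrative only).

public class PaymentResponse {
    // Fields deserialised from a callout response and surfaced to the Flow.
    @AuraEnabled
    public String status;
    @AuraEnabled
    public Decimal amount;
    @AuraEnabled
    public String transactionReference;
}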

Dynamic Forms (Non GA Preview)

Undoubtedly one of the most eagerly anticipated features for some time, Dynamic Forms makes its preview debut in the Summer ’20 release. The Dynamic Forms feature is billed as an upgrade to Page Layouts where UI composition (sections, fields and visibility logic) occurs directly within Lightning App Builder. Preview support is limited to custom objects and excludes record pages that use pinned-region or custom page templates.

Lightning Message Service

The Lightning Message Service provides a standardised mechanism for communication across components on a page or across pages. Supported component types include Visualforce Pages, Aura Components and LWC. Communication is facilitated by Lightning Message Channel subscription and publication.

Development Lifecycle

Source Tracking in Sandboxes (Beta)

Source tracking is now supported for the developer sandbox types (Developer and Developer Pro); previously source tracking was restricted to Scratch Orgs. Source tracking enables efficient synchronisation of changed metadata between the local environment and the Salesforce org. Where existing development or release management processes (i.e. build automation and CI) are yet to be transitioned to Scratch Orgs, this capability offers increased efficiency and accuracy as tracking of the changed metadata state is managed by the platform.

Org Dependent Unlocked Packages

Most Salesforce implementations suffer from a legacy accumulation of metadata components built up over a period of time by multiple projects, teams and partners. In such cases the benefits of implementing a structured Unlocked Package approach can appear to be unrealistic due to the degree of interdependency across the environment. With Org Dependent Unlocked Packages, metadata validation (i.e. dependency checking) can be deferred to the time of package installation, rather than during package version upload. Org Dependent Unlocked Packages can be created with the orgdependent Salesforce CLI parameter.
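
Indicatively, the Salesforce CLI commands below sketch how an org-dependent Unlocked Package and package version might be created (the package name and path are examples only; parameters should be confirmed against the Salesforce CLI Command Reference).

$ sfdx force:package:create --name "Core CRM" --packagetype Unlocked --path force-app --orgdependent
$ sfdx force:package:version:create --package "Core CRM" --installationkeybypass --wait 10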

Security Architecture

Initiate Two Factor Authentication (2FA) with Apex

Two new methods have been added to the System.UserManagement class to enable Apex initiation of 2FA verification (e.g. Salesforce Authenticator). One method initiates a 2FA verification service (initVerificationMethod); the second completes the verification (verifyVerificationMethod).

Highly Trusted Users

The new “Skip Identity Confirmation at Login” permission allows users to log in without second factor identity verification. Clearly one to use with a high degree of caution but useful for certain use cases where 2FA is impossible or impractical.

Minimum Access User Profile

The new “Minimum Access – Salesforce” User Profile provides a base profile for the Salesforce user license type defined following the principle of least privilege, a well-established security best practice. This profile can be cloned and extended as required or used in concert with Permission Sets. This support for a secure-by-default model feels long overdue.

Integration Architecture

Async Platform Event Publish Operations (Pilot)

The initial success of a Platform Event publication can be straightforward to determine (e.g. access to the SaveResult in Apex code) but the eventual state of the event is not accessible. To address this gap a new standard Platform Event (PlatformEventStatus) has been added; subscribers (e.g. ApexTrigger) are pushed status updates for High Volume Platform Events that have the “Track Publish Status” setting enabled.

Functional Architecture

Service Cloud: Omni-Channel Status-Based Capacity Model

Previously Omni-Channel tracked Agent capacity by the number of open tabs within the Agent Console. For short-lived Work Items (e.g. Case or Lead) that are resolved within a single user session the tab-based capacity model is often appropriate and can work well. However, this is not the case where Work Items are longer-lived and have a lifecycle that spans multiple user sessions or days: this leaves Omni-Channel a poor fit for many service operations. To address this Omni-Channel can now be configured to track Agent capacity based on the status of the Work Items that the Agent owns. The new Omni-Channel setting “Enable Status-Based Capacity Model” is used to switch between the default tab-based capacity model and the new status-based model. This could be a game changer for Omni-Channel given the importance of capacity management to service operations and the limitations of the tab-based model.

ISV

Delete Lightning Components from Managed Packages

A long-awaited capability for ISVs to delete obsolete Lightning Components from released Managed Packages. From personal experience I know how easy it has been over the last few years to build up a legacy of old components in the package following refactoring and the transition from Aura to LWC. Note, component deletion must be enabled in the Packaging Org (via Salesforce Support Case).

First Generation Managed Package Conversion (Developer Preview)

The Summer ’20 release includes a developer preview for the conversion of first-generation managed packages to second-generation managed packages. The preview is limited to Scratch Orgs and is intended to communicate how this long-awaited capability will function in due course. Personally, I’m looking forward to bringing the benefits of second-generation packaging to previously released managed packages.

Scrum at Scale

This post provides a high-level overview of the challenges to be considered when applying the Scrum delivery framework to larger scale projects. Scale in this context relates to the size of the product being delivered; one product only. Scale here does not relate to multi-product deliveries, distributed team considerations, technical complexity or user concurrency. The term product could relate to a software product, a service or indeed any tangible outcome. The Scrum framework is used worldwide in a wide variety of contexts including fighter aircraft development, farming and education in the classroom.

In essence the Scrum agile framework is founded on the idea that self-empowered, cross-functional teams delivering working product increments in business priority order produce a better outcome in a quicker timeframe than would otherwise be achievable. There’s considerably more to Scrum than this statement affords, however for brevity this context sufficiently sets the scene.

In practice many implementations dilute the value of the Scrum framework by introducing elements of traditional Waterfall-style linear process; such hybrid projects are often a disaster as aligning two process models takes discipline and care, and in most hybrid cases the very opposite is true. As such the real value of Scrum is achieved through adherence to the framework in its entirety; subjective preference and selective pick-and-mix type implementations are fraught with risk and equally inefficient. Scrum has a key advantage over many frameworks in this respect: the principles and practices are clear, short and intuitive, therefore the learning curve is short and initial adoption straightforward.

Before moving to discuss the main topic of this post i.e. Scaling Scrum, it is worth considering the basic mechanics of the Scrum flow.

1. A Product Owner manages a product backlog of user stories prioritised by business value; the higher priority the more detailed the story may be. Lower priority stories may be very high-level and are referred to as epics.
2. A ScrumMaster supports an empowered team to deliver a set of stories within a set timeframe, i.e. sprint cycle. The definition of delivered (or done) is agreed up front but typically equates to production quality working software that satisfies user defined acceptance criteria. The ScrumMaster ensures the team is focused on a defined sprint goal and resolves obstacles.
3. During a sprint the team synchronises on a daily basis during a stand-up meeting; team members make commitments not to the ScrumMaster but to their peers.
4. After each sprint cycle a retrospective meeting is held to inspect and adapt the process.
5. On a continual basis the Product Owner inspects and adapts the user stories on the backlog – adding emerging detail.
6. One sprint leads to the next on an iterative basis within the context of a release.

The main sprint artefacts are the Product Backlog (whole-product) and Sprint Backlog (agreed stories for a sprint cycle). Velocity reports are also significant in terms of understanding how accurate estimation has been and how productive the team are.

The typical Scrum team size should be in the 4-8 region; a larger team can detract from effective communication and add risk to adoption of the Scrum values of Focus, Courage, Openness, Commitment and Respect, all of which work better in a small-team setting. Additionally, the Scrum events (ceremonies) start to become less effective the bigger the audience. This is true for any two-way collaboration; effectiveness is inversely proportional to audience size. And so, if the optimal Scrum team size is less than 10 yet the framework is applicable to large, complex project delivery, how does Scrum scale?

Scaling Scrum – Challenges

Before considering approaches to Scaling Scrum it is worthwhile first to consider some of the key challenges likely to be encountered.

1. Focus. The difficulty of maintaining an effective product backlog scales with the size of the product. Scrum requires a whole-product mindset; the bigger the product the more challenging this can become.

2. Communication. Sub-dividing a larger project across multiple Scrum teams introduces communication boundaries. Face-to-face communication, a key Agile principle, can be difficult to maintain. More teams typically mean more time spent on synchronisation (i.e. communication); this overhead is unavoidable.

3. Interdependency. Product backlog items are seldom standalone concerns with no interdependencies. Interdependencies can be managed effectively within a single-team environment; in a multi-team environment interdependencies introduce process overhead.

4. Empowerment. Perhaps a more human factor than the preceding points but important nonetheless. Multiple team environments reduce the empowerment of the single team. This can impact upon the motivation and courage of the team as consideration (respect and so on) to other teams becomes a factor. Empowerment in this context is therefore limited in a manner not encountered by a single team.

5. Scrum thinking. Scaling challenges can often result in non-Scrum practices or adaptations creeping in. This can be perception related (“Is Scrum still appropriate?”) or a lack of confidence in how to approach the problem in the face of proven linear alternatives.

Scaling Scrum – Typical Approach

A typical approach to Scaling Scrum is to sub-divide the product backlog into themes (or features) and to introduce a Scrum team per feature, i.e. a Feature Team.

1. Focus. Ideally each Feature Team would have a dedicated Product Owner, perhaps operating within a Product Owner hierarchy with a Chief Product Owner at the top. The whole-product mindset requires a single product backlog; multiple product backlogs would require cross-prioritisation and as such offer no value. To ensure manageability each Product Owner will have feature-specific views on the product backlog. Feature rollups can also be added to provide Product Owners at higher levels of the hierarchy with a summarised view. Also in relation to manageability, epic user stories play a big part in keeping the size of the product backlog within the 100-150 item region.

2. Communication. A new Scrum event is introduced; the Scrum of Scrums. In short this meeting is focused on the discussion of dependencies and related problem solving. The regularity, duration and attendees for the Scrum of Scrums is not prescribed, instead need should drive all such factors. As with many aspects of Scrum, the Scrum of Scrum meeting can be hierarchical; for example groups of teams of related features may have a Scrum of Scrum meeting, selective attendees of which may attend a higher level Scrum of Scrum meeting at the product level. Release kick-off meetings and shared team members (perhaps on a cycle) are other effective means to make cross-team communication integral to the process. On very large, or very complex projects dedicated integration teams can be introduced to ensure communication and dependency challenges are de-risked.

3. Interdependency. In addition to the communication and dependency resolution activity occurring within the Scrum of Scrums meetings, look-ahead planning is often implemented. This approach requires that a rough idea of the backlog items for the subsequent 2 sprints is tentatively identified during each sprint planning meeting. Historical average velocity can help in terms of sizing. With an indicative view of the sprint backlog available ahead of time, dependencies can be identified and discussed well in advance.

Large Scale Scrum with LeSS

LeSS (Large-Scale Scrum) comprises 2 scaling frameworks that extend Scrum; basic LeSS supports up to 8 teams, while LeSS Huge supports scale into the thousands of participants on a single product delivery. In both cases LeSS provides an approach to scaling that is Scrum in both principle and practice.

LeSS
The LeSS framework is founded on the principle of whole-product focus; one overall Product Owner manages a single product backlog that is delivered by 2-8 teams, each with a ScrumMaster. In this context the scaled Product Owner is supported by multiple teams working directly with customers, business users and stakeholders. On larger scale products the Product Owner role will be focused more on big-picture prioritisation than detail clarification; the supporting teams perform the latter. LeSS Sprint cycles are aligned across all teams and, whilst each team manages their own sprint backlog, one and only one potentially shippable product increment is delivered.

The Sprint planning event in LeSS is split into 2 defined events: Sprint Planning One (2 hours), where all teams collectively determine the backlog items for the sprint, and Sprint Planning Two (2 hours), where an individual team (or multiple teams working on a common feature) define how the stories will get to done. The Sprint review event is the same in LeSS as in single-team Scrum, but an additional Overall Retrospective event is added to provide 2 cycles of process inspection and adaptation: team-level and then product.

The approach to communication and interdependency management is left at the discretion of the teams involved, however guidance is provided in respect to observer attendance at team Daily Scrum events, multi-team meetings, Scrum of Scrums meetings and the use of Open Space and Continuous Integration techniques.

LeSS is not only a collection of principles and practices but also a process framework with a concrete set of rules (updated February 2020) to guide application.

LeSS Huge
LeSS Huge provides framework support for product deliveries beyond the 8-team limit. The LeSS Huge approach starts with categorisation of the product backlog into requirement or development areas, each with a dedicated Area Product Owner who has an area-specific view onto the overall product backlog. Sub-division by requirement areas is customer-focused; sub-division by development areas focuses on the architecture of the product. In either case each individual area can be considered a scaled-up feature team which will be delivered by a (basic) LeSS implementation with up to 8 teams. With LeSS Huge, sprint cycles are aligned across all areas and a single product increment is delivered, not one per area. Sprint reviews and retrospectives are held at the area and product levels.

LeSS Huge is a significant undertaking and will have a long adoption cycle.

In conclusion, the LeSS frameworks provide a structured, empirically validated extension to Scrum to support product delivery at scale. Given the increased risk, LeSS implementations should be guided by expertise and experience with the LeSS principles, framework and rules. The Certified LeSS Practitioner certification and related course should be considered as part of any process risk mitigation strategy.

Scrum@Scale

Scrum@Scale is a framework for multiple Scrum Teams, authored by Jeff Sutherland, which is designed to be lightweight and based on Scrum principles and values.

Scrum@Scale contains two cycles which separate accountability for how work is done from what work is done; the ScrumMaster cycle and the Product Owner cycle. This clear separation reduces organisational conflict and provides the basis for optimal productivity and the coordination of combined effort toward a single goal.

The ScrumMaster cycle (the how) focuses on continuous improvement, impediment removal, cross-team coordination and deployment. The Scrum@Scale framework uses Scrum of Scrums (SoS) principles to organise multiple Scrum teams working together to deliver an integrated set of product increments at the end of every Sprint. A new role, the Scrum of Scrums Master, is accountable for the delivery and communication aspects of the combined Sprint; an impediment backlog is introduced to prioritise impediments across coordinating teams. A Scaled Daily Scrum event is attended by representatives from each team to ensure synchronisation on impediments, dependencies and improvements.

The Product Owner cycle (the what) focuses on strategic vision, backlog prioritisation, decomposition and refinement and release planning. The Scrum Product Owner role is expanded to a Product Owner Team which is responsible for communicating the strategic vision for the product, business alignment, prioritisation of a single product backlog and release planning. A new role, the Chief Product Owner coordinates priorities with the Product Owners that work with each Scrum team within the Scrum of Scrums.

Scrum@Scale introduces two new executive teams. The Executive Action Team serves as the organisation-level ScrumMaster with overall responsibility for impediment removal, process quality and continuous improvement. The Executive MetaScrum Team fulfils the organisation-level Product Owner role with overall responsibility for alignment to strategic priorities. The Executive MetaScrum Team holds a stakeholder alignment meeting every Sprint; the MetaScrum Event. This event provides an opportunity to make strategic decisions that affect the entire organisation. The Chief Product Owner presents the Product Backlog to the Executive MetaScrum Team in order to support decision making in respect to strategy, funding, resource planning and customer, market or regulatory commitments.

The ScrumMaster cycle and Product Owner cycle have two touchpoints; team level process and product/release feedback. The former simply relates to the non-scaled Scrum process where the team self-organises to improve the velocity by which product backlog items are delivered in a releasable state. The latter relates to continuous improvement in two areas; product feedback impact on the product backlog and release feedback impact on the deployment process. For both touchpoints metrics (improving velocity, quality, team happiness etc.) and transparency (absolute honesty) are key to success.

In conclusion, Scrum@Scale follows a scale-free architecture in which no artificial constraints are applied and scaling can follow an organic path based on actual requirements and timescales. Whilst simple in concept, the practical application of Scrum@Scale will require significant commitment at all levels of the organisation.

References

Scrum Alliance
Scrum Guide
The LeSS framework
Scrum@Scale

Salesforce Package Versions

In the new world of second-generation packaging (2GP), capabilities such as Unlocked Packages mean the technical aspects of packaging are no longer a concern only for ISV developers but are now equally applicable to enterprise development. The Salesforce Developer Experience (SFDX) developer guide documentation provides a great reference for those getting started with packaging and covers all the main implementation considerations. One area that isn’t covered is the management of package versions in practice; I call this a package version scheme, an example of which is included below.

Package Version Scheme:

Convention: Major.Minor.Patch-Build (e.g. 1.0.0-1)

  • Pre-release development; 0.1.0-x .. 0.2.9-x
  • First release; 1.0.0-1
  • Post-initial-release bugfixes; 1.0.1-x .. 1.0.3-x
    (Patch number must increment per package version release)
  • Post-initial-release internal builds; 1.1.0-x
    (Only the Build number must increment per package version release)
  • Next release (minor enhancement); 1.1.0-x (release with the final build number)
  • Post-minor-release bugfixes; 1.1.1-x .. 1.1.3-x
    (Patch number must increment per package version release)
  • Post-minor-release internal builds; 1.2.0-x
    (Only the Build number must increment per package version release)
  • Next release (major enhancements); 2.0.0-1
  • Post-major-release bugfixes; 2.0.1-x .. 2.0.3-x
    (Patch number must increment per package version release)
  • Post-major-release internal builds; 2.1.0-x
    (Only the Build number must increment per package version release)
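
In practice the version numbers above are typically driven from the versionNumber attribute of the package directory entry in sfdx-project.json; an illustrative fragment is shown below (package name and version values are examples only, and the NEXT keyword auto-increments the build number on each package version create).

"packageDirectories": [
    {
        "path": "force-app",
        "package": "Core CRM",
        "versionName": "Version 1.1",
        "versionNumber": "1.1.0.NEXT",
        "default": true
    }
]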

Additional references:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_manpkgs_package_versions.htm
https://developer.salesforce.com/docs/atlas.en-us.sfdx_dev.meta/sfdx_dev/sfdx_dev_intro.htm

Field Service Lightning Data Model

This post provides an annotated Object Model for the Salesforce Field Service Lightning product. The document was produced as part of my preparation for the Salesforce Field Service Lightning Consultant certification and is shared as-is with the community for information purposes only. Please note, the notes were written before I sat the exam so no question or topic hints here I’m afraid.

Salesforce Field Service Lightning Object Model (Spring ’19) – Annotated

 

Salesforce DX in Practice


Salesforce DX introduces an entirely new development paradigm for Salesforce: Source-Driven Development, Scratch Orgs, Unlocked Packages – there are many new concepts, capabilities and tools to consider when designing a modern, collaborative development process.

This post outlines a Developer Workflow that combines the Salesforce CLI and Git version control system as one basic example of how Salesforce DX can be applied in practice. The workflow includes a reference to the type of commands that may be executed at each step, however specific details in respect to parameters etc. should be referenced at the links below.

git Reference Manual
Salesforce CLI Command Reference

Assumptions:

A Source-Driven Development methodology is applied; the source of truth is git, never a Salesforce org.
Source Format is always SFDX (git and local).
Scratch orgs are intended for single developer use.

Developer Workflow:

Developer1 and Developer2 will work collaboratively on a new Feature1 for ProjectA.

git: A new branch is created for ProjectA.
git: A new branch is created for Feature1.

1. Developer1 (and 2):

1.1 create a new local branch Feature1Dev1 and check out to this branch.

$ git checkout -b Feature1Dev1

1.2 retrieve the Metadata API components from a Source org and convert to Source format. This could be by package reference or manifest.

$ sfdx force:mdapi:retrieve ..
$ sfdx force:mdapi:convert ..

1.3 add source format files to the git branch (not the metadata source).

$ git add .

1.4 create a Scratch org (optionally with an expiry date; max is 30 days).

$ sfdx force:org:create ..

1.5 push source format to the Scratch org.

$ sfdx force:source:push ..

1.6 transport data to the Scratch org.

$ sfdx force:data:soql:query ..
$ sfdx force:data:tree:export ..
$ sfdx force:data:tree:import ..

1.7 run configuration scripts e.g. PostInstallScript (or equivalent Apex script) in the Scratch org to configure settings.

$ sfdx force:apex:execute ..

1.8 open the Scratch org to make declarative changes and pull to local folder.

$ sfdx force:org:open ..
$ sfdx force:source:pull ..

1.9 modify code in the VS Code IDE and push to the Scratch org. Note, SFDX tracks changes between the local folder and the Scratch org, i.e. a pull command will return only detected changes.

$ sfdx force:source:push ..

1.10 create a Scratch org user for any QA users involved in early system testing.

$ sfdx force:user:create ..

2. Synchronisation point:

2.1 Developer1: pushes local branch Feature1Dev1 to origin.

$ git add .
$ git commit -m "feature1 commit.."
$ git push origin Feature1Dev1

2.2 Developer1: creates Pull Request (Feature1Dev1=>Feature1)
2.3 CI: creates Scratch org, runs unit tests, runs Lightning Lint (Pull Request specific build behaviour)
2.4 Developer2: reviews Pull Request (Technical QA) (Feature1Dev1=>Feature1)
2.5 Developer2: merges Pull Request to Feature1 branch
2.6 CI: creates Scratch org, runs unit tests, runs Lightning Lint (Commit specific build behaviour)
2.7 Developer2: fetches the Feature1 branch from origin and merges to local Feature1Dev2

$ git fetch origin
$ git merge origin/Feature1

3. Feature1 is ready for testing:

3.1 Developer1: creates Pull Request (Feature1=>ProjectA)
3.2 CI: creates Scratch org, runs unit tests, runs Lightning Lint
3.3 Technical Lead: reviews Pull Request (Technical QA) (Feature1=>ProjectA)
3.4 CI: creates Scratch org, runs unit tests, converts SFDX Source to Metadata API format and pushes to the ProjectA Salesforce org for QA (Commit specific build behaviour)

Salesforce Summer ’18 Platform Highlights

For the first time (in memory) this lightweight edition of the platform highlights series refers to the current Salesforce release (v43.0), not the next in line. As such there’s no need to cover release dates, pre-release access, release notes links etc.; instead I’ll make a quick mention of the Summer ’18 certification maintenance changes. Previously all maintenance exams were taken via Webassessor and involved a recurring fee; with Summer ’18 most of the maintenance exams have moved to Trailhead and are now free of charge. This is great news for everyone but particularly those with multiple certifications to maintain. I found the new maintenance process to be a significant improvement; I particularly liked the practical testing element, e.g. the “Get Hands-on with Flows” unit (App Builder certification maintenance unit 2). More of the same please.

– features are GA if not indicated otherwise

Apex Switch Statement

At long last the Apex language supports Switch (or Case) statements for control flow, adding coding simplification and (assumed) processing efficiency. Switch statements in Apex support the Integer, Long, sObject, String and Enum data types plus lists of literal values.

switch on expression {
    when value1 {
        // code block
    } when anotherValue2, anotherValue3 {
        // code block
    } when null {
        // code block
    } when else {
        // code block
    }
}
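
A concrete, runnable (anonymous Apex) example for illustration:

Integer statusCode = 2;
switch on statusCode {
    when 2, 3 {
        System.debug('Retry the request');
    }
    when null {
        System.debug('No status code supplied');
    }
    when else {
        System.debug('Log and stop');
    }
}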

SOQL Count Limits

SOQL queries that utilise the Count aggregate function no longer calculate Query Row limit consumption at the underlying record level; instead the number of aggregate groupings is applied. If there is no Group By clause then the limit consumption equals 1. This seemingly small change has a huge impact in enabling code to apply dynamic conditions based on the overall record count. Previously, calculating a record count in code was wasteful in limit terms in most cases and impossible at higher data volumes. The SOQL Count Limits change in Summer ’18 is perhaps my favourite Apex language enhancement over recent releases; I can definitely see some significant benefits to the practical application of Apex/SOQL.
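
For example, with this change the aggregate query below consumes a single query row against the governor limit regardless of how many Account records exist in the org.

// COUNT() queries now consume query rows per aggregate grouping (1 with no GROUP BY)
Integer totalAccounts = [SELECT COUNT() FROM Account];
System.debug('Accounts: ' + totalAccounts + ', query rows consumed: ' + Limits.getQueryRows());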

Apex Code Size Limit Doubled

Not often does a platform limit double in size between releases. The previous Apex character limit (3MB) was a soft-limit that could be increased by Salesforce Support, now however the limit is 6MB by default.

Apex Triggers on User Logout

A new standard object LogoutEventStream enables ApexTriggers to subscribe to User Logout events (not session timeouts). As the name suggests, the object should be considered a published stream of event information and is read-only with no query support. This capability provides support for new security-related use cases (e.g. logout propagation and audit tracking) as well as other data reset/cleardown type scenarios.
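
A minimal subscriber sketch is shown below; field names should be confirmed against the object reference documentation, and note that logout event streaming may need to be enabled for the org.

trigger LogoutEventTrigger on LogoutEventStream (after insert) {
    for (LogoutEventStream logoutEvent : Trigger.new) {
        // React to the logout, e.g. propagate the logout to a connected system
        // or record the event for audit purposes.
        System.debug('User logged out: ' + logoutEvent.Username);
    }
}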

Einstein Bots

Einstein Bots are a new Service Cloud AI capability enabling intelligent conversational experiences to be developed that utilise Einstein Platform Services. Bots are comprised of Dialogs (Messages, Questions, Actions or Rules), Slots (Data variables extracted from responses – and passed to Actions), Entities (Groupings of Slots for a given data type) and Dialog Intents (Phrases or words used by the Einstein Platform Intent Service to select the Dialog most likely to match the entered response). Actions can incorporate Apex code into the flow where required.

Salesforce limits the number of free bot conversations to 25 per active Live Agent user.

Cloud Flow Designer – Debugging Support

A new Debug button appears in the Cloud Flow Designer which allows input variable values to be specified and the Flow to be executed with the activity of each step displayed in sequence. Note, the Flow is actually executed, meaning DML actions are not rolled back on completion. With Flow gaining a renewed significance in Lightning Experience (as a key component of the declarative development model), it’s great to see continued investment in this tool.

Contactless Users for External Identity Users – Beta

The External Identity user license type now supports a model where no Contact record is required to be associated to the User. Contacts can be added at a later time if required. The Contactless approach avoids the overhead of storing, maintaining and potentially synchronising the 2 records. Where Salesforce is providing identity services only (or a blend of functional and identity based capabilities across the user base) this lightweight license type will provide some significant benefits. For example porting a membership database to Salesforce now becomes easier as the Contact or PersonAccount record could be added to the User at point of access and not up-front; thereby lowering the license cost (assumed) and optimising data storage.

Apex Replay Debugger – Beta

The Apex Replay Debugger simulates a live debugging session using the Apex Debug Log generated by an executed transaction. Whilst very limited in general code debugging terms, executed code can be stepped through, breakpoints set and variables inspected; all very useful tools to have available. It should be noted that unlike the full Apex Debugger, the Replay Debugger is free of charge and can be applied to logs generated by any org (including production orgs); the Apex Debugger is limited to sandboxes.

With the Replay Debugger, a workflow such as (modify code, run to generate a log, debug, modify code …) becomes a practical and cost-free debugging loop.

Salesforce CPQ

This post provides an annotated Object Model for the Salesforce CPQ product (formerly known as Steelbrick). The document was produced as part of my preparation for the Salesforce CPQ Specialist certification and is shared as-is with the CPQ practitioner community for information purposes only. Please note, the notes were written before I sat the exam so no question or topic hints here I’m afraid.

By way of background, the acronym CPQ stands for Configure, Price, Quote. In almost every sales context a flexible and capable quoting engine is required that supports complex product configurations (bundling etc.), discounting and pricing calculation and finally efficient quote document generation and distribution. As standard Sales Cloud functionality provides only a basic capability in these functional areas, the AppExchange category for CPQ solutions is one of the most popular in terms of both customer demand and vendor solutions.

As there is no Salesforce CPQ implementation guide (at the time of writing) I found that a practical (hands-on) approach to learning the Salesforce CPQ product worked best. For this I installed the CPQ package into a developer edition org and created my own demo dataset. The Salesforce CPQ product is both powerful and complex in equal measure; given the available help documentation it will require expert consultancy services to implement the product correctly beyond basic use cases.

Salesforce CPQ Object Model (Winter ’18) – Annotated

Scrum Quality Assurance

More often than not quality assurance is considered to be an implicit benefit of an agile process, with no specific techniques or practices applied. Integrating effective quality assurance can be one of the most challenging aspects of applying the Scrum framework.

Before considering how quality assurance factors can be applied to the Scrum framework, let’s start with a clear definition.

Quality assurance (QA) is a set of practices which collectively ensure that standards of quality are met (i.e. technical standards) and specified requirements are delivered (customer or user needs or expectations). Standards of quality may include for example measures of correctness, integrity, maintainability or usability. Quality assurance does not mean software testing; this is one element of a quality assurance process. Effective QA means there are limited defects in the first place. The core elements of a QA process include standards, audits, solution governance, testing, defect management (analysis, tracking and resolution) and communication.

Scrum Concepts

The following points describe how QA should be considered in relation to some of the core Scrum concepts.

Developer. In Scrum terms this means any project contributor, such as Software Testers and UI Designers in addition to Software Developers. This term is obviously confusing and a source of misunderstanding.

Scrum Ceremonies. QA should attend all meetings (Sprint Planning, Product Backlog Grooming, Stand-ups, Reviews and Retrospectives). A team-wide, test-first philosophy is often the driving factor behind high performance Scrum teams. This concept is also referred to as acceptance-test driven development.

Working Software. This means fully tested quality software – not code complete.

Definition of Done. The DoD must include strict criteria for testing and quality.

Customer Satisfaction. Conditions of Satisfaction (or Acceptance Criteria) defined for each User Story drive the quality assurance process.

Scrum Artefacts

The following points describe how QA should be considered in relation to some of the core Scrum artefacts.

User Stories. The User Story is a concise description of a feature told from the perspective of the person who requires it. User Stories are best considered a reminder of a conversation and should be limited to the detail required for effective estimation and prioritisation.

The following (INVEST) factors should be considered in respect to the quality characteristics of a User Story;
Independent – can it be delivered in a single sprint, or do dependencies exist?
Negotiable – the user story is not a specification (reminder of a conversation)
Valuable – is there obvious intrinsic value to the user?
Estimable – can it be estimated (sufficient detail, available knowledge)?
Small – is it sized appropriately (to fit in a sprint)?
Testable – are there appropriate Conditions of Satisfaction – or Acceptance Criteria?

The Conditions of Satisfaction (CoS) are most significant to QA. Each condition is a specific, user-focused requirement for acceptance and should be written using the Given-When-Then format (or heuristic technique) as below.

Given [context] – e.g. I have created a Customer Record and Sales Invoice.
When [event] – e.g. I attempt to enter a Sales Payment above the Sales Invoice Total.
Then [outcome] – e.g. I am warned that the payment exceeds the outstanding amount.

The high-level CoS will drive the team discussion of the User Story (in a test-first approach), a “Design and Test” whiteboard approach can be useful in this respect where the team consider both the design and testing approach at the same time. Each high-level CoS will be translated into lower-level (detailed) test cases (technical; unit and integration or functional).

User Story Test Cases. The Conditions of Satisfaction (CoS) for a given User Story provide a high-level framework for the definition of specific test cases. Such test cases should be structured to address a small, focused end-to-end workflow (or scenario) to avoid interdependencies and should enable collaborative and automated testing. Effective test cases provide a simple set of step-by-step test instructions (or steps). Note, the CoS cover the happy path scenarios only; test cases must also be developed to cover critical path, alternate path, negative path, technical, validation and edge case scenarios.

Product Backlog. The Product Backlog is a single, ordered list of requirements (typically but not exclusively User Stories) used by Scrum teams. The list is constantly evolving, business value ordered and estimated by the team – including QA.

The following factors should be considered in respect to the quality characteristics of the Product Backlog;
Prioritisation – are the Product Backlog Items (PBI) accurately prioritised to a consistent scale based on business value, ROI, interdependencies and risk?
Estimation – are the PBI consistently estimated (and benchmarked) by the team?
Well Groomed – is the backlog frequently groomed by the team?

Product Backlog grooming entails User Story discussion by the whole team (led by the Product Owner) with a view to revising and refining estimates and adding details such as new Conditions of Satisfaction. Regular sessions should be scheduled for this purpose; ideally 10% of the Sprint time should be dedicated to this task. In practice this is rarely the case, however given the direct relationship between the completeness of the CoS and the efficiency of the quality assurance process it is important that sufficient time is allowed for effective grooming.

Scrum Practices

The following points describe how QA should be considered in relation to some of the core Scrum practices.

In Sprint Testing. User Story testing must be completed within the sprint – full-stop. The Definition of Done must include this criterion. Testing a sprint behind the development is common bad practice and directly violates the working software principle; additionally this approach builds up technical debt and defect debt to be resolved at a later time. Instead, the sprint backlog must be shaped to enable collaborative design, development and testing of each user story within the sprint. High performing Scrum teams will also automate some of the tests within the sprint. The following simplistic process describes how collaborative sprint development can be applied.

Developers and Testers working collaboratively – Team Swarming:
Write the test case
Write the code
Test together (developers and testers)
Fix the bugs (developers) / Automate the test case (testers)
>> next user story

Test Automation. The ability to automate the execution of test cases (i.e. unit tests and acceptance tests) is critical at scale. In order to quickly identify cases where the current sprint development breaks the code or functionality developed in a previous sprint it is necessary to execute the related tests efficiently. Test automation also supports evaluation of the impact of changes (what-if analysis) that can help with design decisions. Further QA-related benefits of test automation include the ability to apply load testing (e.g. multiple concurrent users) and performance testing types. Unlike manual testing, which must be conducted in-sprint, test automation can be added outside of the sprint and developed by a separate team. High performance Scrum teams will automate critical tests within the sprint.

Test data quality has a significant impact upon the effectiveness of testing. Random data simply doesn’t work and prefabricated test data is often written to expectations and is not representative. Using production data (real data) for test purposes can violate data protection regulations and comes with the risk of inadvertent communication to the customer, supplier etc. or PII or financial information leakage. To mitigate this an approach that combines real data with obfuscation (scrambling or scrubbing) techniques can produce large scale, representative test data sets without the risk. This can take significant time to perfect however the return on this investment will be high.

Note, in addition to automated tests, a quality test plan still includes manual testing to cover exploratory testing and usability testing types. Automated tests are systematic and unable to gauge the quality of the user experience for example.

Regression Testing. The ability to run test cases developed in previous sprints (and perhaps by different teams) enables the impact of a current development to be understood immediately. This testing practice is referred to as regression testing. Regression testing effort grows over time; as with User Story test cases, regression testing should occur in-sprint and automation should be used wherever possible. It is good practice to reserve time at the end of the sprint to allow the team to complete regression testing; this should also include performance and security test cases. At scale, it is often necessary to be selective in terms of which test cases to focus upon. An effective but contentious practice is to reserve the final sprint for end-to-end regression testing – this sprint is referred to as a hardening sprint; the key risk with this approach is that defect resolution from earlier sprints is deferred to this sprint. The hardening sprint should be driven from the regression test plan, not the product backlog, and be managed by the QA lead.

Post Release Support. Once the product is released – and despite the QA rigour applied – it is inevitable that production defects will arise. Such defects should be reviewed first by the Product Owner to determine priority and relevancy. Where defects are to be addressed immediately (show-stopper bugs etc.) the resolution work should replace similar sized work in the current sprint. In every respect the defect should be treated in the same manner as a User Story. Defects can also be placed on the Product Backlog for grooming. Where the Scrum team is providing defect resolution it is good practice to retain velocity for production defects; this proactive approach reduces the requirement to pull User Stories from the current sprint that may be in progress.