Salesforce Summer ’20 Architect Highlights

I first published a release highlights post on this blog over 8 years ago; since that time the series has been renamed a few times but retained a focus on summarising the key technical aspects delivered with each release. I’ve always found preparing the content for the post a useful way to organise my own release readiness and feedback has indicated that amongst the plethora of release related blog posts available today there remains space for one with a technical slant. I hope so.

This post marks something of a relaunch to the series, this time with a strong focus on concerns significant to the Salesforce architect community (i.e. technical, solution or integration architects).

Release Timeline

Given the current global situation with COVID-19, the release timeline for Summer ’20 is approximately one month later than normal for a summer release. Each Salesforce release represents potential technical disruption for customers and managing this in parallel with the ongoing impact of the pandemic seems like an avoidable situation. The Summer ’20 sandbox preview is now scheduled for May 20-30th and the Summer ’20 production release is scheduled for June 12th (1st release window), July 10th (2nd) and July 17-18th (3rd and final). As ever the trust site provides the full detail of the release timeline.

In advance of the sandbox preview window Summer ’20 pre-release orgs can be requested via the pre-release sign-up page. During the sandbox preview window Summer ’20 preview Scratch Orgs can be created by adding the release option (“release” : “Preview”) to the scratch org definition file.

The Salesforce Summer ’20 release notes are available from today at https://releasenotes.docs.salesforce.com/en-us/summer20/release-notes

 

Architect Highlights (in no particular order)

Application Architecture

Record-Changed Flows

Previously, Before-Save Flows could be implemented to efficiently update record field values during the before-commit phase of the record save transaction; historically this has been a strong ApexTrigger use case. For many implementations a significant improvement in record-save performance could be achieved by consolidating Processes and before-event ApexTrigger logic into a single Flow. Refactoring Process Builder Processes which served only to populate field values on the same record could achieve a remarkable reduction in save time, which in turn delivers a more responsive end-user experience and a potential reduction in CPU timeout errors. With Summer ’20, Before-Save Flows become Record-Changed Flows which support both before-save and after-save events, equating to before and after ApexTrigger events. In the former case the Flow is restricted to field value updates on the triggering record (via $Record); the latter enables a wider range of Flow actions to be utilised.

This new capability enables Workflows and Processes to be reimplemented following a single Flow pattern which should achieve the best performance outcome.
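By way of comparison, the sketch below shows the kind of same-record field defaulting that previously warranted a before-event ApexTrigger and which a before-save Record-Changed Flow can now handle declaratively (the object choice and the Region__c field are illustrative only):

//Illustrative before-save logic: same-record field defaulting, no DML required.
trigger AccountBeforeSave on Account (before insert, before update) {
    for (Account record : Trigger.new) {
        if (record.Region__c == null) {
            record.Region__c = 'EMEA';
        }
    }
}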

Platform Event Flows

Process automation initiated via Platform Event has been possible with Process Builder up to now, with Summer ’20 this is extended to Flow Builder. Platform Event Flows represent an obvious next step on the emergence of Flow Builder as the hub for all things process automation.

With this model it is possible to build out complex business logic (declaratively) in Flow Builder which fires in response to on- or off-platform asynchronous events; a powerful combination, particularly in the data integration context.
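To illustrate the data integration point, the sketch below publishes a hypothetical platform event (Order_Status__e) from Apex; with Summer ’20 a platform-event-triggered Flow could subscribe to this event and run the follow-on business logic declaratively:

//Hypothetical platform event published from Apex (or from an external system via API).
//A Platform Event Flow can now act as the subscriber in place of a Process or ApexTrigger.
Order_Status__e event = new Order_Status__e(
    Order_Number__c = '00123',
    Status__c = 'Shipped'
);
EventBus.publish(event);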

Flow System Context

Flows which are specified to run in System Context (that bypass the permissions of the running user) can now run outside of the Sharing Model (effectively Without Sharing in Apex terms). Previously System Context was limited to removing Object and Field Access permission checks with the Sharing Model record-level access checks enforced by the Flow execution.
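For reference, the sketch below is the Apex analogy for the new behaviour; a without sharing class bypasses record-level access in the same way a System Context (without sharing) Flow now can, whereas object and field permissions remain a separate concern in both cases (class and query are illustrative only):

//Apex analogy only: record-level (sharing) access is bypassed; object and field
//permissions are governed separately (e.g. via describe checks or Security.stripInaccessible).
public without sharing class SystemContextAnalogy {
    public static List<Account> getAllAccounts() {
        //Returns matching records regardless of the running user's sharing access.
        return [select Id, Name from Account limit 200];
    }
}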

Apex-Defined Type Flow Variables

An Apex-Defined Type is simply an Apex Object, typically representing a deserialised JSON response in a data integration scenario. ADT variables could historically be utilised within Flows but could not be passed into the Flow from the calling context. With Summer ’20 it is now possible to hand-off callout response processing to a Flow, as one example use case. Whilst a seemingly minor change this opens up Flow as a viable option for defining business logic for key use cases that would otherwise have required Apex code.
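The sketch below illustrates the callout response hand-off use case; the OrderInfo class, Flow API name (Process_Order_Response) and input variable name are all hypothetical:

//Apex-Defined Type: fields must be @AuraEnabled to be usable as a Flow variable type.
public class OrderInfo {
    @AuraEnabled public String orderNumber;
    @AuraEnabled public Decimal amount;
}

//Anonymous Apex: deserialise a (simulated) callout response and hand it to an autolaunched Flow.
String responseJson = '{"orderNumber":"00123","amount":250.00}';
OrderInfo info = (OrderInfo) JSON.deserialize(responseJson, OrderInfo.class);
new Flow.Interview.Process_Order_Response(new Map<String, Object>{ 'orderInfo' => info }).start();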

Dynamic Forms (Non GA Preview)

Undoubtedly one of the most eagerly anticipated features for some time, Dynamic Forms makes its preview debut in the Summer ’20 release. The Dynamic Forms feature is billed as an upgrade to Page Layouts where UI composition (sections, fields and visibility logic) occurs directly within Lightning App Builder. Preview support is limited to custom objects and excludes record pages that use pinned-region or custom page templates.

Lightning Message Service

The Lightning Message Service provides a standardised mechanism for communication across components on a page or across pages. Supported component types include Visualforce Pages, Aura Components and LWC. Communication is facilitated by Lightning Message Channel subscription and publication.

Development Lifecycle

Source Tracking in Sandboxes (Beta)

Source tracking is now supported (in beta) for the developer sandbox types (Developer and Developer Pro); previously source tracking was restricted to Scratch Orgs. Source tracking enables efficient synchronisation of changed metadata between the local environment and the Salesforce org. Where existing development or release management processes (i.e. build automation and CI) are yet to be transitioned to Scratch Orgs this capability offers increased efficiency and accuracy as tracking of the changed metadata state is managed by the platform.

Org Dependent Unlocked Packages

Most Salesforce implementations suffer from a legacy accumulation of metadata components built up over a period of time by multiple projects, teams and partners. In such cases the benefits of implementing a structured Unlocked Package approach can appear to be unrealistic due to the degree of interdependency across the environment. With Org Dependent Unlocked Packages, metadata validation (i.e. dependency checking) can be deferred to the time of package installation, rather than during package version upload. Org Dependent Unlocked Packages can be created with the --orgdependent Salesforce CLI parameter.

Security Architecture

Initiate Two Factor Authentication (2FA) with Apex

Two new methods have been added to the System.UserManagement class to enable Apex initiation of 2FA verification (e.g. via Salesforce Authenticator). One method initiates a verification (initVerificationMethod); the second completes the verification (verifyVerificationMethod).

Highly Trusted Users

The new “Skip Identity Confirmation at Login” permission allows users to log in without second factor identity verification. Clearly one to use with a high degree of caution but useful for certain use cases where 2FA is impossible or impractical.

Minimum Access User Profile

The new “Minimum Access – Salesforce” User Profile provides a base profile for the Salesforce user license type defined following the principle of least privilege, a well-established security best practice. This profile can be cloned and extended as required or used in concert with Permission Sets. This support for a secure-by-default model feels long overdue.

Integration Architecture

Async Platform Event Publish Operations (Pilot)

The initial success of a Platform Event publication can be straightforward to determine (e.g. via the SaveResult in Apex code) but the eventual state of the event is not accessible. To address this gap a new standard Platform Event (PlatformEventStatus) has been added; subscribers (e.g. ApexTrigger) are pushed status updates for High Volume Platform Events that have the “Track Publish Status” setting enabled.
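To make the distinction concrete: in Apex the immediate publish result is already available via the returned SaveResult, as sketched below with a hypothetical event; what it cannot tell you is the eventual outcome of the publish, which is the gap the new status event addresses.

//The synchronous SaveResult only confirms the publish request was accepted.
Order_Status__e event = new Order_Status__e(Order_Number__c = '00123');
Database.SaveResult result = EventBus.publish(event);
if (!result.isSuccess()) {
    for (Database.Error error : result.getErrors()) {
        System.debug('Publish failed: ' + error.getStatusCode() + ' - ' + error.getMessage());
    }
}
//The eventual (asynchronous) outcome is what the new PlatformEventStatus event surfaces.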

Functional Architecture

Service Cloud: Omni-Channel Status-Based Capacity Model

Previously Omni-Channel tracked Agent capacity by the number of open tabs within the Agent Console. For short-lived Work Items (e.g. Case or Lead) that are resolved within a single user session the tab-based capacity model is often appropriate and can work well. However, this is not the case where Work Items are longer-lived and have a lifecycle that spans multiple user sessions or days; this has made Omni-Channel a poor fit for many service operations. To address this, Omni-Channel can now be configured to track Agent capacity based on the status of the Work Items that the Agent owns. The new Omni-Channel setting “Enable Status-Based Capacity Model” switches between the default tab-based capacity model and the new status-based model. This could be a game changer for Omni-Channel given the importance of capacity management to service operations and the limitations of the tab-based model.

ISV

Delete Lightning Components from Managed Packages

A long-awaited capability for ISVs: the ability to delete obsolete Lightning Components from released Managed Packages. From personal experience I know how easy it has been over the last few years to build up a legacy of old components in a package following refactoring and the transition from Aura to LWC. Note, component deletion must be enabled in the Packaging Org (via a Salesforce Support Case).

First Generation Managed Package Conversion (Developer Preview)

The Summer ’20 release includes a developer preview for the conversion of first-generation managed packages to second-generation managed packages. The preview is limited to Scratch Orgs and is intended to communicate how this long-awaited capability will function in due course. Personally, I’m looking forward to bringing the benefits of second-generation packaging to previously released managed packages.

Salesforce DX

This post provides a practitioner’s introduction to Salesforce DX (Developer Experience), a new software development lifecycle paradigm for Salesforce. Please note at the time of writing (Spring ’17 – v40.0) Salesforce DX is at public beta status.

Salesforce DX (or SFDX) – What is it?

A new set of tools and practices to accelerate the software development lifecycle on the Salesforce platform. The tools in question enable established (technology agnostic) software development models to be applied to the Salesforce context efficiently; the practices provide guidance in how to do this effectively and at scale.
Many aspects of Salesforce DX have become standard practice over the last few years; source-driven development using tools such as Jenkins and git being a good example. Other aspects such as artifact-based development, the CLI and Scratch Orgs introduce a fundamentally new paradigm.

The Salesforce development model has historically been org-centric in concept; with the entire set of metadata types representing a single monolithic code-base. This conceptual model has benefits in respect to ensuring complex interdependencies are not overlooked and code structure and conventions are considered fully in a standardised approach. In the org-centric model – typically, though not exclusively – a source-code control repository is mapped at the org level and deployments are org-based rather than feature- or app-based. Salesforce DX provides the tools and practices to change the development model from org-centric to artifact-based; where an artifact represents a collection of components (an org subset) – developed, tested and deployed together in safe isolation.

SFDX Tools

CLI (Command-line Interface – sfdx)

The new SFDX CLI provides the foundation for Salesforce DX with capabilities that span authorisation, org management, metadata creation, data population, packaging and deployment. The CLI commands are extensive and clearly structured within the top-level force namespace.

Command examples:

//CLI create a new project (specific DX folder structure plus config file)
sfdx force:project:create 

//CLI needs to authorise against the DevHub (OAuth web-based flow or for CI JWT Bearer flow)
sfdx force:auth:web:login -d -a "Hub Org"

//CLI create scratch org (or set existing as default)
sfdx force:org:create -s -f config/project-scratch-def.json

//CLI push source code from local (respects the .forceignore file placed in root folder)
//Conflicts can be overwritten --forceoverwrite
//Note - Local project is the source of truth.
sfdx force:source:push

//CLI run unit tests
sfdx force:apex:test:run

A key concept is the local project which is the primary source of truth. The project structure is different from the Metadata API folder structure with higher-level metadata types such as Object broken down into sub-folders and individual files for sub-types such as listViews and fields. This new sfdx structure (or source shape) is designed for development and source-code control system interactions whereas the longstanding Metadata API structure relates primarily to deployment. Conversion between the two formats is necessary when creating a new sfdx project from an existing org (or simply from existing metadata) and prior to deploying a sfdx project to a Salesforce org (not Scratch Orgs). The CLI provides commands to perform the format changes as illustrated below.

//Create DX project > Create folder for metadata files then ..

// Retrieve unpackaged metadata
sfdx force:mdapi:retrieve -r ./mdapipkg -u <username> -k ./package.xml

// Retrieve the metadata for a package
sfdx force:mdapi:retrieve -s -r ./mdapipkg -u <username> -p <package name>

// Convert to SFDX format.
sfdx force:mdapi:convert --rootdir <retrieve folder name>

// Convert to Metadata API format.
sfdx force:source:convert -d <metadata folder name>/ --packagename package_name
Scratch Orgs

A Scratch Org is a temporary org that can be created quickly and configured with predefined settings and preferences (Edition, languages, platform features etc.) via a JSON definition file. Scratch Orgs are easily created and deleted via the CLI; up to 50 can be created per day (per Dev Hub) with up to 25 active concurrently; deletion occurs automatically after 7 days.

A Scratch Org is typically used for development and testing purposes and plays an enabling role in build automation. Note, unlike sandboxes, Scratch Orgs are not constrained by a refresh cycle, but also are not a clone of a source org configuration. A typical development workflow will be to create a short-lived feature branch in the Source Code Control system (SCC), create a local sfdx project (from the branch metadata) and Scratch Org, complete development and testing in the Scratch Org, commit change back to SCC and convert and deploy the change to Sandbox. In a Continuous Integration (CI) environment the CI server (Travis CI, Jenkins etc.) may initiate an automated build from the commit event. In such a scenario, headless Scratch Orgs may be used for automated testing (across editions, languages etc.).

The SFDX CLI provides commands for automating the population of data in a Scratch Org. This can be via CSV file loading or more interestingly via org-to-org copy via SOQL query export and subsequent import.

Finally, a key feature of the CLI is the automated tracking of change (the diff) between the current local project and the default Scratch Org. The CLI push and pull commands that facilitate the movement of change between the local project and the Scratch Org automatically process changed components only.

Dev Hub

The SFDX CLI must authorise to a Dev Hub before Scratch Orgs can be created; each Scratch Org is related to a parent Dev Hub. The Dev Hub is a production Salesforce org (Enterprise Edition or Unlimited Edition) – the feature can be activated from the Setup menu. A Dev Hub trial org can be created to enable safe experimentation with SFDX. Dev Hub trial orgs expire after 30 days and are limited (40 Scratch Orgs created per day, 20 active concurrently); why this differs subtly from the standard limits is unclear.

Authorisation of the Dev Hub org (and indeed any org) by the SFDX CLI is possible via OAuth Web Flow or JWT-based Flow. The latter is preferred for system authentications such as those conducted by a CI server.

For ISVs (and others) working with namespaces, the Dev Hub provides a Namespace Registry capability which (as the name suggests) allows Developer Editions containing namespaces to be registered. Once the org link is complete, Scratch Orgs can be created as normal. This capability is great for ISVs developing in a namespace-registered org (as opposed to utilising a separate packaging org to apply the namespace).

IDE2

The new Force.com IDE2 is an Eclipse plug-in built on the SFDX CLI. Additional features of the IDE2 include the Lightning Code Analyser and Apex Code Analysis tools. The former wraps the capabilities of ESLint and the Salesforce Lightning CLI to evaluate JavaScript code against standard or custom rule sets. The latter covers syntax error detection and highlighting plus code auto-completion. The IDE2 also provides an embedded browser for editing Lightning Pages.

On initial inspection there didn’t appear to be any means to directly login/import metadata from an existing org. Presumably this must be done externally via the SFDX CLI.

Finally, the new IDE2 provides integrated tools for working with Git repositories, a key facet of the envisaged developer workflows centred around Salesforce DX.

Summary

At the time of writing, the available SFDX documentation relates primarily to the technical aspects of the tools, there is minimal information relating to process guidance (or practices). Whilst it’s relatively easy to envisage how the new tools and techniques could be integrated into development, test and deployment workflows, best-practice guidance in such areas would certainly provide confidence and accelerate adoption.

Historically, a key sticking point for source-based development (and deployment in general) in the Salesforce context has been the Metadata API coverage. In short, not all metadata types are supported and therefore automated deployments (whether Change Set or API based) are typically accompanied by pre- and post- deployment steps that manually address the gaps. As there’s no significant change in this regard, it’s likely that Scratch Orgs will require the same approach – however considering Scratch Org use cases will typically be code related this may have limited actual impact.

In general, however, Salesforce DX is a considerable step forward in terms of applying established software development lifecycle practices to the Salesforce context. The CLI alone is incredibly impressive and provides significant flexibility in respect to development, testing and release processes. It will be interesting to see how tools vendors react to the availability of the CLI as a foundation for future, evolved IDE capabilities. Hopefully Salesforce DX represents the first step of many in this direction.

Resources

Salesforce DX Developer Guide (Beta)
Salesforce DX on Trailhead
Salesforce DX Chatter Group

Salesforce Selective Queries

A basic understanding of the selective query concept is fundamental to scalability on the Salesforce platform. Non-selective queries have a significant performance impact on List Views, Reports and SOQL and can often result in Apex Trigger runtime exceptions (as below) as the data volume increases over time.

System.QueryException Cause: null Message: Non-selective query against large object type (more than 200000 rows)

SOQL queries executed from Apex Triggers have to be selective where the record count in the queried object is 200,000 or more. The determination of a selective query is a function of volume plus field-level selectivity. It is therefore the case that the selectivity state of a given query is volatile, meaning in practical terms that the initiating Apex Trigger may work one day and not the next.

Selective Query Definition

Selectivity is determined by the index state of the query filter conditions and the number of records the filter returns versus the object’s total record count. The thresholds below show the difference between the selectivity calculation for a standard index versus a custom index.

Selectivity Thresholds:
Standard Index – 30% (first 1M) then 15%
Custom Index – 10% (first 1M) then 5%

Unary filter:

e.g. select Name from Account where IndexedField__c='ABC'

With a custom index on IndexedField__c the filter must return less than 10% of the total records in the object to be considered selective, up to the first 1 million records; from that point the threshold drops to 5%.
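As a worked example (based on my reading of the thresholds above): for an object containing 3 million records and a filter on a custom-indexed field –

10% of the first 1 million records = 100,000
5% of the remaining 2 million records = 100,000
Selectivity limit = 200,000 rows returned

I believe absolute caps also apply regardless of object size (333,333 rows for a custom index, 1 million rows for a standard index).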

Multiple filters AND (exclusively):

e.g. select Name from Account where IndexedField__c='ABC' and SecondIndexedField__c='123'

The Query Optimiser will set the leading operation on the basis of lowest cost. If no filters are selective a table scan is performed.

If all filters have an index then a Composite Index Join optimisation can be applied.

In this case each filter must be less than twice (2x) the selectivity threshold.
All filters combined must be less than the selectivity threshold.

If all filter fields are standard then use the standard index selectivity threshold – otherwise use custom index selectivity threshold.

Multiple filters OR (at least one):

e.g. select Name from Account where IndexedField__c='ABC' or SecondIndexedField__c='123'

Selective filter indexes could be set as the Leading Operation – if none exist then a table scan occurs, unless all filters have an index, in which case a Composite Index Union optimisation becomes possible.

In this case each filter must be less than the selectivity threshold.
All filters combined must be less than the selectivity threshold.

If all fields are standard then use the standard index selectivity threshold – otherwise use custom index selectivity threshold.

Parent Field Filter:

e.g. select Name from Contact where IndexedField__c='ABC' and Account.IndexedField__c='ABC'

Where parent object fields are referenced in a filter, each filter index is evaluated individually and the lowest cost option is selected as the leading operation.

Note, the parent field is not indexed on the queried object, so Account.Id can incur a table scan on Opportunity whereas AccountId may allow the standard index to become the leading operation.
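Expressed as SOQL, the distinction in the note above is:

e.g. select Name from Opportunity where Account.Id='001...' – parent relationship filter, may incur a table scan
e.g. select Name from Opportunity where AccountId='001...' – foreign key field on the queried object, standard index available as the leading operation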

The notes above provide only a basic outline but should be sufficient to convey the key concepts.

 

Implementation Approach

As data volumes grow, query behaviour can change dramatically. To mitigate this, database queries originating in Apex code, list view and report definitions must consider the future peak data volume and field-level data characteristics (primarily selectivity). This considered approach can help identify an appropriate indexing strategy that maintains query performance by ensuring query selectivity. So, forward planning is absolutely key; queries should be designed to be selective up to the projected peak data volumes. Thinking ahead to this extent is very seldom applied in my experience, particularly where the Salesforce implementation evolves project-by-project and the longer term picture is not a priority.

In order to evaluate the selectivity of a given query, the following two approaches can be applied.

REST API Query Resource Feedback Parameter

The Force.com REST API exposes a Query resource that accepts an explain parameter which can be set with a SOQL query, List View Id or Report Id. The results show the options considered by the Query Optimiser and the lowest cost option (leading operation) taken. A relative cost value of less than 1 indicates a selective query; anything higher indicates a non-selective query. The example below shows the output for a report where a report filter hits an indexed field.

selective-queries-api
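As a quick way to try this from the platform itself, the anonymous Apex sketch below calls the Query resource with the explain parameter; the API version and example SOQL are illustrative, and a Remote Site Setting for the org’s own instance URL may be required for the callout to succeed.

//Anonymous Apex: request Query Optimiser feedback for a SOQL query via the REST API.
String soql = 'select Id from Account where CreatedDate = THIS_MONTH';
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
    + '/services/data/v30.0/query/?explain=' + EncodingUtil.urlEncode(soql, 'UTF-8'));
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
//The JSON response lists each plan considered, its relative cost and the leading operation.
System.debug(res.getBody());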

Developer Console Query Plan

From Summer ’14 onwards, the Developer Console can be configured [Help > Preferences > Enable Query Plan] to display a Query Plan option on the Query Editor tab. The construct of the output is the same as the API approach. Note, this approach is limited to SOQL queries.

selective-queries-developer-console

The references section below provides links to the technical detail of the two approaches introduced above.

 

References

Query & Search Optimisation Cheat Sheet

Developing Selective Force.com Queries through the Query Resource Feedback Parameter Beta

Developer Console Query Editor

Improve performance with Custom indexes using Selective SOQL Queries

Custom Index Request Checklist

Salesforce Activity Sharing

This brief post serves to clarify the sharing model related to Activities, i.e. Task and Event. For most implementations a public sharing model for Activity is highly appropriate and a necessary element of the CRM process. In some cases however a private model is required, perhaps where strict visibility rules must be applied in respect to Account and Contact, or where the activity itself is of a confidential nature.

In the former example, details of the Contact (Name, Title, Account Name) are revealed on the Activity record to assigned users (and those above them in the role hierarchy) who don’t necessarily have record visibility to that Contact or Account. This can be a problem. In the latter example, the interpretation of private OWD for Activity relates to editing of records, not visibility of records (as is the case for other objects), meaning there’s no mechanism to restrict access selectively to Activity records related to a given record. In mitigation, field level security (FLS) can be applied to hide fields from certain users; this approach can be effective but is not ideal. As such the implementation of a fully private sharing model for Activity can be difficult to achieve.


Activity Org-wide Default Implications
— Private : Read access to the [Related To] record provides read access to the Activity. Only the Assigned User and users above them in the role hierarchy can edit.

— Controlled by Parent : Read access to the [Related To] record provides read access to the Activity. Edit access is possible for the Assigned User and users above them in the role hierarchy or requires edit access to the [Related To] and [Who Id] records.

In considering the OWD settings above, the rules below must also be understood.

1. Activity Visibility. If the Activity is related to a Contact, then View access is required to the Contact, with the exception of the Assigned User (and role hierarchy) who can view the Activity regardless of Contact access. Private contacts can be problematic in this context.

2. Activity Edit. If the Activity is related to a Contact, then Edit access is required to the Contact, with the exception of the Assigned User (and role hierarchy) who can edit the Activity regardless of Contact access.

References
Help and Training – Access to Activities and Calendars
Idea – Support for fully private (no read or write) activity sharing model

Salesforce Environment Hub

Environment Hub was announced back in August 2012 and was initially made available as a Pilot feature of the Winter ’13 release. This post follows up (some 2 years later) on my initial interest in understanding the utility Environment Hub delivers in the context of environment/user management.

Environment Hub – What is it?
In simple terms Environment Hub is an org administration tool that enables multiple orgs (of any type) to be associated and accessed from a central location. Each connected org is termed a Hub Member, with an imposed constraint that each Hub Member org can only be parented by one Hub org. It is therefore imperative that Hub Members are only added where there is no contention over the appropriateness of the parent Hub. Commonality of target production org or packaging org is a good starting point for this consideration, as is the rule that client orgs should never be parented by an ISV or Consultancy Hub org. The parent Hub org should always be the most-accessed org, the credentials for which will become those by which all org access is made.

For larger programmes of work Environment Hub provides a highly useful means to catalogue the org estate and to provide SSO between the constituent orgs. This latter point enables reduced password maintenance, simplified access and centralised user administration, i.e. the ability to deactivate a user account in one place only and to view login history in one place.

Environment Hub Tab

Environment Hub is enabled by Salesforce support who will require confirmation to proceed with the change. From experience, the enablement process can take a few days.

Key Features

Connect Organisation
Connecting Hub Members to the Environment Hub occurs via the entry of an administrator username for the target org and subsequent OAuth authentication and authorisation flow. The User Permission “Connect Organisation to Environment Hub” is required.

Connect Organisation 1

Connect Organisation 2

Connect Organisation 3 - OAuth

Connected Organisation Detail Page

Once an org is connected to the Environment Hub, interesting detail such as the Edition, Org Status and Org Expiry date is revealed. It is also possible to add custom name and description attributes to the Hub Member, which I really like; each org in the estate should be justifiable and have a specific purpose – here we can capture this plus an accountable contact etc. A very useful means of cataloguing and tracking the org estate.

The Company Detail page in the connected org will now show the Environment Hub Org Id value as below.

Company Information Page

In addition to the “User Added” origin, Hub Members are also auto-discovered using existing org-to-org relationships as below.

Auto-discovery types: sandbox to production, patch orgs to release org, Trialforce source org to Trialforce management org, release org to LMO

Create Organisation
New development/test/demo orgs can be created directly within the Environment Hub – this replaces the functionality previously exposed via the Salesforce Partner Portal, for partners at least. I’m unclear how the types of org offered are affected by partnership status etc. or perhaps whether the Environment Hub itself is available only to partners.

Create Organisation

Create Organisation 2

Org types offered:
Development = PDE org.
Test/Demo = Pick Edition for a 30 day time expired org.

Useful information on the distinction between different org types can be found here.

Single Sign-on
SSO can be enabled between each Hub Member and the Hub org; in implementation terms this means the Hub org is configured as an Identity Provider, with a Service Provider being configured (automatically via the SSO enablement process) in both the Hub Member org and the Hub org. Both IdP-initiated (via the Environment Hub tab) and SP-initiated (via enablement of the Service Provider as a Login Page Authentication Service) SAML flows are supported. The latter point means SSO could be enforced as the only authentication means – thereby switching off standard Salesforce authentication entirely. Each Hub Member org must have a My Domain configured for SSO to function.

Note, enabling SSO creates a Service Provider in the Hub org, default permissions are provided to the Standard User and System Administrator profiles only. It is therefore a requirement to ensure relevant permissions (Profile or Permission Set) are provided.

Connected Organisation Detail Page SSO

Within the Identity Provider, the Hub Member specific Service Provider configuration is set with “Subject Type = User’s ID determined by Environment Hub”; this setting delegates the user mapping to the Environment Hub settings, defined as below.

3 types of SSO User Mapping:
Method 1. Mapped Users – 1 to 1 mapping of user names – per-user.
Method 2. Federation Id – boolean state, yes = attempt to map users on common Federation Identifier values.
Method 3. User Name Formula – formula expression = attempt to map users via formula result.

Where multiple mapping types are enabled the precedence order above applies. For SSO between a sandbox and production org, user mapping is implicit and not configured as above.

Customisation
Enabling Environment Hub adds the EnvironmentHubMember standard object which is open to the declarative build model; custom fields, page layouts, validation rules, workflow rules, approvals etc. In addition Apex Triggers can be defined on this object, use cases for which may include notifications relating to status changes etc.
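As a sketch of the notification use case, the trigger below watches for changes to a hypothetical custom field (Org_Purpose__c) on EnvironmentHubMember; the field and the notification mechanism are illustrative only.

//Sketch only: react to Hub Member changes, e.g. to notify an administrators' Chatter group.
trigger EnvironmentHubMemberTrigger on EnvironmentHubMember (after update) {
    for (EnvironmentHubMember member : Trigger.new) {
        EnvironmentHubMember previous = Trigger.oldMap.get(member.Id);
        if (member.Org_Purpose__c != previous.Org_Purpose__c) {
            //.. post a Chatter notification, send an email alert etc.
            System.debug('Hub Member ' + member.Id + ' purpose changed.');
        }
    }
}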

A second standard object EnvironmentHubInvitation is also added, however this object is inaccessible from the Setup menu (although it can be reached via /p/setup/layout/LayoutFieldList?type=EnvironmentHubInvitation&setupid=EnvironmentHubInvitationFields). I’m assuming this object to be either forward looking or a legacy of an invitation-based connection model (as per Salesforce to Salesforce). Regardless, records do seem to be added to this object, the logic for which is unclear.

Related Permissions
Hub Org –
Manage Environment Hub
Environment Hub App and Tab access
EnvironmentHubMember standard object access permissions
Environment Hub Connected App
SSO Service Providers e.g. [00_____0000Cj__] Service Provider Access for SSO (by Profile or Permission Set)

Hub Member Org –
Connect Organisation to Environment Hub

Environment Hub in Practice
For consulting projects Environment Hub offers significant value in terms of management and tracking of the org-estate and centralisation of user administration. In practice this would require all project contributors to access production (Hub) as the primary org and SSO into secondary (Hub Member) orgs as required. In this model, production could be utilised for project collaboration, bug-tracking, project management etc. which is a common approach. The obvious downside being the requirement to license the project team in production – a big challenge on many projects where user licenses aren’t provisioned until a late stage or business use runs parallel to project activity. In such examples, project contributors could be provisioned with low-end user licenses on a temporary basis. Ideally we could do this with a Chatter Plus license as users would only require Chatter, Custom Objects (10 or less) and Environment Hub access.

For ISV projects, the utility of Environment Hub relates more specifically to the ability to catalogue the multitude of environments required for development, test, i18n, packaging, release and patch purposes, not to mention Trialforce. Efficiency of access across this estate is also a key factor.

References
Environment Hub Online Help

Conceptual Data Modelling

The biggest area of risk on any Salesforce implementation project is the data model. In my view this assertion is beyond question. The object data structures and relationships underpin everything. Design mistakes made in the declarative configuration or indeed technical components such as errant Apex Triggers, poorly executed Visualforce pages etc. are typically isolated and therefore relatively straightforward to remediate. A flawed data model will impact on every aspect of the implementation from the presentation layer through to the physical implementation of data integration flows. This translates directly to build time, build cost and the total cost of ownership.

It is therefore incredibly important that time is spent ensuring the data model is efficient in terms of normalisation, robust and fit for purpose; but also to ensure that LDV is considered, business critical KPIs can be delivered via the standard reporting tools and that a viable sharing model is possible. These latter characteristics relate to the physical model, meaning the translation of the logical model into the target physical environment, i.e. Salesforce (or perhaps database.com).

Taking a step back, the definition of a data model should journey through three stages; conceptual, logical and physical design. In the majority case most projects jump straight into entity relationship modelling – a logical design technique. In extreme cases the starting point is the physical model where traditional data modelling practice is abandoned in favour of a risky incremental approach with objects being identified as they are encountered in the build process. In many cases starting with a logical model can work very well and enable a thorough understanding of the data to be developed, captured and communicated before the all important transition to the physical model. In other cases, particularly where there is high complexity or low understanding of the data structures, a preceding conceptual modelling exercise can help greatly in ensuring the validity and efficiency of the logical model. The remainder of this post outlines one useful technique in performing conceptual data modelling; Object Role Modelling (ORM).

I first started using ORM a few years back on Accounting related software development projects where the data requirements were emergent in nature and the project context was of significant complexity. There was also a need to communicate early forms of the data model in simple terms and show the systematic, fact-based nature of the model composition. The ORM conceptual data model delivered precisely this capability.

ORM – What is it?
Object Role modelling is a conceptual data modelling technique based on the definition of facts in the form of natural language and intuitive diagrams. ORM models are subject to rigorous data population checks, the addition of logical constraints and iterative improvement. A key concept of ORM is the Conceptual Schema Design Procedure (CSDP), a prescriptive 7 step approach to the application of ORM, i.e. the analysis and design of data. Once the conceptual model is complete and validated, a simple algorithm can be applied to produce a logical view, i.e. a set of normalised entities (ERD) that are guaranteed to be free of redundancy. This generation of a robust logical model directly from the conceptual schema is a key benefit of the ORM technique.

Whilst many of the underlying principles have existed in various forms since the 1970s, ORM as described here was first formalised by Dr. Terry Halpin in his PhD thesis in 1989. Since then a number of books and publications have followed by Dr. Halpin and other advocates. Interestingly, Microsoft made some investment in ORM in the early 2000s with the implementation of ORM as part of the Visual Studio for Enterprise Architects (VSEA) product. VSEA offered tool support in the form of NORMA (Natural ORM Architect), a memorable acronym. International ORM workshops are held annually, the ORM2014 workshop takes place in Italy this month.

In terms of tool support, ORM2 stencils are available for both Visio and Omnigraffle.

ORM Example
The technique is best described in the ORM whitepaper. I won’t attempt to replicate or paraphrase this content, instead, a very basic illustrative model is provided to give nothing more than a sense of how a conceptual model appears.

ORM2 basic example

Final Thoughts
In most cases a conceptual data model can be an unnecessary overhead, however where data requirements are emergent or sufficiently complex to warrant a distinct analysis and design process, the application of object role modelling can be highly beneficial. Understanding the potential of such techniques I think is perhaps the most important aspect, a good practitioner should have a broad range of modelling techniques to call upon.

References
Official ORM Site
ORM2 Whitepaper
ORM2 Graphical Notation
Omnigraffle stencil on Graffletopia

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious; what isn’t necessarily obvious is how to achieve effective change control without introducing friction to the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to audit-ability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is accurate recording of changes and their application to specific environments, the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose, which may be challenging from a licensing perspective, however this approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

Change Control Object Model

Key Principles
1. For most projects tracking change at the component level (Custom Field, layout adjustment etc.) is time expensive and impractical in terms of associated overhead.

2. The model does not require change to be recorded at the component level. Instead change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach; in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment will comprise Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual Changes may be configuration or data related.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Log within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.

References
Components supported by Change Set
Metadata API Unsupported Component Types

Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis, design, build cycle could be iterative in nature or fixed and may have different degrees of detail emerging at different points, however the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also critically provides a predictable, consistent end-user experience.

When building custom applications using the declarative tools, we gain all of these advantages using pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken, meaning follow established patterns and use as many of the pre-fabricated components as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end-user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front-office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles is necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, the data relates to the User and is primarily modifiable by the User only and has no direct business relevance then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Implementation Audit

This post provides an outline approach to consider when performing an internal audit of an existing (or emerging) Salesforce implementation. As an individual who specialises in the provision of such quality assurance services from an external perspective, I’m convinced that most projects would benefit from a periodic internal review, perhaps augmented by some occasional external perspective and insight (Salesforce services can help here). However this is approached, in the majority case the internal project team will have the requisite experience and competency to deliver such an introspective review; the challenge is often one of finding the right time, or indeed any time, to conduct it. This is why a retrospective build review should be planned every 3 or 4 sprints (or thereabouts – projects differ) with a full implementation audit scheduled every release. The principle being that whilst the build is in flight, periodic sense checks are made on key quality aspects, technical integrity, platform limits etc. with a comprehensive audit applied pre-release (ideally). The latter may need to consider a combined future deployment state where multiple parallel development streams converge into a single production org.

Note, an implementation audit is build-focused (or solution oriented) and should not assess the fit-for-purpose nature of the functionality in respect to business requirements (i.e. the problem-to-solution mapping). The only exception to this arises where an obvious mapping to a standard feature is missed resulting in a “gap” that is unnecessarily filled by a technical solution option.

Note, in order to cut down on the time required to conduct the audit, access to individuals who can describe the functional intent is imperative. In the internal case the programme/project architect should be leading the audit and should be aware of the functional design context.

Note, before diving into the detail of the implementation, it can be highly valuable to re-define the high-level solution architecture (HLSA) in current state terms. The key point being that the macro-level view is often distorted by micro-level design decisions made during the project course. A periodic check is useful to ensure that this organic change is understood and that the integrity of the original architectural vision is maintained.

Indicative review areas are listed below (this is not an exhaustive list).

Declarative build environment
1. Identify platform limits that are reaching a high percentage of utilisation that may present risk to scalability and future phases of development.
2. Identify any future maintainability risk presented by the conventions applied in the definition of configuration elements (e.g. naming conventions, opportunities for best practice improvements).
3. Identify functional areas where a mapping to standard features could be achieved.
4. Identify security vulnerabilities (org-access, sharing model etc.).

Technical customisations
1. Identify risks to data integrity and application responsiveness.
2. Document risks to scalability and extensibility imposed by platform execution limits.
3. Document deviations from best practice technical patterns, conventions and coding standards.
4. Identify security vulnerabilities introduced by technical componentry.
5. Document deviations from best practice development practices and process.

Integration architecture
1. Identify risk associated with deviations from best practice integration patterns and practices.
2. Identify opportunities to reduce limits consumption.
3. Identify data integrity and scalability vulnerabilities related to the current state integration architecture.

Identity management
1. Identify risk associated with implemented single sign-on processes and related services/infrastructure.
2. Document deviations from best practices related to identity management.

Salesforce OpenID Connect

In addition to the proprietary Authentication Provider types (Facebook, Janrain, Salesforce) Winter ’14 (v29.0) added support for the OpenID Connect protocol, enabling off-platform authentication via any compatible OpenID Provider (Google, PayPal, Amazon and others). This post provides a basic implementation overview.

OpenID Connect what is it?
OpenID Connect is a lightweight authentication (identity verification) protocol built on top of modern web standards (OAuth 2.0, REST and JSON). OpenID Connect supersedes OpenID 2.0 and amongst other goals is intended to promote interoperability, be accessible to developers and to provide greater support for mobile use cases.

The OpenID Connect standard was recently ratified by members of the OpenID foundation and announced publicly at the Mobile World Congress in Barcelona on 26th February 2014. The standard is supported by Google, Microsoft, Salesforce, AOL, Ping and others.

The protocol works on the principle of an “Authorization Server” or OpenID Provider (OP) (e.g. Google) authenticating users on behalf of a “Client” or Relying Party (RP) (e.g. Salesforce). With the current implementation Salesforce can be configured as an RP but not an OP. In this context an Authentication Provider is configured in Salesforce with the type set to OpenID Connect. Note, OP is also referred to as IdP; confusingly we have three seemingly interchangeable terms – however OP is the term defined in the standard.

Please refer to the excellent OpenID website for more details in regard to specifications, implementations, useful FAQs etc..

Identity Use Cases
In simple terms, users can single sign-on (SSO) into Salesforce using external web application credentials. A Salesforce user record can be created just-in-time on the first authentication event, subsequent events for the user map to this user record. Note, usefully it’s also possible to map existing users via the [Existing User Linking URL].

A key use case here is B2C portals and communities, however internal users and partners can also use this authentication approach. For internal users SSO via a Google Account could make sense where an enterprise has adopted Google Apps for Business.

What’s important to understand is that Salesforce supports external Authentication Providers for all user types (with the exception of Chatter External) with SAML, OAuth and OpenID Connect protocol support. This provides a high degree of flexibility in the terms of how identity management is implemented.

Implementation Steps
The Salesforce help provides a detailed series of steps to follow. The following high-level example shows a basic implementation of a Google Accounts Authentication Provider.

1. Register an OpenID Connect Application.
As per Salesforce Connected Apps, within your Google account an application is required, within which OAuth is configured. Applications are created via the Google developers console. The Redirect URI won’t be known until the Authentication Provider is configured in Salesforce.

7. Google Developers Console

2. Create an Authentication Provider.
Consumer Key = Client ID
Consumer Secret = Client secret
Scope = profile email openid

8. Salesforce Auth Provider Detail Page

3. Update OpenID Connect Application.
Copy the Salesforce [Callback URL] to the Google [Redirect URI] field and save.

4. Add Authentication Provider to Login Page (Standard App or Community).
This step requires a My Domain where the internal app login page is customised.

9. Salesforce Login Page Customisation

5. Configure a Registration Handler class.
Within the Authentication Provider configuration a Registration Handler class can be specified, this class implements the Auth.RegistrationHandler interface and is invoked to create new users or map to existing users in response to authentication events.


global class GoogleAccountsRegistrationHandler implements Auth.RegistrationHandler{
  global Boolean canCreateUser(Auth.UserData data) {
      //Check whether we want to allow creation of a user.
      return true;
  }

  global User createUser(Id portalId, Auth.UserData data){
      if(!canCreateUser(data)) {
          //Returning null or throwing an exception fails the SSO flow.
          return null;
      }
      //Populate the new User record - username, email, alias, profile etc.
      //(implementation specific, omitted here for brevity).
      User u = new User();
      if(data.attributeMap.containsKey('sfdc_networkid')) {
          //We have a community id, so create a user with community access.
          //.. populate community (portal) user specific fields.
      } else {
          //.. populate standard user specific fields.
      }
      return u;
  }

  global void updateUser(Id userId, Id portalId, Auth.UserData data){
      User u = new User(id=userId);
      //.. update fields if required.
      update(u);
  }
}

Testing
1. Initialisation
The [Test-Only Initialization URL] provided on the Authentication Provider detail page can be pasted into a browser address bar and used to examine the raw output provided back from the Authorization Server.

1. Test Initialisation Output

2. Authentication
The following screenshots show the basic authentication flow. Note, as with SAML based SSO, errors are appended to the URL querystring.

Customised login page showing the Google Account Authentication Provider.

2. Login Page With Auth Providers

Clicking the button redirects the browser to the Google Service Login page to authenticate (unless a Google Accounts session exists).

3. Google Service Login Page

For new users the user consent page is displayed. This page can be customised via the Google Developer console.

10. User Consent page

Authentication errors are appended to the URL, as-per SAML authentication errors.

4. Error Page 1

5. Error Page 2

Finally, a Salesforce session is established. New users can be provisioned automatically, script within the Registration Handler class controls the configuration of such User records.

6. Auto-provisioned User Detail Page

Implementation Notes
1. Google Developer Console. Remember to turn on Google+ API access – this is required.

2. The auto-created Registration Handler class template must be modified as the default code will fail in many cases.
canCreateUser is false by default – in most cases this must be changed to true.

The combination of values below doesn’t work if the user isn’t configured with a US locale.
u.languageLocaleKey = UserInfo.getLocale();
u.localeSidKey = UserInfo.getLocale();
u.emailEncodingKey = 'UTF-8';
u.timeZoneSidKey = 'America/Los_Angeles';

3. Activation code entry appears to fail with an Internal Server Error, but the code has been successful so subsequent attempts will succeed. This may be specific to my context.

4. As a best practice, map the user Id from the OpenID Provider (the Google Account Id in this example) into a custom field on the User record. This provides a robust mapping between the two system identifiers that can be used by the Registration Handler script (a sketch of this follows these notes).

5. Access can be revoked via the Third Party Account Link related list on the User detail page.

6. Make the OpenID Connect Application name meaningful to the end-users, the Google user consent page will display this in a “[AppName] is requesting access” format, anything weird or meaningless may cause concern.
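As a sketch of note 4, within the Registration Handler methods the provider’s user identifier is available on Auth.UserData and can be stamped onto a custom field; Google_Account_Id__c below is a hypothetical custom field on the User object.

//Sketch: persist the OpenID Provider user id (data.identifier) for robust re-mapping.
global void updateUser(Id userId, Id portalId, Auth.UserData data){
    User u = new User(id = userId);
    u.Google_Account_Id__c = data.identifier;
    update(u);
}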

Protocol Flow
Please treat the diagram below as indicative only, I put this together from a combination of browser profiling and assumptions made on the basis of reading the OpenID Connect specification.

As always, corrections would be appreciated.

OpenID Connect - SF Process Flow

Final Thoughts
OpenID Connect support is a highly useful extension to the Authentication Provider platform capability. For B2C portals and communities it makes sense to offer as many sign-in-via options as possible (Facebook, Google etc.) removing as much friction to user adoption as possible. As a personal opinion, over time I’m becoming less tolerant to having to register a new user account on each and every authenticated web site I interact with, particularly where I view the interaction as transient. Some users may have concerns around data security, i.e. by signing-in with a Google account are they implicitly giving Google access to the data held in the portal? In the majority case however, users will appreciate the improved user experience. For internal users OpenID Connect makes single sign-on via one or more of a multitude of current and future web platforms incredibly straightforward to implement. In the Salesforce context the key challenge will ultimately relate to reconciliation and rationalisation of identities (i.e. User records).

References
http://openid.net/connect/
http://openid.net/developers/specs/
https://developers.google.com/accounts/docs/OAuth2Login
https://console.developers.google.com/project