Salesforce DX

This post provides a practitioner’s introduction to Salesforce DX (Developer Experience), a new software development lifecycle paradigm for Salesforce. Please note that at the time of writing (Summer ’17 – v40.0) Salesforce DX is at public beta status.

Salesforce DX (or SFDX) – What is it?

A new set of tools and practices to accelerate the software development lifecycle on the Salesforce platform. The tools in question enable established (technology agnostic) software development models to be applied to the Salesforce context efficiently; the practices provide guidance in how to do this effectively and at scale.
Many aspects of Salesforce DX have become standard practice over the last few years; source-driven development using tools such as Jenkins and git being a good example. Other aspects such as artifact-based development, the CLI and Scratch Orgs introduce a fundamentally new paradigm.

The Salesforce development model has historically been org-centric in concept, with the entire set of metadata types representing a single monolithic code-base. This conceptual model has benefits in respect of ensuring complex interdependencies are not overlooked and that code structure and conventions are considered fully in a standardised approach. In the org-centric model – and typically not exclusively – a source-code control repository is mapped at the org level and deployments are org-based, not feature- or app-based. Salesforce DX provides the tools and practices to change the development model from org-centric to artifact-based, where an artifact represents a collection of components (an org subset) developed, tested and deployed together in safe isolation.

SFDX Tools

CLI (Command-line Interface – sfdx)

The new SFDX CLI provides the foundation for Salesforce DX with capabilities that span authorisation, org management, metadata creation, data population, packaging and deployment. The CLI commands are extensive and clearly structured within the top-level force namespace.

Command examples:

//CLI create a new project (specific DX folder structure plus config file)
sfdx force:project:create -n <project name>

//CLI needs to authorise against the DevHub (OAuth web-based flow or for CI JWT Bearer flow)
sfdx force:auth:web:login -d -a "Hub Org"

//CLI create scratch org (or set existing as default)
sfdx force:org:create -s -f config/project-scratch-def.json

//CLI push source code from local (respects the .forceignore file placed in root folder)
//Conflicts can be overwritten --forceoverwrite
//Note - Local project is the source of truth.
sfdx force:source:push

//CLI run unit tests
sfdx force:apex:test:run

A key concept is the local project, which is the primary source of truth. The project structure differs from the Metadata API folder structure: higher-level metadata types such as Object are broken down into sub-folders and individual files for sub-types such as list views and fields. This new sfdx structure (or source shape) is designed for development and source-code control system interactions, whereas the longstanding Metadata API structure relates primarily to deployment. Conversion between the two formats is necessary when creating a new sfdx project from an existing org (or simply from existing metadata) and prior to deploying an sfdx project to a Salesforce org (not Scratch Orgs). The CLI provides commands to perform the format conversions, as illustrated below.

//Create DX project > Create folder for metadata files then ..

// Retrieve unpackaged metadata
sfdx force:mdapi:retrieve -r ./mdapipkg -u <username> -k ./package.xml

// Retrieve the metadata for a package
sfdx force:mdapi:retrieve -s -r ./mdapipkg -u <username> -p <package name>

// Convert to SFDX format.
sfdx force:mdapi:convert --rootdir <retrieve folder name>

// Convert to Metadata API format.
sfdx force:source:convert -d <metadata folder name>/ --packagename package_name

Scratch Orgs

A Scratch Org is a temporary org that can be created quickly and configured with predefined settings and preferences (Edition, languages, platform features etc.) via a JSON definition file. Scratch Orgs are easily created and deleted via the CLI; up to 50 can be created per-day (per Dev Hub) with up to 25 active concurrently, deletion occurs automatically after 7 days.
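The definition file is concise JSON; a minimal illustrative example follows (the attribute values are examples only, and the orgPreferences block reflects the beta-era schema):

//config/project-scratch-def.json
{
  "orgName": "Acme Feature Development",
  "edition": "Developer",
  "features": ["MultiCurrency"],
  "orgPreferences": {
    "enabled": ["S1DesktopEnabled"]
  }
}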

A Scratch Org is typically used for development and testing purposes and plays an enabling role in build automation. Note, unlike sandboxes, Scratch Orgs are not constrained by a refresh cycle, but nor are they a clone of a source org configuration. A typical development workflow is to create a short-lived feature branch in the Source Code Control system (SCC), create a local sfdx project (from the branch metadata) and a Scratch Org, complete development and testing in the Scratch Org, commit the change back to SCC and then convert and deploy the change to a Sandbox. In a Continuous Integration (CI) environment the CI server (Travis CI, Jenkins etc.) may initiate an automated build from the commit event. In such a scenario, headless Scratch Orgs may be used for automated testing (across editions, languages etc.).
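A minimal CLI sketch of this workflow is shown below; the branch name and org alias are illustrative.

//Create a feature branch and a matching Scratch Org
git checkout -b feature/discount-calc
sfdx force:org:create -s -a discount-calc -f config/project-scratch-def.json

//Iterate - push local change, run tests, pull declarative change made in the org
sfdx force:source:push
sfdx force:apex:test:run
sfdx force:source:pull

//Commit the change back to SCC
git add .
git commit -m "Discount calc feature"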

The SFDX CLI provides commands for automating the population of data in a Scratch Org. This can be via CSV file loading or, more interestingly, via org-to-org copy through SOQL query export and subsequent import.
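For example, the data tree commands export a record hierarchy via SOQL and import it into the default Scratch Org; the query and plan file name below are illustrative.

//Export Accounts with related Contacts from a source org
sfdx force:data:tree:export -q "SELECT Name, (SELECT LastName FROM Contacts) FROM Account" -u <source org username> -d ./data -p

//Import into the default Scratch Org using the generated plan file
sfdx force:data:tree:import -p ./data/Account-Contact-plan.json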

Finally, a key feature of the CLI is the automated tracking of change (the diff) between the current local project and the default Scratch Org. The CLI push and pull commands that facilitate the movement of change between the local project and the Scratch Org automatically process changed components only.
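For example (assuming a default Scratch Org has been set):

//Show the diff between the local project and the default Scratch Org
sfdx force:source:status

//Move changed components only, in either direction
sfdx force:source:pull
sfdx force:source:push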

Dev Hub

The SFDX CLI must authorise to a Dev Hub before Scratch Orgs can be created; each Scratch Org is related to a parent Dev Hub. The Dev Hub is a production Salesforce org (Enterprise Edition or Unlimited Edition) – the feature can be activated from the Setup menu. A Dev Hub trial org can be created to enable safe experimentation with SFDX. Dev Hub trial orgs expire after 30 days and are limited (40 Scratch Orgs created per day, 20 active concurrently); why this differs subtly from the standard limits is unclear.

Authorisation of the Dev Hub org (and indeed any org) by the SFDX CLI is possible via the OAuth Web Flow or the JWT Bearer Flow. The latter is preferred for system authentications such as those conducted by a CI server.
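A sketch of the JWT Bearer flow authorisation is shown below; the Connected App client id, certificate key file and username are placeholders.

//Headless authorisation for CI - no browser interaction required
sfdx force:auth:jwt:grant -i <connected app client id> -f <path to server.key> -u <username> -d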

For ISVs (and others) working with namespaces, the Dev Hub provides a Namespace Registry capability which (as the name suggests) allows Developer Editions containing namespaces to be registered. Once the org link is complete, Scratch Orgs can be created as normal. This capability is great for ISVs developing in a namespace-registered org (as opposed to utilising a separate packaging org to apply the namespace).

IDE2

The new Force.com IDE2 is an Eclipse plug-in built on the SFDX CLI. Additional features of the IDE2 include the Lightning Code Analyser and Apex Code Analysis tools. The former wraps the capabilities of ESLint and the Salesforce Lightning CLI to evaluate JavaScript code against standard or custom rule sets; the latter covers syntax error detection and highlighting plus code auto-completion. The IDE2 also provides an embedded browser for editing Lightning Pages.

On initial inspection there didn’t appear to be any means to directly log in to or import metadata from an existing org. Presumably this must be done externally via the SFDX CLI.

Finally, the new IDE2 provides integrated tools for working with Git repositories, a key facet of the envisaged developer workflows centred around Salesforce DX.

Summary

At the time of writing, the available SFDX documentation relates primarily to the technical aspects of the tools, there is minimal information relating to process guidance (or practices). Whilst it’s relatively easy to envisage how the new tools and techniques could be integrated into development, test and deployment workflows, best-practice guidance in such areas would certainly provide confidence and accelerate adoption.

Historically, a key sticking point for source-based development (and deployment in general) in the Salesforce context has been Metadata API coverage. In short, not all metadata types are supported and therefore automated deployments (whether Change Set or API based) are typically accompanied by pre- and post-deployment steps that manually address the gaps. As there’s no significant change in this regard, it’s likely that Scratch Orgs will require the same approach – however, considering Scratch Org use cases will typically be code-related, this may have limited actual impact.

In general, however, Salesforce DX is a considerable step forward in terms of applying established software development lifecycle practices to the Salesforce context. The CLI alone is incredibly impressive and provides significant flexibility in respect of development, testing and release processes. It will be interesting to see how tools vendors react to the availability of the CLI as a foundation for future, evolved IDE capabilities. Hopefully Salesforce DX represents the first step of many in this direction.

Resources

Salesforce DX Developer Guide (Beta)
Salesforce DX on Trailhead
Salesforce DX Chatter Group

Salesforce ISV Environment Strategy


Please note – This content was previously posted on the audit9.com site directly and is re-published here as part of the consolidation of 2 blogs.

This post provides an overview of the various environments involved in delivering ISV solutions to the Salesforce AppExchange. As will be obvious to architects working outside of the ISV domain, the ISV environment strategy has some commonality with the non-ISV approach in respect of development and testing activities, but also introduces a further level of complexity and environment roles unique to the AppExchange distribution model.

Primary Org Definitions

Packaging Org

For small-scale, low-complexity managed package developments the packaging and development orgs may be combined.

1. Package/Patch Upload
The Packaging Org is where the Managed Package is created and its namespace prefix defined. Package uploads in either Managed Beta or Managed Release state are initiated from this environment.

2. Patch Development Org – PDO
PDOs are short-lived orgs created specifically for Patch development. Only certain components can be modified in a PDO. Once ready for distribution, a Patch can be uploaded and pushed to Subscriber Orgs.

3. Push Upgrades
Uploaded Major Releases and Patches can be scheduled for push distribution to selected (or all) Subscriber Orgs. Package subscribers have no control over whether to accept pushed changes and as such it is not advisable (unless with tacit agreement) to push changes that impact the end user experience in any material way.

4. Localisation
Translation Workbench based localisation of internationalised resources.

Partner Business Org

A Partner Business Org can be provisioned upon partnership agreement with Salesforce or the same environment roles can be established within an existing production org.

1. Environment Hub
The Environment Hub enables partner orgs to be created for Development, Test and Trialforce usage. Single Sign-on can be implemented across connected orgs. A hub member org can only be connected to one hub at any given point in time. Where Trialforce is being implemented, the TMO and Packaging Org must be connected to the same Environment Hub. The Environment Hub can create TSOs directly, although bypassing a TMO prevents implementation of custom branding.

2. AppExchange Publishing Org – APO
This org role is for AppExchange management and is where Private and Public Listings and the Publisher Profile are stored.

3. Licence Management Org – LMO
This org role relates to the License Management App, a custom app installed by Salesforce which enables management of Packages, Package Versions, Licenses and Subscriber data resulting from AppExchange interactions. The LMO is also the entry point for Subscriber Org Login Access.

4. Channel Orders Application – COA
This org role relates to the Channel Orders app (also termed the Partner Order app), a custom app installed by Salesforce which supports automation of Service Order submissions (relating to AppExchange sales) to Salesforce.

5. Usage Metrics Reporting
Usage Metrics provides a daily update of statistics relating to Custom Object record counts (snapshot) and Visualforce Page access and performance (summaries) across all production Subscriber Orgs. No opt-in is required for Package subscribers. The usage data is written to objects that are API-accessible only; a visualisation app is available on the AppExchange to provide convenient access. Usage Metrics provide the basis for monitoring usage patterns and a proactive customer retention process.

Trialforce Orgs

AppExchange listings can offer customers the ability to trial an app within a new, pre-configured Trial Org (package, data and users). This highly valuable capability is implemented with Trialforce. Trial organisations can also be created directly from Signup forms hosted on an external website or via the API. In addition to offering free trials the Trialforce solution provides a convenient option for the quick creation of pre-configured orgs for QA or demonstration purposes.

1. Trialforce Management Org – TMO
An optional environment used for centralised management of TSOs for different packages. TSOs can also be created directly from the Environment Hub; however, this direct approach does not support custom branding. The TMO approach does enable custom branding for Login Pages and Email Templates.

2. Trialforce Source Org – TSO
The TSO is where the package is installed (or upgraded) and configured to reflect the trial experience (user profiles, sample data etc.). A snapshot is taken of the Source Org state to create a template. Note, TSOs expire after a 12-month period unless a partner case is raised to request an extension. TSOs are linked to the AppExchange Publishing Console manually; once a link is in place, templates are listed on the Trial Templates tab.

3. Trialforce Template
Templates listed in the Publishing Console can be submitted for Security Review. This review takes less time than the Package Security Review. Once approved, a template can be connected to an AppExchange listing, at which point the link to start a free trial appears on the listing.

Note, each time a package is updated the TSO must be upgraded and a new template created, reviewed and connected to the AppExchange listing.

The ISVforce Guide provides further detail on the environments listed in the preceding sections.

Custom Metadata Types – Salesforce Winter ’16

[Screenshot 1: Create New Custom Metadata Type]

This post provides a high-level overview of the Winter ’16 enhancements to the Custom Metadata Types platform capability.

Custom Metadata Types – The App Configuration Engine for Force.com

There are in my view two distinct ways to consider Custom Metadata Types (CMT); firstly as analogous to Custom Settings and secondly as an architecturally significant paradigm shift in regard to platform extensibility. In the former case CMT can be viewed as a straightforward, almost like-for-like replacement to List Custom Settings – with the added benefit that records can be deployed as metadata. There are of course considerable differences between the two, however conceptually this view is simplistic and approachable. In respect to the latter case, prior to CMT platform extensibility for Force.com could be viewed as a vertical model where new instances of pre-defined metadata types are created to deliver custom interactions. Custom Metadata Types enable a horizontal extensibility model where new type definitions can be introduced and instances created as metadata. The platform is no longer constrained to the pre-defined set of metadata types, developers have freedom to extend the model and deploy both the types and instances freely across environments. This horizontal extensibility model enables a host of new use cases such as bespoke development frameworks that abstract or extend the Force.com platform, an idea variously described as Platform-on-a-Platform or Custom Platform.

Custom Metadata Types were introduced as a beta release in Spring ’15, with the GA release in Summer ’15. The enhancements added in Winter ’16 appear to represent another milestone on the journey toward an increasingly capable platform extensibility model where custom types can be related to standard types, perhaps to override or extend platform behaviour. This is definitely a key area of the Force.com platform to pay attention to over subsequent releases.

Note, the native user interface for Custom Metadata Type administration shown in the following screenshots is a new Winter ’16 feature, previously Metadata API calls were required to define types and manage associated records.

Key Concepts

Metadata Type
As the screenshot below shows, Custom Metadata Types support Custom Fields and Page Layouts, all very consistent with the Custom Object and Custom Setting equivalents (although not layouts in the Custom Setting case). At this point it’s worth considering the fact that all standard metadata types are comprised of a collection of attributes, for example an ApexClass has Name, Body attributes in the same way a CustomField has Name, Label, DisplayType attributes. This is how Force.com platform metadata is structured. The difference between a CMT and a Custom Object or Setting isn’t the definition it’s the type of data stored; with a CMT we’re recording metadata. Taking a somewhat obscure example, we could invent a new proprietary platform language called Opex (;-)), define a CMT called OpexClass with a Body attribute etc., populate it with metadata records that represent a System namespace and ship some actual ApexClass instances to translate and run the Opex code. I’ll concede this isn’t a practical example, however the point I hope should be clear.

[Screenshot 2: Custom Metadata Type]

The protected component attribute applies to Managed Packages; it controls visibility of the CMT in a Subscriber Org.

When defining a new CustomField for a Custom Metadata Type there are limitations to the field types that can be specified; as per Custom Settings, there are no picklist or relationship fields (as yet anyway).

[Screenshot 3: New Field]

Field Manageability is a new concept to understand in Winter ’16, again in relation to Managed Packages. In short, this setting provides field-level editability control, the selectable values being:

Locked after release : Field value is locked after deployment (includes the developer org).
Subscriber editable : As the name suggests; deployed (developer) updates will not override subscriber field value changes.
Upgradable : Locked in the subscriber org, developer can edit and deploy upgrades.

[Screenshot 4: Custom Metadata Type with Fields]

Metadata Record
A Metadata Record is really where Custom Metadata Types head off on their own path; a Metadata Record, as the name implies, represents an instance of the metadata type as a record that can be manipulated by the Metadata API, deployed via Change Set and packaged. The significance of this is obvious but incredibly powerful. It becomes possible, for example, to track Metadata Records using Source Code Control tools and to deploy metadata plus configuration via a single deployment transaction. No more 2-stage deployments or clumsy post-install data loading.
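As an illustration, both the type definition and its records can be referenced in a package.xml manifest; the Widget type and record name below are hypothetical.

<types>
    <members>Widget__mdt</members>
    <name>CustomObject</name>
</types>
<types>
    <members>Widget.Record_One</members>
    <name>CustomMetadata</name>
</types>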

[Screenshot 5: New Metadata Record]

As can be seen in the preceding screenshot it is possible to define a Protected Component setting at the Metadata Record level. This enables the type to be public but records to be selectively hidden in the subscriber org – a very flexible capability.

[Screenshot 6: Metadata Records]

Key Benefits

As mentioned in the introduction, the Custom Metadata Type platform feature is still emerging; in my view at least, the most interesting aspects are potentially yet to be revealed. However, there are definitely some key benefits to highlight with the Winter ’16 release.

For Enterprise : Manual steps within an otherwise automated Application Lifecycle Management process can cause compliance issues and release management inefficiency. Custom Metadata Types enable application configurations to be deployed as part of a seamless, one-step deployment process thereby removing manual friction. Configuration management tools can also track and version control the application definition and its configuration state.

For Partners : A long time issue for ISVs has been the deployment of application configuration data as part of the managed package installation process. Post install scripts provide one option, but creating data via Apex script doesn’t scale well or deliver the required fine-grained control over subscriber org configurability and upgradeability. Custom Metadata Types address both issues.

The screenshot below shows both a Custom Metadata Type and Metadata Records added to a Managed Package definition.

[Screenshot 7: Packageable]

Note, the benefits stated above are the practical benefits of the capability in its generic sense; the actual benefit for many developers will be the flexibility the new extensibility model affords.

Implementation Considerations

Audit trail : Changes to both Custom Metadata Types and Metadata Records are visible via the Setup Audit Trail; this is new in the Winter ’16 release.

[Screenshot 8: Audit Trail]

Metadata Record Access : Metadata Records can be accessed via SOQL query only; there is no direct Apex support. Note the __mdt suffix.

Widget__mdt[] widgets = 
 [select QualifiedApiName, Height__c, Width__c from Widget__mdt];

Metadata Record Modification : Custom Metadata Types do not support DML operations via Apex; the Metadata API must be used. For use cases where configuration data needs to be created via code, CMT may not be an effective approach.

Relationship Fields : At some stage (Spring ’16 perhaps) in the future evolution of Custom Metadata Types I would expect support for relationships to be provided. This I believe is where the feature will really take-off.

Apex Testing : Currently Metadata Records are visible in Apex unit tests (without SeeAllData=true); it’s likely that simulated test data will be supported in a future release to enable testing under different configurations.

Permissions : The permissions model for Custom Metadata Types is limited; Metadata Records are either visible or not at the org level. A finer-grained permission model, perhaps just at the record level, would be an obvious progression.

References

https://developer.salesforce.com/blogs/engineering/2015/08/custom-metadata-types-winter-16.html

https://help.salesforce.com/HTViewHelpDoc?id=custommetadatatypes_overview.htm

http://releasenotes.docs.salesforce.com/en-us/winter16/release-notes/rn_forcecom_development_custom_metadata.htm

Salesforce Deployment with Ant

This first video post provides a brief introduction to scripted Salesforce deployment with the Force.com Migration Tool and Ant.
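For reference, a minimal build.xml sketch for the Force.com Migration Tool follows; the target and property names are illustrative, with credentials assumed to reside in a build.properties file.

<project name="SalesforceDeploy" default="deploy" xmlns:sf="antlib:com.salesforce">
    <property file="build.properties"/>

    <target name="deploy">
        <!-- Deploys the metadata under ./src and runs all tests -->
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src" runAllTests="true"/>
    </target>
</project>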

Implementation Guiding Principles

In most cases software implementation projects start off with a clear vision of the various macro-level influences, constraints and directives that define the shape of the project at the outset. During the initiation stage the clear, consensus view on the project principles influences fully the micro-level decisions taken in terms of architecture, solution design, resourcing, scope, prioritisation, schedule and so on. The challenge for many projects is maintaining the purity of the vision across the course of the project in light of variations in uncontrollable factors such as the business context, risks becoming reality, project resources and so forth.

An effective tool to mitigate this loss of identity and direction, and the consequential impact this can have on confidence and productivity, is the definition of an unambiguous, concise set of guiding principles that are communicated early and frequently. Additional to the benefits related to a clear and strongly defined direction, principles support increased productivity through empowerment; if low level decisions align with the principles then action can be taken without time-expensive consultation.

Corporates do this, why not projects?
This approach is well established in business, with many corporates defining an aspirational, future state vision and set of guiding principles that underpin the delivery of the vision through applied influence to all activities undertaken and decisions made. Guiding principles can be very effective particularly where a culture is established that prioritises all actions taken in relation to their conformance to the principles. This level of application requires absolute confidence across the business that the principles are current, meaningful, complete and beneficial to all parties. The value of this approach at a business level can apply equally to a project.

Key Points
No more than 7. The primary value of guiding principles is the strength of the message, achieved through brevity. Telephone numbers were originally set at 7 digits as this was believed to be the most people could be expected to remember. 7 therefore seems a reasonable limit in this context.

Revisit during retrospectives. Stale principles will damage the integrity of the approach. Agility should be paramount such that principles are current and enable changes in the direction of travel whilst the project is in flight (excuse the metaphors here).

Communicate frequently. All project artefacts (slides, documents etc.) should state the guiding principles and relate the content to the relevant principles, noting deviations.

All design decisions should relate to one or more principles. See above point.

Prioritisation. A simple prioritisation scheme (high, medium, low; 1,2,3 .. n) can be effective in resolving conflicts between principles.

Buy-in. All project stakeholders must approve the principles and accept the impact. Without complete buy-in the integrity of the approach is diminished.

Principles can be goals. Principles are often directive in nature, goals are an interesting extension to this.

Use Work.com. The Goals functionality of Work.com can provide a useful tool to manage, communicate, collaborate and report on the principles. This functionality also enables user-level goals to be mapped to the higher level goals.

Alternatives
Vision statements can be difficult to articulate, particularly where there are multiple unconnected concerns.

Project charters can be too cumbersome to be a realistic communication tool. It’s human nature to read once and subsequently ignore any project documentation that requires a time investment.

In both cases above, guiding principles can provide a complementary, abbreviated format.

Examples
Maintainability of the solution by client administrator.

Future extensibility without technical resource.

Sales productivity improvement is the key objective.

Declarative build over technical solution options.

Quality over expediency.

Technical excellence.

Buy over build.

Alpha 1 bug count must be less than 100.

The solution will be productised internationally.

The project delivery date must be achieved.

Release early and iterate.

Business utility to be delivered every 2 weeks.

Only user-accepted features to be released.

User stories estimated at 5 days or more to be split.

Salesforce Environment Hub

Environment Hub was announced back in August 2012 and was initially made available as a Pilot feature of the Winter ’13 release. This post follows up (some 2 years later) on my initial interest in understanding the utility Environment Hub delivers in the context of environment/user management.

Environment Hub – What is it?
In simple terms, Environment Hub is an org administration tool that enables multiple orgs (of any type) to be associated and accessed from a central location. Each connected org is termed a Hub Member, with an imposed constraint that each Hub Member org can only be parented by one Hub org. It is therefore imperative that Hub Members are only added where there is no contention over the appropriateness of the parent Hub. Commonality of target production org or packaging org is a good starting point for this consideration, as is the rule that client orgs should never be parented by an ISV or Consultancy Hub org. The parent Hub org should always be the most-accessed org, the credentials for which will become those by which all org access is made.

For larger programmes of work, Environment Hub provides a highly useful means to catalogue the org estate and to provide SSO between the constituent orgs. This latter point enables reduced password maintenance, simplified access and centralised user administration, in terms of deactivating a user account in one place only and the ability to view login history in one place.

[Screenshot: Environment Hub tab]

Environment Hub is enabled by Salesforce support who will require confirmation to proceed with the change. From experience, the enablement process can take a few days.

Key Features

Connect Organisation
Connecting Hub Members to the Environment Hub occurs via the entry of an administrator username for the target org and subsequent OAuth authentication and authorisation flow. The User Permission “Connect Organisation to Environment Hub” is required.

[Screenshots: Connect Organisation flow, including the OAuth authorisation step]

Connected Organisation Detail Page

Once an org is connected to the Environment Hub, interesting detail such as the Edition, Org Status and Org Expiry date is revealed. It is also possible to add custom name and description attributes to the Hub Member, which I really like; each org in the estate should be justifiable and have a specific purpose – here we can capture this plus an accountable contact etc. A very useful means of cataloguing and tracking the org estate.

The Company Detail page in the connected org will now show the Environment Hub Org Id value as below.

[Screenshot: Company Information page showing the Environment Hub Org Id]

In addition to the “User Added” origin, Hub Members are also auto-discovered using existing org-to-org relationships as below.

Auto-discovery types: sandbox to production, patch org to release org, Trialforce Source Org to Trialforce Management Org, release org to LMO.

Create Organisation
New development/test/demo orgs can be created directly within the Environment Hub – this replaces the functionality previously exposed via the Salesforce Partner Portal, for partners at least. I’m unclear how the types of org offered are affected by partnership status etc., or indeed whether the Environment Hub itself is available only to partners.

[Screenshots: Create Organisation flow]

Org types offered:
Development = PDE org.
Test/Demo = pick an Edition for a 30-day time-expired org.

Useful information on the distinction between different org types can be found here.

Single Sign-on
SSO can be enabled between each Hub Member and the Hub org; in implementation terms this means the Hub org is configured as an Identity Provider, with a Service Provider configured (automatically via the SSO enablement process) in both the Hub Member org and the Hub org. Both IdP-initiated (via the Environment Hub tab) and SP-initiated (via enablement of the Service Provider as a Login Page Authentication Service) SAML flows are supported. The latter means SSO can be enforced as the only authentication means – thereby switching off standard Salesforce authentication entirely. Each Hub Member org must have a My Domain configured for SSO to function.

Note, enabling SSO creates a Service Provider in the Hub org; default permissions are provided to the Standard User and System Administrator profiles only. It is therefore a requirement to ensure relevant permissions (Profile or Permission Set) are provided.

[Screenshot: Connected Organisation detail page – SSO]

Within the Identity Provider, the Hub Member specific Service Provider configuration is set with “Subject Type = User’s ID determined by Environment Hub”; this setting delegates the user mapping to the Environment Hub settings, defined as below.

Three types of SSO user mapping are supported:
Method 1. Mapped Users – 1-to-1 mapping of usernames, defined per-user.
Method 2. Federation Id – boolean state; if set, an attempt is made to map users on common Federation Identifier values.
Method 3. User Name Formula – formula expression; an attempt is made to map users via the formula result.

Where multiple mapping types are enabled the precedence order above applies. For SSO between a sandbox and production org, user mapping is implicit and not configured as above.

Customisation
Enabling Environment Hub adds the EnvironmentHubMember standard object, which is open to the declarative build model; custom fields, page layouts, validation rules, workflow rules, approvals etc. In addition, Apex Triggers can be defined on this object, use cases for which may include notifications relating to status changes etc.
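A minimal trigger sketch for such a notification is shown below. Note, the status field name is hypothetical and Feed Tracking would need to be enabled for the object; this is illustrative only.

trigger EnvironmentHubMemberTrigger on EnvironmentHubMember (after update) {
    // Post to Chatter when a Hub Member field of interest changes.
    List<FeedItem> posts = new List<FeedItem>();

    for (EnvironmentHubMember m : Trigger.new){
        EnvironmentHubMember oldMember = Trigger.oldMap.get(m.Id);

        if (m.Org_Status__c != oldMember.Org_Status__c){ // hypothetical custom status field
            posts.add(new FeedItem(ParentId = m.Id,
                Body = 'Hub Member status is now ' + m.Org_Status__c));
        }
    }
    if (!posts.isEmpty()) insert posts;
}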

A second standard object, EnvironmentHubInvitation, is also added; however this object is inaccessible from the Setup menu (although it can be reached via /p/setup/layout/LayoutFieldList?type=EnvironmentHubInvitation&setupid=EnvironmentHubInvitationFields). I’m assuming this object to be either forward-looking or a legacy of an invitation-based connection model (as per Salesforce to Salesforce). Regardless, records do seem to be added to this object, the logic for which is unclear.

Related Permissions
Hub Org –
Manage Environment Hub
Environment Hub App and Tab access
EnvironmentHubMember standard object access permissions
Environment Hub Connected App
SSO Service Providers e.g. [00_____0000Cj__] Service Provider Access for SSO (by Profile or Permission Set)

Hub Member Org –
Connect Organisation to Environment Hub

Environment Hub in Practice
For consulting projects Environment Hub offers significant value in terms of management and tracking of the org estate and centralisation of user administration. In practice this would require all project contributors to access production (Hub) as the primary org and SSO into secondary (Hub Member) orgs as required. In this model, production could be utilised for project collaboration, bug-tracking, project management etc., which is a common approach. The obvious downside is the requirement to license the project team in production – a big challenge on many projects where user licenses aren’t provisioned until a late stage or business use runs parallel to project activity. In such cases, project contributors could be provisioned with low-end user licenses on a temporary basis. Ideally we could do this with a Chatter Plus license, as users would only require Chatter, Custom Objects (10 or fewer) and Environment Hub access.

For ISV projects, the utility of Environment Hub relates more specifically to the ability to catalogue the multitude of environments required for development, test, i18n, packaging, release and patch purposes, not to mention Trialforce. Efficiency of access across this estate is also a key factor.

References
Environment Hub Online Help

Apex Unit Test Best Practice

This post provides some general best practices in regard to Apex Unit Tests. This isn’t a definitive list by any means; as such, I’ll update the content over time.

Top 10 Best Practices (in no order)

1. TDD. Follow Test Driven Development practice wherever possible. There is no excuse for writing unit tests after the functional code, such an approach is indicative of a flawed development process or lax standards. It’s never a good idea to estimate or deliver functional code without unit tests – the client won’t appreciate an unexpected phase of work at the point of deployment, not to mention the pressure this approach puts on system testing.

2. Code Quality. Ensure unit tests are written to cover as many logical test cases as possible, code coverage is a welcome by-product but should always be a secondary concern. Developers who view unit tests as a necessary evil, or worse, need to be educated in the value of unit tests (code quality, regression testing, early identification of logical errors etc. etc.).

3. Test Code Structure. For some time now I’ve adopted a Test Suite, Test Helper pattern. A suite class groups tests related to a functional area. A test helper class creates test data for a primary object such as Account (i.e. AccountTestHelper.cls); secondary objects such as Price Book Entry would be created within the Product test helper class. The suite concept provides a logical and predictable structure; the helper concept emphasises that test data creation should be centralised.
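A minimal sketch of the helper convention follows; the implementation is illustrative only, consistent with the AccountTestHelper.cls naming above.

@isTest
public with sharing class AccountTestHelper {
    // Creates and inserts [count] Accounts named [namePrefix] 0..n.
    public static List<Account> createAccounts(Integer count, String namePrefix){
        List<Account> accounts = new List<Account>();

        for (Integer i = 0; i < count; i++){
            accounts.add(new Account(Name = namePrefix + ' ' + i));
        }
        insert accounts;
        return accounts;
    }
}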

4. Test Code Structure. Put bulk tests in a separate class e.g. AccountTriggerBulkTestSuite.cls (in addition to AccountTriggerTestSuite.cls). Bulk tests can take a long time to complete – this can be really frustrating when debugging test failures – particularly in production.

5. Test Code Structure. Ensure test classes contain a limited number of test methods. I tend to limit this to 10. As with point 4, this relates to test execution time, individual methods can’t be selectively executed – the smallest unit of execution is the class.

6. SeeAllData. Use SeeAllData=true only by exception and at the test method level only. Legacy test code related to pricebooks that historically required this can now be refactored to use Test.getStandardPricebookId(). Also, set the [Independent Auto-Number Sequence] flag to avoid gaps in auto-number sequences through the creation of transient test data.
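A brief sketch of the Test.getStandardPricebookId() approach follows; the object values are illustrative.

@isTest
private class PricebookTestExample {
    static testMethod void standardPricebookTestCase1(){
        Product2 product = new Product2(Name = 'Test Product', IsActive = true);
        insert product;

        // No SeeAllData=true is required to reference the standard price book.
        PricebookEntry pbe = new PricebookEntry(
            Pricebook2Id = Test.getStandardPricebookId(),
            Product2Id = product.Id,
            UnitPrice = 100,
            IsActive = true);
        insert pbe;

        System.assertNotEquals(null, pbe.Id);
    }
}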

7. Test Case Types. As the Apex Language reference proposes, write unit tests for the following test case types.

Positive Behaviour – logical tests that ensure the code behaves as expected and provides successful positive outcomes
Negative Behaviour – logical tests for code behaviour where parameters are missing, or records do not adhere to defined criteria – does the code protect the integrity of unaffected records – does the runtime exception handling function as expected
Bulk – trigger related tests primarily – how the code behaves with a batch of 200 records – mix the batch composition to stress the code against governor limits
Restricted User – test relevant combinations of user role and profile – this test case type is prone to failure through sharing model adjustments – triggers should delegate processing to handler classes that have the “with sharing” modifier

8. Debugging. Always use the syntax below for debug statements within code (test and non-test code). An efficient practice is to add sensible outputs whilst writing the code. This approach avoids a code update or re-deployment to add debug statements during error diagnostics. Note – in such cases Checkpoints could be a better approach anyway, particularly in production. The use of the ERROR logging level enables a restrictive log filter to be applied such that a clear debug log is produced and max log size truncation is avoided. Note, log filters can also have a positive impact on transaction execution time.

System.debug(LoggingLevel.ERROR, 'my message');

9. Commenting. Always comment test methods verbosely to ensure the test case intent is clear and that the test code can be mapped to the related non-test code. Test classes should be fully self documenting and be viewed as the primary enabler for the future maintenance of the non-test code.

10. Maintenance. Test code is highly dependent on the environment state. Any configuration change can require test code to be updated; this could be a new mandatory custom field or a sharing model adjustment. In many cases the resultant unit test failure state is not encountered until the next deployment to production, which can’t proceed until the tests are fixed. This scenario will be familiar to many people. The mitigation requires the local administrator to understand the risk, frequently run the full set of unit tests and to manage the test code update cycle proactively.

Example Test Suite Class

/*
Name: RecordMergeTestSuite.cls
Copyright © 2014  CloudMethods
======================================================
======================================================
Purpose:
-------
Test suite covering RecordMerge operations.
Bulk tests are defined in the class RecordMergeBulkTestSuite.cls
======================================================
======================================================
History
------- 
Ver. Author        Date        Detail
1.0  Mark Cane    2014-09-16  Initial development.
*/
@isTest(SeeAllData=false)
public with sharing class RecordMergeTestSuite {
	/*
     Test cases:	
        singleTestCase1 - positive code behaviour/expected outcome test case 1.
        negativeTestCase1 - negative outcome test case 1.
        restrictedUserTestCase1 - positive/negative code behaviour in the context of specific user role/profile combinations.
        ..
        future test cases to cover : * some coverage provided
        1. tbd.
        2. tbd.
    */
    
    /* */
	static testMethod void singleTestCase1() {
		// Test case 1 : positive outcome test case 1.
        setup();

		// Steps - 1. 
		// Logical tests - 1.
    }
    /* */    

    /* */
	static testMethod void negativeTestCase1() {
		// Negative test case 1 : negative outcome test case 1.
        setup();

		// Steps - 1.
		// Logical tests - 1. 
    }
    /* */    

    /* */
	static testMethod void restrictedUserTestCase1() {
		// Restricted user test case 1 : positive/negative code behaviour in the context of specific user role/profile combinations.
		List<User> users;
		
		System.runAs(new User(Id = Userinfo.getUserId())){ // Avoids MIXED_DML_OPERATION error (when test executes in the Salesforce UI).
			setup();		    					
			users = UserTestHelper.createStandardUsers(2, 'Sophie', 'Grigson');
		}
		
		System.runAs(users[0]){
			List<Account> accounts = AccountTestHelper.createAccounts(1, 'Abc Incorporated');
			
			// Steps - 1. 
			// Logical tests - 1.
		}		
    }
    /* */
	
	// helper methods    
    private static void setup(){
   		SettingsTestHelper.setup();    	
    }
    // end helper methods
}

Conceptual Data Modelling

The biggest area of risk on any Salesforce implementation project is the data model. In my view this assertion is beyond question. The object data structures and relationships underpin everything. Design mistakes made in the declarative configuration or indeed technical components such as errant Apex Triggers, poorly executed Visualforce pages etc. are typically isolated and therefore relatively straightforward to remediate. A flawed data model will impact on every aspect of the implementation, from the presentation layer through to the physical implementation of data integration flows. This translates directly to build time, build cost and the total cost of ownership.

It is therefore incredibly important that time is spent ensuring the data model is efficient in terms of normalisation, robust and fit for purpose; but also to ensure that LDV is considered, business-critical KPIs can be delivered via the standard reporting tools and that a viable sharing model is possible. These latter characteristics relate to the physical model, meaning the translation of the logical model into the target physical environment, i.e. Salesforce (or perhaps database.com).

Taking a step back, the definition of a data model should journey through three stages: conceptual, logical and physical design. In the majority case, most projects jump straight into entity relationship modelling – a logical design technique. In extreme cases the starting point is the physical model, where traditional data modelling practice is abandoned in favour of a risky incremental approach with objects being identified as they are encountered in the build process. In many cases starting with a logical model can work very well and enable a thorough understanding of the data to be developed, captured and communicated before the all-important transition to the physical model. In other cases, particularly where there is high complexity or low understanding of the data structures, a preceding conceptual modelling exercise can help greatly in ensuring the validity and efficiency of the logical model. The remainder of this post outlines one useful technique in performing conceptual data modelling: Object Role Modelling (ORM).

I first started using ORM a few years back on Accounting related software development projects where the data requirements were emergent in nature and the project context was of significant complexity. There was also a need to communicate early forms of the data model in simple terms and show the systematic, fact-based nature of the model composition. The ORM conceptual data model delivered precisely this capability.

ORM – What is it?
Object Role Modelling is a conceptual data modelling technique based on the definition of facts in the form of natural language and intuitive diagrams. ORM models are subject to rigorous data population checks, the addition of logical constraints and iterative improvement. A key concept of ORM is the Conceptual Schema Design Procedure (CSDP), a prescriptive 7-step approach to the application of ORM, i.e. the analysis and design of data. Once the conceptual model is complete and validated, a simple algorithm can be applied to produce a logical view, i.e. a set of normalised entities (ERD) that are guaranteed to be free of redundancy. This generation of a robust logical model directly from the conceptual schema is a key benefit of the ORM technique.
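For example, an ORM model is built up from verbalised facts and constraints such as the following (an illustrative fragment, not taken from the whitepaper):

Employee (nr) works for Company (name).
Company (name) is located in Country (code).
Each Employee works for exactly one Company.

The normalised entities and relationships of the logical model are then derived mechanically from fact types and constraints of this kind.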

Whilst many of the underlying principles have existed in various forms since the 1970s, ORM as described here was first formalised by Dr. Terry Halpin in his PhD thesis in 1989. Since then a number of books and publications have followed by Dr. Halpin and other advocates. Interestingly, Microsoft made some investment in ORM in the early 2000s with the implementation of ORM as part of the Visual Studio for Enterprise Architects (VSEA) product. VSEA offered tool support in the form of NORMA (Natural ORM Architect), a memorable acronym. International ORM workshops are held annually; the ORM2014 workshop takes place in Italy this month.

In terms of tool support, ORM2 stencils are available for both Visio and Omnigraffle.

ORM Example
The technique is best described in the ORM whitepaper. I won’t attempt to replicate or paraphrase this content; instead, a very basic illustrative model is provided to give nothing more than a sense of how a conceptual model appears.

[Diagram: basic ORM2 example]

Final Thoughts
In most cases a conceptual data model can be an unnecessary overhead; however, where data requirements are emergent or sufficiently complex to warrant a distinct analysis and design process, the application of Object Role Modelling can be highly beneficial. Understanding the potential of such techniques is, I think, perhaps the most important aspect; a good practitioner should have a broad range of modelling techniques to call upon.

References
Official ORM Site
ORM2 Whitepaper
ORM2 Graphical Notation
Omnigraffle stencil on Graffletopia

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious; what isn’t necessarily obvious is how to achieve effective change control without introducing friction to the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to auditability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is accurate recording of changes and their application to specific environments; the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose. This may be challenging from a licensing perspective; however, the approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

[Diagram: Change Control object model]
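In outline, the model comprises custom objects along the following lines; the object names below are hypothetical, the diagram shows one approach only.

Environment__c – catalogue of orgs (production, sandboxes etc.); the subject of the periodic audit.
Deployment__c – a change event between a source and target Environment__c; type = internal or production; approval status.
Automated_Change__c – a change summary applied via the API or Change Set, grouped under a Deployment__c.
Manual_Change__c – an individually recorded change, organised as a Pre or Post Action; configuration or data in type.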

Key Principles
1. For most projects tracking change at the component level (Custom Field, layout adjustment etc.) is time expensive and impractical in terms of associated overhead.

2. The model does not require change to be recorded at the component level. Instead, change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach; in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment will be comprised of Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual changes may be configuration or data in type.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Log within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.

References
Components supported by Change Set
Metadata API Unsupported Component Types

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application specific prefixes are commonplace.

Risk – without application/module/function namespaces etc. all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application-specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes or classes related to common objects span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified. Illustrative names are shown below.
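For example (class names are illustrative):

fmSalesInvoiceManager.cls
fmDiscountCalcController.cls
hrEmployeeManager.cls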

2. Object-centric approach. In considering a Salesforce org as a single consolidated codebase where most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant; instead components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer – [Object][Function].page
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer – [Object][Class Type].cls
SalesInvoiceManager.cls
AccountManager.cls

business logic layer – [Function][Helper|Utility].cls
DiscountCalcHelper.cls

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs, this is imperative in ensuring future maintainability.

Recommended Approach –
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins. Instead, an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.