Salesforce DX

This post provides a practitioner’s introduction to Salesforce DX (Developer Experience), a new software development lifecycle paradigm for Salesforce. Please note that at the time of writing (Spring ’17 – v40.0) Salesforce DX is at public beta status.

Salesforce DX (or SFDX) – What is it?

A new set of tools and practices to accelerate the software development lifecycle on the Salesforce platform. The tools in question enable established (technology agnostic) software development models to be applied to the Salesforce context efficiently; the practices provide guidance in how to do this effectively and at scale.
Many aspects of Salesforce DX have become standard practice over the last few years; source-driven development using tools such as Jenkins and git being a good example. Other aspects such as artifact-based development, the CLI and Scratch Orgs introduce a fundamentally new paradigm.

The Salesforce development model has historically been org-centric in concept, with the entire set of metadata types representing a single monolithic code-base. This conceptual model has benefits in terms of ensuring complex interdependencies are not overlooked and that code structure and conventions are considered fully in a standardised approach. In the org-centric model – and typically not exclusively – a source-code control repository is mapped at the org level and deployments are org-based, not feature- or app-based. Salesforce DX provides the tools and practices to change the development model from org-centric to artifact-based, where an artifact represents a collection of components (an org subset) – developed, tested and deployed together in safe isolation.

SFDX Tools

CLI (Command-line Interface – sfdx)

The new SFDX CLI provides the foundation for Salesforce DX with capabilities that span authorisation, org management, metadata creation, data population, packaging and deployment. The CLI commands are extensive and clearly structured within the top-level force namespace.

Command examples:

//CLI create a new project (specific DX folder structure plus config file)
sfdx force:project:create 

//CLI needs to authorise against the Dev Hub (OAuth web-based flow or, for CI, JWT Bearer flow)
sfdx force:auth:web:login -d -a "Hub Org"

//CLI create scratch org (or set existing as default)
sfdx force:org:create -s -f config/project-scratch-def.json

//CLI push source code from local (respects the .forceignore file placed in root folder)
//Conflicts can be overwritten with --forceoverwrite
//Note - Local project is the source of truth.
sfdx force:source:push

//CLI run unit tests
sfdx force:apex:test:run

A key concept is the local project, which is the primary source of truth. The project structure differs from the Metadata API folder structure, with higher-level metadata types such as Object broken down into sub-folders and individual files for sub-types such as listViews and fields. This new sfdx structure (or source shape) is designed for development and source-code control system interactions, whereas the longstanding Metadata API structure relates primarily to deployment. Conversion between the two formats is necessary when creating a new sfdx project from an existing org (or simply from existing metadata) and prior to deploying an sfdx project to a Salesforce org (not Scratch Orgs). The CLI provides commands to perform the format changes as illustrated below.

//Create DX project > Create folder for metadata files then ..

// Retrieve unpackaged metadata
sfdx force:mdapi:retrieve -r ./mdapipkg -u <username> -k ./package.xml

// Retrieve the metadata for a package
sfdx force:mdapi:retrieve -s -r ./mdapipkg -u <username> -p <package name>

// Convert to SFDX format.
sfdx force:mdapi:convert --rootdir <retrieve folder name>

// Convert to Metadata API format.
sfdx force:source:convert -d <metadata folder name>/ --packagename package_name

Scratch Orgs

A Scratch Org is a temporary org that can be created quickly and configured with predefined settings and preferences (Edition, languages, platform features etc.) via a JSON definition file. Scratch Orgs are easily created and deleted via the CLI; up to 50 can be created per day (per Dev Hub), with up to 25 active concurrently. Deletion occurs automatically after 7 days.
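
By way of illustration, a minimal definition file might take the shape below. This is an indicative sketch only – the exact attribute names and supported options are defined in the Salesforce DX documentation and may change whilst the product is in beta.

//config/project-scratch-def.json – illustrative example
{
  "orgName": "Acme DX Project",
  "edition": "Developer",
  "features": ["MultiCurrency"]
}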

A Scratch Org is typically used for development and testing purposes and plays an enabling role in build automation. Note, unlike sandboxes, Scratch Orgs are not constrained by a refresh cycle, but neither are they a clone of a source org configuration. A typical development workflow will be to create a short-lived feature branch in the Source Code Control (SCC) system, create a local sfdx project (from the branch metadata) and a Scratch Org, complete development and testing in the Scratch Org, commit the change back to SCC, then convert and deploy the change to a Sandbox. In a Continuous Integration (CI) environment the CI server (Travis CI, Jenkins etc.) may initiate an automated build from the commit event. In such a scenario, headless Scratch Orgs may be used for automated testing (across editions, languages etc.).
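
A minimal sketch of this workflow at the command line follows; the branch name and org alias are hypothetical.

//Create a short-lived feature branch
git checkout -b feature/invoice-discounting

//CLI create a Scratch Org (-a assigns an alias, -s sets it as the default org)
sfdx force:org:create -s -a feature-org -f config/project-scratch-def.json

//CLI push local source, then pull any declarative changes made in the org
sfdx force:source:push
sfdx force:source:pull

//Commit back to SCC – in a CI environment the push event may trigger an automated build
git commit -am "Invoice discounting feature"
git push origin feature/invoice-discounting

//CLI delete the Scratch Org when done
sfdx force:org:delete -u feature-org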

The SFDX CLI provides commands for automating the population of data in a Scratch Org. This can be done via CSV file loading or, more interestingly, via org-to-org copy (SOQL query export and subsequent import).
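
For example (a sketch only – the query, org alias and generated file name are illustrative):

//CLI export a record tree from a source org via SOQL query
sfdx force:data:tree:export -q "SELECT Name, (SELECT LastName FROM Contacts) FROM Account" -u DevOrg

//CLI import the exported record tree into the default Scratch Org
sfdx force:data:tree:import -f Account-Contact.json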

Finally, a key feature of the CLI is the automated tracking of change (the diff) between the current local project and the default Scratch Org. The CLI push and pull commands that move change between the local project and the Scratch Org automatically process changed components only.
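
For example:

//CLI view the diff between the local project and the default Scratch Org
sfdx force:source:status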

Dev Hub

The SFDX CLI must authorise to a Dev Hub before Scratch Orgs can be created; each Scratch Org is related to a parent Dev Hub. The Dev Hub is a production Salesforce org (Enterprise Edition or Unlimited Edition) – the feature can be activated from the Setup menu. A Dev Hub trial org can be created to enable safe experimentation with SFDX. Dev Hub trial orgs expire after 30 days and are further limited (40 Scratch Orgs created per day, 20 active concurrently); why these limits differ subtly from the standard ones is unclear.

Authorisation of the Dev Hub org (and indeed any org) by the SFDX CLI is possible via the OAuth Web Flow or the JWT-based Flow; the latter is preferred for system authentications such as those conducted by a CI server.
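
For reference, the JWT-based flow takes a form along the following lines; the Connected App client id, certificate key file and username are placeholders, and a Connected App configured for certificate-based authentication is assumed to be in place.

//CLI authorise the Dev Hub via the JWT Bearer flow (suited to CI servers)
sfdx force:auth:jwt:grant -i <connected app client id> -f <path to private key file> -u <username> -d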

For ISVs (and others) working with namespaces, the Dev Hub provides a Namespace Registry capability which (as the name suggests) allows Developer Editions containing namespaces to be registered. Once the org link is complete, Scratch Orgs can be created as normal. This capability is great for ISVs developing in a namespace-registered org (as opposed to utilising a separate packaging org to apply the namespace).

IDE2

The new Force.com IDE2 is an Eclipse plug-in built on the SFDX CLI. Additional features of the IDE2 include the Lightning Code Analyser and Apex Code Analysis tools. The former wraps the capabilities of ESLint and the Salesforce Lightning CLI to evaluate JavaScript code against standard or custom rule sets. The latter covers syntax error detection and highlighting plus code auto-completion. The IDE2 also provides an embedded browser for editing Lightning Pages.

On initial inspection there didn’t appear to be any means to log in directly to an existing org and import its metadata. Presumably this must be done externally via the SFDX CLI.

Finally, the new IDE2 provides integrated tools for working with Git repositories, a key facet of the envisaged developer workflows centred around Salesforce DX.

Summary

At the time of writing, the available SFDX documentation relates primarily to the technical aspects of the tools; there is minimal information relating to process guidance (or practices). Whilst it’s relatively easy to envisage how the new tools and techniques could be integrated into development, test and deployment workflows, best-practice guidance in such areas would certainly provide confidence and accelerate adoption.

Historically, a key sticking point for source-based development (and deployment in general) in the Salesforce context has been Metadata API coverage. In short, not all metadata types are supported, and therefore automated deployments (whether Change Set or API based) are typically accompanied by pre- and post-deployment steps that manually address the gaps. As there’s no significant change in this regard, it’s likely that Scratch Orgs will require the same approach – however, considering that Scratch Org use cases will typically be code related, the actual impact may be limited.

In general, however, Salesforce DX is a considerable step forward in terms of applying established software development lifecycle practices to the Salesforce context. The CLI alone is incredibly impressive and provides significant flexibility with respect to development, testing and release processes. It will be interesting to see how tools vendors react to the availability of the CLI as a foundation for future, evolved IDE capabilities. Hopefully Salesforce DX represents the first step of many in this direction.

Resources

Salesforce DX Developer Guide (Beta)
Salesforce DX on Trailhead
Salesforce DX Chatter Group

Salesforce ISV Environment Strategy

[Figure: Salesforce ISV environment strategy]

Please note – this content was previously posted on the audit9.com site directly and is re-published here as part of the consolidation of two blogs.

This post provides an overview of the various environments involved in delivering ISV solutions to the Salesforce AppExchange. As will be obvious to architects working outside of the ISV domain, the ISV environment strategy has some commonality with the non-ISV approach in respect of development and testing activities, but also introduces a further level of complexity and environment roles unique to the AppExchange distribution model.

Primary Org Definitions

Packaging Org

For small-scale, low-complexity managed package developments, the packaging and development orgs may be combined.

1. Package/Patch Upload
The Packaging Org is where the Managed Package is created and its namespace prefix defined. Package uploads in either Managed Beta or Managed Release state are initiated from this environment.

2. Patch Development Org – PDO
PDOs are short-lived orgs created specifically for patch development. Only certain components can be modified in a PDO. Once ready for distribution, a patch can be uploaded and pushed to Subscriber Orgs.

3. Push Upgrades
Uploaded Major Releases and Patches can be scheduled for push distribution to selected (or all) Subscriber Orgs. Package subscribers have no control over whether to accept pushed changes; as such it is not advisable (unless agreed in advance) to push changes that materially impact the end-user experience.

4. Localisation
Translation Workbench-based localisation of internationalised resources.

Partner Business Org

A Partner Business Org can be provisioned upon partnership agreement with Salesforce or the same environment roles can be established within an existing production org.

1. Environment Hub
The Environment Hub enables partner orgs to be created for Development, Test and Trialforce usage. Single Sign-on can be implemented across connected orgs. A hub member org can only be connected to one hub at any given point in time. Where Trialforce is being implemented, the TMO and Packaging Org must be connected to the same Environment Hub. The Environment Hub can create TSOs directly, although bypassing a TMO prevents implementation of custom branding.

2. AppExchange Publishing Org – APO
This org role is for AppExchange management and is where Private and Public Listings and the Publisher Profile are stored.

3. License Management Org – LMO
This org role relates to the License Management App, a custom app installed by Salesforce which enables management of Packages, Package Versions, Licenses and Subscriber data resulting from AppExchange interactions. The LMO is also the entry point for Subscriber Org Login Access.

4. Channel Orders Application – COA
This org role relates to the Channel Orders app (also termed the Partner Order app), a custom app installed by Salesforce which supports automation of Service Order submissions (relating to AppExchange sales) to Salesforce.

5. Usage Metrics Reporting
Usage Metrics provides a daily update of statistics relating to Custom Object record counts (snapshot) and Visualforce page access and performance (summaries) across all production Subscriber Orgs. No opt-in is required from package subscribers. The usage data is written to objects that are API accessible only; a visualisation app is available on the AppExchange to provide convenient access. Usage Metrics provide the basis for monitoring usage patterns and proactive customer retention processes.

Trialforce Orgs

AppExchange listings can offer customers the ability to trial an app within a new, pre-configured Trial Org (package, data and users). This highly valuable capability is implemented with Trialforce. Trial organisations can also be created directly from sign-up forms hosted on an external website, or via the API. In addition to offering free trials, the Trialforce solution provides a convenient option for the quick creation of pre-configured orgs for QA or demonstration purposes.

1. Trialforce Management Org – TMO
An optional environment used for centralised management of TSOs for different packages. TSOs can also be created directly from the Environment Hub, however this direct approach does not support custom branding. The TMO approach does enable custom branding for Login Pages and Email Templates.

2. Trialforce Source Org – TSO
The TSO is where the package is installed (or upgraded) and configured to reflect the trial experience (user profiles, sample data etc.). A snapshot is taken of the source org state to create a template. Note, TSOs expire after a 12 month period unless a partner case is raised to request an extension. TSOs are linked to the AppExchange Publishing Console manually; once a link is in place, templates are listed on the Trial Templates tab.

3. Trialforce Template
Templates listed in the Publishing Console can be submitted for Security Review. This review takes less time than the Package Security Review. Once approved, a template can be connected to an AppExchange listing, at which point the link to start a free trial appears on the listing.

Note, each time a package is updated the TSO must be upgraded and a new template created, reviewed and connected to the AppExchange listing.

The ISVforce Guide provides further detail on the environments listed in the preceding sections.

Implementation Guiding Principles

In most cases software implementation projects start off with a clear vision of the various macro-level influences, constraints and directives that define the shape of the project at the outset. During the initiation stage this clear, consensus view of the project principles fully influences the micro-level decisions taken in terms of architecture, solution design, resourcing, scope, prioritisation, schedule and so on. The challenge for many projects is maintaining the purity of the vision across the course of the project in light of variations in uncontrollable factors such as the business context, risks becoming reality, project resources and so forth.

An effective tool to mitigate this loss of identity and direction, and the consequential impact this can have on confidence and productivity, is the definition of an unambiguous, concise set of guiding principles that are communicated early and frequently. In addition to the benefits of a clear and strongly defined direction, principles support increased productivity through empowerment: if low-level decisions align with the principles then action can be taken without time-expensive consultation.

Corporates do this, why not projects?
This approach is well established in business, with many corporates defining an aspirational, future state vision and set of guiding principles that underpin the delivery of the vision through applied influence to all activities undertaken and decisions made. Guiding principles can be very effective particularly where a culture is established that prioritises all actions taken in relation to their conformance to the principles. This level of application requires absolute confidence across the business that the principles are current, meaningful, complete and beneficial to all parties. The value of this approach at a business level can apply equally to a project.

Key Points
No more than 7. The primary value of guiding principles is the strength of the message, achieved through brevity. Telephone numbers were originally set at 7 digits as this was believed to be the most people could be expected to remember. 7 therefore seems a reasonable limit in this context.

Revisit during retrospectives. Stale principles will damage the integrity of the approach. Agility should be paramount such that principles are current and enable changes in the direction of travel whilst the project is in flight (excuse the metaphors here).

Communicate frequently. All project artefacts (slides, documents etc.) should state the guiding principles and relate the content to the relevant principles, noting deviations.

All design decisions should relate to one or more principles. See above point.

Prioritisation. A simple prioritisation scheme (high, medium, low; 1,2,3 .. n) can be effective in resolving conflicts between principles.

Buy in. All project stakeholders must approve the principles and accept the impact. Without complete buy in the integrity of the approach is diminished.

Principles can be goals. Principles are often directive in nature, goals are an interesting extension to this.

Use Work.com. The Goals functionality of Work.com can provide a useful tool to manage, communicate, collaborate and report on the principles. This functionality also enables user-level goals to be mapped to the higher level goals.

Alternatives
Vision statements can be difficult to articulate, particularly where there are multiple unconnected concerns.

Project charters can be too cumbersome to be a realistic communication tool. It’s human nature to read once and subsequently ignore any project documentation that requires a time investment.

In both cases above, guiding principles can provide a complementary, abbreviated format.

Examples
Maintainability of the solution by client administrator.

Future extensibility without technical resource.

Sales productivity improvement is the key objective.

Declarative build over technical solution options.

Quality over expediency.

Technical excellence.

Buy over build.

Alpha 1 bug count must be less than 100.

The solution will be productised internationally.

The project delivery date must be achieved.

Release early and iterate.

Business utility to be delivered every 2 weeks.

Only user-accepted features to be released.

User stories estimated at 5 days or more to be split.

Conceptual Data Modelling

The biggest area of risk on any Salesforce implementation project is the data model. In my view this assertion is beyond question. The object data structures and relationships underpin everything. Design mistakes made in the declarative configuration, or indeed in technical components such as errant Apex Triggers, poorly executed Visualforce pages etc., are typically isolated and therefore relatively straightforward to remediate. A flawed data model will impact every aspect of the implementation from the presentation layer through to the physical implementation of data integration flows. This translates directly to build time, build cost and the total cost of ownership.

It is therefore incredibly important that time is spent ensuring the data model is efficient in terms of normalisation, robust and fit for purpose; but also to ensure that LDV (Large Data Volumes) is considered, that business-critical KPIs can be delivered via the standard reporting tools and that a viable sharing model is possible. These latter characteristics relate to the physical model, meaning the translation of the logical model into the target physical environment, i.e. Salesforce (or perhaps Database.com).

Taking a step back, the definition of a data model should journey through three stages: conceptual, logical and physical design. In the majority of cases projects jump straight into entity relationship modelling – a logical design technique. In extreme cases the starting point is the physical model, where traditional data modelling practice is abandoned in favour of a risky incremental approach with objects being identified as they are encountered in the build process. In many cases starting with a logical model can work very well and enable a thorough understanding of the data to be developed, captured and communicated before the all-important transition to the physical model. In other cases, particularly where there is high complexity or low understanding of the data structures, a preceding conceptual modelling exercise can help greatly in ensuring the validity and efficiency of the logical model. The remainder of this post outlines one useful technique for performing conceptual data modelling: Object Role Modelling (ORM).

I first started using ORM a few years back on accounting-related software development projects where the data requirements were emergent in nature and the project context was of significant complexity. There was also a need to communicate early forms of the data model in simple terms and show the systematic, fact-based nature of the model composition. The ORM conceptual data model delivered precisely this capability.

ORM – What is it?
Object Role Modelling is a conceptual data modelling technique based on the definition of facts in the form of natural language and intuitive diagrams. ORM models are subject to rigorous data population checks, the addition of logical constraints and iterative improvement. A key concept of ORM is the Conceptual Schema Design Procedure (CSDP), a prescriptive 7-step approach to the application of ORM, i.e. the analysis and design of data. Once the conceptual model is complete and validated, a simple algorithm can be applied to produce a logical view, i.e. a set of normalised entities (ERD) that are guaranteed to be free of redundancy. This generation of a robust logical model directly from the conceptual schema is a key benefit of the ORM technique.

Whilst many of the underlying principles have existed in various forms since the 1970s, ORM as described here was first formalised by Dr. Terry Halpin in his PhD thesis in 1989. Since then a number of books and publications by Dr. Halpin and other advocates have followed. Interestingly, Microsoft made some investment in ORM in the early 2000s with the implementation of ORM as part of the Visual Studio for Enterprise Architects (VSEA) product. VSEA offered tool support in the form of NORMA (Natural ORM Architect), a memorable acronym. International ORM workshops are held annually; the ORM2014 workshop takes place in Italy this month.

In terms of tool support, ORM2 stencils are available for both Visio and Omnigraffle.

ORM Example
The technique is best described in the ORM whitepaper. I won’t attempt to replicate or paraphrase this content; instead, a very basic illustrative model is provided to give nothing more than a sense of how a conceptual model appears.

[Figure: ORM2 basic example]

Final Thoughts
In most cases a conceptual data model can be an unnecessary overhead; however, where data requirements are emergent or sufficiently complex to warrant a distinct analysis and design process, the application of Object Role Modelling can be highly beneficial. Understanding the potential of such techniques is perhaps the most important aspect; a good practitioner should have a broad range of modelling techniques to call upon.

References
Official ORM Site
ORM2 Whitepaper
ORM2 Graphical Notation
Omnigraffle stencil on Graffletopia

Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis-design-build cycle could be iterative in nature or fixed, and may have different degrees of detail emerging at different points, however the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also critically provides a predictable, consistent end-user experience. When building custom applications using the declarative tools, we gain all of these advantages using pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken, meaning follow established patterns and use as much of the pre-fabricated components as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end-user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles are necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, and the data relates to the User, is primarily modifiable by the User only and has no direct business relevance, then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Naming Conventions – Declarative

Updated – 2014-11-18

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn’t exhaustive or necessarily better than any other set of standards. Having a set of conventions applied consistently across the build is key; the specifics of those conventions can be subjective. A key principle applied is that of consistency with established standard conventions wherever possible. For example, standard objects and fields don’t have underscores in their API names and follow simple naming patterns; there’s no good reason to deviate from this for custom objects and fields. Naming conventions shouldn’t be considered an area for creative thinking; instead creative energy should be focused on the functional and technical design, and the conventions applied should be mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable. It can be difficult to avoid this approach in a team environment.
e.g. Data Source -> Data_Source__c

Note – the Description attribute for Custom Objects must always be populated appropriately. A self-describing configuration is key to future maintenance.

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable. It can be difficult to avoid this approach in a team environment.
e.g. Date of Birth -> Date_Of_Birth__c

In the scenario where an implementation is comprised of distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name Prefix. e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Note – the Description attribute for Custom Fields must always be populated appropriately. A self-describing configuration is key to future maintenance.

Note – the Help Text attribute for Custom Fields must always be populated appropriately. Inline user assistance can improve the end user experience greatly and reduce ongoing support.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r
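
The relationship name surfaces directly in SOQL sub-queries, which is why a predictable convention pays off. An illustrative query using the example above:

SELECT Name, (SELECT Name FROM AccountMetrics__r) FROM Account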

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c

Convention F – Workflow Rule
Always separate the condition from the outcome when configuring workflow. The Workflow Rule is the condition; the associated Actions are the outcome. In many cases the two become entwined, resulting in duplicated rules, or rules with actions unrelated to the stated purpose of the rule. A clear set of rules related to conditions promotes re-use.

[Object]: [Criteria Description i.e. Condition]

Convention G – Workflow Action

Field Update –
[Object]: Set [Field] to [Value]

Email Alert –
[Object]: Send [Template short description]

Task –
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS: [Object] [From selection] to [To selection]
CBS: [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed

Convention K – Validation Rule

Single field:
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields:
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object:
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Single Permissions:
Name must match the permission assigned.
Case Feed
Manage Dashboards
Manage Public List Views

Simple Combined Permissions:
[Verb] [Noun]
Manage Invoices

Combined Permissions:
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Convention P – Reports and Dashboard Folders

[Grouping term] Reports (adhoc reports related to a specific department, team, project etc.)
[Grouping term] Dashboard Reports (best practice to isolate dashboard reports in clear location)
[Grouping term] Dashboards

Finance Reports
HR Dashboards
HR Dashboard Reports

Note – the [Report Description] attribute for Reports must always be populated appropriately. A self-describing configuration is key to future maintenance.

Salesforce Implementation Game Plan

Whether you’re managing a commercial software development, leading a consultancy project or building an IKEA table, a game plan is absolutely key to successful delivery. In the latter example IKEA recognises the importance of prescriptive guidance and supplies an instruction leaflet in the box. This however covers only one dimension of successful delivery, the ‘What’ (i.e. what you need to do) – the ‘Who’, the ‘How’ and the ‘When’ are left up to you. In the case of an IKEA table this is acceptable, as the resource is probably you (who will likely build the table in your own way regardless of advice received) and the timeline may not be critical. Moving away from this tenuous example, in non-trivial situations all the dimensions of successful delivery are equally significant and must combine cohesively to achieve the defined objective. This calm, controlled, empowering and productive state is precisely what planning is intended to achieve. This success delivery-state is rarely the norm, for various reasons: inexperience, over-optimism, command and control culture, inadequate expertise, process rigidity, poor communication etc. The net effect of such factors is a distress delivery-state where productivity is low and the sense of team is diminished in terms of empowerment, trust and accountability.

Like many people I often find myself misquoting Leo Tolstoy, who didn’t say that failing projects fail for a variety of reasons but succeeding projects succeed for the same reason. What he did say is this: “All happy families are alike; each unhappy family is unhappy in its own way.” (Leo Tolstoy, Anna Karenina) – which is where the interpretation comes from. I definitely read this somewhere; apologies if this was your book or blog.

So the idea is that all successful projects succeed for the same reason – that reason being, in my view, that the project was able to achieve a success delivery-state, i.e. the plan encompassed all the requisite dimensions and maintained the agility to react and adapt during flight. In this context it matters little which project process, methodology, framework etc. you employ; what matters is that you have a well-conceived plan from the outset, or game plan as I like to call it, and execute on that plan in a disciplined manner.

A game plan can take many forms (spreadsheet, picture, diagram, A3 sheet pinned to the wall etc.); whichever way you go, the end result should be an engaging, high-level fusion of vision and planning, tuned for effective communication to your specific audience.

The game plan should influence the detailed planning, but is a higher-level concern that outlines the fundamentals of how the project will succeed, covering the essential aspects only. My preference in the past has been an annotated timeline diagram, showing clearly the basis upon which I’m confident of success – this can be highly effective in terms of establishing confidence within the delivery team and across stakeholders. I don’t believe this is possible with a Gantt chart or spreadsheet, even where progression metrics are added.

By way of illustration the following sections outline an example Game Plan related to a fictitious Salesforce implementation project.

[Figure: game plan example]

In summary, the game plan concept applied to project delivery can be a powerful tool. It matters little how the game plan is presented or what it contains, simply having one in any form can make a big difference in terms of confidence, focus and communication.

Salesforce Source Control and Release Process

This post outlines my preferred approach to managing parallel developments on the Salesforce platform in what I refer to as the Converged Programme Model. I readily acknowledge that there’s a multitude of ways to accomplish this, each with its own subjective merits. Before adopting a parallel work-stream model, take the time to understand the technical complexity, process overhead and time investment required. Of particular concern should be the team’s readiness for such a disruptive change. In my experience it’s better to plug any skills gaps upfront, be very prescriptive with process guidance, start small and build out incrementally – the risk otherwise is considerable. Typically resistance will come from individuals unaccustomed to a disciplined approach to software development/release processes.

[Figure: SCC]

Objectives

1. Concurrent Development. Support parallel programme workstreams converging into a shared production Salesforce environment.
2. Automation. Deliver build automation – reducing the manual overhead required to deploy between environments.
3. Gold Standard. Deliver a best practice approach – the initial design should scale up and down in response to changing programme conditions.
4. Non-disruptive. Facilitate a staggered approach to adoption – enabling key benefits to be realised quickly without disrupting productivity.
5. Minimise Release Overhead. Project branches should be regularly and incrementally updated from the master branch – reducing the inherent risk of divergence over time.

Tools

1. GitHub

– Get started with public repositories; upgrade to a paid plan and use private repositories for any source code you don’t want to share with the world at large.

– Create an Organisation account to enable Team functionality.

– The key benefit versus Subversion (CVS etc.) is fast and efficient branch management; parallel workstreams are managed on branches with frequent merging.

– It is possible, albeit time expensive, to implement a Git server within the enterprise. In my view the GitHub administration interface alone is worth the price.

2. Jenkins

– Deployed on a Windows EC2 instance with an elastic IP. A free usage tier micro instance provides an ideal server host. Using a Linux host can be beneficial in regard to SSH authentication from Jenkins to GitHub – this is just one advantage of many. Pick the operating system/platform the team you’re working with knows best; a Linux host that only one team member can administer makes no sense.

– On Windows, the Jenkins service should be configured to run as a specific user account (with least privileges assigned). This is required to generate the key files for SSH authentication.

– Enable Jenkins security. Particularly relevant if the host is open to the public web. Lock down the inbound IP ranges via the EC2 security group if possible.

– Either store the Ant build files (build.xml, build.properties) in the Git repository or use an XCOPY post-build step to copy the files into the workspace from a file system location – as below. I prefer to keep the build files external to Git – there shouldn’t be any need to version manage such files – plus the build.properties file may contain passwords in plaintext.

[Figure: Jenkins job build configuration]

– Install the GitHub and Git plugins. These are required to build from a GitHub repository and enable build automation via Post-Receive Hooks. Under Jenkins System Configuration, configure “Manually manage hook URLs”; this requires your GitHub repository to have the hook set manually via Service Hooks under repository settings. Add a [Jenkins (GitHub plugin)] service hook like http://yourservername:8080/github-webhook/. The message sent on git-push to the remote repository will trigger any Jenkins job that builds from the branch that has been updated and has the [Build when a change is pushed to GitHub] option set to true.

– SSH keys. In order to use SSH from Jenkins to a private GitHub repository, SSH authentication is required, which uses a generated key pair. The public key is added as a Deploy Key in GitHub under repository settings. This works well, but if you want the same Jenkins user to access multiple repositories over SSH you have a problem, as each Deploy Key must be globally unique across all GitHub repositories. The answer to this is to use aliasing and an SSH config file (refer: http://www.onemogin.com/blog/2011/9/1/jenkins-and-github-multiple-private-projects.html), however this won’t work with Post-Receive Hooks as the repository URL in the sent message won’t match the aliased repository URL in the Jenkins job – resulting in the typically errant behaviour below from the Jenkins log. I can’t see a way around this at the time of writing this post.

FINE: Skipped GitHub Test - buildautomationtest repository because it doesn't have a matching repository.
May 7, 2013 6:21:35 PM com.cloudbees.jenkins.GitHubWebHook
FINE: Considering to poke GitHub Test - buildautomationtest repository

– Chatter plugin. I’m a big fan of this plugin by Simon Fell. I tend to use a dedicated release manager user, e.g. release.manager@force365.com, standard user license capacity permitting, and perform all deployment tasks in this user context. This approach provides clarity on changes made by a deployment versus an actual user, and provides an easy way to be notified of failures etc.
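
To make the Jenkins build step concrete, a job typically invokes an Ant target that wraps the Force.com Migration Tool’s deploy task. The sketch below is illustrative only – the ‘deploy’ target name and sf.* property names are assumptions that depend on your build.xml:

//Jenkins build step (Windows batch) – target and property names are hypothetical
ant -buildfile build.xml -Dsf.username=release.manager@force365.com -Dsf.password=%SF_PASSWORD% deploy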

Key Principles

1. Fit-for-purpose org-set

– Org-set is the terminology I use to describe the collection of orgs, and their roles, required to deliver a project safely to production.

– One size does not fit all. Pick the minimum set of org roles required to deliver the project. Each org is a time-expensive overhead.

– Sandbox types. In defining the org-set, factor in the availability of config-only and full-copy sandboxes. The latter must be retained for cases where infrequent refresh is required. Project-level orgs don’t need to be part of the sandbox estate; Developer Edition orgs, or perhaps Partner Developer Edition orgs, can be employed. Full-copy sandboxes are incredibly expensive, valuable resources – use them only when absolutely necessary and for as wide a set of roles as possible.

– Connected orgs. For projects involving complex integrations, the complexity involved in creating a connected org may influence the org-set design – there may be an argument to consolidate roles onto a single test org used for QA and UAT perhaps.

2. Continuous Integration

A best-practice org-set design for non-trivial technical projects with multiple technical contributors should isolate developer activities into separate developer orgs, with a code-level integration org and Continuous Integration (CI) process in place.

3. Project-level sandboxes are not refreshed

Project-level orgs are all built from the Git repository. The Pre-production programme-level org must be refreshed from Production pre-deployment to ensure the deployment is verified against the current state.

4. Commit to the remote project branch is a commitment that metadata is ready for system testing

Build automation will deploy a project branch commit to the project QA org. In my experience it pays to be prescriptive in terms of development process.

5. Commit to the remote master branch is a commitment that metadata is ready for integration testing

Build automation will deploy a master branch commit to the programme INT (integration) org – this org exists to enable rigorous regression testing to be applied by all project workstreams. Post-deployment, suites of automated tests should be invoked and reports analysed by the test lead on each project.

6. Test automation

It’s a significant resource overhead to execute manual test scripts for each regression test cycle, not to mention error prone. For non-trivial projects, the investment must be made at an early stage in automated testing. Selenium is a good choice, but the tool utilised doesn’t really matter; what matters is that from the outset of the project the test team start to build up a comprehensive suite of automated test cases with coverage of the key acceptance criteria defined for each user story. The suites then enable automation of regression testing during deployment phases – the same scripts underpin system testing and provide an often overlooked second stage to CI (unit tests + acceptance tests).

7. GitHub branch design

– A simple, clean branch design is desirable in the remote repository.

– Long-lived branches for active project workstreams. Project branches may have sub-branches for each sprint or phase.

– A long-lived branch for patches. Bug fixes are developed on local branches and committed to the remote support branch when ready for system testing.

– It can be advisable to consider how important a clean Network Graph is; this is impacted by Git merge versus rebase decisions.

8. Build automation challenges

In a perfect world, all metadata component types would be covered by the Metadata API. This isn’t the case, so the nirvana of simple cloning of an org configuration is yet to exist. Instead a prescriptive process is required which spans manual configuration tasks, metadata deletion and build automation.

– Proactive management of change. A nominated release manager should proactively manage change at a programme level, advise the project teams on release process and strive to minimise deployment conflicts through early involvement in all project developments. A change log should be maintained which lists all changes being made. This could include technical component types (ApexClass, ApexTrigger etc.) being added, modified or deleted, but as a minimum must track configuration changes requiring manual action – enablement of features, field data type changes etc. – and required standing data (custom settings etc.). All changes should be mapped to a change type of manual or automated, and a list of orgs to which the change has been deployed should be tracked. This is clearly an overhead to the project, but without control it can be very easy to lose track of the current state of the orgs in use and face significant time expense in attempting to rationalise the situation through failing deployments. The release manager, or technical lead, should apply manual tasks to target orgs pre-emptively to minimise automated build failures.

– Be prepared for build failures. Automated builds will fail; this is a fact of life where build dependencies on manual actions exist. Proactive management will only get you so far; attempting to minimise failures is more realistic than eliminating them.

– Data. Automation of data setup in a target org is possible via Ant and the Data Loader CLI, or other similar means. Alternatively a data file could be deployed as a document or static resource and then loaded from an Apex script (as per the ISV approach).

– Unsupported metadata component types. Automation is possible using Selenium scripts, which execute at the UI level and can simulate, for example, a user activating a setting. Such scripts can then be integrated into an automated build. This is highly possible, but takes time and expertise with both Ant and Selenium to accomplish.

9. Programme-level integration

The Converged Programme Model involves project workstreams building in isolated org-sets with frequent merge-from-master actions bringing across any changes to the production state. This approach should surface conflicts early, i.e. during development itself, but to be sure that shared component changes have not introduced any functional inconsistencies, regression testing must be applied by each and every project workstream on each occasion any project does a release. This is a strong argument for test automation.

10. UAT – project-level or programme-level?

In principle UAT should always be applied at the local project level, as the commit to the programme-level integration org is an absolute commitment that the code is production ready. In practice UAT may be two-tiered: initial user acceptance of new functionality, followed by some form of secondary acceptance testing in Pre-production, in parallel to deployment verification testing.

11. Path-to-production Change Management

As with any programme of work, fit-for-purpose Change Management processes should be in place. In context this means a Change Advisory Board (CAB) should be in place to approve deployment; this must include informed and empowered representation across business and technical functions.

– A Deployment Request Form (DRF), or similar, should be produced to document the change being released – the impact, pre- and post-deployment tasks, GitHub commit #, approval date or rejection reason. The DRF could be approved by a convened board or via email response.

– The DRF process is absolutely required for the final deployment to Production, but may also be applied to the Pre-production deployment, i.e. the commencement of the final step of the path-to-production release flow.

Salesforce Exception Reports

I think it’s fair to say that consideration of reporting and analytics is not traditionally a technical architect’s direct concern. In the Salesforce context I always challenge this presumption and promote the idea that a project architect should absolutely be looking at the high-level analytical requirements during the foundation stage of the project. Why, you may well ask? The answer is simple: in order to design a fit-for-purpose data model you must consider data coming out (via the native reporting tools) as well as data going in efficiently. I make this point in a prior post that outlines my view on the Salesforce development process. The number of projects I see where this thinking hasn’t been applied and third-party tools are wheeled in to address the gap continues to surprise me. Key to success here is understanding what can be achieved with the Salesforce reporting functionality. Which brings me on to the topic for this post – exception reports, a type of reporting not obviously catered for by the standard reporting tools. For instance – show me a list of Accounts without an Opportunity in the last year. Such reports are invaluable in identifying omissions, focusing sales activities, generating call lists, tracking inactivity etc.

I’ll use two examples below to clarify how to approach such scenarios.

Simple Example.
Accounts without a closed Opportunity in the last 30 days.

For this report we use a “without” Cross Filter, as per the screenshots. Cross Filters were added in Spring ’12 and enable records to be filtered based on the existence (or non-existence) of related records which meet defined criteria. Within a Cross Filter up to 5 sub-filters can be defined. Note, the report’s top-level filters do not apply at the sub-filter level. Cross Filters can be added for any related object, custom or standard – the limit is 3 per report (each with up to 5 sub-filters). Powerful functionality indeed, and a great convenience that avoids a Custom Report Type for the outer join. Note the sub-filter is limited to fields on the related object; no cross-object sub-filter logic is possible.
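
Conceptually, the “without” Cross Filter is an anti-join. Purely as an illustration (this is not how the report engine executes the filter), the equivalent SOQL would be along these lines:

SELECT Id, Name FROM Account WHERE Id NOT IN (SELECT AccountId FROM Opportunity WHERE IsClosed = true AND CloseDate = LAST_N_DAYS:30)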

[Figure: simple exception report]

Complex Example.
Accounts without a Won Opportunity involving a certain Product Family in the last 60 days.

For this report we start with a Cross Filter report to satisfy the “without Won Opportunity” requirement. From there we need an indicator (of the existence of an Opportunity Product linked to a product of the specified family) at the Opportunity level which can be used in a sub-filter. This is actually straightforward to do and relies on the old-school technique for exception reports, namely the use of a rollup summary field and simple workflow.

Solution components:
Opportunity Product – Custom Field – ProductFamilyHardwareFlag
Number (18,0 default 0)

Opportunity Product – Workflow Rule – “Opp Product – New or Changed Product”
Evaluate – Created and Edited – Formula (OR(ISNEW(), ISCHANGED( Product2Id )))

Opportunity Product – Workflow Action – “Opp Product – Set ProdFamilyHardwareFlag”
Field to update [ProductFamilyHardwareFlag] with Formula = IF(TEXT(Product2.Family)='Hardware', 1, 0)

Opportunity – Custom Field – ProductFamilyHardwareCount
Rollup Summary Field (Opportunity Product – SUM – Field to aggregate = ProductFamilyHardwareFlag). All records included.

How does this all work? In short, a workflow rule on Opportunity Product sets a numeric field equal to 1/0 based on the product family meeting the defined criteria. The numeric field is then rolled up to the Opportunity level via the rollup summary field, which is then applied in a sub-filter as below.

[Figure: complex exception report]

Note, this is just an example to illustrate the concept. An actual implementation would typically require a more generic approach.

Salesforce Development Process

There are typically two interpretations of the term “development process” – one being the tools, practices and methods applied in software development (i.e. methodology, plus build automation, standards etc.), the other being the process applied to get from requirements to working software (i.e. iterative or waterfall, plus how the analysis-design-build-test-release disciplines are executed). This post outlines one high-level approach to the latter in the context of Salesforce developments. The intent isn’t to be overly prescriptive; generally speaking each project requires its own defined process that factors in the resources available (critically, their skills and experience) plus the nature of the work and the timescales. That said, it is a truism that failed projects fail for a variety of reasons but successful projects are typically successful for the same reasons. A fundamental success factor is the adoption of a clearly defined and simple process – others being team empowerment and shared commitment.

The process above assumes an iterative approach and focuses the initial iteration on the foundation of a robust data model, a set of user profiles and permission sets, a role hierarchy, a record access model and a statement of the reporting requirements for the project. Subsequent iterations improve the quality of the foundation over time, as new functional areas are developed. The data model in this context will include a statement of the org-wide defaults for each object and the specifics of each relationship (master-detail, lookup, mandatory lookup etc.).

The record access model is critical – this shows how each user population maps to a user profile and role, and how they gain access to the data required, i.e. sharing rule, Apex managed sharing etc. In my experience defining an approximate access model upfront and then refining it during the feature build-out helps to avoid expensive refactoring later in the process and sets out a clear understanding for all contributors to the declarative and technical build. A piecemeal approach to defining a sharing model is commonplace – this rarely provides a clear and cohesive result. For similar reasons, defining a list of permission sets upfront ensures that user profiles are kept clean and focused, avoiding proliferation of profiles down the line.

It may be surprising to see analytics such as reports and dashboards being considered during the foundation stage; this however is one of the primary inputs to the definition of a fit-for-purpose data model. I’ve worked on countless projects where reporting has been overlooked until a late stage, at which point it has become apparent that the standard reporting features can’t produce the reports given the structure of the data. Ideally the data model should be designed from the outset to work well for both transaction processing and analytics.

A final point for consideration is the by-exception approach to identifying technical components. When breaking out the solution components required for a certain feature, expertise must be applied to ensure that standard product functionality or declarative options (workflows, reports etc.) are considered fully before bringing expensive technical options such as Visualforce or Apex to bear.