Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis, design, build cycle could be iterative in nature or fixed and may have different degrees of detail emerging at different points; however, the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also, critically, provides a predictable, consistent end-user experience.

When building custom applications using the declarative tools, we gain all of these advantages by using pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken: follow established patterns and use as many of the pre-fabricated components as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end-user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front-office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles is necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, the data relates to the User and is primarily modifiable by the User only, and there is no direct business relevance, then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Release Methodology – Simple Case

A very common challenge addressed by architects working with Salesforce is the definition of an appropriate release methodology. By this I mean the identification of the Salesforce orgs required to support the project delivery whether serial or concurrent in nature, the role and purpose of each org and critically, the means by which change is managed and synchronised across environments. With this latter point, a clear definition of the path-to-production is imperative.

In the large-scale, complex project case there is typically time and expertise available to define a bespoke methodology, with build automation, source code control system integration and so forth tailored to the specifics of the programme environment. There’s an abundance of best-practice information available online to help guide the definition of a release methodology for complex projects. For less complex projects, such as those employing the declarative build model only, there is less information available; in such cases what is typically required is a standardised, best-practice approach that can be adopted as-is.

The remainder of this post provides an outline view of an exemplar release methodology for small-to-medium scale, configuration-centric projects (i.e. no Apex code or technical complexities). This information is provided for reference purposes only.

Environment Strategy
The following diagram outlines the environments and their purpose, the defined release steps and a basic approach to change management.

Release Methodology - Simple Case

Key Principles
1. Isolate development from testing activities. This is the golden rule. Testing requires a stable environment unaffected by ongoing development. Development shouldn’t grind to a halt while system testing and acceptance testing processes are applied.
2. Utilise the minimum number of sandboxes possible. Synchronisation of change is time-expensive and error-prone; avoid it wherever possible. Preparation of standing data post sandbox refresh can also take time, as can the communication required to establish that a refresh can proceed.
3. Don’t over specify the sandbox type. Sandboxes are an expensive asset, especially full-copy and partial-data sandboxes. Calculate the required storage capacity and map to either Developer or Developer Pro. Retain full-copy sandboxes for purposes that do actually require the copied data.
4. Maintain a Change Control Log in the production org to record all changes (at a reasonably high-level) against applied environments.
5. Use the production org for implementation project collaboration. It can also be a useful adoption tool to create Chatter groups such as “Salesforce: Marketing”, “Salesforce: Finance” where collaboration can occur directly with the business users whilst the project is in flight.
6. Accept that change will inevitably be applied to the production org first; record such changes and apply them to the development and testing sandboxes as soon as possible.
7. Always verify the Change Control Log against the Setup Audit Trail before deployments.
8. Use Change Sets for deployment wherever possible.
9. Encourage a development process where Change Sets are updated continually, rather than retrospectively.
10. Always verify the Change Control Log against the list of component types supported by Change Sets; unsupported changes must be migrated manually.
11. On larger projects a Change Set partitioning strategy may be required: along functional lines, by team or by component type etc.
12. Ensure releases to production are documented and approved. A simple Deployment Request Form (DRF) template should be defined and used to gain approval. This process is key to communication and governance but also helps the team consider fully the pre- and post- deployment steps, risks and rollback strategy.
13. Post-release, communicate how business processes have been mapped to Salesforce concepts, as well as the permissions model. Understanding how things work in simple terms can help avoid end-user frustration with a new system. This can also reduce the support burden, as end-users can often self-diagnose the cause of a problem.

The org strategy diagram above presents an appropriate approach for a serial-release model, i.e. one project or one sprint at a time is developed, tested then released. In the concurrent-release model, where multiple parallel projects converge into a single production org, isolated development and test sandboxes should be duplicated per project, with an integration (or pre-production) org providing a synchronisation point where the combined state is validated prior to deployment to production.

Apex Trigger Exceptions

Custom Apex Triggers execute on standard CRM objects (Account, Contact, Lead etc.) and custom objects in response to all data modifications applied via the Salesforce web application, API transactions and custom Apex script. As such it is imperative that trigger code adheres to patterns that promote performance and maintainability, guards against recursive behaviour and most critically protects data integrity.

Apex code outside of triggers (Batch Apex, Visualforce Controllers etc.) is completely isolated to its specific context; trigger code is not – the effect of errant code is felt across the standard web app, mobile apps, API-based integration flows, data loads and so on. As such triggers must be used judiciously and avoided wherever possible on the core standard objects. I often come across multiple complex triggers defined on Account, where the logic applied is non-time critical and could have been handled by Batch Apex, or indeed by declarative methods. In certain circumstances this idealistic view is impractical and triggers are a necessary friend. When writing triggers always stick to a proven pattern – good examples provided by community experts are easy to find – and read the relevant sections in the Apex Language Reference if you haven’t done so for a while.

This post aims to provide some practical insight related to exception behaviour within Apex Triggers that may not be obvious from the referenced resources.

Key Concepts
1. DML Processing Mode
All bulk DML transactions on the platform run in either allOrNothing or partial processing mode. In some cases the behaviour is implicit and fixed; in other cases flags can be set to override the default behaviour.

allOrNothing
– Apex Database statements (e.g. update contactsToUpdate;)

partial processing
– Apex Database methods (e.g. Database.update(contactsToUpdate);). The default behaviour can be overridden via Database.DMLOptions (optAllOrNone) or the optional allOrNothing method parameter.
– API (default behaviour – can be overridden)
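
By way of illustration, the sketch below contrasts the two modes; contactsToUpdate is an assumed list of modified Contact records.

[sourcecode language="java"]
List<Contact> contactsToUpdate = [select Id, Department from Contact limit 10];
for (Contact c : contactsToUpdate) c.Department='Sales';

// allOrNothing - a single failure rolls back the entire DML statement and throws a DmlException.
update contactsToUpdate;

// partial processing - failed records are reported via SaveResult; successful records commit.
Database.SaveResult[] results = Database.update(contactsToUpdate, false);
for (Database.SaveResult sr : results){
    if (!sr.isSuccess()){
        for (Database.Error err : sr.getErrors()) System.debug(err.getMessage());
    }
}
[/sourcecode]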

2. .addError()
The .addError(custom message) method can be invoked at the record or field level within the trigger context to flag a record as invalid. This is particularly useful in surfacing logical validation errors. A best practice in regard to the custom messages is to use Custom Labels with a defined Apex Trigger Error category.
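
A minimal sketch follows; the Custom Labels referenced are hypothetical examples of the category convention described above.

[sourcecode language="java"]
trigger ContactBefore on Contact (before insert, before update) {
    for (Contact c : Trigger.new){
        // record-level error - flags the whole record as invalid.
        if (c.AccountId==null) c.addError(Label.Apex_Trigger_Error_Contact_No_Account);

        // field-level error - the message is displayed against the specific field.
        if (String.isBlank(c.Email)) c.Email.addError(Label.Apex_Trigger_Error_Contact_Email_Required);
    }
}
[/sourcecode]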

3. Runtime Exceptions
If an unhandled runtime exception is thrown then the Apex Transaction will immediately rollback and Apex exception notifications will occur. Structured exception handling can be added to the trigger code to catch runtime exceptions and to apply custom handling logic.
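
The sketch below shows structured exception handling within a trigger; the handler class is hypothetical. Note from the test cases that follow that a caught exception alone does not roll back the transaction.

[sourcecode language="java"]
trigger AccountBefore on Account (before update) {
    try {
        AccountTriggerHandler.applyLogic(Trigger.new, Trigger.oldMap); // hypothetical handler class.
    } catch (Exception e){
        // custom handling logic - without .addError() the transaction still commits.
        for (Account a : Trigger.new) a.addError(e.getMessage());
    }
}
[/sourcecode]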

Apex Trigger – Exception Test Cases
The following test cases highlight the impact of the .addError() method and runtime exceptions in the context of the 2 processing modes. Tests were conducted using Execute Anonymous and the Apex Data Loader at API version 30.0.

Test Case 1 – Does code continue after .addError()?
Result: Yes (this is necessary in both allOrNothing and partial processing cases to gather a complete set of errors).

Test Case 2.1 – all_or_none – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback.

Test Case 2.2 – all_or_none – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.3 – all_or_none – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.4 – all_or_none – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 3.1 – partial – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback of all uncommitted changes, the trigger fires again for the subset of records that have not had addError() applied. Up to 2 retries are performed, after which the “Too many batch retries in the presence of Apex triggers and partial failures.” exception is thrown.

Test Case 3.2 – partial – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.3 – partial – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.4 – partial – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 4 – partial – In a partial failure case are static variables set within the original trigger invocation reset before retry invocations?
Result: No – Static variables are not reset between trigger invocations. This means that static guard variables defined to protect against recursive behaviour will stop trigger logic being applied to records processed by retry invocations.

Conclusions
1) In all cases .addError() results in the full Apex Transaction being rolled back. Custom logging schemes such as writing exceptions to a Custom Object (including via @future) or sending notification emails are futile gestures when employed in conjunction with .addError().

2) Handled runtime exceptions result in the Apex Transaction being committed, unless .addError() is applied to at least one record.

3) In the partial processing case, governor limits are reset to their original state for each retry trigger invocation, static variables are not.

4) Where static guard variables are used to protect against re-execution of the trigger logic via field updates within update DML transactions, partial processing retry trigger invocations will also be blocked. Such invocations are consistent with those initiated by workflow field updates later in the order of execution.

It is difficult to use static variables to distinguish between a field update trigger execution and a partial processing retry – batch sizes can be equivalent, sequencing is unpredictable etc.

One possible approach is to use a custom field on the target object (Is Workflow Processing?); a field update action would be added to set this to “True” alongside the existing actions. In the trigger code we can now reliably test this field state to identify a workflow field update initiated trigger execution. The trigger code must always reset the field to “False” before returning, such that any partial processing retry executions safely proceed. Where trigger logic resides in an after update trigger, a before update trigger could be added simply for convenience of resetting the field – in this case a static could be set in the before context and referenced in the after context. A sketch of this approach follows.
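
The sketch below assumes a hypothetical IsWorkflowProcessing__c field, set to “True” by a workflow field update action defined alongside the existing actions.

[sourcecode language="java"]
public class TriggerGuard {
    public static Boolean isWorkflowInvocation=false;
}

trigger AccountBefore on Account (before update) {
    for (Account a : Trigger.new){
        if (a.IsWorkflowProcessing__c){
            TriggerGuard.isWorkflowInvocation=true; // referenced by the after update logic.
            a.IsWorkflowProcessing__c=false;        // reset so retry invocations safely proceed.
        }
    }
}
[/sourcecode]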

It’s possible there are more efficient ways to provide a robust solution to this; a complete Apex-based solution would definitely be preferable to the custom-field-based solution.

Visualforce Controller Class Convention

A quick post to outline an informal convention for the definition of a Visualforce controller class, the key maintainability characteristics being predictable structure, readability and prominent revision history. All developers have a subjective preference in this regard; however, consistency is key, particularly in the Salesforce context where multiple developers/consultancies contribute to a codebase over its lifetime. A simple, logical approach always makes sense to maximise adoption.

[sourcecode language="java"]
/*
Name: MyPageController.cls
Copyright © 2014 Force365
======================================================
======================================================
Purpose:
-------
Controller class for the VF page - MyPage.page
======================================================
======================================================
History
-------
Ver.  Author      Date        Detail
1.0   Mark Cane&  2014-05-20  Class creation.
1.1   Mark Cane&  2014-05-21  Initial coding for page initialisation.
*/
public with sharing class MyPageController {
    //& public-scoped properties.
    public List<MyWrapperClass> wrapperClassInstances { get; set; }
    //& End public-scoped properties.

    //& private-scoped variables.
    private Boolean isInitialised=false;
    //& End private-scoped variables.

    //& page initialisation code.
    public MyPageController(){
        initialise();
    }

    private void initialise(){ isInitialised=true; }
    //& End page initialisation code.

    //& page actions.
    public PageReference saveAction(){
        return null;
    }
    //& End page actions.

    //& data access helpers (class methods accessed from binding expressions).
    //& End data access helpers.

    //& controller code helpers (class methods providing helper functions to data access helpers or actions).
    //& End controller code helpers.

    //& inner classes (wrapper classes typically, extending SObjects for convenience in the UI).
    public class MyWrapperClass {
        public MyWrapperClass(){}
    }
    //& End inner classes.
}
[/sourcecode]

Salesforce Implementation Audit

This post provides an outline approach to consider when performing an internal audit of an existing (or emerging) Salesforce implementation. As an individual who specialises in the provision of such quality assurance services from an external perspective, I’m convinced that most projects would benefit from a periodic internal review, perhaps augmented by some occasional external perspective and insight (Salesforce services can help here). However this is approached, in the majority case the internal project team will have the requisite experience and competency to deliver such an introspective review; the challenge is often one of finding the right time, or indeed any time, to conduct it. This is why a retrospective build review should be planned every 3 or 4 sprints (or thereabouts – projects differ), with a full implementation audit scheduled every release. The principle being that whilst the build is in flight, periodic sense checks are made on key quality aspects (technical integrity, platform limits etc.), with a comprehensive audit applied pre-release (ideally). The latter may need to consider a combined future deployment state where multiple parallel development streams converge into a single production org.

Note, an implementation audit is build-focused (or solution oriented) and should not assess the fit-for-purpose nature of the functionality in respect to business requirements (i.e. the problem-to-solution mapping). The only exception to this arises where an obvious mapping to a standard feature is missed resulting in a “gap” that is unnecessarily filled by a technical solution option.

Note, in order to cut down on the time required to conduct the audit, access to individuals who can describe the functional intent is imperative. In the internal case the programme/project architect should be leading the audit and should be aware of the functional design context.

Note, before diving into the detail of the implementation, it can be highly valuable to re-define the high-level solution architecture (HLSA) in current state terms. The key point being that the macro-level view is often distorted by micro-level design decisions made during the project course. A periodic check is useful to ensure that this organic change is understood and that the integrity of the original architectural vision is maintained.

Indicative review areas are listed below (the list is not exhaustive).

Declarative build environment
1. Identify platform limits that are reaching a high percentage of utilisation that may present risk to scalability and future phases of development.
2. Identify any future maintainability risk presented by the conventions applied in the definition of configuration elements (e.g. naming conventions, opportunities for best practice improvements).
3. Identify functional areas where a mapping to standard features could be achieved.
4. Identify security vulnerabilities (org-access, sharing model etc.).

Technical customisations
1. Identify risks to data integrity and application responsiveness.
2. Document risks to scalability and extensibility imposed by platform execution limits.
3. Document deviations from best practice technical patterns, conventions and coding standards.
4. Identify security vulnerabilities introduced by technical componentry.
5. Document deviations from best practice development practices and process.

Integration architecture
1. Identify risk associated with deviations from best practice integration patterns and practices.
2. Identify opportunities to reduce limits consumption.
3. Identify data integrity and scalability vulnerabilities related to the current state integration architecture.

Identity management
1. Identify risk associated with implemented single sign-on processes and related services/infrastructure.
2. Document deviations from best practices related to identity management.

Salesforce Naming Conventions – Declarative

Updated – 2014-11-18

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn’t exhaustive or necessarily better than any other set of standards; having a set of conventions applied consistently across the build is key, and the specifics of those conventions can be subjective.

A key principle applied is that of consistency with established standard conventions wherever possible. For example, standard objects and fields don’t have underscores in the API names and follow simple patterns in terms of naming; there’s no good reason to deviate from this for custom objects and fields. Naming conventions shouldn’t be considered an area for creative thinking; instead creative energy should be focused on the functional and technical design, with the conventions applied being mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Data Source -> Data_Source__c

Note – the Description attribute for Custom Objects must always be populated appropriately. A self-describing configuration is key to future maintenance.

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Date of Birth -> Date_Of_Birth__c

In the scenario where an implementation is comprised of distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name Prefix. e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Note – the Description attribute for Custom Fields must always be populated appropriately. A self-describing configuration is key to future maintenance.

Note – the Help Text attribute for Custom Fields must always be populated appropriately. Inline user assistance can improve the end user experience greatly and reduce ongoing support.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c

Convention F – Workflow Rule
Always separate the condition from the outcome when configuring workflow. The Workflow Rule is the condition, the associated Actions are the outcome. In many cases the 2 become entwined resulting in duplicated rules, or rules with actions unrelated to the stated purpose of the rule. A clear set of rules related to conditions promotes re-use.

[Object]: [Criteria Description i.e. Condition]

Convention G – Workflow Action

Field Update –
[Object]: Set [Field] to [Value]

Email Alert –
[Object]: Send [Template short description]

Task –
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS: [Object] [From selection] to [To selection]
CBS: [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed

Convention K – Validation Rule

Single field :
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields :
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object :
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Single Permissions :
Name must match the permission assigned.
Case Feed
Manage Dashboards
Manage Public List Views

Simple Combined Permissions :
[Verb] [Noun]
Manage Invoices

Combined Permissions :
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Convention P – Reports and Dashboard Folders

[Grouping term] Reports (adhoc reports related to a specific department, team, project etc.)
[Grouping term] Dashboard Reports (best practice to isolate dashboard reports in clear location)
[Grouping term] Dashboards

Finance Reports
HR Dashboards
HR Dashboard Reports

Note – the [Report Description] attribute for Reports must always be populated appropriately. A self-describing configuration is key to future maintenance.

Custom Settings Naming Conventions

A quick post to share some thoughts on the standardisation of naming conventions applied to Custom Settings. With Custom Objects it is an obvious best practice to mirror exactly the conventions applied by the Standard Objects; there are no reasons not to adhere to this approach, none. With Custom Settings however there is no comparable reference, and as such a great deal of variation exists in the naming conventions applied. For many this matters little; however I’d make the usual arguments about maintenance, readability, build quality through predictable convention and so forth. There’s also merit in clearly distinguishing Hierarchy from List types and ensuring a meaningful naming for the latter, which can be utilised across many declarative build elements.

I use the naming conventions defined below, for simplicity and clarity reasons. All that really matters is that a standardised approach is taken.

1. Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

2. API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

Examples – List
e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

3. API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

Examples – Hierarchy
e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c

As a further best practice, it is important that any requisite logic applied to the population of the Name field is described in the Custom Setting Description field. If the population is immaterial then simply state as such. Given the inability to relate settings at the attribute level, it can be the case that the Name field plays some role in grouping related settings.

Finally, where possible always retrieve Custom Settings data via the Custom Settings methods – not via SOQL query. In the latter case the Application Cache is bypassed and the query counts against the SOQL query governor limits within the Apex transaction.
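
The sketch below contrasts the two access approaches, using the example settings objects named above.

[sourcecode language="java"]
// preferred - Custom Settings methods are served from the Application Cache.
DataSourceListSetting__c ds = DataSourceListSetting__c.getInstance('Primary');
Map<String, DataSourceListSetting__c> allSources = DataSourceListSetting__c.getAll();

// hierarchy settings - returns the merged record applicable to the running user.
OrgBehaviourSettings__c settings = OrgBehaviourSettings__c.getInstance();

// avoid - a SOQL query bypasses the cache and consumes query governor limits.
List<DataSourceListSetting__c> rows = [select Id, Name from DataSourceListSetting__c];
[/sourcecode]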

Salesforce Implementation Game Plan

Whether you’re managing a commercial software development, leading a consultancy project or building an IKEA table, a game plan is absolutely key to successful delivery. In the latter example IKEA recognises the importance of prescriptive guidance and supplies an instruction leaflet in the box. This however covers only one dimension of successful delivery, the ‘What’ (i.e. what you need to do) – the ‘Who’, the ‘How’ and the ‘When’ are left up to you. In the case of an IKEA table this is acceptable, as the resource is probably you (who will likely build the table in your own way regardless of advice received) and the timeline may not be critical. Moving away from this tenuous example, in non-trivial situations all the dimensions of successful delivery are equally significant and must combine cohesively to achieve the defined objective.

This calm, controlled, empowering and productive state is precisely what planning is intended to achieve. This success delivery-state is rarely the norm, for various reasons: inexperience, over-optimism, command-and-control culture, inadequate expertise, process rigidity, poor communication etc. The net effect of such factors is a distress delivery-state where productivity is low and the sense of team is diminished in terms of empowerment, trust and accountability.

Like many people I often find myself misquoting Leo Tolstoy, who didn’t say that failing projects fail for a variety of reasons whilst succeeding projects succeed for the same reason. He did say this however – “All happy families are alike; each unhappy family is unhappy in its own way.” (Leo Tolstoy, Anna Karenina) – which is where the interpretation comes from. I definitely read this somewhere; apologies if this was your book or blog.

So the idea is that all successful projects succeed for the same reason – that reason being, in my view, that the project was able to achieve a success delivery-state, i.e. the plan encompassed all the requisite dimensions and maintained the agility to react and adapt during flight. In this context it matters little which project process, methodology, framework etc. you employ; what matters is that you have a well-conceived plan from the outset, or game plan as I like to call it, and execute on that plan in a disciplined manner.

A game plan can take many forms (spreadsheet, picture, diagram, A3 sheet pinned to the wall etc.); whichever way you go, the end result should be an engaging, high-level fusion of vision and planning, tuned for effective communication to your specific audience.

The game plan should influence the detailed planning, but is a higher-level concern that outlines the fundamentals of how the project will succeed, covering the essential aspects only. My preference in the past has been an annotated timeline diagram, showing clearly the basis upon which I’m confident of success – this can be highly effective in terms of establishing confidence within the delivery team and across stakeholders. I don’t believe this is possible with a Gantt chart or spreadsheet, even where progression metrics are added.

By way of illustration the following sections outline an example Game Plan related to a fictitious Salesforce implementation project.

gameplan example

In summary, the game plan concept applied to project delivery can be a powerful tool. It matters little how the game plan is presented or what it contains, simply having one in any form can make a big difference in terms of confidence, focus and communication.

Salesforce Platform Limits – Designing for Scale

A Salesforce instance is a constrained environment where limits exist in respect to capacity and execution. Examples of capacity limits are data storage, number of active users, number of custom objects/custom fields/picklists etc.; examples of execution limits are API calls per 24-hour period, SOQL queries executed within an Apex transaction, Viewstate size in Visualforce pages etc.

In both the capacity limit and execution limit case it is imperative that the existence and implications of the constraints are factored into the solution design from the outset. Each and every constrained resource must be treated as a precious asset and consumed in an optimised manner, even on seemingly trivial implementation projects. From experience it is often the case that a Salesforce implementation grows (in terms of both use and breadth of functionality) at a rapid rate once it gains traction in an enterprise. If you’ve carelessly exhausted all the constrained resources in the first release, what happens next?

Note, some soft limits can be increased by Salesforce on a discretionary or paid-for basis, however this doesn’t negate the need to make responsible design decisions, and at the very least to highlight the possible additional cost associated with a particular approach. Hard limits do exist in key areas; the Spanning Relationships Limit, or cross-object reference limit as it is also referred to, is a strong example of this.

Designing for scale simply requires an intelligent consumption of such resources and appropriate solution design decisions in a limited number of areas. The proliferation of Apex and Visualforce related execution limits doesn’t necessarily impact the scalability of the implementation; the impact is isolated to the micro level. The selected limits listed below however apply at the org level (Salesforce instance) and can constrain the scalability of an implementation (in functional terms). This list is not exhaustive; for a complete picture refer to the Salesforce Limits Quick Reference Guide.

Limits Primarily Influenced by User License Model

Asynchronous Apex Method Executions :
This limit includes @future methods, Batch Apex (start, execute and finish method invocations) and Scheduled Apex (execute method invocations). Future method calls made from Apex Triggers can be a risk in relation to this limit. For example, Apex Triggers which fire on record updates and make callouts via @future methods can cause scalability issues as data volumes grow. In this example it may become necessary to bulk process the modifications via Batch Apex, assuming a batch style of integration is acceptable. What if near real-time (NRT) is necessary?

The calculated limit is the higher of 250K or (200 * user license count), where the licenses applicable to this calculation are full Salesforce and Force.com App Subscription only.
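
The hypothetical sketch below illustrates the consumption pattern described above: each trigger invocation consumes one asynchronous method execution, so at high data modification volumes Batch Apex bulk processing may be the more scalable design.

[sourcecode language="java"]
public class AccountSyncService {
    // each call consumes one asynchronous Apex method execution from the 24-hour allocation.
    @future(callout=true)
    public static void syncAccounts(Set<Id> accountIds){
        // callout logic to the external system would go here.
    }
}

trigger AccountAfter on Account (after update) {
    AccountSyncService.syncAccounts(Trigger.newMap.keySet());
}
[/sourcecode]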

Total API Request Limit :
Enterprise Edition = 1,000 per Salesforce and Salesforce platform license, 200 for Force.com App subscription license
Unlimited/Performance Edition = 5,000 per Salesforce and Salesforce platform license, 200 per Force.com app subscription license

Note, sandboxes have a flat limit of 5M which can give a false impression of the limits applied in production.

All inbound API traffic counts towards this limit, including Outlook plug-ins, Data Loader etc. For implementations with limited Standard users this limit can be restrictive, and it is reasonably common for extension packs to be purchased to mitigate this. In all cases consumption must be optimised by batching updates and using the Bulk API where possible.

Limits Primarily Influenced by Salesforce Edition

Workflow Time Triggers Per Hour :
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

This limit can be an issue for implementations with high-volume transaction processing throughputs, where time-based workflow is employed to send reminder emails etc. If the hourly limit is exceeded triggers are processed in the next hour and so on. This may cause issues if the actions are time critical.

Workflow Emails Per Day :
1,000 per standard Salesforce license, capped at 2 million.

Apex Emails Per Day:
1,000 in total. The maximum message count per send is limited per edition.
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

An unlimited number of emails can be sent per day to Users by using the SingleEmailMessage.setTargetObjectId() and MassEmailMessage.setTargetObjectIds() methods. This includes customer and partner portal users and even high volume portal users.

This limit is critical to understand and mitigate in a scalable solution design. In short, don’t use Apex to send email unless the recipient is a User; in portal cases use the User Id and not the Contact Id. Prefer Workflow-based email sending, as the limits are considerably higher, and perhaps use Apex script to set criteria picked up by a Workflow rule.
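
A minimal sketch of the User-targeted approach follows; the subject and body values are illustrative only.

[sourcecode language="java"]
// sending to a User Id via setTargetObjectId() does not count against the daily Apex email limit.
Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setTargetObjectId(UserInfo.getUserId()); // a User Id - not a Contact Id in portal cases.
mail.setSaveAsActivity(false);                // required when the target is a User.
mail.setSubject('Example notification');
mail.setPlainTextBody('Example body text.');
Messaging.sendEmail(new Messaging.SingleEmailMessage[]{ mail });
[/sourcecode]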

Additional Limits to Consider

Batch Apex (5 pending or active)
Scheduled Jobs (100 scheduled)
Apex Script Characters (3M)
Dynamic Apex Describes
Cross Object References

From a best practice perspective a Platform Limits Reference document should be maintained for all Salesforce implementations, listing the applicable limits and related consumption. This approach surfaces the existence of the limits and should drive design principles such as using Workflow to send customer emails in preference to Apex script. Without an ordered approach where limit consumption is proactively tracked, it is highly likely that expensive refactoring exercises or multi-org strategies become necessary over time that could have been minimised, deferred or entirely avoided.

Any-org Design Considerations

The concept of any-org development is an interesting one. The strict definition, to my mind, being the development of a set of components (perhaps packaged) that are designed and coded specifically to install and function in any Salesforce org. This is typically an ISV concern, where testing and maintaining a single code base can be highly desirable over managing a base package plus multiple extension packages, or in the worst case multiple independent packages. Either way an ISV needs to maximise the addressable market for a product whilst minimising the ongoing effort to do so. The same drivers do not apply in the single-org case, where a consultancy and/or in-house team are delivering technical components to be installed into a known Salesforce org (or multi-org estate). In the single-org case it is common practice to see technical components designed and coded for the current state of the target org(s), with no consideration of how the org(s) may evolve over time. This can often result in situations where costly technical work is required simply to activate an optional product feature, or to provide user access in another locale. In such cases the situation can often be compounded by the fact that the original development team are no longer available.

In short, in my view some degree of future-proofing should be considered in designing for the single-org model, using the techniques applied by ISVs in the any-org model.

Any-org Design Considerations

1. Optional Features

Examples: Person Accounts, Quotes.

There are a multitude of optional product features which can be enabled directly in the Salesforce web application UI or via Salesforce support. In the majority of cases such feature activations irreversibly add new objects and fields to the Salesforce instance. From the perspective of keeping simple orgs uncluttered by objects related to unused features this makes perfect sense. From the perspective of designing for the any-org model, this approach poses a few challenges. The main challenge being that Apex code won’t compile where a static reference exists to an object (or field) that doesn’t exist in the org. There is no simple answer to this; instead a selective approach should be taken where optional features that may already be active (or could in the future be activated), and that have some impact on your code, are accommodated. The approach to achieving this for any-org Apex code basically involves replacing static references with Dynamic SOQL and Dynamic Apex (see Coding Techniques below).

2. Multi-currency

The default currency mode of a Salesforce org is single currency, and the majority stay this way. It is however common to have multi-currency and perhaps advanced currency management (ACM) activated in orgs where business operations are international. Activation of multi-currency often occurs once the Salesforce org has become established, perhaps in a single region, and this can be problematic where technical customisations have been added that aren’t currency aware.

In the any-org case, all Apex code should be multi-currency aware and use Dynamic SOQL to add the CurrencyIsoCode field to all object queries involving currency fields. Additionally, currency-aware logic should include checks to ensure that related transactions are the same currency, and that custom analytics present data in the corporate currency (the default and therefore expected behaviour for the native reporting functions). Note, the behaviour of aggregate functions involving currency fields must also be handled.

3. Editions Support

A key design decision for ISVs is the set of Salesforce editions to be supported by their managed package. This has less relevance to the single-org model, unless the multi-org estate includes different editions.

It is possible to group editions into two distinct groups:
1. Group (or Team) Edition and Professional Edition
2. Enterprise Edition and Unlimited Edition

In the case of group 1, assume that standard objects such as Product2, Pricebook2, PricebookEntry and RecordType do not exist and ensure no static references exist in the code. The OrganizationType field on the Organization object tells us which edition the code is executing within.

[sourcecode language="java"]
private static Boolean isTeamOrProEdition; // cached across the transaction.

public static Boolean isTeamOrProEdition(){
    if (isTeamOrProEdition==null){
        List<Organization> orgs = [select OrganizationType from Organization where Id=:UserInfo.getOrganizationId() limit 1];
        if (orgs.size()>0)
            isTeamOrProEdition=(orgs[0].OrganizationType=='Team Edition' || orgs[0].OrganizationType=='Professional Edition');
    }
    return isTeamOrProEdition;
}
[/sourcecode]

4. Internationalisation

Whether an international user base is anticipated or not, it is general software development best practice to externalise string literals into resource files. In the Salesforce context this means Custom Labels. A best practice here is to apply strict categorisation and a meaningful naming convention. Also ensure all literals are externalised, not just labels in the UI – for example trigger error messages.

Another consideration for i18n is the use of currency and date formatting helpers. Where UI components do not apply default formatting for an SObject field you need to handle this in code. An i18nHelper class which translates ISO locale and currency codes to date format strings and currency format strings plus symbols respectively can be very helpful (see the sketch below).

Useful abbreviations:
i18n – internationalisation; development practice enabling support for localisation.
l10n – localisation; act of localising an internationalised software product for a specific locale.
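
A minimal sketch of such a helper is shown below; the format and symbol mappings are illustrative assumptions only, not a complete locale table.

[sourcecode language="java"]
public class i18nHelper {
    // illustrative subset only - a production version would cover all supported locales.
    private static Map<String,String> localeToDateFormat = new Map<String,String>{
        'en_US' => 'MM/dd/yyyy', 'en_GB' => 'dd/MM/yyyy', 'de_DE' => 'dd.MM.yyyy'
    };
    private static Map<String,String> currencyToSymbol = new Map<String,String>{
        'USD' => '$', 'GBP' => '£', 'EUR' => '€'
    };

    public static String formatDate(Date d){
        String f = localeToDateFormat.get(UserInfo.getLocale());
        if (f==null) f='yyyy-MM-dd';
        return DateTime.newInstance(d.year(), d.month(), d.day()).format(f);
    }

    public static String formatCurrency(Decimal amount, String isoCode){
        String symbol = currencyToSymbol.get(isoCode);
        return (symbol!=null ? symbol : isoCode+' ')+String.valueOf(amount.setScale(2));
    }
}
[/sourcecode]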

5. Profile Permissions

Visualforce pages are preprocessed for components directly bound to SObject fields where the user profile does not have CRUD or FLS permissions. In such cases the fields are not displayed or are made read-only, depending on the visibility state. This comes as a surprise for many developers who assume that User Profile permissions are entirely ignored on Visualforce pages.

Reference: Enforcing_CRUD_and_FLS

In the any-org model, where direct SObject field binding is being used in a Visualforce page, this may require a manual check during initialisation to protect the functional integrity of the page. For example, a custom page with no fields displayed and no explanation is not a great user experience; instead the page should simply inform users that they don’t have sufficient permissions, and they can then take this up with their Administrators.

[sourcecode language="java"]
private Boolean hasRequiredFLS(){
    // rule 1: all custom fields must be accessible.
    // rule 2: check isUpdateable on all fields where inline editing is offered.
    Schema.DescribeFieldResult d;

    Map<String, Schema.SObjectField> siFieldNameToToken=Schema.SObjectType.SalesInvoice__c.fields.getMap();

    for (Schema.SObjectField f : siFieldNameToToken.values()){
        d = f.getDescribe();

        if (!d.isCustom()) continue;
        if (!d.isAccessible()) return false;
    }

    d = siFieldNameToToken.get('InvoiceDate__c').getDescribe();
    if (!d.isUpdateable())
        this.isInlineEditable=false;
    else {
        d = siFieldNameToToken.get('DueDate__c').getDescribe();
        if (!d.isUpdateable())
            this.isInlineEditable=false;
        else this.isInlineEditable=true;
    }
    return true;
}
[/sourcecode]

Coding Techniques

1. Dynamic SOQL

Do not statically reference objects or fields that may not exist in the org. Instead compose Dynamic SOQL queries and execute them via Database.query(). With this approach, you can build the required query using flags which indicate the presence of optional feature fields such as RecordTypeId, CurrencyIsoCode etc. The Apex Language Reference provides good coverage of Dynamic SOQL. Be very careful to ensure that the composed string does not include user-supplied text input – this would open up a vulnerability to SOQL injection security vectors.

[sourcecode language="java"]
public static Id getStandardPricebookId(){
    if (standardPricebookId==null){
        String q='select Id, IsActive from Pricebook2 where IsStandard=true';
        SObject p = Database.query(q);

        if (!(Boolean)p.get('IsActive')){
            p.put('IsActive',true);
            update p;
        }
        standardPricebookId=(Id)p.get('Id');
    }
    return standardPricebookId;
}

public SalesInvoice__c retrieveSalesInvoice(String siId){
    try{
        //& Using dynamic Apex to retrieve fields from the fieldset to create a soql query that returns all fields required by the view.
        String q='select Id,Name,OwnerId';
        q+=',TotalGross__c';

        for(Schema.FieldSetMember f : SObjectType.SalesInvoice__c.FieldSets.invoices__Additional_Information.getFields()){
            if (!q.contains(f.getFieldPath())) q+=','+f.getFieldPath();
        }

        if (UserInfo.isMultiCurrencyOrganization()) q+=',CurrencyIsoCode';
        if (AppHelper.isPersonAccountsEnabled()) q+=',PersonEmail,PersonContactId';

        q+=',(select Id,Description__c,Quantity__c from SalesInvoiceLineItems__r order by CreatedDate asc)';
        q+=' from SalesInvoice__c';
        q+=' where Id=\''+siId+'\'';

        return Database.query(q);
    } catch (Exception e){
        throw e;
    }
}
[/sourcecode]

2. Dynamic Apex

Do not statically reference objects or fields that may not exist in the org. Instead use Dynamic Apex techniques such as global describes and field describes. Where a new SObject is required, use the newSObject() method as shown below; this is particularly useful for unit test data creation. The Apex Language Reference provides good coverage of Dynamic Apex; every developer should be familiar with this topic.

[sourcecode language="java"]
public static List<SObject> createPBE(Id pricebookId, List<SObject> products){
    SObject pbe;
    List<SObject> entries = new List<SObject>();

    Schema.SObjectType targetType = Schema.getGlobalDescribe().get('PricebookEntry');
    if (targetType==null) return null;

    for (SObject p : products){
        pbe = targetType.newSObject();

        pbe.put('Pricebook2Id',pricebookId);
        pbe.put('Product2Id',p.Id);
        pbe.put('UseStandardPrice',false);
        pbe.put('UnitPrice',100);
        pbe.put('IsActive',true);
        entries.add(pbe);
    }
    if (entries.size()>0) insert entries;
    return entries;
}
[/sourcecode]

3. UserInfo Methods

The UserInfo standard class provides some highly useful methods for any-org coding, such as isMultiCurrencyOrganization(), getDefaultCurrency(), getLocale() and getTimeZone(). The isMultiCurrencyOrganization() method will be frequently used to branch code specific to multi-currency orgs.

[sourcecode language="java"]
private static String corporateCurrencyIsoCode; // cached across the transaction.

public static String getCorporateCurrency(){
    if (corporateCurrencyIsoCode==null){
        corporateCurrencyIsoCode=UserInfo.getDefaultCurrency();

        if (UserInfo.isMultiCurrencyOrganization()){
            String q='select IsoCode, ConversionRate from CurrencyType where IsActive=true and IsCorporate=true';
            List<SObject> currencies = Database.query(q);
            if (currencies.size()>0)
                corporateCurrencyIsoCode=(String)currencies[0].get('IsoCode');
        }
    }
    return corporateCurrencyIsoCode;
}
[/sourcecode]

Challenges

1. Unit Test Data

In the any-org model the creation of unit test data can be a challenge due to the potential existence of mandatory custom fields and/or validation rules. To mitigate the former, Dynamic Apex can be used to identify mandatory fields and their data types such that test data can be added (via a factory pattern of some sort – see the sketch below). In the latter case there is no way to reliably detect a validation rule condition, and as such for ISVs it is a blessing that unit tests do not actually have to pass in a subscriber org (wrong as this may be in principle). In the single-org case we can improve on this (and we have to) by adding a global Validation Rule switch-off flag in an Org Behaviour Custom Setting (see previous post) – this approach is helpful in many areas, but for unit test data creation it can isolate test code from Validation Rules added post-deployment. There’s a tradeoff here between protecting unit tests and the risk of using test data that may not adhere to the current set of Validation Rules.
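
A minimal factory sketch follows; the default values assigned per data type are illustrative assumptions.

[sourcecode language="java"]
public static SObject buildTestRecord(String objectName){
    Schema.SObjectType t = Schema.getGlobalDescribe().get(objectName);
    SObject record = t.newSObject();

    for (Schema.SObjectField f : t.getDescribe().fields.getMap().values()){
        Schema.DescribeFieldResult d = f.getDescribe();

        // mandatory custom field = creatable, not nillable and no default on create.
        if (!d.isCustom() || d.isNillable() || !d.isCreateable() || d.isDefaultedOnCreate()) continue;

        if (d.getType()==Schema.DisplayType.String) record.put(f,'test');
        else if (d.getType()==Schema.DisplayType.Double) record.put(f,1);
        else if (d.getType()==Schema.DisplayType.Date) record.put(f,Date.today());
        else if (d.getType()==Schema.DisplayType.Boolean) record.put(f,true);
    }
    return record;
}
[/sourcecode]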

2. Unit Test Code Coverage

The addition of multiple conditional code paths, i.e. branching, for the any-org case makes it challenging to achieve a high code coverage percentage in orgs which do not have the accommodated features activated. For example, unit tests executing in a single currency org will not run code specific to multi-currency, and the code coverage drops accordingly. To mitigate this, consider adding OR conditions to IF branches which include unit test flags, perhaps via Test.isRunningTest(), to cover as much code as possible before leaving the branch. During coding always strive to absolutely minimise the feature-specific code – this approach will help greatly in respect to unit test coverage.

3. QA

In the any-org model it is imperative to test your code in an org with the accommodated features activated. This will require multiple QA orgs and can increase the overall testing overhead considerably. Also factor in the lead time required to have features activated by Salesforce support, such as multi-currency and Person Accounts.

4. Security

Dynamic SOQL queries open up the possibility of SOQL injection attacks where user-supplied text values are concatenated into an executed SOQL query string. Always sanitise and escape data values where such code behaviour is necessary.

5. Governor Limits

The any-org model is highly contingent upon the use of limited resources such as Apex Describes. As a best practice, employ a helper class pattern with cached values (see the sketch below).
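
A minimal caching helper sketch, assuming the describe results are reused across the transaction:

[sourcecode language="java"]
public class DescribeHelper {
    // the global describe is performed once per transaction and cached in the static.
    private static Map<String, Schema.SObjectType> globalDescribe;

    public static Schema.SObjectType getType(String objectName){
        if (globalDescribe==null) globalDescribe=Schema.getGlobalDescribe();
        return globalDescribe.get(objectName);
    }

    public static Boolean objectExists(String objectName){
        return getType(objectName)!=null;
    }
}
[/sourcecode]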

One Approach – Future-proofing Single-org Developments

Optional Features – selective
Multi-currency – yes
Editions Support – no
i18n – yes
Unit Test Data – yes
Profile Permissions – yes

The list above is my default position on the approach to take on single-org developments; this can change significantly depending on the current state of the org in terms of configuration and customisation, plus the client perspective on the evolution of their Salesforce org and attitude toward investing in future-proofing/extensibility. In the single-org, consultancy project case it’s advisable to be completely open and let the client decide if the additional X% cost is worth the value. I think the real point here is that the conversation takes place and the client has the opportunity to make an informed decision.