Conceptual Data Modelling

The biggest area of risk on any Salesforce implementation project is the data model; in my view this assertion is beyond question. The object data structures and relationships underpin everything. Design mistakes made in the declarative configuration, or indeed in technical components such as errant Apex Triggers or poorly executed Visualforce pages, are typically isolated and therefore relatively straightforward to remediate. A flawed data model will impact every aspect of the implementation, from the presentation layer through to the physical implementation of data integration flows. This translates directly to build time, build cost and the total cost of ownership. It is therefore incredibly important that time is spent ensuring the data model is efficient in terms of normalisation, robust and fit for purpose; but also that large data volumes (LDV) are considered, that business-critical KPIs can be delivered via the standard reporting tools and that a viable sharing model is possible. These latter characteristics relate to the physical model, meaning the translation of the logical model into the target physical environment, i.e. Salesforce (or perhaps database.com).

Taking a step back, the definition of a data model should journey through three stages: conceptual, logical and physical design. In the majority of cases projects jump straight into entity relationship modelling – a logical design technique. In extreme cases the starting point is the physical model, where traditional data modelling practice is abandoned in favour of a risky incremental approach with objects being identified as they are encountered in the build process. In many cases starting with a logical model can work very well, enabling a thorough understanding of the data to be developed, captured and communicated before the all-important transition to the physical model. In other cases, particularly where there is high complexity or low understanding of the data structures, a preceding conceptual modelling exercise can help greatly in ensuring the validity and efficiency of the logical model. The remainder of this post outlines one useful technique for conceptual data modelling: Object Role Modelling (ORM).

I first started using ORM a few years back on accounting-related software development projects where the data requirements were emergent in nature and the project context was of significant complexity. There was also a need to communicate early forms of the data model in simple terms and to show the systematic, fact-based nature of the model composition. The ORM conceptual data model delivered precisely this capability.

ORM – What is it?
Object Role Modelling is a conceptual data modelling technique based on the definition of facts in the form of natural language and intuitive diagrams. ORM models are subject to rigorous data population checks, the addition of logical constraints and iterative improvement. A key concept of ORM is the Conceptual Schema Design Procedure (CSDP), a prescriptive 7-step approach to the application of ORM, i.e. the analysis and design of data. Once the conceptual model is complete and validated, a simple algorithm can be applied to produce a logical view, i.e. a set of normalised entities (an ERD) that are guaranteed to be free of redundancy. This generation of a robust logical model directly from the conceptual schema is a key benefit of the ORM technique.
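By way of illustration, the elementary facts at the heart of an ORM model are verbalised in natural language before any diagramming takes place. The fact type, fact and constraint below are invented examples of this style, not drawn from a specific model.

Fact type: Employee (nr) works for Department (name).
Fact: Employee 101 works for Department ‘Finance’.
Constraint: Each Employee works for at most one Department.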

Whilst many of the underlying principles have existed in various forms since the 1970s, ORM as described here was first formalised by Dr. Terry Halpin in his PhD thesis in 1989. Since then a number of books and publications have followed from Dr. Halpin and other advocates. Interestingly, Microsoft made some investment in ORM in the early 2000s with the implementation of ORM as part of the Visual Studio for Enterprise Architects (VSEA) product. VSEA offered tool support in the form of NORMA (Natural ORM Architect), a memorable acronym. International ORM workshops are held annually; the ORM2014 workshop takes place in Italy this month.

In terms of tool support, ORM2 stencils are available for both Visio and OmniGraffle.

ORM Example
The technique is best described in the ORM whitepaper. I won’t attempt to replicate or paraphrase that content; instead, a very basic illustrative model is provided below to give nothing more than a sense of how a conceptual model appears.

ORM2 basic example

Final Thoughts
In most cases a conceptual data model can be an unnecessary overhead; however, where data requirements are emergent or sufficiently complex to warrant a distinct analysis and design process, the application of Object Role Modelling can be highly beneficial. Understanding the potential of such techniques is perhaps the most important aspect; a good practitioner should have a broad range of modelling techniques to call upon.

References
Official ORM Site
ORM2 Whitepaper
ORM2 Graphical Notation
Omnigraffle stencil on Graffletopia

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious; what isn’t necessarily obvious is how to achieve effective change control without introducing friction to the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to auditability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is the accurate recording of changes and their application to specific environments; the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose. This may be challenging from a licensing perspective, however the approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

Change Control Object Model
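As a hedged sketch only (the object and field names below are illustrative, not a definitive design), such a model might comprise:

Environment__c – represents a Salesforce org (production or sandbox), with attributes such as org type and last refresh date.
Deployment__c – a flow of change between a source and a target Environment; attributes include deployment type (internal or release), status and approval state.
Automated Change__c – a change summary applied via Change Set or the Metadata API, grouped under a Deployment.
Manual Change__c – an individually recorded change (pre- or post-action; configuration or data) for component types unsupported by the API or Change Set approach.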

Key Principles
1. For most projects, tracking change at the component level (Custom Field, layout adjustment etc.) is expensive in time and impractical in terms of the associated overhead.

2. The model does not require change to be recorded at the component level. Instead change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach, in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment comprises Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual Changes may be configuration or data in type.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Trail within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.

References
Components supported by Change Set
Metadata API Unsupported Component Types

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application-specific prefixes are commonplace.

Risk – without application/module/function namespaces etc., all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such, maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application-specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes or classes related to common objects span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified.

2. Object-centric approach. In considering a Salesforce org as a single consolidated codebase where most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant; instead components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept, with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer [Object][Function].page —
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer [Object][Class Type].cls —
SalesInvoiceManager.cls
AccountManager.cls

business logic layer [Function][Helper|Utility] –
DiscountCalcHelper.cls
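As a minimal sketch of the construction pattern (the object, fields and methods are invented for illustration), a data layer class centralises all queries and DML for its primary object:

public with sharing class SalesInvoiceManager {
    // Data layer: all SalesInvoice__c queries and DML reside here, shared by
    // any controller or web service class that touches the object.
    public static List<SalesInvoice__c> getOpenInvoices(Id accountId) {
        return [SELECT Id, Name, Amount__c
                FROM SalesInvoice__c
                WHERE Account__c = :accountId AND Status__c = 'Open'];
    }

    public static void applyDiscount(List<SalesInvoice__c> invoices, Decimal discountRate) {
        for (SalesInvoice__c invoice : invoices) {
            invoice.Amount__c = invoice.Amount__c * (1 - discountRate);
        }
        update invoices;
    }
}

A controller such as SalesInvoiceDiscountCalcController would then orchestrate calls across such manager classes rather than performing queries or DML directly.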

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case, using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs; this is imperative in ensuring future maintainability.

Recommended Approach –
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins – instead an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.

Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis, design, build cycle could be iterative in nature or fixed, and may have different degrees of detail emerging at different points, however the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns, such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also, critically, provides a predictable, consistent end-user experience. When building custom applications using the declarative tools, we gain all of these advantages from pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken: follow established patterns and use as many of the pre-fabricated components as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end-user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front-office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles are necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, and the data relates to the User, is primarily modifiable by the User only and has no direct business relevance, then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks, then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices, then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Release Methodology – Simple Case

A very common challenge addressed by architects working with Salesforce is the definition of an appropriate release methodology. By this I mean the identification of the Salesforce orgs required to support the project delivery, whether serial or concurrent in nature, the role and purpose of each org and, critically, the means by which change is managed and synchronised across environments. On this latter point, a clear definition of the path-to-production is imperative.

In the large-scale, complex project case there is typically time and expertise available to define a bespoke methodology, with build automation, source code control system integration and so forth tailored to the specifics of the programme environment. There’s an abundance of best-practice information available online to help guide the definition of a release methodology for complex projects. For less complex projects, such as those employing the declarative build model only, there is less information available; in such cases what is typically required is a standardised, best-practice approach that can be adopted as-is.

The remainder of this post provides an outline view of an exemplar release methodology for small-to-medium scale, configuration-centric projects (i.e. no Apex code or technical complexities). This information is provided for reference purposes only.

Environment Strategy
The following diagram outlines the environments and their purpose, the defined release steps and a basic approach to change management.

Release Methodology - Simple Case

Key Principles
1. Isolate development from testing activities. This is the golden rule. Testing requires a stable environment unaffected by ongoing development. Development shouldn’t grind to a halt while system testing and acceptance testing processes are applied.
2. Utilise as few sandboxes as possible. Synchronisation of change is expensive in time and error prone; avoid this wherever possible. Preparation of standing data post sandbox refresh can also take time, as can the communication required to establish that a refresh can proceed.
3. Don’t over-specify the sandbox type. Sandboxes are an expensive asset, especially full-copy and partial-data sandboxes. Calculate the required storage capacity and map to either Developer or Developer Pro. Reserve full-copy sandboxes for purposes that genuinely require the copied data.
4. Maintain a Change Control Log in the production org to record all changes (at a reasonably high-level) against applied environments.
5. Use the production org for implementation project collaboration. It can also be a useful adoption tool to create Chatter groups such as “Salesforce: Marketing”, “Salesforce: Finance” where collaboration can occur directly with the business users whilst the project is in flight.
6. Accept that change will inevitably be applied to the production org first; record such changes and apply them to development and testing sandboxes as soon as possible.
7. Always verify the Change Control Log against the Setup Audit Trail before deployments.
8. Use Change Sets for deployment wherever possible.
9. Encourage a development process where Change Sets are updated continually, rather than retrospectively.
10. Always verify the Change Control Log against the list of Change Set supported components.
11. On larger projects a Change Set partitioning strategy may be required – along functional lines, by team or by component type etc.
12. Ensure releases to production are documented and approved. A simple Deployment Request Form (DRF) template should be defined and used to gain approval. This process is key to communication and governance, but also helps the team fully consider the pre- and post-deployment steps, risks and rollback strategy.
13. Post-release. Communicate how business processes have been mapped to Salesforce concepts, and explain the permissions model. Understanding how things work in simple terms can help avoid end-user frustration with a new system. This can also reduce the support burden, as end-users can often self-diagnose the cause of a problem.

The org strategy diagram above presents an appropriate approach for a serial-release model, i.e. one project or sprint at a time being developed, tested then released. In the concurrent-release model, where multiple parallel projects converge into a single production org, isolated develop and test sandboxes will be duplicated per project, with an integration (or pre-production) org providing a synchronisation point where the combined state is validated prior to deployment to production.

Salesforce Naming Conventions – Declarative

Updated – 2014-11-18

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn’t exhaustive or necessarily better than any other set of standards. Having a set of conventions applied consistently across the build is key; the specifics of those conventions can be subjective. A key principle applied is that of consistency with established standard conventions wherever possible. For example, standard objects and fields don’t have underscores in their API names and follow simple naming patterns; there’s no good reason to deviate from this for custom objects and fields. Naming conventions shouldn’t be considered an area for creative thinking; instead creative energy should be focused on the functional and technical design, with the conventions applied being mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Data Source -> Data_Source__c

Note – the Description attribute for Custom Objects must always be populated appropriately. A self-describing configuration is key to future maintenance.

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Date of Birth -> Date_Of_Birth__c

In the scenario where an implementation comprises distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name prefix, e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Note – the Description attribute for Custom Fields must always be populated appropriately. A self-describing configuration is key to future maintenance.

Note – the Help Text attribute for Custom Fields must always be populated appropriately. Inline user assistance can improve the end user experience greatly and reduce ongoing support.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r
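The relationship name surfaces wherever child records are traversed from the parent; for example, in a SOQL sub-query (a minimal sketch based on the relationship above):

SELECT Id, Name, (SELECT Id FROM AccountMetrics__r) FROM Account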

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c
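The distinction is also visible in how the settings are read from Apex; a minimal sketch using the setting names above (the entry name is invented):

// List setting – each record is a discrete entry, fetched by name or in full.
DataSourceListSetting__c dataSource = DataSourceListSetting__c.getValues('ExampleSource');
Map<String, DataSourceListSetting__c> allDataSources = DataSourceListSetting__c.getAll();

// Hierarchy setting – a single effective set of values for the running user.
OrgBehaviourSettings__c behaviourSettings = OrgBehaviourSettings__c.getInstance();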

Convention F – Workflow Rule
Always separate the condition from the outcome when configuring workflow. The Workflow Rule is the condition; the associated Actions are the outcome. In many cases the two become entwined, resulting in duplicated rules, or rules with actions unrelated to the stated purpose of the rule. A clear set of rules related to conditions promotes re-use.

[Object]: [Criteria Description i.e. Condition]

Convention G – Workflow Action

Field Update –
[Object]: Set [Field] to [Value]

Email Alert –
[Object]: Send [Template short description]

Task –
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS (ownership-based sharing): [Object] [From selection] to [To selection]
CBS (criteria-based sharing): [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed
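The Name becomes the identifier at the point of consumption; a minimal sketch using the example above:

// Apex reference to the custom label defined above.
String proceedLabel = Label.UI_Button_Label_Proceed;

The Visualforce equivalent is {!$Label.UI_Button_Label_Proceed}.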

Convention K – Validation Rule

Single field :
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields :
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object :
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Single Permissions :
Name must match the permission assigned.
Case Feed
Manage Dashboards
Manage Public List Views

Simple Combined Permissions :
[Verb] [Noun]
Manage Invoices

Combined Permissions :
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Convention P – Reports and Dashboard Folders

[Grouping term] Reports (ad-hoc reports related to a specific department, team, project etc.)
[Grouping term] Dashboard Reports (best practice is to isolate dashboard reports in a clearly identified location)
[Grouping term] Dashboards

Finance Reports
HR Dashboards
HR Dashboard Reports

Note – the Report Description attribute for Reports must always be populated appropriately. A self-describing configuration is key to future maintenance.

Salesforce Implementation Game Plan

Whether you’re managing a commercial software development, leading a consultancy project or building an IKEA table, a game plan is absolutely key to successful delivery. In the latter example IKEA recognises the importance of prescriptive guidance and supplies an instruction leaflet in the box. This however covers only one dimension of successful delivery, the ‘What’ (i.e. what you need to do) – the ‘Who’, the ‘How’ and the ‘When’ are left up to you. In the case of an IKEA table this is acceptable, as the resource is probably you (who will likely build the table in your own way regardless of advice received) and the timeline may not be critical. Moving away from this tenuous example, in non-trivial situations all the dimensions of successful delivery are equally significant and must combine cohesively to achieve the defined objective. This calm, controlled, empowering and productive state is precisely what planning is intended to achieve. This success delivery-state is rarely the norm, for various reasons: inexperience, over-optimism, command-and-control culture, inadequate expertise, process rigidity, poor communication etc. The net effect of such factors is a distress delivery-state where productivity is low and the sense of team is diminished in terms of empowerment, trust and accountability.

Like many people I often find myself misquoting Leo Tolstoy, who didn’t say that failing projects fail for a variety of reasons whilst succeeding projects succeed for the same reason. What he did say is this: “All happy families are alike; each unhappy family is unhappy in its own way.” (Leo Tolstoy, Anna Karenina) – which is where the interpretation comes from. I definitely read this somewhere; apologies if this was your book or blog.

So the idea is that all successful projects succeed for the same reason – that reason being, in my view, that the project was able to achieve a success delivery-state, i.e. the plan encompassed all the requisite dimensions and maintained the agility to react and adapt during flight. In this context it matters little which project process, methodology or framework you employ; what matters is that you have a well-conceived plan from the outset – a game plan, as I like to call it – and execute on that plan in a disciplined manner.

A game plan can take many forms (spreadsheet, picture, diagram, A3 sheet pinned to the wall etc.); whichever way you go, the end result should be an engaging, high-level fusion of vision and planning, tuned for effective communication to your specific audience.

The game plan should influence the detailed planning, but is a higher level concern that outlines the fundamentals of how the project will succeed, covering the essential aspects only. My preference in the past has been an annotated timeline diagram, showing clearly the basis upon which I’m confident of success – this can be highly effective in terms of establishing confidence within the delivery team and across stakeholders. I don’t believe this is possible with a Gantt chart or spreadsheet, even where progression metrics are added.

By way of illustration the following sections outline an example Game Plan related to a fictitious Salesforce implementation project.

gameplan example

In summary, the game plan concept applied to project delivery can be a powerful tool. It matters little how the game plan is presented or what it contains; simply having one in any form can make a big difference in terms of confidence, focus and communication.

Integration Architecture Patterns

As an architect I’m generally obsessive about three things: patterns, principles and practices. I could probably add to this list but I also prefer to keep things simple. This post is concerned with the first P, Patterns – in the integration architecture context. At what level should they be defined and applied? I tend to consider the logical and physical aspects of data integration flows independently. In the logical case, the focus should be on the definition of an end-to-end business process that spans multiple systems; no technology constraint or perspective should be applied to the logical view. In the physical case, the logical view should be considered an input, and a technical view defined in full consideration of the following.

Frequency of integration (batch, near-real-time, real-time)
Bi-directional versus uni-directional
Multi-lateral versus bi-lateral or uni-lateral
Volumetrics
Security
Protocols and message formats
Reference data dependencies
Technical constraints (API limits model)
Existing enterprise integration technologies (middleware, ESB)
Future maintenance skill sets (technical versus administrator)

Each physical integration flow definition should not be entirely independent; instead groupings should be identified and robust integration patterns designed and documented. The solution components for each pattern would then be developed, tested and re-applied wherever possible. The schematic below provides a fictitious example of this approach.

Integration Patterns

Having a simple set of clearly defined patterns visible to the project team is key, and should be complemented by a project principle that new approaches to physical integration are by exception – nobody has discretion to be creative in this regard. Standardisation is good practice; integration is expensive in terms of technology, implementation time, run cost and maintenance.

Salesforce Platform Limits – Designing for Scale

A Salesforce instance is a constrained environment where limits exist in respect of capacity and execution. Examples of capacity limits are data storage, number of active users and number of custom objects/custom fields/picklists; examples of execution limits are API calls per 24-hour period, SOQL queries executed within an Apex transaction and Viewstate size in Visualforce pages. In both the capacity limit and execution limit case it is imperative that the existence and implications of the constraints are factored into the solution design from the outset. Each and every constrained resource must be treated as a precious asset and consumed in an optimised manner, even on seemingly trivial implementation projects. From experience, a Salesforce implementation often grows (in terms of both use and breadth of functionality) at a rapid rate once it gains traction in an enterprise. If you’ve carelessly exhausted all the constrained resources in the first release, what happens next? Note, some soft limits can be increased by Salesforce on a discretionary or paid-for basis, however this doesn’t negate the need to make responsible design decisions, and at the very least to highlight the possible additional cost associated with a particular approach. Hard limits do exist in key areas; the Spanning Relationships Limit, or cross-object reference limit as it is also referred to, is a strong example of this.

Designing for scale simply requires an intelligent consumption of such resources and appropriate solution design decisions in a limited number of areas. The proliferation of Apex and Visualforce related execution limits doesn’t necessarily impact the scalability of the implementation; the impact is isolated to the micro level. The selected limits listed below, however, apply at the org level (Salesforce instance) and can constrain the scalability of an implementation in functional terms. This list is not exhaustive; for a complete picture refer to the Salesforce Limits Quick Reference Guide.

Limits Primarily Influenced by User License Model

Asynchronous Apex Method Executions :
This limit includes @future method calls, Batch Apex (start, execute and finish method invocations) and Scheduled Apex (execute method invocations). Future method calls made from Apex Triggers can be a risk in relation to this limit. For example, Apex Triggers which fire on record updates and make callouts via @future methods can cause scalability issues as data volumes grow (see the sketch below). In this scenario it may become necessary to bulk process the modifications via Batch Apex, assuming a batch style of integration is acceptable. What if near real-time (NRT) is necessary?

The calculated limit is the higher of 250k or (200 * user license count), where the licenses applicable to this calculation are full Salesforce and Force.com App Subscription only. For example, 2,000 qualifying licenses would yield a limit of 400k.
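As a minimal sketch of the risk pattern described above (the object, class and endpoint names are invented), each record update spawns an asynchronous execution:

public class OrderSyncService {
    // Each invocation consumes one asynchronous Apex method execution
    // against the 24-hour org-wide limit.
    @future(callout=true)
    public static void pushOrder(Id orderId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/orders'); // illustrative endpoint
        req.setMethod('POST');
        req.setBody(String.valueOf(orderId));
        HttpResponse res = new Http().send(req);
    }
}

trigger OrderTrigger on Order__c (after update) {
    // One @future call per updated record – both the per-transaction @future
    // limit and the daily asynchronous execution limit become a risk at volume.
    for (Order__c updatedOrder : Trigger.new) {
        OrderSyncService.pushOrder(updatedOrder.Id);
    }
}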

Total API Request Limit :
Enterprise Edition = 1,000 per Salesforce and Salesforce platform license, 200 for Force.com App subscription license
Unlimited/Performance Edition = 5,000 per Salesforce and Salesforce platform license, 200 per Force.com app subscription license

Note, sandboxes have a flat limit of 5M API requests, which can give a false impression of the limits applied in production.

All inbound API traffic counts towards this limit, including Outlook plug-ins, Data Loader etc. For implementations with limited Standard users this limit can be restrictive, and it is reasonably common for extension packs to be purchased to mitigate this. In all cases consumption must be optimised by batching updates and by use of the Bulk API where possible.

Limits Primarily Influenced by Salesforce Edition

Workflow Time Triggers Per Hour :
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

This limit can be an issue for implementations with high-volume transaction processing throughputs, where time-based workflow is employed to send reminder emails etc. If the hourly limit is exceeded, triggers are processed in the next hour and so on. This may cause issues if the actions are time critical.

Workflow Emails Per Day :
1,000 per standard Salesforce license, capped at 2 million.

Apex Emails Per Day:
1,000 in total. The maximum message count per send is limited per edition:
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

An unlimited number of emails can be sent per day to Users by using the SingleEmailMessage.setTargetObjectId() and MassEmailMessage.setTargetObjectIds() methods. This includes customer and partner portal users, and even high volume portal users.

This limit is critical to understand and mitigate in a scalable solution design. In short, don’t use Apex to send email unless the recipient is a User; in portal cases use the User Id and not the Contact Id. Prefer Workflow-based email sending, as the limits are considerably higher, and perhaps use Apex script to set criteria picked up by a Workflow rule.
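A minimal sketch of the User-targeted approach (the subject and body are invented placeholder values):

Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setTargetObjectId(UserInfo.getUserId()); // a User Id – User-targeted sends avoid the daily limit
mail.setSaveAsActivity(false); // required when the target is a User
mail.setSubject('Daily summary'); // illustrative content
mail.setPlainTextBody('Illustrative body text.');
Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });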

Additional Limits to Consider

Batch Apex (5 pending or active)
Scheduled Jobs (100 scheduled)
Apex Script Characters (3M)
Dynamic Apex Describes
Cross Object References

From a best practice perspective a Platform Limits Reference document should be maintained for all Salesforce implementations, listing the applicable limits and related consumption. This approach surfaces the existence of the limits and should inform design principles such as using Workflow to send customer emails in preference to Apex script. Without an ordered approach where limit consumption is proactively tracked, it is highly likely that expensive refactoring exercises or multi-org strategies will become necessary over time – costs that could have been minimised, deferred or entirely avoided.

Salesforce Certified Architect – Value Proposition

A real challenge for all Salesforce architects is keeping up-to-date with the constantly evolving native platform capabilities and customisation/extension points. It’s a truism that nothing stands still; this is especially true in the Salesforce world, with 3 releases a year, frequent off-cycle incremental feature enhancements, acquisitions etc. This constant evolution is of course great for Salesforce customers; the potential return on investment, in respect of the business value derivable from the license spend, grows in parallel. The business challenges in this respect are those of understanding the potential, making appropriate tactical design and implementation decisions and rationalising a sensible and adaptable strategic roadmap.

In order to address these challenges, and mitigate the consequential risk of not doing so, the business must have access to experienced Salesforce architects with a deep understanding of the current Salesforce release and future product roadmap. This is a difficult proposition, as competent Salesforce architects are currently a small community (albeit one expanding year-on-year) and those that exist are typically occupied in solution delivery, not strategic advisory roles.

My advice to any business using Salesforce for anything beyond an out-of-the-box CRM implementation is to engage, or employ, a Salesforce Certified Technical Architect (CTA). This stringent accreditation provides the highest degree of assurance that the individual really understands the Salesforce platform and can provide the right guidance around platform potential (vision), tactical design decisions, strategic roadmap, platform governance and development process. Given that the number of CTAs is a small subset of the Salesforce architect community this may seem an impossible task; however, there is a growing number of CTAs to be found in the Salesforce partner community and via Salesforce Strategic Services (i.e. salesforce.com). An alternative approach is to invest in an internal resource and provide the opportunity to advance through the Salesforce certifications step-by-step, up to the CTA level. This approach will take time, measured in years not months, and is not without risk; internal resources may lack the breadth of exposure to the Salesforce platform gained through (perhaps) more diverse consultancy projects. Additionally, as an internal CTA is a highly valuable asset to any business, it may be difficult to retain their services.

The CTA should be considered a strategic advisor, not an implementation resource, and given the following types of accountability:

1. Upfront. Data Architecture (data model, quality management processes etc.).
2. Upfront. Integration Architecture (logical and physical architecture, integration patterns, protocols, tools and technologies).
3. Upfront. Standards. Technical standards, conventions and construction patterns.
4. Upfront. Development process. Methodology guidance. Platform governance. Release management and source code control. Environment strategy. Change management. Risk management.
5. Upfront and ongoing. Solution Design. Exploitation and augmentation of standard features, solution options for gaps.
6. Ongoing. Quality Assurance. Build reviews. Tactical advice on solution build.
7. Ongoing. Quality Assurance. Testing strategy guidance.
8. Ongoing. Strategic advisory. Communication of platform potential and new capabilities to project stakeholders.
9. Ongoing. Guidance on fit between new business requirements and Salesforce. High level estimation and project scoping. Vendor cost assessment.
10. Ongoing. Strategic advisory. Strategic roadmap definition and communication.

Taking the 10 preceding points as indicative, the overarching principles are that the CTA sets the scene for a successful implementation, underwrites the delivery through periodic review and quality assurance processes and finally defines the strategic roadmap. Taking each point in turn: setting the scene is perhaps the most critical aspect, as a flawed data model, inaccurate mapping of business requirements to native features, or inexpert selection of solution options to plug the gaps can be incredibly expensive to rectify at a later date, or impossible to remediate. Underwriting the delivery is perhaps the least well-defined area, but at a minimum the CTA should be engaged to review progress and take the design decisions required while the implementation is in flight. Additional to this should be a retrospective consideration of the development process, project standards etc., with a view to adaptation in light of findings established via practical application. Finally, the strategic roadmap aspect of the CTA value proposition is imperative in ensuring that the Salesforce implementation is sustainable, integral to the organisation’s enterprise architecture and evolving to drive the maximum business value from the investment over time, in response to defined or emerging business goals.

In summary, a high degree of expertise applied at the start of a project and on a periodic basis can greatly increase the chances of a successful and cost-effective Salesforce implementation. Most projects won’t have access to a full-time CTA, for cost or availability reasons; however, most projects should consider a part-time CTA in an advisory role – like a good tax accountant, they should pay for themselves.