
Salesforce Naming Conventions – Declarative

Updated – 2014-11-18

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn't exhaustive or necessarily better than any other set of standards. Having a set of conventions applied consistently across the build is key; the specifics of those conventions can be subjective. A key principle applied is that of consistency with established standard conventions wherever possible. For example, standard objects and fields don't have underscores in their API names and follow simple naming patterns; there's no good reason to deviate from this for custom objects and fields. Naming conventions shouldn't be considered an area for creative thinking; instead, creative energy should be focused on the functional and technical design. The conventions applied should be mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable. It can be difficult to avoid this approach in a team environment.
e.g. Data Source -> Data_Source__c

Note – the Description attribute for Custom Objects must always be populated appropriately. A self-describing configuration is key to future maintenance.

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable. It can be difficult to avoid this approach in a team environment.
e.g. Date of Birth -> Date_Of_Birth__c

In the scenario where an implementation is comprised of distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name Prefix. e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Note – the Description attribute for Custom Fields must always be populated appropriately. A self-describing configuration is key to future maintenance.

Note – the Help Text attribute for Custom Fields must always be populated appropriately. Inline user assistance can improve the end user experience greatly and reduce ongoing support.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r
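
The relationship name is what appears in relationship queries, so consistency here pays off. A minimal sketch in Apex, assuming the AccountMetrics__c child object and relationship name above exist:

[sourcecode language="java"]
// parent-to-child subquery using the AccountMetrics__r child relationship name
List<Account> accounts = [select Id, Name,
                            (select Id from AccountMetrics__r)
                          from Account limit 10];
[/sourcecode]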

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c
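
For illustration, the conventions above read naturally at the point of consumption in Apex. A minimal sketch, assuming the example custom settings exist:

[sourcecode language="java"]
// list setting - one record per entry, keyed by Name
Map<String, DataSourceListSetting__c> dataSources = DataSourceListSetting__c.getAll();

// hierarchy setting - the record resolved for the current user/profile/org level
OrgBehaviourSettings__c settings = OrgBehaviourSettings__c.getInstance();
[/sourcecode]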

Convention F – Workflow Rule
Always separate the condition from the outcome when configuring workflow. The Workflow Rule is the condition; the associated Actions are the outcome. In many cases the two become entwined, resulting in duplicated rules, or rules with actions unrelated to the stated purpose of the rule. A clear set of rules related to conditions promotes re-use.

[Object]: [Criteria Description i.e. Condition]

Convention G – Workflow Action

Field Update –
[Object]: Set [Field] to [Value]

Email Alert –
[Object]: Send [Template short description]

Task –
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS: [Object] [From selection] to [To selection]
CBS: [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed
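
The category prefix keeps related labels grouped and recognisable at the point of reference. A brief sketch, assuming the example label above exists:

[sourcecode language="java"]
// Apex reference - the category is obvious at the call site
String buttonLabel = System.Label.UI_Button_Label_Proceed;
// in Visualforce the same label is referenced as {!$Label.UI_Button_Label_Proceed}
[/sourcecode]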

Convention K – Validation Rule

Single field :
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields :
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object :
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Single Permissions :
Name must match the permission assigned.
Case Feed
Manage Dashboards
Manage Public List Views

Simple Combined Permissions :
[Verb] [Noun]
Manage Invoices

Combined Permissions :
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Convention P – Reports and Dashboard Folders

[Grouping term] Reports (ad hoc reports related to a specific department, team, project etc.)
[Grouping term] Dashboard Reports (best practice is to isolate dashboard reports in a clear location)
[Grouping term] Dashboards

Finance Reports
HR Dashboards
HR Dashboard Reports

Note – the [Report Description] attribute for Reports must always be populated appropriately. A self-describing configuration is key to future maintenance.

Salesforce Implementation Game Plan

Whether you're managing a commercial software development, leading a consultancy project or building an IKEA table, a game plan is absolutely key to successful delivery. In the latter example IKEA recognises the importance of prescriptive guidance and supplies an instruction leaflet in the box. This however covers only one dimension of successful delivery, the 'What' (i.e. what you need to do) – the 'Who', the 'How' and the 'When' are left up to you. In the case of an IKEA table this is acceptable as the resource is probably you (who will likely build the table in your own way regardless of advice received) and the timeline may not be critical. Moving away from this tenuous example, in non-trivial situations all the dimensions of successful delivery are equally significant and must combine cohesively to achieve the defined objective. This calm, controlled, empowering and productive state is precisely what planning is intended to achieve. This success delivery-state is rarely the norm, for various reasons: inexperience, over-optimism, command and control culture, inadequate expertise, process rigidity, poor communication etc. The net effect of such factors is a distress delivery-state where productivity is low and the sense of team is diminished in terms of empowerment, trust and accountability.

Like many people I often find myself misquoting Leo Tolstoy, who didn't say that failing projects fail for a variety of reasons but succeeding projects succeed for the same reason. He did say this however – "All happy families are alike; each unhappy family is unhappy in its own way." (Leo Tolstoy, Anna Karenina) – which is where the interpretation comes from. I definitely read this somewhere; apologies if this was your book or blog.

So the idea is that all successful projects succeed for the same reason – that reason being, in my view, that the project was able to achieve a success delivery-state, i.e. the plan encompassed all the requisite dimensions and maintained the agility to react and adapt during flight. In this context it matters little which project process, methodology, framework etc. you employ; what matters is that you have a well conceived plan from the outset, or game plan as I like to call it, and execute on that plan in a disciplined manner.

A game plan can take many forms (spreadsheet, picture, diagram, A3 sheet pinned to the wall etc.); whichever way you go, the end result should be an engaging, high-level fusion of vision and planning and be tuned for effective communication to your specific audience.

The game plan should influence the detailed planning, but is a higher level concern that outlines the fundamentals of how the project will succeed, covering the essential aspects only. My preference in the past has been an annotated timeline diagram, showing clearly the basis upon which I'm confident of success – this can be highly effective in terms of establishing confidence within the delivery team and across stakeholders. I don't believe this is possible with a Gantt chart or spreadsheet, even where progression metrics are added.

By way of illustration the following sections outline an example Game Plan related to a fictitious Salesforce implementation project.

gameplan example

In summary, the game plan concept applied to project delivery can be a powerful tool. It matters little how the game plan is presented or what it contains, simply having one in any form can make a big difference in terms of confidence, focus and communication.

Integration Architecture Patterns

As an architect I’m generally obsessive about three things; patterns, principles and practices. I could probably add to this list but I also prefer to keep things simple. This post is concerned with the first P, Patterns – in the integration architecture context. At what level should they be defined and applied? I tend to consider the logical and physical aspects of data integration flows independently. In the logical case, the focus should be on the definition of an end-to-end business process that spans multiple systems. There should be no technology constraint or perspective applied to the logical view. In the physical case, the logical view should be considered an input, and a technical view defined in full consideration of the following.

Frequency of integration (batch, near-real-time, real-time)
Bi-directional, versus uni-directional
Multi-lateral, versus bi-lateral or uni-lateral
Volumetrics
Security
Protocols and message formats
Reference data dependencies
Technical constraints (API limits model)
Existing enterprise integration technologies (middleware, ESB)
Future maintenance skill sets (technical versus administrator)

Each physical integration flow definition should not be entirely independent; instead, groupings should be identified and robust integration patterns designed and documented. The solution components for each pattern would then be developed, tested and re-applied wherever possible. The schematic below provides a fictitious example of this approach.

Integration Patterns

Having a simple set of clearly defined patterns visible to the project team is key, and should be complemented by a project principle that new approaches to physical integration are by exception – nobody has discretion to be creative in this regard. Standardisation is good practice; integration is expensive in terms of technology, implementation time, run cost and maintenance.

Salesforce Platform Limits – Designing for Scale

A Salesforce instance is a constrained environment where limits exist in respect to capacity and execution. Examples of capacity limits are data storage, number of active users, number of custom objects/custom fields/picklists etc.; examples of execution limits are API calls per 24-hour period, SOQL queries executed within an Apex transaction, Viewstate size in Visualforce pages etc. In both the capacity limit and execution limit case it is imperative that the existence and implications of the constraints are factored into the solution design from the outset. Each and every constrained resource must be treated as a precious asset and consumed in an optimised manner, even on seemingly trivial implementation projects. From experience it is often the case that a Salesforce implementation grows (in terms of both use and breadth of functionality) at a rapid rate once it gains traction in an enterprise. If you've carelessly exhausted all the constrained resources in the first release, what happens next? Note, some soft limits can be increased by Salesforce on a discretionary or paid-for basis, however this doesn't negate the need to make responsible design decisions and at the very least highlight the possible additional cost associated with a particular approach. Hard limits do exist in key areas; the Spanning Relationships Limit, or cross-object reference limit as it is also referred to, is a strong example of this.

Designing for scale simply requires an intelligent consumption of such resources and appropriate solution design decisions in a limited number of areas. The proliferation of Apex and Visualforce related execution limits doesn't necessarily impact the scalability of the implementation; the impact is isolated to the micro level. The selected limits listed below however apply at the org level (Salesforce instance) and can constrain the scalability of an implementation (in functional terms). This list is not exhaustive; for a complete picture refer to the Salesforce Limits Quick Reference Guide.

Limits Primarily Influenced by User License Model

Asynchronous Apex Method Executions :
This limit includes @future method invocations, Batch Apex (start, execute and finish method invocations) and Scheduled Apex (execute method invocations). Future method calls made from Apex Triggers can be a risk in relation to this limit. For example, Apex Triggers which fire on record updates and make callouts via @future methods can cause scalability issues as data volumes grow. In this example it may become necessary to bulk process the modifications via Batch Apex, assuming a batch style of integration is acceptable. What if near real-time (NRT) is necessary?

The calculated limit is the higher of 250K or (200 * user license count), where the licenses applicable to this calculation are full Salesforce and Force.com App Subscription only.
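By way of illustration, an org with 2,000 applicable licenses would have a daily limit of 400,000 asynchronous executions (200 * 2,000), whereas an org with 500 such licenses would fall back to the 250K floor.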

Total API Request Limit :
Enterprise Edition = 1,000 per Salesforce and Salesforce platform license, 200 for Force.com App subscription license
Unlimited/Performance Edition = 5,000 per Salesforce and Salesforce platform license, 200 per Force.com app subscription license

Note, sandboxes have a flat limit of 5M which can give a false impression of the limits applied in production.

All inbound API traffic counts towards this limit, including Outlook plug-ins, Data Loader etc. For implementations with limited Standard users this limit can be restrictive, and it is reasonably common for extension packs to be purchased to mitigate this. In all cases consumption must be optimised by batching updates and use of the Bulk API where possible.

Limits Primarily Influenced by Salesforce Edition

Workflow Time Triggers Per Hour :
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

This limit can be an issue for implementations with high volume transaction processing throughputs, where time-based workflow is employed to send reminder emails etc. If the hourly limit is exceeded, triggers are processed in the next hour and so on. This may cause issues if the actions are time-critical.

Workflow Emails Per Day :
1,000 per standard Salesforce license, capped at 2 million.

Apex Emails Per Day:
1,000 in total. The maximum message count per send is limited per edition.
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

An unlimited number of emails can be sent per day to Users by using the SingleEmailMessage.setTargetObjectId() and MassEmailMessage.setTargetObjectIds() methods. This includes customer and partner portal users and even high volume portal users.

This limit is critical to understand and to mitigate in a scalable solution design. In short, don't use Apex to send email unless the recipient is a User. In portal cases use the User Id and not the Contact Id. Prefer Workflow based email sending, as the limits are considerably higher, and perhaps use Apex script to set criteria picked up by a Workflow rule.
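
To illustrate the point, a minimal sketch of sending to a User from Apex without consuming the daily Apex email limit (the subject and body values are placeholders):

[sourcecode language="java"]
public static void notifyUser(Id userId){
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    mail.setTargetObjectId(userId);  // a User Id (including portal users) is exempt from the daily Apex email limit
    mail.setSaveAsActivity(false);   // must be false when the target is a User
    mail.setSubject('Example subject');
    mail.setPlainTextBody('Example body');
    Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
}
[/sourcecode]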

Additional Limits to Consider

Batch Apex (5 pending or active)
Scheduled Jobs (100 scheduled)
Apex Script Characters (3M)
Dynamic Apex Describes
Cross Object References

From a best practice perspective a Platform Limits Reference document should be maintained for all Salesforce implementations that lists the applicable limits and related consumption. This approach surfaces the existence of the limits and should provide design principles such as using Workflow to send customer emails in preference to Apex script. Without an ordered approach where limit consumption is proactively tracked, it is highly likely that expensive refactoring exercises or multi-org strategies become necessary over time that could have been minimised, deferred or entirely avoided.

Salesforce Certified Architect – Value Proposition

A real challenge for all Salesforce Architects is keeping up-to-date on the constantly evolving native platform capabilities and customisation/extension points. It's a truism that nothing stands still; this is especially true in the Salesforce world, with 3 releases a year, frequent off-cycle incremental feature enhancements, acquisitions etc. This constant evolution is of course great for Salesforce customers; the potential return on investment in respect to business value derivable from the license spend grows in parallel. The business challenges in this respect are those of understanding the potential, making appropriate tactical design and implementation decisions and rationalising a sensible and adaptable strategic roadmap.

In order to address these challenges, and mitigate the consequential risk of not doing so, the business must have access to experienced Salesforce architects with a deep understanding of the current Salesforce release and future product roadmap. This is a difficult proposition where competent Salesforce architects are currently a small community (which is expanding year-on-year) and those that exist are typically occupied in solution delivery not strategic advisory roles.

My advice to any business using Salesforce for anything beyond an out-of-the-box CRM implementation is to engage, or employ, a Salesforce Certified Technical Architect (CTA). This stringent accreditation provides the highest degree of assurance that the individual really understands the Salesforce platform and can provide the right guidance around platform potential (vision), tactical design decisions, strategic roadmap, platform governance and development process. Given that the number of CTAs is a small subset of the Salesforce architect community this would seem an impossible task, however there is a growing number of CTAs to be found in the Salesforce partner community and via Salesforce Strategic Services (i.e. salesforce.com). An alternative approach would be to invest in an internal resource and provide the opportunity to progress through the Salesforce certifications step-by-step up to the CTA level. This approach will take time, measured in years not months, and is not without risk; internal resources may lack the breadth of exposure to the Salesforce platform gained through (perhaps) more diverse consultancy projects. Additionally, as an internal CTA is a highly valuable asset to any business, it may be difficult to retain their services.

The CTA should be considered a strategic advisor not an implementation resource, and given the following type of accountabilities:

1. Upfront. Data Architecture (data model, quality management processes etc.).
2. Upfront. Integration Architecture (logical and physical architecture, integration patterns, protocols, tools and technologies).
3. Upfront. Standards. Technical standards, conventions and construction patterns.
4. Upfront. Development process. Methodology guidance. Platform governance. Release management and source code control. Environment strategy. Change management. Risk management.
5. Upfront and ongoing. Solution Design. Exploitation and augmentation of standard features, solution options for gaps.
6. Ongoing. Quality Assurance. Build reviews. Tactical advice on solution build.
7. Ongoing. Quality Assurance. Testing strategy guidance.
8. Ongoing. Strategic advisory. Communication of platform potential and new capabilities to project stakeholders.
9. Ongoing. Guidance on fit between new business requirements and Salesforce. High level estimation and project scoping. Vendor cost assessment.
10. Ongoing. Strategic advisory. Strategic roadmap definition and communication.

Taking the 10 preceding points as indicative, the overarching principles are that the CTA sets the scene for a successful implementation, underwrites the delivery through periodic review and quality assurance processes and finally defines the strategic roadmap. Taking each point in turn; setting the scene is perhaps the most critical aspect as a flawed data model, inaccurate mapping of business requirements to native features, inexpert selection of solution options to plug the gaps etc. can be incredibly expensive to rectify at a later date or impossible to remediate. Underwriting the delivery is perhaps the least well defined area, but at a minimum the CTA should be engaged to review progress and take design decisions required while the implementation is in flight. Additional to this should be a retrospective consideration of the development process, project standards etc. with a view to adaptation in light of findings established via practical application. Finally, the strategic roadmap aspect of the CTA value proposition is imperative in ensuring that the Salesforce implementation is sustainable, integral to the organisation's enterprise architecture and evolving to drive the maximum business value from the investment over time in response to defined or emerging business goals.

In summary, a high degree of expertise applied at the start of a project and on a periodic basis can increase greatly the chances of a successful and cost effective Salesforce implementation. Most projects won’t have access to a full time CTA, for cost or availability reasons, however, most projects should consider a part-time CTA in an advisory role, like a good tax accountant they should pay for themselves.

Any-org Design Considerations

The concept of any-org development is an interesting one. The strict definition, to my mind, being the development of a set of components (perhaps packaged) that are designed and coded specifically to install and function in any Salesforce org. This is typically an ISV concern, where testing and maintaining a single code base can be highly desirable over managing a base package plus multiple extension packages, or in the worst case multiple independent packages. Either way an ISV needs to maximise the addressable market for a product whilst minimising the ongoing effort to do so. The same drivers do not apply in the single-org case, where a consultancy and/or in-house team are delivering technical components to be installed into a known Salesforce org (or multi-org estate). In the single-org case it is common practice to see technical components designed and coded for the current state of the target org(s), with no consideration of how the org(s) may evolve over time. This can often result in situations where costly technical work is required simply to activate an optional product feature, or to provide user access in another locale. In such cases the situation can often be compounded by the fact that the original development team are no longer available.

In short, in my view some degree of future-proofing should be considered in designing for the single-org model, using the techniques applied by ISVs in the any-org model.

    Any-org Design Considerations

  1. Optional Features
    Examples: Person Accounts, Quotes

    There are a multitude of optional product features which can be enabled directly in the Salesforce web application UI or via Salesforce support. In the majority of cases such feature activations irreversibly add new objects and fields to the Salesforce instance. From the perspective of keeping simple orgs uncluttered by objects related to unused features this makes perfect sense. From the perspective of designing for the any-org model, this approach poses a few challenges. The main challenge is that Apex code won't compile where a static reference exists to an object (or field) that doesn't exist in the org. There is no simple answer to this; instead a selective approach should be taken, accommodating those optional features that may already be active (or could be activated in the future) and that have some impact on your code. The approach to achieving this for any-org Apex code basically involves replacing static references with Dynamic SOQL and Dynamic Apex (see coding techniques below); a sketch of one possible feature check follows.
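
    The helper below is a minimal sketch of such a check, based on Dynamic Apex describes. It is illustrative only – the AppHelper.isPersonAccountsEnabled() method referenced later in this post is assumed to take a broadly similar shape, and the lower case describe map key reflects an assumption about how the describe field map is keyed.

    [sourcecode language="java"]
    public class AppHelper {
        private static Boolean personAccountsEnabled;

        // Person Accounts add an IsPersonAccount field to Account; its presence in the
        // field describe map (keyed by lower case field name) is used as the feature check.
        public static Boolean isPersonAccountsEnabled(){
            if (personAccountsEnabled==null){
                personAccountsEnabled = Schema.SObjectType.Account.fields.getMap().containsKey('ispersonaccount');
            }
            return personAccountsEnabled;
        }
    }
    [/sourcecode]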

  2. Multi-currency
    The default currency mode of a Salesforce org is single currency, and the majority stay this way. It is however common to have multi-currency and perhaps advanced currency management (ACM) activated in orgs where business operations are international. Activation of multi-currency often occurs once the Salesforce org has become established, perhaps in a single region. This can be problematic where technical customisations have been added that aren't currency aware.

    In the any-org case, all Apex code should be multi-currency aware and use Dynamic SOQL to add the CurrencyIsoCode field to all object queries involving currency fields. Additionally, currency aware logic should include checks to ensure that related transactions are the same currency, and that custom analytics are presenting data in the corporate currency (default and therefore expected behaviour for the native reporting functions). Note, the behaviour of aggregate functions involving currency fields must also be handled.

  3. Editions Support
    A key design decision for ISVs is the Salesforce editions to be supported by their managed package. This one has less relevance to the single-org model, unless the multi-org estate includes different editions.

    It is possible to group editions into two distinct groups:
    1. Group (or Team) Edition and Professional Edition
    2. Enterprise Edition and Unlimited Edition

    In the case of group 1 assume that standard objects such as Product, Pricebook2, PricebookEntry, RecordType do not exist and ensure no static references exist in the code. The OrganizationType field on the Organization object tells us which edition the code is executing within.

    [sourcecode language="java"]
    // assumes a class level cache variable: private static Boolean isTeamOrProEdition;
    public static Boolean isTeamOrProEdition(){
        if (isTeamOrProEdition==null){
            List<Organization> orgs = [select OrganizationType from Organization where Id=:UserInfo.getOrganizationId() limit 1];
            if (orgs.size()>0)
                isTeamOrProEdition=(orgs[0].OrganizationType=='Team Edition' || orgs[0].OrganizationType=='Professional Edition');
        }
        return isTeamOrProEdition;
    }
    [/sourcecode]

  4. Internationalisation
    Whether an international user base is anticipated or not, it is general software development best practice to externalise string literals into resource files. In the Salesforce context this means Custom Labels. A best practice here is to apply strict categorisation and a meaningful naming convention. Also ensure all literals are externalised, not just labels in the UI – for example trigger error messages.

    Another consideration for i18n is the use of currency and date formatting helpers. Where UI components do not apply default formatting for an SObject field you need to handle this in code. An i18nHelper class which translates ISO locale and currency codes to date format strings and currency format strings plus symbols respectively can be very helpful.

    Useful abbreviations:
    i18n – internationalisation; development practice enabling support for localisation.
    l10n – localisation; the act of localising an internationalised software product for a specific locale.
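
    A minimal sketch of the i18nHelper concept described above; the mappings shown are illustrative assumptions only and a real implementation would need to cover the full set of locales and currencies in use.

    [sourcecode language="java"]
    public class I18nHelper {
        // illustrative mappings only
        private static Map<String, String> localeToDateFormat = new Map<String, String>{
            'en_US' => 'MM/dd/yyyy',
            'en_GB' => 'dd/MM/yyyy',
            'de_DE' => 'dd.MM.yyyy'
        };
        private static Map<String, String> currencyToSymbol = new Map<String, String>{
            'USD' => '$',
            'GBP' => '£',
            'EUR' => '€'
        };

        // e.g. Datetime.now().format(I18nHelper.getDateFormatString(UserInfo.getLocale()));
        public static String getDateFormatString(String localeCode){
            return localeToDateFormat.containsKey(localeCode) ? localeToDateFormat.get(localeCode) : 'yyyy-MM-dd';
        }

        public static String getCurrencySymbol(String isoCode){
            return currencyToSymbol.containsKey(isoCode) ? currencyToSymbol.get(isoCode) : isoCode;
        }
    }
    [/sourcecode]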

  5. Profile Permissions
    Visualforce pages are preprocessed for components directly bound to SObject fields where the user profile does not have CRUD or FLS permissions. In such cases the fields are not displayed or are made read-only, depending on visibility state. This comes as a surprise for many developers who assume that User Profile permissions are entirely ignored on Visualforce pages.

    reference: Enforcing_CRUD_and_FLS

    In the any-org model, where direct SObject field binding is being used in a Visualforce page, this may require a manual check during initialisation to protect the functional integrity of the page. For example, a custom page with no fields displayed and no explanation is not a great user experience, instead the page should simply inform the user they don’t have sufficient permissions, they can then take this up with their Administrators.

    [sourcecode language="java"]
    private Boolean hasRequiredFLS(){
        // rule 1: all custom fields must be accessible.
        // rule 2: check isUpdateable on all fields where inline editing offered.

        Schema.DescribeFieldResult d;

        Map<String, Schema.SObjectField> siFieldNameToToken=Schema.SObjectType.SalesInvoice__c.fields.getMap();

        for (Schema.SObjectField f : siFieldNameToToken.values()){
            d = f.getDescribe();

            if (!d.isCustom()) continue;
            if (!d.isAccessible()) return false;
        }

        d = siFieldNameToToken.get('InvoiceDate__c').getDescribe();
        if (!d.isUpdateable())
            this.isInlineEditable=false;
        else {
            d = siFieldNameToToken.get('DueDate__c').getDescribe();
            if (!d.isUpdateable())
                this.isInlineEditable=false;
            else this.isInlineEditable=true;
        }
        return true;
    }
    [/sourcecode]

    Coding Techniques

  1. Dynamic SOQL
    Do not statically reference objects or fields that may not exist in the org. Instead compose Dynamic SOQL queries and execute via Database.query(). With this approach, you can build the required query using flags which indicate the presence of optional feature fields such as RecordTypeId, CurrencyIsoCode etc. The Apex Language Reference provides good coverage of Dynamic SOQL. Be very careful to ensure that your composed string does not include user supplied text input – this would open up a vulnerability to SOQL injection security vectors.

    [sourcecode language="java"]
    // assumes a class level cache variable: private static Id standardPricebookId;
    public static Id getStandardPricebookId(){
        if (standardPricebookId==null){
            String q='select Id, IsActive from Pricebook2 where IsStandard=true';
            SObject p = Database.query(q);

            if (!(Boolean)p.get('IsActive')){
                p.put('IsActive',true);
                update p;
            }
            standardPricebookId=(Id)p.get('Id');
        }
        return standardPricebookId;
    }

    public SalesInvoice__c retrieveSalesInvoice(String siId){
        try{
            //& Using dynamic Apex to retrieve fields from the fieldset to create a soql query that returns all fields required by the view.
            String q='select Id,Name,OwnerId';
            q+=',TotalGross__c';

            for(Schema.FieldSetMember f : SObjectType.SalesInvoice__c.FieldSets.invoices__Additional_Information.getFields()){
                if (!q.contains(f.getFieldPath())) q+=','+f.getFieldPath();
            }

            if (UserInfo.isMultiCurrencyOrganization()) q+=',CurrencyIsoCode';
            if (AppHelper.isPersonAccountsEnabled()) q+=',PersonEmail,PersonContactId';

            q+=',(select Id,Description__c,Quantity__c from SalesInvoiceLineItems__r order by CreatedDate asc)';
            q+=' from SalesInvoice__c';
            // escape the bind value to guard against SOQL injection
            q+=' where Id=\''+String.escapeSingleQuotes(siId)+'\'';

            return Database.query(q);
        } catch (Exception e){
            throw e;
        }
    }
    [/sourcecode]

  2. Dynamic Apex
    Do not statically reference objects or fields that may not exist in the org. Instead use Dynamic Apex techniques such as global describes and field describes. Where a new SObject is required, use the newSObject() method as shown below; this is particularly useful for unit test data creation. The Apex Language Reference provides good coverage of Dynamic Apex; every developer should be familiar with this topic.

    [sourcecode language="java"]
    public static List<SObject> createPBE(Id pricebookId, List<SObject> products){
        SObject pbe;
        List<SObject> entries = new List<SObject>();

        Schema.SObjectType targetType = Schema.getGlobalDescribe().get('PricebookEntry');
        if (targetType==null) return null;

        for (SObject p : products){
            pbe = targetType.newSObject();

            pbe.put('Pricebook2Id',pricebookId);
            pbe.put('Product2Id',p.Id);
            pbe.put('UseStandardPrice',false);
            pbe.put('UnitPrice',100);
            pbe.put('IsActive',true);
            entries.add(pbe);
        }
        if (entries.size()>0) insert entries;
        return entries;
    }
    [/sourcecode]

  3. UserInfo Methods
    The UserInfo standard class provides some highly useful methods for any-org coding such as isMultiCurrencyOrganization(), getDefaultCurrency(), getLocale() and getTimezone(). The isMultiCurrencyOrganization() method will be frequently used to branch code specific to multi-currency orgs.

    [sourcecode language="java"]
    // assumes a class level cache variable: private static String corporateCurrencyIsoCode;
    public static String getCorporateCurrency(){
        if (corporateCurrencyIsoCode==null){
            corporateCurrencyIsoCode=UserInfo.getDefaultCurrency();

            if (UserInfo.isMultiCurrencyOrganization()){
                String q='select IsoCode, ConversionRate from CurrencyType where IsActive=true and IsCorporate=true';
                List<SObject> currencies = Database.query(q);
                if (currencies.size()>0)
                    corporateCurrencyIsoCode=(String)currencies[0].get('IsoCode');
            }
        }
        return corporateCurrencyIsoCode;
    }
    [/sourcecode]

    Challenges

  1. Unit Test Data
  In the any-org model the creation of unit test data can be a challenge due to the potential existence of mandatory custom fields and/or validation rules. To mitigate the former, Dynamic Apex can be used to identify mandatory fields and their data types such that test data can be added (via a factory pattern of some sort). In the latter case there is no way to reliably detect a validation rule condition, and as such for ISVs it is a blessing that unit tests do not actually have to pass in a subscriber org (wrong as this may be in principle). In the single-org case we can improve on this (and we have to), by adding a global Validation Rule switch-off flag in an Org Behaviour Custom Setting (see previous post) – this approach is helpful in many areas, but for unit test data creation it can isolate test code from Validation Rules added post-deployment. There's a tradeoff here between protecting unit tests and the risk of using test data that may not adhere to the current set of Validation Rules.
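
  A minimal sketch of the Dynamic Apex factory approach follows; the default values per data type are illustrative assumptions, and required lookup or master-detail relationships would still need real record Ids.

  [sourcecode language="java"]
  public static SObject buildTestRecord(String objectName){
      Schema.SObjectType t = Schema.getGlobalDescribe().get(objectName);
      SObject record = t.newSObject();

      for (Schema.SObjectField f : t.getDescribe().fields.getMap().values()){
          Schema.DescribeFieldResult d = f.getDescribe();

          // populate only fields that are required on insert and not defaulted by the platform
          if (!d.isCreateable() || d.isNillable() || d.isDefaultedOnCreate()) continue;

          if (d.getType()==Schema.DisplayType.String) record.put(f, 'test');
          else if (d.getType()==Schema.DisplayType.Double) record.put(f, 1.0);
          else if (d.getType()==Schema.DisplayType.Date) record.put(f, Date.today());
          else if (d.getType()==Schema.DisplayType.Boolean) record.put(f, true);
          // .. further type handling (picklists, references etc.) as required
      }
      return record;
  }
  [/sourcecode]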

  2. Unit Test Code Coverage
  The addition of multiple conditional code paths, i.e. branching, for the any-org case makes it challenging to achieve a high code coverage percentage in orgs which do not have the accommodated features activated. For example, unit tests executing in a single currency org will not run code specific to multi-currency, and therefore the code coverage drops accordingly. To mitigate this, consider adding OR conditions to IF branches which include unit test flags and perhaps Test.isRunningTest() to cover as much code as possible before leaving the branch. During coding always strive to absolutely minimise the feature-specific code – this approach will help greatly in respect to unit test coverage.
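
  A brief sketch of the branching pattern described above (the method and parameter names are illustrative only):

  [sourcecode language="java"]
  public static String appendOptionalFields(String q){
      if (UserInfo.isMultiCurrencyOrganization() || Test.isRunningTest()){
          // the branch is entered by unit tests running in a single currency org, keeping coverage up..
          if (UserInfo.isMultiCurrencyOrganization()){
              // ..whilst the strictly multi-currency operation remains guarded
              q += ',CurrencyIsoCode';
          }
      }
      return q;
  }
  [/sourcecode]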

  3. QA
  In the any-org model, it is imperative to test your code in an org with the accommodated features activated. This will require multiple QA orgs and can increase the overall testing overhead considerably. Also, factor in the lead time required to have features activated by Salesforce support, such as multi-currency and Person Accounts.

  4. Security
  Dynamic SOQL queries open up the possibility of SOQL-injection attacks where user-supplied text values are concatenated into an executed SOQL query string. Always sanitise and escape data values where such code behaviour is necessary.

  5. Governor Limits
  The any-org model is highly contingent upon the use of limited resources such as Apex Describes. As a best practice employ a helper class pattern with cached values.
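
  As an example of the helper class pattern mentioned above, a minimal sketch of a describe cache (class and method names are illustrative):

  [sourcecode language="java"]
  public class DescribeHelper {
      // cache the global describe and per-object field describes to minimise
      // consumption of the describe governor limits
      private static Map<String, Schema.SObjectType> globalDescribe;
      private static Map<String, Map<String, Schema.SObjectField>> fieldMapCache = new Map<String, Map<String, Schema.SObjectField>>();

      public static Map<String, Schema.SObjectType> getGlobalDescribe(){
          if (globalDescribe==null) globalDescribe = Schema.getGlobalDescribe();
          return globalDescribe;
      }

      public static Map<String, Schema.SObjectField> getFieldMap(String objectName){
          if (!fieldMapCache.containsKey(objectName)){
              fieldMapCache.put(objectName, getGlobalDescribe().get(objectName).getDescribe().fields.getMap());
          }
          return fieldMapCache.get(objectName);
      }
  }
  [/sourcecode]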

    One Approach – Future Proofing Single-org Developments

    Optional Features – selective
    Multi-currency – yes
    Editions Support – no
    i18n – yes
    Unit Test Data – yes
    Profile Permissions – yes

    The list above is my default position on the approach to take on single-org developments; this can change significantly depending on the current state of the org in terms of configuration and customisation, plus the client perspective on the evolution of their Salesforce org and attitude toward investing in future-proofing/extensibility. In the single-org, consultancy project case it's advisable to be completely open and let the client decide if the additional X% cost is worth the value. I think the real point here is that the conversation takes place and the client has the opportunity to make an informed decision.

Salesforce Source Control and Release Process

This post outlines my preferred approach to managing parallel developments on the Salesforce platform in what I refer to as the Converged Programme Model. I readily acknowledge that there's a multitude of ways to accomplish this, each with its own subjective merits. Before adopting a parallel work-stream model take the time to understand the technical complexity, process overhead and time investment required. Of particular concern should be the team's readiness for such a disruptive change. In my experience it's better to plug any skills gaps upfront, be very prescriptive with process guidance, start small and build out incrementally – the risk otherwise is considerable. Typically resistance will come from individuals unaccustomed to a disciplined approach to software development/release process.

SCC

Objectives

  1. Concurrent Development. Support parallel programme workstreams converging into a shared production Salesforce environment.
  2. Automation. Deliver build automation – reducing the manual overhead required to deploy between environments.
  3. Gold Standard. Deliver a best practice approach – the initial design should scale up and down in response to changing programme conditions.
  4. Non-disruptive. Facilitate a staggered approach to adoption – enabling key benefits to be realised quickly without disrupting productivity.
  5. Minimise Release Overhead. Project branches should be regularly and incrementally updated from the master branch – reducing the inherent risk of divergence over time.

Tools

  1. GitHub
    – Get started with public repositories, upgrade to a paid plan and use private repositories for any source code you don't want to share with the world at large.

    – Create an Organisation account to enable Team functionality.

    – Key benefit versus Subversion (CVS etc.) is fast and efficient branch management; parallel workstreams are managed on branches with frequent merging.

    – It is possible, albeit time expensive, to implement a Git server within the enterprise. In my view the GitHub administration interface alone is worth the price.

  2. Jenkins
    – Deployed on a Windows EC2 instance with an elastic IP. A free usage tier, micro instance provides an ideal server host. Using a Linux host can be beneficial in regard to SSH authentication from Jenkins to GitHub; this is just one advantage of many. Pick the Operating System/Platform the team you're working with is most familiar with – a Linux host that only one team member can administrate makes no sense.

    – On Windows the Jenkins service should be configured to run as a specific user account (with least privileges assigned). This is required to generate the key files for SSH authentication.

    – Enable Jenkins security. Particularly relevant if the host is open to the public web. Lock the inbound IP ranges via the EC2 security group if possible.

    – Either store the Ant build files (build.xml, build.properties) in the Git repository or use an XCOPY post-build step to copy the files into the workspace from a file system location – as below. I prefer to keep the build files external to Git – there shouldn’t be any need to version manage such files – plus the build.properties file may contain passwords in plaintext.

    Jenkins Job Build Config

    – Install GitHub and Git Plugins
    Required to build from a GitHub repository and enables build automation via Post-Receive Hooks. Under Jenkins System Configuration; configure “Manually manage hook URLs”, this requires your GitHub repository to have the hook set manually via Service Hooks under repository settings. Add a [Jenkins (GitHub plugin)] service hook like http://yourservername:8080/github-webhook/. The message sent on git-push to the remote repository will trigger any Jenkins job that builds from the branch that has been updated and has the [Build when a change is pushed to GitHub] option set to true.

    – SSH Keys
    In order to use SSH from Jenkins to a private GitHub repository, SSH authentication is required, which uses a generated key pair. The public key is added as a Deploy Key in GitHub under repository settings. This works well, but if you want the same Jenkins user to access multiple repositories over SSH you have a problem, as each Deploy Key must be globally unique across all GitHub repositories. The answer to this is to use aliasing and a SSH config file (refer: http://www.onemogin.com/blog/2011/9/1/jenkins-and-github-multiple-private-projects.html) however this won't work with Post-Receive Hooks as the repository URL in the sent message won't match the aliased repository URL in the Jenkins job – typically the errant behaviour below appears in the Jenkins log. I can't see a way around this at the time of writing this post.

    [sourcecode language="text"]
    FINE: Skipped GitHub Test – buildautomationtest repository because it doesn’t have a matching repository.
    May 7, 2013 6:21:35 PM com.cloudbees.jenkins.GitHubWebHook
    FINE: Considering to poke GitHub Test – buildautomationtest repository
    [/sourcecode]

    – Chatter Plugin
    I’m a big fan of this plugin by Simon Fell. I tend to use a dedicated release manager user, e.g. release.manager@force365.com, standard user license capacity permitting, and perform all deployment tasks in this user context. This approach provides clarity on changes made by a deployment versus actual user and provides an easy way to be notified of failures etc.

    Key Principles

    1. Fit-for-purpose Org-set
      – Org-set is the terminology I use to describe the collection of orgs, and their roles, required to deliver a project safely to production.

      – One size does not fit all. Pick the minimum set of orgs roles required to deliver the project. Each org is a time expensive overhead.

      – Sandbox types. In defining the org-set, factor in the availability of config-only and full-copy sandboxes. The latter must be retained for cases where infrequent refresh is required. Project-level orgs don’t need to be part of the sandbox estate, Developer Edition orgs, or perhaps Partner Developer Edition orgs can be employed. Full-copy sandboxes are incredibly expensive, valuable resources, use only when absolutely necessary for as wide a set of roles as possible.

      – Connected orgs. For projects involving complex integrations, the complexity involved in creating a connected-org may influence the org-set design – there may be an argument to consolidate roles onto a single test org used for QA and UAT perhaps.

    2. Continuous Integration
    A best practice org-set design for non-trivial technical projects with multiple technical contributors should require isolation of developer activities into separate developer orgs, with a code-level integration org and Continuous Integration (CI) process in place.

    3. Project-level sandboxes are not refreshed
    Project-level orgs are all built from the Git repository. The Pre-production programme-level org must be refreshed from Production pre-deployment to ensure the deployment is verified against the current state.

    4. Commit to the remote project branch is a commitment that metadata is ready for system testing
    Build automation will deploy a project branch commit to the project QA org. In my experience it pays to be prescriptive in terms of development process.

    5. Commit to the remote master branch is a commitment that metadata is ready for integration testing
    Build automation will deploy a master branch commit to the programme INT (integration) org – this org exists to enable rigorous regression testing to be applied by all project workstreams. Post-deployment suites of automated tests should be invoked and reports analysed by the test lead on each project.

    6. Test Automation
    It's a significant resource overhead to execute manual test scripts for each regression test cycle, not to mention error prone. For non-trivial projects, the investment must be made at an early stage in automated testing. Selenium is a good choice, but the tool utilised doesn't really matter; what matters is that from the outset of the project the test team start to build up a comprehensive suite of automated test cases with coverage of the key acceptance criteria defined for each user story. The suites then enable automation of regression testing during deployment phases – the same scripts underpin system testing and provide an often overlooked second stage to CI (unit tests + acceptance tests).

    7. GitHub branch design
      – A simple, clean branch design is desirable in the remote repository.

      – Long-lived branches for active project workstreams. Project branches may have sub-branches for each sprint or phase.

      – Long-lived branch for patches. Bug fixes are developed on local branches and committed to the remote support branch when ready for system testing.

      – It can be advisable to consider how important a clean Network Graph is; this is impacted by Git merge versus rebase decisions.

    8. Build automation challenges
    In a perfect world, all metadata component types would be covered by the Metadata API. This isn't the case, so the nirvana of simply cloning an org configuration is yet to exist. Instead a prescriptive process is required which spans manual configuration tasks, metadata deletion and build automation.

      – Proactive management of change
      A nominated release manager should proactively manage change at a programme-level, advise the project teams on release process and strive to minimise deployment conflicts through early involvement in all project developments. A change log should be maintained which lists all changes being made. This could include technical component types (ApexClass, ApexTrigger etc.) being added, modified or deleted, but as a minimum must track configuration changes requiring manual action – enablement of features, field data type changes etc. and required standing data (custom settings etc.). All changes should be mapped to a Change Type of manual or automated and a list of orgs to which the change has been deployed tracked. This is clearly an overhead to the project but without control it can be very easy to lose track of the current state of the orgs in use and face significant time expense in attempting to rationalise the situation through failing deployments. The release manager, or technical lead should apply manual tasks to target orgs pre-emptively to minimise automated build failures.

      – Be prepared for build failures
      Automated builds will fail; this is a fact of life where build-dependencies on manual actions exist. Proactive management will only get you so far. Attempting to minimise this is more realistic than elimination.

      – Data
      Automation of data setup in a target org is possible via Ant and the Data Loader CLI, or other similar means. Alternatively a data file could be deployed as a document or static resource and then loaded from an Apex script (as per the ISV approach).

      – Unsupported metadata component types
      Automation is possible using Selenium scripts, which execute at the UI level and can simulate, for example, a user activating a setting. Such scripts can then be integrated into an automated build. This is highly possible, but takes time and expertise with both Ant and Selenium to accomplish.

    9. Programme-level Integration
    The Converged Programme Model involves project workstreams building in isolated org-sets with frequent merge-from-master actions bringing across any changes to the production state. This approach should surface conflicts early, i.e. during development itself, but to be sure that shared component changes have not introduced any functional inconsistencies, regression testing must be applied by each and every project workstream on each occasion any project does a release. This is a strong argument for test automation.

    10. UAT
      – Project-level or programme-level?
      In principle UAT should always be applied at the local project-level as the commit to the programme-level integration org is an absolute commitment that the code is production ready. In practice UAT may be two-tiered: initial user acceptance of new functionality, followed by some form of secondary acceptance testing in Pre-production, in parallel to deployment verification testing.

    11. Path-to-production Change Management
    As with any programme of work, fit-for-purpose Change Management processes should be in place. In context this means a Change Advisory Board (CAB) should be in place to approve deployment; this must include informed and empowered representation across business and technical functions.

      – A Deployment Request Form (DRF), or similar, should be produced to document the change being released, the impact, pre and post deployment tasks, GitHub commit # etc., approval date or rejection reason. The DRF could be approved by a convened board or via email response.

      – The DRF process is absolutely required for the final deployment to Production, but may also be applied to the Pre-Production deployment, i.e. the commencement of the final step of the path-to-production release flow.

Salesforce Summer 13 – Metadata Deployment

Quick post highlighting some Summer ’13 goodness for metadata deployment.

1. Abort a running deployment – This is a massive improvement enabling failed or inadvertent deployments to be cancelled whilst in progress. Anyone working on large deployments will bear witness to how frustrating it can be to watch a 30 minute deployment run to completion with a failed unit test occurring after 5 minutes.

The Abort option appears against the running deployment in the Salesforce web UI. There doesn’t appear to be an abort operation via the Migration Tool or underlying Metadata API.

2. User references are maintained – Simply put, where individual user references (email alert workflow actions, running users etc.) exist in the metadata, the deployment process attempts to match the source usernames to existing target usernames, by stripping-off sandbox suffixes added to sandbox usernames. This is great for sandbox to production deployments, but needs to work with sandbox to sandbox deployments also – the documentation is unclear on this aspect, although it looks likely that this is supported. Multiple match or non-match cases result in a deployment error. I’ve previously used Post-Retrieve Modification via Ant to deal with this through simple string substitution.

3. Metadata API Supported Types. New additions include Approval Processes and SAML SSO configurations. The former being long overdue as approval processes can be extremely time consuming to recreate manually in a target org. The latter provides a minimal convenience for cases where multiple orgs share the same IdP.

External Id Deployment Error

On occasion when deploying components between orgs you may encounter a database-level category of deployment error, bubbling up from the underlying Oracle RDBMS with limited information to support any diagnostic process. The example below is one such case I’ve seen in practice recently. Most concerning about this type of error is that the deployment appears to commit (with the exception of the affected fields) but reports a failure result state. It’s therefore one to take seriously in automated build environments where undesirable action may be taken on the basis of the build result etc.

Example Case – Too many external Ids on a single object
Scenario – an object (standard or custom) has 3 custom fields defined as external Ids (this is the soft limit), and an attempt is made to deploy a new object definition with 2 different fields marked as external Ids, making the sum 5. Remember, the deployment won't remove this field attribute. In all such cases, where the updated object definition exceeds the limit of 3, the result is..

[sourcecode language="text"]
C:\Program Files (x86)\Jenkins\jobs\..\workspace\build.xml:41: Failed to process the request successfully. Cause(LIMIT_EXCEEDED): LIMIT_EXCEEDED: java.sql.SQLException: ORA-20526:
ORA-06512: at "HAPPY.CCUSTOMINDEX", line 275
ORA-06512: at line 1
: {call cCustomIndex.insert_detail(?,?,?,?,?,?,?,?,?,?,?)}

..or..

C:\Program Files (x86)\Jenkins\jobs\..\build.xml:41: Failed to process the request successfully. Cause(LIMIT_EXCEEDED): LIMIT_EXCEEDED: java.sql.SQLException: ORA-20526:
ORA-06512: at "SLEEPY.CCUSTOMINDEX", line 275
ORA-06512: at line 1
: {call cCustomIndex.insert_detail(?,?,?,?,?,?,?,?,?,?,?)}
[/sourcecode]

I've also seen this behaviour with deployments that exceed the tracked fields limit in a similar way; however, I've been unable to recreate this with the 27.0 Metadata API.

Salesforce Exception Reports

I think it’s fair to say that consideration of reporting and analytics is not traditionally a technical architect’s direct concern. In the Salesforce context I always challenge this presumption and promote the idea that a project architect should absolutely be looking at the high-level analytical requirements during the foundation stage of the project. Why you may well ask? The answer to this is simple, in order to design a fit-for-purpose data model you must consider data coming out (via the native reporting tools) as well as data going in efficiently. I make this point in a prior post that outlines my view on the Salesforce development process. The number of projects I see where this thinking hasn’t been applied and 3rd party tools are wheeled in to address the gap continues to surprise me. Key to success here is understanding what can be achieved with the Salesforce reporting functionality. Which brings me on to the topic for this post – exception reports, a type of reporting not obviously catered for by the standard reporting tools. For instance – show me a list of Accounts without an Opportunity in the last year. Such reports are invaluable in identifying omissions, focusing sales activities, generating call lists, tracking inactivity etc. etc.

I’ll use 2 examples below to clarify how to approach such a scenario.

Simple Example.
Accounts without a closed Opportunity in the last 30 days.

For this report we use a "without" Cross Filter as per the screenshots. Cross Filters were added in Spring '12 and enable records to be filtered based on the existence (or non-existence) of related records which meet defined criteria. Within a Cross Filter up to 5 sub-filters can be defined. Note, the report's top level filters do not apply at the sub-filter level. Cross filters can be added for any related object, custom or standard – the limit is 3 per report (each with up to 5 sub-filters). Powerful functionality indeed, and a great convenience that avoids a Custom Report Type for the outer join. Note, the sub-filter is limited to fields on the related object; no cross-object sub-filter logic is possible.

simple exception report

Complex Example.
Accounts without a Won Opportunity involving a certain Product Family in the last 60 days.

For this report we start with a Cross Filter report to satisfy the "without Won Opportunity" requirement. From there we need an indicator (of the existence of an Opportunity Product linked to a product of the specified family) at the Opportunity level which can be used in a sub-filter. This is actually straightforward to do and relies on the old school technique for exception reports, namely the use of rollup summary fields and simple workflow.

Solution components :
Opportunity Product – Custom Field – ProductFamilyHardwareFlag
Number (18,0 default 0)

Opportunity Product – Workflow Rule – “Opp Product – New or Changed Product”
Evaluate – Created and Edited – Formula (OR(ISNEW(), ISCHANGED( Product2Id )))

Opportunity Product – Workflow Action – “Opp Product – Set ProdFamilyHardwareFlag”
Field to update [ProductFamilyHardwareFlag] with Formula = IF( TEXT(Product2.Family)=’Hardware’, 1,0)

Opportunity – Custom Field – ProductFamilyHardwareCount
Rollup Summary Field (Opportunity Product – SUM – Field to aggregate = ProductFamilyHardwareFlag). All records included.

How does this all work? In short, a workflow rule on Opportunity Product sets a numeric field equal to 1/0 based on the product family meeting the defined criteria. The numeric field is then rolled up to the Opportunity level via RSF, which is then applied in a sub-filter as below.

complex exception report

Note, this is just an example to illustrate the concept. An actual implementation would typically require a more generic approach.