
Salesforce Test Driven Development

I've been fortunate to work as a consultant Salesforce architect for the last decade. During this time I've encountered an interesting variety of approaches to Agile in the Salesforce space – most of which, unfortunately, fit a hybrid model – but what has been noticeably absent is the application of agile engineering practices. Test Driven Development (TDD) is one of the most widely understood agile engineering practices (alongside Pair Programming) and is perhaps the most practical to adopt. However, despite the benefits, and despite the lip-service paid to the subject, in practice TDD remains a rarity. With this post I hope to provide a convincing argument to Salesforce developers and architects that TDD does not add overhead to the development lifecycle – in fact the opposite is true.

The Salesforce platform is maturing in development terms; the introduction of new tools and techniques such as Salesforce DX (SFDX) is testimony to this. In this context the idea of developers working ad hoc and in an undisciplined manner doesn't fit; instead the consistent application of established engineering practices will play a key role moving forward. No more low-quality unit tests written as an afterthought, and no more random coding conventions and inconsistent standards. Salesforce development teams can no longer choose to hide in the void that sits between declarative development and traditional software engineering – it's definitely time for some new habits.

This blog post draws together a high-level conceptual view of Test Driven Development with the implementation practicalities unique to the Salesforce platform.

Let's start with a great quote to set the scene.

"Build it right to build the right thing" – Ron Jeffries. The concept is simple: build the product right, repeat until it's the right product.

Test Driven Development

Test Driven Development is an XP agile engineering practice. eXtreme Programming (XP) is a software development methodology focused on customer value, continuous feedback, incremental feature delivery and team empowerment. It is widely recognised that the earlier testing is applied in the software development lifecycle, the greater the value (shorter development timescales, more reliable code etc.). TDD takes this idea to its ultimate conclusion: testing occurs before the actual coding. This may seem counterintuitive; in order to understand the impact fully it's important to understand the high-level approach.

(1) Identify a low-level requirement (PBI, Task etc.)
(2) Describe the logical tests necessary to confirm that code satisfies the requirement
(3) Write the minimum code necessary to pass the tests
(4) Tidy-up the code to remove clutter, complexity and duplication

The steps above describe a design process where software evolves in very small increments with a focus on simplicity and testability. This step-by-step approach results in emergent design, where the code informs the design approach and guides the developer toward the best solution. This is the direct opposite of the traditional big-design-upfront (or BDUF) approach. The goals and benefits of TDD are many; “clean code that works” is a phrase that best summarises this (another golden Ron Jeffries quote). Note, TDD is often referred to as Test Driven Design in recognition of this emergent design side-effect.

TDD Cycle

It is imperative to understand the conceptual aspects of TDD in order to gain the full benefits, thankfully the practical application follows a relatively straightforward and approachable 3-stage cycle (Red-Green-Refactor).

Stage 1 – Red : Write the minimum code needed for a failing test. This may entail development of (or extension to) a test harness, plus the unit test code required to exercise the logical test case. The test code could be completely new or simply an augmentation to existing tests. In writing the test code the developer is forced to think through the requirement in detailed, logical terms – the type of thinking more typically applied at a later stage when writing the functional code.

It is imperative that the test code evaluates predictable, logical test conditions. The test code should fail for a specific, expected reason – not due to a test harness issue or because the test code itself is invalid.

Stage 2 – Green : Write the minimum amount of code needed to pass the test. The code written here must satisfy the test condition and nothing more – no embellishments.

Discipline is required to resist the developer's innate tendency to create more than is needed.

Stage 3 – Refactor : The functional code should be reviewed and modified to remove clutter, complexity and duplication. Code should work (of course) but also (critically) be readable, updateable and testable.

Clean code is the objective here; if every tiny incremental addition of code satisfies the clean code criteria (readable, updateable and testable) the outcome will be a highly organised and easily maintainable code base that can be extended with new features at a consistent level of effort over time. Where such an approach is not applied it is often inevitable that a complete re-write becomes necessary once the steadily increasing incremental cost (time) of adding new features becomes intolerable.

The TDD cycle then repeats until all requirements are addressed. The incremental design approach focuses upon each individual requirement one at a time. This minimalist, simple-design-led approach can be difficult for experienced developers to adopt; however, once the learning curve is overcome, the focus on simpler designs allows easier modification, more efficient testing and faster production of working code.
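To make the cycle concrete, here is a minimal sketch in Apex (the platform context for the remainder of this post). The requirement, class and test names are illustrative only.

[code language="java"]
// Stage 1 - Red : the test is written first and fails (AccountScoring does not yet exist).
@isTest
private class AccountScoringTestSuite {
    static testMethod void isStrategicAccountIdentified(){
        Account a = new Account(Name='Acme Inc', AnnualRevenue=2000000);
        System.assertEquals('Strategic', AccountScoring.score(a));
    }
}

// Stage 2 - Green : the minimum code needed to pass the test - no embellishments.
public with sharing class AccountScoring {
    public static String score(Account a){
        return (a.AnnualRevenue != null && a.AnnualRevenue > 1000000) ? 'Strategic' : 'Standard';
    }
}

// Stage 3 - Refactor : remove clutter, complexity and duplication while the test stays green.
[/code]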

TDD in the Salesforce Context

The Salesforce (or Force.com) cloud platform is an ideal software development environment in which to adopt the Test Driven Development (TDD) agile engineering practice. The Apex programming language has integrated unit testing capabilities which reduce the level of effort required. Salesforce developers are also accustomed to the idea that code requires a (platform enforced) test coverage level of 75% for production deployment. In the Salesforce development context, therefore, the tools are provided and the concepts relating to unit test development are well understood. The only challenge remaining is the mind-set shift from big-design-upfront and retrospective test code development to TDD, with the agile benefits of emergent design, clean resilient code and reduced development effort.

Unit Testing on the Salesforce Platform

An Apex class can be easily annotated (@isTest) to indicate its role as a unit test class. The snippet below shows a very basic example.

[code language="java"]
@isTest(SeeAllData=false)
private class FeatureTestClass {

    @testSetup static void setup(){
        // populate custom settings.
        // create test data used by all tests.
    }

    /* */
    static testMethod void isAccountProcessed() {

        // setup test specific test data and context
        insert new Account(Name='Acme Inc');

        Test.startTest();

        // do something
        AccountProcessor.process();

        Test.stopTest();

        // test outcome
        System.assertEquals(1, [select count() from Account where IsProcessed__c=true]);
    }
    /* */
}
[/code]

The [SeeAllData=false] flag indicates that only test data created by the class is visible to the test method (plus certain setup objects such as users). There are circumstances where the flag must be set to [SeeAllData=true]; for example, where test code is unable to create the required test data due to object access restrictions. The isTest annotation takes a second parameter [isParallel=true|false] which controls whether the test class can run in parallel with other tests (for expedient execution) or whether the behaviour requires serial execution (perhaps to avoid lock contention on shared resources such as custom settings).
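A minimal sketch of the annotation usage described above (class names are illustrative):

[code language="java"]
@isTest(isParallel=true)   // can run in parallel with other tests for faster execution.
private class AccountScoringTestSuite {
    // test methods..
}

@isTest(isParallel=false)  // serial execution, e.g. where tests contend for shared custom settings.
private class SettingsDependentTestSuite {
    // test methods..
}
[/code]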

Once a test class is implemented, adding individual tests becomes very straightforward. A standardised naming convention for the test cases can help the readability of the code; perhaps based on the expected outcome or the logical test case. Test code organisation at the class level should also be predictable and may follow a feature convention.

There are a number of methods available for the execution of test code on the Salesforce platform; Setup menu, Developer console and API (incl. IDE) as examples. Tests can also be executed asynchronously at a scheduled future time via Apex or the API (ApexTestQueueItem, ApexTestResult objects).
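As a sketch of the asynchronous route – assuming the ApexTestQueueItem and ApexTestResult fields documented in the Apex Developer Guide; the class name filter is illustrative:

[code language="java"]
// enqueue test classes for asynchronous execution.
List<ApexTestQueueItem> queueItems = new List<ApexTestQueueItem>();
for (ApexClass c : [select Id from ApexClass where Name like '%TestSuite']){
    queueItems.add(new ApexTestQueueItem(ApexClassId=c.Id));
}
insert queueItems;

// later - inspect the outcomes for the parent job.
Id parentJobId = [select ParentJobId from ApexTestQueueItem where Id=:queueItems[0].Id].ParentJobId;
for (ApexTestResult r : [select ApexClass.Name, MethodName, Outcome, Message
                         from ApexTestResult where AsyncApexJobId=:parentJobId]){
    System.debug(LoggingLevel.ERROR, r.ApexClass.Name + '.' + r.MethodName + ' : ' + r.Outcome);
}
[/code]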

Via these routes the developer can configure a test execution to include all or selected test cases from multiple test classes. In the context of the TDD cycle the developer needs the ability to run just a single test case – running everything would waste time.

Test Suites allow test classes to be grouped and executed together.

In the TDD context, where unit tests drive the development process, it is important to have a clear naming convention defined for the following (an illustrative sketch follows the list):

Test Case: expected outcome, or logical test case (readability is key)
Test Class: group of test cases relating to a given feature
Test Suite: group of test classes relating to a functional area
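All names below are hypothetical and serve only to illustrate the convention.

[code language="java"]
// Test Suite "Account Management" groups AccountMergeTestSuite, AccountScoringTestSuite etc.
@isTest
private class AccountMergeTestSuite {                        // Test Class : Account Merge feature.
    static testMethod void isDuplicateAccountRejected(){}    // Test Case : expected outcome.
    static testMethod void isMergeHistoryRecordCreated(){}   // Test Case : expected outcome.
}
[/code]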

What is a Unit?

For the TDD cycle to be effective it can help to understand the level at which a unit test is applied. A unit is variably defined as the smallest testable part of an application, or the smallest change that still makes sense to be tested in the developer’s opinion. This of course is a subjective definition.

Common sense should be applied; a 200 line Apex trigger on Account will not be a single unit, a 20 field update on Contact also won’t be 20 units. Instead the feature being developed should be decomposed into a unit test per logical test case. This can take practice to get right, but striving to keep each test as simple as possible should be the primary guiding factor. Simple tests are easy to modify and provide a self-documenting benefit.

Advanced Unit Testing

The Apex language supports the advanced unit testing technique of Mocking, which enables isolation of test execution from external factors such as API callouts or Salesforce configuration changes.

Mocking essentially involves the non-disruptive, dynamic substitution of a test specific class instance in the place of a functional class instance. Apex provides the HttpCalloutMock interface to allow test code to utilise pre-fabricated Http callout responses. The Stub API (StubProvider interface and Test.createStub method) supports the development of Mocking frameworks that enable any class to be substituted dynamically at runtime.
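The sketch below shows the HttpCalloutMock interface in use; the response body and the ExchangeRateService class are hypothetical.

[code language="java"]
@isTest
private class ExchangeRateServiceTestSuite {

    // test-specific response substituted for the real callout.
    private class ExchangeRateCalloutMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req){
            HttpResponse res = new HttpResponse();
            res.setHeader('Content-Type', 'application/json');
            res.setBody('{"rate": 1.25}');
            res.setStatusCode(200);
            return res;
        }
    }

    static testMethod void isRateAppliedFromCallout(){
        Test.setMock(HttpCalloutMock.class, new ExchangeRateCalloutMock());

        Test.startTest();
        Decimal rate = ExchangeRateService.getRate('GBP', 'USD'); // hypothetical class performing the callout.
        Test.stopTest();

        System.assertEquals(1.25, rate);
    }
}
[/code]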

References

Apex Developer Guide > Testing Apex

Ron Jeffries Blog – All things XP and much more

Martin Fowler – All things Code Refactoring and much more

Salesforce1 Lightning

Once again the annual Dreamforce event has been and gone, leaving most practitioners with an uncomfortable knowledge deficit in terms of the real detail of the wealth of new innovations announced. This autumn period, post-Dreamforce, is often a time for investigation; piecing together information gathered from press releases, blog posts, social media and event session replays.

This post provides the output of my own brief investigations into the Salesforce1 Lightning announcement. Note that I may be vague or incorrect in some areas; I have made assumptions (safe ones, I hope).

Salesforce1 Lightning – What is it?
Salesforce1 Lightning is the next generation of the Salesforce1 Platform; it is framed specifically as a platform-as-a-service (PaaS) play and heralded as the world's number 1. The platform is comprised of a number of new and re-branded technologies that collectively target the rapid delivery of cross-device, responsive applications via clicks-not-code. Responsive in this context refers to dynamic components that adapt their layout to the viewing environment, providing an optimised viewing experience regardless of the dimensions (i.e. a single view layer that supports desktop computers, smartphones, tablets and wearables).

Salesforce1 Lightning – Key Features
– Lightning Framework (GA now)
Salesforce-developed UI framework for rapid web application development based on the single-page model, loosely coupled components and event-driven programming. The underlying Aura Framework was open-sourced last year and is used internally by Salesforce for the development of Salesforce1 plus recently introduced platform functionality (Opportunity Splits UI etc.). The Lightning Framework represents the availability of a subset of the Aura Framework functionality as a platform service for custom application development.

– Lightning Components (beta now, GA Spring 2015)
In order to build with the Salesforce1 Lightning Framework we need components. Components represent cleanly encapsulated units of functionality from which user interactions (i.e. apps) are composed. Component communication is supported by an event-based architecture.

Standard Components – e.g. Button, Report Chart, Chatter Feed. Standard Components enable custom applications to be composed that inherit the Lightning UI (meaning the Salesforce1 UI).

Custom Components. Using the Lightning Component Framework, custom components can be developed to deliver any required experience (style, branding, function etc.). Developers build components with predominantly client-side behaviours, Apex can be employed to provide server-side controller functionality.

AppExchange Components. 3rd party commercial components installed from the AppExchange.

– Lightning App Builder (beta Spring 2015)
As the name implies, the App Builder enables application composition through the drag-and-drop of components – meaning no code. The builder provides Desktop, Laptop, Tablet, Phone and Smart Watch views onto which optimal component layouts are configured – the responsive behaviour of the components themselves therefore applies within the device type view. This makes good sense, as an optimal laptop experience is not the phone view with proportionally bigger components. This approach is commonplace; wix.com for example works this way – i.e. build a generic device-type-specific view with components that adjust to the device-specific viewing environment.

– Lightning Process Builder (GA Spring 2015)
Re-branded Force.com Flow / Visual Workflow functionality. In this context the point being that the business process logic is configurable through a drag-and-drop visual tool. Edit: my assumption here in relation to re-branding may well be incorrect, it has been suggested that the tool is in fact a distinct technology from Force.com Flow and may coexist. I’ll update this when I understand more – the functional overlap between Force.com Flow and Process Builder looks to be significant.

– Lightning Schema Builder (GA now)
Re-branded Schema Builder functionality. In this context the point being that the data schema is open to visual (i.e. drag-and-drop) manipulation.

– Lightning Connect (GA now)
Re-branded Platform Connect, or External Objects. The ability to configure virtual objects that query external data sources in real-time via the OData protocol. Connect underpins the concept of real-time external data access via the Salesforce1 platform and its constituent mobile applications.

– Lightning Community Builder (beta now, GA Spring 2015)
As the name implies a drag-and-drop tool for the configuration of Salesforce Communities plus the content delivered.

Entry Points
Salesforce1 Lightning Components can be accessed in the following ways.

Standalone App – access via /NAMESPACE/APPNAME.app
Salesforce1 Mobile App Tabs – create a Tab linked to a Lightning Component and add to the Mobile Navigation Menu.
Salesforce1 Mobile App Extensions – currently in Pilot, a means by which custom components can override standard components.

Future – All Visualforce entry points should ultimately become options for Lightning Components. This one is definitely an assumption. Standard components which exhibit the standard Salesforce (Aloha) styling would be required for this.

Considerations
The following list represents some initial thoughts regarding the significance of Salesforce1 Lightning – formed without the insight gained through practical experience.

– Mobile-first.
The design principle is simple, the most effective way to build compelling user experiences across multiple devices is to start with the simplest (or perhaps smallest) case, i.e. the mobile. Note, the introduction of wearables possibly invalidates this slightly. The alternate approach of trying to shoe-horn complex desktop experiences onto smaller viewing environments rarely produces anything worthwhile.

– Rapid Development.
Quickly, faster, speed-of-business plus various other speed-related adverbs litter the press releases for Salesforce1 Lightning. If the name itself isn’t sufficient to highlight this, the platform is all about rapid development. In context rapid is realised by configuration-over-code; assembling apps with pre-fabricated, road-tested components delivers the shortest development cycle. That said, regardless of whatever form development takes a development lifecycle is absolutely necessary – the vision of analysts configuring apps over lunch and releasing immediately to business users is prone to disaster; I don’t believe this to be the message of Salesforce1 Lightning however.

– Technical Barriers versus Organisational Friction.
Removing technical barriers to rapid application development is only one part of the equation, organisations need to be culturally ready to embrace and therefore benefit from such agility. Building an enterprise app in a week makes little sense if it takes 3 months for acceptance, approval and adoption processes to run. The concept of turning IT departments into centres of innovation is an incredibly powerful aspiration, however this relies heavily on empowerment, trust and many other agile principles some organisations struggle with.

– Development model.
The Lightning development model is fully consistent with the age-old Salesforce philosophy of rapid declarative development via pre-fabricated componentry. Application architects, admins, analysts etc. assemble the apps with components supplied by Salesforce, built in-house or procured from the AppExchange. Developers will therefore focus more on building robust and reusable components than actual applications – which makes good sense assuming appropriate skills are applied to component specification and application design. The model requires non-technical resources to adopt a development mindset, which may be problematic for some.

– Developer Skills.
To build with the Salesforce1 Lightning Component Framework developers must be proficient in JavaScript, beyond intermediate level in my view. Lightning is a comprehensive component-based client-server UI framework with an event driven architecture. Developers with previous exposure limited to basic JavaScript augmentations to Visualforce face a learning curve. Anyone still under the false impression that JavaScript programming is simple compared to languages such as Java, C, C++ etc. may want to reconsider this before embarking on a Lightning project.

– Use Cases.
The viability of the proposed development model may ultimately come down to the complexity of the use cases/user experiences that can be supported without reverting to custom component development. By their very nature mobile interactions should be simplistic, but for desktop interactions it will be interesting to understand the scope of the potential for complex applications.

– Adoption.
Salesforce1 Lightning follows a similar paradigm to both Site.com and Force.com Flow, where historically technically oriented tasks are made possible for non-technical users; interactive web site development and drag-and-drop visual composition of business processes respectively. Both technologies, as innovative and empowering as they are, do not appear to have radically changed the development models to which they pertain. An obvious question therefore is whether the empowering technology alone is enough to drive adoption.

References
Aura Documentation Site
Lightning Components Developer’s Guide
Lightning QuickStart

Apex Unit Test Best Practice

This post provides some general best practices in regard to Apex Unit Tests. This isn’t a definitive list by any means, as such I’ll update the content over time.

Top 10 Best Practices (in no order)

1. TDD. Follow Test Driven Development practice wherever possible. There is no excuse for writing unit tests after the functional code; such an approach is indicative of a flawed development process or lax standards. It's never a good idea to estimate or deliver functional code without unit tests – the client won't appreciate an unexpected phase of work at the point of deployment, not to mention the pressure this approach puts on system testing.

2. Code Quality. Ensure unit tests are written to cover as many logical test cases as possible, code coverage is a welcome by-product but should always be a secondary concern. Developers who view unit tests as a necessary evil, or worse, need to be educated in the value of unit tests (code quality, regression testing, early identification of logical errors etc. etc.).

3. Test Code Structure. For some time now I’ve adopted a Test Suite, Test Helper pattern. A suite class groups tests related to a functional area. A test helper class creates test data for a primary object such as Account (i.e. AccountTestHelper.cls), secondary objects such as price book entry would be created within the product test helper class. The suite concept provides a logical and predictable structure, the helper concept emphasises that test data creation should be centralised.
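A minimal sketch of the helper concept (names are illustrative; the same class is referenced in the example suite further down):

[sourcecode language="java"]
@isTest
public class AccountTestHelper {
    // centralised test data creation for the Account object.
    public static List<Account> createAccounts(Integer count, String name){
        List<Account> accounts = new List<Account>();
        for (Integer i=0; i<count; i++){
            accounts.add(new Account(Name=name + ' ' + i));
        }
        insert accounts;
        return accounts;
    }
}
[/sourcecode]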

4. Test Code Structure. Put bulk tests in a separate class e.g. AccountTriggerBulkTestSuite.cls (in addition to AccountTriggerTestSuite.cls). Bulk tests can take a long time to complete – this can be really frustrating when debugging test failures – particularly in production.

5. Test Code Structure. Ensure test classes contain a limited number of test methods. I tend to limit this to 10. As with point 4, this relates to test execution time, individual methods can’t be selectively executed – the smallest unit of execution is the class.

6. SeeAllData. Always use SeeAllData=true by exception and at the test method level only. Legacy test code related to pricebooks that historically required this can now be refactored to use Test.getStandardPricebookId(). Also, set the [Independent Auto-Number Sequence] flag to avoid gaps in auto number sequences through the creation of transient test data.
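For example, a minimal sketch of the refactored approach (assertions omitted):

[sourcecode language="java"]
@isTest
private class OpportunityPricingTestSuite {
    static testMethod void isLineItemPriced(){
        // no SeeAllData=true required - the standard pricebook Id is available directly.
        Id standardPricebookId = Test.getStandardPricebookId();

        Product2 product = new Product2(Name='Widget', IsActive=true);
        insert product;

        insert new PricebookEntry(Pricebook2Id=standardPricebookId, Product2Id=product.Id,
                                  UnitPrice=100, IsActive=true);

        // .. exercise the pricing logic and assert the outcome.
    }
}
[/sourcecode]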

7. Test Case Types. As the Apex Language reference proposes, write unit tests for the following test case types.

Positive Behaviour – logical tests that ensure the code behaves as expected and provides successful positive outcomes
Negative Behaviour – logical tests for code behaviour where parameters are missing, or records do not adhere to defined criteria – does the code protect the integrity of unaffected records – does the runtime exception handling function as expected
Bulk – trigger related tests primarily – how the code behaves with a batch of 200 records – mix the batch composition to stress the code against governor limits
Restricted User – test relevant combinations of user role and profile – this test case type is prone to failure through sharing model adjustments – triggers should delegate processing to handler classes that have the “with sharing” modifier

8. Debugging. Always use the syntax below for debug statements within code (test and non-test code). An efficient practice is to add sensible outputs whilst writing the code. This approach avoids a code update or re-deployment to add debug statements during error diagnostics. Note – in such cases Checkpoints could be a better approach anyway – particularly in production. The use of the ERROR logging level enables a restrictive log filter to be applied such that a clear debug log is produced and max log size truncation is avoided – note, log filters can also have a positive impact on transaction execution time.

[sourcecode language="java"]System.debug(LoggingLevel.ERROR, 'my message');[/sourcecode]

9. Commenting. Always comment test methods verbosely to ensure the test case intent is clear and that the test code can be mapped to the related non-test code. Test classes should be fully self documenting and be viewed as the primary enabler for the future maintenance of the non-test code.

10. Maintenance. Test code is highly dependent on the environment state. Any configuration change can require test code to be updated; this could be a new mandatory custom field or a sharing model adjustment. In many cases the resultant unit test failure state is not encountered until the next deployment to production, which can’t proceed until the tests are fixed. This scenario will be familiar to many people. The mitigation requires the local administrator to understand the risk, frequently run the full set of unit tests and to manage the test code update cycle proactively.

Example Test Suite Class

[sourcecode language=”java”]
/*
Name: RecordMergeTestSuite.cls
Copyright © 2014 CloudMethods
======================================================
======================================================
Purpose:
——-
Test suite covering RecordMerge operations.
Bulk tests are defined in the class RecordMergeBulkTestSuite.cls
======================================================
======================================================
History
——-
Ver. Author Date Detail
1.0 Mark Cane 2014-09-16 Initial development.
*/
@isTest(SeeAllData=false)
public with sharing class RecordMergeTestSuite {
/*
Test cases:
singleTestCase1 – positive code behaviour/expected outcome test case 1.
negativeTestCase1 – negative outcome test case 1.
restrictedUserTestCase1 – positive/negative code behaviour in the context of specific user role/profile combinations.
..
future test cases to cover : * some coverage provided
1. tbd.
2. tbd.
*/

/* */
static testMethod void singleTestCase1() {
// Test case 1 : positive outcome test case 1.
setup();

// Steps – 1.
// Logical tests – 1.
}
/* */

/* */
static testMethod void negativeTestCase1() {
// Negative test case 1 : negative outcome test case 1.
setup();

// Steps – 1.
// Logical tests – 1.
}
/* */

/* */
static testMethod void restrictedUserTestCase1() {
// Restricted user test case 1 : positive/negative code behaviour in the context of specific user role/profile combinations.
List<User> users;
List<Account> accounts;

System.runAs(new User(Id = Userinfo.getUserId())){ // Avoids MIXED_DML_OPERATION error (when test executes in the Salesforce UI).
setup();
users = UserTestHelper.createStandardUsers(2, 'Sophie', 'Grigson');
}

System.runAs(users[0]){
accounts = AccountTestHelper.createAccounts(1, 'Abc Incorporated');

// Steps – 1.
// Logical tests – 1.
}
}
/* */

// helper methods
private static void setup(){
SettingsTestHelper.setup();
}
// end helper methods
}
[/sourcecode]

Apex Trigger Exceptions

Custom Apex Triggers execute on standard CRM objects (Account, Contact, Lead etc.) and custom objects in response to all data modifications applied via the Salesforce web application, API transactions and custom Apex script. As such it is imperative that trigger code adheres to patterns that promote performance and maintainability, guards against recursive behaviour and most critically protects data integrity.

Apex code outside of triggers (Batch Apex, Visualforce Controllers etc.) is completely isolated to the specific context, trigger code is not; the effect of errant code is felt across the standard web app, mobile apps, API-based integration flows, data loads and so on. As such triggers must be used judiciously and avoided wherever possible on the core standard objects. I often come across multiple complex triggers defined on Account, where the logic applied is non-time critical and could have been handled by Batch Apex, or indeed by declarative methods. In certain circumstances this idealistic view is impractical and triggers are a necessary friend. When writing triggers always stick to a proven pattern, good examples provided by community experts are easy to find, and read the relevant sections in the Apex Language Reference if you haven’t done so for a while.

This post aims to provide some practical insight related to exception behaviour within Apex Triggers that may not be obvious from the referenced resources.

Key Concepts
1. DML Processing Mode
All bulk DML transactions on the platform run in either allOrNothing or partial processing mode. In some cases the behaviour is implicit and fixed, in other cases flags can be set to override the default behaviour.

allOrNothing
– Apex Database statements (e.g. update contactsToUpdate;)

partial processing
– Apex Database methods with the optional allOrNone parameter set to false (e.g. Database.update(contactsToUpdate, false);) or with an appropriate DmlOptions instance applied – see the sketch below.
– API (default behaviour – can be overridden)
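A minimal sketch of the two modes (contactsToUpdate is illustrative):

[sourcecode language="java"]
List<Contact> contactsToUpdate = [select Id, Title from Contact limit 200];

update contactsToUpdate; // allOrNothing - any failure rolls back the whole statement.

// partial processing - failures are returned per record rather than thrown.
List<Database.SaveResult> results = Database.update(contactsToUpdate, false);

// equivalent behaviour via DmlOptions.
Database.DMLOptions options = new Database.DMLOptions();
options.optAllOrNone = false;
results = Database.update(contactsToUpdate, options);

for (Database.SaveResult r : results){
    if (!r.isSuccess()){
        for (Database.Error e : r.getErrors()){
            System.debug(LoggingLevel.ERROR, e.getMessage());
        }
    }
}
[/sourcecode]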

2. .addError()
The .addError(custom message) method can be invoked at the record or field level within the trigger context to flag a record as invalid. This is particularly useful in asserting logical errors. A best practice in regard to the custom messages is to use Custom Labels with a defined Apex Trigger Error category.
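A minimal sketch (the Custom Label and validation condition are illustrative):

[sourcecode language="java"]
trigger AccountTrigger on Account (before insert, before update) {
    for (Account a : Trigger.new){
        if (a.AnnualRevenue != null && a.AnnualRevenue < 0){
            // record level error - flags the record as invalid with a custom message.
            a.addError(Label.Account_Trigger_Error_Invalid_Revenue);

            // field level errors are also possible, e.g. a.AnnualRevenue.addError(..).
        }
    }
}
[/sourcecode]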

3. Runtime Exceptions
If an unhandled runtime exception is thrown then the Apex Transaction will immediately rollback and Apex exception notifications will occur. Structured exception handling can be added to the trigger code to catch runtime exceptions and to apply custom handling logic.

Apex Trigger – Exception Test Cases
The following test cases highlight the impact of the .addError() method and runtime exceptions in the context of the 2 processing modes. Tests were conducted using Execute Anonymous and the Apex Data Loader at API version 30.0.

Test Case 1 – Does code continue after .addError()?
Result: Yes (this is necessary in both allOrNothing and partial processing cases to gather a complete set of errors).

Test Case 2.1 – all_or_none – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback.

Test Case 2.2 – all_or_none – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.3 – all_or_none – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.4 – all_or_none – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 3.1 – partial – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback of all uncommitted changes, the trigger fires again for the subset of records that have not had addError() applied. Up to 2 retries are performed, after which the “Too many batch retries in the presence of Apex triggers and partial failures.” exception is thrown.

Test Case 3.2 – partial – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.3 – partial – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.4 – partial – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 4 – partial – In a partial failure case are static variables set within the original trigger invocation reset before retry invocations?
Result: No – Static variables are not reset between trigger invocations. This means that static guard variables defined to protect against recursive behaviour will stop trigger logic being applied to records processed by retry invocations.

Conclusions
1) In all cases .addError() results in the full Apex Transaction being rolled back. Custom logging schemes such as writing exceptions to a Custom Object (including via @future) or sending notification emails are futile gestures when employed in conjunction with .addError().

2) Handled runtime exceptions result in the Apex Transaction being committed, unless .addError() is applied to at least one record.

3) In the partial processing case, governor limits are reset to their original state for each retry trigger invocation, static variables are not.

4) Where static guard variables are used to protect against re-execution of the trigger logic via field updates within update DML transactions, partial processing retry trigger invocations will also be blocked. Such invocations are consistent with those initiated by workflow field updates later in the order of execution.

It therefore becomes difficult to use static variables to distinguish between a field update trigger execution and a partial processing retry – batch sizes can be equivalent, sequencing is unpredictable etc.

One possible approach is to use a custom field on the target object (Is Workflow Processing?); a field update action would be added to set this to "True" alongside the existing actions. In the trigger code we can now reliably test this field state to identify a workflow field update initiated trigger execution. The trigger code must always reset the field to "False" before returning, such that any partial processing retry executions safely proceed. Where trigger logic resides in an after update trigger, a before update trigger could be added simply for the convenience of resetting the field – in this case a static could be set in the before context and referenced in the after context.
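A minimal sketch of this approach follows; the custom field, handler class and static variable are all illustrative.

[sourcecode language="java"]
trigger AccountBeforeUpdate on Account (before update) {
    for (Account a : Trigger.new){
        if (a.IsWorkflowProcessing__c){
            // workflow field update initiated execution - record the fact for the after update context.
            AccountTriggerHandler.isWorkflowFieldUpdate = true;
            // always reset the flag so partial processing retry executions safely proceed.
            a.IsWorkflowProcessing__c = false;
        }
    }
}

trigger AccountAfterUpdate on Account (after update) {
    if (AccountTriggerHandler.isWorkflowFieldUpdate){
        return; // logic already applied in the original invocation.
    }
    AccountTriggerHandler.handleAfterUpdate(Trigger.new, Trigger.oldMap);
}
[/sourcecode]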

It’s possible there are more efficient ways to provide a robust solution to this. A complete Apex based solution would definitely be preferable to the custom field based solution.

Any-org Design Considerations

The concept of any-org development is an interesting one. The strict definition, to my mind, is the development of a set of components (perhaps packaged) that are designed and coded specifically to install and function in any Salesforce org. This is typically an ISV concern, where testing and maintaining a single code base can be highly desirable over managing a base package plus multiple extension packages, or in the worst case multiple independent packages. Either way, an ISV needs to maximise the addressable market for a product whilst minimising the ongoing effort to do so.

The same drivers do not apply in the single-org case, where a consultancy and/or in-house team are delivering technical components to be installed into a known Salesforce org (or multi-org estate). In the single-org case it is common practice to see technical components designed and coded for the current state of the target org(s), with no consideration of how the org(s) may evolve over time. This can often result in situations where costly technical work is required simply to activate an optional product feature, or to provide user access in another locale. In such cases the situation can often be compounded by the fact that the original development team are no longer available.

In short, in my view some degree of future-proofing should be considered in designing for the single-org model, using the techniques applied by ISVs in the any-org model.

    Any-org Design Considerations

  1. Optional Features (examples: Person Accounts, Quotes)

    There are a multitude of optional product features which can be enabled directly in the Salesforce web application UI or via Salesforce support. In the majority of cases such feature activations irreversibly add new objects and fields to the Salesforce instance. From the perspective of keeping simple orgs uncluttered by objects related to unused features this makes perfect sense. From the perspective of designing for the any-org model, this approach poses a few challenges. The main challenge is that Apex code won't compile where a static reference exists to an object (or field) that doesn't exist in the org. There is no simple answer to this; instead a selective approach should be taken, accommodating optional features that may already be active (or could be activated in the future) and that have some impact on your code. The approach to achieving this for any-org Apex code basically involves replacing static references with Dynamic SOQL and Dynamic Apex (see coding techniques below).
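    A minimal sketch of runtime feature detection (describe map keys are lower case; an AppHelper.isPersonAccountsEnabled() helper of this kind is referenced in the Dynamic SOQL example below):

    [sourcecode language="java"]
    public static Boolean isQuotesEnabled(){
        return Schema.getGlobalDescribe().containsKey('quote');
    }

    public static Boolean isPersonAccountsEnabled(){
        return Schema.SObjectType.Account.fields.getMap().containsKey('ispersonaccount');
    }
    [/sourcecode]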

  2. Multi-currency
    The default currency mode of a Salesforce org is single currency, and the majority stay this way. It is however common to have multi-currency and perhaps advanced currency management (ACM) activated in orgs where business operations are international. Activation of multi-currency often occurs once the Salesforce org has become established, perhaps in a single region. This can be problematic where technical customisations have been added that aren't currency aware.

    In the any-org case, all Apex code should be multi-currency aware and use Dynamic SOQL to add the CurrencyIsoCode field to all object queries involving currency fields. Additionally, currency aware logic should include checks to ensure that related transactions are the same currency, and that custom analytics are presenting data in the corporate currency (default and therefore expected behaviour for the native reporting functions). Note, the behaviour of aggregate functions involving currency fields must also be handled.

  3. Editions Support
    A key design decision for ISVs is the set of Salesforce editions to be supported by their managed package. This one has less relevance to the single-org model, unless the multi-org estate includes different editions.

    It is possible to group editions into two distinct groups;
    1. Group (or Team) Edition and Professional Edition
    2. Enterprise Edition and Unlimited Edition

    In the case of group 1, assume that standard objects such as Product2, Pricebook2, PricebookEntry and RecordType do not exist and ensure no static references exist in the code. The OrganizationType field on the Organization object tells us which edition the code is executing within.

    [sourcecode language="java"]
    // assumes a class-level static variable: private static Boolean isTeamOrProEdition;
    public static Boolean isTeamOrProEdition(){

        if (isTeamOrProEdition==null){
            List<Organization> orgs = [select OrganizationType from Organization where Id=:UserInfo.getOrganizationId() limit 1];
            if (orgs.size()>0)
                isTeamOrProEdition=(orgs[0].OrganizationType=='Team Edition' || orgs[0].OrganizationType=='Professional Edition');
        }
        return isTeamOrProEdition;
    }
    [/sourcecode]

  4. Internationalisation
    Whether an international user base is anticipated or not, it is general software development best practice to externalise string literals into resource files. In the Salesforce context this means Custom Labels. A best practice here is to apply strict categorisation and a meaningful naming convention. Also ensure all literals are externalised, not just labels in the UI – for example trigger error messages.

    Another consideration for i18n is the use of currency and date formatting helpers. Where UI components do not apply default formatting for an SObject field you need to handle this in code. An i18nHelper class which translates ISO locale and currency codes to date format strings and currency format strings plus symbols respectively can be very helpful.

    Useful abbreviations:
    i18n – internationalisation; development practice enabling support for localisation.
    l10n – localisation; the act of localising an internationalised software product for a specific locale.

  5. Profile Permissions
    Visualforce pages are preprocessed for components directly bound to SObject fields where the user profile does not have CRUD or FLS permissions. In such cases the fields are not displayed or are made read-only, depending on visibility state. This comes as a surprise for many developers who assume that User Profile permissions are entirely ignored on Visualforce pages.

    reference: Enforcing_CRUD_and_FLS

    In the any-org model, where direct SObject field binding is being used in a Visualforce page, this may require a manual check during initialisation to protect the functional integrity of the page. For example, a custom page with no fields displayed and no explanation is not a great user experience, instead the page should simply inform the user they don’t have sufficient permissions, they can then take this up with their Administrators.

    [sourcecode language="java"]
    private Boolean hasRequiredFLS(){
        // rule 1: all custom fields must be accessible.
        // rule 2: check isUpdateable on all fields where inline editing offered.

        Schema.DescribeFieldResult d;

        Map<String, Schema.SObjectField> siFieldNameToToken=Schema.SObjectType.SalesInvoice__c.fields.getMap();

        for (Schema.SObjectField f : siFieldNameToToken.values()){
            d = f.getDescribe();

            if (!d.isCustom()) continue;
            if (!d.isAccessible()) return false;
        }

        d = siFieldNameToToken.get('InvoiceDate__c').getDescribe();
        if (!d.isUpdateable())
            this.isInlineEditable=false;
        else {
            d = siFieldNameToToken.get('DueDate__c').getDescribe();
            if (!d.isUpdateable())
                this.isInlineEditable=false;
            else this.isInlineEditable=true;
        }
        return true;
    }
    [/sourcecode]

    Coding Techniques

  1. Dynamic SOQL
    Do not statically reference objects or fields that may not exist in the org. Instead compose Dynamic SOQL queries and execute them via Database.query(). With this approach the required query can be built using flags which indicate the presence of optional feature fields such as RecordTypeId, CurrencyIsoCode etc. The Apex Language Reference provides good coverage of Dynamic SOQL. Be very careful to ensure that the composed string does not include user-supplied text input – this would open up a vulnerability to SOQL injection attacks.

    [sourcecode language="java"]
    public static Id getStandardPricebookId(){
        if (standardPricebookId==null){
            String q='select Id, IsActive from Pricebook2 where IsStandard=true';
            SObject p = Database.query(q);

            if (!(Boolean)p.get('IsActive')){
                p.put('IsActive',true);
                update p;
            }
            standardPricebookId=(Id)p.get('Id');
        }
        return standardPricebookId;
    }

    public SalesInvoice__c retrieveSalesInvoice(String siId){
        try{
            //& Using dynamic Apex to retrieve fields from the fieldset to create a soql query that returns all fields required by the view.
            String q='select Id,Name,OwnerId';
            q+=',TotalGross__c';

            for(Schema.FieldSetMember f : SObjectType.SalesInvoice__c.FieldSets.invoices__Additional_Information.getFields()){
                if (!q.contains(f.getFieldPath())) q+=','+f.getFieldPath();
            }

            if (UserInfo.isMultiCurrencyOrganization()) q+=',CurrencyIsoCode';
            if (AppHelper.isPersonAccountsEnabled()) q+=',PersonEmail,PersonContactId';

            q+=',(select Id,Description__c,Quantity__c from SalesInvoiceLineItems__r order by CreatedDate asc)';
            q+=' from SalesInvoice__c';
            q+=' where Id=\''+String.escapeSingleQuotes(siId)+'\''; // escape to guard against SOQL injection.

            return Database.query(q);
        } catch (Exception e){
            throw e;
        }
    }
    [/sourcecode]

  2. Dynamic Apex
    Do not statically reference objects or fields that may not exist in the org. Instead use Dynamic Apex techniques such as global describes and field describes. Where a new SObject is required, use the newSObject() method as shown below; this is particularly useful for unit test data creation. The Apex Language Reference provides good coverage of Dynamic Apex – every developer should be familiar with this topic.

    [sourcecode language="java"]
    public static List<SObject> createPBE(Id pricebookId, List<SObject> products){
        SObject pbe;
        List<SObject> entries = new List<SObject>();

        Schema.SObjectType targetType = Schema.getGlobalDescribe().get('PricebookEntry');
        if (targetType==null) return null;

        for (SObject p : products){
            pbe = targetType.newSObject();

            pbe.put('Pricebook2Id',pricebookId);
            pbe.put('Product2Id',p.Id);
            pbe.put('UseStandardPrice',false);
            pbe.put('UnitPrice',100);
            pbe.put('IsActive',true);
            entries.add(pbe);
        }
        if (entries.size()>0) insert entries;
        return entries;
    }
    [/sourcecode]

  3. UserInfo Methods
    The UserInfo standard class provides some highly useful methods for any-org coding such as isMultiCurrencyOrganization(), getDefaultCurrency(), getLocale() and getTimeZone(). The isMultiCurrencyOrganization() method will be frequently used to branch code specific to multi-currency orgs.

    [sourcecode language="java"]
    public static String getCorporateCurrency(){
        if (corporateCurrencyIsoCode==null){
            corporateCurrencyIsoCode=UserInfo.getDefaultCurrency();

            if (UserInfo.isMultiCurrencyOrganization()){
                String q='select IsoCode, ConversionRate from CurrencyType where IsActive=true and IsCorporate=true';
                List<SObject> currencies = Database.query(q);
                if (currencies.size()>0)
                    corporateCurrencyIsoCode=(String)currencies[0].get('IsoCode');
            }
        }
        return corporateCurrencyIsoCode;
    }
    [/sourcecode]

    Challenges

  1. Unit Test Data
    In the any-org model the creation of unit test data can be a challenge due to the potential existence of mandatory custom fields and/or validation rules. To mitigate the former, Dynamic Apex can be used to identify mandatory fields and their data types such that test data can be added (via a factory pattern of some sort). In the latter case there is no way to reliably detect a validation rule condition, and as such for ISVs it is a blessing that unit tests do not actually have to pass in a subscriber org (wrong as this may be in principle). In the single-org case we can improve on this (and we have to) by adding a global Validation Rule switch-off flag in an Org Behaviour Custom Setting (see previous post) – this approach is helpful in many areas, but for unit test data creation it can isolate test code from Validation Rules added post-deployment. There is a tradeoff here between protecting unit tests and the risk of using test data that may not adhere to the current set of Validation Rules.

  2. Unit Test Code Coverage
    The addition of multiple conditional code paths, i.e. branching, for the any-org case makes it challenging to achieve a high code coverage percentage in orgs which do not have the accommodated features activated. For example, unit tests executing in a single currency org will not run code specific to multi-currency, and the code coverage drops accordingly. To mitigate this, consider adding OR conditions to IF branches which include unit test flags, and perhaps Test.isRunningTest(), to cover as much code as possible before leaving the branch. During coding, always strive to absolutely minimise the feature-specific code – this approach will help greatly in respect to unit test coverage.

  3. QA
    In the any-org model, it is imperative to test your code in an org with the accommodated features activated. This will require multiple QA orgs and can increase the overall testing overhead considerably. Also factor in the lead time required to have features activated by Salesforce support, such as multi-currency and Person Accounts.

  4. Security
    Dynamic SOQL queries open up the possibility of SOQL injection attacks where user-supplied text values are concatenated into an executed SOQL query string. Always sanitise and escape data values where such code behaviour is necessary.

  5. Governor Limits
    The any-org model is highly contingent upon the use of limited resources such as Apex Describes. As a best practice, employ a helper class pattern with cached values.

    One Approach – Future Proofing Single-org Developments

    Optional Features – selective
    Multi-currency – yes
    Editions Support – no
    i18n – yes
    Unit Test Data – yes
    Profile Permissions – yes

    The list above is my default position on the approach to take on single-org developments; this can change significantly depending on the current state of the org in terms of configuration and customisation, plus the client perspective on the evolution of their Salesforce org and attitude toward investing in future-proofing/extensibility. In the single-org, consultancy project case it's advisable to be completely open and let the client decide whether the additional X% cost is worth the value. The real point here is that the conversation takes place and the client has the opportunity to make an informed decision.

Programming Pearls

I've always considered programming to sit somewhere between art and science, the "art of programming" being a phrase I like. Whilst the language syntax, underlying algorithms and platforms are definitely scientific in their absolute nature, the code we write is less definitive, more personal and, in my view, a creative process. As with any creative process there can be no concept of complete understanding or a state where there is nothing left to learn. All programmers, regardless of proficiency, must acknowledge that whilst they may be able to recite portions of the language reference, they haven't experienced every possible implementation pattern. It can therefore be said that programming is an endless process of continuous learning; some coders have an aptitude and see the best patterns naturally then validate, some learn through practical experience – most people work in both ways. Over the years I've come to realise that for many a key inhibitor to learning and developing as a programmer can be an inability to understand the art-of-the-possible, to adopt a creative programming mindset – maybe even to enjoy the "art of programming" as it should be. A great resource I've fallen back on many times to attempt to address this is the celebrated book Programming Pearls, Second Edition by Jon Bentley. The book (published 1999) is a collection of engaging columns covering fundamental techniques and code design principles, and is rightly viewed as a classic. Read it and enjoy.