
Salesforce Ant Scripts – Selenium

The Salesforce metadata API is an extremely powerful tool when combined with Ant, Jenkins etc. for build automation. There are, however, a number of configuration items that simply can't be retrieved and deployed using this API (Account Teams, Support Settings, Lead Settings, Case Assignment and Escalation Rules etc.). The unsupported list can be found here; unfortunately the platform expands at a rate more or less equal to the rate at which API coverage improves, so the gap persists. My point here is that deployments typically have three steps: a manual step to cover the gaps in the metadata API (pre-requisites), an automated deployment step (retrieve-and-deploy with Ant) and finally a data population step (Data Loader CLI with Ant perhaps). Leaving data to one side (for this post), an ability to merge steps 1 and 2 would enable full automation of the deployment of configuration – which in most cases would be a good thing. One approach to automating step 1 is to write Selenium web browser automation scripts which drive the Salesforce application at the UI level. The scripts can be exported as JUnit test cases and incorporated into an Ant-based build process. My approach to doing this is outlined below; as with most things there are many ways to achieve the same result and I'm sure this can be improved on, however it keeps the process simple and gets the job done, which tends to work for me. Additionally, the approach plays well with Ant, Jenkins/Hudson etc., so it should be straightforward to extend an existing build process.
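To make the three-step shape concrete, a top-level Ant target chain might look like the sketch below. The target names and the Selenium/Data Loader steps are illustrative assumptions to be implemented per-project; only the metadata deployment maps directly onto the standard Force.com Migration Tool task.

```xml
<!-- Illustrative orchestration only: selenium_prereqs and load_data are
     hypothetical targets implemented per-project; deploy_metadata would
     wrap the standard sf:deploy task from the Force.com Migration Tool. -->
<target name="full_deploy"
        depends="selenium_prereqs, deploy_metadata, load_data"
        description="step 1: UI automation, step 2: metadata, step 3: data" />
```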

1. Install the Selenium IDE Firefox Extension.
2. Using Selenium IDE record the act of logging-in to Salesforce and making the required configuration changes.
3. Export the test case as a Java / JUnit 4 / WebDriver file. This creates a .java file as below. The example simply creates a Chatter post for the logged-in user, hopefully this is simple and illustrative enough to make the point.
[sourcecode language="java"]
package com.example.tests;

import java.util.regex.Pattern;
import java.util.concurrent.TimeUnit;
import org.junit.*;
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.Select;

public class SeleniumTest {
    private WebDriver driver;
    private String baseUrl;
    private boolean acceptNextAlert = true;
    private StringBuffer verificationErrors = new StringBuffer();

    @Before
    public void setUp() throws Exception {
        driver = new FirefoxDriver();
        baseUrl = "https://test.salesforce.com/";
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }

    @Test
    public void testSelenium() throws Exception {
        driver.get(baseUrl + "/");
        driver.findElement(By.id("username")).clear();
        driver.findElement(By.id("username")).sendKeys("release.manager@force365.com");
        driver.findElement(By.id("password")).clear();
        driver.findElement(By.id("password")).sendKeys("mypassword");
        driver.findElement(By.id("Login")).click();
        driver.findElement(By.id("publishereditablearea")).clear();
        driver.findElement(By.id("publishereditablearea")).sendKeys("new Chatter post – Selenium");
        driver.findElement(By.id("publishersharebutton")).click();
    }

    @After
    public void tearDown() throws Exception {
        driver.quit();
        String verificationErrorString = verificationErrors.toString();
        if (!"".equals(verificationErrorString)) {
            fail(verificationErrorString);
        }
    }

    private boolean isElementPresent(By by) {
        try {
            driver.findElement(by);
            return true;
        } catch (NoSuchElementException e) {
            return false;
        }
    }

    private String closeAlertAndGetItsText() {
        try {
            Alert alert = driver.switchTo().alert();
            if (acceptNextAlert) {
                alert.accept();
            } else {
                alert.dismiss();
            }
            return alert.getText();
        } finally {
            acceptNextAlert = true;
        }
    }
}
[/sourcecode]

4. Modify the test case java code as required.
5. Download the Java Selenium Client Driver from http://seleniumhq.org/download/
6. Extend or create a new Ant build file to compile and execute the test case. My example below requires a [selenium\src] sub-directory structure in the build root, with the .java test case files placed in the src directory.
[sourcecode language="xml"]
<project basedir="." default="usage" name="invoke Selenium script to configure Salesforce org">
    <property name="bin" value=".\selenium\bin" />
    <property name="lib" value="c:\Release Management\selenium-2.28.0\libs" />
    <property name="src" value=".\selenium\src" />
    <property name="report" value=".\selenium\reports" />

    <target name="usage" depends="">
        <echo message="Compiles and executes Selenium IDE exported test cases (source format JUnit4 WebDriver .java files)" />
    </target>

    <target name="init">
        <delete dir="${bin}" />
        <mkdir dir="${bin}" />
    </target>

    <target name="compile" depends="init">
        <javac includeantruntime="false" source="1.7" srcdir="${src}" fork="true" destdir="${bin}">
            <!-- requires Selenium test cases exported as JUnit4 WebDriver .java files in the src sub-directory -->
            <classpath>
                <pathelement path="${bin}" />
                <fileset dir="${lib}">
                    <include name="**/*.jar" />
                </fileset>
            </classpath>
        </javac>
    </target>

    <target name="exec" depends="compile">
        <delete dir="${report}" />
        <mkdir dir="${report}" />
        <mkdir dir="${report}/xml" />

        <junit printsummary="yes" haltonfailure="yes">
            <classpath>
                <pathelement path="${bin}" />
                <fileset dir="${lib}">
                    <include name="**/*.jar" />
                </fileset>
            </classpath>
            <test name="com.example.tests.SeleniumTest" haltonfailure="yes" todir="${report}/xml" outfile="SeleniumTest-result">
                <formatter type="xml" />
            </test>
        </junit>

        <junitreport todir="${report}">
            <fileset dir="${report}/xml">
                <include name="TEST*.xml" />
            </fileset>
            <report format="frames" todir="${report}/html" />
        </junitreport>
    </target>
</project>
[/sourcecode]

Note: there is no need to start or stop a Selenium server as the script runs locally on the build server. Firefox will, however, need to be installed on the build server if you stick with the default browser used in recorded scripts.

I’ll follow this initial post with further detail on the following;
1. Conditional script logic – i.e. having the script check for a condition before making a change, so that it configures selectively and isn't reliant on a clean, predictable org state.
2. Execution of test suites rather than individual cases.
3. Most likely I’ll refine the build.xml example as I understand more about this.

Salesforce Ant Scripts – Post Retrieve Modification

If your deployment process involves manual modification of the metadata files between the retrieve and deploy steps, it's time to consider extending your knowledge of Ant. This is critical for Continuous Integration, where manual processes are anathema. With a small amount of Ant knowledge you can delete metadata files, edit and replace or remove content via regex, copy files into the directory structure, invoke Selenium scripts to perform configuration tasks at the UI level (addressing gaps in the metadata API, perhaps) and so on. In short, understanding the potential of Ant is key to delivering build automation.

One exemplar use case for post-retrieve modification is deploying metadata from orgs with Social Contacts enabled – errors such as the two below can arise due to inconsistencies in the retrieval of the SocialPost object and related metadata.

SocialPost-Social Post Layout.layout(SocialPost-Social Post Layout):Parent entity failed to deploy
No Layout named SocialPost-Social Post Layout found

In this use case, to get the metadata to deploy we need to remove profile references to the SocialPost layout and then remove the layout file itself. The example build file below shows how this can be achieved. In addition, sandbox email address suffixes are also updated to match the target sandbox – a fairly common deployment issue with sandboxes and workflow alerts, dashboard running users etc.

Build File – Retrieve Org Metadata, Modify & Deploy to Org
[sourcecode language="xml"]
<project xmlns:sf="antlib:com.salesforce" basedir="." default="deploy_ci" name="org to org">
    <property file="build.properties" />
    <property environment="env" />

    <target name="retrieve_dev" depends="">
        <echo message="retrieving metadata to ${metadata.root}" />
        <sf:retrieve unpackaged="${metadata.root}/package.xml" retrieveTarget="${metadata.root}" singlePackage="true" serverurl="${dev.sf.org.serverurl}" password="${dev.sf.org.password}" username="${dev.sf.org.username}" />
    </target>

    <target name="update_email_address_suffixes" depends="retrieve_dev">
        <echo message="updating email addresses in ${metadata.root}…" />
        <replaceregexp match="${dev.sf.org.suffix}" replace="${ci.sf.org.suffix}" flags="gs" byline="false">
            <fileset dir="${metadata.root}" />
        </replaceregexp>
    </target>

    <target name="remove_social_post_from_profiles" depends="update_email_address_suffixes">
        <echo message="updating profiles to remove Social-Post references in ${metadata.root}…" />
        <replaceregexp match="^ &lt;layoutAssignments&gt;\n &lt;layout&gt;SocialPost-Social Post Layout&lt;/layout&gt;\n &lt;/layoutAssignments&gt;$" replace="" flags="gm" byline="false">
            <fileset dir="${metadata.root}\profiles" includes="**/*.profile" />
        </replaceregexp>
    </target>

    <target name="delete_social_post_files" depends="remove_social_post_from_profiles">
        <echo message="deleting Social-Post related files from ${metadata.root}…" />
        <delete file="${metadata.root}\workflows\SocialPost.workflow" />
        <delete file="${metadata.root}\layouts\SocialPost-Social Post Layout.layout" />
    </target>

    <target name="deploy_ci" depends="delete_social_post_files">
        <echo message="deploying modified metadata from ${metadata.root}…" />
        <sf:deploy singlePackage="true" serverurl="${ci.sf.org.serverurl}" password="${ci.sf.org.password}" username="${ci.sf.org.username}" maxPoll="360" pollWaitMillis="20000" logType="Debugonly" rollbackOnError="true" runAllTests="${ci.sf.org.forcetests}" checkOnly="${ci.sf.org.checkonly}" deployroot="${metadata.root}" />
    </target>
</project>

[/sourcecode]
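For completeness, a build.properties file for the script above might look like the sketch below. All values are placeholders; the property names simply match those referenced in the build file.

```properties
# Placeholder values only - substitute real org credentials and suffixes.
metadata.root=metadata

dev.sf.org.serverurl=https://test.salesforce.com
dev.sf.org.username=release.manager@force365.com.dev
dev.sf.org.password=password
dev.sf.org.suffix=force365.com.dev

ci.sf.org.serverurl=https://test.salesforce.com
ci.sf.org.username=release.manager@force365.com.ci
ci.sf.org.password=password
ci.sf.org.suffix=force365.com.ci
ci.sf.org.forcetests=true
ci.sf.org.checkonly=false
```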

Programming Pearls

I've always considered programming to sit somewhere between art and science; the "art of programming" is a phrase I like. Whilst the language syntax, underlying algorithms and platforms are definitely scientific in their absolute nature, the code we write is less definitive, more personal and, in my view, a creative process. As with any creative process there can be no concept of complete understanding, or a state where there is nothing left to learn. All programmers, regardless of proficiency, must acknowledge that whilst they may be able to recite portions of the language reference, they haven't experienced every possible implementation pattern. Programming is therefore an endless process of continuous learning; some coders have an aptitude and see the best patterns naturally then validate, some learn through practical experience – most people work in both ways. Over the years I've come to realise that for many programmers a key inhibitor to learning and developing is an inability to understand the art-of-the-possible, to adopt a creative programming mindset – maybe even to enjoy the "art of programming" as it should be enjoyed. A great resource I've fallen back on many times to address this is the celebrated book Programming Pearls (Second Edition) by Jon Bentley. The book (published 1999) is a collection of engaging columns covering fundamental techniques and code design principles, and is rightly viewed as a classic. Read it and enjoy.

Patterns of Construction

I'm a big advocate of setting out the key elements of the development process succinctly but unambiguously at the start of a software development project, particularly in cases where I have no prior history of working with the development team. Such process elements typically cover environments, coding standards, technical design and review requirements, source-code control strategy etc. Perhaps the most valuable area to cover is the basic patterns of construction (or Design Patterns); without these, developers are left to their own devices in naming technical components and structuring code, which can be a serious issue for maintainability and standardisation. It is incredibly time-expensive and de-motivating to address this after the fact. Instead, a clear picture provided upfront gives the development team a strong reference covering 80% of the cases; the remainder can be addressed individually during technical design. The example below shows a basic construction pattern covering naming conventions and structural concerns. Following such a pattern makes the technical implementation predictable and should improve maintainability, the latter being an obligation to take seriously on consulting projects. My rule of thumb is to try and leave the org in a state a future me would consider acceptable.
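As a flavour of what such a pattern might specify, the sketch below shows a common trigger-delegates-to-handler convention in Apex. The class, trigger and method names are illustrative conventions assumed for this example, not platform requirements.

```apex
// Convention sketch: one trigger per object, free of logic, named <Object>Trigger.
trigger AccountTrigger on Account (before insert, before update) {
    // All behaviour lives in a handler class named <Object>TriggerHandler.
    AccountTriggerHandler.handle(Trigger.new, Trigger.oldMap);
}

// Handler skeleton: routes by trigger context, keeping units small and testable.
public with sharing class AccountTriggerHandler {
    public static void handle(List<Account> newRecords, Map<Id, Account> oldMap) {
        // e.g. if (Trigger.isBefore && Trigger.isInsert) { applyDefaults(newRecords); }
    }
}
```

The value is less in this specific pattern than in agreeing one upfront, so every developer names and structures components the same way.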

Salesforce Logical Data Models

A robust and intelligent data model provides the foundation upon which a custom Salesforce implementation can be built. Mistakes made in the functional or technical build are typically inexpensive to rectify (if caught quickly enough); a flawed data model, however, can be incredibly time- and cost-expensive to mitigate. At the start of all projects I produce a logical data model, an example of which is provided below. This starts out as blocks and lines and is refined iteratively to include physical concerns such as org-wide defaults, relationship types etc. Only after a few revisions will I consider actually creating the model as custom objects. I use OmniGraffle for such diagrams.

Interaction Design

Some 18 years ago I attended a Microsoft developer academy event in Cambridge (UK) – one of the sessions I attended was delivered by Alan Cooper and addressed the topic of interaction design. This session had a fundamental impact on how I viewed software development – and still influences my thinking today. I can't recommend Alan Cooper's book About Face strongly enough for anyone interested in building great software products, regardless of technology or platform. My personal learning was this: software products exist to help people complete a task they'd rather not be doing – so the best solution is intuitive, unobtrusive, supportive and abstracted from the implementation mechanics. A bit wordy perhaps; the idea is best illustrated by Cooper's optimal solution design, the big red button with the label 'Just Do It'!

Rarely do great user experiences occur by accident. The best experiences feel natural and are so tuned to the task in hand that the interaction is effortless and predictable.

Such user experiences are of course far from effortless to deliver, instead they are the product of careful thought and the expert application of interaction design techniques. Replace my use of the term interaction design with usability, user-centric design, HCI or whatever term you’re familiar with that relates to designing interactions starting with the user.

In developing a number of commercial and non-commercial software products over the years I've tried to rationalise the process element of applying interaction design to a project, i.e. how to integrate its outputs into a viable development process. The framework outlined below provides one approach. Of course the real value of any interaction-centric (or user-centric) design is the translation of an understanding of the user, plus functional, technical and corporate brand constraints, into a cohesive set of simplified interactions. The approach to this varies greatly between projects.

A simple but useful framework to consider.

— Interaction Pattern Catalogue
An abstract set of well-defined and robust patterns for the typical UI interactions required within the context: for example, how a List page is structured, and how the primary user interaction and any related actions work. My interaction pattern catalogue would typically cover List, Detail, Edit, Report, Dashboard and Search pages – providing an absolute and detailed definition of the composition and operation of each pattern. In the ideal case every page of the application would be covered by the patterns; in practice this is unlikely, and a number of exception patterns would be typical – added to the catalogue and described in the same detail in case a second instance occurs.

— Process Diagram
A concrete decomposition of the user interface of an application into a block for each page, with arrows denoting the transition paths between pages. Each block is coloured or annotated to indicate the interaction pattern (or exception pattern) defined in the interaction pattern catalogue. From experience, drawing out an approximation of the full user interface early in the process really helps focus the development effort, as well as ensuring all the interaction patterns are identified. I typically print this diagram out and place it prominently within the development area – I have also used it to highlight progress by changing the colour of completed areas. This may sound like a BDUF approach, however even agile projects can benefit from this exercise as long as the right caveats are in place.

— User Interface Policies
An abstract set of policies, well-defined and absolute, which specify the use of fonts, the colour palette, iconisation, indication of mandatory state, error condition presentation, field highlighting behaviour, dimensionality (gaps between components etc.), alignment, tone of language, label terminators, accessibility behaviour etc.. In short the user interface policies provide definition to key characteristics of the UI applicable across interaction patterns. Policies are typically documented in lightweight form and provide a handy and concise reference which enforces uniformity across the application.

— Interaction Storyboards
A concrete set of storyboards which extend the interaction patterns defining the specific behaviour of certain interactions, typically in the context of a persona and a scenario.

Salesforce Development Process

There are typically two interpretations of the term “development process” – one being the tools, practices and methods applied in software development (i.e. methodology, plus build automation, standards etc.) the other being the process applied to get from requirements to working software (i.e. iterative or waterfall, plus how the analysis-design-build-test-release disciplines are executed). This post outlines one high-level approach to the latter in the context of Salesforce developments. The intent of this isn’t to be overly prescriptive, generally speaking each project requires its own defined process that factors in resources available (and their skills and experience critically) plus the nature of the work and the timescales. That said it is a truism that failed projects fail for a variety of reasons but successful projects are typically successful for the same reasons. A fundamental success factor being the adoption of a clearly defined and simple process – others being team empowerment and shared commitment.

The process above assumes an iterative approach and focuses the initial iteration on the foundation of a robust data model, a set of user profiles and permission sets, a role hierarchy, a record access model and a statement of the reporting requirements for the project. Subsequent iterations improve the quality of the foundation over time as new functional areas are developed. The data model in this context includes a statement of the org-wide defaults for each object and the specifics of each relationship (master-detail, lookup, mandatory lookup etc.). The record access model is critical – this shows how each user population maps to a user profile and role, and how they gain access to the data required, i.e. sharing rule, Apex managed sharing etc. In my experience, defining an approximate access model upfront and then refining it during the feature build-out helps to avoid expensive refactoring later in the process and sets out a clear understanding for all contributors to the declarative and technical build. A piecemeal approach to defining a sharing model is commonplace – it rarely produces a clear and cohesive result. For similar reasons, defining a list of permission sets upfront ensures that user profiles are kept clean and focused, avoiding a proliferation of profiles down the line. It may be surprising to see analytics such as reports and dashboards considered during the foundation stage; this, however, is one of the primary inputs to the definition of a fit-for-purpose data model. I've worked on countless projects where reporting has been overlooked until a late stage, at which point it has become apparent that standard reporting features can't produce the required reports given the structure of the data. Ideally the data model should be designed from the outset to work well for both transaction processing and analytics.

A final point for consideration is the by-exception approach to identifying technical components. When breaking out the solution components required for a given feature, expertise must be applied to ensure that standard product functionality and declarative options (workflows, reports etc.) are considered fully before bringing expensive technical options such as Visualforce or Apex to bear.

Salesforce Ant Scripts

This brief post illustrates how Ant scripts can be used in a continuous integration scenario, i.e. where metadata is held in a source code control (SCC) repository such as Subversion. In a CI scenario developers would typically be working in isolated developer orgs with periodic commits of unit-tested code to SCC following peer review (hopefully). The act of committing changes to SCC triggers a full deployment of the metadata state held in SCC to a dedicated integration org – with full execution of unit tests and, hopefully, automated acceptance tests (i.e. Selenium or similar). The whole point of this process is to introduce rigour around code commits and to ensure build errors are surfaced whilst the developer is in the moment and can remedy the problem quickly. CI is an agile practice related to technical excellence.

Ok, enough theory – the simplistic example script below shows a common case where the head revision is checked out to a local folder and deployed to a Salesforce org, with unit tests running. In practice this script would be automated by a Hudson or Jenkins job monitoring the SCC repository for commit operations.

Build Properties [build.properties]
[sourcecode language="text"]
# build.properties
# Contains properties referenced by all deployment scripts
# May be replaced by configuration parameters when invoked from a Hudson/Jenkins Job.

# local root folder.
metadata.root=metadata

# Salesforce task configuration properties.
sf.target.org.serverurl=https://test.salesforce.com
sf.target.org.username=mark@force365.com
sf.target.org.password=welcome2U
sf.target.org.forcetests=true
sf.target.org.checkonly=false
sf.target.org.deploy.maxPoll=20
sf.target.org.logType=Debugonly
sf.target.org.deploy.waiting.time=500000

# SvnAnt task configuration properties.
svnant.latest.url=svn://localhost/myproject
svnant.repository.user=mark
svnant.repository.passwd=welcome2U
[/sourcecode]

Build File – Retrieve Metadata from SCC and Deploy to Org [build.xml]

[sourcecode language="xml"]
<project name="Subversion to Org" default="deploy" basedir="." xmlns:sf="antlib:com.salesforce">
    <property file="build.properties" />
    <property environment="env" />

    <!-- path to the svnant libraries. Usually they will be located in ANT_HOME/lib -->
    <path id="svnant.classpath">
        <fileset dir="${ant.home}\lib">
            <include name="**/svn*.jar" />
        </fileset>
    </path>

    <!-- load the svn task -->
    <typedef resource="org/tigris/subversion/svnant/svnantlib.xml" classpathref="svnant.classpath" />

    <target name="checkoutLatest">
        <svn username="${svnant.repository.user}" password="${svnant.repository.passwd}">
            <checkout url="${svnant.latest.url}"
                      revision="HEAD"
                      destPath="${metadata.root}" />
        </svn>
    </target>

    <target name="deploy" depends="checkoutLatest">
        <echo message="deploying from ${metadata.root}" />
        <!-- property names match those defined in build.properties -->
        <sf:deploy username="${sf.target.org.username}"
                   password="${sf.target.org.password}"
                   serverurl="${sf.target.org.serverurl}"
                   deployroot="${metadata.root}"
                   singlePackage="true"
                   runAllTests="${sf.target.org.forcetests}" />
    </target>
</project>
[/sourcecode]

The script above can be executed manually from a standard (non-Force.com) project within Eclipse. I typically run with an Eclipse workspace per-client and maintain a deployment project within the workspace for all the scripts I use.

Pre-requisites are the Force.com Migration Tool and the SvnAnt task being added to the Ant classpath. Install instructions are linked below.

Force.com Migration Tool Guide

SvnAnt Project Home

Salesforce Continuous Integration

What is it?
In essence CI is an aggressive build strategy requiring the isolated work of project developers to be integrated immediately following code commits to a shared source-code control system. Regression tests are run automatically, surfacing build errors or code inconsistency at an early stage.

CI is viewed as an Agile practice and is typically characteristic of a mature development process, and experienced developers. There is a definite learning curve and mindset adjustment for developers to be considered.

The manual alternative, which I term staged integration (SI), involves periodic integration testing of the HEAD revision from the source code control (SCC) system. The difference is the immediacy with which the integration tests are performed, and therefore with which the integrity of the current build status is verified. With the manual approach it can be difficult to instil team discipline; minor changes can often be viewed as not warranting a build and test.

Basic tenets
1. Developers work on an isolated copy of the code (i.e. branch) to avoid contention on shared resources, utility classes etc.
2. Developers commit unit-tested code to the shared SCC repository – often many times per-day.
3. An automated build process is triggered by the commit, which takes the HEAD revision and deploys it to a dedicated org, running the full suite of unit tests. Test failures are reported proactively, naming and shaming the individual responsible for the failing commit. It's key to note that pre-commit the developer should merge the current HEAD revision into their local branch and resolve conflicts (Git, for example, will enforce this).
4. The HEAD revision represents a consistent "code complete" status. Development will typically take place in an isolated branch, with the master branch holding the production-ready code.

Typical steps
1. Code is committed, this triggers a deployment to the INT org with unit test execution during the deployment.
2. Once deployment completes successfully, functional acceptance tests are executed, possibly via a tool like Selenium where functional tests at the UI level can be scripted (perhaps to verify a particular user story).

Why do it?
1. Daily builds have long been an industry best practice; continuous integration is an evolutionary improvement.
2. The more frequently code is integrated the less painful it is.
3. Build errors are surfaced early, while the developer is still "in the zone" and can resolve the problem expediently.
4. Builds trust within the development team and a sense of collective ownership.
5. Driver for technical excellence, a key agile principle.
6. Encourages quality unit tests (code coverage and test case quality).

Obstacles
1. Big unit test suites can often take hours to run. To mitigate this, a smoke test could be executed on commit (current-sprint-related unit tests only), followed by a full test run scheduled every half-day or overnight. The Force.com Migration Tool enables the test classes to execute to be defined by name – so this is a feasible option.
2. Unit tests are an afterthought. Switch the team to TDD – perhaps with some education first.
3. Unsupported metadata types. Certain Salesforce configuration elements (metadata types) can't be deployed via the Force.com Migration Tool. Such elements must be recorded in an audit log and manually applied to the target org, or, for automation, a Selenium script could be utilised.
4. Standing data. New features may require standing data (custom settings etc.). Use the Apex Data Loader in command-line (CLI) mode and invoke data manipulation operations within the build file.
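To illustrate the smoke-test mitigation in point 1, the Force.com Migration Tool's deploy task accepts nested runTest elements naming specific test classes to execute. A sketch follows; the test class names are placeholders, and the property names assume a build.properties along the lines shown earlier.

```xml
<!-- Smoke-test deployment: run only the named test classes on commit
     (class names below are placeholders for current-sprint tests). -->
<sf:deploy username="${sf.target.org.username}"
           password="${sf.target.org.password}"
           serverurl="${sf.target.org.serverurl}"
           deployroot="${metadata.root}"
           singlePackage="true">
    <runTest>AccountServiceTest</runTest>
    <runTest>CaseAssignmentTest</runTest>
</sf:deploy>
```

A second Jenkins job can then run the same deploy target with runAllTests="true" on the overnight schedule.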

Tools and process
A CI implementation requires fit-for-purpose tooling, for Force.com development the following stack is typical:

    SCC = Subversion or Git
    Build Automation Server = Jenkins or Hudson
    Scripting = Ant plus the Force.com Migration Tool (a scriptable Ant task)

In simple terms CI works as follows. Within the build server (Jenkins for example) a job is defined that, on each commit, connects to the SCC repository, copies the HEAD revision to a working folder and then runs an Ant script. The script invokes a build.xml file which is held in SCC and therefore copied into the working folder. The build file runs whatever tasks are required, including folder manipulation, static resource zipping etc., but ultimately (in this context) the intent is to run the deploy target of the Force.com Migration Tool task to deploy to a specific salesforce.com org. Connection details can be passed in via the job configuration or read from a build.properties file. A Jenkins plug-in can also be used to post build results to a Chatter post in another org – very useful for notifications.

Exemplar Scenario – Single Project Org Strategy

Exemplar Scenario – Multiple Project Org Strategy

Related Concepts (for future posts)
TDD – Test Driven Development
Pair Programming
SCC Branching Strategy