Salesforce Continuous Integration

What is it?
In essence, continuous integration (CI) is an aggressive build strategy requiring the isolated work of project developers to be integrated immediately following each commit to a shared source-code control system. Regression tests are run automatically, surfacing build errors or code inconsistencies at an early stage.

CI is viewed as an Agile practice and is typically characteristic of a mature development process and experienced developers. There is a definite learning curve, and a developer mindset adjustment, to be considered.

The manual alternative, which I term staged integration (SI), involves periodic integration testing of the HEAD revision from the source code control (SCC) system. The difference lies in the immediacy of performing the integration tests, and therefore of verifying the integrity of the current build status. With the manual approach it can be difficult to instil team discipline; minor changes can often be viewed as not warranting a build and test.

Basic tenets
1. Developers work on an isolated copy of the code (i.e. a branch) to avoid contention on shared resources, utility classes etc.
2. Developers commit unit-tested code to the shared SCC repository – often many times per day.
3. An automated build process is triggered by the commit; this takes the HEAD revision, deploys it to a dedicated org and runs the full suite of unit tests. Test failures are reported proactively, naming and shaming the individual responsible for the failing commit. It’s key to note that, pre-commit, the developer should merge the current HEAD revision into their local branch and resolve any conflicts (Git, for example, will enforce this).
4. The HEAD revision represents a consistent “code complete” status. Development will typically take place in an isolated branch, with the master branch holding the production-ready code.

Typical steps
1. Code is committed; this triggers a deployment to the INT (integration) org, with unit test execution during the deployment.
2. Once the deployment completes successfully, functional acceptance tests are executed, possibly via a tool like Selenium, where functional tests at the UI level can be scripted – perhaps to verify a particular user story (a minimal sketch follows below).
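
To make this concrete, the sketch below shows the shape of such a UI-level acceptance check, assuming Selenium WebDriver is used – the class name, page URL and expected title are hypothetical placeholders.

[sourcecode language="java"]
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class UserStoryAcceptanceTest {
    public static void main(String[] args) {
        // launch a browser session against the INT org - URL is illustrative only.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://myproject.myinstance.force.com/apex/OrderSummary");
            // verify an expectation from the user story, e.g. the page title.
            if (!driver.getTitle().contains("Order Summary")) {
                throw new AssertionError("Acceptance check failed: unexpected page title");
            }
        } finally {
            driver.quit();
        }
    }
}
[/sourcecode]

In practice such checks would run as a test suite invoked from the build server job once the deployment step has completed.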

Why do it?
1. Daily builds have long been an industry best practice; continuous integration is an evolutionary improvement.
2. The more frequently code is integrated, the less painful it is.
3. Build errors are surfaced early, while the developer is still “in the zone” and can resolve the problem expediently.
4. Builds trust within the development team and a sense of collective ownership.
5. Driver for technical excellence, a key Agile principle.
6. Encourages quality unit tests (code coverage and test cases).

Obstacles
1. Big unit test suites can often take hours to run. To mitigate this obstacle, a smoke test could be executed on commit (current sprint-related unit tests only), followed by a full test run scheduled every half-day, or overnight. The Force.com Migration Tool enables the test classes to execute to be defined by name – so this is a feasible option (see the sketch after this list).
2. Unit tests are an afterthought. Switch the team to TDD – perhaps with some education first.
3. Unsupported metadata types. Certain Salesforce configuration elements (metadata types) can’t be deployed via the Force.com Migration Tool. Such elements must be recorded in an audit log and manually applied to the target org, or, for automation, a Selenium script could be utilised.
4. Standing data. New features may require standing data (custom settings etc.). Use the Apex Data Loader in command-line (CLI) mode and invoke the data manipulation operations within the build file.
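
To illustrate the smoke test mitigation in obstacle 1, the deploy target can name the current sprint’s test classes explicitly. A minimal sketch follows, assuming the Migration Tool task has been defined as per the build file in the next section – the credentials, properties and test class names are placeholders.

[sourcecode language="xml"]
<target name="smokeTest">
    <!-- deploy the working folder, executing only the named test classes -->
    <sf:deploy username="${sf.username}" password="${sf.password}"
               serverurl="${sf.serverurl}" deployRoot="src">
        <runTest>OrderControllerTest</runTest>
        <runTest>InventoryServiceTest</runTest>
    </sf:deploy>
</target>
[/sourcecode]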

Tools and process
A CI implementation requires fit-for-purpose tooling; for Force.com development the following stack is typical:

    SCC = Subversion or Git
    Build Automation Server = Jenkins or Hudson
    Scripting = ANT plus the Force.com Migration Tool (scriptable ANT task)

In simple terms CI works as follows. Within the build server (Jenkins for example) a job is defined that, on each commit, connects to the SCC repository and copies the HEAD revision to a working folder, then runs an ANT script. The script invokes a build.xml file which is held in SCC and therefore copied into the working folder. The build file runs whatever tasks are required, including folder manipulation and static resource zipping, but ultimately (in this context) the intent is to run the deploy target of the Force.com Migration Tool task, to deploy to a specific salesforce.com org. Connection details can be passed in via the job configuration or read from a build.properties file. A Jenkins plug-in can also be used to post build results to a Chatter post in another org – very useful for notifications.
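
A skeletal build file for such a job might look as follows – a sketch only, assuming the ant-salesforce.jar from the Force.com Migration Tool is available at the stated classpath and that connection details live in build.properties:

[sourcecode language="xml"]
<project name="ci-build" default="deployToInt" xmlns:sf="antlib:com.salesforce">
    <!-- connection details; these can equally be injected by the Jenkins job -->
    <property file="build.properties"/>

    <taskdef resource="com/salesforce/antlib.xml" uri="antlib:com.salesforce"
             classpath="lib/ant-salesforce.jar"/>

    <target name="deployToInt">
        <!-- deploy the checked-out HEAD revision, running the full unit test suite -->
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src" runAllTests="true"/>
    </target>
</project>
[/sourcecode]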

Exemplar Scenario – Single Project Org Strategy

Exemplar Scenario – Multiple Project Org Strategy

Related Concepts (for future posts)
TDD – Test Driven Development
Pair Programming
SCC Branching Strategy

Visualforce User Agent Detection

The code below provides an example of a page action method used to detect a mobile user agent and perform a redirect. Alternatively, the same approach could be used with dynamic Visualforce components to switch between mobile- and web-optimised page composition.

[sourcecode language="java"]
public PageReference redirectDevice(){
    String userAgent = ApexPages.currentPage().getHeaders().get('USER-AGENT');

    //& some devices use custom headers for the user-agent.
    if (userAgent==null || userAgent.length()==0){
        userAgent = ApexPages.currentPage().getHeaders().get('HTTP_X_OPERAMINI_PHONE_UA');
    }
    if (userAgent==null || userAgent.length()==0){
        userAgent = ApexPages.currentPage().getHeaders().get('HTTP_X_SKYFIRE_PHONE');
    }
    //& guard against a null user-agent - Matcher throws a NullPointerException on null input.
    if (userAgent==null) userAgent = '';

    //& replace with custom setting - using (?i) case insensitive mode.
    String deviceReg = '(?i)(iphone|ipod|ipad|blackberry|android|palm|windows\\s+ce)';
    String desktopReg = '(?i)(windows|linux|os\\s+[x9]|solaris|bsd)';
    String botReg = '(?i)(spider|crawl|slurp|bot)';

    Boolean isDevice=false, isDesktop=false, isBot=false;

    Matcher m = Pattern.compile(deviceReg).matcher(userAgent);
    if (m.find()){
        isDevice = true;
    } else {
        //& don't compile the patterns unless required.
        m = Pattern.compile(desktopReg).matcher(userAgent);
        if (m.find()) isDesktop = true;

        m = Pattern.compile(botReg).matcher(userAgent);
        if (m.find()) isBot = true;
    }
    //& Default is mobile - unless a desktop or bot user-agent is identified.
    if (!isDevice && (isDesktop || isBot)) return null; //& no redirect.
    return new PageReference('/apex/MobileIndex'); //& redirect.
}
[/sourcecode]
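
For completeness, the action method is wired to the page as shown below – a sketch assuming a hypothetical custom controller exposing redirectDevice:

[sourcecode language="html"]
<apex:page controller="DeviceRedirectController" action="{!redirectDevice}">
    <!-- desktop-optimised page content -->
</apex:page>
[/sourcecode]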

Force.com Streaming API

So, imagine you’re building a modern web app (or page) and need to update some element on the page in near-real time – perhaps an open inventory quantity, to prevent data conflicts upstream. The point being that the immediacy of the data update plays a fundamental role in the business process. What are the options? Historically an interaction of this type would require a full page refresh, and as such would push the application toward a non-web delivery model. Then partial page updates became popular; we could then consider some form of client-initiated polling (JavaScript timer or otherwise). This client-initiated “Pull” model would satisfy the requirement, with a possible benefit of offloading the expensive polling activity to the client agent – all the server needs to do is expose a lightweight (probably stateless) endpoint to return the calculated data. Excellent.

However, as anyone implementing such an approach will know, there’s something slightly unedifying about the “Pull” model – you can never really tune the poll frequency to the right balance of data update frequency versus the feeling of wasting server resources. We should also strive to minimise how much trust we put in the client. In the Force.com context, unnecessary polling activity also comes with the cost of consuming limited API calls – this is a key point. In an ideal world the server could somehow just notify the client when the data has definitely changed, with no redundant polling, wasted callouts or unnecessary server resource consumption. Enter the Force.com Streaming API.

The Force.com Streaming API provides a server-initiated “Push” model, where notifications of changes to data of interest can be sent to internal pages (Visualforce), external app servers and external clients. The latter point is interesting in that the use cases for the API aren’t limited to simple UI updates; external system-type services can also be subscribers in the model. The API is founded on the CometD stack and utilises the long polling technique, where an almost persistent client-to-server connection is used, i.e. the server holds the client request open until a response is available; once it is returned, the client immediately re-requests the data, thereby opening a new connection.

To use this API, a PushTopic is defined using a SOQL query construct and exposed via a channel. Clients then subscribe to the channel and receive notifications when DML events occur that affect records covered by the SOQL query WHERE clause. PushTopics can be configured to notify on insert and/or update operations. PushTopics can also generate notifications for records matching the SOQL WHERE clause in response to any field change, or just to changes to fields referenced in the SELECT clause or a WHERE clause predicate – or both. There is currently no UI for the creation of PushTopics, therefore Apex script must be used – executed perhaps via the Developer Console or the Execute Anonymous view in the Force.com IDE (an example follows below). Not ideal.
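
By way of example, the Apex script below creates a simple PushTopic – the topic name and query are illustrative only:

[sourcecode language="java"]
PushTopic pushTopic = new PushTopic();
pushTopic.Name = 'AccountUpdates'; //& exposed as the channel /topic/AccountUpdates.
pushTopic.Query = 'SELECT Id, Name FROM Account WHERE BillingCountry = \'UK\'';
pushTopic.ApiVersion = 24.0;
pushTopic.NotifyForOperations = 'All'; //& notify on both insert and update.
pushTopic.NotifyForFields = 'Referenced'; //& any change to a field referenced in the SELECT or WHERE clause.
insert pushTopic;
[/sourcecode]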

Supporting technology
Bayeux protocol – standard protocol for transportation of asynchronous messages – typically in an HTTP context.
CometD – implementation of the Bayeux protocol using the AJAX push technology pattern referred to as Comet. Reference: http://cometd.org.
JSON – notifications are formatted as JSON messages.

Key points
- Stateless model – no server persistence of client state – fire-and-forget from the server perspective.
- Bulk API operations do not initiate notifications – for obvious reasons.
- Server processing of new PushTopic notifications occurs every 3 seconds; this therefore becomes the maximum frequency of update from the client perspective.
- PushTopic construct – each PushTopic relates to one object (custom or standard). The SELECT clause must include the Id field; the other selected fields are sent via the channel. Join and aggregation operations are not supported; formula fields are also not supported, which is a surprising limitation.
- Browser support is limited to IE8+ and FF4+.
- Client cookie support is required.
- A limit of 10 subscribing clients is enforced per PushTopic, with a maximum of 20 PushTopics in total. This again is surprising, as 10 clients per topic is unrealistic for usage in a multi-user web app. The documentation does suggest, however, that this soft limit can be increased by contacting salesforce.com support. There may be a cost implication here.
- A number of JavaScript libraries must be added to each Visualforce page, preferably via a static resource. Use a page template, or page composition if possible, to isolate the references (a subscription sketch follows below).
- The API requires a reasonable level of JavaScript expertise.

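The sketch below shows the minimal shape of a subscribing Visualforce page, assuming the CometD, jQuery and JSON2 libraries have been uploaded as static resources – the resource names and topic name are illustrative:

[sourcecode language="html"]
<apex:page>
    <apex:includeScript value="{!$Resource.cometd}"/>
    <apex:includeScript value="{!$Resource.jquery}"/>
    <apex:includeScript value="{!$Resource.json2}"/>
    <apex:includeScript value="{!$Resource.jquery_cometd}"/>
    <script type="text/javascript">
        (function($){
            $(document).ready(function(){
                // handshake with the Streaming API endpoint, passing the session id.
                $.cometd.init({
                    url: window.location.protocol + '//' + window.location.hostname + '/cometd/24.0/',
                    requestHeaders: { Authorization: 'OAuth {!$Api.Session_ID}' }
                });
                // subscribe to the channel - one notification per matching DML event.
                $.cometd.subscribe('/topic/AccountUpdates', function(message){
                    // message.data.sobject carries the Id and the selected fields.
                    console.log(message.data.sobject.Name);
                });
            });
        })(jQuery);
    </script>
</apex:page>
[/sourcecode]
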
Summary
The Force.com Streaming API adds native support for server-initiated “Push” models – helping to limit the consumption of expensive API calls in some circumstances. The key limitations to be aware of are that the API does not enable client control over the scope of the data pushed from the server, and that the number of subscribers per PushTopic is limited to 10. With the former, it isn’t possible to limit the scope of notifications to a single Account, as an example – a web page would therefore receive notifications for all Accounts. Careful consideration must be given to the design and purpose of each topic, with a focus on maximising specificity.

Force.com Sites SEO

Quick how-to guide..

1. Page title tag
[sourcecode language="html"]<title>Key Search Terms</title>[/sourcecode]

The title element is the single most important piece of on-page SEO data. Consider it carefully.

2. Keywords and description metadata tags
[sourcecode language="html"]<meta name="keywords" content="keyword1, keyword2" />
<meta name="description" content="Key Search Terms" />[/sourcecode]

Items 1 and 2 are key; keep the content focused and short – search engines can blacklist on the basis of repetitive, long content.

3. Site verification
Submit your site via Google Webmaster Tools and verify using the metadata tag technique.
[sourcecode language="html"]<meta name="google-site-verification" content="PgmB4If5NAcmMRrSxC9Lim4VFy6bldSB2VY" />[/sourcecode]
Bing, Yahoo etc. offer similar site verification mechanisms.

4. Add a VF page called RobotsTxt (or similar)
Add the content below; this page should then be set as the robots.txt file on the Site configuration detail page.
[sourcecode language="html"]<apex:page contentType="text/plain" showHeader="false">
User-agent: *
Allow: /
</apex:page>[/sourcecode]

The default is to block all crawler activity. The entry above provides full access to all pages for all search engines. This can be adjusted to fit your preference – see http://en.wikipedia.org/wiki/Robots_exclusion_standard for further detail.

Optional 5. Add the site to Google Analytics and verify by adding the tracking code to the Site via the configuration detail page (this adds in the necessary markup).

Simulated Breakpoints

The first of a series of posts relating to new advancements in the Apex language of particular relevance to technical architects.

From a debugging perspective the Apex language lags behind its modern language counterparts. Standard features such as breakpoints (conditional or otherwise) and edit-and-continue are lacking, due to the challenges of pausing runtime execution in a multi-tenanted environment. The typical debug workflow has therefore involved the use of copious amounts of..
[sourcecode language="java"]
System.debug('MyVar value is: ' + myVar);
[/sourcecode]
.. statements and plenty of patience. In a development or QA org this is inefficient at best, but workable. In a production setting, however, deploying instrumented code to assist in diagnosing a runtime issue becomes incredibly time-expensive – remember, unit tests have to run. With Spring ’12 the enhanced Developer Console, nicely renamed from the old System Log title, provides a far more efficient approach – Simulated Breakpoints!

In short, a breakpoint can be set on any line of Apex script using the familiar technique of clicking in the sidebar next to the required script line to reveal a red dot indicator. This can be done for all Apex code exposed via the Repository tab in the Developer Console. Subsequent debug logs will capture a snapshot of the heap during runtime execution when the breakpoint is encountered. The snapshots can be found in the Heap Dumps tab within the Developer Console. Not quite edit-and-continue but a marked improvement nonetheless.

This understated capability is a real advancement in debugging Apex script on the Force.com platform. Throw in the other new capabilities such as unbounded raw log access and Visualforce markup editing and it’s definitely time to look again at the Developer Console if you’re doing serious Apex coding.

Welcome!

Firstly, welcome to my blog!

The key themes for the blog will be salesforce.com technical architecture and agile development practices, delivered in the form of how-to posts and theoretical musings. The intent is to provide interesting, actionable knowledge for cloud architect practitioners specialising in Force.com technologies. Along the way, secondary topics such as Scrum, usability methods and AppExchange ISV considerations will be covered.