Salesforce Source Control and Release Process

This post outlines my preferred approach to managing parallel developments on the Salesforce platform in what I refer to as the Converged Programme Model. I readily acknowledge that there’s a multitude of ways to accomplish this, each with its own subjective merits. Before adopting a parallel work-stream model, take the time to understand the technical complexity, process overhead and time investment required. Of particular concern should be the team’s readiness for such a disruptive change. In my experience it’s better to plug any skills gaps upfront, be very prescriptive with process guidance, start small and build out incrementally – the risk otherwise is considerable. Typically resistance will come from individuals unaccustomed to a disciplined approach to software development and release process.

[Image: SCC]

Objectives

  1. Concurrent Development. Support parallel programme workstreams converging into a shared production Salesforce environment.
  2. Automation. Deliver build automation – reducing the manual overhead required to deploy between environments.
  3. Gold Standard. Deliver a best practice approach – the initial design should scale up and down in response to changing programme conditions.
  4. Non-disruptive. Facilitate a staggered approach to adoption – enabling key benefits to be realised quickly without disrupting productivity.
  5. Minimise Release Overhead. Project branches should be regularly and incrementally updated from the master branch – reducing the inherent risk of divergence over time.

Tools

  1. GitHub

    – Get started with public repositories; upgrade to a paid plan and use private repositories for any source code you don’t want to share with the world at large.

    – Create an Organisation account to enable Team functionality.

    – The key benefit versus Subversion (or CVS etc.) is fast and efficient branch management; parallel workstreams are managed on branches with frequent merging.

    – It is possible, albeit time-expensive, to implement a Git server within the enterprise. In my view the GitHub administration interface alone is worth the price.

  2. Jenkins

    – Deployed on a Windows EC2 instance with an elastic IP. A free-usage-tier micro instance provides an ideal server host. Using a Linux host can be beneficial in regard to SSH authentication from Jenkins to GitHub, among other advantages, but pick the operating system/platform the team you’re working with is most familiar with – a Linux host that only one team member can administer makes no sense.

    – On Windows the Jenkins service should be configured to run as a specific user account (with least privileges assigned). This is required to generate the key files for SSH authentication.

    – Enable Jenkins security. This is particularly relevant if the host is open to the public web. Lock down the inbound IP ranges via the EC2 security group if possible.

    – Either store the Ant build files (build.xml, build.properties) in the Git repository or use an XCOPY post-build step to copy the files into the workspace from a file system location – as below. I prefer to keep the build files external to Git – there shouldn’t be any need to version-manage such files – plus the build.properties file may contain passwords in plaintext. A minimal sketch of these files follows the screenshot below.

    [Image: Jenkins job build config]
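    For illustration, a minimal sketch of the two build files – this assumes the Force.com Migration Tool (ant-salesforce.jar) is on the Ant classpath, and all property values are placeholders:

    [sourcecode language="text"]
    # build.properties - placeholder values; the password has the security token appended
    sf.username = release.manager@force365.com.qa
    sf.password = passwordAndSecurityToken
    sf.serverurl = https://test.salesforce.com
    [/sourcecode]

    [sourcecode language="xml"]
    <!-- build.xml - deploys the metadata under src/ to the org defined in build.properties -->
    <project name="deploy-to-qa" default="deploy" xmlns:sf="antlib:com.salesforce">
      <property file="build.properties"/>
      <target name="deploy">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src" maxPoll="200"/>
      </target>
    </project>
    [/sourcecode]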

    – Install the GitHub and Git plugins
    These are required to build from a GitHub repository and enable build automation via Post-Receive Hooks. Under Jenkins System Configuration, select “Manually manage hook URLs”; this requires your GitHub repository to have the hook set manually via Service Hooks under repository settings. Add a [Jenkins (GitHub plugin)] service hook such as http://yourservername:8080/github-webhook/. The message sent on git-push to the remote repository will trigger any Jenkins job that builds from the updated branch and has the [Build when a change is pushed to GitHub] option set to true.

    – SSH Keys
    Jenkins access to a private GitHub repository over SSH requires key-based authentication using a generated key pair. The public key is added as a Deploy Key in GitHub under repository settings. This works well, but if you want the same Jenkins user to access multiple repositories over SSH you have a problem, as each Deploy Key must be globally unique across all GitHub repositories. The answer to this is to use aliasing and an SSH config file (refer: http://www.onemogin.com/blog/2011/9/1/jenkins-and-github-multiple-private-projects.html; see the config sketch after the log excerpt below). However, this won’t work with Post-Receive Hooks, as the repository URL in the sent message won’t match the aliased repository URL in the Jenkins job – the typical errant behaviour from the Jenkins log is shown below. I can’t see a way around this at the time of writing this post.

    [sourcecode language="text"]
    FINE: Skipped GitHub Test – buildautomationtest repository because it doesn’t have a matching repository.
    May 7, 2013 6:21:35 PM com.cloudbees.jenkins.GitHubWebHook
    FINE: Considering to poke GitHub Test – buildautomationtest repository
    [/sourcecode]
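    For reference, the aliasing approach mentioned above uses an SSH config file for the Jenkins service user along these lines – the host aliases and key file names are illustrative:

    [sourcecode language="text"]
    # ~/.ssh/config - one alias per repository, each mapped to its own deploy key
    Host github-project-a
      HostName github.com
      User git
      IdentityFile ~/.ssh/id_rsa_project_a

    Host github-project-b
      HostName github.com
      User git
      IdentityFile ~/.ssh/id_rsa_project_b
    [/sourcecode]

    The Jenkins job then references the repository as git@github-project-a:youraccount/project-a.git rather than the standard github.com form – which is exactly why the hook message fails to match.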

    – Chatter Plugin
    I’m a big fan of this plugin by Simon Fell. I tend to use a dedicated release manager user, e.g. release.manager@force365.com, standard user license capacity permitting, and perform all deployment tasks in this user context. This approach provides clarity on changes made by a deployment versus an actual user, and provides an easy way to be notified of failures etc.

Key Principles

  1. Fit-for-purpose Org-set

    – Org-set is the terminology I use to describe the collection of orgs, and their roles, required to deliver a project safely to production.

    – One size does not fit all. Pick the minimum set of org roles required to deliver the project. Each org is a time-expensive overhead.

    – Sandbox types. In defining the org-set, factor in the availability of config-only and full-copy sandboxes. The latter should be reserved for cases where an infrequent refresh cycle is sufficient. Project-level orgs don’t need to be part of the sandbox estate; Developer Edition orgs, or perhaps Partner Developer Edition orgs, can be employed. Full-copy sandboxes are incredibly expensive, valuable resources – use them only when absolutely necessary and for as wide a set of roles as possible.

      – Connected orgs. For projects involving complex integrations, the complexity involved in creating a connected-org may influence the org-set design – there may be an argument to consolidate roles onto a single test org used for QA and UAT perhaps.

  2. Continuous Integration

    A best-practice org-set design for non-trivial technical projects with multiple technical contributors should isolate developer activities into separate developer orgs, with a code-level integration org and Continuous Integration (CI) process in place.

  3. Project-level sandboxes are not refreshed

    Project-level orgs are all built from the Git repository. The Pre-production programme-level org must be refreshed from Production pre-deployment to ensure the deployment is verified against the current state.

  4. Commit to the remote project branch is a commitment that metadata is ready for system testing

    Build automation will deploy a project branch commit to the project QA org. In my experience it pays to be prescriptive in terms of development process.

  5. Commit to the remote master branch is a commitment that metadata is ready for integration testing

    Build automation will deploy a master branch commit to the programme INT (integration) org – this org exists to enable rigorous regression testing to be applied by all project workstreams. Post-deployment, suites of automated tests should be invoked and the reports analysed by the test lead on each project.

  6. Test Automation

    It’s a significant resource overhead to execute manual test scripts for each regression test cycle, not to mention error-prone. For non-trivial projects, the investment must be made at an early stage in automated testing. Selenium is a good choice, but the tool utilised doesn’t really matter; what matters is that from the outset of the project the test team start to build up a comprehensive suite of automated test cases with coverage of the key acceptance criteria defined for each user story. The suites then enable automation of regression testing during deployment phases – the same scripts underpin system testing and provide an often-overlooked second stage to CI (unit tests + acceptance tests).

  7. GitHub branch design

    – A simple, clean branch design is desirable in the remote repository.

      – Long-lived branches for active project workstreams. Project branches may have sub-branches for each sprint or phase.

      – Long-lived branch for patches. Bug fixes are developed on local branches and committed to the remote support branch when ready for system testing.

    – It can be advisable to consider how important a clean Network Graph is; this is impacted by Git merge versus rebase decisions. An illustrative merge-from-master flow is sketched below.
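    To make the merge-from-master principle concrete, a typical update of a long-lived project branch looks something like the following – branch names are illustrative:

    [sourcecode language="text"]
    # update the long-lived project branch with the latest production state
    git checkout project-a
    git fetch origin
    git merge origin/master    # surface conflicts early, during development itself
    # resolve any conflicts and retest, then push - triggering the project QA build
    git push origin project-a
    [/sourcecode]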

  8. Build automation challenges

    In a perfect world, all metadata component types would be covered by the Metadata API. This isn’t the case, so the nirvana of simply cloning an org configuration is yet to exist. Instead a prescriptive process is required which spans manual configuration tasks, metadata deletion and build automation.

    – Proactive management of change
    A nominated release manager should proactively manage change at a programme level, advise the project teams on release process and strive to minimise deployment conflicts through early involvement in all project developments. A change log should be maintained which lists all changes being made (an illustrative format is sketched below). This could include technical component types (ApexClass, ApexTrigger etc.) being added, modified or deleted, but as a minimum it must track configuration changes requiring manual action – enablement of features, field data type changes etc. – and required standing data (custom settings etc.). All changes should be mapped to a Change Type of manual or automated, and the list of orgs to which each change has been deployed should be tracked. This is clearly an overhead to the project, but without control it can be very easy to lose track of the current state of the orgs in use and face significant time expense in attempting to rationalise the situation through failing deployments. The release manager or technical lead should apply manual tasks to target orgs pre-emptively to minimise automated build failures.
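    An illustrative change log format – the columns and entries below are hypothetical:

    [sourcecode language="text"]
    Change                                   | Change Type | Orgs Deployed To
    -----------------------------------------+-------------+------------------
    Enable Opportunity Teams (feature)       | Manual      | DEV-A, QA-A, INT
    AccountTriggerHandler (ApexClass, new)   | Automated   | QA-A, INT
    Account Region field data type change    | Manual      | DEV-A, QA-A
    Country custom settings (standing data)  | Manual      | QA-A, INT
    [/sourcecode]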

      – Be prepared for build failures
      Automated builds will fail; this is a fact of life where build dependencies on manual actions exist. Proactive management will only get you so far – minimising failures is more realistic than eliminating them.

      – Data
      Automation of data setup in a target org is possible via Ant and the Data Loader CLI, or other similar means. Alternatively a data file could be deployed as a document or static resource and then loaded from an Apex script (as per the ISV approach) – a minimal sketch of the latter follows.
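      A minimal anonymous Apex sketch of the static resource approach – the resource name and single-column CSV format are assumptions for illustration:

      [sourcecode language="java"]
      // Load seed data from a CSV held in a static resource (no header row assumed).
      StaticResource sr = [SELECT Body FROM StaticResource WHERE Name = 'SeedAccounts' LIMIT 1];
      List<Account> accounts = new List<Account>();
      for (String line : sr.Body.toString().split('\n')) {
          if (String.isNotBlank(line)) {
              // single-column CSV of Account names - real data would need proper CSV parsing
              accounts.add(new Account(Name = line.trim()));
          }
      }
      insert accounts;
      [/sourcecode]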

      – Unsupported metadata component types
      Automation is possible using Selenium scripts, which execute at the UI level and can simulate, for example, a user activating a setting. Such scripts can then be integrated into an automated build. This is entirely possible, but takes time and expertise with both Ant and Selenium to accomplish.

  9. Programme-level Integration

    The Converged Programme Model involves project workstreams building in isolated org-sets, with frequent merge-from-master actions bringing across any changes to the production state. This approach should surface conflicts early, i.e. during development itself, but to be sure that shared component changes have not introduced any functional inconsistencies, regression testing must be applied by every project workstream whenever any project does a release. This is a strong argument for test automation.

  10. UAT

    – Project-level or programme-level?
    In principle UAT should always be applied at the local project level, as the commit to the programme-level integration org is an absolute commitment that the code is production-ready. In practice UAT may be two-tiered: initial user acceptance of new functionality, followed by some form of secondary acceptance testing in Pre-production, in parallel to deployment verification testing.

  11. Path-to-production Change Management

    As with any programme of work, fit-for-purpose Change Management processes should be in place. In this context that means a Change Advisory Board (CAB) should be in place to approve deployments; this must include informed and empowered representation across business and technical functions.

    – A Deployment Request Form (DRF), or similar, should be produced to document the change being released: the impact, pre- and post-deployment tasks, GitHub commit # etc., and the approval date or rejection reason. The DRF could be approved by a convened board or via email response.

      – The DRF process is absolutely required for the final deployment to Production, but may also be applied to the Pre-Production deployment, i.e. the commencement of the final step of the path-to-production release flow.

Salesforce Exception Reports

I think it’s fair to say that consideration of reporting and analytics is not traditionally a technical architect’s direct concern. In the Salesforce context I always challenge this presumption and promote the idea that a project architect should absolutely be looking at the high-level analytical requirements during the foundation stage of the project. Why, you may well ask? The answer is simple: in order to design a fit-for-purpose data model you must consider data coming out efficiently (via the native reporting tools) as well as data going in. I make this point in a prior post that outlines my view on the Salesforce development process. The number of projects I see where this thinking hasn’t been applied, and 3rd party tools are wheeled in to address the gap, continues to surprise me.

Key to success here is understanding what can be achieved with the Salesforce reporting functionality. Which brings me on to the topic for this post – exception reports, a type of reporting not obviously catered for by the standard reporting tools. For instance – show me a list of Accounts without an Opportunity in the last year. Such reports are invaluable in identifying omissions, focusing sales activities, generating call lists, tracking inactivity etc.

I’ll use 2 examples below to clarify how to approach such a scenario.

Simple Example.
Accounts without a closed Opportunity in the last 30 days.

For this report we use a “without” Cross Filter, as per the screenshots. Cross Filters were added in Spring ’12 and enable records to be filtered based on the existence (or non-existence) of related records which meet defined criteria. Within a Cross Filter up to 5 sub-filters can be defined. Note that the report’s top-level filters do not apply at the sub-filter level. Cross Filters can be added for any related object, custom or standard – the limit is 3 per report (each with up to 5 sub-filters). Powerful functionality indeed, and a great convenience that avoids a Custom Report Type for the outer join. Note also that the sub-filter is limited to fields on the related object; no cross-object sub-filter logic is possible.

[Image: simple exception report]
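For clarity, the cross filter above is the declarative equivalent of the following SOQL anti-join – shown here as anonymous Apex for convenience:

[sourcecode language="java"]
// Accounts without a closed Opportunity in the last 30 days
List<Account> results = [
    SELECT Id, Name
    FROM Account
    WHERE Id NOT IN (
        SELECT AccountId
        FROM Opportunity
        WHERE IsClosed = true
        AND CloseDate = LAST_N_DAYS:30
    )
];
[/sourcecode]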

Complex Example.
Accounts without a Won Opportunity involving a certain Product Family in the last 60 days.

For this report we start with a Cross Filter report to satisfy the “without Won Opportunity” requirement. From there we need an indicator (of the existence of an Opportunity Product linked to a product of the specified family) at the Opportunity level which can be used in a sub-filter. This is actually straightforward to do and relies on the old-school technique for exception reports, namely the use of a rollup summary field and simple workflow.

Solution components :
Opportunity Product – Custom Field – ProductFamilyHardwareFlag
Number (18,0 default 0)

Opportunity Product – Workflow Rule – “Opp Product – New or Changed Product”
Evaluate – Created and Edited – Formula (OR(ISNEW(), ISCHANGED( Product2Id )))

Opportunity Product – Workflow Action – “Opp Product – Set ProdFamilyHardwareFlag”
Field to update [ProductFamilyHardwareFlag] with Formula = IF(TEXT(Product2.Family) = 'Hardware', 1, 0)

Opportunity – Custom Field – ProductFamilyHardwareCount
Rollup Summary Field (Opportunity Product – SUM – Field to aggregate = ProductFamilyHardwareFlag). All records included.

How does this all work? In short, a workflow rule on Opportunity Product sets a numeric field equal to 1 or 0 based on the product family meeting the defined criteria. The numeric field is then rolled up to the Opportunity level via the rollup summary field, which is then applied in a sub-filter as below.

[Image: complex exception report]
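Again for clarity, the resulting report logic is equivalent to the following anti-join – ProductFamilyHardwareCount__c being the API name assumed here for the rollup summary field:

[sourcecode language="java"]
// Accounts without a Won Opportunity involving a Hardware product in the last 60 days
List<Account> results = [
    SELECT Id, Name
    FROM Account
    WHERE Id NOT IN (
        SELECT AccountId
        FROM Opportunity
        WHERE IsWon = true
        AND CloseDate = LAST_N_DAYS:60
        AND ProductFamilyHardwareCount__c > 0
    )
];
[/sourcecode]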

Note, this is just an example to illustrate the concept. An actual implementation would typically require a more generic approach.

Salesforce Development Process

There are typically two interpretations of the term “development process” – one being the tools, practices and methods applied in software development (i.e. methodology, plus build automation, standards etc.), the other being the process applied to get from requirements to working software (i.e. iterative or waterfall, plus how the analysis-design-build-test-release disciplines are executed). This post outlines one high-level approach to the latter in the context of Salesforce developments. The intent isn’t to be overly prescriptive; generally speaking each project requires its own defined process, factoring in the resources available (critically, their skills and experience) plus the nature of the work and the timescales. That said, it is a truism that failed projects fail for a variety of reasons but successful projects are typically successful for the same reasons. A fundamental success factor is the adoption of a clearly defined and simple process – others being team empowerment and shared commitment.

The process above assumes an iterative approach and focuses the initial iteration on the foundation of a robust data model, set of user profiles and permission sets, role hierarchy, record access model and statement of the reporting requirements for the project. Subsequent iterations improve the quality of the foundation over time, as new functional areas are developed. The data model in this context will include a statement of the org-wide defaults for each object and the specifics of each relationship (master-detail, lookup, mandatory lookup etc.).

The record access model is critical – this shows how each user population maps to a user profile and role, and how they gain access to the data required, i.e. sharing rule, Apex managed sharing etc. In my experience defining an approximate access model upfront, and then refining it during the feature build-out, helps to avoid expensive refactoring later in the process and sets out a clear understanding for all contributors to the declarative and technical build. A piecemeal approach to defining a sharing model is commonplace – this rarely provides a clear and cohesive result. For similar reasons, defining a list of permission sets upfront ensures that user profiles are kept clean and focused, avoiding a proliferation of profiles down the line.

It may be surprising to see analytics such as reports and dashboards being considered during the foundation stage; this, however, is one of the primary inputs to the definition of a fit-for-purpose data model. I’ve worked on countless projects where reporting has been overlooked until a late stage, at which point it has become apparent that the standard reporting features can’t produce the reports given the structure of the data. Ideally the data model should be designed from the outset to work well for both transaction processing and analytics.

A final point for consideration is the by-exception approach to identifying technical components. When breaking out the solution components required for a certain feature, expertise must be applied to ensure that standard product functionality and declarative options (workflows, reports etc.) are considered fully before bringing expensive technical options such as Visualforce or Apex to bear.