Showing posts tagged with: Deployment

Salesforce Package Versions

In the new world of second generation packaging (2GP), capabilities such as Unlocked Packages mean that the technical aspects of packaging are no longer a concern for ISV developers alone; they are now equally applicable to enterprise development. The Salesforce Developer Experience (SFDX) developer guide provides a great reference for those getting started with packaging and covers all the main implementation considerations. One area that isn’t covered is the management of package versions in practice; I call this a package version scheme, an example of which is included below.

Package Version Scheme:

Convention [Major.Minor.Patch.Build]

  • Pre-release development; 0.1.0-x .. 0.2.9-x
  • First-release; 1.0.0-1
  • Post-initial-release bugfixes; 1.0.1-x .. 1.0.3-x
    (Patch number must increment per package version release)
  • Post-initial-release internal builds; 1.1.0-x
    (Only the Build number must increment per package version release)
  • Next-release (minor enhancement); 1.1.0-x
    (Release with the final build number)
  • Post-minor-release bugfixes; 1.1.1-x .. 1.1.3-x
    (Patch number must increment per package version release)
  • Post-minor-release internal builds; 1.2.0-x
    (Only the Build number must increment per package version release)
  • Next-release (major enhancements); 2.0.0-1
  • Post-major-release bugfixes; 2.0.1-x .. 2.0.3-x
    (Patch number must increment per release)
  • Post-major-release internal builds; 2.1.0-x
    (Only the Build number must increment per release)
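
To make the convention concrete, the sketch below shows how the scheme maps onto the SFDX tooling; the package alias MyPackage is illustrative, and the NEXT keyword increments the Build number automatically on each package version create.

[sourcecode language="text"]
// sfdx-project.json (fragment) – versionNumber carries Major.Minor.Patch.Build
"packageDirectories": [{
    "path": "force-app",
    "package": "MyPackage",
    "versionNumber": "1.1.0.NEXT"
}]

// create an internal build – NEXT resolves to the next Build number
sfdx force:package:version:create --package MyPackage --installationkeybypass --wait 10

// release the final build of 1.1.0 (the "release with the final build number" step)
sfdx force:package:version:promote --package MyPackage@1.1.0-3
[/sourcecode]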

Additional references:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_manpkgs_package_versions.htm
https://developer.salesforce.com/docs/atlas.en-us.sfdx_dev.meta/sfdx_dev/sfdx_dev_intro.htm

Custom Metadata Types – Salesforce Winter ’16

1 Create New Custom Metadata Type

This post provides a high-level overview of the Winter ’16 enhancements to the Custom Metadata Types platform capability.

Custom Metadata Types – The App Configuration Engine for Force.com

There are, in my view, two distinct ways to consider Custom Metadata Types (CMT): firstly as analogous to Custom Settings, and secondly as an architecturally significant paradigm shift in regard to platform extensibility. In the former case CMT can be viewed as a straightforward, almost like-for-like replacement for List Custom Settings – with the added benefit that records can be deployed as metadata. There are of course considerable differences between the two; however, conceptually this view is simplistic and approachable. In the latter case, prior to CMT platform extensibility for Force.com could be viewed as a vertical model where new instances of pre-defined metadata types are created to deliver custom interactions. Custom Metadata Types enable a horizontal extensibility model where new type definitions can be introduced and instances created as metadata. The platform is no longer constrained to the pre-defined set of metadata types; developers have the freedom to extend the model and deploy both the types and instances freely across environments. This horizontal extensibility model enables a host of new use cases, such as bespoke development frameworks that abstract or extend the Force.com platform – an idea variously described as Platform-on-a-Platform or Custom Platform.

Custom Metadata Types were introduced as a beta release in Spring ’15, with the GA release in Summer ’15. The enhancements added in Winter ’16 appear to represent another milestone on the journey toward an increasingly capable platform extensibility model, where custom types can be related to standard types, perhaps to override or extend platform behaviour. This is definitely a key area of the Force.com platform to pay attention to over subsequent releases.

Note: the native user interface for Custom Metadata Type administration shown in the following screenshots is a new Winter ’16 feature; previously, Metadata API calls were required to define types and manage associated records.

Key Concepts

Metadata Type
As the screenshot below shows, Custom Metadata Types support Custom Fields and Page Layouts, all very consistent with the Custom Object and Custom Setting equivalents (although Custom Settings do not support layouts). At this point it’s worth considering the fact that all standard metadata types are comprised of a collection of attributes; for example, an ApexClass has Name and Body attributes in the same way a CustomField has Name, Label and DisplayType attributes. This is how Force.com platform metadata is structured. The difference between a CMT and a Custom Object or Setting isn’t the definition, it’s the type of data stored; with a CMT we’re recording metadata. Taking a somewhat obscure example, we could invent a new proprietary platform language called Opex (;-)), define a CMT called OpexClass with a Body attribute etc., populate it with metadata records that represent a System namespace and ship some actual ApexClass instances to translate and run the Opex code. I’ll concede this isn’t a practical example; however, I hope the point is clear.

2 Custom Metadata Type

The protected component attribute applies to Managed Packages; it controls the visibility of the CMT in a subscriber org.

When defining a new CustomField for a Custom Metadata Type there are limitations to the field types that can be specified; as per Custom Settings, there are no picklist or relationship fields (as yet, anyway).

3 New Field

Field Manageability is a new concept to understand in Winter ’16, again in relation to Managed Packages. In short, this setting provides field-level editability control; the selectable values are:

Locked after release: the field value is locked after deployment (including in the developer org).
Subscriber editable: as the name suggests, deployed (developer) updates will not override subscriber field value changes.
Upgradable: locked in the subscriber org; the developer can edit and deploy upgrades.

4 Custom Metadata Type with Fields

Metadata Record
A Metadata Record is really where Custom Metadata Types head off on their own path; a Metadata Record, as the name implies, represents an instance of the metadata type as a record that can be manipulated by the Metadata API, deployed via Change Set and packaged. The significance of this may be obvious, but it is incredibly powerful. It becomes possible, for example, to track Metadata Records using source code control tools and to deploy metadata plus configuration in a single deployment transaction. No more two-stage deployments or clumsy post-install data loading.

5 New Metadata Record

As can be seen in the preceding screenshot it is possible to define a Protected Component setting at the Metadata Record level. This enables the type to be public but records to be selectively hidden in the subscriber org – a very flexible capability.

6 Metadata Records

Key Benefits

As mentioned in the introduction, the Custom Metadata Types feature is still emerging; in my view at least, the most interesting aspects are potentially yet to be revealed. There are, however, definitely some key benefits to highlight in the Winter ’16 release.

For Enterprise: Manual steps within an otherwise automated Application Lifecycle Management process can cause compliance issues and release management inefficiency. Custom Metadata Types enable application configurations to be deployed as part of a seamless, one-step deployment process, thereby removing manual friction. Configuration management tools can also track and version control the application definition and its configuration state.

For Partners: A long-time issue for ISVs has been the deployment of application configuration data as part of the managed package installation process. Post-install scripts provide one option, but creating data via Apex script doesn’t scale well or deliver the required fine-grained control over subscriber org configurability and upgradeability. Custom Metadata Types address both issues.

The screenshot below shows both a Custom Metadata Type and Metadata Records added to a Managed Package definition.

7 Packageable

Note: the benefits stated above are the practical benefits of the generic capability; the actual benefit for many developers will be the flexibility it affords.

Implementation Considerations

Audit trail: Changes to both Custom Metadata Types and Metadata Records are visible via the Setup Audit Trail; this is new in the Winter ’16 release.

8 Audit Trail

Metadata Record Access: Metadata Records can be accessed via SOQL query only; there is no direct Apex support. Note the __mdt suffix.

[code language="java"]
Widget__mdt[] widgets =
    [SELECT QualifiedApiName, Height__c, Width__c FROM Widget__mdt];
[/code]

Metadata Record Modification: Custom Metadata Types do not support DML operations via Apex; the Metadata API must be used instead. For use cases where configuration data needs to be created via code, CMT may not be an effective approach.
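
To illustrate (the field and record names are hypothetical), reads work as normal SOQL, whereas a DML statement against the type is rejected at compile time:

[code language="java"]
// Reading configuration is straightforward SOQL
Widget__mdt config = [SELECT Height__c, Width__c
                      FROM Widget__mdt
                      WHERE DeveloperName = 'Default_Widget'
                      LIMIT 1];

// By contrast, the line below would fail to compile – DML is not
// permitted on custom metadata sObjects
// insert new Widget__mdt(DeveloperName = 'New_Widget');
[/code]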

Relationship Fields: At some stage in the future evolution of Custom Metadata Types (Spring ’16 perhaps) I would expect support for relationships to be provided. This, I believe, is where the feature will really take off.

Apex Testing: Currently Metadata Records are visible in Apex unit tests (without SeeAllData=true); it’s likely that simulated test data will be supported in a future release to enable testing under different configurations.
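
A trivial sketch of this behaviour, again assuming the hypothetical Widget__mdt type:

[code language="java"]
@isTest
private class WidgetConfigTest {

    // No SeeAllData annotation required – Metadata Records are visible
    // to unit tests by default
    @isTest
    static void metadataRecordsVisibleInTests() {
        List<Widget__mdt> widgets = [SELECT QualifiedApiName FROM Widget__mdt];
        System.assertNotEquals(null, widgets);
    }
}
[/code]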

Permissions: The permissions model for Custom Metadata Types is limited; Metadata Records are either visible or not at the org level. A finer-grained permissions model, perhaps just at the record level, would be an obvious progression.

References

https://developer.salesforce.com/blogs/engineering/2015/08/custom-metadata-types-winter-16.html

https://help.salesforce.com/HTViewHelpDoc?id=custommetadatatypes_overview.htm

http://releasenotes.docs.salesforce.com/en-us/winter16/release-notes/rn_forcecom_development_custom_metadata.htm

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious; what isn’t necessarily obvious is how to achieve effective change control without introducing friction into the develop->test->release cycle.

In simple terms, a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to auditability, acceptance and approval; the term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is the accurate recording of changes and their application to specific environments; the object model below shows one approach to this.

Note: where feasible I recommend using the production org for this purpose. This may be challenging from a licensing perspective; however, the approach has many advantages over off-platform alternatives, such as Excel spreadsheets, for tracking change. Chatter provides excellent support for collaboration on deployments.

Change Control Object Model

Key Principles
1. For most projects, tracking change at the component level (Custom Field, layout adjustment etc.) is time-expensive and impractical in terms of the associated overhead.

2. The model does not require change to be recorded at the component level. Instead, change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach; in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment will be comprised of Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual Changes may be configuration or data in nature.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Trail within the org (see the query sketch after this list).

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.
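
Regarding principle 5, the Setup Audit Trail side of the comparison can be queried directly with SOQL; a minimal sketch, with an illustrative field list and date window:

[code language="java"]
// Recent Setup Audit Trail entries, for reconciliation against the
// Change Control Log (SetupAuditTrail is read-only)
List<SetupAuditTrail> entries =
    [SELECT Action, Section, Display, CreatedBy.Username, CreatedDate
     FROM SetupAuditTrail
     WHERE CreatedDate = LAST_N_DAYS:7
     ORDER BY CreatedDate DESC];
[/code]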

References
Components supported by Change Set
Metadata API Unsupported Component Types

Salesforce Release Methodology – Simple Case

A very common challenge addressed by architects working with Salesforce is the definition of an appropriate release methodology. By this I mean the identification of the Salesforce orgs required to support the project delivery, whether serial or concurrent in nature, the role and purpose of each org and, critically, the means by which change is managed and synchronised across environments. On this latter point, a clear definition of the path-to-production is imperative.

In the large-scale, complex project case there is typically time and expertise available to define a bespoke methodology, with build automation, source code control system integration and so forth tailored to the specifics of the programme environment. There’s an abundance of best-practice information available online to help guide the definition of a release methodology for complex projects. For less complex projects, such as those employing the declarative build model only, there is less information available; in such cases what is typically required is a standardised, best-practice approach that can be adopted as-is.

The remainder of this post provides an outline view of an exemplar release methodology for small-to-medium scale, configuration-centric projects (i.e. no Apex code or technical complexities). This information is provided for reference purposes only.

Environment Strategy
The following diagram outlines the environments and their purpose, the defined release steps and a basic approach to change management.

Release Methodology - Simple Case

Key Principles
1. Isolate development from testing activities. This is the golden rule. Testing requires a stable environment unaffected by ongoing development. Development shouldn’t grind to a halt while system testing and acceptance testing processes are applied.
2. Utilise as few sandboxes as possible. Synchronisation of change is time-expensive and error-prone; avoid this wherever possible. Preparation of standing data post sandbox refresh can also take time, as can the communication required to establish that a refresh can proceed.
3. Don’t over-specify the sandbox type. Sandboxes are an expensive asset, especially full-copy and partial-data sandboxes. Calculate the required storage capacity and map to either Developer or Developer Pro. Retain full-copy sandboxes for purposes that actually require the copied data.
4. Maintain a Change Control Log in the production org to record all changes (at a reasonably high level) against applied environments.
5. Use the production org for implementation project collaboration. It can also be a useful adoption tool to create Chatter groups such as “Salesforce: Marketing” or “Salesforce: Finance”, where collaboration can occur directly with the business users whilst the project is in flight.
6. Accept that change will inevitably be applied to the production org first; record such changes and apply them to development and testing sandboxes as soon as possible.
7. Always verify the Change Control Log against the Setup Audit Trail before deployments.
8. Use Change Sets for deployment wherever possible.
9. Encourage a development process where Change Sets are updated continually, rather than retrospectively.
10. Always verify the Change Control Log against the list of Change Set supported components.
11. On larger projects a Change Set partitioning strategy may be required; along functional lines, by team or by component type etc.
12. Ensure releases to production are documented and approved. A simple Deployment Request Form (DRF) template should be defined and used to gain approval. This process is key to communication and governance, but also helps the team consider fully the pre- and post-deployment steps, risks and rollback strategy.
13. Post-release, communicate how business processes have been mapped to Salesforce concepts, and the permissions model. Understanding how things work in simple terms can help avoid end-user frustration with a new system. This can also reduce the support burden, as end-users can often self-diagnose the cause of a problem.

The org strategy diagram above presents an appropriate approach for a serial-release model, i.e. one project or one sprint at a time is being developed, tested then released. In the concurrent-release model, where multiple parallel projects are converging into a single production org, isolated development and test sandboxes will be duplicated per project, with an integration (or pre-production) org providing a synchronisation point where the combined state is validated prior to deployment to production.

Salesforce Source Control and Release Process

This post outlines my preferred approach to managing parallel developments on the Salesforce platform in what I refer to as the Converged Programme Model. I readily acknowledge that there’s a multitude of ways to accomplish this, each with its own subjective merits. Before adopting a parallel work-stream model, take the time to understand the technical complexity, process overhead and time investment required. Of particular concern should be the team’s readiness for such a disruptive change. In my experience it’s better to plug any skills gaps upfront, be very prescriptive with process guidance, start small and build out incrementally – the risk otherwise is considerable. Typically, resistance will come from individuals unaccustomed to a disciplined approach to software development and release process.

SCC

Objectives

  1. Concurrent Development. Support parallel programme workstreams converging into a shared production Salesforce environment.
  2. Automation. Deliver build automation – reducing the manual overhead required to deploy between environments.
  3. Gold Standard. Deliver a best practice approach – the initial design should scale up and down in response to changing programme conditions.
  4. Non-disruptive. Facilitate a staggered approach to adoption – enabling key benefits to be realised quickly without disrupting productivity.
  5. Minimise Release Overhead. Project branches should be regularly and incrementally updated from the master branch – reducing the inherent risk of divergence over time.

Tools

  1. GitHub

    – Get started with public repositories; upgrade to a paid plan and use private repositories for any source code you don’t want to share with the world at large.

    – Create an Organisation account to enable Team functionality.

    – Key benefit versus Subversion (CVS etc.) is fast and efficient branch management; parallel workstreams are managed on branches with frequent merging.

    – It is possible, albeit time-expensive, to implement a Git server within the enterprise. In my view the GitHub administration interface alone is worth the price.

  2. Jenkins

    – Deployed on a Windows EC2 instance with an elastic IP; a free-usage-tier micro instance provides an ideal server host. Using a Linux host can be beneficial in regard to SSH authentication from Jenkins to GitHub, among other advantages; however, pick the operating system/platform the team you’re working with is most familiar with – a Linux host that only one team member can administer makes no sense.

    – On Windows the Jenkins service should be configured to run as a specific user account (with least privileges assigned). This is required to generate the key files for SSH authentication.

    – Enable Jenkins security. Particularly relevant if the host is open to the public web. Lock the inbound IP ranges via the EC2 security group if possible.

    – Either store the Ant build files (build.xml, build.properties) in the Git repository or use an XCOPY post-build step to copy the files into the workspace from a file system location – as below. I prefer to keep the build files external to Git – there shouldn’t be any need to version manage such files – plus the build.properties file may contain passwords in plaintext.

    Jenkins Job Build Config
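
    For reference, the copy step amounts to a Windows batch command along these lines (source paths are illustrative):

    [sourcecode language="text"]
    rem Jenkins "Execute Windows batch command" step – copy build files into the workspace
    xcopy "C:\Release Management\ant\build.xml" "%WORKSPACE%" /Y
    xcopy "C:\Release Management\ant\build.properties" "%WORKSPACE%" /Y
    [/sourcecode]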

    – Install GitHub and Git Plugins
    Required to build from a GitHub repository; these enable build automation via Post-Receive Hooks. Under Jenkins System Configuration, configure “Manually manage hook URLs”; this requires your GitHub repository to have the hook set manually via Service Hooks under repository settings. Add a [Jenkins (GitHub plugin)] service hook like http://yourservername:8080/github-webhook/. The message sent on git-push to the remote repository will trigger any Jenkins job that builds from the branch that has been updated and has the [Build when a change is pushed to GitHub] option set to true.

    – SSH Keys
    In order to use SSH from Jenkins to a private GitHub repository, SSH authentication is required, which uses a generated key pair. The public key is added as a Deploy Key in GitHub under repository settings. This works well, but if you want the same Jenkins user to access multiple repositories over SSH you have a problem, as each Deploy Key must be globally unique across all GitHub repositories. The answer to this is to use aliasing and an SSH config file (refer: http://www.onemogin.com/blog/2011/9/1/jenkins-and-github-multiple-private-projects.html); however, this won’t work with Post-Receive Hooks, as the repository URL in the sent message won’t match the aliased repository URL in the Jenkins job – resulting in the errant behaviour below, taken from the Jenkins log. I can’t see a way around this at the time of writing.

    [sourcecode language="text"]
    FINE: Skipped GitHub Test – buildautomationtest repository because it doesn’t have a matching repository.
    May 7, 2013 6:21:35 PM com.cloudbees.jenkins.GitHubWebHook
    FINE: Considering to poke GitHub Test – buildautomationtest repository
    [/sourcecode]
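
    For reference, the aliasing approach itself uses one Host entry per repository in the Jenkins user’s ~/.ssh/config, along these lines (the alias and key file names are illustrative):

    [sourcecode language="text"]
    # ~/.ssh/config for the Jenkins service user
    Host repo-one.github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/repo-one_deploy_key

    # the Jenkins job then references git@repo-one.github.com:myorg/repo-one.git
    [/sourcecode]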

    – Chatter Plugin
    I’m a big fan of this plugin by Simon Fell. I tend to use a dedicated release manager user, e.g. release.manager@force365.com, standard user license capacity permitting, and perform all deployment tasks in this user context. This approach provides clarity on changes made by a deployment versus actual user and provides an easy way to be notified of failures etc.

Key Principles

  1. Fit-for-purpose Org-set

    – Org-set is the terminology I use to describe the collection of orgs, and their roles, required to deliver a project safely to production.

    – One size does not fit all. Pick the minimum set of org roles required to deliver the project; each org is a time-expensive overhead.

    – Sandbox types. In defining the org-set, factor in the availability of config-only and full-copy sandboxes. The latter must be retained for cases where infrequent refresh is required. Project-level orgs don’t need to be part of the sandbox estate; Developer Edition orgs, or perhaps Partner Developer Edition orgs, can be employed. Full-copy sandboxes are incredibly expensive, valuable resources; use them only when absolutely necessary, for as wide a set of roles as possible.

    – Connected orgs. For projects involving complex integrations, the complexity involved in creating a connected org may influence the org-set design – there may be an argument to consolidate roles onto a single test org used for both QA and UAT perhaps.

  2. Continuous Integration

    A best practice org-set design for non-trivial technical projects with multiple technical contributors should isolate developer activities into separate developer orgs, with a code-level integration org and Continuous Integration (CI) process in place.

  3. Project-level sandboxes are not refreshed

    Project-level orgs are all built from the Git repository. The Pre-production programme-level org must be refreshed from Production pre-deployment to ensure the deployment is verified against the current state.

  4. Commit to the remote project branch is a commitment that metadata is ready for system testing

    Build automation will deploy a project branch commit to the project QA org. In my experience it pays to be prescriptive in terms of development process.

  5. Commit to the remote master branch is a commitment that metadata is ready for integration testing

    Build automation will deploy a master branch commit to the programme INT (integration) org – this org exists to enable rigorous regression testing to be applied by all project workstreams. Post-deployment, suites of automated tests should be invoked and the reports analysed by the test lead on each project.

  6. Test Automation

    It’s a significant resource overhead to execute manual test scripts for each regression test cycle, not to mention error-prone. For non-trivial projects, the investment in automated testing must be made at an early stage. Selenium is a good choice, but the tool utilised doesn’t really matter; what matters is that from the outset of the project the test team start to build up a comprehensive suite of automated test cases covering the key acceptance criteria defined for each user story. These suites then enable automation of regression testing during deployment phases – the same scripts underpin system testing and provide an often overlooked second stage to CI (unit tests + acceptance tests).

  7. GitHub branch design

    – A simple, clean branch design is desirable in the remote repository.

    – Long-lived branches for active project workstreams. Project branches may have sub-branches for each sprint or phase.

    – Long-lived branch for patches. Bug fixes are developed on local branches and committed to the remote support branch when ready for system testing.

    – It can be advisable to consider how important a clean Network Graph is; this is impacted by Git merge versus rebase decisions, as the sketch below illustrates.
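
    For illustration, the basic trade-off between the two (branch names are generic):

    [sourcecode language="text"]
    # merge preserves history – the project branch remains visible as a
    # distinct line in the Network Graph
    git checkout project-a
    git merge master

    # rebase replays the project commits onto the tip of master – a linear,
    # cleaner graph at the cost of rewritten commit history
    git checkout project-a
    git rebase master
    [/sourcecode]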

  8. Build automation challenges

    In a perfect world, all metadata component types would be covered by the Metadata API. This isn’t the case, so the nirvana of simply cloning an org configuration is yet to exist. Instead a prescriptive process is required which spans manual configuration tasks, metadata deletion and build automation.

      – Proactive management of change
      A nominated release manager should proactively manage change at a programme-level, advise the project teams on release process and strive to minimise deployment conflicts through early involvement in all project developments. A change log should be maintained which lists all changes being made. This could include technical component types (ApexClass, ApexTrigger etc.) being added, modified or deleted, but as a minimum must track configuration changes requiring manual action – enablement of features, field data type changes etc. and required standing data (custom settings etc.). All changes should be mapped to a Change Type of manual or automated and a list of orgs to which the change has been deployed tracked. This is clearly an overhead to the project but without control it can be very easy to lose track of the current state of the orgs in use and face significant time expense in attempting to rationalise the situation through failing deployments. The release manager, or technical lead should apply manual tasks to target orgs pre-emptively to minimise automated build failures.

      – Be prepared for build failures
      Automated builds will fail; this is a fact of life where build-dependencies on manual actions exist. Proactive management will only get you so far. Attempting to minimise this is more realistic than elimination.

      – Data
      Automation of data setup in a target org is possible via Ant and the Data Loader CLI, or other similar means – a sketch follows below. Alternatively a data file could be deployed as a document or static resource and then loaded from an Apex script (as per the ISV approach).
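
      A sketch of the Ant side of this, assuming the Data Loader jar is available locally and a process-conf.xml bean named accountInsert has been defined (directory and bean names are illustrative):

      [sourcecode language="xml"]
      <!-- invoke the Data Loader CLI from Ant; paths and process name are illustrative -->
      <target name="load_standing_data">
        <java classname="com.salesforce.dataloader.process.ProcessRunner" fork="true" failonerror="true">
          <classpath>
            <fileset dir="${dataloader.dir}" includes="*.jar" />
          </classpath>
          <sysproperty key="salesforce.config.dir" value="${basedir}/dataloader/conf" />
          <arg value="process.name=accountInsert" />
        </java>
      </target>
      [/sourcecode]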

      – Unsupported metadata component types
      Automation is possible using Selenium scripts, which execute at the UI level and can simulate, for example, a user activating a setting. Such scripts can then be integrated into an automated build. This is highly possible, but takes time and expertise with both Ant and Selenium to accomplish.

  9. Programme-level Integration

    The Converged Programme Model involves project workstreams building in isolated org-sets, with frequent merge-from-master actions bringing across any changes to the production state. This approach should surface conflicts early, i.e. during development itself, but to be sure that shared component changes have not introduced any functional inconsistencies, regression testing must be applied by every project workstream on each occasion any project does a release. This is a strong argument for test automation.

  10. UAT

    – Project-level or programme-level?
      In principle UAT should always be applied at the local project level, as the commit to the programme-level integration org is an absolute commitment that the code is production-ready. In practice UAT may be two-tiered: initial user acceptance of new functionality, followed by some form of secondary acceptance testing in Pre-production, in parallel with deployment verification testing.

  11. Path-to-production Change Management

    As with any programme of work, fit-for-purpose Change Management processes should be in place. In this context that means a Change Advisory Board (CAB) should be in place to approve deployments; this must include informed and empowered representation across business and technical functions.

      – A Deployment Request Form (DRF), or similar, should be produced to document the change being released, the impact, pre and post deployment tasks, GitHub commit # etc., approval date or rejection reason. The DRF could be approved by a convened board or via email response.

      – The DRF process is absolutely required for the final deployment to Production, but may also be applied to the Pre-Production deployment, i.e. the commencement of the final step of the path-to-production release flow.

Salesforce Summer ’13 – Metadata Deployment

Quick post highlighting some Summer ’13 goodness for metadata deployment.

1. Abort a running deployment – This is a massive improvement, enabling failed or inadvertent deployments to be cancelled whilst in progress. Anyone working on large deployments will bear witness to how frustrating it can be to watch a 30-minute deployment run to completion, with a failed unit test occurring after 5 minutes.

The Abort option appears against the running deployment in the Salesforce web UI. There doesn’t appear to be an abort operation via the Migration Tool or underlying Metadata API.

2. User references are maintained – Simply put, where individual user references (email alert workflow actions, running users etc.) exist in the metadata, the deployment process attempts to match the source usernames to existing target usernames by stripping off the suffixes added to sandbox usernames. This is great for sandbox-to-production deployments, but needs to work with sandbox-to-sandbox deployments also; the documentation is unclear on this aspect, although it looks likely that this is supported. Multiple-match or non-match cases result in a deployment error. I’ve previously used Post-Retrieve Modification via Ant to deal with this through simple string substitution.

3. Metadata API Supported Types – New additions include Approval Processes and SAML SSO configurations. The former is long overdue, as approval processes can be extremely time-consuming to recreate manually in a target org. The latter provides a minimal convenience for cases where multiple orgs share the same IdP.
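
By way of illustration, approval processes can now be retrieved by adding the new type to package.xml – a fragment along these lines (assuming wildcard retrieval, API 28.0):

[sourcecode language="xml"]
<!-- package.xml fragment – retrieve all approval processes -->
<types>
    <members>*</members>
    <name>ApprovalProcess</name>
</types>
[/sourcecode]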

External Id Deployment Error

On occasion when deploying components between orgs you may encounter a database-level category of deployment error, bubbling up from the underlying Oracle RDBMS with limited information to support any diagnostic process. The example below is one such case I’ve seen in practice recently. Most concerning about this type of error is that the deployment appears to commit (with the exception of the affected fields) but reports a failure result state; it’s therefore one to take seriously in automated build environments, where undesirable action may be taken on the basis of the build result.

Example Case – Too many external Ids on a single object
Scenario – an object (standard or custom) has 3 custom fields defined as external Ids (this is the soft limit), and an attempt is made to deploy a new object definition with 2 different fields marked as external Ids, making the sum 5. Remember, the deployment won’t remove this attribute from the existing fields. In any such case, where the updated object definition would exceed the limit of 3, the result is as follows:

[sourcecode language="text"]
C:\Program Files (x86)\Jenkins\jobs\..\workspace\build.xml:41: Failed to process the request successfully. Cause(LIMIT_EXCEEDED): LIMIT_EXCEEDED: java.sql.SQLException: ORA-20526:
ORA-06512: at "HAPPY.CCUSTOMINDEX", line 275
ORA-06512: at line 1
: {call cCustomIndex.insert_detail(?,?,?,?,?,?,?,?,?,?,?)}

..or..

C:\Program Files (x86)\Jenkins\jobs\..\build.xml:41: Failed to process the request successfully. Cause(LIMIT_EXCEEDED): LIMIT_EXCEEDED: java.sql.SQLException: ORA-20526:
ORA-06512: at "SLEEPY.CCUSTOMINDEX", line 275
ORA-06512: at line 1
: {call cCustomIndex.insert_detail(?,?,?,?,?,?,?,?,?,?,?)}
[/sourcecode]
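
For reference, the attribute in question as it appears within a field definition in the .object metadata file (field name illustrative):

[sourcecode language="xml"]
<!-- CustomField definition flagged as an external Id -->
<fields>
    <fullName>Legacy_Id__c</fullName>
    <externalId>true</externalId>
    <label>Legacy Id</label>
    <length>30</length>
    <type>Text</type>
    <unique>false</unique>
</fields>
[/sourcecode]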

I’ve also seen this behaviour with deployments that exceed the tracked fields limit in a similar way; however, I’ve been unable to recreate this with the 27.0 Metadata API.

Salesforce Ant Scripts – Selenium

The Salesforce Metadata API is an extremely powerful tool when combined with Ant, Jenkins etc. for build automation. There are, however, a number of configuration items that simply can’t be retrieved and deployed using this API (Account Teams, Support Settings, Lead Settings, Case Assignment and Escalation Rules etc.). The unsupported list can be found here; unfortunately the platform expands at more or less the same rate as API coverage increases.

My point here is that deployments typically have three steps: a manual step to cover the gaps in the Metadata API (pre-requisites), an automated deployment step (retrieve-and-deploy with Ant) and finally a data population step (Data Loader CLI with Ant perhaps). Leaving data to one side (for this post), an ability to merge steps 1 and 2 would enable full automation of the deployment of configuration – which in most cases would be a good thing.

One approach to automating step 1 is to write Selenium web browser automation scripts which drive the Salesforce application at the UI level. The scripts can be exported as JUnit test cases and then incorporated into an Ant-based build process. My approach to doing this is outlined below; as with most things there are many ways to achieve the same result, and I’m sure this can be improved on, however it keeps the process simple and gets the job done, which tends to work for me. Additionally, the approach plays well with Ant, Jenkins/Hudson etc., so it should be straightforward to extend an existing build process.

1. Install the Selenium IDE Firefox Extension.
2. Using Selenium IDE record the act of logging-in to Salesforce and making the required configuration changes.
3. Export the test case as a Java / JUnit 4 / WebDriver file. This creates a .java file as below. The example simply creates a Chatter post for the logged-in user; hopefully this is simple and illustrative enough to make the point.
[sourcecode language="java"]
package com.example.tests;

import java.util.regex.Pattern;
import java.util.concurrent.TimeUnit;
import org.junit.*;
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.Select;

public class SeleniumTest {
    private WebDriver driver;
    private String baseUrl;
    private boolean acceptNextAlert = true;
    private StringBuffer verificationErrors = new StringBuffer();

    @Before
    public void setUp() throws Exception {
        driver = new FirefoxDriver();
        baseUrl = "https://test.salesforce.com/";
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }

    @Test
    public void testSelenium() throws Exception {
        driver.get(baseUrl + "/");
        driver.findElement(By.id("username")).clear();
        driver.findElement(By.id("username")).sendKeys("release.manager@force365.com");
        driver.findElement(By.id("password")).clear();
        driver.findElement(By.id("password")).sendKeys("mypassword");
        driver.findElement(By.id("Login")).click();
        driver.findElement(By.id("publishereditablearea")).clear();
        driver.findElement(By.id("publishereditablearea")).sendKeys("new Chatter post – Selenium");
        driver.findElement(By.id("publishersharebutton")).click();
    }

    @After
    public void tearDown() throws Exception {
        driver.quit();
        String verificationErrorString = verificationErrors.toString();
        if (!"".equals(verificationErrorString)) {
            fail(verificationErrorString);
        }
    }

    private boolean isElementPresent(By by) {
        try {
            driver.findElement(by);
            return true;
        } catch (NoSuchElementException e) {
            return false;
        }
    }

    private String closeAlertAndGetItsText() {
        try {
            Alert alert = driver.switchTo().alert();
            if (acceptNextAlert) {
                alert.accept();
            } else {
                alert.dismiss();
            }
            return alert.getText();
        } finally {
            acceptNextAlert = true;
        }
    }
}
[/sourcecode]

4. Modify the test case java code as required.
5. Download the Java Selenium Client Driver from http://seleniumhq.org/download/
6. Extend or create a new Ant build file to compile and execute the test case. My example below requires a [selenium\src] sub directory structure in the build root, with the .java test case files placed in the src directory.
[sourcecode language="xml"]
<project basedir="." default="usage" name="invoke Selenium script to configure Salesforce org">
    <property name="bin" value=".\selenium\bin" />
    <property name="lib" value="c:\Release Management\selenium-2.28.0\libs" />
    <property name="src" value=".\selenium\src" />
    <property name="report" value=".\selenium\reports" />

    <target name="usage" depends="">
        <echo message="Compiles and executes Selenium IDE exported test cases (source format JUnit4 WebDriver .java files)" />
    </target>

    <target name="init">
        <delete dir="${bin}" />
        <mkdir dir="${bin}" />
    </target>

    <target name="compile" depends="init">
        <!-- requires Selenium test cases exported as JUnit4 WebDriver .java files in the src sub-directory -->
        <javac includeantruntime="false" source="1.7" srcdir="${src}" fork="true" destdir="${bin}">
            <classpath>
                <pathelement path="${bin}" />
                <fileset dir="${lib}">
                    <include name="**/*.jar" />
                </fileset>
            </classpath>
        </javac>
    </target>

    <target name="exec" depends="compile">
        <delete dir="${report}" />
        <mkdir dir="${report}" />
        <mkdir dir="${report}/xml" />

        <junit printsummary="yes" haltonfailure="yes">
            <classpath>
                <pathelement path="${bin}" />
                <fileset dir="${lib}">
                    <include name="**/*.jar" />
                </fileset>
            </classpath>
            <test name="com.example.tests.SeleniumTest" haltonfailure="yes" todir="${report}/xml" outfile="SeleniumTest-result">
                <formatter type="xml" />
            </test>
        </junit>

        <junitreport todir="${report}">
            <fileset dir="${report}/xml">
                <include name="TEST*.xml" />
            </fileset>
            <report format="frames" todir="${report}/html" />
        </junitreport>
    </target>
</project>
[/sourcecode]

Note: there is no need to start or stop a Selenium server, as the script runs locally on the build server. Firefox will be required, however, if you stick with the default browser in recorded scripts.

I’ll follow this initial post with further detail on the following:
1. Conditional script logic – i.e. I want the script to check for a condition before making a change such that it selectively configures and therefore won’t be reliant on a clean, predictable state.
2. Execution of test suites rather than individual cases.
3. Most likely I’ll refine the build.xml example as I understand more about this.

Salesforce Ant Scripts – Post Retrieve Modification

If your deployment process involves manual modification of the metadata files between retrieve and deploy steps, it’s time to consider extending your knowledge of Ant. This is critical for Continuous Integration, where manual processes are anathema. With a small amount of Ant knowledge you can delete metadata files, edit and replace/remove content via regex, copy files into the directory structure, invoke Selenium scripts to perform configuration tasks at the UI level (addressing gaps in the Metadata API perhaps) and so on. In short, understanding the potential of Ant is key to delivering build automation.

One exemplar use case for post-retrieve modification is deploying metadata from orgs with Social Contacts enabled – errors can arise, as below, due to inconsistencies in the retrieval of the SocialPost object and related metadata.

SocialPost-Social Post Layout.layout(SocialPost-Social Post Layout):Parent entity failed to deploy
No Layout named SocialPost-Social Post Layout found

In this use case, to get the metadata to deploy we need to remove profile references to the SocialPost layout and then remove the layout file itself. The example build file below shows how this can be achieved. In addition, sandbox email address suffixes are also updated to match the target sandbox – a fairly common deployment issue with sandboxes and workflow alerts, dashboard running users etc.

Build File – Retrieve Org Metadata, Modify & Deploy to Org
[sourcecode language="xml"]
<project xmlns:sf="antlib:com.salesforce" basedir="." default="deploy_ci" name="org to org">
    <property file="build.properties" />
    <property environment="env" />

    <target name="retrieve_dev" depends="">
        <echo message="retrieving metadata to ${metadata.root}" />
        <sf:retrieve unpackaged="${metadata.root}/package.xml" retrieveTarget="${metadata.root}" singlePackage="true" serverurl="${dev.sf.org.serverurl}" password="${dev.sf.org.password}" username="${dev.sf.org.username}" />
    </target>

    <target name="update_email_address_suffixes" depends="retrieve_dev">
        <echo message="updating email addresses in ${metadata.root}..." />
        <replaceregexp match="${dev.sf.org.suffix}" replace="${ci.sf.org.suffix}" flags="gs" byline="false">
            <fileset dir="${metadata.root}" />
        </replaceregexp>
    </target>

    <target name="remove_social_post_from_profiles" depends="update_email_address_suffixes">
        <echo message="updating profiles to remove Social-Post references in ${metadata.root}..." />
        <replaceregexp match="^ &lt;layoutAssignments&gt;\n &lt;layout&gt;SocialPost-Social Post Layout&lt;/layout&gt;\n &lt;/layoutAssignments&gt;$" replace="" flags="gm" byline="false">
            <fileset dir="${metadata.root}\profiles" includes="**/*.profile" />
        </replaceregexp>
    </target>

    <target name="delete_social_post_files" depends="remove_social_post_from_profiles">
        <echo message="deleting Social-Post related files from ${metadata.root}..." />
        <delete file="${metadata.root}\workflows\SocialPost.workflow" />
        <delete file="${metadata.root}\layouts\SocialPost-Social Post Layout.layout" />
    </target>

    <target name="deploy_ci" depends="delete_social_post_files">
        <echo message="deploying modified metadata from ${metadata.root}..." />
        <sf:deploy singlePackage="true" serverurl="${ci.sf.org.serverurl}" password="${ci.sf.org.password}" username="${ci.sf.org.username}" maxPoll="360" pollWaitMillis="20000" logType="Debugonly" rollbackOnError="true" runAllTests="${ci.sf.org.forcetests}" checkOnly="${ci.sf.org.checkonly}" deployroot="${metadata.root}" />
    </target>
</project>
[/sourcecode]