Salesforce Cross-Organization Data Sharing

As a long-time Salesforce-to-Salesforce (S2S) advocate and interested follower of the Salesforce integration space, the Winter ’14 pilot of cross-org data sharing, or COD for short, caught my attention recently. This short post covers the essentials only; for the details, refer to the linked PDF at the end of the post.

Note, the pilot is currently available for Developer Edition (DE) orgs exclusively, which gives some indication of where this feature is in terms of production readiness.

What is it?
Hub-and-spoke architecture: the hub provides the metadata definition (selected fields) and data for shared objects; the spoke gets a read-only copy (proxy version) of each object, named with the __xo suffix, e.g. myObj__xo.
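As a minimal sketch of what this looks like at the spoke (assuming a custom object named myObj has been shared from the hub and accepted), the proxy object can be queried like any other SObject, though it remains read-only:

```apex
// Spoke org: query the read-only proxy copy of a hub-shared object.
// The object name is illustrative - proxy objects carry the __xo suffix.
List<myObj__xo> sharedRecords = [SELECT Id, Name FROM myObj__xo LIMIT 200];
System.debug('Shared records received from hub: ' + sharedRecords.size());
```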

Setup Configuration takes place at:
Data Management – Cross-Organization Data – (Hub | Spoke) Configuration

Connections are established via an invitation email, generated at the Hub, sent to a Spoke administrator and accepted.

Synchronisation takes place via the [Sync with Hub] action on the Spoke Detail page, which also displays the status of the org connection.

Synchronisation is a manual process, in the pilot release at least.
All custom objects can be shared, but Account is the only standard object that can be.
Not all standard fields on Account can be shared.
API limits apply, in addition to specific platform limits applicable to COD (for example, Spokes per Hub, concurrent Spoke connections).

Exemplar Use cases?
Master data sharing in Multi-org architectures.
Reference data sharing (product catalog, price list etc.) with business partners.

Key Differences with S2S?
Uni-directional synchronisation of data.
Spoke data is read-only.
S2S is not limited to the Account standard object (other limits apply).
S2S implies record-level sharing, whereas COD is object-level (sharing permissions notwithstanding).
S2S initial record sharing can be automated (via ApexTrigger).
S2S externally shared records synchronise automatically.

Final Thoughts
COD is definitely an interesting new addition to the native platform integration capabilities. Data sharing between Salesforce customers via ETL tools or file exchange can be error-prone and time-consuming to maintain. COD provides utility in terms of simplification, centralised administration and tracking. Considering the pilot release, S2S remains the best fit for transactional data, whilst COD provides coverage of reference data and some master data scenarios.

COD Pilot Guide PDF
COD Overview – Salesforce Help

Salesforce to Salesforce – A Short Case Study

First of all, let me be clear on one thing: I’m a big advocate for Salesforce-to-Salesforce. For many org-to-org data convergence/integration use cases, S2S is an efficient, cost-effective solution.

Over the last couple of years I’ve had the pleasure of working with a non-profit organisation, via the Salesforce Foundation, on an interesting use case for data integration with Salesforce to Salesforce. I won’t disclose the organisation name or the nature of the data in play; this isn’t directly relevant to the purpose of this post, which is to concisely outline the integration pattern. Hopefully, in considering this short case study, the potential of S2S for multiple-org architectures and other record sharing use cases will be clear. S2S isn’t just for sharing Leads and Opportunities!

Case Study Context
The organisation provides a variety of support services, both directly to individuals and also to other charitable organisations. With respect to individuals, external partners/providers are utilised in service delivery.

In this context Salesforce is implemented as a data hub, tracking individuals and the services they receive from external providers by location. This aggregation of data enables a 360-degree (or holistic) view to be taken of the support individuals are receiving. The primary challenge in delivering this view has been the implementation of a consistent and controlled aggregation of data across external providers.

To address the consistency aspect, the organisation developed a managed package containing the required custom objects for the service provider to populate, and advises on Salesforce implementation. To address the data integration challenge, the initial implementation approach employed middleware technology to extract, transform and load data from the multiple provider orgs into the central hub org. For a number of reasons (including cost, complexity and requisite expertise) the middleware approach to data aggregation didn’t provide a sustainable solution. Having in-house Salesforce expertise, the organisation identified Salesforce to Salesforce as a potential option to deliver a simplified data aggregation solution built on native Salesforce functionality.

Solution Outline
[Solution architecture diagram]

Technical challenges
To understand the requisite implementation steps for S2S, refer to the help documentation. In summary, objects and fields are published from one org and subscribed to in another org in the context of an established partner connection. This configuration takes place within standard functionality using the Connections tab. The partner connection is established through a connection request (via email verification link) initiated from one of the orgs. Once the partner connection, publications and subscriptions are configured, records can be shared manually, with various UI elements added in both orgs to indicate the external sharing status. Note, the relationship between the two orgs within a partner connection is bi-directional: both orgs can define publications and subscriptions.

Whilst S2S fully automates the synchronisation of records between publications and subscriptions, there are a number of areas where complementary technical customisation can be required.

1. Automated record sharing.
S2S requires records to be shared manually; in many cases it is preferable to automate this via an Apex Trigger. The basic example below demonstrates how record Ids can be inserted into the PartnerNetworkRecordConnection standard object to initiate sharing. Note, the PartnerNetworkConnection standard object holds the connection details.

trigger AccountAfterInsert on Account (after insert) {
  if (!S2S_Configuration_Setting__c.getInstance().External_Sharing_Active__c) return;
  if (S2S_Configuration_Setting__c.getInstance().Org_Connection_Name__c == null) return;

  String connectionId = S2SConnectionHelper.getConnectionId(S2S_Configuration_Setting__c.getInstance().Org_Connection_Name__c);
  if (connectionId == null) return;

  Map<Id,SObject> idToAccount = new Map<Id,SObject>();
  for (Account a : Trigger.new){
    // share outbound records only - skip records received via the connection.
    if (a.ConnectionReceivedId == null){
      idToAccount.put(a.Id, a);
    }
  }
  S2SExternalSharingHelper.shareRecords(idToAccount, connectionId, null);
}

public with sharing class S2SExternalSharingHelper {
  public static void shareRecords(Map<Id,SObject> idToSObject, Id connectionId, String parentFieldName){
    try {
      List<PartnerNetworkRecordConnection> shareRecords = new List<PartnerNetworkRecordConnection>();

      for (Id i : idToSObject.keySet()){
        String parentRecordId;
        if (parentFieldName != null){
          SObject o = idToSObject.get(i);
          parentRecordId = (String)o.get(parentFieldName);
        }
        PartnerNetworkRecordConnection s = new PartnerNetworkRecordConnection(ConnectionId = connectionId,
                                                                             LocalRecordId = i,
                                                                             ParentRecordId = parentRecordId,
                                                                             SendClosedTasks = false,
                                                                             SendOpenTasks = false,
                                                                             SendEmails = false);
        shareRecords.add(s);
      }
      if (shareRecords.size() > 0) insert shareRecords;
    } catch (Exception e){
      //& always add exception handling logic - no silent failures..
    }
  }
}

2. Re-parenting records in the hub org.

Parent-child record relationships must be re-established in the target org; this does not happen automatically. To do this, a custom formula field can be added to the shared record containing the parent record identifier as known in the source org (lookup fields can’t be published). This custom formula field (ParentIdAtSource__c, for example) is published to the connection. In the target org this field is mapped in the subscription to a custom field. An Apex Trigger can then be used to look up the Id of the parent record as known in the target org and set the relationship field accordingly. Specifically, the logic should query the PartnerNetworkRecordConnection object for the record whose PartnerRecordId matches the value held in the custom field, and take its LocalRecordId.
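The re-parenting logic can be sketched as below. This is a hypothetical example: the Account/Contact relationship and the ParentIdAtSource__c mapping field are illustrative assumptions, not the organisation's actual objects.

```apex
// Target org: re-parent incoming shared Contacts under the correct local Account.
// Assumes the subscription maps the published formula field to ParentIdAtSource__c.
trigger ContactBeforeInsertS2S on Contact (before insert) {
  Set<String> sourceParentIds = new Set<String>();
  for (Contact c : Trigger.new){
    if (c.ParentIdAtSource__c != null) sourceParentIds.add(c.ParentIdAtSource__c);
  }
  if (sourceParentIds.isEmpty()) return;

  // Map the parent Id as known at the source org (PartnerRecordId)
  // to the local record Id in this org (LocalRecordId).
  Map<String,Id> sourceIdToLocalId = new Map<String,Id>();
  for (PartnerNetworkRecordConnection pnrc : [SELECT LocalRecordId, PartnerRecordId
                                              FROM PartnerNetworkRecordConnection
                                              WHERE PartnerRecordId IN :sourceParentIds]){
    sourceIdToLocalId.put(pnrc.PartnerRecordId, pnrc.LocalRecordId);
  }

  for (Contact c : Trigger.new){
    Id localParentId = sourceIdToLocalId.get(c.ParentIdAtSource__c);
    if (localParentId != null) c.AccountId = localParentId;
  }
}
```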

3. Record merges in the source org.

In the record merge case the update to the surviving parent record flows via S2S without issue, re-parented child records however do not. To address this an Apex Trigger (on delete of the parent record) can be used to “touch” the child records as shown in the basic example below.

trigger AccountBeforeDelete on Account (before delete) {
  try {
    if (Trigger.isBefore && Trigger.isDelete) {
      // call @future method to do a pseudo update on the contacts so that the reparenting flows via S2S
      List<Contact> contactsToUpdate = [select Id from Contact where AccountId in :Trigger.old];
      Map<Id,Contact> idToContact = new Map<Id,Contact>(contactsToUpdate);

      if (idToContact.keySet().size() > 0) {
        S2SMergeHelper.touchContactsForAccountMerge(idToContact.keySet()); // helper class name illustrative
      }
    }
  } catch (System.Exception ex) {
    //& always add exception handling logic - no silent failures..
  }
}

public with sharing class S2SMergeHelper {
  @future
  public static void touchContactsForAccountMerge(Set<Id> contactIds) {
    List<Contact> contactList = [SELECT Id FROM Contact WHERE Id IN :contactIds];
    // pseudo update - no field changes needed, the update itself causes the
    // re-parented contacts to flow via S2S
    update contactList;
  }
}

4. Data clean-up.

Record deletions are not propagated via S2S; instead, the Status field in the PartnerNetworkRecordConnection object is set to ‘Deleted’ with no further action taken. A batch process in the target org may add value in flagging such records for attention or automating deletion.
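A minimal sketch of such a batch process follows. This is hypothetical: the flag field and the deletion policy are assumptions, not part of the original solution.

```apex
// Target org: batch job identifying records whose source record was deleted.
public with sharing class S2SDeletedRecordCleanupBatch implements Database.Batchable<SObject> {
  public Database.QueryLocator start(Database.BatchableContext bc){
    // Connection records whose source record has been deleted.
    return Database.getQueryLocator([SELECT LocalRecordId
                                     FROM PartnerNetworkRecordConnection
                                     WHERE Status = 'Deleted']);
  }
  public void execute(Database.BatchableContext bc, List<PartnerNetworkRecordConnection> scope){
    Set<Id> deletedAtSourceIds = new Set<Id>();
    for (PartnerNetworkRecordConnection pnrc : scope) deletedAtSourceIds.add(pnrc.LocalRecordId);
    // Flag the corresponding local records for attention, e.g. set a
    // Deleted_At_Source__c checkbox (assumed field), or delete them after review.
  }
  public void finish(Database.BatchableContext bc){}
}
```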

5. Unit test coverage.

Unfortunately, the PartnerNetworkConnection object is not createable (insertable); therefore unit test code is reliant on the existence of an active connection in the org. The ConnectionReceivedId field on standard and custom objects is also not createable or updateable, requiring shared records to be in place (and SeeAllData=true) in order to test custom functionality in the target org. Not ideal.
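A skeleton of what a test can look like under these constraints (a sketch only; the test class name, guard logic and assertion are illustrative assumptions):

```apex
@isTest(SeeAllData=true) // required - connection records cannot be created in tests
private class S2SExternalSharingHelperTest {
  static testMethod void testShareRecords(){
    // Relies on an active partner connection existing in the org.
    List<PartnerNetworkConnection> connections = [SELECT Id FROM PartnerNetworkConnection
                                                  WHERE ConnectionStatus = 'Accepted' LIMIT 1];
    if (connections.isEmpty()) return; // no connection in the org - nothing to test

    Account a = new Account(Name = 'S2S Test Account');
    insert a;

    Test.startTest();
    S2SExternalSharingHelper.shareRecords(new Map<Id,SObject>{ a.Id => a },
                                          connections[0].Id, null);
    Test.stopTest();

    System.assertEquals(1, [SELECT count() FROM PartnerNetworkRecordConnection
                            WHERE LocalRecordId = :a.Id]);
  }
}
```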

Note, S2S record sharing interactions count against standard API limits, and speed of update is not guaranteed. In my experience, updates typically arrive with sub-5-second latency. My understanding is that the underlying synchronisation process runs on a polling schedule, and therefore the speed of update will vary based on the time until the next poll cycle.

Useful Links
An Introduction to Salesforce to Salesforce
Best Practices for Salesforce to Salesforce

Salesforce Org Architecture

The figure above shows a complex multiple-org architecture (Hub-and-Spoke model). I’ll return to the drivers for multiple-org versus single-org architectures in a future post. For now, let’s consider some interesting aspects of the above scenario.

SSO : users log in with their Active Directory credentials, the CORPORATE org acting as a SAML 2.0 Service Provider to the AD Identity Provider. The CORPORATE org is also an Identity Provider, enabling SSO across all child orgs (which act as Service Providers).

Managed Packages : versioned baseline functionality. It’s often the case that certain configuration elements are common across orgs in a multi-org architecture. A best practice is to distribute this metadata as a managed package, thereby preventing local modification. The business owners of the client org are free to innovate in their org, but the baseline configuration is locked (possibly to ensure compatibility with data sharing processes). Managed packages are not just for ISVs.

Salesforce-to-Salesforce : data sharing (automated or manual). S2S is a very underrated technology, enabling bi-directional, selective sharing of data between orgs. A great fit for multi-org architectures where common data can be shared across all orgs, or partitioned (geographically, business type etc.) and perhaps consolidated at the CORPORATE level.

External Execution Environment : complex, off-platform processing (perhaps legacy components written in Java). Salesforce orgs are subject to execution limits (governor limits etc.); whilst these become less restrictive with each release, there are times when an external execution environment can be helpful. For example, a payroll calculation engine (written in Java and used within the enterprise) could be deployed to Heroku and called via Apex. Personally, I look to repurpose or buy technology before coding anything – the ability to assemble a solution should not be overlooked.
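As an illustrative sketch of the callout pattern (the endpoint URL and payload shape are assumptions; the endpoint must also be registered as a Remote Site Setting before the callout is permitted):

```apex
// Call an externally hosted calculation engine (e.g. on Heroku) from Apex.
public with sharing class PayrollEngineClient {
  public static String calculate(String payloadJson){
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://payroll-engine.example.herokuapp.com/calculate'); // assumed URL
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/json');
    req.setBody(payloadJson);

    HttpResponse res = new Http().send(req);
    return res.getBody();
  }
}
```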