Salesforce Cross-Organization Data Sharing

As a long-time Salesforce-to-Salesforce (S2S) advocate and interested follower of the Salesforce integration space, I was interested to see the Winter ’14 pilot of cross-organization data sharing, or COD for short. This short post covers the essentials only; for the detail, refer to the linked PDF at the end of the post.

Note, the pilot is currently available for developer edition (DE) orgs exclusively, which gives some indication of where this feature is in terms of production readiness.

What is it?
Hub-and-spoke architecture: the hub provides the metadata definition (selected fields) and data for shared objects; each spoke gets a read-only copy (proxy version) of the object, named with the __xo suffix, e.g. myObj__xo.

Setup
Configuration takes place at:
Data Management – Cross-Organization Data – (Hub | Spoke) Configuration

Connections are established via an invitation email, generated at the Hub, sent to a Spoke administrator and accepted.

Synchronisation takes place via the [Sync with Hub] action on the Spoke Detail page, which also displays the status of the org connection.

Limitations?
Synchronisation is a manual process, in the pilot release at least.
All custom objects can be shared, but Account is the only standard object that can be shared.
Not all standard fields on Account can be shared.
API limits apply, in addition to specific platform limits applicable to COD (for example, Spokes per Hub, concurrent Spoke connections).

Exemplar Use Cases?
Master data sharing in Multi-org architectures.
Reference data sharing (product catalog, price list etc.) with business partners.

Key Differences with S2S?
Uni-directional synchronisation of data.
Spoke data is read-only.
S2S is not limited to the Account standard object (other limits apply).
S2S implies record-level sharing; COD is object-level (sharing permissions notwithstanding).
S2S initial record sharing can be automated (via Apex Trigger).
S2S externally shared records synchronise automatically.

Final Thoughts
COD is definitely an interesting new addition to the native platform integration capabilities. Data sharing between Salesforce customers via ETL tools or file exchange can be error-prone and time-consuming to maintain; COD provides utility in terms of simplification, centralised administration and tracking. Considering the pilot release, S2S remains the best fit for transactional data, whilst COD provides coverage of reference data and some master data scenarios.

References
COD Pilot Guide PDF
COD Overview – Salesforce Help

Integration Architecture Patterns

As an architect I’m generally obsessive about three things: patterns, principles and practices. I could probably add to this list, but I also prefer to keep things simple. This post is concerned with the first P, Patterns – in the integration architecture context. At what level should they be defined and applied? I tend to consider the logical and physical aspects of data integration flows independently. In the logical case, the focus should be on the definition of an end-to-end business process that spans multiple systems; no technology constraint or perspective should be applied to the logical view. In the physical case, the logical view should be considered an input, and a technical view defined in full consideration of the following.

Frequency of integration (batch, near-real-time, real-time)
Bi-directional, versus uni-directional
Multi-lateral, versus bi-lateral or uni-lateral
Volumetrics
Security
Protocols and message formats
Reference data dependencies
Technical constraints (API limits model)
Existing enterprise integration technologies (middleware, ESB)
Future maintenance skill sets (technical versus administrator)

Each physical integration flow definition should not be entirely independent; instead, groupings should be identified and robust integration patterns designed and documented. The solution components for each pattern would then be developed, tested and re-applied wherever possible. The schematic below provides a fictitious example of this approach.

[Schematic: Integration Patterns]

Having a simple set of clearly defined patterns visible to the project team is key, and should be complemented by a project principle that new approaches to physical integration are by exception – nobody has discretion to be creative in this regard. Standardisation is good practice; integration is expensive in terms of technology, implementation time, run cost and maintenance.

Salesforce to Salesforce – A Short Case Study

First of all, let me be clear on one thing: I’m a big advocate for Salesforce-to-Salesforce. For many org-to-org data convergence/integration use cases, S2S is an efficient, cost-effective solution.

Over the last couple of years I’ve had the pleasure of working with a non-profit organisation, via the Salesforce Foundation, on an interesting use case for data integration with Salesforce to Salesforce. I won’t disclose the organisation name or the nature of the data in play; this isn’t directly relevant to the purpose of this post, which is to concisely outline the integration pattern. Hopefully, in considering this short case study, the potential of S2S for multiple-org architectures and other record sharing use cases will be clear. S2S isn’t just for sharing Leads and Opportunities!

Case Study Context
The organisation provides a variety of support services, both directly to individuals and also to other charitable organisations. With respect to individuals, external partners/providers are utilised in service delivery.

In this context Salesforce is implemented as a data hub tracking individuals and the services they receive from external providers by location. This aggregation of data enables a 360 degree (or holistic) view to be taken on the support individuals are receiving. The primary challenge in delivering this view has been the implementation of a consistent and controlled aggregation of data across external providers. To address the consistency aspect the organisation developed a managed package containing the required custom objects for the service provider to populate, and advises on Salesforce implementation. To address the data integration challenge, the initial implementation approach employed middleware technology to extract, transform and load data from the multiple provider orgs into the central hub org. For a number of reasons (including cost, complexity, requisite expertise) the middleware approach to data aggregation didn’t provide a sustainable solution. Having in-house Salesforce expertise, the organisation identified Salesforce to Salesforce as a potential option to deliver a simplified data aggregation solution built on native Salesforce functionality.

Solution Outline
[Schematic: S2S solution outline]

Technical challenges
To understand the requisite implementation steps for S2S, refer to the help documentation. In summary, objects and fields are published from one org and subscribed to in another org in the context of an established partner connection. This configuration takes place within standard functionality using the Connections tab. The partner connection is established through a connection request (via email verification link) initiated from one of the orgs. Once the partner connection, publications and subscriptions are configured, records can be shared manually, with various UI elements added in both orgs to indicate the external sharing status. Note, the relationship between the two orgs within a partner connection is bi-directional; both orgs can define publications and subscriptions.

Whilst S2S fully automates the synchronisation of records between publications and subscriptions, there are a number of areas where complementary technical customisation can be required.

1. Automated record sharing.
S2S requires records to be shared manually; in many cases it is preferable to automate this via an Apex Trigger. The basic example below demonstrates how record Ids can be inserted into the PartnerNetworkRecordConnection standard object to initiate sharing. Note, the PartnerNetworkConnection standard object holds the connection details.

[sourcecode language=”java”]
trigger AccountAfterInsert on Account (after insert) {
    // exit if external sharing is inactive or no connection name is configured (custom setting)
    if (!S2S_Configuration_Setting__c.getInstance().External_Sharing_Active__c) return;
    if (S2S_Configuration_Setting__c.getInstance().Org_Connection_Name__c == null) return;

    String connectionId = S2SConnectionHelper.getConnectionId(S2S_Configuration_Setting__c.getInstance().Org_Connection_Name__c);
    if (connectionId == null) return;

    // share only records that originated locally (not those received via the connection)
    Map<Id,SObject> idToAccount = new Map<Id,SObject>();
    for (Account a : Trigger.new){
        if (a.ConnectionReceivedId == null){
            idToAccount.put(a.Id, a);
        }
    }
    S2SExternalSharingHelper.shareRecords(idToAccount, connectionId, null);
}

public with sharing class S2SExternalSharingHelper {

    public static void shareRecords(Map<Id,SObject> idToSObject, Id connectionId, String parentFieldName){
        try {
            List<PartnerNetworkRecordConnection> shareRecords = new List<PartnerNetworkRecordConnection>();

            for (Id i : idToSObject.keySet()){
                String parentRecordId;

                // for child objects, carry the Id of the (already shared) parent record
                if (parentFieldName != null) {
                    SObject o = idToSObject.get(i);
                    parentRecordId = (String)o.get(parentFieldName);
                }

                PartnerNetworkRecordConnection s = new PartnerNetworkRecordConnection(ConnectionId = connectionId,
                                                                                      LocalRecordId = i,
                                                                                      ParentRecordId = parentRecordId,
                                                                                      SendClosedTasks = false,
                                                                                      SendOpenTasks = false,
                                                                                      SendEmails = false);
                shareRecords.add(s);
            }

            if (shareRecords.size() > 0) insert shareRecords;
        } catch (Exception e){
            //& always add exception handling logic – no silent failures..
        }
    }
}
[/sourcecode]

2. Re-parenting records in the hub org.

Parent-child record relationships must be re-established in the target org; this does not happen automatically. To do this, a custom formula field can be added to the shared child record which contains the parent record identifier as known in the source org – lookup fields can’t be published. This custom formula field (ParentIdAtSource__c, for example) is published to the connection. In the target org this field is mapped in the subscription to a custom field. An Apex Trigger can then be used to look up the correct Id for the parent record as known in the target org and set the related relationship field value. Specifically, the logic should query the PartnerNetworkRecordConnection object for the LocalRecordId which matches the PartnerRecordId value held in the custom field.
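A minimal sketch of such a trigger follows, assuming Contact records re-parented under Account and assuming the mapped custom field in the target org is also named ParentIdAtSource__c; the trigger name and field names are illustrative only, not part of the original solution.

[sourcecode language="java"]
trigger ContactAfterInsertReparent on Contact (after insert) {
    // collect the source-org parent (Account) Ids carried on the shared Contact records
    Set<String> sourceParentIds = new Set<String>();
    for (Contact c : Trigger.new){
        if (c.ConnectionReceivedId != null && c.ParentIdAtSource__c != null){
            sourceParentIds.add(c.ParentIdAtSource__c);
        }
    }
    if (sourceParentIds.isEmpty()) return;

    // map the PartnerRecordId (parent Id in the source org) to the LocalRecordId (parent Id in this org)
    Map<String,Id> partnerIdToLocalId = new Map<String,Id>();
    for (PartnerNetworkRecordConnection pnrc : [SELECT LocalRecordId, PartnerRecordId
                                                FROM PartnerNetworkRecordConnection
                                                WHERE PartnerRecordId IN :sourceParentIds]){
        partnerIdToLocalId.put(pnrc.PartnerRecordId, pnrc.LocalRecordId);
    }

    // records in an after trigger are read-only, so update fresh copies carrying only the lookup value
    List<Contact> contactsToReparent = new List<Contact>();
    for (Contact c : Trigger.new){
        Id localAccountId = partnerIdToLocalId.get(c.ParentIdAtSource__c);
        if (localAccountId != null){
            contactsToReparent.add(new Contact(Id = c.Id, AccountId = localAccountId));
        }
    }
    if (!contactsToReparent.isEmpty()) update contactsToReparent;
}
[/sourcecode]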

3. Record merges in the source org.

In the record merge case, the update to the surviving parent record flows via S2S without issue; the re-parented child records, however, do not. To address this, an Apex Trigger (on delete of the parent record) can be used to “touch” the child records, as shown in the basic example below.

[sourcecode language=”java”]
trigger AccountBeforeDelete on Account (before delete) {
    try {
        if (Trigger.isBefore && Trigger.isDelete) {
            // call @future method to do a pseudo update on the contacts so that the reparenting flows via S2S
            List<Contact> contactsToUpdate = [select Id from Contact where AccountId in :Trigger.old];
            Map<Id,Contact> idToContact = new Map<Id,Contact>();
            idToContact.putAll(contactsToUpdate);

            if (idToContact.keySet().size() > 0) {
                S2SExternalSharingHelper.touchContactsForAccountMerge(idToContact.keySet());
            }
        }
    } catch (System.Exception ex) {
        //& always add exception handling logic – no silent failures..
    }
}

// @future method added to the S2SExternalSharingHelper class shown earlier
@future
public static void touchContactsForAccountMerge(Set<Id> contactIds) {
    // pseudo (no-op) update; partial success allowed so one failure doesn't block the rest
    List<Contact> contactList = [SELECT Id FROM Contact WHERE Id in :contactIds];
    Database.update(contactList, false);
}
[/sourcecode]

4. Data clean-up.

Record deletions are not propagated via S2S; instead, the Status field on the PartnerNetworkRecordConnection record is set to ‘Deleted’ and no further action is taken. A batch process in the target org may add value in flagging such records for attention or automating deletion.
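As a rough illustration only, the Batch Apex sketch below (class name assumed, not part of the original solution) selects record connections with a ‘Deleted’ status and deletes the corresponding local records – flagging them for review would be an equally valid approach.

[sourcecode language="java"]
global class S2SDeletedRecordCleanupBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc){
        // record connections whose source record has been deleted in the publishing org
        return Database.getQueryLocator(
            'SELECT LocalRecordId FROM PartnerNetworkRecordConnection WHERE Status = \'Deleted\'');
    }

    global void execute(Database.BatchableContext bc, List<PartnerNetworkRecordConnection> scope){
        Set<Id> localRecordIds = new Set<Id>();
        for (PartnerNetworkRecordConnection c : scope){
            localRecordIds.add(c.LocalRecordId);
        }
        // delete (or alternatively flag for review) the local copies; partial success allowed
        Database.delete(new List<Id>(localRecordIds), false);
    }

    global void finish(Database.BatchableContext bc){}
}
[/sourcecode]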

5. Unit test coverage.

Unfortunately, the PartnerNetworkConnection object is not creatable (insertable); therefore, unit test code is reliant on the existence of an active connection in the org. The ConnectionReceivedId field on standard and custom objects is also not creatable or updateable, requiring shared records to be in place (and SeeAllData=true) in order to test custom functionality in the target org. Not ideal.
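For illustration, a minimal sketch of a test for the shareRecords helper is shown below; the class and method names are assumptions, and the test relies on an accepted connection already existing in the org.

[sourcecode language="java"]
@isTest(SeeAllData=true) // required: active connections and shared records can't be created in a test context
private class S2SExternalSharingHelperTest {

    static testMethod void testShareRecords(){
        // rely on an active partner connection already existing in the org
        PartnerNetworkConnection conn = [SELECT Id FROM PartnerNetworkConnection
                                         WHERE ConnectionStatus = 'Accepted' LIMIT 1];

        Account a = new Account(Name = 'S2S Unit Test Account');
        insert a;

        Test.startTest();
        S2SExternalSharingHelper.shareRecords(new Map<Id,SObject>{ a.Id => a }, conn.Id, null);
        Test.stopTest();

        // a sharing record should now exist for the account on the connection
        System.assert([SELECT count() FROM PartnerNetworkRecordConnection
                       WHERE LocalRecordId = :a.Id AND ConnectionId = :conn.Id] > 0);
    }
}
[/sourcecode]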

Note, S2S record sharing interactions count against standard API limits and the speed of update is not guaranteed. In my experience updates typically arrive with sub-5-second latency. My understanding is that the underlying synchronisation process runs on a polling schedule, and therefore the speed of update will vary based on the time to the next poll cycle.

Useful Links
An Introduction to Salesforce to Salesforce
Best Practices for Salesforce to Salesforce