Salesforce Identity Architecture

This diagram provides a high-level overview of Salesforce Identity Architecture concepts and platform capabilities. This is the fifth and final part of a Salesforce Architecture reference series covering core application and system architecture topics.

Salesforce Identity and Access Architecture

Salesforce Identity Architecture (PDF)

Miro Board

Salesforce B2C Solution Architecture

This diagram (shared as a Miro board) covers high-level Salesforce B2C Solution Architecture concepts, retail scenarios and Customer 360 (C360) integrated experiences. It is provided as a technical reference for architects preparing for the new Salesforce B2C Solution Architect certification.

Miro Board Link


Salesforce Development Architecture

This diagram provides a high-level overview of Salesforce Development Architecture concepts and platform capabilities. This is the fourth in a five-part Salesforce Architecture reference series covering core application and system architecture topics.

Salesforce Development Architecture (PDF)

Salesforce Integration Architecture

This diagram provides a high-level overview of Salesforce Integration Architecture concepts and platform capabilities. This is the third in a five-part Salesforce Architecture reference series covering core application and system architecture topics.

Salesforce Integration Architecture (PDF)

Salesforce Sharing Architecture

This diagram provides a high-level overview of Salesforce Sharing Architecture concepts and platform capabilities. This is the second in a five-part Salesforce Architecture reference series covering core application and system architecture topics.

Salesforce Sharing Architecture (PDF)

Salesforce Data Architecture

This diagram provides a high-level overview of Salesforce Data Architecture concepts and platform capabilities. This is the first in a five-part Salesforce Architecture reference series covering core application and system architecture topics. I hope this series provides a useful reference for Salesforce Architects or those on the journey toward this role.

Salesforce Data Architecture (PDF)

Salesforce Package Versions

In the new world of second-generation packaging (2GP), capabilities such as Unlocked Packages mean that the technical aspects of packaging are no longer a concern only for ISV developers but are equally applicable to enterprise development. The Salesforce Developer Experience (SFDX) developer guide provides a great reference for those getting started with packaging and covers all the main implementation considerations. One area that isn’t covered is the management of package versions in practice; I call this a package version scheme, an example of which is included below, followed by a sketch of how such a scheme might map to SFDX project configuration and CLI commands.

Package Version Scheme:

Convention: [Major.Minor.Patch.Build] – the Build number is shown after the hyphen, e.g. 1.0.0-1.

  • Pre-release development; 0.1.0-x .. 0.2.9-x
  • First release; 1.0.0-1
  • Post-initial-release bugfixes; 1.0.1-x .. 1.0.3-x
    (Patch number must increment per package version release)
  • Post-initial-release internal builds; 1.1.0-x
    (Only the Build number increments per package version release)
  • Next release (minor enhancements); 1.1.0-x (released with the final build number)
  • Post-minor-release bugfixes; 1.1.1-x .. 1.1.3-x
    (Patch number must increment per package version release)
  • Post-minor-release internal builds; 1.2.0-x
    (Only the Build number increments per package version release)
  • Next release (major enhancements); 2.0.0-1
  • Post-major-release bugfixes; 2.0.1-x .. 2.0.3-x
    (Patch number must increment per release)
  • Post-major-release internal builds; 2.1.0-x
    (Only the Build number increments per release)
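For illustration only, the sketch below shows one way such a scheme might be applied with the 2GP tooling; the package alias, paths and version numbers are assumptions, not part of the scheme itself. The versionNumber attribute in sfdx-project.json uses the NEXT keyword to auto-increment the Build number for each package version created.

//sfdx-project.json (excerpt) - versionNumber drives the [Major.Minor.Patch.Build] scheme
"packageDirectories": [
  {
    "path": "force-app",
    "package": "MyPackage",
    "versionNumber": "1.1.0.NEXT",
    "default": true
  }
]

//CLI create a new package version (internal build); the Build number auto-increments
sfdx force:package:version:create -p "MyPackage" -x -w 10

//CLI promote the final build to released state (illustrative version alias)
sfdx force:package:version:promote -p "MyPackage@1.1.0-4"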

Additional references:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_manpkgs_package_versions.htm
https://developer.salesforce.com/docs/atlas.en-us.sfdx_dev.meta/sfdx_dev/sfdx_dev_intro.htm

Scrum Quality Assurance

More often than not, quality assurance is considered to be an implicit benefit of an agile process, with no specific techniques or practices applied. Integrating effective quality assurance can be one of the most challenging aspects of applying the Scrum framework.

Before considering how quality assurance factors can be applied to the Scrum framework, let’s start with a clear definition.

Quality assurance (QA) is a set of practices which collectively ensure that standards of quality are met (i.e. technical standards) and specified requirements are delivered (customer or user needs or expectations). Standards of quality may include for example measures of correctness, integrity, maintainability or usability. Quality assurance does not mean software testing; this is one element of a quality assurance process. Effective QA means there are limited defects in the first place. The core elements of a QA process include standards, audits, solution governance, testing, defect management (analysis, tracking and resolution) and communication.

Scrum Concepts

The following points describe how QA should be considered in relation to some of the core Scrum concepts.

Developer. In Scrum terms this means any project contributor, such as Software Testers and UI Designers, in addition to Software Developers. The term is an obvious source of confusion and misunderstanding.

Scrum Ceremonies. QA should attend all meetings (Sprint Planning, Product Backlog Grooming, Stand-ups, Reviews and Retrospectives). A team-wide, test-first philosophy is often the driving factor behind high performance Scrum teams. This concept is also referred to as acceptance-test driven development.

Working Software. This means fully tested quality software – not code complete.

Definition of Done. The DoD must include strict criteria for testing and quality.

Customer Satisfaction. Conditions of Satisfaction (or Acceptance Criteria) defined for each User Story drive the quality assurance process.

Scrum Artefacts

The following points describe how QA should be considered in relation to some of the core Scrum artefacts.

User Stories. The User Story is a concise description of a feature told from the perspective of the person who requires it. User Stories are best considered a reminder of a conversation and should be limited to the detail required for effective estimation and prioritisation.

The following (INVEST) factors should be considered in respect to the quality characteristics of a User Story:
Independent – can it be delivered in a single sprint, or do dependencies exist?
Negotiable – the user story is not a specification (reminder of a conversation)
Valuable – is there obvious intrinsic value to the user?
Estimable – can it be estimated (sufficient detail, available knowledge)?
Small – is it sized appropriately (to fit in a sprint)?
Testable – are there appropriate Conditions of Satisfaction – or Acceptance Criteria?

The Conditions of Satisfaction (CoS) are most significant to QA. Each condition is a specific, user-focused requirement for acceptance and should be written using the Given-When-Then format (or heuristic technique) as below.

Given [context] – e.g. I have created a Customer Record and Sales Invoice.
When [event] – e.g. I attempt to enter a Sales Payment above the Sales Invoice Total.
Then [outcome] – e.g. I am warned that the payment exceeds the outstanding amount.

The high-level CoS will drive the team discussion of the User Story (in a test-first approach); a “Design and Test” whiteboard session can be useful in this respect, where the team considers both the design and the testing approach at the same time. Each high-level CoS will then be translated into lower-level (detailed) test cases (technical – unit and integration – or functional).

User Story Test Cases. The Conditions of Satisfaction (CoS) for a given User Story provide a high-level framework for the definition of specific test cases. Such test cases should be structured to address a small, focused end-to-end workflow (or scenario) to avoid interdependencies and should enable collaborative and automated testing. Effective test cases provide a simple set of step-by-step test instructions (or steps). Note, the CoS cover the happy-path scenarios only; test cases must also be developed to cover critical-path, alternate-path, negative-path, technical, validation and edge-case scenarios.

Product Backlog. The Product Backlog is a single, ordered list of requirements (typically but not exclusively User Stories) used by Scrum teams. The list is constantly evolving, business value ordered and estimated by the team – including QA.

The following factors should be considered in respect to the quality characteristics of the Product Backlog:
Prioritisation – are the Product Backlog Items (PBI) accurately prioritised to a consistent scale based on business value, ROI, interdependencies and risk?
Estimation – are the PBI consistently estimated (and benchmarked) by the team?
Well Groomed – is the backlog frequently groomed by the team?

Product Backlog grooming entails User Story discussion by the whole team (led by the Product Owner) with a view to revising and refining estimates and adding detail such as new Conditions of Satisfaction. Regular sessions should be scheduled for this purpose; ideally 10% of the Sprint time should be dedicated to this task. In practice this is rarely the case; however, given the direct relationship between the completeness of the CoS and the efficiency of the quality assurance process, it is important that sufficient time is allowed for effective grooming.

Scrum Practices

The following points describe how QA should be considered in relation to some of the core Scrum practices.

In-Sprint Testing. User Story testing must be completed within the sprint – full stop. The Definition of Done must include this criterion. Testing a sprint behind the development is a common bad practice and directly violates the working software principle; additionally, this approach builds up technical debt and defect debt to be resolved at a later time. Instead, the sprint backlog must be shaped to enable collaborative design, development and testing of each user story within the sprint. High-performing Scrum teams will also automate some of the tests within the sprint. The following simplistic process describes how collaborative sprint development can be applied.

Developers and Testers working collaboratively – Team Swarming:
Write the test case
Write the code
Test together (developers and testers)
Fix the bugs (developers) / Automate the test case (testers)
>> next user story

Test Automation. The ability to automate the execution of test cases (i.e. unit tests and acceptance tests) is critical at scale. In order to quickly identify cases where the current sprint development breaks the code or functionality developed in a previous sprint, it is necessary to execute the related tests efficiently. Test automation also supports evaluation of the impact of changes (what-if analysis), which can help with design decisions. Further QA-related benefits of test automation include the ability to apply load testing (e.g. multiple concurrent users) and performance testing types. Unlike manual testing, which must be conducted in-sprint, test automation can be added outside of the sprint and be developed by a separate team. High-performing Scrum teams will automate critical tests within the sprint.

Test data quality has a significant impact upon the effectiveness of testing. Random data simply doesn’t work, and prefabricated test data is often written to expectations and is not representative. Using production data (real data) for test purposes can violate data protection regulations and comes with the risk of inadvertent communication to customers or suppliers, or of PII or financial information leakage. To mitigate this, an approach that combines real data with obfuscation (scrambling or scrubbing) techniques can produce large-scale, representative test data sets without the risk. This can take significant time to perfect; however, the return on this investment will be high.

Note, in addition to automated tests, a quality test plan still includes manual testing to cover exploratory testing and usability testing types. Automated tests are systematic and unable to gauge the quality of the user experience for example.

Regression Testing. The ability to run test cases developed in previous sprints (and perhaps by different teams) enables the impact of a current development to be understood immediately. This testing practice is referred to as regression testing. Regression testing effort grows over time; as with User Story test cases, regression testing should occur in-sprint and automation should be used wherever possible. It is good practice to reserve time at the end of the sprint to allow the team to complete regression testing; this should also include performance and security test cases. At scale, it is often necessary to be selective in terms of which test cases to focus upon. An effective but contentious practice is to reserve the final sprint for end-to-end regression testing – this sprint is referred to as a hardening sprint; the key risk with this approach is that defect resolution from earlier sprints is deferred to this sprint. The hardening sprint should be driven from the regression test plan, not the product backlog, and be managed by the QA lead.

Post Release Support. Once the product is released – and despite the QA rigour applied – it is inevitable that production defects will arise. Such defects should be reviewed first by the Product Owner to determine priority and relevancy. Where defects are to be addressed immediately (show-stopper bugs etc.) the resolution work should replace similar-sized work in the current sprint. In every respect the defect should be treated in the same manner as a User Story. Defects can also be placed on the Product Backlog for grooming. Where the Scrum team is providing defect resolution, it is good practice to reserve a portion of velocity for production defects; this proactive approach reduces the need to pull User Stories from the current sprint that may be in progress.

Salesforce DX

This post provides a practitioner’s introduction to Salesforce DX (Developer Experience), a new software development lifecycle paradigm for Salesforce. Please note at the time of writing (Spring ’17 – v40.0) Salesforce DX is at public beta status.

Salesforce DX (or SFDX) – What is it?

A new set of tools and practices to accelerate the software development lifecycle on the Salesforce platform. The tools in question enable established (technology agnostic) software development models to be applied to the Salesforce context efficiently; the practices provide guidance in how to do this effectively and at scale.
Many aspects of Salesforce DX have become standard practice over the last few years; source-driven development using tools such as Jenkins and git being a good example. Other aspects such as artifact-based development, the CLI and Scratch Orgs introduce a fundamentally new paradigm.

The Salesforce development model has historically been org-centric in concept, with the entire set of metadata types representing a single monolithic code-base. This conceptual model has benefits in respect to ensuring complex interdependencies are not overlooked and code structure and conventions are considered fully in a standardised approach. In the org-centric model – typically, though not exclusively – a source-code control repository is mapped at the org level and deployments are org-based, not feature- or app-based. Salesforce DX provides the tools and practices to change the development model from org-centric to artifact-based, where an artifact represents a collection of components (an org subset) – developed, tested and deployed together in safe isolation.

SFDX Tools

CLI (Command-line Interface – sfdx)

The new SFDX CLI provides the foundation for Salesforce DX with capabilities that span authorisation, org management, metadata creation, data population, packaging and deployment. The CLI commands are extensive and clearly structured within the top-level force namespace.

command examples:

//CLI create a new project (specific DX folder structure plus config file)
sfdx force:project:create -n <project name>

//CLI authorise against the Dev Hub (OAuth web-based flow, or JWT Bearer flow for CI)
sfdx force:auth:web:login -d -a "Hub Org"

//CLI create a Scratch Org (-s sets it as the default org)
sfdx force:org:create -s -f config/project-scratch-def.json

//CLI push source code from the local project (respects the .forceignore file placed in the root folder)
//Conflicts can be overwritten with --forceoverwrite
//Note - the local project is the source of truth.
sfdx force:source:push

//CLI run unit tests
sfdx force:apex:test:run

A key concept is the local project, which is the primary source of truth. The project structure differs from the Metadata API folder structure, with higher-level metadata types such as Object broken down into sub-folders and individual files for sub-types such as listViews and fields. This new SFDX structure (or source shape) is designed for development and source-code control system interactions, whereas the longstanding Metadata API structure relates primarily to deployment. Conversion between the two formats is necessary when creating a new SFDX project from an existing org (or simply from existing metadata) and prior to deploying an SFDX project to a Salesforce org (not Scratch Orgs). The CLI provides commands to perform the format changes as illustrated below.

//Create DX project > Create folder for metadata files then ..

// Retrieve unpackaged metadata
sfdx force:mdapi:retrieve -r ./mdapipkg -u <username> -k ./package.xml

// Retrieve the metadata for a package
sfdx force:mdapi:retrieve -s -r ./mdapipkg -u <username> -p <package name>

// Convert to SFDX format.
sfdx force:mdapi:convert --rootdir <retrieve folder name>

// Convert to Metadata API format.
sfdx force:source:convert -d <metadata folder name>/ --packagename package_name

Scratch Orgs

A Scratch Org is a temporary org that can be created quickly and configured with predefined settings and preferences (Edition, languages, platform features etc.) via a JSON definition file. Scratch Orgs are easily created and deleted via the CLI; up to 50 can be created per-day (per Dev Hub) with up to 25 active concurrently, deletion occurs automatically after 7 days.
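As a brief illustration, a Scratch Org definition file might look something like the sketch below; the org name, feature and language values are assumptions, not prescriptive settings.

//config/project-scratch-def.json (illustrative values)
{
  "orgName": "Feature Development",
  "edition": "Developer",
  "language": "en_US",
  "features": ["MultiCurrency"]
}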

A Scratch Org is typically used for development and testing purposes and plays an enabling role in build automation. Note, unlike sandboxes, Scratch Orgs are not constrained by a refresh cycle, but also are not a clone of a source org configuration. A typical development workflow will be to create a short-lived feature branch in the Source Code Control system (SCC), create a local sfdx project (from the branch metadata) and Scratch Org, complete development and testing in the Scratch Org, commit change back to SCC and convert and deploy the change to Sandbox. In a Continuous Integration (CI) environment the CI server (Travis CI, Jenkins etc.) may initiate an automated build from the commit event. In such a scenario, headless Scratch Orgs may be used for automated testing (across editions, languages etc.).
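A minimal sketch of this workflow in CLI terms follows; the branch, project and folder names are placeholders for illustration, and the final step assumes the Metadata API format conversion described earlier.

//Create a feature branch, then a local project and Scratch Org
git checkout -b feature/invoice-discounts
sfdx force:project:create -n invoice-discounts
sfdx force:org:create -s -f config/project-scratch-def.json -a "Feature Org"

//Develop and test in the Scratch Org; move change with push and pull
sfdx force:source:push
sfdx force:source:pull

//Commit back to SCC, then convert and deploy the change to a Sandbox
git commit -am "invoice discounts feature"
sfdx force:source:convert -d ./mdapi_output
sfdx force:mdapi:deploy -d ./mdapi_output -u <sandbox username> -w 10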

The SFDX CLI provides commands for automating the population of data in a Scratch Org. This can be achieved via CSV file loading or, more interestingly, via org-to-org copy using SOQL query export and subsequent import.
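For example (a hedged sketch – the object names, query, org alias and file paths are assumptions):

//CLI load data from a CSV file into the default Scratch Org
sfdx force:data:bulk:upsert -s Account -f ./data/accounts.csv -i Id -w 10

//CLI export a record tree from a source org via SOQL query, then import it into the Scratch Org
sfdx force:data:tree:export -q "SELECT Name, (SELECT LastName FROM Contacts) FROM Account" -u DevOrg -d ./data --plan
sfdx force:data:tree:import -p ./data/Account-Contact-plan.json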

Finally, a key feature of the CLI is the automated tracking of change (the diff) between the current local project and the default Scratch Org. The CLI push and pull commands that facilitate the movement of change between the local project and the Scratch Org automatically process changed components only.
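The tracked diff can be inspected at any point; for example (assuming a default Scratch Org is set):

//CLI view local and remote changes tracked against the default Scratch Org
sfdx force:source:status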

Dev Hub

The SFDX CLI must authorise to a Dev Hub before Scratch Orgs can be created; each Scratch Org is related to a parent Dev Hub. The Dev Hub is a production Salesforce org (Enterprise Edition or Unlimited Edition) – the feature can be activated from the Setup menu. A Dev Hub trial org can be created to enable safe experimentation with SFDX. Dev Hub trial orgs expire after 30 days and are limited (40 Scratch Orgs created per day, 20 active concurrently); why this is subtly different from the standard limits is unclear.

Authorisation of the Dev Hub org (and indeed any org) by the SFDX CLI is possible via the OAuth Web Flow or the JWT-based Flow. The latter is preferred for system authentications such as those conducted by a CI server.
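A hedged example of the JWT-based flow is shown below; the Connected App consumer key, certificate key file and username are placeholders.

//CLI authorise the Dev Hub via the JWT Bearer flow (suitable for CI servers)
sfdx force:auth:jwt:grant --clientid <consumer key> --jwtkeyfile ./certs/server.key --username <devhub username> --setdefaultdevhubusername -a "Hub Org"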

For ISVs (and others) working with namespaces, the Dev Hub provides a Namespace Registry capability which (as the name suggests) allows Developer Editions containing namespaces to be registered. Once the org link is complete, Scratch Orgs can be created as normal. This capability is great for ISVs developing in a namespace-registered org (as opposed to utilising a separate packaging org to apply the namespace).

IDE2

The new Force.com IDE2 is an Eclipse plug-in built on the SFDX CLI. Additional features of the IDE2 include the Lightning Code Analyser and Apex Code Analysis tools. The former wraps the capabilities of ESLint and the Salesforce Lightning CLI to evaluate JavaScript code against standard or custom rule sets. The latter covers syntax error detection and highlighting plus code auto-completion. The IDE2 also provides an embedded browser for editing Lightning Pages.

On initial inspection there didn’t appear to be any means to directly log in to, or import metadata from, an existing org. Presumably this must be done externally via the SFDX CLI.

Finally, the new IDE2 provides integrated tools for working with Git repositories, a key facet of the envisaged developer workflows centred around Salesforce DX.

Summary

At the time of writing, the available SFDX documentation relates primarily to the technical aspects of the tools; there is minimal information relating to process guidance (or practices). Whilst it’s relatively easy to envisage how the new tools and techniques could be integrated into development, test and deployment workflows, best-practice guidance in such areas would certainly provide confidence and accelerate adoption.

Historically, a key sticking point for source-based development (and deployment in general) in the Salesforce context has been Metadata API coverage. In short, not all metadata types are supported and therefore automated deployments (whether Change Set or API based) are typically accompanied by pre- and post-deployment steps that manually address the gaps. As there’s no significant change in this regard, it’s likely that Scratch Orgs will require the same approach – however, considering that Scratch Org use cases will typically be code related, this may have limited actual impact.

In general, however, Salesforce DX is a considerable step forward in terms of applying established software development lifecycle practices to the Salesforce context. The CLI alone is incredibly impressive and provides significant flexibility in respect to development, testing and release process. It will be interesting to see how tools vendors react to the availability of the CLI as a foundation for future, evolved IDE capabilities. Hopefully Salesforce DX represents the first step of many in this direction.

Resources

Salesforce DX Developer Guide (Beta)
Salesforce DX on Trailhead
Salesforce DX Chatter Group

Salesforce ISV Environment Strategy

salesforce-isv-environment-strategy-all

Please note – This content was previously posted on the audit9.com site directly and is re-published here as part of the consolidation of 2 blogs.

This post provides an overview of the various environments involved in delivering ISV solutions to the Salesforce AppExchange. As will be obvious to architects working outside of the ISV domain, the ISV environment strategy has some commonality with the non-ISV approach in respect to development and testing activities, but also introduces a further level of complexity and environment roles unique to the AppExchange distribution model.

Primary Org Definitions

Packaging Org

For small-scale, low-complexity managed package developments, the packaging and development orgs may be combined.

1. Package/Patch Upload
The Packaging Org is where the Managed Package is created and its namespace prefix defined. Package uploads in either Managed Beta or Managed Release state are initiated from this environment.

2. Patch Development Org – PDO
PDOs are short-lived orgs created specifically for Patch development. Only certain components can be modified in a PDO. Once ready for distribution, a Patch can be uploaded and pushed to Subscriber Orgs.

3. Push Upgrades
Uploaded Major Releases and Patches can be scheduled for push distribution to selected (or all) Subscriber Orgs. Package subscribers have no control over whether to accept pushed changes and as such it is not advisable (unless with tacit agreement) to push changes that impact the end user experience in any material way.

4. Localisation
Translation Workbench based localisation of internationalised resources.

Partner Business Org

A Partner Business Org can be provisioned upon partnership agreement with Salesforce or the same environment roles can be established within an existing production org.

1. Environment Hub
The Environment Hub enables partner orgs to be created for Development, Test and Trialforce usage. Single Sign-on can be implemented across connected orgs. A hub member org can only be connected to one hub at any given point in time. Where Trialforce is being implemented, the TMO and Packaging Org must be connected to the same Environment Hub. The Environment Hub can create TSO directly, although bypassing a TMO prevents implementation of custom branding.

2. AppExchange Publishing Org – APO
This org role is for AppExchange management and is where Private and Public Listings and the Publisher Profile are stored.

3. Licence Management Org – LMO
This org role relates to the License Management App, a custom app installed by Salesforce which enables management of Packages, Package Versions, Licenses and Subscriber data resulting from AppExchange interactions. The LMO is also the entry point for Subscriber Org Login Access.

4. Channel Orders Application – COA
This org role relates to the Channel Orders app (also termed the Partner Order app), a custom app installed by Salesforce which supports automation of Service Order submissions (relating to AppExchange sales) to Salesforce.

5. Usage Metrics Reporting
Usage Metrics provides a daily update of statistics relating to Custom Object record counts (snapshot) and Visualforce Page access and performance (summaries) across all production Subscriber Orgs. No opt-in is required for Package subscribers. The usage data is written to objects that are API-accessible only; a visualisation app is available on the AppExchange to provide convenient access. Usage Metrics provides the basis for monitoring usage patterns and a proactive customer retention process.

Trialforce Orgs

AppExchange listings can offer customers the ability to trial an app within a new, pre-configured Trial Org (package, data and users). This highly valuable capability is implemented with Trialforce. Trial organisations can also be created directly from Signup forms hosted on an external website or via the API. In addition to offering free trials the Trialforce solution provides a convenient option for the quick creation of pre-configured orgs for QA or demonstration purposes.

1. Trialforce Management Org – TMO
An optional environment used for centralised management of TSO for different packages. TSO can also be created directly from the Environment Hub; however, this direct approach does not support custom branding. The TMO approach does enable custom branding for Login Pages and Email Templates.

2. Trialforce Source Org – TSO
The TSO is where the package is installed (or upgraded) and configured to reflect the trial experience (user profiles, sample data etc.). A snapshot is taken of the Source Org state to create a template. Note, TSO expire after a 12-month period unless a partner case is raised to request an extension. TSO are linked to the AppExchange Publishing Console manually; once a link is in place, templates are listed on the Trial Templates tab.

3. Trialforce Template.
Templates listed in the Publishing Console can be submitted for Security Review. This review takes less time than the Package Security Review. Once approved, a template can be connected to an AppExchange listing, at which point the link to start a free trial appears on the listing.

Note, each time a package is updated the TSO must be upgraded and a new template created, reviewed and connected to the AppExchange listing.

The ISVforce Guide provides further detail on the environments listed in the preceding sections.