Salesforce ISV Environment Strategy

Please note – This content was previously posted on the audit9.com site directly and is re-published here as part of the consolidation of 2 blogs.

This post provides an overview of the various environments involved in delivering ISV solutions to the Salesforce AppExchange. As will be apparent to architects working outside of the ISV domain, the ISV environment strategy shares some common ground with the non-ISV approach in respect of development and testing activities, but also introduces a further level of complexity and environment roles unique to the AppExchange distribution model.

Primary Org Definitions

Packaging Org

For small-scale, low-complexity managed package developments, the packaging and development orgs may be combined.

1. Package/Patch Upload
The Packaging Org is where the Managed Package is created and its namespace prefix defined. Package uploads in either Managed Beta or Managed Release state are initiated from this environment.

2. Patch Development Org – PDO
PDOs are short-lived orgs created specifically for Patch development. Only certain components can be modified in a PDO. Once ready for distribution, a Patch can be uploaded and pushed to Subscriber Orgs.

3. Push Upgrades
Uploaded Major Releases and Patches can be scheduled for push distribution to selected (or all) Subscriber Orgs. Package subscribers have no control over whether to accept pushed changes and as such it is not advisable (unless with tacit agreement) to push changes that impact the end user experience in any material way.
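
The push mechanics can also be driven programmatically. A minimal sketch in anonymous Apex follows, assuming API access to the Push Upgrade API objects (PackagePushRequest, PackagePushJob) in the packaging org; the subscriber org Id is a placeholder.

[sourcecode language="java"]
// Hedged sketch: schedule a push upgrade via the Push Upgrade API objects.
// Select the latest released version of the package.
MetadataPackageVersion version = [SELECT Id FROM MetadataPackageVersion
                                  WHERE ReleaseState = 'Released'
                                  ORDER BY CreatedDate DESC LIMIT 1];

PackagePushRequest request = new PackagePushRequest(
    PackageVersionId = version.Id,
    ScheduledStartTime = Datetime.now().addHours(1));
insert request;

// One job per target Subscriber Org; '00Dxx0000000001' is a placeholder org Id.
insert new PackagePushJob(
    PackagePushRequestId = request.Id,
    SubscriberOrganizationKey = '00Dxx0000000001');

// Setting the request status to Pending queues it for execution.
request.Status = 'Pending';
update request;
[/sourcecode]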

4. Localisation
Translation Workbench-based localisation of internationalised resources.

Partner Business Org

A Partner Business Org can be provisioned upon partnership agreement with Salesforce or the same environment roles can be established within an existing production org.

1. Environment Hub
The Environment Hub enables partner orgs to be created for Development, Test and Trialforce usage. Single Sign-on can be implemented across connected orgs. A hub member org can only be connected to one hub at any given point in time. Where Trialforce is being implemented, the TMO and Packaging Org must be connected to the same Environment Hub. The Environment Hub can create TSOs directly, although bypassing a TMO prevents implementation of custom branding.

2. AppExchange Publishing Org – APO
This org role is for AppExchange management and is where Private and Public Listings and the Publisher Profile are stored.

3. Licence Management Org – LMO
This org role relates to the License Management App, a custom app installed by Salesforce which enables management of Packages, Package Versions, Licenses and Subscriber data resulting from AppExchange interactions. The LMO is also the entry point for Subscriber Org Login Access.
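
For illustration, a minimal sketch of querying license data via the LMA objects follows, assuming the LMA's sfLma namespace; the field API names are assumptions and should be verified against the installed LMA version.

[sourcecode language="java"]
// Hedged sketch: query active licenses nearing expiry from the LMA objects.
// Field API names (sfLma__Status__c, sfLma__Expiration__c) are assumptions.
List<sfLma__License__c> expiring = [
    SELECT Id, Name, sfLma__Status__c, sfLma__Expiration__c
    FROM sfLma__License__c
    WHERE sfLma__Status__c = 'Active'
      AND sfLma__Expiration__c = NEXT_N_DAYS:30];

System.debug(LoggingLevel.ERROR, expiring.size() + ' licenses expire within 30 days.');
[/sourcecode]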

4. Channel Orders Application – COA
This org role relates to the Channel Orders app (also termed the Partner Order app), a custom app installed by Salesforce which supports automation of Service Order submissions (relating to AppExchange sales) to Salesforce.

5. Usage Metrics Reporting
Usage Metrics provides a daily update of statistics relating to Custom Object record counts (snapshot) and Visualforce Page access and performance (summaries) across all production Subscriber Orgs. No opt-in is required from Package subscribers. The usage data is written to objects that are API accessible only; a visualisation app is available on the AppExchange to provide convenient access. Usage Metrics provide the basis for monitoring usage patterns and for a proactive customer retention process.

Trialforce Orgs

AppExchange listings can offer customers the ability to trial an app within a new, pre-configured Trial Org (package, data and users). This highly valuable capability is implemented with Trialforce. Trial organisations can also be created directly from Signup forms hosted on an external website or via the API. In addition to offering free trials the Trialforce solution provides a convenient option for the quick creation of pre-configured orgs for QA or demonstration purposes.

1. Trialforce Management Org – TMO
An optional environment used for centralised management of TSOs for different packages. TSOs can also be created directly from the Environment Hub; however, this direct approach does not support custom branding. The TMO approach does enable custom branding for Login Pages and Email Templates.

2. Trialforce Source Org – TSO
The TSO is where the package is installed (or upgraded) and configured to reflect the trial experience (user profiles, sample data etc.). A snapshot is taken of the Source Org state to create a template. Note, TSOs expire after a 12-month period unless a partner case is raised to request an extension. TSOs are linked to the AppExchange Publishing Console manually; once a link is in place, templates are listed on the Trial Templates tab.

3. Trialforce Template
Templates listed in the Publishing Console can be submitted for Security Review. This review takes less time than the Package Security Review. Once approved, a template can be connected to an AppExchange listing, at which point the link to start a free trial appears on the listing.

Note, each time a package is updated the TSO must be upgraded and a new template created, reviewed and connected to the AppExchange listing.

The ISVforce Guide provides further detail on the environments listed in the preceding sections.

Salesforce Standard Reporting (3 of 3) – Best Practices

This final post in the Salesforce Standard Reporting series outlines a selection of best practice concepts and techniques to be considered in the delivery of on-platform reporting solutions that maximise the value of the standard reporting tools.

The key message of this series is simple; a solid understanding of how the Salesforce standard reporting tools work in practice, and the reporting patterns supported, can avoid the necessity to employ additional, expensive off-platform reporting tools. The earlier this thinking is applied, the greater the likelihood of success.

Best Practice Considerations

In no significant order, the following points provide a non-exhaustive selection for consideration.

  • Solution Design
  • Perhaps the most obvious point, but equally a common oversight, is the consideration of key performance indicators during the solution design phase. This approach shouldn’t entail full coverage of all reporting requirements; instead, a selection of exemplar reporting use cases that represent a broad set of required outputs should be documented in clear, concise terms and factored into the overarching solution design. This early consideration mitigates the inherent risk of the classic “reporting workshop” held once the solution design is no longer emergent, the impact of which can often be the introduction of off-platform reporting solutions with their cost and security implications.

  • Data Model
  • A Salesforce data model, in physical terms, will not necessarily comply with standard relational database normalisation principles; other factors must be considered. The sharing model is one such consideration, where record access or indeed object access permissions mandate a deviation from standard third normal form. Reporting is another significant consideration; constraints on the depth of object relationship hierarchies and on the ability to traverse parent-to-child relationships must be accommodated within the data model design to maximise the potential of the standard reporting tools. Techniques in this regard include reporting convenience lookups, added to bridge parent-to-child and sibling relationships (see the sketch below), and restructuring hierarchical data to limit the depth of hierarchy to four levels.
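
    To illustrate the convenience lookup technique, a minimal Apex trigger sketch follows. The objects (Invoice__c with an Account__c lookup, Invoice_Line__c as its child) are hypothetical.

[sourcecode language="java"]
// Hedged sketch: denormalise a parent lookup onto the child so line-level
// reports can group by Account without traversing the full hierarchy.
trigger InvoiceLineConvenienceLookup on Invoice_Line__c (before insert, before update) {
    Set<Id> invoiceIds = new Set<Id>();
    for (Invoice_Line__c line : Trigger.new) invoiceIds.add(line.Invoice__c);

    Map<Id, Invoice__c> invoices = new Map<Id, Invoice__c>(
        [SELECT Account__c FROM Invoice__c WHERE Id IN :invoiceIds]);

    for (Invoice_Line__c line : Trigger.new) {
        Invoice__c parent = invoices.get(line.Invoice__c);
        if (parent != null) line.Account__c = parent.Account__c; // reporting convenience lookup
    }
}
[/sourcecode]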

  • Sharing Model
  • The standard reporting tools fully respect the implemented sharing model, and as such a complex, over-specified sharing model design can inhibit their potential use. The sharing model design often reflects the transactional processing requirements for record visibility, but not necessarily the reporting need. To avoid creative workarounds during report production it is imperative that the sharing model design reflects both sets of requirements.

  • Report Types
  • Standard Report Types are maintained by the platform and require zero administration; new fields are exposed automatically. As such, wherever possible build reports using standard report types. Custom Report Types require ongoing maintenance, but can be highly useful in providing a clear data set for a focused purpose or where the standard report types are insufficient. The best practice approach is to implement a limited number of complementary custom report types, with clear descriptions, that adhere to a strict naming convention; a standardised approach here promotes re-use. A typical implementation for reporting on the Salesforce platform involves end users developing their own reports; for this to work efficiently they should be provided with a clear set of intuitive report types upon which to build.

  • Conventions
  • Given that reports are often developed by business users and not administrators or developers, it can be challenging to maintain an ordered state. To mitigate this risk, a strict naming convention and structure should be adopted for report folders and exemplar reports should be provided that exhibit a standardised approach to report naming (and description). It is a best practice to conduct periodic reviews of the reporting environment with the business users to ensure standards are applied, inefficiencies are avoided (such as duplication), platform features are being exploited optimally and ongoing training requirements are identified and addressed.

  • Art of the Possible
  • Clear communication in regard to the capabilities and constraints of the standard reporting tools is imperative to the successful implementation of a reporting solution. A demonstration environment configured with contextualised reports and dashboards which showcase the art-of-the-possible can be an effective communication tool. Business users and the project team should have access to the demo org to explore the possibilities and to establish their own frame of reference.

    This approach can aid understanding (and therefore increase usage) of less obvious concepts such as historical trend reporting, reporting snapshots, dynamic dashboards, dashboard filters and joined reports.

  • Visibility
  • A common oversight in the implementation of an effective on-platform reporting solution is the visibility of reports. Reports should not be hidden away on the Reports tab; instead, all possible entry points and display options should be considered as part of an overarching report visibility model. Examples in regard to entry points include custom links and buttons on detail pages, perhaps with some level of parameterisation. Examples in regard to display options include report charts added to detail pages (Embedded Analytics) and console sidebars (a Summer ’15 feature). A further consideration is the inclusion of report charts in Visualforce pages (via the reportChart component); this approach applies the reporting engine and avoids the requirement to address the underlying data directly in Apex code.
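
    Where report data must nonetheless be accessed programmatically, the Apex analytics (Reports) namespace can run a report directly. A minimal sketch follows; the report Id is a placeholder.

[sourcecode language="java"]
// Hedged sketch: run a report via the Reports namespace and read the grand total.
Reports.ReportResults results = Reports.ReportManager.runReport('00Oxx0000000001', true);

// 'T!T' addresses the grand total cell of the fact map.
Reports.ReportFactWithDetails grandTotal =
    (Reports.ReportFactWithDetails) results.getFactMap().get('T!T');
System.debug(LoggingLevel.ERROR, grandTotal.getAggregates()[0].getLabel());
[/sourcecode]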

  • Collaboration
  • Reports and analytics can provide important data visualisations and business insight that should serve as the basis for employee collaboration. An effective Chatter implementation model should therefore encourage communication and sharing around reports and dashboards such that the internal conversation is captured.

    [Screenshots: Dashboard Collaboration]

  • Active not Passive
  • Reports and analytics are typically implemented to deliver outputs in a passive state, i.e. the report runs on-demand or by schedule and the output is provided. A final best practice to consider is the active state where report outputs are evaluated against defined conditions and action (email, post, Apex script etc.) is taken automatically. Reporting Notifications provide active state options that can be a powerful tool in reducing report-noise and driving actions proactively from significant data conditions.

    [Screenshot: Reporting Notifications]

Salesforce Standard Reporting (2 of 3) – Report Builder

This post is the second in the Salesforce Standard Reporting series and serves to outline the capabilities and constraints of the standard Report Builder. I’m using the term Report Builder loosely here as a term that groups the majority of the on-platform standard reporting capabilities.

The key message of this series is simple; a solid understanding of how the Salesforce standard reporting tools work in practice, and the reporting patterns supported, can avoid the necessity to employ additional, expensive off-platform reporting tools. The earlier this thinking is applied, the greater the likelihood of success.

Report Builder Capabilities
The following sections outline the key capabilities of the Report Builder with a view to establishing the context within which the supported reporting patterns can be described.

  • Fields
  • The right-hand-side of the Report Builder UI displays the sections and fields defined within the selected Report Type. All reports are based on one principal Report Type, whether Standard or Custom.

    Note, at a conceptual level it can be useful to mentally picture the data presented by the report type as single-big-table of denormalised data (just rows and columns like a spreadsheet) with the maximum number of rows equating to the number of child records at the lowest level.

  • Report Formats
  • Report Builder supports 4 formats:

    1. Tabular
    A simple view comprised of an ordered set of columns, as per a spreadsheet; with no summarisation of data, the lowest-level input records are presented.

    2. Summary
    Extends the tabular view to enable specified field values to be used to group input records with subtotals per grouping.

    3. Matrix
    Extends the summary view to enable both row and column groupings, as per Pivot tables in the Excel context.

    4. Joined
    Enables multiple sub-reports (blocks) to be added to a single report, with each block linked to a specific report type and configured with its own fields, filters and sorting.

    [Screenshot: Joined Report]

    With a joined report input records can be grouped across the blocks using fields from common parent objects; such fields are listed under Common Fields. The common parent object must be applicable across all report types added.

    [Screenshot: Joined Report Run]

  • Filters
  • Each report type added to a report has a Dynamic Filter (e.g. “My Accounts”, “My Team’s Accounts”), a date field filter (which requires a date field to be specified, plus a range), and custom filters where any field in the report type can be filtered against static value or relative date value criteria. If a historical trending report type is used then a Historical Date filter is added which allows selection of up to 5 snapshot dates.

  • Bucket Fields
  • Bucket fields can be defined which map a list of input field values to a bucket field output value. The bucket field can then be added as a column to provide a summarised view of the input data. Bucketing can be useful for use cases such as grouping strategic accounts, or collapsing extensive ranges of data into a smaller, distinct set of High, Medium and Low values. Note, bucket fields aren’t available for Joined Reports.

  • Formulas
  • Report Builder supports Custom Summary Formulas and Cross Block Custom Summary Formulas. In the former case, the min/max/sum/average value of numeric fields can be used as inputs to a formula expression, the output of which displays at the selected summary level. Each report can have 5 such formulas. Cross Block Formulas extend the same approach, but enable block specific inputs from multiple blocks to be assembled into a single expression.

  • Report Generation
  • Once defined, a report can be run to generate the output. In view mode Report Builder supports filter manipulation, data export, a printable view (HTML) and the ability to schedule future runs with email distribution to named users or public groups.

  • Report Charts
  • Report Builder enables summarised report data to be presented on a chart located directly within the report. Summary groups can be added to the chart as rows or columns (depending on the chart type; vertical/horizontal bar etc.), with a selected aggregate value available on the opposing axis. Up to 2 levels of grouping can be selected per chart, with combination charts enabling additional aggregate values to be plotted; from 1 to 3 values depending on the chart type. Cumulative Line Charts enable the aggregate values to be plotted as cumulative figures.

    Report Builder has other capabilities not mentioned in the outline above; multiple currency handling, conditional highlighting and the ability to invoke a parameterised report via URL are notable examples. However, for the purpose of setting context, the above provides sufficient coverage.

Report Builder – Benefits
The Report Builder tool is a classic transactional reporting tool, with a deliberate focus on usability over complexity. In my view this is definitely the right approach; report production should be an end-user concern whenever possible. A technically-oriented, complex report builder would clearly detract from that possibility. Self-sufficient end-users, confident that they can report and track accurately can be the best adoption advocates possible. This best-case scenario has the added benefit of removing any potential resource bottleneck associated with the implementation project/IT team delivering all the reporting outputs. The usability of the report builder correlates to its functional simplicity, which in turn imposes constraints on the type of reporting possible. It is the case however that the Report Builder does provide a good level of coverage for the majority of transactional reporting patterns, some examples are listed below.

  • Transactional Reports
  • Basic tabular reports are easy to produce; the WYSIWYG drag-and-drop editor provides a rich, intuitive design environment for basic transactional reports.

  • Summary Reports
  • Basic summary reports that can be satisfied within the grouping limits are straightforward to produce. For summary reports, 3 levels of grouping can be applied; for matrix reports it is possible to have 2 column groupings and 2 row groupings, so 4 in total. Excel allows an unlimited number of row or column fields in a PivotTable report (limited by system memory); in this light 4 appears restrictive.

  • Exception Reports
  • An often overlooked capability of Report Builder is the Cross Filter (the Row Limit filter is another example). Cross Filters allow explicit control over the join type (with = inner, without = outer) and enable the production of exception reports such as “Accounts without Cases”. Within a Cross Filter, up to 5 sub-filters can be added that increase the selectivity applied to the related object. This type of filter is typically used in conjunction with rollup summary fields to cover scenarios such as “Closed Opportunities over time, without a particular product”.
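
    For context, the “Accounts without Cases” exception pattern equates to an anti-join; the SOQL equivalent is sketched below.

[sourcecode language="java"]
// The "without" cross filter corresponds to a SOQL anti-join on the child object.
List<Account> accountsWithoutCases = [
    SELECT Id, Name
    FROM Account
    WHERE Id NOT IN (SELECT AccountId FROM Case)];
[/sourcecode]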

  • Embedded Analytics
  • Embedded data visualisations within report outputs or page layouts provide a convenient method for combining the headline statistics and underlying detail in a single view. If you then consider the native ability to schedule the generation and distribution of reports, the potential business utility is considerable.

  • Dashboards
  • Report Builder produced reports underpin standard dashboard components, enabling a single report to deliver both the transactional view and data visualisations drawn from the same data. The dashboard features are extensive but key features beyond the obvious are Dashboard filters and Dynamic dashboards. In regard to the former, up to 3 filters can be applied to drill into the data, each filter may have up to 10 (50 hard limit) filter options. With the latter it’s possible to define a single dashboard that inherits the record visibility of the viewing user. This relatively new capability removes the historic requirement to maintain duplicate dashboards set to run with different specified users. Dashboard filters can be applied to Dynamic dashboards providing a high degree of flexibility and through consolidation a significant reduction in the number of dashboards required.

  • Scheduling and Distribution
  • Scheduling reports (and dashboards) with automated distribution to Salesforce users is easy to do and a very effective communication tool.

  • Report Notifications
  • Report Notifications is a recent feature addition and a very interesting one. In short, users subscribe to reports by providing a time or recurrence pattern to run the report, user-defined conditions to evaluate against the generated report data and selected actions (email, post, Salesforce1 Notification, Apex code) to invoke if the condition logic evaluates to true. This powerful capability enables reports, typically passive in nature, to be proactive drivers of action. The notification model also avoids the common issue that scheduled reports, regardless of content, tend to lose their effectiveness over time.
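
    A custom Apex action is defined by implementing the Reports.NotificationAction interface; a minimal sketch follows (the action taken is illustrative only).

[sourcecode language="java"]
// Hedged sketch: custom Apex action invoked when a report notification's
// conditions evaluate to true.
public class FlagBreachAction implements Reports.NotificationAction {
    public void execute(Reports.NotificationActionContext context) {
        Reports.ThresholdInformation thresholds = context.getThresholdInformation();
        for (Reports.EvaluatedCondition condition : thresholds.getEvaluatedConditions()) {
            System.debug(LoggingLevel.ERROR, condition.getAggregateLabel() +
                ' breached with value ' + condition.getValue());
        }
        // e.g. create a Task, post to Chatter etc. here.
    }
}
[/sourcecode]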

  • Report Snapshots
  • Report Snapshots enable a tabular or summary report output to be mapped to fields on a custom object; scheduled generation of the report populates the custom object with the report data. This feature is limited to the population of up to 2,000 records in the target object per execution.

A final benefit associated with Report Builder is the wealth of packaged reports provided as standard and also available on the AppExchange. It always makes sense to utilise the available reports as far as possible and build any exceptions. It’s surprising how many implementations ignore the packaged reports and expend time (and money) re-creating similar outputs.

Report Builder – Limitations
As introduced in the preceding section, Report Builder is implemented to deliver a robust, simple set of reporting functions focused on end-user usability over complexity. The following set of limitations highlight some of the reporting patterns not directly supported by the Report Builder.

  • Object Relationships
  • Report Builder consumes report types in the construction of reports; each report type is limited to 4 object relationships (parent-to-child), and a further 56 objects can be referenced via lookup. Note, if a single report contains columns from more than 20 objects an error is thrown. The parent-to-child relationship limit is key to understand when deciding upon the primary object for a custom report type.

  • Comparative Analysis
  • Report types do not support the addition of sibling objects; the parent-to-child object relationships are strictly linear. One common case where this constraint can manifest is the production of comparative analysis style reports which contrast records in one object against another. For example, consider a scenario where date-based budget values and actual values are held in separate objects, both linked to a parent cost centre object. With a joined report it’s possible to bring the data together and group by cost centre – but there’s no direct means to group the data by date to deliver a side-by-side analysis of actual versus budget per date period. There are techniques to mitigate this issue; the next post in the series will cover such concepts.

  • Date Ranges
  • Where input records represent a date range between two boundary date fields (Start Date and End Date for example), the Report Builder requires that one of the date fields is used in date-based grouping. Individual dates within the range can’t be referenced. Reporting tools often provide a pivot function with dynamic columns that enables ranges to be exploded within the reporting layer, avoiding the data storage overhead of storing the transactional data at the lowest level.

  • What-if Analysis
  • Report Builder does not enable the input data to be adjusted, thereby precluding scenario-based, what-if analysis type reports. There is of course the option to export the data to Excel, which does cater for this requirement.

  • Formulas
  • Report Builder supports sub-total, grand total and cross-group summary formula fields, but not row-level formula fields. For such requirements, a custom field can be added to the object, but the means to isolate the formula field to a report would help maintain a lean configuration.

  • Sub Reports
  • Joined reports, dashboards, embedded analytics and report charts provide effective, but simple, capabilities for the delivery of composite report views. Many reporting tools support the concept of sub-reports, or nested data, where multiple transaction level views, with or without common relationships, can be combined.

  • Report Distribution
  • A common complaint raised in relation to the standard reporting tools is the inability to schedule a report for distribution to individuals who are not Salesforce users.

  • Historical Trend Reporting
  • The Historical Trending Custom Report Types expose data for the last 3 months only, and Reporting Snapshots are limited to 2,000 (summarised) records per execution. One or both of these capabilities will provide an effective historical data analysis solution for many customers; at larger data volumes and for larger organisations the limits may be intractable.

The next and final post in the “Salesforce Standard Reporting” series outlines the key data model techniques that can help maximise the potential of the standard reporting tools.

Salesforce Standard Reporting (1 of 3) – Report Types

A key contributor to the successful implementation of Salesforce is an informed approach to reporting. I’ve said this many times and have been meaning to complete this short set of posts for a long time. The basic idea is that the delivery of key reports and analytics on-platform using the standard reporting tools should be a primary objective for all implementation projects. This approach may not always be feasible where requirements are complex or atypical, however the standard reporting features should be exhausted before an off-platform business intelligence tool or analytics cloud is employed. Such complementary services come at a significant price point.

In order to maximise the potential of the standard reporting features, both expertise and experience must be applied early in the project. Early consideration of reporting requirements enables a data model design that supports the production of required reports via the standard tools; this should never be an afterthought but often is.

This short series of three posts serves to outline the key concepts, the reporting patterns supported by the standard tools and critically the implications on the data model to be considered. This latter point is imperative as an effective Salesforce physical data model strongly reflects the reporting of data; a key differentiator from a pure physical data model applied to a traditional RDBMS.

The first post in the three-part series focuses on the foundational concept of Report Types.

Report Types
In the context of outlining the capabilities of the standard reporting tools, the following concepts are significant.

  • Standard Report Types
  • Report types abstract the data model into logical reporting entities on top of which reports are built. This model can’t be circumvented.

    [Screenshot: Select Report Type]

    Standard Report Types are automatically created for all Standard Objects, and for Custom Objects when the “Enable Reports” or “Allow Reports” checkbox is checked. Standard report types can’t be edited; new fields are automatically added.

    For unrelated objects, or parent objects in lookup relationships only, a single Standard Report Type is created, named as per the object plural name. E.g. “Rubric Scores”. Note there are exceptions to this where enabled standard features (e.g. Salesforce to Salesforce) add object relationships which result in additional report types (e.g. “Rubric Entries with Connections”).

    For parent objects in master-detail relationships multiple Standard Report Types can be created; one per master-detail relationship. E.g. “Rubric Entries with Questions”. Additional report types are added for each lookup relationship on the child object (e.g. “Rubric Entries with Questions and Score”, “Rubric Entries with Questions and Criterion”), where Score and Criterion are parent object relationships of the lookup type. Additional report types are also added for each additional master-detail relationship (e.g. “Accounts with Assessments and Courses”).

    Standard report types are not created for grand-child relationships, or for child objects in master-detail relationships.

    Note, the applied naming convention uses the child field label for lookup relationships and the parent object plural label for master-detail relationships.

    A key feature of Standard Report Types is the ability to navigate up a lookup relationship from Child to Parent. The example above “Rubric Entries with Questions and Criterion”, is logically a parent to child (master-detail), then child to parent (lookup) traversal.

  • Custom Report Types
  • Custom report types (CRT) enable the definition of a report type that includes up to 4 levels of parent-to-child relationship, regardless of whether each relationship is lookup or master-detail type.

    [Screenshot: CRT Definition]

    Each CRT has a layout which can be configured with custom sections and selective field inclusion across the objects. Objects and fields can also have CRT-specific labels added, and a flag set to include selected fields in new reports by default.

    Note, an often overlooked feature of Custom Report Types is the ability to add a parent object field regardless of whether the relationship is master-detail or lookup (although the UI indicates lookup only, both work). This de-normalisation technique is incredibly powerful; up to 5 levels of the parent object hierarchy can be traversed.

    [Screenshot: CRT Add Lookup Field]

    CRTs therefore enable the production of specialised, convenient report types providing a focused set of denormalised fields for a specific purpose. Many successful Salesforce implementations adopt a model where Salesforce Administrators or Developers produce the required CRTs, and the end users build the reports.

  • Historical Trending Custom Report Types
  • Historical Trending is a reporting feature, added in the Winter ’14 release, which can be enabled and configured on a per-object basis. For each enabled object a special CRT is added (e.g. “Rubric Entries with Historical Trending”), containing up to 8 selected fields (numeric, date, picklist, currency). Up to 3 months of data is available via the CRT.

    The next post in the “Salesforce Standard Reporting” series will cover the capabilities of Report Builder to consume the data exposed by report types.

Implementation Guiding Principles

In most cases software implementation projects start off with a clear vision of the various macro-level influences, constraints and directives that define the shape of the project at the outset. During the initiation stage the clear, consensus view on the project principles influences fully the micro-level decisions taken in terms of architecture, solution design, resourcing, scope, prioritisation, schedule and so on. The challenge for many projects is maintaining the purity of the vision across the course of the project in light of variations in uncontrollable factors such as the business context, risks becoming reality, project resources and so forth.

An effective tool to mitigate this loss of identity and direction, and the consequential impact this can have on confidence and productivity, is the definition of an unambiguous, concise set of guiding principles that are communicated early and frequently. Additional to the benefits related to a clear and strongly defined direction, principles support increased productivity through empowerment; if low level decisions align with the principles then action can be taken without time-expensive consultation.

Corporates do this, why not projects?
This approach is well established in business, with many corporates defining an aspirational, future state vision and set of guiding principles that underpin the delivery of the vision through applied influence to all activities undertaken and decisions made. Guiding principles can be very effective particularly where a culture is established that prioritises all actions taken in relation to their conformance to the principles. This level of application requires absolute confidence across the business that the principles are current, meaningful, complete and beneficial to all parties. The value of this approach at a business level can apply equally to a project.

Key Points
No more than 7. The primary value of guiding principles is the strength of the message, achieved through brevity. Telephone numbers were originally set at 7 digits as this was believed to be the most people could be expected to remember. 7 therefore seems a reasonable limit in this context.

Revisit during retrospectives. Stale principles will damage the integrity of the approach. Agility should be paramount such that principles are current and enable changes in the direction of travel whilst the project is in flight (excuse the metaphors here).

Communicate frequently. All project artefacts (slides, documents etc.) should state the guiding principles and relate the content to the relevant principles, noting deviations.

All design decisions should relate to one or more principles. See above point.

Prioritisation. A simple prioritisation scheme (high, medium, low; 1,2,3 .. n) can be effective in resolving conflicts between principles.

Buy in. All project stakeholders must approve the principles and accept the impact. Without complete buy in the integrity of the approach is diminished.

Principles can be goals. Principles are often directive in nature, goals are an interesting extension to this.

Use Work.com. The Goals functionality of Work.com can provide a useful tool to manage, communicate, collaborate and report on the principles. This functionality also enables user-level goals to be mapped to the higher level goals.

Alternatives
Vision statements can be difficult to articulate, particularly where there are multiple unconnected concerns.

Project charters can be too cumbersome to be a realistic communication tool. It’s human nature to read once and subsequently ignore any project documentation that requires a time investment.

In both cases above, guiding principles can provide a complementary abbreviated formatting.

Examples
Maintainability of the solution by client administrator.

Future extensibility without technical resource.

Sales productivity improvement is the key objective.

Declarative build over technical solution options.

Quality over expediency.

Technical excellence.

Buy over build.

Alpha 1 bug count must be less than 100.

The solution will be productised internationally.

The project delivery date must be achieved.

Release early and iterate.

Business utility to be delivered every 2 weeks.

Only user-accepted features to be released.

User stories estimated at 5 days or more to be split.

Salesforce Topics

One of the more interesting (but low-key) functional enhancements to arrive on the Salesforce platform over recent releases is Topics.

[Screenshot: All Topics page]

A Topic can be viewed as a consolidation of the collective enterprise intelligence around a specific term or theme, across both the collaboration dialogue and the business processes managed within Salesforce. In practical terms this translates to Chatter posts and Object records being assigned one or more new or existing Topics. All assigned records and posts then display on the Topic page – providing the consolidated view of everything happening across the enterprise that pertains to the Topic. This is incredibly powerful in terms of both data categorisation and the insight possible from considering the holistic view of a contextual theme or initiative.

[Screenshot: Topic page – Feed tab]

The Topic functionality doesn’t end there; from a Topic page (see screenshot above) it is possible to endorse users as experts on the Topic, thereby harnessing knowledge within the enterprise and providing an effective means of channelling communication such as questions, suggestions and so on. Note, users can elect to remove themselves from the Knowledgeable People list.

Topics can be followed (if feed tracking is enabled for Topics) and favourited, which enables interested users to be updated proactively. Discussion of Topics is tracked at the Group and User level. The Chatter tab also shows the Trending Topics area, which highlights recent activity; this excludes Topics used solely in private Groups or record feeds.

Note, as a point of clarification Topics differ from Groups in that they collect posts from across Chatter and also records.

Primary Use Cases

1. Topics provide a good solution option for any data categorisation or notification requirement relating to enterprise-wide initiatives or themes.
2. Topics map well to Organisational concepts such as departments, virtual teams, projects etc.
3. Topics could provide a useful (albeit elective) notifications model where the Topic feed is used to push notifications to users interested in new transactional records related to the Topic theme, which could represent a reference data type such as a region, product or channel.

Key Considerations

Topics for Objects. Once enabled for an object, Topics can be suggested for records based on the values in selected fields. Whilst a Topic has a feed, record assignment does not generate a post; to mitigate this, Topic assignment can occur via #hashtag in a post to the record feed. Additionally, feed activity related to a record is not viewable on assigned Topic pages. The concept relates to the grouping and categorisation of records, not the consolidation of feed items.

Global Search enabled. Topics appear as a distinct Object within the Record search results.

List Views. Topics can be referenced in List View filter criteria.

Reports. With Winter ’15 the reporting options are limited to those below (to my understanding), via CRT. Reporting on record-level assignment isn’t available as yet.
– Topic: list of topics with a “# talking about” statistic.
– Topic Assignment: count of assignments per record type (key prefix).

Knowledge Articles. Topics can be added to Knowledge articles enabling end-user categorisation of published articles in addition to pre-defined data categories.

Customisation. Apex triggers can be defined on the Topic and TopicAssignment (assignment of a topic to a FeedItem or a record) objects, which are also API accessible.
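
To illustrate, a minimal trigger sketch on TopicAssignment follows; the action taken (a debug statement) is illustrative only.

[sourcecode language="java"]
// Hedged sketch: react to topic assignments on records and feed items.
trigger TopicAssignmentNotifier on TopicAssignment (after insert) {
    Set<Id> topicIds = new Set<Id>();
    for (TopicAssignment assignment : Trigger.new) topicIds.add(assignment.TopicId);

    Map<Id, Topic> topics = new Map<Id, Topic>(
        [SELECT Name FROM Topic WHERE Id IN :topicIds]);

    for (TopicAssignment assignment : Trigger.new) {
        System.debug(LoggingLevel.ERROR, 'Entity ' + assignment.EntityId +
            ' assigned to topic ' + topics.get(assignment.TopicId).Name);
    }
}
[/sourcecode]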

Permissions. Profile permissions (or permission set) control who can create, edit, delete and assign topics. Defined topics are public regardless of whether they are solely used in private groups or on records.

Topics for Objects – Screenshots

1. Add a Topic to a Record
[Screenshot: Add Topic to Record]

2. Topics Assigned to a Record
[Screenshot: Record with Topics]

3. Topic Page – Records Tab
[Screenshot: Topic Page – Records Tab]

Salesforce Environment Hub

Environment Hub was announced back in August 2012 and was initially made available as a Pilot feature of the Winter ’13 release. This post follows up (some 2 years later) on my initial interest in understanding the utility Environment Hub delivers in the context of environment/user management.

Environment Hub – What is it?
In simple terms Environment Hub is an org administration tool that enables multiple orgs (of any type) to be associated and accessed from a central location. Each connected org is termed a Hub Member, with an imposed constraint that each Hub Member org can only be parented by one Hub org. It is therefore imperative that Hub Members are only added where there is no contention over the appropriateness of the parent Hub. Commonality of target production org or packaging org is a good starting point for this consideration, as is the rule that client orgs should never be parented by an ISV or Consultancy Hub org. The parent Hub org should always be the most-accessed org, the credentials for which will become those by which all org access is made.

For larger programmes of work Environment Hub provides a highly useful means to catalogue the org estate and to provide SSO between the constituent orgs. This latter point enables reduced password maintenance, simplified access and centralised user administration: a user account can be deactivated in one place only, and login history viewed in one place.

[Screenshot: Environment Hub tab]

Environment Hub is enabled by Salesforce support, who will require confirmation to proceed with the change. From experience, the enablement process can take a few days.

Key Features

Connect Organisation
Connecting Hub Members to the Environment Hub occurs via the entry of an administrator username for the target org, followed by an OAuth authentication and authorisation flow. The “Connect Organisation to Environment Hub” user permission is required.

[Screenshots: Connect Organisation flow, including the OAuth authorisation step]

Connected Organisation Detail Page

Once an org is connected to the Environment Hub, interesting detail such as the Edition, Org Status and Org Expiry date is revealed. It is also possible to add custom name and description attributes to the Hub Member, which I really like; each org in the estate should be justifiable and have a specific purpose – here we can capture this, plus an accountable contact etc. A very useful means of cataloguing and tracking the org estate.

The Company Detail page in the connected org will now show the Environment Hub Org Id value as below.

[Screenshot: Company Information page]

In addition to the “User Added” origin, Hub Members are also auto-discovered using existing org-to-org relationships as below.

Auto-discovery types: sandbox to production; patch org to release org; Trialforce Source Org to Trialforce Management Org; release org to LMO.

Create Organisation
New development/test/demo orgs can be created directly within the Environment Hub – this replaces the functionality previously exposed via the Salesforce Partner Portal, for partners at least. I’m unclear how the types of org offered are affected by partnership status etc. or perhaps whether the Environment Hub itself is available only to partners.

[Screenshots: Create Organisation]

Org types offered:
Development = PDE org.
Test/Demo = pick an Edition for a 30-day time-expired org.

Useful information on the distinction between different org types can be found here.

Single Sign-on
SSO can be enabled between each Hub Member and the Hub org; in implementation terms this means the Hub org is configured as an Identity Provider, with a Service Provider configured (automatically via the SSO enablement process) in both the Hub Member org and the Hub org. Both IdP-initiated (via the Environment Hub tab) and SP-initiated (via enablement of the Service Provider as a Login Page Authentication Service) SAML flows are supported. The latter point means SSO could be enforced as the only authentication means, thereby switching off standard Salesforce authentication entirely. Each Hub Member org must have a My Domain configured for SSO to function.

Note, enabling SSO creates a Service Provider in the Hub org, default permissions are provided to the Standard User and System Administrator profiles only. It is therefore a requirement to ensure relevant permissions (Profile or Permission Set) are provided.

[Screenshot: Connected Organisation detail page – SSO]

Within the Identity Provider, the Hub Member specific Service Provider configuration is set with “Subject Type = User’s ID determined by Environment Hub”; this setting delegates the user mapping to the Environment Hub settings, defined as below.

Three types of SSO user mapping are supported:
Method 1. Mapped Users – a 1-to-1 mapping of usernames, defined per user.
Method 2. Federation Id – boolean state; if set, an attempt is made to map users on common Federation Identifier values.
Method 3. User Name Formula – a formula expression; an attempt is made to map users via the formula result.

Where multiple mapping types are enabled the precedence order above applies. For SSO between a sandbox and production org, user mapping is implicit and not configured as above.

Customisation
Enabling Environment Hub adds the EnvironmentHubMember standard object, which is open to the declarative build model: custom fields, page layouts, validation rules, workflow rules, approvals etc. In addition, Apex Triggers can be defined on this object; use cases may include notifications relating to status changes, as sketched below.
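
A minimal trigger sketch follows; Org_Purpose__c is a hypothetical custom field added via the declarative build model.

[sourcecode language="java"]
// Hedged sketch: audit changes to a hypothetical custom field on EnvironmentHubMember.
trigger EnvironmentHubMemberAudit on EnvironmentHubMember (after update) {
    for (EnvironmentHubMember member : Trigger.new) {
        EnvironmentHubMember previous = Trigger.oldMap.get(member.Id);
        if (member.Org_Purpose__c != previous.Org_Purpose__c) {
            System.debug(LoggingLevel.ERROR, 'Hub member ' + member.Id +
                ' purpose changed to ' + member.Org_Purpose__c);
        }
    }
}
[/sourcecode]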

A second standard object, EnvironmentHubInvitation, is also added; however this object is inaccessible from the Setup menu (although it can be reached via /p/setup/layout/LayoutFieldList?type=EnvironmentHubInvitation&setupid=EnvironmentHubInvitationFields). I’m assuming this object to be either forward looking or a legacy of an invitation-based connection model (as per Salesforce to Salesforce). Regardless, records do seem to be added to this object; the logic for which is unclear.

Related Permissions
Hub Org –
Manage Environment Hub
Environment Hub App and Tab access
EnvironmentHubMember standard object access permissions
Environment Hub Connected App
SSO Service Providers e.g. [00_____0000Cj__] Service Provider Access for SSO (by Profile or Permission Set)

Hub Member Org –
Connect Organisation to Environment Hub

Environment Hub in Practice
For consulting projects Environment Hub offers significant value in terms of management and tracking of the org-estate and centralisation of user administration. In practice this would require all project contributors to access production (Hub) as the primary org and SSO into secondary (Hub Member) orgs as required. In this model, production could be utilised for project collaboration, bug-tracking, project management etc. which is a common approach. The obvious downside being the requirement to license the project team in production – a big challenge on many projects where user licenses aren’t provisioned until a late stage or business use runs parallel to project activity. In such examples, project contributors could be provisioned with low-end user licenses on a temporary basis. Ideally we could do this with a Chatter Plus license as users would only require Chatter, Custom Objects (10 or less) and Environment Hub access.

For ISV projects, the utility of Environment Hub relates more specifically to the ability to catalogue the multitude of environments required for development, test, i18n, packaging, release and patch purposes, not to mention Trialforce. Efficiency of access across this estate is also a key factor.

References
Environment Hub Online Help

Apex Unit Test Best Practice

This post provides some general best practices in regard to Apex Unit Tests. This isn’t a definitive list by any means, as such I’ll update the content over time.

Top 10 Best Practices (in no order)

1. TDD. Follow Test Driven Development practice wherever possible. There is no excuse for writing unit tests after the functional code, such an approach is indicative of a flawed development process or lax standards. It’s never a good idea to estimate or deliver functional code without unit tests – the client won’t appreciate an unexpected phase of work at the point of deployment, not to mention the pressure this approach puts on system testing.

2. Code Quality. Ensure unit tests are written to cover as many logical test cases as possible, code coverage is a welcome by-product but should always be a secondary concern. Developers who view unit tests as a necessary evil, or worse, need to be educated in the value of unit tests (code quality, regression testing, early identification of logical errors etc. etc.).

3. Test Code Structure. For some time now I’ve adopted a Test Suite, Test Helper pattern. A suite class groups tests related to a functional area. A test helper class creates test data for a primary object such as Account (i.e. AccountTestHelper.cls), secondary objects such as price book entry would be created within the product test helper class. The suite concept provides a logical and predictable structure, the helper concept emphasises that test data creation should be centralised.
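
A minimal sketch of the helper pattern follows; the class and method shapes are illustrative assumptions, not a prescribed implementation.

[sourcecode language="java"]
// Hedged sketch: centralised test data creation for the Account object.
@isTest
public class AccountTestHelper {
    public static List<Account> createAccounts(Integer count, String namePrefix) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(Name = namePrefix + ' ' + i));
        }
        insert accounts;
        return accounts;
    }
}
[/sourcecode]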

4. Test Code Structure. Put bulk tests in a separate class e.g. AccountTriggerBulkTestSuite.cls (in addition to AccountTriggerTestSuite.cls). Bulk tests can take a long time to complete – this can be really frustrating when debugging test failures – particularly in production.

5. Test Code Structure. Ensure test classes contain a limited number of test methods. I tend to limit this to 10. As with point 4, this relates to test execution time, individual methods can’t be selectively executed – the smallest unit of execution is the class.

6. SeeAllData. Use SeeAllData=true only by exception, and at the test method level only. Legacy test code related to price books that historically required this can now be refactored to use Test.getStandardPricebookId(). Also, set the [Independent Auto-Number Sequence] flag to avoid gaps in auto-number sequences caused by the creation of transient test data.
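
A minimal sketch of the price book refactoring follows, assuming an API version that supports Test.getStandardPricebookId().

[sourcecode language="java"]
// Hedged sketch: price book entry setup without SeeAllData=true.
@isTest(SeeAllData=false)
private class PricebookSetupTestSuite {
    static testMethod void singleTestCase1() {
        Product2 product = new Product2(Name = 'Widget', IsActive = true);
        insert product;

        insert new PricebookEntry(
            Pricebook2Id = Test.getStandardPricebookId(),
            Product2Id = product.Id,
            UnitPrice = 100,
            IsActive = true);

        System.assertEquals(1, [SELECT COUNT() FROM PricebookEntry
                                WHERE Product2Id = :product.Id]);
    }
}
[/sourcecode]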

7. Test Case Types. As the Apex Language reference proposes, write unit tests for the following test case types.

Positive Behaviour – logical tests that ensure the code behaves as expected and provides successful positive outcomes
Negative Behaviour – logical tests for code behaviour where parameters are missing, or records do not adhere to defined criteria – does the code protect the integrity of unaffected records – does the runtime exception handling function as expected
Bulk – trigger related tests primarily – how the code behaves with a batch of 200 records – mix the batch composition to stress the code against governor limits
Restricted User – test relevant combinations of user role and profile – this test case type is prone to failure through sharing model adjustments – triggers should delegate processing to handler classes that have the “with sharing” modifier

8. Debugging. Always use the syntax below for debug statements within code (test and non-test code). An efficient practice is to add sensible outputs whilst writing the code. This approach avoids a code update or re-deployment to add debug statements during error diagnostics. Note, in such cases Checkpoints could be a better approach anyway, particularly in production. The use of the ERROR logging level enables a restrictive log filter to be applied, such that a clear debug log is produced and max log size truncation is avoided. Note, log filters can also have a positive impact on transaction execution time.

[sourcecode language="java"]System.debug(LoggingLevel.ERROR, 'my message');[/sourcecode]

9. Commenting. Always comment test methods verbosely to ensure the test case intent is clear and that the test code can be mapped to the related non-test code. Test classes should be fully self documenting and be viewed as the primary enabler for the future maintenance of the non-test code.

10. Maintenance. Test code is highly dependent on the environment state. Any configuration change can require test code to be updated; this could be a new mandatory custom field or a sharing model adjustment. In many cases the resultant unit test failure state is not encountered until the next deployment to production, which can’t proceed until the tests are fixed. This scenario will be familiar to many people. The mitigation requires the local administrator to understand the risk, frequently run the full set of unit tests and to manage the test code update cycle proactively.

Example Test Suite Class

[sourcecode language="java"]
/*
Name: RecordMergeTestSuite.cls
Copyright © 2014 CloudMethods
======================================================
======================================================
Purpose:
-------
Test suite covering RecordMerge operations.
Bulk tests are defined in the class RecordMergeBulkTestSuite.cls
======================================================
======================================================
History
-------
Ver. Author     Date        Detail
1.0  Mark Cane  2014-09-16  Initial development.
*/
@isTest(SeeAllData=false)
public with sharing class RecordMergeTestSuite {
    /*
    Test cases:
        singleTestCase1 - positive code behaviour/expected outcome test case 1.
        negativeTestCase1 - negative outcome test case 1.
        restrictedUserTestCase1 - positive/negative code behaviour in the context of specific user role/profile combinations.
        ..
    future test cases to cover : * some coverage provided
        1. tbd.
        2. tbd.
    */

    /* */
    static testMethod void singleTestCase1() {
        // Test case 1 : positive outcome test case 1.
        setup();

        // Steps - 1.
        // Logical tests - 1.
    }
    /* */

    /* */
    static testMethod void negativeTestCase1() {
        // Negative test case 1 : negative outcome test case 1.
        setup();

        // Steps - 1.
        // Logical tests - 1.
    }
    /* */

    /* */
    static testMethod void restrictedUserTestCase1() {
        // Restricted user test case 1 : positive/negative code behaviour in the context of specific user role/profile combinations.
        List<User> users;

        System.runAs(new User(Id = UserInfo.getUserId())){ // Avoids MIXED_DML_OPERATION error (when test executes in the Salesforce UI).
            setup();
            users = UserTestHelper.createStandardUsers(2, 'Sophie', 'Grigson');
        }

        System.runAs(users[0]){
            List<Account> accounts = AccountTestHelper.createAccounts(1, 'Abc Incorporated');

            // Steps - 1.
            // Logical tests - 1.
        }
    }
    /* */

    // helper methods
    private static void setup(){
        SettingsTestHelper.setup();
    }
    // end helper methods
}
[/sourcecode]

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious, what isn’t necessarily obvious is how to achieve effective change control without introducing friction to the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to audit-ability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is accurate recording of changes and their application to specific environments, the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose, which may be challenging from a licensing perspective, however this approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

[Diagram: Change Control object model]

Key Principles
1. For most projects tracking change at the component level (Custom Field, layout adjustment etc.) is time expensive and impractical in terms of associated overhead.

2. The model does not require change to be recorded at the component level. Instead, change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach; in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment comprises Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual Changes may be configuration or data in type.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Log within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.

References
Components supported by Change Set
Metadata API Unsupported Component Types

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application specific prefixes are commonplace.

Risk – without application/module/function namespaces etc. all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes, or classes related to common objects, span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified.

2. Object centric approach. In considering a Salesforce org as a single consolidated codebase where most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant; instead, components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept, with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer – [Object][Function].page:
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer – [Object][Class Type].cls:
SalesInvoiceManager.cls
AccountManager.cls

business logic layer – [Function][Helper|Utility].cls:
DiscountCalcHelper.cls

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case, using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs; this is imperative in ensuring future maintainability.
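
A minimal sketch of the object-centric data layer follows; the method shapes and the Discount_Tier__c field are illustrative assumptions.

[sourcecode language="java"]
// Hedged sketch: an object-centric data layer class per the naming convention above.
public with sharing class AccountManager {
    // All Account queries and DML route through this class; presentation layer
    // classes (controllers, web services) orchestrate across managers.
    public static List<Account> getAccountsByIds(Set<Id> accountIds) {
        return [SELECT Id, Name, OwnerId FROM Account WHERE Id IN :accountIds];
    }

    public static void applyDiscountTier(List<Account> accounts, Decimal tier) {
        for (Account account : accounts) {
            account.Discount_Tier__c = tier; // hypothetical custom field
        }
        update accounts;
    }
}
[/sourcecode]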

Recommended Approach –
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins; instead an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.