Scrum Quality Assurance

More often than not, quality assurance is considered an implicit benefit of an agile process, with no specific techniques or practices applied. Integrating effective quality assurance can be one of the most challenging aspects of applying the Scrum framework.

Before considering how quality assurance factors can be applied to the Scrum framework, let’s start with a clear definition.

Quality assurance (QA) is a set of practices which collectively ensure that standards of quality are met (i.e. technical standards) and that specified requirements are delivered (customer or user needs or expectations). Standards of quality may include, for example, measures of correctness, integrity, maintainability or usability. Quality assurance does not mean software testing; testing is just one element of a quality assurance process. Effective QA means few defects are introduced in the first place. The core elements of a QA process include standards, audits, solution governance, testing, defect management (analysis, tracking and resolution) and communication.

Scrum Concepts

The following points describe how QA should be considered in relation to some of the core Scrum concepts.

Developer. In Scrum terms this means any project contributor, such as Software Testers and UI Designers, in addition to Software Developers. The term is an obvious source of confusion and misunderstanding.

Scrum Ceremonies. QA should attend all meetings (Sprint Planning, Product Backlog Grooming, Stand-ups, Reviews and Retrospectives). A team-wide, test-first philosophy is often the driving factor behind high-performing Scrum teams. This concept is also referred to as acceptance-test-driven development (ATDD).

Working Software. This means fully tested, quality software – not merely code complete.

Definition of Done. The DoD must include strict criteria for testing and quality.

Customer Satisfaction. Conditions of Satisfaction (or Acceptance Criteria) defined for each User Story drive the quality assurance process.

Scrum Artefacts

The following points describe how QA should be considered in relation to some of the core Scrum artefacts.

User Stories. The User Story is a concise description of a feature told from the perspective of the person who requires it. User Stories are best considered a reminder of a conversation and should be limited to the detail required for effective estimation and prioritisation.

The following (INVEST) factors should be considered with respect to the quality characteristics of a User Story:
Independent – can it be delivered on its own, or do dependencies on other stories exist?
Negotiable – the user story is not a specification (it is a reminder of a conversation)
Valuable – is there obvious intrinsic value to the user?
Estimable – can it be estimated (sufficient detail, available knowledge)?
Small – is it sized appropriately (to fit in a sprint)?
Testable – are there appropriate Conditions of Satisfaction (or Acceptance Criteria)?

The Conditions of Satisfaction (CoS) are the most significant element for QA. Each condition is a specific, user-focused requirement for acceptance and should be written using the Given-When-Then format (or heuristic technique), as below.

Given [context] – e.g. I have created a Customer Record and Sales Invoice.
When [event] – e.g. I attempt to enter a Sales Payment above the Sales Invoice Total.
Then [outcome] – e.g. I am warned that the payment exceeds the outstanding amount.

The high-level CoS will drive the team discussion of the User Story (in a test-first approach); a “Design and Test” whiteboard session can be useful in this respect, where the team considers both the design and the testing approach at the same time. Each high-level CoS will then be translated into lower-level (detailed) test cases (technical: unit and integration, or functional).
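To make the translation concrete, the Sales Payment CoS above could map to a unit test structured around the Given-When-Then steps. The sketch below is purely illustrative: the SalesInvoice__c and SalesPayment__c objects, their fields and the PaymentValidator class are assumed names, not part of any standard platform feature.

[sourcecode language="java"]
// Illustrative sketch only - object, field and class names below are assumptions.
@isTest
private class SalesPaymentCoSTest {
    static testMethod void paymentAboveInvoiceTotalIsRejected(){
        // Given - a Customer Record and Sales Invoice exist.
        Account customer = new Account(Name = 'Test Customer');
        insert customer;
        SalesInvoice__c invoice = new SalesInvoice__c(Customer__c = customer.Id, Total__c = 100);
        insert invoice;

        // When - a Sales Payment above the Sales Invoice Total is entered.
        SalesPayment__c payment = new SalesPayment__c(SalesInvoice__c = invoice.Id, Amount__c = 150);
        String warning = PaymentValidator.validate(payment);

        // Then - the user is warned that the payment exceeds the outstanding amount.
        System.assertEquals('Payment exceeds the outstanding amount.', warning);
    }
}
[/sourcecode]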

User Story Test Cases. The Conditions of Satisfaction (CoS) for a given User Story provide a high-level framework for the definition of specific test cases. Such test cases should be structured to address a small, focused end-to-end workflow (or scenario) to avoid interdependencies, and should enable collaborative and automated testing. Effective test cases provide a simple set of step-by-step test instructions (or steps). Note that the CoS cover the happy-path scenarios only; test cases must also be developed to cover critical-path, alternate-path, negative-path, technical, validation and edge-case scenarios.

Product Backlog. The Product Backlog is a single, ordered list of requirements (typically, but not exclusively, User Stories) used by Scrum teams. The list is constantly evolving, ordered by business value and estimated by the team – including QA.

The following factors should be considered with respect to the quality characteristics of the Product Backlog:
Prioritisation – are the Product Backlog Items (PBIs) accurately prioritised to a consistent scale based on business value, ROI, interdependencies and risk?
Estimation – are the PBIs consistently estimated (and benchmarked) by the team?
Well Groomed – is the backlog frequently groomed by the team?

Product Backlog grooming entails User Story discussion by the whole team (led by the Product Owner) with a view to revising and refining estimates and adding detail such as new Conditions of Satisfaction. Regular sessions should be scheduled for this purpose; ideally 10% of the Sprint time should be dedicated to this task. In practice this is rarely the case; however, given the direct relationship between the completeness of the CoS and the efficiency of the quality assurance process, it is important that sufficient time is allowed for effective grooming.

Scrum Practices

The following points describe how QA should be considered in relation to some of the core Scrum practices.

In-Sprint Testing. User Story testing must be completed within the sprint – full stop. The Definition of Done must include this criterion. Testing a sprint behind the development is a common bad practice that directly violates the working software principle; additionally, this approach builds up technical debt and defect debt to be resolved at a later time. Instead, the sprint backlog must be shaped to enable collaborative design, development and testing of each user story within the sprint. High-performing Scrum teams will also automate some of the tests within the sprint. The following simplified process describes how collaborative sprint development can be applied.

Developers and Testers working collaboratively – Team Swarming:
Write the test case
Write the code
Test together (developers and testers)
Fix the bugs (developers) / Automate the test case (testers)
>> next user story

Test Automation. The ability to automate the execution of test cases (i.e. unit tests and acceptance tests) is critical at scale. In order to quickly identify cases where the current sprint development breaks code or functionality developed in a previous sprint, it is necessary to execute the related tests efficiently. Test automation also supports evaluation of the impact of changes (what-if analysis), which can help with design decisions. Further QA-related benefits of test automation include the ability to apply load testing (e.g. multiple concurrent users) and performance testing. Unlike manual testing, which must be conducted in-sprint, test automation can be added outside of the sprint and developed by a separate team. High-performing Scrum teams will automate critical tests within the sprint.

Test data quality has a significant impact on the effectiveness of testing. Random data simply doesn't work, and prefabricated test data is often written to expectations and is not representative. Using production data (real data) for test purposes can violate data protection regulations and carries the risk of inadvertent communication to customers, suppliers etc., or leakage of PII or financial information. To mitigate this, an approach that combines real data with obfuscation (scrambling or scrubbing) techniques can produce large-scale, representative test data sets without the risk. This can take significant time to perfect; however, the return on this investment will be high.
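As a purely illustrative sketch of the obfuscation idea, the batch class below scrambles a handful of personally identifiable Contact fields in a copied data set; the field list and masking rules are assumptions, and a real implementation would need a much more complete, repeatable masking strategy.

[sourcecode language="java"]
// Minimal sketch only - scrambles PII on Contact records copied into a sandbox; field list is illustrative.
global class ContactObfuscationBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc){
        return Database.getQueryLocator('SELECT Id, LastName, Email, Phone FROM Contact');
    }
    global void execute(Database.BatchableContext bc, List<Contact> scope){
        for (Contact c : scope){
            String token = c.Id; // stable, non-identifying token per record.
            c.LastName = 'Contact ' + token.right(6);
            c.Email = 'contact.' + token.right(6) + '@example.com';
            c.Phone = '555-0100';
        }
        update scope;
    }
    global void finish(Database.BatchableContext bc){}
}
[/sourcecode]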

Note that, in addition to automated tests, a quality test plan still includes manual testing to cover exploratory and usability testing. Automated tests are systematic and, for example, unable to gauge the quality of the user experience.

Regression Testing. The ability to run test cases developed in previous sprints (and perhaps by different teams) enables the impact of a current development to be understood immediately. This testing practice is referred to as regression testing. Regression testing effort grows over time; as with User Story test cases, regression testing should occur in-sprint and automation should be used wherever possible. It is good practice to reserve time at the end of the sprint for the team to complete regression testing; this should also include performance and security test cases. At scale, it is often necessary to be selective in terms of which test cases to focus upon. An effective but contentious practice is to reserve the final sprint for end-to-end regression testing – this sprint is referred to as a hardening sprint; the key risk with this approach is that defect resolution from earlier sprints is deferred to this sprint. The hardening sprint should be driven from the regression test plan, not the product backlog, and be managed by the QA lead.

Post-Release Support. Once the product is released – and despite the QA rigour applied – it is inevitable that production defects will arise. Such defects should be reviewed first by the Product Owner to determine priority and relevancy. Where defects are to be addressed immediately (show-stopper bugs etc.), the resolution work should replace similar-sized work in the current sprint. In every respect the defect should be treated in the same manner as a User Story. Defects can also be placed on the Product Backlog for grooming. Where the Scrum team is providing defect resolution, it is good practice to reserve some velocity for production defects; this proactive approach reduces the need to pull in-progress User Stories from the current sprint.

Implementation Guiding Principles

In most cases software implementation projects start off with a clear vision of the various macro-level influences, constraints and directives that define the shape of the project at the outset. During the initiation stage this clear, consensus view of the project principles fully informs the micro-level decisions taken in terms of architecture, solution design, resourcing, scope, prioritisation, schedule and so on. The challenge for many projects is maintaining the purity of that vision across the course of the project in light of variations in uncontrollable factors such as the business context, risks becoming reality, project resources and so forth.

An effective tool to mitigate this loss of identity and direction, and the consequential impact this can have on confidence and productivity, is the definition of an unambiguous, concise set of guiding principles that are communicated early and frequently. In addition to the benefits of a clear and strongly defined direction, principles support increased productivity through empowerment: if low-level decisions align with the principles then action can be taken without time-expensive consultation.

Corporates do this, why not projects?
This approach is well established in business, with many corporates defining an aspirational, future-state vision and a set of guiding principles that underpin delivery of the vision by influencing all activities undertaken and decisions made. Guiding principles can be very effective, particularly where a culture is established that prioritises all actions taken in relation to their conformance to the principles. This level of application requires absolute confidence across the business that the principles are current, meaningful, complete and beneficial to all parties. The value of this approach at a business level can apply equally to a project.

Key Points
No more than 7. The primary value of guiding principles is the strength of the message, achieved through brevity. Telephone numbers were originally set at 7 digits as this was believed to be the most that people could be expected to remember; 7 therefore seems a reasonable limit in this context.

Revisit during retrospectives. Stale principles will damage the integrity of the approach. Agility should be paramount such that principles are current and enable changes in the direction of travel whilst the project is in flight (excuse the metaphors here).

Communicate frequently. All project artefacts (slides, documents etc.) should state the guiding principles and relate their content to the relevant principles, noting deviations.

All design decisions should relate to one or more principles. See above point.

Prioritisation. A simple prioritisation scheme (high, medium, low; 1,2,3 .. n) can be effective in resolving conflicts between principles.

Buy-in. All project stakeholders must approve the principles and accept the impact. Without complete buy-in the integrity of the approach is diminished.

Principles can be goals. Principles are often directive in nature; goals are an interesting extension to this.

Use Work.com. The Goals functionality of Work.com can provide a useful tool to manage, communicate, collaborate and report on the principles. This functionality also enables user-level goals to be mapped to the higher level goals.

Alternatives
Vision statements can be difficult to articulate, particularly where there are multiple unconnected concerns.

Project charters can be too cumbersome to be a realistic communication tool. It’s human nature to read once and subsequently ignore any project documentation that requires a time investment.

In both cases above, guiding principles can provide a complementary, abbreviated format.

Examples
Maintainability of the solution by client administrator.

Future extensibility without technical resource.

Sales productivity improvement is the key objective.

Declarative build over technical solution options.

Quality over expediency.

Technical excellence.

Buy over build.

Alpha 1 bug count must be less than 100.

The solution will be productised internationally.

The project delivery date must be achieved.

Release early and iterate.

Business utility to be delivered every 2 weeks.

Only user-accepted features to be released.

User stories estimated at 5 days or more to be split.

Apex Unit Test Best Practice

This post provides some general best practices in regard to Apex Unit Tests. This isn't a definitive list by any means; as such, I'll update the content over time.

Top 10 Best Practices (in no order)

1. TDD. Follow the Test Driven Development practice wherever possible. There is no excuse for writing unit tests after the functional code; such an approach is indicative of a flawed development process or lax standards. It's never a good idea to estimate or deliver functional code without unit tests – the client won't appreciate an unexpected phase of work at the point of deployment, not to mention the pressure this approach puts on system testing.

2. Code Quality. Ensure unit tests are written to cover as many logical test cases as possible; code coverage is a welcome by-product but should always be a secondary concern. Developers who view unit tests as a necessary evil, or worse, need to be educated in the value of unit tests (code quality, regression testing, early identification of logical errors etc.).

3. Test Code Structure. For some time now I've adopted a Test Suite, Test Helper pattern. A suite class groups tests related to a functional area. A test helper class creates test data for a primary object such as Account (i.e. AccountTestHelper.cls); secondary objects, such as Price Book Entry, would be created within the Product test helper class. The suite concept provides a logical and predictable structure; the helper concept emphasises that test data creation should be centralised.
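A minimal sketch of the helper concept is shown below; the method signature mirrors the AccountTestHelper.cls naming above, but the field set is illustrative only.

[sourcecode language="java"]
// Illustrative sketch of the Test Helper pattern - centralised test data creation for the Account object.
@isTest
public class AccountTestHelper {
    // Creates and inserts n Accounts with a predictable naming pattern.
    public static List<Account> createAccounts(Integer n, String namePrefix){
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < n; i++){
            accounts.add(new Account(Name = namePrefix + ' ' + i));
        }
        insert accounts;
        return accounts;
    }
}
[/sourcecode]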

4. Test Code Structure. Put bulk tests in a separate class, e.g. AccountTriggerBulkTestSuite.cls (in addition to AccountTriggerTestSuite.cls). Bulk tests can take a long time to complete – this can be really frustrating when debugging test failures, particularly in production.

5. Test Code Structure. Ensure test classes contain a limited number of test methods; I tend to limit this to 10. As with point 4, this relates to test execution time – individual methods can't be selectively executed; the smallest unit of execution is the class.

6. SeeAllData. Only ever use SeeAllData=true by exception, and at the test method level only. Legacy test code related to pricebooks that historically required this can now be refactored to use Test.getStandardPricebookId(). Also, set the [Independent Auto-Number Sequence] flag to avoid gaps in auto-number sequences caused by the creation of transient test data.
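For example, a pricebook-dependent test that historically required SeeAllData=true can typically be refactored along the following lines (product and price values are illustrative).

[sourcecode language="java"]
// Sketch: creating a PricebookEntry without SeeAllData=true by using Test.getStandardPricebookId().
@isTest
private class PricebookRefactorExampleTest {
    static testMethod void standardPricebookEntryCanBeCreated(){
        Id standardPricebookId = Test.getStandardPricebookId();

        Product2 product = new Product2(Name = 'Widget', IsActive = true);
        insert product;

        PricebookEntry pbe = new PricebookEntry(
            Pricebook2Id = standardPricebookId,
            Product2Id = product.Id,
            UnitPrice = 100,
            IsActive = true);
        insert pbe;

        System.assertNotEquals(null, pbe.Id);
    }
}
[/sourcecode]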

7. Test Case Types. As the Apex Language reference proposes, write unit tests for the following test case types.

Positive Behaviour – logical tests that ensure the code behaves as expected and provides successful positive outcomes
Negative Behaviour – logical tests for code behaviour where parameters are missing, or records do not adhere to defined criteria – does the code protect the integrity of unaffected records – does the runtime exception handling function as expected
Bulk – trigger related tests primarily – how the code behaves with a batch of 200 records – mix the batch composition to stress the code against governor limits
Restricted User – test relevant combinations of user role and profile – this test case type is prone to failure through sharing model adjustments – triggers should delegate processing to handler classes that have the “with sharing” modifier

8. Debugging. Always use the syntax below for debug statements within code (test and non-test code). An efficient practice is to add sensible outputs whilst writing the code; this approach avoids a code update or re-deployment to add debug statements during error diagnostics. Note – in such cases Checkpoints could be a better approach anyway, particularly in production. The use of the ERROR logging level enables a restrictive log filter to be applied such that a clear debug log is produced and max log size truncation is avoided – note, log filters can also have a positive impact on transaction execution time.

[sourcecode language="java"]System.debug(LoggingLevel.ERROR, 'my message');[/sourcecode]

9. Commenting. Always comment test methods verbosely to ensure the test case intent is clear and that the test code can be mapped to the related non-test code. Test classes should be fully self-documenting and viewed as a primary enabler for the future maintenance of the non-test code.

10. Maintenance. Test code is highly dependent on the environment state. Any configuration change can require test code to be updated; this could be a new mandatory custom field or a sharing model adjustment. In many cases the resultant unit test failure state is not encountered until the next deployment to production, which can’t proceed until the tests are fixed. This scenario will be familiar to many people. The mitigation requires the local administrator to understand the risk, frequently run the full set of unit tests and to manage the test code update cycle proactively.

Example Test Suite Class

[sourcecode language="java"]
/*
Name: RecordMergeTestSuite.cls
Copyright © 2014 CloudMethods
======================================================
======================================================
Purpose:
-------
Test suite covering RecordMerge operations.
Bulk tests are defined in the class RecordMergeBulkTestSuite.cls
======================================================
======================================================
History
-------
Ver.  Author     Date        Detail
1.0   Mark Cane  2014-09-16  Initial development.
*/
@isTest(SeeAllData=false)
public with sharing class RecordMergeTestSuite {
    /*
    Test cases:
        singleTestCase1 - positive code behaviour/expected outcome test case 1.
        negativeTestCase1 - negative outcome test case 1.
        restrictedUserTestCase1 - positive/negative code behaviour in the context of specific user role/profile combinations.
        ..
        future test cases to cover: * some coverage provided
        1. tbd.
        2. tbd.
    */

    /* */
    static testMethod void singleTestCase1() {
        // Test case 1 : positive outcome test case 1.
        setup();

        // Steps - 1.
        // Logical tests - 1.
    }
    /* */

    /* */
    static testMethod void negativeTestCase1() {
        // Negative test case 1 : negative outcome test case 1.
        setup();

        // Steps - 1.
        // Logical tests - 1.
    }
    /* */

    /* */
    static testMethod void restrictedUserTestCase1() {
        // Restricted user test case 1 : positive/negative code behaviour in the context of specific user role/profile combinations.
        List<User> users;

        System.runAs(new User(Id = UserInfo.getUserId())){ // Avoids MIXED_DML_OPERATION error (when test executes in the Salesforce UI).
            setup();
            users = UserTestHelper.createStandardUsers(2, 'Sophie', 'Grigson');
        }

        System.runAs(users[0]){
            List<Account> accounts = AccountTestHelper.createAccounts(1, 'Abc Incorporated');

            // Steps - 1.
            // Logical tests - 1.
        }
    }
    /* */

    // helper methods
    private static void setup(){
        SettingsTestHelper.setup();
    }
    // end helper methods
}
[/sourcecode]

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. change control. This is perhaps obvious; what isn't necessarily obvious is how to achieve effective change control without introducing friction into the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to auditability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is the accurate recording of changes and their application to specific environments; the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose. This may be challenging from a licensing perspective; however, the approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

Change Control Object Model

Key Principles
1. For most projects, tracking change at the component level (Custom Field, layout adjustment etc.) is time-expensive and impractical in terms of the associated overhead.

2. The model does not require change to be recorded at the component level. Instead, change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach; in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment will comprise Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual Changes may be configuration or data changes.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Log within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.
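As a purely illustrative sketch of how a deployment and its manual changes might be recorded against such a model, the anonymous Apex below creates the related records; every object and field name shown (Deployment__c, ManualChange__c etc.) is an assumption for illustration, not a definition of the actual object model.

[sourcecode language="java"]
// Hypothetical object and field names - illustration of the change recording concept only.
Deployment__c deployment = new Deployment__c(
    Name = 'Release 1.2 - UAT to Production',
    SourceEnvironment__c = 'UAT Sandbox',
    TargetEnvironment__c = 'Production',
    Type__c = 'Production');
insert deployment;

List<ManualChange__c> manualChanges = new List<ManualChange__c>{
    new ManualChange__c(Deployment__c = deployment.Id, Action__c = 'Pre', Description__c = 'Deactivate scheduled jobs'),
    new ManualChange__c(Deployment__c = deployment.Id, Action__c = 'Post', Description__c = 'Update remote site settings')
};
insert manualChanges;
[/sourcecode]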

References
Components supported by Change Set
Metadata API Unsupported Component Types

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application specific prefixes are commonplace.

Risk – without application/module/function namespaces etc. all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application-specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes, or classes related to common objects, span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified.

2. Object-centric approach. Considering a Salesforce org as a single consolidated codebase in which most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant; instead, components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept, with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer – [Object][Function].page
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer – [Object][Class Type].cls
SalesInvoiceManager.cls
AccountManager.cls

business logic layer – [Function][Helper|Utility].cls
DiscountCalcHelper.cls
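A brief sketch of how these layers interact is shown below; class, object and field names follow the convention but are otherwise illustrative (each class would reside in its own .cls file).

[sourcecode language="java"]
// Data layer: SalesInvoiceManager.cls - object-centric, owns queries and DML for SalesInvoice__c (assumed object).
public with sharing class SalesInvoiceManager {
    public static void applyDiscount(Id invoiceId, Decimal discountPercent){
        SalesInvoice__c invoice = [SELECT Id, Total__c FROM SalesInvoice__c WHERE Id = :invoiceId];
        invoice.Total__c = invoice.Total__c * (1 - discountPercent);
        update invoice;
    }
}

// Presentation layer: SalesInvoiceDiscountCalcController.cls - function-centric, orchestrates calls to the data layer.
public with sharing class SalesInvoiceDiscountCalcController {
    public Id invoiceId { get; set; }

    public PageReference applyDiscountAction(){
        SalesInvoiceManager.applyDiscount(invoiceId, 0.10);
        return null;
    }
}
[/sourcecode]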

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case, using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs; this is imperative in ensuring future maintainability.

Recommended Approach –
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins – instead, an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.

Visualforce Controller Class Convention

A quick post to outline an informal convention for the definition of a Visualforce controller class, the key maintainability characteristics being predictable structure, readability and a prominent revision history. All developers have a subjective preference in this regard; however, consistency is key, particularly in the Salesforce context where multiple developers/consultancies contribute to a codebase over its lifetime. A simple, logical approach always makes sense to maximise adoption.

[sourcecode language="java"]
/*
Name: MyPageController.cls
Copyright © 2014 Force365
======================================================
======================================================
Purpose:
-------
Controller class for the VF page - MyPage.page
======================================================
======================================================
History
-------
Ver.  Author     Date        Detail
1.0   Mark Cane  2014-05-20  Class creation.
1.1   Mark Cane  2014-05-21  Initial coding for page initialisation.
*/
public with sharing class MyPageController {
    //& public-scoped properties.
    public List<MyWrapperClass> wrapperClassInstances { get; set; }
    //& End public-scoped properties.

    //& private-scoped variables.
    private Boolean isInitialised=false;
    //& End private-scoped variables.

    //& page initialisation code.
    public MyPageController(){
        initialise();
    }

    private void initialise(){ isInitialised=true; }
    //& End page initialisation code.

    //& page actions.
    public PageReference saveAction(){
        return null;
    }
    //& End page actions.

    //& data access helpers (class methods accessed from binding expressions).
    //& End data access helpers.

    //& controller code helpers (class methods providing helper functions to data access helpers or actions).
    //& End controller code helpers.

    //& inner classes (wrapper classes typically, extending SObjects for convenience in the UI).
    public class MyWrapperClass {
        public MyWrapperClass(){}
    }
    //& End inner classes.
}
[/sourcecode]

Salesforce Implementation Audit

This post provides an outline approach to consider when performing an internal audit of an existing (or emerging) Salesforce implementation. As an individual who specialises in the provision of such quality assurance services from an external perspective, I'm convinced that most projects would benefit from a periodic internal review, perhaps augmented by some occasional external perspective and insight (Salesforce services can help here). However this is approached, in the majority of cases the internal project team will have the requisite experience and competency to deliver such an introspective review; the challenge is often one of finding the right time, or indeed any time, to conduct it. This is why a retrospective build review should be planned every 3 or 4 sprints (or thereabouts – projects differ), with a full implementation audit scheduled every release. The principle is that whilst the build is in flight, periodic sense checks are made on key quality aspects, technical integrity, platform limits etc., with a comprehensive audit applied pre-release (ideally). The latter may need to consider a combined future deployment state where multiple parallel development streams converge into a single production org.

Note, an implementation audit is build-focused (or solution-oriented) and should not assess the fit-for-purpose nature of the functionality with respect to business requirements (i.e. the problem-to-solution mapping). The only exception arises where an obvious mapping to a standard feature is missed, resulting in a “gap” that is unnecessarily filled by a technical solution option.

Note, in order to cut down on the time required to conduct the audit, access to individuals who can describe the functional intent is imperative. In the internal case the programme/project architect should lead the audit and should be aware of the functional design context.

Note, before diving into the detail of the implementation, it can be highly valuable to re-define the high-level solution architecture (HLSA) in current-state terms. The key point is that the macro-level view is often distorted by micro-level design decisions made during the course of the project. A periodic check is useful to ensure that this organic change is understood and that the integrity of the original architectural vision is maintained.

Indicative review areas are listed below (this is not exhaustive)

Declarative build environment
1. Identify platform limits that are reaching a high percentage of utilisation that may present risk to scalability and future phases of development.
2. Identify any future maintainability risk presented by the conventions applied in the definition of configuration elements (e.g. naming conventions, opportunities for best practice improvements).
3. Identify functional areas where a mapping to standard features could be achieved.
4. Identify security vulnerabilities (org-access, sharing model etc.).

Technical customisations
1. Identify risks to data integrity and application responsiveness.
2. Document risks to scalability and extensibility imposed by platform execution limits.
3. Document deviations from best practice technical patterns, conventions and coding standards.
4. Identify security vulnerabilities introduced by technical componentry.
5. Document deviations from best practice development practices and process.

Integration architecture
1. Identify risk associated with deviations from best practice integration patterns and practices.
2. Identify opportunities to reduce limits consumption.
3. Identify data integrity and scalability vulnerabilities related to the current state integration architecture.

Identity management
1. Identify risk associated with implemented single sign-on processes and related services/infrastructure.
2. Document deviations from best practices related to identity management.

Salesforce Naming Conventions – Declarative

Updated – 2014-11-18

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn't exhaustive or necessarily better than any other set of standards. Having a set of conventions applied consistently across the build is key; the specifics of those conventions can be subjective. A key principle applied is consistency with established standard conventions wherever possible. For example, standard objects and fields don't have underscores in their API names and follow simple naming patterns; there's no good reason to deviate from this for custom objects and fields. Naming conventions shouldn't be considered an area for creative thinking; instead, creative energy should be focused on the functional and technical design – the conventions applied should be mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Data Source -> Data_Source__c

Note – the Description attribute for Custom Objects must always be populated appropriately. A self-describing configuration is key to future maintenance.

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

Consistent use of the default naming style, i.e. with underscores, is also acceptable; it can be difficult to avoid this approach in a team environment.
e.g. Date of Birth -> Date_Of_Birth__c

In the scenario where an implementation is comprised of distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name Prefix. e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Note – the Description attribute for Custom Fields must always be populated appropriately. A self-describing configuration is key to future maintenance.

Note – the Help Text attribute for Custom Fields must always be populated appropriately. Inline user assistance can improve the end user experience greatly and reduce ongoing support.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r
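The relationship name surfaces directly in SOQL sub-queries, which is why a predictable convention matters; a brief example against the AccountMetrics__c relationship above:

[sourcecode language="java"]
// The child relationship name (AccountMetrics__r) is referenced directly in the sub-query.
List<Account> accounts = [SELECT Id, Name,
                            (SELECT Id FROM AccountMetrics__r)
                          FROM Account
                          LIMIT 10];
[/sourcecode]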

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
– [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
– [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c
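The naming also surfaces directly in Apex when the settings are read; a brief sketch, assuming the example settings objects above exist:

[sourcecode language="java"]
// Hierarchy setting: getInstance() falls back through user, profile and org-level defaults.
OrgBehaviourSettings__c orgBehaviour = OrgBehaviourSettings__c.getInstance();

// List setting: each record is an individual entry, retrieved by name ('Primary' is an illustrative entry name).
DataSourceListSetting__c dataSource = DataSourceListSetting__c.getValues('Primary');
[/sourcecode]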

Convention F – Workflow Rule
Always separate the condition from the outcome when configuring workflow. The Workflow Rule is the condition; the associated Actions are the outcome. In many cases the two become entwined, resulting in duplicated rules or rules with actions unrelated to the stated purpose of the rule. A clear set of rules related to conditions promotes re-use.

[Object]: [Criteria Description i.e. Condition]

Convention G – Workflow Action

Field Update –
[Object]: Set [Field] to [Value]

Email Alert –
[Object]: Send [Template short description]

Task –
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS: [Object] [From selection] to [To selection]
CBS: [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed
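The Name is what gets referenced in Apex and Visualforce, so the category prefix keeps related labels grouped together; for example:

[sourcecode language="java"]
// Apex reference uses the custom label Name.
String buttonLabel = System.Label.UI_Button_Label_Proceed;
// Visualforce equivalent: {!$Label.UI_Button_Label_Proceed}
[/sourcecode]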

Convention K – Validation Rule

Single field :
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields :
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object :
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Single Permissions :
Name must match the permission assigned.
Case Feed
Manage Dashboards
Manage Public List Views

Simple Combined Permissions :
[Verb] [Noun]
Manage Invoices

Combined Permissions :
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Convention P – Reports and Dashboard Folders

[Grouping term] Reports (adhoc reports related to a specific department, team, project etc.)
[Grouping term] Dashboard Reports (best practice to isolate dashboard reports in clear location)
[Grouping term] Dashboards

Finance Reports
HR Dashboards
HR Dashboard Reports

Note – the [Report Description] attribute for Reports must always be populated appropriately. A self-describing configuration is key to future maintenance.