Digital experiences are how brands connect with customers. Behind the scenes, technical teams are responsible for optimizing and deploying these experiences across various digital touchpoints. As such, it is important for organizations to understand the key challenges related to deployment, especially the differences between B2C and B2B customer experience delivery.

B2B customer experience delivery challenges

While they have two-thirds of an acronym in common, scaling for B2B experience delivery is categorically different — and potentially more challenging — than scaling for B2C experience delivery. Here are some of the key reasons:

  • Even the most straightforward B2B engagements involve multiple stakeholders with different expectations and needs. One size does not fit all, or even most.
  • B2B engagements typically involve multiple touchpoints over a longer timeframe, often spanning months, or even years. Failing to energize and impress customers at any of these touchpoints can effectively end the relationship. And keeping each new engagement fresh requires capturing and analyzing data at each subsequent touchpoint to leverage insights.
  • Because so much data can be disconnected and multiple teams can be engaging the same customer, it is difficult to accurately gauge customer health and momentum across the journey. The post-sales journey can be just as complex as the pre-sales journey — which is especially critical for organizations such as those in the SaaS space whose business models depend on customers staying on the roster for several years.

5 key questions

To effectively scale from an operational perspective and meet B2B customer experience demands (e.g. performance, geographic distribution, redundancy, and high availability), it is important to consider the following five key questions:

1. What does success look like?

It is essential to clearly answer this question before doing anything else, because successful scaling requires a deep understanding of: 

  • What the user journey looks like
  • Anticipated levels of usage
  • Anticipated patterns of demand on the system
  • Acceptable performance targets (e.g. server response times, page load times, functionality, availability)

Generally, the first step is to have a clear understanding and definition of the solution’s requirements. Requirements significantly impact the delivery environment composition and, ultimately, the cost. Key issues to address include: Where do visitors typically enter the site? How many visitors typically visit at the same time? Are there predictable traffic spikes at certain times of the year? 

These types of questions will guide you in understanding and mapping out your visitors’ behavior, which can help you identify friction and inform your performance testing plans. Performance testing relies on the requirements to accurately model user interactions and service level expectations, and these insights can be used to indicate when a solution is ready for production.
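As a rough illustration of turning those requirements into numbers, Little's Law relates arrival rate and session length to expected concurrency. The sketch below uses hypothetical traffic figures; none of these values come from a real solution:

```python
import math

def concurrent_users(arrivals_per_hour: float, avg_session_minutes: float) -> int:
    """Little's Law: L = lambda * W.
    arrivals_per_hour: visitors entering the site per hour (hypothetical input)
    avg_session_minutes: average time a visitor spends on the site
    Returns the expected number of concurrent sessions."""
    arrivals_per_minute = arrivals_per_hour / 60.0
    return math.ceil(arrivals_per_minute * avg_session_minutes)

# Example: 6,000 visitors/hour staying 8 minutes on average
print(concurrent_users(6000, 8))  # -> 800
```

Estimates like this feed directly into performance test plans: the concurrency figure becomes the virtual-user count your load scenarios should model.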

The team also needs to determine what will be considered a success. Understanding the journey is key, but what amount of performance and availability is considered acceptable? Different organizations will have different tolerances for outages and response times depending on the importance of different parts of the application to the business.

2. Where are your customers?

A fast response in each strategic region keeps visitors from abandoning your site and drives further conversions.

The need for speed: consider the following.

Maintaining a highly available, globally distributed infrastructure can be complex and costly to operate, as well as to host. To mitigate risk and increase efficiency, it is vital to know where your customers are, and what type of latencies are allowable in regions that are not strategic to your business.

You may also want to investigate whether your visitor experience can leverage Content Delivery Networks (CDNs) or other edge delivery technologies such as Sitecore’s Experience Edge solution to reach your audience where they are.
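As an illustration of this kind of triage, the sketch below flags regions whose response times suggest an edge or CDN presence may be warranted. The latency measurements and the budget are hypothetical placeholders:

```python
# Hypothetical median latencies (ms) from synthetic probes in each region
measured_latency_ms = {"us-east": 45, "eu-west": 80, "ap-southeast": 240, "sa-east": 310}

LATENCY_BUDGET_MS = 150  # assumed acceptable budget for non-strategic regions

def regions_needing_edge(latencies: dict[str, int], budget_ms: int) -> list[str]:
    """Return regions whose latency exceeds the budget, sorted worst-first."""
    over = {region: ms for region, ms in latencies.items() if ms > budget_ms}
    return sorted(over, key=over.get, reverse=True)

print(regions_needing_edge(measured_latency_ms, LATENCY_BUDGET_MS))
# -> ['sa-east', 'ap-southeast']
```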

3. Do you plan on leveraging personalization?

Personalization is an increasingly critical tactic for success. However, it significantly increases the complexity of a deployment, since it changes the solution's performance and scale characteristics. It is also necessary to project peak usage into the future in order to build out collection database capacity. Accurate performance testing is the only way to validate collection capacity for a particular solution.

You will also want to plan for peak days or periods driven by seasonal campaigns or product launches. If a peak is a one-off, experience analytics would need to be sized for that worst-case scenario, but building an environment for such extraordinary circumstances does not always make sense from a cost perspective. As such, you may choose to disable analytics during those peaks and instead size for the average peaks seen throughout the year.

Analytics: to enable or to disable? Analytics has a significant impact on every visit to a solution. Organizations need to do a cost-benefit evaluation: is it more advantageous to build out the infrastructure for expected peaks and collect accurate analytics data, or to disable analytics and save the costs of the larger deployment topology?
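A simple cost sketch can make this trade-off concrete. All figures below (per-instance cost, per-instance capacity, traffic levels) are hypothetical placeholders to be replaced with your own numbers:

```python
import math

INSTANCE_COST_PER_MONTH = 700.0   # hypothetical hosting cost per instance
CAPACITY_PER_INSTANCE = 50.0      # assumed requests/sec one instance can serve

def instances_for(requests_per_sec: float) -> int:
    """Instances needed to absorb a given request rate."""
    return math.ceil(requests_per_sec / CAPACITY_PER_INSTANCE)

def monthly_cost(requests_per_sec: float) -> float:
    return instances_for(requests_per_sec) * INSTANCE_COST_PER_MONTH

avg_peak_rps = 400    # typical seasonal peak (assumed)
one_off_rps = 1500    # single launch-day spike (assumed)

# Extra monthly cost of permanently sizing for the one-off worst case
print(monthly_cost(one_off_rps) - monthly_cost(avg_peak_rps))  # -> 15400.0
```

If the premium for worst-case sizing outweighs the value of analytics data captured during the spike, disabling analytics for that window becomes the defensible choice.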

One important consideration is that when analytics limits scale, it can lead to downtime in extreme usage situations; as such, keeping analytics enabled during known peak periods is a risk that many organizations (especially those in the B2C space) are unwilling to take.

It is also necessary to pay close attention when combining geographic distribution with analytics. If you are leveraging Sitecore Experience Database™ (xDB) — which is used to create a 360-degree customer view by collecting and connecting data across channels in real time — and its native tracking capabilities, every request will either need a new contact created or a known contact rehydrated. If the Sitecore analytics database is not close to the originating request location, this impacts the performance observed at that client. Generally, it is best to situate analytics geographically with the solution deployment. However, when multiple geographic locations are part of the solution, additional development is often required to address both performance and functional concerns.

Still on the topic of personalization through experience analytics, it is important to be careful with Asynchronous JavaScript and XML (AJAX) calls that flow back to the delivery servers. Analytics relies on ASP.NET session providers, which can cause performance issues when multiple requests from the same contact arrive in parallel. Solutions should therefore define which requests need to be tracked and which can be treated as read-only.

About read-only requests

A critical step in optimizing a solution's usage of Sitecore xDB is identifying requests that should not be part of xDB tracking, and requests that can be marked read-only because they do not write any contact or interaction information. Read-only requests are especially important in solutions that incorporate parallel requests with the same contact and are often associated with AJAX. Parallel requests not marked as read-only can introduce response time issues, since each request must wait for exclusive access to the session object.
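The effect of that exclusive session lock can be sketched with a small simulation. Plain Python threads stand in for delivery-server requests here; the timings are illustrative and not ASP.NET internals:

```python
import threading
import time

session_lock = threading.Lock()  # stands in for the exclusive session lock

def handle(read_only: bool) -> None:
    """Simulate one request that does 50 ms of work."""
    if read_only:
        time.sleep(0.05)              # read-only: no session lock needed
    else:
        with session_lock:            # writers wait for exclusive access
            time.sleep(0.05)

def run_parallel(read_only: bool, n: int = 4) -> float:
    """Fire n simultaneous requests and return total wall-clock time."""
    threads = [threading.Thread(target=handle, args=(read_only,)) for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print(f"locked:    {run_parallel(False):.2f}s")  # roughly n x 50 ms, serialized
print(f"read-only: {run_parallel(True):.2f}s")   # roughly 50 ms, truly parallel
```

Four parallel writers finish in about four times the single-request duration, while four read-only requests finish in roughly one, which is exactly why unmarked AJAX calls hurt response times.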

Typically, Redis is the recommended Session Provider. However, one concern with Redis is that the database is in memory, which constrains the peak user sessions. You will need to account for the maximum number of parallel sessions, and size Redis appropriately to accommodate usage.
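A back-of-the-envelope sizing calculation along these lines might look as follows. The session size, overhead factor, and session count are all assumptions to replace with measurements from your own solution:

```python
def redis_memory_gb(peak_sessions: int, session_kb: float, overhead: float = 1.3) -> float:
    """Estimate Redis memory needed for session state.
    peak_sessions: maximum parallel sessions expected (assumed input)
    session_kb: average serialized session size in KB (measure in your solution)
    overhead: headroom for Redis internals and fragmentation (assumption)"""
    return peak_sessions * session_kb * overhead / (1024 * 1024)

# Example: 50,000 parallel sessions at ~20 KB each
print(round(redis_memory_gb(50_000, 20), 2))  # -> 1.24 (GB)
```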

It is also best practice to include a strategy for the collection database cleanup, which involves determining the length of time that data needs to be maintained. Typically, a routine is configured to clean up data based on business value, effectively allowing the analytics endpoints to maintain performance quality. In larger deployments, it is advisable to use a dedicated session expiration instance, which ensures optimal delivery resource usage.
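The retention decision itself can be expressed as a simple policy check. The sketch below is illustrative only; the interaction types and retention windows are hypothetical, and real cleanup in xDB is driven by configured tasks rather than application code like this:

```python
import datetime

RETENTION_DAYS = {          # hypothetical retention policy by business value
    "purchase": 730,        # keep high-value interactions for two years
    "page_view": 90,        # anonymous page views expire quickly
}

def is_expired(interaction_type: str, recorded: datetime.date,
               today: datetime.date) -> bool:
    """Decide whether an interaction falls outside its retention window."""
    days = RETENTION_DAYS.get(interaction_type, 180)  # assumed default window
    return (today - recorded).days > days

today = datetime.date(2024, 6, 1)
print(is_expired("page_view", datetime.date(2024, 1, 1), today))  # -> True
print(is_expired("purchase", datetime.date(2024, 1, 1), today))   # -> False
```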

4. Are you selling something?

A significant amount of content can be cached to optimize delivery. However, shopping experiences are transactional and unique to each customer. If your organization plans on having a commerce capability through your channel, then you will need to address the following:

  • Where is the transaction occurring (country, state)?
  • Where are the bottlenecks in the purchasing/ordering flow?
  • Are there seasons of high activity?
  • What is your catalog size (how many products, relationships, languages, price cards, coupons, categories)?
  • How many checkouts per second are expected at peak?
  • What are the peak customer profiles?
  • How many items are expected in carts? (note that B2B tends to require larger carts than B2C)
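To make the checkout-rate question concrete, a rough estimate can be derived from daily order volume. The peak-hour share, burst factor, and per-worker capacity below are assumptions, not measured values:

```python
import math

def peak_checkouts_per_sec(daily_orders: int, peak_hour_share: float = 0.25,
                           burst_factor: float = 3.0) -> float:
    """Rough peak checkout rate from daily order volume.
    peak_hour_share: fraction of daily orders landing in the busiest hour (assumed)
    burst_factor: short bursts above the hourly average (assumed)"""
    per_second = daily_orders * peak_hour_share / 3600
    return per_second * burst_factor

def checkout_workers(rate: float, capacity_per_worker: float = 2.0) -> int:
    """Workers needed if each can complete ~2 checkouts/sec (assumed)."""
    return math.ceil(rate / capacity_per_worker)

rate = peak_checkouts_per_sec(100_000)  # hypothetical 100k orders/day
print(checkout_workers(rate))           # -> 11
```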

Balancing capacity and speed across the scenarios above aids performance planning. For example, you may serve much of your content as a static site with fast global response times via your edge delivery network. Users could be having a great experience with that part of the site, but if your ordering backend cannot handle transactions from a given region or country, your commerce experience will suffer. Scaling your commerce features to keep this experience fast is key.

5. How can we see what is happening?

Monitoring and observability are critical to being able to respond to scaling demands. You need to be able to identify changes in performance while also allowing your teams to drill deeper to discover where performance bottlenecks and service health issues are. Effective monitoring helps your organization get ahead of issues and scale the right resources at the right time.

Kubernetes (K8s) can help simplify this process by establishing a dynamic hosting infrastructure. This effectively allocates resources to various workloads and provides load balancing across the instances that make up a solution’s workloads. With granular control over the infrastructure, operations teams can respond dynamically to many situations in a more cost-effective manner than trying to over-provision up-front.
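For reference, the core scaling rule of the Kubernetes Horizontal Pod Autoscaler is simple enough to sketch directly. This mirrors the documented formula; the CPU figures are examples:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetricValue / targetMetricValue)"""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods
print(desired_replicas(4, 90, 60))  # -> 6
```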

You can also leverage monitoring or observability tools such as Azure Application Insights, Grafana, Prometheus, New Relic, Datadog, Honeycomb, or any number of other tools that are available. These tools allow you to visualize the data about your performance metrics and identify fluctuations, danger scenarios, and drill into possible problem areas. You can also engage Sitecore Managed Cloud or your hosting partner to help add these capabilities to your solution.

For example, let us assume that the infrastructure is configured to scale automatically under a particular demand to maintain a great experience. The users are happy! However, in the long term you want to keep your operating costs down so that your team is happy too. That means you need to be able to see when these performance issues are happening and start analyzing for their root causes. There may be a fault in your application logic, a particular configuration that is causing a lock, or some other reason your application needs to scale. If you can identify and eliminate the underlying cause, you also eliminate the need for future automatic scaling in that scenario.

Next steps

Digital experiences link brands with customers. To establish a strong connection, developers must build and deliver these experiences across multiple digital touchpoints. To move your organization in that direction, we recommend the following next steps:

  • Review the answers to the five questions highlighted above with your team.
  • Determine your comfort level about managing and monitoring your scaling process.
  • Continue addressing these questions periodically, and before any event that will generate new traffic patterns and/or utilize new channels.

We also encourage you to consult the Sitecore Insights blog, which offers a growing library of thought leadership articles addressing various audiences on numerous aspects of digital transformation.

Pieter Brinkman is Senior Director of Technical Marketing at Sitecore, overseeing global strategy for its developer ecosystem, technical enablement of partners and technical employees, GTM strategies, and Sitecore’s sales demos.