When asked, many B2B buyers of complex solutions express a strong preference for a purchase experience entirely free of sales rep interactions. In a survey of nearly 1,000 B2B buyers, 43% of respondents agreed that they would prefer a rep-free buying experience. Cut by generation, 29% of Baby Boomers preferred to buy solutions without rep involvement, while remarkably over half of Millennials (54%) expressed the same sentiment. Both practical experience and data-driven evidence indicate a potentially dramatic generational shift in customer engagement preferences over the coming five to ten years.
In addition, SMART created three centers of excellence that consolidate otherwise duplicative efforts across traditional functional boundaries: one for data and analytics, one for customer insights and positioning, and one for creative and digital experience.
Pods are managed through a new UCE dashboard comprising metrics that span traditional marketing, sales, and service activity. Each pod leader is then tasked with helping the team ensure that SMART provides buyers in that geography with whatever support they require, through whichever channel, at whatever time, on whatever job.
Organizations are managing a more diverse array of infrastructure than ever, which heightens security, risk, and compliance concerns and affects service-level metrics. Monitoring and observability help address these concerns. However, monitoring remains fragmented, and significant amounts of data go unmonitored. Even so, as cloud, cloud-native, and open-source adoption, usage, and spending continue to increase, so do observability deployment and budget plans. Unfortunately, pricing and billing can be a barrier to achieving observability.
Observability enables organizations to measure how a system performs and identify issues and errors based on its external outputs. These external outputs are called telemetry data and include metrics, events, logs, and traces (MELT). Observability is the practice of instrumenting systems to secure actionable data that details when and why an error occurs.
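To make the four MELT telemetry types concrete, here is a minimal, hypothetical sketch that represents each as a simple record; real instrumentation would typically use a dedicated library (such as OpenTelemetry) rather than hand-rolled dictionaries like these.

```python
import time
import uuid

# Hypothetical illustration of the four MELT telemetry types as plain records.

def make_metric(name, value):
    """A metric: a numeric measurement taken at a point in time."""
    return {"type": "metric", "name": name, "value": value, "ts": time.time()}

def make_event(action, **attrs):
    """An event: a discrete action that occurred at a point in time."""
    return {"type": "event", "action": action, "ts": time.time(), **attrs}

def make_log(level, message):
    """A log: a timestamped text record, often emitted when an error occurs."""
    return {"type": "log", "level": level, "message": message, "ts": time.time()}

def make_span(operation, trace_id=None):
    """A trace span: one step in a request's path through the system."""
    return {"type": "span", "operation": operation,
            "trace_id": trace_id or uuid.uuid4().hex, "ts": time.time()}

# An instrumented checkout request might emit all four types:
telemetry = [
    make_metric("http.request.duration_ms", 42.7),
    make_event("checkout.completed", order_id="A123"),
    make_log("error", "payment gateway timeout"),
    make_span("POST /checkout"),
]
print([record["type"] for record in telemetry])
# ['metric', 'event', 'log', 'span']
```

The field names and helper functions here are illustrative assumptions, not any vendor's schema; the point is that each telemetry type answers a different question about when and why an error occurred.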
Organizations use observability to determine why something unexpected happened (in addition to the what, when, and how), particularly in complex environments where the possible scope of problems and interactions between systems and services is significant. The key difference is that observability does not rely on prior experience to define the conditions used to solve all problems (unknown unknowns). Organizations also use observability proactively to optimize and improve environments. For example, they can use observability data and capabilities to reduce infrastructure costs through resource optimizations, improve customer experience through software optimizations, and so on.6
Monitoring tools alone can lead to data silos and data sampling. In contrast, an observability platform can instrument an entire technology stack and correlate the telemetry data drawn from it in a single location for one unified, actionable view. The ability to see everything in the tech stack that could affect the customer experience is known as full-stack observability7 or end-to-end observability.
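One common way a platform correlates telemetry from across the stack is by a shared trace ID. The following is a simplified sketch under that assumption (the records and sources are invented for illustration), showing how data from siloed tools can be pulled into one unified view:

```python
from collections import defaultdict

# Hypothetical telemetry gathered by separate tools across the stack.
telemetry = [
    {"source": "frontend", "trace_id": "t1", "type": "span", "op": "GET /cart"},
    {"source": "backend", "trace_id": "t1", "type": "log", "msg": "db timeout"},
    {"source": "infra", "trace_id": "t1", "type": "metric", "name": "cpu", "value": 0.97},
    {"source": "backend", "trace_id": "t2", "type": "span", "op": "GET /home"},
]

def correlate(records):
    """Group records from every layer of the stack by their shared trace ID."""
    by_trace = defaultdict(list)
    for rec in records:
        by_trace[rec["trace_id"]].append(rec)
    return dict(by_trace)

unified = correlate(telemetry)
# Everything that affected request t1, across frontend, backend, and infra:
print([rec["source"] for rec in unified["t1"]])
# ['frontend', 'backend', 'infra']
```

With siloed monitoring tools, the frontend span, the backend log, and the infrastructure metric above would live in three places; correlating them in one location is what makes the view actionable.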
Largely due to ongoing hybrid- and multi-cloud adoption, organizations are managing a more diverse array of infrastructure than ever, gathering metrics from on-premises (on-prem) infrastructure and private and public cloud services, including serverless and managed Kubernetes (also known as K8s) services. Modern systems increasingly involve open-source code and multiple cloud-native microservices running on containers and Kubernetes clusters.
Most (93%) of the 2021 CNCF survey respondents used or planned to use containers in production. New Relic observability platform user data supports this CNCF survey finding with a 49% year-on-year increase in overall container adoption. In addition, 39% used serverless technology.38
The growing complexity of distributed applications and ongoing hybrid- and multi-cloud adoption have highlighted the need for observability capabilities like APM, infrastructure monitoring, and log management as organizations aim to maintain visibility, improve incident response, and gain a contextual understanding of their applications and infrastructure. As organizations modernize their applications, maintain visibility over expanding and increasingly distributed IT environments, and take a data-driven approach to incident and threat response, observability has become more important than ever.44
A subset of observability, security monitoring is also critical. In a 2018 report about developers by Stripe, 66% of C-suite executives said security/data breaches, and 62% said increased regulation, were threatening the success of their businesses.46
In 2020, more than 22 billion records of confidential personal information or business data were exposed, according to a report on the threat landscape by Tenable.47 According to a 2022 study by Gartner Peer Insights and Radiant Logic, 84% of organizations have experienced an identity-related data breach.48 A 2022 survey report about the state of ransomware by Gigamon found that 95% of respondents experienced ransomware attacks in the last year, and 59% claimed the ransomware crisis worsened in 2022. The research also revealed that 89% think deep observability is an important element of cloud security.49
According to a 2022 DevSecOps survey by GitLab, 57% of security team members said their organizations have shifted security left in the software development lifecycle (SDLC) or are planning to this year. About two-thirds of security professionals said they have a security plan for microservices (65%) and containers (64%). And 53% of teams said they had plans to secure cloud-native and serverless deployments. But while security scanning is increasing, access to data lags. In addition, almost 25% spent between half and three-quarters of their time dealing with audits and compliance.50
RUM, which can shed light on frontend systems and customer experience, garnered new interest due to the boost in e-commerce during the COVID-19 pandemic. Respondents considered synthetic monitoring a nice-to-have. Event correlation interest has grown as more vendors embrace observability and pull together different data types to provide more context for root cause analysis. And 92% of organizations thought AIOps tools would enable them to manage more workloads with fewer employees; AIOps and machine learning operations (MLOps) in observability tooling have added value for organizations experiencing skills and personnel gaps in operations.70
For example, almost half (45%) had deployed APM, 51% had deployed infrastructure monitoring, and 50% had deployed log management. Just over half said they deploy environment monitoring capabilities, like database, infrastructure, network, and security monitoring, as well as log management. RUM capabilities, like browser and mobile monitoring, and services-monitoring capabilities, like APM, were in the 40% range. Monitoring capabilities for emerging technologies, like AIOps, Kubernetes monitoring, ML model performance monitoring, and serverless monitoring, were among the least deployed, with each hovering in the 30% range.71
The 2022 State of Logs Report by New Relic saw a 35% year-over-year increase in logging data. It also found that 56% of New Relic customers use logs with infrastructure monitoring. And approximately 14% use logs alongside APM, a 68% year-over-year increase, which it expects to rise.72
A 2021 report by 451 Research found that because organizations see APM as expensive tooling, they prioritize it for more critical apps only instead of applying it across their apps. In addition, highly distributed, microservices-based applications can generate huge amounts of telemetry data, and organizations are now managing more logs than ever. So organizations have to balance storing as many logs as possible for the most granular insights against prioritizing certain logs for longer-term storage to alleviate cost concerns.95 This is known as data sampling.
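The trade-off between granular insight and storage cost can be sketched as a simple retention policy: always keep the most diagnostic logs long term, and sample the rest. The levels, rate, and tier names below are illustrative assumptions, not any vendor's policy:

```python
import random

# Hypothetical log-retention policy: keep all error-level logs for long-term
# storage, sample routine logs at a fixed rate, and drop the remainder.
LONG_TERM_LEVELS = {"error", "critical"}
SAMPLE_RATE = 0.1  # retain 10% of routine logs short term

def retention_tier(log, rng=random.random):
    """Assign a log record to a storage tier; rng is injectable for testing."""
    if log["level"] in LONG_TERM_LEVELS:
        return "long-term"  # most granular insight per byte, so always kept
    return "short-term" if rng() < SAMPLE_RATE else "drop"

print(retention_tier({"level": "error", "msg": "db timeout"}))
# long-term
```

Raising `SAMPLE_RATE` buys more granular troubleshooting data at higher storage cost, which is exactly the balance the report describes.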
United States Securities and Exchange Commission. February 2, 2022. Alphabet, Inc. Form 10-K: Annual Report for Fiscal Year Ending December 31, 2021. N.p.: United States Securities and Exchange Commission. /Archives/edgar/data/0001652044/000165204422000019/goog-20211231.htm.
United States Securities and Exchange Commission. February 4, 2022. Amazon, Inc. Form 10-K: Annual Report for Fiscal Year Ending December 31, 2021. N.p.: United States Securities and Exchange Commission. /Archives/edgar/data/0001018724/000101872422000005/amzn-20211231.htm.
These factors are more than a nice-to-have: they directly influence purchasing decisions and, in turn, CLV and business metrics. A quick response is important (89%) when deciding which companies to buy from, as is an overall smooth experience (85%).
When we drill into the metrics to see who is most likely to use these non-traditional customer service channels, it is unsurprising that younger generations are more likely to take advantage of alternative support options, although other generations also use them, at lower rates.
The sample of the market tracker gives a holistic overview of the available data sets (an Excel file with all tabs, columns, and key slides from the report). The sample also provides additional context on the topic and describes the methodology of the analysis.
The technology research firm Gartner defines data literacy as the ability to read, write and communicate data in context. This includes understanding the sources and constructs of data, the analytical techniques and methods used to create the data, and the ability to describe the result.
In the past, data literacy was a more technical field, focused on managing and producing the data itself. Companies looked for data science experts who knew techniques such as SQL, data extraction, and information normalization and who were familiar with technologies such as parallel processing, big data analysis, and R (a programming language).