Adobe, Equinix, Lenovo, and G-P share the five difference makers that will help companies successfully harness global talent to compete at speed and scale.
This article was originally published on CIO.com by Chris Davis, Partner at Metis Strategy, and Kelley Dougherty, Associate at Metis Strategy.
A large, global company has no choice but to harness the power of technology talent around the world. There simply aren’t enough people with the right skills, at the right cost, in a single location to support the innovation and operational demands of a modern organization.
Organizations have implemented a variety of workforce models over the last two decades or so, but each eventually left them with more questions than answers. The global outsourcing trend of the 1990s and early 2000s addressed capacity constraints through low-cost labor markets, but the lack of ownership and the transaction-based working relationships meant organizations outsourced not just the work, but also the accountability. The early-2010s practice of co-locating talent supercharged collaboration, but it also limited organizations’ ability to scale with a workforce based in high-density, cost-prohibitive metros. By 2020, many technology leaders began revisiting the idea of building dedicated, employee-based teams in lower-cost global locations, but the remote workforce model experienced its own set of challenges throughout the COVID-19 pandemic.
Fast forward to today. Many global technology leaders still struggle to balance cost efficiencies, team productivity, and the human aspects of employment. A few pioneers, however, preemptively and effectively prepared for a distributed remote workforce, and they flourished at a time when others scrambled to adjust to the new normal.
Adobe, Equinix, Lenovo, and G-P were strategically situated and equipped to achieve the ideal duality: realize cost efficiencies from global talent while retaining the effectiveness of an agile team, all within a distributed operating model. The key design principle they all share is this:
To make global, distributed teams successful, they established dedicated decision-making power in distributed locations with full-stack teams of business and technology employees that can autonomously deliver end-to-end value.
The technology and HR leaders at each of these organizations shared their insights into building successful global teams that can sustainably drive innovation at scale. They highlighted five themes that can make or break the development of a global operating model:
We will explore each of these below.
Implementing operating structures that leverage distributed decision-making in the context of a coordinated strategy requires cross-functional credibility and finesse. The ability to drive strategy across several departments or teams with varying functions and skill sets is a rare talent, and leadership needs to build this muscle to ensure distributed teams are aligned for execution.
Art Hu and Jeanne Bauer-Hamlett, Chief Information Officer and Executive Director of Human Resources at Lenovo, emphasize the importance of equipping and empowering teams for success. Leaders need not only to identify the appropriate talent to manage and own decisions within global teams, but also to regularly engage with local managers to maintain strategic and process alignment. Middle management will ultimately be the make-or-break layer of the operating structure, so leaders need to ensure they are able to:
Leading global teams also comes with the inherent challenge of limited in-person interactions and face-to-face communication. Technology leaders therefore need to be particularly adept at building trust between themselves and dispersed teams.
“You’ve got to be good with the data, but you better have the emotional intelligence to match it,” says Richa Gupta, CHRO at G-P. This includes displays of empathy, authenticity, and concerted efforts to build human connection, whether in person or virtually.
Leaders should err on the side of over-communication to break down emotional barriers and establish a sense of transparency across locations and management layers. Adobe CIO Cindy Stoddard explains that she makes it a habit to foster connections, both virtual and in person, throughout Adobe’s IT teams.
Likewise, Milind Wagle, CIO at Equinix, notes that he makes deliberate efforts to visit each of the company’s global teams at least twice a year to alleviate “emotional” distance with his reports and ensure each location feels valued and connected to the organization.
The value of building full-stack global teams comes largely from the knowledge retention and improved delivery that flow from a sense of organizational identity within teams and a long-term commitment to the organization.
“Prioritizing culture among the leadership team is crucial, as a company’s culture starts at the top and is carried down to employees,” Hu said. Leaders should approach culture as an internal capability that needs to be actively maintained, measured, and nurtured from the top down.
For example, we worked with a retail company that built a nearshore development center in Mexico to maintain time zone alignment while taking advantage of 2-3x cost savings. While onboarding 50 new developers, U.S. team members flew to Querétaro for cross-location training on the agile product operating model. The team leader in Mexico took the time to educate employees on both Mexican and American business culture, and encouraged empathy and open dialogue between team members and segments of the IT organization. Doing so helped build cultural understanding and trust at the outset of the working relationship.
Indeed, organizations establishing globally distributed teams need to understand and navigate the business and cultural distances that may cause friction across teams and stakeholder groups. Rather than attempt to enforce a blanket uniformity across all offices, technology leaders should aim to strike a balance between promoting a common sense of organizational identity and celebrating the local cultures and customs of each team.
Establishing and maintaining effective employee feedback loops is an essential aspect of promoting a positive workplace culture. Art and Jeanne explain how they have made intentional efforts at Lenovo to establish feedback loops that regularly measure employee experience and identify pain points. They note the importance of using outcomes, rather than outputs or behaviors, for those measurements. For example, employees at Lenovo are evaluated based on the specific value or outcomes they deliver rather than the amount of time spent online or at a desk.
The same logic can be applied to measuring company cultural efforts. Rather than analyzing the number of diversity workshops or social events scheduled in a given period, leaders should instead assess outcome-based metrics such as:
Initial launches of a global operating model are typically designed with “decision makers” (product managers, business systems analysts, senior tech leads, and business stakeholders) based in the U.S., and with Scrum Masters and engineering teams in the other location. While this can work, leading organizations tap global talent to build truly global solutions.
For example, one organization stood up a full data platform team, including product management, Scrum Masters, and engineering, in India. This was not a subservient team that took orders from the U.S.; it was fully empowered to build the global product for all users.
Another organization built out a business-unit-aligned supply chain team in Brazil to best serve South America. A third built out a team in Singapore to support finance operations and then structured their business product management in the same location to align time zones. In each case, the big shift was allowing these teams to define the strategy, develop the product, launch, and operate it no differently than a team in the U.S.
The appropriate tactical model for each organization will be dependent on the specific needs and responsibilities of teams. For example, Hu notes that “a ‘follow-the-sun’ model can and does work for teams who have well-defined tasks, boundaries, and hand-off protocols.” In contrast, a team that has heavy dependencies and requires more centralized oversight and direction will be better suited to a setup that allows for more time-zone crossover with other teams, or fully accountable teams staffed within the same time zone.
Wagle of Equinix notes the importance of communication within a global operating model, but also emphasizes that cross-team communication does not necessarily need to be more frequent or confined to a specific forum. Communication should instead be optimized to make the best use of teams’ time. Equinix moved away from daily scrum meetings in favor of weekly meetings with daily asynchronous check-ins, reducing meeting exhaustion and allowing more time for key objectives. Technology leaders should ensure the proper cadences are established for strategic decision-makers and cross-functional teams to discuss key topics such as:
A technology operating model built on agile practices and consistent delivery processes enables teams to reduce operational redundancies, cross-team friction, and internal costs. Stoddard at Adobe describes improving business workflows as “a strategic investment” and notes that her organization focused on establishing systems that “create positive employee and customer experiences in the hybrid world, drive efficiency and productivity, and enable standardization, optimization and consolidation.”
With standard ways of working in place, technology leaders need to define where and how decision-making rights are delegated. The digital-first, hyper-connected nature of the modern workplace means people no longer need to be in a company headquarters to have influence, but organizations need to be intentional about which decisions are delegated to local teams. Technology leaders should have a defined architecture of decision-making rights that enable teams to work asynchronously and deliver autonomous value while ensuring those teams are working harmoniously toward enterprise-level strategic objectives.
“Technology is no longer just about enabling work, it’s the workplace itself,” said G-P’s Richa. Leaders establishing a digital operating model built on distributed teams need to ensure the appropriate tools and systems are in place to support it.
The most fundamental technologies are those enabling a unified and streamlined employee experience, giving teams the day-to-day resources and support they need to do their jobs. Delivery and project management tools that can be shared across locations enable teams to have visibility across efforts, monitor risks, and identify dependencies without daily face time. Milind provides the example of Equinix’s rollout of ‘Operation Collaboration,’ a program geared toward maturing the organization’s workplace technologies and meeting experience platform so teams can work asynchronously.
Each of the companies above has also invested in technologies to streamline internal processes and reduce the operational risks of distributed teams. Cindy at Adobe advises that by investing in digitization, technology leaders “can help their organizations make the most of data analytics and insights, unlock new business and revenue opportunities, and significantly reduce costs.” These organizations also made strategic pushes to leverage AI and automation to minimize repetitive tasks, reduce time costs, optimize resource utilization, and allow teams to access services and support regardless of location or time zone. Lenovo in particular launched its Premier Support Plus, which “combines AI and human interaction for proactive, predictive, seamless and direct IT support, designed specifically for today’s hybrid workforce.”
The regulatory environment in each team location is the final, and potentially consequential, consideration of a workforce strategy. Among the standard regulatory concerns are those regarding the local labor and employment laws in a given location. Richa at G-P says that establishing a foreign entity and managing local administrative tasks is both costly and time-consuming, and advises that technology leaders work with internal or outsourced HR and legal experts to ascertain the compliance requirements around legal entities, taxes, compensation, benefits, workers’ rights, and the ability to hire and fire, among others.
The second facet of compliance is more closely aligned to a technology leader’s purview and pertains to the local data, privacy, and intellectual property regulations. Some regions could differ in their approach to data sovereignty and IP protection, so organizations may weigh privacy concerns when determining where and how to store sensitive information. Art at Lenovo advises that leaders have “full awareness of the laws and regulations, and make sure global teams have the tools and processes to adapt to the rapidly changing landscape.”
For organizations contemplating building a global technology operating model, the final big decision is whether your company is willing to truly change its mindset. There is a big difference between a “U.S.-based company that operates internationally” and a “global company that happens to be headquartered in the U.S.”
Not all companies will be ready for it. But, in our view, there is no other option to realize both efficiency and effectiveness in your operating model. Whether proactively or reactively, global companies will have to retool the way they work across these five dimensions to sustainably leverage global talent at scale.
How leaders can drive the coveted project-to-product transformation
This article was originally published on CIO.com by Chris Davis, Partner at Metis Strategy, and Kelley Dougherty, Associate at Metis Strategy.
In this time of fluid markets, fierce competition, and constant disruption, the modern enterprise must stay innovative and agile. It must be ready to evolve at any moment, and deliver quickly, consistently, and reliably through its large-scale software operations.
But it can hardly do so through traditional, monolithic ways of working, particularly those organized around projects. Many companies are therefore reorienting their operating models around end-to-end products. Done well, these transformations make a company nimble. Done poorly, they exhaust the organization and produce little value.
Leaders must transform their organizations methodically along a path that minimizes redundancies, builds momentum, and creates immediate and tangible business value. In this article, we outline the steps to start a product operating model journey, coloring the steps with stories told on the Metis Strategy Podcast by executives from companies like Ascension, Condé Nast, and Hyatt.
First, leaders must identify the products around which their operating model will be designed. We define a “product” in this context as:
“a capability or portion of a capability, brought to life through technology, business process, and customer experience, with a continuous value stream, and an ability to measure success independently.”
Therefore, leaders should draw the capability map of their business, showing how value streams and assets are positioned, how they relate to each other, and which of them are immature or missing. These capabilities can then be translated into end-to-end products calibrating for the organization’s size, offerings, and business model.
If an organization has uniform customer offerings and go-to-market motions, then its products should be aligned to the company’s value chain. Such is the case at Ascension, as explained by its Chief Marketing and Digital Experience Officer, Raj Mohan: “We’ve organized our teams particularly broken up by the consumer journey into product teams down that path, and then staff those teams along those journeys itself.”
In practice, products aligned to a customer-facing value chain might include: Development → Marketing → Sales/Order Management → Fulfillment → Customer Success
Aligned to internal value streams, they might include financial management, HR management, legal management, IT management, facilities management, and data and analytics.
In contrast, if an organization has multiple business units, offerings, or go-to-market processes, its products must be defined so they account for each BU’s customers, geographies, and so on. This way, products can still be aligned to value chains but also arranged into broader groups, lines, and teams, each constituting a “deeper” aspect of the value chain.
This is how products have been defined at Condé Nast.
Sanjay Bhakta, Chief Product and Technology Officer at Condé Nast, explains that his organization’s product offerings result in it having “some capability within the brands, especially the big brands, that focus on things that may be bespoke or have specific requirements.”
Next, leaders must define the capabilities around which they’ll organize resources and configure the product teams such that they can deliver value autonomously. Mohan suggests that a product team can stand on its own “if, over at least a three-year horizon, you can see clearly that a durable team can bring value that you can sign up for.”
How many product teams should you have? As a rule of thumb: about one product team for every ten employees in the organization. Ideally, each product team should comprise seven to nine people, including a product manager, scrum lead, technical lead, and engineers. These might be supplemented by user experience leads for consumer products, other engineers, shared services, or specialists.
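To make the rule of thumb concrete, here is a minimal sketch (our own illustration, with the numbers taken from the guidance above) of how the sizing works out for a given headcount:

```python
# Rough sizing sketch: about one product team per ten employees,
# each team comprising seven to nine people.
def plan_product_teams(employees: int) -> dict:
    teams = employees // 10
    return {
        "product_teams": teams,
        "people_on_teams": (teams * 7, teams * 9),  # min/max headcount range
    }

# A 1,000-person organization yields ~100 teams of 7-9 people,
# i.e., 700-900 people on product teams, with the rest in shared roles.
print(plan_product_teams(1000))  # {'product_teams': 100, 'people_on_teams': (700, 900)}
```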
A project-to-product transformation requires that an enterprise think first in terms of products, and this shift hangs on the structures and processes by which the company manages its portfolio. A company should organize its portfolio around the outcomes it seeks, and those outcomes should in turn dictate which capabilities are staffed first to mature at a higher rate. When resources are limited, start by productizing 2-5 key areas, do it well, and scale from there.
Hyatt, for example, has organized its portfolio around customer-focused capabilities, and so has caused the enterprise at large to think in terms of customer outcomes. As Hyatt’s Global CIO, Eben Hewitt, has explained: “Moving to a product mindset, to me, means, number one, it’s for a customer… You’re thinking about the outcomes that people want.”
Further, an organization will do well to manage its portfolio according to Agile principles and to align its product teams to business outcomes. Not only will product teams then naturally align to each other and their shared objectives; the organization itself will think in terms of products and outcomes.
To manage the portfolio by capability, use annual planning sessions to craft roadmaps aligned to outcomes and segmented by capability. Such roadmaps can then inform the teams that support those capabilities and ensure their own roadmaps align to enterprise objectives. These planning sessions also give leaders a chance to decide how to allocate funds. As a rule, product teams should receive roughly 80% of the organization’s budget, and that allocation should cover their needs end to end to build and manage the lifecycle of the product. The remaining 20% should go toward broader initiatives.
Adopting an Agile mindset and common ways of working early in the journey will help reorient a company reliant on waterfall, project-based operating models toward continuously delivering value. However, frameworks such as Scrum and Kanban are a means to an end. Some organizations conflate a “product” transformation with an “Agile transformation” and lose themselves in the minutiae of adhering to specific ‘rules’ and ceremonies. The key is to create a baseline for teams to form, storm, and norm by reducing confusion about how to transition from a rigid waterfall process to a mindset in which an entire agile product team establishes a shared identity founded in the problem the product solves, not in titles or roles on a waterfall assembly line.
Bhakta emphasizes that Agile should extend to the relationship between product and engineering. He explains: “[It] helps us do faster decision-making, helps us to get products out into the market faster.”
If organizations are already practicing Agile when they start transforming, they should focus on infusing the product mindset into their processes. If an organization isn’t so mature, however, it should train teams on core Agile practices to which they can align their processes.
Ultimately, this transformation largely depends on whether people can successfully fill the role of Product Manager, balancing business value, viability, usability, and feasibility to focus teams on shipping products and experiences that users love, adopt, and help improve with feedback.
Therefore, each team needs a Product Manager, who can:
Identifying, training, and upskilling Product Managers, especially for internal products, is often the hardest part of the journey. But to be successful, Product Managers must also have clear scopes of responsibility, the power to execute on them, and feedback loops by which they can measure performance and course-correct.
Each of the steps we’ve covered critically enables teams to scale, and once the steps have been carried out the first time, they tend to act as a flywheel, sustaining themselves with their own momentum and creating excitement within the organization to productize more capabilities.
To gauge success of your product operating model journey, start by:
The journey of maturing a product team is never really complete. Once the teams are launched with the steps outlined in this article, leaders should then do the following at scale, working team by team:
It is our firm belief that adopting a product operating model is the only way to successfully support a scaling organization. But don’t take it lightly; this is a commitment that requires leaders to dedicate at least a year of their time to successfully transform an organization’s mindset.
Personalized customer experiences, automated business operations, and data science-driven insights all depend on the quality and volume of your data. That’s why your data privacy strategy must be more than a policy on ethics.
This article was originally published on CIO.com by Chris Davis, Partner at Metis Strategy and Elizabeth Tse, Associate at Metis Strategy.
Companies continue to face implementation challenges as they rush to comply with data privacy regulations such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This is due largely to a mismatch between their management of data and the stringent requirements set by the regulations.
Organizations can address the complexities of privacy regulations via a well-defined data governance framework, which leverages people, processes and technologies to establish standards for data access, management and use. Such a framework also enables companies to address elements of privacy, including identity and access management, consent management and policy definition.
As leaders implement data governance models with privacy in mind, they may face challenges, including lukewarm executive buy-in, lack of a cohesive data strategy, or diverging opinions about how data should be used and handled. To address these obstacles, leaders should consider the following actions:
While a Chief Data Officer or CIO may lead the implementation of a data governance framework or model, data governance should be a shared responsibility across a company. At a minimum, the IT department, privacy office, security organization, and various business divisions should be involved, as each has an important stake in data management. Bringing in a variety of stakeholders early allows firms to establish key data objectives and a broader data governance vision. This collaboration can take the form of a dedicated task force or may involve regular reporting on data governance and privacy objectives to the executive board.
Data privacy, similarly, is also a shared responsibility. All employees have a part to play in maintaining data privacy by following accepted standards for data collection, use and sharing. Indeed, implementing a successful data governance model with privacy in mind requires educating employees on governance concepts, roles and responsibilities, as well as data privacy concepts and regulations (e.g. the definition of “personal information” vs. “consumer information”).
After establishing a governance vision and driving employee awareness, organizations can define their desired data governance roles – such as data owners, data stewards, data architects and data consumers – and tailor the roles to their needs. Some companies may distinguish between data stewards and data owners, for example, with the former responsible for executing daily data operations and the latter responsible for data policy definition. For one client with a large and complex IT department, Metis Strategy established a governance hierarchy with an executive-level board, combined data steward/owner roles, and other positions (e.g. data quality custodians). This structure facilitated ease of communication and enabled the client to scale its data management practices.
In the long term, firms should incorporate data governance and management skills into their talent strategy and workforce planning. Given the expertise required and the shortage of qualified people for some data-intensive roles, organizations can consider enlisting the help of talent-sourcing firms while focusing internal efforts on talent retention and upskilling. As companies’ strategic goals and regulatory requirements change, they should remain flexible in adjusting their data governance roles and ownership.
To respond adequately to consumer privacy-related requests for data, organizations should establish standardized procedures and policies across the data lifecycle. This will allow companies to understand what data they collect, use and share, and how those practices relate to consumers.
For example, the CCPA provides consumers with the right to opt out of having their personal information sold to third parties. If a retailer needed to comply with such a request, it would need to be able to answer questions in the following categories:
Establishing policies and standards for the above can help organizations quickly determine the actions needed to respond to customer requests under privacy regulations. Companies should communicate policies widely and ensure that they are being followed, as failing to do so can propagate the use of inconsistent templates and practices. At one Metis Strategy client, for example, few stakeholders had sufficient awareness of data management and access standards, despite the fact that the client’s IT department had established extensive policies around them.
To successfully implement data governance frameworks and ensure privacy compliance, firms may also need to address challenges posed by legacy infrastructure and technical debt. For example, data often is stored in silos throughout an organization, making it difficult to appropriately identify the source of any data privacy issues and promptly respond to consumers or regulatory authorities.
Firms also need to evaluate the security and privacy risks posed by outsourced cloud services, such as cloud-based data lakes. Those using multiple cloud providers may want to streamline their data sharing agreements to create consistency across vendors.
Some technologies can help companies leverage customer data while mitigating privacy risks. In a Metis Strategy interview, Greg Sullivan, CIO of Carnival Corporation, noted that data virtualization enhanced his organization’s analytics capabilities, drove down operational and computing costs and reduced the company’s exposure to potential security and privacy gaps.
Companies can also consider new privacy compliance technologies, which can enhance data governance through increased visibility and transparency. Data discovery tools use advanced analytics to identify data elements that could be deemed sensitive, for instance, while data flow mapping tools help companies understand how and where data moves both internally and externally. These tools can be used to help organizations determine the level of protection required for their most critical data elements and facilitate responses to consumer requests under GDPR and CCPA.
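At their core, data discovery tools scan content for elements that match sensitive-data patterns. The toy sketch below is our own illustration of that basic idea; commercial products are far more sophisticated, often layering machine learning on top of pattern matching:

```python
# Toy illustration of data discovery: flag text that matches patterns
# commonly treated as personal information. Patterns are simplified.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def discover_sensitive(text: str) -> dict:
    """Return each pattern label found in the text with its matches."""
    return {label: pattern.findall(text)
            for label, pattern in PATTERNS.items() if pattern.search(text)}

print(discover_sensitive("Contact jane.doe@example.com or 555-867-5309."))
# {'email': ['jane.doe@example.com'], 'phone': ['555-867-5309']}
```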
Although legacy technology overhauls can be time-consuming and costly, firms that are decisive about doing so can reduce their privacy and security risks while avoiding other challenges related to technical debt.
As the global data privacy landscape evolves, organizations should continuously adapt their data governance models. Companies should proactively address their obligations by designing data governance roles, processes, policies, and technology with privacy in mind, rather than reacting to current and forthcoming privacy legislation. Companies doing so can not only improve risk and reputational management, but also encourage greater transparency and data-driven decision-making across their organizations.
This article originally appeared on CIO.com. Steven Norton co-authored the piece.
You have heard the hype: Data is the “new oil” that will power next-generation business models and unlock untold efficiencies. For some companies, this vision is realized only in PowerPoint slides. At Western Digital, it is becoming a reality. Led by Steve Phillpott, Chief Information Officer and head of the Digital Analytics Office (DAO), Western Digital is future-proofing its data and analytics capabilities through a flexible platform that collects and processes data in a way that enables a diverse set of stakeholders to realize business value.
As a hard disk drive (HDD) manufacturer and data storage company, Western Digital already has tech-savvy stakeholders with an insatiable appetite for leveraging data to drive improvement across product development, manufacturing and global logistics. The nature of the company’s products requires engineers to model out the most efficient designs for new data storage devices, while also managing margins amid competitive market pressures.
Over the past few years, as Western Digital worked to combine three companies into one, which required ensuring both data quality and interoperability, Steve and his team had a material call to action to develop a data strategy that could:
To achieve these business outcomes, the Western Digital team focused on:
The course of this analytics journey has already shown major returns by enabling the business to improve collaboration and customer satisfaction, accelerate time to insight, improve manufacturing yields, and ultimately save costs.
Driving cultural change management and education
Effective CIOs have to harness organizational enthusiasm to explore the art of the possible while also managing expectations and instilling confidence that the CIO’s recommended course of action is the best one. With any technology trend, the top of the hype cycle brings promise of revolutionary transformation, but the practical course for many organizations is more evolutionary in nature. “Not everything is a machine learning use case,” said Steve, who started by identifying the problems the company was trying to solve before focusing on the solution.
Steve and his team then went on a roadshow to share the company’s current data and analytics capabilities and future opportunities. The team shared the presentation with audiences of varying technical aptitude to explain the ways in which the company could more effectively leverage data and analytics.
Steve recognized that while the appetite to strategically leverage data was strong, there simply were not enough in-house data scientists to achieve the company’s goals. There was also an added challenge of competing with silos of analytics capabilities across various functional groups. Steve’s team would ask, “could we respond as quickly as the functional analytics teams could?”
To successfully transform Western Digital’s analytics capabilities, Steve had to develop an ecosystem of partners, build out and enable the needed skill sets, and provide scalable tools to unlock the citizen data scientist. He also had to show his tech-savvy business partners that he could accelerate the value to the business units and not become a bureaucratic bottleneck. By implementing the following playbook, Steve noted, “we proved we can often respond faster than the functional analytics teams because we can assemble solutions more dynamically with the analytics capability building blocks.”
Achieving quick wins through incremental value while driving solutions to scale
Steve and his team live by the mantra that “success breeds opportunity.” Rather than ask for tens of millions of dollars and inflate expectations, the IT team, called the High-Performance Computing group, pursued a quick win to establish credibility. After identifying hundreds of data sources, the team prioritized use cases that hit the sweet spot of being solvable while clearly exhibiting incremental value.
For example, the team developed a machine learning application called DefectNet to detect test-fail patterns on the media surface of HDDs. Initial test results showed promise in detecting and classifying images by spatial patterns on the media surface. Process engineers could then trace patterns back to upstream equipment in the manufacturing facility. From the initial prototype, the solution grew incrementally to scale, expanding into use cases such as metrology anomaly detection. Now every media surface in production goes through the application for classification, and the solution serves as a platform used for image classification applications across multiple factories.
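Western Digital has not published DefectNet’s internals, but the general shape of such a model is well understood. Below is a minimal, hypothetical PyTorch sketch of a convolutional classifier for spatial patterns on a surface map, purely to illustrate the kind of model involved; the layer sizes and class count are invented:

```python
# Hypothetical sketch of a convolutional classifier for spatial defect
# patterns; not Western Digital's actual DefectNet implementation.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel surface map in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Classify a batch of 64x64 test-fail maps into five assumed pattern classes.
model = DefectClassifier()
logits = model(torch.randn(8, 1, 64, 64))  # synthetic stand-in data
print(logits.shape)  # torch.Size([8, 5])
```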
A similar measured approach was taken while developing a digital twin for simulating material movement and dispatching in the factory. An initial solution focused on mimicking material moves within Western Digital’s wafer manufacturing operations. The incremental value realized from smart dispatching created support and momentum to grow the solution through a series of learning cycles. Once again, a narrowly focused prototype became a platform solution that now supports multiple factories. One advantage of this approach: deployment to a new factory reuses 80% of the already developed assets, leaving only 20% for site-specific customization.
Developing a DAO hybrid operating model
After earning credibility that his team could help the organization, Steve established the Digital Analytics Office (DAO), whose mission statement is to “accelerate analytics at scale for faster value realization.” Comprising data scientists, data engineers, business analysts, and subject matter experts, this group sought to provide federated analytics capabilities to the enterprise. The DAO works with business groups, who also have their own data scientists, on specific challenges that are often related to getting analytics capabilities into production, scaling those capabilities, and ensuring they are sustainable.
The DAO works across functions to identify where disparate analytics solutions are being developed for common goals, using different methodologies and achieving varying outcomes. Standardizing on an enterprise-supported methodology and machine learning platform gives business teams faster time to insight and higher value.
To gain further traction, the DAO organized a hackathon that included 90 engineers broken into 23 teams that had three days to mock up a solution for a specific use case. A judging body then graded the presentations, ranked the highest value use cases, and approved funding for the most promising projects.
In addition to using hackathons to generate new demand, business partners can also bring a new idea to the DAO. Those ideas are presented to the analytics steering committee to determine business value, priority and approval for new initiatives. A new initiative then iterates in a “rapid learning cycle” over a series of sprints to demonstrate value back to the steering committee, and a decision is made to sustain or expand funding. This allows Western Digital to place smart bets, focusing on “singles rather than home runs” to maintain momentum.
Building out the data science skill set
“Be prepared and warned: the constraint will be the data scientists, not the technology,” said Steve, who recognized early in Western Digital’s journey that he needed to turn the question of building skills on its head.
The ideal data scientist is driven by curiosity and can ask “what if” questions that look beyond a single dimension or plane of data. They can understand and build algorithms and have subject matter expertise in the business process, so they know where to look for breadcrumbs of insight. Steve found that these unicorns represented only 10% of data scientists in the company, while the other 90% had to be paired with subject matter experts to combine the theoretical expertise with the business process knowledge to solve problems.
While pairing people together was not impossible, it was inefficient. In response, rather than ask how to train or hire more data scientists, Steve asked, “how do we build self-service machine learning capabilities that only require the equivalent of an SQL-like skill set?” Western Digital began exploring Google’s and Amazon’s AutoML capabilities, in which machine learning generates additional machine learning. The vision is to abstract away the more sophisticated skills involved in developing algorithms so that business process experts can be trained to conduct data science exploration themselves.
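The spirit of that vision can be shown with a simple automated model-selection loop: the “citizen data scientist” supplies a table of features and a target, and the tooling tries candidate models and keeps the best. This scikit-learn sketch is our own illustration, not Western Digital’s or the cloud vendors’ actual AutoML machinery:

```python
# Minimal illustration of automated model selection: fit several
# candidates and keep the best by cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in data

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (mean CV accuracy {scores[best]:.3f})")
```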
Designing and future-proofing technology
Many organizations take the misguided step of formulating a data strategy solely around technology. The limitation of that approach is that companies risk over-engineering solutions with a slow time to value, and by the time products are in market, the solution may be obsolete. Steve recognized this risk and guided his team to develop a technology architecture that provides the core building blocks without locking in on a single tool. This fit-for-purpose approach allows Western Digital to future-proof its data and analytics capabilities with a flexible platform. The three core building blocks of this architecture are:
Designing and future-proofing technology: Collecting data
The first step is to be able to collect, store, and make data accessible in a way that is tailored to each company’s business model. Western Digital, for example, has significant manufacturing operations that require sub-second latency for on-premises data processing at the edge, while other capabilities can afford cloud-based storage for the core business. Across both ends of that spectrum, Western Digital ingests 80-100 trillion data points into its analytics environment daily, with ever more analytical compute power pushing to the edge. The company also optimizes where it stores data, decoupling the data and technology stack, based on the frequency with which the data must be analyzed. If the data is only needed a few times a year, the best low-cost option is to store it in the cloud. Western Digital’s common data repository spans processes across all production environments and is structured so that it can be accessed by various types of processing capabilities.
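The placement logic boils down to a simple decision rule. The sketch below is our own illustration, with hypothetical thresholds rather than Western Digital’s actual policy:

```python
# Hypothetical tiering rule: place data by latency requirement and how
# often it must be analyzed. Thresholds are illustrative only.
def choose_storage_tier(latency_ms_required: float, reads_per_year: int) -> str:
    if latency_ms_required < 1000:   # sub-second processing stays at the edge
        return "edge"
    if reads_per_year <= 4:          # touched a few times a year: cheap cloud tier
        return "cloud-archive"
    return "cloud-standard"

print(choose_storage_tier(latency_ms_required=500, reads_per_year=365))   # edge
print(choose_storage_tier(latency_ms_required=60_000, reads_per_year=2))  # cloud-archive
```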
Further, as Western Digital’s use cases became more latency-dependent, it was evident that the company required core cloud-based big data capabilities closer to where the data was created. Western Digital also wanted to enable its user community by providing a self-service architecture. To do this, the team developed and deployed a PaaS (Platform as a Service) called the Big Data Platform Edge Architecture in Western Digital’s factories, using cloud-native technologies and DevOps best practices.
Future-proofing technology: Process & govern data
With the data primed for analysis, Western Digital offers a suite of tools that allow its organizations to extract, govern, and maintain master data. From open source Hadoop to massively parallel processing, NoSQL, and TensorFlow, data processing capabilities are tailored to the complexity of the use case and the volume, velocity, and variety of the data.
While these technologies will evolve over time, the company will continually need to sustain data governance and quality. At Western Digital, everyone is accountable for data quality. To foster that culture, the IT team established a data governance group that identifies, educates and guides data stewards in the execution of data quality delivery. With clear ownership of data assets, the trust and value of data sets is scalable.
Beyond ensuring ownership of data quality, the data governance group also manages platform decisions, such as how to structure the data warehouse, so that the multiple stakeholders are set up for success.
Future-proofing technology: Realize value
Data applied in context transforms numbers and characters into information, knowledge, insight, and ultimately action. In order to realize the value of data in the context of business processes – either looking backward, in real time, or into the future – Western Digital developed four layers of increasingly advanced capabilities:
Codifying the analytical service offerings in this way lets business partners use the right tool for the right job. Rather than tell people exactly which tool to use, the DAO focuses on enabling a fit-for-purpose toolset under the guiding principle that whatever is built should have a clear, secure, and scalable path to launch, with the potential for reuse.
This platform reusability tremendously accelerates time to scale and business impact.
Throughout this transformation, Steve Phillpott and the DAO have helped Western Digital evolve its mindset as to how the company can leverage data analytics as a source of competitive advantage. The combination of a federated operating model, new data science tools, and a commitment to data quality and governance have allowed the company to define its own future, focused on solving key business problems no matter how technology trends change.
Price matters, a lot. In an era of hyper price transparency, the subtlest price discrepancies will drive consumers to purchase on the channels with the lowest price. Consumers often make buying decisions in two steps: first, what they want to buy; second, where they will buy it. Especially for goods and services that are not substantially differentiated in quality or features, the average consumer will naturally gravitate toward the lowest price. This has been felt acutely by retailers such as Best Buy, where consumers go to window shop but complete their purchases on lower-priced ecommerce alternatives (e.g., Amazon, eBay, Jet). Best Buy has since woken up to the fact that without differentiating the customer experience, it was unable to create the stickiness needed to convert foot traffic.

When selling a commodity, or a good or service with a comparable substitute, price parity is arguably the most important driver of decision making. The challenge, of course, is that the manufacturer of a good, or the provider of a service, doesn’t always own the end touch point with the consumer. Many companies rely on a network of distribution partners to help market and sell their products. While this approach allows companies to scale revenue without the risk of building a massive salesforce, it also means that the manufacturer or provider cannot control all the variables that influence consumers’ buying decisions.
To strike the right balance, many companies develop a distribution strategy that comprises two dimensions: direct and indirect sales. Direct distribution focuses on selling directly to customers, while indirect distribution depends on intermediaries to complete a transaction. A distribution strategy needs to be married to a robust approach to inventory management, which may mean different things to a manufacturer than to a service provider. Manufacturing firms typically have a robust sales and operations planning process (referred to as S&OP), during which they forecast sales and ensure there is enough inventory produced and physically distributed to distribution centers or shelf space to meet consumer demand. Service providers tend to look at inventory as an expiring asset: once time has passed, you can no longer sell that service (e.g., once a plane takes off with an empty seat, or a tee time passes without a foursome teeing off).
Although hospitality was one of the first industries to create robust distribution channels and networks through Online Travel Agencies (OTAs) to capture additional business, one consequence of that arrangement is that customers were conditioned to view hotel rooms as a commodity for which price was the primary decision factor. While OTAs let reviews and minimal merchandising try to differentiate hotels, consumers got lost in the noise and struggled to distinguish one chain from another.
Over the past five years, intermediaries successfully crafted a narrative that they had the consumer’s best interest at heart by negotiating with the hotels, and that only the OTAs could be trusted for the lowest price. Some of this was true: you could find lower prices on last-minute deals, and there was benefit to both the OTAs and hotel operators that did not want to see a bed go empty. However, as OTAs further influenced the customer experience and ate into profits with a greater share of bookings, the hospitality, airline, and other industries recognized that they would have to take decisive action to remove price disparity as the primary reason a consumer would purchase products or services on any indirect channel.
One compelling example: Icelandair and El Al have begun experimenting with displaying sample prices of their competitors on their own websites, to show how competitive their direct prices are and to prevent customers from “clicking” away to competitors and other price aggregators. With the explosive growth of options in the online distribution environment, there are two primary factors that companies should concentrate on: price integrity and price parity.
Price integrity is the concept of a customer being confident that they are purchasing a product of a certain value. While a customer may be willing to pay more or less, depending on the time and place of their purchase, there is a psychological range that they base their expectations on.
Price parity is the practice of maintaining a consistent rate for the same product across all distribution channels, both owned and partner channels. Nothing destroys trust more than finding a cheaper price on another website, or, worse, discovering that a company’s website is cheaper than its stores.
For industries that rely on both direct and distribution channels, there is a “co-opetition” relationship in which it is not uncommon for a firm to compete with its distribution partners for sales. On the one hand, if a consumer wasn’t going to come to AlaskaAirlines.com anyway, Alaska Airlines would be more than happy to fill an empty seat with a referral from KAYAK or a booking through Expedia. But if there was a chance that customer could have booked directly with Alaska Airlines, the airline would have fought hard to win that booking.
Hospitality and travel companies are in the middle of an ongoing competition with their distribution partners (OTAs and metasearch engines, or METAs) for the future of guest bookings. According to Hitwise, direct hotel bookings made up only ~30.56% of online booking market share in 2017, while OTAs continued to eat away at market share, growing 60 basis points from 2016 to 2017.
While OTAs and METAs have become an invaluable component of hospitality marketing and distribution campaigns, there are contractual violations that stress the trust necessary for healthy “co-opetition.” Some OTAs and METAs may display available prices that undercut contracted prices. Often these discounted prices are provided to the OTAs and METAs by wholesalers in violation of price-parity contracts, but the complex web of distribution relationships and the flash speed of online pricing engines make it difficult for hospitality companies to hold their distribution partners accountable.
Despite the challenges, companies must keep a vigilant eye on how inventory and experiences are displayed by distribution partners to ensure that consumers inclined to purchase on direct channels are not actively dissuaded from doing so. A successful distribution strategy must be aggressive, and it can be implemented and maintained quickly by following these six critical steps:
Metric tracking allows you to better understand whether your chosen distribution partners are worth their distribution costs. For example, “NRevPAR” (Net Revenue per Available Room) is the hospitality industry’s standard for calculating the revenue generated per available room, net of any discounts or commissions paid to intermediaries. By tracking NRevPAR, hoteliers can evaluate their current distribution partnerships across channels to ensure that distribution costs are harmonized with their expectations for each partner. A significant drop in a key metric is a telltale sign that it is time either to renegotiate with your current distributors or to start looking for replacements.
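In code, the calculation looks roughly like this (a sketch with invented figures, following the definition above):

```python
# NRevPAR sketch: room revenue net of discounts and intermediary
# commissions, divided by available room-nights. Figures are illustrative.
def nrevpar(room_revenue: float, discounts: float, commissions: float,
            rooms: int, nights: int) -> float:
    net_revenue = room_revenue - discounts - commissions
    return net_revenue / (rooms * nights)

# A 120-room hotel over a 30-night month:
print(round(nrevpar(450_000, 20_000, 55_000, rooms=120, nights=30), 2))  # 104.17
```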
It is imperative to monitor how and where your inventory is displayed across your distribution partners’ platforms. You want the ability to confirm that your partners are playing by the rules, and to ensure that your offering is not appearing unofficially on other public channels with rogue prices that undercut you and your partners. If a partner determines that your inventory is floating around the public space at prices that undercut their contracted prices, it won’t be long before your inventory is pushed to the bottom of their display pages, if they don’t remove you altogether for being out of parity.
Andrew Sheivachman of Skift pointed out that global digital travel sales were projected to reach $189.6 billion in 2017, with 40 percent of that attributed to purchases made on mobile (a 4% gain over 2016). With such a rapid rise in the adoption of mobile booking and shopping, you cannot let your mobile channel development lag. You must work proactively with your distribution partners to refresh user interfaces and experiences to optimize their mobile shopping experience. Rich content, descriptions, and high-quality photography also allow you to differentiate your product when it is sitting on a digital shelf with comparable products.
Dynamic yield pricing allows you to base your pricing on demand and other variables. Dynamic pricing is employed across industries to match supply and demand and to move expiring inventory: preventing waste in grocery stores, ensuring there are enough drivers on the road for ride-sharing platforms, or driving loyalty by generating customer-specific fares for airlines. Within the hospitality industry, dynamic pricing allows inventory to be priced appropriately in response to the timing of a booking, local events, or any occasion that could cause fluctuating demand. Just make sure that your dynamic price is not undercut by a distribution partner, or cached by that partner and out of date when prices go back up.
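Here is a deliberately simplified sketch of the idea; real revenue-management systems use far richer demand forecasts, and every factor below is invented for illustration:

```python
# Simplified dynamic pricing rule: scale a base rate by forecast
# occupancy and booking urgency. All factors are illustrative.
def dynamic_rate(base_rate: float, forecast_occupancy: float, days_out: int) -> float:
    demand_factor = 0.8 + 0.6 * forecast_occupancy  # 0.8x when empty, 1.4x when full
    urgency_factor = 1.1 if days_out <= 3 else 1.0  # modest last-minute premium
    return round(base_rate * demand_factor * urgency_factor, 2)

# High forecast occupancy two days out pushes a $200 base rate to $294.80.
print(dynamic_rate(200.0, forecast_occupancy=0.9, days_out=2))  # 294.8
```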
While the channels you directly manage (a website, a social presence, in-store) may not be the first point of interaction between you and a prospective consumer, you can still convert customers to complete their purchases through your owned direct channels as you get to know them and earn their attention. In 2015, over 34% of booking journeys initiated on OTAs were ultimately completed through supplier websites. Bolstering your available offers for customers through loyalty programs, subscription email campaigns, and social media can help drive customers from your distribution partners to your direct-booking channels.
Legacy backend systems may cost you millions of dollars in system outages and will almost certainly inhibit your ability to proactively adjust your distribution network. These legacy platforms cause transactional friction in the process by which a supplier’s prices are sent out to the systems of distribution partners, which in turn forces revenue managers to spend hours a day manually validating that prices and inventory are migrated accurately to various distribution channels and partners. Rate monitoring platforms are now available that allow revenue managers to monitor the behavior of their distribution partners using automation. These platforms also increase transparency into your distribution partners’ networks; they can be used not only to monitor the integrity and parity of pricing for your own inventory, but also to quickly determine whether you are competitively priced across the globe. Returning to our earlier example of Icelandair and El Al, technology can also automatically alert revenue managers when their rates are being advertised by competitors (either accurately or inaccurately).
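The core check such platforms automate is simple to state: flag any channel advertising below your contracted direct rate. A hedged sketch follows, with an invented tolerance and invented data:

```python
# Sketch of an automated rate-parity check: flag channels advertising
# below the direct rate beyond a small tolerance. Data is illustrative.
def parity_violations(direct_rate: float, channel_rates: dict,
                      tolerance: float = 0.01) -> dict:
    floor = direct_rate * (1 - tolerance)
    return {channel: rate for channel, rate in channel_rates.items() if rate < floor}

observed = {"OTA-A": 149.00, "OTA-B": 139.00, "META-C": 150.00}
print(parity_violations(direct_rate=149.00, channel_rates=observed))
# {'OTA-B': 139.0} -> out of parity; follow up with that partner
```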
While your distribution partners can help you reach new customers and markets, you must ensure that their role as an intermediary does not equate to them “owning” the customer. Your distribution partners have an incentive to provide you revenue, but they are unlikely to share the customer information that can be used to convert a customer into a loyal patron (e.g., personal email addresses, mailing addresses). Providing an amazing customer experience is the best way to overcome a consumer’s bias toward deciding on price. If a company can pair a differentiated customer experience with an enticing loyalty program that rewards purchasing goods or services through direct channels, there is still hope of maintaining a balanced distribution strategy.
In early 2015, when The Manitowoc Company decided to split into two companies, the executive leadership called on the CIO, Subash Anbu, to lead the charge.
The transformation would be the most consequential in its 113-year history. Leaders from the company, then a diversified manufacturer of cranes and foodservice equipment, decided that the whole of the diversified organization was no longer greater than the sum of its parts. It would split into two publicly traded companies: Manitowoc (MTW), a crane-manufacturing business, and Welbilt (WBT), which manufactures foodservice equipment.
The CIO was a natural choice to lead a change of this magnitude because his role allowed him to understand the interconnectedness of the company’s various business capabilities, which processes and technology were already centralized or decentralized, and where there may be opportunities for greater synergy in the future-state companies.
Subject matter expertise, however, would not have been enough to qualify a candidate; the leader had to be charismatic, and Subash was widely recognized for his servant-leadership mentality. That would prove essential to removing critical blockers across the organization.
It was also important that the CIO had long-standing credibility with the Board of Directors, who were the ultimate decision makers in this endeavor.
Subash embraced the daunting challenge, saying, “While change brings uncertainty, it also brings opportunities. Change is my friend, as it is the only constant.”
In some ways, splitting a company into two may be harder than a merger. When merging, you have the luxury of more time to operate independently and merge strategically.
When Western Digital acquired HGST in 2015 and SanDisk in 2016, CIO Steve Phillpott decided to move all three companies to a new enterprise resource planning (ERP) system rather than maintain multiple systems or force everyone onto the incumbent Western Digital solution. When splitting a company, there is greater urgency to define the target-state business model and technology landscape and execute accordingly.
The split introduced major consequences for Manitowoc: duplication of every business function, completed within a fixed four-quarter schedule, while still executing the 2015 business plans. All business capabilities would be affected, especially Finance, Tax, Treasury, Investor Relations, Legal, Human Resources, and, of course, Information Technology.
While the Manitowoc Company had experience with divesting its marine segment (it started as a shipbuilding company in 1902), the scope and scale of the split was unprecedented for the company.
Breaking apart something that has been functioning as a whole is an inherently risk-laden proposition. Subash and his team recognized that to mitigate that risk, they would need to be both deliberate in planning and agile in execution, breaking big risks down into smaller ones and prioritizing speed over perfection.
As Subash led the split of the company into two, he encountered significant risks along the way.
When splitting a public company, the deadlines and outcome are clear. How Subash and the team would execute the split of the company, however, remained largely undefined.
The enormity of the task could have created paralysis, but the team quickly began working backwards: getting the right people on the same page; identifying the big-rock milestones; identifying the risks; sketching out a plan to reach those milestones; breaking the plan into smaller rocks to mitigate risk; and keeping everyone informed as the plan unfolded in greater detail.
In the process, Subash learned five critical lessons that all executives should heed before splitting a company:
Splitting a company requires cross-functional collaboration and visibility at both the strategic planning and execution levels. Start by creating a Separation Management Office, consisting of senior functional leaders who will oversee the end-to-end split across HR & Organizational Design, Shared Services & Physical Location Structuring, IT, Financial Reporting, Treasury & Debt Financing, Tax & Legal Entity Restructuring, and Legal & Contracts. The Separation Management Office should report to a Steering Committee consisting of the Board of Directors, CEO, CFO, and other C-level leaders. When difficult questions demand a decision to meet deadlines, the Steering Committee should serve as the ultimate escalation point and tie-breaker, even if it means a compromise.
A split will require dedicated, skilled resources who understand the cross-functional complexities involved. The project team will need people who grasp the interconnectedness of technology architecture, data, and processes, balanced with teams that can execute many detailed tasks. When forming the team, orient everyone around the common objective to create unity; departmental silos will not succeed. Variable capacity will almost certainly be necessary for major activities, and you may be able to stabilize your efforts by turning to trusted systems integrators or consulting partners to help guide the transition.
Agile evangelists often frown upon working under the heat of a mandated date and scope, but a public split forces such constraints. Treat the constraints as your friends: work backward to identify your critical operational and transactional deadlines, and ensure the cross-functional team builds in the necessary lead time, especially when financial regulations or audits are involved. Dedicate a budget, but be prepared to spend more than you anticipate; there will always be surprises to which teams must adapt. As part of project planning, create a risk management framework with your highest-priority risks, impacts, and decision makers clearly outlined. When time is of the essence, contingency plans need to be in place so the team can adapt quickly.
Any time a working system is disassembled, there will unquestionably be problems. The key is not to wait for a big bang at the end to see whether what you have done has worked. Spending nine months planning and three months executing this split would have introduced new risks. Instead, Subash and his team built their plan and then iteratively built, tested, and improved in an agile delivery process. The team identified isolated mistakes early and often, allowing them to proceed to the following phases with greater confidence, not with bated breath.
In a split, every employee, contractor, supplier, and customer will be impacted. Create a communication plan for the different personas: the Steering Committee, operational leaders, functional groups, customers, partners and suppliers, and individual contributors. The Manitowoc Company had to communicate on everything from where people would sit to who would be named as new organizational leaders. In a communication void, fear and pessimism can creep in. To prevent this, the Separation Management Office launched “Subash’s Scoop,” a monthly newsletter on the separation’s progress. It brought helpful insight, with a flair of personality, to keep the organization aligned on its common goal.
The Manitowoc Company successfully split into two public companies—Manitowoc (MTW) and Welbilt (WBT)—in March 2016, hitting its publicly-declared target. In fact, many of the critical IT operational milestones were completed in January, well in advance of the go-live date.
Over the last two years, the stock prices of both companies have increased, validating leadership’s assessment that the whole was no longer greater than the sum of its parts.
If you’re not thinking like a software company, you’re already behind.
Software companies focus on codifying and then scaling everything they do. To do that, business subject-matter expertise and technical expertise must become one and the same, converging once-siloed disciplines.
In a recent interview with Metis Strategy, Cathy Bessant, Bank of America’s Chief Operations & Technology Officer, explained that convergence must apply to all companies, saying, “Technology has completely changed the notion of business integration. You cannot say the business is technology or technology enables the business—they are one and the same.”
Your company will not be able to compete at scale and speed if delivery teams have not gone beyond typical IT-business handoffs to true convergence. This convergence extends beyond obvious points of technology dependence, such as an eCommerce website or internal productivity tools; it is happening everywhere.
Still, many companies struggle with where to start on this transformation. Business function leaders often communicate high-level goals that are difficult for technology leaders to translate into concrete actions, and technology leaders often approach a problem by addressing the technology first, and the business outcome second. They end up six months into a “digital transformation” effort with a disparate collection of projects, but no cohesive sense of prioritization or interdependence to create a more tech-driven future. The solution to bridge this gap between strategy and execution is for IT leaders to be better collaborators and communicators, and to understand the business and customer needs as well as their business partners do. But that is easier said than done. Start by rooting your IT plans in a well-defined business capabilities map, and then transform the way that IT goes to market by driving cross-functional operating model convergence in the long term.
Business capabilities are an integrated set of processes, technologies, and deep expertise, manifested as a functional capacity to capture or deliver value to the organization. They outline “what” a business does, as opposed to “how” a business does it. They are the definition of your organizational skills, best represented in a landscape map that allows you to evaluate the full spectrum of capabilities against one another. Business capability maps are not just about technology; they are designed to improve an organization’s holistic ability to deliver a business outcome, and in many cases it is not the technology that is the constraint, but rather a process, skill, or policy issue. Consider the process for onboarding a new employee. Strong onboarding capabilities make the experience seamless for the new hire: from the moment they step into the office, they might receive a badge and workspace from facilities, a configured laptop and system access from IT, and benefits enrollment from HR.
There are various people, process and technology components behind each of the steps in the employee’s journey. However, the employee does not—and should not—feel the transition between, in this case, HR, facilities, and IT. If the desired outcome for this capability is to provide a seamless employee experience where the employee is productive in less than three days, the different functional areas should integrate their strategic plans to meet that objective. This is often challenging in an organization that thinks and acts in functional silos, but a capability-driven approach will bridge that gap.
Many organizations have never formally documented their business architecture and therefore struggle to understand business priorities. To bridge that gap, IT will generally dispatch enterprise architects or business relationship managers to form bonds with functional leaders, understand their current processes, and identify the pain points. From those conversations, they map the business capabilities. This exercise puts technology leaders and their business partners on common ground, where both can add value to the conversation: one around business process improvement, the other around technology enablement. We generally suggest no more than four levels of cascading capabilities, with the fourth level most closely resembling the associated process. Keep in mind that business capability maps are not organizational charts; by definition, they are anchored by the business outcome, with many functional areas converging to realize that outcome.
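As a sketch of the cascading structure described above, a capability map can be represented as a simple nested hierarchy; the capability names below are invented for illustration (echoing the onboarding example) and are not a canonical taxonomy.

```python
# A sketch of a four-level capability map as nested data. Level 1 is
# anchored by a business outcome; level 4 most resembles the process.
capability_map = {
    "Attract and Retain Talent": {                  # Level 1
        "Onboard Employees": {                      # Level 2
            "Provision Workspace and Equipment": {  # Level 3
                "Issue Badge and Assign Desk": {},  # Level 4
                "Provision Laptop and System Access": {},
            },
            "Enroll New Hires in Benefits": {},
        },
    },
}

def print_map(node: dict, depth: int = 0) -> None:
    """Print the hierarchy with indentation to show the cascade."""
    for name, children in node.items():
        print("  " * depth + name)
        print_map(children, depth + 1)

print_map(capability_map)
```

Note that nothing in the structure names a department: badge issuance, laptop provisioning, and benefits enrollment converge under one outcome even though facilities, IT, and HR each own a piece.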
Once you define your capabilities, prioritize them to provide strategic direction to the organization. Not all capabilities are equally important to your ability to compete, so ensure you are not boiling the ocean. While there is more nuance in practice, for simplicity, capabilities fall on a scale from achieving competitive parity to sustaining competitive advantage, and it is important to evaluate which are most important to your business’s success. This segmentation will not change dramatically year to year unless there are major shifts in the competitive forces at play.
Competitive advantage: capabilities that, currently or in the future, are critical to creating or sustaining your market position in a fundamentally unique way. Customers will hire you because of these capabilities, your employees will love you for them, and your investors will celebrate the cost effectiveness they bring. For example, you may be able to segment customers and tailor offerings in a way that economizes your marketing spend far better than a competitor can. Or, if your competitor competes on price, you may compete on outstanding customer service and therefore prioritize your capability for managing customer cases. To be clear, further segmentation is needed within the competitive-advantage bucket; remember: not everything is created equal.
Competitive parity: capabilities that maintain customer expectations and operational needs. You don’t lose (but also probably don’t gain) fans because of these capabilities. For example, your “process payroll” capability probably needs to stay at current levels, but it does not need to be the target of heavy investment and prioritization. That doesn’t mean you never invest in these areas. Uber, for instance, uses Stripe to pay drivers instantly, giving them cash in hand each day, but Lyft offers the same capability. Uber needs to keep investing here to stay at parity in case, say, Lyft started predicting drivers’ earnings and offering advances. Still, if the offerings are similar, they may not be the deciding factor in whether a driver chooses Uber or Lyft.
Once you have segmented and prioritized your capabilities, evaluate the current-state maturity of each, as well as the target future state. Evaluating maturity levels is as much art as science; defining them cannot be done in isolation, and the conversation around why something is or is not mature is often as valuable as the score itself. We recommend undertaking this exercise with cross-functional groups that understand the capability from different perspectives. We often evaluate capability maturity as a function of process definition, degree of automation, organizational reach, and measurement of the business outcome. This evaluation will influence the prioritization of near-term investments and will not always coincide 1:1 with the segmentation above; for example, if you have low maturity in a “parity” capability, you would still want to invest to bring it up to par.
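As an illustration, here is a minimal sketch of a maturity scorecard built on the four dimensions named above; the 1-to-5 scale and equal weighting are assumptions, since in practice the rubric is calibrated by the cross-functional group itself.

```python
# A sketch of a capability maturity scorecard. Scores (1-5) per
# dimension are assumed values for the onboarding example.
DIMENSIONS = (
    "process_definition",
    "degree_of_automation",
    "organizational_reach",
    "outcome_measurement",
)

def maturity_score(scores: dict[str, int]) -> float:
    """Average a capability's scores across the four dimensions."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

onboard_employees = {
    "process_definition": 3,
    "degree_of_automation": 2,
    "organizational_reach": 4,
    "outcome_measurement": 1,  # the outcome is not yet measured well
}
current = maturity_score(onboard_employees)
print(f"Onboard Employees: current {current:.1f}, target 4.0")
```

The number matters less than the conversation it forces: a gap between a 2.5 current state and a 4.0 target on a prioritized capability is what moves an investment up the list.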
Enhancing a capability may require investments in people, processes, or technology. Therefore, a converged team of business function experts and technology leaders should jointly identify improvement activities. IT should lead in aligning the technology services (if your organization uses an ITSM approach) and the technical architecture needed to enable these capabilities, but always in the context of how the business process may change. Once you have aligned your technical architecture, IT can identify gaps and redundancies (see the sketch at the end of this article). For example, if multiple applications support your “expense management” capability, you might undertake a cost-benefit analysis of maintaining all of them. Conversely, you might discover a prioritized business capability, such as sales forecasting, with no technology architecture supporting it, and identify it as an area where a new technology service is needed to provide data analytics to the sales operations team.

Once developed, capability maps can bridge the gap between strategy and execution by driving organizational alignment around where investments are needed. We recently helped a growing technology company through this journey. The IT organization had been viewed as an order-taker and often struggled to get budget consideration for more strategic projects that would add value to the business, but the CIO was intent on evolving the organization into a more strategic partner. Knowing that the convergence of business process improvement and technology enablement was key, the team worked closely with business function leaders to develop prioritized capability maps across the organization. They then leveraged the maps to identify the areas in greatest need of investment, forcing trade-off decisions that resulted in a meaningful prioritization of focus areas and galvanized the team. The converged business and technology teams, oriented around shared business outcomes, had threaded the needle from strategy to execution.

In the end, one of the business partners said, “We have tried to do this many times over the past six years, and this is by far the best it has ever gone.” That is how IT goes to market differently, and wins.
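To close with a concrete illustration of the gap-and-redundancy analysis described above, here is a minimal sketch; the application and capability names are invented and do not depict any client’s actual portfolio.

```python
# A sketch of how mapping applications to capabilities surfaces
# redundancies (several apps behind one capability) and gaps
# (prioritized capabilities with no supporting system).
app_to_capability = {
    "ExpensePro": "Expense Management",
    "LegacyExpense": "Expense Management",  # redundancy candidate
    "CRM Suite": "Manage Customer Cases",
}
prioritized_capabilities = {
    "Expense Management",
    "Manage Customer Cases",
    "Sales Forecasting",  # prioritized, but nothing supports it yet
}

supported: dict[str, list[str]] = {}
for app, capability in app_to_capability.items():
    supported.setdefault(capability, []).append(app)

redundancies = {cap: apps for cap, apps in supported.items() if len(apps) > 1}
gaps = prioritized_capabilities - supported.keys()

print("Cost-benefit analysis candidates:", redundancies)
print("New technology services needed:", gaps)
```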