This article originally appeared on CIO.com. Chris Boyd co-authored the piece.
As technology departments shift from traditional project management frameworks to treating IT as a product, they are triggering a broader rethink of how technology initiatives are funded.
Under the existing “plan, build, run” model, a business unit starts by sending project requirements to IT. The IT team then estimates the project costs, works with the business to agree on a budget, and gets to work.
This setup has several flaws that hamper agility and cause headaches for all involved. Cost estimates often occur before the scope of the project is truly evaluated and understood, and any variations in the plan are subject to an arduous change control process. What’s more, funding for these projects usually is locked in for the fiscal year, regardless of shifting enterprise priorities or changing market dynamics.
To achieve the benefits of a product-centric operating model, the funding model must shift as well. Rather than funding a project for a specific amount of time based on estimated requirements, teams instead are funded on an annual basis (also known as “perpetual funding”). This provides IT product teams with stable funding that can be reallocated as the needs of the business change. It also allows teams to spend time reducing technical debt or improving internal processes as they see fit, improving productivity and quality in the long run.
“We have to adapt with governance, with spending models, with prioritization,” Intuit CIO Atticus Tysen said during a 2019 panel discussion. “The days of fixing the budget at the beginning of the year and then diligently forging ahead and delivering it with business cases are over. That’s very out of date.”
Business unit leaders may be skeptical at first glance: why pay upfront for more services than we know we need right now? A closer look reveals that this model often delivers more value to the business per dollar spent. For example:
Shifting away from old ways and adapting a new funding model can seem like a daunting task, but you can get started by taking the following first steps:
First, establish the baseline against which you will measure the funding shift’s effectiveness. A technology leader must consider all the dimensions of service that will improve when making the shift. Two areas of improvement that have high business impact are service quality and price. To establish the baseline for service quality, it is important to measure things like cycle time, defects, net promoter score, and critical business metrics that are heavily influenced by IT solutions.
The price baseline is a little more difficult to establish. The most straightforward way we have found to do this is to look at the projects completed in the last fiscal year and tally the resources required to complete them. Start with a breakdown of team members’ total compensation (salary plus benefits), add overhead (cost of hardware/software per employee, licenses, etc.), and then communicate that in terms of business value delivered. For example, “project A cost $1.2M using 6 FTE and improved sales associates’ productivity by 10%.” When phrased this way, your audience will have a clear picture of what was delivered and how much it cost. This clear baseline of cost per business outcome delivered will serve as a helpful comparison when you shift to perpetual funding and need to demonstrate the impact.
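To make the arithmetic concrete, here is a minimal sketch of that baseline calculation in Python. Only the method (a fully loaded cost per business outcome) comes from the text; the compensation and overhead figures are hypothetical stand-ins chosen to reproduce the $1.2M example above.

```python
# Baseline cost per business outcome: a sketch with hypothetical figures.
team_size_fte = 6
total_comp_per_fte = 165_000   # salary plus benefits (assumed)
overhead_per_fte = 35_000      # hardware, software, licenses, etc. (assumed)

project_cost = team_size_fte * (total_comp_per_fte + overhead_per_fte)
productivity_gain = 0.10       # measured business outcome

print(f"Project A cost ${project_cost / 1e6:.1f}M using {team_size_fte} FTE "
      f"and improved sales associates' productivity by {productivity_gain:.0%}")
```

Expressed this way, the same calculation can be rerun each funding cycle to keep the cost-per-outcome comparison consistent.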
The shift to a new funding model will be highly visible to all business leaders. To create the greatest chance of success, focus on selecting the right teams to trial the shift. The best candidates for early adoption are high-performing teams that know their roles in the product operating model, have strong credibility with business unit stakeholders, and experience continuous demand.
In our work with large organizations piloting this shift, e-commerce teams often fit the mold because they have a clear business stakeholder and have developed the skills and relationships needed to succeed in a product-based model. Customer success teams with direct influence on the growth and longevity of recurring revenue streams are also strong candidates as their solutions (such as customer portals and knowledge bases) directly influence the degree to which a customer adopts, expands, and renews a subscription product.
Estimation in the product-based funding model is different than in the project model. Under the new model, teams are funded annually (or another agreed-upon funding cycle) by business units. As funding shifts to an annual basis, so should cost estimation. Rather than scoping the price of a project and then building a temporary team to execute it (and then disbanding after execution), leaders should determine the size and price of the team that will be needed to support anticipated demand for the year, and then direct that team to initiate an ongoing dialogue with the business to continuously prioritize targeted business outcomes.
When completing a team-based cost estimation, it is important to include the same cost elements (salary, benefits, hardware, licenses, etc.) that were used to establish your baseline so that you are comparing apples to apples when demonstrating the ROI of product-based funding. Where you will see a difference in the team-based model is the resource capacity needed to deliver on demand. In a product model, a cross-functional team is perpetually dedicated to a business domain, and there is often little to no ramp-up time needed to acquire business and technical knowledge.
Since the teams have been perpetually dedicated to the domain, they are encouraged to take a longitudinal view of the technology estate and are able to quickly identify and make use of reusable components such as APIs and microservices, significantly improving time to market. For these reasons, among others, teams in the product-based operating model with perpetual funding can achieve more business value for less cost.
Pilot teams should work closely with the BU leadership providing the funding. Stakeholders should work together to generate a list of quantitative and qualitative business outcomes for the year (or other funding cycle) that also satisfy any requirements of existing funding processes operating on a “project-by-project” basis.
If you don’t already have a great relationship with finance, start working on it now. Your partnership with finance at the corporate and BU level will be critical to executing your pilot and paving the way to wider enterprise adoption of team-based funding models. Ideally, leaders should engage with finance before, during, and after the team-based funding pilot so that everyone is in lockstep throughout. This alignment can help bolster adoption in other areas of the enterprise.
Each finance department has unique processes, cultures, and relationships with IT, so while you will need to tailor your approach, you should broach the following topics:
You will need to achieve success in the pilot to bolster adoption in other areas of the business. Your success needs to be communicated in terms that resonate with the business. As your pilot comes to an end, gather your baseline data and match it up with the results of your pilot. Put together a “roadshow deck” to show a side-by-side comparison of costs, resources, and business outcomes (Business KPIs, quality metrics, cycle times, NPS, etc.) before and after the shift to team-based funding.
Depending on your organization, it may be prudent to include other observations such as the number of change control meetings required under each funding model, indicators of team morale, and other qualitative benefits such as flexibility. Have conversations with other areas of the business that may benefit from team-based funding (start off with 1-on-1 meetings) and offer to bring in your partners from finance and the product teams as the discussion evolves. The most important part of your story is that the team-based funding model delivers more business impact at a lower cost than the old model.
Establish light and flexible governance mechanisms to monitor the performance of teams operating in the team-based model. The purpose of these mechanisms is to validate that the increased level of autonomy is leading to high-priority business outcomes, not to review progress on design specs or other paper-based milestones. A $40B global manufacturing client adopting the team-based funding model established quarterly portfolio reviews with BU leadership and the CIO. BU leadership reviews the teams’ results and the planned roadmap for the subsequent quarter, and is then given the opportunity to reallocate investment based on changing business needs or to recommend the teams proceed as planned.
It is important to communicate that this process requires constant buy-in from business units. While funds will be allocated annually, demand will need to be analyzed and projected on at least a quarterly basis, and funds should be reallocated accordingly. In cases where investments need to be altered in the middle of a fiscal year, it is important to note that the unit of growth in this model is a new cross-functional team focused on a targeted set of business outcomes. The idea is to create several high-performing, longstanding, cross-functional teams that have the resources needed to achieve targeted business outcomes, rather than throw additional contracted developers at teams as new scope is introduced.
Making the shift from project-based funding to product team-based funding is a major cultural and operational change that requires patience and a willingness to iterate over time. When the shift is executed successfully, CIOs gain closer relationships with their business partners, as well as less expensive, more efficient ways to deliver higher-quality products.
This article was co-written with Chris Davis.
Summary: People, not technology, are the true center of any digital transformation initiative. The half-life of skills is rapidly shortening, necessitating a mindset that embraces change, an adaptable skill set, and a workforce plan that ensures an organization has the talent necessary to operate at speed and scale through hiring, automating, upskilling, and sourcing.
Putting talent at the center of digital transformation
The biggest challenge of any digital transformation is not revamping technology, but rather shifting the company’s mindset to embrace new ways of working. Just as you can lead a horse to water but cannot make it drink, little can be achieved by making the latest tools available to an organization that is anchored to traditional processes.
Transformation efforts should have people at their core, and leaders must be intentional about inspiring, listening, and investing in change management to bring everyone along on the journey. We find that organizations typically under-communicate by a factor of 5X, don’t clearly articulate a pathway for current employees to be part of the future, and take an imbalanced approach to closing skill gaps.
With that in mind, there are three steps to developing an effective talent strategy for transformation: start with why and communicate relentlessly; assess skills, knowledge, and traits and identify gaps against future-state needs; and define a balanced workforce plan around hiring, automating, upskilling, and sourcing.
While these steps are not an exhaustive list of activities to drive a transformation, executives who do not prioritize the people component of change management will inevitably fail.
Start with why and communicate relentlessly
People do not change their beliefs, values, and attitudes without good reason. They are especially unlikely to do so when the norms, practices, and measures of success are inherited from a company legacy that has historically been successful. Success forgives a lot of sins, and even when there is a collective recognition of the need to change, it feels safer to endure the predictable way of working than to venture into the unknown. This is why author Simon Sinek, whose TED talk has amassed 48 million views, encourages leaders to “start with why.” In practice, that means explaining why the team is undergoing the change, what the expected impact and outcome will be, and how the firm and its people will benefit as a result of the transformation.
Communication must be personal. We regularly find that a senior leadership team will spend roughly 50 hours agreeing on a transformation plan, while an individual contributor receives less than ten hours of cumulative explanation. As those individual contributors are the people most directly affected by the change, this ratio is dramatically disproportionate. By the time the message reaches individual contributors, the rationale for change is unclear, which can prompt fear and resistance. Develop a communication plan that segments personas by seniority, functional domain, and project/product team. Establish a communication campaign cadence per persona that specifies varying levels of detail tailored to the channel of communication (group meetings, training workshops, webcasts, 1:1s, etc.).
To catalyze the change, focus on creating a compelling vision for the future and explain how the leadership team will work with individuals to ensure a smooth transition. Communication is bi-directional, so ensure there is an active feedback loop. Workshop role-specific examples of new work patterns. Even if people raise concerns, it is more valuable to identify active resistance and change “detractors” early on than to succumb to passive resistance that erodes momentum. However, to create an environment of trust, it is critical not to shame anyone who voices a concern into submission. Be judicious about delineating whether a voiced concern is someone being obstructionist or a sign that the leadership team is not communicating effectively.
In addition to the qualitative feedback loop, it is important to define and track outcome-oriented metrics that drive desired behaviors. Monthly dashboards at different levels of the organization can help transformation teams promote a successful, sustainable digital transformation. Done well, they can highlight areas where the right talent and skills are missing, monitor the achievement of key transformation change management milestones, and gauge the sentiment of the team. The metrics should serve as a compass to enable leaders to make data-driven decisions on how to steer the transformation when waters get choppy.
Assess your skills, knowledge, and traits and identify gaps compared to future state needs
Digital transformation will require people across your company to learn new skills and adapt to new ways of working. These skills typically fall into one of three buckets: consultative and technical skills, product and project management capabilities, and self-development and adaptability traits.
First, functional leaders should partner with HR to conduct a skills assessment and identify gaps between existing and needed skills. When speaking with employees, it is critical to communicate that this is not a performance evaluation. Otherwise, you may run the risk of employees overselling their abilities and skewing the results of the assessment. Instead, think of this as a way to identify and prioritize where the organization will dedicate its training and development resources. Explain how the newly acquired skills will advance one’s career and personal brand so that there is motivation to be vulnerable rather than self-aggrandizing.
Identify the people whose work creates the benchmark for the skills, knowledge, and traits your transformation needs, and deputize those high-performing and high-potential individuals as change agents for new skill adoption. Some practical skills to measure include consultative and technical skills, product and project management, and self-development and adaptability traits.
Next, develop a plan to close existing skills gaps and align it with the firm’s overall goals. Create training plans, with clear goals by level and function, and turn this into a digital transformation workstream like those used to manage other process or organizational changes. Set realistic timelines for skills adoption so employees are not paralyzed by the enormity of the change. One large financial services company set a bold vision to move its entire infrastructure to the cloud but was clear with employees that it would do so over five years and offered an internal “university” to certify people in new technologies like AWS S3. As a leader, you cannot just tell people to improve. You need to show them how to improve and invest in their development.
Define a balanced workforce plan around hiring, automating, upskilling, and sourcing
As companies define and identify skill gaps, they also need to develop a staffing strategy that will help them achieve their transformation goals through the HAUS model: hire, automate, upskill, and source.
The HAUS model allows leaders to decide how to fulfill their talent needs across core, value-added and transactional activities. For example, a company may decide to hire its head of DevOps, automate its software delivery value chain through CI/CD, upskill its current developers to learn to use the new tools, and in the interim source talent that can “teach to fish” while implementing the first wave of the new approach.
Another example can be drawn from the first wave of mobile app development. In 2010, iOS development was a fairly rare skill, so any major non-tech company developing its own mobile app was likely hiring an agency. Fast forward a decade, and you’ll find that most companies with major mobile-powered commercial operations will have in-sourced that skill set to have more control over their own destiny. The next wave of skills following this pattern is artificial intelligence and machine learning; most companies are outsourcing this skill set now but will likely have more internal talent in 2030. In this way, the HAUS model becomes a living, adaptable framework, instead of a one-time solution.
People and behaviors lead digital processes and tools, not the other way around. Putting people at the heart of the transformation while tracking results and behaviors is key to ensuring a successful and sustainable talent strategy. Your talent strategy must be managed as an equally weighted workstream within the overall transformation portfolio in order to ensure that the company’s most important assets are not overlooked. Finally, be humble. No transformation is perfectly planned, so be prepared to communicate, listen, and transform yourself first, if you want others to follow you.
On January 13, 2020, Victor Koelsch was named Chief Digital Officer of Polaris Inc., a Minneapolis, Minnesota-based powersports company. Founded in 1954, Polaris is a revenue leader in powersports, with $6.1 billion in revenue as of 2018. The CDO role was newly created for Koelsch. He will be responsible for the company’s digital strategy and for accelerating Polaris’ development and integration of digital technologies into its products, services, and experiences, as well as leading the creation of new business solutions and digital offerings. Koelsch will report jointly to Polaris Chairman and Chief Executive Officer Scott Wine and Polaris Executive Vice President and Chief Financial Officer Mike Speetzen.
Wine noted, “We have been building industry-leading digital capabilities for several years and are excited for Vic to take our digital efforts to the next level and deliver more value to customers and shareholders. He has made a career of developing and implementing market-leading technology solutions, and we are excited for him to leverage that acumen to augment our existing initiatives and spearhead Polaris’ digital future. We continuously evolve how consumers experience our brands, and Vic’s leadership will significantly enhance and accelerate that process.”
When asked about the opportunity before him, Koelsch said, “I look forward to joining Polaris and leveraging my background in bringing innovative, digitally enabled business models and solutions to market that will drive significant transformation and impact across our business and deliver breakthrough value and new experiences for our customers.”
At NetApp, our mission is to help our customers change the world with data. As we kick off a new decade, it’s clear that data, and the technologies to manage it, are at an inflection point: AI is seeing real use cases; the advent of 5G is poised to revolutionize edge computing; and hybrid multicloud is giving organizations more control and flexibility over their data than ever before.
Kim Stevenson has been named General Manager of NetApp’s Foundational Data Services Business Unit, reporting to Brad Anderson, the Executive Vice President and General Manager of the NetApp Cloud Infrastructure and Storage, Systems, and Software business units.
This represents a continued climb for Stevenson, who was once the Chief Information Officer of Intel. While at Intel, she rose to become the Chief Operating Officer of the Client, IoT and System Architecture Group. Most recently, she was the Senior Vice President and General Manager of Data Center Products and Solutions at Lenovo.
Former Honeywell Aerospace Chief Digital and Information Officer Sathish Muthukrishnan has been named Chief Information, Data and Digital Officer of Ally Financial. Muthukrishnan will lead Ally’s technology, data and digital transformation teams, with a focus on advanced technical capabilities, including cybersecurity and infrastructure, as well as accelerating Ally’s growth as a leading digital financial services provider.
Ally’s Chief Executive Officer Jeffrey J. Brown said about Muthukrishnan, “Ally’s success is the direct result of a relentless focus on offering consumers the best digital platforms developed by some of the brightest and most talented resources in the financial services industry. Sathish’s track record for delivering industry-leading digital solutions within the financial services sector as well as other industries makes him ideally suited for this highly important and critical role as we look to enhance our technology vision to further cement our leadership position in the sector. I am excited to welcome him to the Ally team.”
Prior to Honeywell Aerospace, Muthukrishnan spent 10 years at American Express leading their digital transformation efforts.
Richard Cox, Jr. was recently named Senior Vice President and Chief Information Officer for Cox Enterprises. In this role, he provides oversight and direction to corporate IT and business leaders on strategy, standards, and opportunities for data analytics and business intelligence, development and support, infrastructure, security and technical services. This is Cox’s first role as CIO of a company, though he had previously served as Chief Operations Officer of the City of Atlanta.
Dallas Clement, Executive Vice President and Chief Financial Officer of Cox Enterprises noted when reflecting on Cox’s appointment, “Richard’s leadership skills and ability to build strong, productive relationships with corporate and technology partners will help us drive innovation and adapt to the evolving needs of our businesses. We’re thrilled to begin the next phase of our technology journey with Richard at the helm.”
There were many great technology books published in 2019, but here are ten that I found particularly insightful. If you are unfamiliar with these works, I suggest you give them a read.
Tools and Weapons: The Promise and the Peril of the Digital Age
by Brad Smith and Carol Ann Browne
Microsoft President Brad Smith, together with Carol Ann Browne, tells the story of the evolution of technology from their vantage point at Microsoft. They highlight challenges that have arisen, including cybercrime and cyberwar, social media issues, moral issues related to artificial intelligence, and even challenges to democracy. Smith provides interesting insights into the decisions Microsoft’s leadership, himself included, has faced, and the broader implications for society at large.
Digital Transformation: Survive and Thrive in an Era of Mass Extinction
by Tom Siebel
Tom Siebel argues that the confluence of four technologies—elastic cloud computing, big data, artificial intelligence, and the internet of things—will change the way in which business and government operate going forward. As he also noted in my recent interview with him, there is a blueprint companies can follow in order to take full advantage of these four technologies and sustain competitive advantage in the process.
The Metis Strategy team was honored to participate in the 2019 Forbes CIO Next conference, where chief information officers, technology and operations leaders, VCs, and artificial intelligence experts shared their insights into the evolution of AI in the enterprise and gave us a glimpse of where things are headed in 2020. Here are a few lessons we brought home:
“Digital immigrant” companies leverage their strengths. Organizations not born in the cloud, often referred to as “digital immigrants,” continue to face challenges that many of their digital native competitors do not. But as legacy firms upgrade their technology environments and make progress on digital transformation efforts, they increasingly are able to make use of their inherent advantages: stockpiles of valuable data, decades of industry expertise, and the scale to enter new markets quickly.
At Rockwell Automation, for example, the convergence of Information Technology (IT) and Operational Technology (OT) and an increased focus on data has helped the company improve its on-time delivery, optimize and automate many internal processes, and shift its business model toward services such as telemetry and predictive maintenance. At insurance firm Travelers, aerial photos paired with geospatial data and claims information help the company quickly assess potential losses and deliver help to customers.
In 2020, we expect digital immigrant firms will continue to use their data and scale advantages to optimize internal processes and deliver tech-enabled products and services that can compete with their startup rivals. New technology investments will focus on business capabilities that truly differentiate companies from their competitors.
New ways of working take hold, but developing talent remains a challenge. The line between IT and the business has all but disappeared as firms embrace cross-functional, agile product teams. Many executives noted that this way of working has allowed them to respond more quickly to market changes and provide better customer experiences. We expect this cross-functional collaboration to increase in the year ahead.
At the same time, the battle for talent shows no signs of slowing down. Conference attendees listed talent as a top priority for 2020 as they look to recruit, hire, and retain new people while re-skilling existing employees for jobs of the future. Executives said they continue to seek and develop “T-shaped” employees who have a breadth and depth of skills that span technology and operations. They also recognize the need to create work environments that promote continuous learning at all levels.
We’re still in early innings with AI. Enterprise adoption of AI and machine learning is accelerating as companies explore new use cases and pursue applications that drive concrete business value. In the year ahead, we expect many companies will work to hone existing use cases and develop mechanisms to scale advanced analytics capabilities across the enterprise. Indeed, as many traditional organizations called themselves tech companies in recent years, some panelists suggested we might start hearing them refer to themselves as AI companies.
But significant work remains to be done. Companies continue to invest in their core data infrastructure, and executives are looking for new ways to measure and communicate ROI for their AI initiatives. There are also fundamental issues yet to be resolved, such as how to create explainable algorithms, how to reduce inherent bias in data sets, and whether certain AI technologies should operate without a human in the loop.
The pinnacle: using data to drive growth. Both winners of this year’s Forbes CIO Innovation Award used their companies’ rich data sets to develop new services and drive tangible financial growth:
As we enter 2020, we expect CIOs to play an increasingly visible role in the development of corporate strategy. Many are likely to expand their purview as organizations look to new technologies to drive operational efficiency, deliver top-line growth, and create a differentiated customer experience. CIOs also will continue to be agents of cultural change as they foster new ways of working and develop technology talent across their organizations. We look forward to the year ahead!
This article originally appeared on CIO.com. Steven Norton co-authored the piece.
You have heard the hype: Data is the “new oil” that will power next-generation business models and unlock untold efficiencies. For some companies, this vision is realized only in PowerPoint slides. At Western Digital, it is becoming a reality. Led by Steve Phillpott, Chief Information Officer and head of the Digital Analytics Office (DAO), Western Digital is future-proofing its data and analytics capabilities through a flexible platform that collects and processes data in a way that enables a diverse set of stakeholders to realize business value.
As a computer Hard Disk Drive (HDD) manufacturer and data storage company, Western Digital already has tech-savvy stakeholders with an insatiable appetite for leveraging data to drive improvement across product development, manufacturing and global logistics. The nature of the company’s products requires engineers to model out the most efficient designs for new data storage devices, while also managing margins amid competitive market pressures.
Over the past few years, as Western Digital worked to combine three companies into one, which required ensuring both data quality and interoperability, Steve and his team had a material call to action to develop a data strategy that could:
To achieve these business outcomes, the Western Digital team focused on:
The course of this analytics journey has already shown major returns by enabling the business to improve collaboration and customer satisfaction, accelerate time to insight, improve manufacturing yields, and ultimately save costs.
Driving cultural change management and education
Effective CIOs have to harness organizational enthusiasm to explore the art of the possible while also managing expectations and instilling confidence that the CIO’s recommended course of action is the best one. With any technology trend, the top of the hype cycle brings promise of revolutionary transformation, but the practical course for many organizations is more evolutionary in nature. “Not everything is a machine learning use case,” said Steve, who started by identifying the problems the company was trying to solve before focusing on the solution.
Steve and his team then went on a roadshow to share the company’s current data and analytics capabilities and future opportunities. The team shared the presentation with audiences of varying technical aptitude to explain the ways in which the company could more effectively leverage data and analytics.
Steve recognized that while the appetite to strategically leverage data was strong, there simply were not enough in-house data scientists to achieve the company’s goals. There was also an added challenge of competing with silos of analytics capabilities across various functional groups. Steve’s team would ask, “could we respond as quickly as the functional analytics teams could?”
To successfully transform Western Digital’s analytics capabilities, Steve had to develop an ecosystem of partners, build out and enable the needed skill sets, and provide scalable tools to unlock the citizen data scientist. He also had to show his tech-savvy business partners that he could accelerate the value to the business units and not become a bureaucratic bottleneck. By implementing the following playbook, Steve noted, “we proved we can often respond faster than the functional analytics teams because we can assemble solutions more dynamically with the analytics capability building blocks.”
Achieving quick wins through incremental value while driving solutions to scale
Steve and his team live by the mantra that “success breeds opportunity.” Rather than ask for tens of millions of dollars and inflate expectations, the team in IT, called the High-Performance Computing group, pursued a quick win to establish credibility. After identifying hundreds of data sources, the team prioritized use cases that met the sweet spot of being solvable while clearly exhibiting incremental value.
For example, the team developed a machine learning application called DefectNet to detect test-fail patterns on the media surface of HDDs. Initial test results showed promise in detecting and classifying images by spatial patterns on the media surface. Process engineers could then trace patterns back to upstream equipment in the manufacturing facility. From the initial prototype, the solution was grown incrementally to scale, expanding into use cases such as metrology anomaly detection. Now every media surface in production goes through the application for classification, and the solution serves as a platform for image classification applications across multiple factories.
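For illustration, here is a minimal sketch of the kind of convolutional classifier such an application might start from, written in PyTorch. The architecture, input size, and number of defect patterns are assumptions made for the sake of the example, not Western Digital’s actual DefectNet design.

```python
# A sketch of a small CNN for classifying surface-scan images by defect
# pattern. The 64x64 input size and five pattern classes are assumptions.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Maps a 1-channel surface-scan image to a spatial defect-pattern class."""
    def __init__(self, num_patterns: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_patterns),          # one logit per pattern
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: classify a batch of eight 64x64 scans (random stand-ins here).
model = DefectClassifier()
scans = torch.randn(8, 1, 64, 64)
predicted_pattern = model(scans).argmax(dim=1)
```

A narrow model like this can be trained on labeled scan images first, then extended to new pattern classes as additional use cases come online.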
A similar measured approach was taken while developing a digital twin for simulating material movement and dispatching in the factory. An initial solution focused on mimicking material moves within Western Digital’s wafer manufacturing operations. The incremental value realized from smart dispatching created support and momentum to grow the solution through a series of learning cycles. Once again, a narrowly focused prototype became a platform solution that now supports multiple factories. One advantage of this approach: deployment to a new factory reuses 80% of the already developed assets, leaving only 20% for site-specific customization.
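To make the digital twin idea concrete, here is a minimal discrete-event sketch of material movement: lots are released to stations, and a simple dispatch rule decides when each can start. The station names, timings, and first-in-first-out rule are hypothetical, not Western Digital’s actual twin.

```python
# A toy material-movement simulation: one queue per station, FIFO dispatch.
import heapq

def simulate(moves, horizon=100.0):
    """moves: list of (release_time, station, duration) tuples.
    Returns a log of (station, start, finish) for each completed move."""
    events = list(moves)
    heapq.heapify(events)                 # process lots in release order
    busy_until = {}                       # station -> time it becomes free
    log = []
    while events:
        release, station, duration = heapq.heappop(events)
        start = max(release, busy_until.get(station, 0.0))  # wait if busy
        finish = start + duration
        if finish > horizon:              # ignore moves past the horizon
            continue
        busy_until[station] = finish
        log.append((station, start, finish))
    return log

# Usage: three lots contending for two hypothetical stations.
print(simulate([(0.0, "etch", 5.0), (1.0, "etch", 5.0), (2.0, "deposit", 3.0)]))
```

The reuse claim in the text maps naturally onto this structure: the simulation engine is the reusable 80%, while station definitions and dispatch rules form the site-specific 20%.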
Developing a DAO hybrid operating model
After earning credibility that his team could help the organization, Steve established the Digital Analytics Office (DAO), whose mission statement is to “accelerate analytics at scale for faster value realization.” Composed of data scientists, data engineers, business analysts, and subject matter experts, this group sought to provide federated analytics capabilities to the enterprise. The DAO works with business groups, who also have their own data scientists, on specific challenges that are often related to getting analytics capabilities into production, scaling those capabilities, and ensuring they are sustainable.
The DAO works across functions to identify where disparate analytics solutions are being developed for common goals using different methodologies and achieving varying outcomes. Standardizing on an enterprise-supported methodology and machine learning platform gives business teams faster time to insight with higher value.
To gain further traction, the DAO organized a hackathon that included 90 engineers broken into 23 teams that had three days to mock up a solution for a specific use case. A judging body then graded the presentations, ranked the highest value use cases, and approved funding for the most promising projects.
In addition to using hackathons to generate new demand, business partners can also bring a new idea to the DAO. Those ideas are presented to the analytics steering committee to determine business value, priority and approval for new initiatives. A new initiative then iterates in a “rapid learning cycle” over a series of sprints to demonstrate value back to the steering committee, and a decision is made to sustain or expand funding. This allows Western Digital to place smart bets, focusing on “singles rather than home runs” to maintain momentum.
Building out the data science skill set
“Be prepared and warned: the constraint will be the data scientists, not the technology,” said Steve, who recognized early in Western Digital’s journey that he needed to turn the question of building skills on its head.
The ideal data scientist is driven by curiosity and can ask “what if” questions that look beyond a single dimension or plane of data. They can understand and build algorithms and have subject matter expertise in the business process, so they know where to look for breadcrumbs of insight. Steve found that these unicorns represented only 10% of data scientists in the company, while the other 90% had to be paired with subject matter experts to combine the theoretical expertise with the business process knowledge to solve problems.
While pairing people together was not impossible, it was inefficient. In response, rather than ask how to train or hire more data scientists, Steve asked, “how do we build self-service machine learning capabilities that only require the equivalent of an SQL-like skill set?” Western Digital began exploring Google’s and Amazon’s AutoML capabilities, in which machine learning is used to generate machine learning models. The vision is to abstract away the more sophisticated skills involved in developing algorithms so that business process experts can be trained to conduct data science exploration themselves.
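The mechanics behind such a self-service layer can be sketched with off-the-shelf tools. The example below uses scikit-learn to search across model families and hyperparameters automatically, so the end user only supplies a table of features and a target; the search space and dataset are illustrative, not any vendor’s AutoML API.

```python
# A minimal "auto ML" sketch: try several model families and settings,
# keep the best. The user-facing surface is a single fit() call.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression())])
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [50, 200]},
]
search = GridSearchCV(pipeline, search_space, cv=5)
search.fit(X, y)   # the "SQL-like" user only ever runs something like this

print(search.best_score_)      # cross-validated accuracy of the winner
print(search.best_estimator_)  # the selected model family and settings
```

Commercial AutoML services push the same idea much further (feature engineering, neural architecture search), but the division of labor is the same: the platform explores, and the domain expert frames the question.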
Designing and future-proofing technology
Many organizations take the misguided step of formulating a data strategy solely around the technology. The limitation of that approach is that companies risk over-engineering solutions with a slow time to value, and by the time products are in market, the solution may be obsolete. Steve recognized this risk and guided his team to develop a technology architecture that provides the core building blocks without locking in on a single tool. This fit-for-purpose approach allows Western Digital to future-proof its data and analytics capabilities with a flexible platform. The three core building blocks of this architecture are collecting data, processing and governing data, and realizing value.
Future-proofing technology: Collect data
The first step is to be able to collect, store, and make data accessible in a way that is tailored to each company’s business model. Western Digital, for example, has significant manufacturing operations that require sub-second latency for on-premise data processing at the edge, while other capabilities can afford cloud-based storage for the core business. Across both ends of the spectrum, Western Digital ingests 80-100 trillion data points into its analytics environment daily, with more analytical compute power pushing to the edge. The company also optimizes where it stores data, decoupling the data and technology stack, based on the frequency with which the data must be analyzed. If the data is only needed a few times a year, the best low-cost option is to store it in the cloud. Western Digital’s common data repository spans processes across all production environments and is structured in a way that can be accessed by various types of processing capabilities.
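A tiering rule of this kind reduces to a small, explicit policy. The sketch below illustrates the decision described above; the tier names and thresholds are assumptions, not Western Digital’s actual policy.

```python
# Frequency- and latency-driven storage tiering: a hypothetical rule.
def choose_storage_tier(accesses_per_year: int, max_latency_ms: float) -> str:
    """Pick a storage tier from how often data is analyzed and how fast
    it must be served."""
    if max_latency_ms < 1000:       # sub-second processing stays at the edge
        return "edge-on-premise"
    if accesses_per_year <= 12:     # rarely analyzed: cheapest cloud tier
        return "cloud-cold-storage"
    return "cloud-standard"

print(choose_storage_tier(accesses_per_year=4, max_latency_ms=60_000))
# -> "cloud-cold-storage"
```

Because the policy is decoupled from any one storage product, the thresholds can change as costs and latency requirements evolve without re-architecting the platform.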
Further, as Western Digital’s use cases became more latency dependent, it was evident that they required core cloud-based big data capabilities closer to where the data was created. Western Digital wanted to enable its user community by providing a self-service architecture. To do this, the team developed and deployed a PaaS (Platform as a Service) called the Big Data Platform Edge Architecture, using cloud-native technologies and DevOps best practices in Western Digital’s factories.
Future-proofing technology: Process & govern data
With the data primed for analysis, Western Digital offers a suite of tools that allow its organizations to extract, govern, and maintain master data. From open-source Hadoop to massively parallel processing, NoSQL, and TensorFlow, data processing capabilities are tailored to the complexity of the use case and the volume, velocity, and variety of the data.
While these technologies will evolve over time, the company will continually need to sustain data governance and quality. At Western Digital, everyone is accountable for data quality. To foster that culture, the IT team established a data governance group that identifies, educates and guides data stewards in the execution of data quality delivery. With clear ownership of data assets, the trust and value of data sets is scalable.
Beyond ensuring ownership of data quality, the data governance group also manages platform decisions, such as how to structure the data warehouse, so that the multiple stakeholders are set up for success.
Future-proofing technology: Realize value
Data applied in context transforms numbers and characters into information, knowledge, insight, and ultimately action. In order to realize the value of data in the context of business processes – either looking backward, in real time, or into the future – Western Digital developed four layers of increasingly advanced capabilities:
By codifying the analytical service offerings in this way, business partners can use the right tool for the right job. Rather than tell people exactly what tool to use, the DAO focuses on enabling the fit-for-purpose toolset under the guiding principle that whatever is built should have a clear, secure, and scalable path to launch with the potential for re-use.
This platform reusability tremendously accelerates time to scale and business impact.
Throughout this transformation, Steve Phillpott and the DAO have helped Western Digital evolve its mindset as to how the company can leverage data analytics as a source of competitive advantage. The combination of a federated operating model, new data science tools, and a commitment to data quality and governance have allowed the company to define its own future, focused on solving key business problems no matter how technology trends change.
This month, $92 billion consumer packaged goods (CPG) company Procter & Gamble named Vittorio Cretella as chief information officer. He will replace Javier Polit on January 7.
Cretella has run his own consulting firm, VCAdvisory, since retiring from Mars Incorporated in 2017 after 25 years with the company, the last four as CIO. He has consulted for a number of CPG companies as well as logistics companies.
Jon Moeller, P&G’s Chief Operating Officer, Chief Financial Officer and Vice Chairman, to whom Cretella will report, noted, “Vittorio is a thought-leading CIO with a wealth of digitally influenced business experience. He is fluent in today’s IT technology and capabilities – and deeply understands the relationship between IT and business.”