AI READINESS FOR
GOOGLE
SAGAR TANEJA
MBA
400530
1
INDEX
1. Strategy and Governance Systems 3
- Strategic Relevance of AI for Company Strategy 4
- Ethics Policy 5
- Outreach and External Accountability 6
2. Access to Data, Computing Power and Talent 7
- Access to Data 8
- Computational Power and Talent 9
3. Innovation and Operational Capabilities 10
- AI Diffusion 11
4. Recommendations 12
2
STRATEGY AND
GOVERNANCE
SYSTEMS
3
“We will move from a mobile-first to an AI-first world.” –
Sundar Pichai, Google CEO
AI STRATEGY
SCORE*
2
STRATEGIC
RELEVANCE OF AI FOR
COMPANY STRATEGY
4
SCORE*
0
ETHICS POLICY
Google’s lesser-known AI ethics board allows its subsidiary,
DeepMind, to retain control of the technology no matter how
valuable or dangerous it becomes.
5
OUTREACH AND EXTERNAL ACCOUNTABILITY
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
In June 2018, Google’s CEO, Sundar Pichai, released an AI ethics memo setting out seven principles for ethical AI
development, which he described as concrete standards that will actively govern the company’s research and
product development and shape its business decisions.
6
ACCESS TO DATA,
COMPUTING POWER
AND TALENT
7
The amount of generic user data Google has on its hands is
a huge competitive advantage over most other companies.
AI STRATEGY
SCORE*
3
ACCESS TO DATA
SOURCES OF DATA
• Searches
• Clicks on search results
• Web crawling
• Website analytics
• Ad serving
• Email
• G Suite (Docs, Sheets, Slides, Calendar, Drive, etc.)
• Google Public DNS
• Google Chrome
• Android OS
• Google Pixel
• Google Assistant and Google Home
• Chrome OS
• Google Finance
• YouTube
• Google Translate
• Google Books
8
COMPUTATIONAL POWER AND TALENT
2.5 million servers
904 Data Workers*
9
INNOVATION AND
OPERATIONAL
CAPABILITIES
10
AI DIFFUSION
Google has been very active in using AI to create value across its business. The
company has a central AI team that spreads AI throughout its operations. Currently, it uses AI in its
operations in the following ways:
1. Identify security bugs and malware through machine learning
2. Solve internal business challenges
3. Improve energy efficiency in data centers
11
As the leader in the AI race, Google needs to set an
example as a responsible player that other companies can
emulate. Here are some recommendations for how Google
can be more responsible with AI.
1. Create an ethics board whose members have
decision-making power.
2. Work with industries and governments to ease the
transition into the AI era.
3. Use AI to administer AI.
4. Share bias-reduction tools with other technology
companies.
5. Use AI to upgrade cybersecurity.
AI STRATEGY
RECOMMENDATIONS
12
REFERENCES
• Alphabet Inc. Annual Report 2018 https://abc.xyz/investor/static/pdf/20171231_alphabet_10K.pdf
• Cbinsights.com, Google Strategy Teardown: Google Is Turning Itself Into An AI Company As It Seeks To Win New Markets Like Cloud And Transportation https://www.cbinsights.com/research/report/google-strategy-teardown/
• Forbes.com (April 2019), Google Scraps Its AI Ethics Board Less Than Two Weeks After Launch In The Wake Of Employee Protest https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wake-of-employee-protest/#fb8f55b6e281
• Forbes.com (March 2019), Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret https://www.forbes.com/sites/samshead/2019/03/27/google-announced-an-ai-council-but-the-mysterious-ai-ethics-board-remains-a-secret/#73b91338614a
• Cnet.com (June 2018), Read Google's AI ethics memo: 'We are not developing AI for use in weapons’ https://www.cnet.com/news/read-googles-ai-ethics-memo-we-are-not-developing-ai-for-use-in-weapons/
• Dailyo.in (Jan 2019), How hackers and cybersecurity experts are both using AI to outdo each other https://www.dailyo.in/technology/cybersecurity-ai-google-microsoft-hackers/story/1/28745.html
• Royal.pingdom.com (Nov 2018), How Google Collects Data About You and the Internet https://royal.pingdom.com/how-google-collects-data-about-you-and-the-internet/
• Wsj.com (April 2019), Google, The Adecco Group Find Varied Uses for AI https://deloitte.wsj.com/cmo/2019/04/09/google-the-adecco-group-find-varied-uses-for-ai/
• Bcg.com (Nov 2018), Artificial Intelligence Is a Threat to Cybersecurity. It’s Also a Solution. https://www.bcg.com/en-us/publications/2018/artificial-intelligence-threat-cybersecurity-solution.aspx
• Datacenterknowledge.com (March 2017), Google Data Center FAQ https://www.datacenterknowledge.com/archives/2017/03/16/google-data-center-faq
• Nytimes.com (Oct 2017), Google Unveils Job Training Initiative With $1 Billion Pledge https://www.nytimes.com/2017/10/12/technology/google-job-training-initiative.html
• Techrepublic.com (June 2019) https://www.techrepublic.com/article/7-tech-companies-that-hire-the-most-data-scientists/
• Vox.com (April 2019), Google’s brand-new AI ethics board is already falling apart https://www.vox.com/future-perfect/2019/4/3/18292526/google-ai-ethics-board-letter-acquisti-kay-coles-james
• Glassdoor.com
13

Editor's Notes

  1. *0 = No corporate initiatives, isolated; 1 = Integration and cooperation in multiple business units; 2 = Penetration of AI in all business units. AI is integral to Google, as it is embedded in almost all of its products and services, from search to autonomous vehicles. In the last decade, the company has acquired a number of AI startups all over the world and holds the largest number of AI patents. The company has launched two AI funds, Gradient Ventures and the Google Assistant Investment Program, the latter dedicated to building the capabilities of Google Assistant, the company’s virtual assistant. AI is the company’s foremost priority, as it aims to develop sophisticated machine learning capabilities through both outside investments and in-house development. Though artificial intelligence has been a key focus for over a year, Google’s mentions of “AI” and “machine learning” on earnings calls reached a new peak in Q2’18.
  2. *0 = Poor ethics policy; 1 = covers at least one of the three dimensions Fairness, Accountability, Transparency; 2 = covers at least two of the three; 3 = covers all three. Earlier this year, Google launched an AI ethics board to guide its “responsible development of AI” but scrapped it less than two weeks after its creation, following backlash from employees over the inclusion of Kay Coles James, president of the right-wing think tank The Heritage Foundation, as a board member. According to critics, the panel was nothing more than a PR tool for the company and wasn’t set up for success, for a number of reasons: 1. Google stated that the panel would serve over the course of 2019 and meet four times. Given the complexity of the issues members would be advising on, this isn’t enough time to hear about all the projects the company is working on. 2. The board members weren’t going to be paid, which some think would have favored the independently wealthy; it also suggests that Google wasn’t serious about the AI ethics board. 3. Like most ethics panels at tech companies, it didn’t have any power to influence decisions at Google. What is even more surprising is that Google already has an AI ethics board, set up when the company bought the DeepMind AI lab in 2014. DeepMind’s cofounders requested the board as part of the £400 million acquisition. They have acknowledged in a number of public talks that the board exists, but they have never said who sits on it or exactly what it does. A report in The Economist revealed that DeepMind and Google signed an agreement stating that if DeepMind ever succeeds in its core mission of building artificial general intelligence (AGI), control of that machine will lie with a governing panel known as the Ethics Board. According to the report, the Ethics Board essentially allows DeepMind to legally maintain a degree of control over the technology it creates, no matter how valuable or dangerous it becomes.
  3. 1. Be socially beneficial.  “We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.” 2. Avoid creating or reinforcing unfair bias. “We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” 3. Be built and tested for safety. “We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.” 4. Be accountable to people. “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” 5. Incorporate privacy design principles. “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” 6. Uphold high standards of scientific excellence. “We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. 
And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.” 7. Be made available for uses that accord with these principles.   “We will work to limit potentially harmful or abusive applications.” Additionally, Google has stated that it wouldn’t deploy AI in the following areas: Technologies that cause or are likely to cause overall harm. “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. Technologies that gather or use information for surveillance violating internationally accepted norms. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
  4. *0 = Data is not systematically gathered and analyzed; 1 = Data is gathered but not systematically structured and cleaned; 2 = Data is partially used; 3 = Data is a core commodity of the company. While Google has been called out for its widely reported invasive policies and alleged attempts at collecting information about its users, it is also responsible for keeping sensitive data, such as bank details and other personal files, safe from hackers. Most of this data is anonymized: advertising data in server logs is kept for a period of nine months, and cookies (for services that use them) aren’t anonymized until after 18 months. Google also has a partnership with Salesforce to pool data together to drive better value for advertisers investing in cloud infrastructure.
  5. There's no official data on how many servers there are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number is always changing as the company expands capacity and refreshes its hardware. *Source: TechRepublic
  6. The Google Brain team, which is responsible for Google Photos and speech recognition for Android, also works with any team and project on identifying security bugs and malware through machine learning. The company has shifted from deploying AI in narrow strategic areas to becoming “AI-first” and has mandated its use across the organization. It has learned from best practices gained in operational use and has built those AI capabilities into nearly all its products. Now, Google Cloud is working to bring Google’s innovation in AI to all businesses. “We’ve found that in most cases where we had a particular business challenge, artificial intelligence could be applied to help us solve it,” says Rajen Sheth, senior director of product management at Google Cloud Artificial Intelligence. “We believe every company is going to be transforming itself with AI over the course of the next 10 years, so we felt it was imperative that we deploy it broadly as part of our own business strategy.” According to Sheth, one of the keys to successfully applying AI is identifying an internal business challenge and then exploring how AI might solve it. For example, Google Cloud leveraged Google Assistant technology to personalize experiences with the service department in its customer contact center. Natural language recognition and simulation, combined with machine learning, automate the handling of tier-1 calls. Rather than asking a generic set of questions from a script, the technology understands the specific issues the customer called about and accesses the knowledge repository to provide relevant questions and comments in a conversational manner.
Another use case allowed Google Cloud to improve the energy efficiency of its data centers: using machine learning to set cooling-system algorithms and reinforcement learning, where the system tries different things to test an outcome and then retrains itself based on the findings, the system learned on its own what the optimal settings were. The cooling energy needed was reduced by 40 percent, resulting in a 15 percent reduction in overall data-center energy usage and significant cost savings for the organization. Google Cloud continues to explore internal deployment opportunities that could inform future commercial product development, from demand forecasting to systems-control optimization and quality control. Its AI team is exploring use cases for artificial intelligence across the medical, scientific, and automotive industries, and it has made its AI training program open source so that all technologists can benefit from the learning. It is also devoting resources to investigating how to monitor and analyze AI behavior, to detect and rectify bias in AI engines, and to facilitate the transformation of a traditional workforce in the age of machine learning.
  7. 1. Unlike the ethics board created earlier this year, which didn’t have any power, Google can create an ethics board whose members can actually influence the decisions taken at the company. 2. One of the biggest expected undesired outcomes of AI is that automation will displace many human jobs across industries. The possible repercussions could be very dangerous, especially in developing countries. While Google has already committed $1 billion to train Americans for jobs in technology, the company can work with industries and governments (at least in the countries where it operates) to ease this transition. 3. AI is still at a nascent stage, but what we do know is that algorithms are accomplishing tasks human beings don’t even understand. This is a serious dilemma, as we can’t take responsibility for administering AI if we can’t keep up with the pace of the algorithms. One possible solution for Google could be to use AI to administer AI, if it isn’t already doing so. 4. Google has built a number of tools, such as ‘Facets’ and the ‘What-If Tool’, to tackle unfair bias in AI. By sharing these with other technology companies, the biases in products and services that use AI could be reduced collectively. 5. According to a survey by cybersecurity firm Webroot, more than 90% of cybersecurity professionals in the US and Japan expect attackers to use AI against the companies they work for. In the future, hackers could attack a financial institution’s AI-controlled customer-recognition software. Google can use AI to upgrade its cybersecurity capabilities and protect its AI initiatives.