
Peter High

9/12/2016

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the university’s largest on-campus laboratory as measured by research scope and membership. More than 250 companies have been hatched through CSAIL, including Akamai, iRobot, 3Com, and Meraki. CSAIL’s research activities are divided into seven areas of emphasis.

CSAIL’s Director is Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and a 2002 MacArthur Fellow. She is the first female head of CSAIL, a distinction she has used to help inspire other women to follow in her footsteps into the fields emphasized by the laboratory. From her post, she has been able to witness and influence a number of rising trends in technology that are driving the current digital revolution, all of which we cover in this interview.

Peter High: Can you give some background on your lab?

Daniela Rus: For more than 50 years, CSAIL’s research has pushed the boundaries of computing and played a vital role in the digital revolution, from the first time-sharing systems and the first computer password to public key encryption and the free-software movement.

CSAIL has more than one thousand members, including five hundred PhD students and postdocs, making it the largest interdepartmental research lab at MIT. Members range from roboticists like myself, to experts in data security, computational biology, software design and predictive analytics. This diverse set of interests allows the lab to conduct important interdisciplinary research that we believe will make a major global impact.

CSAIL started as “Project MAC,” with the idea that two people could use the same computer at the same time, and that machine was about as big as a room. It is amazing to me that in just 50 years we moved from dreaming about multiple people using the same machine to a world where computing is indispensable.

High: What is the goal of CSAIL?

Rus: Our goal is to invent the future of computing. We want to use computer science to tackle major challenges in fields like healthcare and education, from creating better tools for medical diagnosis, to developing accident-free cars, to inspiring kids to learn to code.

Our founders set out to allow multiple users to compute simultaneously, seeing this as the first step to enable humans to use machines in a way that augments our intelligence. That’s the goal CSAIL has pursued ever since.

Of course, the most pressing problems in computing are always changing. Computers have shrunk right alongside our bell-bottom jeans, and today they are in our phones, cars, TVs, and even our washing machines! There are always new challenges on the horizon as we think about how to make computing better, more powerful, and more capable. We also want to use computing to solve important problems facing the world, for example in healthcare, education, and privacy. We strive to move the world of science fiction into science, and then into the realm of reality.

High:How do you set priorities for the lab?

Click here to read the full article

Peter High

11/7/16

Slack is the fastest-growing workplace software ever. CEO Stewart Butterfield co-founded the company in August 2013 to build a cloud-based team collaboration tool.

Interestingly, as fast as the organization has grown, Butterfield underestimated the true opportunity for the idea that he and his co-founders developed. When he first pitched Slack, he sized the market for the software at $100 million, a figure the company recently exceeded in revenue after roughly three years.

As the organization has grown at such an impressive clip, Butterfield has been forced to grow the team substantially in parallel. He has done so with a laser focus on certain cultural attributes, aligning recruiting practices to his established mission in order to ensure the continued addition of high-quality employees. As Butterfield notes below, the mission is: “to make people’s working lives simpler, more pleasant, and more productive.”

(To listen to an unabridged audio version of this interview, please click this link. This is the 20th interview in the IT Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, Craig Newmark and Meg Whitman, please visit this link).

Peter High: I thought we would begin with the beginning of Slack itself. It was the result of a pivot when you were running a company called Tiny Speck, and it was a component of a game called Glitch, as I understand it. Can you talk a little bit about the genesis of that, the original intent of it, and how this became the idea itself?

Stewart Butterfield: Sure. It was not part of the game, but a tool that we used internally to communicate. The company was started by myself and three other members of the original team. At the time we started it, we had one person in New York, one person in San Francisco, and two in Vancouver, British Columbia, so the natural thing for us to use was IRC. As you know, IRC is now twenty-seven years old and predates the web by a couple of years. By modern standards, it is a clunky and ancient technology. For example, if you and I are using IRC to communicate and you are not connected to the server at a given moment, I cannot send you a message. We built a system to log messages so people could catch up when they got back online. Once we had those messages in a database, we wanted to be able to search, so we added search. I could keep going for a long time with the features we added.

I think one of the critical things was that we were doing this in a subconscious or pre-conscious way, which is not the normal method of software development. There was no ego and no speculation. Whenever a problem got so irritating that we couldn’t stand it or whenever an opportunity for improvement was so obvious that we could not help but take advantage of it, we would do it, and then go back to what we were supposed to be working on. The result of that after three and a half years was this system for internal communications that all of us agreed we would never work without again. We decided to see what else was out there in the market, and there wasn’t anything good, so we made a product at the moment we decided to shut down the game.

High: In those early days, what were the ambitions for it? Clearly, as you say, there was a need that wasn’t being met, even after seeking out something that might be more readily available. How big was the ambition in those early days? There are so many different areas now that Slack covers and so many different products and product categories that it now competes with. Did you see a broader enterprise use in those early days? Did you see this as something that would be taking on the likes of e-mail as well as the Skypes of the world? How did that all occur to you and how quickly did the broader implications of it grow?

Butterfield: It was a little bit of a slow boil in terms of how big it could be. We had taken a bunch of venture capital funding, and when we decided to shut down the game, we had five million dollars left. Investors didn’t want their money back; they wanted us to try something else, so when we were putting together the pitch deck for Slack and explaining what we were going to do, we had sized the market at $100 million in revenue.

Click here to read the full article

The Most Influential AI Thought-Leaders and Practitioners

by Peter High, series on Forbes.com

I would like to introduce a series focused on artificial intelligence. Advances in artificial intelligence are rapidly redefining everything from how we work to how we learn to how we treat diseases. The expertise and backgrounds of the individuals interviewed in this series are varied, ranging from startup founders and corporate executives to academic researchers and leaders of not-for-profit organizations. That said, they share a commonality: they are the world’s foremost leaders in the field of artificial intelligence.

Last week, I noted Gartner’s picks for the top ten technology trends for 2017. This list differs from the lists for 2016, 2015, and 2014 in that it contains more trends that have not yet been implemented by even leading CIOs than in years past. My informal polling of CIOs suggests that most have roughly half of these trends on their roadmaps, and many put the number below 50 percent. That said, CIOs are interested in better understanding each of these trends to determine how many more should be added.

My team and I put together our picks for the books, articles, and podcasts that best explain each of these trends. Use them as primers to help your team understand the concepts and assess their relevance to your strategic imperatives.

AI and Advanced Machine Learning

My pick for the best book on this topic in recent months is Kevin Kelly’s The Inevitable. A founding editor of Wired magazine, Kelly is in his mid-60s but maintains the curiosity and flexibility of mind of someone far younger. He has seen trends come and go, and as a result he is a good filter for unwarranted hype. His book is an entertaining foray into the future of artificial intelligence and machine learning, and what they will mean for us.

Intelligent Things

The authority on the Internet of Things is Stacey Higginbotham, a former editor and writer for publications such as Time and GigaOmni Media. She moderates the Internet of Things Podcast, which discusses all angles of the Internet of Things, including interviews with top IoT leaders as well as unique viewpoints and in-depth analyses of the latest news and trends in the field.

Virtual and Augmented Reality

Marc Prosser is a freelance journalist and researcher living in Tokyo who writes about all things science and technology. He has written a great number of pieces on virtual and augmented reality that can be found on SingularityHub. One of the best is Augmented Reality, not VR, will be the Big Winner for Business. Digi-Capital estimates that AR companies will generate $120 billion in revenue by 2020. This article reviews how Boeing and other companies are experimenting with the technology, and the types of benefits it can provide to companies.

Digital Twin

Michael Grieves is the Executive Director of the Center for Advanced Manufacturing and Innovative Design at the Florida Institute of Technology. His paper Manufacturing Excellence through Virtual Factory Replications is the seminal work on the topic of digital twins; it explores how a digital twin can act as the critical connection between data about the physical world and the information the digital world holds about the physical asset.

Blockchain and Distributed Ledgers

Don Tapscott is a consultant and author who has written a number of books on digital trends and their impacts on business and society, including the business bestseller, Macrowikinomics. In his latest book, Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World, co-authored with his son, Alex, the concept of blockchain is explained in clear terms with an eye toward practical recommendations on how businesses might adopt the technology and reasons to do so.

Intelligent Apps

S. Somasegar is a former Corporate Vice President of the Developer Division at Microsoft, where he worked for 27 years. In the past year, he joined Madrona Venture Group as a Venture Partner. In May of this year, he wrote an article in TechCrunch entitled The Intelligent App Ecosystem, describing how every new application built today will be an intelligent application. He offers an overview of this evolution, and highlights companies that are positioning themselves to realize significant competitive advantages in the years ahead.

Conversational Systems

John Smart is a global futurist and foresight consultant. He is CEO of Foresight U, a strategic foresight and entrepreneurship learning and development company. He has written a four-part series of articles, The Brave New World of Smart Agents and Their Data, in which he explores the five- to twenty-year future of smart agents and the knowledge bases used to build them. Over the course of these four in-depth articles, Smart articulates how and why smart agents will soon become central to how billions of people live their lives.

Digital Technology Platforms

Salim Ismail has spent the last seven years building Singularity University as its founding executive director and current global ambassador. SU is based at NASA Ames, and its goal is to “educate, inspire, and empower a new generation of leaders to apply exponential technologies to address humanity’s grand challenges.”

In his book, Exponential Organizations: Why new organizations are ten times better, faster, and cheaper than yours (and what to do about it), Ismail notes that as businesses become increasingly digital and the pace of change continues to accelerate, traditional organizations will increasingly struggle to compete. Ismail highlights an organizational model that closes the gap between linear organizations and the exponential environment they operate in.

Mesh App and Service Architecture

Author and entrepreneur Lisa Gansky has focused on building companies and supporting social ventures where there is an opportunity for well-timed disruption and a resounding impact. In The Mesh: Why the Future of Business Is Sharing, she notes that in the last few years a fundamentally different model has taken root, one in which consumers have more choices, more tools, more information, and more peer-to-peer power.

Also, Bala Iyer is a professor and chair of the Technology, Operations, and Information Management Division at Babson College. Mohan Subramaniam is an associate professor of strategy at Boston College’s Carroll School of Management. Together, they authored “The Strategic Value of APIs” in Harvard Business Review. They note that to move to an event-driven model, organizations must shift their attention from internal information exchanges to external information exchanges, and that APIs are at the core of enabling this transition.

Adaptive Security Architecture

To my mind, there is no deeper thinker in the world of cybersecurity than National Institute of Standards and Technology (NIST) Fellow Ron Ross. He leads the Federal Information Security Management Act (FISMA) Implementation Project, which includes the development of security standards and guidelines for the federal government, contractors, and the United States’ critical information infrastructure.

In my interview with him on these pages, entitled “A Conversation with the Most Influential Cybersecurity Guru to the U.S. Government,” he details how cyber threats will increase as our appetite for technology increases. He describes the TACIT acronym for technology leaders to keep in mind when managing cybersecurity, which stands for Threats, Assets, Complexity, Integration, and Trustworthiness. He articulates concepts to bear in mind in each case.

Special thanks to Brandon Metzger for his assistance in aggregating this list.

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book is Implementing World Class IT Strategy. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs. Peter moderates the Forum on World Class IT podcast series. He speaks at conferences around the world. Follow him on Twitter @PeterAHigh.

Peter High

8/1/2016

Jeff Dean was one of Google’s earliest employees, having joined in 1999 after receiving his Ph.D. in Computer Science from the University of Washington three years earlier. He has been a prominent figure in the company’s growth, having designed and implemented much of the distributed computing infrastructure that supports most of Google’s products.

Google CEO Sundar Pichai has said that Google will become primarily an artificial intelligence company, and as the Senior Fellow in the Systems and Infrastructure Group, Dean and his team are essential to making that happen. In this far-ranging interview, Dean describes his various roles across Google, the company’s AI vision, his thoughts on how Google has maintained an entrepreneurial spirit despite being a technology giant, and a variety of other topics.

(To listen to an unabridged audio version of this interview, please click this link. This is the tenth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, Oren Etzioni of the Allen Institute for Artificial Intelligence, Neil Jacobstein of Singularity University, Geoff Hinton of Google, and Nick Bostrom of Oxford University.)

Peter High: Jeff Dean, you have been with Google for most of its history, having joined the company in 1999. Please give a brief depiction of the evolution of your roles across the company in the seventeen years since.

Jeff Dean: When I joined, the company was quite small. We were all wedged in a small office on University Avenue in Palo Alto. One of the first main things I worked on was building one of our first advertising systems. I then spent four or five years working on the crawling, indexing, and search systems used on every query at Google. After that, I worked mostly with my colleague Sanjay Ghemawat and others on building the software infrastructure that Google uses to store and process large data sets and do things like build search indices or process satellite imagery. More recently, I have been working on machine learning systems.

High: Given how broad your purview is and how expansive your role is, I imagine you do not have an “average day.” How do you determine who to interact with inside or outside of the company? I would be interested to know a little bit about how you spend your time on the different things you are working on at present.

Dean: There is no typical day. For the first fourteen or fifteen years, I did not take on any management roles, so that gave me more free time to just focus and write code. In the last couple of years, I have taken on a management role over some of the machine learning efforts, which has been an interesting and new learning experience for me. Since I have worked on a lot of things over the history of the company, and I like to stay in touch with what is going on in those different projects, I tend to get a lot of emails. I spend a fair amount of time dealing with email, mostly deleting them or skimming them to get a sense of what is going on. I have a few technical projects that I am working on at any given time and figure out how to spend my day there, interspersed with various meetings or design review types of things.

High: Google remains a paragon of innovation, despite its dramatic growth. It is ambitious and innovative like it was when it was a smaller organization, but now it has the resources, both human and financial, of a behemoth within the tech space. How does the organization fight stasis and bureaucracy so that it can remain much nimbler than its size would suggest?

To read the rest of the article, please visit Forbes

by Peter High, published on Forbes

6-27-2016

There has been a lot written about the transformational power of artificial intelligence. If you are a regular reader of this column, you have gained the perspectives of eight of the leading thinkers on the topic. (See links to each below.) Nick Bostrom is perhaps the most influential thinker on safety concerns associated with the march toward artificial intelligence. He calls artificial intelligence “the single most important and daunting challenge that humanity has ever faced.”

Bostrom is an extraordinary polymath, having earned degrees in physics, philosophy, mathematical logic, and neuroscience. In many ways, he personifies the need for thinkers to collaborate at the intersection of disciplines in order to fully understand the opportunities and challenges represented by artificial intelligence.  In his bestselling book, Superintelligence: Paths, Dangers, Strategies, the Oxford University professor and the founding Director of the Future of Humanity Institute highlights that just as the fate of gorillas depends on the actions of humans rather than on gorillas themselves, the fate of humanity may come to depend on superintelligent machines. He points out that we have the advantage in that we are the authors of this fate, unlike our primate relatives.  He worries that we are not taking full advantage, however.

His work has profoundly influenced leading thinkers such as Elon Musk, Bill Gates, and Stephen Hawking. In this wide-ranging interview, Bostrom explains his concerns with artificial intelligence and offers thoughts on what we might do to address them.

(To listen to an unabridged audio version of this interview, please click this link. This is the ninth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, Oren Etzioni of the Allen Institute for Artificial Intelligence, Neil Jacobstein of Singularity University, and Geoff Hinton of Google.)

Peter High: Nick, you described yourself as a disinterested student prior to age 15, but you experienced a profound awakening that led you to ambitious intellectual pursuits. At university you studied physics, philosophy, mathematical logic, and neuroscience, and I am sure that this is not an exhaustive list. You are perhaps the first among an elite group that I have had the pleasure of speaking with to personify this need to be a polymath, having covered so many different topics. I am sure that does not mean that you do not require collaboration with people in these and many other areas, but I wonder, how did it occur to you, and why did you elect to pursue so much breadth in addition to depth in your studies? This seems not to be the norm among thinkers who operate in a similar space.

Nick Bostrom: I was following my instinct as to what I thought was interesting and potentially important from an intellectual point of view, and what I thought were interesting and important insights, ideas, and techniques in a number of different academic fields. I would say that quite a few of my colleagues here at the research institute also have multi-disciplinary backgrounds, having studied more than one subject at university or having done a master’s in one field and then switched to a different field for their Ph.D.

High: In 2004, you were among the founders of the Institute for Ethics and Emerging Technologies. Not only were you studying the opportunities represented in the various areas that we just described, but you were also thinking about the ethical aspects of developing technology. How did the idea of ethics become something relevant to you?

To read the full article, please visit Forbes

by Peter High, published on Forbes

6-20-2016

Artificial intelligence (AI) is a white-hot topic today, as judged by the amount of capital being put behind it, the number of smart people who are choosing it as an area of emphasis, and the number of leading technology companies that are making AI the central nervous system of their strategic plans. Witness Google’s CEO’s plan to put AI “everywhere.”

There are some estimates that five percent of all AI talent within the private sector is currently employed by Google. Perhaps no one among that rich talent pool has as deep a set of perspectives as Geoff Hinton. He has been involved in AI research since the early 1970s, which means he got involved before the field was really defined. He also did so before the confluence of talent, capital, bandwidth, and unstructured data in need of structuring came together to put AI at the center of the innovation roadmap in Silicon Valley and beyond.

A British-born academic, Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. As he mentions in my extended interview with him, we are on the cusp of some transformative innovation in the field of AI, and as someone who splits his time between Google and his post at the University of Toronto, he personifies the value found at the intersection of AI theory and practice.

(To listen to an unabridged audio version of this interview, please click this link. This is the eighth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, Oren Etzioni of the Allen Institute for Artificial Intelligence, and Neil Jacobstein of Singularity University.)

Peter High: Your bio at the University of Toronto notes that your aim is to discover a learning procedure that is efficient at finding complex structure in large, high dimensional data sets, and to show that this is how the brain learns to see. I wonder if you can talk a little bit about that and about what you are working on day to day as the Emeritus University Professor at the University of Toronto as well as a Distinguished Researcher at Google today.

Geoffrey Hinton: The brain is clearly very good at taking very high dimensional data, like the information that comes along the optic nerve, which is a million inputs changing quite fast with time, and making sense of it. It makes a lot of sense of it in that when we get visual input, we typically get the correct interpretation. We do not see an elephant when there is really a dog there. Occasionally in the psychology lab things go wrong, but basically we are very good at figuring out what out there in the world gave rise to this very high dimensional input. After we have done a lot of learning, we get it right more or less every time. That is a very impressive ability that computers do not have. We are getting closer. But it is very different from, for example, what goes on in statistics, where you have low dimensional data and not much training data, and you try a small model that does not have too many parameters.

The thing that fascinates me about the brain is that it has hugely more parameters than it has training data. So it is very unlike the neural nets that are currently so successful. What is happening at present is that we have neural nets with millions of weights, and we train them on millions of training examples, and they do very well. Sometimes it is billions of weights and billions of examples. But we typically do not have hugely more parameters than training data, and that is not the case with the brain. The brain has about ten thousand parameters for every second of experience. We do not really have much experience with how systems like that work, or how to make them so good at finding structure in data.
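(A rough back-of-the-envelope check on that figure, my arithmetic rather than Hinton’s: common estimates put the human brain at somewhere between 10^13 and 10^14 synapses, and a human lifetime is on the order of 2 x 10^9 seconds, which works out to roughly five thousand to fifty thousand adjustable parameters for every second of experience, the same order of magnitude as the ten thousand he cites.)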

High: Where would you say we are on the continuum of developing true artificial intelligence?

To read the full article, please visit Forbes

by Peter High, published on Forbes

6-13-2016

Singularity University is part business incubator and part think tank founded by Peter Diamandis and Ray Kurzweil in 2008 in the NASA Research Park in Silicon Valley. Among the topics that have risen in prominence in the curriculum of the University is artificial intelligence.

Neil Jacobstein is a former President of Singularity University, and he currently chairs the Artificial Intelligence and Robotics Track there, on the NASA Research Park campus in Mountain View, California. We recently spoke, and the conversation covered his thoughts on how AI can be used to augment current human capability, strategies technology executives should use when thinking about AI, the role the government should play in helping mitigate the potential job losses from AI, his perspectives on the dangers of artificial intelligence that have been voiced by major thought leaders, advice on how to train workers to prepare for the coming wave of AI, and a variety of other topics.

(To listen to an unabridged audio version of this interview, please click this link. This is the seventh interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, and Oren Etzioni of the Allen Institute for Artificial Intelligence.)

Peter High: Let’s begin with your role at Singularity University, and perhaps a little bit about the University itself. You were president of the University from 2010-2011 and are currently co-chair of the Artificial Intelligence and Robotics track. Can you describe the University, as well as your role in it?

Neil Jacobstein: Singularity University started on the NASA Research Park campus around 2008. We had our first graduate summer program in 2009. The University’s purpose is to help leaders understand and utilize the business, technical, and ethical implications of exponential technologies, which are technologies that increase in price-performance every eighteen to twenty-four months. Examples include artificial intelligence, robotics, synthetic biology, nanotechnology, and some other technologies that depend on those. Biology, for example, has become an information science and is now growing in capability on an exponential curve.

We bring in leaders from around the world to attend our executive programs, which are given every couple of months or so. Usually there are about eighty to one hundred people in those executive programs, and they last about five days. We also have a nine-week summer program that we have conducted every summer since 2009, and typically about eighty people attend. Oftentimes, they have won their seat in that program by winning a contest in their country. I am proud that we now have slightly more women in the program than we have men—we have a good ratio now, finally. We have people from forty-plus countries represented, and they are absolutely top students, super competitive students. They cannot buy their way in. The program is sponsored by Google and other companies, and in other ways. They live on the NASA Research Park campus here at Moffett Field, and they are first exposed to a few weeks of exponential technologies, including AI, robotics, synthetic biology, nanotechnology, and other technologies that depend on those, such as energy, manufacturing, 3D printing, and medicine. They work with each other on building next-generation businesses and also non-profit entities. They form teams and apply principles that include crowdsourcing and the ability to build and scale entities rapidly, drawing on the principles of exponential organizations. They then address global grand challenges like climate change, education, poverty, global health, energy, and security. Those kinds of challenges really require the scale that exponential technologies can provide. The students in their teams—there might be up to twenty different teams—are coached by a wide variety of faculty and staff during the summer program. They may then go on to join an incubator program that we have on campus if they meet certain thresholds, and we have had several successful businesses spin out every year. We are proud of the program and think we are getting better at it every year.

High: In the book Exponential Organizations by Salim Ismail, it is noted that AI and algorithms could be used to mitigate and compensate for heuristics in human cognition, such as anchoring bias, availability bias, confirmation bias, cost bias, and others like that. As an expert in AI, could you describe that insight, and also the way in which AI, and algorithms more generally speaking, can mitigate those issues?

To read the full article, please visit Forbes

by Peter High, published on Forbes

6-6-2016

Over the past decade and a half, Microsoft co-founder Paul Allen has created three “Allen Institutes,” for Brain Science, Cell Science, and Artificial Intelligence. The Institute for AI was founded in 2013, and its mission is “to contribute to humanity through high-impact AI research and engineering.”

In early 2014, Allen tapped serial entrepreneur Oren Etzioni as chief executive officer. Etzioni has a PhD in computer science, has been a professor at the University of Washington, and has founded or co-founded a number of companies, including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013).

The goal of Etzioni’s research is to solve fundamental problems in AI, particularly the automatic learning of knowledge from text. In our far-ranging conversation, we discuss the specifics of his goal, the pace of innovation in AI more generally, safety concerns and how they should be dealt with, the government’s role in mitigating the risks of AI, and a variety of other topics.

(To listen to an unabridged audio version of this interview, please click this link. This is the sixth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, and Greg Brockman of OpenAI.)

Peter High: You are the CEO of the Allen Institute for Artificial Intelligence whose mission is to contribute to humanity through high impact AI research and engineering. Can you provide your definition for high impact AI research and engineering?

Oren Etzioni: It starts with Paul Allen, who is a visionary and scientific philanthropist. He won the Carnegie Medal for Philanthropy last year. He has been passionate for decades about AI research and the potential of AI to benefit humanity.

In January 2014, we were launched as a nonprofit research institute in Seattle. We are now fifty people – about half PhDs and half engineers – and the question that we ask ourselves when we get up in the morning is “What can we do using the techniques?” Ultimately, to me, the computer is just a big pencil. What can we sketch using this pencil that makes a positive difference to society, and advances the state of the art, hopefully in an out-sized way? We are small compared to the teams that Google and Facebook and others have, but we want to punch above our weight class.

One of the things we have noticed as we have developed expertise in natural language processing and machine learning is that there are millions of scientific papers published every year – nobody can keep up. Google Scholar came on the scene about a decade ago and indexed all these papers, but there is too much information: you do a simple query and experience overload. What we need are techniques to help people cut through the clutter and home in on key results. The approach we have taken is to use AI methods to filter irrelevant results—to extract key information like the topic of the paper, the figures that are involved, the citations that are influential, and so on—in order to help people find the papers that they need. We have launched a free service on the internet called SemanticScholar.org, which currently indexes several million computer science papers. Our hypothesis is that if we can make scientists better at their jobs, then we can help solve some of humanity’s thorniest problems. We are starting with computer scientists, but we want to expand to medical researchers and ultimately doctors. Even a specialist does not have the latest information about your condition; they just cannot keep up. They are diagnosing and treating you based on, at best, incomplete and potentially erroneous information. We want to help change that.

High: If you were to think about the next decade, what are some of the promising outcomes that you foresee from the developments that are coming down the pipeline with regard to AI, generally speaking?

Etzioni: AI is becoming pervasive in its use in technology and in society. Marc Andreessen famously said that software is eating the world. One might riff on that and say that AI is eating software, in the sense that wherever there is a software solution, there is the potential for an AI solution.

Cars are a great example: They have become complex computers. There are more than one hundred fifty computers in the average car. There is the potential now to have a car drive itself using AI. The reason that is exciting is that it could reduce the number of accidents we have on the roads today due to distracted human drivers or humans driving under the influence. Our highways and our roads are underutilized because of the allowances we have to make for human drivers. We could pack the roads a lot more densely and reduce traffic congestion and greenhouse gases and all those things if traffic were more efficient, so that is a great example. But, anywhere you look in society I see the potential for AI to help.

High:  I read a paper of yours from a number of months back in which you said, “The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy.” I wonder if you could unpack that insight a little bit.

To read the full article, please visit Forbes

by Peter High, published on Forbes

4-18-2016

Greg Brockman is co-founder and CTO at OpenAI, a non-profit artificial intelligence research company whose co-founders also include Elon Musk, Y Combinator’s Sam Altman, and other Silicon Valley luminaries. OpenAI was founded to ensure that artificial intelligence benefits humanity as a whole, which has defined its non-profit status and long-term perspective. When I asked Brockman who influenced him, he listed Alan Kay of Xerox PARC among others, and highlighted that he hopes to foster an idea lab comparable to PARC. We also discussed how the organization’s bold mission and unique structure act as a magnet for world-class talent, the trend of open-sourcing AI development, how AI may impact jobs and society more broadly, and the promise versus the peril of AI, among other topics.

Prior to OpenAI, Greg was the CTO of Stripe, a FinTech company that builds tools enabling web commerce. Greg was the fourth employee at Stripe, which now has a valuation of over $5 billion.

(To listen to an unabridged audio version of this interview, please click this link. This is the fifth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, and Antoine Blondeau of Sentient Technologies. To read future articles in the series, including with Neil Jacobstein of Singularity University, Oren Etzioni of the Allen Institute for Artificial Intelligence, and Nick Bostrom of Oxford University, please click the “Follow” link above.)

Peter High: The stated goal of OpenAI is to advance digital intelligence in a way that is likely to benefit humanity as a whole, unconstrained by a need to generate financial return. What advances in digital intelligence are most likely to benefit humanity as a whole, in your mind?

Greg Brockman: I think there is something special going on right now in the field of artificial intelligence (AI) where, for the first time, systems that are based on deep learning and statistical methods suddenly start to have extremely good performance, and you are able to start building computer vision systems, for example, that can classify objects in a certain sense much better than humans can. Rather than having humans spend time understanding “how do I write down the code to specifically solve this problem?”, you build this general architecture, and the architecture learns from the data. We are getting better at writing these algorithms that are able to learn, to understand the world, and operate within it. At the same time, I do not think the world has changed in a significant way as a result yet.  It has only been a short period of time that these algorithms started to be best in class – it dates back to a 2012 paper that showed that if you scale up this neural network architecture in the right way, the system starts to perform significantly better in a wide variety of domains. I think we are going to see these techniques mature and start to be baked into a wide variety of products, both at big companies and at new companies, and in a variety of applications.

We are already starting to get a sense of this if you think about self-driving cars. They are basically here. There is a lot of engineering left to do and lot of hard work and a lot of societal questions to answer, but it is a just a question of when; it is not a question of if. I think that is the tip of the iceberg. Robotics, I think, is poised to start working. Imagine you have a robot in your house that can clean things. A couple of years ago that was not something on the horizon. Now it is not even extrapolation anymore to say that it is going to start having an impact.

To read the full article, please visit Forbes