Tag Archives: OK Google

Power sharing?

Just to keep you up to speed, everything is on schedule or ahead of schedule.

In the race toward superintelligence or ubiquitous AI, that is. If you read this blog or are paying attention at any level, then you know the fundamentals of AI. But for those of you who don’t, here are the basics. Artificial intelligence comes from processing and analyzing data. Big data. Programmers feed a gazillion linked-up computers (CPUs) algorithms that can sort this data and make predictions. This process is what is at work when the Google search engine suggests what you are about to key into the search field. These are called predictive algorithms. If you want to look at pictures of cats, then someone has to task the CPUs with learning what a cat looks like as opposed to a hamster, then scour the Internet for pictures of cats and deliver them to your search. The process of teaching the machine what a cat looks like is called machine learning.

There is also an algorithm that watches your online behavior. That’s why, after checking out sunglasses online, you start to see a plethora of ads for sunglasses on just about every page you visit. Similar algorithms can predict where you will drive today and when you are likely to return home. There is AI that knows your exercise habits and a ton of other physiological data about you, especially when you’re sharing your Fitbit or other wearable data with the Cloud. Insurance companies are extremely interested in this data, so that they can give discounts to “healthy” people and penalize the not-so-healthy. Someday they might also monitor other “behaviors” that they deem to be not in your best interests (or theirs). Someday, especially if we have a “single-payer” health care system (aka government healthcare), this data may be required before you are insured.

Before we go too far into the dark side (which is vast and deep), AI can also search all the cells in your body, identify which ones are dangerous, and target them for elimination.
AI can analyze a whole host of things that humans could overlook. It can put together predictions that could save your life.
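To make the “predictive algorithm” idea concrete, here is a toy sketch. All it does is rank your past queries by how often they start with what you have typed so far. Google’s real system is vastly more sophisticated, so treat this as an illustration of the principle, not the implementation; the sample queries are invented for the example.

```python
# Toy sketch of a predictive suggestion engine: rank past queries
# that begin with the prefix the user has typed so far.
from collections import Counter

past_queries = [
    "cat pictures", "cat videos", "cat pictures", "car insurance",
    "cat food", "car rental", "cat pictures",
]

def suggest(prefix, history, n=3):
    """Return the n most frequent past queries matching the prefix."""
    counts = Counter(q for q in history if q.startswith(prefix))
    return [query for query, _ in counts.most_common(n)]

print(suggest("cat", past_queries))
# Most frequent first: ['cat pictures', 'cat videos', 'cat food']
```

Typing “cat” surfaces the queries you searched most often; a real engine layers on context, popularity across all users, and machine-learned ranking, but the prediction-from-history idea is the same.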

Google’s chips, stacked up and ready to go. Photo from WIRED.

Now, with all that AI background behind us, this past week something called Google I/O went down. WIRED calls it Google’s annual State-of-the-Union address. There, Sundar Pichai unveiled something called TPU 2.0, or Cloud TPU. This is something of a breakthrough, because, in the past, the AI process that I just described, even though lightning fast and almost transparent, required all those CPUs, a ton of space (server farms), and gobs of electricity. Now, Google (and others) are packing this processing into chips. These are proprietary to Google. According to WIRED,

“This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics…

…says Chris Nicholson, CEO and founder of a deep learning startup called Skymind. “Google is trying to do something better than Amazon—and I hope it really is better. That will mean the whole market will start moving faster.”

Funny, I was just thinking that the market is not moving fast enough. I can hardly wait until we have a Skymind.

“Along those lines, Google has already said that it will offer free access to researchers willing to share their research with the world at large. That’s good for the world’s AI researchers. And it’s good for Google.”

Is it good for us?

This sets up another discussion (in 3 weeks) about a rather absurd opinion piece in WIRED about why we should have an AI as President. These things start out as absurd, but sometimes don’t stay that way.


But nobody knows what better is.

South by Southwest, otherwise known as SXSW, calls itself a film and music festival and interactive media conference. It’s held every spring in Austin, Texas. Other than maybe the Las Vegas Consumer Electronics Show or San Diego’s Comic-Con, I can’t think of many conferences that generate as much buzz as SXSW. This year is no different. I will have blog fodder for weeks. Though I can’t speak to the film or music side, I’m sure they were scintillating. Under the category of interactive, most of the buzz is about technology in general, as tech gurus and futurists are always in attendance, along with celebs who align themselves with the future.

Once again at SXSW, Ray Kurzweil was on stage. Kurzweil is probably the one guy I quote most throughout this blog. So here we go again. Two tech sites caught my eye this week, reporting on Kurzweil’s latest prediction, which moves up the date of the Singularity from 2045 to 2029; that’s 12 years away. Since we are enmeshed in the world of exponentially accelerating technology, I have encouraged my students to start wrapping their heads around the idea of exponential growth. In our most recent project, it was a struggle just to embrace the idea of how, in only seven years, we could see transformational change. If Kurzweil is right about his latest prognostication, then 12 years could be a real stunner. In case you are visiting this blog for the first time, the Singularity to which Kurzweil refers is acknowledged as the point at which computer intelligence exceeds that of human intelligence; it will know more, anticipate more, and analyze more than any human is capable of. Nick Bostrom calls it the last invention we will ever need to make. We’ve already seen this to some extent with IBM’s Watson beating the pants off a couple of Jeopardy masters and Google’s DeepMind handily beating a Go genius at a game that most thought too complex for a computer to handle. Some refer to this “computer” as a superintelligence and warn that we had better be designing the braking mechanism in tandem with the engine, or this smarter-than-us computer may outsmart us in unfortunate ways.

In an article in Scientific American, Northwestern University psychology professor Paul Reber says we are bombarded each day with about 2.5 exabytes of data and that the human brain can only store an estimated 2.5 petabytes (a million gigabytes). Of course, the bombardment will continue to increase. Another voice that emerges in this discussion is Rob High, IBM’s vice president and chief technology officer. According to the tech blog Futurism, High was part of a panel discussion at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Conference 2017. High said,

“…we have a very desperate need for cognitive computing…The information being produced is far surpassing our ability to consume and make use of…”

On the surface, this seems like a compelling argument for faster, more pervasive computing. But since it is my mission to question otherwise compelling arguments, I want to ask whether we actually need to process 2.5 exabytes of information. It would appear that our existing technology has already turned on the firehose of data (did we give it permission?), and now it’s up to us to find a way to drink from it. To me, it sounds like we need a regulator, not a bigger gullet. I have observed that the traditional argument in favor of more, better, faster often comes wrapped in the package of help for humankind.
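A quick back-of-the-envelope calculation puts those two figures in perspective. Taking the numbers at face value (an exabyte is 1,000 petabytes), the daily bombardment amounts to roughly a thousand brainfuls:

```python
# Comparing the reported daily data bombardment (2.5 exabytes)
# with the estimated storage capacity of one human brain (2.5 petabytes).
EXABYTE = 10**18   # bytes
PETABYTE = 10**15  # bytes

daily_data = 2.5 * EXABYTE
brain_capacity = 2.5 * PETABYTE

brains_per_day = daily_data / brain_capacity
print(brains_per_day)  # 1000.0 -- about a thousand brains' worth, every day
```

Which is exactly why “drink from the firehose” is the wrong goal; no gullet is a thousand brains wide.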

Rob High, again from the Futurism article, says,

“‘If you’re a doctor and you’re trying to figure out the best way to treat your patient, you don’t have the time to go read the latest literature and apply that knowledge to that decision,’ High explained. ‘In any scenario, we can’t possibly find and remember everything.’ This is all good news, according to High. We need AI systems that can assist us in what we do, particularly in processing all the information we are exposed to on a regular basis — data that’s bound to even grow exponentially in the next couple of years.”

From another Futurism article, Kurzweil uses similar logic:

“We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”

The other rationale that almost always becomes coupled with expanding our minds is that we will be “better.” No one, however, defines what better is. You could be a better jerk. You could be a better rapist or terrorist or megalomaniac. What are we missing, exactly, that we have to be smarter, or that Bach or Mozart are suddenly inferior? Is our quality of life that impoverished? And for those who are impoverished, how does this help them? And what about making us smarter? Smarter at what?

But not all is lost. On a more positive note, Futurism, in a third article (they were busy this week), reports,

“The K&L Gates Endowment for Ethics and Computational Technologies seeks to introduce the thoughtful discussion on the use of AI in society. It is being established through funding worth $10 million from K&L Gates, one of the United States’ largest law firms, and the money will be used to hire new faculty chairs as well as support three new doctoral students.”

Though I’m not sure we can consider this a regulator; it’s more like something to lessen the pain of swallowing.

Finally (for this week), back to Rob High,

“Smartphones are just the tip of the iceberg,” High said. “Human intelligence has its limitations and artificial intelligence is going to evolve in a lot of ways that won’t be similar to human intelligence. But, I think they will work best in the presence of humans.”

So, I’m more concerned with when artificial intelligence is not working at its best.


Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, the most rapid advancement follows.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
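Kurzweil’s arithmetic is easy to verify. Counting the doublings from 1 percent to 100 percent:

```python
# At one doubling per year, how many years does it take to go
# from 1 percent of the genome sequenced to the full 100 percent?
percent, years = 1.0, 0
while percent < 100:
    percent *= 2  # one doubling per year
    years += 1
print(years, percent)  # 7 128.0 -- seven doublings overshoots 100 percent
```

Seven doublings takes 1 percent to 128 percent, which is why the project that looked like a 700-year failure at its halfway mark finished on schedule.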

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask the questions that relate to supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more than these, and we shouldn’t forget things like precise genetic engineering techniques such as CRISPR/Cas9 Gene Editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other and the results that no one anticipated. One such mutation would be when autonomous weapons systems (AI + robotics + machine learning) converge with an aggregation of social media activity to predict, isolate and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.


1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Now I know that Kurzweil is right.


In a previous blog entitled “Why Kurzweil is probably right,” I made this statement,

“Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.”

That blog was talking about how we are developing AI systems at a rapid pace. I quoted a WIRED magazine article by David Pierce that was previewing consumer AIs already in the marketplace and some of the advancements on the way. Pierce said that a personal agent is,

“…only fully useful when it’s everywhere when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”

Then, I made my usual cautionary comment about how such technologies will change us. And they will. So, if you follow this blog, you know that I throw cold water onto technological promises as a matter of course. I do this because I believe that someone has to.

Right now I’m preparing my collaborative design studio course. We’re going to be focusing on AR and VR, but since convergence is an undeniable influence on our techno-social future, we will have to keep AI, human augmentation, the Internet of Things, and a host of other emerging technologies on the desktop as well. In researching the background for this class, I read three articles by Peter Diamandis on the Singularity Hub website. I’ve written about Peter before, as well. He’s brilliant. He’s also a cheerleader for the Singularity. That being said, these articles, one on the Internet of Everything (IoE/IoT), one on Artificial Intelligence (AI), and one on Augmented and Virtual Reality (AR/VR), are full of promises. Most of what we thought of as science fiction even a couple of years ago is now happening with such speed that Diamandis and his cohorts believe it will be commonplace in only three years.

If that isn’t enough for us to sit up and take notice, then I am reminded of an article from the Silicon Valley Business Journal, another interview with Ray Kurzweil. Kurzweil, of course, has pretty much convinced us all by now that the Law of Accelerating Returns is no longer hyperbole. If anyone thought that it was only hype, sheer observation should have brought them to their senses. In this article, Kurzweil gives an excellent illustration of how exponential growth actually plays out, no longer as a theory but as demonstrable practice.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago when the genome project was completed.”

When you combine that with the nearly exponential chaos of hundreds of other converging technologies, the changes to our world and behavior are indeed coming at us like a bullet train. Ask any Indy car driver: when things are happening that fast, you have to be paying attention.
But when the input is like a firehose and the motivations are unknown, how on earth do we do that?

Personally, I see this as a calling for design thinkers worldwide. Those in the profession, schooled in the ways of design thinking, have been espousing our essential worth to the realm of wicked problems for some time now. Well, problems don’t get more wicked than this.

Maybe we can design an AI that could keep us from doing stupid things with technologies that we can make but cannot yet comprehend the impact of.


Big-Data Algorithms. Don’t worry. Be happy.


It’s easier for us to let the data decide for us. At least that is the idea behind global digital design agency Huge. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, what color trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity about something called decision fatigue. What is that? In a nutshell, it says that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, according to the theory, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from Fast Company, which wrote that,

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”

Supposedly this frees “us up for the ones we really care about.”
My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one that we really care about?

“Google Now is a digital assistant that not only responds to a user’s requests and questions, but predicts wants and needs based on search history. Pulling flight information from emails, meeting times from calendars and providing recommendations of where to eat and what to do based on past preferences and current location, the user simply has to open the app for their information to compile.”

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure, or a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.




Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond merely preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance to next-generation robotics and even VR, that could be racing forward without someone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Nine years from now.


Today I’m on my soapbox, again, as an advocate of design thinking, of which design fiction is part of the toolbox.

In 2014, the Pew Research Center published a report on Digital Life in 2025. Therein, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025, (9 years from now), but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only blue-collar, robotic jobs will vanish, but also white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think that some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000 gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations that are raking in the bucks by employing the AIs, the robots, and the digital agents, but who don’t have anyone on the payroll anymore. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services that these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capabilities of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being written by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technology surprise, but few schools have ever entertained the notion and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid to serve business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do to survive ourselves.




Sitting around with my robo-muse and collecting a check.


Writing a weekly blog can be a daunting task, especially amid teaching, research, and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is entitled “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” asks Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I. — universal basic income — and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right, “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car or buy gas or insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue about what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”

Often, the people that float ideas like this paint them as utopia, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent under-class forever dependent on those who have. Scary.


Your robot buddy is almost ready.


There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to The Law of Accelerating Returns. So, as we watch this video, if we want to be hyper-critical, we can see that the bot still needs some shepherding, tentatively handles the bumpy terrain, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better, more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.


Last year I devoted several blogs to the advancement of AI, about the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo that use the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, but the government probably won’t give you a lot of details on that. All of these billion-dollar companies have a vested interest in machine learning and predictive algorithms. Not the least of which is Google. Google already uses Voice Search (“OK Google”) to enable type-free searching. And its AI software library TensorFlow has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these folks could be ingredient three.

I’m going to try and close the loop on this, at least for today. Guess who bought Boston Dynamics last year? That would be corporation X, formerly known as Google X.

