Tag Archives: robotics

Disruption. Part 2.

 

Last week I discussed the idea of technological disruption. Essentially, these are innovations that make fundamental changes in the way we work or live. In turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century were at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice:

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human. Perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else we are working on facial recognition or augmented reality or virtual reality or bio-engineering, medical procedures, autonomous vehicles or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking shape. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Disruption. Part 1

 

We often associate the term disruption with a snag in our phone, internet or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. However, we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies, substantially eliminating the need for the calculator, the dictaphone, landlines, the GPS box that you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps within the smartphone platform, there are hundreds if not thousands of other “services” that now do work that we had previously done by other means. But hold on to your hat. Technological disruption is just getting started. For the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in artificial intelligence (AI) and machine learning, ubiquitous augmented reality (AR) and virtual reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, will have potentially cataclysmic effects on business. Widespread adoption of blockchain systems that enable digital money would eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will just continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.

Do you have any you would like to add?

1 http://www.investopedia.com/terms/d/disruptive-technology.asp#ixzz4ZKwSDIbm

2 http://www.investopedia.com/terms/c/creativedestruction.asp

3 http://www.intelligenthq.com/technology/12-disruptive-technologies/

See also: Disruptive technologies: Catching the wave, Journal of Product Innovation Management, Volume 13, Issue 1, 1996, Pages 75-76, ISSN 0737-6782, http://dx.doi.org/10.1016/0737-6782(96)81091-5.
(http://www.sciencedirect.com/science/article/pii/0737678296810915)


Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where they might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask if there is any meaningful review. What are the ramifications for error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems, and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from a crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industry Association (NDIA) conference last year. The list of agencies involved in that conference and the rest of the documents cited reads like an alphabet soup of military and defense organizations which most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling considerations which were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who thinks that anything and everything that the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted through their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent in anticipating the repercussions of its creations as it is in getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students needed to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered the old paradigm of getting a job at Apple or Google, or (fill in the blank) as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as a maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise recently appeared that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists we are feeding a pipeline of more apps with minimal viability, with seemingly no regard for the consequences on social systems, and (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheavals, feeding the world, natural disasters, etc. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems there is a greater number who blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

 

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology and nanotechnology are self-accelerating, that they differ from older, “stable, predictable and reliable,” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is, because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and the current evidence supports exponential acceleration, will the result be, as Brand suggests, unpredictable?

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You can ingest a pill that tells your doctor, or you, in case you forgot, that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality is affordable, amazingly real and completely user-friendly.

These are simple scenarios because they will likely have aspects that make them even more impressive, more accessible and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them come the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify.
• It monitors whether you’ve taken your medication and issues a fine if you don’t, even if you don’t want your medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also be your outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, or it gets to the point where it is addicting.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies – Computers That Make Faster Computers, For Example – May Have a Destabilizing Effect on Society.” TIME, 2000.

Enter the flaw.

 

I promised a drone update this week, but by now, it is probably already old news. It is a safe bet there are probably a few thousand more drones than last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.

And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, but the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match nicely with his curve. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age and eclectic mixes that are difficult to tell from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robotics as clunky mechanized brutes. AI can create. Even though it’s studying an archive of man-made creations, the resulting work is unique.

First it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
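The learn-then-generate pipeline described above can be sketched, very loosely, in code. What follows is a toy Python illustration of the first two ideas (learning from a corpus, then generating a new structure from what was learned) using a simple Markov chain over chord symbols. This is an illustration only, not Kulitta’s actual implementation, and the corpus and chord names are invented for the example.

```python
import random

# Toy "corpus" of existing chord progressions the system learns from.
CORPUS = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

def learn_transitions(corpus):
    """Step 1: learn which chord tends to follow which in the corpus."""
    table = {}
    for piece in corpus:
        for a, b in zip(piece, piece[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start="C", length=8, seed=None):
    """Steps 2-4 (crudely): build a structure and populate it chord by chord."""
    rng = random.Random(seed)
    piece = [start]
    for _ in range(length - 1):
        choices = table.get(piece[-1]) or [start]
        piece.append(rng.choice(choices))
    return piece

table = learn_transitions(CORPUS)
print(generate(table, seed=42))  # a progression the corpus never contained verbatim
```

The real system works with far richer musical structures than a flat chord chain, but the principle, learning statistical patterns from human work and then sampling something new from them, is the same.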

The creator of Kulitta, Donya Quick, says that this will not put composers out of a job; it will help them do their job better. She doesn’t say how, exactly.

If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?

The article continues:

Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.

The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”

Sadly, I think not. The sweat and blood that composers and artists pour into their compositions could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once the AI learns the characters, behaviors and personalities of the characters in The Lightstream Chronicles, it could create new episodes without me. Taking characters and settings that already exist as CG constructs, it’s not a stretch to think it will be able to generate the wireframes, render the images, and lay out the panels.

Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?

When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?

There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.


The Robo-Apocalypse. Part 2.

 

Last week I talked about how the South Koreans have developed a 50-caliber-toting, nearly autonomous weapon system and have sold a few dozen around the world. This week I feel obligated to finish up on my promise of the drone with a pistol. I discovered this from a WIRED article. It was a little tongue-in-cheek piece that analyzed a YouTube video and concluded that the pistol-packing drone is probably real. I can’t think of anyone who doesn’t believe that this is a really bad idea, including the author of the piece. Nevertheless, if we were to make a list of unintended consequences of DIY drone technology (just some simple brainstorming), the list, after a few minutes, would be a long one.

This week FastCo reported that NASA held a little get-together with about 1,000 invited guests from the drone industry to talk about a plan to manage the traffic when, as the agency believes, “every home will have a drone, and every home will serve as an airport at some point in the future.” NASA’s plan takes things slowly. Still, the agency predicts that we will be able to get our packages from Amazon and borrow a cup of sugar from Aunt Gladys down the street, even in populated areas, by 2019.

Someone taking action is good news as we work to fix another poorly conceived technology that quickly went rogue. Unfortunately, it does nothing about the guy who wants to shoot down the Amazon drone for sport (or anyone/anything else for that matter).

On the topic of bad ideas, this week The Future Of Life Institute, a research organization out of Boston issued an open letter warning the world that autonomous weapons powered by artificial intelligence (AI) were imminent. The reasonable concern here is that a computer will do the kill-or-not-kill, bomb-or-not-bomb thinking, without the human fail-safe. Here’s an excerpt from the letter:

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [Emphasis mine.]

The letter is short. You should read it. For once we have an example of those smart people I alluded to last week, the ones with compassion and vision. For virtually every “promising” new technology (from the seemingly good to the undeniably dangerous), we need people who can foresee the unintended consequences of one-sided promises. Designers, scientists, and engineers are prime candidates to look into the future and wave these red flags. Then the rest of the world needs to pay attention.

Once again, however, the technology is here, and whether it is legal or illegal, banned or not banned, the cat is out of the bag. It is kind of like a nuclear explosion. Some things you just can’t take back.


The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this because the evidence is out there that robots will continue to get smarter and smarter, but the human condition being what it is, we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called The Super aEgis II, a 50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis is DoDAAM. Maybe that is short for do damage. The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots that were smarter and more powerful than we are. He figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…”1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who purchased the 30 units thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And there is my problem with all of this. If the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.

 

1. Donald Fagen, “I.G.Y.” from the album The Nightfly, 1982.

Promises. Promises.

Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of the scholarly papers and presentations that are a matter of course as a professor. I have to weigh what goes into the blog because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects that is of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.

A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. They try very hard to be soft-spoken and low-key, but that belies their enormous power.

Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology can also have a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it. Google strikes me as a potential contributor in this latter arena. A recent article from The Guardian entitled “Where is Google Taking Us?” looks at some of their most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 million in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,

“How do we use all these resources… and have a much more positive impact on the world?”

There's nothing wrong with that question. It's the kind of question you would want a billionaire to ask. My question is, "What does positive mean, and who decides what is and what isn't?" In this case, it's Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how the company stays focused on what to do next.

“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”

The statement sounds like savvy marketing, but he is also talking about the most innate aspects of our everyday behavior. So that I don't turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google Self-Driving Cars. For the whole idea to work (that is, for the car to react as a human would, only better), the car has to think.

“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”

Mapping everything down to the centimeter.
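Dolgov's description of aligning a stored map against a live sensor map is, at its heart, an estimation problem. Here is a deliberately tiny one-dimensional sketch of that idea; the landmark positions and numbers are invented for illustration and have nothing to do with Google's actual system.

```python
# Toy illustration (not Google's system): estimating a vehicle's position
# by aligning landmarks sensed on the road against the same landmarks in
# a stored prior map. All numbers are made up for the example.

def estimate_offset(stored, sensed):
    """Least-squares estimate of the 1-D offset between two
    corresponding sets of landmark positions (in metres)."""
    assert len(stored) == len(sensed)
    diffs = [m - s for m, s in zip(stored, sensed)]
    return sum(diffs) / len(diffs)  # mean difference minimizes squared error

# Landmark positions from the prior map vs. the same landmarks as sensed.
stored_map = [12.50, 37.80, 64.10]
sensor_map = [10.49, 35.78, 62.11]

offset = estimate_offset(stored_map, sensor_map)
print(round(offset, 2))  # -> 2.01 (the car is about two metres off the map frame)
```

The real system does this in many dimensions, continuously, and probabilistically, which is where the "huge numbers of probabilistic calculations every second" come in; the sketch only shows the align-two-maps intuition.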

It's the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about: the kind of technology that plugs in to a whole host of different applications, from predicting your behavior to assessing your abilities. If we don't want to have to remember to check the oil, there is a light that reminds us. If we don't want to have to remember somebody's name, there is a facial recognition algorithm to remember it for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I'm out (as well as what I am out of).

Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.

 


On better humans and bad bots.

News of breaking future technologies, the stuff at the crux of my research, accumulates daily, and this week is no different. Of note, Zoltan Istvan is (another) 2016 US presidential candidate, but this time for the Transhumanist Party. Transhumanism "(abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities". 1 For those of you who didn't know. Living forever is job one for the "movement." Mr. Istvan is not likely to be in the debates, but you can follow him and the rest of H+ at humanity+.org. I'll reserve comment on this.

On another front, for those who think that once we get this human thing down right, technology will save us and mischief will cease, there is this item from WIRED magazine UK. A couple of researchers at Google (that's OK, you can trust them) have "created an artificial intelligence that developed its responses based on transcripts from an IT helpdesk chat service and a database of movie scripts." This AI is called a chatbot. Chatbots are computer programs designed to talk to you. You can try one out here.

According to WIRED’s James Temperton,

“The [Google] system used a neural network — a collection of machines arranged to mimic a human brain — to create its unnervingly human responses. Such networks use a huge amount of computing power and Google’s latest research shows their potential for improving how machines hold conversations.”
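Google's system is a neural network, but the underlying idea of learning replies from transcripts can be shown in miniature with something far cruder. The sketch below is a toy retrieval chatbot: every prompt, reply, and match is invented for the example, and it bears no resemblance to Google's actual model beyond the in/out pattern.

```python
# Toy retrieval chatbot: answer by finding the closest recorded prompt
# in a transcript and replaying its reply. (Illustrative only; Google's
# system generates responses with a neural network rather than lookup.)

import difflib

# A tiny stand-in for a helpdesk transcript: prompt -> recorded reply.
transcript = {
    "my browser keeps crashing": "Try clearing the cache and restarting.",
    "i forgot my password": "Use the reset link on the login page.",
    "what do you like to talk about": "Nothing.",
}

def reply(utterance):
    """Return the recorded reply for the closest-matching known prompt."""
    match = difflib.get_close_matches(utterance.lower(), transcript,
                                      n=1, cutoff=0.0)
    return transcript[match[0]]

print(reply("I forgot my password!"))  # -> Use the reset link on the login page.
```

Feed such a system movie scripts instead of helpdesk logs and, as the exchange below shows, you get movie-script attitude back, which is the whole "junk in, junk out" point.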

Apparently, it was the addition of the movie scripts that made the bot more contentious. Consider this exchange on morality:

“Human: I really like our discussion on morality and ethics …

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.”

Fun with programming. All of this points to the old adage, "Garbage in, garbage out." In The Lightstream Chronicles, the future version of this mischief is called twisting. Basically, you take a perfectly good, well-behaved synthetic human and put in some junk. The change in programming is generally used to make these otherwise helpful synths do criminal things.

The logo says it all.

This tendency we have as human beings to twist good ideas into bad ones is nothing new, and today's headlines are evidence of it. We print guns with 3D printers, we use drones to vandalize, cameras to spy, and computers to hack. Perhaps that is what Humanity+ has in mind: make humanity more technologically advanced, more like a… machine, then reprogram out the humanness (that just leads to no good). What could possibly go wrong with that?

 

1 https://en.wikipedia.org/wiki/Transhumanism