Tag Archives: Dreamcatcher

What did one AI say to the other AI?

I’ve asked this question before, but this is an entirely new answer.

We may never know.

Based on a plethora of recent media on artificial intelligence (AI), not only are there a lot of people working on it, but many of those on the leading edge are also concerned with what they don’t understand in the midst of their ominous new creation.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzz words for a technosocial contract.

“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)

They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.

Some of what makes them all a bit nervous is the way AI learns, the complexity of neural networks, and the inability to go back and see how the AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Adding to that list is the discovery that AIs working together can create their own languages; languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.

For me, it’s a little disconcerting that Facebook, a social media app, is one of the corporations leading the charge and the research into AI. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.

To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft as well. The better they know you, the more stuff they can sell you. Of course, there are also enormous benefits for medical research. Such advantages are almost always what these companies talk about first: AI will save your life, cure cancer, and prevent crime.

So, it is somewhat encouraging to see that these companies on the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI, like

“…Seek out, support, celebrate, and highlight aspirational efforts in AI for socially benevolent applications.”

The key word here is benevolent. But the clear objective is to keep the dialog positive, and

“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)

I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.

In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, or that it makes their civil liberties less secure, would they stop? Probably not.

At the rate that these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder if they are also looking at how AI will change us, and whether those changes are a good thing.

(1) https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

(2) https://www.partnershiponai.org/#

(3) http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

Other pertinent links:

https://www.fastcompany.com/3064368/we-dont-always-know-what-ai-is-thinking-and-that-can-be-scary

https://www.fastcodesign.com/90133138/googles-next-design-project-artificial-intelligence


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the last 16 years, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful interface, or a more satisfying experience. In fact, the tighter the constraints, the narrower the problem statement and the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way.

Such challenges are especially gratifying for the seasoned professional, who has developed an almost intuitive eye for framing these dilemmas, from which novel and efficient solutions result. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of any individual human designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The AIGA author, Rob Peart, pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at the techno-ad-agency Sapient Nitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light: “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would have cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design; that anyone could make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

 


Surveillance. Are we defenseless?

Recent advancements in AI, increasing exponentially in areas such as facial recognition, demonstrate a level of sophistication in surveillance that leaves most of us defenseless. There is a new transparency, and virtually every global citizen is a potential microbe for scrutiny beneath the microscope. I was blogging about this before I ever set eyes on the CBS drama Person of Interest, but the premise that surveillance could be ubiquitous is very real. The series depicts a mega master computer that sees everything, but the idea of gathering a networked feed of the world’s cameras and a host of other accessible devices into a central data facility, where AI sorts, analyzes, and learns what kind of behavior is potentially threatening, is well within reach. It isn’t even a stretch that something like it already exists.

As with most technologies, however, these do not exist in a vacuum. Technologies converge. Take, for example, a recent article in WIRED about how accurate facial recognition is becoming even when the subject is pixelated or blurred. A common tactic to obscure the identity of a video witness or an innocent bystander is to blur or pixelate the face, a technique favored by Google Maps. Just go to any big-city street view, and you’ll see that Google has systematically obscured license plates and faces. Today, these methods no longer hold up against state-of-the-art facial recognition systems.

The next flag is the escalating sophistication of hacker technology. One of the most common methods is malware. Through an email or website, malware can infect a computer and wreak havoc. Criminals often use it to ransom a victim’s computer before removing the infection. But not all hackers are criminals, per se. The FBI is pushing for the ability to use malware to digitally wiretap or otherwise infiltrate potentially thousands of computers using only a single warrant. Ironically, FBI Director James Comey recently admitted that he puts tape over the camera on his personal laptop. I wrote about this a few weeks back. What does that say about the security of our laptops and devices?

Is the potential for destructive attacks on our devices so pervasive that the only defense we have is duct tape? The idea that the NSA can listen in on your phone even when it’s off goes back at least to Edward Snowden. And since 2014, experts have confirmed that the technology exists. In fact, some apps, albeit sketchy ones, purport to do exactly that. You won’t find them in the app store (for obvious reasons), but there are websites where you can click the “buy” button. According to the site Stalkertools.com, which doesn’t pass the legit news site test (note the use of awesome), one of these apps promises that you can:

• Record all phone calls made and received, and hear everything being said, because you can record all calls and even listen to them at a later date
• GPS tracking: see the exact location of the phone on a map on your computer
• See all sites opened in the phone’s web browser
• Read all the messages sent and received on IM apps like Skype, WhatsApp, and all the rest
• See all the passwords and logins to sites that the person uses, thanks to the keylogger feature
• Open and close apps with the awesome “remote control” feature
• Read all SMS messages and see all photos sent and received in text messages
• See all photos taken with the phone’s camera

“How it work” “ The best monitoring for protect family” — Yeah. Sketchy.

I visited one of these sites (above) and, frankly, I would never click a button on a website that can’t form a sentence in English, and I would not recommend that you do either. Earlier this year, the UK Independent published an article in which Kelli Burns, a mass communication professor at the University of South Florida, alleged that Facebook regularly listens to users’ phone conversations to see what people are talking about. Of course, she said she can’t be certain of that.

Nevertheless, it’s out there, and if it has not already happened, eventually some organization or government will find a way to network the access points and begin collecting information across a comprehensive matrix of data points. And it would seem that we will have to find new forms of duct tape to attempt to manage whatever privacy we have left. I found a site that gives some helpful advice for determining whether someone is tapping your phone.

Good luck.

 


Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me.

I’m old enough to remember when you purchased software as a set of CDs (or even diskettes). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it: most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks which menus and buttons you click. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works: the more data the machine can feed on, the better it becomes at learning. I would just rather Adobe searched out that data for themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.
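That feedback loop is easy to sketch. Below is a minimal, purely illustrative model of the "correct the dog tag" cycle Adobe describes: a toy nearest-centroid tagger whose guesses improve as users confirm or correct them. None of the names or numbers come from Adobe; this is just the shape of the idea.

```python
class TagLearner:
    """Toy nearest-centroid tagger that improves from user corrections.
    The labels and the 2-D 'features' are invented for illustration."""

    def __init__(self):
        # Running sums and counts per label, used to compute centroids.
        self.sums = {"dog": [0.0, 0.0], "not_dog": [0.0, 0.0]}
        self.counts = {"dog": 0, "not_dog": 0}

    def _centroid(self, label):
        n = max(self.counts[label], 1)
        return [s / n for s in self.sums[label]]

    def learn(self, features, label):
        # Every confirmation or correction feeds back into the model.
        for i, f in enumerate(features):
            self.sums[label][i] += f
        self.counts[label] += 1

    def predict(self, features):
        # Guess the label whose centroid is closest to the features.
        def dist(label):
            c = self._centroid(label)
            return sum((f - ci) ** 2 for f, ci in zip(features, c))
        return min(self.sums, key=dist)

learner = TagLearner()
learner.learn([1.0, 1.0], "dog")
learner.learn([5.0, 5.0], "not_dog")
print(learner.predict([1.2, 0.9]))   # closest to the "dog" centroid
# A user correction ("that's not a dog") feeds straight back in:
learner.learn([1.2, 0.9], "not_dog")
```

The point is simply that every user interaction, including telling the system it was wrong, is more training data.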

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and his dual microphones on the side. He knows.


Go get a piece of tape.


Design fiction. Think now.

This week I gave my annual lecture on design fiction to Foundations students. The Foundations Program at The Ohio State University Department of Design is composed primarily (though not entirely) of incoming freshmen aspiring to get into the program at the end of their first year. Out of roughly 90 hopefuls, as many as 60 could be selected.

 
Design fiction is something of an advanced topic for first-year students. It is a form of design research that goes beyond conventional forms of research and stretches into the theoretical. What it yields (like all research) is knowledge, which should not be confused with the answer or the solution to a problem; rather, it becomes one of the tools that designers can use in crafting better futures.

 
Knowledge is critical.
One of the things that I try to stress to students is the enormity of what we don’t know. At the end of their education, students will know much more than they do now, but there is an iceberg of information out of sight that we can’t even begin to comprehend. This is why research is so critical to design. The theoretical comes in when we try to think about the future, perhaps the thing we know the least about. We can examine the tangible present and the recorded past, but the future is a trajectory affected by an enormous number of variables outside our control. We like to think that we can predict it, but rarely are we on the mark. So design fiction is a way of visualizing the future along with its resident artifacts, and bringing it into the present, where we can examine it and ask ourselves whether this is a future we want.

 
It is a different track. I recently attended the First International Conference on Anticipation. Anticipation is a completely new field of study. According to its founder Roberto Poli,

“An anticipatory behavior is a behavior that ‘uses’ the future in its actual decisional process. It is the process of using the future in the present, which includes a forward-looking stance and the use of that forward-looking stance to effect a change in the present. Anticipation therefore includes two mandatory components: a forward-looking attitude and the use of the former’s result for action.”

For me, this highlights some key similarities between design fiction and anticipation. At one level, all futures are fictions. Using a future design—design that does not yet exist—to help us make decisions today is an appropriate methodology for this new field. Concomitantly, designers need a sense of anticipation as they create new products, communications, places, experiences, organizations, and systems.

 
The reality of technological convergence makes the future an unstable concept. The merging of cognitive science, genetics, nanotech, biotech, infotech, robotics, and artificial intelligence is like shuffling a dozen decks of cards. The combinations become mind-boggling. So while it may seem a bit advanced for first-year design students, from my perspective we cannot start soon enough to think about our profession as a crucial player in crafting what the future will look like. Design fiction—drawing from the future—will be an increasingly important tool.


Micropigs. The forerunner to ordering blue-skinned children.

 

Your favorite shade, of course.

Last week I tipped you off to Amy Webb, a voracious design futurist with tons of tidbits on the latest technologies that are affecting not only design but our everyday life. I saved a real whopper for today. I won’t go into her mention of CRISPR-Cas9 since I covered that a few months ago without Amy’s help, but here’s one that I found more than interesting.

Chinese genomic scientists have created some designer pigs. They are called ‘micropigs,’ and they are taking orders at $1,600 a pop for the little critters. It turns out that pigs are very close—genetically—to humans, but the big fellows were cumbersome to study (and probably too expensive to feed), so the scientists bred a smaller version by turning off the growth gene in their DNA. Voilà: micropigs. Plus, you can order them in different colors (they can do that, too).

Micropigs. Photo from BPI and nature.com

Now, of course, this is all to further research, and all proceeds will go to more research to help fight disease in humans, at least until they sell the patent on micropigs to the highest bidder.

So now we have genetic engineering to make a micropig fashion statement. Wait a minute. We could use genetic engineering for human fashion statements, too. After all, it’s a basic human right to be whatever color we want. Oh, no. We would never do that.

Next up is Google’s new email reply feature, coming soon to your Gmail account.


What did one AI say to the other AI?

 

I know what you want.

A design foundations student recently asked my advice on a writing assignment, something that might be affected by or affect design in the future. I told him to look up predictive algorithms. I have long contended that logic alone indicates that predictive algorithms, taking existing data and applying constraints, can be used to solve a problem, answer a question, or design something. With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out. In case you haven’t noticed, big data is, well, big.
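In miniature, that is all a predictive algorithm does: take existing data, fit a model, and extrapolate from it. Here is a hedged sketch using nothing but ordinary least squares; the data are invented for illustration.

```python
# Minimal sketch of a predictive algorithm: fit a line to existing
# data, then use it to predict an unseen value.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Existing data" (made up): hours of product use vs. purchases made.
hours = [1, 2, 3, 4, 5]
buys = [1.1, 1.9, 3.2, 3.9, 5.1]
a, b = fit_line(hours, buys)
print(round(a * 6 + b, 1))  # predicted purchases at 6 hours of use
```

Big data changes the scale, not the logic: the same extrapolation, fed by millions of data points across many variables, is what makes the recommendations feel uncanny.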

One of the design practitioners that I follow is Amy Webb. Amy has been thinking about this longer than I have but clearly, we think alike, and we are looking at the same things. I don’t know if she is as alarmed as I am. We’ve never spoken. In her recent newsletter, her focus was on what else, predictive algorithms. Amy alerted me to a whole trove of new developments. There were so many that I have decided to make it a series of blogs starting with this one.

Keep in mind that as I write this, these technologies are in their infancy. If they already impress you, then the future will likely blow you away. The first was something known as Project Dreamcatcher from Autodesk. These are the people who make Maya and AutoCAD and much of the software that designers, animators, engineers, and architects use every day. According to the website:

“The Dreamcatcher system allows designers to input specific design objectives, including functional requirements, material type, manufacturing method, performance criteria, and cost restrictions. Loaded with design requirements, the system then searches a procedurally synthesized design space to evaluate a vast number of generated designs for satisfying the design requirements. The resulting design alternatives are then presented back to the user, along with the performance data of each solution, in the context of the entire design solution space.”
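The workflow that quote describes can be roughly sketched as a constraint-filtered search: generate candidate designs, discard those that violate the requirements, and present the best survivors with their performance data. The cost and performance formulas below are invented stand-ins, not anything from Autodesk’s actual system.

```python
import random

random.seed(42)  # deterministic for the example

def generate():
    # A "design" here is just two parameters: thickness and width.
    return {"thickness": random.uniform(1, 10), "width": random.uniform(1, 10)}

def satisfies(design, max_cost):
    # Toy cost model standing in for material/manufacturing constraints.
    return design["thickness"] * design["width"] <= max_cost

def performance(design):
    # Toy stiffness proxy standing in for the performance criteria.
    return design["thickness"] + 2 * design["width"]

# Search the synthesized design space, keep feasible candidates,
# then present ranked alternatives along with their scores.
candidates = [d for d in (generate() for _ in range(1000))
              if satisfies(d, max_cost=30)]
alternatives = sorted(candidates, key=performance, reverse=True)[:5]
for d in alternatives:
    print(round(performance(d), 2), round(d["thickness"] * d["width"], 2))
```

Real generative design uses far more sophisticated synthesis and simulation, but the loop is the same: objectives in, a ranked solution space out.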

Another on Amy’s list was Google’s recently announced RankBrain, the company’s next venture into context-aware platforms, using advances in predictive algorithms to make what you see scarily tailored to who you are. According to Amy, from a 2012 article (this is old news, folks):

“With the adoption of the Siri application, iOS 5 mobile phones (Apple only) can now compare location, interests, intentions, schedule, friends, history, likes, dislikes and more to serve content and answers to questions.”

In other words, there’s a lot more going on than you think when Siri answers a question for you. Well, RankBrain takes this to the next level, according to Bloomberg, which broke the story on RankBrain:

“For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain…’Machine learning is a core transformative way by which we are rethinking everything we are doing,’ said Google’s Chief Executive Officer Sundar Pichai on the company’s earnings call last week.”

By the way, so far, most AI predicts much more accurately than we humans do.

If this is moving too fast for you, next week, thanks to Amy, I’ll highlight some applications of AI that will have you squirming.

PS: If you want to follow Amy Webb, go here.
