Tag Archives: ethics

Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where they might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask if there is any meaningful review. What are the ramifications for error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems, and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from a crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industry Association (NDIA) conference last year. The list of agencies involved in that conference and the rest of the documents cited reads like an alphabet soup of military and defense organizations which most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy albeit activistic report, but the idea here is that rapidly advancing technology is enabling considerations which were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who thinks that anything and everything that the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted by their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector for that matter) is as diligent in anticipating the repercussions of its creations as it is in getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.

 

Last week, we discovered that there is a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. It is interesting that the real world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an Artificial Intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor and is especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new in the 21st century and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.
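To make the idea concrete, here is a minimal sketch of the kind of pattern-matching such an engine performs, reduced to a toy conditional-frequency lookup. Every field name and data point below is invented for illustration; a real system would use neural networks over vastly larger datasets.

```python
from collections import Counter

# Toy "profiling engine": given past solved cases, estimate the most
# likely offender attribute for a new crime scene. All data and field
# names here are hypothetical, purely for illustration.
CASES = [
    {"scene": "urban", "method": "forced_entry", "age_band": "20-30"},
    {"scene": "urban", "method": "forced_entry", "age_band": "20-30"},
    {"scene": "urban", "method": "deception",    "age_band": "40-50"},
    {"scene": "rural", "method": "forced_entry", "age_band": "30-40"},
]

def profile(evidence, cases, target):
    """Return the most frequent value of `target` among past cases
    matching every piece of evidence (a crude conditional estimate)."""
    matches = [c for c in cases
               if all(c.get(k) == v for k, v in evidence.items())]
    if not matches:
        return None
    counts = Counter(c[target] for c in matches)
    return counts.most_common(1)[0][0]

print(profile({"scene": "urban", "method": "forced_entry"}, CASES, "age_band"))
# -> "20-30": the description is only as good as the data going in
```

The point is not the sophistication of the lookup but the dependency: the output is entirely a function of the historical record it was fed.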

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment, “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, theoretically you could ‘teach’ the AI using false data to produce a false deduction.

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as their Chinese version named XiaoIce, which has been around for four years and engages millions of young Chinese in discussions of millennial angst with a chatbot. Tay used three platforms: Twitter, Kik and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to a full-blown Nazi. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine, feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets and apologized.
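Why does unvetted input corrupt a learning chatbot so quickly? Here is a deliberately tiny sketch, assuming nothing about Microsoft’s actual architecture: a bot that learns only which word tends to follow which has no concept of meaning, so whatever dominates its input will dominate its output.

```python
import random
from collections import defaultdict

# Toy bigram "chatbot" with no filters: it records which word follows
# which and parrots those statistics back. Not Tay's real engine, just
# an illustration of garbage-in, garbage-out learning.
class ParrotBot:
    def __init__(self, seed=0):
        self.follows = defaultdict(list)
        self.rng = random.Random(seed)

    def learn(self, sentence):
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def reply(self, start, length=5):
        out, word = [start], start
        for _ in range(length):
            nxt = self.follows.get(word)
            if not nxt:
                break
            word = self.rng.choice(nxt)
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
bot.learn("humans are super cool")
print(bot.reply("humans"))  # -> "humans are super cool"

# Flood it with invective and the invective becomes the statistical norm:
for _ in range(50):
    bot.learn("humans are awful")
print(bot.reply("humans"))  # now overwhelmingly likely: "humans are awful"
```

Real chat engines are enormously more sophisticated, but the vulnerability is the same shape: without vetting or filters, the model’s “norm” is whatever the crowd supplies.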

Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex, extracting anything coherent, assembling memories and consciousness, and, in turn, beaming it all to the Cloud. Instead, I’m going to pose the question: why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray, Peter Diamandis includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

  • Brain-To-Brain hacking – reading others’ thoughts
  • Instant Knowledge – to deceive, to steal, to subvert, or hijack.
  • Access to More Powerful Computing – to gain the advantage or any of the previous list.
  • Tap Into Any Virtual World – experience the criminal, the evil, the debauched and not go to jail for it.

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be Crispr-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?

 

1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns that says (essentially), “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.
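The arithmetic behind the 20,000-year claim is easy to sketch. Assuming the rate of progress doubles every decade (the doubling period is my assumption; Kurzweil varies it by domain), summing a century of exponentially accelerating years lands in the same order of magnitude:

```python
# Back-of-envelope check on the "20,000 years of progress" claim,
# under the assumed condition that the rate of progress doubles
# every ten years.
DOUBLING_YEARS = 10

total = 0.0
for year in range(100):                   # the 21st century, year by year
    rate = 2 ** (year / DOUBLING_YEARS)   # progress rate relative to year 2000
    total += rate

print(round(total))  # on the order of 10^4 "year-2000 years" of progress
```

A slightly faster doubling period gets you to 20,000 exactly; the point is that everything in the prediction hangs on that exponential assumption holding for a full century.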

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what onetime Fed chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lies on top of the cortex underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say it will be decades, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology.

So, then, it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.

Logical succession, please.

In this blog, I wouldn’t be surprised to discover that the person I talk (or rant) about most is Ray Kurzweil. That is not all that surprising to me since he is possibly the most visible, vociferous and visionary proponent of the future. Let me say in advance that I have great respect for Ray. A Big Think article three years ago claimed that
“… of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.”

Last year Kurzweil predicted that
“In the 2030s… we are going to send nano-robots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.”1

This prediction caught my attention as not only quite unusual but, considering that it is only 15 years away, incredibly ambitious. Since 2030 is right around the corner, I wanted to see if anyone has been able to connect to the neocortex yet. Before I could do that, however, I needed to find out what exactly the neocortex is. According to Science Daily, it is the top layer of the brain (which is made up of six layers). “It is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”2 According to Kurzweil, “There is beauty, love and creativity and intelligence in the world, and it all comes from the neocortex.”3

OK, so on to how we connect. Kurzweil predicts nanobots will do this, though he doesn’t say how. Nanobots, however, are a reality. Scientists have designed nanorobotic origami, which can fold itself into shapes on the molecular level, and molecular vehicles that are drivable. Without additional detail, I can only surmise that once our nano-vehicles have assembled themselves, they will drive to the highest point and set up an antenna and, voilà, we will be linked.

 

Neurons of the neocortex stained with Golgi’s method – Photograph: Benjamin Bollmann

I don’t let my students get away with predictions like that, so why should Kurzweil? Predictions should engage more than just existing technologies (such as nanotech and brain mapping); they need to demonstrate plausible breadcrumbs that make such a prediction legitimate. Despite the fact that Ray gives a great TED talk, it still didn’t answer those questions. I’m a big believer that technological convergence can foster all kinds of unpredictable possibilities, but the fact that scientists are working on a dozen different technological breakthroughs in nanoscience, bioengineering, genetics, and even mapping the connections of the neocortex4 doesn’t explain how we will tap into it or transmit it.

If anyone has a theory on this, please join the discussion.

1. http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
2. http://www.sciencedaily.com/terms/neocortex.htm
3. http://www.dailymail.co.uk/sciencetech/article-3257517/Human-2-0-Nanobot-implants-soon-connect-brains-internet-make-super-intelligent-scientist-claims.html#ixzz3xtrHUFKP
4. http://www.neuroscienceblueprint.nih.gov/connectome/

Photo from: http://connectomethebook.com/?portfolio=neurons-of-the-neocortex


A paralyzing electro magnetic laser: future possibility or sheer fantasy?

In episode 134, the Techman is paralyzed, lifted off the ground and thumped back to the floor. Whether it’s electrostatic, electromagnetic or superconductor electricity reduced to a hand-held device, the concept seems valid, especially 144 years from now. Part of my challenge is to make this design fiction logical by pulling threads of current research and technology to extrapolate possible futures. Mind you, it’s not a prediction, but a possibility. Here is my thinking:

Keiji’s weapon assumes that at least four technologies come together sometime in the next 14 decades. Safe bet? To start with, the beam has to penetrate the door and significantly stun the subject. This idea is not that far-fetched. Weapons like this are already on the drawing board. For instance, the military is currently working on something called laser-guided directed-energy weapons. They work like “artificial lightning” to disable human targets. According to Defense Update,

“Laser-Induced Plasma Channel (LIPC) technology was developed by Ionatron to channel electrical energy through the air at the target. The interaction of the air and laser light at specific wavelength, causes light to break into filaments, which form a plasma channel that conducts the energy like a virtual wire. This technology can be adjusted for non-lethal or lethal use.”

The imaginative leap here is that the beam can penetrate the wall to find its target. Given the other advancements, I feel reasonably safe stretching on this one.

LIPC at work.

Next, you have to get the subject off the ground. Lifting a 200-pound human would require at least two technologies assisted by a third. First is a levitating superconductor, which uses electric current to produce magnetic forces that could counter the force of gravity. According to physics.org:

“Like frogs, humans are about two-thirds water, so if you had a big enough Bitter electromagnet, there’s no reason why a human couldn’t be levitated diamagnetically. None of the frogs that have taken part in the diamagnetic levitation experiments have experienced any adverse effects, which bodes well for any future human guinea pigs.”

The other ingredient is a highly powerful magnet. If we had a superconductor with a few decades of refinement and miniaturization, it’s conceivable that it could produce magnetic forces counter to the force of gravity. 1
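The frog experiments alluded to above let us put a rough number on this. The levitation condition equates the magnetic force density on a diamagnet with gravity, which for water works out to a field-gradient product of very roughly 1,400 T²/m. The sketch below runs that estimate, treating a human as a bag of water (the same simplification the frog work uses); the constants are standard, the simplification is mine:

```python
import math

# Rough estimate of the magnetic "force product" B * dB/dz needed to
# levitate water (and hence, very roughly, a person) diamagnetically.
# Levitation condition: (|chi| / mu0) * B * dB/dz = rho * g
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
chi = 9.0e-6               # |volume magnetic susceptibility| of water (SI)
rho = 1000.0               # density of water, kg/m^3
g = 9.81                   # gravitational acceleration, m/s^2

force_product = mu0 * rho * g / chi   # required B * dB/dz, in T^2/m
print(round(force_product))           # ~1400 T^2/m
```

That figure explains why the existing demonstrations need room-sized Bitter electromagnets; shrinking a field like that into a hand-held weapon is exactly the multi-decade stretch the story leans on.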

The final component would be a power source small enough to fit inside the weapon and carrying enough juice to generate the plasma and the magnetic field for at least fifteen seconds. Today, you can buy a million-volt stun device on Amazon.com for around $50, and thyristor semiconductor technology could help ramp up the power surge necessary to sustain the arc. Obviously, I’m not an engineer, but if you are, please feel free to chime in.

1. http://helios.gsfc.nasa.gov/qa_gp_elm.html


What could happen.

1.  about last week

I’ll be the first to acknowledge that my blog last week was a bit depressing. However, if I thought, the situation was hopeless, I wouldn’t be doing this in the first place. I believe we have to acknowledge our uncanny ability to foul things up and, as best we can, design the gates and barriers into new technology to help prevent its abuse. And even though it may seem that way sometimes, I am not a technology pessimist or purely dystopian futurist. In truth, I’m tremendously excited about a plethora of new technologies and what they promise for the future.

2.  see the future

Also last week (by way of asiaone.com), Dr. Michio Kaku, speaking in Singapore, served up this future within the next 50 years.

“Imagine buying things just by blinking. Imagine doctors making an artificial heart for you within 20 hours. Imagine a world where garbage costs more than computer chips.”

Personally, I believe he’s too conservative. I see it happening much sooner. Kaku is one of a handful of famous futurists, and his “predictions” have a lot of science behind them. So who am I to argue with him? He’s a brilliant scientist, prolific author, and educator. Most futurists or forecasters will be the first to tell you that their futures are not predictions but rather possible futures. According to forecaster Paul Saffo, “The goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.”1

According to Saffo “… little is certain, nothing is preordained, and what we do in the present affects how events unfold, often in significant, unexpected ways.”

Though my work is design fiction, I agree with Saffo. We both look at the future the same way. The objective behind my fictions is to jar us into thinking about the future so that it doesn’t surprise us. The more that our global citizenry thinks about the future and how it may impact them, the more likely that they will get involved. At least that is my hope. Hence, it is why I look for design fictions that will break out of the academy or the gallery show and seep into popular culture. The future needs to be an inclusive conversation.

Of course, the future is a broad topic: it impacts everything and everyone. So much of what we take for granted today could be entirely different—possibly even unrecognizable—tomorrow. Food, medicine, commerce, communication, privacy, security, entertainment, transportation, education, and jobs are just a few of the enormously important areas for potentially radical change. Saffo and Kaku don’t know what the future will bring any more than I do. We just look at what it could bring. I tend to approach it from the perspective of “What could go wrong?” Others take a more balanced view, and some look only at the positives. It is these perspectives that create the dialog and debate, which is what they are supposed to do. We also have to be careful that we don’t see these opinions as fact. Ray Kurzweil sees the equivalent of 20,000 years of change packed into the 21st century. Kaku (from the article mentioned above) sees computers being relegated to the

“‘dull, dangerous and dirty’ jobs that are repetitive, such as punching in data, assembling cars and any activity involving middlemen who do not contribute insights, analyses or gossip. To be employable, he stresses, you now have to excel in two areas: common sense and pattern recognition. Professionals such as doctors, lawyers and engineers who make value judgments will continue to thrive, as will gardeners, policemen, construction workers and garbage collectors.”

Looks like Michio and I disagree again. The whole idea behind artificial intelligence is in the area of predictive algorithms that use big data to learn. Machine learning programs detect patterns in data and adjust program actions accordingly.2 The idea of diagnosing illnesses, advising humans on potential behaviors, analyzing soil, site conditions and limitations, or even collecting trash is well within the realm of artificial intelligence. I see these jobs every bit as vulnerable as those of assembly line workers.
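“Detecting patterns in data” can be as simple as measuring distance to previously seen examples. A deliberately tiny sketch, with invented symptom scores and labels, shows the shape of the thing:

```python
# Minimal one-nearest-neighbor "diagnosis": classify a new patient by
# the closest previously seen example. All scores and labels here are
# made up for illustration; real diagnostic systems are far richer.
TRAINING = [
    ((0.9, 0.8), "flu"),
    ((0.8, 0.9), "flu"),
    ((0.1, 0.2), "cold"),
    ((0.2, 0.1), "cold"),
]

def diagnose(patient):
    """Return the label of the training example nearest to `patient`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda item: dist(item[0], patient))[1]

print(diagnose((0.85, 0.75)))  # -> "flu": nearest past cases were flu
```

Scale the examples from four to four million and add learned weightings, and you have the core of why value-judgment professions are not as insulated as Kaku suggests.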

That, of course, is all part of the discussion—that we need to have.

 

1 Harvard Business Review | July–August 2007 | hbr.org
2. http://www.machinelearningalgorithms.com

The ultimate wild card.

 

One of the things that futurists do when they imagine what might happen down the road is to factor in the wild card. Setting aside the sports and movie references, a wild card is defined by dictionary.com as: “… of, being, or including an unpredictable or unproven element, person, item, etc.” One might use this term to say, “Barring a wild card event like a meteor strike, global thermonuclear war, or a massive earthquake, we can expect Earth’s population to grow by (x) percent.”

The thing about wild card events is that they do happen. 9/11 could be considered a wild card. Chernobyl, Fukushima, and Katrina would also fall into this category. At the core, they are unpredictable, and their effects are widespread. There are think tanks that work on the probabilities of these occurrences and then play with scenarios for addressing them.

I’m not sure what to call something that would be entirely predictable but that we still choose to ignore. Here I will go with a quote:

“The depravity of man is at once the most empirically verifiable reality but at the same time the most intellectually resisted fact.”

― Malcolm Muggeridge

Some will discount this automatically because the depravity of man refers to the Christian theology that without God, our nature is hopeless. Or as Jeremiah would say, our heart is “deceitful and desperately wicked” (Jeremiah 17:9).

If you don’t believe in that, then maybe you are willing to accept a more secular notion: that man can be desperately stupid. To me, humanity’s uncanny ability to foul things up is the recurring (not-so) wild card. It makes all new science as much a potential disaster as it might be a panacea. We don’t consider it often enough. If we look back through my previous blogs, from Transhumanism to genetic design, this threat looms large. You can call me a pessimist if you want, but the video link below stands as a perfect example of my point. It is a compilation of all the nuclear tests, atmospheric, underground, and underwater, since 1945. Some of you might think that after a few tests and the big bombs during WWII we decided to keep a lid on the insanity. Nope.

If you can watch the whole thing without sinking into total depression and reaching for the Clorox, you’re stronger than I am. And, sadly it continues. We might ask how we have survived this long.


Enter the flaw.

 

I promised a drone update this week, but by now, it is probably already old news. It is a safe bet there are probably a few thousand more drones than last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.

And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, but the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match nicely with his curve. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age and eclectic mixes that are difficult to tell from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robotics as clunky mechanized brutes. AI can create. Even though it’s studying an archive of man-made creations, the resulting work is unique.

First it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
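Those four steps can be mirrored in a few lines of toy code. To be clear, this is not Kulitta’s actual algorithm; it is only an illustration of the learn, structure, chords, notes pipeline shape described above, with an invented micro-corpus:

```python
import random

# Toy mirror of the four-step pipeline. Everything here (corpus,
# phrase plan, chord tables) is invented for illustration.
rng = random.Random(42)

# 1. "Learn": chord-to-chord tendencies distilled from a tiny corpus.
CHORD_FOLLOWS = {"C": ["F", "G"], "F": ["G", "C"], "G": ["C"]}

# 2. Generate an abstract structure: a phrase plan.
structure = ["A", "A", "B", "A"]

# 3. Populate the structure with chords.
def chords_for(section, length=2):
    out, chord = [], "C" if section == "A" else "F"
    for _ in range(length):
        out.append(chord)
        chord = rng.choice(CHORD_FOLLOWS[chord])
    return out

# 4. Massage into a concrete framework: one MIDI pitch per chord root.
NOTE = {"C": 60, "F": 65, "G": 67}
progression = [c for section in structure for c in chords_for(section)]
melody = [NOTE[c] for c in progression]
print(progression)
print(melody)
```

Even this crude version produces a chord sequence nobody wrote directly; scale the corpus and the grammar up far enough and you get output that trained ears struggle to distinguish from human work.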

The creator of Kulitta, Donya Quick, says that this will not put composers out of a job; it will help them do their job better. She doesn’t say how, exactly.

If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?

The article continues:

Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.

The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”

Sadly, I think not. The sweat and blood that composers and artists pour into their work could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once an AI learns the characters, behaviors, and personalities of the characters in The Lightstream Chronicles, it could create new episodes without me. Taking characters and settings that already exist as CG constructs, it's not a stretch to imagine it generating the wireframes, rendering the images, and laying out the panels.

Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?

When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?

There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.


Meddling with the primal forces of nature.

One of the more ominous articles of recent weeks came from WIRED magazine, on the proliferation of DNA editing. The story is rich with technical talk and gets bogged down in places, but essentially it is about a group of scientists concerned about the Pandora's box they may have opened with something called Crispr-Cas9, or Crispr for short. As far back as 1975, the group foresaw this and thought about establishing "guidelines" for what biologists could and could not do; things like creating pathogens, or mutations that could be passed on from generation to generation (maybe even in humans), were on the list of concerns. It all seemed very far off back in the '70s, but not anymore. According to WIRED writer Amy Maxmen,

“Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people.”

Maxmen states that startups are launching with Crispr as their focus. Two quotes that I have used excessively come to mind. First, Tobias Revell: “Someone, somewhere in a lab is playing with your future.”1. Next, from a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” 2.

And so, we play catch-up. From the WIRED article:

“It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.”

The most disconcerting part of all this, to me, is that now, before the rules exist, even the smallest breach in protocol could unleash repercussions of Biblical proportions. Everything from killer mosquitoes and flying spiders to horrific mutations and pandemics is up for grabs.

We’re not even close to ready for this. Don’t tell me that it could eradicate AIDS or Huntington’s disease. That is the promise that gets trotted out whenever a new technology rears its head over the horizon.

“Now, with less than $100, an ordinary arachnologist can snip the wing gene out of a spider embryo and see what happens when that spider matures.”

From the movie “Splice”. Sometimes bad movies can be the most prophetic.

It is time to get the public involved in these issues, whether through grass-roots efforts or by pressing elected officials to spearhead legislation.

“…straight-out editing of a human embryo sets off all sorts of alarms, both in terms of ethics and legality. It contravenes the policies of the US National Institutes of Health, and in spirit at least runs counter to the United Nations’ Universal Declaration on the Human Genome and Human Rights. (Of course, when the US government said it wouldn’t fund research on human embryonic stem cells, private entities raised millions of dollars to do it themselves.) Engineered humans are a ways off—but nobody thinks they’re science fiction anymore.”

Maxmen interviewed Harvard geneticist George Church. In the article’s closing passage:

“When I ask Church for his most nightmarish Crispr scenario, he mutters something about weapons and then stops short. He says he hopes to take the specifics of the idea, whatever it is, to his grave. But thousands of other scientists are working on Crispr. Not all of them will be as cautious. “You can’t stop science from progressing,” Jinek says. “Science is what it is.” He’s right. Science gives people power. And power is unpredictable.”

Who do you trust?

1. Critical Exploits. Performed by Tobias Revell. YouTube. January 28, 2014. Accessed February 14, 2014. http://www.youtube.com/watch?v=jlpq9M1VELU#t=364.
2. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.