Tag Archives: crime thriller

What now?

 

If you follow this blog, you know that I like to say that the rationale behind design fiction—provocations that get us to think about the future—is to ask, “What if?” now so that we don’t have to ask “What now?”, then. This is especially important as our technologies begin to meddle with the primal forces of nature, where we naively anoint ourselves as gods and blithely march forward—because we can.

The CRISPR-Cas9 technology caught my eye almost exactly two years ago today through a WIRED article by Amy Maxmen. I wrote about it then as an awesomely powerful tool for astounding progress for the good of humanity that could, at the same time, take us down a slippery slope. As Maxmen stated,

“It could, at last, allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes.”

The article chronicles how, back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that the promise of bioengineering (still just a glimmer in their eyes) would not one day get out of hand.

Forty years later, what had been only a glimmer was a reality. So, in 2015, some of these researchers came together again to discuss the implications of a new technique called CRISPR-Cas9. It was just a few years after Jennifer Doudna and Emmanuelle Charpentier figured out the elegant tool for genome editing. Again from Maxmen,

“On June 28, 2012, Doudna’s team published its results in Science. In the paper and an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.”

In 2015 it was Doudna herself who called the meeting, this time in Napa, to discuss the ethical ramifications of Crispr. Their biggest concern was what they call germline modifications—the stuff that gets passed on from generation to generation, substantially changing the human species forever. In September of 2015, Doudna gave a TED Talk asking the scientific community to pause and discuss the ethics of this new tool before rushing in. On the heels of that, the US National Academy of Sciences said it would work on a set of "recommendations" for researchers and scientists to follow. No laws, just recommendations.

Fast forward to July 26, 2017. MIT Technology Review reported:

“The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon… Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.”

MIT’s article was thin on details because the actual paper delineating the experiment had not yet been published. Then, this week, it was. This time the objective was, indeed, the germline.

“…because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.” (ibid.)

All this was led by fringe researcher Shoukhrat Mitalipov of Oregon Health and Science University, and WIRED was quick to provide more info, but in two different articles.

The first of these stories appeared last Friday and gave more specifics on Mitalipov than on the actual experiment.

“the same guy who first cloned embryonic stem cells in humans. And came up with three-parent in-vitro fertilization. And moved his research on replacing defective mitochondria in human eggs to China when the NIH declined to fund his work. Throughout his career, Mitalipov has gleefully played the role of mad scientist, courting controversy all along the way (sic).”

In the second article, we discover what the mad scientist was trying to do. In essence, Mitalipov demonstrated a highly efficient replacement of mutated genes like MYBPC3, which is responsible for hypertrophic cardiomyopathy, a heart condition that “affects one in 500 people—the most common cause of sudden death among young athletes.” Highly efficient means that in 42 out of 58 attempts (roughly 72 percent), the problem gene was removed and replaced with a normal one. Mitalipov believes that he can get this to 100%. This means that fixing genetic mutations can be done successfully and maybe even become routine in the near future. But WIRED points out that doing so

“would require lengthy clinical trials—something a rider in the current Congressional Appropriations Act has explicitly forbidden the Food and Drug Administration from even considering.”

Ah, but this is not a problem for our fringe mad scientist.

“Mitalipov said he’d have no problem going elsewhere to run the tests, as he did previously with his three-person IVF work.”

Do we see a pattern here? One surprising thing that the study revealed was that,

“Of the 42 successfully corrected embryos, only one of them used the supplied template to make a normal strand of DNA. When Crispr cut out the paternal copy—the mutant one—it left behind a gap, ready to be rebuilt by the cell’s repair machinery. But instead of grabbing the normal template DNA that had been injected with the sperm and Crispr protein, 41 embryos borrowed the normal maternal copy of MYBPC3 to rebuild its gene.”

In other words, the cell said, thanks for your stinking code but we’ll handle this. It appears as though cellular repair may have a mission plan of its own. That’s the mysterious part that reminds us that there is still something miraculous going on here behind the scenes. Mitalipov thinks he and his team can force these arrogant cells to follow instructions.

So what now? With this we have more evidence that guidelines and recommendations, clear heads and cautionary voices are not enough to stop scientists and researchers on the fringe, governments with dubious ethics, or whoever else might want to give things a whirl.

That makes noble efforts like Asilomar in 1975, a similar conference some years ago on nanotechnology, and one earlier this year on artificial intelligence simply that: noble efforts. Why do these conferences occur in the first place? Because scientists are genuinely worried that we’re going to extinct ourselves if we aren’t careful. But technology is racing down the autobahn, folks, and we can’t expect the people who stand to become billionaires from their discoveries to be the same people policing their actions.

And this is only one of the many transformative technologies looming on the horizon. While everyone is squawking about the Paris Accords, why don’t we marshal some of our righteous indignation and pull the world together to agree on some meaningful oversight of these technologies?

We’ve gone from “What if?” to “What now?” Are we going to avoid “Oh, shit!”?

1. https://www.wired.com/2015/07/crispr-dna-editing-2/?mbid=nl_72815

2. http://wp.me/p7yvqL-mt

3. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/?set=608342

4. https://www.wired.com/story/scientists-crispr-the-first-human-embryos-in-the-us-maybe/?mbid=social_twitter_onsiteshare

5. https://www.wired.com/story/first-us-crispr-edited-embryos-suggest-superbabies-wont-come-easy/?mbid=nl_8217_p9&CNDID=49614846


Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. Among the most influential players in the development of new technology are DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where they might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask whether there is any meaningful review. What are the ramifications of error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from a crowd-funded “investigative journalist,” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industrial Association (NDIA) conference last year. The list of agencies involved in that conference and the rest of the documents cited reads like an alphabet soup of military and defense organizations which most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activist, report, but the idea here is that rapidly advancing technology is enabling capabilities which were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who think that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted through their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent about anticipating the repercussions of its creations as it is about getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Are we ready to be gods? Revisited.

 

I base today’s blog on a 2013 post with a look at the world from the perspective of The Lightstream Chronicles, which takes place in the year 2159. To me, this is a very plausible future. — ESD

 

There was a time when crimes were simpler. Humans committed crimes against other humans. Not so simple anymore. In the world of 2159, you have the old-fashioned mano a mano, but you also have human against synthetic and synthetic against human. There are creative variations as well.

It was bound to happen. No sooner had the first lifelike robots become commercially available in the late 2020s than there were issues of ethics and misuse. Though scientists and ethicists discussed the topic in the early part of the 21st century, the problems escalated faster than the robotics industry had conceived possible.

According to the 2007 Roboethics Roadmap,

“…problems inherent in the possible emergence of human function in the robot: like consciousness, free will, self-consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems — debated in literature — like the need not to consider robot as our slaves, or the need to guarantee them the same respect, rights and dignity we owe to human workers.”1

In the 21st century, many of the concerns within the scientific community centered on what we as humans might do to infringe upon the “rights” of the robot. Back in 2007, it occurred to researchers that the discussion of roboethics needed to include more fundamental questions regarding the ethics of the robots’ designers, manufacturers, and users. What they did not foresee was how “unprepared” we were as a society for the role of creator-god, and how quickly humans would pervert the robot for formerly “unethical” uses, including, but not limited to, modification for crime and perversion.

Nevertheless, more than 100 years later, when synthetic human production is at the highest levels in history, the questions of ethics in both humans and their creations remain a significant point of controversy. As the 2007 Roboethics Roadmap concluded, “It is absolutely clear that without a deep rooting of Roboethics in society, the premises for the implementation of an artificial ethics in the robots’ control systems will be missing.”

After these initial introductions of humanoid robots, now seen as almost comically primitive, the technology, and in turn the reasoning, emotions, personality, and realism, became progressively more sophisticated. Likewise, their implementations became progressively more like the society that manufactured them. They became images of their creators, both benevolent and malevolent.

In our image?

 

 

A series of laws were enacted to prevent the use of humanoid robots for criminal intent, yet at the same time, military interests were fully pursuing dispassionate automated humanoid robots with the express purpose of extermination. It was truly a time of paradoxical technologies. To further complicate the issue were ongoing debates on the nature of what was considered “criminal”. Could a robot become a criminal without human intervention? Is something criminal if it is consensual?

These issues ultimately evolved into a complex social, economic, political, and legal entanglement that included heavy government regulation and oversight where such was achievable. As this complexity and infrastructure grew to accommodate the continually expanding technology, the greatest promise and challenges came almost 100 years after those first humanoid robots. With the advent of virtual human brains grown in labs, the readily identifiable differences between synthetic humans and real humans gradually began to disappear. The similarities were so shocking, and the remaining differences so undetectable, that new legislation was enacted to restrict the use of virtual humans, and a classification system was established to ensure visible distinctions for the vast variety of social synthetics.

The concerns of the very first Roboethics Roadmap are confirmed even 150 years into the future. Synthetics are still abused and used to perpetrate crimes. Their virtual humanness only adds an element of complexity, reality, and in some cases, horror to the creativity of how they are used.

 

1. EURON Roboethics Roadmap

Powerful infant.

In previous blogs (such as this one), I have discussed the subject of virtual reality. Yesterday, I tried it. The motivation for my visit to The Advanced Computing Center for the Arts and Design (ACCAD), Ohio State’s cutting-edge technology and arts center, was a field trip for my junior Collaborative Studio design students. Their project this semester is to design a future system that uses emerging technologies. It is hard to imagine, but in the near future VR will likely be commonplace. We stepped inside a large, empty performance stage rigged with a dozen motion capture cameras that could track your movements throughout virtual space. We looked at an experimental animation in which we could stand amidst the characters and another work-in-progress that allowed us to step inside a painting. It wasn’t my first time in a Google Cardboard device where I could look around at a 360-degree world (sensed by my phone’s gyroscope), but on an empty stage where you could walk amongst virtual characters, the experience took on a new dimension—literally. I found myself concerned about bumping into things that weren’t there and even getting a bit dizzy. (I did not let on in front of my students.)

I immediately saw an application for The Lightstream Chronicles and realized that I could load up one of my scenes from the graphic novel, bring it over to ACCAD’s mocap studio and step into this virtual world that I have created. I build all of my scenes (including architecture) to scale, furnish the rooms and interiors and provide for full 360º viewing. Building sets this way allows me to revisit them at any time, follow my characters around or move the camera to get a better angle without having to add walls that I might not have anticipated using. After the demo, I was pretty excited. It became apparent that this technology will enable me to see what my characters see, and stand beside them. It’s a bit mind-blowing. Now the question becomes which scene to use. Any ideas?

Clearly VR is in its infancy, but it is a very powerful infant. The future seems exciting, and I can see why people get caught up in its promise. Of course, I have to be the one to wonder what this powerful infant will grow up to be.


Harmless.

 

Once again, it has been a week where it is difficult to decide which present-future I should talk about. If you are a follower of The Lightstream Chronicles, then you know I am trying to write about more than science fiction. The story is indeed a cyberpunk-ish crime-thriller drama intended to entertain, but it is also a means of scrutinizing a future where the technologies we imagine will solve our problems often create new ones, subtle ones that end up re-engineering us. Many of these technologies start out as curiosities, entertainments, or diversions that are picked up by early-adopting technophiles and end up, gradually, in the mainstream.

One of these curiosities is the idea of wearable tech. Wristbands, watches, and other monitors are designed to keep track of what we do, remind us to do something, or, now with increasing popularity, remind us not to do something. One company, Chaotic Moon, is working on a series of tattoo-like monitors. These are temporary, press-on circuits that use the conductivity of your skin to help them work and transmit. They are called Tech Tats and self-classified as bio-wearables. In addition to their functional properties, they also have an aesthetic objective—a kind of tattoo. Still somewhat primitive (technologically and artistically), they nevertheless fall into this category of harmless diversions.

Monitoring little Susi’s temperature.

Of course, Chaotic Moon is hoping (watch the video) that they will become progressively more sophisticated and that their popularity will grow as both tech and fashion. Perhaps they should be called bio-fashion. If no one has already claimed this, then you saw it here first, folks. If you watch the video from Chaotic Moon, you’ll see the promise that these things (in a future iteration) will be used for transactions and should be considered safer than carrying around lots of credit cards. By the way, thieves are already hacking the little chip in your credit card that is supposed to be so much safer than the old non-chipped version. Sorry, I digress.

My brand of design fiction looks at these harmless diversions and asks, “What next?” and “What if?” I think most futurists agree that these kinds of implants will eventually move inside the body through simple injections or, in future versions, be constructed inside via nanobots. Under my scrutiny, two interesting things are at work here: first, the idea of wearing and then implanting technology, which clearly brings us across a transhuman threshold; second, the idea of fashion as the subtle carrier of harmlessness and adoptive lure. You can probably imagine where I’m going with that.

Next up is VR. Virtual reality is something I blog about fairly often. In The Lightstream Chronicles, it has reached a level of sophistication that surpasses game controllers, boxes, and hardware. You simply dial your neocortex into the Lightstream (the future Internet), and you are literally wherever you want to be, doing whatever your imagination can conjure up. In the story, I more or less predict that this total immersion becomes seriously addictive. Check out the prologue episodes to Season 4.

Thanks to one of my students for pointing out this video called the Uncanny Valley.

“I feel like I can be myself and not go to jail for it.”

You can watch it on Vimeo. Bring up the possible detrimental effects of video games with a gamer, and you’ll almost certainly hear the word harmless.

These are the design futures that I think about. What do you think?


A paralyzing electromagnetic laser: future possibility or sheer fantasy?

In episode 134, the Techman is paralyzed, lifted off the ground, and thumped back to the floor. Whether it’s electrostatic, electromagnetic, or superconductor electricity reduced to a hand-held device, the concept seems valid, especially 144 years from now. Part of my challenge is to make this design fiction logical by pulling threads of current research and technology to extrapolate possible futures. Mind you, it’s not a prediction, but a possibility. Here is my thinking:

Keiji’s weapon assumes that at least four technologies come together sometime in the next 14 decades. Safe bet? To start with, the beam has to penetrate the door and significantly stun the subject. This idea is not that far-fetched. Weapons like this are already on the drawing board. For instance, the military is currently working on something called laser-guided directed-energy weapons. They work like “artificial lightning” to disable human targets. According to Defense Update,

“Laser-Induced Plasma Channel (LIPC) technology was developed by Ionatron to channel electrical energy through the air at the target. The interaction of the air and laser light at specific wavelength, causes light to break into filaments, which form a plasma channel that conducts the energy like a virtual wire. This technology can be adjusted for non-lethal or lethal use.”

The imaginative leap here is that the beam can penetrate the wall to find its target. Given the other advancements, I feel reasonably safe stretching on this one.

LIPC at work.

Next, you have to get the subject off the ground. Lifting a 200-pound human would require at least two technologies assisted by a third. First is a levitating superconductor, which uses electric current to produce magnetic forces that could counter the force of gravity. According to physics.org:

“Like frogs, humans are about two-thirds water, so if you had a big enough Bitter electromagnet, there’s no reason why a human couldn’t be levitated diamagnetically. None of the frogs that have taken part in the diamagnetic levitation experiments have experienced any adverse effects, which bodes well for any future human guinea pigs.”

The other ingredient is an extremely powerful magnet. Given a few decades of refinement and miniaturization, it’s conceivable that a superconducting magnet could produce magnetic forces counter to the force of gravity.1
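Just to give a sense of scale, here is a quick back-of-the-envelope sketch of what “big enough” means. The numbers are my own textbook approximations (water’s magnetic susceptibility, body density), not figures from physics.org, and the relation is the standard condition for diamagnetic levitation:

```python
# Rough estimate of the field needed to levitate a (mostly water) human
# diamagnetically. Levitation condition: B * dB/dz = mu0 * rho * g / |chi|.
# All values are approximate textbook numbers, not from the cited article.

MU0 = 4e-7 * 3.141592653589793   # vacuum permeability (T*m/A)
RHO = 1000.0                     # density of water (kg/m^3)
G = 9.81                         # gravitational acceleration (m/s^2)
CHI = 9.0e-6                     # magnitude of water's magnetic susceptibility

field_gradient_product = MU0 * RHO * G / CHI   # in T^2 per meter
print(f"Required B * dB/dz ~ {field_gradient_product:,.0f} T^2/m")
# ~1,400 T^2/m -- the reason the famous frog experiment needed a
# room-sized 16-tesla Bitter electromagnet.
```

Squeezing that into something handheld is exactly the kind of leap the next 144 years of the story would have to deliver.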

The final component would be a power source small enough to fit inside the weapon and carrying enough juice to generate the plasma and the magnetic field for at least fifteen seconds. Today, you can buy a million-volt stun device on Amazon.com for around $50, and thyristor semiconductor technology could help ramp up the power surge necessary to sustain the arc. Obviously, I’m not an engineer, but if you are, please feel free to chime in.
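And since I’ve asked engineers to chime in, here is my own rough guess at the energy budget. Every number below is an assumption on my part (a megawatt-class magnet, today’s lithium-ion energy density), not a figure from any of the sources:

```python
# Back-of-the-envelope energy budget for a fifteen-second levitating pulse.
# Assumed values only: levitation-class electromagnets today draw on the
# order of megawatts, and lithium-ion packs store roughly 0.7 MJ per kg.

MAGNET_POWER_W = 1e6      # assumed draw of a levitation-class magnet (watts)
PULSE_SECONDS = 15        # duration from the episode
BATTERY_MJ_PER_KG = 0.7   # rough energy density of current lithium-ion cells

energy_mj = MAGNET_POWER_W * PULSE_SECONDS / 1e6   # megajoules
battery_kg = energy_mj / BATTERY_MJ_PER_KG

print(f"Energy needed: ~{energy_mj:.0f} MJ")
print(f"Battery mass at today's density: ~{battery_kg:.0f} kg")
# ~15 MJ and ~20 kg of batteries -- nowhere near handheld today, which is
# the gap the weapon's power source would have to close.
```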

1. http://helios.gsfc.nasa.gov/qa_gp_elm.html


What could happen.

1.  about last week

I’ll be the first to acknowledge that my blog last week was a bit depressing. However, if I thought the situation was hopeless, I wouldn’t be doing this in the first place. I believe we have to acknowledge our uncanny ability to foul things up and, as best we can, design the gates and barriers into new technology to help prevent its abuse. And even though it may seem that way sometimes, I am not a technology pessimist or a purely dystopian futurist. In truth, I’m tremendously excited about a plethora of new technologies and what they promise for the future.

2.  see the future

Also last week (by way of asiaone.com), Dr. Michio Kaku, speaking in Singapore, served up this future for the next 50 years.

“Imagine buying things just by blinking. Imagine doctors making an artificial heart for you within 20 hours. Imagine a world where garbage costs more than computer chips.”

Personally, I believe he’s too conservative. I see it happening much sooner. Kaku is one of a handful of famous futurists, and his “predictions” have a lot of science behind them. So who am I to argue with him? He’s a brilliant scientist, prolific author, and educator. Most futurists or forecasters will be the first to tell you that their futures are not predictions but rather possible futures. According to forecaster Paul Saffo, “The goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.”1

According to Saffo “… little is certain, nothing is preordained, and what we do in the present affects how events unfold, often in significant, unexpected ways.”

Though my work is design fiction, I agree with Saffo. We both look at the future the same way. The objective behind my fictions is to jar us into thinking about the future so that it doesn’t surprise us. The more our global citizenry thinks about the future and how it may impact them, the more likely they are to get involved. At least that is my hope. That is why I look for design fictions that will break out of the academy or the gallery show and seep into popular culture. The future needs to be an inclusive conversation.

Of course, the future is a broad topic: it impacts everything and everyone. So much of what we take for granted today could be entirely different—possibly even unrecognizable—tomorrow. Food, medicine, commerce, communication, privacy, security, entertainment, transportation, education, and jobs are just a few of the enormously important areas for potentially radical change. Saffo and Kaku don’t know what the future will bring any more than I do. We just look at what it could bring. I tend to approach it from the perspective of “What could go wrong?” Others take a more balanced view, and some look only at the positives. It is these perspectives that create the dialog and debate, which is what they are supposed to do. We also have to be careful that we don’t see these opinions as fact. Ray Kurzweil sees the equivalent of 20,000 years of change packed into the 21st century. Kaku (from the article mentioned above) sees computers being relegated to the

“‘dull, dangerous and dirty’ jobs that are repetitive, such as punching in data, assembling cars and any activity involving middlemen who do not contribute insights, analyses or gossip.’ To be employable, he stresses, you now have to excel in two areas: common sense and pattern recognition. Professionals such as doctors, lawyers and engineers who make value judgments will continue to thrive, as will gardeners, policemen, construction workers and garbage collectors.”

Looks like Michio and I disagree again. The whole idea behind artificial intelligence is predictive algorithms that use big data to learn. Machine learning programs detect patterns in data and adjust program actions accordingly.2 Diagnosing illnesses, advising humans on potential behaviors, analyzing soil, site conditions, and limitations, or even collecting trash are well within the realm of artificial intelligence. I see these jobs every bit as vulnerable as those of assembly line workers.
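To make the “detect patterns, then adjust actions” loop concrete, here is a toy sketch using an off-the-shelf machine learning library. The tiny dataset and the “flag for review” scenario are invented for illustration; a real diagnostic system would learn from millions of records, but the loop is the same:

```python
# Toy illustration of "detect patterns in data, adjust program actions."
# The data and feature meanings are invented for this example.
from sklearn.tree import DecisionTreeClassifier

# Each row: [resting heart rate, systolic blood pressure]; label 1 = flag for review
X = [[62, 115], [70, 120], [88, 150], [95, 160], [58, 110], [91, 155]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)   # learn the pattern from past cases
print(model.predict([[90, 148]]))            # act on a new case -> likely [1]
```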

That, of course, is all part of the discussion that we need to have.

 

1. Harvard Business Review | July–August 2007 | hbr.org
2. http://www.machinelearningalgorithms.com

The ultimate wild card.

 

One of the things that futurists do when they imagine what might happen down the road is to factor in the wild card. Setting aside the sports and movie references, a wild card is defined by dictionary.com as: “… of, being, or including an unpredictable or unproven element, person, item, etc.” One might use this term to say, “Barring a wild card event like a meteor strike, global thermonuclear war, or a massive earthquake, we can expect Earth’s population to grow by (x) percent.”

The thing about wild card events is that they do happen. 9/11 could be considered a wild card. Chernobyl, Fukushima, and Katrina would also fall into this category. At the core, they are unpredictable, and their effects are widespread. There are think tanks that work on the probabilities of these occurrences and then play with scenarios for addressing them.

I’m not sure what to call something that would be entirely predictable but that we still choose to ignore. Here I will go with a quote:

“The depravity of man is at once the most empirically verifiable reality but at the same time the most intellectually resisted fact.”

― Malcolm Muggeridge

Some will discount this automatically because the depravity of man refers to the Christian theology that without God, our nature is hopeless. Or as Jeremiah would say, our heart is “deceitful and desperately wicked” (Jeremiah 17:9).

If you don’t believe in that, then maybe you are willing to accept the more secular notion that man can be desperately stupid. To me, humanity’s uncanny ability to foul things up is the recurring (not-so) wild card. It makes all new science as much a potential disaster as it might be a panacea. We don’t consider it often enough. If we look back through my previous blogs, from transhumanism to genetic design, this threat looms large. You can call me a pessimist if you want, but the video link below stands as a perfect example of my point. It is a compilation of all the nuclear tests, atmospheric, underground, and underwater, since 1945. Some of you might think that after a few tests and the big bombs during WWII we decided to keep a lid on the insanity. Nope.

If you can watch the whole thing without sinking into total depression and reaching for the Clorox, you’re stronger than I am. And, sadly, it continues. We might ask how we have survived this long.


Enter the flaw.

 

I promised a drone update this week, but by now it is probably already old news. It’s a safe bet that there are a few thousand more drones than last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.

And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, and the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match nicely with his curve. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age, and eclectic mixes that are difficult to tell from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robotics as clunky mechanized brutes. AI can create. Even though it’s studying an archive of man-made creations, the resulting work is unique.

First it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
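Kulitta itself is a far more sophisticated system (and is written in Haskell), so what follows is only my simplified sketch of that learn, structure, populate, and render pipeline. The corpus and chord choices are made up for illustration; this is not Kulitta’s actual code or API:

```python
# A drastically simplified, runnable sketch of the pipeline described above:
# learn from a corpus -> abstract structure -> chords -> a concrete piece.
import random

corpus = [["C", "F", "G", "C"], ["C", "Am", "F", "G"], ["Am", "F", "C", "G"]]

# 1. Learn: record which chord tends to follow which in the corpus.
transitions = {}
for piece in corpus:
    for a, b in zip(piece, piece[1:]):
        transitions.setdefault(a, []).append(b)

# 2. Generate an abstract structure (a simple verse/chorus form).
structure = ["A", "A", "B", "A"]

# 3. Populate the structure with chords drawn from the learned transitions.
def phrase(start, length=4):
    chords = [start]
    while len(chords) < length:
        chords.append(random.choice(transitions.get(chords[-1], [start])))
    return chords

sections = {"A": phrase("C"), "B": phrase("Am")}

# 4. Massage into the final framework: a chord sequence nobody has heard before.
print([chord for section in structure for chord in sections[section]])
```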

The creator of Kulitta, Donya Quick, says that this will not put composers out of a job; it will help them do their jobs better. She doesn’t say exactly how.

If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?

The article continues:

Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.

The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”

Sadly, I think not. The sweat and blood that composers and artists pour into their compositions could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once the AI learns the behaviors and personalities of the characters in The Lightstream Chronicles, it can create new episodes without me. Taking characters and settings that already exist as CG constructs, it’s not a stretch that it will be able to generate the wireframes, render the images, and lay out the panels.

Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?

When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?

There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.


Meddling with the primal forces of nature.

 

 

One of the more ominous articles of recent weeks came from WIRED magazine in a piece about the proliferation of DNA editing. The story is rich with technical talk, and it gets bogged down in places, but essentially it is about a group of scientists who are concerned about the Pandora’s Box they may have created with something called Crispr-Cas9, or Crispr for short. Foreseeing this as far back as 1975, the group thought it wise to establish “guidelines” for what biologists could and could not do; things like creating pathogens and mutations that could be passed on from generation to generation — maybe even in humans — were on the list of concerns. It all seemed very far off back in the 70s, but not anymore. According to WIRED writer Amy Maxmen,

“Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people.”

Maxmen states that startups are launching with Crispr as their focus. Two quotes that I have used excessively come to mind. First, Tobias Revell: “Someone, somewhere in a lab is playing with your future.”1. Next, from a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” 2.

And so, we play catch-up. From the WIRED article:

“It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.”

The most disconcerting part of all this, to me, is that now, before the rules exist, even the smallest breach in protocol could unleash repercussions of Biblical proportions. Everything from killer mosquitoes and flying spiders to horrific mutations and pandemics is up for grabs.

We’re not even close to ready for this. Don’t tell me that it could eradicate AIDS or Huntington’s disease. That is the banner that gets paraded out whenever a new technology rears its head over the horizon.

“Now, with less than $100, an ordinary arachnologist can snip the wing gene out of a spider embryo and see what happens when that spider matures.”

From the movie “Splice”. Sometimes bad movies can be the most prophetic.

It is time to get the public involved in these issues, whether through grass-roots efforts or persistence with their elected officials, to spearhead some legislation.

“…straight-out editing of a human embryo sets off all sorts of alarms, both in terms of ethics and legality. It contravenes the policies of the US National Institutes of Health, and in spirit at least runs counter to the United Nations’ Universal Declaration on the Human Genome and Human Rights. (Of course, when the US government said it wouldn’t fund research on human embryonic stem cells, private entities raised millions of dollars to do it themselves.) Engineered humans are a ways off—but nobody thinks they’re science fiction anymore.”

Maxmen interviewed Harvard geneticist George Church. In the article’s closing passage,

“When I ask Church for his most nightmarish Crispr scenario, he mutters something about weapons and then stops short. He says he hopes to take the specifics of the idea, whatever it is, to his grave. But thousands of other scientists are working on Crispr. Not all of them will be as cautious. “You can’t stop science from progressing,” Jinek says. “Science is what it is.” He’s right. Science gives people power. And power is unpredictable.”

Who do you trust?

 

 

1. Critical Exploits. Performed by Tobias Revell. YouTube. January 28, 2014. Accessed February 14, 2014. http://www.youtube.com/watch?v=jlpq9M1VELU#t=364.
2. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.