With true A.I., I believe researchers already know to be cautious as they move forward. I believe they will try to hardwire it with morality and a respect for human life. At the very least, you should be able to communicate with an A.I. using logic and reason.
With stupid autonomous machines, the same precautions might not be taken. While we still don't fully understand human-style intelligence, we seem to understand swarm intelligence even less.
Take, for example, something like a termite mound. Individually, each termite is stupid, following only a few basic operations. Yet with thousands of termites all following those same operations, the colony achieves a level of complexity and destructiveness you could never predict from examining a single insect.
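To make the termite idea concrete, here is a minimal sketch of the classic "termites and wood chips" toy model (popularized by agent-based simulation tools like NetLogo; the grid size, rule details, and function name below are my own illustrative choices, not from the original text). Each simulated termite follows just two rules: pick up a chip it stumbles onto, and drop it next to another chip. No termite knows about piles, yet piles emerge.

```python
import random

def simulate_termites(size=20, n_chips=80, n_termites=10, steps=20000, seed=42):
    """Toy swarm model: termites random-walk on a wrapping grid, pick up
    chips they step on, and drop them beside other chips. No individual
    plans a pile, but chips gradually cluster anyway."""
    rng = random.Random(seed)
    chips = set()
    while len(chips) < n_chips:  # scatter chips at distinct cells
        chips.add((rng.randrange(size), rng.randrange(size)))
    # each termite: [x, y, carrying_a_chip]
    termites = [[rng.randrange(size), rng.randrange(size), False]
                for _ in range(n_termites)]

    def neighbors(x, y):
        # four adjacent cells, wrapping around the grid edges
        return [((x + dx) % size, (y + dy) % size)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for _ in range(steps):
        for t in termites:
            t[0], t[1] = rng.choice(neighbors(t[0], t[1]))  # random step
            pos = (t[0], t[1])
            if not t[2] and pos in chips:
                chips.discard(pos)   # rule 1: pick up a chip you step on
                t[2] = True
            elif t[2] and pos not in chips and any(n in chips
                                                   for n in neighbors(*pos)):
                chips.add(pos)       # rule 2: drop it beside another chip
                t[2] = False

    carried = sum(1 for t in termites if t[2])
    return chips, carried
```

Running this long enough turns a random scatter of chips into a few dense clumps, which is the point of the analogy: the interesting (or dangerous) behavior lives in the interaction of many simple agents, not in any one agent's rules.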
Imagine a company building an autonomous machine with what they think are a few harmless basic commands to take care of a simple job. They may fail to anticipate the high orders of complexity that will emerge once they have tens of thousands of identical machines working at the same time.
I believe it is highly improbable that humanity will be seriously harmed by our own robots. That said, if harm does come, I think a swarm of mindless recycling nanobots that unwittingly begins taking apart humans to refine our iron is a more likely risk than the emergence of a Skynet-style evil A.I. overlord.
True tragedies more often occur because of an unforeseen set of careless mistakes, not as part of a specific plan. If A.I. is ever created, it will be part of a very conscious and focused effort, but an out-of-control swarm is something that could just emerge from a set of seemingly unimportant oversights.
I think the upcoming remake of the classic movie The Blob could be an interesting popular-culture vehicle for discussing this concern. Instead of making the Blob an alien life form, the filmmakers could make it an out-of-control stew of nanobots.
* Sergkorn at en.wikipedia [CC BY-SA 3.0 or GFDL], from Wikimedia Commons