To regulate or not to regulate AI? This seems to be the pressing question many governments are grappling with. On one hand, some argue that we must regulate, and quickly, if we hope to end up on the right side of these new technological developments. On the other, some argue that regulating early, while the technologies involved are still ill defined, will stifle innovation and progress.

In fact, we would argue that early regulation can actually increase the pace of positive and sustainable innovation. By regulating and creating governance structures for these technologies, we create the practical environments in which they can be adopted, learn, adapt, and ultimately mature in a way that is ready for the real world. The alternative is that emerging technologies sometimes develop in dark corners, often devoid of ethical standards or of insight into what future-ready and sustainable development should look like.

Here’s an excerpt from an article that addresses this dilemma in the context of Capitol Hill and the United States:

“The intense focus on these foundational questions threatens to obscure, however, a key point: AI is already subject to regulation in many ways, and, even while the broader debates about AI continue, additional regulations look sure to follow. These regulations aren’t the sort of broad principles that Musk and Hawking urge and AI advocates fear: There’s nothing on the books as dramatic as ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm,’ the first of Isaac Asimov’s famed three laws of robotics.”

Read the complete article here: Should the Government Regulate AI? It Already Is