Other News, Commentary and Research

  • AI Is Now So Complex Its Creators Can’t Trust Why It Makes Decisions
    "Artificial intelligence is seeping into every nook and cranny of modern life….But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: The programmers that built it don’t know why AI makes one decision over another."
    Quartz
    12/07/17
  • Artificial Intelligence Seeks an Ethical Conscience
    "Concern about the potential downsides of more powerful AI [one being] even when machine-learning systems are programmed to be blind to race or gender, for example, they may use other signals in data such as the location of a person’s home as a proxy for it."
    Wired
    12/07/17
  • The Dark Side Of Artificial Intelligence
    "I strongly believe that any algorithm making decisions about opportunities that affect people’s lives requires a methodological design and testing process to ensure that it is truly bias-free. Because when you employ AI itself to remove bias from an algorithm, the results can be extraordinary."
    Forbes
    12/05/17
  • Can A.I. Be Taught to Explain Itself?
    "It wasn’t just that a clever graph indicating the best choice wasn’t the same as explaining why that choice was correct. The analyst was pointing to a legal and ethical motivation for explainability: Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen."
    The New York Times
    11/21/17
  • The Brutal Fight to Mine Your Data and Sell It to Your Boss
    "Tomorrow’s monopolies won’t be able to be measured just by how much they sell us. They’ll be based on how much they know about us and how much better they can predict our behavior than competitors."
    Bloomberg Businessweek
    11/15/17
  • Artificial Intelligence May Reflect the Unfair World We Live In
    "[A] plethora of evidence suggests that AI has developed a system biased against racial minorities and women…. Factors like these suggest that human bias against race and gender has been transferred by AI professionals into machine intelligence. The result is that AI systems are being trained to reflect the general opinions, prejudices and assumptions of their creators, particularly in the fields of lending and finance."
    Entrepreneur
    11/14/17
  • Chief Scientist Alan Finkel Calls For Artificial Intelligence Regulation
    "The rules have to evolve for all AI and they have to be enforced. In humanity 2.0, there are consequences for breaking the rules ... So too in AI 2.0, we need rules and consequences for breaking them. '[But] we don't want a total ban. We don't want a free-for-all. We want a forward-looking regulatory regime, with clear expectations, that gives consumers confidence that we can roll out the technology safely.'"
    Financial Review
    11/14/17
  • Democracy Needs a Reboot for the Age of Artificial Intelligence
    "We desperately need new policies, regulations, and safety nets for those displaced by machines. With computing power accelerating exponentially, the scale of AI’s significance is still not being fully internalized…. The decisions will be fair only if the data is unbiased, and we don’t have to look too far to be reminded that our world, and therefore our data, is far from even-handed."
    The Nation
    11/08/17
  • Robots with Artificial Intelligence Become Racist And Sexist—Scientists Think They've Found a Way to Change Their Minds
    "Tay demonstrated an important issue with machine learning artificial intelligence: That robots can be as racist, sexist and prejudiced as humans if they acquire knowledge from text written by humans. Fortunately, scientists may now have discovered a way to better understand the decision-making process of artificial intelligence algorithms to prevent such bias."
    Newsweek
    10/26/17
  • It’s Time for More Transparency in A.I.
    "A black-box algorithm is one in which you only know the inputs and outputs. How the A.I., in this case, comes to its conclusions is a mystery to anyone not involved in its development. As Quartz points out, the application of these types of algorithms in the public sector raises questions when it comes to our right to 'due process.'"
    Slate.com
    10/24/17
  • Tech Companies Pledge to Use Artificial Intelligence Responsibly
    "The Information Technology Industry Council — a DC-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple— is today releasing principles for developing ethical artificial intelligence systems."
    Axios
    10/24/17
  • Protecting Against AI’s Existential Threat
    "AI tasked with maximizing profits for a corporation—and given the seemingly innocuous ability to post things on the internet—could deliberately cause political upheaval in order to benefit from a change in commodity prices. Why would humans allow this? They wouldn’t. But AI will require more autonomy, and these systems will achieve their goals at speeds limited only by their computational resources—speeds that will likely exceed the capacity of human oversight."
    The Wall Street Journal
    10/18/17
  • Artificial Intelligence—With Very Real Biases
    "But unlike our water systems, there are no established methods to test AI for safety, fairness or effectiveness. Error-prone or biased artificial-intelligence systems have the potential to taint our social ecosystem in ways that are initially hard to detect, harmful in the long term and expensive—or even impossible—to reverse."
    The Wall Street Journal
    10/17/17
  • Your Next Job Interview Could Be With a Robot
    "Most job applicants know their resumes will probably be scanned by a computer before a human recruiter lays eyes on it. But they might be surprised to learn that their first interview could be with or judged by a computer."
    San Francisco Chronicle
    10/17/17
  • Prepare to Meet the Robot Recruiters
    "Recruiters are struggling to handle much higher volumes of curriculum vitaes than ever before. It is partly because people are changing jobs more often and partly because the internet has made it so much easier for applicants to apply to a lot of jobs at once. The flood of CVs is particularly severe for hourly wage jobs, such as in warehouses or retail stores, where a company might need to hire hundreds or even thousands of workers ahead of a busy season."
    Financial Times
    10/15/17
  • Forget Killer Robots—Bias Is the Real AI Danger
    "Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it."
    MIT Technology Review
    10/03/17
  • Biased Algorithms Are Everywhere, and No One Seems to Care
    "Opaque and potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem."
    MIT Technology Review
    07/12/17
  • Inspecting Algorithms for Bias
    "As one specifically unsettling study showed, parole boards were more likely to free convicts if the judges had just had a meal break. This probably had never occurred to the judges."
    MIT Technology Review
    06/12/17
  • Artificial Intelligence Can Be Sexist and Racist Just Like Humans
    "Many of us like to think that artificial intelligence could help eradicate biases, that algorithms could help humans avoid hiring or policing according to gender or race-related stereotypes. But a new study suggests that when computers acquire knowledge from text written by humans, they also replicate the same racial and gender prejudices—thus perpetuating them."
    Newsweek
    04/14/17
  • Semantics Derived Automatically from Language Corpora Contain Human-Like Biases
    "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases….Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect "
    Science Magazine
    04/14/17
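
The Science study in the last item above measures bias by comparing how strongly target words (for example, occupation terms) associate with contrasting attribute words (for example, male versus female terms) in a word-embedding space. The toy sketch below illustrates that differential-association idea only; the vectors, word lists and the association helper are invented for this example and are not the authors' code, data or exact test statistic.

    # Illustrative only: a toy version of the differential-association idea
    # described in the Science paper. The 3-d "embeddings" are made-up values;
    # real studies use vectors trained on large text corpora (e.g. GloVe).
    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(word, attr_a, attr_b, emb):
        # Mean similarity of `word` to attribute set A minus attribute set B.
        sim_a = np.mean([cosine(emb[word], emb[a]) for a in attr_a])
        sim_b = np.mean([cosine(emb[word], emb[b]) for b in attr_b])
        return sim_a - sim_b

    emb = {
        "engineer": np.array([0.9, 0.1, 0.2]),
        "nurse":    np.array([0.1, 0.9, 0.3]),
        "he":       np.array([0.8, 0.2, 0.1]),
        "man":      np.array([0.9, 0.2, 0.2]),
        "she":      np.array([0.2, 0.8, 0.2]),
        "woman":    np.array([0.1, 0.9, 0.1]),
    }

    male, female = ["he", "man"], ["she", "woman"]
    for target in ("engineer", "nurse"):
        print(target, association(target, male, female, emb))

With embeddings trained on real text corpora rather than toy values, the paper reports that this kind of differential association recovers the same race- and gender-related associations documented in human implicit-association tests.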