Other News, Commentary and Research

  • AI Is Now So Complex Its Creators Can’t Trust Why It Makes Decisions
    "Artificial intelligence is seeping into every nook and cranny of modern life….But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: The programmers that built it don’t know why AI makes one decision over another."
    Quartz
    12/07/17
  • Artificial Intelligence Seeks an Ethical Conscience
    "Concern about the potential downsides of more powerful AI [one being] even when machine-learning systems are programmed to be blind to race or gender, for example, they may use other signals in data such as the location of a person’s home as a proxy for it."
    Wired
    12/07/17
  • A.I. Will Serve Humans—But Only About 1% of Them
    "Right now, while artificial intelligence is focusing on profit-generation, natural intelligence has proven to be more than up to the task of manipulating it, as if sneaking up behind someone distracted by a shiny object. We’re coming to understand just how adroitly AI can be played as we learn more and more about Russia’s manipulation of social media during the 2016 presidential election."
    BigThink.com
    11/15/17
  • The Brutal Fight to Mine Your Data and Sell It to Your Boss
    "Tomorrow’s monopolies won’t be able to be measured just by how much they sell us. They’ll be based on how much they know about us and how much better they can predict our behavior than competitors."
    Bloomberg Businessweek
    11/15/17
  • Artificial Intelligence May Reflect the Unfair World We Live In
    "[A] plethora of evidence suggests that AI has developed a system biased against racial minorities and women…. Factors like these suggest that human bias against race and gender has been transferred by AI professionals into machine intelligence. The result is that AI systems are being trained to reflect the general opinions, prejudices and assumptions of their creators, particularly in the fields of lending and finance."
    Entrepreneur
    11/14/17
  • Chief Scientist Alan Finkel Calls For Artificial Intelligence Regulation
    "The rules have to evolve for all AI and they have to be enforced. In humanity 2.0, there are consequences for breaking the rules ... So too in AI 2.0, we need rules and consequences for breaking them. '[But] we don't want a total ban. We don't want a free-for-all. We want a forward-looking regulatory regime, with clear expectations, that gives consumers confidence that we can roll out the technology safely.'"
    Financial Review
    11/14/17
  • Algorithms With Minds of Their Own
    "A better solution is to make artificial intelligence accountable. The concepts of accountability and transparency are sometimes conflated, but the former does not involve disclosure of a system’s inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility."
    The Wall Street Journal
    11/12/17
  • Democracy Needs a Reboot for the Age of Artificial Intelligence
    "We desperately need new policies, regulations, and safety nets for those displaced by machines. With computing power accelerating exponentially, the scale of AI’s significance is still not being fully internalized…. The decisions will be fair only if the data is unbiased, and we don’t have to look too far to be reminded that our world, and therefore our data, is far from even-handed."
    The Nation
    11/08/17
  • Once Considered a Boon to Democracy, Social Media Have Started to Look Like Its Nemesis
    "In 2010 Wael Ghonim, an entrepreneur and fellow at Harvard University, was one administrator of a Facebook page called 'We are all Khaled Saeed,' which helped spark the Egyptian uprising centred on Tahrir Square. 'We wanted democracy,' he says today, 'but got mobocracy.' Fake news spread on social media is one of the 'biggest political problems facing leaders around the world,' says Jim Messina. Governments simply do not know how to deal with this—except, that is, for those that embrace it."
    The Economist
    11/04/17
  • OECD Digital Economy Outlook 2017
    "The effects of the digital transformation manifest in job destruction and creation in different sectors, the emergence of new forms of work, and a reshaping trade landscape, in particular for services."
    OECD Publishing
    10/30/17
  • What Should Governments Be Doing About the Rise of Artificial Intelligence?
    "When it comes to what governments should be doing, there was an implied agreement that they should be enabling AI to be used for their obvious benefits to society. This has to be balanced by minimising the risks of the increased collection of personal data and also the risks of how the AI is actually using that data."
    TheConversation.com
    10/30/17
  • Will Modern Luddites Smash Tech’s Future?
    "But the real driver of unrest is the speed with which these changes are occurring and the sheer scale of new technologies entering commercial use. The ongoing revolution in computing, in which core components continue to get faster, cheaper and smaller, invariably accelerates stress on individuals and institutions. Twenty years ago, I referred to that phenomenon as The Law of Disruption: technology changes exponentially, while humans change incrementally."
    The Washington Post
    10/30/17
  • Business Vows to Retrain 1m Workers in Digital Push
    "British industry will pledge to retrain 1m workers within five years, in exchange for a government-backed, national plan to promote the adoption of digital technologies across the manufacturing sector."
    Financial Times
    10/29/17
  • What’s Behind the Hype About Artificial Intelligence?
    "I think AI systems have still not figured out to do unsupervised learning well, or learned how to train on a very limited amount of data, or train without a lot of human intervention. That is going to be the main thing that continues to remain difficult. None of the recent research have shown a lot of progress here."
    Knowledge@Wharton
    10/27/17
  • Digital Transformation Will Be Dramatic and Painful
    "The only thing that is certain is that people will need to take more responsibility for their income through the creation of small and micro-businesses where they are in control. Employers can play a huge role by providing start-up training and mentoring through the early phases of business creation for employees they are replacing with technology."
    Financial Times
    10/26/17
  • It’s Time for More Transparency in A.I.
    "A black-box algorithm is one in which you only know the inputs and outputs. How the A.I., in this case, comes to its conclusions is a mystery to anyone not involved in its development. As Quartz points out, the application of these types of algorithms in the public sector raises questions when it comes to our right to 'due process.'"
    Slate.com
    10/24/17
  • OECD: Artificial Intelligence: Why a Global Dialogue Is Critical
    "There is no stopping the evolution and rise of artificial intelligence. That shouldn’t even be the goal. Rather, policy makers, regulators, business leaders, AI researchers and the public should be asking what sort of framework is needed to promote the ethical development of artificial intelligence and safeguard against potential abuses."
    OECD
    10/24/17
  • Tech Companies Pledge to Use Artificial Intelligence Responsibly
    "The Information Technology Industry Council — a DC-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple— is today releasing principles for developing ethical artificial intelligence systems."
    Axios
    10/24/17
  • DeepMind Computer Teaches Itself to Become World's Best Go Player
    "'It learns to play simply by playing games against itself, starting from completely random play,' said Demis Hassabis, DeepMind chief executive. 'In doing so, it quickly surpassed human level of play and defeated the previously published version of AlphaGo by 100 games to zero.'"
    Financial Times
    10/18/17
  • Protecting Against AI’s Existential Threat
    "AI tasked with maximizing profits for a corporation—and given the seemingly innocuous ability to post things on the internet—could deliberately cause political upheaval in order to benefit from a change in commodity prices. Why would humans allow this? They wouldn’t. But AI will require more autonomy, and these systems will achieve their goals at speeds limited only by their computational resources—speeds that will likely exceed the capacity of human oversight."
    The Wall Street Journal
    10/18/17