Monday, May 13, 2019

Should We Make AI Explain Its Decisions?


Neural nets, data sources and data points are topics on this episode of The AI Minute. For more on Artificial Intelligence: https://voicesinai.com https://gigaom.com https://byronreese.com https://amzn.to/2vgENbn...

Transcript: Do you have the right to an explanation of why an artificial intelligence made the suggestion or choice that it did? This will become an ever more contentious issue. There is a provision before the EU, already adopted in France, that requires companies using AI to make determinations about customers to be able to offer an explanation of why a given choice was made. This seems very reasonable in cases having to do with the pricing of life insurance, or with whether you get declined for a car loan, but as artificial intelligence becomes more complicated, it may be incredibly difficult to tease "the why" out of the data. When a neural net, for instance, uses hundreds or thousands of different data sources across billions and billions of data points, it comes to a conclusion whose explanation may be beyond human comprehension. So the question becomes: do we decide that these are not legitimate uses of artificial intelligence, or do we simply accept that there will be some things the AI chooses that we will never understand?
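To give a sense of what "teasing the why out of the data" looks like in practice, here is a minimal sketch (not from the episode) of one common approximation: permutation feature importance on a black-box classifier. The synthetic loan-style dataset and the choice of a random forest are illustrative assumptions, not anything the transcript describes.

```python
# A minimal, illustrative sketch of approximating "the why" behind a
# black-box model with permutation importance. The data and model choice
# are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for something like a loan-approval dataset:
# 20 opaque numeric features, binary approve/decline label.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box model: accurate, but its internal logic is not human-readable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's test accuracy drops. Bigger drops suggest the feature mattered more.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```

Note that even this only yields a global ranking of which inputs mattered on average, not a reason for any single customer's outcome, which is exactly the gap a right-to-explanation rule like the one mentioned above would expose.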
