I suppose that makes perfect sense. A corporation is an accountability sink for owners, board members and executives, so why not also make AI accountable?
I was thinking more along the lines of the “human in the loop” model for AI, where one human is held responsible for everything the AI gets wrong, despite it being physically impossible to review every line of code an AI produces.