The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to build AI ethically. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda. Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Notably, the worryingly low figures aren’t due to a lack of interest: nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula.

The AI ethics pipeline

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption. Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place. The study authors warned that this could have far-reaching consequences.

The survey also revealed concerns around the security of open-source tools, gaps in business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors. While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.