Another year, another set of predictions. You can see how our experts did last year by clicking here. You can see how our experts did this year by building a time machine and traveling to the future. Happy Holidays! Without further ado, let's dive in and see what the pros think will happen in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

2021 will be the year of explainability. As organizations integrate AI, explainability will become a major part of ML pipelines, because it establishes trust with users. Understanding how machine learning reasons against real-world data helps build trust between people and models. Without understanding a model's outputs and decision processes, there will never be true confidence in AI-enabled decision-making, so explainability will be critical in moving into the next phase of AI adoption.

The combination of explainability and new training approaches initially designed to deal with adversarial attacks will lead to a revolution in the field. Explainability can reveal which data influenced a model's prediction and where bias enters, information that can then be used to train robust models that are more trusted, reliable, and hardened against attacks. This tactical knowledge of how a model operates will improve model quality and security as a whole. AI scientists will redefine model performance to encompass not only prediction accuracy but also lack of bias, robustness, and strong generalizability to unforeseen environmental changes.
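None of these predictions name a specific toolchain, but as a minimal sketch of the kind of explainability Rahnama describes, the snippet below uses permutation importance (a standard model-agnostic technique, here via scikit-learn) to see which input features a trained model actually relies on. The dataset and feature names are synthetic stand-ins invented for this example.

```python
# Minimal sketch: which features drive a model's predictions?
# The data and feature names below are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real-world tabular data
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy. A large drop means the model leans on that feature,
# a first, model-agnostic step toward explaining (and auditing) its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, drop in ranked:
    print(f"{name}: mean accuracy drop {drop:.3f}")
```

This gives only a coarse, global view, but the same probe-and-measure idea underlies many of the per-prediction attribution methods used to audit bias and robustness.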
Dr. Kim Duffy, Life Science Product Manager at Vicon:

Over the last few years, we have seen more literature presenting AI and ML work in gait analysis, and I believe this will continue into 2021. We may also see more clinicians, biomechanists, and researchers adopting machine learning approaches during data analysis, with more collaborations between clinical and research groups to develop algorithms that automatically interpret gait data. Ultimately, these algorithms may help propose interventions in the clinical space sooner. It is unlikely we will see the true benefits and effects of machine learning in 2021; instead, we'll see more adoption and consideration of this approach when processing gait data. For example, the presidents of Gait and Posture's affiliate society provided a perspective on the clinical impact of instrumented motion analysis in the journal's latest issue, emphasizing the need to apply methods such as ML to big data in order to build stronger evidence for the efficacy of instrumented gait analysis. This would also improve understanding and reduce subjectivity in clinical decision-making based on instrumented gait analysis. We're also seeing more credible endorsements of AI/ML, such as from the Gait and Clinical Movement Analysis Society, which will encourage further adoption by the clinical community moving forward.

Joe Petro, CTO of Nuance Communications:

With AI permeating nearly every aspect of technology, there will be an increased focus on ethics and on deeply understanding the implications of AI in producing unintentional, consequential bias. Consumers will become more aware of their digital footprint and of how their personal data is leveraged across systems, industries, and the brands they interact with. In turn, companies partnering with AI vendors will apply greater rigor and scrutiny to how their customers' data is used and whether it is monetized by third parties.

Dr. Max Versace, CEO and Co-Founder, Neurala:

Humans will turn their attention to "why" AI makes the decisions it makes. The explainability of AI has often been discussed in the context of bias and other ethical challenges, but as AI comes of age, grows more precise and reliable, and finds more applications in real-world scenarios, people will start to question the "why." The reason is trust: humans are reluctant to give power to automatic systems they do not fully understand. In manufacturing settings, for instance, AI will need to be not only accurate but also able to "explain" why a product was classified as "normal" or "defective," so that human operators can develop confidence in the system and "let it do its job."
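As a closing illustration of Versace's point, here is a minimal, hypothetical sketch of a local explanation for a "normal vs. defective" classifier. The inspection feature names and data are invented, and a linear model is used because its per-feature contributions to the log-odds can be read off directly, which keeps the "why" transparent.

```python
# Hypothetical sketch: explaining why one inspected part was flagged "defective".
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["surface_roughness", "weld_temp",
            "alignment_offset", "coating_thickness"]

# Synthetic inspection data: defects driven mostly by roughness and misalignment
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

Xs = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(Xs, y)

# Pick the part the model is most confident is defective, then explain it.
# With standardized inputs, each feature's contribution to the log-odds of
# "defective" is simply coefficient * feature value for that part.
idx = int(np.argmax(model.predict_proba(Xs)[:, 1]))
contributions = model.coef_[0] * Xs[idx]

print(f"P(defective) = {model.predict_proba(Xs[idx:idx + 1])[0, 1]:.3f}")
for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"  {name}: {c:+.3f}")
```

With nonlinear models the arithmetic is less direct and attribution methods take over, but the principle is the same: surfacing the measurements behind each verdict is what builds the operator trust Versace describes.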