Artificial intelligence (AI) is a powerful new tool, but it also has limitations. Program in your biases, and your AI system will be biased, too.
Intel is dedicated to using AI for good. That means ensuring that AI technology is not only free of data and human biases, but also safe and secure, inclusive, explainable, respectful of human rights, and monitored with human oversight.
To do this, Intel is collaborating with its ecosystem of partners, customers and broader community. The effort is part of the company’s larger corporate social responsibility initiative, which includes its RISE Strategy — short for Responsible, Inclusive, Sustainable and Enabling.
To understand how this is playing out in the real world, consider Intel’s response to the COVID-19 pandemic. To date Intel has committed $60 million to pandemic relief programs.
AI is playing a role in Intel’s COVID-19 pandemic response and readiness initiative, which comprises three main areas: health and life sciences, distance learning, and economic recovery.
Intel knows that in fighting a pandemic, AI can be an important tool. The technology can support detection, response, prevention and recovery, to name just a few applications.
For example, during the response phase, AI can help with diagnostic testing, imaging, life-critical equipment such as ventilators, and multi-modality screening. One of Intel’s partners, Huiying Medical, has developed a solution that uses AI to detect coronavirus infections in CT scans.
For prevention, AI solutions include treatment and vaccine discovery, population health, clinical studies and preventative testing. Health systems can also deploy chatbots or virtual assistants to help automate aspects of healthcare services, such as triaging patients and recommending an appropriate health service.
Intel is also collaborating with Career Launcher, which has enabled remote learning for underserved communities. Another Intel partner, Sensormatic Solutions, is focused on helping businesses reopen with confidence: it helps retail stores open safely during the pandemic by integrating AI solutions into existing store infrastructure and cameras.
Federating data for privacy
Inherent bias in AI systems can be mitigated. One approach Intel is pursuing is building more representative datasets.
For example, deep learning (an AI subset) can be used to screen and monitor health conditions. But to be effective, the datasets need to be large and diverse.
Intel is working on this by enabling machine learning across a large number of geographically diverse hospitals. The goal is to collect a greater diversity of data so that AI models can be created with less (or ideally no) bias toward a particular type of patient or population.
Currently, that’s still a challenge. Many hospitals lack sufficient data to build powerful AI models.
For hospitals that do have the data, there’s still an issue of data models. Often, these models work well for the patients at that hospital, but may not work for patients at other hospitals, due to demographic differences.
One possible solution is to have hospitals share their data. But that would raise issues around patient privacy and data ownership.
To help overcome this barrier, Intel has joined a community that’s developing and testing a new approach. Known as federated learning, it involves using an AI model that leverages available data from a group of hospitals, but without sharing that data among the hospitals.
To do this, a federated learning system distributes the model training to each data owner. Then it aggregates their results in a privacy-preserving manner. The model learns from data that comes from a wide variety of patient populations.
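The distribute-then-aggregate loop described above can be illustrated with a minimal sketch of federated averaging. The "hospitals," the linear model, and the sample-count weighting here are simplified, hypothetical stand-ins, not Intel's actual system; the point is only that raw data never leaves each site, while the shared model still learns from all of them.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site trains on its own data; only the resulting weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(local_weights, sample_counts):
    """Central server aggregates model weights, weighted by each site's data size."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Synthetic "hospitals": different sizes, same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
hospitals = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    hospitals.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # each round: local training, then aggregation of weights only
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(updates, [len(y) for _, y in hospitals])

print(global_w)  # converges toward true_w without pooling any raw records
```

Note that only model weights cross the wire; production systems add further privacy protections (such as secure aggregation) on top of this basic structure.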
This new approach appears to work. In one trial, researchers from Intel and other institutions found that federated learning among 10 institutions resulted in models reaching 99% of the model quality realized with centralized data.
Another AI area where Intel wants to make a difference is around what’s known as explainability.
AI systems that use deep learning are something of a black box, in that the decision-making process is not visible to observers. These systems mimic the neural networks of the human brain. That means that unlike more basic machine learning systems, deep learning relies on patterns that are mostly hidden.
That’s powerful, but it’s also a problem. Before people will use a technology, they want to understand how it works.
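One common way to peek inside a black-box model is to probe which inputs actually drive its predictions. Below is a minimal sketch of permutation importance, a generic model-agnostic technique; the "black box" model, data, and function names are invented for illustration and are not tied to any specific Intel or Retrace product.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """How much does error grow when one feature's values are shuffled?

    A large increase means the model relies heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = np.mean((model(X) - y) ** 2)  # baseline mean squared error
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            drops.append(np.mean((model(Xp) - y) ** 2) - baseline)
        importances.append(np.mean(drops))
    return np.array(importances)

# A stand-in "black box": depends strongly on feature 0, not at all on feature 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]
black_box = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]

scores = permutation_importance(black_box, X, y)
print(scores)  # feature 0 scores highest; feature 2 scores near zero
```

The appeal of this kind of probe is that it treats the model purely as an input-output function, so it works even when the internal decision-making process is hidden.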
One of Intel Capital’s portfolio companies, Retrace, is working on this challenge. The company focuses on dental-insurance claims processing by using computer vision and decision intelligence.
AI for youth
Another Intel program for social good is Intel AI for Youth. Launched last year, the program empowers students worldwide to create social-impact programs with AI.
The AI for Youth program offers students training in data science, computer vision and coding. It also teaches social skills around AI ethics and bias, as well as AI solution-building. The program helps develop a talent pipeline versed in building responsible AI, which in turn helps reduce human biases.
One project involved creating a system that converts handwritten complaints from a rural village into a digital format that can be sent to government officials. Another implemented a drone system that searches for missing people.
Intel AI for Youth is currently available to K-12 and vocational students in China, India, Germany, Poland, Russia, Singapore, South Korea, the UK and the United States. Intel expects the program will reach more than 100,000 students in high schools and vocational schools by this year’s end. Beyond that, Intel hopes to expand the program to as many as 30 million people worldwide by 2030.
Put it all together, and you have Intel’s intelligent approach to responsible AI.
Explore these Intel AI resources:
> Intel Partner University Competencies:
> Edge AI