
AI is struggling to adjust to 2020 – TechCrunch



2020 has forced every industry to rethink how it operates in light of COVID-19, civil rights movements, an election year and countless other historic events. On a human level, we have had to adjust to a new way of life. We have started to accept these changes and figure out how to live our lives under these new pandemic rules. While people settle in, AI is struggling to keep up.

The issue with AI training in 2020 is that we have, quite suddenly, changed our social and cultural norms. The truths we have taught these algorithms are often no longer actually true. Visual AI in particular is being asked to immediately interpret the new way we live, with updated context that it does not yet have.

Algorithms are still adjusting to new visual cues and trying to understand how to accurately identify them. As visual AI catches up, we also need a renewed focus on routine updates in the AI training process, so that inaccurate training datasets and pre-existing open-source models can be corrected.

Computer vision models are struggling to appropriately tag depictions of the new scenes and situations we find ourselves in during the COVID-19 era. Categories have shifted. Say, for example, there is an image of a father working at home while his son plays. AI still categorizes it as "leisure" or "relaxation", not "work" or "office", even though working with your kids next to you is the everyday reality for many families during this time.

Credit: Westend61 / Getty Images

On a more technical level, we now have physically different pixel depictions of our world. At Getty Images, we train AI to "see". This means that algorithms can identify images and categorize them based on the pixel makeup of an image, deciding what it contains. A rapid change in how we live our daily lives also changes what a category or tag (e.g. "cleaning") means.

Think of it this way: cleaning may now include wiping down surfaces that already look visually clean. Until now, algorithms have been taught that for an image to depict cleaning, there must be a visible mess. That looks very different now. Our systems have to be retrained to account for these redefined category parameters.
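To make the retraining idea concrete, here is a minimal, hypothetical sketch in PyTorch that fine-tunes a pretrained image classifier on freshly labeled 2020-era examples. The dataset path and folder layout are illustrative assumptions, not Getty Images' actual pipeline.

```python
# Hypothetical sketch: fine-tuning a pretrained classifier so that
# redefined categories (e.g. "cleaning" without a visible mess) are learned.
# The dataset path and class folders are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Folder of newly labeled 2020-era images, one subfolder per category.
train_set = datasets.ImageFolder("data/2020_relabels/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
# Swap the final layer to match the updated category list.
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Fine-tuning only the final layer, as above, is a cheap way to shift category boundaries without retraining the whole network from scratch.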

This applies on a smaller scale, too. Someone might grab a doorknob with a small wipe, or wipe down their steering wheel while sitting in their car. What was once a trivial detail now matters as people try to stay safe. We need to catch these little nuances so that images are tagged appropriately. Only then can AI start to understand our world in 2020 and produce accurate outputs.

Credit: Chee Gin Tan / Getty Images

Another issue for AI right now is that machine learning algorithms are still trying to understand how to identify and categorize faces with masks. Faces are being detected solely as the top half of the face, or detected as two faces: one with the mask and a second with the eyes only. This creates inconsistencies and prevents accurate usage of facial detection models.

One path forward is to retrain algorithms to perform better when given only the top portion of the face (above the mask). The mask problem resembles classic facial detection challenges, such as someone wearing sunglasses, or detecting a person's face in profile. Now masks are commonplace as well.
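As a hedged sketch of the failure mode, the snippet below uses OpenCV's stock Haar cascades (a stand-in for illustration, not the models discussed in this article): a frontal-face detector can miss a masked face entirely while an eye detector still fires on the uncovered upper half.

```python
# Illustrative sketch using OpenCV's bundled Haar cascades.
# A masked face often defeats the full-face detector, while the
# eye detector still finds the uncovered upper half.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("masked_person.jpg")          # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0 and len(eyes) > 0:
    print("No full face detected, but eyes found: likely a masked face.")
```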

Credit: Rodger Shija / EyeEm / Getty Images

This shows us that computer vision models still have a long way to go before they can truly "see" in our ever-evolving social landscape. The way to counteract this is to build robust datasets. Then we can train computer vision models to account for the myriad different ways a face may be obstructed or covered.

At this point, we're expanding the parameters of what the algorithm sees as a face: a person wearing a mask at a grocery store, a nurse wearing a mask as part of their day-to-day job, or a person covering their face for religious reasons.
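One common way to expand those parameters, offered here as an assumed technique rather than the author's own method, is synthetic occlusion augmentation: overlaying mask-like patches on existing face images so the model learns that a covered lower face is still a face.

```python
# Hypothetical augmentation sketch: paint a mask-like patch over the
# lower half of a face crop so detectors learn occluded faces.
from PIL import Image, ImageDraw
import random

def add_synthetic_mask(face_img: Image.Image) -> Image.Image:
    """Cover roughly the lower half of a face crop with a flat patch."""
    img = face_img.copy()
    w, h = img.size
    draw = ImageDraw.Draw(img)
    top = int(h * random.uniform(0.45, 0.55))   # mask starts near mid-face
    color = random.choice([(70, 130, 180), (240, 240, 240), (40, 40, 40)])
    draw.rectangle([0, top, w, h], fill=color)
    return img

# Usage: augmented = add_synthetic_mask(Image.open("face_crop.jpg"))
```

Real pipelines would warp the patch to the face geometry; a flat rectangle is the simplest version of the idea.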

As we create the content needed to build these robust datasets, we should be aware of the potential for heightened unintentional bias. While some bias will always exist within AI, we are now seeing imbalanced datasets depicting our new normal. For example, we are seeing more images of white people wearing masks than of other ethnicities.

This may be the result of strict stay-at-home orders, under which photographers have had limited access to communities other than their own and could not diversify their subjects. It may be due to the ethnicity of the photographers choosing to shoot this subject matter. Or it may be due to the level of impact COVID-19 has had on different regions. Regardless of the reason, this imbalance means algorithms will be able to more accurately detect a white person wearing a mask than any other race or ethnicity.
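A simple way to surface that kind of imbalance before training, sketched here with assumed file and field names, is to audit the demographic metadata attached to a dataset's labels:

```python
# Hypothetical audit sketch: count demographic metadata in a labeled
# dataset to flag imbalance before training. File and column names
# are illustrative assumptions.
import csv
from collections import Counter

counts = Counter()
with open("mask_dataset_labels.csv", newline="") as f:
    for row in csv.DictReader(f):          # expects an "ethnicity" column
        counts[row["ethnicity"]] += 1

total = sum(counts.values())
for group, n in counts.most_common():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{group}: {n} images ({share:.1%}){flag}")
```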

Data scientists, and those who build products with models, have a heightened responsibility to check the accuracy of models in light of shifts in social norms. Routine checks and updates to training data and models are key to ensuring the quality and robustness of models, now more than ever. If outputs are inaccurate, data scientists can quickly identify them and course-correct.
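As a sketch of what such a routine check might look like (the baseline figure, threshold and interfaces are illustrative assumptions), one can periodically evaluate the deployed model on a freshly labeled sample and flag it for retraining when accuracy drops below the historical baseline:

```python
# Illustrative monitoring sketch: compare model accuracy on a fresh,
# recently labeled sample against a historical baseline and flag drift.
# Baseline, threshold and the model interface are assumptions.
def accuracy(model, samples) -> float:
    correct = sum(1 for x, label in samples if model.predict(x) == label)
    return correct / len(samples)

BASELINE_ACCURACY = 0.92    # e.g. measured on pre-2020 validation data
DRIFT_TOLERANCE = 0.05      # acceptable drop before retraining

def check_for_drift(model, fresh_samples) -> bool:
    """Return True if the model should be queued for retraining."""
    current = accuracy(model, fresh_samples)
    drifted = current < BASELINE_ACCURACY - DRIFT_TOLERANCE
    if drifted:
        print(f"Accuracy fell to {current:.2%}; retraining recommended.")
    return drifted
```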

It is also worth noting that our current way of life is here to stay for the foreseeable future. Because of that, we must be cautious about the open-source datasets we use for training purposes. Datasets that can be modified, should be. Open-source models that cannot be modified need a disclaimer, so it is clear which projects may be negatively impacted by the outdated training data.

Identifying the new context we are asking the systems to understand is the first step toward moving visual AI forward. Then we need more content: more depictions of the world around us, and the diverse perspectives within it. As we amass this new content, we should take stock of new potential biases and of opportunities to retrain existing open-source datasets. And we all have to monitor for inconsistencies and inaccuracies. Through persistence and dedication to retraining computer vision models, we can bring AI into 2020.

