
In an image recognition task how to deal with unexpected images [closed]

I’m trying to develop an image classifier with Keras, following the tutorial at: https://www.tensorflow.org/tutorials/images/classification?hl=en

Still using the flower dataset the tutorial provides, I tested the model by making a prediction on a photo of a goat. The model predicted that the goat photo belongs to the daisy class with 70% confidence. This seems impossible, especially since there are no daisies anywhere in the goat photo.

This is the code I used:


I am using TensorFlow version 2.3.2 and the latest versions of the other libraries.

How can I improve the model so that, if it receives a photo totally different from the classes it was trained on (such as a photo that is not a flower), it gives a prediction close to 0% for every class?

I need a solution because I am developing a system that, given a set of images (for example three classes of a company logo: correct, wrong, and acceptable), must classify any logo it receives into the right class. But if it receives a photo that isn’t a company logo, it must report that the image doesn’t belong to the logo set.


Answer

Since your model has been trained on a closed set of classes, it will force any image of another kind to fit the patterns it has learned, and it will often “succeed”. In the end, the output probability is nothing more than the probability given certain combinations of shapes and colors in the image, not the abstract concept you expect as an output. (To understand more about what a neural network actually sees, you can read this article, among others: https://www.kdnuggets.com/2016/06/peeking-inside-convolutional-neural-networks.html)
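To see why the model cannot simply output “0% for everything”, note that a classifier head ending in softmax always produces probabilities that sum to 1, so some class must receive a high score even for an unrelated photo. A minimal sketch (plain NumPy, with hypothetical logits standing in for your model’s output on the goat photo):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to sum to 1
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical logits from a 5-class flower model fed an unrelated image:
# no logit is special, yet softmax still has to distribute all the mass.
logits = np.array([2.1, 0.3, -0.5, 0.8, -1.2])
probs = softmax(logits)

print(probs.sum())     # always 1.0 -- the model must "choose" something
print(probs.argmax())  # some class still wins, even for a goat photo
```

This is why an out-of-distribution input gets a confident-looking score: the normalization guarantees it.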

The solution to your problem is therefore to teach your neural network an additional, supplemental class consisting of varied generic images that are not your targets. You can source images for such a class from ImageNet (http://image-net.org/), for instance. In this way you will have a catch-all output that fires when the image you provide is not among the targets you expect.

Technically speaking, in relation to the code you have provided, you just have to create one further directory in each of your training and validation directories: a directory containing mixed images for your extra class “other”.
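Because the tutorial’s loader (`tf.keras.utils.image_dataset_from_directory`) infers class names from sub-directory names, adding the “other” folder is all that is needed for the extra class to appear. A minimal sketch of the layout (directory and class names here are assumptions modeled on the tutorial’s flower dataset):

```python
from pathlib import Path

# Hypothetical dataset root; the five flower classes come from the
# tutorial, and "other" is the new catch-all class of generic images.
root = Path("flower_photos_with_other")
classes = ("daisy", "dandelion", "roses", "sunflowers", "tulips", "other")

for split in ("train", "validation"):
    for cls in classes:
        # Each class is just a sub-directory of images; the loader
        # maps folder names to labels automatically.
        (root / split / cls).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in (root / "train").iterdir()))
```

After filling `other/` with ImageNet images and retraining, a goat photo should land in the “other” class instead of being forced into a flower class.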

User contributions licensed under: CC BY-SA