Limitations of Content-based Image Retrieval
© Copyright 2008 by T. Pavlidis

Appendix A: Web Test Results of Content-based Image Retrieval

Retrievr

Retrievr is an experimental web site that allows users to either submit their own images or sketch one. The following are screen dumps of six examples from that site. Apparently the site uses a method based on wavelets that may be fast but leaves much to be desired.

Figure A1: first search from image
Figure A2: second search from image
Figure A3: third search from image
Figure A4: search from sketch
Figure A4a: what matches a fence (photo)
Figure A4b: what matches a fence (sketch)

VIMA

VIMA's main site does not allow users to submit their own images. Instead, the site starts with a random display from its collection and the user must click on a picture, asking for "more like it". An example can be seen in Figure A5. The image in the upper left corner is the one chosen by the user, and the rest are images considered perceptually similar to it.

Figure A5: a VIMA screen dump

VIMA also offers a site that allows users to submit their own images; the results are shown in the screen dumps of Figures A6-A8. Interestingly, VIMA included some pictures of dogs in its returns for a dog query image, while Retrievr returned no dog pictures, only some pictures of cats.

Figure A6: first search from image
Figure A7: second search from image
Figure A8: third search from image

Both Retrievr and VIMA search a very large database (Flickr), so they attempt quite a challenging task, and by allowing users to provide their own images they face the most severe test possible.

ALIPR

ALIPR is supposed to offer automatic tagging of images. Figures A9 and A10 show the screen dumps obtained using the same pictures as I used for Retrievr and VIMA. The result is a set of tags offered for the picture. None of them was "animal", and in the second case I clicked on "Indoor" since the picture had indeed been taken indoors.
Then I received the results shown in Figure A11. Apparently the method used by ALIPR relies on comparing color histograms. Therefore the picture of a black dog in front of a yellow wall and over a light brown floor is matched to pictures of athletes in shirts with black and yellow stripes, as well as to pornographic pictures (the floor color can be confused with the color of human flesh).

Figure A9: first screen dump
Figure A10: second screen dump
Figure A11: strange

Another critique of ALIPR can be found in Andrew Lampert's blog under the title "The Broken Promise of Automatic Image Tagging".

Latest update June 11, 2008
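The kind of confusion described for ALIPR, where a black dog against a yellow wall matches black-and-yellow striped shirts, can be sketched in a few lines. The quantization scheme, the intersection measure, and the synthetic images below are my own illustrative assumptions, not ALIPR's actual algorithm; they only show why any purely color-histogram-based matcher behaves this way.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Quantize each RGB channel into `bins` levels and build a
    normalized 3-D color histogram (a common CBIR signature)."""
    # img: H x W x 3 uint8 array; quantized values fall in 0..bins-1
    quantized = (img.astype(int) * bins) // 256
    hist, _ = np.histogramdd(
        quantized.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, bins), (0, bins), (0, bins)),
    )
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

# Hypothetical stand-ins, not the actual test images: a "black dog on a
# yellow wall" and a "black-and-yellow striped shirt" share the same
# colors even though their subjects differ completely.
dog_scene = np.zeros((64, 64, 3), np.uint8)
dog_scene[:40] = (230, 200, 40)        # yellow wall
dog_scene[40:] = (20, 20, 20)          # black dog

striped_shirt = np.zeros((64, 64, 3), np.uint8)
striped_shirt[::2] = (230, 200, 40)    # yellow stripes
striped_shirt[1::2] = (20, 20, 20)     # black stripes

# An unrelated green scene with no color overlap at all.
green_scene = np.full((64, 64, 3), (40, 180, 60), np.uint8)

h_dog = color_histogram(dog_scene)
sim_shirt = histogram_intersection(h_dog, color_histogram(striped_shirt))
sim_green = histogram_intersection(h_dog, color_histogram(green_scene))
print(sim_shirt)  # high: the "wrong" match wins on color alone
print(sim_green)  # near zero despite being an equally simple scene
```

Because the histogram discards all spatial arrangement, the dog scene and the striped shirt are nearly indistinguishable to this measure, which is consistent with the matches shown in Figure A11.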