Image Dataset for Litter Detection
Today I want to talk a bit about an important project: TACO.
TACO stands for Trash Annotations in Context. It is an open image dataset for litter detection, similar in spirit to COCO's object segmentation. Started by the idealist computer-vision researcher Pedro Proença (with myself as a contributor), it contains photos of litter taken in diverse environments, from tropical beaches to London streets. These images are manually labeled and segmented according to a hierarchical taxonomy in order to train and evaluate object detection algorithms.
Why is TACO needed?
Humans have been trashing planet Earth, from the bottom of the Mariana Trench to the top of Mount Everest. Every minute, at least 15 tonnes of plastic waste leak into the ocean, roughly the capacity of one garbage truck. We have all seen the impact of this behaviour on wildlife in images of turtles choking on plastic bags and birds with stomachs full of bottle caps. Recent studies have even found microplastics in human stools. Plastics should stay in the recycling chain, not enter our food chain.
We believe AI has an important role to play here. Think of drones surveying trash, robots picking up litter, anti-littering video surveillance, and AR applications that educate and help people separate their waste.
That is our vision, and the recent advances in deep learning make it possible. However, to learn accurate trash detectors, deep learning needs many annotated images. While a few other trash datasets exist, we believe they are not enough, and that is why we created TACO.
TACO Features
- Object segmentation. The bounding boxes typically used for detection are not enough for certain tasks, e.g., robotic grasping, so TACO provides segmentation masks (see the sketch after this list).
- Images under free licence. You can do whatever you want with TACO as long as you cite us.
- Background annotation. TACO covers many environments which are tagged for convenience.
- Object context tag. Not all objects in TACO are strictly litter. Some objects are handheld or not even trash yet. Thus, objects are tagged based on context.
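Since the annotations follow a COCO-style format, the segmentation masks can be read with standard COCO tooling. Here is a minimal sketch, assuming the annotation file is named `annotations.json` (the actual file name and layout in the TACO repo may differ), that decodes an object's polygon annotation into the binary mask that tasks like robotic grasping usually need:

```python
from pycocotools.coco import COCO

# Load the COCO-style annotation file; the file name here is an assumption.
coco = COCO("annotations.json")

# Take the first annotation and decode its segmentation into a binary mask.
ann = coco.loadAnns(coco.getAnnIds())[0]
mask = coco.annToMask(ann)  # array of 0/1 values, same size as the image

print(f"Category id: {ann['category_id']}, mask covers {mask.sum()} pixels")
```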
The Dataset
TACO contains high-resolution images, taken mostly with mobile phones. These are managed and stored by Flickr, whereas our server manages the annotations and periodically runs a crawler to collect more potential images of litter.
Images are labeled with scene tags that describe their background (these are not mutually exclusive), and litter instances are segmented and labeled using a hierarchical taxonomy with 60 categories of litter belonging to 28 super (top) categories, including a special category, Unlabeled litter, for objects that are either ambiguous or not covered by the other categories.
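As a rough sketch of how one might inspect this taxonomy, again assuming a COCO-format file named `annotations.json`, the categories and their super categories can be listed like this:

```python
from pycocotools.coco import COCO

coco = COCO("annotations.json")  # file name is an assumption

# Group the fine-grained litter categories under their super (top) categories.
taxonomy = {}
for cat in coco.loadCats(coco.getCatIds()):
    taxonomy.setdefault(cat["supercategory"], []).append(cat["name"])

for supercat, names in sorted(taxonomy.items()):
    print(f"{supercat}: {', '.join(sorted(names))}")
```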
This is fundamentally different from other datasets (e.g., COCO) where the distinction between classes is key. Here, all objects can in fact be classified as a single class: litter. Furthermore, it may be impossible to distinguish visually between two classes, e.g., a plastic bottle and a glass bottle. Given this ambiguity and the class imbalance, classes can be rearranged to suit a particular task.
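For instance, a simple way to rearrange classes (purely an illustrative sketch, not an official mapping) is to collapse each annotation into its super category, or even into a single litter class for a binary detector:

```python
from pycocotools.coco import COCO

coco = COCO("annotations.json")  # file name is an assumption

# Map each fine-grained category id to its super category name.
cat_to_super = {c["id"]: c["supercategory"] for c in coco.loadCats(coco.getCatIds())}

# Relabel every annotation with its super category; for a binary litter
# detector, one could instead map every annotation to the single label "Litter".
relabelled = [
    {**ann, "label": cat_to_super[ann["category_id"]]}
    for ann in coco.loadAnns(coco.getAnnIds())
]

print(f"{len(relabelled)} annotations relabelled into "
      f"{len(set(cat_to_super.values()))} super categories")
```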
How can one help?
- Annotations are key to improving the dataset, and TACO is officially open for new annotations.
- Litter image submission is also important and can be done here or on Flickr by following our instructions.
- Use the dataset: If you are interested in machine learning, check out our repo and start using this dataset in your experiments. We would love to hear about your results.
- Feedback is appreciated. Let us know if you spot any issue with the dataset or our tools.