This year, we plan to run a competition on our InsDet dataset, the first instance detection benchmark dataset that is larger
in scale and more challenging than existing InsDet datasets. Its major strengths over prior InsDet datasets include
(1) both high-resolution profile images of object instances and high-resolution testing images from more realistic indoor scenes,
simulating real-world indoor robots that locate and recognize object instances in a cluttered indoor scene at a distance, and
(2) a realistic unified InsDet protocol that fosters InsDet research.
- A realistic unified InsDet protocol.
In real-world indoor robotic applications, we consider the scenario in which assistive robots must locate and recognize instances in order to fetch them
in a cluttered indoor scene. For a given object instance, a robot sees it from only a few views at the training stage,
and must then accurately detect it at a distance in any scene at the testing stage, as sketched below.
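To make the protocol concrete, here is a minimal sketch of the interface a method must implement under it; all class and method names are illustrative, not part of any official toolkit.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InstanceProfile:
    instance_id: str
    view_paths: List[str]  # paths to the few profile views available for training

@dataclass
class Detection:
    instance_id: str
    box: Tuple[int, int, int, int]  # (x, y, w, h) in scene-image pixels
    score: float

class InsDetMethod:
    def register(self, profiles: List[InstanceProfile]) -> None:
        """Training stage: only a few profile views per instance are visible."""
        raise NotImplementedError

    def detect(self, scene_image_path: str) -> List[Detection]:
        """Testing stage: locate and identify the registered instances in a
        cluttered scene image, possibly at a distance from the camera."""
        raise NotImplementedError
```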
- InsDet in the closed-world.
InsDet has been explored in the closed-world setting, which allows access to profile images during model development; we also call this conventional instance detection.
While one can exploit profile images to train models, it is still unknown what testing images will look like when encountered in the open world.
Prevalent methods adopt a cut-paste-learn strategy [10] that cuts and pastes profile images onto random background photos
(sampled in the open world) to generate synthetic training data, and then trains a detector on such synthetic data; a minimal sketch follows.
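The sketch below assumes each profile image comes with a binary foreground mask; the file names and scale range are hypothetical, not part of the dataset toolkit.

```python
import random
from PIL import Image

def cut_paste(profile: Image.Image, mask: Image.Image,
              background: Image.Image, scale_range=(0.2, 0.6)):
    """Paste one masked profile image onto a background photo.

    Returns the composite image and the (x, y, w, h) box of the pasted
    instance, which serves as the detection label for training.
    """
    bg = background.copy()
    # Rescale the instance so its longer side is a random fraction of the
    # background's shorter side (this keeps the paste fully inside the image).
    target = int(min(bg.size) * random.uniform(*scale_range))
    r = target / max(profile.size)
    w, h = max(1, int(profile.width * r)), max(1, int(profile.height * r))
    fg, m = profile.resize((w, h)), mask.resize((w, h))
    # Random placement inside the background.
    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(fg, (x, y), m)  # the mask keeps only the instance pixels
    return bg, (x, y, w, h)

# Hypothetical usage: synthesize one training image with its box label.
img, box = cut_paste(Image.open("mug_view03.png"),
                     Image.open("mug_view03_mask.png").convert("L"),
                     Image.open("random_background.jpg"))
```

A full pipeline would paste several instances per background with blending and other augmentations, then train an off-the-shelf detector on the composites.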
- InsDet in the open-world.
The challenge of InsDet lies in its open-world nature: one has no knowledge of the data distribution at test time,
which may involve unknown testing scene imagery, unexpected scene clutter, and novel object instances specified only at testing; we also call this novel instance detection.
Prevalent methods embrace the open world by using foundation models and by using diverse data to pretrain InsDet models; one such pipeline is sketched below.
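For example, a common baseline generates class-agnostic proposals with a segmentation foundation model (e.g., SAM) and matches each proposal crop to the profile images in a self-supervised feature space (e.g., DINOv2). Below is a minimal sketch of the matching step only, assuming proposal and profile features have already been extracted; the similarity threshold is an illustrative value.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def match_proposals(proposal_feats: torch.Tensor,  # (P, D) proposal-crop features
                    profile_feats: torch.Tensor,   # (N, D) profile-image features
                    profile_ids: list,             # instance id of each profile image
                    sim_thresh: float = 0.6):      # illustrative threshold
    """Assign each proposal to its best-matching instance by cosine similarity.

    Proposals whose best similarity falls below `sim_thresh` are rejected as
    background or novel clutter (the open-world case).
    """
    p = F.normalize(proposal_feats, dim=1)
    q = F.normalize(profile_feats, dim=1)
    sim = p @ q.T                          # (P, N) cosine similarity matrix
    best_sim, best_idx = sim.max(dim=1)
    labels = [profile_ids[i] if s >= sim_thresh else None
              for s, i in zip(best_sim.tolist(), best_idx.tolist())]
    return labels, best_sim
```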
We use EvalAI as the submission portal.
Teams with interesting submissions will be announced before 11:59 AOE, Dec 04, 2024. Authors are invited to give a short talk (max 15 minutes) during the workshop.
Announcement
Congratulations to all teams on your excellent performance!
We invite the following three teams to give a short talk (max 15 minutes) in recognition of your distinguished performance.
Please contact us as soon as possible for more details.
If you are not able to travel to Hanoi for any reason, you can participate in our workshop virtually.