Benchmarking Detectors for the RIVA Cervical Cytology Challenge
Haotian Jiang, Mengjie Xu, Manman Fei, Zelin Liu, Lichi Zhang and Qian Wang
Early and accurate detection of abnormal cervical cells is essential for preventing cervical cancer in women, particularly in regions where access to expert cytotechnologists is limited. The RIVA Cervical Cytology Challenge invites the global machine learning community to advance automated analysis of Pap smear images using a large, expert-annotated dataset. On this page you will find the description of the tasks, the competition tracks, submission rules, important dates and additional resources to help you get started.
Key information about the oral session at the conference.
Haotian Jiang, Mengjie Xu, Manman Fei, Zelin Liu, Lichi Zhang and Qian Wang
Yan Kong, Yuan Yin, Hongan Chen, Yuqi Fang and Caifeng Shan
Martin Amster and Camila María Polotto
Vibujithan Vigneshwaran, Chris Kang and Nils Forkert
Quick updates, announcements, and useful links.
The challenge is divided into two tracks to accommodate different levels of complexity and experimental focus.
Localize and classify cells in Pap smear images using bounding boxes, assigning each detection to one of the eight Bethesda categories (NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO).
Localize cells in Pap smear images using bounding boxes. Only cells belonging to the eight Bethesda categories (NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO) are considered; class labels are not evaluated in this track.
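The practical difference between the two tracks can be illustrated with a small sketch. Note that the dictionary layout and function names below are purely illustrative and are not the official submission format:

```python
# Illustrative sketch of how the two tracks treat a single detection.
# This is NOT the official submission format, just a conceptual example.

BETHESDA = ["NILM", "ASC-US", "LSIL", "HSIL", "ASC-H", "SCC", "INFL", "ENDO"]

# One detection: bounding box (x1, y1, x2, y2), confidence score, class label.
detection = {"box": (120, 80, 180, 150), "score": 0.91, "label": "LSIL"}

def track1_prediction(det):
    """Track 1 (detection + classification): both the box and the label count."""
    return det["box"], det["label"]

def track2_prediction(det):
    """Track 2 (detection only): the class label is ignored in evaluation."""
    return det["box"]
```

In other words, a Track 2 model may still predict class labels internally, but only its box localization is scored.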
All dates are given in UTC (GMT+0) and may be subject to minor changes. Please check this page regularly.
The competition opens on Kaggle. The Preliminary Phase test set becomes available and submissions to the public leaderboard begin. Participants can start building and validating models using the official training dataset hosted on Zenodo.
The Preliminary Phase closes. A new test set is released to all teams for the Final Evaluation. This dataset will determine the final ranking.
Submissions are evaluated on the Final Evaluation test set. No further submissions are accepted after this date. We begin internal validation to ensure that top-ranking teams' results can be reproduced. The final leaderboard is locked pending verification.
All participating teams are invited to submit a 4-page challenge paper (IEEE ISBI format) describing their method, submitted through the ISBI 2026 EDAS platform under the Challenge Track. This deadline aligns with the official ISBI schedule.
Authors receive reviews and may revise their manuscripts. Teams have two weeks to address reviewer comments and submit minor updates.
Final camera-ready papers must be uploaded to EDAS. Preliminary top-ranking teams will be announced on our website and social channels.
Authors are informed whether their work will be presented as an oral or poster presentation during ISBI 2026.
Final results, awards, and official recognition of winning teams will be presented during ISBI 2026.
These dates follow the official ISBI 2026 schedule. Please refer to the ISBI Challenge page for any updates: https://biomedicalimaging.org/2026/challenges/
Please read the following rules carefully to ensure your submissions are eligible for awards.
For details on submission formats and evaluation metrics, please refer to the description of each track, as these may vary. The full implementation of the evaluation metric is available in this GitHub repo.
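The repository above is the authoritative implementation. As a rough orientation only, most bounding-box detection metrics build on box overlap (Intersection-over-Union); a generic IoU computation, which is a building block rather than the challenge's exact metric, looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted box is typically matched to a ground-truth box when their IoU exceeds a threshold (commonly 0.5); consult the official repository for the thresholds and aggregation actually used in this challenge.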
Useful links related to paper submission and formatting.
The RIVA Cervical Cytology Dataset (named after Hospital Rivadavia in Buenos Aires, where the data were collected) is a large-scale, expert-annotated collection of high-resolution Pap smear images designed to advance research in automated cervical cancer screening. The dataset contains thousands of cell-level annotations across the eight Bethesda diagnostic categories: NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO. It supports both detection-only and detection-plus-classification tasks. All images were obtained from routine clinical workflows at Hospital Rivadavia and manually annotated by trained cytotechnologists and pathologists. Each annotated cell includes a bounding box and a Bethesda class label. The dataset underwent strict quality-control procedures to ensure clean annotations and clinically meaningful label distributions. The full dataset is openly available on Zenodo and mirrored for convenient access in this competition. It provides a realistic, diverse, and clinically grounded benchmark for developing robust and generalizable cytology models.
The RIVA Cervical Cytology Challenge is organized by:
For questions related to the challenge, datasets or submission process, please contact: