Overview

Early and accurate detection of abnormal cervical cells is essential for preventing cervical cancer in women, particularly in regions where access to expert cytotechnologists is limited. The RIVA Cervical Cytology Challenge invites the global machine learning community to advance automated analysis of Pap smear images using a large, expert-annotated dataset. On this page you will find a description of the tasks, the competition tracks, submission rules, important dates, and additional resources to help you get started.

Competition Tracks

The challenge is divided into two tracks to accommodate different levels of complexity and experimental focus.

Track A: Cell Detection + Classification

Localize and classify cells in Pap smear images using bounding boxes, assigning each detection to one of the eight Bethesda categories (NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO).

  • Task: Detection + Classification
  • Target metric: Mean Average Precision (mAP@[50:95])
  • Data: RIVA Dataset
  • Awards: 1st Place: USD 2,000 | 2nd Place: USD 1,200
Go to Kaggle Track A
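
For orientation only, the sketch below shows one possible starting point for Track A: adapting an off-the-shelf torchvision Faster R-CNN detector to the eight Bethesda categories. It is not an official baseline, and data loading and training on the RIVA images are omitted.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Eight Bethesda categories plus background (index 0 is reserved for background)
    NUM_CLASSES = 8 + 1

    # Start from a COCO-pretrained detector and replace its classification head
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    # Inference on a dummy image; training on the RIVA dataset is omitted here
    model.eval()
    with torch.no_grad():
        predictions = model([torch.rand(3, 1024, 1024)])
    print(predictions[0]["boxes"].shape, predictions[0]["labels"])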

Track B: Cell Detection

Localize cells in Pap smear images using bounding boxes. Only cells belonging to the Bethesda categories (NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO) will be considered, although class labels are not evaluated in this track.

  • Task: Detection
  • Target metric: Mean Average Precision (mAP@[50:95])
  • Data: RIVA Dataset
  • Awards: 1st Place: USD 1,200 | 2nd Place: USD 600
Go to Kaggle Track B
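
Because class labels are not evaluated in Track B, predictions from a Track A model can be reused by simply discarding the predicted categories. The snippet below illustrates the idea with a hypothetical prediction record; the required submission format is defined on each track's Kaggle page.

    # Hypothetical Track A prediction for one image (format is illustrative only)
    track_a_pred = {
        "image_id": 17,
        "boxes": [[50.0, 50.0, 120.0, 140.0], [300.0, 220.0, 360.0, 290.0]],
        "scores": [0.87, 0.55],
        "labels": ["LSIL", "NILM"],
    }

    # For Track B, keep the boxes and scores but collapse every label to "cell"
    track_b_pred = {
        "image_id": track_a_pred["image_id"],
        "boxes": track_a_pred["boxes"],
        "scores": track_a_pred["scores"],
        "labels": ["cell"] * len(track_a_pred["boxes"]),
    }
    print(track_b_pred)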

Important Dates

All dates are given in GMT (UTC+0) and may be subject to minor changes. Please check this page regularly.

December 15, 2025

Official Launch & Preliminary Phase Test Set Released

The competition opens on Kaggle. The Preliminary Phase test set becomes available, and submissions to the public leaderboard begin. Participants can start building and validating models using the official training dataset hosted on Zenodo.

February 13, 2026

Preliminary Phase Ends

The Preliminary Phase closes. The Final Evaluation test set is released to all teams; this set determines the final ranking.

February 20, 2026

Final Evaluation Phase Ends

Submissions are evaluated on the Final Evaluation test set. No further submissions are accepted after this date. We begin internal validation to ensure that top-ranking teams' results can be reproduced, and the final leaderboard remains locked pending verification.

February 26, 2026

Paper Submission Deadline

All participating teams are invited to submit a 4-page challenge paper (IEEE ISBI format) describing their method through the ISBI 2026 EDAS platform under the Challenge Track. This deadline aligns with the official ISBI schedule.

March 1, 2026

Reviews Released on EDAS

Authors receive reviews and may revise their manuscripts. Teams have two weeks to address reviewer comments and submit minor updates.

March 15, 2026

Camera-ready Deadline & Top-performing Teams Announced

Final camera-ready papers must be uploaded to EDAS. Preliminary top-ranking teams will be announced on our website and social channels.

March 20, 2026

Presentation Format Notification

Authors are informed whether their work will be presented as an oral or a poster presentation at ISBI 2026.

April 8-11, 2026

Winners Announced at ISBI 2026

Final results, awards, and official recognition of winning teams will be presented during ISBI 2026.

These dates follow the official ISBI 2026 schedule. Please refer to the ISBI Challenge page for any updates: https://biomedicalimaging.org/2026/challenges/

General Rules

Please read the following rules carefully to ensure your submissions are eligible for awards.

Teams & Eligibility

  • The competition is open to academic, industry, and independent teams.
  • Each participant may join only one team.

Reproducibility & Ethics

  • Top teams will be asked to submit a short report (4 pages, IEEE ISBI format) and reproducible code.
  • All participants must comply with data privacy and ethical use guidelines.

Submission Rules

  • The use of external public datasets and publicly available pretrained models is allowed, provided they are properly disclosed and cited.
  • The use of private, undisclosed, or access-restricted data is strictly prohibited.
  • The final ranking is based exclusively on the Final Evaluation test set, released one week before the final submission deadline.
  • Participants are ranked exclusively by mean Average Precision (mAP@[50:95]) on the Final Evaluation test set; in the event of a tie, the submission timestamp will serve as the tiebreaker.

Submission Format and Resources

For details on submission formats and evaluation metrics, please refer to the description of each track, as these may vary. The full implementation of the evaluation metric is available in this GitHub repository.

Dataset

The RIVA Cervical Cytology Dataset (named after Hospital Rivadavia in Buenos Aires, where the data were collected) is a large-scale, expert-annotated collection of high-resolution Pap smear images designed to advance research in automated cervical cancer screening. The dataset contains thousands of cell-level annotations across the eight Bethesda diagnostic categories: NILM, ASC-US, LSIL, HSIL, ASC-H, SCC, INFL, ENDO. It supports both detection-only and detection-plus-classification tasks.

All images were obtained from routine clinical workflows at Hospital Rivadavia and manually annotated by trained cytotechnologists and pathologists. Each annotated cell includes a bounding box and a Bethesda class label. The dataset underwent strict quality-control procedures to ensure clean annotations and clinically meaningful label distributions. The full dataset is openly available on Zenodo and mirrored for convenient access in this competition. It provides a realistic, diverse, and clinically grounded benchmark for developing robust and generalizable cytology models.
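
After downloading the data from Zenodo, a quick sanity check is to count annotations per Bethesda class. The sketch below assumes a COCO-style JSON annotation file; the file name and field names are placeholders, so consult the Zenodo record for the actual structure.

    import json
    from collections import Counter

    # Placeholder file name; see the Zenodo record for the actual annotation files
    with open("riva_annotations.json") as f:
        coco = json.load(f)

    # Map category ids to class names and count annotations per class
    id_to_name = {cat["id"]: cat["name"] for cat in coco["categories"]}
    class_counts = Counter(id_to_name[ann["category_id"]] for ann in coco["annotations"])

    for name, count in class_counts.most_common():
        print(f"{name}: {count}")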

Evaluation

Track A: Detection + Classification Challenge

  • Evaluation Metric: Mean Average Precision (mAP@[50:95]) computed across all annotated cell categories (multi-class detection).
  • Final Ranking Criterion: Ranking is based exclusively on mAP on the Final Evaluation test set. In the event of a tie, the submission timestamp is used as the tiebreaker.

Track B: Detection Challenge

  • Evaluation Metric: Mean Average Precision (mAP@[50:95]) computed on annotated cells (single-class detection).
  • Final Ranking Criterion: Ranking is based exclusively on mAP on the Final Evaluation test set. In the event of a tie, the submission timestamp is used as the tiebreaker.
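
The full implementation of the official metric is linked in the Submission Format and Resources section. For quick local validation, a COCO-style mAP@[50:95] can also be approximated with torchmetrics, as sketched below; the boxes, scores, and label indices are dummy values for illustration.

    import torch
    from torchmetrics.detection import MeanAveragePrecision

    # COCO-style mAP averaged over IoU thresholds 0.50:0.05:0.95 (the default)
    metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox")

    preds = [{
        "boxes": torch.tensor([[50.0, 50.0, 120.0, 140.0]]),
        "scores": torch.tensor([0.87]),
        "labels": torch.tensor([2]),   # arbitrary class index for illustration
    }]
    targets = [{
        "boxes": torch.tensor([[48.0, 52.0, 118.0, 142.0]]),
        "labels": torch.tensor([2]),
    }]

    metric.update(preds, targets)
    print(metric.compute()["map"])     # mAP@[50:95]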

Prizes & Recognition

Prizes for Top Teams

  • Track A: 1st Place: USD 2,000 | 2nd Place: USD 1,200
  • Track B: 1st Place: USD 1,200 | 2nd Place: USD 600
  • Invitation to present methods at an IEEE ISBI 2026 special session (top teams only).
  • Co-authorship opportunity in the challenge overview paper for qualifying teams.

Prize Eligibility

  • Submit executable code and trained models for verification.
  • Submit a high-quality paper describing the method and training strategy.
  • Have at least one team member present the work in person at ISBI 2026.

Organizers

The RIVA Cervical Cytology Challenge is organized by:

  • Emmanuel Iarussi, Universidad Torcuato Di Tella, Argentina
  • Manuel Andrade, Universidad Torcuato Di Tella, Argentina
  • Luciana Bruno, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina
  • Viviana Siless, Universidad Torcuato Di Tella, Argentina
  • José Ignacio Orlando, Universidad del Centro de la Provincia de Buenos Aires, Argentina

Contact

For questions related to the challenge, datasets or submission process, please contact: