chore(openchallenges): 2024-04-27 DB update (#2653)
Co-authored-by: vpchung <[email protected]>
github-actions[bot] and vpchung authored Apr 27, 2024
1 parent 24ca9fa commit 317b183
Showing 1 changed file with 2 additions and 2 deletions.
@@ -377,7 +377,7 @@
"376","flare","FLARE21","Abdominal organ segmentation challenge","Abdominal organ segmentation plays an important role in clinical practice, and to some extent, it seems to be a solved problem because the state-of-the-art methods have achieved inter-observer performance in several benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance can be generalized on more diverse datasets. Moreover, many SOTA methods use model ensembles to boost performance, but these solutions usually have a large model size and cost extensive computational resources, which are impractical to be deployed in clinical practice. To address these limitations, we organize the Fast and Low GPU Memory Abdominal Organ Segmentation challenge that has two main features: (1) the dataset is large and diverse, includes 511 cases from 11 medical centers. (2) we not only focus on segmentation accuracy but also segmentation efficiency, whi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/599/logo_hDqJ8uG.gif","https://flare.grand-challenge.org/","active","5","https://doi.org/10.1016/j.media.2022.102616","\N","\N","\N","2023-11-08 00:42:00","2023-11-15 22:36:39"
"377","nucls","NuCLS","Triple-negative breast cancer nuclei challenge","Classification, Localization and Segmentation of nuclei in scanned FFPE H&E stained slides of triple-negative breast cancer from The Cancer Genome Atlas. See: Amgad et al. 2021. arXiv:2102.09099 [cs.CV].","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/601/TCGA-AR-A0U4-DX1_id-5ea40a88ddda5f8398990ccf_left-42405_top-70784_bo_PgpXdUu.png","https://nucls.grand-challenge.org/","completed","5","","\N","\N","\N","2023-11-08 00:42:00","2023-11-17 23:29:28"
"378","bcsegmentation","Breast Cancer Segmentation","Triple-negative breast cancer segmentation","Semantic segmentation of histologic regions in scanned FFPE H&E stained slides of triple-negative breast cancer from The Cancer Genome Atlas. See: Amgad M, Elfandy H, ..., Gutman DA, Cooper LAD. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics. 2019. doi: 10.1093/bioinformatics/btz083","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/602/BCSegmentationLogo.png","https://bcsegmentation.grand-challenge.org/","completed","5","","\N","\N","\N","2023-11-08 00:42:00","2023-11-17 23:29:37"
"379","feta","FeTA - Fetal Tissue Annotation Challenge","Fetal tissue annotation challenge","The Fetal Tissue Annotation and Segmentation Challenge (FeTA) is a multi-class, multi-institution image segmentation challenge part of MICCAI 2022. The goal of FeTA is to develop generalizable automatic multi-class segmentation methods for the segmentation of developing human brain tissues that will work with data acquired at different hospitals. The challenge provides manually annotated, super-resolution reconstructed MRI data of human fetal brains which will be used for training and testing automated multi-class image segmentation algorithms. In FeTA 2021, we used the first publicly available dataset of fetal brain MRI to encourage teams to develop automatic brain tissue segmentation algorithms. This year, FeTA 2022 takes it to the next level by launching a multi-center challenge for the development of image segmentation algorithms that will be generalizable to different hospitals with unseen data. We will include data from two institutions in the training dataset, and there wi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/604/FeTA_logo_640.png","https://feta.grand-challenge.org/","active","5","","2024-03-21","2024-04-26","\N","2023-11-08 00:42:00","2023-12-12 19:00:18"
"379","feta","FeTA - Fetal Tissue Annotation Challenge","Fetal tissue annotation challenge","The Fetal Tissue Annotation and Segmentation Challenge (FeTA) is a multi-class, multi-institution image segmentation challenge part of MICCAI 2022. The goal of FeTA is to develop generalizable automatic multi-class segmentation methods for the segmentation of developing human brain tissues that will work with data acquired at different hospitals. The challenge provides manually annotated, super-resolution reconstructed MRI data of human fetal brains which will be used for training and testing automated multi-class image segmentation algorithms. In FeTA 2021, we used the first publicly available dataset of fetal brain MRI to encourage teams to develop automatic brain tissue segmentation algorithms. This year, FeTA 2022 takes it to the next level by launching a multi-center challenge for the development of image segmentation algorithms that will be generalizable to different hospitals with unseen data. We will include data from two institutions in the training dataset, and there wi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/604/FeTA_logo_640.png","https://feta.grand-challenge.org/","completed","5","","2024-03-21","2024-04-26","\N","2023-11-08 00:42:00","2023-12-12 19:00:18"
"380","fastpet-ld","fastPET-LD","PET scan ""hot spots"" detection challenge","In this challenge, we provide 2 training datasets of 68 cases each: the first one was acquired at Sheba medical center (Israel) nuclear medicine department with a very-short exposure of 30s pbp, while the second is the same data followed by a denoising step implemented by a fully convolutional Dnn architecture trained under perceptual loss [1,2]. The purpose of this challenge is the detection of “hot spots”, that is locations that have an elevated standard uptake value (SUV) and potential clinical significance. Corresponding CT scans are also provided. The ground truth, common to both datasets, was generated by Dr. Liran Domachevsky, chair of nuclear medicine at Sheba medical center. It consists of a 3-D segmentation map of the hot spots as well as an Excel file containing the position and size of a 3D cuboid bounding box for each hot spot.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/605/IMG_19052021_144815_600_x_600_pixel.jpg","https://fastpet-ld.grand-challenge.org/","active","5","","\N","\N","\N","2023-11-08 00:42:00","2023-11-15 22:35:52"
"381","autoimplant2021","AutoImplant 2021","Automatic cranial implant design challenge","Please see our AutoImplant 2020 website for an overview of the cranial implant design topic. Our 2nd AutoImplant Challenge (referred to as AutoImplant 2021) sees the (not limited to) following three major improvements compared to the prior edition, besides a stronger team: Real craniotomy defective skulls will be provided in the evaluation phase. Task specific metrics (e.g., boundary Dice Score) that are optimally in agreement with the clinical criteria of cranial implant design will be implemented and used. Besides a metric-based scoring and ranking system, neurosurgeons will be invited to verify, score and rank the participants-submitted cranial implants based their clinical usability (for the real cases in Task 2).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/607/AutoImplant_2021_Logo.png","https://autoimplant2021.grand-challenge.org/","completed","5","https://doi.org/10.1109/tmi.2021.3077047","\N","\N","\N","2023-11-08 00:42:00","2023-11-16 17:41:01"
"382","dfu-2021","DFUC2021","Diabetic foot ulcer challenge 2021","We have received approval from the UK National Health Service (NHS) Re-search Ethics Committee (REC) to use these images for the purpose of research. The NHS REC reference number is 15/NW/0539. Foot images with DFU were collected from the Lancashire Teaching Hospital over the past few years. Three cameras were used for capturing the foot images, Kodak DX4530, Nikon D3300and Nikon COOLPIX P100. The images were acquired with close-ups of the full foot at a distance of around 30–40 cm with the parallel orientation to the plane of an ulcer. The use of flash as the primary light source was avoided, and instead, adequate room lights were used to get the consistent colours in images. Images were acquired by a podiatrist and a consultant physician with specialization in the diabetic foot, both with more than 5 years professional experience. As a pre-processing stage, we have discarded photographs with out of focus and blurry artefacts. The DFUC2021 consists of 15,683 DFU patche...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/608/footsnap_logo.png","https://dfu-2021.grand-challenge.org/","active","5","https://doi.org/10.1007/978-3-030-94907-5_7","\N","\N","\N","2023-11-08 00:42:00","2023-11-16 17:41:08"
@@ -500,6 +500,6 @@
"499","brats-goat","BraTS-ISBI 2024 - Generalizability Across Tumors Challenge","BraTS-GoAT Challenge: Generalizability Across Brain Tumor Segmentation Tasks","The International Brain Tumor Segmentation (BraTS) challenge has been focusing, since its inception in 2012, on generating a benchmarking environment and a dataset for delineating adult brain gliomas. The focus of the BraTS 2023 challenge remained the same: generating a standard benchmark environment. At the same time, the dataset expanded into explicitly addressing 1) the same adult glioma population, as well as 2) the underserved sub-Saharan African brain glioma patient population, 3) brain/intracranial meningioma, 4) brain metastasis, and 5) pediatric brain tumor patients. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. Each segmentation method was evaluated exclusively on the patient population it was trained on in each sub-challenge. In this challenge, we aim to organize the BraTS Generalizability Across Tumors (BraTS-GoAT) Challenge. The hypothesis is t...","","https://www.synapse.org/brats_goat","completed","1","","2024-01-09","2024-04-18","\N","2024-02-19 18:20:32","2024-04-08 19:21:58"
"500","ctc2024","Cell Tracking Challenge 2024","Develop novel, robust cell segmentation and tracking algorithms","Segmenting and tracking moving cells in time-lapse sequences is a challenging task, required for many applications in both scientific and industrial settings. Properly characterizing how cells change their shapes and move as they interact with their surrounding environment is key to understanding the mechanobiology of cell migration and its multiple implications in both normal tissue development and many diseases. In this challenge, we objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods using both real and computer-generated (2D and 3D) time-lapse microscopy videos of cells and nuclei. With over a decade-long history and three detailed analyses of its results published in Bioinformatics 2014, Nature Methods 2017, and Nature Methods 2023, the Cell Tracking Challenge has become a reference in cell segmentation and tracking algorithm development. This ongoing benchmarking initiative calls for segmentation-and-tracking and segm...","http://celltrackingchallenge.net/files/extras/tracking-result.gif","http://celltrackingchallenge.net/ctc-vii/","completed","\N","","2023-12-22","2024-04-05","\N","2024-03-06 18:57:14","2024-03-26 1:26:38"
"501","isbi-bodymaps24-3d-atlas-of-human-body","ISBI BodyMaps24: 3D Atlas of Human Body","","Variations in organ sizes and shapes can indicate a range of medical conditions, from benign anomalies to life-threatening diseases. Precise organ volume measurement is fundamental for effective patient care, but manual organ contouring is extremely time-consuming and exhibits considerable variability among expert radiologists. Artificial Intelligence (AI) holds the promise of improving volume measurement accuracy and reducing manual contouring efforts. We formulate our challenge as a semantic segmentation task, which automatically identifies and delineates the boundary of various anatomical structures essential for numerous downstream applications such as disease diagnosis and treatment planning. Our primary goal is to promote the development of advanced AI algorithms and to benchmark the state of the art in this field. The BodyMaps challenge particularly focuses on assessing and improving the generalizability and efficiency of AI algorithms in medical segmentation across divers...","","https://codalab.lisn.upsaclay.fr/competitions/16919","completed","9","","2024-01-10","2024-04-15","\N","2024-03-06 20:12:50","2024-03-06 20:16:23"
"502","precisionfda-automated-machine-learning-automl-app-a-thon","precisionFDA Automated Machine Learning (AutoML) App-a-thon","Unlock new insights into its potential applications in healthcare and medicine","Say goodbye to the days when machine learning (ML) access was the exclusive purview of data scientists and hello to automated ML (AutoML), a low-code ML technique designed to empower professionals without a data science background and enable their access to ML. Although ML and artificial intelligence (AI) have been highly discussed topics in healthcare and medicine, only 15% of hospitals are routinely using ML due to lack of ML expertise and a lengthy data provisioning process. Can AutoML help bridge this gap and expand ML throughout healthcare? The goal of this app-a-thon is to evaluate the effectiveness of AutoML when applied to biomedical datasets. This app-a-thon aligns with the new Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, which calls for agencies to promote competition in AI. The results of this app-a-thon will be used to help inform regulatory science by evaluating whether AutoML can match or improve the performance of traditional, human-c...","","https://precision.fda.gov/challenges/32","active","6","","2024-02-26","2024-04-26","\N","2024-03-11 22:58:43","2024-03-11 23:02:12"
"502","precisionfda-automated-machine-learning-automl-app-a-thon","precisionFDA Automated Machine Learning (AutoML) App-a-thon","Unlock new insights into its potential applications in healthcare and medicine","Say goodbye to the days when machine learning (ML) access was the exclusive purview of data scientists and hello to automated ML (AutoML), a low-code ML technique designed to empower professionals without a data science background and enable their access to ML. Although ML and artificial intelligence (AI) have been highly discussed topics in healthcare and medicine, only 15% of hospitals are routinely using ML due to lack of ML expertise and a lengthy data provisioning process. Can AutoML help bridge this gap and expand ML throughout healthcare? The goal of this app-a-thon is to evaluate the effectiveness of AutoML when applied to biomedical datasets. This app-a-thon aligns with the new Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, which calls for agencies to promote competition in AI. The results of this app-a-thon will be used to help inform regulatory science by evaluating whether AutoML can match or improve the performance of traditional, human-c...","","https://precision.fda.gov/challenges/32","completed","6","","2024-02-26","2024-04-26","\N","2024-03-11 22:58:43","2024-03-11 23:02:12"
"503","dream-olfactory-mixtures-prediction","DREAM olfactory mixtures prediction","Predicting smell from molecule features","The goal of the DREAM Olfaction Challenge is to find models that can predict how close two mixtures of molecules are in the odor perceptual space (on a 0-1 scale, 0 is total overlap, 1 is the furthest away) using physical and chemical features. For this challenge, we are providing a large published training-set of 500 mixtures measurements obtained from 3 publications, mixtures have varying number of molecules and an unpublished test-set of 46 equi-intense mixtures of 10 molecules whose distance was rated by 35 human subjects.","","https://www.synapse.org/#!Synapse:syn53470621/wiki/626022","active","1","","2024-04-19","2024-08-01","2319","2024-04-22 18:21:54","2024-04-22 21:54:39"
"504","fets-2024","Federated Tumor Segmentation (FeTS) 2024 Challenge","Benchmarking weight aggregation methods for federated training","Contrary to previous years, this time we only focus on one task and invite participants to compete in “Federated Training” for effective weight aggregation methods for the creation of a consensus model given a pre-defined segmentation algorithm for training, while also (optionally) accounting for network outages. The same data is used as in FeTS 2022 challenge, but this year the epmhasis is on instance segmentation of brain tumors.","","https://www.synapse.org/fets2024","active","1","","2024-04-01","2024-07-01","\N","2024-04-22 22:07:18","2024-04-22 22:07:18"
