diff --git a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv index bb1ea166fd..ebbccc428a 100644 --- a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv +++ b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv @@ -279,7 +279,7 @@ "278","qbi-hackathon","QBI hackathon","The QBI hackathon","The QBI hackathon is a 48-hour event connecting the vibrant Bay Area developer community with the scientists from UCSF, UCB and UCSC, during which we work together on the cutting edge biomedical problems. Advances in computer vision, AI, and machine learning have enabled computers to pick out cat videos, recognize people''s faces from photos, play video games and drive cars. More recently, application of deep neural nets to protein structure prediction completely revolutionized the field. We look forward to seeing how far we can push science ahead when we apply these latest algorithms to biomedically relevant light microscopy, electron microscopy, and proteomics data. If you love FFTs, transformers, language models, topological data processing, or simply writing code, this is your chance to apply your skills to make an impact on global healthcare. 
Beyond the actual event, we hope to establish a better connection between talented developers and scientists in the Bay Area, so that w...","","https://www.eventbrite.com/e/qbi-hackathon-2023-tickets-633794304827?aff=oddtdtcreator","completed","\N","","2023-11-04","2023-11-05","\N","2023-10-06 21:22:51","2023-11-15 22:49:20" "279","niddk-central-repository-data-centric-challenge","NIDDK Central Repository Data-Centric Challenge","Enhance NIDDK datasets for future Artificial Intelligence (AI) applications","The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repository (https://repository.niddk.nih.gov/home/) is conducting a Data Centric Challenge aimed at augmenting existing Repository data for future secondary research including data-driven discovery by artificial intelligence (AI) researchers. The NIDDK Central Repository (NIDDK-CR) program strives to increase the utilization and impact of the resources under its guardianship. However, lack of standardization and consistent metadata within and across studies limit the ability of secondary researchers to easily combine datasets from related studies to generate new insights using data science methods. In the fall of 2021, the NIDDK-CR began implementing approaches to augment data quality to improve AI-readiness by making research data FAIR (findable, accessible, interoperable, and reusable) via a small pilot project utilizing Natural Language Processing (NLP) to tag study variables. In 2022, the NIDD...","","https://www.challenge.gov/?challenge=niddk-central-repository-data-centric-challenge","completed","\N","","2023-09-20","2023-11-03","\N","2023-10-18 16:58:17","2023-11-15 22:49:26" "280","stanford-ribonanza-rna-folding","Stanford Ribonanza RNA Folding","A path to programmable medicine and scientific breakthroughs","Ribonucleic acid (RNA) is essential for most biological functions. 
A better understanding of how to manipulate RNA could help usher in an age of programmable medicine, including first cures for pancreatic cancer and Alzheimer''s disease as well as much-needed antibiotics and new biotechnology approaches for climate change. But first, researchers must better understand each RNA molecule's structure, an ideal problem for data science.","","https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding","completed","8","","2023-08-23","2023-11-24","\N","2023-10-23 20:58:06","2023-11-15 22:49:31" -"281","uls23","Universal Lesion Segmentation Challenge '23","Advancements, challenges, and a universal solution emerges","Significant advancements have been made in AI-based automatic segmentation models for tumours. Medical challenges focusing on e.g. Liver, kidney, or lung tumours have resulted in large performance improvements for segmenting these types of lesions. However, in clinical practice there is a need for versatile and robust models capable of quickly segmenting the many possible lesions types in the thorax-abdomen area. Developing a universal lesion segmentation (uls) model that can handle this diversity of lesions types requires a well-curated and varied dataset. Whilst there has been previous work on uls [6-8], most research in this field has made extensive use of a single partially annotated dataset [9], containing only the long- and short-axis diameters on a single axial slice. 
Furthermore, a test set containing 3d segmentation masks used during evaluation on this dataset by previous publications is not publicly available.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/747/ULS23_logo_aoB8tlx.png","https://uls23.grand-challenge.org/","active","5","","2023-10-29","2024-04-09","\N","2023-11-02 15:35:22","2024-03-18 16:30:06" +"281","uls23","Universal Lesion Segmentation Challenge '23","Advancements, challenges, and a universal solution emerges","Significant advancements have been made in AI-based automatic segmentation models for tumours. Medical challenges focusing on e.g. liver, kidney, or lung tumours have resulted in large performance improvements for segmenting these types of lesions. However, in clinical practice there is a need for versatile and robust models capable of quickly segmenting the many possible lesion types in the thorax-abdomen area. Developing a universal lesion segmentation (ULS) model that can handle this diversity of lesion types requires a well-curated and varied dataset. Whilst there has been previous work on ULS [6-8], most research in this field has made extensive use of a single partially annotated dataset [9], containing only the long- and short-axis diameters on a single axial slice. 
Furthermore, a test set containing 3d segmentation masks used during evaluation on this dataset by previous publications is not publicly available.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/747/ULS23_logo_aoB8tlx.png","https://uls23.grand-challenge.org/","completed","5","","2023-10-29","2024-04-10","\N","2023-11-02 15:35:22","2024-04-10 17:52:24" "282","vessel12","VESSEL12","Assess methods for blood vessels in lung CT images","The VESSEL12 challenge compares methods for automatic (and semi-automatic) segmentation of blood vessels in the lungs from CT images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/1/logo.png","https://vessel12.grand-challenge.org/","completed","5","https://doi.org/10.1016/j.media.2014.07.003","2011-11-25","2012-04-01","\N","2023-11-08 00:42:00","2023-11-17 21:30:05" "283","crass","CRASS","Invites participants to submit clavicle segmentation results","Crass stands for chest radiograph anatomical structure segmentation. The challenge currently invites participants to send in results for clavicle segmentation algorithms.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/5/logo.png","https://crass.grand-challenge.org/","completed","5","","\N","\N","\N","2023-11-08 00:42:00","2023-11-15 22:09:56" "284","anode09","ANODE09","Automatic pulmonary nodule detection systems in chest CT scans","ANODE09 is an initiative to compare systems that perform automatic detection of pulmonary nodules in chest CT scans on a single common database, with a single evaluation protocol.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/7/logo.png","https://anode09.grand-challenge.org/","completed","5","https://doi.org/10.1016/j.media.2010.05.005","\N","\N","\N","2023-11-08 00:42:00","2023-11-17 23:17:55" @@ -424,7 +424,7 @@ "423","crossmoda2022","Cross-Modality Domain Adaptation: Segmentation & Classification","CrossMoDA 2022: unsupervised domain adaptation","The CrossMoDA 2022 challenge is the second 
edition of the first large and multi-class medical dataset for unsupervised cross-modality Domain Adaptation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/701/squarelogo_2022.png","https://crossmoda2022.grand-challenge.org/","active","5","","2022-05-11","\N","\N","2023-11-08 00:42:00","2023-11-17 23:32:53" "424","atm22","Multi-site, Multi-Domain Airway Tree Modeling (ATM'22)","Airway segmentation in x-ray CT for pulmonary diseases","Airway segmentation is a crucial step for the analysis of pulmonary diseases including asthma, bronchiectasis, and emphysema. The accurate segmentation based on X-Ray computed tomography (CT) enables the quantitative measurements of airway dimensions and wall thickness, which can reveal the abnormality of patients with chronic obstructive pulmonary disease (COPD). Besides, the extraction of patient-specific airway models from CT images is required for navigation-assisted surgery.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/702/logo_xqf7twK.png","https://atm22.grand-challenge.org/","active","5","https://doi.org/10.1007/978-3-031-16431-6_48","2022-08-17","\N","\N","2023-11-08 00:42:00","2023-11-21 17:16:40" "425","ps-fh-aop-2023","FH-PS-AOP challenge","Fetal head and pubic symphysis segmentation","The task of the FH-PS-AOP grand challenge is to automatically segment 700 FH-PSs from transperineal ultrasound images in the devised Set 2 (test set), given the availability of Set 1, consisting of 401 images. Set 2 is held private and therefore not released to the potential participants to prevent algorithm tuning, but instead the algorithms have to be submitted in the form of Docker containers that will be run by organizers on Set 2. 
The challenge is organized by taking into account the current guidelines for biomedical image analysis competitions, in particular the recommendations of the Biomedical Image Analysis Challenges (BIAS) initiative for transparent challenge reporting.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/703/F2_WDBTbsq.tif","https://ps-fh-aop-2023.grand-challenge.org/","completed","5","https://doi.org/10.1007/s11517-022-02747-1","2023-03-27","2023-09-20","\N","2023-11-08 00:42:00","2023-11-16 17:41:56" -"426","shifts","Shifts Challenge 2022","Shifts challenge 2022: distributional shift and uncertainty","The goal of the Shifts Challenge 2022 is to raise awareness among the research community about the problems of distributional shift, robustness, and uncertainty estimation, and to identify new solutions to address them. The competition will consist of two new tracks: White Matter Multiple Sclerosis (MS) lesion segmentation in 3D Magnetic Resonance Imaging (MRI) of the brain and Marine cargo vessel power estimation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/704/logo_1200.png","https://shifts.grand-challenge.org/","active","5","https://arxiv.org/abs/2206.15407","2022-09-15","2024-04-08","\N","2023-11-08 00:42:00","2023-11-17 23:33:07" +"426","shifts","Shifts Challenge 2022","Shifts challenge 2022: distributional shift and uncertainty","The goal of the Shifts Challenge 2022 is to raise awareness among the research community about the problems of distributional shift, robustness, and uncertainty estimation, and to identify new solutions to address them. 
The competition will consist of two new tracks: White Matter Multiple Sclerosis (MS) lesion segmentation in 3D Magnetic Resonance Imaging (MRI) of the brain and marine cargo vessel power estimation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/704/logo_1200.png","https://shifts.grand-challenge.org/","completed","5","https://arxiv.org/abs/2206.15407","2022-09-15","2024-04-08","\N","2023-11-08 00:42:00","2023-11-17 23:33:07" "427","megc2022","ACMMM MEGC2022: Facial Micro-Expression Grand Challenge","Facial macro- and micro-expressions spotting","The unseen testing set (MEGC2022-testSet) contains 10 long videos, including 5 long videos from SAMM (SAMM Challenge dataset) and 5 clips cropped from different videos in CAS(ME)3. The frame rate for the SAMM Challenge dataset is 200 fps and the frame rate for CAS(ME)3 is 30 fps. The participants should test on this unseen dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/705/acmmm2022_logo.png","https://megc2022.grand-challenge.org/","active","5","https://doi.org/10.1109/fg47880.2020.00029","2022-05-23","\N","\N","2023-11-08 00:42:00","2023-11-16 17:39:17" "428","midog2022","MItosis DOmain Generalization Challenge 2022","Mitosis domain generalization challenge 2022","Motivation: Mitosis detection is a key component of tumor prognostication for various tumors. Modern deep learning architectures provide detection accuracies for mitosis that are on the level of human experts. Mitosis is known to be relevant for many tumor types, yet, when trained on one tumor / tissue type, the performance will typically drop significantly on another. Scope: Detect mitotic figures (cells undergoing cell division) from histopathology images (object detection). You will be provided with images from 6 different tumor types, 5 out of which are labeled. In total the set consists of 405 cases and includes 9501 mitotic figure annotations in the training set. 
Evaluation will be done on ten different tumor types with the F1 score as the main metric.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/706/midog_compact.png","https://midog2022.grand-challenge.org/","completed","5","https://doi.org/10.5281/zenodo.6362337","2022-08-04","2022-08-30","\N","2023-11-08 00:42:00","2023-11-16 17:39:11" "429","isles22","Ischemic Stroke Lesion Segmentation Challenge","Ischemic stroke lesion segmentation challenge","The goal of this challenge is to evaluate automated methods of stroke lesion segmentation in MR images. Participants are tasked with automatically generating lesion segmentation masks from DWI, ADC and FLAIR MR modalities. The task consists of a single phase of algorithm evaluation. Participants will submit their segmentation model (""algorithm"") via a Docker container which will then be used to generate predictions on a hidden dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/707/Slide1_N1qHO1K.png","https://isles22.grand-challenge.org/","active","5","","2022-07-15","2030-12-06","\N","2023-11-08 00:42:00","2023-11-15 21:48:03" @@ -497,7 +497,7 @@ "496","lightmycells","Light My Cells: Bright Field to Fluorescence Imaging Challenge","Enhance biology and microscopy","Join the Light My Cells France-Bioimaging challenge! Enhance biology and microscopy by contributing to the development of new image-to-image deep labelling methods. The task: predict the best-focused output images of several fluorescently labelled organelles from label-free transmitted light input images. Dive into the future of imaging with us! #LightMyCellsChallenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/750/logo_light_my_cells.png","https://lightmycells.grand-challenge.org/","upcoming","5","","\N","\N","\N","2024-01-31 22:49:33","2024-02-05 16:58:06" "497","hack-rare-disease","Harvard Rare Disease Hackathon 2024","Are you a student interested in using AI/ML to tackle rare diseases? 
Join us!","This March 2-3, join us for the 2024 Harvard Rare Disease Hackathon, where students will gather on Harvard''s campus to set forth their own data-driven solutions for rare diseases. Participants will have the chance to analyze public and patient-sourced genomic and clinical datasets, and will be challenged to produce deliverables for participating patient organizations. These deliverables may take the form of a data report, computational tool, or web/mobile application that improves the lives of patients or furthers research progress. Participation is free and open to all undergraduate and graduate students who register with their .edu email address.","","https://www.harvard-rarediseases.org/","completed","\N","","2024-03-02","2024-03-03","\N","2024-02-06 00:12:34","2024-02-06 0:41:24" "498","dreaming","Diminished Reality for Emerging Applications in Medicine through Inpainting","Dataset of Synthetic Surgery Scenes: Photorealistic Operating Room Simulations","The Diminished Reality for Emerging Applications in Medicine through Inpainting (DREAMING) challenge seeks to pioneer the integration of Diminished Reality (DR) into oral and maxillofacial surgery. While Augmented Reality (AR) has been extensively explored in medicine, DR remains largely uncharted territory. DR involves virtually removing real objects from the environment by replacing them with their background. Recent inpainting methods present an opportunity for real-time DR applications without scene knowledge. DREAMING focuses on implementing such methods to fill obscured regions in surgery scenes with realistic backgrounds, emphasizing the complex facial anatomy and patient diversity. The challenge provides a dataset of synthetic yet photorealistic surgery scenes featuring humans, simulating an operating room setting. 
Participants are tasked with developing algorithms that seamlessly remove disruptions caused by medical instruments and hands, offering surgeons an unimpeded ...","https://rumc-gcorg-p-public.s3.amazonaws.com/b/752/isbi_dreaming_banner_gc_297CU3H.x10.jpeg","https://dreaming.grand-challenge.org/","active","5","","2024-01-08","2024-04-27","\N","2024-02-12 21:56:27","2024-02-20 6:38:09" -"499","brats-goat","BraTS-ISBI 2024 - Generalizability Across Tumors Challenge","BraTS-GoAT Challenge: Generalizability Across Brain Tumor Segmentation Tasks","The International Brain Tumor Segmentation (BraTS) challenge has been focusing, since its inception in 2012, on generating a benchmarking environment and a dataset for delineating adult brain gliomas. The focus of the BraTS 2023 challenge remained the same: generating a standard benchmark environment. At the same time, the dataset expanded into explicitly addressing 1) the same adult glioma population, as well as 2) the underserved sub-Saharan African brain glioma patient population, 3) brain/intracranial meningioma, 4) brain metastasis, and 5) pediatric brain tumor patients. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. Each segmentation method was evaluated exclusively on the patient population it was trained on in each sub-challenge. In this challenge, we aim to organize the BraTS Generalizability Across Tumors (BraTS-GoAT) Challenge. The hypothesis is t...","","https://www.synapse.org/brats_goat","active","1","","2024-01-09","2024-04-06","\N","2024-02-19 18:20:32","2024-04-03 19:14:42" -"500","ctc2024","Cell Tracking Challenge 2024","Develop novel, robust cell segmentation and tracking algorithms","Segmenting and tracking moving cells in time-lapse sequences is a challenging task, required for many applications in both scientific and industrial settings. 
Properly characterizing how cells change their shapes and move as they interact with their surrounding environment is key to understanding the mechanobiology of cell migration and its multiple implications in both normal tissue development and many diseases. In this challenge, we objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods using both real and computer-generated (2D and 3D) time-lapse microscopy videos of cells and nuclei. With over a decade-long history and three detailed analyses of its results published in Bioinformatics 2014, Nature Methods 2017, and Nature Methods 2023, the Cell Tracking Challenge has become a reference in cell segmentation and tracking algorithm development. This ongoing benchmarking initiative calls for segmentation-and-tracking and segm...","http://celltrackingchallenge.net/files/extras/tracking-result.gif","http://celltrackingchallenge.net/ctc-vii/","active","\N","","2023-12-22","2024-04-05","\N","2024-03-06 18:57:14","2024-03-26 1:26:38" +"499","brats-goat","BraTS-ISBI 2024 - Generalizability Across Tumors Challenge","BraTS-GoAT Challenge: Generalizability Across Brain Tumor Segmentation Tasks","The International Brain Tumor Segmentation (BraTS) challenge has been focusing, since its inception in 2012, on generating a benchmarking environment and a dataset for delineating adult brain gliomas. The focus of the BraTS 2023 challenge remained the same: generating a standard benchmark environment. At the same time, the dataset expanded into explicitly addressing 1) the same adult glioma population, as well as 2) the underserved sub-Saharan African brain glioma patient population, 3) brain/intracranial meningioma, 4) brain metastasis, and 5) pediatric brain tumor patients. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. 
Each segmentation method was evaluated exclusively on the patient population it was trained on in each sub-challenge. In this challenge, we aim to organize the BraTS Generalizability Across Tumors (BraTS-GoAT) Challenge. The hypothesis is t...","","https://www.synapse.org/brats_goat","active","1","","2024-01-09","2024-04-18","\N","2024-02-19 18:20:32","2024-04-08 19:21:58" +"500","ctc2024","Cell Tracking Challenge 2024","Develop novel, robust cell segmentation and tracking algorithms","Segmenting and tracking moving cells in time-lapse sequences is a challenging task, required for many applications in both scientific and industrial settings. Properly characterizing how cells change their shapes and move as they interact with their surrounding environment is key to understanding the mechanobiology of cell migration and its multiple implications in both normal tissue development and many diseases. In this challenge, we objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods using both real and computer-generated (2D and 3D) time-lapse microscopy videos of cells and nuclei. With over a decade-long history and three detailed analyses of its results published in Bioinformatics 2014, Nature Methods 2017, and Nature Methods 2023, the Cell Tracking Challenge has become a reference in cell segmentation and tracking algorithm development. This ongoing benchmarking initiative calls for segmentation-and-tracking and segm...","http://celltrackingchallenge.net/files/extras/tracking-result.gif","http://celltrackingchallenge.net/ctc-vii/","completed","\N","","2023-12-22","2024-04-05","\N","2024-03-06 18:57:14","2024-03-26 1:26:38" "501","isbi-bodymaps24-3d-atlas-of-human-body","ISBI BodyMaps24: 3D Atlas of Human Body","","Variations in organ sizes and shapes can indicate a range of medical conditions, from benign anomalies to life-threatening diseases. 
Precise organ volume measurement is fundamental for effective patient care, but manual organ contouring is extremely time-consuming and exhibits considerable variability among expert radiologists. Artificial Intelligence (AI) holds the promise of improving volume measurement accuracy and reducing manual contouring efforts. We formulate our challenge as a semantic segmentation task, which automatically identifies and delineates the boundary of various anatomical structures essential for numerous downstream applications such as disease diagnosis and treatment planning. Our primary goal is to promote the development of advanced AI algorithms and to benchmark the state of the art in this field. The BodyMaps challenge particularly focuses on assessing and improving the generalizability and efficiency of AI algorithms in medical segmentation across divers...","","https://codalab.lisn.upsaclay.fr/competitions/16919","active","9","","2024-01-10","2024-04-15","\N","2024-03-06 20:12:50","2024-03-06 20:16:23" "502","precisionfda-automated-machine-learning-automl-app-a-thon","precisionFDA Automated Machine Learning (AutoML) App-a-thon","Unlock new insights into its potential applications in healthcare and medicine","Say goodbye to the days when machine learning (ML) access was the exclusive purview of data scientists and hello to automated ML (AutoML), a low-code ML technique designed to empower professionals without a data science background and enable their access to ML. Although ML and artificial intelligence (AI) have been highly discussed topics in healthcare and medicine, only 15% of hospitals are routinely using ML due to lack of ML expertise and a lengthy data provisioning process. Can AutoML help bridge this gap and expand ML throughout healthcare? The goal of this app-a-thon is to evaluate the effectiveness of AutoML when applied to biomedical datasets. 
This app-a-thon aligns with the new Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, which calls for agencies to promote competition in AI. The results of this app-a-thon will be used to help inform regulatory science by evaluating whether AutoML can match or improve the performance of traditional, human-c...","","https://precision.fda.gov/challenges/32","active","6","","2024-02-26","2024-04-26","\N","2024-03-11 22:58:43","2024-03-11 23:02:12"