The competition

There is a lot of evidence that Neural Architecture Search (NAS) can produce excellent models for ML tasks on well-known datasets, that is, datasets like CIFAR-10 and ImageNet where years of research have established a set of best practices for achieving good results. This leads us to wonder: are we overfitting our NAS algorithms to well-known datasets?

We want to investigate the "real-world" use case of NAS, that is, searching for a state-of-the-art architecture on an entirely novel dataset. In such a case, there is no existing set of best practices to build from, nor extensive research into optimal architectural patterns, augmentation policies, or hyperparameter selection. In essence, we are asking how well NAS pipelines can work "out-of-the-box" with little-to-no time for tuning. To explore this, we have designed this competition to evaluate how well a NAS pipeline fares on unseen, novel tasks and datasets.

In this competition, you will be asked to design a NAS pipeline, namely a data processor, a NAS algorithm, and a training policy. This pipeline will then be evaluated on our machines over a number of completely novel datasets: the challenge is to design a pipeline that is flexible enough to be competitive regardless of the datasets provided to it.
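To make the three-part pipeline structure concrete, here is a minimal sketch of what such a submission interface might look like. All class and function names below (`DataProcessor`, `SearchStrategy`, `TrainingPolicy`, `run_pipeline`) are illustrative assumptions, not the competition framework's actual API:

```python
from abc import ABC, abstractmethod


class DataProcessor(ABC):
    """Prepares an unseen dataset: normalisation, augmentation, batching."""

    @abstractmethod
    def process(self, raw_data):
        ...


class SearchStrategy(ABC):
    """Searches for a candidate architecture suited to the processed data."""

    @abstractmethod
    def search(self, data):
        ...


class TrainingPolicy(ABC):
    """Trains the chosen architecture: schedule, optimiser, hyperparameters."""

    @abstractmethod
    def train(self, model, data):
        ...


def run_pipeline(processor, strategy, policy, raw_data):
    """End-to-end flow: process the data, search for a model, then train it.

    Because the evaluation datasets are unknown in advance, each component
    must make its decisions from the data alone, with no dataset-specific
    assumptions baked in.
    """
    data = processor.process(raw_data)
    model = strategy.search(data)
    return policy.train(model, data)
```

The point of the sketch is the separation of concerns: each stage can only rely on what it learns from the data it receives, which is what forces the pipeline to generalise across novel datasets.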

This year we are running the competition at AutoML 2025 (2025.automl.cc). AutoML is more aligned with NAS than previous venues, and we think it will be a more suitable home for the competition.

The competition will be run in three phases:

  • Phase 1: You download our framework, and develop your method locally. Open Now!
  • Phase 2: You send us your submission, and we do a shorter test run on our system. Approximately June 2025.
  • Phase 3: We rerun your last working Phase 2 submission for a longer final run. Approximately August 2025.

Results will be announced at AutoML 2025 in New York, September 8th-11th, with finalists notified in August.