Published on May 19, 2025

Data Scientist

GPU Benchmarking & Hardware Optimization

Zurich / Berlin

About Lyceum


Lyceum is building a user-centric GPU cloud from the ground up. Our mission is to make high-performance computing seamless, accessible, and tailored to the needs of modern AI and ML workloads. We're not just deploying infrastructure — we’re designing and building our own large-scale GPU cluster from scratch. If you've ever wanted to help shape a cloud platform from day one, this is your moment.

The Role


We’re looking for a data scientist with a strong understanding of GPU performance and modelling to help guide our hardware allocation strategy. This isn’t just about running benchmarks — it’s about developing intelligent, adaptive systems that learn from usage patterns and inform how we assign compute. You’ll design and execute GPU performance benchmarks, develop models that predict job behaviour, and help us build tools that make smarter infrastructure decisions — all in a hands-on, startup environment.

What You’ll Do

  • Design and run GPU benchmarks across diverse models and workloads

  • Build predictive models for job performance and hardware suitability

  • Analyse job traces and runtime metrics to identify patterns and inefficiencies

  • Develop tooling for automated hardware configuration recommendations

  • Collaborate with engineering to integrate models into the orchestration layer

What We’re Looking For

  • Strong background in data science, applied ML, or systems modelling

  • Solid understanding of GPU architectures and the compute stack (CUDA, PyTorch, etc.)

  • Experience designing experiments, running benchmarks, and profiling performance

  • A highly creative and independent mindset — this is uncharted territory

  • Strong programming skills (Python preferred)

  • A sense of ownership and the drive to build practical systems from scratch

Bonus Points

  • Prior work in HPC, GPU scheduling, or performance modelling

  • Knowledge of compiler/runtime internals or low-level hardware profiling

  • Experience with ML-based or statistical workload prediction

Why Join Us

  • Build from zero: This is a rare opportunity to join a startup at the earliest stages and shape not just the product, but the foundation of the company. You’ll have real ownership over what you build — and the freedom to do things right from the beginning.

  • Hard, meaningful problems: We’re tackling some of the most interesting challenges in cloud infrastructure, scheduling, and performance optimization — at the intersection of hardware and AI.

  • World-class hardware: You’ll be working directly with cutting-edge GPU hardware and helping build one of the most performant compute platforms in Europe.

  • Everything else: Compensation, equity, healthcare, team events, and more – it’s our job to make sure you have everything you need to do your thing!

Lyceum is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
