Machine Learning: Sudoku Benchmark
This project aims to explore the performance of a Sudoku solver benchmark across its different parameters. The main goal is to find the best configuration for training and inference, and in particular the best trade-off between parameters such as batch size and dataset size.
Setup
Hardware
The benchmark was run in the cloud, on an Open Telekom Cloud p2.2xlarge.8 instance equipped with a Tesla V100 GPU.
Software
Docker was used as the basis for testing, specifically the TensorFlow-GPU image.
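
As a quick sanity check (not part of the benchmark itself), the following minimal snippet can confirm that TensorFlow inside the container actually sees the GPU; the image tag in the comment is an assumption, since the benchmark may pin a specific version:

```python
# Run inside the TensorFlow-GPU container, e.g.
#   docker run --gpus all -it tensorflow/tensorflow:latest-gpu python
# (the image tag is an assumption, not the benchmark's pinned version)
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
# Lists physical GPU devices; should show the Tesla V100 if the setup is correct.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
```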
Dataset
All puzzles and their solutions are generated on the fly by the software. It creates puzzles with between 30 and 40 missing numbers and a unique solution.
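
As an illustration of this approach (the benchmark's actual generator is not reproduced here), a simplified on-the-fly generator could build a solved grid with backtracking and then blank cells while a solution-counting solver confirms the puzzle stays uniquely solvable:

```python
# Simplified sketch of on-the-fly puzzle generation: grids are flat lists of
# 81 digits, 0 meaning an empty cell.
import random

def find_empty(grid):
    for i in range(81):
        if grid[i] == 0:
            return i
    return -1

def candidates(grid, idx):
    """Digits that can legally go into cell idx (row, column, 3x3 box rules)."""
    r, c = divmod(idx, 9)
    used = set()
    for k in range(9):
        used.add(grid[r * 9 + k])          # same row
        used.add(grid[k * 9 + c])          # same column
    br, bc = 3 * (r // 3), 3 * (c // 3)
    for dr in range(3):
        for dc in range(3):
            used.add(grid[(br + dr) * 9 + bc + dc])  # same 3x3 box
    return [v for v in range(1, 10) if v not in used]

def fill(grid):
    """Fill an (initially empty) grid with a random valid solution by backtracking."""
    idx = find_empty(grid)
    if idx == -1:
        return True
    values = candidates(grid, idx)
    random.shuffle(values)
    for v in values:
        grid[idx] = v
        if fill(grid):
            return True
    grid[idx] = 0
    return False

def count_solutions(grid, limit=2):
    """Count solutions, stopping early at `limit` (2 is enough to test uniqueness)."""
    idx = find_empty(grid)
    if idx == -1:
        return 1
    total = 0
    for v in candidates(grid, idx):
        grid[idx] = v
        total += count_solutions(grid, limit - total)
        grid[idx] = 0
        if total >= limit:
            break
    return total

def make_puzzle(min_blanks=30, max_blanks=40):
    solution = [0] * 81
    fill(solution)
    puzzle = solution[:]
    target = random.randint(min_blanks, max_blanks)
    cells = list(range(81))
    random.shuffle(cells)
    removed = 0
    for idx in cells:
        if removed == target:
            break
        saved, puzzle[idx] = puzzle[idx], 0
        if count_solutions(puzzle[:]) == 1:
            removed += 1            # still uniquely solvable, keep the blank
        else:
            puzzle[idx] = saved     # removal broke uniqueness, restore the digit
    return puzzle, solution

puzzle, solution = make_puzzle()
print(sum(1 for v in puzzle if v == 0), "blanks")
```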
Configuration options
- Training dataset size: number of puzzles used to train the model
- Batch size: number of puzzles submitted together in each training step
- Precision policy: numeric precision used for computation (e.g. float32 or mixed float16); see the sketch after this list
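
To show how these options could fit together, here is a hedged sketch of a Keras training setup driven by the three parameters. The model architecture and the `generate_puzzles` helper are illustrative assumptions, not the benchmark's actual code:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

def train(dataset_size=100_000, batch_size=64, precision_policy="mixed_float16"):
    # Precision policy: "float32" or "mixed_float16" (fp16 compute, fp32 variables).
    mixed_precision.set_global_policy(precision_policy)

    # Hypothetical helper returning (puzzles, solutions) as arrays of shape
    # (dataset_size, 81), with solution digits encoded as classes 0-8.
    x_train, y_train = generate_puzzles(dataset_size)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(81,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(81 * 9),
        tf.keras.layers.Reshape((81, 9)),
        # Keep the final softmax in float32 for numerical stability under mixed precision.
        tf.keras.layers.Activation("softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model.fit(x_train, y_train, batch_size=batch_size, epochs=5)
```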
Output metrics
- Training speed: how quickly the model trains
- Training end loss: the final loss value, an estimate of model quality at the end of training
- Inference score: how well the trained model performs at inference time; see the sketch after this list
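
The exact metric definitions are not spelled out here, so the following sketch only illustrates one plausible way to derive them from a Keras training run (these definitions are assumptions): training samples per second for speed, the last-epoch loss, and the fraction of test puzzles predicted entirely correctly as the inference score.

```python
import time
import numpy as np

def compute_metrics(model, x_train, y_train, x_test, y_test, batch_size, epochs=5):
    # Training speed: samples processed per second, measured around model.fit.
    start = time.time()
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=0)
    elapsed = time.time() - start
    training_speed = len(x_train) * epochs / elapsed

    # Training end loss: loss value after the final epoch.
    training_end_loss = history.history["loss"][-1]

    # Inference score: fraction of test puzzles whose 81 cells are all predicted correctly.
    preds = np.argmax(model.predict(x_test, batch_size=batch_size), axis=-1)
    inference_score = float(np.mean(np.all(preds == y_test, axis=1)))

    return training_speed, training_end_loss, inference_score
```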