This project aims to explore the performance metrics of the Sudoku Solver Benchmark under its different parameters. The main goal is to discover the best configuration for training and inference, and in particular to find the best trade-off between parameters such as batch size and dataset size.
The benchmark was run in the cloud, on an Open Telekom Cloud p2.2xlarge.8 instance equipped with a Tesla V100 GPU.
Docker, and in particular the TensorFlow GPU image, was used as the base for testing.
All puzzles and their solutions are generated on the fly by the software. It creates problems with 30 to 40 missing numbers and a unique solution.
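The on-the-fly generation described above can be sketched as follows. The benchmark's actual generator is not shown here, so the function names (`fill`, `count_solutions`, `make_puzzle`) and the removal strategy are illustrative assumptions: fill a full grid by randomized backtracking, then blank cells one by one, keeping a removal only while the puzzle still has exactly one solution.

```python
import random

def find_empty(grid):
    """Return the first empty cell (row, col), or None if the grid is full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def candidates(grid, r, c):
    """Values legal at (r, c) given its row, column, and 3x3 box."""
    row = set(grid[r])
    col = {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    box = {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in row | col | box]

def fill(grid):
    """Complete the grid in place by randomized backtracking."""
    cell = find_empty(grid)
    if cell is None:
        return True
    r, c = cell
    vals = candidates(grid, r, c)
    random.shuffle(vals)
    for v in vals:
        grid[r][c] = v
        if fill(grid):
            return True
    grid[r][c] = 0
    return False

def count_solutions(grid, limit=2):
    """Count solutions, stopping early at `limit` (2 is enough to test uniqueness)."""
    cell = find_empty(grid)
    if cell is None:
        return 1
    r, c = cell
    total = 0
    for v in candidates(grid, r, c):
        grid[r][c] = v
        total += count_solutions(grid, limit)
        grid[r][c] = 0
        if total >= limit:
            break
    return total

def make_puzzle(min_blanks=30, max_blanks=40):
    """Return (puzzle, solution); the puzzle keeps a unique solution throughout."""
    grid = [[0] * 9 for _ in range(9)]
    fill(grid)
    solution = [row[:] for row in grid]
    cells = [(r, c) for r in range(9) for c in range(9)]
    random.shuffle(cells)
    target = random.randint(min_blanks, max_blanks)
    blanks = 0
    for r, c in cells:
        if blanks >= target:
            break
        saved, grid[r][c] = grid[r][c], 0
        if count_solutions(grid) == 1:  # keep the removal only if still unique
            blanks += 1
        else:
            grid[r][c] = saved
    return grid, solution
```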
- Training dataset size: Number of puzzles used to train the model
- Batch size: Number of puzzles submitted simultaneously at each training step
- Precision policy: Numeric precision (data type) used for computation
- Training speed: How fast the model trains
- Training end loss: Loss value at the end of training, estimating model quality
- Inference score: How well the model performs at inference
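The input parameters above can be grouped into a small configuration object. This is a minimal sketch, not the benchmark's actual code: the field names and defaults are assumptions, and it only illustrates how dataset size and batch size relate (steps per epoch) and how training speed can be measured.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkConfig:
    # Hypothetical names/defaults; the benchmark's real parameters may differ.
    train_size: int = 1_000_000        # training dataset size (puzzles)
    batch_size: int = 64               # puzzles per training step
    precision_policy: str = "float32"  # e.g. "float32" or "mixed_float16"

    def steps_per_epoch(self) -> int:
        # One epoch visits every training puzzle once, batch_size at a time.
        return self.train_size // self.batch_size

def training_speed(puzzles_seen: int, elapsed_s: float) -> float:
    """Training speed expressed as puzzles processed per second."""
    return puzzles_seen / elapsed_s
```

A larger batch size shrinks the number of steps per epoch, which is exactly the kind of trade-off against final loss the benchmark is meant to expose.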