InvokeAI: Configuration and performance

Nowadays, image generation with AI models is a common task. While SaaS offerings such as DALL-E provide a straightforward way to use it, open-source alternatives allow building custom services around this purpose. One of these publicly available models is Stable Diffusion, pretrained and released with weights under the CreativeML license, allowing the general public to benefit from this technology.

But even with a trained model, handling the different steps of image generation remains complex without deep knowledge of the area. The tasks themselves require dozens of parameters to achieve good results in terms of image quality and generation speed. This last barrier is lowered by a range of software acting as a human interface, either web-based or through an API.

To explore the configuration and performance of Stable Diffusion, we experiment with image generation using the InvokeAI framework. We aim to measure the speed of InvokeAI on an OTC p3.2xlarge.8 instance equipped with an NVIDIA A100. The basic configuration is the following (a sketch of an equivalent setup is given after the list):

  • FP32
  • Euler scheduler
  • 50 steps
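
As a rough illustration (not InvokeAI's own code), a comparable setup can be sketched with the Hugging Face diffusers library, which InvokeAI builds on. The model checkpoint, prompt, and device below are assumptions made for the sketch, not values taken from the InvokeAI installation:

```python
# Sketch: reproducing the baseline settings (FP32, Euler scheduler, 50 steps)
# with the diffusers library. Model id, prompt, and output path are
# illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "runwayml/stable-diffusion-v1-5"  # assumed Stable Diffusion checkpoint

# Load the pipeline in full precision (FP32), matching the baseline configuration.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)

# Swap in the Euler scheduler used in the baseline.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")  # run on the A100 GPU

# Generate with 50 denoising steps, as in the default configuration.
image = pipe("a green rose", num_inference_steps=50).images[0]
image.save("green_rose.png")
```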

This default configuration outputs an image similar to the one below:

[Generated image: "A green rose"]