Empowering Local Image Generation: Harnessing Stable Diffusion for Machine Learning and AI
Ahmed Imran Kabir,
Limon Mahomud,
Abdullah Al Fahad and
Ridwan Ahmed
Informatica Economica, 2024, vol. 28, issue 1, 25-38
Abstract:
This paper examines how Stable Diffusion's diffusion models can be used to achieve state-of-the-art synthesis results on image data and other types of data. A guiding interface can also be used to control the generation process through text-to-image and image-to-image conversion. Because these models typically operate directly in pixel space, however, optimizing powerful diffusion models (DMs) often demands substantial GPU VRAM. Running Stable Diffusion and its diffusion models on local hardware in this way allows more information and depth to be added during generation, which greatly improves the level of detail in the resulting images. By integrating diffusion models into the model architecture, we turn them into powerful and flexible generators for general conditioning inputs, for example when using SDXL 1.0 and LoRA models. Overall, the paper highlights how an ordinary user can run their own Midjourney-like AI image generation with the help of machine learning and generative AI.
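Since the abstract describes running SDXL 1.0 with LoRA models locally under GPU VRAM constraints, the following minimal sketch illustrates one way such a setup could look in practice. It is not taken from the paper: it assumes the Hugging Face diffusers library, the public stabilityai/stable-diffusion-xl-base-1.0 checkpoint, and a hypothetical LoRA file name, whereas the authors' own workflow may rely on a local web interface instead.

```python
# Minimal sketch of local text-to-image generation with SDXL 1.0 (assumed
# diffusers-based setup, not the paper's exact configuration).
import torch
from diffusers import StableDiffusionXLPipeline

# Load SDXL 1.0 in half precision to reduce GPU VRAM usage on local hardware.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.to("cuda")

# Optionally load a LoRA adapter to condition the style (hypothetical file name).
# pipe.load_lora_weights("example_style_lora.safetensors")

# Generate an image from a text prompt.
image = pipe(
    prompt="a watercolor painting of a lighthouse at sunset",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("lighthouse.png")
```

A comparable image-to-image workflow would swap in StableDiffusionXLImg2ImgPipeline and pass an input image alongside the prompt; the VRAM-saving choices (half precision, safetensors weights) stay the same.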
Keywords: Stable Diffusion; Machine Learning; Image generation; generative AI; VRAM; GPU; Diffusion Models; Prompt
Date: 2024
Downloads: (external link)
https://www.revistaie.ase.ro/content/109/03%20-%20 ... 20fadad,%20ahmed.pdf (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:aes:infoec:v:28:y:2024:i:1:p:25-38
Informatica Economica is currently edited by Ion Ivan
More articles in Informatica Economica from Academy of Economic Studies - Bucharest, Romania. Contact information at EDIRC.
Bibliographic data for series maintained by Paul Pocatilu.