The Diffusion of Stable
Ever since I first received a copyright takedown notice for accidentally using an unattributed image (don’t trust Google Image Search for Creative Commons attribution), I’ve been looking for a way to create images that could complement my posts. That search led me from taking my own photos to trying out the first image generation tools. Finally, more than a year and a half ago, I took the plunge and began using generative AI tools almost exclusively for the images accompanying my posts.
I hopped from Craiyon to Glide-Test to Dall-E, and eventually, after figuring out how to use Discord (I know… silly that it took quite a few attempts), I settled on Midjourney. There, it evolved from V3 to V5.2 (and counting), becoming a powerful and very usable tool for image creation. I don’t spend much time tailoring my prompts; for the most part, I’m satisfied with one of the four variations generated on the first try.
Projecting ahead a year, I see these tools becoming easier to use and to refine. We’ll be able to control the sources on which the model has been trained, there will be better prompt suggestions, and the output quality will improve. We’ll also be able to maintain a consistent style to create videos and GIFs with ease. Soon after, the ability to pair these with music or stories will emerge, allowing us to create our own entertainment channels.