I have used several different generators. What they all seem to have in common is that they don’t always display what I am asking for. Example: if I am looking for a person in jeans and a t-shirt, I will get images of a person wearing totally different clothing, and it isn’t consistent. Another example: if I want a full body picture, that instruction seems to be ignored, giving just waist up or just below the waist. Same goes if I ask for side views or back views. Sometimes they work. Sometimes they don’t. More often they don’t. I have also seen that none of the negative requests seem to actually work. If I ask for pictures of people and don’t want them using cell phones or having tattoos, like magic they have cell phones. Some have tattoos. I have noticed this in every single generator I have used. Am I asking for things the wrong way, or is the AI doing whatever it wants and not paying attention to my actual request?
Thanks
Can you give an example of a complete prompt? Are you using Dall-E, Midjourney, Stable Diffusion…?
It seems that all models need to have prompts crafted specifically for them, and you need to follow up with corrections. The follow-up is critical for pretty much anything these models output.
Image-to-image also helps a lot with SD. Even some roughly-drawn blobs can be the difference between the image almost matching what you had in mind vs. looking exactly how you intended.
My favorite has been locally hosting Automatic1111’s UI. The setup process was super easy and you can get great checkpoints and models on Civitai. This gives me complete control over the models and the generation process. I think it’s an expectation thing as well. Learning how to write the correct prompt, adjust the right settings for the loaded checkpoint, and run enough iterations to get what you’re looking for can take a bit of patience and time. It may be worth learning how the AI actually ‘draws’ things to adjust how you’re interacting with it and writing prompts. There’s actually A LOT of control you gain by locally hosting - ControlNet, LoRA, checkpoint merging, etc. Definitely look up guides on prompt writing and learn about weights, order, and how negative prompts actually influence generation.
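To make the weights and negative-prompt point concrete, here’s a rough sketch of how a prompt might look in Automatic1111’s UI (the subject and weight values are just illustrative, not from the original poster). Parentheses with a number, like `(blue jeans:1.2)`, increase a token’s emphasis, and exclusions go in the separate negative prompt field rather than as “no X” in the main prompt:

```text
Prompt:    full body photo of a woman in (blue jeans:1.2) and a white t-shirt,
           standing, front view, sneakers, detailed face
Negative:  cell phone, tattoo, cropped, close-up, waist-up
```

Writing “no cell phones” in the positive prompt tends to backfire, since the model still keys on the tokens “cell phones”; the negative prompt field is where exclusions actually take effect.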
I’ve started with stable-diffusion-webui, I feel you!!
It’s time to promote: https://lemmy.dbzer0.com/c/stable_diffusion_art.
Very helpful and relaxing.
Dall-E 3 is the easiest to use and usually understands prompts the best. You can use it for free via Bing Image Creator.
Let me add to my post: I am using Perchance. I didn’t make that clear. I am not sure what system it uses; I am only a beginner at this. I use it primarily because it is completely free. There are no credits to earn or anything to buy. Most other “free” sites offer 5-10 start-up credits, then want you to purchase a credit package to continue using them.

My original intent was to take a picture I have of someone I know and use that face to create a character. I was told that was image-to-image AI. I found a few free trial ones, but they either take your original and digitize it or make you use it in the pre-designated environments they offer. You can’t create an image and use it to build your own environment. So I am stuck with text-to-image, which only works sometimes and will only work if the AI knows the person you designate. For example, if I say Taylor Swift on a beach in a bikini, it will generate an image likeness of her in the environment I specified, but ONLY if I use words they will allow. If I say, put my friend Cheryl in a similar picture, I get some stranger who looks nothing like her.

I tried the Taylor thing on Bing to test it out, and it won’t do it because I used words Bing felt were inappropriate. That isn’t exactly porn; there are actual pictures of that on the internet. There is no creative freedom on most of the AI sites available. Perchance was the only one that would allow it, was totally free, and had unlimited usage. I am just trying to learn this technique. I am not looking to spend big money on this, but I would like something that is consistent.
Taylor Swift alone gets the image blocked every time on Bing.
Given that all my experience is with Bing, all I can add with confidence is that given the goal you have, Bing is not the tool for the job.
I’d hazard a guess that you’re better off learning to edit pictures to get what you’re going for.
I use DALL-E 3 through the Bing Image Creator website. It’s free and happens to work well with the way I describe things.
For the full body picture, describe their shoes as well as their hat or hair. Or describe what they’re standing on and what they’re looking at.
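Building on that tip, here is a sketch of how such a full-body prompt might read (the wording is illustrative, not a guaranteed recipe):

```text
A full-body photo of a man in jeans and a grey t-shirt, wearing white
sneakers, standing on a wooden boardwalk, looking out at the ocean
```

Anchoring both ends of the figure (shoes at the bottom, hat or hair at the top) plus the ground they’re standing on gives the model fewer excuses to crop at the waist.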
Most of the time, DALL-E will take “do not include thing” to mean “do include thing.” Sometimes starting from Bing Chat and asking it to draw a picture without the thing works better.