Fri. Apr 19th, 2024


Key Takeaways

  • Iterate prompts from broad to specific for better results in MidJourney image generation. Start simple and add details gradually.
  • Describe image composition in detail to control how MidJourney interprets your prompts. Be specific about layout and elements.
  • Use style modifiers and negative prompts to guide MidJourney in creating images that match your preferences. Don’t forget to use advanced tools for tweaking.


The latest version of Midjourney is pretty great at understanding what you want and giving it to you. If the AI is being a little stubborn, though, there are a few ways to increase the chances of getting exactly what you want from the imagination of the machine.


Iterate Your Prompts

Midjourney V6 and later versions have drastically changed how prompts are interpreted. The model now understands natural language much better, so you can give it a long, elaborate prompt and it will generally produce something with all the elements you asked for. But have you thought your prompt through?

If you’re throwing too many things in at once, iterate your prompts from something simple and broad to something more specific and detailed.


For example, you could start with “Traditional Japanese garden.” Then “Traditional Japanese garden with a water feature and sakura blossoms.” Then, finally, “Traditional Japanese garden with a water feature and sakura blossoms and a woman in a kimono.”

With each iteration, as you think of more elements to add, you’ll get a feel for when you’re putting too much in, or when a single element in the prompt is causing chaos.
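The broad-to-specific workflow above is easy to sketch in code if you keep your prompts in a script or notebook. This is just a string-building helper of my own devising (nothing Midjourney-specific), using the Japanese-garden example from this article:

```python
# Build a sequence of prompts, each adding one detail to the last, so you
# can test where an extra element starts causing chaos in the output.
def iterate_prompts(base: str, details: list[str]) -> list[str]:
    prompts = [base]
    for detail in details:
        joiner = "with" if len(prompts) == 1 else "and"
        prompts.append(f"{prompts[-1]} {joiner} {detail}")
    return prompts

steps = iterate_prompts(
    "Traditional Japanese garden",
    ["a water feature", "sakura blossoms", "a woman in a kimono"],
)
for step in steps:
    print(step)
# Traditional Japanese garden
# Traditional Japanese garden with a water feature
# Traditional Japanese garden with a water feature and sakura blossoms
# Traditional Japanese garden with a water feature and sakura blossoms and a woman in a kimono
```

Run each step as its own generation, compare the results, and you'll quickly see which added element broke things.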

Be Specific About Composition

Midjourney V6 represents a major leap in prompt adherence, which is how well it sticks to the instructions in the prompt. In the past, you’d get a variety of compositions and then choose which you liked, but now you can take much more control if you take the time to describe what the image layout should be.

For example: “A man and a woman are smiling at each other in side-profile. The man is on the left, wearing a red shirt. The woman is on the right, wearing a purple knitted dress. In the background is a garden with purple plants.”


An example of MidJourney sticking correctly to a prompt with a man and woman smiling at each other in a garden.

As you can see, Midjourney pretty much nailed it in all four images generated from the prompt.

Use the Right Style Modifier

Nothing will create bigger headaches for you than giving MidJourney a prompt without some sort of indication of the style you want. A huge part of an image’s appeal or final result is the style that it’s mimicking. This is like saying you want to watch a movie, but you don’t specify whether it should be a romance or a horror film—you’ll probably get something you dislike!

I’ve written about some of my favorite Midjourney style modifiers before, but these are just examples to get you thinking about style. It’s definitely worth doing some research on the various art styles, or to find out what the art style is called for your favorite pieces. It’s quite often the missing ingredient in a prompt that makes all the difference.


Use Negative Prompts

A standard prompt tells Midjourney what you want in the image, but it doesn’t stop the AI from adding more. Generally, this is a good thing, since Midjourney fills in the many varied elements associated with the terms in your prompt. However, you can use a negative prompt to tell the AI what not to put in the image.

In Midjourney, simply use “--no” followed by the elements to exclude. For example, here I used the prompt: “cars of different colors --no red cars.”

An AI-generated image of toy cars in various colors.
Sydney Louw Butler/How-To Geek/Midjourney

The power of negative prompts can’t be overstated, and it’s a good way to take control of what’s in your image.
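If you generate a lot of images and keep your prompts in scripts, appending the “--no” parameter is easy to automate. This is a small helper of my own (the function name isn't anything official), assuming the standard comma-separated form of the parameter:

```python
# Append Midjourney's --no parameter to a prompt string. Multiple
# exclusions are joined with commas, e.g. "--no red cars, trucks".
def with_negative(prompt: str, exclusions: list[str]) -> str:
    if not exclusions:
        return prompt
    return f"{prompt} --no {', '.join(exclusions)}"

print(with_negative("cars of different colors", ["red cars"]))
# cars of different colors --no red cars
```

Paste the resulting string into Discord (or the web interface) as usual.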


Let ChatGPT Write or Polish Your Prompt

Most people probably underestimate how long a prompt Midjourney can process, and humans are a little lazy by nature, so you’re probably not writing whole paragraphs of imaginative visual description. Chatbots like ChatGPT, however, have no problem with verbosity and can often come up with details you may not have thought of.

It’s as simple as asking ChatGPT for a prompt based on your needs, but I’ve created a short guide to help you combine ChatGPT and Midjourney.
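If you'd rather script this than use the chat window, here is a minimal sketch of how such a request could be assembled. The instruction wording is my own invention, and actually sending it would require the `openai` package and an API key (e.g. `OpenAI().chat.completions.create(model="gpt-4o", messages=msgs)`):

```python
# Assemble a chat request asking ChatGPT to expand a rough idea into a
# detailed Midjourney prompt. The system instruction below is just an
# example; tune it to the styles and level of detail you want.
def chatgpt_prompt_request(idea: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                "You write vivid, detailed Midjourney prompts. "
                "Describe composition, style, and lighting in one paragraph."
            ),
        },
        {"role": "user", "content": f"Write a Midjourney prompt for: {idea}"},
    ]

msgs = chatgpt_prompt_request("a traditional Japanese garden at dusk")
print(msgs[1]["content"])
# Write a Midjourney prompt for: a traditional Japanese garden at dusk
```

Whatever ChatGPT returns, treat it as a starting point and iterate on it as described earlier.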

Use the Advanced Tweaking Tools

Midjourney offers a cornucopia of post-generation tweaking tools, and if you’re not using them, you’re doing yourself a major disservice. You can see them all as buttons below your image in the Discord interface.

A Midjourney image in Discord with the modification buttons visible beneath it.
Sydney Louw Butler/How-To Geek/Midjourney

This is what they look like in the Alpha web interface, which, as of this writing, is only open to users with more than 1,000 generated images under their belt.


The advanced tweaking buttons in Midjourney's web alpha interface.
Sydney Louw Butler/How-To Geek/Midjourney

One of the best is “Vary Region,” which is Midjourney’s name for its inpainting feature. It lets you select the parts of the image you’re not happy with and regenerate those portions without affecting the rest of the picture.

A MidJourney image marked with the Vary Region function.

The panning and zooming tools are also invaluable, especially if something in your image has been cut off, or the framing is too close. None of these tools alter your original image, so it’s perfectly safe to experiment with them.



While we’re not at the point where you can take full control of AI image generation, Midjourney now has numerous ways you can easily take charge of what’s happening in your image. This should cut down on the number of times you have to roll the dice, and make it more likely that you’ll get exactly what you’re looking for.




By John P.
