For years the term photoshopping has been used for any filtering or tweaking of images. But now Adobe have added more AI than you can shake a stick at with their latest Photoshop Beta.
This allows you to tweak areas of your photos, outpaint them and even create new images from scratch. And this really sets the AI cat amongst the Photog pigeons. But I don’t think we should worry.
Technically, AI assistance has been in Adobe products for some time, but that was more mundane tweaking of lighting, shadows etc. That took a step forward with the Neural Filters, which allowed you to change lighting and tones as well as skin appearance and facial expression, and added AI-enabled photo retouching.
But this year they have added in their own AI art generator, Firefly. This is trained on Adobe Stock, not images scraped from the web. It’s a good generator, though it may not be quite up there with current versions of Stable Diffusion or Dall-E2 from a prompting point of view. But it does surprise on occasions. Like the original Dall-E2, it does struggle with hands however.
But its strengths are for editing photos. And boy is it good.
And Adobe have great hopes for Firefly, as this demo video would suggest, showing it being used in everything from video editing to product design to poster creation.
Hang on Photoshop isn’t used to generate images ?
Technically that’s right – other Adobe products are more commonly used. But some digital artists do work from PS only.
The AI tools are meant for tweaking images, but you can exploit them to create a full image.
So how are you meant to use Photoshop Beta’s new features ?
This is quite radical for Photoshop: it’s adding in content, not tweaking it.
Adobe talk about using Generative Fill to allow you to paint in or around additional imagery.
This is the big feature: you can add stuff that isn’t there, and it can be quite good at it. It doesn’t add people so well.
Take an area, select it and type in a prompt. Below I thought the Toronto skyline could be brightened up with a dinosaur.
Give me a Worked example ?
So let’s take a basic image. How about this shot of one of our local castles, taken on an early noughties Panasonic Lumix digicam.
I don’t like the sky, so I select that in Photoshop Beta and in the generative prompt type in cloudy sky. It generated 3 for me to pick from. I went with this. Arguably I could have feathered the edges, but I was being quick.
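Feathering deserves a quick word: a hard selection leaves a visible seam, whereas blurring the selection mask lets a generated sky fade gradually into the original pixels. A minimal Pillow sketch of the idea (the solid-colour images below are just stand-ins for the real photo and the generated sky, not anything Photoshop actually exposes):

```python
from PIL import Image, ImageFilter

# Stand-ins for the real pictures: a "castle" shot and an AI-generated sky.
original = Image.new("RGB", (200, 200), (60, 120, 60))   # greenish scene
new_sky = Image.new("RGB", (200, 200), (180, 180, 200))  # grey cloudy sky

# Hard selection mask: white where the new sky should go (the top half).
mask = Image.new("L", (200, 200), 0)
mask.paste(255, (0, 0, 200, 100))

# Feathering = blurring the mask, so the composite blends gradually
# instead of showing a hard seam at the selection edge.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=8))

# Where the mask is white we take the new sky, where black the original,
# and in the feathered band we get a smooth mix of the two.
result = Image.composite(new_sky, original, feathered)
```

Pixels well inside either region are untouched; only the band around the old selection edge ends up as a blend, which is what hides the join.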
Next I wanted to expand the image, so I used the crop feature to make my canvas bigger and selected my image (I cut into the image a bit on selection to make a smoother blend). I then used Select → Inverse so I was selecting the area around the image, and left the prompt space blank when I hit generate. I got 3 choices again.
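The expand-then-invert step above can be sketched the same way. A hypothetical Pillow snippet, with a plain rectangle standing in for the photo: the inverted, slightly inset selection is exactly the region a generative fill would be asked to paint.

```python
from PIL import Image

# Stand-in for the original photo.
photo = Image.new("RGB", (100, 100), (90, 90, 90))

# Step 1: enlarge the canvas (what the crop tool does when you drag outwards).
canvas = Image.new("RGB", (200, 160), (0, 0, 0))
offset = (50, 30)  # centre the photo on the bigger canvas
canvas.paste(photo, offset)

# Step 2: select the photo, inset slightly so the fill overlaps the real
# pixels and blends more smoothly, then invert the selection.
inset = 8
selection = Image.new("L", (200, 160), 255)  # start fully selected
selection.paste(0, (offset[0] + inset, offset[1] + inset,
                    offset[0] + 100 - inset, offset[1] + 100 - inset))
# `selection` is now white everywhere EXCEPT the kept core of the photo:
# the surrounding border, plus a thin overlap band inside the photo edge,
# is what the generator paints.
```

Cutting into the image on selection is the key trick here: without that overlap band, the generated surroundings would butt straight up against the original edge.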
As you can see the image is expanded out with AI generated scenery. What’s more it is well shadowed to the left of the castle.
But let’s move our castle. I thought the moon would do, so I selected the castle only and then inverted as before. It didn’t really do what I wanted, but it did generate this. You can spot the bit I missed off the select mask.
I tweaked the sky as before
And then I thought I needed a fake girlfriend for my holiday horrors image. I just selected an area in front of the castle and typed in young woman.
This didn’t take long and I didn’t really do any other corrections. So not great, but that run took just a few minutes. The AI has got the lighting almost perfect. Yes, you’d wanna tweak further but…
Okay but it’s not like you can change what folk wear ?
And the power of this stuff is quite incredible. The image on the left was generated in Stable Diffusion. I thought she looked a bit cold, so I added a sweater using PS Beta.
Crikey ! What do the pros think of Photoshop Beta ?
Good question ! That’s what Patrick Hall of FStoppers mulls over in this vlog, whilst demonstrating how this works in real time in the hands of a professional.
Patrick raises both the power of this and its wider impact.
But hang on I could get a plug-in to get clouds or use Neo ?
Yes you could, and there’s no doubt about that.
Another feature that isn’t initially obvious is that you now have a built-in image creator. Just create a blank image, select all and generate your image.
And then you can expand them. So take my title image, generated in Photoshop Beta. A few generations down the line I got this.
It is not infallible, as I also got this spring-break-gone-seriously-wrong image.
But with a little tweaking this ended up here. Maybe not your cuppa, but I can see a weird album cover vibe. And none of this exists.
But I’m a film shooter this doesn’t matter ?
Well, it is an editing tool. Take this shot I took in Galerie David d’Angers in Angers on a crappy disposable B&W camera. A nicely fuzzy, poor contrast image. My other half took a shine to this bloke.
Ai won’t work will it ?
Other AI can do this, but just not with the same aplomb. This was Stable Diffusion XL beta (via Night Cafe). Interestingly, it cleaned up the rest of the image.
All this is well and good, but I’m a skinflint and GIMP is free. Can I try this ?
So Firefly is also available web-based, allowing you to either create or inpaint images. The creation lags just behind existing AI leaders like Stable Diffusion or Dall-E2 IMHO, but not by much.
But the inpainting is really impressive. It is by far the best I’ve seen online.
This was AI generated but with poor facial qualities. Then Firefly had a play.
Now you might note the free-to-play version adds a watermark to every image, which makes it a tad less useful.
Existing PS users can sign up for Photoshop Beta on the same page.
But can’t you do this elsewhere ?
Yes, but not as cohesively.
It still isn’t the best tool for image generation, but its infill and image expansion are spectacular.
I don’t see Photoshop Beta as a threat to photography, but more a tool for tweaking or expanding an image. More power to the image creator than a threat to them.
It is not perfect (hands, as ever) but this is good.
Photoshop Beta has many additional advancements besides the new AI functions, such as a revamped user interface and faster processing speeds.