Keeping Photography Real In An AI World

  • Writer: Lee Winder
  • Sep 14
  • 7 min read

How I use AI tools to save time on repetitive tasks while ensuring the creativity and art remain mine, and respect the work of the designers and artists whose games I photograph



Photography has always involved tools. Cameras, lenses, film, sensors, editing software: each has shaped how we work and what we create. Artificial intelligence is simply the latest tool to appear, and like the ones before it, it’s worth asking where it helps and where it risks crossing a line. That balance is something I think about a lot at Rising Dice, so here I want to be open about how I use AI, where it helps, and where it doesn’t, while making sure the creativity stays with me and with the people who make the games I’m shooting.


It’s probably not worth calling out any more, but AI is everywhere. The Neewer app I use to control my lighting rigs has it, my Logitech mouse software has it, and half the tools on my Mac now seem to have added some kind of “AI feature” whether they need it or not. Most of the time these additions are nothing more than a gimmick.



It’s hard to avoid.


Do I Even Use AI at Rising Dice?

Of course I do. If people ignore the efficiency gains that come from AI, they’re choosing to do things the hard way, and the long way, when they don’t need to. But I’ve made a few rules for myself that I stick to.


Photography is an art form, regardless of what you’re taking pictures of, and AI, in all its different guises, does not create art because of how it fundamentally works (I will firmly plant my flag on that hill!). So it’s important to me that anything coming out of Rising Dice ultimately comes from me, not from an algorithm stringing together prediction after prediction.


My framework is pretty straightforward. If it’s something I could already do, and AI just makes it faster (or less painful!), then I’ll use it. If it’s something I might not normally do but could, and the result is clearly better, I’ll consider it and probably lean into it.


But, if it’s something I can’t do, wouldn’t do, or that shifts the work from “real” into “generated” then I leave it alone and keep it manual.


That last category is the one where most of the debate exists, because everyone has their own definition of what counts as “generated”. Is bumping up the contrast, correcting colour, or removing blemishes moving from real to generated? Does the amount of processing your phone applies to every picture you take move an image from real to generated? None of these rely on AI (well, phones are starting to, I suppose), and in some cases photographers have been doing this for decades. Yet they all change the image from what came straight out of the camera.


AI Outside of Photography

Before I even get to shooting a board game, there’s the work needed to get there in the first place.


Take this blog and this post. I draft everything myself, actually typing out each and every word on a keyboard like it was still 2020, shaping the sections and deciding what I want to say. But when I’m done, I know my writing often meanders. I mix tenses, swap between “you” and “they”, and generally make it harder to read than it needs to be. That is very much me, but it doesn’t always help the reader.


So I run the draft through AI, usually ChatGPT or Gemini, with a clear request to tighten up the language and clean up the grammar without changing my tone. I’ll point it to old posts I’ve written so it has context on my style and knows I want the writing a bit tighter and cleaner. Then I go back through its version (like I am right now) and make sure it still feels like my voice.


It’s no different to asking a family member, or in my case my wife (who writes and speaks far better than I do), to edit a piece before I publish it. It’s work I could do, but it would take a lot longer.


The same goes for my website, www.risingdice.com, which I recently gave a bit of a refresh to make it more targeted and brighter than it was.


AI didn’t design it for me, but it did help me review the old site and compare it against other board game and product photography sites, bringing together examples and ways of approaching the presentation of the site that I could use. Doing that research myself would have meant hours of taking screenshots and writing notes whereas AI gave me a summary in minutes, which I could then shape into my own design decisions.


The end result was still my work. I just saved myself a lot of legwork and I even discovered a few board game photography channels I hadn’t come across before, which was a nice bonus.



I do think ChatGPT’s design algorithm needs a bit of work

And really, that covers the areas most of my AI use outside of photography falls into:

  • Editing my writing so I can polish it further

  • Researching brands or companies so I understand their background better



AI in Photography

This is the part people will find more contentious, and that’s using AI in the photography itself.


I use Adobe Lightroom Classic for cataloguing and editing, and I rarely lean on Photoshop. Lightroom does have AI features, but they don’t use Adobe’s Firefly generative tools; instead everything is processed on-device using pre-trained models. Of those tools, I only ever use three.


Denoise

Removes noise generated when capturing the image.


Most of the time I don’t need Denoise, as I have control over the lighting and usually shoot at low ISO, and modern cameras handle noise far better than they used to. But sometimes, as with this shot for Confidential Information, I want a darker, moodier scene, and to make it easier to capture I’ll push the ISO higher, which can generate quite a bit of noise in the darker areas.



I could solve that with more lights, a more expensive setup, and of course more time. Instead, I use Denoise as it lets me still use the image I shot, but cleaner, and I can move on more quickly.


Masking

Selects and adjusts specific parts of an image.


I use masking most often for white-background pack shots, as even the cleanest, freshest paper background isn’t truly white, and simply brightening the whole shot would wash out the product too.


In the past, creating a mask around an image was painstaking manual work as you clipped the selection around every edge and line. Object-aware masking brushes now get me 85 percent of the way there in minutes. They’re not perfect: in the image below for Timber Town, the small meeples at the top right really confused it, and the white edges of the boxes didn’t provide enough contrast, but it still saved me hours across all the images. I still refine them by hand, and the photo remains real; it’s just significantly quicker to prepare.



Remove

Takes things out of an image and fills the gaps.


This is clearly the most contentious tool of the three, and the one I limit the most, because it’s so easy to move from something real to something that has never existed.


Board game boxes rarely arrive in perfect condition regardless of how much bubble wrap is used, and dents don’t belong in a final product photo. In the past I would clone and blend the damage away manually, but Remove now does the same in less time, without changing what the product is.


Take this image of the box for Terraria, which came with more packaging than I’ve seen in my life, and yet it still managed to arrive with some mangled corners and a crease in each corner of the front box. This is a minor alteration that would have been made anyway, so it fits firmly into the area where AI is a significant time saver.



This is my limit on Remove: fixing blemishes and damage, removing props that were needed to create the shot, and cleaning up any rogue elements. Any more and I feel you’re starting to move into generation territory, and the product is no longer the product you shot.


That said, Remove can be incredibly unreliable. Sometimes it gets it right and sometimes it produces absolute nonsense, but that is usually down to not selecting all of the object, especially its shadows, which leaves the AI trying to create something with very little context.


As an example, I needed a shot of some meeples with and without the box they come in. In reality I simply shot the image, removed the box, and shot it again, but I wanted to see what would happen if I used the Remove tool.


Instead of blank space, it gave me what looked like a monstrous cross between a sandwich and a kebab.



Because of quirks like that, I’ve locked my Lightroom version to 14.3.1. Later versions alter areas outside the mask, in what I assume is an attempt to make changes blend better. For me that means less control, and it alters the image further than I feel is acceptable. It’s also a reminder that we’re always tied to the models built into the software, for better or worse.



That’s how I use AI today.


It’s not about replacing the art or generating images from nothing. It’s about making the repetitive or technical parts faster, so I can focus more on the creative side, which is the part that still has to come from me.
