On Stolen Things and Excavated Ones
The discourse on AI image generation circles endlessly around theft. Stolen styles. Scraped artworks. The labour of human creators fed into machines that regurgitate approximations without consent or compensation. These concerns are legitimate, and this project offers no resolution to them.
But something unexpected emerged in the process. Where the common narrative positions AI as a tool of extraction—taking from artists without permission—this project inverted the relationship. Nothing was being stolen. Something was being excavated. The machine became an instrument for recovering what was already mine, lost to time and circumstance, carried only in the unreliable architecture of memory.
The experience taught uncomfortable lessons about the nature of creative collaboration with technology. The machine resists the singular. It wants to give you what it has seen a thousand times before—the composite, the average, the statistically probable. To produce something genuinely personal requires iteration upon iteration, an ongoing battle to overcome the gravitational pull of training data. The prompts that finally worked were not descriptions of images but negotiations with a system that preferred to give me something else.
Where this leaves the broader questions of appropriation and originality, of style and ownership, remains uncertain. But this much became clear: for those willing to wrestle with it, the technology can serve rather than supplant. The choice is whether to accept what the machine offers by default, or to master it sufficiently to extract what was always yours to begin with.