AI-edited smartphone photos are no longer just “better pictures.” They’re becoming a quiet co-author of our personal history—because the image your phone saves is often the result of heavy computational reconstruction, not a simple capture of light. That’s the core provocation in Federico Ferrazza’s Tech Away piece for la Repubblica: if the phone “improves” reality automatically, who exactly is making the memory—us, or the algorithm?
The uncomfortable part is how invisible it is. With old-school editing, you knew you were choosing a filter or retouching a face. With modern camera pipelines, the “default” file can already include face refinement, texture synthesis, multi-frame merging, and selective sharpening—before you even see it in your gallery. Ferrazza’s point lands because it’s not framed as a photography debate; it’s framed as a human one: we don’t just store photos on phones, we store life there.
Computational photography isn’t neutral anymore
A lot of people still think “AI in photos” means cheesy filters. But the bigger shift is computational photography: the camera app stitches together multiple frames, denoises aggressively, reconstructs edges, and sometimes swaps parts of images based on what the system thinks should be there.
That’s why the famous “Moon photos” controversy mattered. In 2023, a Reddit user showed that a Samsung phone could produce a Moon image packed with crater detail that the sensor and lens likely couldn’t have captured from that distance. The debate wasn’t “is it pretty?” It was “is the phone generating detail that the optics couldn’t realistically resolve?” That case pushed a mainstream question into the open: at what point does enhancement become fabrication?
This isn’t just Samsung. Apple’s Deep Fusion, for example, is described as pixel-by-pixel processing powered by machine learning to optimize detail, texture, and noise. In practice, it can be great. It also means your “photo” is already a computed output.
Google takes it a step further with features like Best Take, which can assemble a group photo by mixing “better” faces from multiple shots into one final image—useful, yes, but it’s literally creating a moment that never occurred in a single frame.

What the MIT study adds (and why it’s the real headline)
Ferrazza brings in the part most people aren’t ready for: the cognitive cost. The research he references is closely aligned with a study titled Synthetic Human Memories, co-authored by researchers including Pattie Maes and Elizabeth Loftus. In a preregistered experiment with 200 participants split across conditions (unedited images vs AI-edited images vs AI-generated video variations), AI-altered visuals significantly increased false recollections, and the strongest condition produced about a 2.05x increase versus control—while confidence in those false memories also rose.
That’s the pivot from “phone camera nerd topic” to “society topic.” If AI-enhanced (or AI-generated) media can make people remember things that didn’t happen—and feel confident about it—the implications go way beyond Instagram aesthetics: legal disputes, politics, public opinion, even everyday family conflict (“You did smile in that picture!”).
A travel lens: why this matters the moment you leave home
Travel is where this hits most people emotionally. Trips are memory accelerators: new places, intense days, photos as proof you were there and as anchors for storytelling later. If your phone quietly beautifies faces, deepens skies, removes crowds, “fixes” blur, or composites expressions, it can also subtly rewrite what you remember about the place and the moment.
For travel creators and journalists, there’s a credibility layer too. The more “automatic” the enhancement becomes, the more important it is to be able to say what’s documentary and what’s interpretive—especially when images are used to support claims (overtourism, safety, weather conditions, accessibility, the reality of a destination versus the marketing version).
What I’d actually do (without turning your camera roll into a legal archive)
If you want a practical middle ground—enjoy the benefits without surrendering authorship—think in terms of “keeping the receipt.”
Shoot in a mode that preserves originals when it matters (RAW/ProRAW or an option that saves an original alongside the processed file, depending on your phone). For key travel moments—once-in-a-lifetime places, professional work, anything that could be used as evidence—export and back up the originals, not only the prettified share version.
Also, train yourself to notice when a feature is building a “best possible” moment rather than documenting a real one. Group-photo tools that swap faces are the clearest example: they reduce stress, but they also manufacture a tiny alternate history.
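The "keep the receipt" habit can be partly automated. As a minimal sketch, assuming your phone exports RAW originals with a `.dng` extension alongside processed JPEGs (extensions and paths vary by device and are placeholders here), a small shell function can copy only the originals into a separate backup folder:

```shell
#!/bin/sh
# Hypothetical sketch: back up RAW originals separately from the
# processed "share" versions. The .dng extension and the folder
# layout are assumptions -- adjust for your phone's export format.
backup_originals() {
  src="$1"   # folder containing exported photos (originals + processed)
  dest="$2"  # backup folder for originals only
  mkdir -p "$dest"
  # Copy only the RAW files, preserving timestamps; processed JPEGs stay behind.
  find "$src" -type f -name '*.dng' -exec cp -p {} "$dest" \;
}

# Example usage (placeholder paths):
# backup_originals ~/Pictures/camera_roll ~/Backups/originals
```

The point is separation of concerns: the pretty, computed version can live wherever you share it, while the closest thing to "what the sensor saw" sits untouched in its own archive.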
Ferrazza’s closing idea is the one that sticks: we’re getting used to a reality that must be improved to be acceptable. That’s not inherently dystopian—but it is a cultural shift. And like most shifts, it’s healthier when we can see it happening, name it, and choose when we want it.