Project Snapshot
A creative project that pairs my photos with LLM-crafted narratives, soundscapes, and shareable microsites.
Photography · Creative Coding · Audio
Skills Flexed
- Azure OpenAI prompt engineering with JSON schema responses (structured-output sketch after this list)
- Next.js 15 App Router UI + accessibility-first interactions
- Adaptive learning logic: diagnostics, drift control, and retry flows
- PromptOps harness that regression-tests prompts against evaluation suites
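To make the first bullet concrete, here's a minimal sketch of a structured-output call using the openai npm package's AzureOpenAI client; the deployment name, schema fields, and environment variables are assumptions for the example rather than the project's real configuration.

```ts
// Minimal sketch: ask an Azure OpenAI deployment for a narrative draft
// constrained to a JSON schema. Deployment name, schema fields, and env vars
// are illustrative assumptions, not the project's actual configuration.
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-10-21",
});

export async function draftNarrative(fieldNotes: string, visionTags: string[]) {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // assumed deployment name
    messages: [
      { role: "system", content: "You turn field notes and vision tags into a short photo-essay passage." },
      { role: "user", content: `Notes: ${fieldNotes}\nTags: ${visionTags.join(", ")}` },
    ],
    // Structured output: the model must reply with JSON matching this schema.
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "narrative_block",
        strict: true,
        schema: {
          type: "object",
          properties: {
            title: { type: "string" },
            body: { type: "string" },
            mood: { type: "string", enum: ["calm", "tense", "wistful"] },
          },
          required: ["title", "body", "mood"],
          additionalProperties: false,
        },
      },
    },
  });
  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```

Strict schema mode means a malformed reply fails validation instead of leaking free-form prose into the downstream build.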
Concept
Lens to Language is how I release my photography series. The pipeline ingests RAW files, metadata, and field notes, then collaborates with an LLM to craft essays and ambient soundscapes.
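As a rough picture of what the pipeline works from, one ingested frame might look like the shape below; the field names and example values are invented for this sketch, not taken from the project.

```ts
// Illustrative shape for one ingested frame: RAW file reference, camera
// metadata, and the field notes the LLM later works from. Names and example
// values are assumptions for the sketch, not the project's real schema.
interface IngestedFrame {
  rawPath: string;            // e.g. "drops/2024-fog/DSC_0412.NEF" (invented path)
  capturedAt: string;         // ISO timestamp pulled from EXIF
  exif: { camera: string; lens: string; iso: number; aperture: string };
  fieldNotes: string;         // what I scribbled on location
  visionTags: string[];       // CV-generated labels added later
}
```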
Pipeline
- Curate · A Next.js dashboard where I score frames, pick moods, and jot memory prompts.
- Describe · LangChain flows blend my notes with computer vision tags to draft cohesive stories and captions (prompt sketch after this list)
- Compose · A tiny audio engine stitches loops from my Ableton library based on the story arc (Web Audio sketch below)
- Publish · A build step generates a microsite per drop with scroll-triggered narration, photo sequencing, and downloadable zines (route sketch below)
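For the Describe step, the note-plus-tags blending could look roughly like this with the LangChain JS packages; the prompt wording, temperature, and draftCaption helper are assumptions, and Azure credentials are read from the usual environment variables.

```ts
// Hedged sketch of a LangChain flow that blends field notes with CV tags.
// Prompt text and helper name are invented; Azure OpenAI config comes from
// the AZURE_OPENAI_API_* environment variables.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { AzureChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You write photo-essay captions that stay faithful to the photographer's notes."],
  ["human", "Field notes:\n{notes}\n\nVision tags: {tags}\n\nDraft a two-sentence caption."],
]);

const model = new AzureChatOpenAI({ temperature: 0.7 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

export async function draftCaption(notes: string, tags: string[]) {
  return chain.invoke({ notes, tags: tags.join(", ") });
}
```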
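For the Compose step, a "tiny audio engine" can be as small as the browser sketch below, built on the standard Web Audio API; the segment shape, mood names, and loop URLs are invented for illustration, and the real loops come from an Ableton export.

```ts
// Browser-side sketch: schedule one ambient loop per story segment with the
// Web Audio API. Segment fields and loop URLs are invented for illustration.
type StorySegment = { mood: string; durationSec: number; loopUrl: string };

export async function playSoundscape(segments: StorySegment[]) {
  const ctx = new AudioContext();

  // Decode every loop up front so scheduling isn't delayed by fetch/decoding.
  const buffers = await Promise.all(
    segments.map((segment) =>
      fetch(segment.loopUrl)
        .then((res) => res.arrayBuffer())
        .then((bytes) => ctx.decodeAudioData(bytes)),
    ),
  );

  let startAt = ctx.currentTime;
  segments.forEach((segment, i) => {
    const source = ctx.createBufferSource();
    source.buffer = buffers[i];
    source.loop = true;                      // repeat the loop across the segment
    source.connect(ctx.destination);
    source.start(startAt);                   // begin when the previous segment ends
    source.stop(startAt + segment.durationSec);
    startAt += segment.durationSec;
  });
}
```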
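And for Publish, here's a minimal sketch assuming each drop becomes a statically generated App Router route in Next.js 15; the file path and the getAllDrops/getDrop helpers are hypothetical stand-ins for the project's data layer.

```tsx
// app/drops/[slug]/page.tsx - hypothetical App Router route that renders one
// microsite per drop at build time. getAllDrops/getDrop stand in for whatever
// data layer the project actually uses.
import { getAllDrops, getDrop } from "@/lib/drops";

export async function generateStaticParams() {
  const drops = await getAllDrops();
  return drops.map((drop) => ({ slug: drop.slug }));
}

export default async function DropPage({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params; // params is async in Next.js 15
  const drop = await getDrop(slug);
  return (
    <article>
      <h1>{drop.title}</h1>
      {drop.sections.map((section) => (
        <section key={section.id}>
          {/* Scroll-triggered narration would attach to these blocks on the client */}
          <img src={section.photoUrl} alt={section.alt} />
          <p>{section.narration}</p>
        </section>
      ))}
    </article>
  );
}
```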
Outcome
- Each release feels like an exhibition: narrative text, adaptive audio, printable zines, and social cards render automatically.
- Collaborators get a reproducible toolkit for remixing their own photo essays.