Designer: Val Yang

Founder, lead designer

AI Facial Recognition

Imagine uploading thousands of event photos and then… having to tag every single person by hand. That’s the reality Asset Administrators faced in Acquia DAM. It was tedious, time-consuming, and risky when it came to compliance.

We saw an opportunity to change that. By introducing facial recognition, we could help admins identify people faster, manage consent more easily, and keep assets compliant — all without drowning in manual tagging.

My role

Lead designer in a sprint team of 6

Duration

Q4 2024 – Q2 2025

Team

3 Product Managers, 1 Frontend Engineer, 1 Content Designer, 2 Design Researchers

Constraints

  1. Security and data privacy priorities

  2. Tight timeline

Challenge

How might we take the grunt work out of tagging faces in DAM while keeping the process accurate, trustworthy, and compliant?

Current problem

Customers were uploading hundreds or even thousands of assets with faces. Manually tagging them wasn’t just slow; it was unsustainable. And yet, accurate identification mattered a lot — for contacting employees, managing model rights, and staying compliant with privacy laws.

Solution preview

We built a facial recognition feature that:

  • Detects faces automatically

  • Lets authorized users tag, edit, or delete identities

  • Gives users confidence and control at every step
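The tag, edit, and delete capabilities above can be sketched as a tiny in-memory store with an authorization check. Everything here (class and method names, the permission model) is hypothetical; it only illustrates the "authorized users manage identities" interaction, not the shipped implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FaceTagStore:
    """Hypothetical in-memory store of face identities on a DAM asset."""
    tags: dict = field(default_factory=dict)      # face_id -> person name
    authorized: set = field(default_factory=set)  # users allowed to edit

    def _check(self, user: str) -> None:
        # Only authorized users may change identities (a compliance constraint).
        if user not in self.authorized:
            raise PermissionError(f"{user} may not edit face tags")

    def tag(self, user: str, face_id: str, name: str) -> None:
        self._check(user)
        self.tags[face_id] = name

    def edit(self, user: str, face_id: str, new_name: str) -> None:
        self._check(user)
        if face_id not in self.tags:
            raise KeyError(face_id)
        self.tags[face_id] = new_name

    def delete(self, user: str, face_id: str) -> None:
        self._check(user)
        self.tags.pop(face_id, None)
```

In practice the store would be backed by the DAM's asset metadata, but the same three verbs cover the whole workflow.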

Impact

  • Customers who relied heavily on people-tagging were suddenly saving hours of work. For many, this was the difference between staying or leaving the platform — and it helped Acquia retain them.

Mapping

The DAM is a complex system, so the challenge wasn't "Can we detect faces?" but "How do we introduce this in a way users trust and understand?" I mapped the three touchpoints where facial recognition would appear:

  • Uploader – for tagging on the fly

  • Asset Digest page – for reviewing and managing later

  • Admin settings – for oversight and compliance

This map became our north star for discussions with PMs and engineers.

A diagram showing my work process

Research

We tested our hypotheses with a low-fidelity prototype to understand the usability of, and appetite for, AI tagging features in the three main areas of the DAM.

Competitor analysis

I reviewed Adobe, Google Photos, and Apple Photos to find inspiration for task flows and identify familiar patterns and terminology.

💡 My questions:

  • Do they call them “Faces”? “People”? Something else?

  • What can users actually do once faces are recognized?

  • How do they handle uncertainty (confidence levels)?

Borrowing from familiar patterns meant users wouldn’t need to relearn from scratch.

Talking to users

We interviewed 5 DAM customers to unpack:

  • How much confidence they needed before trusting auto-tagging

  • What decisions mattered during upload vs. post-upload

  • What “trust” looked like in this context

💡 Key research insights:

  • “Skip for now” confused people — where do those assets go?

  • Everyone wanted the option to auto-tag all similar photos after upload.

  • The idea of setting a primary image for a face got mixed reactions.


Design

I sketched four task flows across the three touchpoints, starting with assumptions from PMs and personas. Then, after each user interview, I refined the flows and visuals until they matched what people actually wanted and needed.

On the Digest page, for example, users could:

  • See faces automatically detected

  • Confirm or correct them

  • Apply bulk tagging actions across many assets

This iterative cycle kept us moving fast while staying grounded in feedback.

Pivot/iterations

Early on, I assumed avatars were out of scope. But as engineers explored Amazon Rekognition, they found we could support avatars as primary images.

That single discovery opened the door to a better user experience — people could now visually recognize and manage identities more easily. I reworked flows to include avatars, making tagging not just faster but also more intuitive. This kind of pivot showed the value of staying close to engineering and treating constraints as opportunities.

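Since the engineering exploration centered on Amazon Rekognition, here is a minimal sketch of how a confidence threshold might gate auto-tagging versus manual review. The input shape mirrors Rekognition's `SearchFacesByImage` response (`FaceMatches`, `Similarity`, `ExternalImageId`); the threshold value and helper name are assumptions for illustration, not the shipped logic.

```python
def pick_auto_tag(face_matches: list, min_similarity: float = 90.0):
    """Return the ExternalImageId to auto-tag when the best match clears
    the similarity threshold, or None to fall back to manual review."""
    if not face_matches:
        return None
    # Rekognition reports Similarity as a percentage (0-100).
    best = max(face_matches, key=lambda m: m["Similarity"])
    if best["Similarity"] >= min_similarity:
        return best["Face"]["ExternalImageId"]
    return None
```

Surfacing the fallback (rather than silently tagging low-confidence matches) is what our interviewees meant by "trust": the system commits only when it is sure, and asks otherwise.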

Highlights

Making a facial recognition feature tailored to our DAM users

Upload

Allows users to compare metadata

Admin

Allows users to compare metadata

Reflection

Having a basic understanding of the underlying technology enabled me to quickly select solutions that required a reasonable amount of effort while still fulfilling users' needs.

This project pushed me to think big picture while staying deeply practical.

  • My baseline knowledge of AI tools let me quickly zero in on solutions that were feasible and impactful.

  • Mapping complex ideas like subject and likeness into clear frameworks helped the team move forward together.

  • It expanded my “T-shape” as a designer — deepening my craft, but also sharpening how I balance business goals and technical realities.

In the end, we delivered more than just a feature. We gave customers a way to handle an overwhelming task with confidence and ease — and gave Acquia a way to keep them happy.