We just have three simple requirements here:

When I upload this blurry image, the API quickly tells us that there is in fact one face in the photo, but it’s not in focus.

Using this information we can alert the user and give them the opportunity to change the photo or proceed anyway.

The confirmation step is important because we can’t rely on AI.

AI is not perfect.

We should give the user a way to override the AI’s recommendation.

In Verïfi we do that by notifying the user that there might be a problem with the photo they’ve uploaded, but we still give them the option to submit it anyway.

With an updated UX like this, I’d expect to see a decrease in the number of users uploading poor quality photos and an increase in user retention at this point in the sign up flow.

The code

And here’s the “magic” behind the photo quality-check:

export const request = (
  photoBlob: Blob,
  photoDimensions: { height: number; width: number }
) => {
  const API_KEY = process.env.REACT_APP_MS_API_KEY
    ? process.env.REACT_APP_MS_API_KEY
    : "";
  // The detect endpoint of your Face API resource, e.g.
  // "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"
  // (assumption: supplied via an env var, mirroring the API key above)
  const API_URL = process.env.REACT_APP_MS_API_URL
    ? process.env.REACT_APP_MS_API_URL
    : "";
  const API_PARAMS = {
    returnFaceId: "false",
    returnFaceLandmarks: "false",
    returnRecognitionModel: "false",
    returnFaceAttributes: "accessories,blur,exposure,glasses,noise,occlusion",
    detectionModel: "detection_01",
    recognitionModel: "recognition_02"
  };

  // Assemble the URL and query string params
  const reqParams = Object.keys(API_PARAMS)
    .map(key => `${key}=${API_PARAMS[key as keyof typeof API_PARAMS]}`)
    .join("&");
  const reqURL = `${API_URL}?${reqParams}`;

  // Fetch via POST with required headers; body is the photo itself
  return fetch(reqURL, {
    method: "POST",
    headers: {
      "Content-Type": "application/octet-stream",
      "Ocp-Apim-Subscription-Key": API_KEY
    },
    body: photoBlob
  }).then(response =>
    response.json().then(json => ({ json, photoDimensions }))
  );
};

It’s just an API call.

That’s it!

Told ya it was easy.

Send your photo to the endpoint and you’ll get a response containing a whole bunch of data.

Here’s what we care about:

    "faceAttributes": {
      "occlusion": {
        "foreheadOccluded": false,
        "eyeOccluded": false,
        "mouthOccluded": false
      },
      "accessories": [
        {"type": "headWear", "confidence": 0.99},
        {"type": "glasses", "confidence": 1.0},
        {"type": "mask", "confidence": 0.87}
      ],
      "blur": {
        "blurLevel": "Medium",
        "value": 0.51
      },
      "exposure": {
        "exposureLevel": "GoodExposure",
        "value": 0.55
      },
      "noise": {
        "noiseLevel": "Low",
        "value": 0.12
      }
    }

We use the blur and noise values to determine if the face is in focus or not.

occlusion and accessories tell us if the face is visible.

And the length of the outermost array tells us how many faces are in the photo.
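In code, that face-count check is a one-liner. Here's a sketch (`countFaces` is an illustrative helper, not from the Verïfi source):

```typescript
// The API responds with an array (one entry per detected face),
// or with an error object when the request fails.
export const countFaces = (json: unknown): number =>
  Array.isArray(json) ? json.length : 0;
```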

Once we have this data we just need to define a transform function that converts the data into a format we can use in the app, i.e. boolean values telling us whether the requirements have been met or not.

Here’s the example from Verïfi:

export const transform = (response: { json: any }) => {
  const { json } = response;
  let requirements = {
    score: 0,
    errorMessage: null,
    hasSingleFace: false,
    isInFocus: false,
    isCorrectBrightness: false,
    isVisibleFace: false
  };

  // Capture error returned from API and abort
  if (!Array.isArray(json)) {
    return Object.assign({}, requirements, {
      errorMessage: json.error.message
    });
  }

  // If exactly 1 face is detected, we can evaluate its attributes in detail
  if ((requirements.hasSingleFace = json.length === 1)) {
    const {
      faceAttributes: {
        blur: { blurLevel },
        noise: { noiseLevel },
        exposure: { exposureLevel },
        glasses,
        occlusion,
        accessories
      }
    } = json[0];

    // All conditions must be true to consider a face "visible"
    // Put in array to make the subsequent assignment less verbose
    const visibleChecks = [
      glasses === "NoGlasses",
      Object.values(occlusion).every(v => v === false),
      accessories.length === 0
    ];

    requirements.isInFocus =
      blurLevel.toLowerCase() === "low" && noiseLevel.toLowerCase() === "low";
    requirements.isCorrectBrightness =
      exposureLevel.toLowerCase() === "goodexposure" ||
      exposureLevel.toLowerCase() === "overexposure";
    requirements.isVisibleFace = visibleChecks.every(v => v === true);
  }

  // Use results to compute a "score" between 0 and 1
  // Zero means no requirements are met; 1 means ALL requirements are met (perfect score)
  // We actively ignore `errorMessage` and `score` in the calculation because they're never boolean
  const values = Object.values(requirements);
  requirements.score =
    values.filter(e => e === true).length / (values.length - 2);
  return requirements;
};

The returned requirements are then used to inform the user if their photo is acceptable or not.
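In the UI, those booleans can be mapped straight onto a pass/fail checklist. Here's a minimal sketch (the `checklist` helper and its labels are hypothetical, not from the Verïfi source):

```typescript
// Hypothetical helper: turn the transform's boolean requirements
// into pass/fail lines for display next to the uploaded photo.
type Requirements = {
  score: number;
  errorMessage: string | null;
  hasSingleFace: boolean;
  isInFocus: boolean;
  isCorrectBrightness: boolean;
  isVisibleFace: boolean;
};

const LABELS: Array<[keyof Requirements, string]> = [
  ["hasSingleFace", "Exactly one face"],
  ["isInFocus", "In focus"],
  ["isCorrectBrightness", "Correct brightness"],
  ["isVisibleFace", "Face fully visible"]
];

export const checklist = (req: Requirements): string[] =>
  LABELS.map(([key, label]) => `${req[key] ? "✓" : "✗"} ${label}`);
```

A real app would render icons instead of text, but the mapping is the same.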

Example photo for Verïfi

The boolean values are used to change the icon displayed next to each requirement. This photo fails the "in focus" requirement but passes the others.
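For a photo like that, the score works out as you'd expect from the transform's formula: three of the four boolean requirements are true, so it lands at 0.75. A quick check (the values mirror the transform's `requirements` object):

```typescript
// Three of four boolean requirements pass; score = 3 / 4
const requirements = {
  score: 0,
  errorMessage: null,
  hasSingleFace: true,
  isInFocus: false, // the failing "in focus" requirement
  isCorrectBrightness: true,
  isVisibleFace: true
};
const values = Object.values(requirements);
// Subtract 2 from the length to skip `score` and `errorMessage`
const score = values.filter(e => e === true).length / (values.length - 2);
// score === 0.75
```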

And there we have it.

We’ve added AI to an app and used it to improve UX!

That wasn’t so hard, was it?


As we’ve seen, you don’t need to be an AI expert to take advantage of the benefits it can provide.

Jimi Hendrix didn’t make his own guitar, van Gogh didn’t make his own brushes, and you don’t need to build your own AI models.

Let other companies build them for you.

All the usual suspects are doing it and offer AI APIs.

By leveraging their collective knowledge, you can focus on what you do best: building the frontend and improving your users’ experience.

Remember, if you can use an API you can do AI.

So what opportunities can you find to improve UX with AI?
