It’s the not-so-distant future. Employees have returned to the office, and so have in-person meetings. Seated at a table in a client’s newly remodeled office, you can’t help but notice how sleek and comfortable the acrylic conference chairs are.
Curious to know who makes them, you discreetly snap a pic and upload it to Steelcase.com. In seconds, you get search results listing possible matches.
If this sounds too good to be true, it’s not. Our team has created a visual search function for Steelcase—the largest office furniture manufacturer in the world—that makes this exact scenario possible. Paul, one of our product owners, explains how.
How exactly does the visual search function work?
When someone wants to find products matching those in a photo of an environment, they simply upload the image on mobile or desktop. Visual search then analyzes it, compares it against products indexed from the Catalog API, and returns matching results.
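Under the hood, this kind of comparison typically reduces to nearest-neighbor search over image feature vectors. Here's a minimal TypeScript sketch of the idea, assuming the vision service has already turned each catalog image (and the uploaded photo) into an embedding; the types, sample vectors, and `topMatches` helper are illustrative, not ViSenze's actual API:

```typescript
// Illustrative types: in practice the embeddings come from the
// vision service, and product metadata from the Catalog API.
interface IndexedProduct {
  sku: string;
  name: string;
  embedding: number[]; // feature vector for the product image
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k catalog products most similar to the uploaded image.
function topMatches(
  query: number[],
  catalog: IndexedProduct[],
  k = 3,
): { sku: string; score: number }[] {
  return catalog
    .map((p) => ({ sku: p.sku, score: cosine(query, p.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Real services do this at scale with approximate nearest-neighbor indexes rather than a linear scan, but the ranking principle is the same.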
How does visual search relate to AI/machine learning?
We use a service called ViSenze to catalog and index products along with the metadata tied to their images. Their system analyzes those images and generates product profiles. When an image is uploaded, the system matches it against product and product-category profiles and returns matching products in an array.
Those product and product-category profiles can be configured and curated so the system “learns” to identify specific products more accurately.
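One way to picture that curation: category-level weights that an admin tunes when the system over- or under-ranks certain product types. This is a hypothetical sketch of the concept, not how ViSenze profiles are actually configured:

```typescript
// Hypothetical curated profile: per-category score multipliers
// adjusted when the system mislabels or over-ranks products.
type CategoryBoosts = Record<string, number>;

interface ScoredResult {
  sku: string;
  category: string;
  score: number; // raw visual-similarity score
}

// Re-rank raw results using the curated boosts (default 1.0).
function applyCuration(
  results: ScoredResult[],
  boosts: CategoryBoosts,
): ScoredResult[] {
  return results
    .map((r) => ({ ...r, score: r.score * (boosts[r.category] ?? 1) }))
    .sort((a, b) => b.score - a.score);
}
```

Down-weighting a frequently misidentified category nudges the right products to the top without retraining the underlying model.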
How did we implement it for Steelcase?
ViSenze provides a customizable widget that we trigger from an icon in the search bar. A responsive modal opens, enabling the user to select and upload an image for analysis.
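Before handing a selected file to a widget like this, the client typically validates it. The guard below is a generic sketch; the accepted types and 10 MB cap are assumptions for illustration, not the widget's actual limits (which it enforces itself):

```typescript
// Hypothetical client-side guard run before passing the selected
// file to the visual-search widget.
interface UploadCheck {
  ok: boolean;
  reason?: string;
}

const ACCEPTED = ["image/jpeg", "image/png", "image/webp"];
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB — an assumed limit

function checkUpload(type: string, bytes: number): UploadCheck {
  if (!ACCEPTED.includes(type)) {
    return { ok: false, reason: "unsupported image type" };
  }
  if (bytes > MAX_BYTES) {
    return { ok: false, reason: "image too large" };
  }
  return { ok: true };
}
```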
Where is this technology going?
From what I’ve seen with ViSenze and Cloudinary, image-based AI training is headed toward more granular object and product identification. Think products with specific variants and configurations, like a Leap chair with a headrest, articulated armrests, and a steel frame.
We see the output of this technology in search results, “You May Also Like” product suggestions, and dynamic product-image cropping—i.e., feed the CDN an environment image URL, size constraints, and a declaration to center-weight the crop on “office chair,” all on the fly.
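CDN-side cropping like that is usually expressed as URL transformation parameters. The sketch below builds a URL in the style of Cloudinary's delivery API: `c_fill`, `g_auto`, `w_`, and `h_` are real Cloudinary transformation parameters (object-aware gravity like `g_auto:chair` requires their content-analysis add-on), while the cloud name `demo` and the asset path are placeholders:

```typescript
// Build a Cloudinary-style delivery URL that fill-crops an image to
// the given size, letting the CDN's AI pick the crop region.
// The "demo" cloud name is a placeholder.
function cropUrl(
  publicId: string,
  width: number,
  height: number,
  subject?: string, // e.g. "chair" — object-aware gravity
): string {
  const gravity = subject ? `g_auto:${subject}` : "g_auto";
  const t = `c_fill,${gravity},w_${width},h_${height}`;
  return `https://res.cloudinary.com/demo/image/upload/${t}/${publicId}`;
}
```

The point is that the crop decision lives in the URL: change the declared subject or dimensions and the CDN derives a new rendition without anyone re-editing the source image.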
We’re also seeing this technology applied to video, for automated cropping and resizing and on-the-fly recentering driven by AI declarations.
To see other cool improvements we’ve made to Steelcase.com, check out this case study.