WO2022204098A1 - System and method for providing three-dimensional, visual search - Google Patents

System and method for providing three-dimensional, visual search

Info

Publication number
WO2022204098A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
computing device
data
information
recited
Application number
PCT/US2022/021280
Other languages
French (fr)
Inventor
Fouad Bousetouane
Srikanth Ojasve
Nirav Saraiya
Original Assignee
W.W. Grainger, Inc.
Priority claimed from US 17/698,172 (US20220207585A1)
Application filed by W.W. Grainger, Inc.
Publication of WO2022204098A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects

Definitions

  • the 3D visual search engine and the 2D visual search engine described in the preceding example need not be separate and distinct search engines. Rather, the 3D visual search engine and the 2D visual search engine can share modules, processes, data, etc. as appropriate and/or as needed. In addition, the 3D image processing and the 2D image processing need not be performed at separate times but can be performed as needed using one or more processes that can be executed in parallel at the device 100, at one or more cloud server systems, or a combination thereof also as appropriate and/or as needed.
  • one or more obtained product sizing estimations may also be used to provide product search results.
  • one or more image capturing elements 104, e.g., a LiDAR sensor, one or more cameras, etc.
  • the obtained 3D image data 900 is additionally used to generate estimations of one or more size dimensions associated with the product.
  • the object of interest 902 and the image capturing device 100 may be moved relative to one another to obtain the one or more sizing estimations.
  • the processes described above that are used to detect an object of interest within a scene, to extract the object of interest from the scene, etc. may be repeated as needed, with the information obtained from the various views being cross-referenced to the object of interest for use in the visual search process.
  • the 3D image 900 associated with the object of interest, including any generated sizing estimations for any one or more views of the object of interest, is, as described above, provided to a 3D visual search engine 904 and the 3D visual search engine 904 will use the shape and size data associated with the 3D image 900 to identify within a database of product information 906, e.g., a data store having 3D image data, such as derived from CAD models, for product sold by a vendor that is cross-referenced to product identifying information, such as a vendor stock keeping unit (SKU), product pricing, product availability, product parameters (materials, sizing, etc.), and the like as needed for any particular purpose.
  • product information identified by the 3D visual search engine as being a match (or a close match) for the data associated with the 3D image 900 of the object of interest 902 may then be provided to the user as a search result 1000, as shown in Fig. 10.
  • the methods illustrated in Figs. 12 and 13 may be performed.
  • 3D and 2D image information for a scene is captured.
  • the 2D image information may be used to determine an object of interest within a scene as described above in connection with Figs. 1-6, e.g., via use of a bounding box, with the image information (and location information within the scene) for the determined object of interest then being extracted. In the case where both a 2D and a 3D visual search is performed, the 2D data for the object will be extracted and provided to a 2D search engine as described above.
  • the determined bounding box/identification of the object of interest will also be used to extract, from the captured 3D information, the 3D visual search information that is associated with the object of interest, e.g., a 3D mesh of the object is obtained from a depth map.
  • points on the object can automatically be selected and used to estimate size dimensions for the object, particularly using the object-to-device distance, as is well-known in the art (a minimal sketch of this estimation follows this list).
  • This estimated size information and the obtained shape information may then be provided to the 3D search engine to obtain the search results as described previously.
  • the 3D search results can be processed alone or with any 2D search results to determine which search results to provide to the end user.
  • 3D product information stored in a database of product information 1102, such as CAD model data, can also be utilized to provide AR/VR on a device.
  • a user may be provided with a user interface to select a product of interest within the database 1102 and the CAD model data associated with the identified product of interest could be provided to a display device, such as a pair of AR/VR glasses 1104, a smart device 1106, etc., where the 3D product information can be used to provide a 3D, real-time AR/VR experience with a virtually displayed object 1108 to the user.
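The size estimation referenced in the items above can be pictured with a minimal sketch. It assumes a rectified depth map from the LiDAR sensor, a bounding box from the object detection step, and a known focal length in pixels; the function name, the use of the median depth, and the pinhole-camera relation (real size is roughly pixel extent times depth divided by focal length) are illustrative choices, not details taken from this publication.

```python
import numpy as np

def estimate_dimensions_m(depth_map_m: np.ndarray,
                          box_px: tuple[int, int, int, int],
                          focal_px: float) -> tuple[float, float]:
    """Estimate the width and height of the boxed object from a depth map
    using the pinhole relation: real size ~ pixel extent * depth / focal length.
    The inputs and the use of the median depth are illustrative assumptions."""
    x1, y1, x2, y2 = box_px
    depth = float(np.median(depth_map_m[y1:y2, x1:x2]))  # object-to-device distance
    width_m = (x2 - x1) * depth / focal_px
    height_m = (y2 - y1) * depth / focal_px
    return width_m, height_m
```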

Abstract

A system and method determines that an object within an image frame being captured via use of an imaging system is an object of interest. The determined object of interest is used to extract, from three-dimensional (3D) information obtained via use of a 3D data obtaining component of the imaging system, 3D information for the object of interest. At least a part of the 3D information for the object of interest is caused to be provided to a cloud-based visual search process for the purpose of locating one or more matching products for the object of interest from within a product database, with the located one or more matching products being returned to a customer as a product search result.

Description

SYSTEM AND METHOD FOR PROVIDING THREE-DIMENSIONAL, VISUAL SEARCH
RELATED APPLICATION INFORMATION
[0001] This application claims the benefit of U.S. Provisional Application No. 63/165,389, filed on March 24, 2021.
[0002] This application is also a continuation-in-part of and claims the benefit of U.S. Application No. 17/148,725, filed on January 14, 2021, which application claims the benefit of U.S. Provisional Application No. 63/048,704, filed on July 7, 2020, and U.S. Provisional Application No. 63/076,741, filed on September 10, 2020.
[0003] The disclosure within each of the applications from which priority is claimed is incorporated herein by reference in its entirety.
BACKGROUND
[0004] As described in U.S. Patent No. 9,411,413 and U.S. Publication No. 2021/018316, which publications are incorporated herein by reference in their entirety, Light Detection And Ranging (LiDAR) is a known sensing method usable to measure and extract an exact distance of an object/surface from a device. Generally, the LiDAR process sends pulses of light and calculates the time it takes for the pulses of light to return to the LiDAR source. The calculated time is used to determine the distance of the object from the device.
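As a minimal illustration of the time-of-flight relationship just described, the following sketch converts a measured round-trip time into a distance; the function name and example values are illustrative assumptions, not part of this disclosure.

```python
# Minimal sketch of the LiDAR time-of-flight relationship described above.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to a surface from the round-trip time of a LiDAR pulse.

    The pulse travels to the surface and back, so the one-way distance
    is half of the total path length covered at the speed of light.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# Example: a pulse returning after 10 nanoseconds corresponds to ~1.5 m.
print(round(tof_distance_m(10e-9), 3))  # 1.499
```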
SUMMARY
[0005] The following describes systems and methods for using 3D sensors, particularly LiDAR sensors, and CAD models to provide better product matching during a visual search process.
[0006] A better understanding of the objects, advantages, features, properties, and relationships of the hereinafter described systems/methods will be obtained from the following detailed description and accompanying drawings which set forth illustrative embodiments and which are indicative of the various ways in which the principles of the described systems/methods may be employed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Example systems and methods for providing visual search, including three-dimensional visual search, will be described hereinafter with reference to the attached drawings in which:
[0008] Figure 1 illustrates an example computing device for use in capturing image frame information;
[0009] Figure 2 illustrates an example method for implementing object detection and tracking and visual search using the captured image frame information;
[0010] Figure 3 illustrates components of an example system/method for implementing object detection and tracking and visual search;
[0011] Figure 4 illustrates an example of an object detection and tracking process as it is being performed on the computing device of Fig. 1;
[0012] Figures 5A-5F illustrate an additional example of an object detection and tracking and visual search process as it is being performed on the computing device of Fig. 1;
[0013] Figure 6 illustrates an example of an object detection and tracking and visual search process as it is being performed on the computing device of Fig. 1 to obtain search results for multiple objects within a crowded scene;
[0014] Figure 7 illustrates an example method in which depth information is also used to provide product matching during the visual search process;
[0015] Figure 8 illustrates an example search result resulting from an execution of the method illustrated in Figure 7;
[0016] Figure 9 illustrates an example method in which LiDAR is used to provide sizing information for use during the visual search process;
[0017] Figure 10 illustrates an example search result resulting from an execution of the method illustrated in Figure 9;
[0018] Figure 11 illustrates an example method in which three-dimensional, computer aided drawings are used in providing an augmented reality or virtual reality experience;
[0019] Figure 12 illustrates an example search by image process; and [0020] Figure 13 illustrates the example process of Fig. 12 with further detail.
DETAILED DESCRIPTION
[0021] The following describes a new and innovative visual search product that will utilize three-dimensional (3D) information to augment the visual search process. In a preferred example, the 3D information will include 3D image information, e.g., 3D information that is obtained via use of 3D sensors while imaging an object of interest (i.e., the product to be searched for), and 3D reference information, e.g., 3D product information (such as obtained from CAD drawings) stored in a database. The 3D information can be used to provide product matching and product sizing capabilities to a visual search process.
[0022] In some instances, the visual search process can be performed using a “tap-less” capability. In general, the “tap-less” capability is achieved by combining object detection and tracking techniques with visual search and scene understanding technologies. In the “tap-less” search, object detection is performed on-device in real time on image frames captured via use of a camera (and 3D sensors as appropriate), data from object detection is presented in real time to the customer as visual cues for the prominent object being detected and tracked thus allowing the customer to choose the object of interest within a crowded scene, data from object detection is used for filtering out unnecessary information within the captured frame, and data from object detection is used as the input to the visual search process.
[0023] Object tracking is performed in real time in conjunction with object detection on the image frames captured via use of the camera. Data from object tracking, specifically the ID of the prominent object detected in the viewfinder frame, is used to present the customer with visual cues as to the data acquisition and to intuitively have the user stabilize the camera onto the object of interest.
[0024] Once the object of interest is in-focus, a visual search trigger algorithm will automatically cause product matching to be performed via use of a visual search engine that resides in the cloud. Multi-constrained optimization techniques are preferably used to choose the most-significant tracks in a given timeframe for triggering the cloud-based product matching process. Visual search is preferably performed in the cloud due to its algorithmically complex nature and the size of the products database. Thus, using the data captured during the object detection and tracking phase, the visual search engine will return to the customer one or more product matches for presentation to the customer via use of a computing device.
[0025] Turning now to Fig. 1, Fig. 1 illustrates, in block diagram form, an example computing device 100 usable with the subject app. Preferably, the computing device 100 is in the form of a mobile computing device, e.g., a smartphone, an electronic book reader, or tablet computer. However, it is to be understood that any device capable of receiving and processing input can be used in accordance with the various embodiments discussed herein. Thus, a computing device 100 can include desktop computers, notebook computers, electronic book readers, personal data assistants, cellular phones, video gaming consoles or controllers, television set top boxes, and portable media players, among other devices so long as the device includes or is capable of being coupled to a movable image capturing element.
[0026] For use in connection with the visual search process, the computing device 100 has an associated display and one or more image capture elements 104. As discussed further below, the image capture elements can include one or more 3D sensors, such as a LiDAR scanner which is included as a component part of an “Apple” brand “iPhone” brand cellular phone. The display may be a touch screen, electronic ink (e-ink), organic light emitting diode (OLED), liquid crystal display (LCD), or the like element, operable to display information or image content to one or more customers or viewers of the computing device 100. Each image capture element 104 may be, for example, a camera, a charge-coupled device (CCD), a motion detection sensor, an infrared sensor, a 3D sensor, or other image capturing technology as needed for any particular purpose. As discussed, the computing device 100 can use the image frames (e.g., still or video) captured from the one or more image capturing devices 104 to capture data representative of an object of interest whereupon the captured image information can be analyzed to recognize the object of interest. Image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, the computing device 100 can include the ability to start and/or stop image capture, e.g., stop the visual search process, such as when receiving a command from a user, application, or other device.
[0027] As further shown in Fig. 1, the computing device 100 also includes one or more orientation-determining and/or position-determining elements 106 operable to provide information such as a position, direction, motion, and/or orientation of the computing device 100. These elements can include, for example, accelerometers, inertial sensors, electronic gyroscopes, and/or electronic compasses without limitation. Meanwhile, for communicating data to remotely located processing devices, the computing device 100 preferably includes at least one communication device 108, such as at least one wired or wireless component operable to communicate with one or more electronic devices, such as a cell tower, wireless access point (“WAP”), computer, or the like.
[0028] As yet further illustrated in Fig. 1, these and other components are coupled to a processing unit 112 which will execute instructions, including the instructions associated with a visual search related app, that can be stored in one or more memory devices 114. As will be apparent to one of ordinary skill in the art, the computing device 100 can include many types of memory, data storage, or computer-readable media, such as a first data storage for program instructions for execution by the processing unit(s) 112, the same or separate storage for images or data, a removable memory for sharing information with other devices, etc.
[0029] To provide power to the various components of the computing device 100, the computing device 100 also includes a power system 110. The power system 110 may be a battery operable to be recharged through conventional plug-in approaches, or through other approaches such as capacitive charging through proximity with a power mat or other such device.
[0030] In some embodiments the computing device 100 can include at least one additional input device 116 able to receive conventional input from a user. This input device 116 can include, for example, a push button, touch pad, touch screen, wheel, joystick, keyboard, mouse, keypad, or any other such device or element whereby a user can input a command to the device. These I/O devices could even be connected by a wireless infrared, Bluetooth, or other link in some embodiments. Some devices also can include a microphone or other audio capture element that accepts voice or other audio commands. As will be appreciated, the input device 116 can, among other things, be used to launch the app and to close the app as desired.
[0031] Turning to Fig. 2, in a preferred embodiment of the subject system the object detection 202 and tracking 204 processes are performed on-device, for example as soon as the customer launches a visual search related app on the computing device 100. Once launched, the customer will point the imaging element(s) 104 towards the object of interest and, as the customer trains the camera on the scene that includes the object of interest, object detection and tracking will be performed on every frame presented to the user within the viewfinder that is caused to be displayed in the display 102 of the computing device 100. Object detection and tracking will be performed under real-time constraints, i.e., the process will consider the device processing power and frame processing may be skipped when necessary in order to achieve a real-time fluid experience.
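The real-time constraint described above, under which frame processing may be skipped to keep the viewfinder experience fluid, can be sketched as a simple gating policy; the class name and the per-frame time budget below are assumptions for illustration, not details from this disclosure.

```python
import time

class FrameGate:
    """Illustrative frame-skipping gate: a viewfinder frame is processed only when
    the previous detection/tracking pass has had time to finish, so slower devices
    simply skip more frames. The timing budget is an assumed value."""

    def __init__(self, budget_s: float = 1 / 15):  # assumed ~15 detection passes/second
        self.budget_s = budget_s
        self._next_allowed = 0.0

    def should_process(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if now < self._next_allowed:
            return False          # skip this frame; the device is still busy
        self._next_allowed = now + self.budget_s
        return True

# Usage inside a hypothetical camera callback:
# gate = FrameGate()
# def on_frame(frame):
#     if gate.should_process():
#         boxes = detect_objects(frame)   # assumed on-device detector call
```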
[0032] While object detection may detect multiple objects within the viewfinder’s frames, only the most prominent detected object shall be tracked and visually cued to the customer, allowing the customer to select the object of interest within a crowded scene by simply pointing the camera 104 towards that object and keeping the camera 104 focused on that object for a predetermined period of time 206. To assist the customer during this process, the viewfinder presented in the display 102, an example of which is illustrated in Fig. 4, may provide the customer with an indicia 402, such as a bounding box, that functions to emphasize the current focus of the camera 104, i.e., the current object of interest 404 within the scene, and a progress indicator 406 that indicates to the customer the amount of time the camera 104 has been focused on the object of interest 404 and, accordingly, the amount of time before the search process will be automatically triggered. Thus, as shown in Figs. 5A-5F, once the customer has trained the camera 104 on the object of interest 404 for a sufficient period of time, which is indicated to the customer in this example by the filling of progress indicator 406 in the form of a progress bar, the system will automatically commence the process of matching the object of interest 404. If, however, the customer refocuses the camera 104 onto a different object of interest 404’ prior to the expiry of the measured time, the process will recommence. The viewfinder can further provide an indication to the customer when the search process has been automatically triggered, for example by changing/darkening the view of the scene as presented to the customer as shown in Fig. 5E as compared to Fig. 5D. It will also be appreciated that the example visual progress indicator 406 can also be associated with or alternatively implemented as an audible progress indicator.
[0033] As noted above, data from object detection is preferably presented in real-time to the customer in visual form, for example in the form of a bounding box 402 of the most prominent object 404 detected, overlaid on top of the captured image displayed in the viewfinder. This highlighting 402 of the object of interest 404 to the customer achieves two goals. First and foremost, highlighting 402 the object of interest 404 guides the customer into choosing the object of interest 404 from many objects within the field of view. Additionally, highlighting 402 the object of interest 404 guides the customer into bringing the object of interest 404 into a position of prominence in the field of view thus implicitly improving product matching by improving the captured object data used for product matching. Yet further, the prominent detected object’s bounding box in this example - which defines an area of interest within the captured frame - may be used for filtering out unnecessary information (e.g., busy scenery or adjacent objects within the captured frame) from the captured image frame when performing the visual search process, thus improving product matching. Still further, data from object detection, specifically the prominent detected object bounds within the captured frames, may be used to crop the object image from the captured frame and these object images may be stored for optimally choosing the best data as input to the visual search process.
[0034] Data from object tracking, such as the ID of the prominent object being detected and tracked, is additionally used in connection with the progress indicator 406. For example, while the ID of the prominent object being detected and tracked remains unchanged over consecutive frames, the system may function to fill the progress bar in keeping with the embodiment illustrated in Figs. 5A-5E. If, however, the value of the tracking ID changes then the progress indicator 406 will be reset to indicate to the customer that a new object has gained prominence and the device is now gathering data for that object. The use of object tracking in this manner will intuitively train the user into stabilizing the camera viewfinder onto the object of interest 404. [0035] Once triggered, visual search is preferably performed in the cloud due to its algorithmically complex nature and the size of the products database. The input to the visual search is the data captured during the object detection phase, preferably after being subjected to a multi-constrained optimization technique that functions to choose the most-significant tracks in a given time-frame. In further embodiments the data may simply be an optimally chosen image of the prominently detected object.
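One way to picture the progress-indicator behavior described in paragraph [0034] is a small state holder that fills while the prominent object's tracking ID stays the same across frames and resets when the ID changes; the class name and the dwell time are assumed values, not taken from this disclosure.

```python
class TapLessTrigger:
    """Illustrative progress logic: while the prominent object's tracking ID is
    unchanged across frames the progress fills; a new ID resets it; a full bar
    corresponds to triggering the cloud search. The 2-second dwell is assumed."""

    def __init__(self, dwell_s: float = 2.0):
        self.dwell_s = dwell_s
        self.current_id = None
        self.elapsed = 0.0

    def update(self, track_id: int, dt: float) -> tuple[float, bool]:
        if track_id != self.current_id:
            self.current_id = track_id
            self.elapsed = 0.0            # a new object gained prominence: reset
        else:
            self.elapsed = min(self.elapsed + dt, self.dwell_s)
        progress = self.elapsed / self.dwell_s
        return progress, progress >= 1.0  # (bar fill 0..1, trigger the search?)
```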
[0036] As particularly illustrated in Fig. 5F, the output of the visual search may be product match IDs that can be thereafter translated into product metadata (product name, brand, images, price, availability etc.) and presented to the customer as purchasing options. For this purpose, the cloud-based visual search engine will have access to product data that is to be used during the matching process where that product data is further cross-referenced in one or more associated data repositories to the product metadata. The product metadata may then be provided to the computing device 100 of the customer for display whereupon the customer may interact with the product information to perform otherwise conventional e-commerce related actions, e.g., to place product into a shopping cart or list, to purchase product, etc. [0037] In some instances, it may be desirable to pre-process the image information prior to the image information being provided to the visual search engine. A non-limiting example of a pre-processing technique is a cross-frames brightness correction technique that may be employed to enhance the object detection and tracking outcome. In addition, image stabilization techniques, such as the monitoring of the rotation vector as part of the exposed mobile OS motion sensors APIs, may be used to enhance the quality of the captured data during the object detection and tracking phase.
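The pre-processing ideas mentioned in paragraph [0037] can be sketched as two small helpers: a cross-frame brightness normalization and a motion-based frame gate driven by the rotation data exposed by the mobile OS motion sensor APIs. The specific normalization method and the threshold are assumptions for illustration only.

```python
import numpy as np

def equalize_brightness(frame: np.ndarray, target_mean: float) -> np.ndarray:
    """Scale a grayscale frame so its mean brightness matches a running target,
    a simple stand-in for the cross-frames brightness correction mentioned above."""
    current = float(frame.mean()) or 1.0
    return np.clip(frame * (target_mean / current), 0, 255).astype(np.uint8)

def is_stable(rotation_rate_rad_s: float, threshold: float = 0.35) -> bool:
    """Discard frames captured while the device rotates too quickly, using the
    rotation data exposed by the mobile OS motion sensor APIs.
    The 0.35 rad/s threshold is an assumed value."""
    return abs(rotation_rate_rad_s) < threshold
```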
[0038] Turning now to Fig. 3, an example system/method that combines object detection and tracking techniques with visual search and scene understanding technologies to thereby provide a “tap-less” visual search capability is illustrated. As discussed above, when the subject app is launched, the system uses a real-time object detection component 302 to detect objects within a scene 300 that is being pointed to by a camera 104 of the computing device 100. The object detection component 302 can be implemented using, for example, “GOOGLE’s FIREBASE” brand toolbox. The object detection component will provide output that identifies areas of possible interest in the frame, e.g., defines bounding boxes in the frame. While not required, the frames 300 can be provided to correction component 301 that functions to process the frames to reduce noise prior to the frames being provided to the real-time object detection component 302.
[0039] The output from the real-time object detection component 302 may then be provided to a bounding box/object locating component 304. The bounding box/object locating component 304 is intended to identify, via use of the data that is output by the real-time object detection component 302, the bounding-box with the highest confidence, i.e., identify the location of the object of interest within the frame. The output of the bounding box/object locating component 304, namely, the location within the image of the bounding-box surrounding the product of interest, is provided to the real-time tracking component 306. The real-time tracking component 306, in cooperation with the object location trajectory component 308, tracks the location of the bounding-box within the image to ensure that the camera is remaining focused on the same object through multiple frames/over time. These components may use a Kalman filter that functions to assign an ID to the object/bounding box location to assist in the location tracking procedure.
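A minimal sketch of the bounding box/object locating and tracking steps in paragraph [0039] follows. It selects the highest-confidence detection and uses a simple IoU check as a stand-in for the Kalman-filter-based ID assignment, so the names and the overlap threshold are illustrative assumptions rather than details of components 304-308.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple[float, float, float, float]  # (x1, y1, x2, y2)
    confidence: float

def most_prominent(detections: list[Detection]) -> Detection | None:
    """Select the bounding box with the highest confidence, i.e. the location of
    the object of interest within the frame."""
    return max(detections, key=lambda d: d.confidence, default=None)

def iou(a, b) -> float:
    """Intersection over union between two boxes, used here as a simple proxy for
    the Kalman-filter-based association performed by the tracking components."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def same_track(prev_box, new_box, min_iou: float = 0.5) -> bool:
    """Keep the previous tracking ID when the new prominent box overlaps it enough;
    otherwise a new ID would be assigned and the progress indicator reset."""
    return iou(prev_box, new_box) >= min_iou
```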
[0040] While the above described components are performing object detection and tracking, a time sampler component 310 is used to continuously capture the time a customer spends focusing on one object with the camera 104. In this example, the time sampler component 310 operates in conjunction with a motion detecting component 312 that uses data generated by the orientation/positioning element 106 of the mobile computing device 100 to track the motion of the mobile computing device 100 to determine if the customer is quickly shifting the focus from one object to another within the scene as described immediately below. It will also be appreciated that the output from the time sampler component 310 may be used to update the progress indicator 406 as it is being presented in the viewfinder.
[0041] The data generated by the above components is provided to a multi-constraint optimization algorithm component 314 that functions to determine if visual search should be triggered or if processing should continue. More particularly, the multi-constraint optimization algorithm component 314 uses linear programming techniques to decide if the customer is interested in a given object, e.g., determines if the customer has kept the camera focused on the object for a predetermined amount of time. If the multi-constraint optimization algorithm component 314 determines that the customer is interested in the object in focus, the multi-constraint optimization algorithm component 314 will automatically trigger the visual search. If, however, the data indicates that the customer is not interested in the object in focus, e.g., the customer moves the computing device 100 prior to the expiry of the predetermined amount of time by an amount that changes the bounding box with the highest confidence/the ID of the object being tracked, the multi-constraint optimization algorithm component 314 will indicate to the system that the whole process must be reset 316, e.g., the system should reset the indicia 402, such as a bounding box, that functions to emphasize the current focus of the camera 104, and reset the progress indicator 406 that indicates to the customer the amount of time the camera 104 has been focused on the object of interest 404 within the viewfinder.
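Although the disclosure describes the trigger decision as a multi-constraint optimization using linear programming techniques, the intuition can be sketched as a set of constraints that must all hold before the cloud search fires; the constraint names and values below are assumptions for illustration only, not component 314 itself.

```python
def should_trigger_search(dwell_s: float,
                          id_unchanged: bool,
                          device_motion: float,
                          min_dwell_s: float = 2.0,
                          max_motion: float = 0.35) -> bool:
    """Illustrative stand-in for the multi-constraint trigger decision: every
    constraint must hold for the cloud search to fire; any violation resets the
    process (indicia 402 and progress indicator 406)."""
    return (dwell_s >= min_dwell_s            # camera held on the object long enough
            and id_unchanged                  # same prominent object across frames
            and device_motion <= max_motion)  # customer is not panning to a new object
```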
[0042] When the visual search process is automatically triggered, the image data is provided to the cloud-based, visual search engine 320. As further illustrated in Fig. 3, a region of interest (“ROI”) extraction component 322 will use the coordinates that define that area of interest within the image frame, e.g., the coordinates of the bounding box, to extract from the image frame the object of interest. A normalization component 324 may be used to normalize, at the pixel level, the extracted image information and an encoding component 326 may be used to encode to base64 the normalized, extracted image information prior to the extracted image information being operated on by the image recognition component 326. As will be appreciated by those of skill in the art, the image recognition component 326 uses one or more algorithms to recognize the object of interest, e.g., to locate one or more products within a database of product information 332 that is/are an exact or close match to the object of interest. The one or more products located within the database may then be ranked by a ranking component 330, for example based upon a confidence level, whereupon the located product information will be returned to the computing device 100 for presentation to the customer.
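The cloud-side preparation steps described in paragraph [0042] can be sketched as a small pipeline: crop the ROI using the bounding-box coordinates, normalize at the pixel level, base64-encode the payload, and rank candidate matches by confidence. The function names are illustrative assumptions and the recognition model itself is not shown.

```python
import base64
import numpy as np

def extract_roi(frame: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Crop the object of interest from the frame using the bounding-box coordinates."""
    x1, y1, x2, y2 = box
    return frame[y1:y2, x1:x2]

def normalize_pixels(roi: np.ndarray) -> np.ndarray:
    """Pixel-level normalization to the [0, 1] range before recognition."""
    return roi.astype(np.float32) / 255.0

def encode_b64(roi_bytes: bytes) -> str:
    """Base64-encode the serialized ROI for transport to the recognition step."""
    return base64.b64encode(roi_bytes).decode("ascii")

def rank_matches(matches: list[dict], top_k: int = 5) -> list[dict]:
    """Rank candidate products by recognition confidence before returning them."""
    return sorted(matches, key=lambda m: m["confidence"], reverse=True)[:top_k]
```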
[0043] In view of the foregoing, it will be appreciated that the described systems and methods for providing tap-less, real-time visual search provide, among other things, an improved shopping experience by allowing a customer to find a product’s replacement (usually an exact-match or near-exact-match replacement) where the only user interaction needed is pointing a camera toward an object of interest. Furthermore, as seen in the sample screen images illustrated in Fig. 6, which depict a computing device 100 being used to execute multiple product searches and to display the corresponding search results, the subject system and method has the advantage of seamlessly providing information about plural objects within a crowded scene simply in response to a customer pointing a camera toward each of the objects in turn.
[0044] In a further example, which may or may not utilize the tap-less, search-initiating feature described above, a visual search system will utilize 3D information in connection with the visual search process. As shown in Fig. 7, a first image capturing element 104a, e.g., a camera associated with a cell phone 100, is used to obtain a two-dimensional (2D) image 700 of an object of interest 702, e.g., a cordless power drill. The data associated with the 2D image 700 is, as described above, provided to a 2D visual search engine 704, and the 2D visual search engine 704 will use the data associated with the 2D image 700 to identify matching product information within a database of product information 706, e.g., a data store having 2D image data for products sold by a vendor that is cross-referenced to product identifying information, such as a vendor stock keeping unit (SKU), product pricing, product availability, product parameters (materials, sizing, etc.), and the like as needed for any particular purpose. The product information identified by the 2D visual search engine as being a match (or a close match) for the data associated with the 2D image 700 of the object of interest 702 is then provided to a multi-modal re-ranking module 708.
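The disclosure does not specify how the 2D visual search engine 704 matches the image data against the database 706. The sketch below shows one common approach, a cosine-similarity lookup of a query image embedding against precomputed catalog embeddings keyed by SKU, offered purely as an assumed illustration; the embedding model itself is outside the sketch.

```python
# Illustrative sketch only: nearest-neighbor product lookup over image embeddings.
import numpy as np


def search_2d(query_embedding: np.ndarray,
              catalog_embeddings: np.ndarray,   # shape (num_products, dim)
              skus: list[str],
              top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k (sku, cosine-similarity) pairs for the query image."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = catalog_embeddings / np.linalg.norm(catalog_embeddings, axis=1, keepdims=True)
    scores = c @ q                               # cosine similarity per catalog image
    order = np.argsort(scores)[::-1][:top_k]
    return [(skus[i], float(scores[i])) for i in order]


# Example with random placeholder embeddings and placeholder SKUs:
rng = np.random.default_rng(0)
catalog = rng.normal(size=(100, 128))
matches = search_2d(rng.normal(size=128), catalog, [f"SKU-{i}" for i in range(100)])
```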
[0045] In addition to processing the 2D image information obtained via use of the first image capturing element 104a, 3D image information 710 for the object of interest 702 is also caused to be obtained. The 3D image information 710 is obtained by using a 3D-capable image capturing element 104b, such as a LiDAR sensor, and/or by using one or more 2D-capable image capturing elements to create a stereoscopic image of the object of interest 702. The obtained 3D image information 710 for the object of interest 702 is then provided to a 3D visual search engine 712, and the 3D visual search engine 712 will use the obtained 3D image information 710 to identify matching product information within a database of product information 714, e.g., a data store having 3D image data, such as CAD data, for products sold by a vendor that is cross-referenced to product identifying information, such as a vendor stock keeping unit (SKU), product pricing, product availability, product parameters (materials, sizing, etc.), and the like as needed for any particular purpose. The product information identified by the 3D visual search engine as being a match (or a close match) for the data associated with the 3D image information 710 of the object of interest 702 is then also provided to the multi-modal re-ranking module 708.
[0046] Once the 3D visual search engine results and the 2D visual search engine results are received by the multi-modal re-ranking module 708, the module 708 may determine, for example using a weighted score applied to the 2D visual search results, the 3D visual search results, and/or a combination of the visual search results, which one or more of the visual search results generated by the visual search engine(s) should be provided to a user. The determined “best” search results may then be returned to the user device 100 for display to the user. By way of example only, Fig. 8 shows a search result 800 that is caused to be displayed to a user of a visual search app executing on a device 100 when a cordless drill is made the object of interest. As will be appreciated, the number of products included in the search result 800 can be varied as needed for any particular purpose. To this end, product search results not meeting a predetermined score threshold could be filtered from the search result 800, only the top X scored product search results could be included in the search result 800, or the like, without limitation. As noted previously, the search results 800 can be accompanied by links to conventional e-commerce related functionalities as desired, e.g., links to add a product to a shopping cart, to add a product to a list, to navigate to a product detail page, etc.
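One plausible reading of the multi-modal re-ranking module 708 is a weighted fusion of the per-SKU scores returned by the 2D and 3D engines, followed by the threshold and top-X filtering described above. The weights, the score scale, and the default values in the sketch below are assumptions, not parameters taken from the disclosure.

```python
# Illustrative sketch only: weighted 2D/3D score fusion with threshold and top-X
# filtering. Scores are assumed to lie in [0, 1]; SKU identifiers are placeholders.
def rerank(results_2d: dict[str, float],
           results_3d: dict[str, float],
           w2d: float = 0.4,
           w3d: float = 0.6,
           min_score: float = 0.5,
           top_x: int = 10) -> list[tuple[str, float]]:
    skus = set(results_2d) | set(results_3d)
    fused = {sku: w2d * results_2d.get(sku, 0.0) + w3d * results_3d.get(sku, 0.0)
             for sku in skus}
    kept = [(sku, score) for sku, score in fused.items() if score >= min_score]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)[:top_x]


best = rerank({"SKU-1": 0.9, "SKU-2": 0.7}, {"SKU-1": 0.8, "SKU-3": 0.6})
```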
[0047] It will be understood that the 3D visual search engine and the 2D visual search engine described in the preceding example need not be separate and distinct search engines. Rather, the 3D visual search engine and the 2D visual search engine can share modules, processes, data, etc. as appropriate and/or as needed. In addition, the 3D image processing and the 2D image processing need not be performed at separate times but can be performed as needed using one or more processes that can be executed in parallel at the device 100, at one or more cloud server systems, or a combination thereof also as appropriate and/or as needed.
[0048] In a further search process, generally illustrated in Fig. 9, one or more obtained product sizing estimations may also be used to provide product search results. In this example method, one or more image capturing elements 104, e.g., a LiDAR sensor, one or more cameras, etc., are used to obtain 3D image data 900 for an object of interest 902, e.g., a screw, and the obtained 3D image data 900 is additionally used to generate estimations of one or more size dimensions associated with the product. As needed, the object of interest 902 and the image capturing device 100 may be moved relative to one another to obtain the one or more size estimations. In the event the object of interest 902 and the image capturing device 100 are moved relative to one another to obtain size estimations for multiple views of the object, the processes described above that are used to detect an object of interest within a scene, to extract the object of interest from the scene, etc. may be repeated as needed, with the information obtained from the various views being cross-referenced to the object of interest for use in the visual search process. Once obtained, the 3D image 900 associated with the object of interest, including any generated sizing estimations for any one or more views of the object of interest, is, as described above, provided to a 3D visual search engine 904, and the 3D visual search engine 904 will use the shape and size data associated with the 3D image 900 to identify matching product information within a database of product information 906, e.g., a data store having 3D image data, such as data derived from CAD models, for products sold by a vendor that is cross-referenced to product identifying information, such as a vendor stock keeping unit (SKU), product pricing, product availability, product parameters (materials, sizing, etc.), and the like as needed for any particular purpose. The product information identified by the 3D visual search engine as being a match (or a close match) for the data associated with the 3D image 900 of the object of interest 902 may then be provided to the user as a search result 1000, as shown in Fig. 10.
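Where sizing estimations accompany the 3D shape data, one assumed way the 3D visual search engine 904 could use them is as a dimensional filter over candidate products, so that items with a similar shape but a different physical size, e.g., screws of different lengths, are separated. The catalog layout ("sku", "dims_mm") and the plus-or-minus ten percent tolerance in the sketch below are illustrative assumptions.

```python
# Illustrative sketch only: dimensional filtering of candidate products. Each
# candidate is assumed to carry cataloged dimensions in millimetres; dimensions
# are sorted so the comparison does not depend on orientation.
def filter_by_size(candidates: list[dict],
                   est_dims_mm: tuple[float, float, float],
                   tolerance: float = 0.10) -> list[dict]:
    kept = []
    for product in candidates:                    # e.g. {"sku": ..., "dims_mm": (L, W, H)}
        dims = sorted(product["dims_mm"])
        target = sorted(est_dims_mm)
        if all(abs(d - e) <= tolerance * max(e, 1e-6) for d, e in zip(dims, target)):
            kept.append(product)
    return kept


# Example with placeholder catalog entries:
catalog = [{"sku": "SKU-A", "dims_mm": (50.0, 6.0, 6.0)},
           {"sku": "SKU-B", "dims_mm": (80.0, 6.0, 6.0)}]
matches = filter_by_size(catalog, est_dims_mm=(49.0, 6.1, 5.9))
```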
[0049] More particularly, to obtain search results with the use of 3D information, including sizing information, the methods illustrated in Figs. 12 and 13 may be performed. Using the 2D and 3D imaging capabilities of the device 100, 3D and 2D image information for a scene is captured. The 2D image information may be used to determine an object of interest within a scene, as described above in connection with Figs. 1-6, e.g., via use of a bounding box, with the image information (and location information within the scene) for the determined object of interest then being extracted. In the case where both a 2D and a 3D visual search is performed, the 2D data for the object will be extracted and provided to a 2D search engine as described above. The determined bounding box/identification of the object of interest will also be used to extract, from the 3D visual search information, the 3D visual search information that is associated with the object of interest, e.g., a 3D mesh of the object is obtained from a depth map. Using the extracted 3D mesh of the object, points on the object can be automatically selected and used to estimate size dimensions for the object, particularly using the object-to-device distance as is well-known in the art. This estimated size information and the obtained shape information may then be provided to the 3D search engine to obtain the search results as described previously. As needed, the 3D search results can be processed alone or with any 2D search results to determine which search results to provide to the end user.
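The size-estimation step can be illustrated with a pinhole-camera calculation: given the bounding box of the object, the median depth inside the box (the object-to-device distance), and the camera focal lengths, the pixel extent of the box converts to physical dimensions. The intrinsic values, the millimetre depth units, and the median-depth choice below are assumptions; the disclosure derives comparable estimates from points selected on the extracted 3D mesh.

```python
# Illustrative sketch only: pinhole-model size estimation from a depth map and a
# bounding box. Focal lengths fx, fy are in pixels; depth values are in millimetres.
import numpy as np


def estimate_size_mm(depth_map_mm: np.ndarray,
                     box: tuple[int, int, int, int],
                     fx: float, fy: float) -> tuple[float, float]:
    """Estimate (width, height) in millimetres for the boxed object."""
    x1, y1, x2, y2 = box
    z = float(np.median(depth_map_mm[y1:y2, x1:x2]))  # object-to-device distance
    width_mm = (x2 - x1) * z / fx                     # pixel extent -> millimetres
    height_mm = (y2 - y1) * z / fy
    return width_mm, height_mm


# Example with an assumed flat depth map and assumed focal lengths:
depth = np.full((480, 640), 300.0)                    # scene 300 mm from the camera
w_mm, h_mm = estimate_size_mm(depth, (120, 80, 360, 320), fx=1500.0, fy=1500.0)
```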
[0050] As illustrated in Fig. 11, 3D product information stored in a database of product information 1102, such as CAD model data, can also be utilized to provide an AR/VR experience on a device. For example, a user may be provided with a user interface to select a product of interest within the database 1102, and the CAD model data associated with the identified product of interest could be provided to a display device, such as a pair of AR/VR glasses 1104, a smart device 1106, etc., where the 3D product information can be used to provide a 3D, real-time AR/VR experience with a virtually displayed object 1108 to the user.
[0051] While various concepts have been described in detail, it will be appreciated by those skilled in the art that various modifications and alternatives to those concepts could be developed in light of the overall teachings of the disclosure. Further, while described in the context of functional modules and illustrated using block diagram format, it is to be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or a software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an enabling understanding of the invention. Rather, the actual implementation of such modules would be well within the routine skill of an engineer, given the disclosure herein of the attributes, functionality, and inter-relationship of the various functional modules in the system. Therefore, a person skilled in the art, applying ordinary skill, will be able to practice the invention set forth in the claims without undue experimentation. It will be additionally appreciated that the particular concepts disclosed are meant to be illustrative only and not limiting as to the scope of the invention which is to be given the full breadth of the appended claims and any equivalents thereof.

Claims

What is claimed is:
1. A non-transitory, computer-readable media having stored thereon instructions, the instructions, when executed by a computing device, cause the computing device to perform steps comprising:
(a) determining that an object within an image frame being captured via use of an imaging element associated with the computing device is an object of interest;
(b) using the determined object of interest to extract from a three-dimensional information obtained via use of a three-dimensional (3D) data obtaining system associated with the computing device a 3D information for the object of interest; and
(c) providing at least a part of the 3D information for the object of interest to a cloud-based visual search process for the purpose of locating one or more matching products from within a product database for the object of interest with the located one or more matching products being returned to a customer as a product search result.
2. The non-transitory, computer-readable media as recited in claim 1, wherein the 3D data obtaining system comprises a LiDAR sensor associated with the computing device.
3. The non-transitory, computer-readable media as recited in claim 1, wherein the 3D data obtaining system comprises one or more cameras and processing for creating a stereoscopic image from image data captured via use of the one or more cameras.
4. The non-transitory, computer-readable media as recited in claim 1, wherein the 3D information for the object of interest comprises shape data and size estimation data for one or more portions of the object of interest.
5. The non-transitory, computer-readable media as recited in claim 1, wherein the instructions further cause a two-dimensional (2D) data associated with the object of interest as extracted from the image frame to be provided to the cloud-based visual search process for additional use in locating one or more matching products from within the product database for the object of interest.
6. A method for providing search results, comprising:
(a) determining that an object within an image frame being captured via use of an imaging element associated with a computing device is an object of interest;
(b) using the determined object of interest to extract from a three-dimensional information obtained via use of a three-dimensional (3D) data obtaining system associated with the computing device a 3D information for the object of interest; and
(c) providing at least a part of the 3D information for the object of interest to a cloud-based visual search process for the purpose of locating one or more matching products from within a product database for the object of interest with the located one or more matching products being returned to a customer as a product search result.
7. The method as recited in claim 6, wherein the 3D data obtaining system comprises a LiDAR sensor associated with the computing device.
8. The method as recited in claim 6, wherein the 3D data obtaining system comprises one or more cameras and processing for creating a stereoscopic image from image data captured via use of the one or more cameras.
9. The method as recited in claim 6, wherein the 3D information for the object of interest comprises shape data and size estimation data for one or more portions of the object of interest.
10. The method as recited in claim 6, further comprising causing a two-dimensional (2D) data associated with the object of interest as extracted from the image frame to be provided to the cloud-based visual search process for additional use in locating one or more matching products from within the product database for the object of interest.
11. A computing device, comprising: a processor; an imaging system coupled to the processor; a display element; and memory including instructions that, when executed by the processor, enable the computing device to:
(a) determine that an object within an image frame being captured via use of the imaging system is an object of interest;
(b) use the determined object of interest to extract from a three-dimensional information obtained via use of a three-dimensional (3D) data obtaining component of the imaging system associated with the computing device a 3D information for the object of interest;
(c) provide at least a part of the 3D information for the object of interest to a cloud-based visual search process for the purpose of locating one or more matching products from within a product database for the object of interest with the located one or more matching products being returned to a customer as a product search result.
12. The computing device as recited in claim 11, wherein the 3D data obtaining component comprises a LiDAR sensor associated with the computing device.
13. The computing device as recited in claim 11, wherein the 3D data obtaining component comprises one or more cameras and processing for creating a stereoscopic image from image data captured via use of the one or more cameras.
14. The computing device as recited in claim 11, wherein the 3D information for the object of interest comprises shape data and size estimation data for one or more portions of the object of interest.
15. The computing device as recited in claim 11, wherein the instructions further cause a two-dimensional (2D) data associated with the object of interest as extracted from the image frame to be provided to the cloud-based visual search process for additional use in locating one or more matching products from within the product database for the object of interest.
PCT/US2022/021280 2021-03-24 2022-03-22 System and method for providing three-dimensional, visual search WO2022204098A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163165389P 2021-03-24 2021-03-24
US63/165,389 2021-03-24
US17/698,172 US20220207585A1 (en) 2020-07-07 2022-03-18 System and method for providing three-dimensional, visual search
US17/698,172 2022-03-18

Publications (1)

Publication Number Publication Date
WO2022204098A1

Family

ID=83397849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/021280 WO2022204098A1 (en) 2021-03-24 2022-03-22 System and method for providing three-dimensional, visual search

Country Status (1)

Country Link
WO (1) WO2022204098A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065851A1 (en) * 2017-08-30 2019-02-28 Avid Ratings System and method for programmatic identification and cross-platform registration of hardware products via visual object recognition
US20190147614A1 (en) * 2017-11-10 2019-05-16 Skidata Ag Classification and identification systems and methods



Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22776436; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22776436; Country of ref document: EP; Kind code of ref document: A1)