US20230169554A1 - System and method for automated electronic catalogue management and electronic image quality assessment


Info

Publication number
US20230169554A1
Authority
US
United States
Prior art keywords
image
features
images
structural similarity
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/102,162
Inventor
Mani Kanteswara GARLAPATI
Souradip CHAKRABORTY
Rajesh Shreedhar Bhat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Walmart Apollo LLC
Original Assignee
Walmart Apollo LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Walmart Apollo LLC
Priority to US 18/102,162
Assigned to WALMART APOLLO, LLC. Assignors: BHAT, Rajesh Shreedhar; CHAKRABORTY, Souradip; GARLAPATI, Mani Kanteswara
Publication of US20230169554A1

Classifications

    • G06Q30/0603 Catalogue ordering (electronic shopping [e-shopping])
    • G06F18/22 Pattern recognition; matching criteria, e.g. proximity measures
    • G06F18/28 Pattern recognition; determining representative reference patterns, e.g. by averaging or distorting; generating dictionaries
    • G06T5/002
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T5/70 Denoising; smoothing
    • G06T5/73 Deblurring; sharpening
    • G06T7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06T7/70 Image analysis; determining position or orientation of objects or cameras
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]
    • G06V10/82 Image or video recognition using neural networks
    • G06V20/00 Scenes; scene-specific elements
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30168 Image quality inspection

Definitions

  • Tasks associated with four phases of the operations are disclosed and described. These tasks/phases can be combined, excluded, or otherwise used as required for any specific embodiment.
  • the image can be compared to statistical markers of previously categorized images. That is, multiple images can be analyzed, their features extracted, and a histogram of gradient features generated from those images, showing predicted features for objects in a known orientation. For example, multiple shirts having a front view can be analyzed, and the system can identify left and right sleeves within the images, as well as a “V” or “swoop” where a shirt neckline appears. By contrast, in a side view the neckline may occupy a significantly smaller proportion of the image.
  • a predetermined set of classifications such as Front, Side, and Back views
  • a histogram of gradient features of the new image can be identified as predictors.
  • histogram of gradient based features are robust features which, for example, give the direction of the color gradient within an image, such that these features differ between different images.
  • These predictors can provide statistical estimates of how similar the new image, or portions of the image, are to known images (or portions of the known images).
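The histogram-of-gradient predictors described above can be illustrated with a minimal sketch. The function below is a simplified, hypothetical stand-in for a full HOG pipeline (no cells, blocks, or block normalization): it bins per-pixel gradient orientations, weighted by gradient magnitude, assuming only NumPy.

```python
import numpy as np

def gradient_orientation_histogram(image, n_bins=9):
    """Build a simplified histogram-of-gradients descriptor.

    Illustrative sketch only: computes per-pixel gradient
    orientations and accumulates them, weighted by gradient
    magnitude, into `n_bins` orientation bins.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Orientation in [0, 180): unsigned gradients, as in classic HOG.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0, 180),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A vertical edge yields horizontal gradients, so the mass lands in
# the first orientation bin.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
descriptor = gradient_orientation_histogram(img)
```

A front-view shirt and a side-view shirt would produce different orientation histograms, which is what makes such features usable as orientation predictors.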
  • the predictors can also be used as inputs into a Convolution Neural Network (CNN) model trained to identify the distinct classifications of images.
  • pre-trained embeddings from a predefined model are extracted, and using the extracted features a machine learning model is formed, which in turn generates the CNN model.
  • the CNN model uses a cross-correlation of the predictors with known features from previous images to identify common aspects between the known features and the current image being analyzed.
  • the CNN model (or portions of the model extracted) can be split into a 3×3 depth-wise convolution and a 1×1 point-wise convolution, which increases accuracy and speed.
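The benefit of the depth-wise split rests on a parameter (and multiply-accumulate) reduction that simple arithmetic can verify. The helper names below are illustrative; the counts follow the standard depth-wise separable convolution factorization.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weights when the layer is split into a k x k depth-wise
    convolution (one filter per input channel) followed by a 1 x 1
    point-wise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer with 128 input and 128 output channels.
standard = conv_params(3, 128, 128)              # 147,456 weights
separable = separable_conv_params(3, 128, 128)   # 17,536 weights
reduction = standard / separable                 # roughly 8x fewer
```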
  • a logistic regression model (trained on similar data as the CNN model) can be combined with the CNN model (and/or other models) as part of the image orientation classification, which can further increase the overall object (and its orientation) recognition.
  • systems configured according to this disclosure can use structural similarity to identify the object, then determine the quality of the object identified.
  • Metrics to determine the similarity can include peak signal-to-noise ratio (PSNR) and the mean squared error (MSE), which can operate directly from the intensity of the image.
  • the system utilizes a structural similarity index to which the object is compared, where the structural similarity index can take into account the impact of changes in luminance, contrast, and structure within the image being considered.
  • the structural similarity index can be a single score which takes into account all of the individual factors (such as luminance, contrast, etc.).
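As a sketch of the metrics named above, the following NumPy functions compute MSE, PSNR, and a single-window structural similarity score that combines the luminance, contrast, and structure terms into one value. Production SSIM implementations average the score over local windows, so treat this global variant as an approximation.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images (intensity-based)."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

def ssim_global(x, y, max_val=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image, combining the
    luminance, contrast, and structure terms into one score."""
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Toy images: a gradient and a brightness-shifted copy.
x = np.linspace(0.0, 1.0, 64).reshape(8, 8)
y = np.clip(x + 0.1, 0.0, 1.0)
score_same = ssim_global(x, x)      # identical images score 1.0
score_shifted = ssim_global(x, y)   # shifted copy scores below 1.0
```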
  • the architecture and data pipeline described make the model independent of a reference image, such that when an image is received, the system extracts the quality embeddings from the architecture, which serve as an input to the ridge regression model to predict the structural similarity score (which in turn indicates the quality of the image).
  • distortion is added to the one or more images, resulting in the original (non-distorted) images and distorted images of the object. If a particular configuration is using reference images, the reference images can be distorted using the same distortion/noise algorithms (such as mean blur, Gaussian blur, bilateral blur, median blur, etc.).
  • Table I illustrates an example of how a sample image can be distorted and the resulting classes.
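The distortion step can be sketched with off-the-shelf filters. This assumes SciPy is available; the bilateral blur mentioned above is omitted because scipy.ndimage has no built-in bilateral filter, and the function name `distort` is invented for illustration.

```python
import numpy as np
from scipy import ndimage

def distort(image):
    """Generate a set of distorted variants of one image.

    Mean, Gaussian, and median blurs mirror three of the noise
    algorithms named in the text.
    """
    return {
        "mean_blur": ndimage.uniform_filter(image, size=3),
        "gaussian_blur": ndimage.gaussian_filter(image, sigma=1.0),
        "median_blur": ndimage.median_filter(image, size=3),
    }

rng = np.random.default_rng(0)
original = rng.random((32, 32))   # stand-in catalogue image
variants = distort(original)      # each blur smooths out the noise
```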
  • the features, in this example, are extracted initially from pre-defined embeddings which have been trained with dense layers in classifying the quality of images, thereby forming quality embeddings.
  • a comparison between the original images and the distorted images (as well as distorted reference images, if available) can then result in features related to quality characteristics of the images being analyzed.
  • a structural similarity score for the distorted images is computed.
  • This structural similarity score can, when using reference images, also be based on the reference images (including distorted reference images).
  • the quality related features can be taken as predictor variables, and the structural similarity score for one or more of the images can be used as response variables.
  • a ridge regression (a technique for analyzing multiple regression data that has collinearity) model can be used with the structural similarity score.
  • A ridge regression model is a regularized regression which, in this example, places additional constraints on the model parameters that are not present in linear regression.
  • L2 regularization is used in ridge regression. The regularization parameter is chosen to suit the problem at hand.
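Ridge regression has a simple closed form, which makes the role of the regularization parameter concrete. The following is a toy sketch with synthetic features and scores, not the disclosed quality embeddings.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y.

    `alpha` is the L2 regularization parameter; alpha=0 recovers
    ordinary least squares."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Toy use: quality-related features as predictors, similarity score
# as the response variable.
rng = np.random.default_rng(0)
X = rng.random((50, 4))                    # 4 features per image
true_w = np.array([0.5, -0.2, 0.8, 0.1])
y = X @ true_w                             # stand-in similarity scores
w_ols = ridge_fit(X, y, alpha=0.0)         # unregularized fit
w_ridge = ridge_fit(X, y, alpha=10.0)      # shrunk coefficients
```

The penalty shrinks the coefficient vector, trading a little bias for stability when the quality features are collinear.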
  • features can be extracted from the image. These features can be used as a test data point for the ridge regression model while a structural similarity score is generated (in parallel or in series) for that image.
  • the system can determine, based on a business unit to which the image(s) will be assigned, cutoffs for the similarity score and/or test data to determine if the image quality is at an acceptable, predetermined quality, or if the images must be revised or otherwise corrected.
  • FIG. 1 illustrates a first example method embodiment.
  • a supplier shares images of items ( 102 ) so that the receiving entity can publish the images for consumers to view on the Internet.
  • the supplier can supply an online marketplace with a generic, “stock” photograph of the item to be sold.
  • the system uses deep learning and computer vision to perform quality-based filtering ( 104 ) on the images received. More specifically, the system uses a processor, specifically configured to perform image processing, to assess the quality of the images received.
  • the processor deploys an algorithm to ensure images sent/received are for the correct description of the item ( 106 ) (i.e., object detection and comparison of the detected object to any descriptions received).
  • the system deploys an algorithm to classify the images according to the different views (such as Front, Side, and Back views), and order the images ( 108 ).
  • the system can provide confidence of the classification of the images, confidence that the description is correct, and/or that the order of the images is correct. If one or more of these indications is low, the system can prompt manual review for the low confidence items ( 112 ).
  • the system can provide the results of the assessments and algorithms to an automated catalogue management ( 110 ).
  • FIG. 2 illustrates an exemplary flowchart of a disclosed process for assessing the quality of an image 202 (or images, depending on a particular configuration). As illustrated, the image 202 is received and two separate processes are performed. On the left is illustrated performing a structural similarity analysis 204 , with the result being a structural similarity score 206 .
  • the features extracted 216 are features common across all the images 202 , 210 , 212 , 214 (both distorted and original); however, in some configurations the features extracted can be identified as associated with only the original image 202 and not found in any of the distorted images 210 , 212 , 214 .
  • the resulting extracted features 216 and the structural similarity score 206 are then applied to a regression model 218 , such as a ridge regression model, which can be used to assess the quality of the image 202 to a standard.
  • FIG. 3 illustrates an exemplary convolutional neural network architecture for image orientation classification.
  • images 302 , 304 , 306 of an object are received from distinct angles.
  • the images are of a shirt as viewed from the back 302 , side 304 , and front 306 .
  • These images are sent to a CNN feature extractor 308 , such as a pre-trained extractor (e.g., a MobileNet or other streamlined architecture which uses depth-wise separable convolutions to build neural networks) or a custom CNN feature extractor.
  • the kernel PCA (Principal Component Analysis) features 312 are extracted along with image embeddings 310 .
  • These outputs 312 , 310 are then input to a respective classifier loss function 314 , which can determine the image orientation based on the loss detected.
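The kernel PCA features 312 of FIG. 3 can be sketched from scratch: build a kernel matrix over the embeddings, double-centre it, and project onto the leading eigenvectors. The RBF kernel choice and the toy embeddings below are assumptions for illustration.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel, implemented directly."""
    # Pairwise squared distances -> RBF kernel matrix.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)
    # Double-centre the kernel matrix.
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Project onto the leading eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(K_centered)  # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx]
    lambdas = np.maximum(eigvals[idx], 0.0)
    return alphas * np.sqrt(lambdas)

rng = np.random.default_rng(0)
embeddings = rng.random((10, 8))   # stand-in CNN image embeddings 310
features = kernel_pca(embeddings, n_components=2)
```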
  • FIG. 4 illustrates a second exemplary method embodiment.
  • the system receives a plurality of images of an item ( 402 ) and identifies, via a processor configured to perform image analysis, and within each image in the plurality of images, the item ( 404 ). In some configurations, this identification process can be further augmented using metadata and/or a database of products or items.
  • the system performs, via the processor, a structural similarity analysis of the item, to yield a structural similarity score ( 406 ) and, for each image in the plurality of images applying, via the processor, a plurality of distortions, such that for each image in the plurality of images a plurality of distorted images are generated ( 408 ).
  • the system identifies, via the processor, within the plurality of distorted images associated with each image in the plurality of images, at least one feature ( 410 ), and applies, via the processor, a regression model to the plurality of images using the at least one feature and the structural similarity score ( 412 ).
  • the method can further include ordering, via the processor, the plurality of images based on applying the regression model to the plurality of images.
  • the method can further include training a convolution neural network using the at least one feature, to yield a trained convolution neural network and using the trained convolution neural network during the applying of the regression model to the plurality of images.
  • the plurality of distortions can include a mean blur, a Gaussian blur, and a bilateral blur.
  • the regression model is a ridge regression.
  • the structural similarity identifies at least luminance, contrast, and structure of the item.
  • the plurality of images include a front image, a side image, and a back image of the item.
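Putting the steps of FIG. 4 together, a toy reference-free quality predictor might look like the following. The feature choices (variance, gradient energy, blur residual) are crude hypothetical stand-ins for the learned quality embeddings, and the target is the simplified single-window structural similarity score; this sketches the data flow, not the disclosed model.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # NumPy >= 1.20

def mean_blur(img, k=3):
    """Mean blur: one of the distortions applied to each image."""
    padded = np.pad(img, k // 2, mode="edge")
    return sliding_window_view(padded, (k, k)).mean(axis=(-1, -2))

def quality_features(img):
    """Crude stand-ins for quality embeddings: image statistics plus
    statistics of a distorted variant."""
    blurred = mean_blur(img)
    gy, gx = np.gradient(img)
    return np.array([img.var(), blurred.var(),
                     np.hypot(gx, gy).mean(),
                     np.abs(img - blurred).mean()])

def ssim_score(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-window structural similarity score."""
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return float(((2 * x.mean() * y.mean() + c1) * (2 * cov + c2)) /
                 ((x.mean() ** 2 + y.mean() ** 2 + c1)
                  * (x.var() + y.var() + c2)))

# Features as predictor variables, structural similarity of each image
# to its distorted variant as the response variable (steps 406-412).
rng = np.random.default_rng(0)
images = [rng.random((24, 24)) * s for s in np.linspace(0.2, 1.0, 12)]
X = np.stack([quality_features(im) for im in images])
y = np.array([ssim_score(im, mean_blur(im)) for im in images])
alpha = 0.1
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
predicted = X @ w   # reference-free quality estimates
```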
  • an exemplary system includes a general-purpose computing device 500 , including a processing unit (CPU or processor) 520 and a system bus 510 that couples various system components including the system memory 530 such as read-only memory (ROM) 540 and random access memory (RAM) 550 to the processor 520 .
  • the system 500 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 520 .
  • the system 500 copies data from the memory 530 and/or the storage device 560 to the cache for quick access by the processor 520 . In this way, the cache provides a performance boost that avoids processor 520 delays while waiting for data.
  • These and other modules can control or be configured to control the processor 520 to perform various actions.
  • the memory 530 may be available for use as well.
  • the memory 530 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a computing device 500 with more than one processor 520 or on a group or cluster of computing devices networked together to provide greater processing capability.
  • the processor 520 can include any general purpose processor and a hardware module or software module, such as module 1 562 , module 2 564 , and module 3 566 stored in storage device 560 , configured to control the processor 520 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
  • the processor 520 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc.
  • a multi-core processor may be symmetric or asymmetric.
  • the system bus 510 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • a basic input/output system (BIOS) stored in ROM 540 or the like may provide the basic routine that helps to transfer information between elements within the computing device 500 , such as during start-up.
  • the computing device 500 further includes storage devices 560 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like.
  • the storage device 560 can include software modules 562 , 564 , 566 for controlling the processor 520 . Other hardware or software modules are contemplated.
  • the storage device 560 is connected to the system bus 510 by a drive interface.
  • the drives and the associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing device 500 .
  • a hardware module that performs a particular function includes the software component stored in a tangible computer- readable storage medium in connection with the necessary hardware components, such as the processor 520 , bus 510 , display 570 , and so forth, to carry out the function.
  • the system can use a processor and computer-readable storage medium to store instructions which, when executed by the processor, cause the processor to perform a method or other specific actions.
  • the basic components and appropriate variations are contemplated depending on the type of device, such as whether the device 500 is a small, handheld computing device, a desktop computer, or a computer server.
  • tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.
  • an input device 590 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth.
  • An output device 570 can also be one or more of a number of output mechanisms known to those of skill in the art.
  • multimodal systems enable a user to provide multiple types of input to communicate with the computing device 500 .
  • the communications interface 580 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.


Abstract

In various examples, a system receives image data characterizing an image of an item. Additionally, the system implements a first set of operations and a second set of operations. In some examples, the first set of operations includes performing a structural similarity analysis of the item, based on the image data, and determining a structural similarity score based on the structural similarity analysis of the item. In other examples, the second set of operations includes generating a plurality of derivative images by applying a plurality of distortions to the image of the item, extracting one or more features based at least on the plurality of derivative images, and determining the quality of the image based at least on the extracted one or more features and the structural similarity score.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. Pat. Application No. 17/493,417, filed Oct. 4, 2021, now U.S. Pat. No. [______], which is a continuation of U.S. Pat. Application No. 16/548,162, filed Aug. 22, 2019, now U.S. Pat. No. 11,164,300, which claims the benefit of priority to U.S. Provisional Pat. Application No. 62/778,962, filed Dec. 13, 2018, and Indian Provisional Application No. 201811031632, filed Aug. 23, 2018, each of which is hereby incorporated by reference in its respective entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to automated electronic catalogue management and electronic image quality assessment, and more specifically to using automated electronic image quality assessment to catalogue and re-order images.
  • BACKGROUND
  • Catalogue management is an important aspect of e-commerce, as it helps website visitors efficiently select items. On every retail website, the items displayed appear in a particular order based on their respective categories. For items which have more than one view, the order of the views is predetermined based on the item classification. However, when receiving images/photographs of an item, there are often several problems with the received information. First, there is a question of identifying the item: do the images (or associated metadata) predefine the object in question? If not, how is the object identified?
  • Second, there is a question of quality: do the images meet the required quality for display on an official website? When such determinations are performed by a human being, they may lack accuracy due to the subjectivity of human visual analysis. Third, using current technology, the images must be manually ordered according to the object’s classification, which again relies on error-prone human completion of the task.
  • TECHNICAL PROBLEM
  • How to train a computer to correctly identify and classify an object when electronic images of the object have distinct orientations and quality.
  • SUMMARY
  • Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
  • An exemplary method performed according to the concepts disclosed herein can include: receiving a plurality of images of an item; identifying, via a processor configured to perform image analysis, and within each image in the plurality of images, the item; performing, via the processor, a structural similarity analysis of the item, to yield a structural similarity score; for each image in the plurality of images applying, via the processor, a plurality of distortions, such that for each image in the plurality of images a plurality of distorted images are generated; identifying, via the processor, within the plurality of distorted images associated with each image in the plurality of images, at least one feature; and applying, via the processor, a regression model to the plurality of images using the at least one feature and the structural similarity score.
  • An exemplary system configured according to the concepts disclosed herein can include: a processor configured to perform image analysis; and a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations including: receiving a plurality of images of an item; identifying, within each image in the plurality of images, the item; performing a structural similarity analysis of the item, to yield a structural similarity score; for each image in the plurality of images applying a plurality of distortions, such that for each image in the plurality of images a plurality of distorted images are generated; identifying within the plurality of distorted images associated with each image in the plurality of images, at least one feature; and applying a regression model to the plurality of images using the at least one feature and the structural similarity score.
  • An exemplary non-transitory computer-readable storage medium configured according to this disclosure can have instructions stored which, when executed by a computing device configured to perform image processing, cause the computing device to perform operations including: receiving a plurality of images of an item; identifying, within each image in the plurality of images, the item; performing a structural similarity analysis of the item, to yield a structural similarity score; for each image in the plurality of images applying a plurality of distortions, such that for each image in the plurality of images a plurality of distorted images are generated; identifying within the plurality of distorted images associated with each image in the plurality of images, at least one feature; and applying a regression model to the plurality of images using the at least one feature and the structural similarity score.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a first example method embodiment;
  • FIG. 2 illustrates an exemplary flowchart of a disclosed process;
  • FIG. 3 illustrates an exemplary convolutional neural network architecture for image orientation classification;
  • FIG. 4 illustrates a second exemplary method embodiment; and
  • FIG. 5 illustrates an example method embodiment.
  • DETAILED DESCRIPTION
  • Various embodiments of the disclosure are described in detail below. While specific implementations are described, it should be understood that this is done for illustration purposes only. Other components and configurations may be used without departing from the spirit and scope of the disclosure. This disclosure is directed to automated electronic catalogue management, electronic image quality assessment, and selecting/ordering electronic images for use in an electronic catalogue based upon the assessments made. More specifically, the solutions disclosed provide a first algorithm (using computer vision and deep learning) which can automatically identify the various complex orientations of a catalogue image and sort the image accordingly, and a second algorithm which can detect the quality of catalogue images (using a structural similarity metric and/or deep learning), such that quality can be predicted for images without a reference image or subject.
  • Current electronic image classification practices often rely on rule-based methods to classify image orientations, which are error-prone and not robust. Likewise, when a histogram of gradients is used in image classification, the accuracy achieved is insufficient for quality classification. In addition, with regard to image quality assessment, metrics such as Signal to Noise Ratio (SNR), mean squared error, etc. are often used. Again, such metrics fail to adequately reflect human-perceived quality.
  • Tasks associated with four phases of the operations are disclosed and described. These tasks/phases can be combined, excluded, or otherwise used as required for any specific embodiment.
  • Phase 1 - Image Orientation Classification
  • To classify an image into one of a predetermined set of classifications (such as Front, Side, and Back views), the image can be compared to statistical markers of previously categorized images. That is, multiple images can be analyzed, their features extracted, and a histogram of gradient features used in those images can be generated showing predictions of features for objects in a known orientation. For example, multiple shirts having a front view can be analyzed, and the system can identify left and right sleeves within the images, as well as a “V” or “swoop” where a shirt neckline appears. By contrast, the angle of a side-view image may result in a significantly smaller proportion of the image being associated with the neckline. By comparing the features of a new image to features of known images, a histogram of gradient features of the new image can be identified as predictors. Histogram of gradient based features are robust features which, for example, give the direction of the color gradient within an image, such that they differ between different images. These predictors can provide statistical estimates of how similar the new image, or portions of the image, are to known images (or portions of the known images).
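The histogram-of-gradients idea above can be sketched in a few lines of NumPy. This is a simplified, whole-image illustration, not the disclosed implementation: real HOG descriptors are computed over local cells with block normalization (e.g. scikit-image's `hog`), and the image here is a synthetic edge rather than a catalogue photograph.

```python
import numpy as np

def hog_descriptor(image, n_bins=9):
    """Simplified histogram of oriented gradients over a whole image.

    Summarizes the directions of intensity gradients into a single
    magnitude-weighted histogram; full HOG works on local cells.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A vertical edge produces horizontal gradients (~0 degrees), so the
# descriptor mass concentrates in the first orientation bin.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
descriptor = hog_descriptor(img)
```

Comparing such descriptors between a new image and previously categorized front/side/back images is what yields the orientation predictors described above.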
  • The predictors can also be used as inputs into a Convolutional Neural Network (CNN) model trained to identify the distinct classifications of images. In one configuration, pre-trained embeddings from a predefined model are extracted, and using the extracted features a machine learning model is formed which in turn generates the CNN model. The CNN model uses a cross-correlation of the predictors with known features from previous images to identify common aspects between the known features and the current image being analyzed. In one example, the CNN model (or portions of the model extracted) can be split into a 3×3 depth-wise convolution and a 1×1 point-wise convolution, which increases accuracy and speed. In some cases, a logistic regression model (trained on similar data as the CNN model) can be combined with the CNN model (and/or other models) as part of the image orientation classification, which can further increase the overall accuracy of object (and orientation) recognition.
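Why splitting a standard convolution into a 3×3 depth-wise and a 1×1 point-wise convolution speeds things up can be seen from a parameter count. The channel sizes below (64 in, 128 out) are illustrative assumptions, not values from the disclosure:

```python
def conv_params(k, c_in, c_out):
    # Weights in a standard k x k convolution (bias terms ignored).
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # k x k depth-wise (one filter per input channel) + 1x1 point-wise.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)        # 73,728 weights
separable = separable_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
reduction = standard / separable          # roughly 8.4x fewer parameters
```

The multiply-accumulate count shrinks by the same factor, which is the efficiency argument behind depth-wise separable architectures such as MobileNet.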
  • Phase 2 - Image Quality Assessment
  • Traditional (subjective) quality assessments of an image by human beings do not yield repeatable accuracy. To counter this, systems configured according to this disclosure can use structural similarity to identify the object, then determine the quality of the object identified. Metrics to determine the similarity can include peak signal-to-noise ratio (PSNR) and the mean squared error (MSE), which can operate directly on the intensity of the image. However, such metrics fail to account for how a human being would perceive the image. To account for human perception, the system utilizes a structural similarity index to which the object is compared, where the structural similarity index can take into account the impact of changes in luminance, contrast, and structure within the image being considered. The structural similarity index can be a single score which takes into account all of the individual factors (such as luminance, contrast, etc.).
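A single-score structural similarity index combining luminance, contrast, and structure can be sketched in NumPy as below. This is a simplified global (single-window) SSIM for illustration; production code typically computes it over sliding local windows (e.g. scikit-image's `structural_similarity`), and the constants follow the common SSIM defaults:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: luminance, contrast, and structure terms
    combined into one score in [-1, 1], with 1 meaning identical."""
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes contrast/structure terms
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
assert abs(ssim_global(img, img) - 1.0) < 1e-6  # identical images score 1
noisy = np.clip(img + 0.2 * rng.standard_normal((32, 32)), 0, 1)
score = ssim_global(img, noisy)                  # degraded copy scores below 1
```

Unlike MSE or PSNR, the score rewards preserved structure (via the covariance term) rather than penalizing raw intensity differences alone.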
  • To further assess the image, a methodology is needed which can operate without reference images, or alternatively with a very small data set. The architecture and data pipeline described make the model independent of a reference image, such that when an image is received, the system extracts the quality embeddings from the architecture, which serve as an input to the ridge regression model to predict the structural similarity score (which in turn indicates the quality of the image). To accomplish this, distortion is added to the one or more images, resulting in the original (non-distorted) images and distorted images of the object. If a particular configuration uses reference images, the reference images can be distorted using the same distortion/noise algorithms (such as mean blur, Gaussian blur, bilateral blur, median blur, etc.). Table 1 illustrates an example of how a sample image can be distorted and the resulting classes. The features, in this example, are extracted initially from pre-defined embeddings which have been trained with dense layers in classifying the quality of images, thereby forming quality embeddings.
  • TABLE 1
    Type of Noise Added Kernels and Parameters No. of Classes
    Reference Image - 1
    Mean Blur (5,5),(25,25),(55,55),(75,75) 4
    Gaussian Blur (5,5),(25,25),(55,55),(95,95) 4
    Bilateral Blur (9,50,50),(9,125,125) 2
    Median Blur 5,27 2
  • A comparison between the original images and the distorted images (as well as distorted reference images, if available) can then result in features related to quality characteristics of the images being analyzed.
  • Once the quality related features have been identified/extracted from the images, a structural similarity score for the distorted images is computed. This structural similarity score can, when using reference images, also be based on the reference images (including distorted reference images). The quality related features can be taken as predictor variables, and the structural similarity score for one or more of the images can be used as response variables. A ridge regression model (a technique for analyzing multiple regression data that exhibit collinearity) can be used with the structural similarity score. A ridge regression model is a regularized regression which, in this example, places additional constraints on the model parameters that are not present in linear regression; L2 regularization is used, with the regularization parameter tuned to suit the problem.
  • In other words, when a new image is detected, features can be extracted from the image. These features can be used as a test data point for the ridge regression model while a structural similarity score is generated (in parallel or in series) for that image. The system can determine, based on a business unit to which the image(s) will be assigned, cutoffs for the similarity score and/or test data to determine if the image quality is at an acceptable, predetermined quality, or if the images must be revised or otherwise corrected.
  • These variations shall be described herein as the various embodiments are set forth. The disclosure now turns to FIG. 1 . FIG. 1 illustrates a first example method embodiment. In this example, a supplier shares images of items (102) with the purpose of the receiving entity publishing the images in such a way that consumers can view the images on the Internet. For example, the supplier can supply an online marketplace with a generic, “stock” photograph of the item to be sold. The system uses deep learning and computer vision to perform quality based filtering (104) on the images received. More specifically, the system uses a processor, specifically configured to perform image processing, to assess the quality of the images received. In addition, the processor deploys an algorithm to ensure images sent/received are for the correct description of the item (106) (i.e., object detection and comparison of the detected object to any descriptions received). In addition, the system deploys an algorithm to classify the images according to the different views (such as Front, Side, and Back views), and order the images (108). The system can provide confidence of the classification of the images, confidence that the description is correct, and/or that the order of the images is correct. If one or more of these indications is low, the system can prompt manual review for the low confidence items (112). Finally, the system can provide the results of the assessments and algorithms to an automated catalogue management (110).
  • FIG. 2 illustrates an exemplary flowchart of a disclosed process for assessing the quality of an image 202 (or images, depending on a particular configuration). As illustrated, the image 202 is received and two separate processes are performed. On the left is illustrated performing a structural similarity analysis 204, with the result being a structural similarity score 206. On the right is illustrated adding noise/distortion 208 to the received image, resulting in multiple images 210, 212, 214 which are derivatives of the original image 202. Using the original image 202 and the derivative images 210, 212, 214, features are extracted 216. In general, the features extracted 216 are features common across all the images 202, 210, 212, 214 (both distorted and original); however, in some configurations the features extracted can be identified as associated with only the original 202 and not found in any of the distorted images 210, 212, 214. The resulting extracted features 216 and the structural similarity score 206 are then applied to a regression model 218, such as a ridge regression model, which can be used to assess the quality of the image 202 against a standard.
  • FIG. 3 illustrates an exemplary convolutional neural network architecture for image orientation classification. In this example, images 302, 304, 306 of an object are received from distinct angles. As illustrated, the images are of a shirt as viewed from the back 302, side 304, and front 306. These images are sent to a CNN feature extractor 308 (e.g., a MobileNet or other streamlined architecture which uses depth-wise separable convolutions to build neural networks), or a custom CNN feature extractor. The kernel PCA (Principal Component Analysis) features 312 are extracted along with image embeddings 310. These outputs 312, 310 are then input to a respective classifier loss function 314, which can determine the image orientation based on the loss detected.
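The PCA-feature branch of this architecture can be sketched as follows. For brevity this uses linear PCA via SVD rather than the kernel PCA named in the figure (scikit-learn's `KernelPCA` would be the drop-in alternative), and the embedding dimensions and component count are hypothetical:

```python
import numpy as np

def pca_features(embeddings, n_components=8):
    """Project embeddings onto their top principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(3)
embeddings = rng.standard_normal((100, 1280))  # e.g. MobileNet-sized embeddings
pca_feats = pca_features(embeddings)
# Classifier input: compact PCA features alongside the raw embeddings,
# mirroring outputs 312 and 310 feeding the loss function 314.
classifier_input = np.concatenate([embeddings, pca_feats], axis=1)
```

Feeding both the raw embeddings and their low-dimensional projection gives the classifier a compact, decorrelated summary on top of the full feature vector.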
  • FIG. 4 illustrates a second exemplary method embodiment. In this example, the system receives a plurality of images of an item (402) and identifies, via a processor configured to perform image analysis, and within each image in the plurality of images, the item (404). In some configurations, this identification process can be further augmented using metadata and/or a database of products or items. The system performs, via the processor, a structural similarity analysis of the item, to yield a structural similarity score (406) and, for each image in the plurality of images, applies, via the processor, a plurality of distortions, such that for each image in the plurality of images a plurality of distorted images are generated (408). The system identifies, via the processor, within the plurality of distorted images associated with each image in the plurality of images, at least one feature (410), and applies, via the processor, a regression model to the plurality of images using the at least one feature and the structural similarity score (412).
  • In some configurations, the method can further include ordering, via the processor, the plurality of images based on applying the regression model to the plurality of images. Likewise, the method can further include training a convolution neural network using the at least one feature, to yield a trained convolution neural network and using the trained convolution neural network during the applying of the regression model to the plurality of images.
  • In some configurations, the plurality of distortions can include a mean blur, a Gaussian blur, and a bilateral blur. In some configurations, the regression model is a ridge regression.
  • In some configurations, the structural similarity identifies at least luminance, contrast, and structure of the item.
  • In some configurations, the plurality of images include a front image, a side image, and a back view of the item.
  • With reference to FIG. 5 , an exemplary system includes a general-purpose computing device 500, including a processing unit (CPU or processor) 520 and a system bus 510 that couples various system components including the system memory 530 such as read-only memory (ROM) 540 and random access memory (RAM) 550 to the processor 520. The system 500 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 520. The system 500 copies data from the memory 530 and/or the storage device 560 to the cache for quick access by the processor 520. In this way, the cache provides a performance boost that avoids processor 520 delays while waiting for data. These and other modules can control or be configured to control the processor 520 to perform various actions. Other system memory 530 may be available for use as well. The memory 530 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a computing device 500 with more than one processor 520 or on a group or cluster of computing devices networked together to provide greater processing capability. The processor 520 can include any general-purpose processor and a hardware module or software module, such as module 1 562, module 2 564, and module 3 566 stored in storage device 560, configured to control the processor 520 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 520 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
  • The system bus 510 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 540 or the like, may provide the basic routine that helps to transfer information between elements within the computing device 500, such as during start-up. The computing device 500 further includes storage devices 560 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 560 can include software modules 562, 564, 566 for controlling the processor 520. Other hardware or software modules are contemplated. The storage device 560 is connected to the system bus 510 by a drive interface. The drives and the associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing device 500. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible computer-readable storage medium in connection with the necessary hardware components, such as the processor 520, bus 510, display 570, and so forth, to carry out the function. In another aspect, the system can use a processor and computer-readable storage medium to store instructions which, when executed by the processor, cause the processor to perform a method or other specific actions. The basic components and appropriate variations are contemplated depending on the type of device, such as whether the device 500 is a small, handheld computing device, a desktop computer, or a computer server.
  • Although the exemplary embodiment described herein employs the hard disk 560, other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 550, and read-only memory (ROM) 540, may also be used in the exemplary operating environment. Tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices, expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.
  • To enable user interaction with the computing device 500, an input device 590 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 570 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 500. The communications interface 580 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
  • The steps outlined herein are exemplary and can be implemented in any combination thereof, including combinations that exclude, add, or modify certain steps.
  • Use of language such as “at least one of X, Y, and Z” or “at least one or more of X, Y, or Z” are intended to convey a single item (just X, or just Y, or just Z) or multiple items (i.e., {X and Y}, {Y and Z}, or {X, Y, and Z}). “At least one of” is not intended to convey a requirement that each possible item must be present.
  • The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. Various modifications and changes may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure.

Claims (20)

What is claimed is:
1. A system comprising:
one or more processors; and
a memory resource storing a set of instructions, that when executed by the one or more processors, causes the one or more processors to:
receive image data characterizing an image of an item;
determine a quality of the image based on one or more features extracted from the image based on a plurality of derivative images and a structural similarity score, wherein the plurality of derivative images are generated by applying at least one distortion to the image of the item; and
determine an orientation of the image based on a detected classifier loss.
2. The system of claim 1, wherein the quality of the image is determined by applying a regression model to the one or more features and the structural similarity score.
3. The system of claim 2, wherein the regression model is a ridge regression model.
4. The system of claim 1, wherein the one or more features are extracted by a convolutional neural network.
5. The system of claim 4, wherein the detected classifier loss is determined by a classifier loss function configured to receive an output of the convolutional neural network.
6. The system of claim 5, wherein the output of the convolutional neural network includes kernel principal component analysis features and image embeddings.
7. The system of claim 1, wherein the at least one distortion is one of a mean blur, a Gaussian blur, or a bilateral blur.
8. The system of claim 1, wherein the structural similarity score is determined based on changes in luminance, contrast, and structure of the image.
9. A computer-implemented method, comprising:
receiving image data characterizing an image of an item;
determining a quality of the image based on one or more features extracted from the image based on a plurality of derivative images and a structural similarity score, wherein the plurality of derivative images are generated by applying at least one distortion to the image of the item; and
determining an orientation of the image based on a detected classifier loss.
10. The computer-implemented method of claim 9, wherein the quality of the image is determined by applying a regression model to the one or more features and the structural similarity score.
11. The computer-implemented method of claim 10, wherein the regression model is a ridge regression model.
12. The computer-implemented method of claim 9, wherein the one or more features are extracted by a convolutional neural network.
13. The computer-implemented method of claim 12, wherein the detected classifier loss is determined by a classifier loss function configured to receive an output of the convolutional neural network.
14. The computer-implemented method of claim 13, wherein the output of the convolutional neural network includes kernel principal component analysis features and image embeddings.
15. The computer-implemented method of claim 9, wherein the at least one distortion is one of a mean blur, a Gaussian blur, or a bilateral blur.
16. The computer-implemented method of claim 9, wherein the structural similarity score is determined based on changes in luminance, contrast, and structure of the image.
17. A non-transitory computer-readable medium storing instructions, that when executed by one or more processors, causes the one or more processors to:
receive image data characterizing an image of an item;
extract one or more features from the image, wherein the one or more features are extracted by a convolutional neural network;
determine a quality of the image based on the one or more features extracted from the image based on a plurality of derivative images and a structural similarity score, wherein the plurality of derivative images are generated by applying at least one distortion to the image of the item; and
determine an orientation of the image based on a detected classifier loss based on an output of the convolutional neural network.
18. The non-transitory computer-readable medium of claim 17, wherein the output of the convolutional neural network includes kernel principal component analysis features and image embeddings.
19. The non-transitory computer-readable medium of claim 17, wherein the quality of the image is determined by applying a regression model to the one or more features and the structural similarity score.
20. The non-transitory computer-readable medium of claim 17, wherein the at least one distortion is one of a mean blur, a Gaussian blur, or a bilateral blur.
US18/102,162 2018-08-23 2023-01-27 System and method for automated electronic catalogue management and electronic image quality assessment Pending US20230169554A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/102,162 US20230169554A1 (en) 2018-08-23 2023-01-27 System and method for automated electronic catalogue management and electronic image quality assessment

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
IN201811031632 2018-08-23
IN201811031632 2018-08-23
US201862778962P 2018-12-13 2018-12-13
US16/548,162 US11164300B2 (en) 2018-08-23 2019-08-22 System and method for automated electronic catalogue management and electronic image quality assessment
US17/493,417 US11599983B2 (en) 2018-08-23 2021-10-04 System and method for automated electronic catalogue management and electronic image quality assessment
US18/102,162 US20230169554A1 (en) 2018-08-23 2023-01-27 System and method for automated electronic catalogue management and electronic image quality assessment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/493,417 Continuation US11599983B2 (en) 2018-08-23 2021-10-04 System and method for automated electronic catalogue management and electronic image quality assessment

Publications (1)

Publication Number Publication Date
US20230169554A1 true US20230169554A1 (en) 2023-06-01

Family

ID=69586145

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/548,162 Active 2039-12-02 US11164300B2 (en) 2018-08-23 2019-08-22 System and method for automated electronic catalogue management and electronic image quality assessment
US17/493,417 Active US11599983B2 (en) 2018-08-23 2021-10-04 System and method for automated electronic catalogue management and electronic image quality assessment
US18/102,162 Pending US20230169554A1 (en) 2018-08-23 2023-01-27 System and method for automated electronic catalogue management and electronic image quality assessment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US16/548,162 Active 2039-12-02 US11164300B2 (en) 2018-08-23 2019-08-22 System and method for automated electronic catalogue management and electronic image quality assessment
US17/493,417 Active US11599983B2 (en) 2018-08-23 2021-10-04 System and method for automated electronic catalogue management and electronic image quality assessment

Country Status (2)

Country Link
US (3) US11164300B2 (en)
WO (1) WO2020041610A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161727A1 (en) * 2015-12-07 2017-06-08 American Express Travel Related Services Company, Inc. System and method for creating and issuing virtual transaction instruments
EP3776377A4 (en) * 2018-05-28 2021-05-12 Samsung Electronics Co., Ltd. Method and system for dnn based imaging
CN111881967A (en) * 2020-07-22 2020-11-03 北京三快在线科技有限公司 Picture classification model training method, device, medium and electronic equipment
US11688049B2 (en) * 2021-04-20 2023-06-27 Walmart Apollo, Llc Systems and methods for image processing
CN115830028B (en) * 2023-02-20 2023-05-23 阿里巴巴达摩院(杭州)科技有限公司 Image evaluation method, device, system and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
US20180114096A1 (en) * 2015-04-30 2018-04-26 The Regents Of The University Of California Machine learning to process monte carlo rendered images

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US7702545B1 (en) 2005-09-08 2010-04-20 Amazon Technologies, Inc. System and method for facilitating exchanges between buyers and sellers
US7979340B2 (en) 2005-09-21 2011-07-12 Overstock.Com, Inc. System, program product, and methods for online image handling
US9558510B2 (en) 2009-02-24 2017-01-31 Ebay Inc. System and method to create listings using image and voice recognition
CA2774957C (en) 2009-10-09 2018-06-05 Edgenet, Inc. Automatic method to generate product attributes based solely on product images
JP2014515587A (en) 2011-06-01 2014-06-30 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Learning image processing pipelines for digital imaging devices
US8737728B2 (en) 2011-09-30 2014-05-27 Ebay Inc. Complementary item recommendations using image feature data
US8639036B1 (en) 2012-07-02 2014-01-28 Amazon Technologies, Inc. Product image information extraction
US9892133B1 (en) 2015-02-13 2018-02-13 Amazon Technologies, Inc. Verifying item attributes using artificial intelligence
US9734567B2 (en) * 2015-06-24 2017-08-15 Samsung Electronics Co., Ltd. Label-free non-reference image quality assessment via deep neural network
US9633282B2 (en) 2015-07-30 2017-04-25 Xerox Corporation Cross-trained convolutional neural networks using multimodal images
US11244349B2 (en) 2015-12-29 2022-02-08 Ebay Inc. Methods and apparatus for detection of spam publication
US11080918B2 (en) * 2016-05-25 2021-08-03 Metail Limited Method and system for predicting garment attributes using deep learning

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US20180114096A1 (en) * 2015-04-30 2018-04-26 The Regents Of The University Of California Machine learning to process monte carlo rendered images

Non-Patent Citations (1)

Title
Paul E. Rybski, Daniel Huber, Daniel Morris, and Regis Hoffman: "Visual Classification of Coarse Vehicle Orientation Using Histogram of Oriented Gradients Features"; June 21-24, 2010; IEEE Intelligent Vehicles Symposium; pp. 921-928. (Year: 2010) *

Also Published As

Publication number Publication date
US20220028049A1 (en) 2022-01-27
US20200065955A1 (en) 2020-02-27
US11599983B2 (en) 2023-03-07
WO2020041610A1 (en) 2020-02-27
US11164300B2 (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US20230169554A1 (en) System and method for automated electronic catalogue management and electronic image quality assessment
US10574883B2 (en) System and method for guiding a user to take a selfie
US10885397B2 (en) Computer-executed method and apparatus for assessing vehicle damage
Kao et al. Visual aesthetic quality assessment with a regression model
US10579860B2 (en) Learning model for salient facial region detection
JP2023018021A (en) Technique for identifying skin color in image in which illumination condition is not controlled
Wang et al. Expression of Concern: Facial feature discovery for ethnicity recognition
Yu et al. Face biometric quality assessment via light CNN
US20240185604A1 (en) System and method for predicting formation in sports
US11017016B2 (en) Clustering product media files
US20210031507A1 (en) Identifying differences between images
US12002085B2 (en) Digital image ordering using object position and aesthetics
WO2021031704A1 (en) Object tracking method and apparatus, computer device, and storage medium
GB2547760A (en) Method of image processing
Yu Emotion monitoring for preschool children based on face recognition and emotion recognition algorithms
US11423262B2 (en) Automatically filtering out objects based on user preferences
KR102440198B1 (en) VIDEO SEARCH METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
CN110969602B (en) Image definition detection method and device
CN108446602B (en) Device and method for detecting human face
Tang et al. Learning Hough regression models via bridge partial least squares for object detection
CN111860223A (en) Attribute recognition system, learning server, and computer-readable recording medium
Kharchevnikova et al. Video-based age and gender recognition in mobile applications
JP7236062B2 (en) LEARNING DEVICE, LEARNING METHOD AND LEARNING PROGRAM
JP7496567B2 (en) Processing system, learning processing system, processing method, and program
Vedantham Adaptive increasing-margin adversarial neural iterative system based on facial expression recognition feature models

Legal Events

Date Code Title Description
AS Assignment

Owner name: WALMART APOLLO, LLC, ARKANSAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GARLAPATI, MANI KANTESWARA;CHAKRABORTY, SOURADIP;BHAT, RAJESH SHREEDHAR;REEL/FRAME:062541/0438

Effective date: 20190107

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED