WO2013075295A1 - Clothing recognition method and system for low-resolution video - Google Patents

Clothing recognition method and system for low-resolution video

Info

Publication number
WO2013075295A1
Authority
WO
WIPO (PCT)
Prior art keywords
clothing
human body
target
foreground image
frame
Prior art date
Application number
PCT/CN2011/082705
Other languages
English (en)
Chinese (zh)
Inventor
李响
李俐
张超
陈晓娟
Original Assignee
浙江晨鹰科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江晨鹰科技有限公司 filed Critical 浙江晨鹰科技有限公司
Priority to PCT/CN2011/082705 priority Critical patent/WO2013075295A1/fr
Publication of WO2013075295A1 publication Critical patent/WO2013075295A1/fr


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Definitions

  • the present invention relates to the field of image information processing technologies, and more particularly to a garment recognition method and system for low resolution video.
  • Prior-art uniform recognition based on face recognition mainly adopts a multi-level detection scheme, roughly divided into four levels: face detection, uniform area detection, accessory detection and collar recognition, filtering out a large amount of irrelevant data at each level to improve detection accuracy and efficiency.
  • the specific identification process is shown in Figure 1.
  • The face data obtained after face detection serves the next level, uniform area detection; the uniform area mask obtained after uniform area detection serves the next level, accessory and collar detection.
  • The present invention provides a clothing recognition method and system for low-resolution video, which overcomes the problem that prior-art methods based on face recognition cannot achieve clothing and identity recognition of persons in low-resolution video.
  • the present invention provides the following technical solutions:
  • A clothing recognition method for low-resolution video, comprising:
  • determining a current time series in a received video stream, extracting a foreground image from the video stream, determining a human body target from the foreground image, and extracting contour information of the human body target;
  • decomposing the contour information of the human body target and extracting, according to preset clothing categories, a clothing feature value corresponding to each block in the contour information; comparing the obtained clothing feature values of each block with preset clothing feature thresholds to identify the clothing category of each block in the current frame;
  • fusing the clothing categories of the blocks in the same or different time series, acquiring the clothing category of the same human target in each frame of different time series in the video stream, and performing a voting decision according to the pre-stored clothing categories to determine the clothing category of the moving target.
  • a garment recognition system for low resolution video comprising:
  • Extracting means configured to determine a current time sequence in the received video stream, and extract a foreground image of the video stream time series, determine a human body target from the foreground image, and extract contour information of the human body target;
  • Decomposing means configured to decompose the contour information of the human body target, and extract a clothing feature value corresponding to each block in the contour information of the human body target according to the preset clothing category;
  • a comparison identifying device configured to compare the obtained clothing feature values of each block with preset clothing feature thresholds and identify the clothing category of each block in the current frame;
  • a merging device configured to fuse the clothing categories of the blocks in the same time series or different time series in the video stream;
  • a determining device configured to perform a voting decision according to the pre-stored clothing categories and determine the clothing category of the human target in the current time series, or, after fusing the clothing categories of the same human target in frames of different time series, determine the clothing category of that human target.
  • The present invention discloses a clothing recognition method and system for low-resolution video based on a spatio-temporal classifier fusion technique. First, the foreground image in the acquired video stream is extracted and the contour information of the moving human body is obtained; then the moving human target is identified according to the extracted contour information; different blocks of the same human target in the same frame image are processed by multi-feature recognition, and the recognition results are voted on; finally, a voting decision over the judgment results for the same human target in multiple frame images of the video stream determines the clothing category of the human target.
  • Performing moving human target recognition according to the background model preprocesses the input to the algorithm, eliminating objects in the video background whose color is similar to the target and reducing interference.
  • Fusing the clothing feature judgments of the same human target across multiple video frames finally determines the clothing category of the moving target, thereby achieving high-efficiency, high-quality, high-accuracy identity and clothing identification.
  • FIG. 1 is a flow chart of a prior-art identification method based on face recognition;
  • FIG. 2 is a flowchart of a clothing recognition method for low-resolution video according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of extracting a foreground image disclosed in an embodiment of the present invention;
  • FIG. 4a-4c are diagrams showing an effect of the process of identifying a human body object according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of performing various feature information extraction according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of performing multi-feature weak classifier fusion according to an embodiment of the present invention
  • FIG. 7 is an effect diagram of finalizing garment recognition processing in low-resolution video according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a clothing recognition system for low resolution video disclosed in an embodiment of the present invention.
  • The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
  • Embodiment 1 discloses a clothing recognition method and system for low-resolution video. Based on spatio-temporal classifier fusion technology, various uniforms, general clothing, camouflage clothing, etc. can be identified and classified, and the final identity information can be recognized in an efficient, high-quality, reliable and accurate manner. The specific process is described in detail by the following examples.
  • Referring to FIG. 2, which is a flow chart of a clothing recognition method for low-resolution video according to the present invention, the method mainly includes the following steps:
  • Step S101 Extract a foreground image in the received video stream.
  • The specific implementation of step S101 is:
  • Step S1011 The video stream is read into a computer or related device capable of analysis; the obtained video stream is decomposed, and a plurality of single-frame video sequences is obtained according to the time series.
  • Step S1012 Acquire a foreground image corresponding to the plurality of single-frame video sequences.
  • The process of acquiring the foreground image corresponding to a single-frame video sequence in step S1012 is: first, background modeling is performed according to the content of the video sequence; secondly, the current single-frame video sequence and the current background frame are determined; the foreground image corresponding to the current single-frame video sequence is obtained from the difference between the current frame and the background frame; finally, the background frame is updated in real time according to the current single-frame video sequence, to ensure the accuracy of the background frame for the next frame.
  • The current background frame is determined by background modeling implemented with a single Gaussian or mixed Gaussian method; the frame-difference principle is then applied to obtain the corresponding foreground image from the difference between the current single-frame video sequence and the background frame.
  • the above background modeling of video can adopt single Gaussian, mixed Gaussian, Kernel-based, Eigen-Background and the like.
  • In this embodiment, the background modeling is performed using the mixed Gaussian method, that is, the background frame is obtained from a mixed Gaussian model defined as:
  • P(x_t) = Σ_{i=1}^{K} ω_{i,t} · η(x_t; μ_{i,t}, Σ_{i,t})
  • where each pixel is modeled by a mixture of K Gaussian components, ω_{i,t}, μ_{i,t} and Σ_{i,t} being the weight, mean and covariance of the i-th component at time t, to facilitate background modeling in the video stream.
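  • As a hedged numerical sketch (not code from the patent), the per-pixel probability under such a K-component mixture can be evaluated as below; the weights, means and standard deviations are illustrative:

```python
import math

def gmm_pixel_probability(x, weights, means, stdevs):
    """Probability of a 1-D pixel value x under a K-component Gaussian
    mixture; weights, means, stdevs are length-K lists, weights sum to 1."""
    total = 0.0
    for w, mu, sigma in zip(weights, means, stdevs):
        coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
        total += w * coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return total

# A pixel value near the dominant background mode gets a high probability,
# while a value far from all modes (likely foreground) gets a low one.
p_bg = gmm_pixel_probability(100.0, [0.7, 0.3], [100.0, 200.0], [5.0, 5.0])
p_fg = gmm_pixel_probability(160.0, [0.7, 0.3], [100.0, 200.0], [5.0, 5.0])
```

Thresholding this probability is one common way to decide whether a pixel belongs to the modeled background.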
  • The foreground image of the moving object is extracted by the above process: the foreground image is obtained by subtracting the background image from the video image, that is, by frame difference, which improves the extraction effect and yields a more accurate foreground of the moving target, i.e. the contour foreground image.
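  • As an illustration of the frame-difference extraction and background update described above, the following sketch uses a simple running-average background in place of the Gaussian models named in the text; the difference threshold and learning rate are assumptions, not the patent's values:

```python
import numpy as np

def extract_foreground(frame, background, diff_thresh=30, alpha=0.05):
    """Frame-difference foreground mask plus a running-average background
    update (a simplified stand-in for mixed-Gaussian modeling)."""
    frame = frame.astype(np.float64)
    mask = np.abs(frame - background) > diff_thresh      # foreground pixels
    # Update the background only where the scene looks static, so moving
    # targets do not bleed into the background frame.
    updated = np.where(mask, background,
                       (1 - alpha) * background + alpha * frame)
    return mask, updated

bg = np.full((4, 4), 50.0)      # flat background frame
frame = bg.copy()
frame[1:3, 1:3] = 200.0         # a bright moving object enters the scene
mask, bg2 = extract_foreground(frame, bg)
```

The mask covers exactly the changed region, and the background frame stays stable for the next iteration.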
  • The effect after performing the above process is shown in Figs. 4a-4c, wherein Fig. 4a is the video image; Fig. 4b is the background image (background frame); Fig. 4c is the foreground image corresponding to the current moving target.
  • Step S102 Determine a current time series in the video stream, determine a moving target from the foreground image, and extract contour information of the moving human target.
  • the human body target generally refers to the moving human body, that is, the human body appearing in the foreground image in the current time series.
  • the contour information acquired during the execution of step S102 is determined based on the contour width and the contour height of the human body target.
  • the process of specifically extracting the contour information of the human body target is: first, extracting the feature of the moving object from the foreground image, analyzing according to the ratio of the width and height of the moving object, and identifying the moving human body; and then analyzing and acquiring the human body Outline information.
  • a more specific description is: Extracting the contour features of the moving object based on the plane geometry knowledge.
  • The distance between the leftmost point and the rightmost point of each moving object's contour is taken as the width of the object; the distance between the uppermost point and the lowermost point is taken as the height of the moving object.
  • The aspect ratio of each moving object is calculated and compared with the threshold of the conventional human shoulder-width-to-height ratio; other moving objects such as vehicles are excluded, the moving target, i.e. the moving human body, is determined, and its contour information is extracted.
  • Setting the threshold of the shoulder-width-to-height ratio can effectively overcome the influence of objects such as trees and buildings on the recognition result, and reduces the interference of non-human moving objects when recognizing the moving human body.
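  • The aspect-ratio screening above can be sketched as follows; the ratio band is an illustrative placeholder for the patent's shoulder-width-to-height threshold:

```python
import numpy as np

def filter_human_targets(contours, ratio_lo=0.2, ratio_hi=0.5):
    """Keep contours whose width/height ratio falls in a plausible
    human band.  Each contour is an (N, 2) array of (x, y) points;
    the bounds are illustrative, not the patent's thresholds."""
    humans = []
    for pts in contours:
        pts = np.asarray(pts)
        width = pts[:, 0].max() - pts[:, 0].min()    # leftmost to rightmost
        height = pts[:, 1].max() - pts[:, 1].min()   # uppermost to lowermost
        if height > 0 and ratio_lo <= width / height <= ratio_hi:
            humans.append(pts)
    return humans

person = [(0, 0), (40, 0), (40, 120), (0, 120)]     # ratio 0.33: kept
vehicle = [(0, 0), (200, 0), (200, 100), (0, 100)]  # ratio 2.0: rejected
kept = filter_human_targets([person, vehicle])
```

A wide object such as a vehicle fails the band check and is excluded before any clothing analysis.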
  • Step S103 Decompose the contour information of the human body target, and extract a clothing feature value corresponding to each block in the contour information according to the preset clothing categories; by comparison with the preset clothing feature thresholds (step S104), the clothing category of each block in the current frame is identified.
  • In step S105, the clothing categories of the blocks are merged, a voting decision is performed according to the pre-stored clothing categories, and the clothing category of the human target in the current time series is determined.
  • Step S1031 Decompose the contour information of the human body, and divide the human body according to the biological characteristics of the human body.
  • Step S1032 Perform eigenvalue training, and perform calculation of the corresponding clothing feature value according to the preset clothing category.
  • Step S1033 Extract a clothing feature value corresponding to each block in the contour information of the human body.
  • Identifying the clothing category of each block in the current frame corresponds to step S104; identifying and determining the clothing category of the human target in the current time series corresponds to step S105; the fusion of the plurality of feature information of each block is likewise directed to step S105.
  • The process can also be specified as follows. First, one of the human targets obtained in step S102 is determined. The contour information of the determined human target is decomposed, that is, the single human body is divided, for example into blocks such as arms, tops and trousers, according to human body characteristics (step S1031). Then the clothing feature values of each block are calculated and extracted; the clothing categories of the different blocks are identified, and the clothing categories of the respective blocks are merged, the fusion being implemented based on the spatio-temporal classifier fusion technique. Finally, a majority voting decision is made, and the clothing category of the human target in the current time series is determined.
  • The above process can be summarized as: for the clothing categories of the same human body target in the same frame image of the video stream, the spatial correlation of the image is used to vote over the multiple recognition results, and majority judgment combines and aggregates them into the clothing recognition result for that human body.
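  • The spatial majority ("large number") vote over block-level results can be sketched minimally; the label names are illustrative:

```python
from collections import Counter

def vote_clothing_category(block_labels):
    """Majority vote over the per-block recognition results of one
    human target in one frame."""
    counts = Counter(block_labels)
    label, _ = counts.most_common(1)[0]
    return label

# Three of four blocks agree, so a single misread block is outvoted.
result = vote_clothing_category(["uniform", "uniform", "casual", "uniform"])
```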
  • No single feature is stable on its own; therefore, a plurality of weak classifiers is fused to form a strong classifier that is stable for a single block image, implementing clothing recognition for the block image.
  • the process of specifically performing multi-feature weak classifier fusion can be seen in FIG. 6.
  • In this embodiment, color and texture features are selected as the classification features for the feature value calculation; the calculation process corresponds to the feature value calculation portion in FIG. 6.
  • Training of the color and texture features for the different clothing categories can be performed offline: hundreds of samples are selected and the clothing color and texture features are calculated separately, that is, feature values such as color and gray-level statistics are computed for each identified block (human body region).
  • For the color feature calculation in each block of the segmented sample, the RGB color space is converted into the HSV color space; the HSV color system is closer to human visual perception.
  • the specific conversion method is as follows:
  • R, G, and B represent the color values of the RGB color space, respectively
  • H, S, and V represent the color values in the HSV color space, respectively.
  • the three channels of H, S, and V are separated for the converted picture.
  • From statistics over the separated channels of the training samples, the value range of the preset clothing color feature threshold can be determined.
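  • A sketch of the per-block color step using the Python standard-library RGB-to-HSV conversion; the patent's exact conversion formulas are not reproduced here, and the threshold ranges below are placeholders:

```python
import colorsys

def block_mean_hsv(rgb_pixels):
    """Convert a block's RGB pixels (0-255 ints) to HSV with the
    standard-library conversion and average each channel."""
    hsv = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
           for r, g, b in rgb_pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))

def matches_color_threshold(mean_hsv, h_range, s_min, v_min):
    """Compare a block's mean HSV against a preset color-feature
    threshold range (illustrative placeholder values)."""
    h, s, v = mean_hsv
    return h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min

block = [(200, 30, 30), (210, 40, 40), (190, 28, 28)]   # saturated red pixels
mean = block_mean_hsv(block)
is_red_uniform = matches_color_threshold(mean, (0.0, 0.1), 0.5, 0.3)
```

Averaging hue is only safe when the block's hues sit on the same side of the hue circle, as in this synthetic example.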
  • the gray level co-occurrence matrix is used for texture feature calculation.
  • the gray level co-occurrence matrix has a total of 15 eigenvalues.
  • In this embodiment, four of them are selected for calculation: the angular second moment, the contrast, the correlation, and the entropy.
  • The angular second moment, also called energy, can be expressed by the formula:
  • ASM = Σ_i Σ_j P(i, j)²   ( 5 )
  • where P(i, j) is the normalized gray-level co-occurrence matrix.
  • The angular second moment is a measure of the uniformity of the gray scale of the image texture, reflecting the uniformity of the image gray distribution and the texture coarseness.
  • the contrast CON is used to measure how the values of the matrix are distributed and how much of the image changes locally, reflecting the sharpness of the image and the depth of the texture.
  • The correlation CORRLN is used to measure the similarity of the spatial gray-level co-occurrence matrix elements in the row or column direction; the magnitude of the correlation value therefore reflects the local grayscale correlation in the image.
  • The angular second moment, contrast, correlation and entropy of different clothing categories are calculated for the sample RGB images converted into grayscale images, and the preset clothing texture feature threshold ranges are finally determined.
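  • A minimal sketch of the gray-level co-occurrence matrix and the four selected features; the neighbor offset, quantization level and sample patches are assumptions for illustration:

```python
import numpy as np

def glcm_features(gray, levels=8):
    """GLCM over the horizontal-neighbor offset, then the four features
    named in the text: angular second moment (energy), contrast,
    correlation, and entropy."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for row in gray:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1.0                     # count co-occurring pairs
    p = glcm / glcm.sum()                         # normalize to probabilities

    i, j = np.indices((levels, levels))
    asm = np.sum(p ** 2)                          # ASM = sum P(i,j)^2
    contrast = np.sum((i - j) ** 2 * p)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log(nz))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    corr = (np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
            if sd_i * sd_j > 0 else 1.0)          # flat patch: fully correlated
    return asm, contrast, corr, entropy

uniform_patch = np.zeros((4, 4), dtype=int)       # flat, smooth texture
noisy_patch = np.array([[0, 7, 0, 7], [7, 0, 7, 0],
                        [0, 7, 0, 7], [7, 0, 7, 0]])  # high-contrast texture
feats_flat = glcm_features(uniform_patch)
feats_tex = glcm_features(noisy_patch)
```

The flat patch scores maximal energy and zero contrast, while the checkered patch shows the opposite, which is what makes these features usable as clothing texture thresholds.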
  • The color and texture features of the human body region in the video image of the recognized human body are calculated block by block according to the above calculation methods; for the selected color and texture features, a plurality of weak classifiers is constructed, and the value of each weak classifier for each image block's clothing is determined by the threshold ranges. Since each classifier has the same feature weight, for N features each feature weight is 1/N.
  • The weak classifiers are cascaded and fused into a strong classifier output that is stable for a single block image; that is, the clothing category of each block is fused from the clothing-category weak classifiers corresponding to that block, and the result of the fusion is output through the strong classifier.
  • Then, clothing identification is performed using majority judgment, and the final recognition result is obtained.
  • By comprehensively considering multiple features such as color and texture, the embodiment of the invention fuses multiple weak classifiers into a strong classifier and, after majority judgment, uses the recognition results of the different blocks of the same human body to optimize the clothing recognition result, ensuring good performance and high reliability.
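  • The equal-weight (1/N) weak-classifier fusion can be sketched as follows; the feature names and threshold ranges are hypothetical, not the patent's trained values:

```python
def weak_vote(value, lo, hi):
    """Weak classifier: 1 if the feature value falls inside the
    category's preset threshold range, else 0."""
    return 1 if lo <= value <= hi else 0

def strong_classify(feature_values, category_thresholds):
    """Fuse N equally weighted weak classifiers into a strong score per
    clothing category; each weak classifier carries weight 1/N."""
    n = len(feature_values)
    scores = {}
    for category, ranges in category_thresholds.items():
        votes = sum(weak_vote(feature_values[name], lo, hi)
                    for name, (lo, hi) in ranges.items())
        scores[category] = votes / n
    return max(scores, key=scores.get), scores

thresholds = {
    "uniform": {"hue": (0.5, 0.7), "asm": (0.4, 1.0)},
    "casual":  {"hue": (0.0, 0.2), "asm": (0.0, 0.4)},
}
features = {"hue": 0.62, "asm": 0.8}   # measured on one block (illustrative)
best, scores = strong_classify(features, thresholds)
```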
  • The preset clothing texture feature threshold and the preset clothing color feature threshold are both part of the preset clothing feature thresholds; that is, in the comparison process, each obtained clothing feature value of a block is compared with the feature threshold of the same type among the preset clothing feature thresholds.
  • Step S106 Return to step S102 to realize tracking of the human body target, that is, recognition of the clothing category of the human target in the next or an adjacent time sequence.
  • Step S107 Acquire a clothing category of the same human target in each frame in the different time series in the video stream, and perform a voting decision according to the pre-stored clothing category to determine a clothing category of the moving target.
  • Step S106 uses a single-moving-target tracking method to track the same moving human target in the next or adjacent time series of the video sequence according to the human body's morphological characteristics (aspect ratio) and the position correlation of the target. Steps S102 to S105 are repeated to obtain the clothing category of the same moving target in the current time series. Finally, step S107 is performed: according to the clothing category recognition results over a plurality of adjacent video sequences, voting is performed, and the majority result is used to smooth out erroneous recognition results, thereby completing the clothing recognition processing in the low-resolution video. See FIG. 7 for the specific recognition effect.
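  • A sketch of single-target tracking by position correlation followed by the temporal majority vote; the association distance, detections and labels are illustrative assumptions:

```python
from collections import Counter

def track_and_vote(frames, max_dist=50.0):
    """Greedily associate per-frame detections (centroid, label) to the
    nearest existing track, then majority-vote each track's per-frame
    clothing labels."""
    tracks = []  # each track: {"centroid": (x, y), "labels": [...]}
    for detections in frames:
        for (x, y), label in detections:
            best, best_d = None, max_dist
            for tr in tracks:
                tx, ty = tr["centroid"]
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tr, d
            if best is None:                       # no nearby track: start one
                best = {"centroid": (x, y), "labels": []}
                tracks.append(best)
            best["centroid"] = (x, y)              # position correlation update
            best["labels"].append(label)
    return [Counter(tr["labels"]).most_common(1)[0][0] for tr in tracks]

frames = [
    [((10, 10), "uniform")],
    [((14, 12), "uniform")],
    [((18, 15), "casual")],    # one misrecognized frame is smoothed out
    [((22, 18), "uniform")],
]
final_labels = track_and_vote(frames)
```

The temporal vote plays the same smoothing role as the spatial block vote, but across frames of one track instead of blocks of one frame.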
  • In the embodiment of the present invention, the adjacent multi-frame recognition results are aggregated using time-series correlation, and the multi-frame clothing recognition results of the same human body are voted on based on the tracking result.
  • The embodiments disclosed in the present invention improve the real-time performance of clothing recognition by reducing the number of detection targets through motion detection, performing feature training offline, and using a preset weight calculation method.
  • The above-disclosed embodiments of the present invention describe in detail a clothing recognition method for low-resolution video. The method of the present invention can be implemented by systems of various forms, and thus the present invention also discloses a clothing recognition system for low-resolution video, which is described in detail below with reference to specific embodiments.
  • A garment recognition system for low-resolution video disclosed in the embodiment of the present invention mainly includes: an extracting device 11, a decomposing device 12, a comparison identifying device 13, a merging device 14, and a determining device 15.
  • the extracting device 11 is configured to determine a current time sequence in the received video stream, and extract a foreground image in the video stream, determine a moving human target from the foreground image, and extract contour information of the human target.
  • the decomposing device 12 is configured to decompose the contour information of the human body target, and extract the clothing feature value corresponding to each segment in the contour information of the human body target according to the preset clothing category.
  • the comparison identifying means 13 is configured to compare the obtained clothing feature values of the respective blocks with the preset clothing feature thresholds, and identify the clothing categories of the respective blocks in the current frame.
  • the merging device 14 is configured to fuse the clothing categories of each of the blocks in the same time series or different time series in the video stream.
  • The determining device 15 is configured to perform a voting decision according to the pre-stored clothing categories and determine the clothing category of the human target in the current time series, or, after fusing the clothing categories of the same human target in frames of different time series, determine the clothing category of that human target.
  • The system further includes:
  • the removing device 16 is configured to perform noise and cavity removal operations on the acquired foreground image.
  • the above-disclosed system of the present invention corresponds to the method disclosed in the above-mentioned first embodiment, and the principle or the process of execution of each part can be referred to the above-disclosed method and its related parts.
  • The method and system disclosed by the present invention are based on a spatio-temporal classifier fusion technique; through motion detection, human body recognition and clothing recognition, the clothing characteristics of the same human target are determined across multiple video frames, and the clothing category and identity of the moving target are finally determined, achieving high-efficiency, high-quality, high-accuracy identity and clothing recognition.

Abstract

The invention relates to a clothing identification method and system for low-resolution video. The method is based on spatio-temporal classifier fusion technology. In particular, the method comprises: extracting a foreground image from a video stream, and extracting contour information about a moving object in the foreground image; identifying a moving target human body according to the extracted contour information; performing multi-point feature identification processing on different blocks of the same target human body in the same frame image of the video stream and voting to decide the identification result; and performing the voting decision according to the decision results for the same target human body in a plurality of frame images of the video stream, finally determining the clothing type of the moving target human body. By means of the method based on spatio-temporal classifier fusion technology presented in the present invention, the clothing characteristics over a plurality of video frames of the same moving target are determined on the basis of motion detection, human body identification and clothing identification, and the clothing type and identity of the target human body are finally determined, thereby achieving the objective of identity and clothing identification with high efficiency, high quality and high precision.
PCT/CN2011/082705 2011-11-23 2011-11-23 Procédé d'identification de vêtements et système pour une vidéo à faible résolution WO2013075295A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/082705 WO2013075295A1 (fr) 2011-11-23 2011-11-23 Procédé d'identification de vêtements et système pour une vidéo à faible résolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/082705 WO2013075295A1 (fr) 2011-11-23 2011-11-23 Procédé d'identification de vêtements et système pour une vidéo à faible résolution

Publications (1)

Publication Number Publication Date
WO2013075295A1 true WO2013075295A1 (fr) 2013-05-30

Family

ID=48468998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/082705 WO2013075295A1 (fr) 2011-11-23 2011-11-23 Procédé d'identification de vêtements et système pour une vidéo à faible résolution

Country Status (1)

Country Link
WO (1) WO2013075295A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886305A (zh) * 2014-04-08 2014-06-25 中国人民解放军国防科学技术大学 面向基层治安、维稳及反恐的特异人脸搜索方法
CN111476336A (zh) * 2019-01-23 2020-07-31 阿里巴巴集团控股有限公司 服装的计件方法、装置及设备
CN112613376A (zh) * 2020-12-17 2021-04-06 深圳集智数字科技有限公司 重识别方法及装置,电子设备
CN112711966A (zh) * 2019-10-24 2021-04-27 阿里巴巴集团控股有限公司 视频文件的处理方法、装置以及电子设备
CN113221928A (zh) * 2020-01-21 2021-08-06 海信集团有限公司 服装分类信息显示装置、方法及存储介质
CN113283369A (zh) * 2021-06-08 2021-08-20 苏州市伏泰信息科技股份有限公司 一种港口码头作业人员安全防护措施监控系统及方法
CN113409076A (zh) * 2021-06-11 2021-09-17 广州天辰信息科技有限公司 一种基于大数据构建用户画像的方法、系统及云平台
CN114040140A (zh) * 2021-11-15 2022-02-11 北京医百科技有限公司 一种视频抠图方法、装置、系统及存储介质
US20220180551A1 (en) * 2020-12-04 2022-06-09 Shopify Inc. System and method for generating recommendations during image capture of a product
US11967105B2 (en) 2023-02-06 2024-04-23 Shopify Inc. System and method for generating recommendations during image capture of a product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007164690A (ja) * 2005-12-16 2007-06-28 Matsushita Electric Ind Co Ltd 画像処理装置及び画像処理方法
US20070239764A1 (en) * 2006-03-31 2007-10-11 Fuji Photo Film Co., Ltd. Method and apparatus for performing constrained spectral clustering of digital image data
JP2007293912A (ja) * 2004-06-09 2007-11-08 Matsushita Electric Ind Co Ltd 画像処理方法および画像処理装置
US20090116698A1 (en) * 2007-11-07 2009-05-07 Palo Alto Research Center Incorporated Intelligent fashion exploration based on clothes recognition
WO2009091259A1 (fr) * 2008-01-18 2009-07-23 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Procédé d'amélioration de la résolution d'un objet mobile dans une séquence d'images numériques
CN101527838A (zh) * 2008-03-04 2009-09-09 华为技术有限公司 对视频对象的反馈式对象检测与跟踪的方法和系统
CN101763634A (zh) * 2009-08-03 2010-06-30 北京智安邦科技有限公司 一种简单的目标分类方法及装置
CN101833653A (zh) * 2010-04-02 2010-09-15 上海交通大学 低分辨率视频中的人物识别方法
CN101882217A (zh) * 2010-02-26 2010-11-10 杭州海康威视软件有限公司 视频图像的目标分类方法及装置
CN102521565A (zh) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 低分辨率视频的服装识别方法及系统

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886305B (zh) * 2014-04-08 2017-01-25 中国人民解放军国防科学技术大学 面向基层治安、维稳及反恐的特异人脸搜索方法
CN103886305A (zh) * 2014-04-08 2014-06-25 中国人民解放军国防科学技术大学 面向基层治安、维稳及反恐的特异人脸搜索方法
CN111476336B (zh) * 2019-01-23 2023-06-20 阿里巴巴集团控股有限公司 服装的计件方法、装置及设备
CN111476336A (zh) * 2019-01-23 2020-07-31 阿里巴巴集团控股有限公司 服装的计件方法、装置及设备
CN112711966B (zh) * 2019-10-24 2024-03-01 阿里巴巴集团控股有限公司 视频文件的处理方法、装置以及电子设备
CN112711966A (zh) * 2019-10-24 2021-04-27 阿里巴巴集团控股有限公司 视频文件的处理方法、装置以及电子设备
CN113221928B (zh) * 2020-01-21 2023-07-18 海信集团有限公司 服装分类信息显示装置、方法及存储介质
CN113221928A (zh) * 2020-01-21 2021-08-06 海信集团有限公司 服装分类信息显示装置、方法及存储介质
US20220180551A1 (en) * 2020-12-04 2022-06-09 Shopify Inc. System and method for generating recommendations during image capture of a product
US11645776B2 (en) * 2020-12-04 2023-05-09 Shopify Inc. System and method for generating recommendations during image capture of a product
CN112613376A (zh) * 2020-12-17 2021-04-06 深圳集智数字科技有限公司 重识别方法及装置,电子设备
CN112613376B (zh) * 2020-12-17 2024-04-02 深圳集智数字科技有限公司 重识别方法及装置,电子设备
CN113283369A (zh) * 2021-06-08 2021-08-20 苏州市伏泰信息科技股份有限公司 一种港口码头作业人员安全防护措施监控系统及方法
CN113409076A (zh) * 2021-06-11 2021-09-17 广州天辰信息科技有限公司 一种基于大数据构建用户画像的方法、系统及云平台
CN114040140A (zh) * 2021-11-15 2022-02-11 北京医百科技有限公司 一种视频抠图方法、装置、系统及存储介质
CN114040140B (zh) * 2021-11-15 2024-04-12 北京医百科技有限公司 一种视频抠图方法、装置、系统及存储介质
US11967105B2 (en) 2023-02-06 2024-04-23 Shopify Inc. System and method for generating recommendations during image capture of a product

Similar Documents

Publication Publication Date Title
WO2013075295A1 (fr) Procédé d'identification de vêtements et système pour une vidéo à faible résolution
CN110837784B (zh) 一种基于人体头部特征的考场偷窥作弊检测系统
US8395676B2 (en) Information processing device and method estimating a posture of a subject in an image
JP4273359B2 (ja) 年齢推定システム及び年齢推定方法
JP5675229B2 (ja) 画像処理装置及び画像処理方法
Avgerinakis et al. Recognition of activities of daily living for smart home environments
Seow et al. Neural network based skin color model for face detection
JP5959093B2 (ja) 人物検索システム
Lu et al. A novel approach for video text detection and recognition based on a corner response feature map and transferred deep convolutional neural network
CN102521565A (zh) 低分辨率视频的服装识别方法及系统
JP2000003452A (ja) デジタル画像における顔面の検出方法、顔面検出装置、画像判定方法、画像判定装置およびコンピュ―タ可読な記録媒体
Zheng et al. Attention-based spatial-temporal multi-scale network for face anti-spoofing
CN111126240B (zh) 一种三通道特征融合人脸识别方法
Wang et al. Head pose estimation with combined 2D SIFT and 3D HOG features
CN110991315A (zh) 一种基于深度学习的安全帽佩戴状态实时检测方法
WO2015131468A1 (fr) Procédé et système permettant d'estimer la présence d'empreintes digitales
WO2018019149A1 (fr) Procédé et appareil de reconnaissance automatique du sexe d'un corps humain
CN106529441B (zh) 基于模糊边界分片的深度动作图人体行为识别方法
CN107392105B (zh) 一种基于反向协同显著区域特征的表情识别方法
CN103577804A (zh) 基于sift流和隐条件随机场的人群异常行为识别方法
Gürel et al. Design of a face recognition system
Hu et al. Fast face detection based on skin color segmentation using single chrominance Cr
CN111639562A (zh) 一种手掌感兴趣区域的智能定位方法
Duan et al. Local feature learning for face recognition under varying poses
Gul et al. A machine learning approach to detect occluded faces in unconstrained crowd scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11876088

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11876088

Country of ref document: EP

Kind code of ref document: A1