WO2020151300A1 - Method and apparatus for gender recognition based on a deep residual network, and associated medium and device - Google Patents

Method and apparatus for gender recognition based on a deep residual network, and associated medium and device

Info

Publication number
WO2020151300A1
Authority
WO
WIPO (PCT)
Prior art keywords
gender
preset number
video frames
target object
weighted
Prior art date
Application number
PCT/CN2019/116236
Other languages
English (en)
Chinese (zh)
Inventor
马潜
李洪燕
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020151300A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • This application relates to the field of intelligent recognition technology. Specifically, this application relates to a gender recognition method, device, computer-readable storage medium, and computer equipment based on a deep residual network.
  • this application provides the following technical solutions: a gender recognition method based on a deep residual network, and corresponding devices, computer-readable storage media, and computer equipment.
  • the embodiments of the present application provide a method for gender recognition based on a deep residual network, including the following steps:
  • the preset number of video frames are respectively input into the pre-trained gender recognition model to obtain the gender prediction values corresponding to the target object in each of the preset number of video frames, wherein the gender recognition model is pre-trained based on the deep residual network;
  • the gender recognition result of the target object is obtained according to the weighted gender prediction value.
  • the embodiments of the present application provide a gender recognition device based on a deep residual network, including:
  • the video frame acquisition module is used to acquire a preset number of video frames of the target object from the video stream based on the pedestrian tracking algorithm;
  • the predictive value acquisition module is used to input a preset number of video frames into a pre-trained gender recognition model to obtain gender predictive values corresponding to the target object in the preset number of video frames; wherein the gender recognition model is pre-trained based on the deep residual network;
  • a weighted calculation module configured to perform a weighted calculation on the gender prediction value to obtain the weighted gender prediction value of the target object;
  • the gender recognition result generation module is used to obtain the gender recognition result of the target object according to the weighted gender prediction value.
  • the embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the above-mentioned gender recognition method based on the deep residual network is implemented.
  • the embodiments of the present application provide a computer device.
  • the computer device includes one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the one or more computer programs are configured to execute the aforementioned method for gender recognition based on the deep residual network.
  • the gender recognition method, device, computer-readable storage medium, and computer equipment based on the deep residual network obtain multiple video frames from the video stream during the dynamic walking of the target object, and input the multiple video frames into the gender recognition model pre-trained based on the deep residual network to realize gender recognition of the target object.
  • Real-time gender recognition of pedestrians can be realized without face recognition.
  • the gender recognition efficiency and accuracy are high, which meets the practical application demands of real-time gender recognition of pedestrians.
  • FIG. 1 is a method flowchart of a gender recognition method based on a deep residual network provided by an embodiment of this application;
  • FIG. 2 is a schematic structural diagram of a gender recognition device based on a deep residual network provided by an embodiment of this application;
  • FIG. 3 is a schematic structural diagram of a computer device provided by an embodiment of this application.
  • the embodiment of the application provides a method for gender recognition based on a deep residual network. As shown in FIG. 1, the method includes:
  • Step S110 Obtain a preset number of video frames of the target object from the video stream based on the pedestrian tracking algorithm.
  • the target object is a person whose gender is to be identified.
  • the target object is first tracked based on the pedestrian tracking algorithm within a preset time period, and the video stream during the dynamic walking of the target object during the preset time period is recorded by a video monitoring tool; then, a preset number of video frames of the target object is extracted from the video stream, wherein the preset number of video frames of the target object may be obtained by extracting key frames from the video stream at a preset period, and the preset period can be any length such as 50 ms, 80 ms, or 1 s.
  • the preset number of acquired video frames is used as input data for inputting a pre-trained gender recognition model.
  • the preset number can be any value such as 5, 9, 15, etc. Those skilled in the art can determine the specific value of the preset number according to actual application requirements, which is not limited in this embodiment.
  • Step S120 Input a preset number of video frames into a pre-trained gender recognition model, respectively, to obtain gender prediction values corresponding to the target object in the preset number of video frames; wherein the gender recognition model is pre-trained based on a deep residual network.
  • the gender recognition model is used to extract the gender characteristics of the target object and calculate the gender prediction value.
  • the obtained preset number of video frames are successively input into the pre-trained gender recognition model, and the gender prediction values of the target object in the respective video frames can be successively obtained.
  • the calculation process by which the gender recognition model estimates the gender prediction value of the target object is specifically: extracting the gender feature vector of the target object from the video frame given as input data, estimating from the gender feature vector the probabilities that the target object is male and female respectively, and realizing gender classification of the target object according to these probabilities.
  • the deep residual network uses the residual structure as the basic structure of the network.
  • This basic structure can be used to solve the problem of performance degradation as the network becomes deeper, and provides strong technical support for improving the accuracy and computational efficiency of gender prediction.
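The residual shortcut structure can be illustrated with a minimal sketch. The following is a NumPy stand-in for a single residual block, not the patent's actual network; the weight matrices, shapes, and ReLU activation are illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = F(x) + x: the block learns a residual F(x) on top of the identity."""
    out = relu(x @ w1)     # first transformation
    out = out @ w2         # second transformation (no activation yet)
    return relu(out + x)   # shortcut connection adds the input back

x = np.ones((1, 4))
w_zero = np.zeros((4, 4))  # if F(x) == 0, the block passes x through unchanged
y = residual_block(x, w_zero, w_zero)
print(np.allclose(y, x))  # → True
```

The shortcut means that even when the learned residual F(x) contributes nothing, the block still propagates its input intact, which is why stacking many such blocks avoids the degradation problem described above.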
  • Step S130 Perform a weighted operation on the gender prediction value to obtain the weighted gender prediction value of the target object.
  • the gender predictive value corresponding to each video frame is weighted according to a preset weighting method, and the weighted gender predictive value of the target object is calculated.
  • the gender prediction values corresponding to the respective video frames are weighted and combined; calculating the weighted gender prediction value yields a more accurate prediction than identifying gender from a single static image, thereby obtaining a more accurate gender recognition result.
  • Step S140 Obtain a gender recognition result of the target object according to the weighted gender prediction value.
  • according to the weighted gender prediction value, it is determined whether the weighted gender prediction value is greater than a preset threshold; if the weighted gender prediction value is greater than the preset threshold, the gender of the target object is determined to be male, and the gender recognition result is obtained.
  • the preset threshold may be 0.5.
  • when the gender prediction value is greater than 0.5, the gender of the target object is determined to be male, and when the gender prediction value is less than or equal to 0.5, the gender of the target object is determined to be female.
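The S140 decision rule above reduces to a single comparison against the 0.5 threshold. A minimal sketch (the function name and string labels are illustrative assumptions):

```python
def decide_gender(weighted_prediction, threshold=0.5):
    """Map a weighted gender prediction value in [0, 1] to a gender label."""
    return "male" if weighted_prediction > threshold else "female"

print(decide_gender(0.72))  # → male
print(decide_gender(0.31))  # → female
```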
  • the gender recognition method based on the deep residual network obtains multiple video frames from the video stream during the dynamic walking of the target object, inputs the multiple video frames into the gender recognition model pre-trained based on the deep residual network to realize gender recognition of the target object, and can realize real-time gender recognition of pedestrians without relying on face recognition.
  • the efficiency and accuracy of gender recognition are high, and it meets the practical application requirements of real-time gender recognition of pedestrians.
  • the obtaining a preset number of video frames of the target object from a video stream based on a pedestrian tracking algorithm includes:
  • based on the KCF target tracking algorithm, the preset number of video frames of the target object is obtained from the video stream.
  • the KCF target tracking algorithm has the characteristics of fast algorithm speed and strong robustness, which can further improve the efficiency and accuracy of obtaining the preset number of video frames of the target object, and meet real-time requirements.
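The frame-acquisition step can be sketched as follows. This shows only the periodic key-frame sampling logic; the KCF tracker itself, which would supply per-frame bounding boxes of the tracked pedestrian, is not reproduced here. Function names, millisecond timestamps, and the 80 ms period are illustrative assumptions:

```python
def sample_key_frames(frame_timestamps_ms, period_ms, preset_number):
    """Pick one frame index per sampling period until preset_number frames are collected."""
    selected = []
    next_capture = frame_timestamps_ms[0] if frame_timestamps_ms else 0
    for i, t in enumerate(frame_timestamps_ms):
        if t >= next_capture:
            selected.append(i)           # keep this frame as a key frame
            next_capture = t + period_ms  # schedule the next capture
        if len(selected) == preset_number:
            break
    return selected

# e.g. a 25 fps stream (one frame every 40 ms), sampled every 80 ms:
timestamps = [i * 40 for i in range(20)]
print(sample_key_frames(timestamps, 80, 5))  # → [0, 2, 4, 6, 8]
```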
  • the performing a weighted operation on the gender prediction value to obtain the weighted gender prediction value of the target object includes:
  • a weight used for weighting calculation is preset for each video frame in the preset number of video frames to obtain the weight ratio of the preset number of video frames.
  • the weight used for weighting calculation of each video frame may be the same or different.
  • the weight of each video frame in the preset number of video frames is set according to the order of the timestamps of the video frames in the video stream; that is, the weight preset for each video frame for the weighting calculation is associated with the order of that video frame's timestamp in the video stream.
  • the weight of a video frame that is later in timestamp order is larger, so that video frames that capture a more complete target object contribute more to the weighted gender prediction value, thereby improving the accuracy of real-time identification of pedestrian gender.
  • the gender prediction value of each video frame in the preset number of video frames is multiplied by the corresponding weight to calculate a weighted average value, and the weighted average value is used as the weighted gender prediction value of the target object.
  • the weighted gender predictive value of the target object is calculated by performing a weighted operation on the gender predictive value, which can further improve the accuracy of real-time gender recognition of pedestrians.
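The weighting step above can be sketched as follows: later frames (later timestamps, which tend to capture the walking subject more completely) receive larger weights, and the per-frame gender prediction values are combined into one weighted average. The linear weighting scheme here is an illustrative assumption; the description only requires the weights to grow with timestamp order:

```python
def weighted_gender_prediction(predictions):
    """predictions: per-frame gender prediction values in timestamp order, each in [0, 1]."""
    weights = [i + 1 for i in range(len(predictions))]  # later frame -> larger weight
    total = sum(w * p for w, p in zip(weights, predictions))
    return total / sum(weights)  # weighted average, still in [0, 1]

frames = [0.4, 0.6, 0.7, 0.8, 0.9]  # per-frame predictions, earliest first
print(round(weighted_gender_prediction(frames), 3))  # → 0.76
```

Note how the late 0.9 pulls the result well above the plain mean of 0.68, reflecting the greater trust placed in the more complete later frames.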
  • the gender recognition model is pre-trained through the following steps:
  • a deep residual network is trained based on the training samples to obtain a gender recognition model.
  • a training sample for training the deep residual network into a gender recognition model is obtained from a preset pedestrian image database, wherein the database prestores a large number of pedestrian human body images, the pedestrian human body images are human body images of persons in a walking state, and each pedestrian human body image is pre-labeled with the corresponding gender.
  • one hundred thousand pre-collected human body images of males and females are obtained from a preset pedestrian database and used as input data for the deep residual network.
  • the standard deep residual network is trained according to the pedestrian human body images in the training sample and the gender labels of those images, to obtain a network structure and weights suitable for the gender recognition task of this scheme; the training yields the gender recognition model.
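The supervised training loop described above (forward pass, loss gradient, weight update on gender-labeled samples) can be illustrated with a heavily simplified stand-in. Training an actual deep residual network is far beyond a short example, so this sketch trains a single logistic unit on toy feature vectors; all data, names, and hyperparameters are illustrative assumptions, not the patent's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy stand-in for labeled "gender features": label 1 when the first feature dominates
X = rng.normal(size=(200, 4))
y = (X[:, 0] > X[:, 1]).astype(float)

w = np.zeros(4)
for _ in range(500):                 # gradient-descent iterations
    p = sigmoid(X @ w)               # forward pass: predicted probability
    grad = X.T @ (p - y) / len(y)    # gradient of the cross-entropy loss
    w -= 0.5 * grad                  # weight update

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(accuracy > 0.9)  # the toy classifier separates the toy data
```

The full scheme replaces the single logistic unit with the residual network and the toy vectors with the hundred thousand labeled pedestrian images, but the loop structure is the same.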
  • the method further includes:
  • the preset number of video frames of the target object and the corresponding gender recognition result are saved in the gender recognition result database.
  • the video frames and the corresponding gender recognition results stored in the gender recognition result database can be periodically cleaned up according to a preset intelligent strategy.
  • the method before inputting a preset number of video frames into a pre-trained gender recognition model to obtain a gender prediction value corresponding to the target object in the preset number of video frames, the method further includes:
  • before gender recognition is performed on the target object, a quick match is attempted based on the existing gender recognition results.
  • the preset database is a gender recognition result database that stores video frames of historical target objects and corresponding gender recognition results.
  • the video frames of historical target objects are human body images of pedestrians that include the historical target objects.
  • the pedestrian human body image is a human body image of a person in a walking state. One or more of the acquired preset number of video frames of the target object are matched against the video frames in the gender recognition result database to determine whether the database contains a pedestrian human body image that matches the preset number of video frames.
  • if a matching pedestrian human body image exists, the gender information of the historical target object corresponding to that image is determined according to the gender recognition result pre-stored in the gender recognition result database, and the gender information of the historical target object is used as the gender recognition result of the target object. If there is no matching pedestrian human body image in the gender recognition result database, real-time gender recognition of the target object is performed.
  • in this way, the gender recognition system does not need to perform gender recognition again on a target object that re-enters the video shooting range within the preset time period, which significantly reduces the workload of gender recognition in actual application scenarios and improves the efficiency of real-time gender recognition of pedestrians.
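The look-before-recognize step above can be sketched as a cache in front of the model: newly acquired frames are first matched against stored results, and only unmatched subjects go through the expensive recognition path. Matching on an exact key is an illustrative assumption — a real system would match on appearance features — and all names here are hypothetical:

```python
class GenderResultCache:
    """Stand-in for the gender recognition result database described above."""

    def __init__(self):
        self._results = {}  # frame signature -> stored gender result

    def store(self, frame_signature, gender):
        self._results[frame_signature] = gender

    def lookup(self, frame_signatures):
        """Return the stored result if any acquired frame matches, else None."""
        for sig in frame_signatures:
            if sig in self._results:
                return self._results[sig]
        return None  # no match: fall back to real-time recognition

cache = GenderResultCache()
cache.store("track-0042/frame-3", "female")
print(cache.lookup(["track-0099/frame-1", "track-0042/frame-3"]))  # → female
print(cache.lookup(["track-0100/frame-7"]))                        # → None
```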
  • the inputting a preset number of video frames into a pre-trained gender recognition model to obtain the gender prediction value corresponding to the target object in the preset number of video frames respectively includes:
  • the preset number of human body images of pedestrians are respectively input to a pre-trained gender recognition model to obtain gender prediction values corresponding to the target object in the preset number of video frames respectively.
  • the video surveillance tool records the video stream during the dynamic walking of the target object, so the image information in the preset number of video frames extracted from the video stream may include information within the shooting range other than the target object, which will interfere with the gender recognition result of the target object. Therefore, it is necessary to preprocess the preset number of video frames and use the preprocessed preset number of video frames as the input data of the gender recognition model.
  • the preprocessing includes:
  • the human body images of the pedestrians are subjected to operations such as normalization, noise reduction, and light supplementation; the preprocessed preset number of pedestrian human body images are used as the input data of the gender recognition model and are respectively input into the pre-trained gender recognition model to obtain the gender prediction values of the target object in the preset number of video frames.
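The preprocessing operations mentioned above can be sketched on a grayscale pedestrian crop: normalization to [0, 1], simple noise reduction, and a brightness gain as a stand-in for light supplementation. The 3x3 mean filter and the gain value are illustrative assumptions:

```python
import numpy as np

def preprocess(image_u8, brightness_gain=1.1):
    """Normalize, denoise, and brighten a grayscale uint8 crop for the model."""
    img = image_u8.astype(np.float32) / 255.0          # normalize to [0, 1]
    padded = np.pad(img, 1, mode="edge")               # pad for the mean filter
    denoised = np.zeros_like(img)
    h, w = img.shape
    for dy in range(3):                                # 3x3 mean filter (noise reduction)
        for dx in range(3):
            denoised += padded[dy:dy + h, dx:dx + w]
    denoised /= 9.0
    return np.clip(denoised * brightness_gain, 0.0, 1.0)  # light supplement, clipped

frame = np.full((4, 4), 128, dtype=np.uint8)
out = preprocess(frame)
print(out.shape, float(out.max()) <= 1.0)  # → (4, 4) True
```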
  • an embodiment of the present application provides a gender recognition device based on a deep residual network.
  • the device includes: a video frame acquisition module 21, a prediction value acquisition module 22, a weighting operation module 23, and a gender recognition result generation module 24; among them,
  • the video frame acquisition module 21 is configured to acquire a preset number of video frames of the target object from the video stream based on a pedestrian tracking algorithm;
  • the predicted value acquisition module 22 is configured to input a preset number of video frames into a pre-trained gender recognition model to obtain the gender prediction values of the target object in the preset number of video frames respectively; wherein the gender recognition model is pre-trained based on the deep residual network;
  • the weighted calculation module 23 is configured to perform a weighted calculation on the gender prediction value to obtain the weighted gender prediction value of the target object;
  • the gender recognition result generating module 24 is configured to obtain the gender recognition result of the target object according to the weighted gender prediction value.
  • the video frame acquisition module 21 is specifically configured to:
  • the preset number of video frames of the target object is obtained from the video stream.
  • the predicted value obtaining module 22 is specifically configured to:
  • the gender recognition model is pre-trained through the following steps:
  • a deep residual network is trained based on the training samples to obtain a gender recognition model.
  • the method further includes:
  • the method before inputting a preset number of video frames into a pre-trained gender recognition model to obtain a gender prediction value corresponding to the target object in the preset number of video frames, the method further includes:
  • the predicted value obtaining module 22 is specifically configured to:
  • the preset number of human body images of pedestrians are respectively input to a pre-trained gender recognition model to obtain gender prediction values corresponding to the target object in the preset number of video frames, respectively.
  • the gender recognition device based on the deep residual network provided in this application obtains multiple video frames from the video stream during the dynamic walking of the target object, inputs the multiple video frames into the gender recognition model pre-trained based on the deep residual network to realize gender recognition of the target object, and can realize real-time gender recognition of pedestrians without relying on face recognition.
  • the efficiency and accuracy of gender recognition are high, and it can meet the practical application requirements of real-time gender recognition of pedestrians.
  • the gender recognition device based on the deep residual network provided by the embodiments of the present application can implement the method embodiments provided above.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the gender recognition method based on the deep residual network described in the above embodiments is implemented.
  • the computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards.
  • a storage device includes any medium that stores or transmits information in a readable form by a device (for example, a computer or a mobile phone), and may be a read-only memory, a magnetic disk, or an optical disk.
  • the computer-readable storage medium provided in this application obtains multiple video frames from the video stream during the dynamic walking of the target object and inputs the multiple video frames into the gender recognition model pre-trained based on the deep residual network to realize gender recognition of the target object; real-time gender recognition of pedestrians can be realized without face recognition, the efficiency and accuracy of gender recognition are high, and the practical application demands of real-time gender recognition of pedestrians are met.
  • the computer-readable storage medium provided in the embodiments of the present application can implement the method embodiments provided above.
  • an embodiment of the present application also provides a computer device, as shown in FIG. 3.
  • the computer equipment described in this embodiment may be equipment such as servers, personal computers, and network equipment.
  • the computer equipment includes a processor 302, a memory 303, an input unit 304, a display unit 305 and other devices.
  • the memory 303 may be used to store a computer program 301 and various functional modules, and the processor 302 runs the computer program 301 stored in the memory 303 to execute various functional applications and data processing of the device.
  • the memory may be internal memory or external memory, or include both internal memory and external memory.
  • the internal memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory.
  • External storage can include hard disks, floppy disks, ZIP disks, U disks, tapes, etc.
  • the memory disclosed in this application includes but is not limited to these types of memory.
  • the memory disclosed in this application is only an example and not a limitation.
  • the input unit 304 is used for receiving signal input and receiving keywords input by the user.
  • the input unit 304 may include a touch panel and other input devices.
  • the touch panel can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; other input devices can include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, and a joystick.
  • the display unit 305 can be used to display information input by the user or information provided to the user and various menus of the computer device.
  • the display unit 305 can take the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the processor 302 is the control center of the computer equipment; it uses various interfaces and lines to connect the various parts of the entire computer, and executes various functions and processes data by running or executing the software programs and/or modules stored in the memory 303 and calling the data stored in the memory.
  • the computer device includes: one or more processors 302, a memory 303, and one or more computer programs 301, wherein the one or more computer programs 301 are stored in the memory 303 and configured to be executed by the one or more processors 302, and the one or more computer programs 301 are configured to execute the deep residual network-based gender recognition method described in any of the above embodiments.
  • the computer equipment provided in this application obtains multiple video frames from the video stream during the dynamic walking of the target object and inputs the multiple video frames into the gender recognition model pre-trained based on the deep residual network to realize gender recognition of the target object; real-time gender recognition of pedestrians can be realized without relying on face recognition.
  • the efficiency and accuracy of gender recognition are high, which meets the practical application requirements of real-time gender recognition of pedestrians.
  • the computer device provided in the embodiments of the present application can implement the method embodiments provided above.
  • the functional units in the various embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If an integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can be stored in a computer-readable storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a gender recognition method based on a deep residual network, comprising: obtaining a preset number of video frames of a target object from a video stream according to a pedestrian tracking algorithm (S110); inputting the preset number of video frames into a pre-trained gender recognition model to obtain gender prediction values corresponding to the target object in the preset number of video frames, respectively, said model being pre-trained on the basis of a deep residual network (S120); weighting the gender prediction values to obtain the weighted gender prediction value of the target object (S130); and obtaining the gender recognition result of the target object according to the weighted gender prediction value (S140). The method can perform real-time gender recognition of a pedestrian without face recognition, provides high gender-recognition efficiency and accuracy, and meets the practical application demands of real-time pedestrian gender recognition.
PCT/CN2019/116236 2019-01-25 2019-11-07 Method and apparatus for gender recognition based on a deep residual network, and associated medium and device WO2020151300A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910074634.4 2019-01-25
CN201910074634.4A CN109829415A (zh) 2019-01-25 2019-01-25 Gender recognition method, apparatus, medium and device based on deep residual network

Publications (1)

Publication Number Publication Date
WO2020151300A1 (fr)

Family

ID=66862501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116236 WO2020151300A1 (fr) 2019-01-25 2019-11-07 Method and apparatus for gender recognition based on a deep residual network, and associated medium and device

Country Status (2)

Country Link
CN (1) CN109829415A (fr)
WO (1) WO2020151300A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469144A (zh) * 2021-08-31 2021-10-01 北京文安智能技术股份有限公司 Video-based pedestrian gender and age recognition method and model
CN113761275A (zh) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 Video preview animation generation method, apparatus, device and readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829415A (zh) * 2019-01-25 2019-05-31 平安科技(深圳)有限公司 Gender recognition method, apparatus, medium and device based on deep residual network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3023911A1 (fr) * 2014-11-24 2016-05-25 Samsung Electronics Co., Ltd. Method and apparatus for recognizing object, and method and apparatus for training recognizer
CN106203306A (zh) * 2016-06-30 2016-12-07 北京小米移动软件有限公司 Age prediction method and apparatus, and terminal
CN106529442A (zh) * 2016-10-26 2017-03-22 清华大学 Pedestrian recognition method and device
CN107633223A (zh) * 2017-09-15 2018-01-26 深圳市唯特视科技有限公司 Video human attribute recognition method based on deep adversarial network
CN107844784A (zh) * 2017-12-08 2018-03-27 广东美的智能机器人有限公司 Face recognition method and apparatus, computer device and readable storage medium
CN108510000A (zh) * 2018-03-30 2018-09-07 北京工商大学 Method for detecting and recognizing fine-grained pedestrian attributes in complex scenes
CN109829415A (zh) * 2019-01-25 2019-05-31 平安科技(深圳)有限公司 Gender recognition method, apparatus, medium and device based on deep residual network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104980279A (zh) * 2014-10-16 2015-10-14 腾讯科技(深圳)有限公司 Identity authentication method, and related device and system
CN107705807B (zh) * 2017-08-24 2019-08-27 平安科技(深圳)有限公司 Voice quality inspection method, apparatus, device and storage medium based on emotion recognition
CN107808149A (zh) * 2017-11-17 2018-03-16 腾讯数码(天津)有限公司 Face information labeling method, apparatus and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3023911A1 (fr) * 2014-11-24 2016-05-25 Samsung Electronics Co., Ltd. Method and apparatus for recognizing object, and method and apparatus for training recognizer
CN106203306A (zh) * 2016-06-30 2016-12-07 北京小米移动软件有限公司 Age prediction method and apparatus, and terminal
CN106529442A (zh) * 2016-10-26 2017-03-22 清华大学 Pedestrian recognition method and device
CN107633223A (zh) * 2017-09-15 2018-01-26 深圳市唯特视科技有限公司 Video human attribute recognition method based on deep adversarial network
CN107844784A (zh) * 2017-12-08 2018-03-27 广东美的智能机器人有限公司 Face recognition method and apparatus, computer device and readable storage medium
CN108510000A (zh) * 2018-03-30 2018-09-07 北京工商大学 Method for detecting and recognizing fine-grained pedestrian attributes in complex scenes
CN109829415A (zh) * 2019-01-25 2019-05-31 平安科技(深圳)有限公司 Gender recognition method, apparatus, medium and device based on deep residual network

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761275A (zh) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 Video preview animation generation method, apparatus, device and readable storage medium
CN113469144A (zh) * 2021-08-31 2021-10-01 北京文安智能技术股份有限公司 Video-based pedestrian gender and age recognition method and model
CN113469144B (zh) * 2021-08-31 2021-11-09 北京文安智能技术股份有限公司 Video-based pedestrian gender and age recognition method and model

Also Published As

Publication number Publication date
CN109829415A (zh) 2019-05-31

Similar Documents

Publication Publication Date Title
US11354901B2 (en) Activity recognition method and system
Jegham et al. Vision-based human action recognition: An overview and real world challenges
  • WO2021114892A1 (fr) Body motion recognition method based on environmental semantic understanding, and apparatus, device, and storage medium
  • WO2020177673A1 (fr) Video sequence selection method, computer device, and storage medium
  • WO2020151300A1 (fr) Method and apparatus for gender recognition based on a deep residual network, and associated medium and device
  • WO2016107482A1 (fr) Method and device for determining an identity identifier of a human face in a human face image, and terminal
CN105160318A (zh) 基于面部表情的测谎方法及系统
CN107239735A (zh) 一种基于视频分析的活体检测方法和系统
CN112380512B (zh) 卷积神经网络动态手势认证方法、装置、存储介质及设备
WO2018103416A1 (fr) Procédé et dispositif de détection d'image faciale
Ponce-López et al. Multi-modal social signal analysis for predicting agreement in conversation settings
WO2018068654A1 (fr) Procédé d'estimation dynamique de modèle de scénario, procédé et appareil d'analyse de données, et dispositif électronique
US20220027606A1 (en) Human behavior recognition method, device, and storage medium
US11138417B2 (en) Automatic gender recognition utilizing gait energy image (GEI) images
Borghi et al. Fast gesture recognition with multiple stream discrete HMMs on 3D skeletons
Zuo et al. Face liveness detection algorithm based on livenesslight network
Radwan et al. Regression based pose estimation with automatic occlusion detection and rectification
Zhang et al. A review of small target detection based on deep learning
Zhu et al. Multi-target tracking via hierarchical association learning
KR100711223B1 (ko) 저니키(Zernike)/선형 판별 분석(LDA)을 이용한얼굴 인식 방법 및 그 방법을 기록한 기록매체
Ren et al. Human fall detection model with lightweight network and tracking in video
Yan et al. Foreground Extraction and Motion Recognition Technology for Intelligent Video Surveillance
Lv et al. Efficient person search via learning-to-normalize deep representation
Chattopadhyay et al. Exploiting pose information for gait recognition from depth streams
  • WO2023109551A1 (fr) Liveness detection method and apparatus, and computer device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19912077

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19912077

Country of ref document: EP

Kind code of ref document: A1