CN106022220A - Method for performing multi-face tracking on participating athletes in sports video - Google Patents

Method for performing multi-face tracking on participating athletes in sports video

Info

Publication number
CN106022220A
CN106022220A (application CN201610301411.3A; granted as CN106022220B)
Authority
CN
China
Prior art keywords
face
path segment
convolutional neural
video
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610301411.3A
Other languages
Chinese (zh)
Other versions
CN106022220B (en)
Inventor
王进军
张顺
姜思宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hippo energy Sports Technology Co., Ltd.
Original Assignee
Xi'an Brision Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Brision Information Technology Co Ltd filed Critical Xi'an Brision Information Technology Co Ltd
Priority to CN201610301411.3A priority Critical patent/CN106022220B/en
Publication of CN106022220A publication Critical patent/CN106022220A/en
Application granted granted Critical
Publication of CN106022220B publication Critical patent/CN106022220B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a method for multi-face tracking of participating athletes in sports video. The method comprises the following steps: pre-training a convolutional neural network for face recognition; segmenting an input video into shots and selecting all close-up shot segments; performing face detection on each image in a close-up shot to obtain face detection responses; associating the face detection responses into tracklets; generating training samples according to spatio-temporal constraints between the tracklets; fine-tuning the pre-trained convolutional neural network with the obtained training samples as input, using a Siamese or Triplet network; extracting features of each face image with the fine-tuned network; and hierarchically associating all tracklets to generate the face motion trajectories. The method collects training samples online from the video to be tracked, fine-tunes the pre-trained convolutional neural network, learns more discriminative face features online, and uses these features to perform more effective multi-face tracking.

Description

A method for multi-face tracking of participating athletes in sports video
Technical field:
The invention belongs to the fields of video processing and computer vision, and specifically relates to a method for multi-face tracking of participating athletes in sports video.
Background technology:
Multi-target tracking refers to locating and tracking multiple targets of interest in a video sequence and inferring the trajectory of each target. As an important topic in the field of computer vision, multi-target tracking has significant value in video surveillance, target recognition, video information mining, and other areas.
Multi-face tracking in sports video refers to locating and simultaneously tracking the face of each participating athlete in a video, ultimately generating a face motion trajectory for each athlete. As a basic technology, multi-face tracking in sports video can be applied to higher-level tasks such as athlete identification and sports video content analysis, and therefore has considerable commercial value.
Compared with multi-target tracking in surveillance video, multi-target tracking in sports video is more challenging. First, a sports video is spliced from shots taken by multiple cameras filming the competition area from different angles, and adjacent shots may be joined by rapid cuts or gradual transitions. Second, the same athlete undergoes complex changes in pose, illumination, and scale across different shots, which greatly complicates face tracking. Finally, sports videos contain face targets with similar appearance, which adds further difficulty for multi-face tracking.
Existing sports-video patents provide no method for tracking the face of each participating athlete. The present invention fills this gap: it accurately locates and tracks multiple faces in a video and generates a face tracking trajectory for each athlete.
Summary of the invention:
To overcome the deficiencies of the prior art, the invention provides a method for multi-face tracking of participating athletes in sports video. The method can reliably locate and track the faces of multiple participating athletes in a video simultaneously, generating accurate face motion trajectories.
To achieve the above purpose, the present invention adopts the following technical scheme:
A method for multi-face tracking of participating athletes in sports video, comprising the following steps:
1) on an offline face dataset containing no fewer than 3000 distinct face classes, pre-training a convolutional neural network model for face recognition with a supervised method;
2) splitting the input video into non-overlapping shot segments by detecting shot changes, and selecting all close-up shot segments;
3) in each close-up shot segment, running a face detector on every image to obtain face detection responses;
4) in each close-up shot segment, associating face detection responses with high similarity in adjacent frames into tracklets;
5) generating positive and negative training samples from the obtained tracklets according to spatio-temporal constraints;
6) with the obtained positive and negative training samples as input, fine-tuning the convolutional neural network pre-trained in step 1) using a Siamese or Triplet network, thereby learning more discriminative and adaptive face features online;
7) extracting the face features of each image in each tracklet with the fine-tuned convolutional neural network;
8) hierarchically associating all tracklets to generate the final face motion trajectories.
In a further refinement of the invention, in step 1) the structure of the convolutional neural network is input layer - convolution and sampling layers - output layer: the input layer takes the face image, the convolution and sampling layers perform convolution and max pooling, and each neuron of the output layer corresponds to one face class.
In a further refinement, in step 5) a positive training sample consists of two face images from the same tracklet, and a negative training sample consists of two face images from two different tracklets, where the two tracklets appear simultaneously in some frame;
The positive and negative samples can also be combined into triplets: two face images from the same tracklet and a third face image from another tracklet, where the two tracklets appear simultaneously in some frame.
In a further refinement, in step 6) the Siamese network consists of two convolutional neural networks with identical structure and shared weights; it takes two face images as input and uses a contrastive loss function;
The Triplet network consists of three convolutional neural networks with identical structure and shared weights; it takes triplets as input and uses a triplet loss function.
In a further refinement, in step 8) face tracklets are associated in two steps. The first step, within each shot segment, applies a multi-target tracking method that associates tracklets using the targets' motion information and the learned discriminative face features; the second step, using only the learned face features, applies hierarchical agglomerative clustering to associate tracklets across different shots, generating the final face target trajectories.
Compared with the prior art, the invention has the following beneficial effects:
The face-recognition-based multi-target tracking method of the invention collects training samples online from the video to be tracked and fine-tunes the pre-trained face convolutional neural network, thereby learning more discriminative face features online and using these features for more effective multi-face tracking.
Brief description of the drawings:
Fig. 1 is a flow diagram of the invention.
Detailed description of the embodiments:
The invention is described in further detail below with reference to the drawings:
With reference to Fig. 1, the face-recognition-based multi-target tracking method for sports video of the invention comprises the following steps:
1) On an offline face dataset containing a large number of face classes, a convolutional neural network model for face recognition is pre-trained with a supervised method. The structure of the network is input layer - convolution and sampling layers - output layer: the input layer takes the face image, the convolution and sampling layers perform convolution and max pooling, and each neuron of the output layer corresponds to one face class.
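The "input layer - convolution and sampling layers - output layer" structure can be illustrated with a minimal plain-Python sketch of a single convolution-plus-max-pooling stage. The image, kernel, and sizes below are illustrative only, not the patent's actual 227×227×3 network:

```python
# Minimal single-channel 2D convolution followed by 2x2 max pooling,
# illustrating one "convolution and sampling" stage of the CNN.
# All sizes and kernel values are illustrative.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list with a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping size x size max pooling ("sampling" layer)."""
    out = []
    for i in range(0, len(feature_map) - size + 1, size):
        row = []
        for j in range(0, len(feature_map[0]) - size + 1, size):
            row.append(max(feature_map[i + u][j + v]
                           for u in range(size) for v in range(size)))
        out.append(row)
    return out

image = [[1, 0, 2, 1, 0],
         [0, 1, 3, 0, 1],
         [2, 1, 0, 1, 2],
         [1, 0, 1, 2, 0],
         [0, 2, 1, 0, 1]]
kernel = [[1, 0], [0, 1]]          # toy 2x2 kernel
fmap = conv2d(image, kernel)       # 4x4 feature map
pooled = max_pool(fmap)            # 2x2 map after pooling
```

A real face-recognition CNN stacks several such stages before the classification output layer; this sketch only shows the shape-reducing mechanics of one stage.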
2) The input video is split into non-overlapping shot segments by detecting shot changes. Based on the proportion of the frame occupied by each face, and the relation between the faces and reference objects in the competition area (e.g. the grass, court lines), all close-up shot segments are selected.
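The patent does not specify the shot-change detector. One common approach, shown here purely as an illustrative sketch, compares gray-level histograms of consecutive frames and starts a new shot when the difference exceeds a threshold; the tiny frames, bin count, and threshold are hypothetical:

```python
# Sketch of histogram-based shot-change detection: a frame whose gray-level
# histogram differs strongly from the previous frame starts a new shot.
# Frames are tiny 2D grayscale arrays here; the threshold is illustrative.

def histogram(frame, bins=4, max_val=255):
    hist = [0] * bins
    for row in frame:
        for px in row:
            hist[min(px * bins // (max_val + 1), bins - 1)] += 1
    return hist

def hist_diff(h1, h2):
    """L1 distance between two histograms, normalised to [0, 1]."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

def split_into_shots(frames, threshold=0.5):
    """Return lists of frame indices, one list per detected shot."""
    shots = [[0]]
    prev = histogram(frames[0])
    for idx in range(1, len(frames)):
        cur = histogram(frames[idx])
        if hist_diff(prev, cur) > threshold:
            shots.append([idx])        # hard cut: start a new shot
        else:
            shots[-1].append(idx)
        prev = cur
    return shots

dark = [[10] * 4 for _ in range(4)]    # uniformly dark frame
bright = [[240] * 4 for _ in range(4)] # uniformly bright frame
frames = [dark, dark, bright, bright]  # one cut between index 1 and 2
shots = split_into_shots(frames)       # -> [[0, 1], [2, 3]]
```

Gradual transitions, which the patent also mentions, would need a windowed variant of this comparison rather than a single-frame difference.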
3) In each close-up shot segment, a published face detector is run on every image to obtain face detection responses.
4) In each close-up shot segment, face detection responses with high similarity in adjacent frames are associated into tracklets.
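The patent leaves the adjacent-frame similarity measure unspecified. The sketch below uses bounding-box overlap (IoU) with greedy frame-to-frame matching as one plausible instantiation; the boxes and the threshold are illustrative assumptions:

```python
# Greedy frame-to-frame association of face detections into tracklets,
# using bounding-box IoU as the similarity measure. Boxes are (x1, y1, x2, y2);
# coordinates and the IoU threshold are illustrative.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_detections(frames, iou_thresh=0.3):
    """frames: list of per-frame detection lists.
    Returns tracklets as lists of (frame_index, box)."""
    tracklets = []
    active = []                        # tracklets extended in the last frame
    for t, dets in enumerate(frames):
        next_active = []
        unmatched = list(dets)
        for tr in active:
            _, last_box = tr[-1]
            best = max(unmatched, key=lambda d: iou(last_box, d), default=None)
            if best is not None and iou(last_box, best) >= iou_thresh:
                tr.append((t, best))
                unmatched.remove(best)
                next_active.append(tr)
        for d in unmatched:            # unmatched detection starts a new tracklet
            tr = [(t, d)]
            tracklets.append(tr)
            next_active.append(tr)
        active = next_active
    return tracklets

frames = [
    [(0, 0, 10, 10), (50, 50, 60, 60)],
    [(1, 1, 11, 11), (51, 50, 61, 60)],
    [(2, 1, 12, 11)],                  # second face leaves the frame
]
tracks = link_detections(frames)
```

A production tracker would also use appearance similarity and optimal (e.g. Hungarian) matching; greedy IoU linking is the simplest version of the same idea.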
5) Positive and negative training samples are generated from the obtained tracklets according to spatio-temporal constraints.
A positive training sample consists of two face images from the same tracklet. A negative training sample consists of two face images from two different tracklets that appear simultaneously in some frame. Let T_i = {x_i^1, ..., x_i^(n_i)} denote a tracklet of length n_i, where x denotes a face detection response; then the positive samples are pairs (x_i^k, x_i^l) drawn from the same tracklet T_i. If T_i and T_j are two different tracklets that appear in the same frame, the negative samples are pairs (x_i^k, x_j^m).
The positive and negative samples can further be combined into triplets: two face images from the same tracklet and a third from another tracklet, where the two tracklets appear simultaneously in some frame. Let T_i and T_j be two different tracklets appearing in the same frame; then training triplets s = (x_i^k, x_i^l, x_j^m) can be generated from T_i and T_j.
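The spatio-temporal constraints above can be sketched directly: same-tracklet pairs are positives, pairs across co-occurring tracklets are negatives, and triplets combine both. The tracklet data below is illustrative:

```python
# Generating training samples from tracklet constraints: two faces in the
# same tracklet form a positive pair; faces from two tracklets that appear
# in the same frame form a negative pair; triplets combine both.
# Tracklets here are lists of (frame_index, face_id); data is illustrative.

from itertools import combinations

def co_occur(tr_a, tr_b):
    """True if the two tracklets share at least one frame index."""
    return bool({f for f, _ in tr_a} & {f for f, _ in tr_b})

def make_pairs(tracklets):
    positives, negatives = [], []
    for tr in tracklets:
        positives += [(a, b) for (_, a), (_, b) in combinations(tr, 2)]
    for tr_a, tr_b in combinations(tracklets, 2):
        if co_occur(tr_a, tr_b):       # same frame => different identities
            negatives += [(a, b) for _, a in tr_a for _, b in tr_b]
    return positives, negatives

def make_triplets(tracklets):
    triplets = []
    for tr_a, tr_b in combinations(tracklets, 2):
        if not co_occur(tr_a, tr_b):
            continue
        for (_, anchor), (_, pos) in combinations(tr_a, 2):
            for _, neg in tr_b:
                triplets.append((anchor, pos, neg))
    return triplets

tracklets = [
    [(0, "a0"), (1, "a1")],            # tracklet A over frames 0-1
    [(1, "b0"), (2, "b1")],            # tracklet B co-occurs with A in frame 1
    [(5, "c0"), (6, "c1")],            # tracklet C never overlaps A or B
]
pos, neg = make_pairs(tracklets)
trips = make_triplets(tracklets)
```

Note that tracklet C contributes no negatives or triplets: without co-occurrence there is no guarantee that its faces belong to a different athlete, which is exactly the constraint the patent relies on.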
6) With the obtained training samples as input, the convolutional neural network pre-trained in step 1) is fine-tuned using a Siamese or Triplet network, learning more discriminative and adaptive face features online.
The Siamese network consists of two convolutional neural networks with identical structure and shared weights; it takes two face images as input and uses a contrastive loss function. In the Siamese network, the extraction of face features can be expressed as f(x) = Conv(x; W), where Conv(·) is the mapping function, x ∈ R^(227×227×3) is the input face image, and f(x) is the extracted feature vector. Let x₁, x₂ be two training images; then d_f = ‖f(x₁) − f(x₂)‖₂ is the distance between the two feature vectors. Training uses the following contrastive loss, which reduces the distance between two images of the same target while increasing the distance between two images of different targets:
L_P = (1/2) · ( y·d_f² + (1 − y)·max(0, τ − d_f)² )
where τ is the margin; y = 1 indicates that the two images come from the same target, and y = 0 that they come from different targets.
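The contrastive loss can be computed directly from the definition above. The toy 2-D feature vectors and margin value below are illustrative (the real network outputs high-dimensional features):

```python
# Contrastive loss L_P = 1/2 * ( y * d^2 + (1 - y) * max(0, tau - d)^2 ),
# where d is the Euclidean distance between the two feature vectors and
# y = 1 for a same-identity pair, y = 0 otherwise. Vectors are toy values.

import math

def euclidean(f1, f2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def contrastive_loss(f1, f2, y, tau=2.0):
    d = euclidean(f1, f2)
    return 0.5 * (y * d ** 2 + (1 - y) * max(0.0, tau - d) ** 2)

# d = 0.5 for this pair of toy vectors
same = contrastive_loss([0.0, 0.0], [0.3, 0.4], y=1)   # penalises separation
diff = contrastive_loss([0.0, 0.0], [0.3, 0.4], y=0)   # penalises closeness
```

For a same-identity pair any nonzero distance is penalised; for a different-identity pair the loss is zero once the distance exceeds the margin τ, so gradients only push apart pairs that are still too close.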
The Triplet network consists of three convolutional neural networks with identical structure and shared weights; it takes triplets as input and uses the triplet loss function. In training, for an input triplet (x_i^k, x_i^l, x_j^m), the distance between the positive pair (x_i^k, x_i^l) must be smaller than the distance between the negative pair (x_i^k, x_j^m). The loss function of the Triplet network is:
L_t = Σ_{i,j,k,l,m} max( 0, ‖f(x_i^k) − f(x_i^l)‖₂² − ‖f(x_i^k) − f(x_j^m)‖₂² + α )
where α is the distance margin.
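The triplet loss can be sketched the same way; the toy feature vectors and margin below are illustrative:

```python
# Triplet loss: sum over triplets of
#   max(0, ||f_a - f_p||^2 - ||f_a - f_n||^2 + alpha),
# which requires the anchor-positive distance to be smaller than the
# anchor-negative distance by at least the margin alpha. Vectors are toy values.

def sq_dist(f1, f2):
    return sum((a - b) ** 2 for a, b in zip(f1, f2))

def triplet_loss(triplets, alpha=0.2):
    """triplets: list of (anchor, positive, negative) feature vectors."""
    return sum(max(0.0, sq_dist(a, p) - sq_dist(a, n) + alpha)
               for a, p, n in triplets)

easy = [([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])]   # already satisfies the margin
hard = [([0.0, 0.0], [1.0, 0.0], [0.5, 0.0])]   # violates the margin
loss_easy = triplet_loss(easy)                   # hinge clips this to zero
loss_hard = triplet_loss(hard)                   # positive loss, drives updates
```

Unlike the contrastive loss, the triplet loss only constrains relative distances, so it never forces same-identity features to collapse to a single point.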
7) The fine-tuned convolutional neural network is used to extract the face features of every face image in each tracklet.
8) Face tracklets are associated in two steps. First, within each shot segment, a conventional multi-target tracking method associates tracklets using the targets' motion information and the learned discriminative face features. Second, using only the learned face features, hierarchical agglomerative clustering associates tracklets across different shots, generating the final face target trajectories.
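The cross-shot step can be sketched as average-linkage agglomerative clustering over per-tracklet features. The features, linkage choice, and stopping threshold below are illustrative assumptions, not the patent's exact procedure:

```python
# Sketch of hierarchical (average-linkage) agglomerative clustering of
# tracklets across shots: each tracklet is represented by one face feature
# vector, and the two closest clusters are merged until the minimum
# inter-cluster distance exceeds a stopping threshold. Features are toy values.

import math

def dist(f1, f2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def avg_linkage(cluster_a, cluster_b, feats):
    """Mean pairwise distance between the members of two clusters."""
    return sum(dist(feats[i], feats[j])
               for i in cluster_a for j in cluster_b) \
        / (len(cluster_a) * len(cluster_b))

def agglomerative(feats, stop_dist):
    clusters = [[i] for i in range(len(feats))]   # one cluster per tracklet
    while len(clusters) > 1:
        pairs = [(avg_linkage(clusters[a], clusters[b], feats), a, b)
                 for a in range(len(clusters))
                 for b in range(a + 1, len(clusters))]
        d, a, b = min(pairs)
        if d > stop_dist:              # no pair close enough: stop merging
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# Four tracklet features: two near the origin, two near (10, 10),
# i.e. two athletes each seen in two different shots.
feats = [(0.0, 0.0), (0.5, 0.0), (10.0, 10.0), (10.0, 10.5)]
clusters = agglomerative(feats, stop_dist=2.0)    # -> two identity clusters
```

Each final cluster corresponds to one athlete's identity, and concatenating its member tracklets in time yields that athlete's face motion trajectory.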

Claims (5)

1. A method for multi-face tracking of participating athletes in sports video, characterized by comprising the following steps:
1) on an offline face dataset containing no fewer than 3000 distinct face classes, pre-training a convolutional neural network model for face recognition with a supervised method;
2) splitting the input video into non-overlapping shot segments by detecting shot changes, and selecting all close-up shot segments;
3) in each close-up shot segment, running a face detector on every image to obtain face detection responses;
4) in each close-up shot segment, associating face detection responses with high similarity in adjacent frames into tracklets;
5) generating positive and negative training samples from the obtained tracklets according to spatio-temporal constraints;
6) with the obtained positive and negative training samples as input, fine-tuning the convolutional neural network pre-trained in step 1) using a Siamese or Triplet network, thereby learning more discriminative and adaptive face features online;
7) extracting the face features of each image in each tracklet with the fine-tuned convolutional neural network;
8) hierarchically associating all tracklets to generate the final face motion trajectories.
2. The method for multi-face tracking of participating athletes in sports video according to claim 1, characterized in that in step 1) the structure of the convolutional neural network is input layer - convolution and sampling layers - output layer: the input layer takes the face image, the convolution and sampling layers perform convolution and max pooling, and each neuron of the output layer corresponds to one face class.
3. The method for multi-face tracking of participating athletes in sports video according to claim 1, characterized in that in step 5) a positive training sample consists of two face images from the same tracklet, and a negative training sample consists of two face images from two different tracklets, where the two tracklets appear simultaneously in some frame;
The positive and negative training samples are combined into triplets: two face images from the same tracklet and a third face image from another tracklet, where the two tracklets appear simultaneously in some frame.
4. The method for multi-face tracking of participating athletes in sports video according to claim 1, characterized in that in step 6) the Siamese network consists of two convolutional neural networks with identical structure and shared weights, takes two face images as input, and uses a contrastive loss function;
The Triplet network consists of three convolutional neural networks with identical structure and shared weights, takes triplets as input, and uses a triplet loss function.
5. The method for multi-face tracking of participating athletes in sports video according to claim 1, characterized in that in step 8) face tracklets are associated in two steps: first, within each shot segment, a multi-target tracking method associates tracklets using the targets' motion information and the learned discriminative face features; second, using only the learned face features, hierarchical agglomerative clustering associates tracklets across different shots, generating the final face target trajectories.
CN201610301411.3A 2016-05-09 2016-05-09 Method for tracking multiple faces of participating athletes in sports video Expired - Fee Related CN106022220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610301411.3A CN106022220B (en) 2016-05-09 2016-05-09 Method for tracking multiple faces of participating athletes in sports video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610301411.3A CN106022220B (en) 2016-05-09 2016-05-09 Method for tracking multiple faces of participating athletes in sports video

Publications (2)

Publication Number Publication Date
CN106022220A true CN106022220A (en) 2016-10-12
CN106022220B CN106022220B (en) 2020-02-28

Family

ID=57098843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610301411.3A Expired - Fee Related CN106022220B (en) 2016-05-09 2016-05-09 Method for tracking multiple faces of participating athletes in sports video

Country Status (1)

Country Link
CN (1) CN106022220B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794385A (en) * 2010-03-23 2010-08-04 上海交通大学 Multi-angle multi-target fast human face tracking method used in video sequence
CN103942536A (en) * 2014-04-04 2014-07-23 西安交通大学 Multi-target tracking method of iteration updating track model
CN105069408A (en) * 2015-07-24 2015-11-18 上海依图网络科技有限公司 Video portrait tracking method based on human face identification in complex scenario
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN105654055A (en) * 2015-12-29 2016-06-08 广东顺德中山大学卡内基梅隆大学国际联合研究院 Method for performing face recognition training by using video data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANIV TAIGMAN ET AL.: "DeepFace: Closing the Gap to Human-Level Performance in Face Verification", CVPR *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210103718A1 (en) * 2016-10-25 2021-04-08 Deepnorth Inc. Vision Based Target Tracking that Distinguishes Facial Feature Targets
US11544964B2 (en) * 2016-10-25 2023-01-03 Deepnorth Inc. Vision based target tracking that distinguishes facial feature targets
CN108229516A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 For interpreting convolutional neural networks training method, device and the equipment of remote sensing images
CN106909625A (en) * 2017-01-20 2017-06-30 清华大学 A kind of image search method and system based on Siamese networks
CN106875425A (en) * 2017-01-22 2017-06-20 北京飞搜科技有限公司 A kind of multi-target tracking system and implementation method based on deep learning
CN106709478A (en) * 2017-02-22 2017-05-24 桂林电子科技大学 Pedestrian image feature classification method and system
CN106919917A (en) * 2017-02-24 2017-07-04 北京中科神探科技有限公司 Face comparison method
CN107292915A (en) * 2017-06-15 2017-10-24 国家新闻出版广电总局广播科学研究院 Method for tracking target based on convolutional neural networks
CN107452025A (en) * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Method for tracking target, device and electronic equipment
CN108229294A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 A kind of motion capture method, apparatus, electronic equipment and storage medium
CN108229294B (en) * 2017-09-08 2021-02-09 北京市商汤科技开发有限公司 Motion data acquisition method and device, electronic equipment and storage medium
CN108038455A (en) * 2017-12-19 2018-05-15 中国科学院自动化研究所 Bionic machine peacock image-recognizing method based on deep learning
CN108388574A (en) * 2018-01-11 2018-08-10 同济大学 Fast face search method based on triplet depth two-value networks
CN108388574B (en) * 2018-01-11 2021-07-02 同济大学 Quick face retrieval method based on triplet depth binary network
CN110096941A (en) * 2018-01-29 2019-08-06 西安科技大学 A kind of Gait Recognition system based on siamese network
CN108399381A (en) * 2018-02-12 2018-08-14 北京市商汤科技开发有限公司 Pedestrian recognition methods, device, electronic equipment and storage medium again
CN108090918A (en) * 2018-02-12 2018-05-29 天津天地伟业信息系统集成有限公司 A kind of Real-time Human Face Tracking based on the twin network of the full convolution of depth
US11301687B2 (en) 2018-02-12 2022-04-12 Beijing Sensetime Technology Development Co., Ltd. Pedestrian re-identification methods and apparatuses, electronic devices, and storage media
CN108399381B (en) * 2018-02-12 2020-10-30 北京市商汤科技开发有限公司 Pedestrian re-identification method and device, electronic equipment and storage medium
WO2019153830A1 (en) * 2018-02-12 2019-08-15 北京市商汤科技开发有限公司 Pedestrian re-identification method and apparatus, electronic device, and storage medium
US10891515B2 (en) 2018-02-13 2021-01-12 Advanced New Technologies Co., Ltd. Vehicle accident image processing method and apparatus
CN108399382A (en) * 2018-02-13 2018-08-14 阿里巴巴集团控股有限公司 Vehicle insurance image processing method and device
US10891517B2 (en) 2018-02-13 2021-01-12 Advanced New Technologies Co., Ltd. Vehicle accident image processing method and apparatus
CN108596940A (en) * 2018-04-12 2018-09-28 北京京东尚科信息技术有限公司 A kind of methods of video segmentation and device
CN108596940B (en) * 2018-04-12 2021-03-30 北京京东尚科信息技术有限公司 Video segmentation method and device
CN108509657A (en) * 2018-04-27 2018-09-07 深圳爱酷智能科技有限公司 Data distribute store method, equipment and computer readable storage medium
CN109190561A (en) * 2018-09-04 2019-01-11 四川长虹电器股份有限公司 Face identification method and system in a kind of video playing
CN109344661B (en) * 2018-09-06 2023-05-30 南京聚铭网络科技有限公司 Machine learning-based micro-proxy webpage tamper-proofing method
CN109344661A (en) * 2018-09-06 2019-02-15 南京聚铭网络科技有限公司 A kind of webpage integrity assurance of the micro code based on machine learning
CN109871469A (en) * 2019-02-28 2019-06-11 浙江大学城市学院 Tuftlet crowd recognition method based on dynamic graphical component
CN109871469B (en) * 2019-02-28 2021-09-24 浙江大学城市学院 Small cluster crowd identification method based on dynamic graphics primitives
CN111800663A (en) * 2019-04-09 2020-10-20 阿里巴巴集团控股有限公司 Video synthesis method and device
CN112132152B (en) * 2020-09-21 2022-05-27 厦门大学 Multi-target tracking and segmentation method utilizing short-range association and long-range pruning
CN112132152A (en) * 2020-09-21 2020-12-25 厦门大学 Multi-target tracking and segmenting method utilizing short-range association and long-range pruning
CN112132103A (en) * 2020-09-30 2020-12-25 新华智云科技有限公司 Video face detection and recognition method and system

Also Published As

Publication number Publication date
CN106022220B (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN106022220A (en) Method for performing multi-face tracking on participating athletes in sports video
US11544928B2 (en) Athlete style recognition system and method
CN105389562B (en) A kind of double optimization method of the monitor video pedestrian weight recognition result of space-time restriction
CN106503687A (en) The monitor video system for identifying figures of fusion face multi-angle feature and its method
CN108765394A (en) Target identification method based on quality evaluation
Amirgholipour et al. A-CCNN: adaptive CCNN for density estimation and crowd counting
CN107315795B (en) The instance of video search method and system of joint particular persons and scene
CN104751136A (en) Face recognition based multi-camera video event retrospective trace method
CN108764269A (en) A kind of cross datasets pedestrian recognition methods again based on space-time restriction incremental learning
CN111400536B (en) Low-cost tomato leaf disease identification method based on lightweight deep neural network
CN110135251B (en) Group image emotion recognition method based on attention mechanism and hybrid network
CN107808376A (en) A kind of detection method of raising one's hand based on deep learning
CN107590427A (en) Monitor video accident detection method based on space-time interest points noise reduction
CN109492534A (en) A kind of pedestrian detection method across scene multi-pose based on Faster RCNN
Khan et al. Learning deep C3D features for soccer video event detection
CN108154113A (en) Tumble event detecting method based on full convolutional network temperature figure
CN111539351A (en) Multi-task cascaded face frame selection comparison method
Ibrahem et al. Real-time weakly supervised object detection using center-of-features localization
CN108875448B (en) Pedestrian re-identification method and device
Ding et al. Machine learning model for feature recognition of sports competition based on improved TLD algorithm
CN113792686B (en) Vehicle re-identification method based on visual representation of invariance across sensors
CN106445146A (en) Gesture interaction method and device for helmet-mounted display
Ge et al. Co-saliency-enhanced deep recurrent convolutional networks for human fall detection in E-healthcare
Liu et al. A Sports Video Behavior Recognition Using Local Spatiotemporal Patterns
CN108932532A (en) A kind of eye movement data number suggesting method required for the prediction of saliency figure

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180112

Address after: 100022 building 3, building 88, building 7-10, Jianguo Road, Beijing, Chaoyang District, 305

Applicant after: Beijing Hippo energy Sports Technology Co., Ltd.

Address before: 710075 Shaanxi city of Xi'an province high tech Zone Feng Hui Road No. 18 sigma building room 10201-224-26

Applicant before: Xi'an Brision Information Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200228

Termination date: 20200509

CF01 Termination of patent right due to non-payment of annual fee