CN110879970A - Video interest area face abstraction method and device based on deep learning and storage device thereof - Google Patents

Info

Publication number
CN110879970A
CN110879970A
Authority
CN
China
Prior art keywords
face
video
deep learning
mtcnn
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911002439.7A
Other languages
Chinese (zh)
Inventor
程家明
孔繁东
陈升亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN XINGTU XINKE ELECTRONIC CO Ltd
Original Assignee
WUHAN XINGTU XINKE ELECTRONIC CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN XINGTU XINKE ELECTRONIC CO Ltd filed Critical WUHAN XINGTU XINKE ELECTRONIC CO Ltd
Priority to CN201911002439.7A priority Critical patent/CN110879970A/en
Publication of CN110879970A publication Critical patent/CN110879970A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method, device, and storage device for summarizing faces in a video region of interest, comprising the following steps: collecting one million Asian face pictures of good saliency and using them for training, detection, and recognition; improving the mtcnn_detector face detection algorithm; detecting faces in the video sequence images with the improved mtcnn_detector algorithm; initializing a Kalman filter from the improved mtcnn_detector detections; recognizing the detected faces with the FaceNet face recognition algorithm; judging with a binary classification algorithm whether a face recognized by FaceNet is the target face; comparing the Kalman filter's predicted position with the position of a face box judged non-target by the binary classifier for overlap; and synthesizing a video from the frames that contain a target face recognized by FaceNet and the frames in which the Kalman-filter prediction overlaps a face box judged non-target by the binary classifier.

Description

Video interest area face abstraction method and device based on deep learning and storage device thereof
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a deep-learning-based method and device for summarizing faces in a video region of interest and a storage device thereof.
Background
Deep learning is an important research direction in artificial intelligence and is widely applied in image recognition and video analysis, where it has greatly improved recognition and analysis accuracy. On this basis, the present invention applies a deep-learning-based face recognition algorithm to region-of-interest video summarization.
In recent years, research on face retrieval and summarization algorithms has produced many results. Patent document 1 (CN106682094A) proposes a face video retrieval method and system in which the search area of a key frame is determined from non-compressed-domain information, and the tracking search area is then obtained from motion and prediction information in the compressed domain; this reduces the data volume and computation of video retrieval and improves its timeliness. Patent document 2 (CN204102129U) discloses a device for face retrieval in video comprising a preprocessing module, a face detection module, a face extraction module, a face recognition module, and a face index association module, each connected to a system bus module; the system bus module is connected to a data interface module, which in turn is connected to a display module. This device makes it possible to quickly and accurately browse and play back target videos containing specific face information within massive video surveillance data, reducing operators' workload, shortening operation time, and improving working efficiency. Patent document 3 (CN104731964A) proposes a face summarization method based on face recognition that generates face images of the different people appearing in an original video and forms a list of those face images; its steps include scanning the image frames of the original video, determining whether a face region exists in each frame, face detection, face feature extraction, face feature clustering, and face summary image generation.
However, the face video retrieval methods and systems disclosed in patent documents 1, 2, and 3 do not involve deep-learning-based face detection and recognition algorithms, which perform well under angle and scene changes. The retrieval method of patent document 1 (CN106682094A) and the face-processing part of the device of patent document 2 (CN204102129U) both use traditional face detection algorithms, which adapt poorly to faces across multiple scenes, angles, and scales; in the face summarization method of patent document 3 (CN104731964A), the recognition accuracy of the face recognition algorithm is likewise affected by the scene and the target's features. The present invention was made in view of these shortcomings. It aims to provide a deep-learning-based method for summarizing faces in a video region of interest that rapidly recognizes, frame by frame, faces in video images across multiple scenes, angles, and scales using deep-learning face detection and recognition algorithms, stores the recognized target frames, and assembles them into a condensed video, thereby completing key-frame retrieval and video condensation.
Disclosure of Invention
In view of the above, the present invention provides a deep-learning-based method, device, and storage device for summarizing faces in a video region of interest that performs well under angle and scene changes.
The invention provides a deep-learning-based method, device, and storage device for summarizing faces in a video region of interest; the method comprises the following steps:
Step 1: collect one million Asian face pictures and train face detection and recognition models;
Step 2: improve the mtcnn_detector face detection algorithm;
Step 3: select a region of interest in the video sequence image scene with the mouse;
Step 4: detect the faces appearing in the video sequence images of step 3 with the improved mtcnn_detector algorithm of step 2, and initialize a Kalman filter;
Step 5: recognize the faces detected in step 4 with the FaceNet face recognition model;
Step 6: judge with a binary classification algorithm whether a face recognized by FaceNet is the target face;
Step 7: synthesize a video from the frames that contain the target face and the frames in which the Kalman-filter prediction overlaps a face box judged non-target by the binary classifier. Specifically: after the improved mtcnn_detector algorithm detects a face in the next frame, the detected face is recognized with FaceNet. If the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds a threshold, the face is the target face and the frame can be used for video synthesis directly. If the ratio is below the threshold, an overlap check is performed against the Kalman filter's predicted position: the Kalman filter predicts the target face position in the next frame from the target face position detected by the mtcnn_detector algorithm, and if the predicted position overlaps the face position detected by the mtcnn_detector algorithm in the next frame, the frame can also be used for video synthesis.
Further, the face pictures used to train the models in step 1 cover multiple angles, multiple scales, varied illumination, and varied backgrounds, and have good saliency.
Further, the mtcnn_detector algorithm in step 2 is improved as follows: the upper and lower limits of the mtcnn_detector face detection box are adjusted dynamically for the practical application, with the lower limit set to 5% and the upper limit set to 90% of the area of the image under detection; this dynamic adjustment reduces false detections.
Further, in step 6 the binary classification algorithm judges whether a face recognized by FaceNet is the target face as follows: a threshold is set; if the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds the threshold, the face is the target face; if the ratio is below the threshold, the Kalman filter's predicted position is checked for overlap with the face position detected by the mtcnn_detector detection box, and if they overlap, the face is the target face.
Further, the threshold in step 7 is set to 0.7.
A storage device stores instructions and data for implementing the deep-learning-based video region-of-interest face summarization method. A deep-learning-based video region-of-interest face summarization apparatus comprises a processor and the storage device; the processor loads and executes the instructions and data in the storage device to implement the method.
The technical solution provided by the invention has the following beneficial effects: it solves the problems of the heavy workload of manual retrieval and the low retrieval precision of traditional methods, improving the efficiency and accuracy of video retrieval.
Drawings
FIG. 1 is a flow chart of a video interest region face summarization method based on deep learning according to the present invention;
FIG. 2 is a detailed operation procedure of a video interest region face summarization method based on deep learning according to the present invention;
fig. 3 is a schematic diagram of the operation of the hardware device according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1 and fig. 2, an embodiment of the present invention provides a deep-learning-based method, device, and storage device for summarizing faces in a video region of interest, comprising the following steps:
Step 1: collect one million Asian face pictures and train a face detection model that locates face positions in images and a face recognition model that identifies face identities; the one million Asian face pictures cover multiple angles, multiple scales, varied illumination, and varied backgrounds, and have good saliency;
Step 2: improve the mtcnn_detector face detection algorithm. The upper and lower limits of the mtcnn_detector face detection box are adjusted dynamically for the practical application, with the lower limit set to 5% and the upper limit set to 90% of the area of the image under detection; this dynamic adjustment reduces false detections.
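The dynamic area bounds of step 2 can be sketched as a post-filter on the detector's output. This is an illustrative reconstruction, not the patent's actual code; the function name and the [x1, y1, x2, y2] box format are assumptions.

```python
import numpy as np

def filter_boxes_by_area(boxes, image_shape, lower=0.05, upper=0.90):
    """Keep only face boxes whose area lies within [lower, upper] of the
    image area, as in the improved mtcnn_detector: lower limit 5%, upper
    limit 90% of the area of the image under detection.

    boxes: iterable of [x1, y1, x2, y2] detections.
    image_shape: (height, width, ...) of the image under detection.
    """
    h, w = image_shape[:2]
    image_area = float(h * w)
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    ratios = areas / image_area
    keep = (ratios >= lower) & (ratios <= upper)
    return boxes[keep]

# On a 100x100 image: a 10x10 box (1% of the area) and a full-image box
# (100%) are rejected as likely false detections; a 30x30 box (9%) is kept.
kept = filter_boxes_by_area(
    [[0, 0, 10, 10], [0, 0, 30, 30], [0, 0, 100, 100]], (100, 100))
```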
Step 3: select a region of interest in the video sequence image scene with the mouse;
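In OpenCV-based implementations, the mouse selection of step 3 is typically done with cv2.selectROI(), which returns an (x, y, w, h) tuple after a drag. The call itself needs a GUI, so only the cropping step that would follow is sketched here; the helper name is an assumption.

```python
import numpy as np

def crop_roi(frame, roi):
    """Crop the user-selected region of interest from a video frame.

    roi: an (x, y, w, h) tuple, e.g. as returned by
    cv2.selectROI("select ROI", frame) after a mouse drag.
    """
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]

# Example: crop a 4-wide, 5-tall region starting at column 2, row 3.
frame = np.arange(100).reshape(10, 10)
crop = crop_roi(frame, (2, 3, 4, 5))  # shape (5, 4)
```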
Step 4: detect the faces appearing in the video sequence images of step 3 with the improved mtcnn_detector algorithm of step 2;
Step 5: recognize the faces detected in step 4 with the FaceNet face recognition model;
Step 6: judge with a binary classification algorithm whether a face recognized by FaceNet is the target face. The method is: set the threshold to 0.7; if the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds 0.7, the face is the target face; if the ratio is below 0.7, check whether the Kalman filter's predicted face position overlaps the face position detected by the mtcnn_detector detection box, and if they overlap, the face is the target face.
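The patent phrases the binary decision of step 6 as a "ratio of feature values" compared against 0.7. One common concrete reading, and an assumption here, is a cosine similarity between the FaceNet embedding of the detected face and an enrolled embedding from the face library:

```python
import numpy as np

def is_target_face(embedding, gallery_embedding, threshold=0.7):
    """Binary classification of step 6, sketched as cosine similarity
    between the FaceNet embedding of the detected face and an embedding
    from the face library. The cosine-similarity interpretation of the
    patent's 'ratio of feature values' is an assumption.

    Returns (decision, similarity).
    """
    a = np.asarray(embedding, dtype=float)
    b = np.asarray(gallery_embedding, dtype=float)
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim > threshold, sim

# Identical embeddings give similarity 1.0 (target face);
# orthogonal embeddings give 0.0 (not the target face).
ok, sim = is_target_face([1.0, 0.0], [1.0, 0.0])
```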
Step 7: synthesize a video from the frames that contain the target face and the frames in which the Kalman-filter prediction overlaps a face box judged non-target by the binary classifier. Specifically: after the mtcnn_detector algorithm detects a face in the next frame, the detected face is recognized with FaceNet. If the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds 0.7, the face is the target face and the frame can be used for video synthesis directly. If the ratio is below 0.7, an overlap check is performed against the Kalman filter's predicted position: the Kalman filter predicts the target face position in the next frame from the target face position detected by the mtcnn_detector algorithm, and if the predicted position overlaps the face position detected by the mtcnn_detector algorithm in the next frame, the frame can also be used for video synthesis.
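The frame-selection rule of step 7 can be sketched as below. The patent only says the Kalman-predicted and detected boxes must "coincide"; measuring that coincidence with intersection-over-union and a 0.5 threshold is an assumption, as are the function names.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def keep_frame(similarity, predicted_box, detected_box,
               sim_threshold=0.7, iou_threshold=0.5):
    """Step 7 decision: keep the frame for video synthesis when the
    FaceNet similarity exceeds the 0.7 threshold, or when the
    Kalman-predicted box overlaps the detected box sufficiently
    (IoU measure and 0.5 threshold are assumptions)."""
    if similarity > sim_threshold:
        return True  # target face recognized directly
    return iou(predicted_box, detected_box) >= iou_threshold
```

A low-similarity detection is thus still kept when it coincides with the tracker's prediction, which is what lets the condensed video bridge frames where recognition momentarily fails (e.g. under pose change).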
Referring to fig. 3, fig. 3 is a schematic diagram of a hardware device according to an embodiment of the present invention, where the hardware device specifically includes: a video interest region face summarization device 401 based on deep learning, a processor 402 and a storage device 403.
A video interest region face summarization device 401 based on deep learning: a video interest region face summarization device 401 based on deep learning realizes the video interest region face summarization method based on deep learning.
The processor 402: the processor 402 loads and executes the instructions and data in the storage device 403 to implement the method for abstracting a video region of interest face based on deep learning.
The storage device 403: the storage device 403 stores instructions and data; the storage device 403 is used to implement the method for abstracting a face of a video interest region based on deep learning.
The embodiments and features of the embodiments described herein above may be combined with each other without conflict. The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A deep-learning-based video region-of-interest face summarization method, characterized by comprising the following steps:
Step 1: collecting one million Asian face pictures and training face detection and recognition models;
Step 2: improving the mtcnn_detector face detection algorithm;
Step 3: selecting a region of interest in the video sequence image scene with the mouse;
Step 4: detecting the faces appearing in the video sequence images of step 3 with the improved mtcnn_detector algorithm of step 2, and initializing a Kalman filter;
Step 5: recognizing the faces detected in step 4 with the FaceNet face recognition model;
Step 6: judging with a binary classification algorithm whether a face recognized by FaceNet is the target face;
Step 7: synthesizing a video from the frames that contain the target face and the frames in which the Kalman-filter prediction overlaps a face box judged non-target by the binary classifier, specifically: after the improved mtcnn_detector algorithm detects a face in the next frame, the detected face is recognized with FaceNet; if the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds a threshold, the face is the target face and the frame can be used for video synthesis directly; if the ratio is below the threshold, an overlap check is performed against the Kalman filter's predicted position: the Kalman filter predicts the target face position in the next frame from the target face position detected by the improved mtcnn_detector algorithm, and if the predicted position overlaps the face position detected by the mtcnn_detector algorithm in the next frame, the frame can be used for video synthesis.
2. The deep-learning-based video region-of-interest face summarization method of claim 1, characterized in that the face pictures used to train the models in step 1 cover multiple angles, multiple scales, varied illumination, and varied backgrounds, and have good saliency.
3. The deep-learning-based video region-of-interest face summarization method of claim 1, characterized in that the mtcnn_detector algorithm in step 2 is improved as follows: the upper and lower limits of the mtcnn_detector face detection box are adjusted dynamically for the practical application, with the lower limit set to 5% and the upper limit set to 90% of the area of the image under detection; this dynamic adjustment reduces false detections.
4. The deep-learning-based video region-of-interest face summarization method of claim 1, characterized in that in step 6 the binary classification algorithm judges whether a face recognized by FaceNet is the target face as follows: a threshold is set; if the ratio of the feature value computed by FaceNet for the frame to the feature value in the face library exceeds the threshold, the face is the target face; if the ratio is below the threshold, the Kalman filter's predicted position is checked for overlap with the face position detected by the mtcnn_detector detection box, and if they overlap, the face is the target face.
5. The deep-learning-based video region-of-interest face summarization method of claim 1, characterized in that the threshold in step 7 is set to 0.7.
6. A storage device, comprising: the storage device stores instructions and data for implementing the method for abstracting the human face in the video interest region based on deep learning as claimed in claims 1-4.
7. A video interest area face abstract device based on deep learning is characterized in that: the method comprises the following steps: a processor and the storage device; the processor loads and executes instructions and data in the storage device to realize the video interest region face summarization method based on deep learning as claimed in claims 1-4.
CN201911002439.7A 2019-10-21 2019-10-21 Video interest area face abstraction method and device based on deep learning and storage device thereof Pending CN110879970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911002439.7A CN110879970A (en) 2019-10-21 2019-10-21 Video interest area face abstraction method and device based on deep learning and storage device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911002439.7A CN110879970A (en) 2019-10-21 2019-10-21 Video interest area face abstraction method and device based on deep learning and storage device thereof

Publications (1)

Publication Number Publication Date
CN110879970A (en) 2020-03-13

Family

ID=69728400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911002439.7A Pending CN110879970A (en) 2019-10-21 2019-10-21 Video interest area face abstraction method and device based on deep learning and storage device thereof

Country Status (1)

Country Link
CN (1) CN110879970A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576747B (en) * 2023-11-15 2024-06-28 深圳市紫鹏科技有限公司 Face data acquisition and analysis method, system and storage medium based on deep learning

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092930A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Method of generation of video abstract and device of generation of video abstract
CN103413330A (en) * 2013-08-30 2013-11-27 中国科学院自动化研究所 Method for reliably generating video abstraction in complex scene
CN104244113A (en) * 2014-10-08 2014-12-24 中国科学院自动化研究所 Method for generating video abstract on basis of deep learning technology
CN104331905A (en) * 2014-10-31 2015-02-04 浙江大学 Surveillance video abstraction extraction method based on moving object detection
KR101496287B1 (en) * 2014-11-11 2015-02-26 (주) 강동미디어 Video synopsis system and video synopsis method using the same
CN104731964A (en) * 2015-04-07 2015-06-24 上海海势信息科技有限公司 Face abstracting method and video abstracting method based on face recognition and devices thereof
CN106856577A (en) * 2015-12-07 2017-06-16 北京航天长峰科技工业集团有限公司 The video abstraction generating method of multiple target collision and occlusion issue can be solved
CN107943837A (en) * 2017-10-27 2018-04-20 江苏理工学院 A kind of video abstraction generating method of foreground target key frame
CN108921038A (en) * 2018-06-07 2018-11-30 河海大学 A kind of classroom based on deep learning face recognition technology is quickly called the roll method of registering
CN109002744A (en) * 2017-06-06 2018-12-14 中兴通讯股份有限公司 Image-recognizing method, device and video monitoring equipment
CN109325964A (en) * 2018-08-17 2019-02-12 深圳市中电数通智慧安全科技股份有限公司 A kind of face tracking methods, device and terminal
CN109934162A (en) * 2019-03-12 2019-06-25 哈尔滨理工大学 Facial image identification and video clip intercept method based on Struck track algorithm
CN110321873A (en) * 2019-07-12 2019-10-11 苏州惠邦医疗科技有限公司 Sensitization picture recognition methods and system based on deep learning convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王亚沛 (Wang Yapei) et al.: "对象和关键帧相结合的监控视频摘要提取方法" [A surveillance video summary extraction method combining objects and key frames], 《工业控制计算机》 [Industrial Control Computer], no. 03, 25 March 2015 (2015-03-25), pages 11-13 *

Similar Documents

Publication Publication Date Title
CN111783576B (en) Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
CN110609920B (en) Pedestrian hybrid search method and system in video monitoring scene
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN110717411A (en) Pedestrian re-identification method based on deep layer feature fusion
CN105574506A (en) Intelligent face tracking system and method based on depth learning and large-scale clustering
CN112861575A (en) Pedestrian structuring method, device, equipment and storage medium
CN103049459A (en) Feature recognition based quick video retrieval method
CN103237201A (en) Case video studying and judging method based on social annotation
CN111652035B (en) Pedestrian re-identification method and system based on ST-SSCA-Net
CN105389562A (en) Secondary optimization method for monitoring video pedestrian re-identification result based on space-time constraint
CN107610177B (en) The method and apparatus of characteristic point is determined in a kind of synchronous superposition
Bandla et al. Active learning of an action detector from untrimmed videos
CN104463232A (en) Density crowd counting method based on HOG characteristic and color histogram characteristic
CN111508006A (en) Moving target synchronous detection, identification and tracking method based on deep learning
CN103106414A (en) Detecting method of passer-bys in intelligent video surveillance
Yadav et al. Human Illegal Activity Recognition Based on Deep Learning Techniques
CN112651996A (en) Target detection tracking method and device, electronic equipment and storage medium
CN115527269B (en) Intelligent human body posture image recognition method and system
CN111539257A (en) Personnel re-identification method, device and storage medium
Ahmad et al. Embedded deep vision in smart cameras for multi-view objects representation and retrieval
Ji et al. News videos anchor person detection by shot clustering
CN110879970A (en) Video interest area face abstraction method and device based on deep learning and storage device thereof
Mantini et al. Camera Tampering Detection using Generative Reference Model and Deep Learned Features.
CN118038494A (en) Cross-modal pedestrian re-identification method for damage scene robustness
CN105893967A (en) Human body behavior detection method and system based on time sequence preserving space-time characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200313