CN110795980A - Network video-based evasion identification method, equipment, storage medium and device - Google Patents


Info

Publication number
CN110795980A
CN110795980A (application CN201910391648.9A)
Authority
CN
China
Prior art keywords
face
processed
picture
escaping
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910391648.9A
Other languages
Chinese (zh)
Inventor
卢修学
黎雪峰
杨锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Rui Curer Technology Co Ltd
Original Assignee
Shenzhen Rui Curer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Rui Curer Technology Co Ltd filed Critical Shenzhen Rui Curer Technology Co Ltd
Priority to CN201910391648.9A priority Critical patent/CN110795980A/en
Publication of CN110795980A publication Critical patent/CN110795980A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a network video-based evasion identification method, equipment, storage medium and device, wherein the method comprises the following steps: capturing face pictures from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures; comparing each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture; and selecting a target fugitive image from the to-be-processed face pictures according to the face similarity. By comparing video against each fugitive face picture preset in the fugitive picture library on the basis of artificial intelligence, the accuracy and efficiency of fugitive identification are improved and the workload of fugitive identification is reduced.

Description

Network video-based evasion identification method, equipment, storage medium and device
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a network video-based evasion identification method, equipment, storage medium and device.
Background
In the prior art, the means of pursuing fugitives are limited: police photograph people on site during patrols and inspections, and the collected picture data are then compared and analyzed. The number of pictures gathered this way is small and their coverage is narrow, so many fugitives cannot be identified in time, while the police face a heavy workload.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main object of the invention is to provide a network video-based evasion identification method, equipment, storage medium and device, aiming to solve the technical problem of low fugitive identification efficiency in the prior art.
In order to achieve the above object, the invention provides a network video-based evasion identification method, which comprises the following steps:
capturing face pictures from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures;
comparing each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture;
and selecting a target fugitive image from the to-be-processed face pictures according to the face similarity.
Preferably, before the face pictures are captured from the to-be-processed portrait video to obtain the plurality of to-be-processed face pictures, the network video-based evasion identification method further comprises:
monitoring a request of a target application program, and parsing the monitored request through a video warehousing script to obtain a request address;
extracting a target uniform resource locator from the request address, and downloading a network video through the video warehousing script according to the target uniform resource locator;
and performing face detection on the network video, and if the detected network video contains face features, taking the network video containing the face features as the to-be-processed portrait video.
Preferably, the performing of face detection on the network video and, if the detected network video contains face features, taking the network video containing the face features as the to-be-processed portrait video specifically comprises:
performing face detection on the network video through a faster region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video contains face features, taking the network video containing the face features as the to-be-processed portrait video.
Preferably, the capturing of face pictures from the to-be-processed portrait video to obtain a plurality of to-be-processed face pictures specifically comprises:
performing frame extraction on the to-be-processed portrait video to obtain frame pictures;
and marking the face positions in the frame pictures, and cropping the face positions to obtain a plurality of to-be-processed face pictures.
Preferably, the comparing of each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture specifically comprises:
locating the facial feature points of each to-be-processed face picture to obtain the to-be-processed facial feature points corresponding to each to-be-processed face picture;
comparing the to-be-processed facial feature points with preset frontal-face feature points to obtain a homography matrix;
transforming the face in the to-be-processed face picture through the homography matrix to obtain a calibrated face picture;
and comparing the calibrated face picture with each fugitive face picture preset in the fugitive picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each fugitive face picture.
Preferably, the comparing, through a convolutional neural network model, of the calibrated face picture with each fugitive face picture preset in the fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture specifically comprises:
comparing the calibrated face picture with each fugitive face picture preset in the fugitive picture library through the convolution layers, pooling layers, fully connected layer and preset activation function of the convolutional neural network model to obtain the face similarity between the to-be-processed face picture and each fugitive face picture.
Preferably, after the selecting of the target fugitive image from the to-be-processed face pictures according to the face similarity, the network video-based evasion identification method further comprises:
acquiring, from the preset fugitive picture library, the library fugitive image corresponding to the target fugitive image;
acquiring the target portrait video corresponding to the target fugitive image, and acquiring the target shooting address and target shooting time of the target portrait video;
and taking the target fugitive image, the library fugitive image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for an alarm prompt.
In addition, to achieve the above object, the invention further provides a network video-based evasion identification device, which comprises a memory, a processor and a network video-based evasion identification program stored in the memory and executable on the processor, wherein the network video-based evasion identification program is configured to implement the steps of the network video-based evasion identification method described above.
Furthermore, to achieve the above object, the invention also proposes a storage medium on which a network video-based evasion identification program is stored, the program implementing, when executed by a processor, the steps of the network video-based evasion identification method described above.
In addition, to achieve the above object, the invention further provides a network video-based evasion identification apparatus, comprising:
a capturing module, configured to capture face pictures from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures;
a comparison module, configured to compare each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture;
and a selecting module, configured to select a target fugitive image from the to-be-processed face pictures according to the face similarity.
In the invention, face pictures are captured from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures, and each to-be-processed face picture is compared with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between them; by comparing video against each fugitive face picture preset in the fugitive picture library on the basis of artificial intelligence, the accuracy and efficiency of fugitive identification are improved. A target fugitive image is then selected from the to-be-processed face pictures according to the face similarity, which reduces the workload of fugitive identification.
Drawings
FIG. 1 is a schematic structural diagram of a network video-based evasion identification device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the network video-based evasion identification method of the present invention;
FIG. 3 is a flowchart of a second embodiment of the network video-based evasion identification method of the present invention;
FIG. 4 is a flowchart of a third embodiment of the network video-based evasion identification method of the present invention;
FIG. 5 is a block diagram of a first embodiment of the network video-based evasion identification apparatus of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a network video-based evasion identification device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the network video-based evasion identification device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display screen (Display); optionally, it may further include standard wired and wireless interfaces, and in the present invention the wired interface of the user interface 1003 may be a USB interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a random access memory (RAM) or a non-volatile memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the network video-based evasion identification device, which may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a network video-based evasion identification program.
In the network video-based evasion identification device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; the user interface 1003 is mainly used for connecting user equipment; the device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001 and performs the network video-based evasion identification method provided by the embodiments of the present invention.
The network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001 and performs the following operations:
capturing face pictures from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures;
comparing each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture;
and selecting a target fugitive image from the to-be-processed face pictures according to the face similarity.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
monitoring a request of a target application program, and parsing the monitored request through a video warehousing script to obtain a request address;
extracting a target uniform resource locator from the request address, and downloading a network video through the video warehousing script according to the target uniform resource locator;
and performing face detection on the network video, and if the detected network video contains face features, taking the network video containing the face features as the to-be-processed portrait video.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
performing face detection on the network video through a faster region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video contains face features, taking the network video containing the face features as the to-be-processed portrait video.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
performing frame extraction on the to-be-processed portrait video to obtain frame pictures;
and marking the face positions in the frame pictures, and cropping the face positions to obtain a plurality of to-be-processed face pictures.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
locating the facial feature points of each to-be-processed face picture to obtain the to-be-processed facial feature points corresponding to each to-be-processed face picture;
comparing the to-be-processed facial feature points with preset frontal-face feature points to obtain a homography matrix;
transforming the face in the to-be-processed face picture through the homography matrix to obtain a calibrated face picture;
and comparing the calibrated face picture with each fugitive face picture preset in the fugitive picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each fugitive face picture.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
comparing the calibrated face picture with each fugitive face picture preset in the fugitive picture library through the convolution layers, pooling layers, fully connected layer and preset activation function of the convolutional neural network model to obtain the face similarity between the to-be-processed face picture and each fugitive face picture.
Further, the network video-based evasion identification device calls the network video-based evasion identification program stored in the memory 1005 through the processor 1001, and also performs the following operations:
acquiring, from the preset fugitive picture library, the library fugitive image corresponding to the target fugitive image;
acquiring the target portrait video corresponding to the target fugitive image, and acquiring the target shooting address and target shooting time of the target portrait video;
and taking the target fugitive image, the library fugitive image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for an alarm prompt.
In this embodiment, face pictures are captured from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures, and each to-be-processed face picture is compared with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between them; by comparing video against each fugitive face picture preset in the fugitive picture library on the basis of artificial intelligence, the accuracy and efficiency of fugitive identification are improved. A target fugitive image is then selected from the to-be-processed face pictures according to the face similarity, which reduces the workload of fugitive identification.
Based on the above hardware structure, embodiments of the network video-based evasion identification method are proposed.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of the network video-based evasion identification method of the present invention.
In the first embodiment, the network video-based evasion identification method comprises the following steps:
step S10: and intercepting the face pictures of the image video to be processed to obtain a plurality of face pictures to be processed.
It should be understood that the execution subject of this embodiment is the network video-based evasion identification device, which may be an electronic device such as a smartphone, a personal computer, a desktop computer, or a server; this embodiment imposes no limitation here. The to-be-processed portrait video is a video clip containing face features. Internet video is growing rapidly, and many users publish videos involving personal life information online, so a large number of network videos can be crawled. Face detection is performed on these network videos; if face features are detected, the network video with the detected face features is taken as the to-be-processed portrait video and stored in a distributed manner, for example in a video library server. If no face features are detected, the network video without face features is deleted.
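The crawl-and-filter step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `detect_faces` is a hypothetical callable standing in for any face detector (for example a Faster R-CNN model), and videos are represented simply as iterables of frames.

```python
def filter_portrait_videos(videos, detect_faces):
    """Keep only videos in which at least one frame contains a face.

    `videos` maps a video id to an iterable of frames; `detect_faces` is any
    face detector returning a (possibly empty) list of bounding boxes.
    Videos without faces are dropped, mirroring the deletion step above.
    """
    kept = {}
    for vid, frames in videos.items():
        if any(detect_faces(frame) for frame in frames):
            kept[vid] = frames
    return kept
```

In a real pipeline the kept videos would then be written to the distributed video library server mentioned above.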
It should be noted that, in order to identify the people appearing in the to-be-processed portrait video, frame extraction needs to be performed on it. The frame rate is the number of pictures transmitted per second, and each frame is a static image, so frame extraction yields a number of frame pictures of the to-be-processed portrait video. Face detection is performed on each frame picture, the face positions in each frame picture are marked, and the marked face positions are cropped to obtain a plurality of to-be-processed face pictures, which are stored in a picture library in a distributed manner, for example in a picture library server. In this embodiment, step S10 comprises: performing frame extraction on the to-be-processed portrait video to obtain frame pictures; and marking the face positions in the frame pictures, and cropping the face positions to obtain a plurality of to-be-processed face pictures.
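The frame-extraction stride implied by the frame-rate discussion above can be computed as below. This is a hedged sketch: sampling one frame per second is an illustrative default, not a value given in the text.

```python
def frame_indices(total_frames, fps, step_seconds=1.0):
    """Indices of the frames to extract: one every `step_seconds` of footage.

    The frame rate (pictures per second) determines the stride; sampling at
    1-second intervals keeps the picture library manageable while still
    covering the people who appear in the video.
    """
    stride = max(1, round(fps * step_seconds))
    return list(range(0, total_frames, stride))
```

Each selected frame would then be passed to the face detector and cropped as described above.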
Step S20: comparing each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture.
It can be understood that the preset fugitive picture library is the set of images of all fugitives recorded in a public security system, and it contains the fugitive face pictures of all fugitives. Features are extracted from each to-be-processed face picture to obtain the corresponding to-be-processed face features, and features are extracted from each fugitive face picture preset in the fugitive picture library to obtain the corresponding fugitive face features; the to-be-processed face features and the fugitive face features are input into a convolutional neural network, which outputs the face similarity between them, i.e., the face similarity between each to-be-processed face picture and each fugitive face picture.
In this embodiment, step S20 comprises:
locating the facial feature points of each to-be-processed face picture to obtain the to-be-processed facial feature points corresponding to each to-be-processed face picture;
comparing the to-be-processed facial feature points with preset frontal-face feature points to obtain a homography matrix;
transforming the face in the to-be-processed face picture through the homography matrix to obtain a calibrated face picture;
and comparing the calibrated face picture with each fugitive face picture preset in the fugitive picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each fugitive face picture.
In a specific implementation, the facial feature points of each to-be-processed face picture are located. A global-based method may be adopted, which proceeds in a coarse-to-fine manner: shape estimation generally starts from an initial shape S0 and refines the shape step by step through a cascade of T regressors to obtain the final shape. The facial feature points, which generally refer to the feature points of the facial features (eyes, eyebrows, nose, mouth and ears), are thereby located, yielding the to-be-processed facial feature points corresponding to each to-be-processed face picture.
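The coarse-to-fine cascade described above can be written generically. In this sketch each regressor is a plain callable standing in for the learned regressors; real regressors would also condition on the image, an argument omitted here for brevity.

```python
def cascaded_shape_regression(initial_shape, regressors):
    """Refine a landmark shape step by step: S_{t+1} = S_t + R_t(S_t).

    `initial_shape` is a flat list of landmark coordinates (the S0 above);
    each regressor returns a per-coordinate update that is added to the
    current shape estimate, giving the coarse-to-fine refinement.
    """
    shape = list(initial_shape)
    for regress in regressors:
        update = regress(shape)
        shape = [s + d for s, d in zip(shape, update)]
    return shape
```

With regressors that each remove a fraction of the residual error, the shape converges toward the true landmark positions as T grows.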
It can be understood that the preset frontal-face feature points are obtained by locating the facial feature points of a standard frontal face. The homography matrix is obtained by comparing the to-be-processed facial feature points with the preset frontal-face feature points: the preset frontal-face feature points are divided into a plurality of non-overlapping units, and a local homography matrix is estimated for each unit. To improve the accuracy of the portrait similarity evaluation, the corresponding deformable units among the to-be-processed facial feature points are computed, and the deformable units in the to-be-processed face picture are transformed through the homography matrices to obtain the calibrated face picture. Comparing the calibrated face picture with each fugitive face picture through a convolutional neural network model improves the accuracy of the resulting face similarity.
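Warping each unit toward the frontal template means mapping its pixel coordinates through the local homography. A minimal sketch of that mapping in homogeneous coordinates (how the 3x3 matrix itself is estimated is left to a library routine and not shown):

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography matrix H.

    The point is lifted to homogeneous coordinates (x, y, 1), multiplied by
    H, and divided by the resulting w to return to image coordinates.
    """
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)
```

Applying the local matrix of each unit to every pixel in that unit yields the calibrated (frontalized) face picture.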
It should be noted that the calibrated face picture and each fugitive face picture preset in the fugitive picture library are input into the convolutional neural network model. The convolution layers extract features from the calibrated face picture and each fugitive face picture; their output is fed into the pooling layers, which reduce the dimensionality of the extracted features (max pooling and average pooling are commonly used). The pooling output is fed into the fully connected layer, a neural network structure that generally comprises 512 neurons, and the output of the fully connected layer is passed through a preset activation function, which may be a sigmoid activation function; the final output is the face similarity between each to-be-processed face picture and each fugitive face picture. In this embodiment, the comparing of the calibrated face picture with each fugitive face picture preset in the fugitive picture library through the convolutional neural network model specifically comprises: comparing the calibrated face picture with each fugitive face picture through the convolution layers, pooling layers, fully connected layer and preset activation function of the convolutional neural network model to obtain the face similarity between the to-be-processed face picture and each fugitive face picture.
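The final scoring stage reduces two fully connected outputs (e.g. 512-dimensional vectors) to one similarity in [0, 1]. The text specifies a sigmoid activation but not the exact scoring head, so pairing it with cosine similarity here is an assumption for illustration:

```python
import math

def sigmoid(z):
    """Standard logistic function, the preset activation mentioned above."""
    return 1.0 / (1.0 + math.exp(-z))

def face_similarity(feat_a, feat_b):
    """Score two face feature vectors (e.g. 512-d FC outputs) in [0, 1].

    Cosine similarity of the two vectors is squashed by the sigmoid, so
    identical directions score well above 0.5 and orthogonal ones near it.
    """
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return sigmoid(dot / (norm_a * norm_b))
```

A trained model would instead learn the weights that map the paired features to this score.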
Step S30: selecting a target fugitive image from the to-be-processed face pictures according to the face similarity.
It should be understood that whether the face similarity exceeds a preset similarity threshold is determined; the preset similarity threshold is usually set from an empirical value, for example 90%. If the face similarity exceeds the preset similarity threshold, the corresponding to-be-processed face picture and the fugitive face picture preset in the fugitive picture library may show the same person, that is, a fugitive has been identified in the to-be-processed face picture, and the to-be-processed face picture exceeding the preset similarity threshold is selected as the target fugitive image.
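The thresholding step above can be sketched directly; the 0.90 default mirrors the empirical 90% value mentioned, and the picture-id/score table is an illustrative data shape, not one prescribed by the text.

```python
def select_target_fugitives(similarity_table, threshold=0.90):
    """Pick the to-be-processed pictures whose best match exceeds the threshold.

    `similarity_table` maps a picture id to its highest similarity against
    the fugitive picture library; pictures above the threshold become the
    target fugitive images.
    """
    return sorted(pid for pid, score in similarity_table.items()
                  if score > threshold)
```

The selected ids would then drive the alarm-information step described in the later embodiment.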
In this embodiment, face pictures are captured from a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures, and each to-be-processed face picture is compared with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between them; by comparing video against each fugitive face picture preset in the fugitive picture library on the basis of artificial intelligence, the accuracy and efficiency of fugitive identification are improved. A target fugitive image is then selected from the to-be-processed face pictures according to the face similarity, which reduces the workload of fugitive identification.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of the method for identifying a network video-based evasive person according to the present invention, and the second embodiment of the method for identifying a network video-based evasive person according to the present invention is proposed based on the first embodiment shown in fig. 2.
In the second embodiment, before the step S10, the method further includes:
step S01: and monitoring the request of the target application program, and analyzing the monitored request through the video warehousing script to obtain a request address.
It should be understood that an automated testing tool, such as a browser automation testing framework (for example, Selenium), and a packet capture tool, such as HttpWatch, may be installed on the network video-based escaping recognition device. The target application program is driven through the automated testing tool to simulate user actions such as sliding and clicking. The requests (HTTP and the like) issued by the target application program in response to these user actions are monitored, the monitored request is parsed through the video warehousing script to obtain a request message section, and the request address is extracted from the request message section.
Step S02: and extracting a target uniform resource locator according to the request address, and downloading the network video through the video warehousing script according to the target uniform resource locator.
It can be understood that the request address includes a target Uniform Resource Locator (URL). A URL is a compact representation of the location of a resource available on the Internet and of the method for accessing it; it is the address of a standard resource on the Internet. Each file on the Internet has a unique URL, and the information contained in the target URL indicates the location of the file and how the browser should process it. A network video is located according to the target uniform resource locator and downloaded through the video warehousing script.
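Extracting the target URL from a captured request message section can be sketched with a regular expression. The request text, the header name, and the set of video extensions below are hypothetical stand-ins; the patent does not specify the message format:

```python
import re

def extract_video_url(request_text):
    # Pull the first URL ending in a common video extension out of a
    # captured request message section; returns None when nothing matches.
    pattern = r"https?://[^\s\"']+?\.(?:mp4|m3u8|flv)"
    match = re.search(pattern, request_text)
    return match.group(0) if match else None

captured = ('GET /feed HTTP/1.1\r\n'
            'Referer: "https://video.example.com/a/b/clip123.mp4?token=abc"')
url = extract_video_url(captured)
```

The non-greedy `+?` stops the match at the first video extension, so trailing query parameters (`?token=abc`) are excluded from the extracted locator.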
Step S03: and performing face detection on the network video, and if the detected network video comprises face features, taking the network video comprising the face features as a to-be-processed portrait video.
In a specific implementation, face detection can be performed on the network video through the Faster Regions with Convolutional Neural Network features (Faster R-CNN) algorithm, a faster candidate-region-based convolutional neural network. The region search strategy used in the Faster R-CNN algorithm is the region proposal network (RPN), a fully convolutional network used for extracting high-quality detection regions; the RPN can share full-image convolutional features with the detection network. Frame pictures are obtained by performing frame extraction on the network video, and feature extraction is performed on each frame picture through the convolution layers of the Faster R-CNN algorithm to obtain a feature map; candidate regions are generated through the region proposal network according to the feature map; and face detection is performed on the frame picture according to the classification network in the Faster R-CNN algorithm. If the detected network video includes face features, the network video including the face features is taken as the to-be-processed portrait video. In this embodiment, the step S03 includes: performing face detection on the network video through the faster candidate-region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video includes face features, taking the network video including the face features as the to-be-processed portrait video.
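The frame-extraction step that precedes detection reduces to choosing which frame indices to decode. A minimal, library-free sketch (the one-frame-per-second sampling rate is an illustrative choice, not specified by the text):

```python
def frame_indices(total_frames, fps, samples_per_second=1):
    # Indices of the frames to feed into the face detector. Sampling a
    # fixed number of frames per second keeps Faster R-CNN inference
    # affordable on long network videos while still covering the timeline.
    step = max(1, int(fps // samples_per_second))
    return list(range(0, total_frames, step))

# A hypothetical 4-second clip at 25 fps, sampled once per second
idx = frame_indices(total_frames=100, fps=25, samples_per_second=1)
```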
In the embodiment, a network video is captured by monitoring a request of a target application program, face detection is performed on the network video, and if the detected network video comprises face features, the network video comprising the face features is used as a portrait video to be processed, so that the video data collection range for identifying a escaper is expanded, the data range for pursuing the escaper is expanded, the efficiency for pursuing the escaper is higher, and the stability of the society is protected.
Referring to fig. 4, fig. 4 is a flowchart illustrating a third embodiment of the method for identifying a network video-based evasive person according to the present invention, and the third embodiment of the method for identifying a network video-based evasive person according to the present invention is proposed based on the second embodiment shown in fig. 3.
In the third embodiment, after the step S30, the method further includes:
step S40: and acquiring a target in-store personnel image corresponding to the target in-store personnel image from a preset in-store personnel escaping image library.
It should be understood that the target escaping personnel image is a face image captured from the network video. The escaping face image preset in the escaping personnel image library whose similarity with the target escaping personnel image exceeds the preset similarity threshold is the target in-store personnel image, and the preset escaping personnel image library includes the escaping personnel images and the corresponding identity information.
Step S50: and acquiring a target portrait video corresponding to the target escaping personnel image, and acquiring a target shooting address and target shooting time of the target portrait video.
It should be noted that, if the target escaping personnel image is identified from the network video, information related to the target escaping personnel image needs to be sent to the user equipment, so that the police can further pursue the target escaping person according to that information. Video analysis is performed on the target portrait video to obtain the target shooting address and the target shooting time of the target portrait video. The target shooting address is the address of the place where the target escaping person appeared, and an on-site investigation can be carried out according to the target shooting address so as to find the target escaping person. The target shooting time is the time when the target escaping person appeared at the target shooting address, which helps the police grasp the traveling habits of the target escaping person; the on-site investigation can likewise be scheduled according to the target shooting time. For example, the target shooting address is mall A, and the target shooting time is eleven a.m.
Step S60: and taking the target escaping personnel image, the target in-store personnel image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for alarm prompt.
In a specific implementation, the target user equipment may be an electronic device such as a smartphone or a desktop computer of a police officer, and the image of the person escaping from the target, the image of the person in the warehouse of the target, the target shooting address and the target shooting time may be sent to the target user equipment through a communication application program to perform alarm prompting. The police officer can check the warning information through the target user equipment and search the target in-store personnel according to the warning information.
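The alarm information of step S60 bundles four fields into one payload before it is sent to the target user equipment. A minimal sketch (all field names and example values are illustrative; the patent does not define a message schema):

```python
from datetime import datetime

def build_alarm_info(fugitive_img, library_img, address, shot_time):
    # Bundle the target escaping personnel image, the target in-store
    # personnel image, the target shooting address and the target
    # shooting time into a single alarm payload.
    return {
        "target_fugitive_image": fugitive_img,
        "library_image": library_img,
        "shooting_address": address,
        "shooting_time": shot_time.isoformat(),
    }

alarm = build_alarm_info("frame_012_face_0.jpg", "lib_0007.jpg",
                         "Mall A", datetime(2019, 5, 10, 11, 0))
```

The payload would then be serialized (e.g. as JSON) and pushed to the police officer's device through whatever communication application the deployment uses.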
In this embodiment, the target escaping personnel image, the target in-store personnel image, the target shooting address and the target shooting time are used as alarm information, and the alarm information is sent to the target user equipment for an alarm prompt, so that the police can check the alarm information through the target user equipment and search for the target escaping person according to the alarm information, thereby improving the efficiency of pursuing escaped persons and protecting the stability of society.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a network video-based evasion identification program, and when the network video-based evasion identification program is executed by a processor, the network video-based evasion identification program implements the following steps:
carrying out human face picture interception on a to-be-processed portrait video to obtain a plurality of human face pictures to be processed;
comparing each to-be-processed face picture with each escaping face picture preset in an escaping person picture library to obtain face similarity between each to-be-processed face picture and each escaping face picture;
and selecting an image of the person escaping from the target from each face picture to be processed according to the face similarity.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
monitoring a request of a target application program, and analyzing the monitored request through a video warehousing script to obtain a request address;
extracting a target uniform resource locator according to the request address, and downloading a network video through the video warehousing script according to the target uniform resource locator;
and performing face detection on the network video, and if the detected network video comprises face features, taking the network video comprising the face features as a to-be-processed portrait video.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
and performing face detection on the network video through a faster candidate-region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video comprises face features, taking the network video comprising the face features as a portrait video to be processed.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
performing frame extraction processing on a portrait video to be processed to obtain a frame picture;
and drawing the face position in the frame picture, and intercepting the face position to obtain a plurality of face pictures to be processed.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
positioning face characteristic points of each face picture to be processed to obtain the face characteristic points to be processed corresponding to each face picture to be processed;
comparing the human face characteristic points to be processed with preset positive face characteristic points to obtain a homography matrix;
transforming the face in the face picture to be processed through the homography matrix to obtain a calibration face picture;
and comparing the calibration face picture with each escaping face picture preset in an escaping person picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each escaping face picture.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
and comparing the calibration face picture with each escaping face picture preset in an escaping person picture library through a convolution layer, a pooling layer, a full connection layer and a preset activation function of a convolution neural network model to obtain the face similarity between the face picture to be processed and each escaping face picture.
Further, the network video-based evasion identification program when executed by the processor further implements the following operations:
acquiring a target in-store personnel image corresponding to the target escaping personnel image from a preset escaping personnel image library;
acquiring a target portrait video corresponding to the target escaping personnel image, and acquiring a target shooting address and target shooting time of the target portrait video;
and taking the target escaping personnel image, the target in-store personnel image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for alarm prompt.
In this embodiment, face pictures are captured from the to-be-processed portrait video to obtain a plurality of to-be-processed face pictures, and each to-be-processed face picture is compared with each escaping face picture preset in the escaping person picture library to obtain the face similarity between each to-be-processed face picture and each escaping face picture. Because the video is compared with each escaping face picture preset in the escaping person picture library on the basis of artificial intelligence, the accuracy and efficiency of escaping-person recognition are improved. The target escaping personnel image is then selected from the to-be-processed face pictures according to the face similarity, which reduces the workload of identifying escaped persons.
In addition, referring to fig. 5, an embodiment of the present invention further provides a network video-based escaping identification apparatus, where the network video-based escaping identification apparatus includes:
and the intercepting module 10 is used for intercepting the face pictures of the to-be-processed portrait video to obtain a plurality of to-be-processed face pictures.
It should be understood that the execution subject of this embodiment is the network video-based escaping recognition device, which may be an electronic device such as a smart phone, a personal computer, a desktop computer, or a server; this embodiment is not limited thereto. The to-be-processed portrait video is a video clip that includes face features. In today's society, Internet video develops rapidly and many users like to publish videos related to their personal lives on the Internet, so a large number of network videos can be crawled. Face detection is performed on these network videos; if face features are detected, the network video with the detected face features is taken as the to-be-processed portrait video and stored in a distributed manner, for example in a video library server. Network videos in which no face features are detected are deleted.
It should be noted that, in order to identify a portrait appearing in the to-be-processed portrait video, frame extraction needs to be performed on the to-be-processed portrait video. The frame rate is the number of pictures transmitted within one second, and each frame is a static image. A plurality of frame pictures of the to-be-processed portrait video are thereby obtained; face detection is performed on each frame picture, the face position in each frame picture is drawn, and the drawn face positions are intercepted to obtain a plurality of to-be-processed face pictures, which are stored in a picture library in a distributed manner, for example in a picture library server. In this embodiment, the capturing module 10 is further configured to perform frame extraction processing on the portrait video to be processed to obtain a frame picture; and to draw the face position in the frame picture and intercept the face position to obtain a plurality of face pictures to be processed.
A comparison module 20, configured to compare each to-be-processed face picture with each escaping face picture preset in an escaping people picture library, so as to obtain a face similarity between each to-be-processed face picture and each escaping face picture.
It can be understood that the preset escaping person picture library is a set of the portrait pictures of all escaped persons recorded in a public security system, and it comprises the face pictures of all escaped persons. Feature extraction is performed on each to-be-processed face picture to obtain the to-be-processed face features corresponding to each to-be-processed face picture, and feature extraction is performed on each escaping face picture preset in the escaping person picture library to obtain the escaping face features corresponding to each escaping face picture. The to-be-processed face features and the escaping face features are input into the convolutional neural network, which outputs the face similarity between them, that is, the face similarity between each to-be-processed face picture and each escaping face picture.
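The text leaves the exact comparison between the two feature vectors inside the network. One common choice for scoring a pair of face feature vectors — shown here purely as an illustration, not as the patent's mandated metric — is cosine similarity:

```python
import numpy as np

def cosine_similarity(feat_a, feat_b):
    # Score a to-be-processed face feature vector against an escaping
    # face feature vector; 1.0 means identical direction, values near 0
    # or below mean dissimilar features.
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional feature vectors (real embeddings are far longer)
sim_match = cosine_similarity([0.8, 0.1, 0.6], [0.8, 0.1, 0.6])
sim_other = cosine_similarity([0.8, 0.1, 0.6], [-0.6, 0.9, 0.1])
```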
In this embodiment, the comparison module 20 is further configured to perform facial feature point positioning on each to-be-processed face picture to obtain a to-be-processed face feature point corresponding to each to-be-processed face picture; comparing the human face characteristic points to be processed with preset positive face characteristic points to obtain a homography matrix; transforming the face in the face picture to be processed through the homography matrix to obtain a calibration face picture; and comparing the calibration face picture with each escaping face picture preset in an escaping person picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each escaping face picture.
In a specific implementation, facial feature point positioning is performed on each to-be-processed face picture. Specifically, a global-based positioning method (global-based methods) may be adopted, which works in a coarse-to-fine manner: shape estimation generally starts from an initial shape S0, and the shape is refined step by step through T cascaded regressors until the final shape is obtained. The facial feature points are thereby located, and the to-be-processed facial feature points corresponding to each to-be-processed face picture are obtained; the facial feature points generally refer to the feature points of the facial organs, such as the eyes, eyebrows, nose and mouth.
It can be understood that the preset positive face feature points are obtained by positioning the facial feature points of a standard frontal face. The homography matrix is obtained by comparing the to-be-processed facial feature points with the preset positive face feature points: the preset positive face feature points are divided into a plurality of non-overlapping units, and a local homography matrix is estimated for each unit. In order to improve the accuracy of the portrait similarity evaluation, the corresponding deformable units in the to-be-processed facial feature points are calculated, and the deformable units in the to-be-processed face picture are transformed through the homography matrix to obtain the calibration face picture. Comparing the calibration face picture with each escaping face picture through the convolutional neural network model then improves the accuracy of the face similarity obtained by the comparison.
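The unit-wise transformation relies on applying a 3×3 homography to 2D landmark coordinates in homogeneous form. A minimal NumPy sketch follows; the pure-translation matrix used as the worked example is illustrative, whereas real calimation matrices would come from the unit-wise estimation described above:

```python
import numpy as np

def apply_homography(H, points):
    # Map 2D facial feature points through a 3x3 homography matrix:
    # lift (x, y) to homogeneous (x, y, 1), multiply, divide out scale.
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# A pure-translation homography shifting every landmark by (2, 3)
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
landmarks = [(10.0, 20.0), (30.0, 40.0)]
out = apply_homography(H, landmarks)
```

In the scheme described above, one such matrix would be estimated and applied per non-overlapping unit rather than globally, warping each deformable unit of the to-be-processed face toward the frontal pose.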
It should be noted that the calibration face picture and each escaping face picture preset in the escaping person picture library are input into the convolutional neural network model. The convolution layer of the model extracts features from the calibration face picture and from each escaping face picture. The output of the convolution layer serves as the input of the pooling layer, which performs dimensionality reduction on the extracted features; max pooling and average pooling are the commonly used choices. The output of the pooling layer serves as the input of the fully connected layer, a neural network structure that generally comprises 512 neurons. Finally, the output of the fully connected layer is passed through a preset activation function, which can be a sigmoid activation function, and the final output is the face similarity between each to-be-processed face picture and each escaping face picture. In this embodiment, the comparison module 20 is further configured to compare the calibration face picture with each escaping face picture preset in the escaping person picture library through the convolution layer, the pooling layer, the full connection layer and the preset activation function of the convolutional neural network model, so as to obtain the face similarity between the to-be-processed face picture and each escaping face picture.
And the selecting module 30 is configured to select an image of the person escaping from the target from each to-be-processed face image according to the face similarity.
It should be understood that it is determined whether the face similarity exceeds a preset similarity threshold, which is usually set according to an empirical value, for example 90%. If the face similarity exceeds the preset similarity threshold, the corresponding to-be-processed face picture and the escaping face picture preset in the escaping person picture library may show the same person; that is, an escaping person is identified from the to-be-processed face picture. The to-be-processed face pictures exceeding the preset similarity threshold are then selected as the target escaping personnel image.
In this embodiment, face pictures are captured from the to-be-processed portrait video to obtain a plurality of to-be-processed face pictures, and each to-be-processed face picture is compared with each escaping face picture preset in the escaping person picture library to obtain the face similarity between each to-be-processed face picture and each escaping face picture. Because the video is compared with each escaping face picture preset in the escaping person picture library on the basis of artificial intelligence, the accuracy and efficiency of escaping-person recognition are improved. The target escaping personnel image is then selected from the to-be-processed face pictures according to the face similarity, which reduces the workload of identifying escaped persons.
In one embodiment, the network video-based escapement recognition device further comprises:
the analysis module is used for monitoring the request of the target application program and analyzing the monitored request through the video warehousing script to obtain a request address;
the downloading module is used for extracting a target uniform resource locator according to the request address and downloading the network video through the video warehousing script according to the target uniform resource locator;
and the detection module is used for carrying out face detection on the network video, and if the detected network video comprises face features, the network video comprising the face features is taken as the portrait video to be processed.
In an embodiment, the detection module is further configured to perform face detection on the network video through a faster candidate-region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video includes face features, to take the network video including the face features as the to-be-processed portrait video.
In one embodiment, the network video-based escapement recognition device further comprises:
the acquisition module is used for acquiring a target in-store personnel image corresponding to the target escaping personnel image from a preset escaping personnel image library; and for acquiring a target portrait video corresponding to the target escaping personnel image and acquiring a target shooting address and target shooting time of the target portrait video;
and the sending module is used for taking the target escaping personnel image, the target in-store personnel image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for alarm prompt.
Other embodiments or specific implementation manners of the network video-based escaping recognition device of the present invention may refer to the above method embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and the like does not denote any order; these words may be interpreted as names.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be substantially implemented or a part contributing to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a Read Only Memory (ROM)/Random Access Memory (RAM), a magnetic disk, an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A network video-based evasion identification method is characterized by comprising the following steps:
carrying out human face picture interception on a to-be-processed portrait video to obtain a plurality of human face pictures to be processed;
comparing each to-be-processed face picture with each escaping face picture preset in an escaping person picture library to obtain face similarity between each to-be-processed face picture and each escaping face picture;
and selecting an image of the person escaping from the target from each face picture to be processed according to the face similarity.
2. The method for identifying network video-based evasion according to claim 1, wherein before the face image capturing is performed on the to-be-processed human image video to obtain a plurality of to-be-processed face images, the method for identifying network video-based evasion further comprises:
monitoring a request of a target application program, and analyzing the monitored request through a video warehousing script to obtain a request address;
extracting a target uniform resource locator according to the request address, and downloading a network video through the video warehousing script according to the target uniform resource locator;
and performing face detection on the network video, and if the detected network video comprises face features, taking the network video comprising the face features as a to-be-processed portrait video.
3. The method according to claim 2, wherein the performing face detection on the network video, and if the detected network video includes a face feature, taking the network video including the face feature as a to-be-processed portrait video specifically includes:
and performing face detection on the network video through a faster candidate-region-based convolutional neural network (Faster R-CNN) algorithm, and if the detected network video comprises face features, taking the network video comprising the face features as a portrait video to be processed.
4. The method for identifying a network video-based evasive according to claim 1, wherein the capturing human face pictures of the human image video to be processed to obtain a plurality of human face pictures to be processed specifically comprises:
performing frame extraction processing on a portrait video to be processed to obtain a frame picture;
and drawing the face position in the frame picture, and intercepting the face position to obtain a plurality of face pictures to be processed.
5. The method for identifying evasion based on web video according to claim 1, wherein the comparing each of the to-be-processed face pictures with each escaping face picture preset in the escaping person picture library to obtain the face similarity between each to-be-processed face picture and each escaping face picture specifically comprises:
positioning face characteristic points of each face picture to be processed to obtain the face characteristic points to be processed corresponding to each face picture to be processed;
comparing the human face characteristic points to be processed with preset positive face characteristic points to obtain a homography matrix;
transforming the face in the face picture to be processed through the homography matrix to obtain a calibration face picture;
and comparing the calibration face picture with each escaping face picture preset in an escaping person picture library through a convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each escaping face picture.
6. The method for identifying evasion based on web video according to claim 5, wherein the comparing the calibration face picture with each evasive face picture preset in the evasive people picture library by the convolutional neural network model to obtain the face similarity between each to-be-processed face picture and each evasive face picture specifically comprises:
and comparing the calibration face picture with each escaping face picture preset in an escaping person picture library through a convolution layer, a pooling layer, a full connection layer and a preset activation function of a convolution neural network model to obtain the face similarity between the face picture to be processed and each escaping face picture.
7. The network video-based evasion recognition method according to any of claims 1-6, wherein after the selecting of the target escaping personnel image from each of said to-be-processed face pictures according to said face similarity, the method further comprises:
acquiring a target in-store personnel image corresponding to the target in-store personnel image from a preset in-store personnel escaping image library;
acquiring a target portrait video corresponding to the image of the target person escaping from the flight, and acquiring a target shooting address and target shooting time of the target portrait video;
and taking the target escaping personnel image, the target in-store personnel image, the target shooting address and the target shooting time as alarm information, and sending the alarm information to target user equipment for alarm prompt.
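The alarm step of claim 7 bundles four items (matched image, stored record, shooting address, shooting time) and pushes them to user equipment. A minimal sketch of that payload follows; the field names, message format, and the injected `send` transport are illustrative assumptions, as the patent does not specify a wire format:

```python
from dataclasses import dataclass

@dataclass
class AlarmInfo:
    """Alarm payload per claim 7: the matched fugitive image, the stored
    fugitive record, and where/when the portrait video was shot.
    Field names are illustrative, not taken from the patent."""
    target_fugitive_image: bytes
    target_record_image: bytes
    shooting_address: str
    shooting_time: str

    def as_message(self) -> str:
        # Human-readable summary sent alongside the image attachments.
        return (f"ALARM: suspected fugitive sighted at "
                f"{self.shooting_address} on {self.shooting_time}")

def send_alarm(info: AlarmInfo, send) -> None:
    """Push the alarm to target user equipment; `send` stands in for
    whatever transport (push notification, SMS, ...) is deployed."""
    send(info.as_message())
```

Injecting the transport keeps the assembly step testable independently of how a deployment actually notifies the target user equipment.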
8. A network video-based evasion identification equipment, comprising: a memory, a processor, and a network video-based evasion identification program stored on the memory and executable on the processor, wherein the network video-based evasion identification program, when executed by the processor, implements the steps of the network video-based evasion identification method according to any one of claims 1-7.
9. A storage medium, having stored thereon a network video-based evasion identification program which, when executed by a processor, implements the steps of the network video-based evasion identification method according to any one of claims 1-7.
10. A network video-based evasion identification device, comprising:
an interception module, configured to perform face picture interception on a to-be-processed portrait video to obtain a plurality of to-be-processed face pictures;
a comparison module, configured to compare each to-be-processed face picture with each fugitive face picture preset in a fugitive picture library to obtain the face similarity between each to-be-processed face picture and each fugitive face picture;
and a selection module, configured to select a target fugitive image from the to-be-processed face pictures according to the face similarity.
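The selection module of claim 10 picks a target from the per-picture similarity scores. The patent does not say how the selection is made; one plausible reading, sketched below, is to return the highest-scoring (to-be-processed picture, fugitive picture) pair above a threshold, where the threshold value is an assumption:

```python
from typing import List, Optional, Tuple

def select_target(similarities: List[List[float]],
                  threshold: float = 0.8) -> Optional[Tuple[int, int]]:
    """Selection-module sketch: given similarities[i][j] between
    to-be-processed face i and fugitive face j, return the (i, j) pair
    with the highest similarity at or above the threshold, else None.
    The 0.8 threshold is illustrative, not from the patent."""
    best = None
    for i, row in enumerate(similarities):
        for j, s in enumerate(row):
            if s >= threshold and (best is None or s > best[2]):
                best = (i, j, s)
    return None if best is None else (best[0], best[1])
```

Returning `None` when nothing clears the threshold lets the caller skip the alarm step of claim 7 for frames with no credible match.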
CN201910391648.9A 2019-05-10 2019-05-10 Network video-based evasion identification method, equipment, storage medium and device Pending CN110795980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910391648.9A CN110795980A (en) 2019-05-10 2019-05-10 Network video-based evasion identification method, equipment, storage medium and device

Publications (1)

Publication Number Publication Date
CN110795980A true CN110795980A (en) 2020-02-14

Family

ID=69426953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910391648.9A Pending CN110795980A (en) 2019-05-10 2019-05-10 Network video-based evasion identification method, equipment, storage medium and device

Country Status (1)

Country Link
CN (1) CN110795980A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426549A (en) * 2015-12-29 2016-03-23 北京金山安全软件有限公司 Method and device for reading webpage resources and electronic equipment
CN108197565A (en) * 2017-12-29 2018-06-22 深圳英飞拓科技股份有限公司 Target based on recognition of face seeks track method and system
CN108197605A (en) * 2018-01-31 2018-06-22 电子科技大学 Yak personal identification method based on deep learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444849A (en) * 2020-03-27 2020-07-24 上海依图网络科技有限公司 Person identification method, person identification device, electronic equipment and computer readable storage medium
CN111444849B (en) * 2020-03-27 2024-02-27 上海依图网络科技有限公司 Person identification method, device, electronic equipment and computer readable storage medium
CN112101216A (en) * 2020-09-15 2020-12-18 百度在线网络技术(北京)有限公司 Face recognition method, device, equipment and storage medium
CN113361366A (en) * 2021-05-27 2021-09-07 北京百度网讯科技有限公司 Face labeling method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110431560B (en) Target person searching method, device, equipment and medium
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
US8463025B2 (en) Distributed artificial intelligence services on a cell phone
WO2019033525A1 (en) Au feature recognition method, device and storage medium
US20180232904A1 (en) Detection of Risky Objects in Image Frames
CN108491866B (en) Pornographic picture identification method, electronic device and readable storage medium
CN110795980A (en) Network video-based evasion identification method, equipment, storage medium and device
CN112650875A (en) House image verification method and device, computer equipment and storage medium
CN111104841A (en) Violent behavior detection method and system
CN113139403A (en) Violation behavior identification method and device, computer equipment and storage medium
CN111539338A (en) Pedestrian mask wearing control method, device, equipment and computer storage medium
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
CN113568934B (en) Data query method and device, electronic equipment and storage medium
CN111738199A (en) Image information verification method, image information verification device, image information verification computing device and medium
KR20190066218A (en) Method, computing device and program for executing harmful object control
CN115240203A (en) Service data processing method, device, equipment and storage medium
CN109711287B (en) Face acquisition method and related product
EP3570207B1 (en) Video cookies
JP2022003526A (en) Information processor, detection system, method for processing information, and program
Pinthong et al. The License Plate Recognition system for tracking stolen vehicles
CN115223022B (en) Image processing method, device, storage medium and equipment
Shahab et al. Android application for presence recognition based on face and geofencing
CN113362069A (en) Dynamic adjustment method, device and equipment of wind control model and readable storage medium
CN112241671B (en) Personnel identity recognition method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination