CN106815575B - Optimization system and method for face detection result set - Google Patents


Info

Publication number
CN106815575B
Authority
CN
China
Prior art keywords
face data
face
unique identifier
preferred
deflection angle
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710047331.4A
Other languages
Chinese (zh)
Other versions
CN106815575A (en
Inventor
马亚宗 (Ma Yazong)
钟文康 (Zhong Wenkang)
邓志勇 (Deng Zhiyong)
Current Assignee
Yinchen Intelligent Identfiying Science & Technology Co Ltd Shanghai
Original Assignee
Yinchen Intelligent Identfiying Science & Technology Co Ltd Shanghai
Priority date
Filing date
Publication date
Application filed by Yinchen Intelligent Identfiying Science & Technology Co Ltd Shanghai
Priority application: CN201710047331.4A
Publication of application: CN106815575A
Application granted; publication of grant: CN106815575B
Legal status: Expired - Fee Related

Classifications

    • G Physics
    • G06 Computing; Calculating or Counting
    • G06V Image or Video Recognition or Understanding
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
        • G06V40/161 Detection; Localisation; Normalisation
        • G06V40/168 Feature extraction; Face representation
        • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
        • G06V40/172 Classification, e.g. identification
    • G06V40/18 Eye characteristics, e.g. of the iris


Abstract

The invention provides an optimization system and method for a face detection result set. Video streams are extracted from each camera device, and a tracking detection module, a feature analysis module and an optimization processing module perform front-end optimization to produce a high-quality face queue containing an optimal set of face data; a face uploading module then transmits this queue to a server through network communication for subsequent face comparison processing. Compared with the prior art, in which the original portraits obtained directly from monitoring software are used for subsequent face comparison, the high-quality face queue contains far fewer face data items of markedly higher quality. The reduced volume greatly lowers the network traffic consumed by uploading, the improved quality avoids large numbers of false comparison alarms, the comparison effect is greatly improved, and CPU resource consumption is reduced.

Description

Optimization system and method for face detection result set
Technical Field
The invention relates to the technical field of video image calculation, in particular to a method and a system for optimizing a face detection result set.
Background
Generally, an original portrait acquired by video monitoring software is obtained by decoding a Real Time Streaming Protocol (RTSP) network video stream; the acquired portrait is then uploaded over the network without any processing and provided to a background application program for comparison.
An existing network high-definition camera typically captures video at 25 frames per second. If a monitored target stays in the acquisition area for just 1 second, 25 face images are generated; a stay of 1 minute yields 1500 faces. This produces massive amounts of identification data to be compared, with widely varying face deflection angles and picture quality. Using all of these faces for recognition and comparison consumes a great deal of CPU resources, and uploading the massive data to the background over the network for comparison consumes a large amount of network traffic. In addition, the pictures collected by an RTSP camera may be of poor quality because of factors such as posture, lighting and weather, and comparing such images directly can produce a large number of false alarms.
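The data-volume claim above is simple frame-rate arithmetic; a minimal sketch, assuming one detected face per frame (the frame rate and dwell times are taken from the paragraph above):

```python
# One face crop per frame is assumed, as in the scenario described above.
FPS = 25  # capture rate of a typical network high-definition camera

def faces_generated(dwell_seconds: int, fps: int = FPS) -> int:
    """Face crops produced while a monitored target stays in view."""
    return fps * dwell_seconds

print(faces_generated(1))   # 25 faces for a 1-second stay
print(faces_generated(60))  # 1500 faces for a 1-minute stay
```

At 25 crops per second per target, every extra second of dwell time multiplies the comparison workload, which is the motivation for filtering before upload.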
Therefore, there is a need for an optimization system and method for a face detection result set that reduces network traffic and CPU resource consumption and improves the comparison effect, so as to effectively overcome the shortcomings of the prior art.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides an optimization system and method for a face detection result set that reduce network traffic and CPU resource consumption and improve the comparison effect.
To achieve the above and other related objects, the present invention provides an optimization system for a face detection result set, which is applied to a pre-processing unit and a server based on network communication, the pre-processing unit being connected to at least one camera device. The optimization system comprises the following modules.
A tracking detection module, arranged in the pre-processing unit, extracts video streams from the camera devices in real time, decodes them to generate frame data, performs face tracking detection on the generated frame data to obtain face data, classifies and gathers all face data belonging to the same person, and assigns a unique identifier to all face data belonging to the same person.
A feature analysis module, arranged in the pre-processing unit, performs facial feature analysis on the face data obtained by the tracking detection module to obtain an evaluation parameter for each piece of face data; the evaluation parameter comprises an eye distance threshold, a deflection angle threshold and an image definition.
An optimization processing module, arranged in the pre-processing unit, presets a timing time and extracts face data carrying evaluation parameters from the feature analysis module. According to a first preferred rule, it determines whether the extracted face data satisfies that rule: if so, the face data is placed in a low deflection angle preferred queue, and if not, in a high definition preferred queue. According to a second preferred rule, the low deflection angle preferred queue stores a low deflection angle preferred queue corresponding to each of at least one unique identifier, and every piece of face data in the queue corresponding to each unique identifier belongs to the low deflection angle range; according to a third preferred rule, the high definition preferred queue likewise stores a high definition preferred queue corresponding to each of at least one unique identifier, and every piece of face data in the queue corresponding to each unique identifier belongs to the high definition range. When the preset timing time is reached, the timing is reset and restarted; the module then extracts, from the low deflection angle preferred queue corresponding to each unique identifier, the piece of face data with the highest image definition and stores it in a high-quality face queue, and extracts, from the high definition preferred queue corresponding to each unique identifier, the piece of face data with the lowest deflection angle and stores it in the high-quality face queue.
A face uploading module, arranged in the pre-processing unit, transmits the high-quality face queue stored by the optimization processing module to the server through network communication for subsequent face comparison processing.
More specifically, the feature analysis module performs facial feature analysis on each piece of face data to obtain its face position value and its eye and mouth position values, and calculates the eye distance threshold of the face data from these values.
Specifically, the first preferred rule is: the optimization processing module classifies face data whose deflection angle threshold is smaller than a predetermined deflection angle and whose eye distance threshold is larger than a predetermined eye distance value as low deflection angle face data. In this embodiment, the predetermined deflection angle may be, for example, 15 degrees, and the predetermined eye distance value may be, for example, 25 pixels. The second preferred rule is: for face data placed in the low deflection angle preferred queue, the optimization processing module determines, according to the unique identifier assigned to that face data by the tracking detection module, whether the identifier already exists in the low deflection angle preferred queue. If not, it creates a low deflection angle preferred queue corresponding to the unique identifier and stores the face data in it. If so, it further determines whether the low deflection angle preferred queue corresponding to the unique identifier is full: if full, the face data with the highest deflection angle in that queue is replaced; if not full, the extracted face data is stored in that queue.
The third preferred rule is: the preferred processing module is used for determining whether the unique identifier exists in the high-definition preferred queue or not according to the face data positioned in the high-definition preferred queue and the unique identifier distributed to the positioned face data in the tracking detection module, and if not, then, based on the unique identifier, a high definition preferred queue corresponding to the unique identifier is created, and storing the located face data into a high definition preferred queue corresponding to the unique identifier, if so, it is further determined whether the high definition preferred queue corresponding to the unique identifier is full and, if so, replacing the face data with the lowest image definition in the high-definition preferred queue corresponding to the unique identifier, if not, the extracted face data is stored in a high definition preferred queue corresponding to the unique identifier.
In addition, the present invention provides an optimization method for a face detection result set, which is applied to a pre-processing unit and a server based on network communication, the pre-processing unit being connected to at least one camera device. The optimization method comprises the following steps:
1) extracting a video stream from the camera device in real time and decoding it to generate frame data;
2) performing face tracking detection on the generated frame data to obtain face data, classifying and gathering the face data belonging to the same person, and assigning a unique identifier to all face data belonging to the same person;
3) performing facial feature analysis on each piece of face data to obtain its evaluation parameter, the evaluation parameter comprising an eye distance threshold, a deflection angle threshold and an image definition;
4) extracting a piece of face data carrying its evaluation parameter, and determining whether its deflection angle threshold is smaller than a predetermined deflection angle; if so, going to step 5), and if not, going to step 10);
5) determining whether the eye distance threshold of the extracted face data is larger than a predetermined eye distance value; if so, going to step 6), and if not, going to step 10);
6) placing the extracted face data in a low deflection angle preferred queue and determining, from its unique identifier, whether that identifier already exists in the queue; if so, going to step 7); if not, creating a low deflection angle preferred queue corresponding to the unique identifier, storing the extracted face data in it, and going to step 8);
7) determining whether the low deflection angle preferred queue corresponding to the unique identifier is full; if so, replacing the face data with the highest deflection angle in that queue and going to step 8); if not, storing the extracted face data in that queue and going to step 8);
8) presetting a timing time and determining whether it has been reached; if so, resetting the timing, restarting it, and going to step 9); if not, returning to step 4);
9) extracting, from the low deflection angle preferred queue corresponding to each unique identifier, the piece of face data with the highest image definition, storing it in a high-quality face queue, uploading the high-quality face queue to the server through network communication, and going to step 14);
10) placing the extracted face data in a high definition preferred queue and determining, from its unique identifier, whether that identifier already exists in the queue; if so, going to step 11); if not, creating a high definition preferred queue corresponding to the unique identifier, storing the extracted face data in it, and going to step 12);
11) determining whether the high definition preferred queue corresponding to the unique identifier is full; if so, replacing the face data with the lowest image definition in that queue and going to step 12); if not, storing the extracted face data in that queue and going to step 12);
12) determining whether the preset timing time has been reached; if so, resetting the timing, restarting it, and going to step 13); if not, returning to step 4);
13) extracting, from the high definition preferred queue corresponding to each unique identifier, the piece of face data with the lowest deflection angle, storing it in the high-quality face queue, uploading the high-quality face queue to the server through network communication, and going to step 14);
14) determining whether an optimization end instruction has been received; if not, returning to step 4), and if so, ending the optimization operation.
More specifically, step 3) further includes: performing facial feature analysis on the face data to obtain its face position value and its eye and mouth position values, and calculating the eye distance threshold of the face data from these values. In this embodiment, the predetermined deflection angle may be, for example, 15 degrees, and the predetermined eye distance value may be, for example, 25 pixels. Steps 1) to 14) are all carried out in the pre-processing unit.
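The fourteen steps can be condensed into a dispatch-and-flush sketch. This is an illustrative reconstruction, not the patented implementation: the field names (`uid`, `deflection`, `eye_distance`, `sharpness`) and the queue capacity are assumptions; only the two example thresholds (15 degrees, 25 pixels) come from the text.

```python
from collections import defaultdict

MAX_QUEUE = 5          # assumed per-identifier queue capacity (not fixed by the patent)
PRESET_ANGLE = 15.0    # degrees, example value given in the description
PRESET_EYE_DIST = 25   # pixels, example value given in the description

low_angle = defaultdict(list)   # unique identifier -> low deflection angle preferred queue
high_def = defaultdict(list)    # unique identifier -> high definition preferred queue
high_quality = []               # the high-quality face queue uploaded to the server

def insert_bounded(queue, face, badness):
    """Steps 6)-7) / 10)-11): append until the queue is full, then
    replace the worst entry if the new face scores better."""
    if len(queue) < MAX_QUEUE:
        queue.append(face)
        return
    worst = max(range(len(queue)), key=lambda i: badness(queue[i]))
    if badness(face) < badness(queue[worst]):
        queue[worst] = face

def dispatch(face):
    """Steps 4)-5): route one analysed face by the first preferred rule."""
    if face["deflection"] < PRESET_ANGLE and face["eye_distance"] > PRESET_EYE_DIST:
        # low deflection queue: the worst face is the one with the HIGHEST angle
        insert_bounded(low_angle[face["uid"]], face, badness=lambda f: f["deflection"])
    else:
        # high definition queue: the worst face is the one with the LOWEST sharpness
        insert_bounded(high_def[face["uid"]], face, badness=lambda f: -f["sharpness"])

def flush():
    """Steps 9) and 13): on each timer tick, promote the best face per
    identifier into the high-quality face queue for uploading."""
    for faces in low_angle.values():
        high_quality.append(max(faces, key=lambda f: f["sharpness"]))
    for faces in high_def.values():
        high_quality.append(min(faces, key=lambda f: f["deflection"]))
    low_angle.clear()
    high_def.clear()
```

Calling `dispatch` once per analysed face and `flush` once per timing interval reproduces the loop formed by steps 4) through 14).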
As described above, the present invention provides an optimization system and method for a face detection result set. Video streams are extracted from each camera device, and front-end optimization by the tracking detection module, feature analysis module and optimization processing module yields a high-quality face queue containing an optimal set of face data, which the face uploading module then transmits to the server through network communication for subsequent face comparison processing. Compared with the prior art, in which the original portraits obtained directly from monitoring software are used for subsequent face comparison, the high-quality face queue contains far fewer face data items of greatly improved quality: the reduced quantity greatly lowers the network traffic consumed by uploading, the improved quality avoids large numbers of false comparison alarms, the comparison effect is greatly improved, and CPU resource consumption is also reduced.
Drawings
Fig. 1 is a block diagram of the optimization system for a face detection result set according to the present invention.
Fig. 2 is a schematic operation flow diagram of the optimization method for a face detection result set according to the present invention.
Description of the element reference numerals
1 Pre-processing Unit
100 optimization system for the face detection result set
101 tracking detection module
102 feature analysis module
103 optimization processing module
104 face uploading module
2 Server
3 image pickup apparatus
S100 to S119 steps
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to Fig. 1, the present invention provides an optimization system for a face detection result set, which is applied to a pre-processing unit 1 and a server 2 based on network communication. The pre-processing unit 1 is connected to at least one camera device 3 and is capable of monitoring multiple video streams simultaneously. As shown in Fig. 1, the optimization system 100 for the face detection result set includes a tracking detection module 101, a feature analysis module 102, an optimization processing module 103 and a face uploading module 104. The system is described in detail below.
As shown in Fig. 1, the tracking detection module 101 is disposed in the pre-processing unit 1 and is configured to extract a video stream from each camera device 3 in real time, decode it to generate frame data, perform face tracking detection on the generated frame data to obtain face data, classify and gather all face data belonging to the same person, and assign a unique identifier to all face data belonging to the same person.
The feature analysis module 102 is disposed in the pre-processing unit 1 and configured to perform facial feature analysis on each piece of face data obtained by the tracking detection module 101 to obtain an evaluation parameter corresponding to each piece of face data, where the evaluation parameter includes an eye distance threshold, a deflection angle threshold, and an image sharpness. Specifically, the feature analysis module 102 performs facial feature analysis on each face data to obtain a face position value and an eye and mouth position value of each face data, and calculates an eye distance threshold of each face data according to the obtained face position value and eye and mouth position value.
The optimization processing module 103 is disposed in the pre-processing unit 1. It presets a timing time and extracts face data carrying evaluation parameters from the feature analysis module 102. According to a first preferred rule, it determines whether the extracted face data satisfies that rule: if so, the face data is placed in a low deflection angle preferred queue; if not, it is placed in a high definition preferred queue. According to a second preferred rule, the low deflection angle preferred queue stores a low deflection angle preferred queue corresponding to each of at least one unique identifier, and every piece of face data in the queue corresponding to each unique identifier belongs to the low deflection angle range; according to a third preferred rule, the high definition preferred queue likewise stores a high definition preferred queue corresponding to each of at least one unique identifier, and every piece of face data in the queue corresponding to each unique identifier belongs to the high definition range. When the preset timing time is reached, the timing is reset and restarted; the module then extracts, from the low deflection angle preferred queue corresponding to each unique identifier, the piece of face data with the highest image definition and stores it in a high-quality face queue, and extracts, from the high definition preferred queue corresponding to each unique identifier, the piece of face data with the lowest deflection angle and stores it in the high-quality face queue.
More specifically, the timing time preset by the optimization processing module 103 periodically polls the real-time face data obtained by the tracking detection module 101, which ensures that new face data is promoted to the high-quality face queue once per timing interval and thereby preserves the real-time character of the face data.
Specifically, the first preferred rule is: the preferred processing module 103 locates the face data satisfying that the yaw angle threshold is smaller than a predetermined yaw angle and the eye distance threshold is larger than a predetermined eye distance value as the face data of a low yaw angle. In other words, the face data that does not satisfy the first preferred rule is located as high definition face data. In addition, in the present embodiment, the predetermined deflection angle may be, for example, 15 degrees, and the predetermined eye distance value may be, for example, 25 pixels, but is not limited thereto.
The second preferred rule is: for face data placed in the low deflection angle preferred queue, the optimization processing module 103 determines, according to the unique identifier assigned to that face data by the tracking detection module 101, whether the identifier already exists in the low deflection angle preferred queue. If not, it creates a low deflection angle preferred queue corresponding to the unique identifier and stores the face data in it. If so, it further determines whether the low deflection angle preferred queue corresponding to the unique identifier is full: if full, the face data with the highest deflection angle in that queue is replaced; if not full, the extracted face data is stored in that queue.
The third preferred rule is: for face data placed in the high definition preferred queue, the optimization processing module 103 determines, according to the unique identifier assigned to that face data by the tracking detection module 101, whether the identifier already exists in the high definition preferred queue. If not, it creates a high definition preferred queue corresponding to the unique identifier and stores the face data in it. If so, it further determines whether the high definition preferred queue corresponding to the unique identifier is full: if full, the face data with the lowest image definition in that queue is replaced; if not full, the extracted face data is stored in that queue.
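The second and third preferred rules share the same lookup-or-create branch structure; a sketch of that shared part follows. The queue capacity and the `uid` field name are assumptions, since the patent does not fix a queue size:

```python
def locate_in_queue(preferred_queue: dict, face: dict, capacity: int = 5) -> str:
    """Shared branch structure of the second and third preferred rules:
    create a per-identifier queue on first sight, store while there is
    room, and otherwise signal that the replacement step must run."""
    uid = face["uid"]
    if uid not in preferred_queue:         # identifier not seen yet
        preferred_queue[uid] = [face]      # create the queue and store
        return "created"
    if len(preferred_queue[uid]) < capacity:
        preferred_queue[uid].append(face)  # queue not full: just store
        return "stored"
    return "full"                          # caller replaces the worst entry
```

The two rules differ only in what happens on `"full"`: the low deflection angle queue evicts the entry with the highest deflection angle, while the high definition queue evicts the entry with the lowest image definition.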
The face uploading module 104 is disposed in the pre-processing unit 1 and transmits the high-quality face queue stored by the optimization processing module 103, that is, all face data stored in the queue, to the server 2 through network communication for subsequent face comparison processing. The face data in the high-quality face queue has undergone the front-end optimization of the tracking detection module 101, the feature analysis module 102 and the optimization processing module 103, and constitutes an optimal set of face data. Compared with the prior art, in which the original portraits obtained directly from monitoring software are used for subsequent face comparison, the high-quality face queue contains far fewer face data items of greatly improved quality: the reduced quantity greatly lowers the network traffic consumed by uploading, the improved quality avoids large numbers of false comparison alarms, the comparison effect is greatly improved, and CPU resource consumption is also reduced.
Referring to Fig. 2, the operation steps of the optimization method executed by the optimization system for the face detection result set described above are shown.
As shown in fig. 2, first, step S100 is executed to extract a video stream from at least one image capturing apparatus in real time and perform a decoding process to generate frame data. Subsequently, step S101 is performed.
In step S101, face tracking detection is performed on the generated frame data to obtain face data, all face data belonging to the same person are classified and gathered, and a unique identifier is assigned to all face data belonging to the same person. Subsequently, step S102 is performed.
In step S102, facial feature analysis is performed on each face data to obtain an evaluation parameter corresponding to each face data, where the evaluation parameter includes an eye distance threshold, a deflection angle threshold, and an image sharpness. Specifically, facial feature analysis is performed on each face data to obtain a face position value and an eye and mouth position value of each face data, and an eye distance threshold of each face data is calculated according to the obtained face position value and eye and mouth position value. Subsequently, step S103 is performed.
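The text does not disclose the exact eye-distance formula; one plausible reading, offered purely as an assumption, is the pixel distance between the two eye centres produced by the facial feature analysis:

```python
import math

def eye_distance(left_eye, right_eye):
    """Euclidean distance in pixels between the detected eye centres.
    This concrete formula is an assumption; the patent states only that
    the threshold is computed from the face position value and the eye
    and mouth position values."""
    return math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
```

Whatever the exact formula, the quantity acts as a proxy for face size in the frame, which is why step S104 rejects faces whose eye distance falls at or below the 25-pixel example value.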
In step S103, a piece of face data for which the evaluation parameters have been obtained is extracted, and it is determined whether the deflection angle threshold of the extracted face data is smaller than a predetermined deflection angle; if so, step S104 is performed, and if not, step S112 is performed. In this embodiment, the predetermined deflection angle may be, for example, 15 degrees, but is not limited thereto.
In step S104, it is determined whether the eye distance threshold of the extracted face data is greater than a predetermined eye distance value; if so, the process proceeds to step S105, and if not, to step S112. In this embodiment, the predetermined eye distance value may be, for example, 25 pixels, but is not limited thereto.
In step S105, the extracted face data is positioned in a low deflection angle preferred queue, and it is determined whether the unique identifier already exists in the low deflection angle preferred queue according to the unique identifier of the extracted face data, if so, the process proceeds to step S106, and if not, the process proceeds to step S109.
In step S106, it is determined whether the low deflection angle preferred queue corresponding to the unique identifier is full, and if so, the process proceeds to step S107, and if not, the process proceeds to step S108.
In step S107, the face data with the highest deflection angle in the low deflection angle preferred queue corresponding to the unique identifier is replaced. Subsequently, step S110 is performed.
In step S108, the extracted face data is stored into a low deflection angle preferred queue corresponding to the unique identifier. Subsequently, step S110 is performed.
In step S109, a low-deflection angle preferred queue corresponding to the unique identifier is created according to the unique identifier, and the extracted face data is stored in the low-deflection angle preferred queue corresponding to the unique identifier. Subsequently, step S110 is performed.
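Steps S105 to S109 amount to maintaining, per unique identifier, a bounded queue that evicts the entry with the highest deflection angle when full. A minimal sketch; the queue capacity and the dictionary field names are assumptions, and replacing only when the new face has a lower angle is one reasonable reading of the "replace" in step S107:

```python
LOW_ANGLE_QUEUE_CAPACITY = 5  # assumed capacity; not fixed by the patent

low_angle_queues = {}  # unique identifier -> list of face-data dicts

def insert_low_angle(face):
    """Insert face data (a dict with 'id', 'angle', 'sharpness' keys)
    into the low-deflection-angle preferred queue for its unique
    identifier, creating the queue on first sight (S109) and replacing
    the highest-angle entry when the queue is full (S107)."""
    queue = low_angle_queues.setdefault(face["id"], [])  # S109: create if absent
    if len(queue) < LOW_ANGLE_QUEUE_CAPACITY:
        queue.append(face)                               # S108: room left, store
    else:
        worst = max(range(len(queue)), key=lambda i: queue[i]["angle"])
        if face["angle"] < queue[worst]["angle"]:
            queue[worst] = face                          # S107: evict worst entry
```

The effect is that each person's queue converges toward their most frontal detections as more frames arrive.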
In step S110, a timing time is preset, and it is determined whether the preset timing time has been reached; if so, the process proceeds to step S111, and if not, the process returns to step S103.
In step S111, the preset timing time is reset and timing is restarted; the piece of face data with the highest image definition is extracted from the low-deflection-angle preferred queue corresponding to each unique identifier, the extracted face data is stored in a high-quality face queue, and the high-quality face queue is uploaded to the server through network communication. Subsequently, step S119 is performed.
In step S112, the extracted face data is positioned in a high-definition preferred queue, and it is determined whether the unique identifier already exists in the high-definition preferred queue according to the unique identifier of the extracted face data, if so, the process proceeds to step S113, and if not, the process proceeds to step S116.
In step S113, it is determined whether the high definition preferred queue corresponding to the unique identifier is full, and if so, the process proceeds to step S114, and if not, the process proceeds to step S115.
In step S114, the face data with the lowest image definition in the high definition preferred queue corresponding to the unique identifier is replaced. Subsequently, step S117 is performed.
In step S115, the extracted face data is stored into a high definition preferred queue corresponding to the unique identifier. Subsequently, step S117 is performed.
In step S116, a high-definition preferred queue corresponding to the unique identifier is created according to the unique identifier, and the extracted face data is stored in the high-definition preferred queue corresponding to the unique identifier. Subsequently, step S117 is performed.
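Steps S112 to S116 mirror the low-angle path with the eviction key flipped: when a high-definition preferred queue is full, the entry with the lowest image definition is replaced. A sketch under the same assumptions (capacity and field names are illustrative, and replacing only when the new face is sharper is one reasonable reading of step S114):

```python
HIGH_DEF_QUEUE_CAPACITY = 5  # assumed; the patent does not fix a size

high_def_queues = {}  # unique identifier -> list of face-data dicts

def insert_high_def(face):
    """Store face data into the high-definition preferred queue for its
    unique identifier, creating the queue if absent (S116) and replacing
    the lowest-definition entry when the queue is full (S114)."""
    queue = high_def_queues.setdefault(face["id"], [])  # S116: create if absent
    if len(queue) < HIGH_DEF_QUEUE_CAPACITY:
        queue.append(face)                              # S115: room left, store
    else:
        worst = min(range(len(queue)), key=lambda i: queue[i]["sharpness"])
        if face["sharpness"] > queue[worst]["sharpness"]:
            queue[worst] = face                         # S114: evict blurriest entry
```

So a person seen only at large deflection angles still accumulates their sharpest available images, rather than being dropped entirely.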
In step S117, it is determined whether the preset timing time has been reached; if so, the process proceeds to step S118, and if not, the process returns to step S103.
In step S118, the preset timing time is reset and timing is restarted; the face data with the lowest deflection angle is extracted from the high-definition preferred queue corresponding to each unique identifier, the extracted face data is stored in the high-quality face queue, and the high-quality face queue is uploaded to the server through network communication. Subsequently, step S119 is performed.
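Steps S111 and S118 both flush the per-identifier queues at the end of a timing interval: one winner is picked per identifier and appended to the high-quality face queue before upload. A sketch of the two selection keys given in the text (highest image definition from the low-angle queues, lowest deflection angle from the high-definition queues); the dictionary shapes and field names are assumptions:

```python
def flush_preferred_queues(low_angle_queues, high_def_queues):
    """Assemble the high-quality face queue at the end of a timing
    interval (steps S111 and S118). Each argument maps a unique
    identifier to a list of face-data dicts with 'angle' and
    'sharpness' keys (assumed field names)."""
    high_quality = []
    for queue in low_angle_queues.values():   # S111: best sharpness wins
        if queue:
            high_quality.append(max(queue, key=lambda f: f["sharpness"]))
    for queue in high_def_queues.values():    # S118: lowest angle wins
        if queue:
            high_quality.append(min(queue, key=lambda f: f["angle"]))
    return high_quality  # uploaded to the server in the real system
```

This is what keeps the uploaded set small: however many frames were buffered, at most two candidates per person (one per queue type) leave the pre-processing unit each interval.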
In step S119, it is determined whether a preference ending instruction is received, and if not, the process returns to step S103, and if so, the present preference operation is ended.
It should be noted that steps S100 to S119 are all performed in the pre-processing unit.
In summary, the present invention provides a preferred system and method for a face detection result set. A video stream is extracted from each camera device, and front-end preferred processing is performed by a tracking detection module, a feature analysis module and a preferred processing module to obtain a high-quality face queue containing an optimal set of face data; a face uploading module then transmits the high-quality face queue to the server through network communication for subsequent face comparison processing. By applying the invention, a small amount of high-quality face data can be provided to the server for subsequent comparison, which greatly reduces the traffic required for network transmission, avoids large numbers of false comparison alarms, markedly improves the comparison effect, and reduces CPU resource consumption. The invention thus effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein are intended to be covered by the claims of the present invention.

Claims (12)

1. A preferred system of a face detection result set, applied to a pre-processing unit and a server based on network communication, wherein the pre-processing unit is connected with at least one camera device, the preferred system of the face detection result set comprising:
a tracking detection module, arranged in the pre-processing unit and used for extracting a video stream from the camera device in real time, decoding the video stream to generate frame data, performing face tracking detection processing on the generated frame data to obtain face data, classifying and grouping all face data belonging to the same person, and assigning a unique identifier to all the face data belonging to the same person;
a feature analysis module, arranged in the pre-processing unit and used for performing facial feature analysis on the face data obtained by the tracking detection module to obtain the evaluation parameters corresponding to each piece of face data, wherein the evaluation parameters comprise an eye distance threshold, a deflection angle threshold and an image definition;
a preferred processing module, arranged in the pre-processing unit and used for presetting a timing time, extracting face data having the evaluation parameters from the feature analysis module, determining whether the extracted face data satisfies a first preferred rule, positioning the extracted face data into a low-deflection-angle preferred queue if so and into a high-definition preferred queue if not, maintaining, according to a second preferred rule, a low-deflection-angle preferred queue corresponding to each of at least one unique identifier, wherein the face data in the low-deflection-angle preferred queue corresponding to each unique identifier all belong to low-deflection-angle face data, maintaining, according to a third preferred rule, a high-definition preferred queue corresponding to each of at least one unique identifier, wherein the face data in the high-definition preferred queue corresponding to each unique identifier all belong to high-definition face data, and, when the preset timing time is determined to have been reached, resetting the timing time, restarting timing, extracting the face data with the highest image definition from the low-deflection-angle preferred queue corresponding to each unique identifier and storing it into a high-quality face queue, and extracting the face data with the lowest deflection angle from the high-definition preferred queue corresponding to each unique identifier and storing it into the high-quality face queue; and
a face uploading module, arranged in the pre-processing unit and used for transmitting the high-quality face queue stored by the preferred processing module to the server through the network communication for subsequent face comparison processing.
2. The preferred system of the face detection result set of claim 1, wherein the feature analysis module is further configured to obtain a face position value and eye and mouth position values of the face data by performing facial feature analysis on the face data, and to calculate the eye distance threshold of the face data from the obtained face position value and eye and mouth position values.
3. The preferred system of the face detection result set of claim 1, wherein the first preferred rule is: the preferred processing module positions face data whose deflection angle threshold is smaller than a predetermined deflection angle and whose eye distance threshold is larger than a predetermined eye distance value as low-deflection-angle face data.
4. The preferred system of face detection result set of claim 3, wherein: the predetermined deflection angle is 15 degrees.
5. The preferred system of face detection result set of claim 3, wherein: the predetermined eye distance value is 25 pixels.
6. The preferred system of the face detection result set of claim 1, wherein the second preferred rule is: for face data positioned into the low-deflection-angle preferred queue, the preferred processing module determines, according to the unique identifier assigned to the positioned face data by the tracking detection module, whether that unique identifier already exists in the low-deflection-angle preferred queue; if not, a low-deflection-angle preferred queue corresponding to the unique identifier is created according to the unique identifier and the positioned face data is stored into it; if so, it is further determined whether the low-deflection-angle preferred queue corresponding to the unique identifier is full; if full, the face data with the highest deflection angle in that queue is replaced, and if not full, the extracted face data is stored into that queue.
7. The preferred system of the face detection result set of claim 1, wherein the third preferred rule is: for face data positioned into the high-definition preferred queue, the preferred processing module determines, according to the unique identifier assigned to the positioned face data by the tracking detection module, whether that unique identifier already exists in the high-definition preferred queue; if not, a high-definition preferred queue corresponding to the unique identifier is created according to the unique identifier and the positioned face data is stored into it; if so, it is further determined whether the high-definition preferred queue corresponding to the unique identifier is full; if full, the face data with the lowest image definition in that queue is replaced, and if not full, the extracted face data is stored into that queue.
8. A preferred method of a face detection result set, applied to a pre-processing unit and a server based on network communication, wherein the pre-processing unit is connected with at least one camera device, the preferred method of the face detection result set comprising the following steps:
1) extracting a video stream from the camera device in real time and performing decoding processing to generate frame data;
2) performing face tracking detection processing on the generated frame data to obtain face data, classifying and grouping the face data belonging to the same person, and assigning a unique identifier to all the face data belonging to the same person;
3) performing facial feature analysis on the face data to obtain the evaluation parameters corresponding to each piece of face data, wherein the evaluation parameters comprise an eye distance threshold, a deflection angle threshold and an image definition;
4) extracting a piece of face data having the evaluation parameters, and determining whether the deflection angle threshold of the extracted face data is smaller than a predetermined deflection angle; if so, proceeding to step 5), and if not, proceeding to step 10);
5) determining whether the eye distance threshold of the extracted face data is larger than a predetermined eye distance value; if so, proceeding to step 6), and if not, proceeding to step 10);
6) positioning the extracted face data into a low-deflection-angle preferred queue, and determining, according to the unique identifier of the extracted face data, whether the unique identifier already exists in the low-deflection-angle preferred queue; if so, proceeding to step 7); if not, creating a low-deflection-angle preferred queue corresponding to the unique identifier according to the unique identifier, storing the extracted face data into that queue, and then proceeding to step 8);
7) determining whether the low-deflection-angle preferred queue corresponding to the unique identifier is full; if so, replacing the face data with the highest deflection angle in that queue and then proceeding to step 8); if not, storing the extracted face data into that queue and then proceeding to step 8);
8) presetting a timing time and determining whether the preset timing time has been reached; if so, resetting the timing time, restarting timing, and proceeding to step 9); if not, returning to step 4);
9) extracting the face data with the highest image definition from the low-deflection-angle preferred queue corresponding to each unique identifier, storing the extracted face data into a high-quality face queue, uploading the high-quality face queue to the server through network communication, and then proceeding to step 14);
10) positioning the extracted face data into a high-definition preferred queue, and determining, according to the unique identifier of the extracted face data, whether the unique identifier already exists in the high-definition preferred queue; if so, proceeding to step 11); if not, creating a high-definition preferred queue corresponding to the unique identifier according to the unique identifier, storing the extracted face data into that queue, and then proceeding to step 12);
11) determining whether the high-definition preferred queue corresponding to the unique identifier is full; if so, replacing the face data with the lowest image definition in that queue and then proceeding to step 12); if not, storing the extracted face data into that queue and then proceeding to step 12);
12) determining whether the preset timing time has been reached; if so, resetting the timing time, restarting timing, and proceeding to step 13); if not, returning to step 4);
13) extracting the face data with the lowest deflection angle from the high-definition preferred queue corresponding to each unique identifier, storing the extracted face data into the high-quality face queue, uploading the high-quality face queue to the server through network communication, and then proceeding to step 14); and
14) determining whether a preference ending instruction has been received; if not, returning to step 4), and if so, ending the present preference operation.
9. The preferred method of the face detection result set according to claim 8, wherein step 3) further comprises: obtaining a face position value and eye and mouth position values of the face data by performing facial feature analysis on the face data, and calculating the eye distance threshold of the face data from the obtained face position value and eye and mouth position values.
10. The preferred method of face detection result set according to claim 8, characterized in that: the predetermined deflection angle is 15 degrees.
11. The preferred method of face detection result set according to claim 8, characterized in that: the predetermined eye distance value is 25 pixels.
12. The preferred method of the face detection result set according to claim 8, wherein steps 1) to 14) are all performed in the pre-processing unit.
CN201710047331.4A 2017-01-22 2017-01-22 Optimization system and method for face detection result set Expired - Fee Related CN106815575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710047331.4A CN106815575B (en) 2017-01-22 2017-01-22 Optimization system and method for face detection result set


Publications (2)

Publication Number Publication Date
CN106815575A CN106815575A (en) 2017-06-09
CN106815575B true CN106815575B (en) 2019-12-10

Family

ID=59111359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710047331.4A Expired - Fee Related CN106815575B (en) 2017-01-22 2017-01-22 Optimization system and method for face detection result set

Country Status (1)

Country Link
CN (1) CN106815575B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389019B (en) * 2017-08-14 2021-11-05 杭州海康威视数字技术股份有限公司 Face image selection method and device and computer equipment
CN109472278A (en) * 2017-09-08 2019-03-15 上海银晨智能识别科技有限公司 Acquisition method, device, computer-readable medium and the system of human face data
CN107770487B (en) * 2017-09-12 2020-06-16 深圳英飞拓科技股份有限公司 Feature extraction and optimization method, system and terminal equipment
CN108491822B (en) * 2018-04-02 2020-09-08 杭州高创电子科技有限公司 Face detection duplication-removing method based on limited cache of embedded equipment
CN109035246B (en) 2018-08-22 2020-08-04 浙江大华技术股份有限公司 Face image selection method and device
CN109299690B (en) * 2018-09-21 2020-12-29 浙江中正智能科技有限公司 Method capable of improving video real-time face recognition precision
CN109376645B (en) * 2018-10-18 2021-03-26 深圳英飞拓科技股份有限公司 Face image data optimization method and device and terminal equipment
CN111415528B (en) * 2019-01-07 2022-07-22 长沙智能驾驶研究院有限公司 Road safety early warning method and device, road side unit and storage medium
CN110321378A (en) * 2019-06-03 2019-10-11 梁勇 A kind of mobile monitor image identification system and method

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101546376A (en) * 2009-04-28 2009-09-30 上海银晨智能识别科技有限公司 Human biological information acquisition system, human face photo acquisition and quality testing system and method
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN104794439A (en) * 2015-04-10 2015-07-22 上海交通大学 Real-time approximate frontal face image optimizing method and system based on several cameras
CN105512599A (en) * 2014-09-26 2016-04-20 数伦计算机技术(上海)有限公司 Face identification method and face identification system
CN106327546A (en) * 2016-08-24 2017-01-11 北京旷视科技有限公司 Face detection algorithm test method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7715595B2 (en) * 2002-01-16 2010-05-11 Iritech, Inc. System and method for iris identification using stereoscopic face recognition




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191210