CN115116119A - Face recognition system based on digital image processing technology - Google Patents


Info

Publication number
CN115116119A
CN115116119A
Authority
CN
China
Prior art keywords
face
image
face image
unit
preset
Prior art date
Legal status
Pending
Application number
CN202210851999.5A
Other languages
Chinese (zh)
Inventor
张凯元 (Zhang Kaiyuan)
张凯斐 (Zhang Kaifei)
Current Assignee
Shaanxi Junkai Electronic Technology Co ltd
Original Assignee
Shaanxi Junkai Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shaanxi Junkai Electronic Technology Co ltd
Priority application: CN202210851999.5A
Publication: CN115116119A
Legal status: Pending


Classifications

    • G06V 40/168: Feature extraction; Face representation
    • G06N 3/084: Learning methods; Backpropagation, e.g. using gradient descent
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/764: Image or video recognition using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/173: Face re-identification, e.g. recognising unknown faces across different face tracks
    • G06V 40/50: Maintenance of biometric data or enrolment thereof
    • G07C 9/37: Individual registration on entry or exit, with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Abstract

The invention provides a face recognition system based on digital image processing technology, comprising: a control area module, which divides the target range according to a preset attribute threshold and determines the corresponding control areas; a face data acquisition module, which collects face images through image sensors preset in the control area and establishes a face image sample library; an intelligent environment control module, which detects the face image sample library, extracts face feature point coordinates through feature training, and determines sets of images of the same face; a comparison server module, which outputs the optimal image from each same-face image set, processes it with a comparison server to generate a similarity score, and transmits the score to a preset security recognition center; and a security identification module, which collects the similarity scores and judges the recognition result against a preset threshold to form a corresponding management scheme. The invention uses digital image processing technology to achieve a high-precision face recognition process and effectively safeguards the target range.

Description

Face recognition system based on digital image processing technology
Technical Field
The invention relates to the technical fields of digital image processing and artificial intelligence, and in particular to a face recognition system based on digital image processing technology.
Background
In daily life, banks, offices, confidential facilities and schools require strict personnel verification procedures. At present, however, these places mostly rely on traditional methods such as manual inspection and certificate checking. Traditional verification depends on human visual comparison and memory: it can only make rough judgments, cannot identify people accurately, and cannot react promptly to emergencies. It wastes considerable manpower and material resources, the verification process is slow and inefficient, its accuracy is insufficient, and misjudgments occur easily. Banks, offices, confidential facilities and schools therefore need a new, intelligent face recognition system focused on the target objects.
Disclosure of Invention
The invention provides a face recognition system based on digital image processing technology to solve the problems described in the background art.
The invention provides a face recognition system based on a digital image processing technology, which comprises:
a control area module: for dividing the target range according to a preset attribute threshold and determining the corresponding control areas; wherein the attribute threshold is the threshold on the display attributes of the control area;
a face data acquisition module: for collecting face images through image sensors preset in the control area and establishing a face image sample library based on the face images;
an intelligent environment control module: for detecting the face image sample library, extracting the feature point coordinates of the face images to be recognized through feature training, and determining the same-face image sets;
a comparison server module: for outputting the optimal image from each same-face image set, processing it with the comparison server to generate a similarity score, and transmitting the similarity score to a preset security recognition center;
a security identification module: for collecting the similarity scores and judging the recognition result against a preset threshold to form a corresponding management scheme.
As an embodiment of the present technical solution, the control area module includes:
an area setting unit: for deploying an integrated control system on preset terminal equipment, forming an integrated range within the control area, and building a data sharing platform;
a parameter acquisition unit: for collecting the environmental parameters within the integrated range, uploading them to the data sharing platform, and identifying their display attributes to generate a parameter catalog; wherein
the display attributes at least include texture attributes, object attributes and detail attributes within the integrated range;
a control area unit: for adapting the parameter catalog according to the preset attribute threshold and determining the corresponding control areas through segmentation; wherein
the control areas at least include an intersection control area and a separation control area.
As an embodiment of the present technical solution, the face data acquisition module includes:
a face acquisition unit: for acquiring real-time images containing face features in the control area through image sensors preset in the control area; wherein
the image sensors at least include one or more of a face recognition camera, a face recognition gate and a PDA data collector;
an image pre-screening unit: for pre-screening the real-time images based on image preprocessing technology, eliminating images with insufficient face features, and determining the face images to be processed; wherein
the preprocessing technology at least includes geometric correction, image filtering, image enhancement and image edge detection methods;
an image sample library unit: for transmitting the face images to be processed to the data sharing platform and integrating them into a face image sample library.
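The pre-screening step can be sketched with simple image-quality heuristics. This is a minimal sketch, not the patent's specified pipeline: the blur measure (variance of a discrete Laplacian) and the minimum-size check are illustrative assumptions standing in for the geometric-correction, filtering, enhancement and edge-detection methods named above.

```python
import numpy as np

def prescreen(gray: np.ndarray, min_side: int = 80, blur_thresh: float = 50.0) -> bool:
    """Decide whether a grayscale frame is worth keeping for face processing.

    Rejects frames that are too small to contain usable face features, or
    too blurry (low high-frequency energy, measured with a 4-neighbour
    discrete Laplacian).  Both thresholds are illustrative.
    """
    if min(gray.shape) < min_side:  # frame too small to be useful
        return False
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var()) >= blur_thresh  # sharp images have high variance

flat = np.full((100, 100), 128.0)                             # featureless frame
noisy = np.random.default_rng(0).uniform(0, 255, (100, 100))  # high-detail frame
print(prescreen(flat), prescreen(noisy))  # prints: False True
```

Frames rejected here never reach the sample library, which matches the unit's goal of avoiding unnecessary storage of unusable images.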
As an embodiment of the present technical solution, the intelligent environment control module includes:
a face detection unit: for inputting the face image sample library into a preset detection network to obtain the positions of the face candidate boxes in the face images to be processed;
a feature training unit: for performing convolutional neural network learning on the face candidate box positions to determine the face feature point coordinates;
a same-face unit: for processing the face images to be processed with a classification method based on the face feature point coordinates to form the same-face image sets; wherein
the classification method at least includes maximum likelihood classification and minimum distance classification;
the feature training unit includes:
a first convolutional network subunit: for cropping the face candidate box image based on the candidate box position to generate a candidate box screenshot;
a second convolutional network subunit: for integrating the candidate box screenshots, building a face recognition network based on the face image sample library, adjusting the parameters of the network, and constructing a face recognition model;
a feature point extraction subunit: for inputting the face images to be processed into the face recognition model to obtain the face feature point coordinates.
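The same-face grouping step above can be illustrated with a minimum-distance classifier over the five feature-point coordinates. This is a minimal sketch under stated assumptions: the greedy exemplar matching and the 10-pixel distance threshold are illustrative, not the patent's actual classifier.

```python
import numpy as np

def group_same_face(landmarks: np.ndarray, dist_thresh: float = 10.0) -> list:
    """Greedily group face images into same-face sets.

    `landmarks` has shape (n_images, 5, 2): left eye, right eye, nose,
    left and right mouth corners per image.  An image joins the group of
    the nearest exemplar when the mean per-landmark distance is below the
    threshold; otherwise it starts a new group.
    """
    flat = landmarks.reshape(len(landmarks), -1).astype(float)
    groups, exemplars = [], []
    for i, v in enumerate(flat):
        d = [np.linalg.norm((v - e).reshape(5, 2), axis=1).mean() for e in exemplars]
        if d and min(d) < dist_thresh:
            groups[int(np.argmin(d))].append(i)  # nearest existing face set
        else:
            groups.append([i])                   # first sighting of this face
            exemplars.append(v)
    return groups

base = np.array([[10., 10.], [30., 10.], [20., 20.], [12., 30.], [28., 30.]])
print(group_same_face(np.stack([base, base + 1.0, base + 100.0])))  # [[0, 1], [2]]
```

A production system would compare appearance features rather than raw landmark positions, but the minimum-distance decision rule named in the claim is the same.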
As an embodiment of the present technical solution, the comparison server module includes:
a feature point acquisition unit: for acquiring all to-be-processed face images of the same face and extracting the standard face feature point coordinates;
a calculation unit: for acquiring the pixels of a face image to be processed, projecting each pixel into a preset parameter domain, and obtaining the screened pixels;
(The screening formula appears in the original only as an embedded image and is not reproduced here.) Its terms are: the pixel at row i, column j of the screened face image to be processed; the pixel value of that pixel; the preset parameter domain of the face image and its inverse; the pixel value at row i, column j after transposition; and the ideal range values of the rows and columns of the preset parameter domain.
an accumulation peak unit: for accumulating the pixels of the face image to be processed over a preset sampling period and calculating the accumulated peak value;
(The accumulation formula appears in the original only as an embedded image and is not reproduced here.) Its terms are: the accumulated peak value; the sampling period of the pixels in row i of the face image; and the sampling period of the pixels in column j.
a standard face feature point coordinate unit: for analyzing the points corresponding to the accumulated peak values, mapping them into the face image parameter space, and obtaining the standard face feature point coordinates;
a constant matrix unit: for normalizing the standard face feature point coordinates and calculating their constant matrix;
a quality value result unit: for performing parameter estimation on the constant matrices of all to-be-processed images of the same face using preset quality evaluation indexes, and determining the quality value of each image;
a quality evaluation unit: for ranking the quality values, determining the ranking result, and outputting the optimal face image based on the ranking;
a feature code unit: for converting the optimal face image into a feature code in the central server and transmitting the feature code to a preset comparison server;
a similarity calculation unit: for using the comparison server to compare the feature code with the pre-entered face model, determining a similarity score, and transmitting the score to the security recognition center.
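The similarity calculation can be illustrated with cosine similarity between feature codes. The patent does not name a metric, so this is an assumption; the score is rescaled to [0, 1] so that it can be judged against the preset threshold of the security recognition center.

```python
import numpy as np

def similarity_score(probe: np.ndarray, enrolled: np.ndarray) -> float:
    """Cosine similarity between a probe feature code and the pre-entered
    face model's feature code, rescaled from [-1, 1] to [0, 1] so it can
    be compared against a preset threshold."""
    cos = float(probe @ enrolled) / (np.linalg.norm(probe) * np.linalg.norm(enrolled))
    return (cos + 1.0) / 2.0

v = np.array([1.0, 0.0, 1.0])
print(similarity_score(v, v))   # identical codes -> 1.0
print(similarity_score(v, -v))  # opposite codes  -> 0.0
```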
As an embodiment of the present technical solution, the face feature point coordinates at least include the left eye, right eye, nose, left mouth corner and right mouth corner coordinates.
As an embodiment of the present technical solution, the optimal face image is an image with a frontal pose that covers all of the face feature point coordinates.
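The optimal-image condition above can be checked mechanically. This is a minimal sketch: the five-landmark layout comes from the patent, while the levelness test as a proxy for a frontal pose and its 5-pixel tolerance are illustrative assumptions.

```python
import numpy as np

def is_optimal(landmarks: np.ndarray, h: int, w: int, tilt_tol: float = 5.0) -> bool:
    """Check the optimal-image conditions for one face image.

    `landmarks` is (5, 2) in (x, y) order: left eye, right eye, nose, left
    and right mouth corners.  All five points must lie inside the h x w
    image, and the eye pair and mouth-corner pair must be roughly level,
    a crude proxy for a frontal pose (the tolerance is an assumption).
    """
    inside = bool(np.all((landmarks >= 0) & (landmarks < [w, h])))
    eyes_level = abs(landmarks[0, 1] - landmarks[1, 1]) <= tilt_tol
    mouth_level = abs(landmarks[3, 1] - landmarks[4, 1]) <= tilt_tol
    return inside and bool(eyes_level) and bool(mouth_level)

frontal = np.array([[30., 40.], [70., 40.], [50., 60.], [35., 80.], [65., 80.]])
print(is_optimal(frontal, 100, 100))  # prints: True
```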
As an embodiment of the present technical solution, the security identification module includes:
a collection unit: for receiving the similarity scores, displaying the data, and generating a report;
a judgment and recognition unit: for comparing the report with a preset threshold, determining whether the optimal face image presents a security anomaly, and obtaining the recognition result; wherein
when the similarity score is smaller than the preset threshold, the recognition result is recognition failure;
when the similarity score is larger than the preset threshold, the recognition result is recognition success;
a result management unit: for triggering the access control action based on the recognition result and implementing the related management scheme.
As an embodiment of the present technical solution, the result management unit includes:
a recognition success management subunit: when the recognition result is success, granting access, controlling the corresponding access gate to open, back-propagating the optimal face image, restoring it into the same-face image set, and expanding the capacity of the face image sample library;
a recognition failure management subunit: when the recognition result is failure, granting no access, controlling the corresponding access gate to remain locked, back-propagating the optimal face image, restoring it into the same-face image set, tracking and locking the person's real-time position, and transmitting the same-face image set to the security terminal.
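The threshold decision and the two management branches can be written as plain control flow. The action names below are illustrative labels for the behaviours described above, and scores exactly equal to the threshold are treated as success, a choice the patent leaves open since it only specifies strictly smaller and strictly larger.

```python
def manage(similarity: float, threshold: float = 0.8) -> dict:
    """Map a similarity score to the recognition result and the management
    actions of the result management unit (action names are illustrative)."""
    if similarity >= threshold:
        return {"result": "success",
                "actions": ["grant_access", "open_gate",
                            "restore_image_to_same_face_set",
                            "expand_sample_library"]}
    return {"result": "failure",
            "actions": ["deny_access", "lock_gate",
                        "restore_image_to_same_face_set",
                        "track_position", "notify_security_terminal"]}

print(manage(0.93)["result"], manage(0.42)["result"])  # prints: success failure
```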
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block flow diagram of a face recognition system based on digital image processing technology according to an embodiment of the present invention;
FIG. 2 is a flow chart of a face recognition system module based on digital image processing technology according to an embodiment of the present invention;
FIG. 3 is a flowchart of a face recognition system based on digital image processing technology according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
As shown in FIG. 1, the present technical solution provides a face recognition system based on digital image processing technology, including:
a control area module: for dividing the target range according to a preset attribute threshold and determining the corresponding control areas; wherein the attribute threshold is the threshold on the display attributes of the control area;
a face data acquisition module: for collecting face images through image sensors preset in the control area and establishing a face image sample library based on the face images;
an intelligent environment control module: for detecting the face image sample library, extracting the feature point coordinates of the face images to be recognized through feature training, and determining the same-face image sets;
a comparison server module: for outputting the optimal image from each same-face image set, processing it with the comparison server to generate a similarity score, and transmitting the similarity score to a preset security recognition center;
a security identification module: for collecting the similarity scores and judging the recognition result against a preset threshold to form a corresponding management scheme.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the control area module divides the target range, including entrance passages of offices, confidential facilities and schools, according to a preset attribute threshold and determines the corresponding control areas; separation control and intersection control guarantee effective image acquisition. The face data acquisition module collects face images through image sensors preset in the control area, reflects the dynamic changes of the control area through real-time monitoring, and establishes a face image sample library from the collected images, providing a reference for identifying subsequent target objects and improving the system's response sensitivity. The intelligent environment control module detects the face image sample library, extracts the feature point coordinates of the face images to be recognized through feature training, and determines the same-face image sets; by locking onto the key coordinates it streamlines the recognition procedure, shortens the reaction time, integrates all collected images, and reduces errors. The comparison server module outputs the optimal image from each same-face image set, improving the effective capture of target faces within the target range, processes it with the comparison server to generate a similarity score, and transmits the score to the preset security recognition center; digital image processing of the real-time face image against the pre-entered face model ensures reliable recognition. The security identification module collects the similarity scores and judges the recognition result against a preset threshold to form the corresponding management scheme; the system not only strengthens the recognition control of target objects but also averts risks in time through differentiated responses to successful and failed recognition, promoting on-site safety.
As a specific example of the technical solution, at a face-scanning clock-in point in an office, the system can determine whether a person is an employee by recognizing the face, ensuring the security of the office area.
Example 2:
In one embodiment, as shown in FIG. 2, the control area module includes:
an area setting unit: for deploying an integrated control system on preset terminal equipment, forming an integrated range within the control area, and building a data sharing platform;
a parameter acquisition unit: for collecting the environmental parameters within the integrated range, uploading them to the data sharing platform, and identifying their display attributes to generate a parameter catalog; wherein
the display attributes at least include texture attributes, object attributes and detail attributes within the integrated range;
a control area unit: for adapting the parameter catalog according to the preset attribute threshold and determining the corresponding control areas through segmentation; wherein
the control areas at least include an intersection control area and a separation control area.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the area setting unit deploys an integrated control system on preset terminal equipment, forms an integrated control area within the target range, including entrance passages of offices, confidential facilities and schools, and builds a data sharing platform; establishing the integrated control area and the sharing platform allows the site environment to be planned intelligently and reduces omission of site environment information. The parameter acquisition unit collects the environmental parameters of the integrated control area, uploads them to the data sharing platform, and identifies their display attributes, including textures, objects and details, to generate a parameter catalog that facilitates background management. The control area unit adapts the parameter catalog according to the preset attribute threshold, consolidates it, and determines through segmentation the control areas, including intersection control areas and separation control areas, ensuring coverage of the target range. As an embodiment of this technical solution, the face recognition system can also be applied to a confidential facility and deployed at a gate checkpoint, improving the facility's security through the acquisition of face data.
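The threshold-based segmentation described above can be sketched as a two-way split of an attribute map. This is a minimal sketch: the grid layout of the attribute values, the single threshold, and the two region names are simplifying assumptions, since the patent does not specify the data representation.

```python
import numpy as np

def segment_control_area(attr: np.ndarray, thresh: float) -> dict:
    """Split a display-attribute map of the monitored site into control
    regions by thresholding.  Cells above the preset attribute threshold
    go to the intersection control region, the rest to the separation
    control region (a simplified two-way version of the segmentation)."""
    mask = attr > thresh
    return {"intersection": np.argwhere(mask),
            "separation": np.argwhere(~mask)}

attr = np.array([[0.2, 0.9],
                 [0.7, 0.1]])
regions = segment_control_area(attr, 0.5)
print(len(regions["intersection"]), len(regions["separation"]))  # prints: 2 2
```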
Example 3:
As shown in FIG. 3, in one embodiment, the face data acquisition module includes:
a face acquisition unit: for acquiring real-time images containing face features in the control area through image sensors preset in the control area; wherein
the image sensors at least include one or more of a face recognition camera, a face recognition gate and a PDA data collector;
an image pre-screening unit: for pre-screening the real-time images based on image preprocessing technology, eliminating images with insufficient face features, and determining the face images to be processed; wherein
the preprocessing technology at least includes geometric correction, image filtering, image enhancement and image edge detection methods;
an image sample library unit: for transmitting the face images to be processed to the data sharing platform and integrating them into a face image sample library.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the face acquisition unit acquires real-time images containing face features in the control area through image sensors preset there, including one or more of a face recognition camera, a face recognition gate and a PDA data collector, ensuring timely transmission of information. The image pre-screening unit pre-screens the real-time images using image processing techniques including geometric correction, image filtering, image enhancement and image edge detection, eliminates images with insufficient face features, reduces unnecessary storage operations in the system, determines the face images to be processed, and improves the performance of the face recognition system. The image sample library unit transmits the face images to be processed to the data sharing platform and integrates them into the face image sample library. As a specific embodiment, this technical solution can also be applied at entrances of schools or residential communities to safeguard personnel security management.
Example 4:
In one embodiment, the intelligent environment control module includes:
a face detection unit: for inputting the face image sample library into a preset detection network to obtain the positions of the face candidate boxes in the face images to be processed;
a feature training unit: for performing convolutional neural network learning on the face candidate box positions to determine the face feature point coordinates;
a same-face unit: for processing the face images to be processed with a classification method based on the face feature point coordinates to form the same-face image sets; wherein
the classification method at least includes maximum likelihood classification and minimum distance classification;
the feature training unit includes:
a first convolutional network subunit: for cropping the face candidate box image based on the candidate box position to generate a candidate box screenshot;
a second convolutional network subunit: for integrating the candidate box screenshots, building a face recognition network based on the face image sample library, adjusting the parameters of the network, and constructing a face recognition model;
a feature point extraction subunit: for inputting the face images to be processed into the face recognition model to obtain the face feature point coordinates.
The working principle and the beneficial effects of the technical scheme are as follows:
The face detection unit is used for inputting the face image sample library into a preset detection network to obtain the position of the face candidate frame in the face image to be processed. The feature training unit is used for performing convolutional neural network learning on the position of the face candidate frame to determine the face feature point coordinates; the face feature point coordinates at least comprise a left eye coordinate, a right eye coordinate, a nose coordinate, a left mouth corner coordinate, and a right mouth corner coordinate. The same face unit is used for processing the face images to be processed by applying a classification method based on the face feature point coordinates, identifying the surveillance video frame by frame and determining the frame images, so that refined security monitoring is realized in units of frames, abnormal conditions are effectively locked, and a same face image set is formed; the classification method at least comprises a maximum likelihood classification method and a minimum distance classification method. In this technical solution, the first convolutional network subunit is used for cropping the face candidate frame image based on the position of the face candidate frame, generating a candidate frame screenshot, and receiving and storing the frame images to form an image database. The second convolutional network subunit is used for integrating the candidate frame screenshots, building a face recognition network based on the face image sample library, adjusting the parameters of the face recognition network, and building a face recognition model, realizing refined security monitoring. The feature point extraction subunit inputs the face image to be processed into the face recognition model to obtain the face feature point coordinates, improving the accuracy of the data.
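Of the two classification methods named for the same face unit, minimum distance classification is the simpler to sketch: each feature vector is assigned to the identity whose mean vector is nearest. The toy feature vectors and class means below are hypothetical stand-ins, not the outputs of the patent's trained network.

```python
import numpy as np

def minimum_distance_classify(features, class_means):
    """Assign each feature vector to the class whose mean is nearest (Euclidean distance)."""
    labels = []
    for f in features:
        dists = [np.linalg.norm(np.asarray(f) - np.asarray(m)) for m in class_means]
        labels.append(int(np.argmin(dists)))
    return labels

def group_same_face(features, class_means):
    """Collect the image indices assigned to each face identity (the 'same face image set')."""
    groups = {i: [] for i in range(len(class_means))}
    for idx, label in enumerate(minimum_distance_classify(features, class_means)):
        groups[label].append(idx)
    return groups
```

Images whose feature vectors land nearest the same class mean end up grouped as one face, which is the set the later quality-evaluation stage ranks.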
Example 5:
in one embodiment, the comparison server module includes:
a feature point acquisition unit: used for acquiring all face images to be processed of the same face and extracting standard face feature point coordinates;
a calculation unit: used for obtaining the pixel points of the face image to be processed, projecting each pixel point of the face image to be processed into a preset parameter domain, and obtaining the screened pixel points;
[projection formula: image not reproduced in the source text]
wherein, p'(i, j) denotes the pixel point in row i, column j of the screened face image to be processed; g(i, j) denotes the pixel value of the pixel point in row i, column j; T denotes the preset parameter domain of the face image, and T⁻¹ denotes its inverse; g'(i, j) denotes the pixel value of the displaced pixel point in row i, column j; R_row denotes the ideal range value of the rows of the preset parameter domain T of the face image, and R_col denotes the ideal range value of its columns;
accumulation peak unit: used for accumulating the pixel points of the face image to be processed over a preset sampling period and calculating the accumulated peak value;
[accumulated peak formula: image not reproduced in the source text]
wherein, V denotes the accumulated peak value; t(i) denotes the sampling period of the pixel points of the i-th row of the face image, and t(j) denotes the sampling period of the pixel points of the j-th column;
standard face feature point coordinate unit: used for analyzing the points corresponding to the accumulated peak value, mapping them to the face image parameter space, and acquiring the standard face feature point coordinates;
constant matrix unit: used for normalizing the standard face feature point coordinates and calculating the constant matrix of the standard face feature point coordinates;
quality value result unit: used for performing parameter estimation on the constant matrices of all face images to be processed of the same face through preset quality evaluation indexes, and determining the quality value results of all face images to be processed of the same face;
a quality evaluation unit: used for sorting the quality value results, determining a ranking result, and outputting the optimal face image based on the ranking result;
a feature code unit: used for converting the optimal face image into a feature code in the central server and transmitting the feature code to a preset comparison server;
a similarity calculation unit: used for calculating and comparing, in the comparison server, the feature code with a pre-entered face model, determining a similarity score, and transmitting the similarity score to the security recognition center.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the feature point acquisition unit is used for acquiring all face images to be processed of the same face and extracting standard face feature point coordinates. The calculation unit is used for calculating a constant matrix based on the standard face feature point coordinates and determining the quality value results of all face images to be processed of the same face. The quality evaluation unit is used for sorting the quality value results, determining a ranking result, and outputting the optimal face image based on the ranking result. The feature code unit is used for converting the optimal face image into a feature code in the central server and transmitting it to the preset comparison server, where the similarity calculation unit compares it with the pre-entered face model, determines a similarity score, and transmits the score to the security recognition center. This improves the data quality of the face images, yields clearer image resolution, and improves the fault tolerance rate.
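The ranking performed by the quality evaluation unit amounts to scoring every image of the same face and sorting. The patent's constant-matrix quality index is not fully specified, so the frontality heuristic below (nose tip centred between the eyes, per the five feature points) is an assumed stand-in, and all names are illustrative.

```python
def frontality_score(landmarks):
    """Heuristic quality score from the five feature points; higher means closer to frontal.
    landmarks maps 'left_eye', 'right_eye', 'nose', 'left_mouth', 'right_mouth' to (x, y)."""
    eye_mid_x = (landmarks['left_eye'][0] + landmarks['right_eye'][0]) / 2.0
    # A frontal face has the nose tip roughly centred between the two eyes.
    return -abs(landmarks['nose'][0] - eye_mid_x)

def best_face_image(candidates):
    """candidates: list of (image_id, landmarks) for one face; return the top-ranked id."""
    ranked = sorted(candidates, key=lambda c: frontality_score(c[1]), reverse=True)
    return ranked[0][0]
```

The top-ranked image is the one later converted into a feature code, matching the requirement that the optimal image be in a frontal pose covering all five coordinates.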
Example 6:
in one embodiment, the face feature point coordinates include at least a left eye coordinate, a right eye coordinate, a nose coordinate, a left mouth corner coordinate, and a right mouth corner coordinate.
Example 7:
in one embodiment, the optimal face image is an image in a frontal pose that covers all of the face feature point coordinates.
Example 8:
in one embodiment, the security recognition module includes:
a collection unit: used for receiving the similarity score, displaying the data, and generating a report;
a judgment and recognition unit: used for comparing the report with a preset threshold value, determining whether the optimal face image presents a security abnormality, and acquiring a recognition result; wherein,
when the similarity score is smaller than the preset threshold value, the recognition result is recognition failure;
when the similarity score is larger than the preset threshold value, the recognition result is recognition success;
a result management unit: used for responding with the access control action based on the recognition result and implementing the related management scheme.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the security recognition module comprises: the collection unit, used for receiving the similarity score, displaying the data, and generating a report; the judgment and recognition unit, used for comparing the report with a preset threshold value, determining whether the optimal face image presents a security abnormality, and acquiring a recognition result, where the recognition result is recognition failure when the similarity score is smaller than the preset threshold value and recognition success when it is larger; and the result management unit, used for responding with the access control action based on the recognition result and implementing the related management scheme. Through the security recognition module, the related management scheme is implemented on the basis of face management and recognition.
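The judgment above reduces to a single threshold comparison on the similarity score. A minimal sketch; the default threshold of 0.8 and the report field names are illustrative values, not taken from the patent.

```python
def judge_recognition(similarity_score: float, threshold: float = 0.8) -> dict:
    """Compare the similarity score against the preset threshold and build a report entry.
    Scores strictly above the threshold succeed; the patent leaves the equal case unspecified,
    so it is treated as failure here."""
    success = similarity_score > threshold
    return {
        'score': similarity_score,
        'threshold': threshold,
        'result': 'success' if success else 'failure',
    }
```

The resulting report dict is what a collection unit could display and pass on to the result management stage.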
Example 9:
in one embodiment, the result management unit includes:
a recognition success management subunit: used for, when the recognition result is recognition success, granting authority, controlling the corresponding access control to open, back-propagating the optimal face image, restoring it into the same face image set, and supplementing the capacity of the face image sample library;
a recognition failure management subunit: used for, when the recognition result is recognition failure, granting no authority, controlling the corresponding access control to implement a forbidding action, back-propagating the optimal face image, restoring it into the same face image set, tracking and locking the real-time position, and transmitting the same face image set to the security terminal.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical solution, the recognition success management subunit: when the recognition result is recognition success, grants authority, controls the corresponding access control to open, back-propagates the optimal face image, restores it into the same face image set, and supplements the capacity of the face image sample library. The recognition failure management subunit: when the recognition result is recognition failure, grants no authority, controls the corresponding access control to implement a forbidding action, back-propagates the optimal face image, restores it into the same face image set, and tracks, locks, and transmits the same face image set to the security terminal. By correctly recognizing faces, the security recognition and security management of users are improved.
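The two subunits above can be sketched as one handler that, in both branches, returns the optimal image to the same-face set, then either opens the door and grows the sample library or locks the door and forwards the set to security. `ResultManager` and its attribute names are hypothetical, chosen for illustration.

```python
class ResultManager:
    """Respond to the recognition result: open or forbid the access control and
    restore the optimal face image into the same-face image set."""

    def __init__(self):
        self.door_open = False
        self.same_face_set = []
        self.sample_library = []
        self.security_alerts = []

    def handle(self, result: str, best_image):
        # In both branches the optimal image is propagated back into the set.
        self.same_face_set.append(best_image)
        if result == 'success':
            self.door_open = True
            self.sample_library.append(best_image)   # supplement the sample library
        else:
            self.door_open = False
            self.security_alerts.append(best_image)  # forward to the security terminal
```

One instance would persist across recognitions, so the sample library and alert queue accumulate over time.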
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A face recognition system based on digital image processing technology, comprising:
a control area module: used for segmenting the control area through a preset attribute threshold value and determining a corresponding target range; wherein the attribute threshold represents a threshold on the display attributes of the control area;
a face data acquisition module: used for collecting face images through an image sensor preset in the target range and establishing a face image sample library based on the face images;
an intelligent environmental control module: used for detecting the face image sample library, extracting the face feature point coordinates of the face images to be recognized through feature training, and determining a same face image set;
a comparison server module: used for outputting the optimal image of the same face image set, processing it with the comparison server to generate a similarity score, and transmitting the similarity score to a preset security recognition center;
a security recognition module: used for collecting the similarity scores and judging the recognition result through a preset threshold value to form a corresponding management scheme.
2. The system of claim 1, wherein the control area module comprises:
an area setting unit: used for setting an integrated control system in preset terminal equipment, forming an integration range in the control area, and building a data sharing platform;
a parameter acquisition unit: used for acquiring environmental parameters within the integration range, uploading the environmental parameters to the data sharing platform, and identifying the display attributes on the data sharing platform to generate a parameter catalog; wherein,
the display attributes at least comprise texture attributes, object attributes, and detail attributes within the integration range;
a control area unit: used for adaptively segmenting the parameter catalog according to a preset attribute threshold value and determining the corresponding target range; wherein,
the control areas at least comprise an intersection control area and a separation control area.
3. The system of claim 1, wherein the face data acquisition module comprises:
a face acquisition unit: used for acquiring, through an image sensor preset in the control area, a real-time image containing face features in the control area; wherein,
the image sensor at least comprises one or more of a face recognition camera, a face recognition gate, and a PDA data collector;
an image pre-screening unit: used for pre-screening the real-time image based on a preprocessing technology, eliminating images whose face features are insufficient, and determining the face image to be processed; wherein,
the preprocessing technology at least comprises geometric correction, image filtering, image enhancement, and image edge detection methods;
an image sample library unit: used for transmitting the face image to be processed to the data sharing platform and integrating it into the face image sample library.
4. The system for recognizing human face based on digital image processing technology as claimed in claim 1, wherein said intelligent environmental control module comprises:
a face detection unit: used for inputting the face image sample library into a preset detection network to obtain the position of the face candidate frame in the face image to be processed;
a feature training unit: used for performing convolutional neural network learning on the position of the face candidate frame to determine the face feature point coordinates;
the same face unit: used for processing the face images to be processed by applying a classification method based on the face feature point coordinates to form a same face image set; wherein,
the classification method at least comprises a maximum likelihood classification method and a minimum distance classification method;
the feature training unit includes:
a first convolutional network subunit: used for cropping the face candidate frame image based on the position of the face candidate frame to generate a candidate frame screenshot;
a second convolutional network subunit: used for integrating the candidate frame screenshots, building a face recognition network based on the face image sample library, adjusting the parameters of the face recognition network, and building a face recognition model;
a feature point extraction subunit: used for inputting the face image to be processed into the face recognition model to obtain the face feature point coordinates.
5. The system of claim 1, wherein the comparison server module comprises:
a feature point acquisition unit: used for acquiring all face images to be processed of the same face and extracting standard face feature point coordinates;
a calculation unit: used for obtaining the pixel points of the face image to be processed, projecting each pixel point of the face image to be processed into a preset parameter domain, and obtaining the screened pixel points;
[projection formula: image not reproduced in the source text]
wherein, p'(i, j) denotes the pixel point in row i, column j of the screened face image to be processed; g(i, j) denotes the pixel value of the pixel point in row i, column j; T denotes the preset parameter domain of the face image, and T⁻¹ denotes its inverse; g'(i, j) denotes the pixel value of the displaced pixel point in row i, column j; R_row denotes the ideal range value of the rows of the preset parameter domain T of the face image, and R_col denotes the ideal range value of its columns;
accumulation peak unit: used for accumulating the pixel points of the face image to be processed over a preset sampling period and calculating the accumulated peak value;
[accumulated peak formula: image not reproduced in the source text]
wherein, V denotes the accumulated peak value; t(i) denotes the sampling period of the pixel points of the i-th row of the face image, and t(j) denotes the sampling period of the pixel points of the j-th column;
standard face feature point coordinate unit: used for analyzing the points corresponding to the accumulated peak value, mapping them to the face image parameter space, and acquiring the standard face feature point coordinates;
constant matrix unit: used for normalizing the standard face feature point coordinates and calculating the constant matrix of the standard face feature point coordinates;
quality value result unit: used for performing parameter estimation on the constant matrices of all face images to be processed of the same face through preset quality evaluation indexes, and determining the quality value results of all face images to be processed of the same face;
a quality evaluation unit: used for sorting the quality value results, determining a ranking result, and outputting the optimal face image based on the ranking result;
a feature code unit: used for converting the optimal face image into a feature code in the central server and transmitting the feature code to a preset comparison server;
a similarity calculation unit: used for calculating and comparing, in the comparison server, the feature code with a pre-entered face model, determining a similarity score, and transmitting the similarity score to the security recognition center.
6. The system of claim 4, wherein the coordinates of the characteristic points of the face comprise at least left eye coordinates, right eye coordinates, nose coordinates, left mouth corner coordinates, and right mouth corner coordinates.
7. The system of claim 5, wherein the optimal face image is an image in a frontal pose that covers all of the face feature point coordinates.
8. The face recognition system based on digital image processing technology as claimed in claim 1, wherein the security recognition module comprises:
a collection unit: used for receiving the similarity score, displaying the data, and generating a report;
a judgment and recognition unit: used for comparing the report with a preset threshold value, determining whether the optimal face image presents a security abnormality, and acquiring a recognition result; wherein,
when the similarity score is smaller than the preset threshold value, the recognition result is recognition failure;
when the similarity score is larger than the preset threshold value, the recognition result is recognition success;
a result management unit: used for responding with the access control action based on the recognition result and implementing the related management scheme.
9. The system of claim 8, wherein the result management unit comprises:
a recognition success management subunit: used for, when the recognition result is recognition success, granting authority, controlling the corresponding access control to open, back-propagating the optimal face image, restoring it into the same face image set, and supplementing the capacity of the face image sample library;
a recognition failure management subunit: used for, when the recognition result is recognition failure, granting no authority, controlling the corresponding access control to implement a forbidding action, back-propagating the optimal face image, restoring it into the same face image set, tracking and locking the real-time position, and transmitting the same face image set to the security terminal.
CN202210851999.5A 2022-07-20 2022-07-20 Face recognition system based on digital image processing technology Pending CN115116119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210851999.5A CN115116119A (en) 2022-07-20 2022-07-20 Face recognition system based on digital image processing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210851999.5A CN115116119A (en) 2022-07-20 2022-07-20 Face recognition system based on digital image processing technology

Publications (1)

Publication Number Publication Date
CN115116119A true CN115116119A (en) 2022-09-27

Family

ID=83333789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210851999.5A Pending CN115116119A (en) 2022-07-20 2022-07-20 Face recognition system based on digital image processing technology

Country Status (1)

Country Link
CN (1) CN115116119A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410261A (en) * 2022-09-28 2022-11-29 范孝徐 Face recognition heterogeneous data association analysis system
CN115410261B (en) * 2022-09-28 2023-12-15 范孝徐 Face recognition heterogeneous data association analysis system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination