CN112001340A - Portrait identification method - Google Patents
- Publication number
- CN112001340A (application CN202010879451.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- pixel
- image
- camera shooting
- shooting unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
A portrait identification method comprises the following steps. A first camera unit, arranged above the crowd, captures the people queuing; the angle between the projection of its central-axis direction onto the vertical plane and the horizontal plane is smaller than 45 degrees. A second camera unit is arranged in front of the queue; the angle between the projection of its central-axis direction onto the vertical plane and the horizontal plane is larger than 45 degrees. An input image of the second camera unit is obtained, and face detection is performed on the faces in the input image. With this method, an optimal identification configuration can be applied when the camera module captures images containing several people, and the best identification object is determined by motion detection and by the proportion of the picture each face occupies. Because cameras are configured to record the queuing crowd and supply queuing-density information, the matching efficiency of the scheme is increased and its interference resistance in multi-person environments is strengthened.
Description
Technical Field
The invention relates to the field of automatic image detection, and in particular to a portrait identification method for multi-person scenes.
Background
Existing face recognition techniques, such as those of application numbers 2017111892326 and 2018116155405, can recognize several people simultaneously. When portraits are captured and recorded in real time, however, a multi-person scene easily produces the non-ideal situation in which the camera cannot capture exactly one person. When the captured image is identified, several people may be queuing, and the face of a person in a back row may be close to the face of a person in the front row, so that correct identification fails.
Disclosure of Invention
Therefore, a face recognition method for multi-person scenes is needed, to solve the problem that face recognition in such scenes is not accurate enough in the prior art.
To achieve the above object, the inventor provides a portrait identification method comprising the following steps: a first camera unit captures the people queuing, the first camera unit being arranged above the crowd, with the angle between the projection of its central-axis direction onto the vertical plane and the horizontal plane smaller than 45 degrees; a second camera unit is arranged in front of the queue, with the corresponding angle for the second camera unit larger than 45 degrees; an input image of the second camera unit is obtained, and face detection is performed on the faces in the input image;
selecting, from the detected faces, the first n faces that occupy the largest picture, where the value of n is obtained from the average queuing density derived by analysing the video images of the first camera unit and n is positively correlated with the average queuing density, and performing motion detection on the n faces to obtain motion detection results;
selecting the single face with the smallest motion detection result as the identification object;
and acquiring user identity information, and matching the user photo stored in the database under that identity with the identification object.
Specifically, the motion detection result is obtained by detecting the i-th face, selecting m feature points, and measuring how far the coordinates of those m feature points move between different frame pictures.
Specifically, the motion detection result is obtained by detecting the i-th face, selecting m feature points, and obtaining the coordinate-movement value a of the m feature points between adjacent frame pictures;
a further step sums these values into the total coordinate movement Σa over all adjacent frame pairs within a preset time.
Optionally, the method further comprises:
detecting connected domains below the several faces to obtain the body image corresponding to each face image;
and judging whether the picture occupied by the body image corresponding to the identification object is the largest of all body images; if so, verification succeeds.
Preferably, the method further comprises:
performing connected-domain detection below the several faces, obtaining the body image corresponding to each face image, and extracting depth information from the body images;
and judging whether the body image corresponding to the identification object has the nearest depth among all body images; if so, verification succeeds.
Specifically, the connected-domain detection is performed as follows: obtain the depth image below the face region and calculate its average depth z; then select K pixel points directly below the face region for expansion. The expansion sets up a set C to be expanded and places the K pixels into C. For each pixel p in C, the difference between z and the depth of each of its five neighbouring pixels (left, lower-left, below, lower-right and right) is calculated; if the difference lies within the preset range around z, that neighbour is placed into C. Once p has been compared with its five neighbours, p is deleted from C, the average depth d' of all pixels processed so far is calculated, and z is updated to d'. The calculation continues until a stop condition is reached: either the number of pixels already processed reaches MAX, or no pixel in C can be expanded further. The resulting average depth is the distance d between the body image and the device.
Specifically, the value of n is determined by selecting the average queuing density over the past week, day or month.
With this method, an optimal identification configuration can be applied when the camera module captures images containing several people, and the best identification object is determined by motion detection and by the proportion of the picture each face occupies. Because cameras are configured to record the queuing crowd and supply queuing-density information, the matching efficiency of the scheme is increased and its interference resistance in multi-person environments is strengthened.
Drawings
Fig. 1 is a flowchart of a multi-person scene image verification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a portrait recognition method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a self-service face service method according to an embodiment of the present invention;
FIG. 4 is a diagram of an apparatus for verifying a multi-user scene image according to an embodiment of the present invention;
FIG. 5 is a diagram of a portrait recognition apparatus according to an embodiment of the present invention;
fig. 6 is a diagram of a self-service face service device according to an embodiment of the present invention.
Detailed Description
To explain the technical content, structural features, objects and effects of the technical solutions in detail, a detailed description is given below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to fig. 1, an image verification method for a multi-person scene comprises the following steps.
s100, an input image of an image acquisition module is acquired, and face detection operation is carried out on a face in the input image; the face detection of the face in the input image refers to the preliminary analysis of the input image of the image acquisition module to identify a block which may be a face image of the person in the image. In this context, the face detection means detecting a region that may be a face in an image, and may also be assisted by a rectangular frame to identify the face region, which is generally less in calculation amount and fast in speed. The face identification is to cut out the rectangular frame and compare with the face library after being identified as the face by the face detection, to determine the identity, which has a large amount of calculation
S101, the first n faces occupying the largest picture are selected from the detected faces, and motion detection is performed on the n faces to obtain motion detection results. Selecting the first n faces occupying the largest picture means sorting the image blocks recognized as faces by area; the area can be measured in pixels, and the comparison is made according to the size of the pixel blocks. The single face with the smallest motion detection result is then selected as the identification object. Motion detection here means judging whether the position of a block in the image changes; the criterion may be whether a block recognized as a face has been translated, deformed, or scaled in size between adjacent frames.
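The area sorting in S101 can be sketched in a few lines. The `(x, y, w, h)` box format and the function name below are illustrative assumptions, not from the patent:

```python
def top_n_faces(boxes, n):
    # Sort the image blocks recognized as faces by pixel area
    # (width x height) and keep the n occupying the largest picture.
    return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)[:n]
```

For example, with three detected boxes and n = 2, the two largest rectangles are returned in descending area order.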
Specifically, the motion detection result is obtained by detecting the i-th face, selecting m feature points, and measuring the coordinate movement of those m feature points between different frame pictures. If the sum of the coordinate-movement values of the m feature points across frames is greater than zero, or greater than a preset threshold, motion is considered to have occurred. Judging face movement from several designed feature points in the image yields a more accurate motion result.
In some more specific embodiments, the motion detection result is specifically the sum of the coordinate-movement values of the m feature points of the i-th face between adjacent frame pictures, the m feature points being selected by detecting the i-th face. That is, if the coordinate movement of the m-th feature point of the i-th face between adjacent frames is α_{i,m}, then the motion detection result of the i-th face is a_i = Σ_m α_{i,m}.
In a further embodiment, to reflect the motion state over a relatively long period more accurately, the motion detection result is further designed with a preset time, such as 2-4 seconds, which may cover 50-100 frames; the motion detection result of the i-th face within the preset time is then the total coordinate movement Σa_i summed over those 50-100 adjacent frame pairs.
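The per-face score a_i = Σ_m α_{i,m}, accumulated over adjacent frame pairs within the preset time, might be computed as follows. Euclidean distance is an assumption here, since the patent does not fix the coordinate-movement metric:

```python
import math

def motion_score(tracks):
    # tracks[m][t] is the (x, y) coordinate of feature point m of one
    # face in frame t. The score sums the coordinate movement of every
    # feature point over every adjacent frame pair (the text's sum over
    # adjacent frame pictures within the preset time).
    total = 0.0
    for points in tracks:
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            total += math.hypot(x1 - x0, y1 - y0)
    return total
```

The face whose score is smallest over the preset time would then be taken as the identification object.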
In some further embodiments shown in fig. 1, the method further includes step S102: performing connected-domain detection below the several faces to obtain the body image corresponding to each face image. This step establishes the correspondence between the face regions and the body regions. Of course, the body region below a face region may not be identifiable, which indicates that the body corresponding to that face image may be occluded; the method then continues with
S103, judging whether the picture occupied by the body image corresponding to the identification object is the largest among all body images; if so, verification succeeds. If no corresponding body image is found below a certain face region, the picture occupied by that body image is taken as 0. Steps S102 and S103 verify the identification object selected in steps S100 and S101: if the body area of the identification object is determined to be the largest among all body images, it is reliably verified that the selected object is most likely the face of the person standing at the very front, because that person's body most probably occupies the largest picture and thus covers the widest camera angle. These verification steps greatly improve the recognition accuracy of the face image and the practicality of the scheme of the invention.
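The S103 check reduces to an area comparison. The mapping from face to body area below is an illustrative assumption; per the text, an occluded body is scored as 0:

```python
def verify_largest_body(body_areas, target_face):
    # body_areas maps each detected face to the pixel area of its body
    # image; a face whose body was not found below it counts as 0.
    target = body_areas.get(target_face, 0)
    return all(target >= area for area in body_areas.values())
```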
In some other further embodiments, to verify the identification object selected in steps S100 and S101, a further step may be designed: S105, performing connected-domain detection below the several faces, obtaining the body image corresponding to each face image, and extracting depth information from the body images;
S106, judging whether the body image corresponding to the identification object has the nearest depth among all body images; if so, verification succeeds. Depth information here means the distance between an object in front of the camera module and the module itself; it is obtained by conventional means and prior art, for example by configuring the camera as an RGBD depth camera using time-of-flight or structured light, and is not described further here. By recognizing the depth of the body-image regions in steps S105 and S106, the method verifies whether the identification object is the face of the person standing at the very front, avoiding interference with the recognition result from cases such as a highly attentive person at the back of the queue craning their head forward to guide the operation. These verification steps greatly improve the recognition accuracy of the face image and the practicality of the scheme of the invention.
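The S106 depth check is symmetric to the area check, comparing body depths instead of areas (names illustrative; a face without a body image is given infinite depth so it cannot pass):

```python
def verify_nearest_body(body_depths, target_face):
    # body_depths maps each face to the depth of its body image;
    # verification succeeds when the identification object's body is
    # the nearest (smallest depth) of all body images.
    return body_depths.get(target_face, float("inf")) <= min(body_depths.values())
```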
In some specific embodiments, the depth of the connected domain can be calculated as follows: obtain the depth image below the face region and calculate its average depth z; select K pixel points below the face region for expansion. The expansion sets up a set C to be expanded and places the K pixels into C. For each pixel p in C, the difference between z and the depth of each of its five neighbouring pixels (left, lower-left, below, lower-right and right) is calculated; if the difference lies within the preset range around z, that neighbour is placed into C. Once p has been compared with its five neighbours, p is deleted from C, the average depth d' of all pixels processed so far is calculated, and z is updated to d'. The calculation continues until a stop condition is reached: the number of pixels already processed reaches MAX, or no pixel in C can be expanded. The resulting average depth is the distance d between the body image and the device.
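The expansion procedure is essentially breadth-first region growing on the depth image. Below is a sketch under stated assumptions: `tol` stands in for the unspecified preset range, `max_pixels` for MAX, and the garbled five-neighbour list is read as left, lower-left, below, lower-right and right (growth proceeding downward from the face):

```python
from collections import deque

def grow_body_region(depth, seeds, tol=0.15, max_pixels=5000):
    # depth: 2D list of per-pixel depths; seeds: distinct (row, col)
    # pixels just below the face region (the K pixel points). Returns
    # the running average depth (the distance d to the device) and the
    # grown set C of body pixels.
    h, w = len(depth), len(depth[0])
    queue = deque(seeds)
    grown = set(seeds)
    total = sum(depth[r][c] for r, c in seeds)
    mean = total / len(grown)
    while queue and len(grown) < max_pixels:
        r, c = queue.popleft()
        for dr, dc in ((0, -1), (0, 1), (1, -1), (1, 0), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in grown:
                if abs(depth[nr][nc] - mean) < tol:  # within preset range of z
                    grown.add((nr, nc))
                    queue.append((nr, nc))
                    total += depth[nr][nc]
                    mean = total / len(grown)  # update z to d'
    return mean, grown
```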
To distribute computing power better, and considering that too large a value of n increases the number of faces to process and the probability of false detection, while too small a value may fail to cover the face that should actually be recognized and so raises the probability of missed detection, the value of n must be chosen carefully; it is determined from the average queuing spacing. Specifically, the camera captures the queuing crowd, intelligently counts the number of people in line, and then calculates the queue length from the captured images; the average spacing is the queue length divided by the number of people. In this example, the value of n is set positively correlated with the reciprocal of the average spacing (which can be regarded as the queuing density); the positive-correlation coefficient can be chosen as required, and n can be rounded up. For example, in one embodiment the number of people in the queue is X = 10, the queue length is Y = 7.5 m, and the positive-correlation coefficient is k = 3, giving n = k × X / Y = 4. Under that queuing density, selecting the first 4 faces occupying the largest picture meets the calculation requirement while saving computing power. With this scheme, the cameras can record the queuing crowd in real time and compute its density, so the number of faces to run detection on is determined well. In other embodiments, besides the real-time adjustment scheme, the average queuing density over a past period can be obtained from the average queue length and the average number of people over that period.
The preferred value of n can be calculated from the average queuing density of the last week, day or month, which likewise achieves the technical effect of optimizing the selection of n.
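The n-value calculation is a one-liner once the density is known. In the sketch below, the floor of 1 is an added assumption so that at least one face is always checked:

```python
import math

def faces_to_check(num_people, queue_length_m, k=3.0):
    # n is positively correlated with the queuing density (people per
    # metre, the reciprocal of the average spacing) and rounded up.
    if num_people <= 0 or queue_length_m <= 0:
        return 1
    density = num_people / queue_length_m
    return max(1, math.ceil(k * density))
```

With the embodiment's figures (X = 10 people, Y = 7.5 m, k = 3) this gives n = 4.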
In some embodiments shown in fig. 2, a portrait identification method is further performed. It includes step S200: a first camera unit captures the people queuing, the first camera unit being arranged above the crowd, with the angle between the projection of its central-axis direction onto the vertical plane and the horizontal plane less than 45°, i.e. the horizontal field of view of the first camera unit is wider than its vertical field of view, which is convenient for capturing the queuing crowd. S202: a second camera unit is arranged in front of the queue, shooting from slightly below head height, with the corresponding angle greater than 45°, i.e. its horizontal field of view is narrower than its vertical field of view, which facilitates face recognition. The first camera unit captures the queuing crowd, intelligently counts the number of people in line, calculates the queue length from the captured images, and obtains the average spacing as queue length divided by number of people; the value of n is set positively correlated with the reciprocal of the average spacing (which can be regarded as the queuing density). After step S202, step S101 selects the first n faces occupying the largest picture and performs motion detection on them to obtain motion detection results, thus connecting this method with the image verification method for multi-person scenes described above.
With this scheme, counting the queuing crowd and detecting faces can be carried out simultaneously with a division of labour; arranging dedicated cameras improves identification accuracy and ultimately the quality with which the scheme is executed.
In the embodiment shown in fig. 3, the information acquisition process is further designed; the scheme is a self-service face recognition service method. It further includes step S1: acquiring a user ID and detecting whether the user ID exists on the owned server; if so, acquiring the user profile associated with the user ID, otherwise creating a new profile. S10: detecting whether the user profile includes a user photo; if not, execution starts from step S100 until verification of the identification object is completed, and the identification object is stored as the user photo. If the user profile does include a photo, step S100 acquires an input image of the image acquisition module and performs face detection on the faces in it; a preferential step S1001 then matches the first n faces occupying the largest picture against the user photo, and if an i-th face matches successfully, that face can be set as the identification object. With this scheme, the user ID is matched against the existing database as soon as it is acquired: when no user photo exists one can be captured quickly, and when a photo exists the identification object can be found by preferential matching, improving the flexibility of the scheme.
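The fig. 3 flow can be sketched as below; `profiles`, `match`, and the return convention are illustrative stand-ins for the server, user data module, and matcher of the text, not names from the patent:

```python
def self_service_identify(profiles, user_id, candidate_faces, match):
    # profiles: user_id -> stored user photo, or None for a fresh
    # profile (step S1 creates one when the ID is unknown).
    photo = profiles.setdefault(user_id, None)
    if photo is None:
        # Step S10: no photo yet -> fall back to the full S100.. path
        # and store the verified identification object as the photo.
        return None
    # Step S1001: preferentially match the n largest faces against the
    # stored photo; a successful match becomes the identification object.
    for face in candidate_faces:
        if match(photo, face):
            return face
    return None
```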
The present disclosure also introduces an image verification apparatus for multi-person scenes, shown in fig. 4, which can be used to run the image recognition method described above. The apparatus includes an image acquisition module 400, a face recognition module 402, and a motion detection module 404. The image acquisition module 400 is configured to input images; the face recognition module 402 is configured to perform face detection on the faces in the input image; the motion detection module 404 is configured to select the first n faces occupying the largest picture and perform motion detection on them to obtain motion detection results; and the face recognition module 402 is further configured to select the single face with the smallest motion detection result as the identification object.
Specifically, the motion detection module 404 is configured to detect the i-th face, select m feature points, and measure the coordinate movement of those m feature points between different frame pictures.
Specifically, the motion detection result is obtained by detecting the i-th face, selecting m feature points, obtaining the coordinate-movement values a of the m feature points between adjacent frame pictures, and summing them into the total coordinate movement Σa over all adjacent frame pairs within the preset time.
Further, the system also comprises a connected-domain detection module 406 and a judgment-and-verification module 408. The connected-domain detection module is further configured to perform connected-domain detection below the several faces to obtain the body image corresponding to each face image; the judgment-and-verification module is further configured to judge whether the picture occupied by the body image corresponding to the identification object is the largest of all body images, and if so, verification succeeds.
In other further embodiments, the system further includes a connected domain detection module 406 and a judgment and verification module 408, where the connected domain detection module is further configured to perform connected domain detection below multiple faces, obtain a body image corresponding to the face image, and extract depth information from the body image; the judgment and verification module is further used for judging whether the picture occupied by the body image corresponding to the identification object is the picture with the nearest depth in all the body images, and if so, the verification is successful.
Specifically, the connected-domain detection module is further configured to perform connected-domain detection below the several faces as follows: obtain the depth image below the face region and calculate its average depth z; select K pixel points below the face region for expansion. The expansion sets up a set C to be expanded and places the K pixels into C. For each pixel p in C, the difference between z and the depth of each of its five neighbouring pixels (left, lower-left, below, lower-right and right) is calculated; if the difference lies within the preset range around z, that neighbour is placed into C. Once p has been compared with its five neighbours, p is deleted from C, the average depth d' of all pixels processed so far is calculated, and z is updated to d'. The calculation continues until a stop condition is reached: the number of pixels already processed reaches MAX, or no pixel in C can be expanded. The resulting average depth is the distance d between the body image and the device.
Specifically, the value of n is determined from the average queuing spacing. The camera captures the queuing crowd, intelligently counts the number of people in line, and then calculates the queue length from the captured images; the average spacing is the queue length divided by the number of people. In this example, the value of n is set positively correlated with the reciprocal of the average spacing (which can be regarded as the queuing density); the positive-correlation coefficient can be chosen as required, and n can be rounded up. For example, in one embodiment the number of people in the queue is X = 10, the queue length is Y = 7.5 m, and the positive-correlation coefficient is k = 3, giving n = k × X / Y = 4. Under that queuing density, selecting the first 4 faces occupying the largest picture meets the calculation requirement while saving computing power. With this scheme, the cameras can record the queuing crowd in real time and compute its density, so the number of faces to run detection on is determined well. In other embodiments, besides the real-time adjustment scheme, the average queuing density over a past period can be obtained from the average queue length and the average number of people over that period. The preferred value of n can be calculated from the average queuing density of the last week, day or month, which likewise optimizes the selection of n.
This apparatus design avoids interference with the face recognition system when several people are present, improves the interference resistance of the scheme of the invention, and further optimizes the accuracy of face recognition in multi-person situations.
In the embodiment shown in fig. 5, a portrait identification apparatus for executing the portrait identification method is shown. It includes an image acquisition module 400, an image analysis module 401, a face recognition module 402, a motion detection module 404, and a user data module 405. The image acquisition module includes a first camera unit and a second camera unit: the first camera unit captures the people queuing and is arranged above the crowd, with the angle between the projection of its central-axis direction onto the vertical plane and the horizontal plane less than 45°; the second camera unit is arranged in front of the queue, with the corresponding angle more than 45°. The face recognition module performs face detection on the faces in the input image of the second camera unit. The image analysis module analyses the video images of the first camera unit to obtain the average queuing density and derives the value of n from the principle that n is positively correlated with that density. The motion detection module selects the first n faces occupying the largest picture and performs motion detection on them to obtain motion detection results; the face recognition module further selects the single face with the smallest motion detection result as the identification object. The user data module acquires user identity information and matches the user photo stored in the database under that identity with the identification object.
Through this design, the apparatus divides the work of capturing the required images between the first and second camera units, and by pre-selecting the n largest faces it improves the accuracy and anti-interference capability of face recognition when multiple people are present.
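The selection logic described above — n derived from the average queuing density, the n largest faces kept, and the stillest of them chosen — can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the density-to-n scaling factor `scale` and the input structures are assumptions.

```python
def pick_recognition_target(face_boxes, motion_scores, avg_queue_density, scale=2.0):
    """face_boxes: list of (x, y, w, h) detections from the second camera unit;
    motion_scores: parallel list of motion detection results (lower = stiller);
    scale: hypothetical factor mapping queue density to n."""
    # n is positively correlated with the average queuing density.
    n = max(1, round(scale * avg_queue_density))
    # Keep the n faces occupying the largest portion of the picture.
    largest = sorted(range(len(face_boxes)),
                     key=lambda i: face_boxes[i][2] * face_boxes[i][3],
                     reverse=True)[:n]
    # The single face with the smallest motion result is the recognition object.
    return min(largest, key=lambda i: motion_scores[i])
```

A busy queue (high density) thus widens the candidate pool, while a sparse queue considers only the one or two largest faces.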
Specifically, the motion detection module is configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of those m feature points across different frame pictures.
Further, the motion detection result is obtained by detecting the i-th face, selecting m feature points, taking the value a of the coordinate movement of the m feature points of the i-th face between adjacent frame pictures, and summing a over all adjacent frame pairs within a preset time to obtain the total value Σa.
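A minimal sketch of the motion detection result Σa described above, assuming the per-frame "coordinate movement" of a feature point is its Euclidean displacement (the text does not fix the metric, so this is an interpretation):

```python
import numpy as np

def motion_total(tracks):
    """tracks: array of shape (F, m, 2) holding the (x, y) coordinates of the
    m feature points of one face over F consecutive frames within the preset
    time. Returns Σa: displacement summed over the m points and over every
    adjacent frame pair."""
    tracks = np.asarray(tracks, dtype=float)
    # For each adjacent frame pair, the per-point displacement magnitude.
    step = np.linalg.norm(np.diff(tracks, axis=0), axis=2)  # shape (F-1, m)
    return float(step.sum())
```

The face whose Σa is smallest over the preset time is the one standing stillest, and is chosen as the recognition object.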
In a further embodiment, the apparatus further includes a connected domain detection module 406 and a judgment and verification module 408. The connected domain detection module performs connected domain detection on the regions below the multiple faces to obtain the body image corresponding to each face image. The judgment and verification module judges whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images; if so, verification succeeds.
In particular, the apparatus also comprises a connected domain detection module 406 and a judgment and verification module 408:
the connected domain detection module is further used for detecting connected domains below the multiple faces, acquiring the body images corresponding to the face images, and extracting depth information from the body images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, verification succeeds.
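The depth-based check can be illustrated with a short sketch; the mapping from face index to average body depth is a hypothetical input, not a structure named by the patent:

```python
def verify_by_depth(body_depths, target):
    """body_depths: {face index: average depth of the body region found below
    that face}; verification succeeds only when the recognition object's body
    is the nearest (smallest depth) of all the body images."""
    return min(body_depths, key=body_depths.get) == target
```

This guards against recognizing a large but distant face: the person actually standing at the front of the queue should have the nearest body.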
In a further embodiment, the connected component detection module is further configured to perform connected component detection under multiple faces, and specifically, is configured to obtain a depth image of a portion under a face region, calculate an average depth z, and select K pixels under the face region to expand, where the expansion method includes setting a set C to be expanded, placing K pixels in the set C, calculating a difference between depths of five pixels, i.e., left, lower, right, and right, of each pixel p in the set C, and if the difference is smaller than z ± preset range, placing the pixel in the set C, deleting p from the set C after the calculation of the pixel p corresponding to the five pixels is completed, and simultaneously calculating an average depth d 'of all pixels in the set C, and updating d to d', so calculating until a stop condition is reached. The stopping condition is that the number of pixels already calculated reaches MAX, or no pixel in C satisfies the condition and can be expanded, and the obtained D is the distance D between the body image and the device.
Specifically, the value of n is determined using the average queuing density over the past week, day, or month.
Other embodiments, shown in fig. 6, provide a self-service face recognition service device for operating the self-service face recognition service method shown in fig. 3. It includes a user information detection module 407, an image acquisition module 400, a face recognition module 402, a motion detection module 404, and a user data module 405. The user information detection module acquires a user ID, detects whether the user ID exists in the device's own server, acquires the user profile associated with that ID, detects whether the profile includes a user photo, and enables the image acquisition module when no user photo is found. The image acquisition module inputs images, and the face recognition module performs face detection on the faces in the input images. The motion detection module selects, from the multiple faces, the n faces occupying the largest portion of the picture and performs motion detection on them to obtain a motion detection result. The face recognition module then selects the single face with the smallest motion detection result as the recognition object. The user data module acquires user identity information and matches the user photo in the database belonging to that identity information against the recognition object. Through this design, the scheme can rapidly match the user ID with the user information via the user data module; by pre-selecting the n largest faces and finding the face with the smallest motion result through motion detection, it improves the accuracy and anti-interference capability of face recognition when multiple people are present.
Further, the motion detection module is specifically configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of the m feature points of the i-th face in different frame pictures.
The motion detection result is specifically obtained by detecting the i-th face, selecting m feature points, taking the value a of the coordinate movement of the m feature points of the i-th face between adjacent frame pictures, and summing a over all adjacent frame pairs within a preset time to obtain the total value Σa.
In particular, the device also comprises a connected domain detection module 406 and a judgment and verification module 408:
the connected domain detection module 406 is further used for performing connected domain detection on the regions below the multiple faces to obtain the body images corresponding to the face images;
the judgment and verification module 408 is further used for judging whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images, and if so, verification succeeds.
In particular, the device also comprises a connected domain detection module and a judgment and verification module:
the connected domain detection module is further used for detecting connected domains below the multiple faces, acquiring the body images corresponding to the face images, and extracting depth information from the body images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, verification succeeds.
Specifically, the connected domain detection module is further configured to perform connected domain detection below the multiple faces. It obtains the depth image of the portion below a face region, calculates its average depth z, and selects K pixels directly below the face region for expansion. The expansion method is as follows: a set C to be expanded is created and the K pixels are placed in it; for each pixel p in C, the difference between z and the depth of each of its five neighboring pixels (left, lower-left, below, lower-right, and right) is calculated, and any neighbor whose difference falls within the preset range is placed into C; after the five neighbors of p have been calculated, p is deleted from C; at the same time, the average depth z' of all pixels in C is calculated and z is updated to z'. The calculation proceeds in this way until a stop condition is reached: either the number of pixels already calculated reaches MAX, or no neighbor of any pixel in C satisfies the condition for expansion. The average depth finally obtained is taken as the distance D between the body image and the device.
Further, the value of n is determined based on the average queue spacing.
Specifically, the user information detection module 407 is configured to obtain a user ID by reading medical insurance card information.
Further, the face recognition module 402 is further configured to perform matching against the user photo when the user profile includes one; if a successfully matched face exists, that face can be set as the recognition object. With this scheme, the user ID, once acquired, can be matched against the existing database: when no user photo exists, identification can still proceed quickly, and when a user photo does exist, the recognition object can be found by preferential matching, which improves the flexibility of the scheme.
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Changes and modifications to the embodiments described herein based on the innovative concepts of the present invention, or equivalent structures or equivalent processes derived from the content of this specification and the attached drawings, applied directly or indirectly to other related technical fields, all fall within the scope of protection of the present invention.
Claims (7)
1. A portrait recognition method, characterized by comprising the following steps: a first camera shooting unit shoots the people in a queue, the first camera shooting unit being arranged above the people, with the included angle between the projection of the central axis direction of the first camera shooting unit in a vertical plane and the horizontal plane being smaller than 45 degrees; a second camera shooting unit is arranged in front of the queue, with the included angle between the projection of the central axis direction of the second camera shooting unit in the vertical plane and the horizontal plane being larger than 45 degrees; an input image of the second camera shooting unit is obtained, and a face detection operation is carried out on the faces in the input image;
selecting, from the multiple faces, the first n faces occupying the largest portion of the picture, wherein the value of n is derived from the average queuing density obtained by analyzing the video image of the first camera shooting unit, n being positively correlated with the average queuing density, and performing motion detection on the n faces to obtain a motion detection result;
selecting the single face with the smallest motion detection result as the recognition object;
and acquiring user identity information, and matching the user photo in the database belonging to the user identity information against the recognition object.
2. The portrait recognition method according to claim 1, wherein the motion detection result is obtained by detecting the i-th face, selecting m feature points, and taking the coordinate movement values of the m feature points of the i-th face in different frame pictures.
3. The portrait recognition method according to claim 2, wherein the motion detection result is specifically obtained by detecting the i-th face, selecting m feature points, and taking the value a of the coordinate movement of the m feature points of the i-th face between adjacent frame pictures,
and a step is also performed of summing a over all adjacent frame pictures within a preset time to obtain the total value Σa.
4. The portrait recognition method of claim 1, further comprising the steps of:
detecting connected domains below the multiple faces to obtain body images corresponding to the face images;
and judging whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images, and if so, determining that verification succeeds.
5. The portrait recognition method of claim 1, further comprising the steps of:
performing connected domain detection below the multiple faces, acquiring body images corresponding to the face images, and extracting depth information from the body images;
and judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, determining that verification succeeds.
6. The portrait recognition method according to claim 4 or 5, wherein the method for detecting the connected domain comprises: obtaining the depth image of the portion below the face region, calculating its average depth z, and selecting K pixel points directly below the face region for expansion, wherein the expansion method comprises: creating a set C to be expanded and placing the K pixels in it; for each pixel p in C, calculating the difference between z and the depth of each of its five neighboring pixels, namely the left, lower-left, lower, lower-right, and right pixels, and placing any neighbor whose difference falls within the preset range into C; deleting p from C after the five neighbors of p have been calculated; at the same time calculating the average depth z' of all pixels in C and updating z to z'; and calculating in this way until a stop condition is reached, the stop condition being that the number of pixels already calculated reaches MAX, or that no neighbor of any pixel in C satisfies the condition for expansion, the average depth finally obtained being the distance D between the body image and the device.
7. The portrait recognition method of claim 1, wherein the value of n is determined using the average queuing density over the past week, day, or month.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010879451.2A CN112001340B (en) | 2020-08-27 | 2020-08-27 | Portrait recognition method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112001340A true CN112001340A (en) | 2020-11-27 |
CN112001340B CN112001340B (en) | 2024-03-29 |
Family
ID=73471046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010879451.2A Active CN112001340B (en) | 2020-08-27 | 2020-08-27 | Portrait recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001340B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447597A (en) * | 2018-12-27 | 2019-03-08 | 深圳市沃特沃德股份有限公司 | More people carry out the method, apparatus and face identification system of attendance jointly |
CN109934733A (en) * | 2019-03-20 | 2019-06-25 | 合肥浮点信息科技有限公司 | One kind being based on face recognition technology intelligent canteen queue management system |
CN110032966A (en) * | 2019-04-10 | 2019-07-19 | 湖南华杰智通电子科技有限公司 | Human body proximity test method, intelligent Service method and device for intelligent Service |
CN110543867A (en) * | 2019-09-09 | 2019-12-06 | 北京航空航天大学 | crowd density estimation system and method under condition of multiple cameras |
CN111209812A (en) * | 2019-12-27 | 2020-05-29 | 深圳市优必选科技股份有限公司 | Target face picture extraction method and device and terminal equipment |
CN112001334A (en) * | 2020-08-27 | 2020-11-27 | 闽江学院 | Portrait recognition device |
Also Published As
Publication number | Publication date |
---|---|
CN112001340B (en) | 2024-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10990191B2 (en) | Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data | |
CN112001334A (en) | Portrait recognition device | |
US8731249B2 (en) | Face recognition using face tracker classifier data | |
KR101971866B1 (en) | Method and apparatus for detecting object in moving image and storage medium storing program thereof | |
JP6494253B2 (en) | Object detection apparatus, object detection method, image recognition apparatus, and computer program | |
JP6754642B2 (en) | Biodetector | |
CN109858375B (en) | Living body face detection method, terminal and computer readable storage medium | |
US20080013837A1 (en) | Image Comparison | |
JP2009211311A (en) | Image processing apparatus and method | |
JP2004301607A (en) | Moving object detection device, moving object detection method, and moving object detection program | |
CN111985424A (en) | Image verification method under multi-person scene | |
CN111967422A (en) | Self-service face recognition service method | |
JP2009301242A (en) | Head candidate extraction method, head candidate extraction device, head candidate extraction program and recording medium recording the program | |
KR20140134549A (en) | Apparatus and Method for extracting peak image in continuously photographed image | |
CN112001340A (en) | Portrait identification method | |
CN111967425A (en) | Self-service face recognition service device | |
CN111985425A (en) | Image verification device under multi-person scene | |
CN115019364A (en) | Identity authentication method and device based on face recognition, electronic equipment and medium | |
CN114743264A (en) | Shooting behavior detection method, device, equipment and storage medium | |
CN113469135A (en) | Method and device for determining object identity information, storage medium and electronic device | |
JP4812743B2 (en) | Face recognition device, face recognition method, face recognition program, and recording medium recording the program | |
US11335123B2 (en) | Live facial recognition system and method | |
JP2002170096A (en) | Passing object count device and count method | |
CN111046788A (en) | Method, device and system for detecting staying personnel | |
OA20233A (en) | Device and method for authenticating an individual. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||