CN112001340B - Portrait recognition method - Google Patents

Portrait recognition method

Info

Publication number
CN112001340B
CN112001340B (application CN202010879451.2A)
Authority
CN
China
Prior art keywords
face
image
faces
motion detection
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010879451.2A
Other languages
Chinese (zh)
Other versions
CN112001340A (en)
Inventor
汪涛
陈智炳
叶文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Medical Union Health Information Technology Co ltd
Minjiang University
Original Assignee
Fujian Medical Union Health Information Technology Co ltd
Minjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Medical Union Health Information Technology Co ltd, Minjiang University filed Critical Fujian Medical Union Health Information Technology Co ltd
Priority to CN202010879451.2A priority Critical patent/CN112001340B/en
Publication of CN112001340A publication Critical patent/CN112001340A/en
Application granted granted Critical
Publication of CN112001340B publication Critical patent/CN112001340B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/167Detection; Localisation; Normalisation using comparisons between temporally consecutive images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A portrait recognition method comprises the following steps: a first camera unit photographs the number of people in a queue, the first camera unit being arranged above the crowd with the included angle between the projection of its central-axis direction in a vertical plane and the horizontal plane smaller than 45°; a second camera unit is arranged in front of the queue with the included angle between the projection of its central-axis direction in the vertical plane and the horizontal plane larger than 45°; an input image of the second camera unit is acquired, and face detection is performed on the faces in the input image. With this method, the camera module can be optimally configured for recognition when photographing multiple people, and the best recognition object is determined through motion detection and the proportion of the picture each face occupies. Queuing-density information is obtained from the camera that photographs the crowd, which improves the matching efficiency of the scheme and strengthens its interference resistance in multi-person environments.

Description

Portrait recognition method
Technical Field
The invention relates to the field of automatic image detection, and in particular to a portrait recognition method for multi-person scenes.
Background
Existing face recognition techniques, such as those of application No. 2017111892326 and application No. 2018116155405, can recognize multiple people simultaneously. When images are captured in real time in a multi-person scene, however, the camera can rarely obtain an image containing only one person. While several people are queuing, the face of a person in a back row may appear close to the face of the person at the front, so that the image acquired by the camera cannot be recognized correctly.
Disclosure of Invention
Therefore, it is necessary to provide a portrait recognition method for multi-person scenes, to solve the problem that face recognition in such scenes is insufficiently accurate in the prior art.
In order to achieve the above object, the inventors provide a portrait recognition method comprising the following steps: a first camera unit photographs the number of people in a queue, the first camera unit being arranged above the crowd with the included angle between the projection of its central-axis direction in a vertical plane and the horizontal plane smaller than 45°; a second camera unit is arranged in front of the queue with the included angle between the projection of its central-axis direction in the vertical plane and the horizontal plane larger than 45°; an input image of the second camera unit is acquired, and a face detection operation is performed on the faces in the input image;
selecting the first n faces occupying the largest picture among the detected faces, wherein the value of n is obtained by analysing the video shot by the first camera unit to derive the average queuing density, n being positively correlated with the average queuing density; and performing motion detection on the n faces to obtain motion detection results;
selecting a single face with the minimum motion detection result as a recognition object;
and acquiring user identity information, and matching the user photo obtained from the database associated with the user identity information against the recognition object.
Specifically, the motion detection result is obtained by detecting the ith face, selecting m feature points, and measuring how the coordinates of the m feature points of the ith face move between different frames.
Specifically, the motion detection result is the value a obtained by detecting the ith face, selecting m feature points, and summing the coordinate movements of the m feature points of the ith face between adjacent frames;
the step of summing the coordinate movements over all adjacent frame pictures within a preset time to obtain the total Σa is also performed.
Optionally, the method further comprises:
detecting connected domains below the plurality of faces to obtain the body image corresponding to each face image;
and judging whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images, and if so, verification succeeds.
Preferably, the method further comprises:
performing connected-domain detection below the plurality of faces, obtaining the body image corresponding to each face image, and extracting depth information from the body images;
and judging whether the body image corresponding to the recognition object has the nearest depth of all the body images, and if so, verification succeeds.
In particular,
the connected-domain detection comprises: obtaining a depth image of the part below a face area and calculating an average depth d; selecting K pixel points directly below the face area for expansion; setting a set C to be expanded and placing the K pixels into C; calculating, for each pixel p in C, the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and placing a pixel into C if its difference is within a preset range; deleting p from C after its five neighbouring pixels have been calculated; simultaneously calculating the average depth d' of all pixels in C and updating d to d'; and repeating the calculation until a stopping condition is reached. The stopping condition is that the number of calculated pixels reaches MAX, or that no pixel in C meets the condition for expansion; the resulting d is taken as the distance D between the body image and the device.
Specifically, the value of n is determined by selecting an average queuing density for the past week, day, or month.
With this method, the camera module can be optimally configured for recognition when photographing multiple people, and the best recognition object is determined through motion detection and the proportion of the picture each face occupies. Queuing-density information is obtained from the camera that photographs the crowd, which improves the matching efficiency of the scheme and strengthens its interference resistance in multi-person environments.
Drawings
FIG. 1 is a flowchart of a method for verifying a multi-person scene image according to an embodiment of the present invention;
FIG. 2 is a flowchart of a portrait identification method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a self-service face service method according to an embodiment of the present invention;
FIG. 4 is a diagram of a multi-person scene image verification device according to an embodiment of the invention;
FIG. 5 is a diagram of a portrait identification apparatus according to an embodiment of the present invention;
FIG. 6 is a diagram of a self-service face service device according to an embodiment of the present invention.
Detailed Description
In order to describe the technical content, structural features, objects and effects of the technical solution in detail, the following description is given with reference to specific embodiments and the accompanying drawings.
Referring to FIG. 1, an image verification method in a multi-person scene includes the following steps:
s100, acquiring an input image of an image acquisition module, and performing face detection operation on a face in the input image; the step of face detection on the face in the input image refers to that the input image of the image acquisition module is subjected to preliminary analysis, and the block of the image, which is possibly a face image, is identified. In this context, face detection refers to detecting a region that may be a face in an image, and may be further assisted with a rectangular frame to identify the face region, which generally has a smaller calculation amount and a faster speed. Face recognition is to cut a rectangular frame and compare the rectangular frame with a face library to determine identity after the face is detected and recognized as a face, so that the calculated amount is large
S101: select the first n faces occupying the largest picture among the detected faces and perform motion detection on them to obtain motion detection results. Selecting the n faces with the largest frames means ranking the image blocks identified as faces by area, where area can be measured in pixels, so the comparison reduces to comparing pixel-block sizes; a sketch of this ranking is given below. The single face with the smallest motion detection result is then selected as the recognition object. Motion detection here means judging whether a block has changed position in the image; the criterion may be whether a block recognized as a face has translated, deformed, or changed scale between adjacent frames.
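As a concrete illustration, the area ranking can be done as in the following Python sketch; the (x, y, w, h) box format and the function name are assumptions for illustration, not the patent's implementation.

    # Illustrative sketch only: rank detected face rectangles by pixel area
    # and keep the n largest.
    def top_n_faces(face_boxes, n):
        """face_boxes: list of (x, y, w, h) rectangles from face detection."""
        return sorted(face_boxes, key=lambda b: b[2] * b[3], reverse=True)[:n]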
Specifically, the motion detection result is obtained by detecting the ith face, selecting m feature points, and measuring the movement of the coordinates of the m feature points of the ith face between different frames. If the summed coordinate movement of the m feature points across frames is greater than zero, or greater than a preset threshold, motion is considered to have occurred. Judging face movement from several feature points in the image yields a more accurate motion result.
In other more specific embodiments, the motion detection result for the ith detected face is the sum a of the coordinate movements of its m selected feature points between adjacent frames. That is, if the coordinate movement of the mth feature point of the ith face between adjacent frame pictures is α_{i,m}, the motion detection result of the ith face is a_i = Σ_m α_{i,m}.
In a further embodiment, in order to reflect the motion state over a longer period more accurately, this step is performed over a preset time, e.g. 2-4 seconds. The preset time may contain 50-100 frames; the motion detection results a_i of the ith face are then summed over the 50-100 adjacent frame pictures within the preset time to give the total Σa_i, as in the sketch below.
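A minimal sketch of this scoring, assuming the m feature points of each candidate face have already been tracked frame by frame; the Euclidean displacement metric and all names are illustrative assumptions, since the text only specifies summing coordinate-movement values over adjacent frames.

    import math

    def motion_score(track):
        """track: one list per frame of (x, y) coordinates of the m feature
        points of a face, over the preset window (e.g. 50-100 frames in 2-4 s).
        Returns the total movement over all adjacent frame pairs, i.e. the sum
        of the alpha_{i,m} contributions."""
        total = 0.0
        for prev, curr in zip(track, track[1:]):        # adjacent frame pairs
            for (x0, y0), (x1, y1) in zip(prev, curr):  # the m feature points
                total += math.hypot(x1 - x0, y1 - y0)
        return total

    def recognition_object(tracks):
        """Index of the face with the smallest motion result."""
        return min(range(len(tracks)), key=lambda i: motion_score(tracks[i]))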
In some further embodiments shown in FIG. 1, the method further includes step S102: detect connected domains below the faces to obtain the body image corresponding to each face image. This step associates the plurality of face areas with a plurality of body areas. A body area below a face area may fail to be recognized, indicating that the body corresponding to that face image may be occluded; the method then continues with step
S103: judge whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images; if so, verification succeeds (a sketch of this check follows). If there is no body image below a certain face area, the picture occupied by the corresponding body image can be taken as 0. Steps S102 and S103 verify the recognition object selected in steps S100 and S101: if the body area of the recognition object is the largest of all the body images, the selected recognition object is indeed most likely the face of the person standing at the front, because the person whose body occupies the largest picture most likely stands at the front and therefore covers the largest camera viewing angle. This verification step greatly improves the recognition accuracy of the face image and the practicability of the scheme of the invention.
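A hedged sketch of this check, assuming each face index has been mapped to the pixel area of its detected body region, with 0 recorded when no connected domain was found below the face:

    # body_areas: dict mapping face index -> body-region pixel count (assumed
    # data shape). The candidate passes only if its body area is the maximum.
    def verify_largest_body(body_areas, candidate_idx):
        area = body_areas.get(candidate_idx, 0)
        return area > 0 and area == max(body_areas.values())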
In other further embodiments, in order to verify the recognition object selected in steps S100 and S101, the method may instead include step S105: perform connected-domain detection below the plurality of faces, obtain the body image corresponding to each face image, and extract depth information from the body images;
S106: judge whether the body image corresponding to the recognition object has the nearest depth of all the body images; if so, verification succeeds. Depth information here means the distance between an object in front of the camera module and the camera module, obtained by conventional prior-art means, for example by using an RGBD depth camera based on time-of-flight or structured light, which is not described further here. Recognizing the depth of the body image regions in steps S105 and S106 achieves the technical effect of verifying whether the recognition object is the face of the person standing at the very front, and avoids interference with the recognition result from situations such as onlookers craning their heads forward from behind the queue. This verification step greatly improves the recognition accuracy of the face image and the practicability of the scheme of the invention.
In some specific embodiments, the depth calculation of the connected domain may be performed as follows: obtain a depth image of the part below the face area and calculate its average depth d; select K pixel points directly below the face area for expansion; set a set C to be expanded and place the K pixels into C; for each pixel p in C, calculate the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and place a pixel into C if its difference is within a preset range; delete p from C after its five neighbouring pixels have been calculated; simultaneously recalculate the average depth d' of all pixels in C and update d to d'; and repeat until a stopping condition is reached, namely that the number of calculated pixels reaches MAX or that no pixel in C meets the condition for expansion. The resulting d is taken as the distance D between the body image and the device. A sketch under stated assumptions follows.
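The following Python sketch implements this depth expansion under assumptions: depth_img is a 2D NumPy array of per-pixel depths, seeds are the K pixel coordinates directly below the face rectangle, tol is the preset range, and max_pixels plays the role of MAX. The five-neighbour rule follows the text; everything else is illustrative.

    import numpy as np

    def body_depth(depth_img, seeds, tol, max_pixels):
        """Region-grow from the K seed pixels and return the running average
        depth d, taken as the distance D between the body and the device."""
        h, w = depth_img.shape
        c = list(seeds)                      # the set C to be expanded
        visited = set(seeds)
        total = sum(float(depth_img[y, x]) for x, y in seeds)
        count = len(seeds)
        d = total / count                    # initial average depth
        while c and count < max_pixels:      # stop: C empty or MAX reached
            x, y = c.pop()                   # delete p from C once processed
            # five neighbours: left, lower-left, lower, lower-right, right
            for nx, ny in ((x-1, y), (x-1, y+1), (x, y+1), (x+1, y+1), (x+1, y)):
                if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in visited:
                    if abs(float(depth_img[ny, nx]) - d) < tol:  # within d +/- tol
                        visited.add((nx, ny))
                        c.append((nx, ny))
                        total += float(depth_img[ny, nx])
                        count += 1
                        d = total / count    # update d to d'
        return d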
In order to allocate computing power well, note that if n is too large, the number of faces to be processed grows and the probability of false detection rises, while if n is too small the probability of missed detection rises; the value of n therefore needs to be optimized, and it is determined from the average queuing spacing. Specifically, a camera photographs and records the queuing crowd, intelligently identifies the number of people queuing, and computes the queue length from the recorded images, giving average spacing = queue length / number of people queuing. In this example we set n positively correlated with the inverse of the average spacing (which may be regarded as the queuing density); the positive correlation coefficient may be chosen as desired, and n may be rounded up. For example, in one embodiment the number of people X in the queue is 10, the queue length Y is 7.5 m, and the positive correlation coefficient k is chosen as 3, giving n = k*X/Y = 4. At this queuing density, selecting the 4 faces occupying the largest picture meets the computational requirement while saving computing power; a small worked sketch follows. With this scheme the camera can record the queue in real time and compute its density, so the number of faces to detect can be determined better. In other embodiments, besides this real-time adjustment, the average queuing density over a past period may be obtained from the average queue length and average number of people over that period. Computing the optimal n from the average queuing density of the past week, day, or month achieves the technical effect of optimizing the numerical selection of n.
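A worked sketch of this selection; the function name and parameter defaults are illustrative, and the values reproduce the embodiment's numbers.

    import math

    def faces_to_consider(people_count, queue_length_m, k=3):
        """n = ceil(k * queuing density), density = people / queue length."""
        density = people_count / queue_length_m   # inverse of average spacing
        return math.ceil(k * density)

    print(faces_to_consider(10, 7.5))  # X=10 people, Y=7.5 m, k=3 -> n = 4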
In some embodiments shown in FIG. 2, a portrait recognition method is further performed, comprising the following steps. S200: photograph the number of people in the queue with a first camera unit arranged above the crowd, the included angle between the projection of its central-axis direction in a vertical plane and the horizontal plane being smaller than 45°; that is, the horizontal viewing angle of the first camera unit is larger than its vertical viewing angle, which is convenient for capturing the queuing crowd. S202: arrange the second camera unit in front of the queue, set slightly lower than the crowd's height, the included angle between the projection of its central-axis direction in the vertical plane and the horizontal plane being larger than 45°; that is, the horizontal viewing angle of the second camera unit is smaller than its vertical viewing angle, which facilitates face recognition. The first camera unit records the queuing crowd, intelligently identifies the number of people queuing, and computes the queue length from the recorded images, giving average spacing = queue length / number of people queuing; the value of n is set positively correlated with the inverse of the average spacing (which may be regarded as the queuing density). After step S202, step S101 is performed: select the first n faces occupying the largest picture and perform motion detection on them to obtain motion detection results. This design connects with the image verification method in the multi-person scene described above. With this scheme, the number of people in the queue can be photographed and face detection performed at the same time, and the dedicated camera arrangement improves recognition accuracy, which finally improves the execution quality of the scheme.
In the embodiment shown in FIG. 3, an information acquisition flow is designed; the scheme is a self-service face recognition service method. It further includes step S1: obtain a user ID and detect whether the user ID exists on the local server; if so, obtain the user profile associated with the user ID, otherwise create a new profile. S10: detect whether the user profile contains a user photo; if not, execute from step S100 until verification of the recognition object is completed, and store the recognition object as the user photo. If the user profile does include a user photo, then after step S100 (acquiring an input image from the image acquisition module and performing face detection on the faces in it), step S1001 is preferably performed: match the first n faces occupying the largest picture against the user photo, and if an ith face matches successfully, set that face as the recognition object. With this scheme the user ID, once obtained, can be matched against the existing database: when no user photo exists one is acquired quickly, and when a user photo exists preferential matching finds the recognition object as early as possible. A sketch of this flow follows.
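A hedged sketch of this flow; the server dict and the match and run_recognition helpers are illustrative assumptions, not the patent's API.

    def self_service_recognition(user_id, server, faces, match, run_recognition):
        profile = server.setdefault(user_id, {})   # S1: fetch or create profile
        photo = profile.get("photo")
        if photo is None:                          # S10: no user photo on file
            profile["photo"] = run_recognition(faces)  # S100...: verified object
            return profile["photo"]
        for face in faces:                         # S1001: preferential matching
            if match(face, photo):                 # first n largest faces tried
                return face
        # Fallback is an assumption; the text leaves the no-match case open.
        return run_recognition(faces)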
The scheme also introduces an image verification device for multi-person scenes as shown in FIG. 4, which can run the aforementioned image recognition method in a multi-person scene. The device comprises an image acquisition module 400, a face recognition module 402 and a motion detection module 404. The image acquisition module 400 inputs an image; the face recognition module 402 performs face detection on the faces in the input image; the motion detection module 404 selects the first n faces occupying the largest picture and performs motion detection on them to obtain motion detection results; and the face recognition module 402 is further configured to select the single face with the smallest motion detection result as the recognition object.
Specifically, the motion detection module 404 is configured to detect the ith face, select m feature points, and measure the movement of the coordinates of the m feature points of the ith face between different frames.
Specifically, the motion detection result is obtained by detecting an ith face, selecting m feature points, summing the coordinate movements of the m feature points of the ith face between adjacent frame pictures to give the value a, and summing over all adjacent frame pictures within a preset time to give the total Σa.
Further, the device comprises a connected-domain detection module 406 and a judgment-and-verification module 408. The connected-domain detection module is further configured to detect the connected domains below the faces and obtain the body image corresponding to each face image. The judgment-and-verification module is configured to judge whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images; if so, verification succeeds.
In other further embodiments, the device further includes a connected-domain detection module 406 and a judgment-and-verification module 408. The connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, obtain the body image corresponding to each face image, and extract depth information from the body images; the judgment-and-verification module is configured to judge whether the body image corresponding to the recognition object has the nearest depth of all the body images; if so, verification succeeds.
Specifically, the connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, and in particular to: obtain a depth image of the part below a face area and calculate its average depth d; select K pixel points directly below the face area for expansion, the expansion method being to set a set C to be expanded and place the K pixels into C; calculate, for each pixel p in C, the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and place a pixel into C if its difference is within a preset range; delete p from C after its five neighbouring pixels have been calculated; simultaneously recalculate the average depth d' of all pixels in C and update d to d'; and repeat until a stopping condition is reached, namely that the number of calculated pixels reaches MAX or that no pixel in C meets the condition for expansion. The resulting d is the distance D between the body image and the device.
Specifically, the value of n is determined from the average queuing spacing. A camera photographs and records the queuing crowd, intelligently identifies the number of people queuing, and computes the queue length from the recorded images, giving average spacing = queue length / number of people queuing. In this example we set n positively correlated with the inverse of the average spacing (which may be regarded as the queuing density); the positive correlation coefficient may be chosen as desired, and n may be rounded up. For example, in one embodiment the number of people X in the queue is 10, the queue length Y is 7.5 m, and the positive correlation coefficient k is chosen as 3, giving n = k*X/Y = 4. At this queuing density, selecting the 4 faces occupying the largest picture meets the computational requirement while saving computing power. With this scheme the camera can record the queue in real time and compute its density, so the number of faces to detect can be determined better. In other embodiments, besides this real-time adjustment, the average queuing density over a past period may be obtained from the average queue length and average number of people over that period. Computing the optimal n from the average queuing density of the past week, day, or month achieves the technical effect of optimizing the numerical selection of n.
Through this device design, interference with the face recognition system in multi-person situations can be avoided, the interference resistance of the scheme of the invention is improved, and the accuracy of face recognition when multiple people are present is further optimized.
The embodiment shown in FIG. 5 is a portrait recognition device configured to perform the portrait recognition method. It comprises an image acquisition module 400, an image analysis module 401, a face recognition module 402, a motion detection module 404 and a user data module 405. The image acquisition module includes a first camera unit and a second camera unit: the first camera unit photographs the number of people in the queue and is arranged above the crowd, the included angle between the projection of its central-axis direction in a vertical plane and the horizontal plane being smaller than 45°; the second camera unit is in front of the queue, the included angle between the projection of its central-axis direction in the vertical plane and the horizontal plane being larger than 45°. The face recognition module performs a face detection operation on the faces in the input image of the second camera unit. The image analysis module analyses the video shot by the first camera unit to obtain the average queuing density and derives the value of n on the principle that n is positively correlated with the average queuing density. The motion detection module selects the first n faces occupying the largest picture, performs motion detection on them to obtain motion detection results, and selects the single face with the smallest motion detection result as the recognition object. The user data module is further configured to acquire user identity information and match the user photo obtained from the database associated with that identity information against the recognition object. With this design, the portrait recognition device captures the required images separately through the first and second camera units, and selecting the first n faces improves the accuracy and interference resistance of face recognition when multiple people are present.
Specifically, the motion detection module is configured to detect the ith face, select m feature points, and measure the movement of the coordinates of the m feature points of the ith face between different frames.
Further, the motion detection result is specifically obtained by detecting the ith face, selecting m feature points, summing the coordinate movements of the m feature points of the ith face between adjacent frame pictures to give the value a, and summing over all adjacent frame pictures within a preset time to give the total Σa.
In a further embodiment, the device further includes a connected-domain detection module 406 and a judgment-and-verification module 408. The connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces and obtain the body image corresponding to each face image. The judgment-and-verification module is configured to judge whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images; if so, verification succeeds.
Specifically, the device further comprises a connected-domain detection module 406 and a judgment-and-verification module 408:
the connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, obtain the body image corresponding to each face image, and extract depth information from the body images;
the judgment-and-verification module is further configured to judge whether the body image corresponding to the recognition object has the nearest depth of all the body images, and if so, verification succeeds.
In a further embodiment, the connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, and in particular to: obtain a depth image of the part below a face area and calculate its average depth d; select K pixel points directly below the face area for expansion, the expansion method being to set a set C to be expanded and place the K pixels into C; calculate, for each pixel p in C, the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and place a pixel into C if its difference is within a preset range; delete p from C after its five neighbouring pixels have been calculated; simultaneously recalculate the average depth d' of all pixels in C and update d to d'; and repeat until a stopping condition is reached, namely that the number of calculated pixels reaches MAX or that no pixel in C meets the condition for expansion. The resulting d is the distance D between the body image and the device.
Specifically, the value of n is determined by selecting an average queuing density for the past week, day, or month.
In other embodiments shown in FIG. 6, a self-service face recognition service device is further provided, configured to run the self-service face recognition service method of FIG. 3. It comprises a user information detection module 407, an image acquisition module 400, a face recognition module 402, a motion detection module 404 and a user data module 405. The user information detection module obtains a user ID, detects whether the user ID exists on the local server, obtains the user profile associated with the user ID, further detects whether the profile contains a user photo, and enables the image acquisition module when no user photo is found. The image acquisition module inputs an image; the face recognition module performs a face detection operation on the faces in the input image; the motion detection module selects the first n faces occupying the largest picture and performs motion detection on them to obtain motion detection results; the face recognition module is further configured to select the single face with the smallest motion detection result as the recognition object; and the user data module is further configured to acquire user identity information and match the user photo obtained from the associated database against the recognition object. With this design, the user data module quickly matches the user ID to the user's information, and selecting the first n faces and finding the face with the smallest motion result improves the accuracy and interference resistance of face recognition when multiple people are present.
Further, the motion detection module is configured to detect the ith face, select m feature points, and measure the movement of the coordinates of the m feature points of the ith face between different frames.
The motion detection result is specifically obtained by detecting the ith face, selecting m feature points, summing the coordinate movements of the m feature points of the ith face between adjacent frame pictures to give the value a, and summing over all adjacent frame pictures within a preset time to give the total Σa.
Specifically, the device further comprises a connected-domain detection module 406 and a judgment-and-verification module 408:
the connected-domain detection module 406 is further configured to detect the connected domains below the faces and obtain the body image corresponding to each face image;
the judgment-and-verification module 408 is further configured to judge whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images, and if so, verification succeeds.
Specifically, the device further comprises a connected-domain detection module and a judgment-and-verification module:
the connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, obtain the body image corresponding to each face image, and extract depth information from the body images;
the judgment-and-verification module is further configured to judge whether the body image corresponding to the recognition object has the nearest depth of all the body images, and if so, verification succeeds.
Specifically, the connected-domain detection module is further configured to perform connected-domain detection below the plurality of faces, and in particular to: obtain a depth image of the part below a face area and calculate its average depth d; select K pixel points directly below the face area for expansion, the expansion method being to set a set C to be expanded and place the K pixels into C; calculate, for each pixel p in C, the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and place a pixel into C if its difference is within a preset range; delete p from C after its five neighbouring pixels have been calculated; simultaneously recalculate the average depth d' of all pixels in C and update d to d'; and repeat until a stopping condition is reached, namely that the number of calculated pixels reaches MAX or that no pixel in C meets the condition for expansion. The resulting d is the distance D between the body image and the device.
Further, the value of n is determined based on the average queuing distance.
Specifically, the user information detection module 407 is configured to obtain the user ID by reading the medical insurance card information.
Further, the face recognition module 402 is further configured to perform matching against the user photo when the user profile contains one, and if a face matches successfully, to set that face as the recognition object. With this scheme, the user ID, once obtained, can be matched against the existing database: when no user photo exists one is acquired quickly, and when a user photo exists preferential matching finds the recognition object as early as possible.
It should be noted that although the foregoing embodiments have been described herein, the scope of the invention is not limited thereby. Therefore, any alterations and modifications to the embodiments described herein based on the innovative concepts of the invention, or equivalent structures or equivalent process transformations made using this description and the drawings, whether the above technical solution is applied directly or indirectly in other relevant technical fields, fall within the scope of the invention.

Claims (5)

1. A portrait recognition method, characterized by comprising the following steps: S200, photographing the number of people in a queue through a first camera unit, the first camera unit being arranged above the crowd, the included angle between the projection of the central-axis direction of the first camera unit in a vertical plane and the horizontal plane being smaller than 45°; S202, arranging a second camera unit in front of the queue, the included angle between the projection of the central-axis direction of the second camera unit in the vertical plane and the horizontal plane being larger than 45°, acquiring an input image of the second camera unit, and performing a face detection operation on the faces in the input image;
S101, selecting the first n faces occupying the largest picture among the plurality of faces, wherein the value of n is obtained by analysing the images shot by the first camera unit to derive an average queuing density, n being positively correlated with the average queuing density; and performing motion detection on the n faces to obtain motion detection results, motion detection referring to judging whether a block identified as a face is translated, deformed or scaled between adjacent frames;
selecting a single face with the minimum motion detection result as a recognition object;
detecting connected domains below a plurality of faces to obtain body images corresponding to the face images;
judging whether the picture occupied by the body image corresponding to the recognition object is the largest of all the body images, and if so, verification succeeds; or extracting depth information from the body images;
judging whether the body image corresponding to the recognition object has the nearest depth of all the body images, and if so, verification succeeds;
and acquiring user identity information, and matching the user photo obtained from the database associated with the user identity information against the recognition object.
2. The method according to claim 1, wherein the motion detection result is obtained by detecting an ith face, selecting m feature points, and measuring the movement of the coordinates of the m feature points of the ith face between different frames.
3. The method of claim 2, wherein the motion detection result is the value a obtained by detecting an ith face, selecting m feature points, and summing the coordinate movements of the m feature points of the ith face between adjacent frames,
and the step of summing the coordinate movements over all adjacent frame pictures within a preset time to obtain the total Σa is also performed.
4. The portrait recognition method according to claim 1, wherein
the connected-domain detection comprises: obtaining a depth image of the part below a face area and calculating an average depth d; selecting K pixel points directly below the face area for expansion; setting a set C to be expanded and placing the K pixels into C; calculating, for each pixel p in C, the difference between d and the depth of each of the five pixels to its left, lower-left, lower, lower-right and right, and placing a pixel into C if its difference is within a preset range; deleting p from C after its five neighbouring pixels have been calculated; simultaneously calculating the average depth d' of all pixels in C and updating d to d'; and repeating the calculation until a stopping condition is reached, the stopping condition being that the number of calculated pixels reaches MAX or that no pixel in C meets the condition for expansion, the resulting d being taken as the distance D between the body image and the device.
5. The portrait identification method of claim 1 wherein the value of n is determined by selecting an average queuing density for the past week, day or month.
CN202010879451.2A 2020-08-27 2020-08-27 Portrait recognition method Active CN112001340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010879451.2A CN112001340B (en) 2020-08-27 2020-08-27 Portrait recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010879451.2A CN112001340B (en) 2020-08-27 2020-08-27 Portrait recognition method

Publications (2)

Publication Number Publication Date
CN112001340A CN112001340A (en) 2020-11-27
CN112001340B true CN112001340B (en) 2024-03-29

Family

ID=73471046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010879451.2A Active CN112001340B (en) 2020-08-27 2020-08-27 Portrait recognition method

Country Status (1)

Country Link
CN (1) CN112001340B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447597A (en) * 2018-12-27 2019-03-08 深圳市沃特沃德股份有限公司 More people carry out the method, apparatus and face identification system of attendance jointly
CN109934733A (en) * 2019-03-20 2019-06-25 合肥浮点信息科技有限公司 One kind being based on face recognition technology intelligent canteen queue management system
CN110032966A (en) * 2019-04-10 2019-07-19 湖南华杰智通电子科技有限公司 Human body proximity test method, intelligent Service method and device for intelligent Service
CN110543867A (en) * 2019-09-09 2019-12-06 北京航空航天大学 crowd density estimation system and method under condition of multiple cameras
CN111209812A (en) * 2019-12-27 2020-05-29 深圳市优必选科技股份有限公司 Target face picture extraction method and device and terminal equipment
CN112001334A (en) * 2020-08-27 2020-11-27 闽江学院 Portrait recognition device

Also Published As

Publication number Publication date
CN112001340A (en) 2020-11-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant