CN111767757A - Identity information determination method and device - Google Patents


Info

Publication number
CN111767757A
Authority
CN
China
Prior art keywords
face image
image
target face
shot
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910251349.5A
Other languages
Chinese (zh)
Other versions
CN111767757B (en)
Inventor
王开元
方家乐
徐楠
王鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910251349.5A
Publication of CN111767757A
Application granted
Publication of CN111767757B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an identity information determination method and device, belonging to the field of intelligent monitoring. The method comprises the following steps: N shot images are acquired, and each shot image is divided into at least two regions with different imaging qualities. The region where the target face image is located is determined from the at least two regions included in each shot image. At least one optimal face image is then selected from the target face images included in the N shot images according to the region of the target face image in each shot image. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region where the target face image lies in each shot image, the selected optimal face images are more accurate, and the identity information determined for the target face image is therefore more accurate.

Description

Identity information determination method and device
Technical Field
The present application relates to the field of intelligent monitoring, and in particular, to a method and an apparatus for determining identity information.
Background
Face recognition is a biometric technology that determines identity information based on face feature information. That is, a face image may be detected from a video according to a face recognition technique, and identity information corresponding to the face image may be determined.
In the related art, an identity information determination method is provided, which includes: shooting a video, and determining, according to a face image quality scoring algorithm, the quality scores of target face images included in a plurality of video frame images of the shot video, where a target face image refers to a face image whose identity information is to be determined. The target face image with the highest quality score is selected from the target face images included in the plurality of video frame images as the optimal target face image. A face image with the highest similarity to the optimal target face image is then retrieved from a face database, and the identity information corresponding to the retrieved face image is determined as the identity information corresponding to the target face image.
However, since the imaging quality of the edge region of an image is generally lower than that of the central region of the image, the imaging quality of a face image detected in the edge region is generally lower than that of a face image detected in the central region. Therefore, the optimal target face image may be misjudged by the method, so that the accuracy of the identity information corresponding to the determined target face image is low.
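As a rough sketch (not part of the patent text), the related-art selection reduces to a single arg-max over per-frame quality scores, which is exactly what makes it sensitive to where in the frame a face appears; `quality_score` is a hypothetical stand-in for the unspecified scoring algorithm:

```python
# Sketch of the related-art baseline: pick the target face image with the
# single highest quality score, ignoring where in the frame it was detected.
def select_best_naive(target_faces, quality_score):
    """target_faces: face crops of the person whose identity is sought."""
    return max(target_faces, key=quality_score)
```

A face detected near a distorted edge can still win this arg-max if the scorer does not account for position, which is the misjudgment the disclosure addresses.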
Disclosure of Invention
The embodiments of the present application provide an identity information determination method and device, which solve the problem in the related art that the accuracy of the identity information determined for a target face image is low because the imaging quality of the edge region of an image is lower than that of its central region. The technical scheme is as follows:
in a first aspect, a method for determining identity information is provided, where the method includes:
acquiring N shot images, wherein N is a positive integer;
dividing each shot image into at least two areas with different imaging qualities;
determining the area where the target face image is located from at least two areas included in each shot image;
selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image;
and determining the identity information corresponding to the target face image according to the at least one optimal face image.
Optionally, the dividing each captured image into at least two regions with different imaging qualities includes:
dividing each shot image into at least two areas with different imaging qualities according to a preset corresponding relation between the imaging qualities and the position information; alternatively,
determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
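A minimal sketch of the preset-correspondence division, assuming two rectangular regions (a central high-quality region and its surrounding border); the 0.6 center ratio and the center-point assignment rule are illustrative choices, not values from the patent:

```python
def divide_regions(width, height, center_ratio=0.6):
    # Preset correspondence between position and imaging quality: a central
    # rectangle (higher quality, "first region") and the surrounding border
    # (lower quality, "second region").
    cw, ch = int(width * center_ratio), int(height * center_ratio)
    x0, y0 = (width - cw) // 2, (height - ch) // 2
    return (x0, y0, x0 + cw, y0 + ch)  # (left, top, right, bottom)

def region_of_face(face_box, first_region):
    # A face is assigned to the first region when its center falls inside it.
    x1, y1, x2, y2 = face_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    l, t, r, b = first_region
    return "first" if (l <= cx <= r and t <= cy <= b) else "second"
```

The alternative branch, fitting the correspondence from statistics over historical shot images, would replace the fixed `center_ratio` with measured per-region quality.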
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
selecting at least one face image in an area with highest imaging quality from the target face images included in the N shot images;
determining a quality score of each face image in the at least one face image;
and selecting at least one optimal face image from the at least one face image according to the quality score of the at least one face image.
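The three steps above can be sketched as follows; `region_quality` and `quality_score` are hypothetical callables standing in for the patent's unspecified quality model:

```python
def select_from_best_region(faces, region_quality, quality_score, k=1):
    # faces: list of (face_image, region) pairs for the target face across
    # the N shot images. Keep only the faces detected in the region with the
    # highest imaging quality, then rank the survivors by quality score.
    best_region = max({r for _, r in faces}, key=region_quality)
    pool = [f for f, r in faces if r == best_region]
    return sorted(pool, key=quality_score, reverse=True)[:k]
```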
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
determining the quality score of a target face image contained in each shot image;
and selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image, wherein the selecting comprises the following steps:
when the target face image is in the first area in M shot images included in the N shot images, selecting the target face image with the highest quality score as a first candidate face image from the target face images included in the M shot images according to the quality scores of the target face images included in the M shot images, wherein M is a positive integer smaller than N;
when the target face image is in the second area in K shot images included in the N shot images, selecting the target face image with the highest quality score as a second candidate face image from the target face images included in the K shot images according to the quality scores of the target face images included in the K shot images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
and selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
Optionally, the selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image includes:
determining a first score difference, the first score difference being a difference between the quality score of the second candidate face image and the quality score of the first candidate face image;
when the first score difference value is equal to a score threshold value, determining the first candidate face image and the second candidate face image as optimal face images; and when the first score difference is larger than a score threshold, determining that the second candidate face image is the optimal face image, and when the first score difference is smaller than the score threshold, determining that the first candidate face image is the optimal face image.
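The threshold comparison described above can be sketched as follows, with candidates as (image, score) pairs; the threshold value itself is left open by the patent:

```python
def compare_candidates(first_cand, second_cand, score_threshold):
    # first_cand comes from the high-quality region, second_cand from the
    # low-quality one. The asymmetric threshold compensates for the score
    # penalty a face suffers merely by appearing in the low-quality region.
    diff = second_cand[1] - first_cand[1]  # the "first score difference"
    if diff == score_threshold:
        return [first_cand, second_cand]   # both kept as optimal images
    return [second_cand] if diff > score_threshold else [first_cand]
```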
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the determining the quality score of the target face image included in each shot image comprises the following steps:
determining a plurality of scoring items of a target face image included in a current shot image, wherein the current shot image is any one of the N shot images;
when the target face image is in the first area in the current shot image, determining the quality score of the target face image included in the current shot image according to a plurality of score items of the target face image included in the current shot image and the first weight of each score item;
and when the target face image is in the second area in the current shot image, determining the quality score of the target face image included in the current shot image according to a plurality of score items of the target face image included in the current shot image and the second weight of each score item.
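The region-dependent weighted scoring can be sketched as a weighted sum over the score items; the item names and weight values below are illustrative assumptions, not values from the patent:

```python
def weighted_quality_score(score_items, region, first_weights, second_weights):
    # score_items: per-criterion scores for one target face, e.g. sharpness,
    # pose, illumination. Which weight set applies depends on the region the
    # face was detected in.
    weights = first_weights if region == "first" else second_weights
    return sum(weights[item] * value for item, value in score_items.items())
```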
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
selecting one shot image from the N shot images, and determining the quality score of a target face image included in the selected shot image;
determining a candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment;
judging whether the N shot images are processed or not;
and if so, taking the candidate face image determined at the current moment as the optimal face image, otherwise, selecting a shot image from unprocessed shot images included in the N shot images, and returning to the step of determining the quality score of the target face image included in the selected shot image until the N shot images are processed.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the previous moment, and the quality score of the candidate face image determined at the previous moment includes:
when the area of the target face image in the selected shot image is the same as the area of the candidate face image determined at the previous moment, and the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score as the candidate face image at the current moment from the target face image included in the selected shot image and the candidate face image determined at the previous moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the candidate face image determined at the previous moment is the first area, determining a second score difference value, wherein the second score difference value is the difference value between the quality score of the target face image included in the selected shot image and the quality score of the candidate face image determined at the previous moment;
when the second score difference is larger than a score threshold, taking a target face image included in the selected shot image as a candidate face image at the current moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the target face image in the selected shot image is the first area, determining a third score difference value, wherein the third score difference value is a difference value between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image;
and when the third score difference is smaller than or equal to the score threshold, taking the target face image included in the selected shot image as the candidate face image at the current moment.
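The four update rules above amount to one comparison per incoming frame. A sketch, with faces represented as hypothetical (region, score, image) triples:

```python
def update_candidate(candidate, new_face, score_threshold):
    # candidate: best face kept from earlier frames; new_face: the target
    # face from the frame currently being processed.
    if candidate is None:
        return new_face
    c_region, c_score, _ = candidate
    n_region, n_score, _ = new_face
    if n_region == c_region:
        # Same region: keep whichever scores higher.
        return new_face if n_score > c_score else candidate
    if c_region == "first":
        # New face is from the lower-quality region: it must beat the
        # current candidate by more than the threshold to replace it.
        return new_face if n_score - c_score > score_threshold else candidate
    # New face is from the higher-quality region: it replaces the candidate
    # unless it trails by more than the threshold.
    return new_face if c_score - n_score <= score_threshold else candidate
```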
In a second aspect, an identity information determination apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring N shot images, wherein N is a positive integer;
the dividing module is used for dividing each shot image into at least two areas with different imaging qualities;
the first determining module is used for determining the area where the target face image is located from at least two areas included in each shot image;
the selection module is used for selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image;
and the second determining module is used for determining the identity information corresponding to the target face image according to the at least one optimal face image.
Optionally, the dividing module includes:
the dividing submodule is used for dividing each shot image into at least two areas with different imaging qualities according to the preset corresponding relation between the imaging qualities and the position information; alternatively,
determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
Optionally, the selection module comprises:
the first selection submodule is used for selecting at least one face image in an area with the highest imaging quality from the target face images contained in the N shot images;
the first determining submodule is used for determining the quality score of each face image in the at least one face image;
and the second selection submodule is used for selecting at least one optimal facial image from the at least one facial image according to the quality score of the at least one facial image.
Optionally, the selection module comprises:
the second determining submodule is used for determining the quality score of the target face image contained in each shot image;
and the third selection sub-module is used for selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the third selection submodule includes:
a first selection unit, configured to, when the target face image is in the first region in M captured images included in the N captured images, select, according to quality scores of the target face images included in the M captured images, a target face image with a highest quality score from the target face images included in the M captured images as a first candidate face image, where M is a positive integer smaller than N;
a second selecting unit, configured to, when the target face image is in the second region in K captured images included in the N captured images, select, according to quality scores of the target face images included in the K captured images, a target face image with a highest quality score from the target face images included in the K captured images as a second candidate face image, where K is a positive integer smaller than N, and a sum of K and M is equal to N;
and the third selection unit is used for selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
Optionally, the third selecting unit includes:
a first determining subunit, configured to determine a first score difference value, where the first score difference value is a difference value between the quality score of the second candidate face image and the quality score of the first candidate face image;
a second determining subunit, configured to determine, when the first score difference is equal to a score threshold, both the first candidate face image and the second candidate face image as an optimal face image; and when the first score difference is larger than a score threshold, determining that the second candidate face image is the optimal face image, and when the first score difference is smaller than the score threshold, determining that the first candidate face image is the optimal face image.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the second determination submodule includes:
the first determining unit is used for determining a plurality of scoring items of a target face image included in a current shot image, wherein the current shot image is any one of the N shot images;
a second determining unit, configured to determine, when the target face image is in the first region in the current captured image, a quality score of the target face image included in the current captured image according to the plurality of score items of the target face image included in the current captured image and the first weight of each score item;
and a third determining unit, configured to determine, when the target face image is in the second region in the current captured image, a quality score of the target face image included in the current captured image according to the plurality of score items of the target face image included in the current captured image and the second weight of each score item.
Optionally, the selection module comprises:
the third determining submodule is used for selecting one shot image from the N shot images and determining the quality score of the target face image included in the selected shot image;
a fourth determining submodule, configured to determine a candidate face image at a current time according to a region of the target face image in the selected captured image, a quality score of the target face image included in the selected captured image, a region of the candidate face image determined at a previous time, and a quality score of the candidate face image determined at the previous time;
the judgment submodule is used for judging whether the N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the optimal face image when the N shot images are processed, selecting a shot image from the unprocessed shot images included in the N shot images when the N shot images are not processed, and triggering the third determining sub-module to determine the quality score of the target face image included in the selected shot image until the N shot images are processed.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the fourth determination submodule includes:
a fourth selecting unit, configured to select, when a region of the target face image in the selected captured image is the same as a region of the candidate face image determined at the previous time, and a quality score of the target face image included in the selected captured image is different from a quality score of the candidate face image determined at the previous time, a face image with a highest quality score from the target face image included in the selected captured image and the candidate face image determined at the previous time as a candidate face image at the current time;
a fourth determining unit, configured to determine a second score difference value when the region of the target face image in the selected captured image is different from the region of the candidate face image determined at the previous time, and the region of the candidate face image determined at the previous time is the first region, where the second score difference value is a difference value between a quality score of the target face image included in the selected captured image and a quality score of the candidate face image determined at the previous time;
a fifth determining unit, configured to, when the second score difference is greater than a score threshold, take a target face image included in the selected captured image as a candidate face image at the current time;
a sixth determining unit, configured to determine a third score difference value when the area where the target face image is located in the selected captured image is different from the area where the candidate face image is located at the previous time, and the area where the target face image is located in the selected captured image is the first area, where the third score difference value is a difference value between the quality score of the candidate face image determined at the previous time and the quality score of the target face image included in the selected captured image;
and the seventh determining unit is used for taking the target face image included in the selected shot image as the candidate face image at the current moment when the third score difference is smaller than or equal to the score threshold.
In a third aspect, an identity information determination apparatus is provided, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
In a fourth aspect, a computer-readable storage medium is provided, having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of any one of the methods of the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
The technical scheme provided by the embodiment of the application can at least bring the following beneficial effects:
in the embodiments of the present application, N shot images are acquired first. Because of lens distortion of the shooting lens and similar factors, the imaging quality across each shot image is not uniform, so each shot image can be divided into at least two regions with different imaging qualities. The region where the target face image is located is then determined from the at least two regions included in each shot image. At least one optimal face image is then selected from the target face images included in the N shot images according to the region of the target face image in each shot image. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region in which the target face image lies in each shot image, the selection is more accurate, and the identity information determined for the target face image is therefore more accurate.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
Fig. 2 is a flowchart of a first identity information determining method according to an embodiment of the present application.
Fig. 3 is a flowchart of a second identity information determination method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a division area of a captured image according to an embodiment of the present application.
Fig. 5 is a flowchart of a third identity information determining method according to an embodiment of the present application.
Fig. 6 is a flowchart of a fourth identity information determination method according to an embodiment of the present application.
Fig. 7 is a block diagram of an identity information determination apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an identity information determination apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with aspects of the present application.
Before explaining the embodiments of the present application in detail, an implementation environment of the embodiments of the present application is described:
fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application, and referring to fig. 1, the implementation environment includes an image capturing device 101 and a server 102. The image capturing apparatus 101 and the server 102 are connected to each other via a network. The image capturing apparatus 101 can capture a picture or a video. The image capturing apparatus 101 can also transmit the captured picture or video to the server 102. The server 102 may determine to take an image from a picture or video taken by the image taking device 101. Specifically, the server 102 may take a picture taken by the image capturing apparatus 101 as a captured image, and the server 102 may also take a video frame image included in a video captured by the image capturing apparatus 101 as a captured image. The server 102 may perform identification information determination based on the determined photographed image. The image capturing apparatus 101 may be a camera, a video camera, or the like. The server 102 is a server that provides a background service for the image capturing device 101, and may be a server, a server cluster composed of a plurality of servers, or a cloud computing server center, which is not limited in this embodiment of the present application. In the embodiment of the present application, a server 102 is illustrated.
The identity information determination method provided in the embodiments of the present application is explained in detail below.
Fig. 2 is a flowchart of an identity information determining method provided in an embodiment of the present application, and referring to fig. 2, the method includes:
step 201: n shot images are obtained, wherein N is a positive integer.
Step 202: each of the taken images is divided into at least two regions different in imaging quality.
Step 203: the region where the target face image is located is determined from at least two regions included in each captured image.
Step 204: and selecting at least one optimal face image from the target face images contained in the N shot images according to the area of the target face image in each shot image.
Step 205: and determining the identity information corresponding to the target face image according to the at least one optimal face image.
In the embodiment of the present application, N captured images are acquired first. Because of lens distortion of the photographing lens and similar factors, the imaging quality of each captured image is uneven, so each captured image can be divided into at least two regions with different imaging quality. Then, the region where the target face image is located is determined from the at least two regions included in each captured image, and at least one optimal face image is selected from the target face images included in the N captured images according to that region. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region where the target face image in each captured image is located, the at least one optimal face image is determined more accurately, and the identity information corresponding to the target face image is in turn determined more accurately.
Optionally, the dividing each captured image into at least two regions with different imaging quality includes:
dividing each captured image into at least two regions with different imaging quality according to a preset correspondence between imaging quality and position information; or,
determining the correspondence between imaging quality and position information by collecting statistics on the imaging quality and position information of each region in historical captured images, and dividing each captured image into at least two regions with different imaging quality according to the determined correspondence.
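The statistics-based branch can be sketched as follows: accumulate a per-cell quality estimate over historical captured images, then use the resulting map as the correspondence between position and imaging quality. The sharpness proxy (mean absolute horizontal gradient) and the 3×3 grid are illustrative stand-ins, not the metric or layout used in the source.

```python
def grid_quality_map(history_images, rows=3, cols=3):
    """Estimate a per-cell imaging-quality map from historical captures.
    Each image is a 2D list of grayscale values; a cell's 'quality' is its
    mean absolute horizontal gradient, a crude sharpness proxy."""
    h, w = len(history_images[0]), len(history_images[0][0])
    sums = [[0.0] * cols for _ in range(rows)]
    counts = [[0] * cols for _ in range(rows)]
    for img in history_images:
        for y in range(h):
            for x in range(1, w):
                r, c = y * rows // h, x * cols // w
                sums[r][c] += abs(img[y][x] - img[y][x - 1])
                counts[r][c] += 1
    return [[sums[r][c] / max(counts[r][c], 1) for c in range(cols)]
            for r in range(rows)]
```

A new captured image can then be divided into regions by thresholding this map, e.g. treating cells above the median estimate as the higher-quality region.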
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
selecting, from the target face images included in the N shot images, at least one face image located in the region with the highest imaging quality;
determining a quality score of each face image in the at least one face image;
and selecting at least one optimal face image from the at least one face image according to the quality score of the at least one face image.
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
determining the quality score of a target face image contained in each shot image;
and selecting at least one optimal face image from the target face images contained in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image contained in each shot image.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the method for selecting at least one optimal face image from the target face images contained in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image contained in each shot image comprises the following steps:
when the target face image is in the first region in M captured images of the N captured images, selecting, according to the quality scores of the target face images included in the M captured images, the target face image with the highest quality score from them as a first candidate face image, wherein M is a positive integer smaller than N;
when the target face image is in the second region in K captured images of the N captured images, selecting, according to the quality scores of the target face images included in the K captured images, the target face image with the highest quality score from them as a second candidate face image, wherein K is a positive integer smaller than N and the sum of K and M equals N;
and selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
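The two-candidate step above amounts to grouping the N captured images by the region the target face falls in and taking the highest-scoring face image from each group. In this sketch the `region`/`score` dict fields are illustrative summaries of each captured image, not fields from the source.

```python
def pick_candidates(images):
    """Split the N captured images by region (M images in the first
    region 'A', K in the second region 'B'), then take the
    highest-scoring target face image from each group as the first and
    second candidate face images."""
    first_group = [im for im in images if im["region"] == "A"]
    second_group = [im for im in images if im["region"] == "B"]
    best = lambda grp: max(grp, key=lambda im: im["score"]) if grp else None
    return best(first_group), best(second_group)
```

Either candidate is `None` when its group is empty (i.e. when M or K is zero, a case the claim excludes by requiring both to be positive).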
Optionally, the selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image includes:
determining a first score difference, wherein the first score difference is a difference between the quality score of the second candidate face image and the quality score of the first candidate face image;
when the first score difference is equal to the score threshold, determining both the first candidate face image and the second candidate face image as optimal face images; and when the first score difference is smaller than the score threshold, determining the first candidate face image as the optimal face image.
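The comparison between the two candidates can be sketched as below. The source only states the "equal to" and "smaller than" branches; treating a difference greater than the threshold as keeping only the second candidate is an assumption made for this sketch.

```python
def best_from_candidates(first, second, score_threshold):
    """Compare the second candidate (from the lower-quality region) with
    the first (from the higher-quality region) via the first score
    difference. The greater-than branch is an assumed extrapolation."""
    diff = second["score"] - first["score"]  # first score difference
    if diff == score_threshold:
        return [first, second]   # both are optimal face images
    if diff < score_threshold:
        return [first]           # first candidate is the optimal face image
    return [second]              # assumed branch: second clearly outscores first
```

Integer scores are used in the usage below to keep the equality branch exact; real quality scores would typically be floats, for which exact equality is fragile.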
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the quality score of the target face image contained in each shot image is determined, and the quality score comprises the following steps:
determining a plurality of scoring items of a target face image included in a current captured image, wherein the current captured image is any one of the N captured images;
when the target face image is in the first region in the current captured image, determining the quality score of the target face image included in the current captured image according to the plurality of scoring items and the first weight of each scoring item;
and when the target face image is in the second region in the current captured image, determining the quality score of the target face image included in the current captured image according to the plurality of scoring items and the second weight of each scoring item.
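The region-dependent scoring above is a weighted sum whose weight set is chosen by the region the face falls in. The scoring-item names and weight values below are illustrative, not values from the source; they only follow the source's later suggestion that the interpupillary-distance item can be weighted more heavily in the second region.

```python
def region_weighted_score(scoring_items, region, first_weights, second_weights):
    """Weighted sum of scoring items, choosing the weight set by region:
    first weights apply in the first region 'A', second weights in the
    second region 'B'."""
    weights = first_weights if region == "A" else second_weights
    return sum(weights[name] * value for name, value in scoring_items.items())

# Illustrative items and weights (hypothetical values):
items = {"pupil_distance": 1.0, "yaw_angle": 0.5}
first_w = {"pupil_distance": 0.4, "yaw_angle": 0.6}
second_w = {"pupil_distance": 0.6, "yaw_angle": 0.4}
```

The same face thus receives a different score depending on which region it was imaged in, reflecting the different importance of each item under each region's imaging quality.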
Optionally, the selecting at least one optimal face image from the target face images included in the N captured images according to the region of the target face image in each captured image includes:
selecting a shot image from the N shot images, and determining the quality score of a target face image included in the selected shot image;
determining a candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment;
judging whether all of the N captured images have been processed;
and if so, taking the candidate face image determined at the current moment as the optimal face image; otherwise, selecting a captured image from the unprocessed captured images among the N captured images, and returning to the step of determining the quality score of the target face image included in the selected captured image, until all of the N captured images have been processed.
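The loop above keeps a single running candidate while the N captured images are processed one at a time. In this sketch the region/score comparison rule (described in the passage that follows) is passed in as a function, so the loop itself stays generic; the dict shape is illustrative.

```python
def select_best_streaming(images, update_candidate):
    """Process the captured images one at a time, keeping one running
    candidate face image; `update_candidate(current, new)` encodes the
    comparison rule for each step."""
    candidate = None
    for image in images:              # "select a shot image"
        candidate = image if candidate is None else update_candidate(candidate, image)
    return candidate                  # optimal face image once all N are processed
```

With a trivial rule that keeps whichever image scores higher, the loop reduces to a running maximum; the region-aware rule only changes how `update_candidate` decides.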
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the previous moment, and the quality score of the candidate face image determined at the previous moment comprises:
when the area of the target face image in the selected shot image is the same as the area of the candidate face image determined at the previous moment, and the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score from the target face image included in the selected shot image and the candidate face image determined at the previous moment as the candidate face image at the current moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the candidate face image determined at the previous moment is a first area, determining a second score difference value, wherein the second score difference value is the difference value between the quality score of the target face image included in the selected shot image and the quality score of the candidate face image determined at the previous moment;
when the second score difference is larger than the score threshold, the target face image included in the selected shot image is used as the candidate face image at the current moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the target face image in the selected shot image is the first area, determining a third score difference value, wherein the third score difference value is the difference value between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image;
and when the third score difference is smaller than or equal to the score threshold, taking the target face image included in the selected shot image as the candidate face image at the current moment.
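The replacement rules above can be sketched as one function. Branches the source leaves unstated (e.g. a lower-quality-region image that improves on the candidate by less than the threshold) default to keeping the previous candidate in this sketch; the `region`/`score` fields are illustrative.

```python
def update_candidate(previous, new, score_threshold):
    """Region-aware replacement rule for the running candidate."""
    if new["region"] == previous["region"]:
        # same region: keep whichever has the higher quality score
        return new if new["score"] > previous["score"] else previous
    if previous["region"] == "A":
        # new image is from the lower-quality region: it must beat the
        # candidate by more than the threshold (second score difference)
        return new if new["score"] - previous["score"] > score_threshold else previous
    # new image is from the higher-quality region (third score difference):
    # adopt it unless the previous candidate leads by more than the threshold
    return new if previous["score"] - new["score"] <= score_threshold else previous
```

The asymmetry is deliberate: a face from the higher-quality region is adopted even when it scores slightly lower, while a face from the lower-quality region must win by a clear margin.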
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present application, and details are not repeated here.
In the identity information determining method provided by the application, at least one optimal face image needs to be selected from the target face images included in the N shot images, and then the identity information corresponding to the target face image is determined according to the optimal face image. The selection of the at least one best face image can be realized in three different ways. Therefore, the identity information determination method provided by the present application will be described below by three embodiments.
Fig. 3 is a flowchart of an identity information determining method according to an embodiment of the present application. The method is applied to a server, and the embodiment introduces the identity information determination method in combination with a first manner of selecting at least one optimal face image. Referring to fig. 3, the method includes:
step 301: n shot images are obtained, wherein N is a positive integer.
It should be noted that the N captured images may be determined from images or videos captured by the image capturing device. The image capturing apparatus may be the image capturing apparatus 101 shown in fig. 1.
Step 302: each of the taken images is divided into at least two regions different in imaging quality.
Since the photographing lens of an image capturing device generally has lens distortion, the imaging quality of a captured picture or video, that is, of a captured image, tends to be uneven. Specifically, lens distortion generally distorts objects or persons located in the edge regions of a captured image; that is, the imaging quality near the edge regions of a captured image is typically lower than near its center region. Therefore, each captured image can be divided into at least two regions according to the differences in imaging quality between different regions, and the imaging quality within each region can be regarded as uniform.
Wherein step 302 may include: dividing each shot image into at least two areas with different imaging qualities according to a preset corresponding relation between the imaging qualities and the position information; or, determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
In one possible case, the correspondence between imaging quality and position information may be that the imaging quality is highest at the center position of each captured image and decreases gradually from the center position toward the edge positions. The following takes a captured image i, which is any one of the acquired N captured images, as an example of dividing a captured image into at least two regions with different imaging quality according to this correspondence.
For example, referring to fig. 4, the imaging quality is highest at the center position of the captured image i, and gradually decreases in the direction of spreading from the center position to the edge position, that is, the imaging quality is lowest at the edge position. According to such a correspondence relationship between the imaging quality and the positional information, a rectangular frame whose center position coincides with the center position of the captured image i can be determined in the captured image i, and the rectangular frame can divide the captured image i into the area a and the area B. The region a is a region including the center position of the captured image i, and the region B is a region located outside the region a including the edge position of the captured image i, that is, the imaging quality of the region a is higher than that of the region B. Of course, each shot image may be divided into at least two regions with different imaging qualities according to the corresponding relationship between the imaging qualities and the position information in other manners, which is not limited in the embodiment of the present application.
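The region A/region B split from fig. 4 can be sketched as a centered rectangle classifier. The 0.6 scale factor below is an illustrative choice, not a value from the source.

```python
def divide_regions(width, height, scale=0.6):
    """Return a classifier mapping a pixel position to region 'A' (a
    centered rectangle, highest imaging quality) or region 'B' (the band
    outside it, containing the edge positions)."""
    rect_w, rect_h = width * scale, height * scale
    left, top = (width - rect_w) / 2, (height - rect_h) / 2

    def region_of(x, y):
        in_a = left <= x < left + rect_w and top <= y < top + rect_h
        return "A" if in_a else "B"

    return region_of
```

A target face image can then be assigned a region by classifying, for example, the center of its bounding box.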
In addition, in a possible case, the server stores historical captured images acquired before the N captured images. In this case, the imaging quality and position information of each region in the historical captured images can be collected and analyzed to determine the correspondence between imaging quality and position information for each of the N captured images, and each captured image can then be divided into at least two regions with different imaging quality accordingly. Of course, in practical applications, each captured image may also be divided into at least two regions with different imaging quality in other manners, which is not limited in the embodiments of the present application.
Step 303: the region where the target face image is located is determined from at least two regions included in each captured image.
It should be noted that the target face image refers to a face image whose identity information is to be determined. The target face image is an image that can present facial features of the target face.
After step 303 is performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region in each captured image in which the target face image is located. Specifically, this can be achieved by steps 304-306 as follows.
Step 304: at least one face image in an area with the highest imaging quality is selected from the target face images included in the N shot images.
Since the at least two regions included in each of the N captured images are divided according to differences in imaging quality, these regions can be sorted by imaging quality to determine the region with the highest imaging quality. Among the target face images included in the N captured images, the at least one face image located in the region with the highest imaging quality is therefore the at least one face image with the highest imaging quality.
Step 305: and determining the quality score of each face image in the at least one face image.
It should be noted that after the quality score of the target face image included in each shot image is determined, at least one face image can be compared more conveniently, so that at least one optimal face image can be selected from the at least one face image conveniently. Since the at least one facial image is in the region with the highest imaging quality, for the at least one facial image, the quality score of each facial image in the at least one facial image may be determined according to the plurality of score items of each facial image in the at least one facial image and the third weight of each score item.
The plurality of scoring items are scoring items capable of evaluating facial features of the target face and used for determining the quality score of the target face image included in each shot image. The plurality of scoring items may include the interpupillary distance of the target face, the deflection angle of the target face relative to the photographing lens, and the like. The third weight of each score may be a proportion of each score among the plurality of scores. The larger the proportion of any scoring item in the scoring items is, the larger the third weight of any scoring item in the scoring items is, namely, the higher the importance degree of any scoring item in the scoring items is.
Determining the quality score of each of the at least one face image according to the plurality of scoring items of each face image and the third weight of each scoring item means multiplying each scoring item by its respective third weight and summing the products to obtain the quality score of each face image.
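The weighted sum just described can be written directly. The item names and weight values below are illustrative; the source only requires the third weights to reflect each scoring item's relative importance.

```python
def face_quality(scoring_items, third_weights):
    """Quality score = each scoring item multiplied by its (third)
    weight, then summed over all items."""
    return sum(third_weights[name] * value
               for name, value in scoring_items.items())

score = face_quality({"pupil_distance": 0.9, "yaw_angle": 0.6},
                     {"pupil_distance": 0.5, "yaw_angle": 0.5})
# weighted sum: 0.5 * 0.9 + 0.5 * 0.6 = 0.75
```

Scoring items are assumed here to be pre-normalized to comparable scales so that the weights alone express relative importance.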
Step 306: and selecting at least one optimal face image from the at least one face image according to the quality score of the at least one face image.
Because face images with the same quality score, that is, with the same imaging quality, may exist in the at least one face image, more than one face image may share the highest score. Therefore, at least one optimal face image is selected from the at least one face image according to the quality scores of the at least one face image.
Step 307: and determining the identity information corresponding to the target face image according to the at least one optimal face image.
It should be noted that, when the identity information corresponding to the target face image is determined according to the at least one optimal face image, a similar face image with the highest similarity to the at least one optimal face image may be retrieved from the face database. The face database stores a plurality of face images and identity information corresponding to each face image. And then determining the identity information corresponding to the similar face images as the identity information corresponding to the target face image. Of course, the identity information corresponding to the target face image may also be determined according to the at least one optimal face image in other manners, which is not specifically limited in this embodiment of the application.
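The retrieval step can be sketched as a nearest-neighbor search over a face database. The `features` fields and the cosine similarity metric are assumptions for illustration; the source does not fix how similarity is computed, and a real system would compare learned face-feature vectors.

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def identify(best_faces, face_db, similarity=cosine):
    """Return the identity of the database face most similar to any of
    the optimal face images; each db entry pairs an identity with a
    hypothetical feature vector."""
    best_entry, best_sim = None, float("-inf")
    for face in best_faces:
        for entry in face_db:
            sim = similarity(face["features"], entry["features"])
            if sim > best_sim:
                best_entry, best_sim = entry, sim
    return best_entry["identity"]
```

Searching with several optimal face images, as above, lets the most distinctive of them drive the match rather than relying on a single frame.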
In the embodiment of the present application, N captured images are acquired first. Because of lens distortion of the photographing lens and similar factors, the imaging quality of each captured image is uneven, so each captured image can be divided into at least two regions with different imaging quality. Then, the region where the target face image is located is determined from the at least two regions included in each captured image. Since, among the at least two regions of each captured image, the target face image has the highest imaging quality when it is in the region with the highest imaging quality, at least one face image located in that region can be selected from the target face images included in the N captured images. A quality score of each of the at least one face image is then determined, and at least one optimal face image is selected from the at least one face image according to the quality scores. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because each captured image is divided into at least two regions according to imaging quality and the at least one optimal face image is selected from the face images in the region with the highest imaging quality, the at least one optimal face image is determined more efficiently and accurately, and the identity information corresponding to the target face image is in turn determined more efficiently and accurately.
Fig. 5 is a flowchart of an identity information determining method according to an embodiment of the present application. The method is applied to a server, and the embodiment introduces the identity information determination method in combination with a second method of selecting at least one optimal face image. Referring to fig. 5, the method includes:
step 501: n shot images are obtained, wherein N is a positive integer.
Step 502: each of the taken images is divided into at least two regions different in imaging quality.
Step 503: the region where the target face image is located is determined from at least two regions included in each captured image.
It should be noted that the contents of steps 501 to 503 are similar to the contents of steps 301 to 303 in the embodiment shown in fig. 3, and therefore are not described herein again.
After steps 501-503 are performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region where the target face image is located in each captured image. Specifically, this can be realized by steps 504 to 505 as follows.
Step 504: and determining the quality score of the target face image contained in each shot image.
It should be noted that after the quality score of the target face image included in each captured image is determined, the target face images can be compared more conveniently, which facilitates selecting at least one optimal face image from the target face images included in the N captured images. In the embodiment of the present application, the same quality scoring algorithm may be used for every region in a captured image, or different quality scoring algorithms may be used for different regions. The determination of the quality score of the target face image included in each captured image is therefore described below in two cases.
In the first case, a plurality of scoring items of the target face image included in each captured image are determined, and the quality score of the target face image included in each captured image is determined according to the plurality of scoring items and the fourth weight of each scoring item.
Specifically, the quality score of the target face image included in each captured image is determined according to the plurality of scoring items and the fourth weight of each scoring item: each scoring item is multiplied by its respective fourth weight, and the products are summed to obtain the quality score. That is, the same quality scoring algorithm is used for every region in the captured image, regardless of which of the at least two regions the target face image is in.
It should be noted that, since the plurality of scoring items have been described in step 305 in the embodiment shown in fig. 3, they are not described herein again.
In the second case, the at least two regions of each captured image include a first region and a second region, and the imaging quality of the first region is higher than that of the second region. For the current captured image, which is any one of the N captured images, a plurality of scoring items of the target face image included in the current captured image may be determined. When the target face image is in the first region in the current captured image, the quality score of the target face image included in the current captured image is determined according to the plurality of scoring items and the first weight of each scoring item. When the target face image is in the second region in the current captured image, the quality score is determined according to the plurality of scoring items and the second weight of each scoring item. That is, different quality scoring algorithms are used for different regions in the captured image.
For the second case, illustratively, the at least two regions of each captured image include a first region, which may be a region including the center position in each captured image, and the second region may be a region including the edge position outside the first region. For example, referring to fig. 4, a region a in fig. 4 represents a first region, and a region B represents a second region. Of course, the first area and the second area may also be determined according to other dividing manners, which is not specifically limited in this embodiment of the application.
It should be noted that the first weight of each score may be a proportion of each score in the plurality of scores when the target face image is in the first area in the current captured image. The larger the proportion of any scoring item in the scoring items is, the larger the first weight of any scoring item in the scoring items is, namely, the higher the importance degree of any scoring item in the scoring items is.
The quality score of the target face image included in the current captured image is determined according to the plurality of scoring items of the target face image included in the current captured image and the first weight of each scoring item, that is, each scoring item in the plurality of scoring items is multiplied by the respective first weight, and then the multiple scoring items are summed to determine the quality score of the target face image included in the current captured image.
Similarly, the second weight of each score may be a proportion of each score in the plurality of scores when the target face image is in the second region in the current captured image. The larger the proportion of any scoring item in the scoring items is, the larger the second weight of any scoring item in the scoring items is, namely, the higher the importance degree of any scoring item in the scoring items is.
The quality score of the target face image included in the current captured image is determined according to the plurality of scoring items of the target face image included in the current captured image and the second weight of each scoring item, that is, each scoring item in the plurality of scoring items is multiplied by the respective second weight, and then the result is summed to determine the quality score of the target face image included in the current captured image.
It should be noted that the scoring items used when the target face image is in the first region in the current captured image may be the same as those used when it is in the second region. For example, in both regions the scoring items may be the interpupillary distance of the target face and the deflection angle of the target face relative to the photographing lens. In this case, for the same scoring item, the value of the first weight and the value of the second weight may differ. That is, for the same scoring item, when the target face image is in the first region, the item may be more important and the first weight higher, while when the target face image is in the second region, the item may be less important and the second weight lower.
For example, when the plurality of scoring items include the interpupillary distance of the target face, regarding the scoring item of the interpupillary distance of the target face, the interpupillary distance of the target face is a scoring item that can more directly reflect the distance between the target face and the shooting lens. When the target face is far away from the shooting lens, the pupil distance of the target face is small; when the target face is close to the shooting lens, the pupil distance of the target face is large. The imaging quality of the first area is higher than that of the second area, so that the target face image is not distorted or distorted when located in the first area, and the target face image is distorted or distorted when located in the second area. Therefore, for the same target face in the first area, the interpupillary distance of the target face is not affected by the imaging quality of the first area no matter where the target face is in the first area. But for the same target face in the second area, the interpupillary distance of the target face is affected by the imaging quality of the second area. In short, the scoring term of the interpupillary distance of the target face is more important for the target face image in the second area than for the target face image in the first area. Therefore, the second weight of the score of the interpupillary distance of the target face can be higher than the first weight, so that the importance degree of the interpupillary distance of the target face when the target face is in the second area is improved. Of course, the values of the first weight and the second weight of other same scoring items may also be set according to actual situations, which is not specifically limited in this embodiment of the application.
Step 505: and selecting at least one optimal face image from the target face images contained in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image contained in each shot image.
It should be noted that the optimal face image may be the image that most clearly and completely represents the facial features of the target face among the target face images included in the N captured images. For example, among the target face images included in the N captured images, the optimal face image presents the facial features of the target face most clearly, and the target face is not occluded by other objects or persons.
In one possible case, more than one image which can present the facial features of the target face clearly and completely may be selected from the target face images included in the N captured images according to the area of the target face image in each captured image and the quality score of the target face image included in each captured image, that is, at least one optimal face image may be selected from the target face images included in the N captured images.
In a possible case, when some of the N shot images are captured, the target face may have a large deflection angle relative to the shooting lens, or may be far from the shooting lens, so that these shot images include a target face image that does not clearly and completely represent the facial features of the target face, and the identity information determined from such target face images is relatively inaccurate. Therefore, by selecting from the N captured images at least one optimal face image that presents the facial features of the target face clearly and completely, the identity information determined from the at least one optimal face image is more accurate.
Here, the step 505 may be implemented by the following steps (1) to (3) under the condition that each captured image is divided into a first region and a second region.
(1): when the target face image is in the first area in M shooting images included in the N shooting images, selecting the target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image according to the quality scores of the target face images included in the M shooting images, wherein M is a positive integer smaller than N.
When the target face image is in the first area in M shot images included in the N shot images, the target face image with the highest quality score is selected as the first candidate face image from the target face images included in the M shot images, that is, the target face image which can present the facial features of the target face clearly and completely is selected as the first candidate face image from all the target face images in the first area.
(2): and when the target face image is in a second area in K shooting images included in the N shooting images, selecting the target face image with the highest quality score as a second candidate face image from the target face images included in the K shooting images according to the quality scores of the target face images included in the K shooting images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N.
When the target face image is in the second region in K shot images included in the N shot images, the target face image with the highest quality score is selected from the target face images included in the K shot images as the second candidate face image, that is, the target face image that can present the facial features of the target face most clearly and completely is selected from all the target face images in the second region as the second candidate face image.
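Steps (1) and (2) amount to taking a per-region maximum over the quality scores. A minimal sketch, assuming each target face image is represented as a (quality score, in-first-region) pair — a representation not specified by this application:

```python
def candidates_per_region(faces):
    """Steps (1)-(2): from the M images whose target face image lies in the
    first region and the K images whose target face image lies in the second
    region, pick the highest quality score in each region as the first and
    second candidate face image respectively.

    `faces` is a list of (quality_score, in_first_region) pairs.
    """
    first = [score for score, in_first in faces if in_first]
    second = [score for score, in_first in faces if not in_first]
    # M and K are positive in the text, but guard against an empty region.
    return (max(first) if first else None,
            max(second) if second else None)
```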
(3): and selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
The first candidate face image is a target face image with the highest quality score selected from the M shot images; the second candidate face image is a target face image with the highest quality score selected from the K captured images, and the sum of K and M is equal to N. That is, of the target face images included in the N photographed images, the candidate face image that can be used as at least one optimal face image is the first candidate face image in the first region and the second candidate face image in the second region. At least one best face image may be selected from the first candidate face image and the second candidate face image based on the quality score of the first candidate face image and the quality score of the second candidate face image.
Wherein, step (3) may include: determining a first score difference; when the first score difference is equal to a score threshold, determining both the first candidate face image and the second candidate face image as the optimal face image; when the first score difference is greater than the score threshold, determining the second candidate face image as the optimal face image; and when the first score difference is smaller than the score threshold, determining the first candidate face image as the optimal face image.
It should be noted that the first score difference is the difference between the quality score of the second candidate face image and the quality score of the first candidate face image. Since the imaging quality of the second region is lower than that of the first region, for a target face image included in the same captured image, the imaging quality when the image is in the second region is lower than when it is in the first region. The quality score of a target face image in the second region is therefore lower than that of the same face image in the first region, and the score threshold reflects this difference. That is, the quality score of a target face image in the second region needs the score threshold added to it to be comparable to the quality score of a target face image in the first region. Therefore, when the difference between the quality score of the second candidate face image and that of the first candidate face image is equal to the score threshold, both the first candidate face image and the second candidate face image may be determined as the optimal face image. When this difference is greater than the score threshold, the second candidate face image presents the facial features of the target face more clearly and completely than the first candidate face image, so the second candidate face image can be determined as the optimal face image. Conversely, when the difference is smaller than the score threshold, the first candidate face image presents the facial features of the target face more clearly and completely than the second candidate face image, so the first candidate face image can be determined as the optimal face image.
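The threshold-compensated comparison of step (3) can be sketched as below. The score threshold value and the return convention are illustrative assumptions; in practice the threshold would be set according to the actual imaging-quality gap between the two regions.

```python
def select_best(first_score, second_score, score_threshold):
    """Step (3): compare the two candidates, compensating the second
    candidate for the second region's lower imaging quality."""
    diff = second_score - first_score        # the first score difference
    if diff > score_threshold:               # second candidate is clearly better
        return ["second"]
    if diff < score_threshold:               # first candidate is better
        return ["first"]
    return ["first", "second"]               # exactly at the threshold: keep both
```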
It should be noted that the above steps (1)-(3) may be applied to the first case of the above step 504, and may also be applied to the second case of the above step 504. In either case, each captured image is divided into at least two regions of different imaging quality, including a first region and a second region. In the first case, the quality score of the target face image included in each shot image is determined according to the plurality of scoring items and the fourth weight of each scoring item, that is, according to the same quality scoring algorithm regardless of region. In the second case, when the target face image is in the first region of the current shot image, its quality score is determined according to the plurality of scoring items of the target face image included in the current shot image and the first weight of each scoring item, and when the target face image is in the second region, its quality score is determined according to the plurality of scoring items and the second weight of each scoring item; that is, the second case uses a different quality scoring algorithm depending on the region.
Therefore, the scoring threshold value applied to the first case in the steps (1) to (3) may be different from the scoring threshold value applied to the second case in the steps (1) to (3). The specific value of the scoring threshold may be set according to actual conditions, which is not specifically limited in the embodiment of the present application.
Step 506: and determining the identity information corresponding to the target face image according to the at least one optimal face image.
It should be noted that the content of step 506 is similar to the content of step 307 in the embodiment shown in fig. 3, and therefore, the description thereof is omitted here.
In the embodiment of the present application, N photographed images are acquired first, and due to lens distortion of a photographing lens and the like, the imaging quality of each photographed image is not uniform, so that each photographed image can be divided into at least two regions with different imaging qualities. Then, the area where the target face image is located is determined from at least two areas included in each shot image. And then determining the quality score of the target face image included in each shot image, and then selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image. And finally, determining the identity information corresponding to the target face image according to the at least one optimal face image. In the embodiment of the application, at least one optimal face image is selected according to the area where the target face image included in each shot image is located and the quality score of the target face image included in each shot image, so that the determined at least one optimal face image can be more accurate, and further the determination of the identity information corresponding to the target face image is more accurate.
Fig. 6 is a flowchart of an identity information determining method according to an embodiment of the present application. The method is applied to a server, and the embodiment introduces the identity information determination method in combination with a third method of selecting at least one optimal face image. Referring to fig. 6, the method includes:
step 601: n shot images are obtained, wherein N is a positive integer.
Step 602: each of the taken images is divided into at least two regions different in imaging quality.
Step 603: the region where the target face image is located is determined from at least two regions included in each captured image.
It should be noted that the contents of step 601 to step 603 are similar to the contents of step 301 to step 303 in the embodiment shown in fig. 3, and therefore are not described herein again.
After steps 601-603 are performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region in which the target face image is located in each captured image. Specifically, this can be achieved by steps 604 to 607 as follows.
Step 604: and selecting one shot image from the N shot images, and determining the quality score of the target face image included in the selected shot image.
It should be noted that the quality score of the target face image included in the shot image determined in step 604 is similar to the quality score of the target face image included in the shot image determined in step 504 in the embodiment shown in fig. 5, and details are not repeated here.
Step 605: and determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment.
Here, similarly to the above-described embodiment shown in fig. 3, the at least two regions of each captured image may include a first region and a second region. The first region and the second region have been described in the above embodiments, and are not described again here. The step 605 may be implemented by the following steps (1) to (5) on the condition that the at least two regions of each captured image include a first region and a second region.
(1): and when the area of the target face image in the selected shot image is the same as the area of the candidate face image determined at the previous moment, and the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score from the target face image included in the selected shot image and the candidate face image determined at the previous moment as the candidate face image at the current moment.
It should be noted that, when the area of the target face image in the selected captured image is the same as the area of the candidate face image determined at the previous time, it indicates that the imaging quality of the target face image included in the selected captured image is the same as the imaging quality of the candidate face image determined at the previous time. Therefore, when the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment, the face image with the highest quality score can be directly selected from the target face image included in the selected shot image and the candidate face image determined at the previous moment to serve as the candidate face image at the current moment.
(2): and when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the candidate face image determined at the previous moment is the first area, determining a second grading difference value.
It should be noted that the second score difference is the difference between the quality score of the target face image included in the selected shot image and the quality score of the candidate face image determined at the previous moment. When the region of the target face image in the selected shot image differs from the region of the candidate face image determined at the previous moment, and the region of the candidate face image determined at the previous moment is the first region, the target face image included in the selected shot image is in the second region. In this case, the imaging quality of the region where the target face image is located in the selected shot image is lower than that of the region where the candidate face image determined at the previous moment is located. Therefore, the difference between the two quality scores needs to be determined, so that the candidate face image at the current moment can be selected based on this difference.
(3): and when the second score difference is larger than the score threshold, taking the target face image included in the selected shot image as the candidate face image at the current moment.
It should be noted that, when the second score difference is greater than the score threshold, that is, the difference between the quality score of the target face image included in the selected captured image and the quality score of the candidate face image determined at the previous time is greater than the score threshold, it indicates that the facial features of the target face can be more clearly and completely presented in the target face image included in the selected captured image than in the candidate face image determined at the previous time. Therefore, the target face image included in the selected captured image may be used as the candidate face image at the current time, or it may be understood that the candidate face image is updated. In contrast, when the second score difference is less than or equal to the score threshold, the candidate face image determined at the previous time may be used as the candidate face image at the current time. In other words, when the second score difference is greater than the score threshold, the candidate face image is updated; and when the second score difference value is less than or equal to the score threshold value, the candidate face image is not updated.
(4): and when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the target face image in the selected shot image is the first area, determining a third score difference.
It should be noted that the third score difference is the difference between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image. When the region of the target face image in the selected shot image differs from the region of the candidate face image determined at the previous moment, and the region of the target face image in the selected shot image is the first region, the candidate face image determined at the previous moment is in the second region. In this case, the imaging quality of the region where the target face image is located in the selected shot image is higher than that of the region where the candidate face image determined at the previous moment is located, so the difference between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image needs to be determined, so that the candidate face image at the current moment can be selected based on this difference.
(5): and when the third score difference is smaller than or equal to the score threshold, taking the target face image included in the selected shot image as the candidate face image at the current moment.
It should be noted that when the third score difference is less than or equal to the score threshold, that is, the difference between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image is less than or equal to the score threshold, the target face image included in the selected shot image can present the facial features of the target face more clearly and completely than the candidate face image determined at the previous moment. Therefore, the target face image included in the selected shot image may be used as the candidate face image at the current moment; in other words, the candidate face image is updated. Conversely, when the third score difference is greater than the score threshold, the candidate face image determined at the previous moment may be used as the candidate face image at the current moment. That is, when the third score difference is greater than the score threshold, the candidate face image is not updated; and when the third score difference is less than or equal to the score threshold, the candidate face image is updated.
Step 606: and judging whether the N photographed images are processed or not, if so, executing the step 607, otherwise, selecting one photographed image from unprocessed photographed images included in the N photographed images, and returning to the step 604.
Step 607: and taking the candidate face image determined at the current moment as the optimal face image.
It should be noted that the above steps 604 to 606 determine the candidate face image at the current time by continuously selecting one shot image from the N shot images, so as to realize iterative update of the candidate face image at the current time. Under the condition, when the N shot images are processed, the optimal face image can be directly determined, so that the process of determining the optimal face image is simpler and more efficient.
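Steps 604-607, together with the update rules (1)-(5) of step 605, can be sketched as a single pass over the N images. This is an illustrative sketch under assumptions: each target face is reduced to a (quality score, in-first-region) pair, and ties keep the existing candidate.

```python
def update_candidate(candidate, new_face, score_threshold):
    """One iteration of step 605. Each face is a (quality_score,
    in_first_region) pair; returns the candidate face at the current moment."""
    cand_score, cand_first = candidate
    new_score, new_first = new_face
    if cand_first == new_first:
        # Rule (1): same region, so keep the face with the higher quality score.
        return new_face if new_score > cand_score else candidate
    if cand_first:
        # Rules (2)-(3): candidate in first region, new face in second region;
        # update only if the new score exceeds the candidate's by the threshold.
        return new_face if new_score - cand_score > score_threshold else candidate
    # Rules (4)-(5): candidate in second region, new face in first region;
    # update unless the candidate leads by more than the threshold.
    return new_face if cand_score - new_score <= score_threshold else candidate


def best_face(faces, score_threshold):
    """Steps 604-607: iterate over the target faces from the N shot images,
    updating the candidate; the final candidate is the best face image."""
    candidate = faces[0]
    for face in faces[1:]:
        candidate = update_candidate(candidate, face, score_threshold)
    return candidate
```

Because the candidate is updated incrementally, the best face image is available as soon as the last shot image has been processed, which is the efficiency point made above.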
Step 608: and determining the identity information corresponding to the target face image according to the optimal face image.
It should be noted that the content of step 608 is similar to the content of step 307 in the embodiment shown in fig. 3, and therefore, the description thereof is omitted here.
In the embodiment of the present application, N photographed images are acquired first, and due to lens distortion of a photographing lens and the like, the imaging quality of each photographed image is not uniform, so that each photographed image can be divided into at least two regions with different imaging qualities. Then, the area where the target face image is located is determined from at least two areas included in each shot image. And then selecting one shot image from the N shot images, and determining the quality score of the target face image included in the selected shot image. And then determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment. Then judging whether the N shot images are processed or not, and if so, taking the candidate face image determined at the current moment as the optimal face image; and if not, selecting one shot image from the unprocessed shot images included in the N shot images, and returning to the step of determining the quality score of the target face image included in the selected shot image until the N shot images are processed. And finally, determining the identity information corresponding to the target face image according to the optimal face image. In the embodiment of the application, the candidate face image at the current moment is determined by continuously selecting one shot image from the N shot images, so that the iterative update of the candidate face image at the current moment is realized. 
Under the condition, when the N shot images are processed, the optimal face image can be directly determined, so that the process of determining the optimal face image is simpler and more efficient, the determined optimal face image can be more accurate, and the identity information corresponding to the target face image can be more accurately determined.
It should be noted that the embodiment shown in fig. 3, the embodiment shown in fig. 5, and the embodiment shown in fig. 6 are three parallel embodiments that can implement the identity information determination method provided in the present application. Briefly, the three embodiments differ in that the embodiment shown in fig. 3 divides each image into at least two regions and selects at least one optimal face image from the face images in the region with the highest imaging quality, since the target face image in that region has the highest imaging quality. The embodiment shown in fig. 5 selects at least one optimal face image according to the region where the target face image is located in each captured image and the quality score of the target face image included in each captured image. The embodiment shown in fig. 6 iteratively updates the candidate face image at the current moment, so that the best face image is determined once the N captured images have been processed. All three embodiments achieve the technical effect that the determined optimal face image is more accurate, and hence the determination of the identity information corresponding to the target face image is more accurate.
Fig. 7 is a block diagram of an identity information determining apparatus according to an embodiment of the present application, and referring to fig. 7, the apparatus includes an obtaining module 701, a dividing module 702, a first determining module 703, a selecting module 704, and a second determining module 705.
An obtaining module 701, configured to obtain N captured images, where N is a positive integer;
a dividing module 702 configured to divide each captured image into at least two regions with different imaging qualities;
a first determining module 703, configured to determine, from at least two regions included in each captured image, a region where the target face image is located;
a selecting module 704, configured to select at least one optimal face image from the target face images included in the N captured images according to a region where the target face image is located in each captured image;
the second determining module 705 is configured to determine, according to the at least one optimal face image, identity information corresponding to the target face image.
Optionally, the dividing module 702 includes:
the dividing submodule is used for dividing each shot image into at least two areas with different imaging qualities according to a preset corresponding relation between imaging quality and position information; alternatively,
determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
Optionally, the selecting module 704 includes:
the first selection submodule is used for selecting at least one face image in an area with the highest imaging quality from the target face images contained in the N shot images;
the first determining submodule is used for determining the quality score of each face image in the at least one face image;
and the second selection submodule is used for selecting at least one optimal facial image from the at least one facial image according to the quality score of the at least one facial image.
Optionally, the selecting module 704 includes:
the second determining submodule is used for determining the quality score of the target face image contained in each shot image;
and the third selection sub-module is used for selecting at least one optimal face image from the target face images contained in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image contained in each shot image.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the third selection submodule includes:
the first selection unit is used for selecting a target face image with the highest quality score from the target face images included in the M shot images as a first candidate face image according to the quality scores of the target face images included in the M shot images when the target face image is in a first area in the M shot images included in the N shot images, wherein M is a positive integer smaller than N;
the second selection unit is used for selecting a target face image with the highest quality score from the target face images contained in the K shot images as a second candidate face image according to the quality scores of the target face images contained in the K shot images when the target face image is in a second area in the K shot images contained in the N shot images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
and the third selection unit is used for selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
Optionally, the third selecting unit includes:
a first determining subunit, configured to determine a first score difference, where the first score difference is a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
a second determining subunit, configured to determine, when the first score difference is equal to the score threshold, both the first candidate face image and the second candidate face image as the optimal face image; when the first score difference is greater than the score threshold, determine the second candidate face image as the optimal face image; and when the first score difference is smaller than the score threshold, determine the first candidate face image as the optimal face image.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the second determination submodule includes:
the first determining unit is used for determining a plurality of scores of a target face image included in a current shot image, wherein the current shot image is any one of N shot images;
the second determining unit is used for determining the quality score of the target face image included in the current shot image according to the multiple scoring items of the target face image included in the current shot image and the first weight of each scoring item when the target face image is in the first area in the current shot image;
and the third determining unit is used for determining the quality score of the target face image included in the current shot image according to the plurality of score items of the target face image included in the current shot image and the second weight of each score item when the target face image is in the second area in the current shot image.
Optionally, the selecting module 704 includes:
the third determining submodule is used for selecting one shot image from the N shot images and determining the quality score of the target face image included in the selected shot image;
the fourth determining submodule is used for determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment;
the judgment submodule is used for judging whether the N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the best face image when the N shot images are processed, selecting one shot image from the unprocessed shot images included in the N shot images when the N shot images are not processed, and triggering the third determining sub-module to determine the quality score of the target face image included in the selected shot image until the N shot images are processed.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the fourth determination submodule includes:
the fourth selection unit is used for selecting the face image with the highest quality score from the target face image included in the selected shot image and the candidate face image determined at the previous moment as the candidate face image at the current moment when the area of the target face image in the selected shot image is the same as the area of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment;
the fourth determining unit is used for determining a second score difference value when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment and the area of the candidate face image determined at the previous moment is the first area, wherein the second score difference value is the difference value between the quality score of the target face image included in the selected shot image and the quality score of the candidate face image determined at the previous moment;
the fifth determining unit is used for taking the target face image included in the selected shot image as the candidate face image at the current moment when the second score difference value is greater than the score threshold;
the sixth determining unit is used for determining a third score difference value when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment and the area of the target face image in the selected shot image is the first area, wherein the third score difference value is the difference value between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image;
and the seventh determining unit is used for taking the target face image included in the selected shot image as the candidate face image at the current moment when the third score difference is smaller than or equal to the score threshold.
In the embodiment of the present application, N shot images are acquired first. Because of lens distortion of the shooting lens and similar factors, the imaging quality within each shot image is not uniform, so each shot image can be divided into at least two regions with different imaging qualities. Then, the region where the target face image is located is determined from the at least two regions included in each shot image. Next, at least one optimal face image is selected from the target face images included in the N shot images according to the region where the target face image is located in each shot image. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region where the target face image is located in each shot image, the selected optimal face image is more accurate, and in turn the determination of the identity information corresponding to the target face image is more accurate.
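The overall flow described above can be illustrated with a short Python sketch. The central/edge region split, the per-face scores, and the preference rule below are hypothetical placeholders chosen for illustration, not the specific implementation of this application:

```python
# Hypothetical sketch: divide each shot image into a central region with
# higher imaging quality ("first") and an edge region ("second"), then keep
# the target face from the best region, preferring the higher quality score
# within a region.

def region_of(face_center, image_size, central_fraction=0.5):
    """Return 'first' (central, higher imaging quality) or 'second' (edge)."""
    w, h = image_size
    x, y = face_center
    cx0, cx1 = w * (1 - central_fraction) / 2, w * (1 + central_fraction) / 2
    cy0, cy1 = h * (1 - central_fraction) / 2, h * (1 + central_fraction) / 2
    return "first" if cx0 <= x <= cx1 and cy0 <= y <= cy1 else "second"

def select_best_face(shots, image_size):
    """shots: list of (face_center, quality_score) for the target face,
    one entry per shot image."""
    best = None
    for center, score in shots:
        candidate = (region_of(center, image_size), score, center)
        if best is None:
            best = candidate
            continue
        # Prefer the higher-quality region; within a region, the higher score.
        better_region = candidate[0] == "first" and best[0] == "second"
        same_region = candidate[0] == best[0]
        if better_region or (same_region and candidate[1] > best[1]):
            best = candidate
    return best

best = select_best_face(
    [((10, 10), 0.9), ((320, 240), 0.7)], image_size=(640, 480)
)
print(best[0])  # prints "first": the central face wins despite a lower score
```

Note that this simple sketch always prefers the central region; the claims below refine this with score-difference thresholds.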
It should be noted that, when the identity information determining apparatus provided in the above embodiment determines the identity information corresponding to a target face, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the identity information determining apparatus provided in the above embodiment and the identity information determining method embodiment belong to the same concept; for the specific implementation process, reference may be made to the method embodiment, and details are not described herein again.
Fig. 8 is a schematic structural diagram of an identity information determining apparatus 800 according to an embodiment of the present application. The identity information determining apparatus 800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 801 and one or more memories 802, where the memory 802 stores at least one instruction that is loaded and executed by the processor 801. The identity information determining apparatus 800 may further include a wired or wireless network interface, a keyboard, an input/output interface, and other components to facilitate input and output, as well as other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, comprising instructions executable by a processor in an identity information determination device to perform the identity information determination method of the above embodiments. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. A method for identity information determination, the method comprising:
acquiring N shot images, wherein N is a positive integer;
dividing each shot image into at least two areas with different imaging qualities;
determining the area where the target face image is located from at least two areas included in each shot image;
selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image;
and determining the identity information corresponding to the target face image according to the at least one optimal face image.
2. The method of claim 1, wherein said dividing each captured image into at least two regions of different imaging quality comprises:
dividing each shot image into at least two areas with different imaging qualities according to a preset corresponding relation between the imaging quality and the position information; alternatively,
determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
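As one hypothetical way to realize the second alternative of claim 2, the per-position imaging quality can be averaged over historical shot images on a coarse grid and thresholded into two areas. The grid representation and the threshold below are illustrative assumptions, not part of the claim:

```python
def build_region_map(history, threshold):
    """history: list of 2-D grids (lists of lists) holding per-cell imaging
    quality measurements from historical shot images. Returns a grid marking
    each cell as area 1 (average quality >= threshold) or area 2."""
    rows, cols = len(history[0]), len(history[0][0])
    region_map = []
    for r in range(rows):
        row = []
        for c in range(cols):
            avg = sum(grid[r][c] for grid in history) / len(history)
            row.append(1 if avg >= threshold else 2)
        region_map.append(row)
    return region_map

history = [
    [[0.9, 0.4], [0.8, 0.3]],
    [[0.7, 0.2], [0.9, 0.5]],
]
print(build_region_map(history, threshold=0.6))  # prints [[1, 2], [1, 2]]
```

In practice, the statistics could be any sharpness or distortion measure; only the quality-to-position correspondence matters for the division.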
3. The method of claim 1, wherein selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image comprises:
selecting at least one face image in an area with highest imaging quality from the target face images included in the N shot images;
determining a quality score of each face image in the at least one face image;
and selecting at least one optimal face image from the at least one face image according to the quality score of the at least one face image.
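The two-stage selection in claim 3 (filter by best area, then rank by quality score) can be sketched as follows; the face representation and scoring callable are hypothetical:

```python
def select_by_region_then_score(faces, quality_score):
    """faces: list of (area, face) pairs, where a lower area number means
    higher imaging quality; quality_score: callable scoring a single face.
    Keeps only faces in the best-quality area, then returns the top scorer."""
    best_area = min(area for area, _ in faces)
    in_best = [face for area, face in faces if area == best_area]
    return max(in_best, key=quality_score)

faces = [(1, "a"), (2, "b"), (1, "c")]
scores = {"a": 0.6, "b": 0.9, "c": 0.7}
print(select_by_region_then_score(faces, scores.get))  # prints c
```

Note that "b" has the highest raw score but is discarded because it lies in the lower-quality area.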
4. The method of claim 1, wherein selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image comprises:
determining the quality score of a target face image contained in each shot image;
and selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image.
5. The method of claim 4, wherein the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image, wherein the selecting comprises the following steps:
when the target face image is in the first area in M shot images included in the N shot images, selecting the target face image with the highest quality score as a first candidate face image from the target face images included in the M shot images according to the quality scores of the target face images included in the M shot images, wherein M is a positive integer smaller than N;
when the target face image is in the second area in K shot images included in the N shot images, selecting the target face image with the highest quality score as a second candidate face image from the target face images included in the K shot images according to the quality scores of the target face images included in the K shot images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
and selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
6. The method of claim 5, wherein selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image comprises:
determining a first score difference, the first score difference being a difference between the quality score of the second candidate face image and the quality score of the first candidate face image;
when the first score difference value is equal to a score threshold value, determining the first candidate face image and the second candidate face image as optimal face images; and when the first score difference is larger than a score threshold, determining that the second candidate face image is the optimal face image, and when the first score difference is smaller than the score threshold, determining that the first candidate face image is the optimal face image.
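The three-way comparison of claim 6 can be sketched directly; the tuple layout and the threshold value below are illustrative assumptions:

```python
def pick_optimal(first_cand, second_cand, score_threshold):
    """first_cand / second_cand: (face, quality_score) tuples for the
    candidates from the first (higher imaging quality) and second areas.
    Returns the list of optimal face images per the three-way comparison."""
    diff = second_cand[1] - first_cand[1]  # the first score difference
    if diff == score_threshold:
        return [first_cand[0], second_cand[0]]  # both are optimal
    if diff > score_threshold:
        return [second_cand[0]]  # second-area face wins by a clear margin
    return [first_cand[0]]       # otherwise trust the higher-quality area

print(pick_optimal(("f", 0.8), ("s", 0.95), score_threshold=0.1))  # ['s']
```

The threshold biases the choice toward the first area: the second-area candidate must not merely match but clearly exceed the first-area candidate's score.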
7. The method of claim 5, wherein the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the determining the quality score of the target face image included in each shot image comprises the following steps:
determining a plurality of scoring items of a target face image included in a current shot image, wherein the current shot image is any one of the N shot images;
when the target face image is in the first area in the current shot image, determining the quality score of the target face image included in the current shot image according to a plurality of score items of the target face image included in the current shot image and the first weight of each score item;
and when the target face image is in the second area in the current shot image, determining the quality score of the target face image included in the current shot image according to a plurality of score items of the target face image included in the current shot image and the second weight of each score item.
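The area-dependent weighting of claim 7 amounts to two weighted sums over the same score items. The particular score items and weight values below are hypothetical examples, not values from this application:

```python
def quality_score(score_items, weights):
    """Weighted sum of score items (e.g. sharpness, pose, illumination).
    A different weight vector is applied depending on whether the target
    face lies in the first or the second area."""
    return sum(s * w for s, w in zip(score_items, weights))

items = [0.9, 0.6, 0.8]            # hypothetical sharpness, pose, illumination
first_weights = [0.5, 0.3, 0.2]    # first weights, for faces in the first area
second_weights = [0.7, 0.2, 0.1]   # second weights, for faces in the second area

print(round(quality_score(items, first_weights), 3))   # 0.79
print(round(quality_score(items, second_weights), 3))  # 0.83
```

Weighting per area lets a lower-quality area, for example, emphasize sharpness more heavily, since blur there is more likely caused by the lens rather than the subject.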
8. The method of claim 1, wherein selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image comprises:
selecting one shot image from the N shot images, and determining the quality score of a target face image included in the selected shot image;
determining a candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the last moment and the quality score of the candidate face image determined at the last moment;
judging whether the N shot images are processed or not;
and if so, taking the candidate face image determined at the current moment as the optimal face image, otherwise, selecting a shot image from unprocessed shot images included in the N shot images, and returning to the step of determining the quality score of the target face image included in the selected shot image until the N shot images are processed.
9. The method of claim 8, wherein the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the determining the candidate face image at the current moment according to the area of the target face image in the selected shot image, the quality score of the target face image included in the selected shot image, the area of the candidate face image determined at the previous moment, and the quality score of the candidate face image determined at the previous moment includes:
when the area of the target face image in the selected shot image is the same as the area of the candidate face image determined at the previous moment, and the quality score of the target face image included in the selected shot image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score as the candidate face image at the current moment from the target face image included in the selected shot image and the candidate face image determined at the previous moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the candidate face image determined at the previous moment is the first area, determining a second score difference value, wherein the second score difference value is the difference value between the quality score of the target face image included in the selected shot image and the quality score of the candidate face image determined at the previous moment;
when the second score difference is larger than a score threshold, taking a target face image included in the selected shot image as a candidate face image at the current moment;
when the area of the target face image in the selected shot image is different from the area of the candidate face image determined at the previous moment, and the area of the target face image in the selected shot image is the first area, determining a third score difference value, wherein the third score difference value is a difference value between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shot image;
and when the third score difference is smaller than or equal to the score threshold, taking the target face image included in the selected shot image as the candidate face image at the current moment.
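The image-by-image scan of claims 8 and 9 keeps a single running candidate and updates it per the comparisons above. The tuple layout, area encoding, and threshold below are illustrative assumptions:

```python
def update_candidate(candidate, new_face, score_threshold):
    """candidate / new_face: (area, quality_score, face) tuples, where
    area 1 has higher imaging quality than area 2. One update step of the
    running best-face scan described above."""
    if candidate is None:
        return new_face
    if new_face[0] == candidate[0]:
        # Same area: keep whichever has the higher quality score.
        return new_face if new_face[1] > candidate[1] else candidate
    if candidate[0] == 1:
        # Candidate sits in the higher-quality area: replace it only if the
        # newcomer's score exceeds it by more than the threshold.
        return new_face if new_face[1] - candidate[1] > score_threshold else candidate
    # New face sits in the higher-quality area: keep it unless the old
    # candidate's score exceeds it by more than the threshold.
    return new_face if candidate[1] - new_face[1] <= score_threshold else candidate

candidate = None
for face in [(2, 0.70, "a"), (1, 0.65, "b"), (2, 0.72, "c")]:
    candidate = update_candidate(candidate, face, score_threshold=0.1)
print(candidate[2])  # prints "b": the area-1 face survives a 0.72 edge face
```

Processing shots one at a time this way needs only O(1) state, which suits a camera that snapshots the same target repeatedly.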
10. An apparatus for identity information determination, the apparatus comprising:
the acquisition module is used for acquiring N shot images, wherein N is a positive integer;
the dividing module is used for dividing each shot image into at least two areas with different imaging qualities;
the first determining module is used for determining the area where the target face image is located from at least two areas included in each shot image;
the selection module is used for selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image;
and the second determining module is used for determining the identity information corresponding to the target face image according to the at least one optimal face image.
11. The apparatus of claim 10, wherein the partitioning module comprises:
the dividing submodule is used for dividing each shot image into at least two areas with different imaging qualities according to the preset corresponding relation between the imaging quality and the position information; alternatively,
determining the corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging qualities according to the determined corresponding relation.
12. The apparatus of claim 10, wherein the selection module comprises:
the first selection submodule is used for selecting at least one face image in an area with the highest imaging quality from the target face images contained in the N shot images;
the first determining submodule is used for determining the quality score of each face image in the at least one face image;
and the second selection submodule is used for selecting at least one optimal facial image from the at least one facial image according to the quality score of the at least one facial image.
13. The apparatus of claim 10, wherein the selection module comprises:
the second determining submodule is used for determining the quality score of the target face image contained in each shot image;
and the third selection sub-module is used for selecting at least one optimal face image from the target face images included in the N shot images according to the area of the target face image in each shot image and the quality score of the target face image included in each shot image.
14. The apparatus of claim 13, wherein the at least two regions comprise a first region and a second region, an imaging quality of the first region being higher than an imaging quality of the second region;
the third selection submodule includes:
a first selection unit, configured to, when the target face image is in the first region in M shot images included in the N shot images, select, according to quality scores of the target face images included in the M shot images, a target face image with a highest quality score from the target face images included in the M shot images as a first candidate face image, where M is a positive integer smaller than N;
a second selecting unit, configured to, when the target face image is in the second region in K shot images included in the N shot images, select, according to quality scores of the target face images included in the K shot images, a target face image with a highest quality score from the target face images included in the K shot images as a second candidate face image, where K is a positive integer smaller than N, and a sum of K and M is equal to N;
and the third selection unit is used for selecting at least one optimal face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
15. The apparatus of claim 14, wherein the third selection unit comprises:
a first determining subunit, configured to determine a first score difference value, where the first score difference value is a difference value between the quality score of the second candidate face image and the quality score of the first candidate face image;
a second determining subunit, configured to determine, when the first score difference is equal to a score threshold, both the first candidate face image and the second candidate face image as an optimal face image; and when the first score difference is larger than a score threshold, determining that the second candidate face image is the optimal face image, and when the first score difference is smaller than the score threshold, determining that the first candidate face image is the optimal face image.
16. The apparatus of claim 14, wherein the at least two regions comprise a first region and a second region, an imaging quality of the first region being higher than an imaging quality of the second region;
the second determination submodule includes:
the first determining unit is used for determining a plurality of scoring items of a target face image included in a current shot image, wherein the current shot image is any one of the N shot images;
a second determining unit, configured to determine, when the target face image is in the first region in the current shot image, a quality score of the target face image included in the current shot image according to the plurality of score items of the target face image included in the current shot image and the first weight of each score item;
and a third determining unit, configured to determine, when the target face image is in the second region in the current shot image, a quality score of the target face image included in the current shot image according to the plurality of score items of the target face image included in the current shot image and the second weight of each score item.
17. The apparatus of claim 10, wherein the selection module comprises:
the third determining submodule is used for selecting one shot image from the N shot images and determining the quality score of the target face image included in the selected shot image;
a fourth determining submodule, configured to determine a candidate face image at a current time according to a region of the target face image in the selected captured image, a quality score of the target face image included in the selected captured image, a region of the candidate face image determined at a previous time, and a quality score of the candidate face image determined at the previous time;
the judgment submodule is used for judging whether the N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the optimal face image when the N shot images are processed, selecting a shot image from the unprocessed shot images included in the N shot images when the N shot images are not processed, and triggering the third determining sub-module to determine the quality score of the target face image included in the selected shot image until the N shot images are processed.
18. The apparatus of claim 17, wherein the at least two regions comprise a first region and a second region, an imaging quality of the first region being higher than an imaging quality of the second region;
the fourth determination submodule includes:
a fourth selecting unit, configured to select, when a region of the target face image in the selected shot image is the same as a region of the candidate face image determined at the previous time, and a quality score of the target face image included in the selected shot image is different from a quality score of the candidate face image determined at the previous time, a face image with a highest quality score from the target face image included in the selected shot image and the candidate face image determined at the previous time as a candidate face image at the current time;
a fourth determining unit, configured to determine a second score difference value when the region of the target face image in the selected shot image is different from the region of the candidate face image determined at the previous time, and the region of the candidate face image determined at the previous time is the first region, where the second score difference value is a difference value between a quality score of the target face image included in the selected shot image and a quality score of the candidate face image determined at the previous time;
a fifth determining unit, configured to, when the second score difference is greater than a score threshold, take a target face image included in the selected shot image as a candidate face image at the current time;
a sixth determining unit, configured to determine a third score difference value when the area where the target face image is located in the selected shot image is different from the area where the candidate face image is located at the previous time, and the area where the target face image is located in the selected shot image is the first area, where the third score difference value is a difference value between the quality score of the candidate face image determined at the previous time and the quality score of the target face image included in the selected shot image;
and the seventh determining unit is used for taking the target face image included in the selected shot image as the candidate face image at the current moment when the third score difference is smaller than or equal to the score threshold.
CN201910251349.5A 2019-03-29 2019-03-29 Identity information determining method and device Active CN111767757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910251349.5A CN111767757B (en) 2019-03-29 2019-03-29 Identity information determining method and device

Publications (2)

Publication Number Publication Date
CN111767757A true CN111767757A (en) 2020-10-13
CN111767757B CN111767757B (en) 2023-11-17


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188075A (en) * 2019-07-05 2021-01-05 杭州海康威视数字技术股份有限公司 Snapshot, image processing device and image processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930261A (en) * 2012-12-05 2013-02-13 上海市电力公司 Face snapshot recognition method
US20130243268A1 (en) * 2012-03-13 2013-09-19 Honeywell International Inc. Face image prioritization based on face quality analysis
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN108229297A (en) * 2017-09-30 2018-06-29 深圳市商汤科技有限公司 Face identification method and device, electronic equipment, computer storage media
CN109389019A (en) * 2017-08-14 2019-02-26 杭州海康威视数字技术股份有限公司 Facial image selection method, device and computer equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant