CN111767757B - Identity information determining method and device - Google Patents

Identity information determining method and device

Info

Publication number
CN111767757B
Authority
CN
China
Prior art keywords
face image
image
target face
images
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910251349.5A
Other languages
Chinese (zh)
Other versions
CN111767757A (en)
Inventor
王开元
方家乐
徐楠
王鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority claimed from CN201910251349.5A
Publication of CN111767757A
Application granted
Publication of CN111767757B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an identity information determining method and device, belonging to the field of intelligent monitoring. The method comprises the following steps: N photographed images are first acquired, and each photographed image is divided into at least two regions having different imaging qualities. The region in which the target face image is located is determined from the at least two regions included in each photographed image. At least one best face image is then selected from the target face images included in the N photographed images according to the region in which the target face image is located in each photographed image. Finally, identity information corresponding to the target face image is determined according to the at least one best face image. Because the at least one best face image is selected according to the region in which the target face image is located in each photographed image, the selected at least one best face image is more accurate, and the identity information corresponding to the target face image is therefore determined more accurately.

Description

Identity information determining method and device
Technical Field
The application relates to the field of intelligent monitoring, in particular to an identity information determining method and device.
Background
Face recognition is a biometric technology that determines identity information based on facial feature information. That is, a face image may be detected from a video using face recognition techniques, and the identity information corresponding to that face image may be determined.
In the related art, an identity information determining method is provided, which includes: capturing a video, and determining, according to a face image quality scoring algorithm, the quality scores of the target face images included in a plurality of video frame images of the captured video, where a target face image is a face image whose identity information is to be determined; selecting the target face image with the highest quality score from the target face images included in the plurality of video frame images as the best target face image; and retrieving, from a face database, the face image with the highest similarity to the best target face image, and determining the identity information corresponding to the retrieved face image as the identity information corresponding to the target face image.
However, since the imaging quality of the edge region of an image is generally lower than that of its center region, a face image detected in the edge region generally has lower imaging quality than one detected in the center region. The above method may therefore misjudge the best target face image, so the accuracy of the determined identity information corresponding to the target face image is low.
Disclosure of Invention
The embodiments of the application provide an identity information determining method and device, which can solve the problem in the related art that the accuracy of the determined identity information corresponding to the target face image is low because the imaging quality of the edge region of an image is lower than that of its center region. The technical scheme is as follows:
in a first aspect, there is provided an identity information determining method, the method comprising:
acquiring N shooting images, wherein N is a positive integer;
dividing each photographed image into at least two regions having different imaging qualities;
determining the region where the target face image is located from at least two regions included in each photographed image;
selecting at least one optimal face image from target face images included in the N shooting images according to the area of the target face image in each shooting image;
and determining identity information corresponding to the target face image according to the at least one optimal face image.
Optionally, the dividing each photographed image into at least two areas with different imaging quality includes:
dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
Determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
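The second division approach above (a correspondence between imaging quality and position information, obtained from statistics over historical images) can be sketched as follows. This is a minimal illustration under assumptions: the simplest possible correspondence is a rectangular high-quality center region and a lower-quality edge region, and the margin ratio is a hypothetical parameter, not a value from the patent.

```python
# Sketch of one possible "correspondence between imaging quality and position
# information": a rectangular high-quality center region ("first") and a
# lower-quality edge region ("second"). The 0.2 margin ratio is an assumed
# illustrative value.

def region_of(x, y, width, height, margin_ratio=0.2):
    """Return 'first' (center, higher imaging quality) or 'second' (edge)."""
    mx, my = width * margin_ratio, height * margin_ratio
    in_center = (mx <= x <= width - mx) and (my <= y <= height - my)
    return "first" if in_center else "second"
```

A real deployment would derive the region boundaries from measured per-position quality statistics rather than a fixed ratio.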
Optionally, the selecting at least one best face image from the target face images included in the N photographed images according to the area where the target face image is located in each photographed image includes:
selecting at least one face image in a region with highest imaging quality from target face images included in the N shooting images;
determining a quality score for each of the at least one face image;
and selecting at least one optimal face image from the at least one face image according to the quality scores of the at least one face image.
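The option above, restricting selection to faces captured in the highest-imaging-quality region and then ranking them by quality score, can be sketched as below; the dict field names are assumptions for illustration, not the patent's data model.

```python
def best_faces_in_top_region(faces, top_k=1):
    """faces: list of dicts with assumed keys 'region' and 'score';
    'first' denotes the highest-imaging-quality region."""
    in_top_region = [f for f in faces if f["region"] == "first"]
    # Rank the remaining faces by quality score, best first.
    return sorted(in_top_region, key=lambda f: f["score"], reverse=True)[:top_k]
```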
Optionally, the selecting at least one best face image from the target face images included in the N photographed images according to the area where the target face image is located in each photographed image includes:
determining a quality score of a target face image included in each photographed image;
And selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image and the quality score of the target face image included in each shooting image.
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the selecting at least one best face image from the target face images included in the N shot images according to the area where the target face image is located in each shot image and the quality score of the target face image included in each shot image, including:
when the target face image is in the first area in M shooting images included in the N shooting images, selecting a target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image according to the quality scores of the target face images included in the M shooting images, wherein M is a positive integer smaller than N;
when the target face image is in the second area in K shooting images included in the N shooting images, selecting a target face image with the highest quality score from the target face images included in the K shooting images as a second candidate face image according to the quality scores of the target face images included in the K shooting images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
And selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality scores of the first candidate face image and the second candidate face image.
Optionally, the selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image includes:
determining a first score difference, the first score difference being a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
when the first scoring difference value is greater than or equal to a scoring threshold value, determining that the first candidate face image and the second candidate face image are both optimal face images; and when the first scoring difference value is smaller than the scoring threshold value, determining that the first candidate face image is the optimal face image.
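The per-region candidate comparison above can be sketched as follows. The threshold value, the dict keys, and the reading of the comparison (a difference at or above the threshold keeps both candidates) are assumptions for illustration.

```python
def select_best(first_candidate, second_candidate, score_threshold=10):
    """first_candidate comes from the higher-quality first region,
    second_candidate from the lower-quality second region (assumed key
    'score'). The threshold value 10 is illustrative, not from the patent."""
    diff = second_candidate["score"] - first_candidate["score"]
    if diff >= score_threshold:
        # The lower-quality-region face scores much higher anyway: keep both.
        return [first_candidate, second_candidate]
    return [first_candidate]
```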
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
The determining the quality score of the target face image included in each photographed image includes:
determining a plurality of scoring items of a target face image included in a current shooting image, wherein the current shooting image is any image in the N shooting images;
when the target face image is in the first area in the current shooting image, determining a quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and a first weight of each scoring item;
and when the target face image is in the second area in the current shooting image, determining a quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and a second weight of each scoring item.
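The region-dependent weighted scoring described above can be sketched as below. The scoring item names and the weight values are purely illustrative assumptions; the patent does not specify them.

```python
# Region-dependent weights: a face in the lower-quality second region might,
# for example, weight sharpness more heavily. All names and values assumed.
FIRST_REGION_WEIGHTS = {"sharpness": 0.5, "pose": 0.3, "illumination": 0.2}
SECOND_REGION_WEIGHTS = {"sharpness": 0.6, "pose": 0.2, "illumination": 0.2}

def quality_score(score_items, region):
    """Combine per-item scores with the weight set for the face's region."""
    weights = FIRST_REGION_WEIGHTS if region == "first" else SECOND_REGION_WEIGHTS
    return sum(weights[name] * value for name, value in score_items.items())
```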
Optionally, the selecting at least one best face image from the target face images included in the N photographed images according to the area where the target face image is located in each photographed image includes:
selecting one shooting image from the N shooting images, and determining the quality scores of target face images included in the selected shooting images;
Determining a candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment;
judging whether the N shot images are processed or not;
if so, taking the candidate face image determined at the current moment as the optimal face image, if not, selecting one shot image from unprocessed shot images contained in the N shot images, and returning to the step of determining the quality score of the target face image contained in the selected shot image until the N shot images are processed.
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the determining the candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment includes:
When the area of the target face image in the selected shooting image is the same as the area of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shooting image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score from the target face image included in the selected shooting image and the candidate face image determined at the previous moment as the candidate face image at the current moment;
when the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the candidate face image determined at the previous moment is the first area, determining a second grading difference value, wherein the second grading difference value is the difference value between the quality grading of the target face image included in the selected shooting image and the quality grading of the candidate face image determined at the previous moment;
when the second scoring difference value is larger than the scoring threshold value, taking the target face image included in the selected shooting image as a candidate face image at the current moment;
When the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the target face image in the selected shooting image is the first area, determining a third grading difference value, wherein the third grading difference value is the difference value between the quality grading of the candidate face image determined at the previous moment and the quality grading of the target face image included in the selected shooting image;
and when the third scoring difference value is smaller than or equal to the scoring threshold value, taking the target face image included in the selected shooting image as a candidate face image at the current moment.
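The single-pass candidate update described above can be sketched as one function applied per photographed image. The dict field names and the threshold value are assumptions; only the decision structure follows the text.

```python
def update_candidate(candidate, new_face, score_threshold=10):
    """One step of the running best-candidate update. 'candidate' is the face
    chosen at the previous moment (or None at the start); 'first' denotes the
    higher-quality region. Keys 'region'/'score' and threshold 10 assumed."""
    if candidate is None:
        return new_face
    if new_face["region"] == candidate["region"]:
        # Same region: simply keep the higher-scoring face.
        return new_face if new_face["score"] > candidate["score"] else candidate
    if candidate["region"] == "first":
        # New face sits in the lower-quality region: it must beat the current
        # candidate by more than the threshold to replace it.
        better = new_face["score"] - candidate["score"] > score_threshold
        return new_face if better else candidate
    # New face sits in the higher-quality region: it replaces the candidate
    # unless it trails by more than the threshold.
    close_enough = candidate["score"] - new_face["score"] <= score_threshold
    return new_face if close_enough else candidate
```

Folding `update_candidate` over all N images yields the final best face image.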
In a second aspect, there is provided an identity information determining apparatus, the apparatus comprising:
the acquisition module is used for acquiring N shooting images, wherein N is a positive integer;
the dividing module is used for dividing each shot image into at least two areas with different imaging quality;
the first determining module is used for determining the area where the target face image is located from at least two areas included in each shot image;
the selection module is used for selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image;
And the second determining module is used for determining the identity information corresponding to the target face image according to the at least one optimal face image.
Optionally, the dividing module includes:
the dividing sub-module is used for dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
Optionally, the selecting module includes:
the first selection submodule is used for selecting at least one face image in the area with the highest imaging quality from target face images included in the N shooting images;
a first determining submodule, configured to determine a quality score of each face image in the at least one face image;
and the second selection sub-module is used for selecting at least one optimal face image from the at least one face image according to the quality scores of the at least one face image.
Optionally, the selecting module includes:
the second determining submodule is used for determining the quality scores of the target face images included in each shot image;
and the third selection sub-module is used for selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image and the quality score of the target face image included in each shooting image.
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the third selection submodule includes:
a first selecting unit, configured to select, when the target face image is in the first area in M photographed images included in the N photographed images, a target face image with a highest quality score from the target face images included in the M photographed images according to quality scores of the target face images included in the M photographed images, where M is a positive integer less than N, as a first candidate face image;
a second selecting unit, configured to select, when the target face image is in the second area in K pieces of photographed images included in the N pieces of photographed images, a target face image with a highest quality score from the target face images included in the K pieces of photographed images, as a second candidate face image, where K is a positive integer smaller than N, and a sum of K and M is equal to N;
And a third selection unit, configured to select at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
Optionally, the third selecting unit includes:
a first determining subunit, configured to determine a first score difference, where the first score difference is a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
a second determining subunit, configured to determine, when the first score difference value is greater than or equal to a score threshold value, that both the first candidate face image and the second candidate face image are optimal face images; and when the first score difference value is smaller than the score threshold value, determine that the first candidate face image is the optimal face image.
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
The second determination submodule includes:
a first determining unit, configured to determine a plurality of scoring items of a target face image included in a current captured image, where the current captured image is any one of the N captured images;
a second determining unit configured to determine, when the target face image is in the first area in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items and a first weight of each score item of the target face image included in the current captured image;
and a third determining unit, configured to determine, when the target face image is in the second area in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items and a second weight of each score item of the target face image included in the current captured image.
Optionally, the selecting module includes:
a third determining submodule, configured to select one photographed image from the N photographed images, and determine a quality score of a target face image included in the selected photographed image;
a fourth determining submodule, configured to determine a candidate face image at the current moment according to an area where the target face image is located in the selected captured image, a quality score of the target face image included in the selected captured image, an area where the candidate face image determined at the previous moment is located, and a quality score of the candidate face image determined at the previous moment;
The judging submodule is used for judging whether the N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the optimal face image when the N shooting images are processed, selecting one shooting image from unprocessed shooting images contained in the N shooting images when the N shooting images are not processed, and triggering the third determination sub-module to determine the quality score of the target face image contained in the selected shooting image until the N shooting images are processed.
Optionally, the at least two regions include a first region and a second region, the first region having a higher imaging quality than the second region;
the fourth determination submodule includes:
a fourth selecting unit, configured to select, when the area where the target face image is located in the selected captured image is the same as the area where the candidate face image determined at the previous time is located, and the quality score of the target face image included in the selected captured image is different from the quality score of the candidate face image determined at the previous time, a face image with the highest quality score from the target face image included in the selected captured image and the candidate face image determined at the previous time as the candidate face image at the current time;
A fourth determining unit, configured to determine a second score difference when the area in which the target face image is located in the selected captured image is different from the area in which the candidate face image determined at the previous time is located and the area in which the candidate face image determined at the previous time is located is the first area, where the second score difference is a difference between a quality score of the target face image included in the selected captured image and a quality score of the candidate face image determined at the previous time;
a fifth determining unit, configured to, when the second score difference value is greater than a score threshold value, use a target face image included in the selected captured image as a candidate face image at the current time;
a sixth determining unit, configured to determine a third score difference when an area in which the target face image is located in the selected captured image is different from an area in which the candidate face image determined at the previous time is located, and the area in which the target face image is located in the selected captured image is the first area, where the third score difference is a difference between a quality score of the candidate face image determined at the previous time and a quality score of the target face image included in the selected captured image;
And a seventh determining unit, configured to, when the third score difference value is less than or equal to the score threshold value, use a target face image included in the selected captured image as a candidate face image at the current time.
In a third aspect, there is provided an identity information determining apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect above.
In a fourth aspect, there is provided a computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of any of the methods of the first aspect above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
in the embodiment of the application, N photographed images are first acquired. Because factors such as lens distortion make the imaging quality across a photographed image uneven, each photographed image can be divided into at least two regions with different imaging qualities. The region in which the target face image is located is then determined from the at least two regions included in each photographed image. At least one best face image is then selected from the target face images included in the N photographed images according to the region in which the target face image is located in each photographed image. Finally, the identity information corresponding to the target face image is determined according to the at least one best face image. Because the at least one best face image is selected according to the region in which the target face image is located in each photographed image, the determined at least one best face image is more accurate, and the identity information corresponding to the target face image is determined more accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
Fig. 2 is a flowchart of a first identity information determining method according to an embodiment of the present application.
Fig. 3 is a flowchart of a second method for determining identity information according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a divided area of a captured image according to an embodiment of the present application.
Fig. 5 is a flowchart of a third method for determining identity information according to an embodiment of the present application.
Fig. 6 is a flowchart of a fourth method for determining identity information according to an embodiment of the present application.
Fig. 7 is a block diagram of an identity information determining apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an identity information determining apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatuses and methods consistent with aspects of the application.
Before explaining the embodiments of the present application in detail, an implementation environment of the embodiments of the present application will be described:
fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes an image capturing apparatus 101 and a server 102, connected via a network. The image capturing apparatus 101 may capture pictures or videos and transmit them to the server 102. The server 102 may determine a photographed image from a picture or video captured by the image capturing apparatus 101: it may take a picture captured by the image capturing apparatus 101 as a photographed image, or take a video frame image from a video captured by the image capturing apparatus 101 as a photographed image. The server 102 may then perform identity information determination according to the determined photographed image. The image capturing apparatus 101 may be a camera, a video camera, or the like. The server 102 provides a background service for the image capturing apparatus 101 and may be a single server, a server cluster formed by a plurality of servers, or a cloud computing server center, which is not limited in the embodiments of the present application. The embodiment of the present application is described taking a single server 102 as an example.
The identity information determining method provided by the embodiment of the application is explained in detail below.
Fig. 2 is a flowchart of a method for determining identity information according to an embodiment of the present application, referring to fig. 2, the method includes:
step 201: n shooting images are acquired, wherein N is a positive integer.
Step 202: each photographed image is divided into at least two regions having different imaging qualities.
Step 203: and determining the region where the target face image is located from at least two regions included in each photographed image.
Step 204: and selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image.
Step 205: and determining identity information corresponding to the target face image according to at least one optimal face image.
In the embodiments of the present application, N captured images are acquired first. Because lens distortion of the capturing lens and other factors make the imaging quality of each captured image uneven, each captured image can be divided into at least two regions with different imaging quality. The region where the target face image is located is then determined from the at least two regions included in each captured image. Next, at least one optimal face image is selected from the target face images included in the N captured images according to the region where the target face image is located in each captured image. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region where the target face image is located in each captured image, the determined optimal face image is more accurate, and the identity information corresponding to the target face image is therefore determined more accurately.
Optionally, the dividing each photographed image into at least two areas with different imaging quality includes:
dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
Optionally, the selecting at least one best face image from the target face images included in the N captured images according to the area where the target face image is located in each captured image includes:
selecting at least one face image in a region with highest imaging quality from target face images included in the N photographed images;
determining a quality score for each of the at least one face image;
at least one best face image is selected from the at least one face image based on the quality score of the at least one face image.
Optionally, the selecting at least one best face image from the target face images included in the N captured images according to the area where the target face image is located in each captured image includes:
Determining a quality score of a target face image included in each photographed image;
and selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image and the quality score of the target face image included in each shooting image.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
according to the region of the target face image in each shot image and the quality score of the target face image included in each shot image, selecting at least one optimal face image from the target face images included in the N shot images, wherein the method comprises the following steps:
when the target face image is in a first area in M shooting images included in N shooting images, selecting a target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image according to the quality scores of the target face images included in the M shooting images, wherein M is a positive integer smaller than N;
when the target face image is in a second area in K shooting images included in N shooting images, selecting a target face image with the highest quality score from the target face images included in the K shooting images as a second candidate face image according to the quality scores of the target face images included in the K shooting images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
And selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality scores of the first candidate face image and the second candidate face image.
Optionally, the selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image includes:
determining a first scoring difference, the first scoring difference being a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
when the first scoring difference is greater than or equal to the scoring threshold, determining both the first candidate face image and the second candidate face image as optimal face images; and when the first scoring difference is smaller than the scoring threshold, determining the first candidate face image as the optimal face image.
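The two-candidate selection above can be sketched as follows. This is a minimal illustration, assuming numeric quality scores; the greater-or-equal comparison follows the parallel threshold rules used elsewhere in the document, and the function name and score values are hypothetical.

```python
def select_best_faces(region_a_scores, region_b_scores, threshold):
    """region_a_scores: quality scores of the M target face images in the
    first (higher-quality) region; region_b_scores: quality scores of the
    K target face images in the second region.  Returns the scores kept
    after the first-scoring-difference test."""
    first = max(region_a_scores)   # first candidate face image
    second = max(region_b_scores)  # second candidate face image
    # Keep the second candidate only if it beats the first by >= threshold.
    if second - first >= threshold:
        return [first, second]
    return [first]
```

With a threshold of 0.1, a second candidate scoring 0.95 against a first candidate scoring 0.8 is kept alongside it, while one scoring 0.85 is discarded.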
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the determining the quality score of the target face image included in each photographed image includes:
Determining a plurality of scoring items of a target face image included in a current shooting image, wherein the current shooting image is any image in N shooting images;
when the target face image is in a first area in the current shooting image, determining a quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and a first weight of each scoring item;
and when the target face image is in the second area in the current shooting image, determining the quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and the second weight of each scoring item.
Optionally, the selecting at least one best face image from the target face images included in the N captured images according to the area where the target face image is located in each captured image includes:
selecting one shooting image from the N shooting images, and determining the quality score of a target face image included in the selected shooting image;
determining a candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment;
judging whether all of the N captured images have been processed;
if so, taking the candidate face image determined at the current moment as the optimal face image; if not, selecting one captured image from the unprocessed captured images among the N captured images and returning to the step of determining the quality score of the target face image included in the selected captured image, until all of the N captured images have been processed.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the method for determining the candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment comprises the following steps:
when the area of the target face image in the selected shooting image is the same as the area of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shooting image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score from the target face image included in the selected shooting image and the candidate face image determined at the previous moment as the candidate face image at the current moment;
When the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the candidate face image determined at the previous moment is the first area, determining a second grading difference value, wherein the second grading difference value is the difference value between the quality grading of the target face image included in the selected shooting image and the quality grading of the candidate face image determined at the previous moment;
when the second scoring difference value is larger than the scoring threshold value, taking the target face image included in the selected shooting image as a candidate face image at the current moment;
when the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the target face image in the selected shooting image is the first area, determining a third grading difference value, wherein the third grading difference value is the difference value between the quality grading of the candidate face image determined at the previous moment and the quality grading of the target face image included in the selected shooting image;
and when the third scoring difference value is smaller than or equal to the scoring threshold value, taking the target face image included in the selected shooting image as the candidate face image at the current moment.
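The candidate-update rules in the three cases above can be sketched as one function. A minimal sketch, assuming each face is represented as a (region, quality score) pair where region 'A' is the first (higher-quality) region and 'B' the second; the names and score values are hypothetical.

```python
def update_candidate(candidate, new_face, threshold):
    """candidate / new_face: (region, score) pairs.  Applies the
    per-image candidate update rules: same region -> keep the higher
    score; switching A -> B requires beating the candidate by more than
    the threshold; switching B -> A is allowed unless the candidate
    leads by more than the threshold."""
    if candidate is None:          # first captured image processed
        return new_face
    c_region, c_score = candidate
    n_region, n_score = new_face
    if c_region == n_region:
        # Same region: keep whichever has the higher quality score.
        return new_face if n_score > c_score else candidate
    if c_region == 'A':
        # New face is in the lower-quality region B: second scoring
        # difference must exceed the threshold to replace the candidate.
        return new_face if n_score - c_score > threshold else candidate
    # New face is in the higher-quality region A: third scoring
    # difference (candidate minus new) must be <= threshold to replace.
    return new_face if c_score - n_score <= threshold else candidate
```

Iterating this function over the N captured images yields the candidate face image remaining after the last image is processed.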
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present application, which are not described in detail here.
In the identity information determining method provided by the present application, at least one optimal face image is first selected from the target face images included in the N captured images, and the identity information corresponding to the target face image is then determined according to the optimal face image. The selection of the at least one optimal face image can be implemented in three different ways. Therefore, the identity information determining method provided by the present application will be described below through three corresponding embodiments.
Fig. 3 is a flowchart of a method for determining identity information according to an embodiment of the present application. The method is applied to a server, and the embodiment will introduce an identity information determining method in combination with a first mode of selecting at least one optimal face image. Referring to fig. 3, the method includes:
step 301: n shooting images are acquired, wherein N is a positive integer.
The N captured images may be determined from images or videos captured by the image capturing apparatus. The image capturing apparatus may be the image capturing apparatus 101 shown in fig. 1.
Step 302: each photographed image is divided into at least two regions having different imaging qualities.
Since the capturing lens of an image capturing apparatus generally has lens distortion, the imaging quality of a captured picture or video tends to be uneven, that is, the imaging quality of a captured image is uneven. Specifically, lens distortion often causes objects or persons in the edge region of a captured image to appear distorted or deformed; in general, the imaging quality near the edge region of a captured image is lower than the imaging quality near the center region. Therefore, each captured image can be divided into at least two regions according to the different imaging quality of its different regions, and the imaging quality within each region can be regarded as uniform.
Step 302 may include, among other things: dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or, determining a corresponding relation between the imaging quality and the position information by counting the imaging quality and the position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
In one possible case, the correspondence between imaging quality and position information may be that the imaging quality is highest at the center position of each captured image and gradually decreases along the direction from the center position toward the edge positions. By way of example, the following describes dividing each captured image into at least two regions with different imaging quality according to this correspondence. The captured image i is any one of the acquired N captured images.
For example, referring to fig. 4, the imaging quality of the captured image i is highest at the center position and gradually decreases along the direction from the center toward the edges, that is, the imaging quality is lowest at the edge positions. According to this correspondence between imaging quality and position information, a rectangular frame whose center coincides with the center of the captured image i may be determined, dividing the captured image i into region A and region B. Region A is the region including the center position of the captured image i, and region B is the region outside region A that includes the edge positions; that is, the imaging quality of region A is higher than that of region B. Of course, each captured image may also be divided into at least two regions with different imaging quality according to the correspondence in other manners, which is not limited in the embodiments of the present application.
In addition, in a possible case, history captured images acquired before the N captured images are stored in the server. Under such a condition, the imaging quality and position information of each region in the history captured images may be counted to determine the correspondence between imaging quality and position information for the N captured images, and each captured image may then be divided into at least two regions with different imaging quality according to this correspondence. Of course, in practical applications, each captured image may also be divided into at least two regions with different imaging quality in other manners, which is not limited in the embodiments of the present application.
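As a concrete sketch of the rectangular-frame division described above, the following divides an image into a center region A and an edge region B, and locates a face bounding box in one of them. The 0.6 side ratio, the function names, and the (x1, y1, x2, y2) box format are assumptions for illustration, not values given by the source.

```python
def divide_into_regions(width, height, ratio=0.6):
    """Divide a captured image into a high-quality center region A and a
    lower-quality edge region B, using a centered rectangle whose side
    lengths are `ratio` times those of the image (assumed parameter).
    Returns region A as (x1, y1, x2, y2); region B is everything else."""
    rect_w, rect_h = int(width * ratio), int(height * ratio)
    left = (width - rect_w) // 2
    top = (height - rect_h) // 2
    return (left, top, left + rect_w, top + rect_h)

def locate_region(face_box, region_a):
    """Return 'A' if the face bounding box center falls inside region A,
    otherwise 'B' (the edge region)."""
    x1, y1, x2, y2 = face_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    ax1, ay1, ax2, ay2 = region_a
    return 'A' if (ax1 <= cx <= ax2 and ay1 <= cy <= ay2) else 'B'
```

For a 1000x500 image, region A is the centered 600x300 rectangle; a face box centered at (500, 250) lands in region A, while one centered at (50, 50) lands in region B.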
Step 303: and determining the region where the target face image is located from at least two regions included in each photographed image.
It should be noted that the target face image refers to a face image whose identity information is to be determined, that is, an image that can present the facial features of the target face.
After step 303 is performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region in which the target face image is located in each captured image. Specifically, this can be achieved by the following steps 304-306.
Step 304: at least one face image in a region of highest imaging quality is selected from target face images included in the N photographed images.
Because the at least two regions included in each of the N captured images are divided according to imaging quality, these regions can be ordered by imaging quality, and the region with the highest imaging quality can thus be determined. Among the target face images included in the N captured images, the at least one face image in the region with the highest imaging quality is therefore the at least one face image with the highest imaging quality.
Step 305: a quality score for each of the at least one face image is determined.
It should be noted that determining a quality score for each face image makes it more convenient to compare the at least one face image, and thus to select at least one optimal face image from it. Since the at least one face image is in the region with the highest imaging quality, the quality score of each of the at least one face image may, for example, be determined according to a plurality of scoring items of each face image and a third weight of each scoring item.
The plurality of scoring items are items capable of evaluating the facial features of the target face and are used to determine the quality score of the target face image included in each captured image. They may include the pupil distance of the target face, the deflection angle of the target face relative to the capturing lens, and so on. The third weight of each scoring item may be the proportion of that scoring item among the plurality of scoring items: the larger the proportion of any scoring item, the larger its third weight, that is, the higher its importance.
The quality score of each of the at least one face image is then determined according to its plurality of scoring items and the third weight of each scoring item; that is, each scoring item is multiplied by its third weight and the products are summed to obtain the quality score.
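The weighted-sum computation can be sketched as follows, assuming each scoring item (e.g. pupil distance, deflection angle relative to the lens) has been normalized to a numeric value; the dictionary representation and item names are assumptions.

```python
def quality_score(score_items, weights):
    """Multiply each scoring item by its weight (here, the third weight)
    and sum the products to obtain the quality score of a face image."""
    assert set(score_items) == set(weights), "every scoring item needs a weight"
    return sum(score_items[item] * weights[item] for item in score_items)
```

For example, scoring items {'pupil_distance': 0.8, 'deflection': 0.5} with weights {'pupil_distance': 0.6, 'deflection': 0.4} yield 0.8*0.6 + 0.5*0.4 = 0.68.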
Step 306: at least one best face image is selected from the at least one face image based on the quality score of the at least one face image.
In some cases there may be face images with the same quality score in the at least one face image, that is, face images with the same imaging quality. Therefore, at least one optimal face image may be selected from the at least one face image based on the quality scores of the at least one face image.
Step 307: and determining identity information corresponding to the target face image according to at least one optimal face image.
When the identity information corresponding to the target face image is determined according to at least one optimal face image, the similar face image with the highest similarity with the at least one optimal face image may be retrieved from the face database. The face database stores a plurality of face images and identity information corresponding to each face image. And then, the identity information corresponding to the similar face image is determined as the identity information corresponding to the target face image. Of course, the identity information corresponding to the target face image may also be determined according to at least one optimal face image in other manners, which is not particularly limited in the embodiment of the present application.
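The database-retrieval step can be sketched as a nearest-neighbor search over stored face features. This is a minimal illustration assuming faces are compared via cosine similarity of feature vectors; the actual matching method and data layout are not specified by the source.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookup_identity(best_face_feature, face_db):
    """face_db: list of (identity_info, feature_vector) pairs, standing
    in for the face database described above.  Returns the identity
    whose stored face feature is most similar to the optimal face
    image's feature."""
    best_id, best_sim = None, -1.0
    for identity, feature in face_db:
        sim = cosine_similarity(best_face_feature, feature)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

A real deployment would typically extract the feature vectors with a face-recognition model and use an indexed similarity search rather than a linear scan.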
In the embodiments of the present application, N captured images are acquired first. Because lens distortion of the capturing lens and other factors make the imaging quality of each captured image uneven, each captured image can be divided into at least two regions with different imaging quality, and the region where the target face image is located is determined from these regions. Since the imaging quality of the target face image is highest in the region with the highest imaging quality, at least one face image in that region can be selected from the target face images included in the N captured images. A quality score of each of the at least one face image is then determined, and at least one optimal face image is selected based on these quality scores. Finally, the identity information corresponding to the target face image is determined according to the at least one optimal face image. Because each captured image is divided into regions according to imaging quality, and the at least one optimal face image is selected only from the face images in the region with the highest imaging quality, the optimal face image is determined more efficiently and accurately, and the identity information corresponding to the target face image is likewise determined more efficiently and accurately.
Fig. 5 is a flowchart of a method for determining identity information according to an embodiment of the present application. The method is applied to a server, and the embodiment will introduce an identity information determining method in combination with a second mode of selecting at least one optimal face image. Referring to fig. 5, the method includes:
step 501: n shooting images are acquired, wherein N is a positive integer.
Step 502: each photographed image is divided into at least two regions having different imaging qualities.
Step 503: and determining the region where the target face image is located from at least two regions included in each photographed image.
It should be noted that, the contents of step 501 to step 503 are similar to those of step 301 to step 303 in the embodiment shown in fig. 3, so that the description thereof is omitted here.
After steps 501-503 are performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region in which the target face image is located in each captured image. Specifically, this can be achieved by the following steps 504 to 505.
Step 504: a quality score of a target face image included in each captured image is determined.
It should be noted that determining the quality score of the target face image included in each captured image makes it more convenient to compare these target face images, and thus to select at least one optimal face image from the target face images included in the N captured images. In the embodiments of the present application, the same quality scoring algorithm may be used for every region of a captured image to determine the quality score of the target face image it includes, or different quality scoring algorithms may be used for different regions. The determination of the quality score of the target face image included in each captured image is described below for these two cases.
First case: determine a plurality of scoring items of the target face image included in each captured image, and determine the quality score of the target face image included in each captured image according to the plurality of scoring items and the fourth weight of each scoring item.
Specifically, each of the plurality of scoring items is multiplied by its fourth weight and the products are summed to obtain the quality score of the target face image included in each captured image. That is, regardless of which of the at least two regions the target face image is in, the same quality scoring algorithm is used for every region of the captured image to determine the quality score of the target face image it includes.
It should be noted that the plurality of scoring items have already been described in step 305 of the embodiment shown in fig. 3, so a detailed description is omitted here.
Second case: the at least two regions of each captured image include a first region and a second region, and the imaging quality of the first region is higher than that of the second region. For the current captured image, which is any one of the N captured images, a plurality of scoring items of the target face image it includes may be determined. When the target face image is in the first region in the current captured image, the quality score of the target face image is determined according to the plurality of scoring items and the first weight of each scoring item; when the target face image is in the second region, the quality score is determined according to the plurality of scoring items and the second weight of each scoring item. That is, different quality scoring algorithms are used for different regions of the captured image to determine the quality score of the target face image it includes.
For the second case, the first region of each captured image may, for example, be the region including the center position, and the second region the region including the edge positions outside the first region. Referring to fig. 4, region A represents the first region and region B represents the second region. Of course, the first region and the second region may also be determined according to other division manners, which is not specifically limited in the embodiments of the present application.
Note that the first weight of each scoring item may be the proportion of that scoring item among the plurality of scoring items when the target face image is in the first region of the current captured image: the larger the proportion of any scoring item, the larger its first weight, that is, the higher its importance. The quality score of the target face image included in the current captured image is then determined by multiplying each scoring item by its first weight and summing the products.
Similarly, the second weight of each scoring item may be the proportion of that scoring item among the plurality of scoring items when the target face image is in the second region of the current captured image, and the quality score of the target face image included in the current captured image is determined by multiplying each scoring item by its second weight and summing the products.
It is noted that the plurality of scoring items used when the target face image is in the first region of the current captured image may be the same as the plurality of scoring items used when it is in the second region. For example, in both regions the scoring items may be the pupil distance of the target face and the deflection angle of the target face relative to the capturing lens. Under such a condition, the value of the first weight and the value of the second weight may still differ for the same scoring item. That is, for the same scoring item, its importance may be higher when the target face image is in the first region, so the first weight has a higher value, while its importance may be lower when the target face image is in the second region, so the second weight has a lower value.
For example, when the plurality of scoring items includes the pupil distance of the target face, the pupil distance is a scoring item that directly reflects the distance between the target face and the capturing lens: when the target face is far from the capturing lens, its pupil distance is smaller; when it is close, its pupil distance is larger. Because the imaging quality of the first region is higher than that of the second region, the target face image is unlikely to appear distorted or deformed in the first region but may appear distorted or deformed in the second region. Thus, for the same target face in the first region, the pupil distance is not affected by the imaging quality of the first region regardless of where the face is within that region, whereas in the second region the pupil distance may be affected by the imaging quality. In short, the pupil distance scoring item is more important for a target face image in the second region than for one in the first region. Therefore, the value of the second weight of the pupil distance scoring item may be higher than the value of its first weight, thereby increasing the importance of the pupil distance when the target face is in the second region. Of course, the values of the first and second weights of other scoring items may be set according to actual situations, which is not specifically limited in the embodiments of the present application.
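The region-dependent weighting can be sketched as follows. The specific weight values are illustrative assumptions only (chosen so that the pupil-distance item weighs more in the lower-quality region B, in line with the discussion above); the source does not prescribe them.

```python
# Illustrative weights only -- not values from the source.  The pupil
# distance item is weighted more heavily in the lower-quality region B.
FIRST_WEIGHTS = {'pupil_distance': 0.4, 'deflection': 0.6}   # region A
SECOND_WEIGHTS = {'pupil_distance': 0.6, 'deflection': 0.4}  # region B

def region_quality_score(score_items, region):
    """Pick the first or second weight set according to the region the
    target face image is in, then form the weighted sum."""
    weights = FIRST_WEIGHTS if region == 'A' else SECOND_WEIGHTS
    return sum(score_items[item] * weights[item] for item in score_items)
```

The same scoring items thus yield different quality scores depending on the region, reflecting the different importance of each item per region.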
Step 505: and selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image and the quality score of the target face image included in each shooting image.
The best face image may be the image, among the target face images included in the N captured images, that most clearly and completely presents the facial features of the target face. For example, among the target face images included in the N captured images, the best face image presents the facial features of the target face most clearly, the target face is not blocked by other objects or people, and so on.
In one possible case, according to the area where the target face image is located in each photographed image and the quality score of the target face image included in each photographed image, more than one image that can most clearly and completely exhibit the facial features of the target face may be selected from the target face images included in the N photographed images, that is, at least one optimal face image may be selected from the target face images included in the N photographed images.
In one possible case, when some of the N captured images were captured, the deflection angle of the target face relative to the capturing lens may have been large, or the target face may have been far from the capturing lens, so that the target face images included in these captured images cannot clearly and completely present the facial features of the target face, and the identity information determined from such target face images would be relatively inaccurate. Therefore, by selecting from the N captured images at least one best face image that can clearly and completely present the facial features of the target face, the identity information determined according to the at least one best face image is more accurate.
Wherein, under the condition that each photographed image is divided into a first region and a second region, step 505 may be implemented by the following steps (1)-(3).
(1): when the target face image is in a first area in M shooting images included in N shooting images, selecting a target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image according to the quality scores of the target face images included in the M shooting images, wherein M is a positive integer smaller than N.
When the target face image is in the first area in the M shooting images included in the N shooting images, selecting the target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image, namely selecting the target face image which can clearly and completely show the facial features of the target face from all the target face images in the first area as the first candidate face image.
(2): when the target face image is in the second area in K shooting images included in the N shooting images, selecting a target face image with the highest quality score from the target face images included in the K shooting images as a second candidate face image according to the quality scores of the target face images included in the K shooting images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N.
When the target face image is in the second area in the K shooting images included in the N shooting images, selecting the target face image with the highest quality score from the target face images included in the K shooting images as the second candidate face image, namely selecting the target face image which can most clearly and completely show the facial features of the target face from all the target face images in the second area as the second candidate face image.
(3): and selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality scores of the first candidate face image and the second candidate face image.
Since the first candidate face image is the target face image with the highest quality score selected from the M shooting images; the second candidate face image is the target face image with the highest quality score selected from the K shooting images, and the sum of K and M is equal to N. That is, among the target face images included in the N shot images, the candidate face image that may be the at least one best face image is a first candidate face image in the first area and a second candidate face image in the second area. At least one best face image may be selected from the first candidate face image and the second candidate face image based on the quality score of the first candidate face image and the quality score of the second candidate face image.
Wherein, the step (3) may include: determining a first score difference; when the first score difference is equal to a score threshold, determining both the first candidate face image and the second candidate face image as the best face image; when the first score difference is greater than the score threshold, determining the second candidate face image as the best face image; and when the first score difference is smaller than the score threshold, determining the first candidate face image as the best face image.
It should be noted that the first score difference is the difference between the quality score of the second candidate face image and the quality score of the first candidate face image. Since the imaging quality of the second region is lower than that of the first region, for a target face image included in the same photographed image, the imaging quality when the target face image is in the second region is lower than when it is in the first region, and the quality score of the target face image in the second region is accordingly lower than that of the target face image in the first region. The score threshold can therefore represent this gap between the two regions: the quality score of a target face image in the second region needs to have the score threshold added to it to be comparable to the quality score of an equivalent target face image in the first region. Therefore, when the difference between the quality score of the second candidate face image and the quality score of the first candidate face image is equal to the score threshold, both the first candidate face image and the second candidate face image may be determined as the best face image. When the difference is greater than the score threshold, it indicates that the second candidate face image presents the facial features of the target face more clearly and completely than the first candidate face image, so the second candidate face image may be determined as the best face image. Conversely, when the difference is smaller than the score threshold, it indicates that the first candidate face image presents the facial features of the target face more clearly and completely than the second candidate face image, so the first candidate face image may be determined as the best face image.
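The comparison in step (3) can be sketched as follows. The score threshold value, function name, and return convention are illustrative assumptions; only the three-way comparison against the threshold follows the description above.

```python
# Assumed score threshold compensating for the second region's lower imaging quality.
SCORE_THRESHOLD = 5.0

def select_best(first_score: float, second_score: float) -> list:
    """Return which candidate(s) are determined as the best face image."""
    diff = second_score - first_score  # the first score difference
    if diff > SCORE_THRESHOLD:
        return ["second"]              # second candidate is clearly better
    if diff == SCORE_THRESHOLD:
        return ["first", "second"]     # both are kept as best face images
    return ["first"]                   # first candidate remains the best

print(select_best(70.0, 78.0))  # diff 8 > 5   -> ['second']
print(select_best(70.0, 75.0))  # diff 5 == 5  -> ['first', 'second']
print(select_best(70.0, 72.0))  # diff 2 < 5   -> ['first']
```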
It should be noted that the above steps (1)-(3) may be applied to the first case of the above step 504, and may also be applied to the second case of the above step 504. When steps (1)-(3) are applied to the first case, each captured image is divided into at least two regions with different imaging quality, including a first region and a second region. In the first case, the quality score of the target face image included in each captured image is determined according to the plurality of scoring items and the fourth weight of each scoring item, that is, according to the same quality scoring algorithm for all regions. In the second case, when the target face image is in the first region in the current captured image, the quality score of the target face image included in the current captured image is determined according to the plurality of scoring items of the target face image and the first weight of each scoring item; when the target face image is in the second region in the current captured image, the quality score is determined according to the plurality of scoring items and the second weight of each scoring item. That is, the second case uses different quality scoring algorithms for the two regions.
Therefore, the scoring threshold value in the case where the steps (1) to (3) are applied to the first case may be different from the scoring threshold value in the case where the steps (1) to (3) are applied to the second case. The specific value of the scoring threshold may be set according to practical situations, which is not specifically limited in the embodiment of the present application.
Step 506: and determining identity information corresponding to the target face image according to at least one optimal face image.
It should be noted that, the content of step 506 is similar to that of step 307 in the embodiment shown in fig. 3, so that the description is omitted here.
In the embodiment of the application, N photographed images are acquired first. Because of lens distortion of the photographing lens and other factors, the imaging quality of each photographed image is uneven, so each photographed image can be divided into at least two regions with different imaging quality. The region in which the target face image is located is then determined from the at least two regions included in each photographed image. The quality score of the target face image included in each photographed image is then determined, and at least one best face image is selected from the target face images included in the N photographed images according to the region in which the target face image is located in each photographed image and the quality score of the target face image included in each photographed image. Finally, identity information corresponding to the target face image is determined according to the at least one best face image. In the embodiment of the application, the at least one best face image is selected according to both the region in which the target face image is located and the quality score of the target face image in each photographed image, so that the determined at least one best face image is more accurate, and the identity information corresponding to the target face image is determined more accurately.
Fig. 6 is a flowchart of a method for determining identity information according to an embodiment of the present application. The method is applied to a server, and the embodiment will introduce an identity information determining method in combination with a third mode of selecting at least one optimal face image. Referring to fig. 6, the method includes:
step 601: n shooting images are acquired, wherein N is a positive integer.
Step 602: each photographed image is divided into at least two regions having different imaging qualities.
Step 603: and determining the region where the target face image is located from at least two regions included in each photographed image.
It should be noted that, the contents of step 601 to step 603 are similar to those of step 301 to step 303 in the embodiment shown in fig. 3, so that the description thereof is omitted herein.
After steps 601-603 are performed, at least one optimal face image may be selected from the target face images included in the N captured images according to the region in which the target face image is located in each captured image. Specifically, this can be achieved by the following steps 604 to 607.
Step 604: and selecting one shot image from the N shot images, and determining the quality score of the target face image included in the selected shot image.
It should be noted that, the quality score of the target face image included in the selected captured image in step 604 is similar to the quality score of the target face image included in the captured image determined in step 504 in the embodiment shown in fig. 5, and will not be described herein.
Step 605: and determining the candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment.
Wherein, similar to the embodiment shown in fig. 3 described above, the at least two regions of each captured image may include a first region and a second region. The first area and the second area have been described in the above embodiments, and are not described herein. Step 605 may be implemented by the following steps (1) - (5) under the condition that at least two areas of each photographed image include a first area and a second area.
(1): when the area of the target face image in the selected shooting image is the same as the area of the candidate face image determined at the last moment and the quality score of the target face image included in the selected shooting image is different from the quality score of the candidate face image determined at the last moment, selecting the face image with the highest quality score from the target face image included in the selected shooting image and the candidate face image determined at the last moment as the candidate face image at the current moment.
It should be noted that, when the area in which the target face image is located in the selected photographed image is the same as the area in which the candidate face image determined at the previous moment is located, the target face image included in the selected photographed image and the candidate face image determined at the previous moment have the same imaging quality. Therefore, when the quality score of the target face image included in the selected photographed image differs from the quality score of the candidate face image determined at the previous moment, the face image with the highest quality score can be directly selected from the two as the candidate face image at the current moment.
(2): and when the region of the target face image in the selected shooting image is different from the region of the candidate face image determined at the previous moment and the region of the candidate face image determined at the previous moment is the first region, determining a second grading difference value.
It should be noted that, the second score difference is a difference between the quality score of the target face image included in the selected captured image and the quality score of the candidate face image determined at the previous time. When the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the candidate face image determined at the previous moment is the first area, namely the target face image included in the selected shooting image is in the second area. Under such conditions, the imaging quality of the target face image in the selected photographed image is lower than that of the region where the candidate face image determined at the previous moment is located. Therefore, it is necessary to determine a difference between the quality score of the target face image included in the selected captured image and the quality score of the candidate face image determined at the previous time, so as to select the candidate face image at the current time based on the difference.
(3): and when the second scoring difference value is larger than the scoring threshold value, taking the target face image included in the selected shooting image as the candidate face image at the current moment.
It should be noted that, when the second score difference is greater than the score threshold, that is, when the difference between the quality score of the target face image included in the selected captured image and the quality score of the candidate face image determined at the previous time is greater than the score threshold, it is indicated that the facial feature of the target face can be more clearly and completely presented in the target face image included in the selected captured image than in the candidate face image determined at the previous time. Therefore, the target face image included in the selected captured image may be used as the candidate face image at the current time, or it may be understood that the candidate face image is updated. In contrast, when the second score difference value is less than or equal to the score threshold value, the candidate face image determined at the previous time may be taken as the candidate face image at the current time. In other words, when the second score difference is greater than the score threshold, updating the candidate face image; and when the second scoring difference value is smaller than or equal to the scoring threshold value, not updating the candidate face images.
(4): and when the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment and the area of the target face image in the selected shooting image is the first area, determining a third grading difference value.
It should be noted that, the third score difference is a difference between the quality score of the candidate face image determined at the previous time and the quality score of the target face image included in the selected photographed image. When the area of the target face image in the selected shooting image is different from the area of the candidate face image determined at the previous moment, and the area of the target face image in the selected shooting image is the first area, namely the candidate face image determined at the previous moment is in the second area. Under such conditions, the imaging quality of the target face image in the selected photographed image is higher than that of the region in which the candidate face image determined at the previous time is located, and therefore, a difference between the quality score of the candidate face image determined at the previous time and the quality score of the target face image included in the selected photographed image needs to be determined so as to select the candidate face image at the current time according to the difference.
(5): and when the third scoring difference value is smaller than or equal to the scoring threshold value, taking the target face image included in the selected shooting image as the candidate face image at the current moment.
It should be noted that, when the third score difference is smaller than or equal to the score threshold, that is, when the difference between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected photographed image is smaller than or equal to the score threshold, it indicates that the target face image included in the selected photographed image presents the facial features of the target face more clearly and completely than the candidate face image determined at the previous moment. Therefore, the target face image included in the selected photographed image may be used as the candidate face image at the current moment, which may also be understood as updating the candidate face image. Conversely, when the third score difference is greater than the score threshold, the candidate face image determined at the previous moment may be taken as the candidate face image at the current moment. In other words, when the third score difference is greater than the score threshold, the candidate face image is not updated; when the third score difference is smaller than or equal to the score threshold, the candidate face image is updated.
Step 606: whether the N photographed images have been processed is judged, if yes, step 607 is performed, and if no, one photographed image is selected from unprocessed photographed images included in the N photographed images, and step 604 is returned.
Step 607: and taking the candidate face image determined at the current moment as the optimal face image.
It should be noted that the above steps 604-606 realize iterative updating of the candidate face image by repeatedly selecting one photographed image from the N photographed images and determining the candidate face image at the current moment. Under this condition, once all N photographed images have been processed, the best face image can be determined directly, which makes the process of determining the best face image simpler and more efficient.
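The iterative selection of steps 604-607, including the update rules of steps (1)-(5), can be sketched as follows. This is an illustrative reduction, not the exact implementation: each image is represented as a (region, quality score) pair, and the region labels, score threshold value, and function names are assumptions.

```python
SCORE_THRESHOLD = 5.0  # assumed gap between the two regions' quality scores

def update_candidate(candidate, image):
    """Steps (1)-(5): decide whether the new image replaces the current candidate."""
    if candidate is None:                 # first image becomes the initial candidate
        return image
    c_region, c_score = candidate
    i_region, i_score = image
    if i_region == c_region:              # (1) same region: keep the higher score
        return image if i_score > c_score else candidate
    if c_region == "first":               # (2)/(3) new image lies in the second region:
        # it must beat the candidate by more than the threshold to replace it
        return image if i_score - c_score > SCORE_THRESHOLD else candidate
    # (4)/(5) new image lies in the first region, candidate in the second region:
    # it replaces the candidate unless the candidate leads by more than the threshold
    return image if c_score - i_score <= SCORE_THRESHOLD else candidate

def best_face(images):
    candidate = None
    for image in images:                  # steps 604-606: iterate over the N images
        candidate = update_candidate(candidate, image)
    return candidate                      # step 607: final candidate is the best face

shots = [("first", 60.0), ("second", 68.0), ("first", 62.0), ("second", 64.0)]
print(best_face(shots))  # ('second', 68.0)
```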
Step 608: and determining identity information corresponding to the target face image according to the optimal face image.
It should be noted that, the content of step 608 is similar to that of step 307 in the embodiment shown in fig. 3, so that the description is omitted here.
In the embodiment of the application, N photographed images are acquired first, and the imaging quality of each photographed image is uneven due to the distortion of the lens of the photographing lens, etc., so that each photographed image can be divided into at least two areas with different imaging quality. And then determining the area where the target face image is located from at least two areas included in each photographed image. And then selecting one shot image from the N shot images, and determining the quality score of the target face image included in the selected shot image. And then determining the candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located and the quality score of the candidate face image determined at the last moment. Then judging whether N shooting images are processed, if so, taking the candidate face image determined at the current moment as the optimal face image; if not, selecting one shot image from unprocessed shot images contained in the N shot images, and returning to the step of determining the quality score of the target face image contained in the selected shot image until the N shot images are processed. And finally, determining identity information corresponding to the target face image according to the optimal face image. In the embodiment of the application, the candidate face image at the current moment is determined by continuously selecting one shot image from N shot images, so that the iterative updating of the candidate face image at the current moment is realized. 
Under this condition, once the N photographed images have been processed, the best face image can be determined directly, so that the process of determining the best face image is simpler and more efficient, the determined best face image is more accurate, and the identity information corresponding to the target face image is determined more accurately.
It should be noted that the embodiment shown in fig. 3, the embodiment shown in fig. 5, and the embodiment shown in fig. 6 are three parallel embodiments capable of implementing the identity information determining method provided by the present application. Briefly, the differences among them are as follows. In the embodiment shown in fig. 3, each photographed image is divided into at least two regions, the target face image located in the region with the highest imaging quality has the highest imaging quality, and at least one best face image is selected from the at least one face image in the region with the highest imaging quality. In the embodiment shown in fig. 5, at least one best face image is selected according to the region in which the target face image included in each photographed image is located and the quality score of the target face image included in each photographed image. In the embodiment shown in fig. 6, the candidate face image at the current moment is iteratively updated, so that the best face image can be determined as soon as the N photographed images have been processed. All three embodiments achieve the technical effect of making the determined best face image more accurate, and thus making the determination of the identity information corresponding to the target face image more accurate.
Fig. 7 is a block diagram of an identity information determining apparatus according to an embodiment of the present application, and referring to fig. 7, the apparatus includes an obtaining module 701, a dividing module 702, a first determining module 703, a selecting module 704, and a second determining module 705.
An acquiring module 701, configured to acquire N photographed images, where N is a positive integer;
a dividing module 702, configured to divide each captured image into at least two areas with different imaging quality;
a first determining module 703, configured to determine an area in which the target face image is located from at least two areas included in each captured image;
a selection module 704, configured to select at least one best face image from the target face images included in the N captured images according to an area where the target face image is located in each captured image;
the second determining module 705 is configured to determine identity information corresponding to the target face image according to the at least one optimal face image.
Optionally, the partitioning module 702 includes:
the dividing sub-module is used for dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
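As a rough illustration of the first alternative above (a preset correspondence between imaging quality and position information), the sketch below assumes imaging quality falls off toward the image edges, so a centered rectangle serves as the first region and everything outside it as the second region. The ratio value and function names are hypothetical, not the patent's prescribed division.

```python
def divide_regions(width: int, height: int, ratio: float = 0.6):
    """Return the first (high-quality) region as (x, y, w, h); the rest of the
    image is treated as the second (lower-quality) region."""
    w, h = int(width * ratio), int(height * ratio)
    x, y = (width - w) // 2, (height - h) // 2
    return (x, y, w, h)

def in_first_region(face_box, first_region) -> bool:
    """A face image is taken to be in the first region if its center lies inside it."""
    fx, fy, fw, fh = face_box
    cx, cy = fx + fw / 2, fy + fh / 2
    x, y, w, h = first_region
    return x <= cx <= x + w and y <= cy <= y + h

region = divide_regions(1920, 1080)
print(region)                                         # (384, 216, 1152, 648)
print(in_first_region((900, 500, 100, 100), region))  # center (950, 550) -> True
```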
Optionally, the selection module 704 includes:
the first selection submodule is used for selecting at least one face image in the area with the highest imaging quality from target face images included in the N shooting images;
a first determining sub-module for determining a quality score for each of the at least one face image;
and the second selection sub-module is used for selecting at least one optimal face image from the at least one face image according to the quality scores of the at least one face image.
Optionally, the selection module 704 includes:
the second determining submodule is used for determining the quality scores of the target face images included in each shot image;
and the third selection sub-module is used for selecting at least one optimal face image from the target face images included in the N shooting images according to the area of the target face image in each shooting image and the quality score of the target face image included in each shooting image.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the third selection submodule includes:
a first selecting unit, configured to select, when a target face image is in a first area in M photographed images included in N photographed images, a target face image with a highest quality score from the target face images included in the M photographed images according to quality scores of the target face images included in the M photographed images, where M is a positive integer smaller than N, as a first candidate face image;
A second selecting unit, configured to select, when the target face image is in a second area in K shot images included in the N shot images, a target face image with the highest quality score from the target face images included in the K shot images according to the quality scores of the target face images included in the K shot images, where K is a positive integer smaller than N, and a sum of K and M is equal to N;
and a third selection unit, configured to select at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image.
Optionally, the third selecting unit includes:
a first determining subunit, configured to determine a first score difference, where the first score difference is a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
a second determining subunit, configured to determine both the first candidate face image and the second candidate face image as the best face image when the first score difference is equal to the score threshold; determine the second candidate face image as the best face image when the first score difference is greater than the score threshold; and determine the first candidate face image as the best face image when the first score difference is smaller than the score threshold.
Optionally, the at least two regions include a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the second determination submodule includes:
a first determining unit configured to determine a plurality of score items of a target face image included in a current captured image, the current captured image being any one of the N captured images;
a second determining unit, configured to determine, when the target face image is in the first area in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items and a first weight of each score item of the target face image included in the current captured image;
and a third determining unit configured to determine, when the target face image is in the second region in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items and a second weight of each score item of the target face image included in the current captured image.
Optionally, the selection module 704 includes:
a third determining submodule, configured to select one shot image from the N shot images, and determine a quality score of a target face image included in the selected shot image;
A fourth determining submodule, configured to determine a candidate face image at the current moment according to an area where the target face image is located in the selected captured image, a quality score of the target face image included in the selected captured image, an area where the candidate face image determined at the previous moment is located, and a quality score of the candidate face image determined at the previous moment;
the judging submodule is used for judging whether N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the optimal face image when the N shooting images are processed, selecting one shooting image from unprocessed shooting images contained in the N shooting images when the N shooting images are unprocessed, and triggering the third determining sub-module to determine the quality score of the target face image contained in the selected shooting image until the N shooting images are processed.
Optionally, the at least two regions comprise a first region and a second region, the imaging quality of the first region being higher than the imaging quality of the second region;
the fourth determination submodule includes:
a fourth selecting unit, configured to select, when the region where the target face image is located in the selected captured image is the same as the region where the candidate face image determined at the previous moment is located, and the quality score of the target face image included in the selected captured image is different from the quality score of the candidate face image determined at the previous moment, a face image with the highest quality score from the target face image included in the selected captured image and the candidate face image determined at the previous moment as the candidate face image at the current moment;
a fourth determining unit, configured to determine a second score difference when the region where the target face image is located in the selected captured image is different from the region where the candidate face image determined at the previous moment is located and the region where the candidate face image determined at the previous moment is located is the first region, where the second score difference is the difference between the quality score of the target face image included in the selected captured image and the quality score of the candidate face image determined at the previous moment;
a fifth determining unit, configured to, when the second score difference is greater than the score threshold, use the target face image included in the selected captured image as the candidate face image at the current moment;
a sixth determining unit, configured to determine a third score difference when the region where the target face image is located in the selected captured image is different from the region where the candidate face image determined at the previous moment is located, and the region where the target face image is located in the selected captured image is the first region, where the third score difference is the difference between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected captured image;
and a seventh determining unit, configured to, when the third score difference is less than or equal to the score threshold, use the target face image included in the selected captured image as the candidate face image at the current moment.
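The replacement rules implemented by the fourth to seventh units above can be condensed into a single hypothetical decision function. The threshold value of 10 and the dict shape of the face records are assumptions for the sketch, not values from the embodiment:

```python
def update_candidate(new, cand, threshold=10):
    """Return the candidate face for the current moment.

    new / cand: dicts with "region" ("first" or "second") and "score";
    threshold: the assumed score threshold.
    """
    if new["region"] == cand["region"]:
        # Same region: simply keep the face with the higher quality score.
        return new if new["score"] > cand["score"] else cand
    if cand["region"] == "first":
        # The old candidate sits in the high-quality first region: replace it
        # only if the new face beats it by more than the threshold.
        return new if new["score"] - cand["score"] > threshold else cand
    # The new face sits in the first region: keep the old candidate only if
    # it beats the new face by more than the threshold.
    return cand if cand["score"] - new["score"] > threshold else new
```

The asymmetry biases selection toward faces captured in the first region, which is exactly the intent: a slightly higher score earned in the distorted second region should not displace a first-region face.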
In the embodiment of the application, N captured images are first acquired. Because lens distortion and similar factors make the imaging quality across each captured image uneven, each captured image can be divided into at least two regions with different imaging quality. The region where the target face image is located is then determined from the at least two regions included in each captured image. Next, at least one optimal face image is selected from the target face images included in the N captured images according to the region where the target face image is located in each captured image. Finally, identity information corresponding to the target face image is determined according to the at least one optimal face image. Because the at least one optimal face image is selected according to the region where the target face image is located in each captured image, the selection is more accurate, and the identity information corresponding to the target face image is in turn determined more accurately.
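For the per-region variant of the selection (one best face chosen inside each region, then compared via the first score difference), a sketch might look as follows. The threshold value, the tuple shape, and the behaviour when the difference exceeds the threshold are assumptions, since only the equal and smaller cases are spelled out in the description:

```python
def select_best(first_cand, second_cand, threshold=10):
    """Select the optimal face image(s) from the two per-region candidates.

    first_cand / second_cand: (image, quality_score) pairs picked from the
    first (higher imaging quality) and second regions respectively.
    """
    diff = second_cand[1] - first_cand[1]  # the "first score difference"
    if diff == threshold:
        return [first_cand, second_cand]   # both count as optimal faces
    if diff < threshold:
        return [first_cand]                # first-region candidate wins
    # diff > threshold: assumed here that the second-region candidate wins
    # once its lead over the first-region candidate exceeds the threshold.
    return [second_cand]
```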
It should be noted that the identity information determining apparatus provided in the above embodiment is described using the above division of functional modules only as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the identity information determining apparatus provided in the foregoing embodiments and the identity information determining method embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
Fig. 8 is a schematic structural diagram of an identity information determining apparatus according to an embodiment of the present application. The identity information determining apparatus 800 may vary considerably in configuration and performance, and may include one or more processors (central processing units, CPUs) 801 and one or more memories 802, where the memories 802 store at least one instruction that is loaded and executed by the processors 801. Of course, the identity information determining apparatus 800 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory comprising instructions executable by a processor in the identity information determining apparatus to perform the identity information determining method of the above embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing description covers only preferred embodiments of the application and is not intended to limit the application; any modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the scope of the application.

Claims (14)

1. A method for determining identity information, the method comprising:
acquiring N shooting images, wherein N is a positive integer;
dividing each photographed image into at least two regions with different imaging quality, wherein the at least two regions comprise a first region and a second region, and the imaging quality of the first region is higher than that of the second region;
Determining the region where the target face image is located from at least two regions included in each photographed image;
selecting at least one optimal face image from target face images included in the N shooting images according to the region where the target face image is located in each shooting image;
determining identity information corresponding to the target face image according to the at least one optimal face image;
wherein, according to the region where the target face image is located in each shot image, selecting at least one optimal face image from the target face images included in the N shot images includes:
determining a quality score of a target face image included in each photographed image;
when the target face image is in the first region in M shooting images included in the N shooting images, selecting a target face image with the highest quality score from the target face images included in the M shooting images as a first candidate face image according to the quality scores of the target face images included in the M shooting images, wherein M is a positive integer smaller than N;
when the target face image is in the second region in K shooting images included in the N shooting images, selecting a target face image with the highest quality score from the target face images included in the K shooting images as a second candidate face image according to the quality scores of the target face images included in the K shooting images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
And selecting at least one best face image from the first candidate face image and the second candidate face image according to the quality scores of the first candidate face image and the second candidate face image.
2. The method of claim 1, wherein dividing each captured image into at least two regions of different imaging quality comprises:
dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
3. The method of claim 1, wherein the selecting at least one best face image from the target face images included in the N captured images according to the region in which the target face image is located in each captured image, further comprises:
selecting at least one face image in a region with highest imaging quality from target face images included in the N shooting images;
Determining a quality score for each of the at least one face image;
and selecting at least one optimal face image from the at least one face image according to the quality scores of the at least one face image.
4. The method of claim 1, wherein the selecting at least one best face image from the first candidate face image and the second candidate face image based on the quality score of the first candidate face image and the quality score of the second candidate face image comprises:
determining a first score difference, the first score difference being a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
when the first scoring difference value is equal to a scoring threshold value, determining that the first candidate face image and the second candidate face image are both optimal face images; and when the first scoring difference value is smaller than the scoring threshold value, determining that the first candidate face image is the optimal face image.
5. The method of claim 1, wherein determining a quality score for a target face image included in each captured image comprises:
determining a plurality of scoring items of a target face image included in a current shooting image, wherein the current shooting image is any image in the N shooting images;
when the target face image is in the first region in the current shooting image, determining a quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and a first weight of each scoring item;
and when the target face image is in the second region in the current shooting image, determining a quality score of the target face image included in the current shooting image according to a plurality of scoring items of the target face image included in the current shooting image and a second weight of each scoring item.
6. The method of claim 1, wherein the selecting at least one best face image from the target face images included in the N captured images according to the region in which the target face image is located in each captured image includes:
selecting one shooting image from the N shooting images, and determining the quality score of the target face image included in the selected shooting image;
determining a candidate face image at the current moment according to the region where the target face image is located in the selected shooting image, the quality score of the target face image included in the selected shooting image, the region where the candidate face image determined at the last moment is located, and the quality score of the candidate face image determined at the last moment;
judging whether the N shot images are processed or not;
if so, taking the candidate face image determined at the current moment as the optimal face image, if not, selecting one shot image from unprocessed shot images contained in the N shot images, and returning to the step of determining the quality score of the target face image contained in the selected shot image until the N shot images are processed.
7. The method of claim 6, wherein the determining the candidate face image at the current time based on the region in which the target face image is located in the selected captured image, the quality score of the target face image included in the selected captured image, the region in which the candidate face image determined at the previous time is located, and the quality score of the candidate face image determined at the previous time comprises:
when the region where the target face image is located in the selected shooting image is the same as the region where the candidate face image determined at the previous moment is located and the quality score of the target face image included in the selected shooting image is different from the quality score of the candidate face image determined at the previous moment, selecting the face image with the highest quality score from the target face image included in the selected shooting image and the candidate face image determined at the previous moment as the candidate face image at the current moment;
when the region where the target face image is located in the selected shooting image is different from the region where the candidate face image determined at the previous moment is located and the region where the candidate face image determined at the previous moment is located is the first region, determining a second scoring difference value, wherein the second scoring difference value is the difference value between the quality score of the target face image included in the selected shooting image and the quality score of the candidate face image determined at the previous moment;
when the second scoring difference value is larger than the scoring threshold value, taking the target face image included in the selected shooting image as a candidate face image at the current moment;
when the region where the target face image is located in the selected shooting image is different from the region where the candidate face image determined at the previous moment is located and the region where the target face image is located in the selected shooting image is the first region, determining a third scoring difference value, wherein the third scoring difference value is the difference value between the quality score of the candidate face image determined at the previous moment and the quality score of the target face image included in the selected shooting image;
and when the third scoring difference value is smaller than or equal to the scoring threshold value, taking the target face image included in the selected shooting image as a candidate face image at the current moment.
8. An identity information determining apparatus, the apparatus comprising:
the acquisition module is used for acquiring N shooting images, wherein N is a positive integer;
a dividing module, configured to divide each shot image into at least two areas with different imaging quality, wherein the at least two areas comprise a first area and a second area, and the imaging quality of the first area is higher than that of the second area;
the first determining module is used for determining the area where the target face image is located from at least two areas included in each shot image;
a selection module, comprising a second determining submodule and a third selecting submodule, wherein the third selecting submodule comprises a first selecting unit, a second selecting unit, and a third selecting unit;
the second determining submodule is used for determining the quality score of the target face image included in each shooting image;
the first selecting unit is configured to select, when the target face image is in the first area in M shot images included in the N shot images, a target face image with the highest quality score from the target face images included in the M shot images as a first candidate face image according to the quality scores of the target face images included in the M shot images, wherein M is a positive integer smaller than N;
the second selecting unit is configured to select, when the target face image is in the second area in K shot images included in the N shot images, a target face image with the highest quality score from the target face images included in the K shot images as a second candidate face image according to the quality scores of the target face images included in the K shot images, wherein K is a positive integer smaller than N, and the sum of K and M is equal to N;
The third selecting unit is configured to select at least one best face image from the first candidate face image and the second candidate face image according to the quality score of the first candidate face image and the quality score of the second candidate face image;
and the second determining module is used for determining the identity information corresponding to the target face image according to the at least one optimal face image.
9. The apparatus of claim 8, wherein the partitioning module comprises:
the dividing sub-module is used for dividing each shot image into at least two areas with different imaging quality according to the corresponding relation between the preset imaging quality and the position information; or,
determining a corresponding relation between imaging quality and position information by counting imaging quality and position information of each region in the historical shooting image; and dividing each shot image into at least two areas with different imaging quality according to the determined corresponding relation.
10. The apparatus of claim 8, wherein the selection module further comprises:
the first selection submodule is used for selecting at least one face image in the area with the highest imaging quality from target face images included in the N shooting images;
A first determining submodule, configured to determine a quality score of each face image in the at least one face image;
and the second selection sub-module is used for selecting at least one optimal face image from the at least one face image according to the quality scores of the at least one face image.
11. The apparatus of claim 8, wherein the third selection unit comprises:
a first determining subunit, configured to determine a first score difference, where the first score difference is a difference between a quality score of the second candidate face image and a quality score of the first candidate face image;
a second determining subunit, configured to determine, when the first score difference value is equal to a score threshold value, both the first candidate face image and the second candidate face image as optimal face images; and when the first scoring difference value is smaller than the scoring threshold value, determining that the first candidate face image is the optimal face image.
12. The apparatus of claim 8, wherein the second determination submodule comprises:
A first determining unit, configured to determine a plurality of scoring items of a target face image included in a current captured image, where the current captured image is any one of the N captured images;
a second determining unit, configured to determine, when the target face image is in the first area in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items of the target face image included in the current captured image and a first weight of each score item;
and a third determining unit, configured to determine, when the target face image is in the second area in the current captured image, a quality score of the target face image included in the current captured image according to a plurality of score items of the target face image included in the current captured image and a second weight of each score item.
13. The apparatus of claim 8, wherein the selection module further comprises:
a third determining submodule, configured to select one photographed image from the N photographed images, and determine a quality score of a target face image included in the selected photographed image;
a fourth determining submodule, configured to determine a candidate face image at the current moment according to an area where the target face image is located in the selected captured image, a quality score of the target face image included in the selected captured image, an area where the candidate face image determined at the previous moment is located, and a quality score of the candidate face image determined at the previous moment;
The judging submodule is used for judging whether the N shot images are processed or not;
and the triggering sub-module is used for taking the candidate face image determined at the current moment as the optimal face image when the N shooting images are processed, selecting one shooting image from unprocessed shooting images contained in the N shooting images when the N shooting images are not processed, and triggering the third determination sub-module to determine the quality score of the target face image contained in the selected shooting image until the N shooting images are processed.
14. The apparatus of claim 13, wherein the fourth determination submodule comprises:
a fourth selecting unit, configured to select, when the area where the target face image is located in the selected captured image is the same as the area where the candidate face image determined at the previous time is located, and the quality score of the target face image included in the selected captured image is different from the quality score of the candidate face image determined at the previous time, a face image with the highest quality score from the target face image included in the selected captured image and the candidate face image determined at the previous time as the candidate face image at the current time;
A fourth determining unit, configured to determine a second score difference when the area in which the target face image is located in the selected captured image is different from the area in which the candidate face image determined at the previous time is located and the area in which the candidate face image determined at the previous time is located is the first area, where the second score difference is a difference between a quality score of the target face image included in the selected captured image and a quality score of the candidate face image determined at the previous time;
a fifth determining unit, configured to, when the second score difference value is greater than a score threshold value, use a target face image included in the selected captured image as a candidate face image at the current time;
a sixth determining unit, configured to determine a third score difference when an area in which the target face image is located in the selected captured image is different from an area in which the candidate face image determined at the previous time is located, and the area in which the target face image is located in the selected captured image is the first area, where the third score difference is a difference between a quality score of the candidate face image determined at the previous time and a quality score of the target face image included in the selected captured image;
And a seventh determining unit, configured to, when the third score difference value is less than or equal to the score threshold value, use a target face image included in the selected captured image as a candidate face image at the current time.
CN201910251349.5A 2019-03-29 2019-03-29 Identity information determining method and device Active CN111767757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910251349.5A CN111767757B (en) 2019-03-29 2019-03-29 Identity information determining method and device


Publications (2)

Publication Number Publication Date
CN111767757A CN111767757A (en) 2020-10-13
CN111767757B true CN111767757B (en) 2023-11-17

Family

ID=72717937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910251349.5A Active CN111767757B (en) 2019-03-29 2019-03-29 Identity information determining method and device

Country Status (1)

Country Link
CN (1) CN111767757B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188075B (en) * 2019-07-05 2023-04-18 杭州海康威视数字技术股份有限公司 Snapshot, image processing device and image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930261A (en) * 2012-12-05 2013-02-13 上海市电力公司 Face snapshot recognition method
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN108229297A (en) * 2017-09-30 2018-06-29 深圳市商汤科技有限公司 Face identification method and device, electronic equipment, computer storage media
CN109389019A (en) * 2017-08-14 2019-02-26 杭州海康威视数字技术股份有限公司 Facial image selection method, device and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8861802B2 (en) * 2012-03-13 2014-10-14 Honeywell International Inc. Face image prioritization based on face quality analysis


Also Published As

Publication number Publication date
CN111767757A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN110222787B (en) Multi-scale target detection method and device, computer equipment and storage medium
US20200320726A1 (en) Method, device and non-transitory computer storage medium for processing image
CN108269254B (en) Image quality evaluation method and device
CN104346811B (en) Object real-time tracking method and its device based on video image
US20110142299A1 (en) Recognition of faces using prior behavior
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
US20110299787A1 (en) Invariant visual scene and object recognition
CN111340749B (en) Image quality detection method, device, equipment and storage medium
CN110889314B (en) Image processing method, device, electronic equipment, server and system
CN113362441B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, computer equipment and storage medium
CN111553302B (en) Key frame selection method, device, equipment and computer readable storage medium
CN113255685B (en) Image processing method and device, computer equipment and storage medium
CN110163265A (en) Data processing method, device and computer equipment
CN109815823B (en) Data processing method and related product
CN112651321A (en) File processing method and device and server
CN111767757B (en) Identity information determining method and device
CN112204957A (en) White balance processing method and device, movable platform and camera
CN115953813B (en) Expression driving method, device, equipment and storage medium
CN112200775A (en) Image definition detection method and device, electronic equipment and storage medium
JP2016219879A (en) Image processing apparatus, image processing method and program
CN114092850A (en) Re-recognition method and device, computer equipment and storage medium
CN112188075B (en) Snapshot, image processing device and image processing method
JP6770363B2 (en) Face direction estimator and its program
KR20230060439A (en) Method and system for detecting recaptured image method thereof
CN115294493A (en) Visual angle path acquisition method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant