CN114944004A - Face image storage method, device, equipment, computer medium and program product - Google Patents

Face image storage method, device, equipment, computer medium and program product

Info

Publication number
CN114944004A
CN114944004A (application CN202210874980.2A)
Authority
CN
China
Prior art keywords
image
face
region
feature
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210874980.2A
Other languages
Chinese (zh)
Inventor
吴冬伟
李浩浩
刘忠平
迟金星
孙国亮
刘子雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haiyi Technology Beijing Co Ltd
Original Assignee
Haiyi Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haiyi Technology Beijing Co Ltd filed Critical Haiyi Technology Beijing Co Ltd
Priority to CN202210874980.2A priority Critical patent/CN114944004A/en
Publication of CN114944004A publication Critical patent/CN114944004A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Abstract

The embodiments of the present disclosure disclose a face image storage method, apparatus, device, computer medium, and program product. One embodiment of the method comprises: in response to determining that the number of faces displayed in a face image is 1, intercepting the face region image displayed in the face image; calibrating a feature region corresponding to a target region in the face region image to generate a calibrated face region image; performing edge detection processing on the calibrated face region image to identify the feature region contour corresponding to the feature region; identifying a feature image corresponding to the feature region contour from the face region image; generating a face score value corresponding to the face region image according to the feature image group; and storing the face region image in a target database in response to determining that the face score value is greater than or equal to a preset score value. This embodiment shortens the registration time for foreign persons and improves their passing efficiency in the target area.

Description

Face image storage method, device, equipment, computer medium and program product
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, a device, a computer medium, and a program product for storing a face image.
Background
At present, when registering information for a person from outside a target area such as a school or factory (for example, a visitor who needs to enter the school), the following methods are generally adopted: the information of the foreign person is registered manually, or registration is performed by taking a photograph.
However, these approaches generally have the following technical problems:
first, face recognition is usually required when passing through different places in the target area, and registering the information of the foreign person manually reduces the passing efficiency of the foreign person in the target area;
second, when registering by photographing, the captured face image is not checked, so an unclear face image also reduces the passing efficiency of the foreign person in the target area.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a face image storage method, apparatus, electronic device, computer readable medium and program product to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a face image storage method, including: the method comprises the steps of responding to a received face image sent by a face acquisition terminal, and determining the number of faces displayed in the face image; in response to determining that the number of faces displayed in the face image is 1, intercepting a face area image displayed by the face image; for each target area in the set of target areas, performing the following processing steps: calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region; identifying a characteristic image corresponding to the contour of the characteristic region from the face region image; determining each identified characteristic image as a characteristic image group; generating a face score value corresponding to the face region image according to the characteristic image group; and storing the face region image into a target database in response to the fact that the face score value is larger than or equal to a preset score value.
In a second aspect, some embodiments of the present disclosure provide a face image storage apparatus, including: the first determining unit is configured to respond to the received face image sent by the face acquisition terminal and determine the number of faces displayed in the face image; an intercepting unit configured to intercept a face region image displayed by the face image in response to determining that the number of faces displayed in the face image is 1; an identification unit configured to perform, for each target area in the target area group, the following processing steps: calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region; identifying a characteristic image corresponding to the contour of the characteristic area from the face area image; a second determination unit configured to determine the respective recognized feature images as a feature image group; a generating unit configured to generate a face score value corresponding to the face region image from the feature image group; and the storage unit is configured to store the face region image into a target database in response to determining that the face score value is greater than or equal to a preset score value.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the face image storage method of some embodiments of the present disclosure shortens the registration time for foreign persons and improves their passing efficiency in the target area. Specifically, the passing efficiency of the foreign person in the target area is reduced because face recognition is usually required when passing through different places in the target area, and registering the information of the foreign person manually reduces that efficiency. Based on this, the face image storage method of some embodiments of the present disclosure first determines the number of faces displayed in a face image in response to receiving the face image sent by a face acquisition terminal. The quality of the photographed face image can thereby be preliminarily determined; for example, when the number of faces displayed in the face image is greater than 1, the face of the foreign person cannot be confirmed. Second, in response to determining that the number of faces displayed in the face image is 1, the face region image displayed in the face image is intercepted, providing data support for assessing the sharpness of the photographed face image. Next, for each target region in the target region group, the following processing steps are performed: calibrating the feature region corresponding to the target region in the face region image to generate a calibrated face region image; performing edge detection processing on the calibrated face region image to identify the feature region contour corresponding to the feature region; and identifying the feature image corresponding to the feature region contour from the face region image.
In this way, the key feature regions in the face region image can be detected. For example, the key feature regions may include, but are not limited to: an eye feature region, a nose feature region, and a mouth feature region. Then, the identified feature images are determined as a feature image group, and a face score value corresponding to the face region image is generated from the feature image group. This completes the scoring detection of the photographed face image; the larger the face score value, the higher the sharpness of the face image. Finally, in response to determining that the face score value is greater than or equal to a preset score value, the face region image is stored in a target database, completing the registration of the foreign person's face image. The registration time of the foreign person is thereby shortened, and the passing efficiency of the foreign person in the target area is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a face image storage method according to the present disclosure;
fig. 2 is a schematic diagram of an application scenario of extracting iris pixel lines in a face image storage method according to some embodiments of the present disclosure;
FIG. 3 is a schematic block diagram of some embodiments of a face image storage device according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are illustrative rather than limiting; those skilled in the art will appreciate that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flow diagram of some embodiments of a face image storage method according to the present disclosure. A flow 100 of some embodiments of a face image storage method according to the present disclosure is shown. The face image storage method comprises the following steps:
step 101, in response to receiving a face image sent by a face acquisition terminal, determining the number of faces displayed in the face image.
In some embodiments, an executing entity (e.g., a server) of the face image storage method may determine the number of faces displayed in the face image in response to receiving the face image sent by the face acquisition terminal. The face acquisition terminal may refer to a camera device set in a target area for acquiring a face image of an external person. The target area may refer to an area of a school, a factory, or the like.
And 102, in response to the fact that the number of the faces displayed in the face image is determined to be 1, intercepting a face area image displayed by the face image.
In some embodiments, the execution subject may intercept the face region image displayed in the face image in response to determining that the number of faces displayed in the face image is 1. In practice, the minimal rectangular image containing the face may be cropped from the face image as the face region image. The face region image displays the facial features (eyes, nose, mouth, ears, eyebrows).
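As a minimal sketch of this cropping step (assuming the face bounding box has already been produced by an upstream face detector, which the patent does not specify), the face region image is simply the minimal rectangle sliced out of the image array:

```python
import numpy as np

def crop_face_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the minimal rectangle (x, y, width, height) enclosing the
    detected face, yielding the face region image."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

# Toy example: a 100x100 "image" with a hypothetical face box at (20, 30), 40x50 px.
image = np.zeros((100, 100, 3), dtype=np.uint8)
face_region = crop_face_region(image, (20, 30, 40, 50))
print(face_region.shape)  # (50, 40, 3)
```

The box coordinates here are illustrative placeholders, not values from the patent.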
Step 103, for each target area in the target area group, executing the following processing steps:
step 1031, calibrating the characteristic region corresponding to the target region in the face region image to generate a calibrated face region image.
In some embodiments, the execution subject may calibrate the feature region corresponding to the target region in the face region image to generate a calibrated face region image. Here, the target region group may represent the respective facial feature regions. The target region group may include, but is not limited to: an eye feature region, a nose feature region, a mouth feature region, an ear feature region, and an eyebrow feature region. The feature region indicated by the target region in the face region image may be calibrated by a VGG model to generate the calibrated face region image; alternatively, a manually calibrated feature region indicated by the target region may be received to generate the calibrated face region image.
Step 1032, performing edge detection processing on the calibrated face region image to identify a feature region contour corresponding to the feature region.
In some embodiments, the execution subject may perform an edge detection process on the calibrated face region image through an edge detection model (e.g., an OpenCV-based edge detection model) to identify a feature region contour corresponding to the feature region.
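The patent only names an OpenCV-based edge detection model; as a simplified, dependency-free stand-in (not the patent's actual model), a gradient-magnitude edge map can be sketched with central differences, where the threshold value is an illustrative assumption:

```python
import numpy as np

def gradient_edges(gray: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Central-difference gradient-magnitude edge map; returns a boolean
    mask of edge pixels. A production pipeline would typically use
    cv2.Canny followed by cv2.findContours to get the region contour."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[1:-1, 1:-1] = g[1:-1, 2:] - g[1:-1, :-2]   # horizontal gradient
    gy[1:-1, 1:-1] = g[2:, 1:-1] - g[:-2, 1:-1]   # vertical gradient
    return np.hypot(gx, gy) >= threshold

# A dark square on a bright background yields edges along its border only.
gray = np.full((20, 20), 200, dtype=np.uint8)
gray[5:15, 5:15] = 20
edges = gradient_edges(gray)
print(edges.any())  # True
```

Inside the uniform square the gradient is zero, so only the boundary pixels are marked as edges.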
And 1033, identifying a feature image corresponding to the feature area outline from the face area image.
In some embodiments, the execution subject may identify a feature image framed by the feature region outline from the face region image.
And 104, determining each identified characteristic image as a characteristic image group.
In some embodiments, the execution subject may determine each of the recognized feature images as a feature image group.
And 105, generating a face score value corresponding to the face region image according to the characteristic image group.
In some embodiments, the execution subject may generate a face score value corresponding to the face region image according to the feature image group. Wherein, the characteristic image group comprises: an eye feature image. The eye characteristic image is a characteristic image for characterizing the eye.
In practice, the executing entity may generate the face score value corresponding to the face region image by:
firstly, inputting the eye characteristic image into a pre-trained image definition detection model to obtain the eye image definition corresponding to the eye characteristic image. Here, the image sharpness detection model trained in advance may be a neural network model trained in advance with an image as an input and an image sharpness as an output. For example, the pre-trained image sharpness detection model may be a convolutional neural network model.
And secondly, extracting a preset number of iris pixel lines from the eye characteristic image in response to the fact that the eye image definition is larger than or equal to the preset eye image definition. In practice, a preset number of iris pixel lines perpendicular to the outer circumference of the pupil and the outer circumference of the iris may be extracted from the iris region in the eye feature image. Here, the setting of the preset number is not limited. As shown in fig. 2, the extracted iris pixel line is perpendicular to the outer circles of the iris and the pupil.
And thirdly, for each iris pixel line in the preset number of iris pixel lines, performing differential processing on every two pixels in the iris pixel lines to generate pixel difference degrees corresponding to the two pixels, and obtaining a pixel difference degree group. Here, the differential processing may refer to first order differential processing or second order differential processing. Here, the first order differentiation processing may refer to extracting a first order gradient value (pixel difference degree) of every two pixels in the above-described iris pixel line by a first order differential operator. For example, the first order differential operator may refer to a Sobel operator (Sobel operator). The second order differential processing may refer to extracting a second order gradient value (pixel difference degree) of every two pixels in the above iris pixel line by a second order differential operator. For example, the second order differential operator may refer to the Laplacian operator (Laplacian operator).
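The third step can be sketched as a one-dimensional analogue of the Sobel/Laplacian operators named above: taking absolute differences between adjacent pixels along one iris pixel line (first order), or differences of those differences (second order). The sample pixel values are made up for illustration:

```python
def pixel_differences(line, order=1):
    """Pixel difference degrees along an iris pixel line.
    order=1: absolute first-order differences (Sobel-style gradient);
    order=2: absolute second-order differences (Laplacian-style)."""
    values = [float(p) for p in line]
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    if order == 1:
        return diffs
    return [abs(b - a) for a, b in zip(diffs, diffs[1:])]

line = [10, 10, 50, 200, 200]                 # hypothetical pixel intensities
print(pixel_differences(line))                # [0.0, 40.0, 150.0, 0.0]
print(pixel_differences(line, order=2))       # [40.0, 110.0, 150.0]
```

A sharp iris texture produces large differences between neighbouring pixels; a blurred one produces small, smooth differences.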
And fourthly, generating an iris score value as a face score value according to the obtained pixel difference degree groups.
In practice, the fourth step described above may comprise the following sub-steps:
the first sub-step selects the maximum pixel difference from the pixel difference groups as the target pixel difference.
And a second sub-step of determining, in response to a determination that the target pixel difference is greater than or equal to a first preset difference, a sum of differences of pixels greater than or equal to a second preset difference in a pixel difference group corresponding to the target pixel difference as an iris score. Here, the setting of the first preset difference degree is not limited. The pixel difference degree group corresponding to the target pixel difference degree may be a pixel difference degree group including the target pixel difference degree in each pixel difference degree group.
Optionally, the fourth step may further include the following sub-steps:
a third sub-step of selecting a pixel difference degree group including the largest number of pixel difference degrees from the respective pixel difference degree groups as a target pixel difference degree group in response to determining that the target pixel difference degree is smaller than the first preset difference degree.
And a fourth substep of determining a total sum of the differences of the respective target pixels included in the target pixel difference degree group as an iris score value.
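The four sub-steps of the fourth step can be sketched as follows; the two threshold values are illustrative placeholders, since the patent leaves the preset difference degrees unspecified:

```python
def iris_score(groups, first_threshold=100.0, second_threshold=30.0):
    """Iris score value per the fourth step's sub-steps: pick the maximum
    pixel difference degree across all groups; if it reaches the first
    preset difference degree, sum the differences >= the second preset
    difference degree in its group; otherwise sum the group containing
    the largest number of pixel difference degrees."""
    target = max(max(g) for g in groups)                 # sub-step 1
    if target >= first_threshold:                        # sub-step 2
        group = next(g for g in groups if target in g)
        return sum(d for d in group if d >= second_threshold)
    group = max(groups, key=len)                         # sub-step 3
    return sum(group)                                    # sub-step 4

groups = [[0.0, 40.0, 150.0], [5.0, 20.0]]
print(iris_score(groups))  # 190.0  (40 + 150 from the group containing 150)
```

With all differences below the first threshold, the fallback path sums the longest group instead.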
The related content in step 105 serves as an inventive point of the present disclosure, thereby solving the second technical problem mentioned in the background: when registering by photographing, the captured face image is not checked, so an unclear face image reduces the passing efficiency of the foreign person in the target area. The factor that reduces this passing efficiency is that the photographed face image is unclear; if this factor is addressed, the passing efficiency of foreign persons in the target area can be improved. To achieve this effect, the eye feature image is first input into a pre-trained image definition detection model to obtain the eye image definition corresponding to the eye feature image. The eyes are strongly identifying (e.g., each person's iris is unique), so when the captured eye feature image is clear, the foreign person can be clearly identified. Then, in response to determining that the eye image definition is greater than or equal to a preset eye image definition, a preset number of iris pixel lines are extracted from the eye feature image. Thus, iris pixel lines can be extracted from a clear eye feature image, providing data support for determining the image quality of the captured face region image. Then, for each of the preset number of iris pixel lines, differential processing is performed on every two adjacent pixels in the iris pixel line to generate the pixel difference degrees corresponding to those pixels, obtaining a pixel difference degree group.
In this way, the rate of change (pixel difference degree) of the iris-region pixel texture in the eye feature image can be detected, which facilitates the subsequent comprehensive detection of the eye feature image. Finally, an iris score value is generated as the face score value according to the obtained pixel difference degree groups, so the score of the iris is determined as the face score value. Because the iris can uniquely identify a person, a clear iris indicates that the photographed face image is a high-quality, high-definition image. This completes the detection of the photographed face image, avoids storing unclear face images, and improves the passing efficiency of external personnel in the target area.
In some optional implementations of some embodiments, the executing subject may further generate a face score value corresponding to the face region image by:
firstly, inputting the eye characteristic image into a pre-trained image definition detection model to obtain the eye image definition corresponding to the eye characteristic image. Here, the image sharpness detection model trained in advance may be a neural network model trained in advance with an image as an input and an image sharpness as an output. For example, the pre-trained image sharpness detection model may be a convolutional neural network model.
And secondly, in response to the fact that the definition of the eye image is smaller than the preset definition of the eye image, inputting each feature image in the feature image group into the image definition detection model to obtain an image definition group. And the characteristic images in the characteristic image group correspond to the image definition in the image definition group. In practice, the executing subject may input each feature image in the feature image group into the image sharpness detecting model to generate an image sharpness, resulting in an image sharpness group.
And thirdly, according to a preset feature image weight group, carrying out weighted summation processing on each image definition in the image definition group to generate weighted definition serving as a face score value. Wherein, the characteristic image weight in the characteristic image weight group corresponds to the characteristic image in the characteristic image group. Here, the feature image weight in the feature image weight group is a weight set according to the importance of the sharpness of the feature image. For example, the feature image group includes: eye feature images, nose feature images, mouth feature images, ear feature images, eyebrow feature images. Wherein, the set feature image weight of the eye feature image may be 0.4. The feature image weight of the set nose feature image may be 0.1. The feature image weight of the set mouth feature image may be 0.2. The feature image weight of the ear feature image set may be 0.1. The feature image weight of the set eyebrow feature image may be 0.2. In practice, first, the product of each image sharpness in the image sharpness group and the feature image weight corresponding to the image sharpness may be determined as an image sub-sharpness, and an image sub-sharpness group may be obtained. Then, the sum of the individual image sub-resolutions in the above image sub-resolution group may be determined as a weighted resolution as a face score value.
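The weighted summation in the third step, using the example weights given in the text (the sharpness values themselves are made-up inputs for illustration), can be sketched as:

```python
def weighted_face_score(sharpness, weights):
    """Weighted sum of per-feature image definitions: multiply each
    feature image's definition by its feature image weight, then sum
    the image sub-definitions into the weighted definition."""
    return sum(sharpness[k] * weights[k] for k in weights)

# Example feature image weights from the text (they sum to 1.0).
weights = {"eye": 0.4, "nose": 0.1, "mouth": 0.2, "ear": 0.1, "eyebrow": 0.2}
# Hypothetical per-feature definitions produced by the detection model.
sharpness = {"eye": 0.9, "nose": 0.8, "mouth": 0.7, "ear": 0.6, "eyebrow": 0.5}
score = weighted_face_score(sharpness, weights)
print(round(score, 2))  # 0.74
```

Weighting the eye feature image most heavily reflects the statement that the iris is the most identity-discriminating region.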
And 106, in response to the fact that the face score value is larger than or equal to a preset score value, storing the face region image into a target database.
In some embodiments, the execution subject may store the face region image in a target database in response to determining that the face score value is greater than or equal to a predetermined score value. Here, the setting of the preset score value is not limited. The target database refers to a database which is set for storing images of the foreign persons.
Optionally, in response to determining that the number of faces displayed in the face image is greater than 1, generating first prompt information representing the abnormality of the face image, and sending the first prompt information to the face acquisition terminal to prompt the face acquisition terminal to acquire the face image again.
In some embodiments, the execution subject may generate first prompt information representing an abnormality of the face image in response to determining that the number of faces displayed in the face image is greater than 1, and send the first prompt information to the face acquisition terminal to prompt the face acquisition terminal to acquire the face image again. Here, the first prompt information may indicate that a plurality of faces exist in the face image. For example, the first prompt information may be "there are multiple faces in the face image, please shoot the image again".
Optionally, in response to determining that the face score value is smaller than the preset score value, second prompt information representing that the face image is unclear is generated, and the second prompt information is sent to the face acquisition terminal to prompt the face acquisition terminal to acquire the face image again.
In some embodiments, the execution subject may generate second prompt information indicating that the face image is unclear in response to determining that the face score value is smaller than the preset score value, and send the second prompt information to the face acquisition terminal to prompt it to re-acquire the face image. Here, the second prompt information may indicate that the face image is unclear. For example, the second prompt information may be "the above face image is unclear, please shoot the image again".
The above embodiments of the present disclosure have the following beneficial effects: by the face image storage method of some embodiments of the present disclosure, the registration time of the external person is shortened, and the passing efficiency of the external person in the target area is improved. Specifically, the reason why the passing efficiency of the alien person in the target area is reduced is that: when passing through different places in the target area, face recognition is usually required, and when the information of the foreign person is registered in a manual registration mode, the passing efficiency of the foreign person in the target area is reduced. Based on this, the face image storage method according to some embodiments of the present disclosure first determines the number of faces displayed in the face image in response to receiving the face image sent by the face acquisition terminal. Thereby, the quality of the photographed face image can be preliminarily determined. For example, when the number of faces displayed in the face image is greater than 1, the faces of the external persons cannot be confirmed. And secondly, in response to the fact that the number of the faces displayed in the face image is determined to be 1, intercepting a face area image displayed by the face image. Therefore, data support is provided for resolving the definition of the shot face image. Next, for each target area in the target area group, the following processing steps are performed: calibrating the characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region; and identifying a characteristic image corresponding to the contour of the characteristic region from the face region image. 
Thus, the key feature regions in the face region image can be detected. For example, the key feature regions may include, but are not limited to: an eye feature region, a nose feature region, and a mouth feature region. Then, the identified feature images are determined as a feature image group, and a face score value corresponding to the face region image is generated according to the feature image group. Thereby, the scoring of the photographed face image can be completed. The larger the face score value, the higher the sharpness of the face image. Finally, in response to determining that the face score value is greater than or equal to a preset score value, the face region image is stored in a target database, completing the registration of the foreign person's face image. Therefore, the registration time of the foreign person is shortened, and the passing efficiency of the foreign person in the target area is improved.
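A minimal sketch of the branching logic summarized above (single-face check, score check, then store or re-prompt); the function name, message strings, and threshold are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of the storage decision flow; names and the
# preset score value are assumptions for demonstration only.

def decide_storage(num_faces, face_score, preset_score=0.6):
    """Mirror the branching: one clear face -> store, otherwise re-prompt."""
    if num_faces != 1:
        # More than one face (or none): first prompt, ask for a new capture.
        return "prompt: face image abnormal, please shoot again"
    if face_score < preset_score:
        # A single but unclear face: second prompt, ask for a new capture.
        return "prompt: face image unclear, please shoot again"
    # A single face meeting the preset score: register it.
    return "store face region image in target database"

print(decide_storage(2, 0.9))  # abnormal: two faces in the frame
print(decide_storage(1, 0.3))  # unclear: score below the preset value
print(decide_storage(1, 0.8))  # stored
```

Under this sketch, only a capture that passes both checks reaches the target database, matching the method's two re-acquisition prompts.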
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a face image storage apparatus. These apparatus embodiments correspond to the method embodiments shown in fig. 1, and the apparatus can be applied in various electronic devices.
As shown in fig. 3, the face image storage device 300 of some embodiments includes: a first determination unit 301, an interception unit 302, a recognition unit 303, a second determination unit 304, a generation unit 305, and a storage unit 306. The first determining unit 301 is configured to determine the number of faces displayed in a face image in response to receiving the face image sent by a face acquisition terminal; an intercepting unit 302 configured to intercept a face region image displayed by the face image in response to determining that the number of faces displayed in the face image is 1; an identifying unit 303 configured to perform the following processing steps for each target area in the target area group: calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region; identifying a characteristic image corresponding to the contour of the characteristic area from the face area image; a second determination unit 304 configured to determine the respective recognized feature images as a feature image group; a generating unit 305 configured to generate a face score value corresponding to the face region image from the feature image group; the storage unit 306 is configured to store the face region image in a target database in response to determining that the face score value is greater than or equal to a preset score value.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to fig. 4, a block diagram of an electronic device (e.g., server) 400 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the method comprises the steps of responding to a received face image sent by a face acquisition terminal, and determining the number of faces displayed in the face image; in response to determining that the number of faces displayed in the face image is 1, intercepting a face area image displayed by the face image; for each target area in the set of target areas, performing the following processing steps: calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region; identifying a characteristic image corresponding to the contour of the characteristic region from the face region image; determining each identified characteristic image as a characteristic image group; generating a face score value corresponding to the face region image according to the characteristic image group; and storing the face region image into a target database in response to determining that the face score value is greater than or equal to a preset score value.
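The edge detection step recited in these processing steps is not tied to a particular operator. As one hedged illustration, a Sobel gradient-magnitude edge map over a grayscale feature region might look like this (pure Python over a 2-D intensity list; the function name and threshold are assumptions):

```python
# Illustrative stand-in for the unspecified edge detection processing:
# a Sobel gradient-magnitude edge map over a 2-D list of pixel intensities.

def sobel_edges(gray, threshold=100):
    """Return a 0/1 edge map; border pixels are left unmarked."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses at (y, x).
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges
```

In practice a library operator (e.g., Canny) would typically replace this sketch; the contour of a feature region would then be traced along the marked pixels.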
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first determination unit, an interception unit, an identification unit, a second determination unit, a generation unit, and a storage unit. The names of these units do not constitute a limitation to the unit itself in some cases, and for example, the first determination unit may also be described as "a unit that determines the number of faces displayed in a face image in response to receiving the face image transmitted from the face acquisition terminal".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements any of the above-described face image storage methods.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and also encompasses other technical solutions formed by arbitrary combinations of the above technical features or their equivalents without departing from the inventive concept. For example, it encompasses technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (7)

1. A face image storage method comprises the following steps:
the method comprises the steps of responding to a received face image sent by a face acquisition terminal, and determining the number of faces displayed in the face image;
intercepting a face area image displayed by the face image in response to determining that the number of faces displayed in the face image is 1;
for each target area in the set of target areas, performing the following processing steps:
calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image;
carrying out edge detection processing on the calibrated face region image so as to identify a feature region outline corresponding to the feature region;
identifying a characteristic image corresponding to the characteristic region outline from the face region image;
determining each identified characteristic image as a characteristic image group;
generating a face score value corresponding to the face region image according to the characteristic image group;
and in response to determining that the face score value is greater than or equal to a preset score value, storing the face region image into a target database.
2. The method of claim 1, wherein the set of feature images comprises: the eye characteristic image is a characteristic image for characterizing the eye; and
wherein the generating a face score value corresponding to the face region image according to the characteristic image group comprises:
inputting the eye characteristic image into a pre-trained image definition detection model to obtain the eye image definition corresponding to the eye characteristic image;
in response to the fact that the eye image definition is smaller than the preset eye image definition, inputting each feature image in the feature image group into the image definition detection model to obtain an image definition group, wherein the feature images in the feature image group correspond to the image definition in the image definition group;
and according to a preset feature image weight set, carrying out weighted summation processing on each image definition in the image definition group to generate weighted definition serving as a face score value, wherein the feature image weight in the feature image weight set corresponds to the feature image in the feature image group.
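The claim leaves the pre-trained image definition detection model unspecified. As an illustrative stand-in, a Laplacian-variance sharpness proxy combined with the weighted summation of this step could be sketched as follows (function names and weights are assumptions, not from the claims):

```python
# Illustrative sketch of claim 2's scoring: a sharpness proxy per feature
# image, then a weighted summation into a face score value. The Laplacian
# variance is a common sharpness heuristic, standing in for the patent's
# unspecified "image definition detection model".

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a 2-D intensity list."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1]
                   + gray[y][x+1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def face_score(sharpness_by_feature, weights):
    """Weighted summation of per-feature sharpness values."""
    return sum(weights[name] * s for name, s in sharpness_by_feature.items())
```

A perfectly flat image yields zero Laplacian variance (no detail), while a sharp, high-contrast feature image yields a large value; the preset feature image weights then determine how much each feature region (eye, nose, mouth) contributes to the final score.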
3. The method of claim 1, wherein the method further comprises:
in response to determining that the number of faces displayed in the face image is greater than 1, generating first prompt information indicating that the face image is abnormal, and sending the first prompt information to the face acquisition terminal to prompt the face acquisition terminal to re-acquire the face image;
and in response to determining that the face score value is smaller than the preset score value, generating second prompt information indicating that the face image is unclear, and sending the second prompt information to the face acquisition terminal to prompt the face acquisition terminal to re-acquire the face image.
4. A face image storage device comprising:
the first determination unit is configured to respond to the received face image sent by a face acquisition terminal and determine the number of faces displayed in the face image;
an intercepting unit configured to intercept a face region image displayed by the face image in response to determining that the number of faces displayed in the face image is 1;
an identification unit configured to perform, for each target area in the target area group, the following processing steps: calibrating a characteristic region corresponding to the target region in the face region image to generate a calibrated face region image; performing edge detection processing on the calibrated face region image to identify a feature region outline corresponding to the feature region; identifying a characteristic image corresponding to the characteristic region outline from the face region image;
a second determination unit configured to determine the respective recognized feature images as a feature image group;
a generating unit configured to generate a face score value corresponding to the face region image from the feature image group;
a storage unit configured to store the face region image into a target database in response to determining that the face score value is greater than or equal to a preset score value.
5. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
6. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-3.
7. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-3.
CN202210874980.2A 2022-07-25 2022-07-25 Face image storage method, device, equipment, computer medium and program product Pending CN114944004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210874980.2A CN114944004A (en) 2022-07-25 2022-07-25 Face image storage method, device, equipment, computer medium and program product

Publications (1)

Publication Number Publication Date
CN114944004A true CN114944004A (en) 2022-08-26

Family

ID=82910224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210874980.2A Pending CN114944004A (en) 2022-07-25 2022-07-25 Face image storage method, device, equipment, computer medium and program product

Country Status (1)

Country Link
CN (1) CN114944004A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201205472A (en) * 2010-07-21 2012-02-01 Hon Hai Prec Ind Co Ltd Camera device and method for taking photos using the camera device
CN107977639A (en) * 2017-12-11 2018-05-01 浙江捷尚视觉科技股份有限公司 A kind of face definition judgment method
US20190026575A1 (en) * 2017-07-20 2019-01-24 Baidu Online Network Technology (Beijing) Co., Ltd. Living body detecting method and apparatus, device and storage medium
CN110321753A (en) * 2018-03-28 2019-10-11 浙江中正智能科技有限公司 A kind of quality of human face image evaluation method based on Face geometric eigenvector
CN111353368A (en) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 Pan-tilt camera, face feature processing method and device and electronic equipment
CN111476808A (en) * 2020-03-19 2020-07-31 北京万里红科技股份有限公司 Iris image definition evaluation method and device
CN112036242A (en) * 2020-07-28 2020-12-04 重庆锐云科技有限公司 Face picture acquisition method and device, computer equipment and storage medium
WO2021004112A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Anomalous face detection method, anomaly identification method, device, apparatus, and medium
CN108460765B (en) * 2018-04-09 2021-03-30 北京无线电计量测试研究所 Iris image quality detection method
CN113869198A (en) * 2021-09-27 2021-12-31 上海聚虹光电科技有限公司 Iris image processing method, iris image processing device, electronic equipment and computer readable medium
CN114332983A (en) * 2021-12-01 2022-04-12 杭州鸿泉物联网技术股份有限公司 Face image definition detection method, face image definition detection device, electronic equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Du Yilin: "New Developments and Applications of Intelligent Security", 31 July 2018 *
Ma Jialin: "Research on Security Informatization Upgrading in the Construction of Safe Campuses", 31 October 2019 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220826