CN113128304B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN113128304B
CN113128304B (granted publication of application CN201911426136.8A; earlier published as CN113128304A)
Authority
CN
China
Prior art keywords
region
image
key
face
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911426136.8A
Other languages
Chinese (zh)
Other versions
CN113128304A (en)
Inventor
程冰
尹义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911426136.8A
Publication of CN113128304A
Application granted
Publication of CN113128304B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

An embodiment of the application discloses an image processing method and an electronic device. The method may include the following steps: the electronic device obtains a first drawing of a person, the first drawing being an image formed of line contours and comprising M key regions, M being a positive integer; the electronic device determines, according to the I-th key region, the similarity between the I-th region of each image in an image library and the I-th key region, where the I-th region and the I-th key region correspond to the same face part and 1 ≤ I ≤ M; the electronic device determines the N images in the image library whose I-th regions have the highest similarity to the I-th key region, N being a positive integer; the electronic device fuses the I-th regions corresponding to the N images respectively to obtain an I-th target region, which corresponds to the same face part as the I-th key region; and the electronic device determines a first face image of the person according to the M target regions thus obtained.

Description

Image processing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
With the rapid development of computer technology, face recognition is receiving more and more attention. Face recognition is an effective identity-authentication technology in criminal investigation. Typically, a face drawing of a suspect is sketched by an artist according to a witness's description, and the police then search and match against a face database using that drawing to confirm the identity of the criminal suspect.
However, a face drawing sketched by an artist can hardly convey realistic image effects with rich tonal gradation, such as the texture information of a human face, so the accuracy of face recognition performed directly on the face drawing is low. To identify the criminal suspect more accurately, the face drawing can first be converted into a face image, on which face recognition is then performed. How to obtain a more detailed face image from a face drawing is the problem to be solved at present.
Disclosure of Invention
In view of the foregoing, the present application provides an image processing method and an electronic device that overcome, or at least partially solve, the above problems. With the method provided by the application, the electronic device converts a face drawing into a face image, so that the person depicted by the face image is described in more detail, which in turn improves the accuracy of face recognition.
In a first aspect, an embodiment of the present application provides an image processing method, which may include: the electronic device obtains a first drawing of a person, the first drawing being an image formed of line contours and comprising M key regions, M being a positive integer; the electronic device determines, according to the I-th key region, the similarity between the I-th region of each image in an image library and the I-th key region, where the I-th region and the I-th key region correspond to the same face part and 1 ≤ I ≤ M; the electronic device determines the N images in the image library whose I-th regions have the highest similarity to the I-th key region, N being a positive integer; the electronic device fuses the I-th regions corresponding to the N images respectively to obtain an I-th target region, which corresponds to the same face part as the I-th key region; and the electronic device determines a first face image of the person according to the M target regions thus obtained.
With the method of the first aspect, the electronic device divides the first drawing of the person into M key regions. For the I-th key region among the M key regions, the I-th regions of the N corresponding images are weighted and fused to obtain the I-th target region, and the M target regions are stitched to obtain the final first face image. Target regions for multiple key regions among the M key regions can be obtained in parallel, which improves efficiency. Because the I-th target region is obtained by weighted fusion of N I-th regions, it carries more image features, so the generated first face image depicts the person more accurately and improves the accuracy of face recognition.
In some possible implementations, the I-th regions corresponding to the N images respectively include: the I-th region of the 1st image, the I-th region of the 2nd image, …, the I-th region of the N-th image. The electronic device fuses the I-th regions corresponding to the N images respectively to obtain the I-th target region, which specifically includes the following steps: the electronic device determines the feature vectors of the I-th regions corresponding to the N images respectively; the electronic device obtains the feature vector of the I-th target region using the following formula: J = A_1·I_1 + A_2·I_2 + … + A_N·I_N, where J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, and A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image; the set of weights corresponding to the I-th key region includes weight A_1, weight A_2, …, weight A_N, with A_1 + A_2 + … + A_N = 1; the higher the similarity between the I-th region of the i-th image and the I-th key region, the larger the weight A_i of the feature vector I_i of the I-th region of the i-th image, where 1 ≤ i ≤ N; and the electronic device obtains the I-th target region from the feature vector J of the I-th target region.
In some possible implementations, before the electronic device determines the feature vectors of the I-th regions corresponding to the N images respectively, the method includes: the electronic device inputs the similarities between the I-th key region and the I-th regions in the N images into a pre-trained machine learning model to obtain the weights of the feature vectors of the N I-th regions, i.e., the set of weights corresponding to the I-th key region; the machine learning model is trained on face drawings and face images of a plurality of people.
In some possible implementations, the electronic device obtains the first drawing of the person specifically as follows: the electronic device acquires a second face image of the person, the second face image being captured by a monitoring device; determines the image features in the second face image that are not included in a second drawing, the second drawing being drawn by a second user according to a third user's description of the person; and adds the image features in the second face image that are not included in the second drawing to the second drawing, thereby obtaining the first drawing.
In a second aspect, an embodiment of the present application provides an electronic device for image processing, including: the first acquisition unit is used for acquiring a first drawing, wherein the first drawing is an image formed by line outlines, the first drawing comprises M key areas, and M is a positive integer; the first determining unit is used for determining the similarity between the I-th region and the I-th key region of each image in the image library according to the I-th key region, wherein the I-th region and the I-th key region correspond to the same face part, and I is more than or equal to 1 and less than or equal to M; the second determining unit is used for determining N images with highest similarity between the I-th area and the I-th key area in the image library, wherein N is a positive integer; the fusion unit is used for fusing the I-th areas corresponding to the N images respectively to obtain an I-th target area, wherein the I-th target area and the I-th key area correspond to the same face part; and a third determining unit for determining a first face image of the person according to the acquired M target areas.
In some possible implementations, the I-th regions corresponding to the N images respectively include: the I-th region of the 1st image, the I-th region of the 2nd image, …, the I-th region of the N-th image. Fusing the I-th regions corresponding to the N images respectively to obtain the I-th target region specifically involves: a fourth determining unit, configured to determine the feature vectors of the I-th regions corresponding to the N images respectively; a first processing unit, configured to obtain the feature vector of the I-th target region using the formula J = A_1·I_1 + A_2·I_2 + … + A_N·I_N, where J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, and A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image; the set of weights corresponding to the I-th key region includes weight A_1, weight A_2, …, weight A_N, with A_1 + A_2 + … + A_N = 1; the higher the similarity between the I-th region of the i-th image and the I-th key region, the larger the weight A_i of the feature vector I_i, where 1 ≤ i ≤ N; and a second processing unit, configured to obtain the I-th target region from the feature vector J of the I-th target region.
In some possible implementations, the electronic device further includes: a third processing unit, configured to input, before the fourth determining unit determines the feature vectors of the I-th regions corresponding to the N images respectively, the similarities between the I-th key region and the I-th regions in the N images into a pre-trained machine learning model to obtain the weights of the feature vectors of the N I-th regions, i.e., the set of weights corresponding to the I-th key region; the machine learning model is trained on face drawings and face images of a plurality of people.
In some possible implementations, obtaining the first drawing of the person specifically involves: a second acquisition unit, configured to acquire a second face image of the person, the second face image being captured by a monitoring device; a fifth determining unit, configured to determine the image features in the second face image that are not included in a second drawing, the second drawing being drawn by a second user according to a third user's description of the person; and an adding unit, configured to add the image features in the second face image that are not included in the second drawing to the second drawing, thereby obtaining the first drawing.
In a third aspect, embodiments of the present application provide an electronic device for image processing, including a memory, one or more processors; the memory is coupled to one or more processors, the memory being for storing computer program code, the computer program code comprising computer instructions, the one or more processors invoking the computer instructions to cause the electronic device to perform the method of the first aspect, which is not described in detail herein.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium, where a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
By implementing the technical solution provided by the embodiments of the application, the electronic device converts a face drawing into a face image, so that the person depicted by the face image is described in more detail, which in turn improves the accuracy of face recognition.
Drawings
Fig. 1A is a schematic diagram of a face drawing according to an embodiment of the present application;
fig. 1B is a schematic diagram of a face image according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of image processing according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a plurality of key regions in a first plot provided in an embodiment of the present application;
fig. 4 is a schematic flow chart of fusing the ith area to obtain the ith target area according to the embodiment of the present application;
FIG. 5 is a schematic image of an I-th region fused to an I-th target region according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device for image processing according to an embodiment of the present application;
fig. 7 is a schematic diagram of an entity structure of an electronic device for image processing according to an embodiment of the present application.
Detailed Description
The terms first, second, third and the like in the description and in the claims of the application and in the drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
As used in this application, the terms "server," "unit," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a server may be, but is not limited to being, a processor, a data processing platform, a computing device, a computer, two or more computers, or the like.
The image processing method and the electronic device can be applied in various fields, for example in criminal investigation, where a face drawing sketched by an artist is converted into a face image for identity authentication. The applications are not limited here.
The differences between the face drawings and the face images are briefly described below.
In the embodiments of the application, a face drawing is an image formed of line contours, that is, an image described by straight lines and curves. Such an image stores only line and block information and therefore cannot represent effects with rich tonal gradation, such as shadows. Fig. 1A shows a schematic diagram of a face drawing: image block 101 and image block 102 constitute image 103.
A face image, by contrast, is an image composed of a matrix of pixels. Each pixel is colored independently, so with enough pixels of different colors a rich, colorful image can be produced that realistically represents a natural scene. Fig. 1B shows a schematic diagram of a face image: the plurality of pixels 111 constitute the image 112.
Both face drawings and face images are images. An image corresponds to a feature vector that describes the shape features of the image.
In the embodiments of the application, the electronic device may be a portable electronic device such as a smartphone, a tablet computer, a notebook computer, or a wearable device (such as a smart watch); a non-portable electronic device such as a desktop computer with a touch-sensitive surface or touch panel; or a server. The electronic device may run iOS, Android, Microsoft, or another operating system, which is not limited in the embodiments of the application. The electronic device also has enough storage space to store data.
The embodiments of the application provide an image processing method and an electronic device. In the method, the electronic device obtains a first drawing of a person and, according to each key region in the first drawing, determines the N images in an image library whose I-th regions have the highest similarity to the I-th key region. The electronic device then fuses the N I-th regions corresponding to the N images to obtain the I-th target region, and finally determines a first face image of the person from the M target regions. The images in the image library may also be referred to as face images. Details are given in the embodiments below.
With the method provided by the application, the electronic device divides the first drawing of the person into M key regions. For the I-th key region among the M key regions, the I-th regions of the N corresponding images are weighted and fused to obtain the I-th target region, and the M target regions are stitched to obtain the final first face image. Target regions for multiple key regions among the M key regions can be obtained in parallel, which improves efficiency. Because the I-th target region is obtained by weighted fusion of N I-th regions, it carries more image features, so the generated first face image depicts the person more accurately and improves the accuracy of face recognition.
Embodiments of the present application will be described in detail below.
Fig. 2 is a flowchart of a method for image processing according to an embodiment of the present application. As shown in fig. 2, the image processing method may include:
step S201: the electronic device obtains a first drawing of the persona.
The electronic device may acquire the first drawing of the person in one of the following ways:
1. The first drawing of the person is a face drawing of the criminal suspect sketched by an artist according to a witness's description of the suspect, the face drawing being an image composed of line contours.
2. The first drawing of the person is determined by combining the artist's face drawing of the criminal suspect with a face image of the suspect captured by a monitoring device.
The electronic device in the embodiments of the application can be connected to a plurality of monitoring devices, each of which captures video images. Typically, monitoring devices are installed in public places such as gas stations, airports, stations, supermarkets, intersections, hospitals, and entertainment venues. After capturing a video image, a monitoring device may save it to the electronic device.
Further, the video images collected by the monitoring device are usually dynamic face images, so the electronic device in the embodiments of the application can screen them. The face images retained by the electronic device include, but are not limited to, those with the following characteristics: the distance between the two eyes of the face is not less than 30 pixels, the pitch angle of the face is not more than 20 degrees, and the horizontal rotation angle of the face is not more than 30 degrees.
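As an illustration of this screening step, the minimal sketch below (plain Python) filters detected faces by the three criteria just listed; the container FaceCandidate and its field names are hypothetical, since the patent does not name any data structure:

```python
from dataclasses import dataclass

@dataclass
class FaceCandidate:          # hypothetical container for one detected face
    eye_distance_px: float    # distance between the two eyes, in pixels
    pitch_deg: float          # up/down head rotation (pitch angle)
    yaw_deg: float            # left/right head rotation (horizontal angle)

def passes_screening(face: FaceCandidate) -> bool:
    """Keep a face only if it meets the quality criteria listed above."""
    return (face.eye_distance_px >= 30
            and abs(face.pitch_deg) <= 20
            and abs(face.yaw_deg) <= 30)

candidates = [FaceCandidate(42.0, 5.0, -12.0), FaceCandidate(18.0, 3.0, 4.0)]
usable = [f for f in candidates if passes_screening(f)]
print(len(usable))  # 1: the second face fails the 30-pixel eye-distance test
```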
The electronic device determines the first drawing of the person by combining the artist's face drawing of the criminal suspect with the face image of the suspect captured by the monitoring device in one of the following ways:
1. The electronic device compares the image features of the face image captured by the monitoring device with the image features of the face drawing, and then adds the image features contained in the captured face image but not in the face drawing to the face drawing, thereby obtaining the first drawing of the person. The face drawing is the face image of the criminal suspect sketched by an artist according to the witness's description.
2. The electronic device compares the image features of the face image captured by the monitoring device with the image features of the face drawing, and then adds the image features contained in the face drawing but not in the captured face image to the captured face image, thereby obtaining the first drawing of the person. The face drawing is, again, the face image of the criminal suspect sketched by an artist according to the witness's description.
Step S202: the electronic device determines, according to the I-th key region, the similarity between the I-th region of each image in the image library and the I-th key region.
The first drawing comprises M key regions, where each key region covers an important part of the person's face, 1 ≤ I ≤ M, and M and I are positive integers.
As shown in fig. 3, the M key regions in the first drawing include: key region 1, key region 2, key region 3, key region 4, key region 5, and key region 6. Key region 1 corresponds to the right earlobe, key region 2 to the left eyebrow peak, key region 3 to the right eye corner, key region 4 to the left pupil, key region 5 to the left mouth corner, and key region 6 to the chin. It should be understood that these M key regions are for illustration only and should not be construed as limiting. The more key regions the electronic device determines in the first drawing, i.e., the larger M is, the more accurately the first drawing describes the person, and hence the clearer the person depicted by the resulting face image.
The electronic device may determine the M key regions in the first drawing in the following ways:
1. The electronic device determines the M key regions in the first drawing in response to user operations of a first user. The user operations may include clicking, pressing, or swiping on an important area, and are not limited here.
2. The electronic device determines the M key regions in the first drawing from the positional relationships of human face parts. The electronic device stores the positional relationships of the parts of a human face, for example: the nose is 3 cm above the mouth, and the chin is 4 cm below the mouth. As shown in fig. 3, after the electronic device responds to a user operation that determines the key region containing the left pupil, it can determine the key region containing the left mouth corner from the relative position of the left pupil and the left mouth corner in the stored facial positional relationships. That is, the remaining key regions in the first drawing may be derived from one or more key regions annotated by user operation (see the sketch following this list).
3. The electronic device determines the M key regions in the first drawing from a face-part recognition result. For example, the electronic device determines the face-part recognition result in the first drawing using an active shape model (ASM) algorithm, a cascaded pose regression (CPR) algorithm, or the like, and then determines the M key regions from that result; this is not limited here.
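As a minimal sketch of way 2 above (plain Python), one annotated key region can be used to derive another from a stored facial positional relationship; the pixel offset used here is an illustrative assumption, since the patent states positional relationships only in centimetres:

```python
def derive_key_region(anchor_center, offset, size):
    """Given the centre (x, y) of an annotated key region (e.g. the left pupil),
    a stored positional offset (dx, dy) to another face part, and the desired
    square region size, return the derived key region as (left, top, right, bottom)."""
    cx, cy = anchor_center
    dx, dy = offset
    half = size / 2.0
    nx, ny = cx + dx, cy + dy
    return (nx - half, ny - half, nx + half, ny + half)

# Assume the left mouth corner sits roughly 55 px below the left pupil in this drawing.
pupil_region_center = (120, 140)
mouth_corner_box = derive_key_region(pupil_region_center, (0, 55), 24)
print(mouth_corner_box)  # (108.0, 183.0, 132.0, 207.0)
```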
Further, the shapes of the M key areas determined by the electronic device in the first drawing may be square, rectangular, or circular, which is not limited.
Specifically, the image library includes P face images. In some possible embodiments, the image library comprises a plurality of sub-image libraries, for example: a Chinese face image library, an American face image library, and a Vietnamese face image library. When the electronic device converts the face drawing of a Chinese person into a face image, a general Chinese face image library can be used, such as the Beijing University of Technology face database (BJUT Face Database).
The electronic device determines, according to the I-th key region, the similarity between the I-th region of each image in the image library and the I-th key region in one of the following ways: 1. the electronic device may determine the similarity from the feature vectors of the I-th region of each face image in the image library and of the I-th key region; 2. the electronic device may determine the similarity from the information sequences of the I-th region of each face image in the image library and of the I-th key region. The I-th region and the I-th key region correspond to the same face part.
In the first way, the electronic device determines the similarity from the feature vectors of the I-th region of each face image in the image library and of the I-th key region: the smaller the difference between the feature vector of the I-th region of a face image and the feature vector of the I-th key region, the higher the similarity between that I-th region and the I-th key region. A feature vector indicates the shape features of an image.
In some possible embodiments, the feature vector may be a k-dimensional vector. The difference between feature vectors can be measured by the euclidean distance between feature vectors, the smaller the euclidean distance, the smaller the difference between feature vectors, and the higher the similarity between the I-th region and the I-th key region of the face image. In other possible embodiments, the difference between feature vectors may also be measured by a cosine distance between feature vectors, where the smaller the cosine distance, the smaller the difference between feature vectors, and the higher the similarity between the I-th region and the I-th key region of the face image.
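A minimal sketch of both distance measures (plain Python, assuming the feature vectors are plain lists of floats, which the patent does not specify):

```python
import math

def euclidean_distance(u, v):
    """Smaller distance => smaller feature difference => higher similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    """1 - cosine of the angle between u and v; also smaller when more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

key_region = [4.0, 6.0]      # feature vector of the I-th key region
library_region = [3.0, 5.0]  # feature vector of the I-th region of a library image
print(euclidean_distance(key_region, library_region))  # about 1.414
print(cosine_distance(key_region, library_region))     # about 0.0011
```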
In the second way, the electronic device determines the similarity from the information sequences of the I-th region of each face image in the image library and of the I-th key region: the more positions in which the information sequence of the I-th region of a face image matches the information sequence of the I-th key region, the higher the similarity between that I-th region and the I-th key region.
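A sketch of the information-sequence comparison (assuming the sequences are equal-length bit strings, which the patent does not specify):

```python
def matching_bits(seq_a: str, seq_b: str) -> int:
    """Count the positions at which two equal-length information sequences agree;
    a higher count means a higher similarity between the two regions."""
    assert len(seq_a) == len(seq_b)
    return sum(a == b for a, b in zip(seq_a, seq_b))

print(matching_bits("101101", "100101"))  # 5 of 6 positions match
```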
Step S203: the electronic equipment determines N images with highest similarity between the I-th area and the I-th key area in the image library, wherein N is a positive integer.
For example, assuming the image library contains 1000 face images, the electronic device determines, among those 1000 face images, the 3 whose I-th regions have the highest similarity to the I-th key region. This is merely an example and should not be construed as limiting.
Step S204: the electronic equipment fuses the I-th areas corresponding to the N images respectively to obtain the I-th target area, wherein the I-th target area and the I-th key area correspond to the same face part.
In some possible embodiments, the I-th regions of the N face images have the same size as the I-th key region.
Step S204 may include the following steps.
Referring to fig. 4, which is a flow chart of fusing the I-th regions to obtain the I-th target region according to an embodiment of the present application, the fusion may include:
step S2041: the electronic device determines feature vectors of the I-th areas corresponding to the N images respectively, wherein the feature vectors I of the N I-th areas comprise: feature vector I 1 Feature vector I N
Further, the I-th regions of the N face images have different influences on the I-th target region. Specifically, the higher the similarity between the I-th region of a face image and the I-th key region, the greater the influence of that I-th region on the target region.
The set of weights corresponding to the I-th key region and the set of weights corresponding to the K-th key region among the M key regions may be the same or different, where 1 ≤ I ≤ M, 1 ≤ K ≤ M, and I ≠ K.
For the M key regions in the same drawing, in some possible embodiments the set of weights corresponding to the I-th key region is the same as the set corresponding to the K-th key region. For example, the weights of the feature vectors of the I-th regions in the N face images, determined by the electronic device from the similarities of the I-th key region against the image library, are A_1, A_2, A_3, …, A_N; the weights of the feature vectors of the K-th regions in the N face images, determined from the similarities of the K-th key region, are also A_1, A_2, A_3, …, A_N, where A_1 + A_2 + … + A_N = 1.
In other possible embodiments, the set of weights corresponding to the I-th key region differs from the set corresponding to the K-th key region. For example, the weights of the feature vectors of the I-th regions in the N face images, determined from the similarities of the I-th key region, are A_1, A_2, A_3, …, A_N, while the weights of the feature vectors of the K-th regions, determined from the similarities of the K-th key region, are B_1, B_2, B_3, …, B_N, where A_1 + A_2 + … + A_N = 1 and B_1 + B_2 + … + B_N = 1.
In the following embodiments, the I-th key region is taken as an example to describe how the electronic device fuses the N I-th regions of the N face images with the highest similarity to the I-th key region to obtain the target region. The I-th key region corresponds to N I-th regions, namely: the I-th region of the 1st image, the I-th region of the 2nd image, …, the I-th region of the N-th image.
In some possible implementations, the I-th key region corresponds to a set of weights. Each weight A_i is the weight of the feature vector I_i of one I-th region corresponding to the I-th key region, and measures the influence of that I-th region on the I-th target region: the larger the weight A_i of the feature vector I_i, the greater the influence of the corresponding I-th region on the I-th target region. The manner in which the set of weights is determined is described below.
Specifically, the I-th key region corresponds to N I-th regions. The weight of the feature vector I_i of the I-th region of the i-th image is A_i, where 1 ≤ i ≤ N and A_1 + A_2 + … + A_N = 1. For a clearer description, refer to Table 1, which shows the weights of the feature vectors of the I-th regions of the 3 face images with the highest similarity to a key region.
Table 1: weights of the feature vectors of the I-th regions corresponding to the I-th key region
Step S2042: the electronic device obtains the I-th target region by fusing the feature vectors of the N I-th regions.
The electronic device determines the feature vector of the I-th target region from the feature vectors of the N I-th regions, and then determines the I-th target region from that feature vector. In some possible implementations, the electronic device may calculate the feature vector of the I-th target region by the following formula:
J = A_1·I_1 + A_2·I_2 + … + A_N·I_N
where J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image, and A_1 + A_2 + … + A_N = 1.
As an example, suppose the feature vectors of the 3 I-th regions corresponding to the I-th key region are two-dimensional: I_1 = [4, 6], I_2 = [2, 2], I_3 = [3, 1], with weights 0.5, 0.3, and 0.2 respectively. Then J = 0.5·I_1 + 0.3·I_2 + 0.2·I_3 = [3.2, 3.8].
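The worked example above can be reproduced with a short sketch (plain Python, treating feature vectors as lists of floats):

```python
def fuse_regions(vectors, weights):
    """Weighted fusion J = A_1*I_1 + ... + A_N*I_N of N feature vectors.
    The weights must sum to 1, matching A_1 + A_2 + ... + A_N = 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    dim = len(vectors[0])
    return [sum(w * vec[d] for w, vec in zip(weights, vectors)) for d in range(dim)]

I1, I2, I3 = [4.0, 6.0], [2.0, 2.0], [3.0, 1.0]
J = fuse_regions([I1, I2, I3], [0.5, 0.3, 0.2])
print(J)  # [3.2, 3.8], matching the example above (up to floating-point rounding)
```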
Fig. 5 is a schematic diagram of the I-th target region obtained by fusing the I-th regions of the 3 face images corresponding to the I-th key region. Referring to fig. 5, feature vector I_1 describes the I-th region 511 in face image 501, feature vector I_2 describes the I-th region 512 in face image 502, and feature vector I_3 describes the I-th region 513 in face image 503. The electronic device fuses the I-th region 511 in face image 501, the I-th region 512 in face image 502, and the I-th region 513 in face image 503 to obtain the I-th target region 504.
The weights of the feature vectors of the I-th regions of the N face images corresponding to the I-th key region may be obtained in several ways:
1. The weights may be set by the first user.
2. The weights may be determined by a machine learning model.
The machine learning model maps the similarities between the I-th key region and the I-th regions of the N images to the set of weights. The model is determined from face drawings and face images of a plurality of people.
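The patent does not disclose the model's architecture. Purely as an illustrative stand-in, a softmax over the similarity scores produces weights with the two required properties, namely monotone in similarity and summing to 1:

```python
import math

def similarities_to_weights(similarities, temperature=1.0):
    """Illustrative stand-in for the trained model: map similarity scores to
    weights that increase with similarity and satisfy A_1 + ... + A_N = 1."""
    exps = [math.exp(s / temperature) for s in similarities]
    total = sum(exps)
    return [e / total for e in exps]

weights = similarities_to_weights([0.9, 0.6, 0.4])
print(weights)       # roughly [0.43, 0.32, 0.26]
print(sum(weights))  # 1.0 (up to rounding)
```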
Step S205: the electronic device determines a first face image of the person according to the acquired M target regions.
The first drawing includes M key regions, each of which defines a target region, so there are M target regions. The electronic device stitches the M target regions together to obtain the first face image of the person.
In some possible embodiments, stitching the M target regions may yield an incomplete first face image of the person, that is, a first face image that does not cover all parts of the face. The electronic device then extracts the skin-color parameter that accounts for the largest proportion of the first face image and colors the missing face parts according to that parameter to obtain a complete first face image.
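A sketch of this coloring step (plain Python): taking the most frequent pixel colour as the dominant skin-colour parameter is an assumption, since the patent does not define how the parameter is extracted:

```python
from collections import Counter

def dominant_skin_color(pixels):
    """Return the most frequent (R, G, B) colour in the stitched face image;
    used here as the skin-colour parameter with the largest proportion."""
    return Counter(pixels).most_common(1)[0][0]

def fill_missing(pixels, missing_mask, fill_color):
    """Colour every missing pixel (mask value True) with the skin colour."""
    return [fill_color if missing else p
            for p, missing in zip(pixels, missing_mask)]

stitched = [(224, 172, 105)] * 80 + [(40, 40, 40)] * 20   # mostly skin tones
skin = dominant_skin_color(stitched)                       # (224, 172, 105)
completed = fill_missing(stitched, [False] * 90 + [True] * 10, skin)
```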
Through the scheme of the embodiments of the application, the electronic device divides the first drawing of the person into M key regions. For the I-th key region among the M key regions, the I-th regions of the N corresponding images are weighted and fused to obtain the I-th target region, and the M target regions are stitched to obtain the final first face image. Target regions for multiple key regions among the M key regions can be obtained in parallel, which improves efficiency. Because the I-th target region is obtained by weighted fusion of N I-th regions, it carries more image features, so the generated first face image depicts the person more accurately and improves the accuracy of face recognition.
The method of the embodiments of the application has been detailed above. The electronic device for image processing provided below according to the embodiments of the application may be a service device that rapidly acquires, processes, and analyzes interaction data and extracts valuable data from it for third parties to use. Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device for image processing according to an embodiment of the present application. It may include a first acquisition unit 601, a first determination unit 602, a second determination unit 603, a fusion unit 604, and a third determination unit 605. It may also include a fourth determination unit 606, a first processing unit 607, a second processing unit 608, a third processing unit 609, a second acquisition unit 610, a fifth determination unit 611, and an addition unit 612.
The first obtaining unit 601 is configured to obtain a first plot, where the first plot is an image formed by line contours, and the first plot includes M key regions, where M is a positive integer.
The first determining unit 602 is configured to determine, according to the I-th key region, a similarity between the I-th region and the I-th key region of each image in the image library, where the I-th region and the I-th key region correspond to the same face part, and 1.ltoreq.i.ltoreq.m.
The second determining unit 603 is configured to determine N images with highest similarity between the ith area and the ith key area in the image library, where N is a positive integer.
And a fusion unit 604, configured to fuse the I-th regions corresponding to the N images respectively to obtain an I-th target region, where the I-th target region and the I-th key region correspond to the same face part.
A third determining unit 605, configured to determine a first face image of the person according to the acquired M target regions.
A fourth determining unit 606, configured to determine feature vectors of the I-th regions corresponding to the N images respectively.
The first processing unit 607 is configured to obtain the feature vector of the I-th target region using the following formula: J = A_1·I_1 + A_2·I_2 + … + A_N·I_N, where J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, and A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image; the set of weights corresponding to the I-th key region includes weight A_1, weight A_2, …, weight A_N, with A_1 + A_2 + … + A_N = 1; the higher the similarity between the I-th region of the i-th image and the I-th key region, the larger the weight A_i of the feature vector I_i of the I-th region of the i-th image; 1 ≤ i ≤ N.
The second processing unit 608 is configured to obtain the I-th target area according to the feature vector J of the I-th target area.
A third processing unit 609, configured to input, before the fourth determining unit determines the feature vectors of the I-th regions corresponding to the N images respectively, the similarities between the I-th key region and the I-th regions in the N images into a pre-trained machine learning model to obtain the weights of the feature vectors of the N I-th regions, that is, the set of weights corresponding to the I-th key region; the machine learning model is trained on face drawings and face images of a plurality of people.
A second acquiring unit 610, configured to acquire a second face image of the person, where the second face image is captured by the monitoring device.
A fifth determining unit 611, configured to determine the image features in the second face image that are not included in a second drawing; the second drawing is drawn by a second user according to a third user's description of the person.
And an adding unit 612, configured to add the image features in the second face image that are not included in the second drawing to the second drawing, so as to obtain the first drawing.
Referring to fig. 7, fig. 7 shows a simplified physical structure of an electronic device for image processing according to an embodiment of the present application. For ease of understanding and illustration, the electronic device 70 may include one or more of the following parts: a memory 701 and one or more processors 702.
The memory 701 may include one or more memory units, each of which may include one or more memories. The memory 701 is coupled to the one or more processors, is used to store programs and various data, and supports high-speed, automatic access to those programs and data during operation of the electronic device 70. In this embodiment, the memory 701 may be used to store the face images in the image library and the related program code.
The processor 702 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor 702 is configured to invoke the data in the memory 701 to perform the method described above, which is not repeated here.
It should be noted that the specific implementation of each operation corresponds to the description in the method embodiments above and is not repeated here.
In this application, units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
In addition, each functional component in the embodiments of the present application may be integrated in one component, or each component may exist alone physically, or two or more components may be integrated in one component. The above-described integrated components may be implemented in hardware or in software functional units.
The integrated components, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the application, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents substituted without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It should be understood that, in the various embodiments of the application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments of the application. Although the application has been described herein in connection with various embodiments, other variations of the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application.

Claims (10)

1. A method of image processing, comprising:
the method comprises the steps that electronic equipment obtains a first drawing of a person, wherein the first drawing is an image formed by line outlines, the first drawing comprises M key areas, and M is a positive integer;
the electronic equipment determines the similarity between an I-th region and the I-th key region of each image in an image library according to the I-th key region, wherein the I-th region and the I-th key region correspond to the same face part, and I is more than or equal to 1 and less than or equal to M;
the electronic equipment determines N images with highest similarity between the I-th area and the I-th key area in the image library, wherein N is a positive integer;
the electronic equipment fuses the I-th areas corresponding to the N images respectively to obtain an I-th target area, wherein the I-th target area and the I-th key area correspond to the same face part;
and the electronic equipment determines a first face image of the person according to the acquired M target areas.
2. The method of claim 1, wherein the I-th region to which the N images respectively correspond comprises: an I-th region of the 1 st image, an I-th region of the 2 nd image, …, an I-th region of the N-th image;
the electronic device fuses the I-th regions corresponding to the N images respectively to obtain an I-th target region, and the method specifically comprises the following steps:
the electronic equipment determines the characteristic vectors of the I-th areas corresponding to the N images respectively;
the electronic device obtains the feature vector of the I-th target region using the following formula:
J = A_1·I_1 + A_2·I_2 + … + A_N·I_N
wherein J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, and A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image; the set of weights corresponding to the I-th key region includes: weight A_1, weight A_2, …, weight A_N; A_1 + A_2 + … + A_N = 1, and 1 ≤ i ≤ N;
And the electronic equipment obtains the I target area according to the characteristic vector J of the I target area.
3. The method of claim 2, wherein before the electronic device determines the feature vectors of the I-th regions corresponding to the N images respectively, the method comprises:
the electronic equipment inputs the similarity of the I-th key region and the I-th region in the N images into a pre-trained machine learning model to obtain weights of feature vectors of the N I-th regions, namely a group of weights corresponding to the I-th key region;
the machine learning model is obtained by face drawing and face image training of a plurality of people.
4. The method of claim 1, wherein the electronic device obtaining the first drawing of the person comprises:
the electronic equipment acquires a second face image of the person, wherein the second face image is shot by monitoring equipment;
determining the image features in the second face image that are not included in a second drawing; the second drawing is drawn by a second user according to a third user's description of the person;
and adding the image features in the second face image which are not included in the second drawing to obtain the first drawing.
5. An electronic device for image processing, comprising:
the first acquisition unit is used for acquiring a first drawing, wherein the first drawing is an image formed by line outlines, the first drawing comprises M key areas, and M is a positive integer;
the first determining unit is used for determining the similarity between an I-th region and an I-th key region of each image in the image library according to the I-th key region, wherein the I-th region and the I-th key region correspond to the same face part, and I is more than or equal to 1 and less than or equal to M;
the second determining unit is used for determining N images with highest similarity between the I-th area and the I-th key area in the image library, wherein N is a positive integer;
the fusion unit is used for fusing the I-th areas corresponding to the N images respectively to obtain an I-th target area, wherein the I-th target area and the I-th key area correspond to the same face part;
and a third determining unit, configured to determine a first face image of the person according to the acquired M target areas.
6. The electronic device of claim 5, wherein the I-th region to which the N images respectively correspond comprises: an I-th region of the 1 st image, an I-th region of the 2 nd image, …, an I-th region of the N-th image;
fusing the I-th regions corresponding to the N images respectively to obtain an I-th target region, wherein the method specifically comprises the following steps of:
a fourth determining unit, configured to determine feature vectors of the I-th regions corresponding to the N images respectively;
a first processing unit, configured to obtain the feature vector of the I-th target region using the following formula:
J = A_1·I_1 + A_2·I_2 + … + A_N·I_N
wherein J is the feature vector of the I-th target region, I_i is the feature vector of the I-th region of the i-th image, and A_i is the weight of the feature vector I_i corresponding to the I-th region of the i-th image; the set of weights corresponding to the I-th key region includes: weight A_1, weight A_2, …, weight A_N; A_1 + A_2 + … + A_N = 1; the higher the similarity between the I-th region of the i-th image and the I-th key region, the larger the weight A_i of the feature vector I_i of the I-th region of the i-th image; 1 ≤ i ≤ N;
and the second processing unit is used for obtaining the I target area according to the characteristic vector J of the I target area.
7. The electronic device of claim 6, further comprising, before the feature vectors of the I-th regions corresponding to the N images are determined:
the third processing unit is configured to input, before the fourth determining unit determines feature vectors of the I-th regions corresponding to the N images respectively, similarity between the I-th key region and an I-th region in the N images into a pre-trained machine learning model, so as to obtain weights of the feature vectors of the N I-th regions, that is, a set of weights corresponding to the I-th key region;
the machine learning model is obtained by face drawing and face image training of a plurality of people.
8. The electronic device of claim 5, wherein obtaining the first drawing of the person comprises:
a second acquiring unit, configured to acquire a second face image of the person, where the second face image is captured by a monitoring device;
a fifth determining unit, configured to determine the image features in the second face image that are not included in a second drawing; the second drawing is drawn by a second user according to a third user's description of the person;
and the adding unit is used for adding the image features in the second face image which are not included in the second drawing to obtain the first drawing.
9. An electronic device for image processing, the electronic device comprising a memory and one or more processors; the memory is coupled with the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any one of claims 1 to 7.
CN201911426136.8A 2019-12-31 2019-12-31 Image processing method and electronic equipment Active CN113128304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911426136.8A CN113128304B (en) 2019-12-31 2019-12-31 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911426136.8A CN113128304B (en) 2019-12-31 2019-12-31 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113128304A CN113128304A (en) 2021-07-16
CN113128304B true CN113128304B (en) 2024-01-05

Family

ID=76770916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911426136.8A Active CN113128304B (en) 2019-12-31 2019-12-31 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113128304B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
WO2018033156A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, device, and electronic apparatus
CN108550176A (en) * 2018-04-19 2018-09-18 咪咕动漫有限公司 Image processing method, equipment and storage medium
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN110363047A (en) * 2018-03-26 2019-10-22 普天信息技术有限公司 Method, apparatus, electronic equipment and the storage medium of recognition of face
WO2019228317A1 (en) * 2018-05-28 2019-12-05 华为技术有限公司 Face recognition method and device, and computer readable medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
WO2018033156A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, device, and electronic apparatus
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110363047A (en) * 2018-03-26 2019-10-22 普天信息技术有限公司 Method, apparatus, electronic equipment and the storage medium of recognition of face
CN108550176A (en) * 2018-04-19 2018-09-18 咪咕动漫有限公司 Image processing method, equipment and storage medium
WO2019228317A1 (en) * 2018-05-28 2019-12-05 华为技术有限公司 Face recognition method and device, and computer readable medium
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A face recognition method with weighted feature fusion; 孙劲光; 孟凡宇; CAAI Transactions on Intelligent Systems (No. 06); full text *
Face recognition based on the GLOH operator and local feature fusion; 郜晓晶; 潘新; 王亮; Computer Applications and Software (No. 05); full text *
Research on optimizing a face recognition algorithm based on volume-rendering thinking; 黄孝平; Modern Electronics Technique (No. 24); full text *
Improving face detection by fusing face contour and region information; 方昱春; 王蕴红; 谭铁牛; Chinese Journal of Computers (No. 04); full text *

Also Published As

Publication number Publication date
CN113128304A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
Xu et al. Virtual u: Defeating face liveness detection by building virtual models from your public photos
Xu et al. Human re-identification by matching compositional template with cluster sampling
Nai et al. Fast hand posture classification using depth features extracted from random line segments
Elaiwat et al. A curvelet-based approach for textured 3D face recognition
CN108369785A (en) Activity determination
Jia et al. 3D face anti-spoofing with factorized bilinear coding
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
WO2022188697A1 (en) Biological feature extraction method and apparatus, device, medium, and program product
CN109934187B (en) Random challenge response method based on face activity detection-eye sight
CN112101123A (en) Attention detection method and device
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
CN109635021A (en) A kind of data information input method, device and equipment based on human testing
Haji et al. Real time face recognition system (RTFRS)
EP2701096A2 (en) Image processing device and image processing method
Hore et al. A real time dactylology based feature extractrion for selective image encryption and artificial neural network
Roopa et al. Sensor based attendance system using feature detection and matching with augmented reality
CN111274602B (en) Image characteristic information replacement method, device, equipment and medium
CN111353325A (en) Key point detection model training method and device
Powell et al. A multibiometrics-based CAPTCHA for improved online security
US9501710B2 (en) Systems, methods, and media for identifying object characteristics based on fixation points
CN113128304B (en) Image processing method and electronic equipment
CN116205723A (en) Artificial intelligence-based face tag risk detection method and related equipment
CN112991555B (en) Data display method, device, equipment and storage medium
CN114299569A (en) Safe face authentication method based on eyeball motion
Kocejko et al. Gaze pattern lock for elders and disabled

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant