CN111222448A - Image conversion method and related product - Google Patents
- Publication number
- CN111222448A (application CN201911426180.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- sketch
- distance
- template
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Abstract
The embodiment of the application discloses an image conversion method and a related product. The method includes: obtaining the distance between any two preset feature points in a sketch image to obtain N first distances; obtaining the distance between any two preset feature points on each face image template to obtain N second distances; comparing the N first distances one by one with the N second distances of each face image template to obtain N distance errors corresponding to each face image template; and determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template. The method and the device help improve the conversion efficiency of the sketch image.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image conversion method and a related product.
Background
Portrait synthesis technology has attracted attention in recent years. In the judicial field, for example, searching a police image database for a criminal suspect using a sketch portrait is an important application. The search matches the drawn sketch image against each face template image in the police image database to find the best-matching target face template image; the identity associated with that target face template image is then taken as the suspect's identity. Specifically, during image matching, the sketch image is divided into blocks, each block is matched separately, and the face template image with the highest overall matching degree is selected. However, this block-matching approach has high algorithmic complexity, which results in low image conversion efficiency.
Disclosure of Invention
The embodiment of the application provides an image conversion method and a related product, in which a target face image template for a sketch image is quickly matched by comparing distances between feature points, thereby improving image conversion efficiency.
In a first aspect, an embodiment of the present application provides an image conversion method, including:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template;
and determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
In a second aspect, an embodiment of the present application provides an image conversion apparatus, including:
the first acquisition unit is used for acquiring the distance between any two preset feature points in the sketch image to obtain N first distances;
the second acquisition unit is used for acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
the comparison unit is used for comparing the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template;
and the determining unit is used for determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
In a third aspect, embodiments of the present application provide an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the steps in the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, where the computer program makes a computer execute the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the first aspect.
The embodiment of the application has the following beneficial effects:
it can be seen that, in the embodiments of the application, a first distance between any two preset feature points in a sketch image is calculated and compared, one by one, with the second distance between the same two preset feature points on each face image template, yielding a distance error for each face image template. A target face template is then determined from these distance errors, so the sketch image is matched as a whole and its conversion efficiency is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image conversion method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another image conversion method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another image conversion method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image conversion apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating functional units of an image conversion apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The image conversion device in the present application may include a smartphone (such as an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID), or a wearable device. The devices listed above are examples rather than an exhaustive list; in practical applications, the image conversion apparatus may also include an intelligent vehicle-mounted terminal, computer equipment, and the like.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image conversion method according to an embodiment of the present application, where the method is applied to an image conversion device. The method of the present embodiment includes, but is not limited to, the following steps:
101: the image conversion device obtains the distance between any two preset feature points in the sketch image to obtain N first distances.
The two arbitrary preset feature points are two feature points on the face.
Optionally, a plurality of feature points are predefined on the face. A human face is typically divided into 68 feature points, for example the pupil centers, nostrils, mouth corners, and dimples.
102: and the image conversion device acquires the distance between any two preset feature points on each face image template to obtain N second distances.
The distance between any two preset feature points on each face image template may be calculated in advance and read out directly, or calculated in real time; this is not limited in the present application.
103: and the image conversion device compares the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template.
Specifically, the image conversion device compares each first distance with its corresponding second distance to obtain the distance error for that first distance. Here, "corresponding" means that a first distance and a second distance are measured between preset feature points at the same positions on the sketch image and on the face template image, respectively. For example, if a first distance is the distance between the two pupils in the sketch image, the corresponding second distance is the distance between the two pupils in the face image template.
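The one-by-one comparison can be sketched in a few lines. This is an illustrative reading of the step, not the patent's implementation; the function name and sample distances are hypothetical:

```python
def distance_errors(first_distances, second_distances):
    """Pair each first distance (sketch) with the second distance measured
    between the same two preset feature points on one face image template,
    and return the N absolute errors."""
    if len(first_distances) != len(second_distances):
        raise ValueError("expected the same N distances on both sides")
    return [abs(d1 - d2) for d1, d2 in zip(first_distances, second_distances)]

# Hypothetical values: e.g. pupil-to-pupil, nose-to-mouth, eye-to-nose
# distances measured on the sketch and on one stored template.
sketch_distances = [62.0, 33.5, 41.0]
template_distances = [60.0, 35.0, 41.5]
errors = distance_errors(sketch_distances, template_distances)  # [2.0, 1.5, 0.5]
```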
104: and the image conversion device determines a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
Optionally, the N distance errors corresponding to each face image template are weighted to obtain a final distance error corresponding to each face image template, and then the face image template with the minimum final distance error is used as the target face image template matched with the sketch image.
The weight coefficient corresponding to each distance error is inversely proportional to the distance between the two preset feature points corresponding to that distance error: the farther apart two feature points are, the greater the probability of error, so a smaller weight coefficient is set to reduce the influence of such errors.
Specifically, the weight coefficient is a_i = α × (1/d_i), where a_i is the weight coefficient of the i-th of the N distance errors corresponding to each face image template, α is a preset parameter, and d_i is the distance between the two preset feature points corresponding to the i-th distance error. After the weight coefficient corresponding to each distance error is obtained, the N weight coefficients are normalized to obtain a target weight coefficient for each distance error.
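Putting the weighting and normalization together, the template selection might look like the following sketch. The inverse-distance weights a_i = α × (1/d_i) follow the formula above; everything else (names, sample data, the sum-to-one normalization) is an assumption for illustration:

```python
def target_template(errors_per_template, pair_distances, alpha=1.0):
    """Return the id of the face image template with the smallest weighted
    final distance error.

    errors_per_template: {template_id: [N distance errors]}
    pair_distances: d_i, the distance between the two preset feature points
    behind the i-th error; a_i = alpha * (1 / d_i), then normalized so the
    target weight coefficients sum to 1 (assumed normalization scheme).
    """
    raw = [alpha / d for d in pair_distances]   # a_i = alpha * (1 / d_i)
    total = sum(raw)
    weights = [a / total for a in raw]          # target weight coefficients

    def final_error(errors):
        return sum(w * e for w, e in zip(weights, errors))

    return min(errors_per_template,
               key=lambda t: final_error(errors_per_template[t]))

# Hypothetical data: the inverse-distance weighting favors template "B" here.
errors = {"A": [2.0, 1.5, 0.5], "B": [0.5, 1.0, 2.0]}
best = target_template(errors, pair_distances=[62.0, 33.5, 41.0])  # "B"
```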
It can be seen that, in the embodiment of the application, a first distance between any two preset feature points in a sketch image is calculated, and the first distance is compared with a second distance between the two preset feature points on each face image template one by one, so that a distance error corresponding to each face image template is obtained, a target face template is determined according to the distance error corresponding to each face image template, integral matching of the sketch image is achieved, and conversion efficiency of the sketch image is improved.
In a possible implementation manner, the obtaining of the distance between any two preset feature points in the sketch image and the obtaining of the N first distances may be:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
determining a three-dimensional coordinate of any preset feature point in the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points according to the three-dimensional coordinates of any two preset feature points in the sketch image to obtain N first distances, namely calculating the Euclidean distance between the two preset feature points, and taking the Euclidean distance as the distance between the two preset feature points.
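A minimal sketch of this Euclidean-distance step (the helper name is assumed; `math.dist` computes the Euclidean distance between two points):

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

def first_distances(feature_points_3d):
    """Given the three-dimensional coordinates of the preset feature points
    read off the sketch's 3D model, return the Euclidean distance between
    every pair of feature points: k points yield N = k*(k-1)/2 distances."""
    return [dist(p, q) for p, q in combinations(feature_points_3d, 2)]

# Hypothetical 3D coordinates for three preset feature points:
points = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (0.0, 0.0, 12.0)]
distances = first_distances(points)  # [5.0, 12.0, 13.0]
```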
Optionally, the sketch image includes M sub sketch images corresponding to M observation perspectives. The M observation visual angles are the sight angles of the user when observing the target person. For example, the M angles may include a side angle, a front angle, a top view angle, and the like. And the target person is a person corresponding to the target face template.
The following provides a process for constructing a three-dimensional image based on M sub-sketch images.
Grid division is performed on each sub-sketch image to obtain raster images corresponding to the preset feature points on each sub-sketch image. That is, each sub-sketch image is divided into a grid according to its pixel points, giving a raster image for each pixel point, and thus raster images for the preset feature points on each sub-sketch image. Each raster image may cover n pixel points, where n is a positive integer greater than or equal to 1.
Further, the raster image corresponding to the preset feature points on each sub-sketch image is decomposed and compressed to obtain surface light field data of the preset feature points on each sub-sketch image.
Specifically, the surface light field data of each preset feature point may be represented by a four-dimensional function D = F(u, v, x, y), where (u, v) are the pixel coordinates of the preset feature point on the sub-sketch image, and (x, y) is the view direction of the preset feature point on the sub-sketch image (i.e., the illumination direction of the light, equivalent to the shooting direction when the sub-sketch image is captured with a camera). The shooting direction may be represented by the observation angle corresponding to the sub-sketch image: the shooting direction is projected onto the xoy plane, the unit vector of that projection is obtained, and this unit vector is used as the view direction of the preset feature point on the sub-sketch image. F is a pre-constructed surface light field function.
Then, depth information of each preset feature point on the face is obtained from the surface light field data of the preset feature points on each sub-sketch image, and the three-dimensional image is constructed from the depth information of each preset feature point on the face.
Obtaining depth information from surface light field data is known in the art and is not described here.
When the sketch image includes M sub-sketch images, the target person is the same in every sub-sketch image and the preset feature points represented on the sub-sketch images are consistent. Therefore, the sketch image mentioned in the present application may be any one of the M sub-sketch images; that is, the distance between any two preset feature points on any one sub-sketch image may be calculated to obtain the N first distances.
In one possible embodiment, before performing grid division on each of the M sub-sketch images, the method further includes:
adjusting each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the positive direction of a human face;
acquiring a symmetry axis on a target sketch image of each sub sketch image, and dividing the target sketch image of each sub sketch image into a first area and a second area according to the symmetry axis;
completing the feature points of the first region and the second region to ensure that the positions and the number of preset feature points contained in the first region and the second region are the same;
and constructing a three-dimensional model from the completed M sub-sketch images to obtain the three-dimensional image. The construction process using the completed M sub-sketch images is the same as or similar to the three-dimensional image construction described above and is not repeated here.
Specifically, the preset feature points of the first area are adopted to complement the preset feature points in the second area; and then, completing the preset characteristic points of the first area by adopting the preset characteristic points in the second area, so that the positions of the preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
For example, if the first region includes the pupil center and the second region does not include the pupil center, the pupil center is complemented at a position symmetrical to the pupil center in the second region; if the second area contains a dimple but the first area does not, the dimple is completed at a position symmetrical to the dimple in the first area.
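The mirroring idea can be sketched in 2D as follows. This is an illustration under assumed names and coordinates — the patent does not give a concrete algorithm — with the symmetry axis taken as the vertical line x = axis_x:

```python
def complete_by_symmetry(region_a, region_b, axis_x):
    """Mirror feature points across the symmetry axis x = axis_x so that
    both half-face regions end up with the same, symmetric point set.
    Regions are {name: (x, y)} dicts; a 2D sketch of the idea."""
    def mirror(point):
        x, y = point
        return (2 * axis_x - x, y)

    completed_a, completed_b = dict(region_a), dict(region_b)
    for name, p in region_a.items():
        completed_b.setdefault(name, mirror(p))   # fill region B from A
    for name, p in region_b.items():
        completed_a.setdefault(name, mirror(p))   # fill region A from B
    return completed_a, completed_b

# Hypothetical: the left half has a pupil center the right half lacks,
# and the right half has a dimple the left half lacks.
left = {"pupil": (40.0, 50.0)}
right = {"dimple": (70.0, 80.0)}
a, b = complete_by_symmetry(left, right, axis_x=55.0)
# a gains a mirrored "dimple" at (40.0, 80.0); b gains "pupil" at (70.0, 50.0)
```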
In the embodiment, the characteristic points are supplemented, so that each sub-sketch image contains abundant characteristic points, the constructed three-dimensional image is more accurate, and the image matching precision is improved.
In one possible embodiment, the sketch image may be a sketch image of a front face.
The following provides a process for constructing a three-dimensional image based on a sketch image of a frontal face.
Acquiring a pixel matrix corresponding to the sketch image;
and inputting the pixel matrix into a pre-trained three-dimensional model to obtain the three-dimensional image.
Optionally, the three-dimensional model is obtained by training on sample data. Specifically, a plurality of face sample images are obtained, each corresponding to a frame of three-dimensional point cloud data (a three-dimensional image). The three-dimensional point cloud data corresponding to each face sample image is projected onto the xoy plane to obtain the RGB pixel matrix corresponding to that face sample image, which is also its two-dimensional image. Then, the RGB pixel matrix of each face sample image is used as training data and the corresponding three-dimensional point cloud data as supervision information, and the initial model is trained with this training data and supervision information to obtain the pre-trained three-dimensional model. The training uses a cross-entropy loss function and gradient descent; the training procedure itself is known in the art and is not described here.
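The data-preparation step — projecting each face sample's three-dimensional point cloud onto the xoy plane to obtain its RGB pixel matrix — might be sketched as follows. The point format, rounding to integer pixel coordinates, and nearest-point tie-breaking are all assumptions for illustration:

```python
def project_to_xoy(point_cloud):
    """Project each (x, y, z, r, g, b) point of a face's 3D point cloud onto
    the xoy plane, keeping an {(x, y): (r, g, b)} pixel map as the 2D RGB
    training input. When several points land on one pixel, the point
    closest to the camera (smallest z) wins — an assumed tie-break rule."""
    depth_and_color = {}
    for x, y, z, r, g, b in point_cloud:
        key = (round(x), round(y))
        if key not in depth_and_color or z < depth_and_color[key][0]:
            depth_and_color[key] = (z, (r, g, b))
    return {k: rgb for k, (z, rgb) in depth_and_color.items()}

# Hypothetical points: both project to pixel (0, 0); the nearer one (z=3.0) wins.
cloud = [(0.2, 0.1, 5.0, 255, 0, 0), (0.1, -0.2, 3.0, 0, 255, 0)]
image = project_to_xoy(cloud)  # {(0, 0): (0, 255, 0)}
```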
In one possible embodiment, the sketch image may also be an RGB image including a human face, and another process for constructing a three-dimensional image is provided below, including but not limited to the following steps:
acquiring a face area in a sketch image;
taking the face area in the sketch image as a first area and other areas as second areas, and respectively generating a first histogram corresponding to the first area and a second histogram corresponding to the second area;
obtaining a depth map of the sketch image according to the first histogram and the second histogram, namely taking the difference of corresponding bins of the first histogram and the second histogram to obtain the depth map;
Obtaining a target sketch image according to the depth map;
and fusing the target sketch image and the sketch image to obtain a three-dimensional image.
In the embodiment, the depth information of the sketch image can be obtained by simply constructing the histogram, so that the three-dimensional image of the sketch image can be rapidly obtained, and the image conversion efficiency is improved.
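A loose sketch of this histogram step, assuming 0-255 grayscale values and simple per-bin subtraction (the patent does not specify the exact difference operation, so treat this as one plausible reading):

```python
def grayscale_histogram(pixels, bins=256):
    """256-bin intensity histogram of a flat list of 0-255 grayscale values."""
    hist = [0] * bins
    for v in pixels:
        hist[v] += 1
    return hist

def depth_map_from_histograms(face_pixels, other_pixels):
    """Build one histogram for the face region (first area) and one for the
    rest (second area), then take per-bin differences as a crude depth cue."""
    h1 = grayscale_histogram(face_pixels)
    h2 = grayscale_histogram(other_pixels)
    return [a - b for a, b in zip(h1, h2)]

# Hypothetical intensity values for the two regions:
face = [10, 10, 200]
background = [10, 200, 200]
diff = depth_map_from_histograms(face, background)
# diff[10] == 1 and diff[200] == -1; all other bins are 0
```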
In one possible implementation, after obtaining the target face template corresponding to the sketch image, the method further includes:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or a combination of more of a pendant, a tattoo or a scar;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
Specifically, the face templates may be stored in advance, while the sketch image is drawn in real time from an observer's description, so the sketch image may include features that do not exist in the target face template. Therefore, after the target face template is obtained, the auxiliary features are added to it, and the target face template with and without the added auxiliary features is displayed synchronously, which improves the accuracy of person recognition.
Referring to fig. 2, fig. 2 is a schematic flowchart of another image conversion method according to an embodiment of the present disclosure, and the method is applied to an image conversion apparatus. The same contents in this embodiment as those in the embodiment shown in fig. 1 will not be repeated here. The method of the present embodiment includes, but is not limited to, the following steps:
201: the image conversion device adjusts each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction.
202: the image conversion device acquires a symmetry axis on a target sketch image of each sub sketch image and divides the target sketch image of each sub sketch image into a first area and a second area according to the symmetry axis.
203: and the image conversion device completes the characteristic points of the first area and the second area so as to ensure that the positions of the preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
204: and the image conversion device adopts the supplemented M sub-sketch images to construct a three-dimensional model to obtain a three-dimensional image.
205: the image conversion device obtains the three-dimensional coordinates of any one preset feature point on the three-dimensional image on the sketch image, and determines the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the three-dimensional image on the sketch image to obtain N first distances.
206: and the image conversion device acquires the distance between any two preset feature points on each face image template to obtain N second distances.
207: and the image conversion device compares the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template.
208: and the image conversion device determines a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
It can be seen that, in the embodiment of the application, the distance error corresponding to each face image template is obtained by calculating the first distance between any two preset feature points in the sketch image and comparing it, one by one, with the second distance between the same two preset feature points on each face image template, and the target face template is determined from these distance errors, so the sketch image is matched as a whole and its conversion efficiency is improved. Moreover, because the feature points are completed before the distances between them are calculated, the computed first distances are richer, the matched target face template is more accurate, and the conversion accuracy of the sketch image is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of another image conversion method according to an embodiment of the present disclosure, where the method is applied to an image conversion device. The same contents in this embodiment as those in the embodiment shown in fig. 1 and 2 will not be repeated here. The method of the present embodiment includes, but is not limited to, the following steps:
301: the image conversion device adjusts each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction.
302: the image conversion device acquires a symmetry axis on a target sketch image of each sub sketch image and divides the target sketch image of each sub sketch image into a first area and a second area according to the symmetry axis.
303: and the image conversion device completes the characteristic points of the first area and the second area so as to ensure that the positions of the preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
304: and the image conversion device adopts the supplemented M sub-sketch images to construct a three-dimensional model to obtain a three-dimensional image.
305: the image conversion device obtains the three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image, and determines the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
306: and the image conversion device acquires the distance between any two preset feature points on each face image template to obtain N second distances.
307: and the image conversion device compares the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template.
308: and the image conversion device determines a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
309: the image conversion device acquires auxiliary features on the sketch image.
The auxiliary features comprise one or more of a pendant, a tattoo, or a scar.
310: the image conversion device adds the auxiliary features to the target face template to obtain a new target face template, and displays the target face template and the new target face template synchronously.
It can be seen that, in the embodiment of the present application, the first distance between any two preset feature points in the sketch image is calculated and compared one by one with the second distance between the corresponding preset feature points on each face image template, yielding the distance errors for each face image template; the target face template is then determined from these distance errors. Matching the sketch image as a whole in this way improves the conversion efficiency of the sketch image. In addition, because the feature points are completed before the distances are calculated, the calculated first distances are richer and the matched target face template is more accurate, improving the conversion accuracy of the sketch image. Moreover, auxiliary features in the sketch image are acquired and added to the target face template, which further improves the conversion precision of the sketch image.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image conversion device according to an embodiment of the present disclosure. As shown in fig. 4, the image conversion apparatus 400 includes a processor, a memory, a communication interface, and one or more programs, and the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template;
and determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
In a possible embodiment, in obtaining the distance between any two preset feature points in the sketch image to obtain the N first distances, the program is specifically configured to execute the following steps:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring a three-dimensional coordinate of any preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
In a possible embodiment, the sketch image includes M sub-sketch images corresponding to M observation perspectives, and in terms of constructing a three-dimensional model of the sketch image to obtain a three-dimensional image, the program is specifically configured to execute the following instructions:
performing grid division on each of the M sub-sketch images to obtain a grid image corresponding to the preset feature points on each sub-sketch image;
determining surface light field data of the preset feature points on each sub-sketch image according to the observation perspective of each sub-sketch image and the grid image corresponding to the preset feature points on that sub-sketch image;
obtaining depth information of each preset feature point on the human face according to the surface light field data of the preset feature points on each sub-sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the human face.
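The text does not specify how surface light field data yield depth information, so the following is only a minimal stand-in: it recovers the three-dimensional position of one feature point from its two-dimensional positions in M views, assuming orthographic projection and views that differ by a known rotation about the vertical axis. The projection model and the least-squares solution are assumptions for illustration, not the method of the embodiment.

```python
import math

def triangulate_point(observations):
    """Recover (X, Y, Z) of one feature point from its 2-D position in
    M views.  Model (an assumption): orthographic projection, views
    rotated about the vertical axis by a known angle theta, so each view
    gives x = X*cos(theta) + Z*sin(theta) and y = Y.

    observations: list of (theta, x, y) with theta in radians."""
    # y is rotation-invariant under this model: average it.
    Y = sum(y for _, _, y in observations) / len(observations)
    # Solve the 2x2 normal equations of the least-squares problem
    # min over (X, Z) of sum (x_i - X*cos(t_i) - Z*sin(t_i))^2.
    a11 = sum(math.cos(t) ** 2 for t, _, _ in observations)
    a12 = sum(math.cos(t) * math.sin(t) for t, _, _ in observations)
    a22 = sum(math.sin(t) ** 2 for t, _, _ in observations)
    b1 = sum(x * math.cos(t) for t, x, _ in observations)
    b2 = sum(x * math.sin(t) for t, x, _ in observations)
    det = a11 * a22 - a12 * a12
    X = (b1 * a22 - b2 * a12) / det
    Z = (a11 * b2 - a12 * b1) / det
    return X, Y, Z

# A point at (1, 2, 3) seen from a frontal view (0 rad) and a profile
# view (pi/2 rad): in the frontal view x = X, in the profile view x = Z.
point = triangulate_point([(0.0, 1.0, 2.0), (math.pi / 2, 3.0, 2.0)])
```

With two or more distinct view angles the 2x2 system is well conditioned; the depth information of each preset feature point is then just its recovered Z coordinate under this model.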
In one possible embodiment, before performing grid division on each of the M sub-sketch images, the program further comprises instructions for:
adjusting each sub-sketch image according to the observation perspective of each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on the target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and completing the feature points of the first area and the second area, so that the positions of the preset feature points in the first area and the second area are symmetrical and their numbers are the same.
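The completion of feature points across the symmetry axis can be illustrated as follows. Representing the face as 2-D points and mirroring exactly across a vertical axis x = axis_x are simplifying assumptions made for this sketch.

```python
def complete_by_symmetry(points, axis_x, tol=1e-6):
    """Complete feature points across a vertical symmetry axis x = axis_x:
    any point in one half-face whose mirrored counterpart is missing in
    the other half is reflected across the axis and added, so that both
    regions end up with symmetric positions and equal counts."""
    def mirrored(p):
        # Reflect (x, y) across the vertical line x = axis_x.
        return (2 * axis_x - p[0], p[1])

    completed = list(points)
    for p in points:
        m = mirrored(p)
        present = any(abs(q[0] - m[0]) < tol and abs(q[1] - m[1]) < tol
                      for q in completed)
        if not present:
            completed.append(m)
    return completed

# Axis at x = 0; the point (-2, 1) has no counterpart on the right half,
# so its mirror (2, 1) is added; (1, 0) and (-1, 0) already match.
pts = complete_by_symmetry([(-2, 1), (1, 0), (-1, 0)], axis_x=0.0)
```

After completion, every preset feature point in the first area has a counterpart at the mirrored position in the second area, which is the stated precondition for the richer set of first distances.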
In a possible embodiment, in terms of determining the target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template, the program is specifically configured to execute the following instructions:
determining a weight coefficient corresponding to each of the N distance errors, wherein the weight coefficient is inversely proportional to the distance between the two preset feature points corresponding to that distance error;
normalizing the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error;
weighting the N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error;
and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
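The weighting steps above can be sketched as follows. Taking the raw weight to be 1/d is one way to realise the stated inverse-proportional relation; the text does not fix the exact function, so this choice is an assumption.

```python
def final_distance_error(distance_errors, pair_distances):
    """Combine the N distance errors for one face image template into a
    final distance error.  Each error's weight is inversely proportional
    to the distance between its two preset feature points (closer point
    pairs count more), the weights are normalised to sum to 1, and the
    errors are combined as a weighted sum."""
    raw = [1.0 / d for d in pair_distances]     # inverse-proportional weights
    total = sum(raw)
    weights = [w / total for w in raw]          # normalisation step
    return sum(w * e for w, e in zip(weights, distance_errors))

# Two point pairs at distances 1 and 4: the closer pair's error is
# weighted 0.8, the farther pair's error 0.2.
err = final_distance_error([0.2, 0.8], [1.0, 4.0])
```

The template whose final distance error is smallest would then be selected as the target face image template, one plausible reading of the selection step.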
In a possible embodiment, the program is further adapted to execute the instructions of the following steps:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or more of a pendant, a tattoo, or a scar;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
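Adding the auxiliary features to the target face template while keeping both versions for synchronous display can be illustrated with a toy grid of pixel labels; the data representation and feature positions here are purely hypothetical.

```python
def add_auxiliary_features(template, features):
    """Overlay auxiliary features (pendant, tattoo, scar, ...) onto a
    copy of the target face template and return (original, new) so both
    templates can be displayed side by side.  The image is modelled as a
    2-D grid of pixel labels purely for illustration."""
    new = [row[:] for row in template]          # keep the original intact
    for label, (r, c) in features.items():
        new[r][c] = label                       # paste feature at its position
    return template, new

# A 3x3 "face" of skin pixels with a scar and a tattoo overlaid.
base = [["skin"] * 3 for _ in range(3)]
original, with_features = add_auxiliary_features(
    base, {"scar": (0, 1), "tattoo": (2, 2)})
```

Returning both images mirrors the synchronous-display step: the unmodified target face template and the new template with auxiliary features remain available together.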
Referring to fig. 5, fig. 5 is a block diagram illustrating functional units of an image conversion apparatus according to an embodiment of the present disclosure. The image conversion apparatus 500 includes: a first obtaining unit 510, a second obtaining unit 520, a comparing unit 530 and a determining unit 540, wherein:
a first obtaining unit 510, configured to obtain a distance between any two preset feature points in the sketch image, to obtain N first distances;
a second obtaining unit 520, configured to obtain the distance between any two preset feature points on each face image template, to obtain N second distances;
a comparing unit 530, configured to compare the N first distances with the N second distances of each face image template one by one, to obtain N distance errors corresponding to each face image template;
and the determining unit 540 is configured to determine a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
In a possible implementation manner, in acquiring the distance between any two preset feature points in the sketch image to obtain the N first distances, the first obtaining unit 510 is specifically configured to:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring a three-dimensional coordinate of any preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
In a possible implementation manner, the sketch image includes M sub-sketch images corresponding to M observation perspectives, and in terms of constructing a three-dimensional model of the sketch image to obtain a three-dimensional image, the first obtaining unit 510 is specifically configured to:
performing grid division on each of the M sub-sketch images to obtain a grid image corresponding to the preset feature points on each sub-sketch image;
determining surface light field data of the preset feature points on each sub-sketch image according to the observation perspective of each sub-sketch image and the grid image corresponding to the preset feature points on that sub-sketch image;
obtaining depth information of each preset feature point on the human face according to the surface light field data of the preset feature points on each sub-sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the human face.
In a possible embodiment, the image conversion apparatus 500 further comprises an adjusting unit 550;
before grid division is performed on each of the M sub-sketch images, the adjusting unit 550 is configured to adjust each sub-sketch image according to the observation perspective of each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, where the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on the target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and completing the feature points of the first area and the second area, so that the positions of the preset feature points in the first area and the second area are symmetrical and their numbers are the same.
In a possible implementation manner, in determining the target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template, the determining unit 540 is specifically configured to:
determining a weight coefficient corresponding to each of the N distance errors, wherein the weight coefficient is inversely proportional to the distance between the two preset feature points corresponding to that distance error;
normalizing the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error;
weighting the N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error;
and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
In a possible embodiment, the image conversion apparatus 500 further comprises an adding unit 560;
an adding unit 560 for:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or more of a pendant, a tattoo, or a scar;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
Embodiments of the present application also provide a computer storage medium, which stores a computer program, where the computer program is executed by a processor to implement part or all of the steps of any one of the image conversion methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the image conversion methods as set forth in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division into units is only one type of logical-function division, and other divisions are possible in actual implementation: for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An image conversion method, comprising:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template;
and determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
2. The method of claim 1, wherein obtaining the distance between any two preset feature points in the sketch image to obtain N first distances comprises:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring a three-dimensional coordinate of any preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
3. The method of claim 2, wherein the sketch image comprises M sub-sketch images corresponding to M observation perspectives, and wherein constructing a three-dimensional model of the sketch image to obtain a three-dimensional image comprises:
performing grid division on each of the M sub-sketch images to obtain a grid image corresponding to the preset feature points on each sub-sketch image;
determining surface light field data of the preset feature points on each sub-sketch image according to the observation perspective of each sub-sketch image and the grid image corresponding to the preset feature points on that sub-sketch image;
obtaining depth information of each preset feature point on the human face according to the surface light field data of the preset feature points on each sub-sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the human face.
4. The method of claim 3, wherein before performing grid division on each of the M sub-sketch images, the method further comprises:
adjusting each sub-sketch image according to the observation perspective of each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on the target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and completing the feature points of the first area and the second area, so that the positions of the preset feature points in the first area and the second area are symmetrical and their numbers are the same.
5. The method according to any one of claims 1-4, wherein the determining the target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template comprises:
determining a weight coefficient corresponding to each of the N distance errors, wherein the weight coefficient is inversely proportional to the distance between the two preset feature points corresponding to that distance error;
normalizing the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error;
weighting the N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error;
and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
6. The method according to any one of claims 1-5, further comprising:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or more of a pendant, a tattoo, or a scar;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
7. An image conversion apparatus characterized by comprising:
the first acquisition unit is used for acquiring the distance between any two preset feature points in the sketch image to obtain N first distances;
the second acquisition unit is used for acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
the comparison unit is used for comparing the N first distances with the N second distances of each face image template one by one to obtain N distance errors corresponding to each face image template;
and the determining unit is used for determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
8. The apparatus of claim 7,
in obtaining the distance between any two preset feature points in the sketch image to obtain N first distances, the first acquisition unit is specifically configured to:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
determining a three-dimensional coordinate of any preset feature point in the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points according to the three-dimensional coordinates of any two preset feature points in the sketch image to obtain N first distances.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911426180.9A CN111222448B (en) | 2019-12-31 | 2019-12-31 | Image conversion method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111222448A true CN111222448A (en) | 2020-06-02 |
CN111222448B CN111222448B (en) | 2023-05-12 |
Family
ID=70829253
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001889A (en) * | 2020-07-22 | 2020-11-27 | 杭州依图医疗技术有限公司 | Medical image processing method and device and medical image display method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510257A (en) * | 2009-03-31 | 2009-08-19 | 华为技术有限公司 | Human face similarity degree matching method and device |
CN109145737A (en) * | 2018-07-18 | 2019-01-04 | 新乡医学院 | A kind of fast human face recognition, device, electronic equipment and storage medium |
CN109376596A (en) * | 2018-09-14 | 2019-02-22 | 广州杰赛科技股份有限公司 | Face matching process, device, equipment and storage medium |
CN110414452A (en) * | 2019-07-31 | 2019-11-05 | 中国工商银行股份有限公司 | A kind of face searching method and system based on facial features location information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||