CN111222448B - Image conversion method and related product - Google Patents
Image conversion method and related product
- Publication number
- CN111222448B (application CN201911426180.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- sketch
- distance
- template
- preset feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation

- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures

- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques

- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Abstract
The embodiment of the application discloses an image conversion method and related products. The method includes: obtaining the distance between any two preset feature points in a sketch image to obtain N first distances; obtaining the distance between any two preset feature points on each face image template to obtain N second distances; comparing the N first distances with the N second distances of each face image template to obtain N distance errors corresponding to each face image template; and determining a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template. The embodiment of the application helps improve the conversion efficiency of sketch images.
Description
Technical Field
The application relates to the technical field of image recognition, in particular to an image conversion method and related products.
Background
Portrait synthesis technology has attracted considerable attention in recent years. In the judicial field, for example, searching a police image database for criminal suspects using sketch portraits is a very important application. The search process matches the drawn sketch image against each face template image in the police image database to obtain the best-matching target face template image, and the identity corresponding to that target face template image is then taken as the identity of the criminal suspect. Specifically, during matching the sketch image is divided into blocks and each block of the sketch image is matched separately, and the image with the highest overall matching degree is taken as the target face template image. However, this block-based matching approach has high algorithmic complexity, so sketch image conversion is slow.
Disclosure of Invention
The embodiment of the application provides an image conversion method and related products, which rapidly match a sketch image to a target face image template by comparing distances between feature points, thereby improving image conversion efficiency.
In a first aspect, an embodiment of the present application provides an image conversion method, including:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
obtaining the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template;
and determining a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
In a second aspect, an embodiment of the present application provides an image conversion apparatus, including:
the first acquisition unit is used for acquiring the distance between any two preset feature points in the sketch image to obtain N first distances;
the second acquisition unit is used for acquiring the distance between any two preset feature points on each face image template to obtain N second distances;
the comparison unit is used for comparing the N first distances one-to-one with the N second distances of each face image template to obtain N distance errors corresponding to each face image template;
and the determining unit is used for determining a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program causing a computer to perform the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the first aspect.
The implementation of the embodiment of the application has the following beneficial effects:
it can be seen that, in the embodiment of the application, the first distance between any two preset feature points in the sketch image is calculated and compared with the second distance between the same two preset feature points on each face image template to obtain the distance errors corresponding to each face image template; the target face template is then determined according to those distance errors. This achieves overall matching of the sketch image and thus improves the conversion efficiency of sketch images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image conversion method according to an embodiment of the present application;
fig. 2 is a flowchart of another image conversion method according to an embodiment of the present application;
fig. 3 is a flowchart of another image conversion method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image conversion device according to an embodiment of the present application;
fig. 5 is a functional unit composition block diagram of an image conversion device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The image conversion device in this application may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone), a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID), a wearable device, and the like. The above electronic devices are merely examples rather than an exhaustive list; the image conversion device is not limited to them. In practical applications, the image conversion device may further include intelligent vehicle-mounted terminals, computer devices, and the like.
Referring to fig. 1, fig. 1 is a flowchart of an image conversion method according to an embodiment of the present application, where the method is applied to an image conversion device. The method of the embodiment includes, but is not limited to, the following steps:
101: the image conversion device obtains the distance between any two preset feature points in the sketch image to obtain N first distances.
The arbitrary two preset feature points are two feature points on the face.
Optionally, a plurality of feature points are pre-defined on the face. Typically, a face may be divided into 68 feature points. For example, feature points on a face include the pupil centers, the nostrils, the mouth, dimples, and so on.
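The patent does not name a landmark detector, so purely as a hedged illustration, the 68 preset feature points could be located with dlib's standard 68-point shape predictor (the model file path below is a hypothetical local path, not part of the disclosure):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Hypothetical path to the pre-trained 68-point landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image):
    """Return (x, y) pixel coordinates of the 68 preset feature points."""
    faces = detector(image)
    if not faces:
        return []
    shape = predictor(image, faces[0])  # landmarks of the first detected face
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```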
102: the image conversion device obtains the distance between any two preset feature points on each face image template to obtain N second distances.
The distance between any two preset feature points on each face image template may be calculated in advance and read out directly, or calculated in real time; this is not limited here.
103: the image conversion device compares the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template.
Specifically, the image conversion device compares each first distance with the second distance corresponding to that first distance to obtain the distance error corresponding to that first distance. The correspondence relates a first distance to the second distance measured between the two preset feature points at the same positions on the sketch image and on the face template image. For example, when the first distance is the distance between the two pupils on the sketch image, the corresponding second distance is the distance between the two pupils on the face image template.
104: and the image conversion device determines a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
Optionally, the N distance errors corresponding to each face image template are weighted to obtain a final distance error corresponding to that face image template, and the face image template with the minimum final distance error is then used as the target face image template matched with the sketch image.
The weight coefficient corresponding to each distance error is inversely proportional to the distance between the two preset feature points corresponding to that distance error: the farther apart the two feature points are, the greater the probability of error, so a smaller weight coefficient is set to reduce the influence of such errors.
Specifically, the weight coefficient is a_i = α·(1/d_i), where a_i is the weight coefficient of the i-th of the N distance errors corresponding to each face image template, α is a preset parameter, and d_i is the distance between the two preset feature points corresponding to the i-th distance error. After the weight coefficient corresponding to each distance error is obtained, the weight coefficients corresponding to the N distance errors are normalized to obtain the target weight coefficient corresponding to each distance error.
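A minimal NumPy sketch of this weighted matching (the function name, the `alpha` default, and taking d_i from the sketch's first distances are assumptions; the distances are assumed nonzero):

```python
import numpy as np

def match_template(first_dists, template_dists_list, alpha=1.0):
    """Return the index of the best template and all final distance errors.

    first_dists: the N first distances measured on the sketch image.
    template_dists_list: one array of N second distances per face image template.
    """
    first = np.asarray(first_dists, dtype=float)
    weights = alpha / first            # a_i = alpha * (1 / d_i), inverse proportion
    weights /= weights.sum()           # normalization -> target weight coefficients

    final_errors = []
    for second in template_dists_list:
        errors = np.abs(first - np.asarray(second, dtype=float))  # N distance errors
        final_errors.append(float(np.dot(weights, errors)))       # weighted final error
    return int(np.argmin(final_errors)), final_errors             # minimum error wins
```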
It can be seen that, in the embodiment of the application, the first distance between any two preset feature points in the sketch image is calculated and compared with the second distance between the same two preset feature points on each face image template to obtain the distance errors corresponding to each face image template; the target face template is then determined according to those distance errors. This achieves overall matching of the sketch image and thus improves the conversion efficiency of sketch images.
In one possible implementation manner, the obtaining the distance between any two preset feature points in the sketch image and obtaining N first distances may be:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
determining three-dimensional coordinates of any one preset feature point in the sketch image on the three-dimensional image;
determining the distance between any two preset feature points according to the three-dimensional coordinates of those two preset feature points in the sketch image, to obtain the N first distances; that is, the Euclidean distance between the two preset feature points is calculated and taken as the distance between them.
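A minimal sketch of this Euclidean-distance step, assuming NumPy and that the K preset feature points are given as three-dimensional coordinates (so N = K·(K−1)/2):

```python
import itertools
import numpy as np

def pairwise_first_distances(points_3d):
    """Euclidean distance between every pair of preset feature points.

    points_3d: array of shape (K, 3) holding the 3D coordinates of the
    K preset feature points on the three-dimensional image.
    """
    pts = np.asarray(points_3d, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in itertools.combinations(range(len(pts)), 2)])
```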
Optionally, the sketch image includes M sub-sketch images corresponding to M observation angles. The M observation angles are the line-of-sight angles from which the user observes the target person. For example, the M angles may include a side view angle, a front view angle, a top view angle, and so on. The target person is the person corresponding to the target face template.
A process for constructing a three-dimensional image based on M sub-sketch images is provided below.
Grid division is performed on each sub-sketch image to obtain the grid image corresponding to each preset feature point on that sub-sketch image. Each sub-sketch image is divided into grids according to pixel points to obtain the grid image corresponding to each pixel point, thereby obtaining the grid images of the preset feature points on each sub-sketch image. Every n pixel points may be divided into one grid image, n being a positive integer greater than or equal to 1.
Further, the grid image corresponding to the preset feature point on each sub sketch image is decomposed and compressed, and surface light field data of the preset feature point on each sub sketch image is obtained.
Specifically, the surface light field data of each preset feature point may be represented by a four-dimensional function, d = f(u, v, x, y), where (u, v) are the pixel coordinates of the preset feature point on the sub-sketch image and (x, y) is the view direction of the preset feature point on the sub-sketch image, i.e., the illumination direction of the light, equivalent to the shooting direction a camera would have had when capturing the sub-sketch image. The shooting direction may be represented by the observation view angle corresponding to the sub-sketch image: the shooting direction is projected onto the xoy plane, the unit vector of that projection on the xoy plane is obtained, and this unit vector is taken as the view direction of the preset feature point on the sub-sketch image. f is a pre-constructed surface light field function.
Then, according to the surface light field data of the preset feature points on each sub sketch image, obtaining the depth information of each preset feature point on the human face; and constructing the three-dimensional image according to the depth information of each preset feature point on the face.
Obtaining depth information from surface light field data is prior art and is not described here.
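As a hedged illustration of the view-direction convention above (the function name and the zero-projection fallback are assumptions):

```python
import numpy as np

def view_direction_on_xoy(shooting_direction):
    """Project the shooting direction onto the xoy plane and return the
    unit vector of that projection, used as the (x, y) view direction."""
    d = np.asarray(shooting_direction, dtype=float)
    proj = d[:2]                 # drop the z component: projection onto the xoy plane
    norm = np.linalg.norm(proj)
    if norm == 0.0:              # shooting straight along the z axis (assumed fallback)
        return np.zeros(2)
    return proj / norm
```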
When the sketch image includes M sub-sketch images, the target person corresponding to each sub-sketch image is the same, and the preset feature points represented on the sub-sketch images are the same. Therefore, the sketch image mentioned in this application may be any one of the M sub-sketch images; that is, the distance between any two preset feature points on any one of the M sub-sketch images may be calculated to obtain the N first distances.
In one possible implementation, before rasterizing each of the M sub-sketch images, the method further includes:
each sub-sketch image is adjusted to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on a target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
the first area and the second area are complemented by feature points, so that the positions and the number of preset feature points contained in the first area and the second area are the same;
and constructing a three-dimensional model by adopting the M complemented sub-sketch images to obtain the three-dimensional image. The three-dimensional model construction by using the M complemented sub-sketch images is the same as or similar to the above process of constructing the three-dimensional image, and will not be described herein.
Specifically, the preset characteristic points of the first area are adopted, and the complementation of the preset characteristic points is carried out in the second area; and then, complementing the preset characteristic points in the first area by adopting the preset characteristic points in the second area so as to ensure that the positions of the preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
For example, if the first region includes the pupil center and the second region does not include the pupil center, the pupil center is complemented in the second region at a position symmetrical to the pupil center; if the second region contains a dimple, but the first region does not, the dimple is complemented at a location in the first region that is symmetrical to the dimple.
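A minimal sketch of this symmetric complementation, assuming named feature points and a vertical symmetry axis at x = axis_x (all names are hypothetical):

```python
def complement_features(region_a, region_b, axis_x):
    """Fill feature points missing on one side of the symmetry axis.

    region_a, region_b: dicts mapping feature-point names to (x, y)
    coordinates in the first and second regions of the target sketch image.
    """
    for name, (x, y) in list(region_a.items()):
        if name not in region_b:
            region_b[name] = (2 * axis_x - x, y)  # reflect across the axis
    for name, (x, y) in list(region_b.items()):
        if name not in region_a:
            region_a[name] = (2 * axis_x - x, y)
    return region_a, region_b
```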
In the embodiment, the feature points are complemented, so that each sub sketch image contains rich feature points, the constructed three-dimensional image is more accurate, and the image matching precision is improved.
In one possible embodiment, the sketch image may be a sketch image of a face.
The following provides a process for constructing a three-dimensional image based on a sketch image of a face.
Acquiring a pixel matrix corresponding to the sketch image;
and inputting the pixel matrix into a pre-trained three-dimensional model to obtain the three-dimensional image.
Optionally, the three-dimensional model is trained using sample data. Specifically, a plurality of face sample images are obtained, each corresponding to one frame of three-dimensional point cloud data (a three-dimensional image). The three-dimensional point cloud data corresponding to each face sample image is projected onto the xoy plane to obtain the RGB pixel matrix corresponding to that face sample image, i.e., the two-dimensional image corresponding to each face sample. The RGB pixel matrix corresponding to each face sample image is then used as training data, the three-dimensional point cloud data corresponding to each face sample image is used as supervision information, and an initial model is trained with the training data and the supervision information to obtain the pre-trained three-dimensional model. The training uses a cross-entropy loss function and a gradient descent method; the training process itself is prior art and is not described here.
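A schematic training loop under stated assumptions: PyTorch, a toy fully connected network, random stand-in tensors in place of real face samples, and a mean-squared-error surrogate where the patent names a cross-entropy loss.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 64x64 RGB pixel matrices, 68-point 3D point clouds.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
    nn.Linear(512, 68 * 3),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # plain gradient descent
loss_fn = nn.MSELoss()  # regression surrogate for the loss named in the text

pixels = torch.rand(8, 3, 64, 64)     # stand-in training data (RGB pixel matrices)
point_clouds = torch.rand(8, 68 * 3)  # stand-in supervision (flattened point clouds)

for _ in range(10):                   # a few gradient-descent steps
    optimizer.zero_grad()
    loss = loss_fn(model(pixels), point_clouds)
    loss.backward()
    optimizer.step()
```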
In one possible embodiment, the sketch image may also be an RGB image containing a human face, and another process of constructing a three-dimensional image is provided below, including but not limited to the following steps:
acquiring a face area in a sketch image;
taking a face region in the sketch image as a first region, taking other regions as a second region, and respectively generating a first histogram corresponding to the first region and a second histogram corresponding to the second region;
obtaining a depth map of the sketch image according to the first histogram and the second histogram; that is, the difference values of the corresponding pixel points in the first histogram and the second histogram are obtained to produce the depth map.
Obtaining a target sketch image according to the depth map;
and fusing the target sketch image and the sketch image to obtain a three-dimensional image.
In the embodiment, the depth information of the sketch image can be obtained by simply constructing a histogram, so that the three-dimensional image of the sketch image can be obtained rapidly, and the image conversion efficiency is improved.
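A rough OpenCV sketch of this histogram-based depth construction; the patent does not define the "corresponding pixel points" of two histograms precisely, so differencing the two histogram back-projections per pixel, as below, is an interpretive assumption:

```python
import cv2

def depth_map_from_histograms(gray, face_mask):
    """Crude depth cue from region histograms of a grayscale sketch image.

    gray: uint8 grayscale image; face_mask: uint8 mask of the face region.
    """
    hist_face = cv2.calcHist([gray], [0], face_mask, [256], [0, 256])
    hist_bg = cv2.calcHist([gray], [0], cv2.bitwise_not(face_mask), [256], [0, 256])
    cv2.normalize(hist_face, hist_face, 0, 255, cv2.NORM_MINMAX)
    cv2.normalize(hist_bg, hist_bg, 0, 255, cv2.NORM_MINMAX)
    bp_face = cv2.calcBackProject([gray], [0], hist_face, [0, 256], scale=1)
    bp_bg = cv2.calcBackProject([gray], [0], hist_bg, [0, 256], scale=1)
    return cv2.absdiff(bp_face, bp_bg)  # per-pixel difference as the depth map
```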
In one possible embodiment, after obtaining the target face template corresponding to the sketch image, the method further includes:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or a combination of more of wearing decorations, tattoos or scars;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
Specifically, since the face templates may be stored in advance while the sketch image is drawn in real time from an observer's description, the sketch image may contain features that are not present in the target face template. Therefore, after the target face template is obtained, the auxiliary features are added to the target face template, and the target face template and the target face template with the added auxiliary features are displayed synchronously, which improves the accuracy of person identification.
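A minimal sketch of overlaying an auxiliary feature onto the template, assuming the feature (e.g. a scar or tattoo) is available as an image patch with an 8-bit alpha mask; all names are hypothetical:

```python
import numpy as np

def add_auxiliary_feature(template, patch, mask, top_left):
    """Alpha-blend an auxiliary-feature patch onto a copy of the target
    face template and return the new target face template."""
    out = template.copy()
    y, x = top_left
    h, w = patch.shape[:2]
    alpha = mask[..., None].astype(float) / 255.0   # per-pixel blend weight
    region = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha * patch + (1 - alpha) * region).astype(template.dtype)
    return out
```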
Referring to fig. 2, fig. 2 is a flowchart of another image conversion method according to an embodiment of the present application, where the method is applied to an image conversion device. The same contents of this embodiment as those of the embodiment shown in fig. 1 are not repeated here. The method of the embodiment includes, but is not limited to, the following steps:
201: the image conversion device adjusts each sub sketch image to obtain a target sketch image of each sub sketch image in a preset direction.
202: the image conversion device acquires a symmetry axis on the target sketch image of each sub-sketch image, and divides the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis.
203: the image conversion device complements the characteristic points of the first area and the second area so that the positions of preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
204: the image conversion device adopts the M complemented sub-sketch images to construct a three-dimensional model, and a three-dimensional image is obtained.
205: the image conversion device obtains the three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image, and determines the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
206: the image conversion device obtains the distance between any two preset feature points on each face image template to obtain N second distances.
207: the image conversion device compares the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template.
208: and the image conversion device determines a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
It can be seen that, in the embodiment of the present application, the first distance between any two preset feature points in the sketch image is calculated and compared with the second distance between the same two preset feature points on each face image template to obtain the distance errors corresponding to each face image template; the target face template is then determined according to those distance errors, achieving overall matching of the sketch image and improving the conversion efficiency of sketch images. Moreover, because the feature points are complemented before the distances between preset feature points are calculated, the calculated first distances are richer, the matched target face template is more accurate, and the conversion accuracy of the sketch image is improved.
Referring to fig. 3, fig. 3 is a flowchart of another image conversion method according to an embodiment of the present application, where the method is applied to an image conversion device. The same contents as those of the embodiment shown in fig. 1 and 2 are used in this embodiment, and the description thereof will not be repeated here. The method of the embodiment includes, but is not limited to, the following steps:
301: the image conversion device adjusts each sub sketch image to obtain a target sketch image of each sub sketch image in a preset direction.
302: the image conversion device acquires a symmetry axis on the target sketch image of each sub-sketch image, and divides the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis.
303: the image conversion device complements the characteristic points of the first area and the second area so that the positions of preset characteristic points in the first area and the second area are symmetrical and the number of the preset characteristic points is the same.
304: the image conversion device adopts the M complemented sub-sketch images to construct a three-dimensional model, and a three-dimensional image is obtained.
305: the image conversion device obtains the three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image, and determines the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image to obtain N first distances.
306: the image conversion device obtains the distance between any two preset feature points on each face image template to obtain N second distances.
307: the image conversion device compares the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template.
308: and the image conversion device determines a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
309: an image conversion device obtains auxiliary features on the sketch image.
Wherein the auxiliary feature comprises one or a combination of a plurality of wearing decorations, tattoos or scars.
310: and the image conversion device adds the auxiliary features to the target face template to obtain a new target face template, and synchronously displays the target face template and the new target face template.
It can be seen that, in the embodiment of the present application, the first distance between any two preset feature points in the sketch image is calculated and compared with the second distance between the same two preset feature points on each face image template to obtain the distance errors corresponding to each face image template; the target face template is then determined according to those distance errors, achieving overall matching of the sketch image and improving the conversion efficiency of sketch images. Because the feature points are complemented before the distances between preset feature points are calculated, the calculated first distances are richer, the matched target face template is more accurate, and the conversion accuracy of the sketch image is improved. In addition, the auxiliary features in the sketch image are obtained and added to the target face template, which further improves the conversion accuracy of the sketch image.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image conversion device according to an embodiment of the present application. As shown in fig. 4, the image conversion apparatus 400 includes a processor, a memory, a communication interface, and one or more programs, and the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
obtaining the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template;
and determining a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template.
In one possible implementation manner, in obtaining the distance between any two preset feature points in the sketch image to obtain N first distances, the program is specifically configured to execute the following instructions:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image, so as to obtain N first distances.
In one possible implementation manner, the sketch image includes M sub-sketch images corresponding to M observation angles, and the program is specifically configured to execute the following instructions in terms of performing three-dimensional model construction on the sketch image to obtain a three-dimensional image:
performing grid division on each sub-sketch image in the M sub-sketch images to obtain grid images corresponding to preset feature points on each sub-sketch image;
determining surface light field data of preset feature points on each sub-sketch image according to the observation view angle of each sub-sketch image and the grid image corresponding to the preset feature points on each sub-sketch image;
obtaining depth information of each preset feature point on the face according to the surface light field data of the preset feature point on each sub sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the face.
In one possible implementation, before rasterizing each of the M sub-sketch images, the above-described program is further configured to execute instructions for:
adjusting each sub-sketch image according to the observation view angle of each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on a target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and carrying out feature point complementation on the first area and the second area so as to ensure that the positions of preset feature points in the first area and the second area are symmetrical and the number of the preset feature points is the same.
In one possible implementation manner, the above program is specifically configured to execute the following instructions in determining the target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template:
determining a weight coefficient corresponding to each distance error in the N distance errors, wherein the weight coefficient and the distance between two preset feature points corresponding to the distance errors are in an inverse proportion relation;
carrying out normalization processing on the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error;
weighting N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error;
and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
In a possible implementation manner, the above program is further used for executing the following instructions:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or a combination of more of wearing decorations, tattoos or scars;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
Referring to fig. 5, fig. 5 is a functional unit composition block diagram of an image conversion device according to an embodiment of the present application. The image conversion apparatus 500 includes: a first acquisition unit 510, a second acquisition unit 520, a comparison unit 530, and a determining unit 540, wherein:
the first obtaining unit 510 is configured to obtain a distance between any two preset feature points in the sketch image, so as to obtain N first distances;
a second obtaining unit 520, configured to obtain a distance between the arbitrary two preset feature points on each face image template, so as to obtain N second distances;
a comparison unit 530, configured to compare the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template;
the determining unit 540 is configured to determine a target face image template corresponding to the sketch image according to the N distance errors corresponding to each face image template.
In one possible implementation manner, in terms of acquiring the distance between any two preset feature points in the sketch image to obtain N first distances, the first obtaining unit 510 is specifically configured to:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image, so as to obtain N first distances.
In one possible implementation manner, the sketch image includes M sub-sketch images corresponding to M observation angles, and the first obtaining unit 510 is specifically configured to:
performing grid division on each sub-sketch image in the M sub-sketch images to obtain grid images corresponding to preset feature points on each sub-sketch image;
determining surface light field data of preset feature points on each sub-sketch image according to the observation view angle of each sub-sketch image and the grid image corresponding to the preset feature points on each sub-sketch image;
obtaining depth information of each preset feature point on the face according to the surface light field data of the preset feature point on each sub sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the face.
In a possible embodiment, the image conversion apparatus 500 further comprises an adjustment unit 550;
before performing raster division on each sub-sketch image in the M sub-sketch images, the adjusting unit 550 is configured to adjust each sub-sketch image according to the observation view angle of each sub-sketch image, so as to obtain a target sketch image of each sub-sketch image in a preset direction, where the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on a target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and carrying out feature point complementation on the first area and the second area so as to ensure that the positions of preset feature points in the first area and the second area are symmetrical and the number of the preset feature points is the same.
In one possible implementation manner, the determining unit 540 is specifically configured to, in determining, according to N distance errors corresponding to each face image template, a target face image template corresponding to the sketch image:
determining a weight coefficient corresponding to each distance error in the N distance errors, wherein the weight coefficient and the distance between two preset feature points corresponding to the distance errors are in an inverse proportion relation;
carrying out normalization processing on the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error;
weighting N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error;
and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
In one possible embodiment, the image conversion apparatus 500 further includes an adding unit 560;
an adding unit 560 for:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or a combination of more of wearing decorations, tattoos or scars;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
The present application also provides a computer storage medium storing a computer program that is executed by a processor to implement some or all of the steps of any one of the image conversion methods described in the above method embodiments.
The present application also provides a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the image conversion methods as described in the method embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software program modules.
The integrated units, if implemented in the form of software program modules, may be stored in a computer-readable memory for sale or use as a stand-alone product. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The foregoing has outlined rather broadly the more detailed description of embodiments of the present application, wherein specific examples are provided herein to illustrate the principles and embodiments of the present application, the above examples being provided solely to assist in the understanding of the methods of the present application and the core ideas thereof; meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.
Claims (9)
1. An image conversion method, comprising:
obtaining the distance between any two preset feature points in the sketch image to obtain N first distances;
obtaining the distance between any two preset feature points on each face image template to obtain N second distances;
comparing the N first distances with N second distances of each face image template to obtain N distance errors corresponding to each face image template;
determining a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template, which specifically comprises: determining a weight coefficient corresponding to each distance error in the N distance errors, wherein the weight coefficient is in inverse proportion to the distance between the two preset feature points corresponding to the distance error; carrying out normalization processing on the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error; weighting the N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error; and determining the target face image template corresponding to the sketch image according to the final distance error of each face image template.
2. The method of claim 1, wherein the obtaining the distance between any two preset feature points in the sketch image to obtain N first distances comprises:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
acquiring three-dimensional coordinates of any one preset feature point on the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points on the sketch image according to the three-dimensional coordinates of any two preset feature points on the sketch image on the three-dimensional image, so as to obtain N first distances.
3. The method of claim 2, wherein the sketch image comprises M sub-sketch images corresponding to M observation view angles, and wherein the constructing a three-dimensional model of the sketch image to obtain a three-dimensional image comprises:
performing grid division on each sub-sketch image in the M sub-sketch images to obtain grid images corresponding to preset feature points on each sub-sketch image;
determining surface light field data of preset feature points on each sub-sketch image according to the observation view angle of each sub-sketch image and the grid image corresponding to the preset feature points on each sub-sketch image;
obtaining depth information of each preset feature point on the face according to the surface light field data of the preset feature point on each sub sketch image;
and constructing the three-dimensional image according to the depth information of each preset feature point on the face.
4. The method of claim 3, wherein prior to rasterizing each of the M sub-sketch images, the method further comprises:
adjusting each sub-sketch image according to the observation view angle of each sub-sketch image to obtain a target sketch image of each sub-sketch image in a preset direction, wherein the preset direction is the frontal direction of the human face;
acquiring a symmetry axis on a target sketch image of each sub-sketch image, and dividing the target sketch image of each sub-sketch image into a first area and a second area according to the symmetry axis;
and carrying out feature point complementation on the first area and the second area so as to ensure that the positions of preset feature points in the first area and the second area are symmetrical and the number of the preset feature points is the same.
5. The method according to any one of claims 1-4, further comprising:
acquiring auxiliary features on the sketch image, wherein the auxiliary features comprise one or a combination of more of wearing decorations, tattoos or scars;
adding the auxiliary features to the target face template to obtain a new target face template;
and synchronously displaying the target face template and the new target face template.
6. An image conversion apparatus, comprising:
the first acquisition unit is used for acquiring the distance between any two preset feature points in the sketch image to obtain N first distances;
the second acquisition unit acquires the distance between any two preset feature points on each face image template to obtain N second distances;
the comparison unit is used for comparing the N first distances one-to-one with the N second distances of each face image template to obtain N distance errors corresponding to each face image template;
the determining unit is used for determining a target face image template corresponding to the sketch image according to N distance errors corresponding to each face image template, and is specifically used for: determining a weight coefficient corresponding to each distance error in the N distance errors, wherein the weight coefficient and the distance between two preset feature points corresponding to the distance errors are in an inverse proportion relation; carrying out normalization processing on the weight coefficient corresponding to each distance error to obtain a target weight coefficient corresponding to each distance error; weighting N distance errors corresponding to each face image template according to the target weight coefficient corresponding to each distance error to obtain a final distance error; and determining a target face image template corresponding to the sketch image according to the final distance error of each face image template.
7. The apparatus of claim 6, wherein, in terms of obtaining the distance between any two preset feature points in the sketch image to obtain N first distances, the first acquisition unit is specifically configured to:
constructing a three-dimensional model of the sketch image to obtain a three-dimensional image;
determining three-dimensional coordinates of any one preset feature point in the sketch image on the three-dimensional image;
and determining the distance between any two preset feature points according to the three-dimensional coordinates of the any two preset feature points in the sketch image to obtain N first distances.
8. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured for execution by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-5.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any of claims 1-5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201911426180.9A | 2019-12-31 | 2019-12-31 | Image conversion method and related product (CN111222448B) |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201911426180.9A | 2019-12-31 | 2019-12-31 | Image conversion method and related product (CN111222448B) |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN111222448A | 2020-06-02 |
| CN111222448B | 2023-05-12 |
Family
ID=70829253

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201911426180.9A | Image conversion method and related product | 2019-12-31 | 2019-12-31 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN111222448B |
Families Citing this family (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN112001889A | 2020-07-22 | 2020-11-27 | 杭州依图医疗技术有限公司 | Medical image processing method and device and medical image display method |
Family Cites Families (4)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN101510257B | 2009-03-31 | 2011-08-10 | 华为技术有限公司 | Human face similarity degree matching method and device |
| CN109145737B | 2018-07-18 | 2022-04-15 | 新乡医学院 | Rapid face recognition method and device, electronic equipment and storage medium |
| CN109376596B | 2018-09-14 | 2020-11-13 | 广州杰赛科技股份有限公司 | Face matching method, device, equipment and storage medium |
| CN110414452A | 2019-07-31 | 2019-11-05 | 中国工商银行股份有限公司 | Face searching method and system based on facial feature location information |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| CN111222448A | 2020-06-02 |
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |