CN113570498A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113570498A
Authority
CN
China
Prior art keywords
image
group
positions
pixel
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110184203.0A
Other languages
Chinese (zh)
Inventor
周勤
刘浩
陈忠磊
李琛
吕静
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110184203.0A
Publication of CN113570498A
Legal status: Pending

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 3/00: Geometric image transformations in the plane of the image
                    • G06T 3/18: Image warping, e.g. rearranging pixels individually
                    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
                        • G06T 3/4007: Scaling based on interpolation, e.g. bilinear interpolation
                • G06T 5/00: Image enhancement or restoration
                    • G06T 5/70: Denoising; Smoothing
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30196: Human being; Person
                            • G06T 2207/30201: Face
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit
                    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
                            • G06F 3/0484: GUI techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                                • G06F 3/04845: GUI techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method and device and electronic equipment. The method comprises the following steps: acquiring a first group of positions of a first group of feature points in a first image to be deformed, and moving the first group of feature points to obtain a second group of positions; determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in a third group of positions (positions selected from the first group), together with the first group of positions and the second group of positions, and determining a target deformation function according to the radial basis fitting parameters; and moving each pixel point in the first image to a corresponding pixel point in a second image according to the target deformation function, to obtain the deformed second image. This achieves the purpose of constructing a deformation field of the image based on radial basis functions and generating the deformed image by passing the deformation field to a graphics processor, thereby solving the technical problems in the prior art that, because a mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Face deformation is an important topic in image processing research. It has wide application in short-video face special effects such as face fusion and face swapping, and in the shaping of virtual characters such as face slimming and eye enlargement. Large-scale deformation must be computed at real-time speed for interactive drag-based deformation while keeping the deformed edges smooth and natural, which is the core difficulty of face deformation technology; as a result, the face-slimming and eye-enlargement features in beautification algorithms are mostly restricted to small-scale deformation within local regions.
Existing two-dimensional face deformation schemes mainly fall into two categories. The first category is interpolation algorithms, such as thin plate splines and IDW (Inverse Distance Weighting) interpolation, which deform over the whole pixel domain; the deformation speed is proportional to the number of pixels and becomes very slow for global deformation of high-resolution images, which also prevents such algorithms from performing very complicated global face deformation. The second category comprises triangulated-mesh algorithms, such as MLS (Moving Least Squares) deformation, ARAP (As-Rigid-As-Possible) deformation, and BBW (Bounded Biharmonic Weights) deformation, which subdivide the whole two-dimensional image with a mesh, calculate the mesh-point positions and texture coordinates on the image, and render the final deformation effect with OpenGL. This kind of deformation calculation can run in real time even on high-resolution images, but it is generally limited to local small-scale deformation; because the mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed, and deformation smoothness is also a problem.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device and electronic equipment, aiming to solve at least the technical problems in the prior art that, because a mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: acquiring positions of a first group of feature points in a first image to be deformed, wherein the positions of the first group of feature points form a first group of positions; moving the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions; determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in a third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting a target deformation function of the first image, and the third group of positions are positions selected from the first group of positions; determining the target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to an input position according to the input position; and moving each pixel point in the first image into a corresponding pixel point in a second image according to the target deformation function, to obtain the deformed second image.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: a first acquisition unit, configured to acquire positions of a first group of feature points in a first image to be deformed, where the positions of the first group of feature points form a first group of positions; a first obtaining unit, configured to move the first group of feature points to obtain a second group of feature points, where the positions of the second group of feature points form a second group of positions; a first determining unit, configured to determine radial basis fitting parameters according to the offset between each position in the first group of positions and each position in a third group of positions, the first group of positions, and the second group of positions, where the radial basis fitting parameters are used for fitting a target deformation function of the first image, and the third group of positions are selected from the first group of positions; a second determining unit, configured to determine the target deformation function according to the radial basis fitting parameters, where the target deformation function is configured to determine the movement amount corresponding to an input position according to the input position; and a second obtaining unit, configured to move each pixel point in the first image into a corresponding pixel point in a second image according to the target deformation function, to obtain the deformed second image.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned image processing method when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored, and a processor configured to execute the image processing method described above by the computer program.
In the embodiment of the invention, the positions of a first group of feature points are acquired in a first image to be deformed, wherein the positions of the first group of feature points form a first group of positions; the first group of feature points is moved to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions; radial basis fitting parameters are determined according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are selected from the first group of positions; the target deformation function is determined according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to an input position according to the input position; and each pixel point in the first image is moved into a corresponding pixel point in the second image according to the target deformation function, to obtain the deformed second image. In this way, a deformation field of the image is constructed based on radial basis functions from the first and second groups of positions of a group of feature points, and the deformed image is generated by passing the deformation field to a graphics processor, yielding a smooth deformation result; because the method is not limited by a fixed point spatial structure, it can produce smooth, large-range deformation effects, thereby solving the technical problems in the prior art that, because a mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative image processing method according to an embodiment of the invention;
FIG. 2 is a flow diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative relationship between a small-scale deformation field and a large-scale deformation field according to an embodiment of the present invention;
FIG. 4 is a block flow diagram of an alternative deformation field-based large-scale face deformation method according to an embodiment of the present invention;
FIG. 5 is a flow chart of an alternative large-scale face deformation based on deformation fields according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, there is provided an image processing method which, as an optional implementation manner, may be applied, but is not limited, to the environment shown in fig. 1, comprising a terminal device 102, a network 104, and a server 106. An image client runs in the terminal device 102 and is used for completing the image processing.
The terminal device 102 may include, but is not limited to: a human-computer interaction screen 104, a processor 106, and a memory 108. The human-computer interaction screen 104 is used for acquiring a human-computer interaction instruction through a human-computer interaction interface and for presenting the first image to be deformed and the second image obtained after deformation; the processor 106 is configured to respond to the human-computer interaction instruction by dragging the first image to obtain the second image, completing the image processing; and the memory 108 is used for storing the attribute information of the first image to be deformed, the first group of positions, the second group of positions, the deformation function, and the attribute information of the second image. The server may include, but is not limited to, a database 114 and a processing engine 116. The processing engine 116 is used for calling the first image to be deformed stored in the database 114; acquiring the positions of a first group of feature points in the first image to be deformed, wherein the positions of the first group of feature points form a first group of positions; moving the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions; determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are selected from the first group of positions; determining the target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to an input position according to the input position; and moving each pixel point in the first image into a corresponding pixel point in the second image according to the target deformation function, to obtain the deformed second image. In this way, a deformation field of the image is constructed based on radial basis functions from the first and second groups of positions of a group of feature points, and the deformed image is generated by passing the deformation field to a graphics processor, yielding a smooth deformation result; because the method is not limited by a fixed point spatial structure, it can produce smooth, large-range deformation effects, thereby solving the problems in the prior art that, because a mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
The specific process comprises the following steps: the human-computer interaction screen 104 in the terminal device 102 displays a first image to be deformed (shown in fig. 1 as a screenshot of a game). As shown in steps S102-S112, the positions of a first group of feature points are acquired in the first image to be deformed, wherein the positions of the first group of feature points form a first group of positions; the first group of feature points is moved to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions; and the first group of positions and the second group of positions are transmitted to the server 112 via the network 110. At the server 112, radial basis fitting parameters are determined according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are selected from the first group of positions; the target deformation function is determined according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to an input position according to the input position; and each pixel point in the first image is moved into a corresponding pixel point in the second image according to the target deformation function, to obtain the deformed second image. The determined second image is then returned to the terminal device 102.
Then, as shown in steps S114-S116, the terminal device 102 acquires the positions of the first group of feature points in the first image to be deformed, wherein the positions of the first group of feature points form the first group of positions; moves the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions; determines radial basis fitting parameters according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are selected from the first group of positions; determines the target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to an input position according to the input position; and moves each pixel point in the first image into a corresponding pixel point in the second image according to the target deformation function, to obtain the deformed second image. In this way, a deformation field of the image is constructed based on radial basis functions from the first and second groups of positions of a group of feature points, and the deformed image is generated by passing the deformation field to a graphics processor, yielding a smooth deformation result; because the method is not limited by a fixed point spatial structure, it can produce smooth, large-range deformation effects, thereby solving the technical problems in the prior art that, because a mesh fixes the spatial structure of the points, arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
In this embodiment, the image processing may include, but is not limited to, being executed by the terminal device 102, being executed by the server 106, or being executed by both the terminal device 102 and the server 106. The above is merely an example, and this is not limited in this embodiment.
Optionally, in this embodiment, the terminal device 102 may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: mobile phones (such as Android phones, iOS phones, etc.), notebook computers, tablet computers, palm computers, MIDs (Mobile Internet Devices), PADs, desktop computers, smart televisions, etc. The target client may be an image editing client, an image viewing client, an instant messaging client with an image processing function, a live-streaming client with image processing, and the like. The network may include, but is not limited to, a wired network or a wireless network, wherein the wired network comprises a local area network, a metropolitan area network, and a wide area network, and the wireless network comprises Bluetooth, WIFI, and other networks enabling wireless communication. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
Optionally, as an optional implementation, as shown in fig. 2, the image processing method includes:
step S202, acquiring positions of a first group of characteristic points in a first image to be deformed, wherein the positions of the first group of characteristic points form a first group of positions.
Step S204, the first group of feature points are moved to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions.
Step S206, determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are selected from the first group of positions.
And step S208, determining a target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to the input position according to the input position.
Step S210, according to the target deformation function, moving each pixel point in the first image to a corresponding pixel point in the second image, so as to obtain a deformed second image.
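Steps S202-S210 can be sketched end to end in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: it assumes a Gaussian radial basis kernel with a hypothetical width parameter sigma (the embodiment does not fix a particular kernel at this point), and it uses the entire first group of positions as the third group.

```python
import numpy as np

def fit_rbf_deformation(src, dst, sigma=50.0):
    """Steps S202-S206: fit radial basis weights from control-point offsets.

    src: (N, 2) first group of positions; dst: (N, 2) second group of positions.
    Here all of src doubles as the third group of positions.
    """
    offsets = dst - src                                  # desired movement at each control point
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = np.exp(-(d / sigma) ** 2)                        # Gaussian radial basis kernel
    # Radial basis fitting parameters: solve K @ w = offsets (one weight column per axis).
    return np.linalg.solve(K + 1e-8 * np.eye(len(src)), offsets)

def deform(points, src, w, sigma=50.0):
    """Step S208: target deformation function, mapping input positions to moved positions."""
    d = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
    return points + np.exp(-(d / sigma) ** 2) @ w        # input position + movement amount
```

At the control points themselves the fitted function reproduces the dragged positions, which is the interpolation property the fitting step relies on; step S210 then evaluates `deform` at every pixel position.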
Optionally, in this embodiment, the image processing method may include, but is not limited to, face deformation editing at any scale by a user. Various entertaining face special effects, such as a frog mouth, may be designed according to user requirements; the method may also be used for face swapping, face fusion, and the like, as well as for face cartoon animation and the design of exaggerated cartoon characters.
In this embodiment, a first image may be dragged in an image editor; a first group of positions before the dragging and a second group of positions after the dragging are acquired; a target deformation function of the first image is determined according to the first group of positions and the second group of positions; the movement amount of each pixel point from the first image to the second image is determined according to the target deformation function; and each pixel point in the first image is moved by its corresponding movement amount to obtain the corresponding pixel point in the second image, yielding the second image obtained after the first image is dragged.
The first group of feature points may include, but is not limited to, face feature points extracted by a face recognition algorithm, or feature points set interactively by the user in the image. The first group of feature points corresponds one-to-one to the first group of positions: the coordinates of the feature points in the group form a group of position information, that is, a coordinate set. It should be noted that the number of feature points in the group may include, but is not limited to, one, two, three, ten, and the like, and is not specifically limited in this embodiment.
It should be further noted that the more feature points there are in the first group (and correspondingly in the second group), the closer the deformation function determined from the first group of positions and the second group of positions approximates the deformation field of the first image, and the smoother the resulting second image.
In this embodiment, the position of a pixel point in the first image is input into the target deformation function, which determines the pixel point in the second image it deforms into, that is, the movement amount by which the pixel point needs to be moved. For example, if a pixel point A at (x1, y1) in the first image is dragged to (x2, y2) in the second image, the corresponding movement amount is Δx = x2 - x1 and Δy = y2 - y1: through the dragging operation, the point moves Δx in the x direction and Δy in the y direction, where x and y are pixel coordinates. From the first image to be deformed to the second image, the coordinate position of each pixel point changes, so every pixel point in the first image has corresponding movement amounts in the x and y directions.
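The movement amount described above reduces to a one-line helper; the concrete coordinates in the comment are made up for illustration.

```python
def movement_amount(p_before, p_after):
    """Movement amount between a pixel's position in the first and second images."""
    x1, y1 = p_before
    x2, y2 = p_after
    return (x2 - x1, y2 - y1)  # (dx, dy): shift needed in the x and y directions

# A pixel dragged from (120, 80) to (150, 95) must move 30 in x and 15 in y.
```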
In practical applications, a group of feature points of the first image to be deformed may consist of one feature point at the forehead, two at the eyes (left and right), one at the nose, two at the ears (left and right), one at the mouth, one at the chin, and two at the cheeks (left and right): ten feature points in total. The position of each of the 10 feature points is acquired in the first image to be deformed, giving a first group of positions in one-to-one correspondence with the first group of feature points. The position information may include, but is not limited to, the two-dimensional coordinates of the corresponding pixel points in the first image to be deformed, so the first group of position information of the group of feature points is represented as (x1, y1), (x2, y2), (x3, y3), (x4, y4), ..., (x10, y10).
It should be noted that the first group of 10 feature points may also be selected by human interaction.
In this embodiment, the positions of a group of feature points in the first image to be deformed and the positions of the second group of feature points obtained by moving that group may be acquired in advance; that is, a group of feature points whose position changes are known is obtained beforehand. The position information may be coordinate information: the original coordinates of the group of feature points before the change and their coordinates after the movement are obtained in advance. A radial basis function is then constructed from the coordinates before and after the change to determine the target deformation function of the first image; the movement amount of each pixel point after the first image is dragged is determined according to the target deformation function; and the pixel coordinates of each pixel point in the second image are determined from its original coordinates in the first image and its movement amount, yielding the second image after the first image is dragged and completing the image processing.
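Once the target deformation function is known, producing the second image amounts to resampling the first image with the per-pixel movement amounts. The sketch below is a CPU illustration only: it uses backward sampling with nearest-neighbour rounding for brevity, whereas the patent's pipeline passes the deformation field to a graphics processor and would typically use bilinear interpolation (cf. classification G06T 3/4007); the function name and sampling convention are assumptions for this sketch.

```python
import numpy as np

def warp_image(img, deform_fn):
    """Backward warp: each output pixel samples the source image at the position
    the deformation function maps it to (nearest-neighbour, clipped to bounds)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(float)
    src = deform_fn(pts)                                 # per-pixel source coordinates (x, y)
    sx = np.clip(np.rint(src[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(src[:, 1]).astype(int), 0, h - 1)
    return img[sy, sx].reshape(img.shape)
```

With the identity function as `deform_fn` the image is unchanged; plugging in a fitted radial basis deformation function yields the deformed second image.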
By the embodiment provided in this application, the positions of a first group of feature points are obtained in a first image to be deformed, where the positions of the first group of feature points form a first group of positions; the first group of feature points is moved to obtain a second group of feature points, where the positions of the second group of feature points form a second group of positions; radial basis fitting parameters are determined according to the offsets between each position in the first group of positions and each position in a third group of positions, the first group of positions, and the second group of positions, where the radial basis fitting parameters are used to fit the target deformation function of the first image, and the third group of positions is selected from the first group of positions; the target deformation function is determined according to the radial basis fitting parameters, where the target deformation function determines the movement amount corresponding to an input position; and each pixel point in the first image is moved to the corresponding pixel point in the second image according to the target deformation function, obtaining the deformed second image. In this way, a deformation field of the image is constructed based on radial basis functions from the first group of positions and the second group of positions of a group of feature points, and the deformation field is transmitted to a graphics processor to generate the deformed image, obtaining a smooth deformation result. Because the deformation field is not limited by a point space structure, smooth and large-range deformation effects can be produced, which solves the technical problems in the prior art that a mesh fixes the spatial structure of the points, so that arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
Optionally, determining the radial basis fitting parameters according to the offsets between each position in the first group of positions and each position in the third group of positions, the first group of positions, and the second group of positions may include:
s1, acquiring the position offset between each position in the first group of positions and each position in the third group of positions to obtain a group of position offsets;
and S2, determining a first group of weight values and a second group of weight values according to the group of position offsets, the first group of positions and the second group of positions, wherein the radial basis fitting parameters comprise the first group of weight values and the second group of weight values.
In this embodiment, the offset between each position in the group of positions and each position in the third group of positions is obtained. For example, if a group of positions includes the positions of 10 feature points, the third group of positions may include the positions of the 5th, 6th, and 10th feature points, and the offsets of the feature points in the third group before and after deformation may be known.
It should be noted that, in this embodiment, when the first image to be deformed is changed into the second image, it is necessary to determine the variation of each pixel point in the first image, where the variation includes a variation in the X direction and a variation in the Y direction of the pixel coordinates.
In this embodiment, the variation of each pixel point in the first image may be determined through the target deformation function, which is fitted by constructing a radial basis function. The variations of the pixel points enter as parameters of the radial basis construction; the target deformation function is then determined from the radial basis function, thereby obtaining the variation of each pixel point in the first image.
In this embodiment, in order to construct a globally smooth deformation field, N feature points at a first position are predefined, which is equivalent to influencing the positions of the remaining points in the first image through the changes of these feature points. Let the original positions of the N feature points be $u_i$, $i = 0, 1, \dots, N$, and their changed positions be $v_i$; the variation of the N feature points is then $M_f(u_i) = v_i - u_i$, $i = 1, 2, \dots, n$. A radial basis function construction is used to fit $M_f(u_i)$, as in formula (1):

$$M_f(u) = \sum_{i=1}^{n} w_i\,\phi(\|u - u_i\|) + \sum_{j=1}^{m} \alpha_j\,p_j(u) \qquad (1)$$
In this embodiment, taking m = 1, the polynomial part $p(u)$ is a linear basis, $\alpha_0 + \alpha_1 u$; for m = 2 it is a degree-2 function, $\alpha_0 + \alpha_1 u + \alpha_2 u^2$. Typically $p(u)$ is a low-degree polynomial with m < n. In this embodiment, the polynomial basis can also be replaced by an affine transformation basis, which provides more variation for the radial basis function. A number of fixed points before and after deformation are preset in advance; assuming they are $\{(u_i, v_i)\}_{i=0,1,\dots,N}$, the following equation system is obtained:

$$\sum_{i=1}^{n} w_i\,\phi(\|u_j - u_i\|) + \sum_{s=1}^{m} \alpha_s\,p_s(u_j) = v_j - u_j, \quad j = 1, 2, \dots, N \qquad (2)$$
It should be noted that, of the N ≥ n + m feature points, n points are the constraint points used to construct the radial basis, and the other points are used to solve the m variables.
In order to obtain the target deformation function through calculation conveniently, the equation set in the formula (2) is converted into a matrix, and the matrix is expressed as a formula (3):
φ(U)·W=V (3)
where

$$\phi(U) = \begin{pmatrix} \phi(\|u_1 - u_1\|) & \cdots & \phi(\|u_1 - u_n\|) & p_1(u_1) & \cdots & p_m(u_1) \\ \vdots & & \vdots & \vdots & & \vdots \\ \phi(\|u_N - u_1\|) & \cdots & \phi(\|u_N - u_n\|) & p_1(u_N) & \cdots & p_m(u_N) \end{pmatrix}$$

can be represented as an N × (n + m) matrix,

$$W = (w_1, \dots, w_n, \alpha_1, \dots, \alpha_m)^T$$

can be represented as an (n + m) × 1 matrix, and

$$V = (v_1 - u_1, \dots, v_N - u_N)^T$$

can be represented as an N × 1 matrix.
In this embodiment, in order to ensure the robustness of the first weights $w_i$ and the smoothness of the deformation field, a regularization term $\lambda\|RW\|^2$ is constructed, where R is a regularization matrix; taking the derivative of $\|\phi(U)W - V\|^2 + \lambda\|RW\|^2$ with respect to W and setting it to zero yields formula (4):

$$(\phi^T\phi + \lambda R^T R)\,W = \phi^T V \qquad (4)$$
Solving the above formula (4), the weights corresponding to the first group of weight values and the second group of weight values are obtained as:

$$W_{reg} = (\phi^T\phi + \lambda R^T R)^{-1}\phi^T V$$
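As a rough numerical sketch of this regularized solve (NumPy; the Gaussian kernel φ, the affine polynomial basis p = (1, x, y), the bandwidth, and the value of λ are illustrative assumptions, since the embodiment does not fix φ or p_s):

```python
import numpy as np

def rbf_fit(U, V, lam=1e-6, bw=0.5):
    """Solve (phi^T phi + lam * R^T R) W = phi^T D with R = I for the
    radial-basis fitting parameters, where D = V - U holds the
    control-point offsets (one column per coordinate axis)."""
    D = V - U
    r = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)  # ||u_j - u_i||
    K = np.exp(-(r / bw) ** 2)                 # radial part phi (assumed Gaussian)
    P = np.hstack([np.ones((len(U), 1)), U])   # affine basis columns (1, x, y)
    A = np.hstack([K, P])                      # design matrix phi(U), N x (n + m)
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ D)
    return W, A

rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=(12, 2))        # original control-point positions
theta = np.deg2rad(5.0)                        # move the points by a small
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
V = U @ Rot.T + 0.05                           # rotation plus translation
W, A = rbf_fit(U, V)
print(W.shape)                                 # (n + m) x 2, here (15, 2)
print(float(np.abs(A @ W - (V - U)).max()))    # small residual of fitted offsets
```

Because the moved positions here are an affine warp of the originals, the affine basis columns can represent the offsets almost exactly, so the regularized residual is tiny.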
Carrying out singular value decomposition (SVD) on φ gives formula (5):

$$\phi = X\,\Sigma\,Y^T \qquad (5)$$

where X = (x_1, …, x_m) and Y = (y_1, …, y_n) are orthogonal matrices whose columns are x_i and y_i, and Σ = diag(σ_1, …, σ_n) with σ_1 ≥ σ_2 ≥ … ≥ σ_n > 0. If R = 0, formula (6) can be obtained:

$$W = \sum_{i=1}^{n} \frac{x_i^T V}{\sigma_i}\; y_i \qquad (6)$$
It can be seen that when some singular values σ_i are small, the factor 1/σ_i in formula (6) amplifies even very small disturbances in V, so that the solution W_reg has a large error and is not smooth enough. Here, let R = λI_n, thereby obtaining formula (7):

$$W_{reg} = \sum_{i=1}^{n} \frac{\sigma_i}{\sigma_i^2 + \lambda^2}\,(x_i^T V)\; y_i \qquad (7)$$
In this embodiment, not only can a stable solution be obtained, but the filter factor σ_i/(σ_i² + λ²) also produces smoothed weights W_reg, and hence a smooth deformation field. The obtained deformation field gives the position v of each pixel after deformation; the deformation amount is calculated by the function of formula (8), and the deformation amount of each pixel is determined through formula (8):

$$v = u + M_f(u) \qquad (8)$$

where M_f(u) is the fitted function of formula (1).
It should be noted that generating a smooth deformation field may include, but is not limited to, using a radial basis function with smooth regularization; a deformation field may also be generated by the moving least squares (MLS) method, inverse distance weighting (IDW), or the like.
Note that, in this embodiment, singular value decomposition (SVD) is likewise used to decompose a matrix; unlike eigendecomposition, SVD does not require the matrix being decomposed to be square. Assuming the matrix A is an s × p matrix, the SVD of A is A = U Σ V^T, where U is an s × s matrix, Σ is an s × p matrix that is all zeros except for the elements on its main diagonal (each such element is called a singular value), and V is a p × p matrix. Both U and V satisfy U^T U = I and V^T V = I.
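The shape and orthogonality properties above are easy to check numerically; the following NumPy sketch uses illustrative sizes s = 5, p = 3:

```python
import numpy as np

s, p = 5, 3
rng = np.random.default_rng(1)
A = rng.normal(size=(s, p))                   # a non-square s x p matrix

U, sigma, Vt = np.linalg.svd(A)               # full SVD: A = U @ S @ V^T
print(U.shape, Vt.shape)                      # U is s x s, V is p x p

# Sigma is s x p, all zeros except the singular values on the main diagonal
S = np.zeros((s, p))
np.fill_diagonal(S, sigma)

print(np.allclose(U.T @ U, np.eye(s)))        # U^T U = I
print(np.allclose(Vt @ Vt.T, np.eye(p)))      # V^T V = I
print(np.allclose(U @ S @ Vt, A))             # the decomposition reproduces A
```

NumPy returns the singular values sorted in decreasing order, matching the convention σ_1 ≥ σ_2 ≥ … used above.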
In the corresponding application, if a spatial mesh such as a triangulation or quadrilateral mesh is built on the image, the movement of point positions is constrained, so that arbitrary large-scale deformation cannot be performed, and the smoothness of the deformation is also affected.
Optionally, determining the first set of weight values and the second set of weight values according to the set of position offsets, the first set of positions, and the second set of positions may include: determining a target matrix according to the set of position offsets and the first set of positions; decomposing the target matrix to obtain a first matrix and a second matrix; determining a first set of weight values and a second set of weight values according to the first matrix, the second matrix and the second set of positions.
The decomposing the target matrix to obtain a first matrix and a second matrix may include:
decomposing the target matrix through the following formula to obtain a first matrix and a second matrix:

$$\phi(U) = X\,\Sigma\,Y^T$$

where the target matrix is

$$\phi(U) = \begin{pmatrix} \phi(u_1 - u_1) & \cdots & \phi(u_1 - u_n) & p_1(u_1) & \cdots & p_m(u_1) \\ \vdots & & \vdots & \vdots & & \vdots \\ \phi(u_N - u_1) & \cdots & \phi(u_N - u_n) & p_1(u_N) & \cdots & p_m(u_N) \end{pmatrix},$$

X represents the first matrix, Y represents the second matrix, and Σ represents an N × P matrix (P = n + m) that is all zeros except for the elements Λ_{11}, Λ_{22}, …, Λ_{PP} on its main diagonal; N is the number of positions in the first group of positions, n is the number of positions in the third group of positions, N ≥ n + m, and n and m are preset natural numbers. φ(u_j − u_i) represents the radial basis kernel evaluated at the offset between the j-th position u_j in the first group of positions and the i-th position u_i in the third group of positions, with j taking the values 1, 2, …, N and i taking the values 1, 2, …, n; p_s(u) denotes the s-th polynomial basis function, with s taking the values 1, 2, …, m.
It should be noted that determining the first group of weight values and the second group of weight values according to the first matrix, the second matrix, and the second group of positions may include determining:

$$W_{reg} = (w_1, \dots, w_n, \alpha_1, \dots, \alpha_m)^T = \sum_{i=1}^{P} \frac{\sigma_i}{\sigma_i^2 + \lambda^2}\,(x_i^T V)\; y_i$$

where x_i represents the i-th column vector in the first matrix X, y_i represents the i-th column vector in the second matrix Y, and the second group of positions comprises v_1, v_2, …, v_N, with

$$V = (v_1 - u_1,\; v_2 - u_2,\; \dots,\; v_N - u_N)^T;$$

the first group of weight values includes w_1, w_2, …, w_n, the second group of weight values includes α_1, α_2, …, α_m, σ_i is the i-th element Λ_{ii} on the main diagonal Λ_{11}, Λ_{22}, …, Λ_{PP}, and λ is a preset constant.
Optionally, determining the target deformation function according to the radial basis fitting parameters may include determining the following function as the target deformation function:

$$M_f(u) = \sum_{i=1}^{n} w_i\,\phi(\|u - u_i\|) + \sum_{j=1}^{m} \alpha_j\,p_j(u)$$

where the first group of weight values includes w_1, w_2, …, w_n, the second group of weight values includes α_1, α_2, …, α_m, n represents the number of positions in the third group of positions, n and m are preset natural numbers, w_i represents the i-th weight value in the first group of weight values, α_j represents the j-th weight value in the second group of weight values, u_i represents the i-th position in the third group of positions, φ(‖u − u_i‖) represents the radial basis kernel evaluated at the position offset between the target position u in the first image and the i-th position, and M_f(u) represents the movement amount corresponding to the target position u.
In this embodiment, for m = 1 the polynomial part p(u) is a linear basis, α_0 + α_1 u; for m = 2 it is a degree-2 function, α_0 + α_1 u + α_2 u². Typically p(u) is a low-degree polynomial with m < n. It should be noted that the polynomial basis can also be replaced by an affine transformation basis, which provides more variation for the radial basis function, so that the optimal movement amount corresponding to the deformation of the target position can be determined.
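With the two weight groups in hand, evaluating the target deformation function at an arbitrary position is a direct sum. The sketch below assumes a Gaussian φ and the affine basis p = (1, x, y) (both illustrative choices, since the embodiment does not fix them), with hand-picked weights for demonstration:

```python
import numpy as np

def move_amount(u, centers, w, alpha, bw=0.5):
    """M_f(u) = sum_i w_i * phi(||u - u_i||) + sum_j alpha_j * p_j(u):
    the movement amount (one value per coordinate axis) at position u."""
    r = np.linalg.norm(centers - u, axis=1)       # ||u - u_i|| for each center
    phi = np.exp(-(r / bw) ** 2)                  # assumed Gaussian kernel
    p = np.array([1.0, u[0], u[1]])               # affine basis p = (1, x, y)
    return phi @ w + p @ alpha

centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])  # third-group positions
w = np.zeros((3, 2))                              # radial weights switched off
alpha = np.array([[0.1, 0.0],                     # constant shift in x
                  [0.0, 0.0],
                  [0.0, 0.1]])                    # y-proportional shift in y
print(move_amount(np.array([0.5, 0.5]), centers, w, alpha))
```

With the radial weights zeroed, the movement amount at u = (0.5, 0.5) is purely affine: 0.1 in x and 0.1 · 0.5 = 0.05 in y.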
Optionally, moving each pixel point in the first image to a corresponding pixel point in the second image according to the target deformation function to obtain the deformed second image, where the method includes:
s1, performing the following steps on each pixel point in the first image to obtain a second image, where each pixel point in the first image is regarded as a first current point when the following steps are performed:
s2, acquiring a first current position of a first current point in the first image;
s3, acquiring the current movement amount output by the target deformation function and corresponding to the position of the first current point under the condition that the first current position is the input position of the target deformation function;
and S4, determining a second current position of a corresponding second current point in the second image according to the current movement amount and the first current position, wherein the second current point is a pixel point of the first current point moving to the second image.
Determining a second current position of a corresponding second current point in the second image according to the current movement amount and the first current position may include:
determining the second current position by:
x′=x0+Δx
y′=y0+Δy
where (x′, y′) denotes the second current position of the second current point P′, (x_0, y_0) denotes the first current position of the first current point P, and (Δx, Δy) denotes the current movement amount.
In this embodiment, the amount of movement of the pixel point is determined by the difference of the coordinates in the pixel point coordinates, and the amount of movement may include, but is not limited to, the amount of movement in the X direction and the amount of movement in the Y direction in the pixel point coordinates.
When the image is a three-dimensional stereoscopic image, the movement amount may include a movement amount in the Z direction in pixel coordinates.
Optionally, moving each pixel point in the first image to a corresponding pixel point in the second image according to the target deformation function to obtain the deformed second image, where the method includes: reducing the first image into a third image according to a preset proportion; acquiring the movement amount corresponding to the position of each pixel point in the third image according to the target deformation function, wherein the movement amount corresponding to the position of each pixel point in the third image forms a first group of movement amounts; and moving each pixel point in the first image into a corresponding pixel point in the second image according to the first group of movement amounts and the preset proportion.
In this embodiment, assuming that the width and height of the image are W and H, to generate a two-dimensional deformation field of size W × H, a small size (W/s) × (H/t) can be used first: the target deformation function is determined at the small size, and the movement amount of each pixel point at the small size is determined, i.e., for any point (x, y) in the small-scale deformation field, the deformation amount v(x, y) = M_f(x, y) occurring at this point is solved, and a small-scale deformation field is obtained. The deformation of each point in the small-scale deformation field corresponds to the large scale, so the large-scale deformation field is then obtained through mapping and interpolation of the small-scale deformation field: for a point (x, y) on the small-scale deformation field, the deformation amount is v(x, y), corresponding to the deformation amount at (s·x, t·y) on the large-scale deformation field. As shown in fig. 3, the corresponding relationship between the small-scale deformation field and the large-scale deformation field is schematically illustrated. In the small-scale deformation field in fig. 3, there are 4 pixel points (x, y), (x+1, y), (x, y+1), and (x+1, y+1).
For any point (p, q) in the large-scale deformation field, when s·x ≤ p ≤ s·(x+1) and t·y ≤ q ≤ t·(y+1), the deformation amount corresponding to each pixel point in the large-scale deformation field is interpolated through distance weights, i.e.

v(p, q) = w_1 v(s·x, t·y) + w_2 v(s·x, t·(y+1)) + w_3 v(s·(x+1), t·y) + w_4 v(s·(x+1), t·(y+1))

When p = s·x and q = t·y, w_1 takes the value 1 and w_2, w_3, w_4 are all 0; when p = s·x and q = t·(y+1), w_2 takes the value 1 and w_1, w_3, w_4 are all 0; when p = s·(x+1) and q = t·y, w_3 takes the value 1 and w_1, w_2, w_4 are all 0; and when p = s·(x+1) and q = t·(y+1), w_4 takes the value 1 and w_1, w_2, w_3 are all 0. That is, when (p, q) takes one of these boundary values, the large-scale deformation field corresponding to the small-scale deformation field can be determined directly through the mapping relation.
In this embodiment, the deformation amount corresponding to each pixel point in the large deformation field is calculated in an interpolation manner.
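The small-to-large mapping and interpolation described above can be sketched as a field upsampling in pure NumPy (using bilinear fractions for the distance weights w_1..w_4 and integral scale factors s and t — both simplifying assumptions):

```python
import numpy as np

def upsample_field(small, s, t):
    """Map a small-scale deformation field (h x w x 2) to the large scale by
    distance-weight interpolation of its four surrounding grid points."""
    h, w, _ = small.shape
    H, W = h * t, w * s
    # large-scale coordinates (p, q) mapped back to small-scale units
    py, px = np.meshgrid(np.arange(H) / t, np.arange(W) / s, indexing="ij")
    y0 = np.clip(np.floor(py).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(px).astype(int), 0, w - 2)
    fy, fx = py - y0, px - x0                  # fractional offsets
    # the four weights w1..w4 of the interpolation formula above
    v = ((1 - fy)[..., None] * (1 - fx)[..., None] * small[y0, x0]
         + (1 - fy)[..., None] * fx[..., None] * small[y0, x0 + 1]
         + fy[..., None] * (1 - fx)[..., None] * small[y0 + 1, x0]
         + fy[..., None] * fx[..., None] * small[y0 + 1, x0 + 1])
    return v

small = np.zeros((4, 4, 2)); small[..., 0] = 1.0   # constant x-shift of 1 px
big = upsample_field(small, s=8, t=8)
print(big.shape)                                   # (32, 32, 2)
```

A constant small-scale field stays constant after upsampling, since the four weights always sum to 1.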
Optionally, moving each pixel point in the first image to a corresponding pixel point in the second image according to the first group of movement amounts and the preset ratio may include:
s1, restoring the pixel points in the third image into the pixel points in the first image according to a preset proportion, wherein the restored pixel points in the first image form a first group of pixel points, and a second group of movement amounts corresponding to the first group of pixel points are determined according to the first group of movement amounts and the preset proportion;
s2, determining a third group of movement amount corresponding to a second group of pixel points except the first group of pixel points in the first image according to the second group of movement amount and the first group of pixel points, wherein the first group of pixel points and the second group of pixel points form the first image;
and S3, according to the second group movement amount and the third group movement amount, each pixel point in the first image is moved to be a corresponding pixel point in the second image.
It should be noted that, determining, according to the second group of movement amounts and the first group of pixel points, a third group of movement amounts corresponding to the second group of pixel points except the first group of pixel points in the first image may include:
determining the third group of movement amounts corresponding to the second group of pixel points in the first image other than the first group of pixel points by the following formula:

v(p, q) = w_1 v(s·x, t·y) + w_2 v(s·x, t·(y+1)) + w_3 v(s·(x+1), t·y) + w_4 v(s·(x+1), t·(y+1))

where s·x ≤ p ≤ s·(x+1) and t·y ≤ q ≤ t·(y+1); the first image has size W × H and the third image has size (W/s) × (H/t), with s > 1 and t > 1; the second group of movement amounts comprises v(s·x, t·y), v(s·x, t·(y+1)), v(s·(x+1), t·y), and v(s·(x+1), t·(y+1)); (x, y) represents a pixel point in the third image; v(p, q) represents the movement amount corresponding to the pixel point (p, q) in the third group of pixel points; the first group of pixel points comprises (s·x, t·y), (s·x, t·(y+1)), (s·(x+1), t·y), and (s·(x+1), t·(y+1)); and w_1, w_2, w_3, w_4 are preset weight values.
Optionally, in this embodiment, the display parameter value of the pixel point (p, q) in the second image may be determined by the following formula:

I(p, q) = w_1 I(s·x, t·y) + w_2 I(s·x, t·(y+1)) + w_3 I(s·(x+1), t·y) + w_4 I(s·(x+1), t·(y+1))

where I(p, q) represents the display parameter value of the pixel point (p, q) in the second image, and I(s·x, t·y), I(s·x, t·(y+1)), I(s·(x+1), t·y), and I(s·(x+1), t·(y+1)) represent the display parameter values of the pixel points (s·x, t·y), (s·x, t·(y+1)), (s·(x+1), t·y), and (s·(x+1), t·(y+1)) in the second image, respectively.
It should be noted that, in this embodiment, the third group of movement amounts corresponding to the second group of pixel points in the first image other than the first group of pixel points is calculated by interpolation, after which the display parameters of all pixel points in the first image are determined. The display parameters may include, but are not limited to, the brightness of the pixel points.
Optionally, moving each pixel point in the first image to a corresponding pixel point in the second image according to the first group of movement amounts and the second group of movement amounts may include:
executing the following steps on each pixel point in the first image to obtain a second image, wherein each pixel point in the first image is regarded as a third current point under the condition that the following steps are executed:
acquiring a third current position of a third current point in the first image and a third current movement amount corresponding to the third current position in the second group movement amount and the third group movement amount;
and moving the third current position by the third current movement amount to obtain a fourth current position of a corresponding fourth current point in the second image, wherein the fourth current point is a pixel point of the third current point moved to the second image.
Optionally, as an optional implementation manner, as shown in fig. 4, a flow chart of a large-scale face deformation method based on a deformation field is shown.
As shown in fig. 4, the core idea of the large-scale face deformation method based on the deformation field is as follows: determining the reduction ratio of the first image to be deformed, scaling the first image to be deformed according to a preset reduction ratio to obtain a third image, determining a small deformation field corresponding to the third image based on the third image, then determining a large deformation field corresponding to the first image based on the small deformation field, and further determining the deformed image of the first image according to the large deformation field.
Assuming that the width and height of the first image are W and H, to generate a two-dimensional deformation field of size W × H, the first image is reduced by a preset ratio, i.e., the W direction is reduced by s and the H direction is reduced by t (s > 1, t > 1), obtaining a third image with a small size of (W/s) × (H/t). A radial basis function is constructed from the third image, and the small deformation field corresponding to the third image is determined, i.e., for any point (x, y) of the small-scale deformation field, the deformation amount v(x, y) = M_f(x, y) occurring at this point is solved, and a small-scale deformation field is obtained. Because the deformation of each point in the small-scale deformation field corresponds to the large scale, the large-scale deformation field is then obtained through mapping and interpolation of the small-scale deformation field: for a point (x, y) on the small-scale deformation field, the deformation amount is v(x, y), corresponding to the deformation amount at (s·x, t·y) on the large-scale deformation field, as shown in fig. 3.
As shown in fig. 5, a flow chart of large-scale face deformation based on deformation field.
Step S501, start;
step S502, obtaining an original face image;
in step S502, the original face image corresponds to a first image.
Step S503, setting a control point and a mobile control point;
in step S503, the set control points correspond to the feature points at the first group of positions, and the moved control points correspond to the feature points at the third group of positions.
Step S504, fast smooth deformation field based on small scale mapping;
in step S504, the original face image is first reduced according to a preset ratio to obtain a small-sized third image, and a small deformation field corresponding to the third image is obtained through solving, and because the original image and the third image have the preset ratio, a large deformation field corresponding to the original face can be obtained through restoration according to the preset ratio, and then a deformed image of the original image is obtained. It should be noted that, when determining the large deformation field corresponding to the original image, in order to obtain an accurate large deformation field, the deformation amount of each pixel point in the original image may be determined through an interpolation algorithm. And then determining the position of the changed pixel point based on the deformation and the original position in the original image to obtain the deformed image.
In this embodiment, the large deformation field of the original image is determined from the deformation field computed on the small-sized image; since the deformation field is computed at the small size, the calculation is relatively fast, and a smooth deformation field of the original image is then obtained quickly by mapping from the small-scale deformation field.
Step S505, transmitting the deformation field to an image processor;
in step S505, the large deformation field determined based on the small deformation field is delivered to the image processor.
Step S506, the image processor interpolates deformation;
in step S506, the two-dimensional deformation field obtained on the CPU, whose size is consistent with the image size, is transferred to the graphics processor (GPU). Assume that the original pixel coordinate of a certain point P on the image is (x_0, y_0), the deformation amount in the corresponding deformation field is (Δx, Δy), and the brightness of a point (x, y) on the image is I(x, y). Then P is deformed to the point P′(x′, y′):

x′ = x_0 + Δx
y′ = y_0 + Δy

Then, in a neighborhood of (x′, y′), assuming the closest integer pixel is (x_1, y_1), the transformed luminance is obtained by bilinear interpolation:

I(x′, y′) = w_1 I(x_1, y_1) + w_2 I(x_1, y_1+1) + w_3 I(x_1+1, y_1) + w_4 I(x_1+1, y_1+1)
therefore, each pixel of the original image obtains its deformed position and corresponding color value, and the smooth deformed image can be computed quickly by processing all pixel points in parallel.
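A CPU stand-in for this per-pixel interpolation can be sketched as follows (single-channel image; assigning the interpolated value back to the source pixel index and zeroing pixels whose warped position leaves the image are simplifications of the dense mapping described above):

```python
import numpy as np

def warp_brightness(image, field):
    """For each pixel P = (x0, y0), move by (dx, dy) from the deformation
    field and obtain the luminance at P' = (x0 + dx, y0 + dy) by bilinear
    interpolation of the four nearest integer pixels."""
    H, W = image.shape
    out = np.zeros((H, W))
    for y0 in range(H):
        for x0 in range(W):
            dx, dy = field[y0, x0]
            xp, yp = x0 + dx, y0 + dy            # x' = x0 + dx, y' = y0 + dy
            x1, y1 = int(np.floor(xp)), int(np.floor(yp))
            if 0 <= x1 < W - 1 and 0 <= y1 < H - 1:
                fx, fy = xp - x1, yp - y1
                out[y0, x0] = ((1 - fy) * (1 - fx) * image[y1, x1]
                               + (1 - fy) * fx * image[y1, x1 + 1]
                               + fy * (1 - fx) * image[y1 + 1, x1]
                               + fy * fx * image[y1 + 1, x1 + 1])
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
zero = np.zeros((6, 6, 2))                        # zero field: near-identity warp
print(np.allclose(warp_brightness(img, zero)[:-1, :-1], img[:-1, :-1]))
```

On the GPU this double loop becomes one thread per pixel, which is what makes the parallel interpolation fast.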
Step S507, rendering the file;
in step S507, the file is rendered through OpenGL (Open Graphics Library) to obtain the deformed image.
And step S508, ending.
The determination of the deformation field of the third image, in which the third image is a face image, is explained as follows.
Face feature points are extracted by a face detection algorithm, some of them are selected as control points, and further control points may be set through user interaction; a small-scale deformation field is then constructed on a radial basis with an added smoothing regular term, as described in detail below.
N control points are selected in advance, and the positions of the remaining points of the image are influenced through the change of these N control points. Let the original positions of the N control points be u_i, i = 0, 1, …, N, and their changed positions be v_i; the variation of the N control points is M_f(u_i) = v_i − u_i, i = 1, 2, …, n. A radial basis function construction is used to fit M_f(u_i), namely:

$$M_f(u) = \sum_{i=1}^{n} w_i\,\phi(\|u - u_i\|) + \sum_{j=1}^{m} \alpha_j\,p_j(u)$$
where, for m = 1, the polynomial part p(u) is a linear basis, α_0 + α_1 u; for m = 2 it is a degree-2 function, α_0 + α_1 u + α_2 u². Typically p(u) is a low-degree polynomial with m < n. The polynomial basis can also be replaced by an affine transformation basis, which provides more variation for the radial basis function; in this embodiment m = 1 and p(u) is a linear basis. A number of fixed points before and after deformation are preset in advance; assuming they are {(u_i, v_i)}_{i=0,1,…,N}, the following equation system is obtained:

$$\sum_{i=1}^{n} w_i\,\phi(\|u_j - u_i\|) + \sum_{s=1}^{m} \alpha_s\,p_s(u_j) = v_j - u_j, \quad j = 1, 2, \dots, N \qquad (2)$$
It should be noted that, in this embodiment, of the N points (N ≥ n + m), n points are constraint points for constructing the radial basis, and the other points are used to solve the m variables.
The above formula (2) can be simplified as follows:
φ(U)·W=V (3)
where

$$\phi(U) = \begin{pmatrix} \phi(\|u_1 - u_1\|) & \cdots & \phi(\|u_1 - u_n\|) & p_1(u_1) & \cdots & p_m(u_1) \\ \vdots & & \vdots & \vdots & & \vdots \\ \phi(\|u_N - u_1\|) & \cdots & \phi(\|u_N - u_n\|) & p_1(u_N) & \cdots & p_m(u_N) \end{pmatrix}$$

can be represented as an N × (n + m) matrix,

$$W = (w_1, \dots, w_n, \alpha_1, \dots, \alpha_m)^T$$

can be represented as an (n + m) × 1 matrix, and

$$V = (v_1 - u_1, \dots, v_N - u_N)^T$$

can be represented as an N × 1 matrix.
In this embodiment, in order to ensure the robustness of the first weights w_i and the smoothness of the deformation field, a regularization term λ‖RW‖² is constructed, where R is a regularization matrix; derivation yields formula (4):

$$(\phi^T\phi + \lambda R^T R)\,W = \phi^T V \qquad (4)$$

Solving formula (4), the weights corresponding to the first group of weight values and the second group of weight values are obtained as:

$$W_{reg} = (\phi^T\phi + \lambda R^T R)^{-1}\phi^T V$$
Carrying out singular value decomposition (SVD) on φ gives

$$\phi = X\,\Sigma\,Y^T$$

where X = (x_1, …, x_m) and Y = (y_1, …, y_n) are orthogonal matrices whose columns are x_i and y_i, and Σ = diag(σ_1, …, σ_n) with σ_1 ≥ σ_2 ≥ … ≥ σ_n > 0. If R = 0, one can obtain:

$$W = \sum_{i=1}^{n} \frac{x_i^T V}{\sigma_i}\; y_i$$
It can be seen that when some singular values σ_i are small, the factor 1/σ_i amplifies even very small disturbances in V, so that the solution W_reg has a large error and is not smooth enough. Here, let R = λI_n, so that

$$W_{reg} = \sum_{i=1}^{n} \frac{\sigma_i}{\sigma_i^2 + \lambda^2}\,(x_i^T V)\; y_i$$
In this embodiment, not only can a stable solution be obtained, but the filter factor σ_i/(σ_i² + λ²) also produces smoothed weights W_reg, and hence a smooth deformation field. The obtained deformation field is the position v of each pixel point after deformation:

$$v = u + M_f(u)$$
Furthermore, through the established mapping rule between the small-scale deformation field and the large-scale deformation field, a large-scale deformation field that is not affected by a point space structure and keeps the deformation smooth is quickly generated; the deformation field is transmitted to the graphics processor (GPU), and the deformed image is generated through dense mapping with parallel interpolation to obtain the deformation result.
The two-dimensional deformation field obtained on the CPU, whose size is the same as the image size, is transferred to the GPU. Assume that the original pixel coordinate of a certain point P on the image is (x_0, y_0), the deformation amount in the corresponding deformation field is (Δx, Δy), and the luminance of a point (x, y) on the image is I(x, y). Then P is deformed to the point P′(x′, y′):

x′ = x_0 + Δx
y′ = y_0 + Δy

Then, in a neighborhood of (x′, y′), assuming the closest integer pixel is (x_1, y_1), the transformed luminance is obtained by bilinear interpolation:

I(x′, y′) = w_1 I(x_1, y_1) + w_2 I(x_1, y_1+1) + w_3 I(x_1+1, y_1) + w_4 I(x_1+1, y_1+1)
therefore, each pixel of the original image obtains its deformed position and corresponding color value, and the smooth deformed image can be computed quickly by processing all pixel points in parallel.
With the scheme of this embodiment, various natural, smooth and large-scale deformation effects can be provided for face special-effect products. Meanwhile, face deformation provides a key basic technology for subsequent applications such as face animation, face fusion and face swapping.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus for implementing the above-described image processing method. As shown in fig. 6, the image processing apparatus includes: a first acquiring unit 61, a first obtaining unit 63, a first determining unit 65, a second determining unit 67, and a second obtaining unit 69.
A first obtaining unit 61 for obtaining positions of a first set of feature points in the first image to be deformed, wherein the positions of the first set of feature points constitute the first set of positions.
The first obtaining unit 63 is configured to move the first group of feature points to obtain a second group of feature points, where positions of the second group of feature points form a second group of positions.
A first determining unit 65, configured to determine radial basis fitting parameters according to the offset between each position in the first group of positions and each position in a third group of positions, the first group of positions, and the second group of positions, wherein the radial basis fitting parameters are used for fitting the target deformation function of the first image, and the third group of positions are positions selected from the first group of positions.
A second determining unit 67, configured to determine a target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining, according to an input position, a movement amount corresponding to the input position.
And a second obtaining unit 69, configured to move each pixel point in the first image to a corresponding pixel point in the second image according to the target deformation function, so as to obtain a deformed second image.
With the embodiment provided by the present application, the first acquiring unit 61 acquires the positions of a first group of feature points in the first image to be deformed, where the positions of the first group of feature points constitute a first group of positions; the first obtaining unit 63 moves the first group of feature points to obtain a second group of feature points, where the positions of the second group of feature points constitute a second group of positions; the first determining unit 65 determines radial basis fitting parameters, used for fitting the target deformation function of the first image, according to the offset between each position in the first group of positions and each position in a third group of positions (positions selected from the first group of positions), the first group of positions, and the second group of positions; the second determining unit 67 determines the target deformation function according to the radial basis fitting parameters, where the target deformation function determines, for an input position, the corresponding movement amount; and the second obtaining unit 69 moves each pixel point in the first image to the corresponding pixel point in the second image according to the target deformation function to obtain the deformed second image. A deformation field of the image is thus constructed based on radial basis functions from the first and second group positions of a group of feature points, and the deformed image is generated by transferring the deformation field to the image processor, yielding a smooth deformation result. Because there is no constraint from a point spatial structure, smooth and large-scale deformation effects are produced, which solves the technical problem in the related art that a grid fixes the spatial structure of the points, so that arbitrary large-scale deformation cannot be performed and the smoothness of image deformation is low.
Optionally, the first determining unit 65 may include:
the first obtaining module is used for obtaining the position offset between each position in the first group of positions and each position in the third group of positions to obtain a group of position offsets.
The first determining module is configured to determine a first set of weight values and a second set of weight values according to a set of position offsets, a first set of positions, and a second set of positions, wherein the radial basis fitting parameters include the first set of weight values and the second set of weight values.
The first determining module may include:
a first determining submodule for determining a target matrix based on the set of position offsets and the first set of positions;
the decomposition submodule is used for decomposing the target matrix to obtain a first matrix and a second matrix;
and the second determining submodule is used for determining the first group of weight values and the second group of weight values according to the first matrix, the second matrix and the second group of positions.
It should be noted that, the decomposition submodule is further configured to perform the following operations:
decomposing the target matrix through the following formula to obtain a first matrix and a second matrix:
φ(U) = XΣY^T
wherein the target matrix is:
φ(U) = [ φ(uj − ui)  ps(uj) ], j = 1, 2, …, N; i = 1, 2, …, n; s = 1, 2, …, m (an N × (n + m) block matrix),

where X represents a first matrix, Y represents a second matrix, and Σ represents an N × P matrix whose elements are all 0 except those on the main diagonal, which are Λ11, Λ22, …, ΛPP; N is the number of positions in the first group of positions, n is the number of positions in the third group of positions, N ≥ n + m, and n and m are preset natural numbers;

φ(uj − ui) represents the offset between the jth position uj in the first group of positions and the ith position ui in the third group of positions, j = 1, 2, …, N, i = 1, 2, …, n;

and ps(uj) denotes the sth polynomial basis term evaluated at uj, with s taking the values 1, 2, …, m.
It should be further noted that the second determining submodule is further configured to determine the first group of weight values and the second group of weight values as:

(w1, …, wn, α1, …, αm)^T = Σ_{i=1}^{P} [σi/(σi² + λ)] (xi^T v) yi

wherein xi represents the ith column vector in the first matrix X = (x1, x2, …, xP); yi represents the ith column vector in the second matrix Y = (y1, y2, …, yP); the second group of positions comprises v1, v2, …, vN, stacked as the vector v = (v1, v2, …, vN)^T; the first group of weight values comprises w1, w2, …, wn, and the second group of weight values comprises α1, α2, …, αm; σi is the ith element Λii on the main diagonal Λ11, Λ22, …, ΛPP; and λ is a preset constant.
Optionally, the second determining unit 67 may include: a third determination submodule for determining the following function as the target deformation function:
Mf(u) = Σ_{i=1}^{n} wi φ(u − ui) + Σ_{j=1}^{m} αj pj(u)

wherein the first group of weight values comprises w1, w2, …, wn, and the second group of weight values comprises α1, α2, …, αm; n represents the number of positions in the third group of positions, and n and m are preset natural numbers; wi represents the ith weight value in the first group of weight values, and αj represents the jth weight value in the second group of weight values; ui represents the ith position in the third group of positions; φ(u − ui) represents the position offset between a target position u in the first image and the ith position; and Mf(u) represents the movement amount corresponding to the target position u.
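A small sketch of evaluating such a target deformation function, under assumptions the text does not fix: a thin-plate-spline kernel φ(r) = r²·log r and an affine polynomial basis p(u) = (1, u_x, u_y) (so m = 3); all names are illustrative:

```python
import numpy as np

def movement(u, centers, w, alpha, eps=1e-12):
    """Evaluate M_f(u) = sum_i w_i*phi(u - u_i) + sum_j alpha_j*p_j(u).

    centers: (n, 2) third-group positions u_i; w: (n, 2) first-group
    weights; alpha: (3, 2) second-group weights. Returns the 2-D
    movement amount for the input position u.
    """
    u = np.asarray(u, dtype=float)
    r = np.linalg.norm(centers - u, axis=1)                 # |u - u_i|
    phi = np.where(r > eps, r * r * np.log(r + eps), 0.0)   # TPS kernel (assumed)
    p = np.array([1.0, u[0], u[1]])                         # affine basis (assumed)
    return phi @ w + p @ alpha
```

With w set to zero and alpha encoding the identity map of coordinates, the function reduces to its polynomial part, which is a quick way to test the plumbing.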
optionally, the second obtaining unit 69 may include:
executing the following steps on each pixel point in the first image to obtain a second image, wherein each pixel point in the first image is regarded as a first current point under the condition that the following steps are executed:
the second acquisition module is used for acquiring a first current position of a first current point in the first image;
the third acquisition module is used for acquiring the current movement amount, corresponding to the position of the first current point, output by the target deformation function under the condition that the first current position is the input position of the target deformation function;
and the second determining module is used for determining a second current position of a corresponding second current point in the second image according to the current movement amount and the first current position, wherein the second current point is a pixel point of the first current point moving to the second image.
The second determining module may include: a fourth determination submodule for determining the second current position by the following formula:
x′=x0+Δx
y′=y0+Δy
wherein (x′, y′) denotes the second current position of the second current point P′, (x0, y0) denotes the first current position of the first current point P, and (Δx, Δy) denotes the current movement amount.
Optionally, the second obtaining unit 69 may include: the zooming module is used for zooming the first image into a third image according to a preset proportion; the fourth obtaining module is used for obtaining the movement amount corresponding to the position of each pixel point in the third image according to the target deformation function, wherein the movement amount corresponding to the position of each pixel point in the third image forms a first group of movement amounts; and the moving module is used for moving each pixel point in the first image into a corresponding pixel point in the second image according to the first group of movement amounts and the preset proportion.
Wherein, the mobile module may include:
the restoring submodule is used for restoring the pixel points in the third image into the pixel points in the first image according to a preset proportion, wherein the restored pixel points in the first image form a first group of pixel points, and a second group of movement amount corresponding to the first group of pixel points is determined according to the first group of movement amount and the preset proportion;
the fifth determining submodule is used for determining a third group of movement amount corresponding to a second group of pixel points except the first group of pixel points in the first image according to the second group of movement amount and the first group of pixel points, wherein the first group of pixel points and the second group of pixel points form the first image;
and the moving submodule is used for moving each pixel point in the first image into a corresponding pixel point in the second image according to the second group of moving quantities and the third group of moving quantities.
It should be noted that, the fifth determining sub-module is further configured to perform the following operations: determining a third group of movement amounts corresponding to a second group of pixel points except the first group of pixel points in the first image by the following formula:
v(p, q) = w1v(s*x, t*y) + w2v(s*x, t*(y+1)) + w3v(s*(x+1), t*y) + w4v(s*(x+1), t*(y+1))

wherein s*x ≤ p ≤ s*(x+1) and t*y ≤ q ≤ t*(y+1);

the first image has a size of W × H, the third image has a size of (W/s) × (H/t), and s > 1, t > 1; the second group of movement amounts comprises: v(s*x, t*y), v(s*x, t*(y+1)), v(s*(x+1), t*y), v(s*(x+1), t*(y+1));

(x, y) represents a pixel point in the third image;

v(p, q) represents the movement amount, among the third group of movement amounts, corresponding to the pixel point (p, q) in the second group of pixel points, and the first group of pixel points comprises the pixel points (s*x, t*y), (s*x, t*(y+1)), (s*(x+1), t*y), (s*(x+1), t*(y+1));

w1, w2, w3, w4 are preset weight values.
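Under the assumption that the coarse field stores one 2-D movement per pixel of the third image, the bilinear blend above can be vectorized as follows (a sketch; any extra rescaling of the movement magnitudes to full-resolution units is omitted, and the names are illustrative):

```python
import numpy as np

def upsample_field(v_small, s, t, W, H):
    """Bilinearly interpolate a coarse deformation field up to W x H.

    v_small: (h, w, 2) movements on the downscaled third image. Each
    full-resolution pixel (p, q) lies between four coarse samples and
    receives the bilinear blend of their movements, matching
    v(p,q) = w1*v(..) + w2*v(..) + w3*v(..) + w4*v(..).
    """
    h, w = v_small.shape[:2]
    qs, ps = np.mgrid[0:H, 0:W]
    xf = np.clip(ps / s, 0, w - 1 - 1e-6)   # coarse-grid coordinates
    yf = np.clip(qs / t, 0, h - 1 - 1e-6)
    x = np.floor(xf).astype(int)
    y = np.floor(yf).astype(int)
    fx = (xf - x)[..., None]
    fy = (yf - y)[..., None]
    return ((1 - fx) * (1 - fy) * v_small[y, x] +
            fx * (1 - fy) * v_small[y, x + 1] +
            (1 - fx) * fy * v_small[y + 1, x] +
            fx * fy * v_small[y + 1, x + 1])
```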
Optionally, the image processing apparatus may further include: a sixth determining submodule, configured to determine display parameter values for pixel points (p, q) in the second image by the following formula:

I(p, q) = w1I(s*x, t*y) + w2I(s*x, t*(y+1)) + w3I(s*(x+1), t*y) + w4I(s*(x+1), t*(y+1))

wherein I(p, q) represents the display parameter value of the pixel point (p, q) in the second image;

I(s*x, t*y) represents the display parameter value of the pixel point (s*x, t*y) in the second image;

I(s*x, t*(y+1)) represents the display parameter value of the pixel point (s*x, t*(y+1)) in the second image;

I(s*(x+1), t*y) represents the display parameter value of the pixel point (s*(x+1), t*y) in the second image;

I(s*(x+1), t*(y+1)) represents the display parameter value of the pixel point (s*(x+1), t*(y+1)) in the second image.
It should be noted that, the mobile sub-module is further configured to perform the following operations:
executing the following steps on each pixel point in the first image to obtain a second image, wherein each pixel point in the first image is regarded as a third current point under the condition that the following steps are executed:
acquiring a third current position of a third current point in the first image and a third current movement amount corresponding to the third current position in the second group movement amount and the third group movement amount;
and moving the third current position by the third current movement amount to obtain a fourth current position of a corresponding fourth current point in the second image, wherein the fourth current point is a pixel point of the third current point moved to the second image.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above image processing method, where the electronic device may be a terminal device or a server shown in fig. 1. The present embodiment takes the electronic device as a server as an example for explanation. As shown in fig. 7, the electronic device comprises a memory 702 and a processor 704, the memory 702 having stored therein a computer program, the processor 704 being arranged to perform the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring the positions of a first group of characteristic points in the first image to be deformed, wherein the positions of the first group of characteristic points form a first group of positions;
s2, moving the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions;
s3, determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions and the second group of positions, wherein the radial basis fitting parameters are used for fitting a target deformation function of the first image, and the third group of positions are selected from the first group of positions;
s4, determining a target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to the input position according to the input position;
and S5, according to the target deformation function, moving each pixel point in the first image into a corresponding pixel point in the second image to obtain the deformed second image.
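Steps S1–S5 can be strung together in a compact NumPy sketch. The kernel choice (Gaussian), the affine basis, and the nearest-neighbour colour gather are illustrative assumptions, not the patent's exact choices:

```python
import numpy as np

def deform(image, src, dst, lam=1e-3):
    """S1/S2: src = first-group positions, dst = moved second group.
    S3: build phi(U) (kernel columns + affine columns) and solve for
        the weights via Tikhonov-regularized SVD.
    S4: the fitted weights define the target deformation function.
    S5: move every pixel and gather its colour (nearest neighbour
        here for brevity; the embodiment uses bilinear interpolation)."""
    sigma2 = np.mean(np.sum((src[:, None] - src[None]) ** 2, axis=-1)) + 1e-9

    def basis(pts):
        d2 = np.sum((pts[:, None] - src[None]) ** 2, axis=-1)
        return np.hstack([np.exp(-d2 / sigma2),          # Gaussian RBF (assumed)
                          np.ones((len(pts), 1)), pts])  # affine terms

    A = basis(src)                                       # phi(U)
    X, s, Yt = np.linalg.svd(A, full_matrices=False)
    W = Yt.T @ ((s / (s ** 2 + lam))[:, None] * (X.T @ (dst - src)))

    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    moves = basis(grid) @ W                              # movement per pixel
    xq = np.clip(np.rint(grid[:, 0] + moves[:, 0]), 0, w - 1).astype(int)
    yq = np.clip(np.rint(grid[:, 1] + moves[:, 1]), 0, h - 1).astype(int)
    return image[yq, xq].reshape(image.shape)
```

When the destination points coincide with the source points, the fitted movement is identically zero and the image is returned unchanged, which makes a convenient sanity check.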
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 7 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 7, or have a different configuration from that shown in fig. 7.
The memory 702 may be used to store software programs and modules, such as program instructions/modules corresponding to the image processing method and apparatus in the embodiments of the present invention, and the processor 704 executes various functional applications and data processing by running the software programs and modules stored in the memory 702, so as to implement the above-described image processing method. The memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 702 can further include memory located remotely from the processor 704, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 702 may be specifically, but not limited to, used to store information such as a first image, a position of a first group of feature points, a first group of positions, a second group of feature points, a second group of positions, and a second image. As an example, as shown in fig. 7, the memory 702 may include, but is not limited to, the first acquiring unit 61, the first obtaining unit 63, the first determining unit 65, the second determining unit 67, and the second obtaining unit 69 in the image processing apparatus. In addition, other module units in the image processing apparatus may also be included, but are not limited to these, and are not described in detail in this example.
Optionally, the transmitting device 706 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 706 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 706 is a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In addition, the electronic device further includes: a display 708 for displaying the first image and the second image; and a connection bus 710 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through a network communication. Nodes can form a Peer-To-Peer (P2P, Peer To Peer) network, and any type of computing device, such as a server, a terminal, and other electronic devices, can become a node in the blockchain system by joining the Peer-To-Peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the image processing method provided in the image processing aspect or various alternative implementations of the image processing aspect described above. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring the positions of a first group of characteristic points in the first image to be deformed, wherein the positions of the first group of characteristic points form a first group of positions;
s2, moving the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions;
s3, determining radial basis fitting parameters according to the offset between each position in the first group of positions and each position in the third group of positions, the first group of positions and the second group of positions, wherein the radial basis fitting parameters are used for fitting a target deformation function of the first image, and the third group of positions are selected from the first group of positions;
s4, determining a target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining the movement amount corresponding to the input position according to the input position;
and S5, according to the target deformation function, moving each pixel point in the first image into a corresponding pixel point in the second image to obtain the deformed second image.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also fall within the protection scope of the present invention.

Claims (15)

1. An image processing method, comprising:
acquiring positions of a first group of characteristic points in a first image to be deformed, wherein the positions of the first group of characteristic points form a first group of positions;
moving the first group of feature points to obtain a second group of feature points, wherein the positions of the second group of feature points form a second group of positions;
determining radial basis fit parameters based on the offset between each of the first set of locations and each of a third set of locations selected from the first set of locations, the first set of locations, and the second set of positions, wherein the radial basis fit parameters are used to fit a target deformation function for the first image;
determining the target deformation function according to the radial basis fitting parameters, wherein the target deformation function is used for determining a movement amount corresponding to the input position according to the input position;
and moving each pixel point in the first image into a corresponding pixel point in a second image according to the target deformation function to obtain the deformed second image.
2. The method of claim 1, wherein determining the radial basis fit parameters based on the offset between each of the first set of locations and each of the third set of locations, the first set of locations, and the second set of positions comprises:
obtaining the position offset between each position in the first group of positions and each position in the third group of positions to obtain a group of position offsets;
determining a first set of weight values and a second set of weight values based on the set of position offsets, the first set of positions, and the second set of positions, wherein the radial basis fit parameters include the first set of weight values and the second set of weight values.
3. The method of claim 2, wherein determining the first and second sets of weight values based on the set of position offsets, the first set of positions, and the second set of positions comprises:
determining a target matrix according to the set of position offsets and the first set of positions;
decomposing the target matrix to obtain a first matrix and a second matrix;
determining the first set of weight values and the second set of weight values according to the first matrix, the second matrix, and the second set of locations.
4. The method of claim 3, wherein decomposing the target matrix to obtain the first matrix and the second matrix comprises:
decomposing the target matrix through the following formula to obtain the first matrix and the second matrix:
φ(U) = XΣY^T
wherein the target matrix is:
φ(U) = [ φ(uj − ui)  ps(uj) ], j = 1, 2, …, N; i = 1, 2, …, n; s = 1, 2, …, m (an N × (n + m) block matrix),

wherein X represents the first matrix, Y represents the second matrix, and Σ represents an N × P matrix whose elements are all 0 except those on the main diagonal, which are Λ11, Λ22, …, ΛPP; N is the number of positions in the first group of positions, n is the number of positions in the third group of positions, N ≥ n + m, and n and m are preset natural numbers;

φ(uj − ui) represents the offset between the jth position uj in the first group of positions and the ith position ui in the third group of positions, j = 1, 2, …, N, i = 1, 2, …, n;

and ps(uj) denotes the sth polynomial basis term evaluated at uj, with s taking the values 1, 2, …, m.
5. The method of claim 4, wherein determining the first set of weight values and the second set of weight values based on the first matrix, the second matrix, and the second set of locations comprises:
determining the first set of weight values and the second set of weight values as:

(w1, …, wn, α1, …, αm)^T = Σ_{i=1}^{P} [σi/(σi² + λ)] (xi^T v) yi

wherein xi represents the ith column vector in the first matrix X = (x1, x2, …, xP); yi represents the ith column vector in the second matrix Y = (y1, y2, …, yP); the second set of positions comprises v1, v2, …, vN, stacked as the vector v = (v1, v2, …, vN)^T; the first set of weight values comprises w1, w2, …, wn, the second set of weight values comprises α1, α2, …, αm; σi is the ith element Λii on the main diagonal Λ11, Λ22, …, ΛPP; and λ is a preset constant.
6. The method of claim 2, wherein said determining the target deformation function from the radial basis fit parameters comprises:
determining the following function as the target deformation function:
Mf(u) = Σ_{i=1}^{n} wi φ(u − ui) + Σ_{j=1}^{m} αj pj(u)

wherein the first set of weight values comprises w1, w2, …, wn, and the second set of weight values comprises α1, α2, …, αm; n represents the number of positions in the third set of positions, and n and m are both preset natural numbers; wi represents the ith weight value of the first set of weight values, and αj represents the jth weight value of the second set of weight values; ui represents the ith position in the third set of positions; φ(u − ui) represents the position offset between a target position u in the first image and the ith position; and Mf(u) represents the moving amount corresponding to the target position u.
7. the method of claim 1, wherein the moving each pixel point in the first image to a corresponding pixel point in the second image according to the target deformation function to obtain the deformed second image comprises:
executing the following steps on each pixel point in the first image to obtain the second image, wherein each pixel point in the first image is regarded as a first current point under the condition that the following steps are executed:
acquiring a first current position of the first current point in the first image;
under the condition that the first current position is an input position of the target deformation function, acquiring a current movement amount, corresponding to the position of the first current point, output by the target deformation function;
and determining a second current position of a corresponding second current point in the second image according to the current movement amount and the first current position, wherein the second current point is a pixel point of the first current point moving to the second image.
8. The method of claim 7, wherein determining a second current position of a corresponding second current point in the second image based on the current movement amount and the first current position comprises:
determining the second current position by:
x′=x0+Δx
y′=y0+Δy
wherein (x′, y′) represents the second current position of the second current point P′, (x0, y0) represents the first current position of the first current point P, and (Δx, Δy) represents the current movement amount.
9. The method of claim 1, wherein the moving each pixel point in the first image to a corresponding pixel point in the second image according to the target deformation function to obtain the deformed second image comprises:
reducing the first image into a third image according to a preset proportion;
acquiring the movement amount corresponding to the position of each pixel point in the third image according to the target deformation function, wherein the movement amounts corresponding to the positions of the pixel points in the third image form a first group of movement amounts;
and moving each pixel point in the first image into a corresponding pixel point in the second image according to the first group of movement amounts and the preset proportion.
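The reduction strategy of claim 9 — evaluate the deformation function only on the smaller third image rather than at every first-image pixel — can be sketched as follows; `deform`, `movement_field_via_reduction`, and the single integer `scale` standing in for the preset proportion are all assumptions:

```python
import numpy as np

def movement_field_via_reduction(width, height, deform, scale=4):
    """Evaluate the deformation function only on a `scale`-times
    smaller grid (the third image). The result is the first group
    of movement amounts; per-pixel movements for the full first
    image are recovered from it afterwards (claims 10 and 11)."""
    w, h = width // scale, height // scale
    field = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            # third-image pixel (x, y) corresponds to first-image
            # position (x * scale, y * scale)
            field[y, x] = deform(x * scale, y * scale)
    return field
```

This cuts the number of (relatively expensive) deformation-function evaluations by roughly `scale**2`, which is the apparent motivation for claim 9.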
10. The method of claim 9, wherein the moving each pixel in the first image to a corresponding pixel in the second image according to the first set of movement amounts and the preset ratio comprises:
restoring the pixel points in the third image into the pixel points in the first image according to the preset proportion, wherein the restored pixel points in the first image form a first group of pixel points, and a second group of movement amounts corresponding to the first group of pixel points is determined according to the first group of movement amounts and the preset proportion;
determining a third group of movement amounts corresponding to a second group of pixel points in the first image, other than the first group of pixel points, according to the second group of movement amounts and the first group of pixel points, wherein the first group of pixel points and the second group of pixel points form the first image;
and moving each pixel point in the first image into a corresponding pixel point in the second image according to the second group of movement amounts and the third group of movement amounts.
11. The method of claim 10, wherein said determining a third group of movement amounts corresponding to a second group of pixel points in the first image, other than the first group of pixel points, according to the second group of movement amounts and the first group of pixel points comprises:
determining the third group of movement amounts corresponding to the second group of pixel points in the first image, other than the first group of pixel points, by:
V(p, q) = w1*V(s*x, t*y) + w2*V(s*x, t*(y+1)) + w3*V(s*(x+1), t*y) + w4*V(s*(x+1), t*(y+1))
wherein s*x ≤ p ≤ s*(x+1) and t*y ≤ q ≤ t*(y+1);
the first image has a size of W*H and the third image has a size of w*h,
[equation image FDA0002942932990000061]
s > 1, t > 1, and said second group of movement amounts comprises: V(s*x, t*y), V(s*x, t*(y+1)), V(s*(x+1), t*y), V(s*(x+1), t*(y+1));
(x, y) represents a pixel point in the third image,
[equation image FDA0002942932990000062]
V(p, q) represents the movement amount, in the third group of movement amounts, corresponding to the pixel point (p, q), and the first group of pixel points comprises pixel points (s*x, t*y), (s*x, t*(y+1)), (s*(x+1), t*y), (s*(x+1), t*(y+1));
w1, w2, w3, w4 are preset weight values.
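The weighted blend of claim 11 can be illustrated with bilinear weights. Note the hedge: the claim only says w1..w4 are preset, so the bilinear choice, the function name `interpolate_movement`, and `V_small[y, x]` standing for the movement amount known at restored grid point (s*x, t*y) are all assumptions:

```python
import numpy as np

def interpolate_movement(V_small, s, t, p, q):
    """Blend the movement of first-image pixel (p, q) from the four
    surrounding restored grid points. Assumes (p, q) lies strictly
    inside the grid, i.e. x+1 and y+1 index valid rows/columns."""
    x, y = int(p // s), int(q // t)          # grid cell: s*x <= p <= s*(x+1), t*y <= q <= t*(y+1)
    fx, fy = p / s - x, q / t - y            # fractional position inside the cell
    w1 = (1 - fx) * (1 - fy)                 # weight for V(s*x,     t*y)
    w2 = (1 - fx) * fy                       # weight for V(s*x,     t*(y+1))
    w3 = fx * (1 - fy)                       # weight for V(s*(x+1), t*y)
    w4 = fx * fy                             # weight for V(s*(x+1), t*(y+1))
    return (w1 * V_small[y, x] + w2 * V_small[y + 1, x]
            + w3 * V_small[y, x + 1] + w4 * V_small[y + 1, x + 1])
```

With these weights a pixel exactly on a restored grid point recovers that point's movement unchanged, and a pixel at the cell centre gets the average of the four corners.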
12. The method of claim 11, further comprising:
determining display parameter values for pixel points (p, q) in the second image by:
I(p, q) = w1*I(s*x, t*y) + w2*I(s*x, t*(y+1)) + w3*I(s*(x+1), t*y) + w4*I(s*(x+1), t*(y+1))
wherein:
I(p, q) represents the display parameter value of the pixel point (p, q) in the second image,
I(s*x, t*y) represents the display parameter value of the pixel point (s*x, t*y) in the second image,
I(s*x, t*(y+1)) represents the display parameter value of the pixel point (s*x, t*(y+1)) in the second image,
I(s*(x+1), t*y) represents the display parameter value of the pixel point (s*(x+1), t*y) in the second image,
I(s*(x+1), t*(y+1)) represents the display parameter value of the pixel point (s*(x+1), t*(y+1)) in the second image.
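Claim 12 applies the same preset weights to display parameter values (e.g. an intensity or color channel); a sketch assuming bilinear weights, with the hypothetical `grid_values[y, x]` standing in for I(s*x, t*y):

```python
import numpy as np

def sample_display_value(grid_values, s, t, p, q):
    """Blend the display parameter value of pixel (p, q) from the
    four neighbouring samples, using the same weighting scheme as
    for the movement amounts. Bilinear weights are assumed; the
    claim only states that w1..w4 are preset."""
    x, y = int(p // s), int(q // t)
    fx, fy = p / s - x, q / t - y
    w1, w2 = (1 - fx) * (1 - fy), (1 - fx) * fy
    w3, w4 = fx * (1 - fy), fx * fy
    return (w1 * grid_values[y, x] + w2 * grid_values[y + 1, x]
            + w3 * grid_values[y, x + 1] + w4 * grid_values[y + 1, x + 1])
```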
13. The method of claim 11, wherein said moving each pixel point in the first image to a corresponding pixel point in the second image according to the second group of movement amounts and the third group of movement amounts comprises:
executing the following steps on each pixel point in the first image to obtain the second image, wherein each pixel point in the first image is regarded as a third current point under the condition that the following steps are executed:
acquiring a third current position of the third current point in the first image and a third current movement amount corresponding to the third current position in the second set of movement amounts and the third set of movement amounts;
and moving the third current position by the third current movement amount to obtain a fourth current position of a corresponding fourth current point in the second image, wherein the fourth current point is a pixel point of the third current point moved to the second image.
14. An image processing apparatus characterized by comprising:
a first acquisition unit, configured to acquire positions of a first set of feature points in a first image to be deformed, where the positions of the first set of feature points constitute a first set of positions;
a first obtaining unit, configured to move the first group of feature points to obtain a second group of feature points, where positions of the second group of feature points form a second group of positions;
a first determining unit, configured to determine radial basis fitting parameters according to an offset between each of the first set of positions and each of a third set of positions, the first set of positions and the second set of positions, wherein the radial basis fitting parameters are used for fitting a target deformation function of the first image, and the third set of positions are selected from the first set of positions;
a second determining unit, configured to determine the target deformation function according to the radial basis fitting parameter, where the target deformation function is configured to determine a movement amount corresponding to an input position according to the input position;
and the second obtaining unit is used for moving each pixel point in the first image into a corresponding pixel point in a second image according to the target deformation function to obtain the deformed second image.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 13 by means of the computer program.
CN202110184203.0A 2021-02-10 2021-02-10 Image processing method and device and electronic equipment Pending CN113570498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110184203.0A CN113570498A (en) 2021-02-10 2021-02-10 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN113570498A true CN113570498A (en) 2021-10-29

Family

ID=78161143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110184203.0A Pending CN113570498A (en) 2021-02-10 2021-02-10 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113570498A (en)

Similar Documents

Publication Publication Date Title
WO2020192568A1 (en) Facial image generation method and apparatus, device and storage medium
US10482639B2 (en) Deep high-resolution style synthesis
KR20190100320A (en) Neural Network Model Training Method, Apparatus and Storage Media for Image Processing
CN109325990B (en) Image processing method, image processing apparatus, and storage medium
CN106575158B (en) Environment mapping virtualization mechanism
CN112819947A (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
WO2023103576A1 (en) Video processing method and apparatus, and computer device and storage medium
CN112288665A (en) Image fusion method and device, storage medium and electronic equipment
CN111047509A (en) Image special effect processing method and device and terminal
CN109598672B (en) Map road rendering method and device
CN113766117B (en) Video de-jitter method and device
WO2024067320A1 (en) Virtual object rendering method and apparatus, and device and storage medium
AU2022241513B2 (en) Transformer-based shape models
CN113570498A (en) Image processing method and device and electronic equipment
Sanchez et al. Morphological shape generation through user-controlled group metamorphosis
CN116051722A (en) Three-dimensional head model reconstruction method, device and terminal
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
JP5071900B2 (en) Image generating apparatus and method
CN116681818B (en) New view angle reconstruction method, training method and device of new view angle reconstruction network
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN116030150B (en) Avatar generation method, device, electronic equipment and medium
WO2023179091A1 (en) Three-dimensional model rendering method and apparatus, and device, storage medium and program product
US20220351479A1 (en) Style transfer program and style transfer method
CN115512193A (en) Facial expression generation method and device

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40055742

Country of ref document: HK

SE01 Entry into force of request for substantive examination