CN113538212A - Image processing method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113538212A
CN113538212A (application CN202011458276.6A)
Authority
CN
China
Prior art keywords
target
feature point
determining
feature
feature points
Prior art date
Legal status
Pending
Application number
CN202011458276.6A
Other languages
Chinese (zh)
Inventor
周勤
李琛
吕静
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011458276.6A
Publication of CN113538212A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an image processing method, an image processing device, image processing equipment and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an image to be processed and acquiring a plurality of feature points of the image to be processed; determining, based on a received feature point movement instruction, a target feature point to be moved and a target position of the target feature point; determining target positions of other feature points based on the original position and target position of the target feature point, the original positions of the other feature points and a preset attenuation function, wherein the other feature points are the feature points other than the target feature point; determining a first moving distance of each feature point, and determining a target texture coordinate of each feature point based on the first moving distance and original position of the feature point and a harmonic texture mapping algorithm; and rendering the processed image based on the target position and target texture coordinate of each feature point. With the method and device, a smooth and natural large-scale deformation effect can be achieved.

Description

Image processing method, device, equipment and computer readable storage medium
Technical Field
The embodiments of the present application relate to the technical field of image processing, and in particular, but not exclusively, to an image processing method, an image processing device, image processing equipment and a computer-readable storage medium.
Background
Research on face deformation technology is one of the important subjects of image processing research. Face deformation is widely used in face special effects in short videos, such as face fusion and face swapping, and in shaping virtual avatars, such as face slimming and eye enlargement. Real-time drag deformation must handle large-scale deformation at real-time speed while keeping the deformed edges smooth and natural, which makes it one of the difficulties of face deformation technology; the face slimming and eye enlargement in most beautification algorithms mainly focus on small-scale deformation of small local regions and cannot achieve a smooth and natural large-scale deformation effect.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an image processing device, image processing equipment and a computer-readable storage medium, which achieve a smooth and natural large-scale deformation effect through attenuation-based vector-synthesis calculation of deformation positions and adaptive calculation of texture coordinates.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, and performing feature extraction on the image to be processed to obtain a plurality of image feature points;
carrying out interpolation processing on the plurality of image characteristic points to obtain a plurality of characteristic points after interpolation;
determining a target feature point to be moved and a target position of the target feature point based on the received feature point moving instruction;
determining the target positions of other feature points based on the original positions of the target feature points, the target position, the original positions of other feature points and a preset attenuation function, wherein the other feature points are all the feature points except the target feature points;
determining a first moving distance of each feature point, and determining a target texture coordinate of each feature point based on the first moving distance and original position of the feature point and a harmonic texture mapping algorithm;
and rendering the processed image based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
An embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring an image to be processed and extracting the characteristics of the image to be processed to obtain a plurality of image characteristic points;
the interpolation processing module is used for carrying out interpolation processing on the plurality of image characteristic points to obtain a plurality of characteristic points after interpolation;
the first determination module is used for determining a target feature point to be moved and a target position of the target feature point based on a received feature point movement instruction;
a second determining module, configured to determine target positions of other feature points based on the original position of the target feature point, the target position, the original positions of the other feature points, and a preset attenuation function, where the other feature points are each feature point except the target feature point;
the third determining module is used for determining the first moving distance of each feature point and determining the target texture coordinate of each feature point based on the first moving distance and original position of the feature point and a harmonic texture mapping algorithm;
and the rendering module is used for rendering the processed image based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
In some embodiments, the second determining module is further configured to:
determining each first distance between each other feature point and each target feature point based on the original positions of the other feature points and the original positions of each target feature point;
determining the influence weight of each target characteristic point on the other characteristic points based on each first distance and a preset influence distance threshold;
determining candidate target positions of the other feature points based on the original position, the target position and the influence weight of each target feature point;
and determining the target positions of the other characteristic points based on the original positions of the other characteristic points, the candidate target positions and a preset attenuation function.
In some embodiments, the second determining module is further configured to:
when the ith first distance is smaller than or equal to the influence distance threshold, determining the ratio of the ith first distance to the influence distance threshold as a first parameter of an influence function, where i = 1, 2, …, N, and N is the total number of the target feature points;
determining the influence weight of the ith target characteristic point on the other characteristic points based on a second parameter preset by the influence function and the first parameter;
and when the ith first distance is greater than the influence distance threshold, determining a preset value as the influence weight of the ith target characteristic point on the other characteristic points.
In some embodiments, the second determining module is further configured to:
determining each target vector based on the original position of each target feature point and the corresponding target position;
determining candidate motion vectors corresponding to the other feature points based on the influence weight of each target feature point on the other feature points and the target vectors;
determining candidate target positions of the other feature points based on the candidate motion vectors and the original positions of the other feature points.
In some embodiments, the second determining module is further configured to:
determining second movement distances of the other feature points based on the original positions of the other feature points and the candidate target positions;
when the second moving distance is larger than a preset adjusting distance threshold, determining attenuation values of the other characteristic points based on the second moving distance and an attenuation function;
determining target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjusted distance threshold, and the attenuation values.
In some embodiments, the second determining module is further configured to:
when the second movement distance is smaller than or equal to the adjustment distance threshold, determining the candidate target position as the target position of the other feature point.
In some embodiments, the second determining module is further configured to:
determining unit vectors of the other feature points in the moving direction based on the original positions and the candidate target positions of the other feature points;
determining a third parameter outside of the adjusted distance threshold based on the second movement distance, the adjusted distance threshold, and the attenuation value, the adjusted distance threshold being determined as a fourth parameter;
determining target movement vectors of the other feature points based on the third parameter, the fourth parameter and the unit vector;
and determining the target positions of the other characteristic points based on the target movement vector and the original positions of the other characteristic points.
In some embodiments, the interpolation processing module is further configured to:
carrying out interpolation processing on the extracted image characteristic points to obtain interpolated image characteristic points;
determining contour feature points of a target image area based on the interpolated image feature points, and acquiring image edge feature points of the image to be processed;
determining the image characteristic points, the contour characteristic points and the image edge characteristic points after interpolation as a plurality of characteristic points after interpolation;
in some embodiments, the apparatus further comprises:
and the triangulation module is used for triangulating the plurality of feature points of the image to be processed to obtain index information of a plurality of triangular patches.
In some embodiments, the third determining module is further configured to:
determining the original texture coordinate of the q-th feature point based on the original position of the q-th feature point and the size information of the image to be processed;
and when the first moving distance of the q-th feature point is smaller than or equal to the adjusting distance threshold, determining the original texture coordinate of the q-th feature point as the target texture coordinate of the q-th feature point.
In some embodiments, the third determining module is further configured to:
when the first moving distance of the qth feature point is larger than the adjustment distance threshold, re-triangulating based on the target position of each feature point to obtain index information of a plurality of triangular patches;
and determining the target texture coordinate of the q-th feature point based on the index information of the plurality of triangular patches and a harmonic mapping algorithm.
An embodiment of the present application provides an image processing apparatus, including:
a memory for storing executable instructions; and the processor is used for realizing the method when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions for causing a processor to implement the above-mentioned method when executed.
The embodiment of the application has the following beneficial effects:
in this embodiment of the application, after an image to be processed is obtained, and a plurality of feature points of the face image are obtained, a user may trigger a movement instruction of moving the feature points, at this time, based on the received feature point movement instruction, a target feature point to be moved and a target position of the target feature point are determined, and the movement of the target feature point may act on other feature points except the target feature point, the target positions of the other feature points may be determined based on an original position, a target position, original positions of the other feature points, and a preset attenuation function, and based on a first movement distance, the original position, and a harmonic texture mapping algorithm of each feature point, a target texture coordinate of each feature point is determined, and finally, a processed image is rendered based on the target position of each feature point and the target texture coordinate of each feature point, where the attenuation function is used when a movement distance of the other feature points under the influence of the target feature point exceeds a preset value The influence of the target feature point on the other feature points is attenuated when the influence distance threshold is reached, so that the moving distance of the other feature points is attenuated, and a smooth and natural effect can be achieved when large-scale deformation is carried out.
Drawings
Fig. 1 is a schematic network architecture diagram of an image processing system 10 according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal 300 according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an implementation of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation of determining target positions of other feature points according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another implementation of the image processing method according to the embodiment of the present application;
fig. 6 is a schematic flowchart of another implementation of the image processing method according to the embodiment of the present application;
FIG. 7 is a schematic diagram of extracted human face feature points according to an embodiment of the present disclosure;
fig. 8 is a schematic view of an image obtained by triangulating the interpolated feature points according to the embodiment of the present application;
fig. 9 is a schematic diagram illustrating an effect of a constraint point on a free point according to an embodiment of the present application;
fig. 10 is a comparative schematic diagram of large-scale deformation of an image.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
The embodiments of the present application provide a relatively complete and robust real-time large-scale drag deformation scheme. First, facial feature points are extracted using a face feature recognition algorithm, and forehead points, cheek points, points around the outer edge of the contour and image boundary points are interpolated from the original feature points; this expands the deformation area and facilitates subdividing the whole image. Some constraint points are then selected from the interpolated feature points and can be dragged arbitrarily by the user. On this basis, the embodiments of the present application provide an attenuation-based vector-synthesis scheme for computing deformation positions and an adaptive scheme for computing texture coordinates.
An exemplary application of the image processing apparatus provided in the embodiment of the present application is described below, and the image processing apparatus provided in the embodiment of the present application may be implemented as any terminal having an on-screen display function, such as a notebook computer, a tablet computer, a desktop computer, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), an intelligent robot, or may be implemented as a server. Next, an exemplary application when the image processing apparatus is implemented as a terminal will be explained.
Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture of an image processing system 10 according to an embodiment of the present application. As shown in fig. 1, the network architecture includes a server 100, a network 200 and a terminal 300, wherein the network 200 may be a wide area network or a local area network, or a combination thereof. The server 100 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The above is merely an example, and this is not limited in any way in the embodiments of the present application.
The terminal 300 runs applications, which may be, for example, an instant messaging application, a shopping application or an image capture application. The image processing method provided by the embodiments of the present application may be implemented by a dedicated application program, for example a beautification application, or may be embedded in other application programs in the form of a functional plug-in, for example in an instant messaging application or a video viewing application. When the image processing method of the embodiments of the present application is implemented, the terminal 300 first obtains an image to be processed; the image to be processed may be captured by the terminal 300 with an image capture application, downloaded from a network, or sent by another terminal. The terminal 300 then obtains the feature points of the image to be processed and performs triangulation. When the terminal 300 receives a moving operation on some of the feature points, it determines the target positions of the other feature points based on the target positions of the moved feature points and the attenuation mechanism, performs adaptive texture coordinate calculation, obtains the processed image and renders it. The terminal 300 may then send the processed image to the server 100 based on a received image upload instruction.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal 300 according to an embodiment of the present application, where the terminal 300 shown in fig. 2 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in terminal 300 are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 2.
The Processor 310 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310. The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 350 described in embodiments herein is intended to comprise any suitable type of memory. In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for communicating to other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
an input processing module 353 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates an image processing apparatus 354 stored in the memory 350, where the image processing apparatus 354 may be an image processing apparatus in the terminal 300, and may be software in the form of programs and plug-ins, and the like, and includes the following software modules: the first obtaining module 3541, the interpolation processing module 3542, the first determining module 3543, the second determining module 3544, the third determining module 3545, and the rendering module 3546 are logical and thus may be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the image processing method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The image processing method provided by the embodiment of the present application will be described below in conjunction with an exemplary application and implementation of the terminal 300 provided by the embodiment of the present application. Referring to fig. 3, fig. 3 is a schematic flow chart of an implementation of the image processing method according to the embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
And S101, acquiring an image to be processed, and performing feature extraction to obtain a plurality of image feature points.
Here, the image to be processed may be an image of a real person, a cartoon character, an animal, or the like, and includes all or part of the limb (body) image of the target object, which includes at least a face image area.
When the to-be-processed image is acquired, the to-be-processed image may be an image acquired by the terminal by using an image acquisition device (for example, a camera) of the terminal, a previously stored image acquired by the terminal from a storage space of the terminal, or an image downloaded from a network.
The feature extraction can not only extract features beneficial to identification from original image information, but also greatly reduce the dimensionality of image data. When the step S101 is implemented, the feature extraction performed on the image to be processed refers to extracting features of a limb image region in the image to be processed, for example, feature extraction of a human face, feature extraction of legs, arms, and necks, and the like may be performed. Further, a preset feature extraction algorithm may be used to extract features of the image to be processed, for example, the HOG algorithm or the Dlib algorithm, and in some embodiments, a neural network model may also be used to extract image feature points in the image to be processed.
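As a non-limiting illustration of step S101, the following sketch extracts facial feature points with the dlib 68-point landmark model; the use of dlib and OpenCV and the model file name are assumptions made here for illustration, not requirements of the embodiment.

```python
# Minimal sketch of step S101: face feature point extraction (dlib is an assumed choice).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def extract_feature_points(image_bgr):
    """Return an (N, 2) array with the 68 facial feature points of the first detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return np.empty((0, 2), dtype=np.float32)
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
```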
And step S102, carrying out interpolation processing on the plurality of image characteristic points to obtain a plurality of characteristic points after interpolation.
Here, the image feature points extracted in step S101 are sparse and may lack feature points in some regions; for example, feature points of the forehead region are missing in face feature extraction. Therefore, to make the deformation smoother, the extracted image feature points need to be interpolated in step S102. In implementation, the extracted sparse image feature points may first be interpolated, and then feature points on a circle outside the contour of the limb region and edge feature points of the image to be processed may be added.
In some embodiments, after the feature points are obtained, the to-be-processed image is triangulated based on the feature points, so as to obtain a plurality of triangular patches.
Step S103, based on the received characteristic point moving instruction, determining a target characteristic point to be moved and a target position of the target characteristic point.
In the embodiment of the present application, a target object (for example, a real person, a cartoon person, or an animal) limb image area is deformed, and thus target feature points in the limb image area, for example, target feature points in a face area, or target feature points in a leg, arm, neck, or other areas, are moved. The feature point movement instruction may be triggered based on a drag operation of a user on at least one feature point, and at this time, a target feature point to be moved may be determined according to an action position of the drag operation, and a target position of the target feature point may be determined according to a termination position of the drag operation. In actual implementation, the drag operation may be an operation for one feature point, or may be an operation for a plurality of feature points.
It should be noted that, when the feature point movement instruction is triggered by a drag operation, and the target feature point to be moved is determined according to the action position of the drag operation, the feature point corresponding to the action position of the drag operation may be directly determined as the target feature point to be moved. Because the user generally drags the feature point of the face, and the face is symmetrical, in order to simplify the user operation and avoid the problem that the left and right distances are inconsistent when the user drags left and right, when the target feature point to be moved is determined according to the action position of the dragging operation, the feature point corresponding to the action position of the dragging operation and the feature point symmetrical to the feature point can be simultaneously determined as the target feature point to be moved. For example, the drag operation application position corresponds to a feature point on the left cheek, and then the feature point on the left cheek and a feature point on the right cheek that is symmetrical to the feature point are determined as the target feature points.
In some embodiments, a human-computer interaction interface for performing movement setting on feature points may be provided, where a user may select one or more target feature points to be moved through the human-computer interaction interface, and may further set a movement distance, a movement direction, and the like of the target feature points to be moved, where the feature point movement instruction may be triggered after setting information such as a feature point identifier, a movement distance, a movement direction, and the like of the target feature points to be moved, where the target feature points to be moved may be determined based on the feature point identifier in the setting information of the user, and a target position of the target feature points may be determined based on the movement distance, the movement direction, and an initial position of the target feature points.
And step S104, determining the target positions of other characteristic points based on the original positions of the target characteristic points, the target position, the original positions of other characteristic points and a preset attenuation function.
Here, the other feature points are feature points other than the target feature point, and the attenuation function is configured to attenuate an influence of the target feature point on the other feature points when a moving distance of the other feature points under the influence of the target feature point exceeds a preset influence distance threshold, so as to reduce a moving speed of the other feature points following the target feature point.
When the step S104 is implemented, a distance between some other feature point and each target feature point may be first calculated, an influence weight of each target feature point on the other feature point is determined according to the distance, and then a target position of the other feature point is determined according to the influence weight and a motion vector of each target feature point.
In some embodiments, when the distance between a target feature point and another feature point is greater than the preset influence distance threshold, the influence weight of the target feature point on that feature point may be a preset value, for example 0; that is, the influence of the target feature point on the other feature point is 0, and the other feature point does not move because of the movement of the target feature point. Therefore, when determining the target positions of the other feature points, only those other feature points whose distance from some target feature point is less than or equal to the influence distance threshold need to be calculated; the positions of the other feature points whose distances from every target feature point are greater than the influence distance threshold do not change, and in that case the original position is also the target position.
Step S105, determining the first moving distance of each feature point, and determining the target texture coordinate of each feature point based on the first moving distance, the original position and the harmonic texture mapping algorithm of each feature point.
Here, the feature point movement instruction may move the target feature point to a position far away from its original position, while the influence distance threshold set in the embodiments of the present application constrains the other feature points that move along with the target feature point to move within their own influence domain. The texture coordinate of the target feature point and the texture coordinates of the other feature points therefore have to be treated differently: when the moving distance of the target feature point exceeds the influence domain, its texture coordinate should be determined from the texture coordinates near the target position, while the texture coordinates of the other feature points within the influence domain remain unchanged, so that the deformation is produced by stretching the texture. For the target feature point, the texture coordinate with the minimum deformation in texture space may be obtained from the mesh topology using the principle of two-dimensional harmonic mapping.
When the step S105 is implemented, firstly, the original texture coordinates of each feature point are determined, and then when the first movement distance of a certain feature point is determined to be less than or equal to the preset influence distance threshold, the texture coordinates of the feature point are not changed, that is, the target texture coordinates are also the original texture coordinates; when the first moving distance of the feature point is larger than the influence distance threshold, re-triangulation is performed on the basis of each moved feature point, and the target texture coordinate of the feature point is determined by utilizing a harmonic mapping algorithm.
And S106, rendering the processed image based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
Here, after the target position and the corresponding target texture coordinate of each feature point are determined, the processed image may be obtained, and then the processed image may be rendered by a rendering algorithm, for example, the processed image may be rendered by an OpenGL algorithm.
In the embodiments of the present application, after an image to be processed is obtained and a plurality of feature points of the image to be processed are obtained, a user may trigger a feature point movement instruction. Based on the received feature point movement instruction, a target feature point to be moved and a target position of the target feature point are determined, and the movement of the target feature point may act on the other feature points, that is, the feature points other than the target feature point. The target positions of the other feature points are determined based on the original position and target position of the target feature point, the original positions of the other feature points and a preset attenuation function; the target texture coordinate of each feature point is determined based on the first moving distance and original position of the feature point and a harmonic texture mapping algorithm; and finally the processed image is rendered based on the target position and target texture coordinate of each feature point. The attenuation function attenuates the influence of the target feature point on the other feature points when the moving distance of the other feature points under that influence exceeds a preset influence distance threshold, so that the moving distance of the other feature points is attenuated and a smooth and natural effect can be achieved in large-scale deformation.
In some embodiments, the step S102 shown in fig. 3 of "interpolating the plurality of image feature points to obtain a plurality of interpolated feature points" may be implemented by:
step S1021, performing interpolation processing on the extracted image feature points to obtain interpolated image feature points;
in step S101, relatively sparse image feature points are extracted, for example, a human face image may be extracted, as shown in fig. 7, 68 feature points are extracted, and as shown in fig. 7, the 68 feature points do not include feature points of the forehead edge, so that in step S1021, it may be implemented that the forehead point and the forehead point may be interpolated based on the 28 th point and the 31 th point on the nasal alar midline, and then the points of other forehead edges in the forehead region are interpolated according to the forehead point and the forehead point, and in addition, it can be seen from fig. 7 that the feature points of the cheek edge are also sparse, so that interpolation may be performed between each extracted cheek feature point, and thus the interpolated facial feature points are obtained.
Step S1022, determining contour feature points of the target image region based on the interpolated image feature points, and acquiring image edge feature points of the image to be processed.
Here, the contour feature point of the target image area may be a point on a contour circle outside the target image area, that is, a point on a contour circle outside the limb image area in the image to be processed. Taking the human face image region as an example for explanation, when the step S1022 is implemented, the feature points on the cheek edge and the feature points on the forehead edge may be selected from the interpolated facial feature points, and then the corresponding points of the feature points on the cheek edge and the feature points on the forehead edge that are moved outward by a certain distance are determined as the contour feature points of the facial image region, and the image edge feature points of the image to be processed may be obtained by sampling each pixel point on the image edge.
In step S1023, the interpolated image feature points, contour feature points, and image edge feature points are determined as a plurality of interpolated feature points.
Correspondingly, after step S101, the method further comprises:
and S001, triangulating the image to be processed based on the plurality of feature points after interpolation to obtain index information of a plurality of triangular patches.
Here, the plurality of feature points obtained in step S1023 constitute a feature point set of the image to be processed, and each feature point has its own number.
When step S001 is implemented, the image to be processed may be triangulated based on the feature point set using a Delaunay triangulation algorithm, which yields a plurality of triangular patches whose vertices are the feature points in the feature point set and which satisfy the following: except at its endpoints, no edge of the planar graph formed by the triangular patches contains any point of the point set; the triangular patches have no intersecting edges; and all faces in the planar graph are triangular faces whose union is the convex hull of the feature point set. The index information of each triangular patch consists of the numbers of its vertex feature points; for example, if the vertices of a triangular patch are feature points No. 6, No. 79 and No. 60, the index information of the triangular patch is (6, 79, 60).
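As a non-limiting illustration of step S001, the following sketch performs the Delaunay triangulation with scipy (an assumed choice); each row of the returned array is the index information of one triangular patch.

```python
# Sketch of step S001: Delaunay triangulation of the interpolated feature point set.
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points):
    """points: (N, 2) array of feature point positions; returns (M, 3) index information."""
    tri = Delaunay(np.asarray(points, dtype=np.float64))
    return tri.simplices  # e.g. a patch with vertices 6, 79 and 60 gives the row [6, 79, 60]
```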
In the embodiment of the foregoing steps S1021 to S1023, the plurality of feature points of the image to be processed include not only image feature points, but also image edge feature points and contour feature points of the target image region, the entire image can be triangulated by the image edge feature points, and the boundary between the face contour and the image can be divided more finely by the contour feature points of the face image region, so that subsequent deformation is smoother.
In some embodiments, the step S104 "determining the target positions of the other feature points based on the original positions of the target feature points, the target position, the original positions of the other feature points, and the preset attenuation function" shown in fig. 3 may be implemented by:
step S1041, determining each first distance between each target feature point and each other feature point based on the original positions of the other feature points and the original positions of the respective target feature points.
Here, it is assumed that there are N target feature points, and the distance between some other feature point and the ith target feature point is the ith first distance.
When step S1041 is implemented, the ith first distance between the jth other feature point $P_j$ and the ith target feature point $N_i$ may be determined according to formula (1-1):

$$d_i = \left\| P_j - N_i \right\| \tag{1-1}$$

where $P_j$ denotes the original position of the jth other feature point and $N_i$ denotes the original position of the ith target feature point.
Step S1042, determining an influence weight of each target feature point on the other feature points based on each first distance and a preset influence distance threshold.
Here, the influence distance threshold is preset to determine the influence weight of each target feature point on other feature points. When the step S1042 is implemented, when the first distance is less than or equal to the influence distance threshold, determining the influence weight of the target feature point on the other feature point based on a preset influence function and the first distance, and when the first distance is greater than the influence distance threshold, directly determining a preset value as the influence weight of the target feature point on the other feature point, where the preset value may be 0, and may also be approximately 0.
Step S1043, determining candidate target positions of the other feature points based on the original position, the target position, and the influence weight of each target feature point.
Here, in implementation, in step S1043, the motion vector of each target feature point may be determined, and then the candidate motion vector of the other feature point may be determined by performing weighted summation on the motion vector of each target feature point and the corresponding influence weight, and then the candidate target position of the other feature point may be determined by following the original positions of the candidate motion vector and the other feature point.
Step S1044 is to determine the target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, and a preset attenuation function.
Here, since the candidate target position is determined directly based on the weight of the target feature point on the influence of the other feature point, and the influence of the attenuation target feature point on the other feature point is not considered after moving for a certain distance, in step S1044, the moving distance of the other feature point is determined based on the original position of the other feature point and the candidate target position, and it is determined whether the candidate target position needs to be adjusted based on the moving distance, and when it is determined that the candidate target position needs to be adjusted, the moving vector of the other feature point is re-determined based on the attenuation function and the moving position, and the final target position of the other feature point is determined.
Through the above steps S1041 to S1044, the influence weight of each target feature point on each other feature point can be determined according to the distance between each other feature point and each target feature point, and the candidate movement vector of each other feature point can be determined by performing vector synthesis based on the weighted sum of the movement vector of each target feature point and the corresponding influence weight, so as to determine the candidate target position of the other feature point, and the target position of the other feature point can be further determined based on the movement distance and the attenuation function of the other feature point, so that the influence of the target feature point on the other feature point is attenuated after the other feature point moves for a certain distance, thereby realizing smooth deformation and also being capable of adaptively solving the problem of large-scale deformation.
In some embodiments, the step S1042 may be implemented by:
in step S421, it is determined whether the ith first distance is less than or equal to the influence distance threshold.
Here, when the ith first distance is smaller than or equal to the influence distance threshold, it indicates that the first distance between the other feature point and the ith target feature point is still within the preset influence domain range, and then step S422 is entered; when the ith first distance is greater than the influence distance threshold, it indicates that the first distance between the other feature point and the ith target feature point exceeds the influence domain range, and then step S424 is performed.
Step S422, determining a ratio of the ith first distance to the influence distance threshold as a first parameter of the influence function.
Here, i = 1, 2, …, N, and N is the total number of target feature points. In the embodiment of the present application, the influence function may be a Gaussian function as shown in equation (1-2):

$$w_i = e^{-\beta r^2} \tag{1-2}$$

where r is the first parameter, i.e. $r = d_i / \delta$, $d_i$ is the ith first distance, $\delta$ is the influence distance threshold, and $r \le 1$.
Step S423, determining an influence weight of the ith target feature point on the other feature points based on the second parameter preset by the influence function and the first parameter.
Here, β in the formula (1-2) is also a preset second parameter, and after the first parameter is determined, the influence weight of the ith target feature point on the other feature points can be determined by substituting the preset second parameter and the first parameter into the formula (1-2).
Step S424, determine a preset value as an influence weight of the ith target feature point on the other feature points.
Here, the preset value is preset, and is a relatively small value, for example, in actual implementation, the preset value may be set to 0, or may be a numerical value that is approximately 0, so as to indicate that when the first distance between the other feature point and the ith target feature point is greater than the influence distance threshold, it is determined that the influence of the target feature point on the other feature point is approximately 0, that is, the ith target feature point has no influence on the other feature point.
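The following sketch illustrates steps S421 to S424; the Gaussian form exp(-βr²) is an assumed instance of the influence function of formula (1-2), and the values of the influence distance threshold δ, the second parameter β and the preset value are illustrative only.

```python
# Sketch of steps S421-S424: influence weight of one target feature point on another feature point.
import numpy as np

def influence_weight(other_orig, target_orig, delta=200.0, beta=3.0, preset=0.0):
    d_i = np.linalg.norm(np.asarray(other_orig) - np.asarray(target_orig))  # i-th first distance
    if d_i > delta:                   # outside the influence domain: step S424
        return preset                 # preset value, e.g. 0
    r = d_i / delta                   # first parameter, r <= 1: step S422
    return np.exp(-beta * r ** 2)     # assumed Gaussian influence function: step S423
```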
In some embodiments, the step S1043 "determining the candidate target positions of the other feature points based on the original positions, the target positions and the influence weights of the respective target feature points" may be implemented by:
in step S431, each target vector is determined based on the original position of each target feature point and the corresponding target position.
Here, the target vector $\vec{T}_i$ corresponding to the ith target feature point can be determined by formula (1-3):

$$\vec{T}_i = N_i' - N_i \tag{1-3}$$

where $N_i$ is the original position of the ith target feature point and $N_i'$ is its target position.
Step S432, determining candidate motion vectors corresponding to the other feature points based on the influence weight of each target feature point on the other feature points and the target vectors.
Here, when step S432 is implemented, the influence weight of each target feature point on another feature point and each target vector may be weighted and summed to obtain a candidate motion vector corresponding to the other feature point.
Step S433, determining candidate target positions of the other feature points based on the candidate motion vectors and the original positions of the other feature points.
Here, assume that $\vec{V}_j$ is the candidate motion vector of the jth other feature point (obtained by the weighted summation above) and $P_j$ is its original position; then the candidate target position $\hat{P}_j$ of the jth other feature point can be determined by formula (1-4):

$$\hat{P}_j = P_j + \vec{V}_j \tag{1-4}$$
through the steps S431 to S433, the weighted summation can be performed according to the target vector of each target feature point and the corresponding influence weight to realize vector synthesis, so that a faster calculation speed is achieved, and then candidate motion vectors of other feature points are determined, and then target positions of other feature points are determined according to the original positions of the other feature points and the candidate motion vectors, so as to realize large-scale smooth deformation.
In some embodiments, the step S1044 "of determining the target position of the other feature point based on the original position of the other feature point, the candidate target position and the preset attenuation function" may be implemented by steps S441 to S445 shown in fig. 4, which are described below with reference to fig. 4.
Step S441, determining a second movement distance of the other feature points based on the original positions of the other feature points and the candidate target positions.
In step S442, it is determined whether the second moving distance is greater than a preset adjustment distance threshold.
Here, when the second movement distance is greater than the adjustment distance threshold, it indicates that the candidate target position needs to be adjusted, and then the process proceeds to step S443; when the second movement distance is less than or equal to the adjustment distance threshold, it indicates that the candidate target position does not need to be adjusted, and the process proceeds to step S445.
In step S443, attenuation values of the other feature points are determined based on the second moving distance and the attenuation function.
Here, in the embodiment of the present application, the attenuation value is given by an attenuation function $f(s)$ as in formula (1-5), where $\sigma$ is the attenuation degree parameter and $f(s)$ decreases as s increases.

When determining s, the moving direction of the other feature point may be extended until it reaches the edge of the image to be processed, and the edge point $G_j$ at which the moving direction intersects the image edge is determined; the second distance between the original position of the other feature point and this edge point is computed, and the ratio of the second moving distance to the second distance is taken as s, namely formula (1-6):

$$s = \frac{l_j}{\left\| G_j - P_j \right\|} \tag{1-6}$$

where $l_j$ is the second moving distance of the other feature point and $\left\| G_j - P_j \right\|$ is the second distance between its original position $P_j$ and the edge point $G_j$.
Step S444, determining target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjustment distance threshold, and the attenuation values.
In some embodiments, step S444 shown in fig. 4 can be implemented by:
step S4441, determining unit vectors of the other feature points in the moving direction based on the original positions and the candidate target positions of the other feature points;
here, the unit vector can be determined by the formula (1-7):
Figure BDA0002830150950000192
a step S4442 of determining a third parameter out of the adjusted distance threshold based on the second movement distance, the adjusted distance threshold, and the attenuation value, and determining the adjusted distance threshold as a fourth parameter;
here, the third parameter may be determined by the formula (1-8):
Figure BDA0002830150950000193
wherein,
Figure BDA0002830150950000194
and R is an adjusting distance threshold value.
Step S4443, determining target movement vectors of the other feature points based on the third parameter, the fourth parameter and the unit vector.
Here, the target movement vector may be determined by formula (1-9):

$$\vec{m}_j = \left( R + t_j \right) \hat{u}_j \tag{1-9}$$
step S4444, determining target positions of the other feature points based on the target motion vectors and the original positions of the other feature points.
Here, after the target movement vector and the original positions of the other feature points are determined, the original position coordinates and the target movement vector may be added to obtain the final target positions of the other feature points.
Step S445, determine the candidate target position as the target position of the other feature point.
Here, when the second movement distance is less than or equal to the adjustment distance threshold, it indicates that the other feature point is still within the adjustment range, and at this time, the candidate target position is directly determined as the target position of the other feature point.
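Steps S441 to S445 may be sketched as follows; the exponential form used for the attenuation function f(s) and the values of R and σ are assumptions for illustration, and the edge point G_j is the intersection of the moving direction with the image edge as described above.

```python
# Sketch of steps S441-S445: target position of one other feature point with attenuation.
import numpy as np

def attenuated_target_position(orig, candidate, edge_point, R=50.0, sigma=1.0):
    orig, candidate, edge_point = (np.asarray(p, dtype=np.float64) for p in (orig, candidate, edge_point))
    move = candidate - orig
    l2 = np.linalg.norm(move)                   # second moving distance
    if l2 <= R:                                 # step S445: no adjustment needed
        return candidate
    unit = move / l2                            # formula (1-7): unit vector of the moving direction
    s = l2 / np.linalg.norm(edge_point - orig)  # formula (1-6)
    attenuation = np.exp(-s / sigma)            # assumed form of the attenuation function f(s)
    third = (l2 - R) * attenuation              # formula (1-8): attenuated part beyond the threshold
    return orig + (R + third) * unit            # formula (1-9) added to the original position
```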
In some embodiments, the step S105 "determining the target texture coordinates of each feature point based on the first moving distance, the original position and the harmonic texture mapping algorithm of each feature point" may be implemented by:
step S1051, determining the original texture coordinate of the q-th feature point based on the original position of the q-th feature point and the size information of the image to be processed.
In the embodiment of the present application, the image to be processed is a two-dimensional image, and the original texture coordinates of the q-th feature point can be determined by formula (1-10):

(u_q, v_q) = (x_q / width, y_q / height)   (1-10)

where (x_q, y_q) is the original position of the q-th feature point, width is the width of the image to be processed, and height is the height of the image to be processed.
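A one-line sketch of formula (1-10), assuming pixel coordinates with the origin at the image corner:

```python
def original_texture_coords(x, y, width, height):
    """Sketch of formula (1-10): the texture coordinates are the pixel
    position normalized by the image size, so both components lie in [0, 1]."""
    return x / width, y / height
```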
In step S1052, it is determined whether the first movement distance of the q-th feature point is less than or equal to the adjustment distance threshold.

Here, when the first movement distance of the q-th feature point is less than or equal to the adjustment distance threshold, the q-th feature point has moved only a small distance and remains within the adjustment range, and the process proceeds to step S1053; when the first movement distance of the q-th feature point is greater than the adjustment distance threshold, the q-th feature point has moved out of the adjustment range, and the process proceeds to step S1054.

Step S1053, when the first movement distance of the q-th feature point is less than or equal to the adjustment distance threshold, determining the original texture coordinate of the q-th feature point as the target texture coordinate of the q-th feature point.
Step S1054, when the first movement distance of the qth feature point is greater than the adjustment distance threshold, re-triangulating based on the target position of each feature point to obtain index information of a plurality of triangular patches.
Here, the movement of the target feature point causes at least some of the other feature points to move along with it; when the movement distance is large, the neighboring points of each feature point also change, so triangulation is performed again to obtain new index information of a plurality of triangular patches.
Step S1055, determining the target texture coordinate of the q-th feature point based on the index information of the plurality of triangular patches and a harmonic mapping algorithm.
Here, assuming φ is a smooth mapping between two smooth manifolds (M, g) and (N, h), the harmonic energy is given by formula (1-11):

E(φ) = (1/2) ∫_M ||dφ||² dv_g   (1-11)
Converting to the current triangular mesh yields formula (1-12):

E(v) = (1/2) Σ_(q,t) k_qt ||v_q - v_t||²   (1-12)

where v_q is the q-th interior point of the re-triangulated mesh model, v_t is a point adjacent to v_q, the edge (q, t) connects v_q and v_t, k_1 and k_2 are the positions in the projection plane to which the two vertices of the edge (q, t) map, and L_qt is the length of the edge (q, t). The spring constant k_qt is computed from the edge lengths L and the area of the triangular patch f(q, t, k); its exact expression is shown only as an image in the source.
Taking the partial derivative of E(v) with respect to v_q and setting it to zero yields formula (1-13):

∂E(v)/∂v_q = Σ_t k_qt (v_q - v_t) = 0,  i.e.  v_q = ( Σ_t k_qt v_t ) / ( Σ_t k_qt )   (1-13)

Here, v_t in formula (1-13) denotes the original texture coordinates of the points adjacent to the q-th feature point in the re-triangulated patches, and the new texture coordinates of v_q can be determined from formula (1-13).
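A minimal iterative sketch of solving formula (1-13): each interior point's texture coordinate is repeatedly replaced by the k-weighted average of its neighbours' coordinates while boundary points stay fixed. The spring weights k_qt are taken as given, since their exact definition in formula (1-12) is shown only as an image in the source; the data layout (a dict of neighbour weights per vertex) is an assumption.

```python
import numpy as np

def harmonic_texture_coords(uv, k, interior, iters=200):
    """Gauss-Seidel style iteration for v_q = sum_t k_qt * v_t / sum_t k_qt.
    uv: (n, 2) array of initial texture coordinates (boundary values fixed);
    k[q]: dict mapping each neighbour index t of vertex q to the weight k_qt;
    interior: indices of the interior vertices to be solved for."""
    uv = np.array(uv, dtype=float)
    for _ in range(iters):
        for q in interior:
            w = np.array(list(k[q].values()))
            nbr = uv[list(k[q].keys())]
            uv[q] = (w[:, None] * nbr).sum(axis=0) / w.sum()
    return uv
```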
Through steps S1051 to S1055, when the texture coordinates are calculated, a feature point whose moving distance is within the adjustment distance threshold has moved only a small distance, so its texture coordinates can be kept unchanged, that is, its original texture coordinates are determined as its target texture coordinates; this reduces the computational complexity while keeping the result correct. When the moving distance of a feature point exceeds the adjustment distance threshold, the feature point has moved a large distance, and its target texture coordinates need to be determined from the texture coordinates of the feature points near its target position after the movement.
Based on the foregoing embodiments, an image processing method is further provided in an embodiment of the present application, and fig. 5 is a schematic diagram of a further implementation flow of the image processing method provided in the embodiment of the present application, as shown in fig. 5, the flow includes:
step S501, the terminal acquires an image to be processed and acquires a plurality of feature points of the image to be processed.
Here, the image to be processed may be an image of a real person, a cartoon character, an animal, or the like, and contains at least a face image area. The plurality of feature points acquired in step S501 at least include facial feature points of the face image area, and may further include edge feature points of the image to be processed and feature points on a circle outside the contour of the face image area.
Step S502, the terminal triangulates a plurality of feature points of the image to be processed to obtain index information of a plurality of triangular patches.
Step S503, the terminal determines the target feature point to be moved and the target position of the target feature point based on the received feature point moving instruction.
Here, the target feature point to be moved is determined based on the feature point movement instruction, and when implemented, may be determined according to the action position of the movement operation that triggers the feature point movement instruction, or may be determined according to the setting information that triggers the feature point movement instruction.
Step S504, the terminal determines each first distance between the other feature points and each target feature point based on the original positions of the other feature points and the original positions of each target feature point.
Step S505, the terminal determines, based on the first distances and a preset influence distance threshold, influence weights of the target feature points on the other feature points.
Here, when the first distance is less than or equal to the influence distance threshold, the influence weight of the target feature point on the other feature point is determined according to the ratio of the first distance to the influence distance threshold and a preset influence function; when the first distance is greater than the influence distance threshold, a preset value, which may be 0, is determined as the influence weight of the target feature point on the other feature point.
Step S506, the terminal determines candidate target positions of the other feature points based on the original positions of the respective target feature points, the target positions, and the influence weights.
Here, the target vector of each target feature point is determined from the original position and the target position of that target feature point; vector synthesis is then performed based on the target vectors of the target feature points and the corresponding influence weights to determine candidate movement vectors of the other feature points, and the candidate target positions of the other feature points are determined based on the candidate movement vectors and the original positions of the other feature points.
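A minimal sketch of steps S504 to S506, under the assumption that the synthesized candidate movement is a normalized weighted sum of the target feature points' movement vectors (the exact synthesis formula is not spelled out here, so the normalization is an assumption):

```python
import numpy as np

def candidate_positions(free_pts, targets_src, targets_dst, weights):
    """Sketch of steps S504-S506: weights[j, i] is the influence of target
    feature point i on other feature point j (0 outside the influence domain);
    the candidate movement of each point is the weighted combination of the
    target feature points' movement vectors."""
    free_pts = np.asarray(free_pts, dtype=float)             # (m, 2)
    moves = (np.asarray(targets_dst, dtype=float)
             - np.asarray(targets_src, dtype=float))         # (n, 2)
    w = np.asarray(weights, dtype=float)                     # (m, n)
    denom = np.maximum(w.sum(axis=1, keepdims=True), 1e-8)   # avoid divide-by-zero
    return free_pts + (w @ moves) / denom                    # candidate target positions
```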
Step S507, the terminal determines a second movement distance of the other feature points based on the original positions of the other feature points and the candidate target positions.
In step S508, the terminal determines whether the second movement distance of the other feature points is greater than a preset adjustment distance threshold.
Here, when the second movement distance of the other feature point is greater than the adjustment distance threshold, the process proceeds to step S509; when the second movement distance of the other feature points is smaller than or equal to the adjustment distance threshold, step S511 is entered.
In step S509, the terminal determines attenuation values of the other feature points based on the second moving distance and the attenuation function.
Here, when step S509 is implemented, the extension distance (i.e. the second distance in other embodiments) from the other feature point to the edge of the image along its own moving direction is determined, and the parameter of the attenuation function is then determined based on the ratio of the second moving distance to this second distance, so as to determine the attenuation value of the other feature point.
In step S510, the terminal determines the target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjustment distance threshold, and the attenuation values.
When step S510 is implemented, the other feature points may first move in the original moving direction by the adjustment distance threshold; outside the adjustment distance threshold, the remaining moving distance is attenuated according to the attenuation value, so as to determine the target positions of the other feature points.
In step S511, the terminal determines the candidate target positions of other feature points as the target positions of the other feature points.
Step S512, the terminal determines the original texture coordinate of the q-th feature point based on the original position of the q-th feature point and the size information of the image to be processed.
Here, since the image to be processed is a two-dimensional image, and the texture coordinate of each pixel point is a real number from 0 to 1, the original texture coordinate of the q-th feature point can be obtained by dividing the position coordinate of the q-th feature point by the width or height of the image to be processed.
In step S513, the terminal determines whether the first movement distance of the qth feature point is greater than the adjustment distance threshold.
Here, when the first movement distance of the q-th feature point is less than or equal to the adjustment distance threshold, the process proceeds to step S514, and when the first movement distance of the q-th feature point is greater than the adjustment distance threshold, the process proceeds to step S515.
Step S514, the terminal determines the original texture coordinate of the qth feature point as the target texture coordinate of the qth feature point.
And step S515, the terminal re-triangulates based on the target position of each feature point to obtain index information of a plurality of triangular patches.
In step S516, the terminal determines a target texture coordinate of the qth feature point based on the index information of the plurality of triangular patches and a harmonic mapping algorithm.
And step S517, rendering the processed image by the terminal based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
In step S518, the terminal sends an image upload request to the server.
Here, the image upload request carries a processed image, and is used to request the processed image to be issued.
Step S519, the server obtains the processed image carried in the image uploading request, and verifies the processed image, and issues the processed image after the verification is passed.
In the image processing method provided by the embodiment of the application, after acquiring an image to be processed, the terminal acquires feature points of the face, the image edges and the face contour, and triangulates the image. Some feature points (namely, the target feature points) are moved based on the movement operation on the feature points, and the movement of the target feature points in turn causes other feature points near them to move. When determining the final target positions of the other feature points, the influence weight of each target feature point on the other feature points is determined according to the distance between them and vector synthesis is performed; when the movement distance is large, the positions of the other feature points are further adjusted based on an attenuation mechanism, so that large-scale smooth deformation can be realized. When the texture coordinates are calculated, feature points whose moving distance is within the adjustment distance threshold keep their texture coordinates unchanged, while feature points whose moving distance is outside the adjustment distance threshold are re-triangulated; using the resulting triangular patches and the principle of two-dimensional harmonic mapping, texture coordinates with minimal texture-space distortion are obtained, so that texture disorder can be avoided.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Fig. 6 is a schematic diagram of a further implementation flow of the image processing method according to the embodiment of the present application, and as shown in fig. 6, the flow includes:
step S601, an original face image is acquired.
Here, the original face image may be a real face image or a face image of a cartoon character, and the original face image may be a color image or a grayscale image.
Step S602, feature points are extracted.
When step S602 is implemented, facial feature extraction may be performed using a facial feature point recognition algorithm, such as the dlib algorithm, to obtain a plurality of facial feature points; the extracted facial feature points may not cover regions such as the forehead or cheeks. Fig. 7 is a schematic diagram of facial image feature points provided in the embodiment of the present application. As shown in fig. 7, 68 feature points are extracted in total: feature points 1 to 17 are cheek feature points, feature points 18 to 27 are eyebrow feature points, feature points 28 to 36 are nose feature points, feature points 37 to 48 are eye feature points, and feature points 49 to 68 are mouth feature points.
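For illustration, a minimal sketch of step S602 using dlib's 68-point landmark predictor; the model file path is an assumption and must point to a locally available predictor file.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The file name below is an assumed local path to the standard 68-landmark
# predictor, not something defined by this document.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_feature_points(image):
    """Sketch of step S602: detect the first face and return its 68 landmarks
    (cheek, eyebrow, nose, eye and mouth points) as a (68, 2) array."""
    gray = image.mean(axis=2).astype("uint8") if image.ndim == 3 else image
    faces = detector(gray)
    if len(faces) == 0:
        raise ValueError("no face detected")
    shape = predictor(gray, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
```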
Step S603, feature points are interpolated.
When feature point interpolation is performed, the forehead edge and cheek feature points need to be interpolated, as do the image boundary points and a circle of points outside the face contour.
When the forehead points are interpolated, the forehead midpoint M_head_center can be interpolated according to formula (2-1) using the 28th and 31st points on the center line of the nose, and the forehead vertex M_head is interpolated according to formula (2-2):

M_head_center = α_1 M_28 + α_2 M_31   (2-1);

M_head = β_1 M_28 + β_2 M_head_center   (2-2);

where α_1, α_2, β_1, β_2 are interpolation coefficients, which can be preset.
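A minimal sketch of formulas (2-1)/(2-2); the landmark array uses 0-based indexing (so points 28 and 31 are rows 27 and 30), and the coefficient values are illustrative placeholders rather than the patent's preset values.

```python
import numpy as np

def forehead_anchor_points(landmarks, a1=1.8, a2=-0.8, b1=-0.6, b2=1.6):
    """Sketch of formulas (2-1)/(2-2): the forehead midpoint and vertex are
    linear combinations of nose points 28 and 31 (1-based numbering); the
    coefficients here are illustrative, not the preset values."""
    landmarks = np.asarray(landmarks, dtype=float)
    m28, m31 = landmarks[27], landmarks[30]   # points 28 and 31
    m_head_center = a1 * m28 + a2 * m31       # formula (2-1)
    m_head = b1 * m28 + b2 * m_head_center    # formula (2-2)
    return m_head_center, m_head
```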
The points on the two sides of the forehead can be obtained by rotating the forehead midpoint M_head_center(x_head_center, y_head_center) around the forehead vertex M_head(x_head, y_head) counterclockwise by θ_i and clockwise by θ_i, respectively; the rotated coordinates and the angles θ_i can be determined by formula (2-3), which is shown only as an image in the source. The series of forehead points obtained in this way is denoted M_i. The cheek points can be interpolated from the facial contour points and the nose-wing feature points, and are denoted H_i.
In the embodiment of the application, so that the subsequent triangulation fully covers the whole image for rendering, the feature points of the image edge need to be acquired; in implementation, the pixel points on the image edge can be sampled at equal intervals to obtain the image edge feature points. In addition, in order to divide the boundary between the face contour and the rest of the image more finely and make the deformation smoother, a circle of feature points outside the face contour also needs to be interpolated.
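A minimal sketch of these two extra point groups: image-edge points sampled at equal intervals along the borders, and a ring of points pushed outward from the face contour. The sampling step and the outward scale factor are illustrative assumptions.

```python
import numpy as np

def image_edge_points(width, height, step=40):
    """Sample feature points at roughly equal intervals along the four
    borders of the image."""
    top = [(x, 0) for x in range(0, width, step)]
    bottom = [(x, height - 1) for x in range(0, width, step)]
    left = [(0, y) for y in range(step, height - 1, step)]
    right = [(width - 1, y) for y in range(step, height - 1, step)]
    return np.array(top + bottom + left + right, dtype=float)

def outer_contour_ring(contour_pts, face_center, scale=1.15):
    """Push each face-contour point away from the face centre by a fixed
    factor to obtain a circle of points outside the contour."""
    contour_pts = np.asarray(contour_pts, dtype=float)
    face_center = np.asarray(face_center, dtype=float)
    return face_center + scale * (contour_pts - face_center)
```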
Step S604, triangulating all the points obtained by interpolation to obtain the triangular patch indices connecting the point set.
All the points obtained by interpolation form a feature point set of the face image, and in the implementation of step S604, the face image can be triangulated based on the feature point set by using a Delaunay triangulation algorithm, so as to obtain a triangular patch index connecting the feature point set. The index of each triangular patch may be the sequence number of the three vertices of the triangular patch in the feature point set. Fig. 8 is a schematic view of an image obtained by triangulating the interpolated feature points according to the embodiment of the present application.
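A minimal sketch of step S604 using SciPy's Delaunay triangulation; each row of the returned index array corresponds to one triangular patch.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points):
    """Sketch of step S604: Delaunay-triangulate the feature point set and
    return, for each triangular patch, the indices of its three vertices."""
    return Delaunay(np.asarray(points, dtype=float)).simplices
```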
In step S605, user-defined constraint points are set.
In the embodiment of the present application, a constraint point refers to a point dragged by the user, and a free point is any other point that moves and deforms as a result of the movement of the constraint points.
Step S606, image rendering is performed based on OpenGL.
Step S607, when it is detected that the user drags a constraint point, calculating the deformation positions.
Here, in step S607, the deformation positions of both the constraint points and the free points are calculated. Whenever the user drags, the deformation positions and texture coordinates of the constraint points and the free points are recalculated; to obtain large-scale yet smooth deformation, step S607 calculates the deformation positions by vector synthesis based on the attenuation mechanism.
Suppose constraint point N_1 moves from its original position (x1_src, y1_src) to the deformed position (x1_dst, y1_dst), constraint point N_2 moves from (x2_src, y2_src) to (x2_dst, y2_dst), ..., and constraint point N_n moves from (xn_src, yn_src) to (xn_dst, yn_dst). The movement of a constraint point is treated as a force acting on each free point: the closer a free point is to a constraint point, the greater the influence of that constraint point on the free point. As shown in fig. 9, the free point Pi is closest to the constraint point Nn, so Nn influences Pi more than N1, N2 and N3 do, and the movement direction of Pi is closest to that of Nn.
Suppose the free points are P_1(s1_src, t1_src), P_2(s2_src, t2_src), ..., P_m(sm_src, tm_src), and their positions after deformation are P'_1(s1_dst, t1_dst), P'_2(s2_dst, t2_dst), ..., P'_m(sm_dst, tm_dst). The vector by which P_j moves from its starting position P_j to its target position P'_j can be calculated by formula (2-4), which is shown only as an image in the source; it synthesizes the motion vectors of the constraint points weighted by their influence on P_j.
In formula (2-4), the motion vector of each constraint point Ni is weighted by w_i(r), a Gaussian weight function, i.e. the influence weight in other embodiments. Formula (2-5), which defines w_i(r), is shown only as an image in the source; in it, d is the distance from the constraint point to the free point and δ is the radius of the influence domain. Formula (2-5) shows that the influence drops to 0 once the distance between the free point and the constraint point exceeds a certain range, that is, each constraint point only acts on the free points within a certain range.
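Since formula (2-5) appears only as an image in the source, the sketch below uses an assumed Gaussian-style falloff truncated to 0 beyond the influence-domain radius δ; the falloff rate is an illustrative choice.

```python
import numpy as np

def influence_weight(d, delta):
    """Assumed stand-in for formula (2-5): a Gaussian-style weight in the
    distance d that drops to 0 once d exceeds the influence-domain radius."""
    d = np.asarray(d, dtype=float)
    w = np.exp(-4.0 * (d / delta) ** 2)
    return np.where(d <= delta, w, 0.0)
```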
Considering large-scale dragging, the influence exerted on a free point should increase with the dragging distance inside the free point's influence domain, and decay slowly with the dragging distance once the influence domain is exceeded, so that the deformation remains smooth and large-scale deformation is handled adaptively. For this purpose, the embodiment of the present application designs the attenuation function g(s) shown in formula (2-6), which is shown only as an image in the source.
In the embodiment of the present application, each free point may be given a deformation domain of its own, taken as a circular domain with radius R; σ is the attenuation degree parameter and may be a preset value.
The image boundary is taken as the deformation limit. The combined action of the constraint points on the free point P_i is obtained by the synthesis of formula (2-4); after the direction in which the constraint points act on the free point is determined, the intersection point of the free point's moving direction with the image boundary can be determined and denoted G_i(x_border, y_border). Then s in formula (2-6) can be determined by formula (2-7):

s = l / ||P_i - G_i||   (2-7)

where l is the distance the free point moves from its original position, and ||P_i - G_i|| is the distance from the free point's original position to the boundary point G_i.
After the influence domain is exceeded, the attenuation given by g(s) increases as l increases, so the final deformation vector of the free point can be determined according to formula (2-8), which is shown only as an image in the source; it combines the deformation-domain radius R, the moving distance l, the attenuation value g(s) and the moving direction of the free point. The final deformation position of the free point is then determined from this deformation vector and the free point's original position.
In practical implementation, the vector-field-based synthetic deformation position calculation in this step can be replaced by MLS (moving least squares) deformation, thin plate splines and the like; by comparison, however, the vector-field-synthesis deformation algorithm is fast and deforms smoothly.
In step S608, texture coordinates are calculated.
Step S609, rendering the image based on OpenGL again to obtain the deformed image.
After the user drags, a dragged constraint point may move far away from its original position; denote its new position O. The free points that move along with it move within their own deformation domains, so the texture coordinates of the two are handled differently: for the dragged point that has moved beyond the deformation domain, the texture coordinates are determined from the texture coordinates of the points near O, while for the free points within the deformation domain, the texture coordinates are kept unchanged, so that the texture is stretched to produce the deformation.
Assume that the position of a control point before deformation is s(x_pos, y_pos) and its position after the drag deformation is s'(x'_pos, y'_pos).
Since the face image is a two-dimensional image, the texture coordinates of all points can be calculated by formula (2-9):

T(x, y) = (x / width, y / height)   (2-9)

where width is the width of the original face image and height is the height of the original face image.
In the embodiment of the present application, the deformed texture coordinates of each feature point can be obtained by formula (2-10):

T'(x, y) = T(x, y) if distance(s', s) ≤ R; otherwise T'(x, y) is recomputed by harmonic mapping on the re-triangulated mesh   (2-10)

where R is the radius of the deformation domain and distance(s', s) is the distance the point moves between its positions before and after deformation. That is, when the distance a feature point moves during deformation is less than R, its deformed texture coordinates are the same as its texture coordinates before deformation; when the moving distance exceeds the deformation domain, the original triangular patch indices are discarded and Delaunay triangulation is applied again to obtain new triangular patch indices. The effect is shown in fig. 10: 1001 is the face image before deformation; when a point of the cheek is dragged on a large scale, simply keeping the texture unchanged produces a texture error at the dragged point, as shown in 1002 in fig. 10, whereas the adaptive texture coordinate calculation achieves a better result, as shown in 1003 in fig. 10.
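A minimal sketch of the decision in formula (2-10): points that moved no farther than the deformation-domain radius R keep their texture coordinates, while the rest are flagged for re-triangulation and harmonic re-mapping (the re-mapping itself is the harmonic computation described below).

```python
import numpy as np

def split_texture_update(orig_pos, new_pos, orig_uv, R):
    """Sketch of formula (2-10): return the texture coordinates to keep and a
    mask of the points whose coordinates must be recomputed by re-triangulating
    and applying harmonic mapping."""
    orig_pos = np.asarray(orig_pos, dtype=float)
    new_pos = np.asarray(new_pos, dtype=float)
    uv = np.array(orig_uv, dtype=float)
    moved = np.linalg.norm(new_pos - orig_pos, axis=1)
    needs_remap = moved > R            # recompute these via harmonic mapping
    return uv, needs_remap
```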
For a better understanding of embodiments of the present application, harmonic texture mapping is described herein.
Assuming φ is a smooth mapping between two smooth manifolds (M, g) and (N, h), the harmonic energy is given by formula (2-11):

E(φ) = (1/2) ∫_M ||dφ||² dv_g   (2-11)
Converting to the current triangular mesh yields formula (2-12):

E(v) = (1/2) Σ_(i,j) k_ij ||v_i - v_j||²   (2-12)

where v_i (i = 1, 2, ..., n) are the interior points of the triangular mesh model, the edge (i, j) connects v_i and v_j, k_1 and k_2 are the positions in the projection plane to which the two vertices of the edge (i, j) map, and L_ij is the length of the edge (i, j). The spring constant k_ij is computed from the edge lengths L and the area of the triangular patch f(i, j, k); its exact expression is shown only as an image in the source.
Taking the partial derivative of E(v) with respect to v_i and setting it to zero yields formula (2-13):

∂E(v)/∂v_i = Σ_j k_ij (v_i - v_j) = 0,  i.e.  v_i = ( Σ_j k_ij v_j ) / ( Σ_j k_ij )   (2-13)

The new texture coordinates of v_i can be determined from formula (2-13).
By the image processing method provided by the embodiment of the application, various natural and smooth large-scale deformation effects can be provided for face special-effect products; at the same time, face deformation provides a key basic technology for subsequent applications such as face animation, face fusion and face swapping.
Continuing with the exemplary structure of the image processing apparatus 354 provided in the embodiments of the present application and implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the image processing apparatus 354 of the memory 350 may form an image processing apparatus in the terminal 300, including:
a first obtaining module 3541, configured to obtain an image to be processed, and perform feature extraction on the image to be processed to obtain a plurality of image feature points;
an interpolation processing module 3542, configured to perform interpolation processing on the plurality of image feature points to obtain a plurality of interpolated feature points;
a first determining module 3543, configured to determine, based on the received feature point movement instruction, a target feature point to be moved and a target position of the target feature point;
a second determining module 3544, configured to determine target positions of other feature points based on the original position of the target feature point, the target position, the original positions of the other feature points, and a preset attenuation function, where the other feature points are each feature point other than the target feature point;
a third determining module 3545, configured to determine the first movement distance of each feature point, and determine a target texture coordinate of each feature point based on the first movement distance of each feature point, the original position, and a harmonic texture mapping algorithm;
a rendering module 3546, configured to render the processed image based on the target position of each feature point and the target texture coordinate of each feature point.
In some embodiments, the second determining module is further configured to:
determining each first distance between each other feature point and each target feature point based on the original positions of the other feature points and the original positions of each target feature point;
determining the influence weight of each target characteristic point on the other characteristic points based on each first distance and a preset influence distance threshold;
determining candidate target positions of the other feature points based on the original position, the target position and the influence weight of each target feature point;
and determining the target positions of the other characteristic points based on the original positions of the other characteristic points, the candidate target positions and a preset attenuation function.
In some embodiments, the second determining module is further configured to:
when the ith first distance is smaller than or equal to the influence distance threshold, determining the ratio of the ith first distance to the influence distance threshold as a first parameter of an influence function; i is 1,2, … N, N is the total number of the target characteristic points;
determining the influence weight of the ith target characteristic point on the other characteristic points based on a second parameter preset by the influence function and the first parameter;
and when the ith first distance is greater than the influence distance threshold, determining a preset value as the influence weight of the ith target characteristic point on the other characteristic points.
In some embodiments, the second determining module is further configured to:
determining each target vector based on the original position of each target feature point and the corresponding target position;
determining candidate motion vectors corresponding to the other feature points based on the influence weight of each target feature point on the other feature points and the target vectors;
determining candidate target positions of the other feature points based on the candidate motion vectors and the original positions of the other feature points.
In some embodiments, the second determining module is further configured to:
determining second movement distances of the other feature points based on the original positions of the other feature points and the candidate target positions;
when the second moving distance is larger than a preset adjusting distance threshold, determining attenuation values of the other characteristic points based on the second moving distance and an attenuation function;
determining target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjusted distance threshold, and the attenuation values.
In some embodiments, the second determining module is further configured to:
when the second movement distance is smaller than or equal to the adjustment distance threshold, determining the candidate target position as the target position of the other feature point.
In some embodiments, the second determining module is further configured to:
determining unit vectors of the other feature points in the moving direction based on the original positions and the candidate target positions of the other feature points;
determining a third parameter outside of the adjusted distance threshold based on the second movement distance, the adjusted distance threshold, and the attenuation value, the adjusted distance threshold being determined as a fourth parameter;
determining target movement vectors of the other feature points based on the third parameter, the fourth parameter and the unit vector;
and determining the target positions of the other characteristic points based on the target movement vector and the original positions of the other characteristic points.
In some embodiments, the interpolation processing module is further configured to:
carrying out interpolation processing on the extracted image characteristic points to obtain interpolated image characteristic points;
determining contour feature points of a target image area based on the interpolated image feature points, and acquiring image edge feature points of the image to be processed;
determining the image characteristic points, the contour characteristic points and the image edge characteristic points after interpolation as a plurality of characteristic points after interpolation;
in some embodiments, the apparatus further comprises:
and the triangulation module is used for triangulating the plurality of feature points of the image to be processed to obtain index information of a plurality of triangular patches.
In some embodiments, the third determining module is further configured to:
determining the original texture coordinate of the q-th feature point based on the original position of the q-th feature point and the size information of the image to be processed;
and when the first moving distance of the q-th feature point is smaller than or equal to the adjusting distance threshold, determining the original texture coordinate of the q-th feature point as the target texture coordinate of the q-th feature point.
In some embodiments, the third determining module is further configured to:
when the first moving distance of the qth feature point is larger than the adjustment distance threshold, re-triangulating based on the target position of each feature point to obtain index information of a plurality of triangular patches;
and determining the target texture coordinate of the q-th feature point based on the index information of the plurality of triangular patches and a harmonic mapping algorithm.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a Ferroelectric Random Access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, a Compact Disc Read Only Memory (CD-ROM), and the like; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (13)

1. An image processing method, comprising:
acquiring an image to be processed, and performing feature extraction on the image to be processed to obtain a plurality of image feature points;
carrying out interpolation processing on the plurality of image characteristic points to obtain a plurality of characteristic points after interpolation;
determining a target feature point to be moved and a target position of the target feature point based on the received feature point moving instruction;
determining the target positions of other feature points based on the original positions of the target feature points, the target position, the original positions of other feature points and a preset attenuation function, wherein the other feature points are all the feature points except the target feature points;
determining a first moving distance of each feature point, and determining a target texture coordinate of each feature point based on the first moving distance, the original position and a harmonic texture mapping algorithm of each feature point;
and rendering the processed image based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
2. The method according to claim 1, wherein the determining the target positions of the other feature points based on the original position of the target feature point, the target position, the original positions of the other feature points and a preset attenuation function comprises:
determining each first distance between each other feature point and each target feature point based on the original positions of the other feature points and the original positions of each target feature point;
determining the influence weight of each target characteristic point on the other characteristic points based on each first distance and a preset influence distance threshold;
determining candidate target positions of the other feature points based on the original position, the target position and the influence weight of each target feature point;
and determining the target positions of the other characteristic points based on the original positions of the other characteristic points, the candidate target positions and a preset attenuation function.
3. The method according to claim 2, wherein the determining the influence weight of each target feature point on the other feature points based on the respective first distances and a preset influence distance threshold comprises:
when the ith first distance is smaller than or equal to the influence distance threshold, determining the ratio of the ith first distance to the influence distance threshold as a first parameter of an influence function; i is 1,2, … N, N is the total number of the target characteristic points;
determining the influence weight of the ith target characteristic point on the other characteristic points based on a second parameter preset by the influence function and the first parameter;
and when the ith first distance is greater than the influence distance threshold, determining a preset value as the influence weight of the ith target characteristic point on the other characteristic points.
4. The method of claim 3, wherein determining the candidate target positions of the other feature points based on the original position, the target position and the influence weights of the respective target feature points comprises:
determining each target vector based on the original position of each target feature point and the corresponding target position;
determining candidate motion vectors corresponding to the other feature points based on the influence weight of each target feature point on the other feature points and the target vectors;
determining candidate target positions of the other feature points based on the candidate motion vectors and the original positions of the other feature points.
5. The method according to claim 2, wherein the determining the target positions of the other feature points based on the original positions of the other feature points, the candidate target positions and a preset attenuation function comprises:
determining second movement distances of the other feature points based on the original positions of the other feature points and the candidate target positions;
when the second moving distance is larger than a preset adjusting distance threshold, determining attenuation values of the other characteristic points based on the second moving distance and an attenuation function;
determining target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjusted distance threshold, and the attenuation values.
6. The method of claim 5, further comprising:
when the second movement distance is smaller than or equal to the adjustment distance threshold, determining the candidate target position as the target position of the other feature point.
7. The method of claim 5, wherein determining the target positions of the other feature points based on the original positions of the other feature points, the candidate target positions, the adjusted distance threshold, and the attenuation values comprises:
determining unit vectors of the other feature points in the moving direction based on the original positions and the candidate target positions of the other feature points;
determining a third parameter outside of the adjusted distance threshold based on the second movement distance, the adjusted distance threshold, and the attenuation value, the adjusted distance threshold being determined as a fourth parameter;
determining target movement vectors of the other feature points based on the third parameter, the fourth parameter and the unit vector;
and determining the target positions of the other characteristic points based on the target movement vector and the original positions of the other characteristic points.
8. The method according to claim 1, wherein the interpolating the plurality of image feature points to obtain a plurality of interpolated feature points comprises:
carrying out interpolation processing on the extracted image characteristic points to obtain interpolated image characteristic points;
determining contour feature points of a target image area based on the interpolated image feature points, and acquiring image edge feature points of the image to be processed;
determining the image characteristic points, the contour characteristic points and the image edge characteristic points after interpolation as a plurality of characteristic points after interpolation;
the method further comprises the following steps:
and triangulating the image to be processed based on the plurality of feature points after interpolation to obtain index information of a plurality of triangular surface patches.
9. The method of claim 8, wherein determining the target texture coordinates of each feature point based on the first travel distance, the original location, and the harmonic texture mapping algorithm of each feature point comprises:
determining the original texture coordinate of the q-th feature point based on the original position of the q-th feature point and the size information of the image to be processed;
and when the first moving distance of the q-th feature point is smaller than or equal to the adjusting distance threshold, determining the original texture coordinate of the q-th feature point as the target texture coordinate of the q-th feature point.
10. The method of claim 9, wherein determining the target texture coordinates of each feature point based on the first travel distance, the original location, and the harmonic texture mapping algorithm of each feature point comprises:
when the first moving distance of the qth feature point is larger than the adjustment distance threshold, re-triangulating based on the target position of each feature point to obtain index information of a plurality of triangular patches;
and determining the target texture coordinate of the q-th feature point based on the index information of the plurality of triangular patches and a harmonic texture mapping algorithm.
11. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an image to be processed and acquiring a plurality of characteristic points of the image to be processed;
the first determination module is used for determining a target feature point to be moved and a target position of the target feature point based on a received feature point movement instruction;
a second determining module, configured to determine target positions of other feature points based on the original position of the target feature point, the target position, the original positions of the other feature points, and a preset attenuation function, where the other feature points are each feature point except the target feature point;
the third determining module is used for determining the first moving distance of each feature point and determining the target texture coordinate of each feature point based on the first moving distance, the original position and the harmonic texture mapping algorithm of each feature point;
and the rendering module is used for rendering the processed image based on the target position of each characteristic point and the target texture coordinate of each characteristic point.
12. An image processing apparatus characterized by comprising:
a memory for storing executable instructions; a processor for implementing the method of any one of claims 1 to 10 when executing executable instructions stored in the memory.
13. A computer-readable storage medium having stored thereon executable instructions for causing a processor, when executing, to implement the method of any one of claims 1 to 10.
CN202011458276.6A 2020-12-10 2020-12-10 Image processing method, device, equipment and computer readable storage medium Pending CN113538212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011458276.6A CN113538212A (en) 2020-12-10 2020-12-10 Image processing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011458276.6A CN113538212A (en) 2020-12-10 2020-12-10 Image processing method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113538212A true CN113538212A (en) 2021-10-22

Family

ID=78094298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011458276.6A Pending CN113538212A (en) 2020-12-10 2020-12-10 Image processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113538212A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726499A (en) * 2023-05-29 2024-03-19 荣耀终端有限公司 Image deformation processing method, electronic device, and computer-readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40054506; Country of ref document: HK)
SE01 Entry into force of request for substantive examination