CN117636408A - Video processing method and device - Google Patents


Info

Publication number
CN117636408A
Authority
CN
China
Prior art keywords
image
face
processing
target
parameters
Prior art date
Legal status
Pending
Application number
CN202210951359.1A
Other languages
Chinese (zh)
Inventor
蒋林均
黄秋晗
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210951359.1A
Publication of CN117636408A


Abstract

Embodiments of the present disclosure provide a video processing method and device. The method includes: receiving a first input from a user on a first image of a target video, where the first input is used to issue an operation instruction for a target face in the first image; in response to the first input, processing the face of a pre-acquired preset image with operation parameters to obtain deformation displacement parameters, where the operation parameters are the parameters corresponding to executing the operation instruction on the target face; and processing the target face in a second image of the target video with the deformation displacement parameters to obtain the target video, where the second image is any image frame of the target video other than the frame corresponding to the first image. In this way, the images of the target video are processed without manual operation by the user, which improves processing efficiency.

Description

Video processing method and device
Technical Field
Embodiments of the present disclosure relate to the technical fields of computers and image processing, and in particular to a video processing method and a video processing device.
Background
With the continuous development of electronic devices, their functions keep increasing: users can now shoot videos with an electronic device and process the faces that appear in them. Most existing video face-thinning deformations are designed in advance by designers from a standard model face (for example, a natural face-thinning preset); the user can only select the corresponding preset and adjust its strength with a slider, and cannot add a designer's unique look or their own ideas to the face-thinning design.
Existing picture face-thinning approaches are more varied: besides deformations designed in advance by designers, users can also thin faces manually. However, a picture approach only applies to a single picture, which corresponds to a single frame of a video, and cannot be applied to all frames of the whole video.
In actual use, when the faces in a video need to be processed, the face parameters differ between frames, so the user usually has to process the face in each frame of the video manually. As a result, the current processing efficiency for faces in video images is low.
Disclosure of Invention
Embodiments of the present disclosure provide a video processing method and device to solve the problem that a user usually has to process the faces in each frame of a video manually, which makes the processing efficiency of faces in video images low.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
receiving a first input from a user on a first image of a target video, where the first input is used to issue an operation instruction for a target face in the first image;
in response to the first input, processing the face of a pre-acquired preset image with operation parameters to obtain deformation displacement parameters, where the operation parameters are the parameters corresponding to executing the operation instruction on the target face;
and processing the target face in a second image of the target video with the deformation displacement parameters to obtain the target video, where the second image is any image frame of the target video other than the frame corresponding to the first image.
In a second aspect, embodiments of the present disclosure provide a video processing apparatus, including:
a receiving unit configured to receive a first input from a user on a first image of a target video, where the first input is used to issue an operation instruction for a target face in the first image;
a first processing unit configured to, in response to the first input, process the face of a pre-acquired preset image with operation parameters to obtain deformation displacement parameters, where the operation parameters are the parameters corresponding to executing the operation instruction on the target face;
and a second processing unit configured to process the target face in a second image of the target video with the deformation displacement parameters to obtain the target video, where the second image is any image frame of the target video other than the frame corresponding to the first image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
The memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the video processing method of the first aspect and its various possible implementations.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the video processing method of the first aspect and its various possible implementations.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the video processing method of the first aspect and its various possible implementations.
In the video processing method and device provided by the embodiments of the present disclosure, the operation parameters of the operation instruction applied to the target face of the first image are first used to process the face of a preset image. Because the face of the preset image generalizes well, the resulting deformation displacement parameters also generalize well and can be applied to other images (i.e., the second image), even though those images contain faces with different characteristic parameters. In other words, the operation parameters from a single frame can be applied to the second image without manual operation by the user, which improves processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the description of the prior art, it being obvious that the drawings in the following description are some embodiments of the present disclosure, and that other drawings may be obtained from these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
Fig. 2 is a second schematic flow chart of a video processing method according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
In order to facilitate understanding, concepts related to the embodiments of the present disclosure are described below.
Electronic device: a device with wireless transceiving capability. The electronic device may be deployed on land (indoors or outdoors; hand-held, wearable, or vehicle-mounted) or on the water surface (such as on a ship). The electronic device may be a mobile phone, a tablet (Pad), a computer with wireless transceiving capability, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a vehicle-mounted electronic device, a wireless terminal in self-driving, a wireless device in transportation safety, a wireless device in a smart city, a wireless device in a smart home, a wearable device, etc. The electronic device in the embodiments of the present disclosure may also be referred to as a terminal, user equipment (UE), access device, vehicle-mounted terminal, industrial control terminal, UE unit, UE station, mobile station, remote device, mobile device, wireless communication device, UE proxy, UE apparatus, or the like. The electronic device may be stationary or mobile.
Operation parameters: parameters for adjusting a face in an image displayed on the electronic device. They may include at least one of: a parameter for reducing the area of the face (a face-thinning operation parameter), a parameter for enlarging the area of the face (a face-expansion or face-fattening operation parameter), and a parameter for changing the shape of the face.
Deformation displacement parameters: parameters used to process the target face in the second image. The effect obtained by processing the target face in the second image with the deformation displacement parameters generally matches the effect obtained by applying the operation parameters of the operation instruction to the target face in the first image.
A target video may include a first image and a second image in which the characteristic parameters of the target face differ (the characteristic parameters may include at least one of the orientation of the face, the size of the face, and the depth information of the face). If the operation parameters for the target face in the first image are applied directly to the target face in the second image, the processing result for the second image is easily degraded (for example, with face-thinning operation parameters, parts of the face that need thinning may be left untouched while parts that do not need thinning are thinned). Therefore, in actual use, the user has to process the faces in different images of the target video manually, which makes the processing of images in the video inefficient.
To solve the above technical problem, embodiments of the present disclosure provide a video processing method that processes the face of a preset image with the operation parameters of the operation instruction applied to the target face of the first image. Because the face of the preset image generalizes well, the resulting deformation displacement parameters also generalize well and can be applied to other images (i.e., the second image), even though those images contain faces with different characteristic parameters. In other words, the operation parameters from one frame can be applied to the second image without manual operation by the user, improving processing efficiency.
The application scenario of the embodiments of the present disclosure is described below.
Embodiments of the present disclosure can be applied to an electronic device on which a target video is displayed. After the user selects a frame of the target video and performs a face-thinning operation on a face in that frame, the user expects the face-thinning operation to be synchronized to the other frames of the target video. With the embodiments of the present disclosure, the face-thinning operation parameters of one frame can be applied to the other image frames without manual operation by the user, which improves processing efficiency.
In addition, the embodiments of the present disclosure may be applied to other scenarios, which differ only in the operation performed: the scenario above applies a face-thinning operation to the image frames of the whole target video, while other scenarios may apply a face-fattening operation (also called a face-expansion operation) to the image frames of the target video.
The following describes the technical solutions of the present disclosure and how the technical solutions of the present disclosure solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the disclosure, referring to fig. 1, the method may include:
s101, receiving a first input of a first image of a target video of a user, wherein the first input is used for inputting an operation instruction to a target face in the first image.
The type of the first input is not limited here. For example, the first input may be a sliding input, a touch input, or a voice input. When the first input is a sliding input, it may consist of a single slide or be composed of multiple slides.
The position of the first image in the target video is not limited here. For example, the first image may be located at the start, middle, or end of the target video's playback. In other words, the first image may be any image frame of the target video, and the second image is any other image frame of the target video except the frame corresponding to the first image.
The specific content of the operation parameters is not limited here. For example, the operation parameters may include at least one of: the face-thinning position, face-thinning direction, face-thinning range, and face-thinning intensity. They may also include face-fattening parameters that make the facial curve smooth and full, or parameters for specific processing of the eyes, mouth, nose, hair, or figure.
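As an illustrative sketch only (not from the patent), operation parameters like those listed above could be grouped in a small structure; every field name here is a hypothetical choice of ours:

```python
from dataclasses import dataclass

@dataclass
class FaceThinParams:
    """Hypothetical container for the face-thinning operation parameters
    named in the text: position, direction, range, and intensity."""
    position: tuple    # (x, y) location of the thinning operation on the face
    direction: tuple   # unit vector giving the direction of the thinning push
    radius: float      # affected range around the position, in pixels
    intensity: float   # strength of the operation, in [0, 1]
```

A user's slider interaction would then populate `intensity`, while the slide gesture itself would supply `position` and `direction`.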
S102, responding to the first input, and processing the face of a preset image acquired in advance through operation parameters to obtain deformation displacement parameters, wherein the operation parameters are parameters corresponding to the target face after the operation instructions are executed on the target face.
The characteristic parameters of the face of the preset image can be preset, and the face of the preset image can be synthesized from the characteristic parameters of many faces. A mapping relationship can therefore be formed between the face of the preset image and the face in any image; that is, the face of the preset image generalizes well, and the operation parameters of an operation instruction from any image can be used to process the face of the preset image to obtain the corresponding deformation displacement parameters.
In addition, the face of the preset image may be called a standard face or model face, and its preset characteristic parameters may be understood as at least one of the following: the orientation of the face is a target orientation, the size of the face is a preset size, and the depth information of the face is a preset depth value.
The manner of processing the face of the preset image by the operation parameter is not limited herein.
As an alternative implementation manner, when the characteristic parameters of the target face of the first image are matched with the characteristic parameters of the face of the preset image, the operation parameters of the target face of the first image may be directly adopted to process the face of the preset image.
For example, when the orientation of the target face of the first image matches the orientation of the face of the preset image, it can be determined that the characteristic parameters of the target face of the first image match those of the face of the preset image.
Conversely, when the orientation of the target face of the first image does not match the orientation of the face of the preset image, it can be determined that the characteristic parameters of the target face of the first image do not match those of the face of the preset image; the orientation of the first image can then be corrected so that the two orientations match.
As another optional implementation manner, the processing, by using the operation parameter, the face of the pre-acquired preset image to obtain the deformation displacement parameter includes:
determining a first mapping relation between the first image and the preset image, wherein the first mapping relation is used for mapping and converting parameters in the first image into parameters in the preset image;
determining a mapping operation parameter corresponding to the operation parameter of the operation instruction according to the first mapping relation;
and processing the face of the preset image acquired in advance through the mapping operation parameters to obtain deformation displacement parameters.
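The mapping-and-convert steps above can be sketched as follows. This is a minimal illustration under our own assumptions, not the patent's implementation: the mapping between the two face coordinate systems is modeled as a similarity transform (scale, rotation, translation) estimated from corresponding face keypoints, and reflection handling is omitted; all function names are hypothetical.

```python
import numpy as np

def similarity_transform(src_pts, dst_pts):
    """Least-squares similarity transform mapping src_pts onto dst_pts.
    Both inputs are (N, 2) arrays of corresponding face keypoints."""
    src_mean, dst_mean = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    src_c, dst_c = src_pts - src_mean, dst_pts - dst_mean
    # uniform scale from the ratio of spreads
    scale = np.sqrt((dst_c ** 2).sum() / (src_c ** 2).sum())
    # rotation via the Kabsch/Procrustes SVD solution
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    rot = (u @ vt).T
    translation = dst_mean - scale * (rot @ src_mean)
    return scale, rot, translation

def map_operation(vec, scale, rot):
    """Map an operation displacement vector from the first image's face
    coordinate system into the preset image's face coordinate system."""
    return scale * (rot @ vec)
```

For a first-image displacement `v`, the corresponding mapping operation parameter on the preset face would be `map_operation(v, scale, rot)`, directly generalizing the "product of the operation parameter and the preset ratio" case discussed below.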
The first mapping relationship may be a relationship established in advance between the first image and the preset image, or the first mapping relationship may be a mapping relationship established between the first image and the preset image in response to the first input.
In the embodiments of the present disclosure, because the specific values of the operation parameters differ between images, the mapping operation parameters corresponding to the operation parameters are first determined from the first mapping relationship, and the face of the pre-acquired preset image is then processed with the mapping operation parameters to obtain the deformation displacement parameters. This makes the processing of the face of the preset image more accurate and improves the accuracy of the resulting deformation displacement parameters.
Note that, the specific determination method of the first mapping relationship is not specifically limited herein.
As an optional implementation manner, the determining the first mapping relationship between the first image and the preset image includes:
constructing a first coordinate system of the first image through key point information of a target face of the first image, and constructing a second coordinate system of the preset image through key point information of a face of the preset image;
and determining a first mapping relationship between the first image and the preset image according to the conversion relationship between the first coordinate system and the second coordinate system.
The first and second coordinate systems may each include direction information and scale information, and the conversion between them is completed according to the correspondence between their direction information and the correspondence between their scale information.
For example, if the scale of the first coordinate system and the scale of the second coordinate system are in a preset ratio, the conversion between the two coordinate systems can be performed according to that ratio, and the mapping operation parameter can then be the product of the operation parameter and the preset ratio.
In the embodiment of the disclosure, the first coordinate system of the first image and the second coordinate system of the preset image can be respectively established, and then the first mapping relation is determined according to the conversion relation between the first coordinate system and the second coordinate system, so that the accuracy of the first mapping relation can be improved.
It should be noted that the first and second coordinate systems may be of the same type, for example both rectangular coordinate systems. This reduces the amount of computation needed to convert between them, saving computing resources when determining the first mapping relationship.
In another alternative embodiment, corresponding weights may be assigned to the first and second coordinate systems, and the first mapping relationship is then determined from the conversion relationship between the weighted coordinate systems, where the weights may represent the importance or correction coefficients of the two coordinate systems.
As an optional implementation manner, the deformation displacement parameter is a parameter in a deformation displacement map, and the processing, by using the mapping operation parameter, the face of the pre-acquired preset image to obtain the deformation displacement parameter includes:
performing deformation processing on grids corresponding to the faces in the preset images by using the mapping operation parameters;
rendering grids corresponding to the face after deformation processing in the preset image to obtain the deformation displacement map;
and acquiring the deformation displacement parameters from the deformation displacement map.
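The steps above can be sketched minimally as follows, with a single liquify-style push with a Gaussian falloff standing in for the full mesh deformation and rendering; the dense-field shortcut, falloff choice, and all names are our assumptions rather than the patent's method:

```python
import numpy as np

def deformation_displacement_map(h, w, center, push, radius):
    """Build a per-pixel deformation displacement map of shape (h, w, 2):
    pixels near `center` are displaced by up to `push` (dx, dy), with a
    smooth Gaussian-like falloff controlled by `radius`."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    falloff = np.exp(-d2 / (2.0 * (radius / 2.0) ** 2))  # 1 at center, ~0 far away
    disp = np.zeros((h, w, 2))
    disp[..., 0] = push[0] * falloff  # x-displacement channel
    disp[..., 1] = push[1] * falloff  # y-displacement channel
    return disp
```

Reading the value at any pixel of this map is exactly "acquiring the deformation displacement parameters from the deformation displacement map": each pixel stores the (dx, dy) to apply at that location.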
A target algorithm may be used to deform the mesh corresponding to the face in the preset image with the mapping operation parameters. The specific type of the target algorithm is not limited; for example, it may be a liquify algorithm.
In the embodiments of the present disclosure, the mesh corresponding to the face in the preset image is deformed with the mapping operation parameters, the deformed mesh is rendered to obtain the deformation displacement map, and the deformation displacement parameters are read from that map. This makes the deformation displacement parameters easy to store and transfer, and the target face of the second image can be processed directly with the parameters in the deformation displacement map, which makes the processing of the target face of the second image simpler and more convenient.
S103, processing the target face in a second image of the target video through the deformation displacement parameters to obtain the target video, wherein the second image is other image frames except the image frame corresponding to the first image in the target video.
It should be noted that, when the characteristic parameters of the target face in the second image are matched with the characteristic parameters of the face of the preset image, the deformation displacement parameters may be directly adopted to process the target face in the second image, so as to obtain the target video.
Correspondingly, when the characteristic parameters of the target face in the second image are not matched with the characteristic parameters of the face of the preset image, as an optional implementation manner, the processing the target face in the second image of the target video through the deformation displacement parameter to obtain the target video includes:
determining a second mapping relation between a second image of the target video and the preset image, wherein the second mapping relation is used for mapping and converting parameters in the preset image into parameters in the second image;
determining a mapping deformation displacement parameter corresponding to the deformation displacement parameter according to the second mapping relation;
and processing the target face in the second image through the mapping deformation displacement parameters to obtain a target video.
The second mapping relationship is analogous to the first mapping relationship described above and is not repeated here.
In the embodiments of the present disclosure, because the specific values of the deformation displacement parameters differ between images, the mapping deformation displacement parameters corresponding to the deformation displacement parameters are first determined from the second mapping relationship, and the target face in the second image is then processed with the mapping deformation displacement parameters to obtain the target video. This makes the processing of the target face in the second image more accurate and improves the accuracy of the resulting mapping deformation displacement parameters.
It should be noted that, the specific determination manner of the second mapping relationship is not limited herein.
As an optional implementation manner, the determining the second mapping relationship between the second image of the target video and the preset image includes:
constructing a third coordinate system of a second image of the target video through key point information of a target face of the second image, and constructing a fourth coordinate system of the preset image through key point information of a face of the preset image;
constructing a first face mesh of the second image from the third coordinate system, and constructing a second face mesh of the preset image from the fourth coordinate system, where the vertices of the first face mesh correspond one-to-one with the vertices of the second face mesh;
and determining a second mapping relationship between the second image and the preset image according to the correspondence between the first face mesh and the second face mesh.
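With one-to-one vertex correspondence between the two meshes, a location in a triangle of the preset face mesh can be transferred to the corresponding triangle of the second image's face mesh via barycentric coordinates. The following is an illustrative sketch only; the barycentric approach and the function names are our assumptions, not stated in the patent:

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of 2-D point p in triangle tri ((3, 2) array)."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])   # edge vectors as columns
    u, v = np.linalg.solve(m, p - a)      # solve p = a + u*(b-a) + v*(c-a)
    return np.array([1.0 - u - v, u, v])

def map_point(p, preset_tri, second_tri):
    """Map a point from a preset-face triangle to the corresponding
    triangle of the second image's face mesh (vertices correspond 1:1)."""
    w = barycentric(p, preset_tri)
    return w @ second_tri                 # same weights, other mesh's vertices
```

Because the weights are computed in one mesh and applied to the corresponding vertices of the other, the mapping automatically accounts for per-triangle differences in face size and shape between the preset image and the second image.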
The fourth coordinate system and the second coordinate system in the above embodiment may be the same coordinate system, so that the coordinate system of the preset image does not need to be repeatedly constructed, thereby further reducing the consumption of computing resources.
The one-to-one correspondence between the vertices of the first and second face meshes specifically means that both the number and the positions of the vertices of the two meshes correspond one-to-one.
In the embodiment of the disclosure, a third coordinate system of the second image and a fourth coordinate system of the preset image may be respectively established, then a first face mesh of the second image is constructed according to the third coordinate system, a second face mesh of the preset image is constructed according to the fourth coordinate system, and a second mapping relationship is determined according to the corresponding relationship between the first face mesh and the second face mesh, so that the accuracy of the second mapping relationship may be improved.
As an optional implementation manner, the mapping deformation displacement parameter is a parameter in a deformation displacement map, and the processing, by the mapping deformation displacement parameter, the target face in the second image of the target video to obtain the target video includes:
when a fault position exists in the deformation displacement map, performing Gaussian blur processing on the fault position of the deformation displacement map;
and processing the target face in the second image with the parameters of the Gaussian-blurred deformation displacement map to obtain the target video.
When a fault position exists in the deformation displacement map, Gaussian blur can be applied at the fault position to smooth it, which improves the quality of the deformation displacement map.
In addition, the above-mentioned fault may be a discontinuity caused by mesh overlap.
In the embodiments of the present disclosure, Gaussian blur is applied at the fault position of the deformation displacement map, and the target face in the second image is then processed with the parameters of the blurred map to obtain the target video, which further improves the accuracy of the result of processing the target face in the second image.
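A sketch of this Gaussian blur step, under the assumption that the fault positions are given as a boolean mask over one channel of the displacement map; it is implemented here as a separable NumPy convolution rather than any particular library call, and all names are ours:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def blur_faults(channel, fault_mask, sigma=2.0):
    """Smooth only the fault regions of one (h, w) displacement-map channel.
    fault_mask is a boolean (h, w) array marking discontinuities."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(channel, radius, mode="edge")
    # separable Gaussian: convolve rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
    out = channel.copy()
    out[fault_mask] = blurred[fault_mask]  # replace values only at fault positions
    return out
```

Blurring only where the mask is set keeps the rest of the displacement map untouched, so well-formed regions of the deformation are not softened.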
As an optional implementation manner, the number of the second images is at least two frames, and the processing of the target face in the second image of the target video by using the deformation displacement parameter to obtain the target video includes:
And processing the target face in the second image of each frame according to the deformation displacement parameters and the preset sequence to obtain at least two frames of target images so as to obtain a corrected target video.
The corrected target video may include the at least two frames of target images and the first image processed by the operation instruction.
In the embodiment of the disclosure, the target face in each frame of the second image is processed according to the sequence to obtain at least two frames of target images, so that each frame of the second image in the target video can be processed according to the preset sequence, namely the second image included in the whole target video is processed, the intelligent degree of operating the second image included in the target video is improved, and the efficiency of operating the second image included in the target video is improved.
It should be noted that the specific content of the preset sequence is not limited herein.
As an alternative embodiment, the preset sequence includes: a display order of the at least two frames of second images in the target video.
In the embodiment of the disclosure, the second images can be sequentially processed according to the display order of the second images in the target video, so that the processing of the second images is more orderly, second images are less likely to be missed, and the phenomenon that some second images are left unprocessed is avoided.
As another alternative embodiment, the preset sequence includes: an order based on the definition (sharpness) of the faces in the at least two frames of second images. In this way, the diversity and flexibility of the preset sequence are increased.
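The two preset sequences above (display order, or an order based on face definition) can be sketched as follows. The Laplacian-variance sharpness measure and all function names are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the discrete Laplacian - a common sharpness proxy."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return float(lap.var())

def processing_order(frames, mode: str = "display"):
    """Return frame indices in the preset processing order."""
    if mode == "display":                 # display order in the target video
        return list(range(len(frames)))
    if mode == "sharpness":               # clearest faces processed first
        return sorted(range(len(frames)),
                      key=lambda i: sharpness(frames[i]), reverse=True)
    raise ValueError(mode)
```

Each second image would then be processed (e.g. warped by the deformation displacement parameters) in the order returned by `processing_order`.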
In the embodiment of the disclosure, through steps S101 to S103, the deformation displacement parameters obtained by processing the face of the preset image with the operation parameters of the operation instruction have high universality; that is, the deformation displacement parameters can be applied to images containing faces with different characteristic parameters. In other words, the operation parameters of the operation instruction corresponding to a certain frame of image can be applied to the second images without manual operation by the user, so that the processing efficiency is improved.
It should be noted that, in the embodiment of the present disclosure, the image in the target video may be a two-dimensional image or an N-dimensional image, where N is greater than or equal to three, and in particular, the embodiment of the present disclosure uses two-dimensional images as an exemplary illustration, and does not constitute a specific limitation.
Referring to fig. 2, the above embodiment is illustrated below with a specific embodiment. The embodiment of the present disclosure is mainly illustrated by performing a face-thinning operation on the first image and the second image of the target video, but this embodiment is merely an exemplary illustration and does not constitute a specific limitation of the present disclosure.
In the first stage:
S201, respectively acquiring a preset image (namely a standard model diagram) and a first image of a target video;
The size of the preset image is not limited herein; for example, the size of the preset image may be 1280 × 1280, the key point information of the face of the preset image may be X1, and the preset image may be divided into a plurality of standard meshes, where the mesh size is not limited and may be, for example, 250 × 250.
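The division of the preset image into standard meshes might be sketched as below; the cell count is a generic stand-in for the mesh size mentioned above, and all names are hypothetical:

```python
import numpy as np

def build_standard_mesh(size: int = 1280, cells: int = 8) -> np.ndarray:
    """Build a (cells+1) x (cells+1) vertex grid over a size x size image.

    Returns an (n, n, 2) array of (x, y) vertex coordinates; the first and
    last vertices along each axis lie on the image borders.
    """
    coords = np.linspace(0.0, size, cells + 1)
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs, ys], axis=-1)
```

A deformation of the face can then be expressed as a per-vertex offset of this grid, which is what the later steps render into a displacement map.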
Wherein, the thin face operation parameter of the target face of the first image may be referred to as Y1, and the key point information of the target face of the first image may be referred to as Z1.
S202, constructing a second coordinate system P2 of the preset image and a first coordinate system P1 of the first image through X1 and Z1 respectively, and acquiring direction information (x_x1, y_x1, x_y1 and y_y1) and scale information (L_x1 and L_y1) of the coordinate systems from the first coordinate system and the second coordinate system respectively;
S203, mapping the thin face operation parameter Y1 on the first image from the first coordinate system P1 to the second coordinate system P2 through the conversion relation between the first coordinate system and the second coordinate system to obtain a mapped thin face operation parameter Y2;
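Steps S202 and S203 - building a face coordinate system from key point information (direction and scale) and mapping parameters from one system to the other - can be sketched roughly as follows. The choice of the key-point centroid as origin and the vector between the first two key points as the x-axis direction is an assumption made here for illustration; the disclosure does not fix how the coordinate systems are derived:

```python
import numpy as np

class FaceFrame:
    """A 2D face coordinate system: origin, axis directions and a scale."""

    def __init__(self, keypoints: np.ndarray):
        self.origin = keypoints.mean(axis=0)          # centroid as origin
        x_dir = keypoints[1] - keypoints[0]           # e.g. eye-to-eye vector
        self.scale = float(np.linalg.norm(x_dir))     # scale information L
        x = x_dir / self.scale
        # Rows are the x-axis and its perpendicular (direction information).
        self.axes = np.stack([x, np.array([-x[1], x[0]])])

    def to_local(self, p):
        return self.axes @ (np.asarray(p) - self.origin) / self.scale

    def to_world(self, local):
        return self.origin + self.scale * (self.axes.T @ np.asarray(local))

def map_point(p, src: FaceFrame, dst: FaceFrame):
    """Map a point from the src coordinate system into the dst system."""
    return dst.to_world(src.to_local(p))
```

Mapping the thin-face operation parameter Y1 from P1 into P2 then amounts to applying `map_point` (or the same transform) to its coordinate components.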
S204, deforming the grid M1 corresponding to the face on the preset image with Y2 according to a liquify algorithm to obtain a deformed grid M2;
S205, rendering by using M2 to obtain a deformation displacement map M3;
and a second stage:
S206, constructing a fourth coordinate system P4 of the preset image and a third coordinate system P3 of the second image through X2 and Z2 respectively;
the key point information of the face of the preset image may be X2, and the key point information of the target face of the second image may be referred to as Z2; and X2 and X1 may be the same key point information, and the fourth coordinate system P4 and the second coordinate system P2 may be the same coordinate system.
S207, respectively constructing a face grid M4 of a preset image and a face grid M5 of a second image through X2 and Z2;
wherein the vertices of M4 and M5 correspond one to one in both number and position;
S208, sampling a deformation displacement map M6 of the face of the preset image through the vertex correspondence between M4 and M5, and performing coordinate system conversion on the sampling result through the coordinate systems of the preset image and the second image (namely the fourth coordinate system and the third coordinate system) to obtain a deformation displacement map M7 of the second image;
the deformation displacement map M6 of the face of the preset image and the deformation displacement map M3 may be the same deformation displacement map.
S209, in order to resolve the fault phenomenon caused by grid overlapping, performing Gaussian blur processing on M7 to smooth the edges;
S210, applying M7 to the second image, and rendering to obtain a thin face result (namely a target image) of the second image;
S211, the above operations are repeated for the second images of the entire target video to obtain the video corresponding to the entire target video, namely: steps S201 to S211 may be sequentially performed on all the second images included in the target video, so as to implement the face-thinning operation on the second images included in the entire target video and improve the face-thinning efficiency.
Wherein, the first image and the second image may each refer to a certain image frame in the target video.
According to the embodiment of the disclosure, according to the steps, the face thinning operation can be performed on the image of the whole target video, so that the face thinning operation efficiency of the image frame of the target video is improved.
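The application of the displacement map to a frame in step S210 is, in essence, an image-warping operation. A minimal backward-warping sketch (grayscale frame, nearest-neighbour sampling, all names hypothetical) is:

```python
import numpy as np

def apply_displacement(frame: np.ndarray, disp_map: np.ndarray) -> np.ndarray:
    """Backward-warp a grayscale frame: each output pixel (y, x) is sampled
    from (y - dy, x - dx), with nearest-neighbour lookup for brevity."""
    h, w = frame.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip((xs - disp_map[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - disp_map[..., 1]).round().astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

A production renderer would interpolate bilinearly and handle color channels, but the structure - per-pixel lookup driven by the displacement map - is the same.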
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 3, a video processing apparatus 30 includes: a receiving unit 301, a first processing unit 302 and a second processing unit 303, wherein:
a receiving unit 301, configured to receive a first input of a first image of a target video by a user, where the first input is used to input an operation instruction to a target face of the first image;
The first processing unit 302 is configured to process, in response to the first input, a face of a preset image obtained in advance through an operation parameter to obtain a deformation displacement parameter, where the operation parameter is a parameter corresponding to the target face after the operation instruction is executed on the target face;
and the second processing unit 303 is configured to process, according to the deformation displacement parameter, a target face in a second image of the target video to obtain the target video, where the second image is another image frame in the target video except for the image frame corresponding to the first image.
As an alternative embodiment, the first processing unit 302 is specifically configured to:
determining a first mapping relation between the first image and the preset image, wherein the first mapping relation is used for mapping and converting parameters in the first image into parameters in the preset image;
determining a mapping operation parameter corresponding to the operation parameter of the operation instruction according to the first mapping relation;
and processing the face of the preset image acquired in advance through the mapping operation parameters to obtain deformation displacement parameters.
As an optional implementation manner, the deformation displacement parameter is a parameter in a deformation displacement chart, and the first processing unit 302 is specifically configured to:
Performing deformation processing on grids corresponding to the faces in the preset images by using the mapping operation parameters;
rendering grids corresponding to the face after deformation processing in the preset image to obtain the deformation displacement map;
and acquiring the deformation displacement parameters according to the deformation displacement diagram.
As an alternative embodiment, the first processing unit 302 is specifically configured to:
constructing a first coordinate system of the first image through key point information of a target face of the first image, and constructing a second coordinate system of the preset image through key point information of a face of the preset image;
and determining a first mapping relation between the first image and the preset image according to the conversion relation between the first coordinate system and the second coordinate system.
As an alternative embodiment, the second processing unit 303 is specifically configured to:
determining a second mapping relation between a second image of the target video and the preset image, wherein the second mapping relation is used for mapping and converting parameters in the preset image into parameters in the second image;
determining a mapping deformation displacement parameter corresponding to the deformation displacement parameter according to the second mapping relation;
And processing the target face in the second image through the mapping deformation displacement parameters to obtain a target video.
As an alternative embodiment, the second processing unit 303 is specifically configured to:
constructing a third coordinate system of a second image of the target video through key point information of a target face of the second image, and constructing a fourth coordinate system of the preset image through key point information of a face of the preset image;
constructing a first face grid of the second image through the third coordinate system, and constructing a second face grid of the preset image through the fourth coordinate system, wherein the first face grid corresponds to the vertexes of the second face grid one by one;
and determining a second mapping relation between the second image and the preset image according to the corresponding relation between the first face grid and the second face grid.
As an optional implementation manner, the mapped deformation displacement parameter is a parameter in a deformation displacement map, and the second processing unit 303 is specifically configured to:
when a fault position exists in the deformation displacement diagram, gaussian blur processing is carried out on the fault position of the deformation displacement diagram;
And processing the parameters in the deformation displacement graph after Gaussian blur processing on the target face in the second image to obtain a target video.
As an alternative embodiment, the number of the second images is at least two frames, and the second processing unit 303 is specifically configured to:
and processing the target face in the second image of each frame according to the deformation displacement parameters and the preset sequence to obtain at least two frames of target images so as to obtain a corrected target video.
As an alternative embodiment, the preset sequence includes: a display order of the at least two frames of second images in the target video.
The video processing device provided in the embodiments of the present disclosure may be used to execute the technical solutions of the embodiments of the method, and the implementation principle and the technical effects are similar, and the embodiments of the present disclosure are not repeated here.
In order to achieve the above embodiments, the embodiments of the present disclosure further provide an electronic device.
Referring to fig. 4, there is shown a schematic structural diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure, which electronic device 400 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic apparatus 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a random access Memory (Random Access Memory, RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video processing method, including:
receiving a first input of a first image of a target video by a user, wherein the first input is used for inputting an operation instruction to a target face in the first image;
responding to the first input, and processing a face of a preset image acquired in advance through operation parameters to obtain deformation displacement parameters, wherein the operation parameters are parameters corresponding to the target face after the operation instructions are executed on the target face;
and processing the target face in a second image of the target video through the deformation displacement parameters to obtain the target video, wherein the second image is other image frames except the image frame corresponding to the first image in the target video.
According to one or more embodiments of the present disclosure, the processing of the face of the pre-acquired preset image by using the operation parameter to obtain the deformation displacement parameter includes:
determining a first mapping relation between the first image and the preset image, wherein the first mapping relation is used for mapping and converting parameters in the first image into parameters in the preset image;
Determining a mapping operation parameter corresponding to the operation parameter of the operation instruction according to the first mapping relation;
and processing the face of the preset image acquired in advance through the mapping operation parameters to obtain deformation displacement parameters.
According to one or more embodiments of the present disclosure, the deformation displacement parameter is a parameter in a deformation displacement map, and the processing of the face of the preset image obtained in advance by using the mapping operation parameter to obtain the deformation displacement parameter includes:
performing deformation processing on grids corresponding to the faces in the preset images by using the mapping operation parameters;
rendering grids corresponding to the face after deformation processing in the preset image to obtain the deformation displacement map;
and acquiring the deformation displacement parameters according to the deformation displacement diagram.
According to one or more embodiments of the present disclosure, the determining the first mapping relationship between the first image and the preset image includes:
constructing a first coordinate system of the first image through key point information of a target face of the first image, and constructing a second coordinate system of the preset image through key point information of a face of the preset image;
And determining a first mapping relation between the first image and the preset image according to the conversion relation between the first coordinate system and the second coordinate system.
According to one or more embodiments of the present disclosure, the processing of the target face in the second image by using the deformation displacement parameter to obtain a target image includes:
determining a second mapping relation between the second image and the preset image, wherein the second mapping relation is used for mapping and converting parameters in the preset image into parameters in the second image;
determining a mapping deformation displacement parameter corresponding to the deformation displacement parameter according to the second mapping relation;
and processing the target face in the second image through the mapping deformation displacement parameters to obtain a target video.
According to one or more embodiments of the present disclosure, the determining a second mapping relationship between the second image of the target video and the preset image includes:
constructing a third coordinate system of a second image of the target video through key point information of a target face of the second image, and constructing a fourth coordinate system of the preset image through key point information of a face of the preset image;
Constructing a first face grid of the second image through the third coordinate system, and constructing a second face grid of the preset image through the fourth coordinate system, wherein the first face grid corresponds to the vertexes of the second face grid one by one;
and determining a second mapping relation between the second image and the preset image according to the corresponding relation between the first face grid and the second face grid.
According to one or more embodiments of the present disclosure, the mapping deformation displacement parameter is a parameter in a deformation displacement map, and the processing of the target face in the second image by using the mapping deformation displacement parameter to obtain a target video includes:
when a fault position exists in the deformation displacement diagram, gaussian blur processing is carried out on the fault position of the deformation displacement diagram;
and processing the parameters in the deformation displacement graph after Gaussian blur processing on the target face in the second image to obtain a target video.
According to one or more embodiments of the present disclosure, the number of the second images is at least two frames, and the processing of the target face in the second image of the target video by using the deformation displacement parameter to obtain the target video includes:
And processing the target face in the second image of each frame according to the deformation displacement parameters and the preset sequence to obtain at least two frames of target images so as to obtain a corrected target video.
According to one or more embodiments of the present disclosure, the preset sequence includes: a display order of the at least two frames of second images in the target video.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
the device comprises a receiving unit, a processing unit and a processing unit, wherein the receiving unit is used for receiving a first input of a first image of a target video by a user, and the first input is used for inputting an operation instruction to a target face of the first image;
the first processing unit is used for responding to the first input, processing the face of a preset image acquired in advance through operation parameters to obtain deformation displacement parameters, wherein the operation parameters are parameters corresponding to the target face after the operation instructions are executed on the target face;
and the second processing unit is used for processing the target face in a second image of the target video through the deformation displacement parameters to obtain the target video, wherein the second image is other image frames except the image frame corresponding to the first image in the target video.
According to one or more embodiments of the present disclosure, the first processing unit is specifically configured to:
determining a first mapping relation between the first image and the preset image, wherein the first mapping relation is used for mapping and converting parameters in the first image into parameters in the preset image;
determining a mapping operation parameter corresponding to the operation parameter of the operation instruction according to the first mapping relation;
and processing the face of the preset image acquired in advance through the mapping operation parameters to obtain deformation displacement parameters.
According to one or more embodiments of the present disclosure, the deformation displacement parameter is a parameter in a deformation displacement map, and the first processing unit is specifically configured to:
performing deformation processing on grids corresponding to the faces in the preset images by using the mapping operation parameters;
rendering grids corresponding to the face after deformation processing in the preset image to obtain the deformation displacement map;
and acquiring the deformation displacement parameters according to the deformation displacement diagram.
According to one or more embodiments of the present disclosure, the first processing unit is specifically configured to:
constructing a first coordinate system of the first image through key point information of a target face of the first image, and constructing a second coordinate system of the preset image through key point information of a face of the preset image;
And determining a first mapping relation between the first image and the preset image according to the conversion relation between the first coordinate system and the second coordinate system.
According to one or more embodiments of the present disclosure, the second processing unit is specifically configured to:
determining a second mapping relation between a second image of the target video and the preset image, wherein the second mapping relation is used for mapping and converting parameters in the preset image into parameters in the second image;
determining a mapping deformation displacement parameter corresponding to the deformation displacement parameter according to the second mapping relation;
and processing the target face in the second image through the mapping deformation displacement parameters to obtain a target video.
According to one or more embodiments of the present disclosure, the second processing unit is specifically configured to:
construct a third coordinate system of a second image of the target video through key point information of the target face of the second image, and construct a fourth coordinate system of the preset image through key point information of the face of the preset image;
construct a first face grid of the second image through the third coordinate system, and construct a second face grid of the preset image through the fourth coordinate system, wherein vertices of the first face grid correspond one-to-one to vertices of the second face grid;
and determine the second mapping relation between the second image and the preset image according to the correspondence between the first face grid and the second face grid.
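With vertices in one-to-one correspondence, a point can be carried between the two face grids through the barycentric coordinates of the triangle containing it, since those coordinates transfer unchanged to the corresponding triangle. A sketch for a single known triangle pair (names illustrative, not from the disclosure):

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in triangle tri (3x2 array of vertices)."""
    a, b, c = tri
    m = np.array([b - a, c - a]).T          # 2x2 edge matrix
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def map_through_meshes(p, tri_src, tri_dst):
    """Map a point to the same barycentric position in the corresponding
    triangle of the other face grid (vertices correspond one-to-one)."""
    w = barycentric(p, tri_src)
    return w @ tri_dst

tri_preset = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # preset-image triangle
tri_frame = np.array([[100.0, 50.0], [120.0, 50.0], [100.0, 70.0]])  # second-image triangle
print(map_through_meshes(np.array([5.0, 5.0]), tri_preset, tri_frame))
```

A full second mapping relation would first locate, for each point, which triangle of the grid contains it, then apply this per-triangle transfer.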
According to one or more embodiments of the present disclosure, the mapped deformation displacement parameter is a parameter in a deformation displacement map, and the second processing unit is specifically configured to:
when a fault (discontinuity) position exists in the deformation displacement map, perform Gaussian blur processing on the fault position of the deformation displacement map;
and process the target face in the second image through the parameters in the deformation displacement map after the Gaussian blur processing, to obtain the target video.
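The fault-smoothing step might look like the following 1-D sketch: detect positions where adjacent displacement values jump, then replace values in a small window around each jump with their Gaussian-blurred counterparts. The threshold, window size, and function names are assumptions, not from the disclosure:

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def smooth_faults(profile, jump_thresh=2.0, radius=2, sigma=1.0):
    """Blur a 1-D slice of a displacement map only around 'fault'
    (discontinuity) positions, leaving smooth regions untouched."""
    blurred = np.convolve(profile, gaussian_kernel(radius, sigma), mode="same")
    fault = np.zeros(profile.shape, dtype=bool)
    for i in np.nonzero(np.abs(np.diff(profile)) > jump_thresh)[0]:
        fault[max(0, i - radius):i + radius + 2] = True   # window around each jump
    return np.where(fault, blurred, profile)

profile = np.array([0.0] * 5 + [8.0] * 5)   # hard seam between index 4 and 5
out = smooth_faults(profile)
print(out.round(2))
```

A 2-D displacement map would be handled the same way, with a 2-D Gaussian kernel applied only in a mask around the detected discontinuities.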
According to one or more embodiments of the present disclosure, the number of the second images is at least two frames, and the second processing unit is specifically configured to:
process the target face in each frame of second image according to the deformation displacement parameters and a preset sequence to obtain at least two frames of target images, so as to obtain a corrected target video.
According to one or more embodiments of the present disclosure, the preset sequence includes: a display order of the at least two frames of second images in the target video.
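Propagating the edit in display order can be sketched as a simple pass over the frame sequence; `propagate_edit` and `apply_warp` are illustrative stand-ins for the second processing unit's per-frame warp, not names from the disclosure:

```python
def propagate_edit(frames, first_index, apply_warp):
    """Warp every frame except the user-edited one, visiting frames in
    display (playback) order; `frames` is already in display order."""
    return [frame if i == first_index else apply_warp(frame)
            for i, frame in enumerate(frames)]

# Toy stand-in: the "warp" just tags each processed frame.
edited = propagate_edit(["f0", "f1", "f2", "f3"], 1, lambda f: f + "*")
print(edited)   # ['f0*', 'f1', 'f2*', 'f3*']
```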
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device comprising: at least one processor and memory;
The memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video processing method of the first aspect and the various possible implementations of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the video processing method of the first aspect and the various possible implementations of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the video processing method of the first aspect and the various possible implementations of the first aspect.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A video processing method, comprising:
receiving a first input of a user on a first image of a target video, wherein the first input is used for inputting an operation instruction to a target face in the first image;
responding to the first input, processing a face of a preset image acquired in advance through operation parameters to obtain deformation displacement parameters, wherein the operation parameters are parameters corresponding to the target face after the operation instruction is executed on the target face;
and processing the target face in a second image of the target video through the deformation displacement parameters to obtain the target video, wherein the second image is an image frame of the target video other than the image frame corresponding to the first image.
2. The method according to claim 1, wherein the processing the face of the pre-acquired preset image by the operation parameter to obtain the deformation displacement parameter includes:
determining a first mapping relation between the first image and the preset image, wherein the first mapping relation is used for mapping and converting parameters in the first image into parameters in the preset image;
determining a mapping operation parameter corresponding to the operation parameter according to the first mapping relation;
and processing the face of the preset image acquired in advance through the mapping operation parameters to obtain deformation displacement parameters.
3. The method according to claim 2, wherein the deformation displacement parameter is a parameter in a deformation displacement map, and the processing the face of the pre-acquired preset image through the mapping operation parameters to obtain the deformation displacement parameter includes:
performing deformation processing on a grid corresponding to the face in the preset image by using the mapping operation parameters;
rendering the deformed grid corresponding to the face in the preset image to obtain the deformation displacement map;
and acquiring the deformation displacement parameter from the deformation displacement map.
4. The method of claim 2, wherein the determining a first mapping relationship between the first image and the preset image comprises:
constructing a first coordinate system of the first image through key point information of a target face of the first image, and constructing a second coordinate system of the preset image through key point information of a face of the preset image;
and determining a first mapping relation between the first image and the preset image according to the conversion relation between the first coordinate system and the second coordinate system.
5. The method according to any one of claims 1 to 4, wherein the processing the target face in the second image of the target video by the deformation displacement parameter to obtain the target video includes:
determining a second mapping relation between a second image of the target video and the preset image, wherein the second mapping relation is used for mapping and converting parameters in the preset image into parameters in the second image;
determining a mapping deformation displacement parameter corresponding to the deformation displacement parameter according to the second mapping relation;
and processing the target face in the second image through the mapping deformation displacement parameter to obtain the target video.
6. The method of claim 5, wherein determining a second mapping relationship between the second image of the target video and the preset image comprises:
constructing a third coordinate system of a second image of the target video through key point information of a target face of the second image, and constructing a fourth coordinate system of the preset image through key point information of a face of the preset image;
constructing a first face grid of the second image through the third coordinate system, and constructing a second face grid of the preset image through the fourth coordinate system, wherein vertices of the first face grid correspond one-to-one to vertices of the second face grid;
and determining the second mapping relation between the second image and the preset image according to the correspondence between the first face grid and the second face grid.
7. The method according to claim 5, wherein the mapped deformation displacement parameter is a parameter in a deformation displacement map, and the processing the target face in the second image of the target video through the mapped deformation displacement parameter to obtain the target video includes:
when a fault (discontinuity) position exists in the deformation displacement map, performing Gaussian blur processing on the fault position of the deformation displacement map;
and processing the target face in the second image through the parameters in the deformation displacement map after the Gaussian blur processing, to obtain the target video.
8. The method according to any one of claims 1 to 4, wherein the number of the second images is at least two frames, and the processing the target face in the second image of the target video by the deformation displacement parameter to obtain the target video includes:
processing the target face in each frame of second image according to the deformation displacement parameters and a preset sequence to obtain at least two frames of target images, so as to obtain a corrected target video.
9. The method of claim 8, wherein the preset sequence comprises: a display order of the at least two frames of second images in the target video.
10. A video processing apparatus, comprising:
a receiving unit, configured to receive a first input of a user on a first image of a target video, wherein the first input is used for inputting an operation instruction to a target face in the first image;
a first processing unit, configured to respond to the first input and process a face of a preset image acquired in advance through operation parameters to obtain deformation displacement parameters, wherein the operation parameters are parameters corresponding to the target face after the operation instruction is executed on the target face;
and a second processing unit, configured to process the target face in a second image of the target video through the deformation displacement parameters to obtain the target video, wherein the second image is an image frame of the target video other than the image frame corresponding to the first image.
11. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the video processing method according to any one of claims 1 to 9.
12. A computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the video processing method of any of claims 1 to 9.
13. A computer program product comprising a computer program which, when executed by a processor, implements the video processing method according to any one of claims 1 to 9.
CN202210951359.1A 2022-08-09 2022-08-09 Video processing method and device Pending CN117636408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210951359.1A CN117636408A (en) 2022-08-09 2022-08-09 Video processing method and device

Publications (1)

Publication Number Publication Date
CN117636408A true CN117636408A (en) 2024-03-01

Family

ID=90029048



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination