WO2023020283A1 - Image processing method, apparatus, device, medium, and program product - Google Patents

Image processing method, apparatus, device, medium, and program product

Info

Publication number
WO2023020283A1
WO2023020283A1 PCT/CN2022/110097 CN2022110097W
Authority
WO
WIPO (PCT)
Prior art keywords
deformation
image
processed
texture
superimposed
Prior art date
Application number
PCT/CN2022/110097
Other languages
English (en)
French (fr)
Inventor
曾光
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to US 18/567,138 (published as US20240221257A1)
Publication of WO2023020283A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/44 Morphing

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to an image processing method, device, equipment, medium and program product.
  • Image deformation is a common method in image processing, for example, to perform deformation processing such as thin face, thin body, and big eyes on the characters in the image.
  • when the current technology performs multiple deformation operations on an image, it first applies one deformation to the original image, then uses the output of that deformation as the input of the next deformation, repeatedly superimposing on the original image in this way to obtain the final deformation effect.
  • the present disclosure provides an image processing method, apparatus, device, medium, and program product, which are used to solve the technical problem that the original image must be updated multiple times when an object in the image is deformed multiple times, causing the superimposed deformed image to become blurred.
  • an embodiment of the present disclosure provides an image processing method, including:
  • acquiring a sequence of deformation instructions acting on the target object in the image to be processed, wherein the sequence of deformation instructions includes a plurality of deformation instructions input in sequence;
  • sequentially superimposing, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed;
  • deforming the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.
  • an image processing device including:
  • a deformation command acquiring module configured to acquire a sequence of deformation commands acting on the target object in the image to be processed, wherein the sequence of deformation commands includes a plurality of sequentially inputted deformation commands;
  • the deformation displacement superposition module is configured to sequentially superimpose, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed;
  • the target object deformation module is configured to deform the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.
  • an electronic device including:
  • a memory for storing a computer program executable by the processor;
  • the processor is configured to implement the image processing method described in the above first aspect and various possible designs of the first aspect by executing the computer program.
  • an embodiment of the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the image processing method described in the above first aspect and its various possible designs is implemented.
  • an embodiment of the present disclosure provides a computer program product, including computer instructions.
  • the computer instructions are executed by a processor, the image processing method described in the above first aspect and various possible designs of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program, which implements the image processing method described in the above first aspect and various possible designs of the first aspect when the computer program is executed by a processor.
  • the image processing method, apparatus, device, medium, and program product provided by the embodiments of the present disclosure acquire a sequence of deformation instructions acting on a target object in an image to be processed, and then sequentially superimpose, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and its resolution is smaller than that of the image to be processed.
  • finally, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate the processed image.
  • in this way, the total deformation displacement information is first accumulated on the low-resolution image texture to be processed, and then a single interpolation operation is performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Since the superposition of deformation effects and displacements is carried out on a smaller-resolution image, the overhead is relatively small, so the method can still run smoothly when multiple effects are superimposed on a mobile terminal. In addition, since only one interpolation operation is performed, the loss of image quality is smaller.
  • FIG. 1 is an application scene diagram of an image processing method according to an example embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method according to an example embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of an image processing method according to another exemplary embodiment of the present disclosure.
  • Fig. 4 is a schematic flowchart of an image processing method according to another exemplary embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of an image to be processed in an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a grid constructed for an initial key point set in an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of the displacement of the deformation displacement region in the embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a grid constructed for adjusting a set of key points in an embodiment of the present disclosure
  • Fig. 9 is an image after processing in an embodiment of the present disclosure.
  • Fig. 10 is a schematic structural diagram of an image processing device according to an example embodiment of the present disclosure.
  • Fig. 11 is a schematic structural diagram of an image processing device according to another exemplary embodiment of the present disclosure.
  • Fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • Image deformation is a common method in image processing, for example, to perform deformation processing such as thin face, thin body, and big eyes on the characters in the image.
  • the main flow of current image deformation algorithms is generally as follows:
  • the existing algorithm first applies the first deformation, then uses the output of the first deformation as the input of the next deformation, repeating and superimposing in this way to obtain the final deformation effect.
  • this method needs to perform calculations on the high-resolution image multiple times, causing serious performance problems; moreover, each deformation effect requires an interpolation operation, so the image becomes blurred after multiple superpositions.
  • in view of this, the embodiments of the present disclosure provide an image processing method, apparatus, device, medium, and program product, which acquire a sequence of deformation instructions acting on the target object in the image to be processed, and then sequentially superimpose the deformation displacement on the image texture to be processed according to the deformation instructions in the sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and its resolution is smaller than that of the image to be processed.
  • the target object in the image to be processed is then deformed according to the superimposed deformed image texture to generate a processed image.
  • in this way, the total deformation displacement information is first accumulated on the low-resolution image texture to be processed, and then a single interpolation operation is performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Since the superposition of deformation effects and displacements is carried out on a smaller-resolution image, the overhead is relatively small, so the method can still run smoothly when multiple effects are superimposed on a mobile terminal. In addition, since only one interpolation operation is performed, the loss of image quality is smaller.
  • Fig. 1 is an application scene diagram showing an image processing method according to an example embodiment of the present disclosure.
  • the image processing method provided in this embodiment can be executed by a terminal device with a camera and a display screen.
  • the video capture of the target object may be performed through a camera on the terminal device (for example, a front camera, a rear camera, an external camera, etc.).
  • the video of the target object can also be obtained by the user uploading locally stored video data, or by receiving video data captured by other terminal devices.
  • the target object may be illustrated as a human face.
  • the target object may include at least one of a face object, a body object, and an inanimate object.
  • taking the target object being a face object as an example: when the face needs to be thinned, the points of the jaw are moved toward the inside of the face. When the face is processed multiple times, for example to both thin the face and enlarge the eyes, the jaw points are first moved toward the inside of the face and then the eyelid points are moved outward, or the eyelid points are moved outward first and then the jaw points are moved toward the inside of the face.
  • the deformation displacement is sequentially superimposed on the texture of the image to be processed according to the deformation commands in the deformation command sequence, so as to determine the texture of the superimposed deformed image.
  • the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed. Then, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image.
  • in this process, the deformation displacement corresponding to each deformation instruction is superimposed on the lower-resolution image texture to be processed, and after the deformation displacements corresponding to all deformation instructions have been superimposed, the superimposed deformed image texture is used to deform the target object in the image to be processed in a single pass. In this way, the deformation displacements are applied on the lower-resolution texture, which greatly reduces the amount of calculation and improves performance, and only one interpolation operation is needed, which effectively avoids image blurring.
  • Fig. 2 is a schematic flowchart of an image processing method according to an example embodiment of the present disclosure. As shown in Figure 2, the image processing method provided in this embodiment includes:
  • Step 101 Obtain a sequence of deformation instructions acting on a target object in an image to be processed.
  • the target object can be deformed by moving the key points of the area to be deformed.
  • the target object can be described as a human face. By moving the point of the jaw to the inside of the face, the effect of thinning the face can be achieved, and by moving the point of the eyelid outward, the effect of widening the eyes can be achieved.
  • the sequence of deformation instructions acting on the target object in the image to be processed can be obtained, wherein the sequence of deformation instructions includes a plurality of deformation instructions input in sequence, for example, a face-thinning deformation instruction and an eye-enlarging deformation instruction input in sequence.
  • Step 102 On the image texture to be processed, sequentially superimpose the deformation displacement corresponding to each deformation command in the deformation command sequence, so as to determine the superimposed deformed image texture.
  • specifically, the deformation displacement can be sequentially superimposed on the image texture to be processed according to the deformation instructions in the sequence of deformation instructions to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than the resolution of the image to be processed.
  • the image texture to be processed may first be generated according to the image to be processed, wherein the resolution of the image texture to be processed is smaller than that of the image to be processed; for example, the image to be processed may have a resolution of 3000×4000, while the image texture to be processed may have a resolution of 512×512.
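  • As a concrete sketch of this step (a Python model rather than the graphics-API implementation the text later mentions; the grid-of-tuples representation and the helper name pixel_to_uv are illustrative assumptions), the low-resolution texture can be seen as an all-zero field of (du, dv) displacements:

```python
# Illustrative resolutions taken from the example above.
tex_w, tex_h = 512, 512      # low-resolution image texture to be processed
img_w, img_h = 3000, 4000    # high-resolution image to be processed

# TexMap starts as a zero displacement field: one (du, dv) pair per texel.
tex_map = [[(0.0, 0.0) for _ in range(tex_w)] for _ in range(tex_h)]

# Because texels are addressed by normalised (u, v) coordinates in [0, 1],
# the same displacement field can later be applied at any image resolution.
def pixel_to_uv(x, y, w, h):
    return ((x + 0.5) / w, (y + 0.5) / h)
```

Accumulating displacements on the 512×512 field instead of the 3000×4000 image is what keeps the per-instruction cost low.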
  • Step 103 Deform the target object in the image to be processed according to the superimposed deformed image texture to generate a processed image.
  • the target object in the image to be processed may be deformed according to the superimposed deformed image texture to generate a processed image.
  • in this embodiment, the sequence of deformation instructions acting on the target object in the image to be processed is obtained, and then the deformation displacement is sequentially superimposed on the image texture to be processed according to the deformation instructions in the sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed.
  • the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image.
  • the superimposed deformed image texture is obtained by using the low-resolution image texture to be processed to accumulate the total deformation displacement information, and then a single interpolation operation is performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Since the superposition of deformation effects and displacements is carried out on a smaller-resolution image, the overhead is small, so the method can still run smoothly when multiple effects are superimposed on a mobile terminal. In addition, because only one interpolation operation is performed, the loss of image quality is smaller.
  • Fig. 3 is a schematic flowchart of an image processing method according to another exemplary embodiment of the present disclosure. As shown in Figure 3, the image processing method provided in this embodiment includes:
  • Step 201 Obtain a sequence of deformation instructions acting on a target object in an image to be processed.
  • the area can be deformed by moving the key points of the area where the target object needs to be deformed.
  • the target object can be described as a human face. By moving the point of the jaw to the inside of the face, the effect of thinning the face can be achieved, and by moving the point of the eyelid outward, the effect of widening the eyes can be achieved.
  • when processing the image to be processed, the user usually needs to perform deformation processing on multiple parts, so during the processing, the sequence of deformation instructions acting on the target object in the image to be processed can be obtained, wherein the sequence of deformation instructions includes a plurality of deformation instructions input in sequence, for example, a face-thinning deformation instruction and an eye-enlarging deformation instruction input in sequence.
  • Step 202 Superimpose the first deformation displacement on the image texture to be processed according to the first deformation instruction in the deformation instruction sequence, so as to determine the first deformed image texture.
  • Step 203 Superimpose the second deformation displacement on the first deformed image texture according to the second deformation instruction in the deformation instruction sequence, so as to determine the second deformed image texture.
  • Step 204 Starting from the first deformation instruction in the deformation instruction sequence, execute steps 202 to 203 in a loop until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • specifically, the first deformation displacement can be superimposed on the image texture to be processed according to the first deformation instruction in the sequence to determine the first deformed image texture; then the second deformation displacement is superimposed on the first deformed image texture according to the second deformation instruction to determine the second deformed image texture; then the third deformation displacement is superimposed on the second deformed image texture according to the third deformation instruction to determine the third deformed image texture; and so on, until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • for example, the face-thinning deformation displacement can be superimposed on the image texture to be processed according to the face-thinning deformation instruction in the sequence to determine the first deformed image texture, and then the eye-enlarging deformation displacement is superimposed on the first deformed image texture according to the eye-enlarging deformation instruction in the sequence, so as to determine the superimposed deformed image texture.
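  • The superposition loop above can be sketched as follows (a minimal Python model with hypothetical field values; real displacement fields would be localised to the jaw or eye regions rather than uniform):

```python
def superimpose(tex_map, displacement):
    """Add one deformation's displacement field into the running total.
    Both arguments are equally sized grids of (du, dv) pairs."""
    for y, row in enumerate(displacement):
        for x, (du, dv) in enumerate(row):
            odu, odv = tex_map[y][x]
            tex_map[y][x] = (odu + du, odv + dv)
    return tex_map

# Toy 4x4 fields standing in for face-thinning and eye-enlarging displacements.
tex_map  = [[(0.0, 0.0)] * 4 for _ in range(4)]
thin     = [[(0.1, 0.0)] * 4 for _ in range(4)]   # hypothetical values
big_eyes = [[(0.0, -0.2)] * 4 for _ in range(4)]  # hypothetical values
for field in (thin, big_eyes):
    tex_map = superimpose(tex_map, field)
# each texel now holds the accumulated displacement of both instructions
```

Only this small grid is touched per instruction; the high-resolution image is not read or written during the loop.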
  • the target object may also include a first target object and a second target object, the first deformation instruction is used to deform the first target object, and the second deformation instruction is used to deform the second target object.
  • for example, the face-thinning deformation instruction can be used to deform the first target face, and the eye-enlarging deformation instruction can be used to deform the second target face.
  • in this case, the face-thinning deformation displacement is superimposed on the image texture to be processed according to the face-thinning deformation instruction, that is, the face-thinning deformation displacement is superimposed on the area corresponding to the first target face, so as to determine the first deformed image texture; then the eye-enlarging deformation displacement is superimposed on the first deformed image texture according to the eye-enlarging deformation instruction in the sequence, that is, the eye-enlarging deformation displacement is superimposed on the area corresponding to the second target face, so as to determine the superimposed deformed image texture.
  • Step 205 Deform the target object in the image to be processed according to the superimposed deformed image texture to generate a processed image.
  • the target object in the image to be processed may be deformed according to the superimposed deformed image texture to generate a processed image.
  • specifically, the position of each original key point can be adjusted according to the position of the corresponding superimposed deformation key point, so that the adjusted position of the original key point coincides with the position of the superimposed deformation key point, wherein a superimposed deformation key point position is the position of a key point on the target object in the superimposed deformed image texture, an original key point position is the position of the corresponding key point on the target object in the image texture to be processed, and the processed image is the image generated after this position adjustment.
  • in other words, each key point in the superimposed deformed image texture is used as a reference, and each corresponding key point in the image texture to be processed is moved to the position of its reference key point, thereby generating the processed image.
  • Fig. 4 is a schematic flowchart of an image processing method according to yet another exemplary embodiment of the present disclosure. As shown in Figure 4, the image processing method provided by this embodiment includes:
  • Step 301 Obtain an initial set of key points of a target object in an image to be processed.
  • the initial key point set of the target object in the image to be processed can be obtained through a preset key point model, wherein the initial key point set is used as the initial position information of the texture of the image to be processed before superimposed deformation and displacement.
  • Fig. 5 is a schematic diagram of an image to be processed in an embodiment of the present disclosure.
  • for the input original image, that is, the image to be processed, the initial key point set of the face obtained through the preset key point model can be named PtsA.
  • Step 302. Obtain a sequence of deformation instructions acting on the target object in the image to be processed.
  • when processing the image to be processed, the user usually needs to perform deformation processing on multiple parts, so during the processing, the sequence of deformation instructions acting on the target object in the image to be processed can be obtained, wherein the sequence of deformation instructions includes a plurality of deformation instructions input in sequence, for example, a face-thinning deformation instruction and an eye-enlarging deformation instruction input in sequence.
  • FIG. 6 is a schematic diagram of a grid constructed for an initial set of key points in an embodiment of the present disclosure. As shown in FIG. 6 , when the user needs to perform deformation processing on the target object in the image to be processed, the target object can be deformed by moving the key points of the area to be deformed.
  • Step 303 in response to the first deformation instruction, move at least one key point in the initial key point set to generate an adjusted key point set.
  • At least one key point in the initial key point set is moved to generate an adjusted key point set.
  • taking the face-thinning deformation instruction as an example of the first deformation instruction: after the jaw points are moved toward the inside of the face, the face-thinning effect is achieved; that is, a deformation displacement operation is performed on the initial face key point set PtsA to obtain the adjusted key point set PtsB.
  • FIG. 7 is a schematic diagram of the displacement of the deformation displacement region in the embodiment of the present disclosure.
  • the key points in the jaw area in the initial key point set are moved.
  • the gray (light) region is the area without displacement. The current deformation operation thins the lower jaw, and it can be seen that there is a displacement change in the jaw region (that is, a dark area appears).
  • Step 304 generating a grid according to the set of adjusted key points.
  • the mesh may be generated according to the set of adjusted key points, wherein the vertex information of the mesh includes initial position information of each key point in the initial set of key points and adjusted position information of each key point in the set of adjusted key points.
  • FIG. 8 is a schematic diagram of a grid constructed for adjusting a set of key points in an embodiment of the present disclosure.
  • specifically, a triangulation algorithm can be used to construct the mesh, with the initial face key point set PtsA used as the uvA attribute of the mesh vertices and the adjusted key point set PtsB used as the uvB attribute.
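  • A minimal sketch of this vertex layout (the coordinate values are hypothetical, and a Delaunay-style triangulation would produce the actual triangle indices):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    uvA: tuple  # initial key point position, from PtsA
    uvB: tuple  # adjusted key point position, from PtsB

# Three jaw-area vertices with hypothetical normalised coordinates;
# the middle (chin) point is left unmoved.
verts = [
    Vertex(uvA=(0.30, 0.60), uvB=(0.32, 0.60)),  # jaw point moved inward
    Vertex(uvA=(0.50, 0.80), uvB=(0.50, 0.80)),  # chin point, unmoved
    Vertex(uvA=(0.70, 0.60), uvB=(0.68, 0.60)),  # opposite jaw point
]
tris = [(0, 1, 2)]  # triangle indices into verts

# The per-vertex displacement written into the texture is uvA - uvB,
# matching the offset definition the text gives for step 306.
offsets = [(v.uvA[0] - v.uvB[0], v.uvA[1] - v.uvB[1]) for v in verts]
```

Rasterising the triangles interpolates these per-vertex offsets across the texels they cover.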
  • Step 305 generating an image texture to be processed corresponding to the image to be processed.
  • specifically, a graphics library such as OpenGL (or another graphics library such as Metal, Vulkan, etc.) may be used to create an image texture TexMap to be processed that is smaller in size than the high-resolution image TexSrc to be processed.
  • Step 306 Superimpose the first deformation displacement on the image texture to be processed according to the first deformation instruction in the deformation instruction sequence, so as to determine the first deformed image texture.
  • the first deformation displacement is superimposed on the image texture to be processed, wherein the first deformation displacement of each key point is a difference between the adjusted position information and the initial position information.
  • to preserve precision, the floating-point offset is converted into two 8-bit precision numbers for storage; that is, the first deformation displacement corresponding to each mesh vertex is stored as two 8-bit numbers, which hold the integer part and the fractional part of the first deformation displacement respectively.
  • the offset is computed as offset = uvA - uvB.
  • where float denotes the floating-point data type, floor is the round-down function, fract is the function that takes the fractional part of a number, and offset is the displacement of each key point.
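  • The split-into-two-bytes scheme can be sketched as follows (a Python model of the GLSL floor/fract idea; the bias of 128 and the 1/255 quantisation step are illustrative assumptions, since the text does not give the exact packing):

```python
import math

BIAS = 128      # assumed bias so negative integer parts fit in a byte
SCALE = 255.0   # assumed quantisation step for the fractional part

def encode_offset(offset):
    """Split a float offset into two 8-bit values: the (biased) integer
    part, floor(offset), and the quantised fractional part, fract(offset)."""
    i = math.floor(offset)          # integer part (may be negative)
    f = offset - i                  # fractional part, always in [0, 1)
    hi = i + BIAS                   # biased integer part, 0..255
    lo = int(round(f * SCALE))      # fractional part quantised to 0..255
    assert 0 <= hi <= 255 and 0 <= lo <= 255
    return hi, lo

def decode_offset(hi, lo):
    """Recombine the two bytes into a float offset."""
    return (hi - BIAS) + lo / SCALE
```

Round-tripping an offset this way loses at most roughly 1/255, far less than storing the whole value in a single 8-bit channel.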
  • Step 307 Superimpose the second deformation displacement on the first deformed image texture according to the second deformation instruction in the deformation instruction sequence, so as to determine the second deformed image texture.
  • when multiple deformations are to be superimposed, steps 301 to 306 are simply repeated. For example, when there is a second deformation instruction in the deformation instruction sequence, the processing steps applied to the first deformation instruction in steps 301 to 306 can be followed, so as to superimpose the second deformation displacement on the first deformed image texture according to the second deformation instruction and determine the second deformed image texture.
  • Step 308 Starting from the first deformation instruction in the deformation instruction sequence, execute steps 306 to 307 in a loop until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • specifically, the first deformation displacement can be superimposed on the image texture to be processed according to the first deformation instruction in the sequence to determine the first deformed image texture; then the second deformation displacement is superimposed on the first deformed image texture according to the second deformation instruction to determine the second deformed image texture; then the third deformation displacement is superimposed on the second deformed image texture according to the third deformation instruction to determine the third deformed image texture; and so on, until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • Step 309 Deform the target object in the image to be processed according to the texture of the superimposed deformed image, so as to generate a processed image.
  • FIG. 9 is an image after processing in an embodiment of the present disclosure.
  • the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image.
  • the processed image TexDst is obtained by using the superimposed deformed image texture TexMap to offset the image to be processed TexSrc: if the value of TexMap at coordinate CoordA is offsetA, then the value of TexDst at coordinate CoordA is equal to the value of TexSrc at coordinate (CoordA + offsetA).
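  • This single remap step can be sketched as follows (a Python model with nearest-neighbour sampling standing in for the interpolation the text describes; in the actual method TexMap is lower-resolution than TexSrc and would be sampled via normalised coordinates):

```python
def warp(tex_src, tex_map):
    """For every destination coordinate CoordA, read the accumulated
    offsetA from tex_map and sample tex_src once at CoordA + offsetA."""
    h, w = len(tex_src), len(tex_src[0])
    tex_dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            du, dv = tex_map[y][x]                       # offsetA at CoordA
            sx = min(max(int(round(x + du)), 0), w - 1)  # clamp to the image
            sy = min(max(int(round(y + dv)), 0), h - 1)
            tex_dst[y][x] = tex_src[sy][sx]              # TexSrc at CoordA + offsetA
    return tex_dst

# Toy example: a uniform offset of (+1, 0) shifts the sampling to the right.
tex_src = [[0, 1], [2, 3]]
tex_map = [[(1.0, 0.0) for _ in range(2)] for _ in range(2)]
tex_dst = warp(tex_src, tex_map)   # -> [[1, 1], [3, 3]]
```

However many deformation instructions were accumulated, this lookup touches the high-resolution source exactly once per output pixel.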
  • in summary, the total deformation displacement information is first accumulated on the low-resolution image texture to be processed, and then a single interpolation operation is performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Since the superposition of deformation effects and displacements is carried out on a smaller-resolution image, the overhead is relatively small, so the method can still run smoothly when multiple effects are superimposed on a mobile terminal. In addition, since only one interpolation operation is performed, the loss of image quality is smaller.
  • Fig. 10 is a schematic structural diagram of an image processing device according to an example embodiment of the present disclosure. As shown in FIG. 10, the image processing device 400 provided in this embodiment includes:
  • a deformation instruction acquisition module 401 configured to acquire a sequence of deformation instructions acting on the target object in the image to be processed, the sequence of deformation instructions includes a plurality of deformation instructions input in sequence;
  • the deformation displacement superposition module 402 is configured to sequentially superimpose, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed;
  • the target object deformation module 403 is further configured to deform the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.
  • the deformation displacement superposition module 402 is specifically configured to:
  • Step 1: superimpose a first deformation displacement on the image texture to be processed according to the first deformation instruction in the deformation instruction sequence, so as to determine a first deformed image texture;
  • Step 2: superimpose a second deformation displacement on the first deformed image texture according to the second deformation instruction in the deformation instruction sequence, so as to determine a second deformed image texture, the second deformation instruction being the instruction following the first deformation instruction;
  • Step 1 to Step 2 are executed cyclically until the deformation displacements corresponding to all deformation instructions in the deformation instruction sequence have been superimposed, so as to determine the superimposed deformed image texture.
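The Step 1 / Step 2 loop amounts to a fold over the instruction sequence. A minimal sketch, assuming for illustration that each deformation instruction can be represented as a callable returning the displacement field it contributes (the name `superimpose_displacements` is hypothetical, not from the patent):

```python
import numpy as np

def superimpose_displacements(shape, deformation_instructions):
    """Accumulate each instruction's displacement on a low-resolution
    displacement texture.

    shape: (h, w) of the low-resolution texture to be processed.
    deformation_instructions: iterable of callables; each takes the current
    (h, w, 2) displacement texture and returns the displacement field it
    adds (the Step 1 / Step 2 of the loop above).
    """
    tex_map = np.zeros(shape + (2,), dtype=np.float32)  # no deformation yet
    for instruction in deformation_instructions:
        # Step k: superimpose this instruction's displacement on the
        # texture produced by the previous instruction.
        tex_map = tex_map + instruction(tex_map)
    return tex_map
```

The final `tex_map` is the superimposed deformed image texture, applied to the high-resolution image only once at the end.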
  • if the target object includes a first target object and a second target object, the first deformation instruction is used to deform the first target object, and the second deformation instruction is used to deform the second target object
  • the deformation displacement superposition module 402 is also specifically used for:
  • the position of the corresponding original key point is adjusted according to the superimposed-deformation key point position so that the adjusted position coincides with it, wherein the superimposed-deformation key point position is the position of the key point on the target object in the superimposed deformed image texture, the original key point position is the position of the key point on the target object in the image texture to be processed, and the processed image is the image generated after this position adjustment.
  • FIG. 11 is a schematic structural diagram of an image processing device according to another example embodiment of the present disclosure.
  • the image processing device 400 provided in this embodiment further includes:
  • the key point determination module 404 is configured to acquire an initial key point set of the target object in the image to be processed, the initial key point set serving as the initial position information of the image texture to be processed before the deformation displacements are superimposed.
  • the deformation displacement superposition module 402 is specifically used for:
  • at least one key point in the initial key point set is moved to generate an adjusted key point set, the first deformation instruction acting on the at least one key point so that the at least one key point is displaced;
  • a grid is generated according to the adjusted key point set, and the vertex information of the grid includes initial position information of each key point of the initial key point set and adjusted position information of each key point of the adjusted key point set;
  • the first deformation displacement is superimposed on the image texture to be processed to determine the first deformation image texture.
  • Fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in FIG. 12 , it shows a schematic structural diagram of an electronic device 500 suitable for implementing the embodiments of the present disclosure.
  • the terminal equipment in the embodiments of the present disclosure may include but not limited to mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (personal digital assistant, PDA), tablet computers (portable android device, PAD), portable multimedia players (portable media player, PMP), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), wearable electronic devices, and other mobile terminals with image acquisition functions, as well as fixed terminals with image acquisition devices such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 12 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 500 may include a processor (such as a central processing unit, a graphics processing unit, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500.
  • the processor 501, ROM 502, and RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the memory is used to store programs for executing the image processing methods described in the above method embodiments; the processor is configured to execute the programs stored in the memory.
  • an input device 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509.
  • the communication means 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data. While FIG. 12 shows electronic device 500 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • the processes described above with reference to the flowcharts can be implemented as computer software programs.
  • the embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the image processing method shown in the flowcharts of the embodiments of the present disclosure.
  • the program code may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502.
  • when the computer program is executed by the processor, the above image processing functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable storage medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the computer readable medium may be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, radio frequency (radio frequency, RF), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable storage medium may be included in the above-mentioned electronic device, or may exist independently without being assembled into the electronic device.
  • the above-mentioned computer-readable storage medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: acquires a target image in response to a trigger instruction, the target image being a video frame preceding a reference image, where the reference image is the video frame currently acquired by the image sensor; and fills the target area in the reference image with the background area in the target image to generate a processed image, the target area being the area where the target object is located in the reference image
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • the client and the server can communicate using any currently known or future-developed network protocol such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (for example, a communication network)
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a module does not, under certain circumstances, constitute a limitation on the module itself; for example, the display module may also be described as "a unit that displays the target human face and human face mask sequence".
  • exemplary types of hardware logic components include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • an image processing method including:
  • acquiring a sequence of deformation instructions acting on the target object in the image to be processed, wherein the sequence of deformation instructions includes a plurality of deformation instructions input in sequence;
  • on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence is sequentially superimposed to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed,
  • and the resolution of the image texture to be processed is smaller than that of the image to be processed;
  • the sequentially superimposing, on the image texture to be processed, of the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence to determine the superimposed deformed image texture includes:
  • Step 1: superimposing a first deformation displacement on the image texture to be processed according to the first deformation instruction in the deformation instruction sequence, so as to determine a first deformed image texture;
  • Step 2: superimposing a second deformation displacement on the first deformed image texture according to the second deformation instruction in the deformation instruction sequence, so as to determine a second deformed image texture, the second deformation instruction being the instruction following the first deformation instruction;
  • Step 1 to Step 2 are executed cyclically until the deformation displacements corresponding to all deformation instructions in the deformation instruction sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • if the target object includes a first target object and a second target object, the first deformation instruction is used to deform the first target object, and the second deformation instruction is used to deform the second target object.
  • the deforming of the target object in the image to be processed according to the superimposed deformed image texture to generate a processed image includes:
  • adjusting the position of the corresponding original key point according to the superimposed-deformation key point position so that the adjusted position coincides with it, wherein the superimposed-deformation key point position is the position of the key point on the target object in the superimposed deformed image texture, the original key point position is the position of the key point on the target object in the image texture to be processed, and the processed image is the image generated after this position adjustment.
  • An initial key point set of the target object in the image to be processed is acquired, the initial key point set serving as the initial position information of the image texture to be processed before the deformation displacements are superimposed.
  • the superimposing of the first deformation displacement on the image texture to be processed according to the first deformation instruction in the sequence of deformation instructions to determine the first deformed image texture includes:
  • at least one key point in the initial key point set is moved to generate an adjusted key point set, the first deformation instruction acting on the at least one key point so that the at least one key point is displaced;
  • a grid is generated according to the adjusted key point set, and the vertex information of the grid includes initial position information of each key point of the initial key point set and adjusted position information of each key point of the adjusted key point set;
  • the first deformation displacement is superimposed on the image texture to be processed to determine the first deformation image texture.
  • the first deformation displacement corresponding to each grid vertex of the grid is stored in two 8-bit precision numbers, which are respectively used to store the integer part and the fractional part of the first deformation displacement.
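Using the formulas quoted later in the description (float a=floor(offset*255.0)/255.0; float b=fract(offset*255.0)), the split-and-recombine round trip can be sketched as follows. This is an illustrative sketch for offsets in [0, 1), not the patent's shader code; note that recovering the offset works out to a + b/255, since offset*255 = 255*a + b:

```python
import math

def encode_offset(offset):
    """Split an offset in [0, 1) into two 8-bit precision numbers,
    following the float a / float b formulas quoted in the description."""
    a = math.floor(offset * 255.0) / 255.0            # quantized integer part
    b = offset * 255.0 - math.floor(offset * 255.0)   # fractional part (fract)
    return a, b

def decode_offset(a, b):
    """Recombine the stored pair: offset*255 = 255*a + b, so offset = a + b/255."""
    return a + b / 255.0
```

Both stored values stay within [0, 1), which is what allows a device without floating-point texture support to hold them in two 8-bit channels.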
  • an image processing device including:
  • a deformation command acquiring module configured to acquire a sequence of deformation commands acting on the target object in the image to be processed, wherein the sequence of deformation commands includes a plurality of sequentially inputted deformation commands;
  • the deformation displacement superposition module is configured to sequentially superimpose, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the deformation instruction sequence, so as to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed,
  • and the resolution of the image texture to be processed is smaller than that of the image to be processed;
  • the target object deformation module is further configured to deform the target object in the image to be processed according to the texture of the superimposed deformed image, so as to generate a processed image.
  • the deformation displacement superposition module is specifically configured to:
  • Step 1: superimpose a first deformation displacement on the image texture to be processed according to the first deformation instruction in the deformation instruction sequence, so as to determine a first deformed image texture;
  • Step 2: superimpose a second deformation displacement on the first deformed image texture according to the second deformation instruction in the deformation instruction sequence, so as to determine a second deformed image texture, the second deformation instruction being the instruction following the first deformation instruction;
  • Step 1 to Step 2 are executed cyclically until the deformation displacements corresponding to all deformation instructions in the deformation instruction sequence have been superimposed, so as to determine the superimposed deformed image texture.
  • if the target object includes a first target object and a second target object, the first deformation instruction is used to deform the first target object, and the second deformation instruction is used to deform the second target object
  • the deformation displacement superposition module is also specifically used for:
  • the position of the corresponding original key point is adjusted according to the superimposed-deformation key point position so that the adjusted position coincides with it, wherein the superimposed-deformation key point position is the position of the key point on the target object in the superimposed deformed image texture, the original key point position is the position of the key point on the target object in the image texture to be processed, and the processed image is the image generated after this position adjustment.
  • the image processing device further includes:
  • a key point determining module configured to acquire an initial key point set of the target object in the image to be processed, the initial key point set being used as initial position information of the texture of the image to be processed before superimposed deformation and displacement.
  • the deformation displacement superposition module is specifically used for:
  • at least one key point in the initial key point set is moved to generate an adjusted key point set, the first deformation instruction acting on the at least one key point so that the at least one key point is displaced;
  • a grid is generated according to the adjusted key point set, and the vertex information of the grid includes initial position information of each key point of the initial key point set and adjusted position information of each key point of the adjusted key point set;
  • the first deformation displacement is superimposed on the image texture to be processed to determine the first deformation image texture.
  • an electronic device including:
  • a processor; and
  • a memory for storing a computer program of the processor;
  • the processor is configured to implement the image processing method described in the above first aspect and various possible designs of the first aspect by executing the computer program.
  • an embodiment of the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium; when a processor executes the computer-executable instructions, the image processing method described in the above first aspect and the various possible designs of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program product, including computer instructions.
  • when the computer instructions are executed by a processor, the image processing method described in the above first aspect and the various possible designs of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program, which implements the image processing method described in the above first aspect and various possible designs of the first aspect when the computer program is executed by a processor.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image processing method, apparatus, device, medium and program product. The image processing method provided by the present disclosure acquires a sequence of deformation instructions acting on a target object in an image to be processed, and then sequentially superimposes, on an image texture to be processed, the deformation displacement corresponding to each deformation instruction in the sequence, so as to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed and its resolution is smaller than that of the image to be processed; finally, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image. Because the deformation displacements are superimposed on a lower-resolution picture, the overhead is small, so multiple effects can still run smoothly when superimposed on a mobile terminal; in addition, because only one interpolation operation is performed, the loss of image quality is also smaller.

Description

Image processing method, apparatus, device, medium and program product

Cross-reference to related application

This application claims priority to the Chinese patent application No. 202110935096.0, filed with the Chinese Patent Office on August 16, 2021 and entitled "Image processing method, apparatus, device, medium and program product", the entire contents of which are incorporated herein by reference.

Technical field

The present disclosure relates to the technical field of image processing, and in particular to an image processing method, apparatus, device, medium and program product.
Background

With the development of smart terminal technology, the image-capture capabilities of smart terminals have become increasingly powerful. Accordingly, more and more applications perform corresponding processing on the images captured by smart terminals.

Image deformation is a common operation in image processing, for example, applying face-slimming, body-slimming or eye-enlarging deformations to a person in an image. However, when current technology applies multiple deformations to an image, it first processes one deformation on the original image and then uses the output of that deformation as the input of the next deformation, repeatedly superimposing on the original image to obtain the final deformation effect.

As can be seen, this approach requires updating the original image multiple times, which causes the superimposed image to become blurred.
Summary

The present disclosure provides an image processing method, apparatus, device, medium and program product, which are used to solve the technical problem that deforming a target in an image multiple times currently requires updating the original image multiple times, which in turn blurs the superimposed image.

In a first aspect, an embodiment of the present disclosure provides an image processing method, including:

acquiring a sequence of deformation instructions acting on a target object in an image to be processed, the sequence of deformation instructions including a plurality of deformation instructions input in sequence;

sequentially superimposing, on an image texture to be processed, the deformation displacement corresponding to each deformation instruction in the sequence of deformation instructions, so as to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed;

deforming the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.

In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:

a deformation instruction acquisition module, configured to acquire a sequence of deformation instructions acting on a target object in an image to be processed, the sequence of deformation instructions including a plurality of deformation instructions input in sequence;

a deformation displacement superposition module, configured to sequentially superimpose, on an image texture to be processed, the deformation displacement corresponding to each deformation instruction in the sequence of deformation instructions, so as to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than that of the image to be processed;

a target object deformation module, further configured to deform the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.

In a third aspect, an embodiment of the present disclosure provides an electronic device, including:

a processor; and

a memory for storing a computer program of the processor;

wherein the processor is configured to implement, by executing the computer program, the image processing method described in the first aspect and the various possible designs of the first aspect.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions; when a processor executes the computer-executable instructions, the image processing method described in the first aspect and the various possible designs of the first aspect is implemented.

In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including computer instructions; when the computer instructions are executed by a processor, the image processing method described in the first aspect and the various possible designs of the first aspect is implemented.

In a sixth aspect, an embodiment of the present disclosure provides a computer program; when the computer program is executed by a processor, the image processing method described in the first aspect and the various possible designs of the first aspect is implemented.

According to the image processing method, apparatus, device, medium and program product provided by the embodiments of the present disclosure, a sequence of deformation instructions acting on a target object in an image to be processed is acquired; then, according to the deformation instructions in the sequence, deformation displacements are sequentially superimposed on an image texture to be processed to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed and its resolution is smaller than that of the image to be processed; finally, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image. As can be seen, when multiple deformation effects are applied to a high-resolution image to be processed, the total deformation displacement information is first accumulated on the low-resolution image texture to be processed, and a single interpolation operation is then performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Because the deformation displacements are superimposed on a lower-resolution picture, the overhead is small, so multiple effects can still run smoothly when superimposed on a mobile terminal. In addition, because only one interpolation operation is performed, the loss of image quality is also smaller.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show some embodiments of the present disclosure, and those of ordinary skill in the art may derive other drawings from them without creative effort.

Fig. 1 is an application scenario diagram of an image processing method according to an example embodiment of the present disclosure;

Fig. 2 is a schematic flowchart of an image processing method according to an example embodiment of the present disclosure;

Fig. 3 is a schematic flowchart of an image processing method according to another example embodiment of the present disclosure;

Fig. 4 is a schematic flowchart of an image processing method according to yet another example embodiment of the present disclosure;

Fig. 5 is a schematic diagram of an image to be processed in an embodiment of the present disclosure;

Fig. 6 is a schematic diagram of the mesh built for the initial key point set in an embodiment of the present disclosure;

Fig. 7 is a schematic diagram of the displacement in a region with deformation displacement in an embodiment of the present disclosure;

Fig. 8 is a schematic diagram of the mesh built for the adjusted key point set in an embodiment of the present disclosure;

Fig. 9 is a processed image in an embodiment of the present disclosure;

Fig. 10 is a schematic structural diagram of an image processing apparatus according to an example embodiment of the present disclosure;

Fig. 11 is a schematic structural diagram of an image processing apparatus according to another example embodiment of the present disclosure;

Fig. 12 is a schematic structural diagram of an electronic device according to an example embodiment of the present disclosure.
Detailed description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps described in the method implementations of the present disclosure may be executed in a different order and/or in parallel. Furthermore, method implementations may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.

The term "including" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
Image deformation is a common operation in image processing, for example, applying face-slimming, body-slimming or eye-enlarging deformations to a person in an image. The main flow of a typical current deformation algorithm is as follows:

1. Compute the key points of the face or body in the image to be processed using a face/body key point model;

2. Move the key points according to the user's deformation demand; for example, to make a face smaller, the jaw points are moved towards the inside of the face;

3. Build a mesh from the key points;

4. Finally, resample the picture according to the mesh to obtain the deformed picture.
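The four steps above can be condensed into a small sketch. For brevity, this illustration replaces the triangulated mesh of step 3 with inverse-distance weighting of the key-point displacements and uses nearest-neighbour resampling, so the function `deform_by_keypoints` and its weighting scheme are assumptions for illustration, not the algorithm's actual mesh-based warp:

```python
import numpy as np

def deform_by_keypoints(image, pts_a, pts_b, power=2.0, eps=1e-6):
    """Sketch of the key-point deformation flow described above.

    pts_a: (N, 2) detected key points (row, col), the PtsA of steps 1-2.
    pts_b: (N, 2) key points after the user's edit, the PtsB of steps 1-2.
    Instead of building a triangulated mesh (step 3), this sketch spreads
    the key-point displacements over the whole image with inverse-distance
    weighting, then resamples the image once (step 4, nearest neighbour).
    """
    h, w = image.shape[:2]
    disp = pts_a - pts_b  # backward mapping: where each output pixel reads from
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).astype(np.float64)           # (h, w, 2)
    # distance of every pixel to every edited key point
    d = np.linalg.norm(grid[:, :, None, :] - pts_b[None, None, :, :], axis=-1)
    wgt = 1.0 / (d ** power + eps)                                  # (h, w, N)
    wgt /= wgt.sum(axis=2, keepdims=True)
    field = (wgt[..., None] * disp[None, None, :, :]).sum(axis=2)   # (h, w, 2)
    src = np.round(grid + field).astype(int)
    src = np.clip(src, 0, np.array([h - 1, w - 1]))
    return image[src[..., 0], src[..., 1]]
```

When the edited key points equal the detected ones, the displacement field is zero and the image is returned unchanged; a production implementation would use a triangulation-based piecewise warp with interpolated sampling instead.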
However, when there are multiple people in the image to be processed and deformation effects need to be superimposed per person, or multiple different deformations need to be superimposed (for example, enlarging the eyes, slimming the face and lengthening the legs respectively), the existing algorithm processes the first deformation first and then uses its output as the input of the next deformation, repeatedly superimposing to obtain the final deformation effect. When a mobile terminal processes high-resolution pictures (such as pictures above 1200M pixels) in this way, multiple computations need to be performed on the high-resolution picture, causing serious performance problems; moreover, each deformation effect requires an interpolation operation, and multiple superimpositions cause the picture to become blurred.

In contrast, the present disclosure aims to provide an image processing method, apparatus, device, medium and program product that acquire a sequence of deformation instructions acting on a target object in an image to be processed and then, according to the deformation instructions in the sequence, sequentially superimpose deformation displacements on an image texture to be processed to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed and its resolution is smaller than that of the image to be processed. Finally, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image. As can be seen, when multiple deformation effects are applied to a high-resolution image to be processed, the total deformation displacement information is first accumulated on the low-resolution image texture to be processed, and a single interpolation operation is then performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Because the deformation displacements are superimposed on a lower-resolution picture, the overhead is small, so multiple effects can still run smoothly when superimposed on a mobile terminal. In addition, because only one interpolation operation is performed, the loss of image quality is also smaller.
Fig. 1 is an application scenario diagram of an image processing method according to an example embodiment of the present disclosure. As shown in Fig. 1, the image processing method provided by this embodiment may be executed by a terminal device having a camera and a display screen. Specifically, a camera on the terminal device (for example, a front camera, a rear camera or an external camera) may be used to capture video of a target object (for example, a face, a body or an object). In addition, the video of the target object may also be obtained by the user uploading locally stored video data, or by receiving video data captured by another terminal device.

The target object being a face can be taken as an example. In one possible scenario, after the terminal device captures an image of the target object, the captured image to be processed usually needs further processing. In one embodiment, the target object may include at least one of a face object, a body object and a physical object.

Taking the target object being a face object as an example, to make the face smaller, the jaw points are moved towards the inside of the face. When the face is processed multiple times, for example to make the face smaller and the eyes larger, the jaw points are first moved towards the inside of the face and then the eyelid points are moved outwards, or the eyelid points are first moved outwards and then the jaw points are moved towards the inside of the face. When the face is processed multiple times, in the method provided by this embodiment, deformation displacements are sequentially superimposed on the image texture to be processed according to the deformation instructions in the sequence of deformation instructions to determine a superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed and its resolution is smaller than that of the image to be processed. Then, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image. As can be seen, in the method provided by this embodiment, the deformation displacement corresponding to each deformation instruction is superimposed on the lower-resolution image texture to be processed, and only after the deformation displacements of all deformation instructions have been superimposed is the target object in the image to be processed deformed once through the superimposed deformed image texture. In this way, the deformation displacements corresponding to the deformation instructions are all processed on a lower-resolution texture, which greatly reduces the amount of computation and improves performance; moreover, only one interpolation operation is needed, which effectively prevents the image from becoming blurred.
Fig. 2 is a schematic flowchart of an image processing method according to an example embodiment of the present disclosure. As shown in Fig. 2, the image processing method provided by this embodiment includes:

Step 101: Acquire a sequence of deformation instructions acting on the target object in the image to be processed.

When the user needs to deform the target object in the image to be processed, the key points of the region of the target object to be deformed can be moved so as to deform that region. Taking the target object being a face as an example, moving the jaw points towards the inside of the face achieves a face-slimming effect, while moving the eyelid points outwards achieves an eye-enlarging effect.

When processing the image to be processed, the user usually needs to deform multiple parts. Therefore, during processing, a sequence of deformation instructions acting on the target object in the image to be processed can be acquired, the sequence including a plurality of deformation instructions input in sequence, for example, a face-slimming deformation instruction and an eye-enlarging deformation instruction input in sequence.

Step 102: Sequentially superimpose, on the image texture to be processed, the deformation displacement corresponding to each deformation instruction in the sequence of deformation instructions, so as to determine the superimposed deformed image texture.

After the sequence of deformation instructions acting on the target object in the image to be processed is acquired, deformation displacements can be sequentially superimposed on the image texture to be processed according to the deformation instructions in the sequence to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed, and the resolution of the image texture to be processed is smaller than the resolution of the image to be processed.

Specifically, the image texture to be processed may first be generated from the image to be processed, with a resolution smaller than that of the image to be processed; for example, the image to be processed may have a resolution of 3000x4000, while the image texture to be processed may have a resolution of 512x512.

Step 103: Deform the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.

After the superimposed deformed image texture is determined, the target object in the image to be processed can be deformed according to the superimposed deformed image texture to generate a processed image.

In this embodiment, a sequence of deformation instructions acting on the target object in the image to be processed is acquired; then, deformation displacements are sequentially superimposed on the image texture to be processed according to the deformation instructions in the sequence to determine the superimposed deformed image texture, wherein the image texture to be processed is the image texture corresponding to the image to be processed and its resolution is smaller than that of the image to be processed; finally, the target object in the image to be processed is deformed according to the superimposed deformed image texture to generate a processed image. As can be seen, when multiple deformation effects are applied to a high-resolution image to be processed, the superimposed deformed image texture can first be obtained on the low-resolution image texture to be processed so as to determine the accumulated total deformation displacement information, and a single interpolation operation is then performed on the high-resolution image according to the displacement information in the superimposed deformed image texture. Because the deformation displacements are superimposed on a lower-resolution picture, the overhead is small, so multiple effects can still run smoothly when superimposed on a mobile terminal; in addition, because only one interpolation operation is performed, the loss of image quality is also smaller.
Fig. 3 is a schematic flowchart of an image processing method according to another example embodiment of the present disclosure. As shown in Fig. 3, the image processing method provided by this embodiment includes:

Step 201: Acquire a sequence of deformation instructions acting on the target object in the image to be processed.

When the user needs to deform the target object in the image to be processed, the key points of the region of the target object to be deformed can be moved so as to deform that region. Taking the target object being a face as an example, moving the jaw points towards the inside of the face achieves a face-slimming effect, while moving the eyelid points outwards achieves an eye-enlarging effect.

When processing the image to be processed, the user usually needs to deform multiple parts. Therefore, during processing, a sequence of deformation instructions acting on the target object in the image to be processed can be acquired, the sequence including a plurality of deformation instructions input in sequence, for example, a face-slimming deformation instruction and an eye-enlarging deformation instruction input in sequence.

Step 202: Superimpose a first deformation displacement on the image texture to be processed according to the first deformation instruction in the sequence of deformation instructions, so as to determine a first deformed image texture.

Step 203: Superimpose a second deformation displacement on the first deformed image texture according to the second deformation instruction in the sequence of deformation instructions, so as to determine a second deformed image texture.

Step 204: Starting from the first deformation instruction in the sequence of deformation instructions, execute Step 202 to Step 203 cyclically until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.

Specifically, a first deformation displacement may be superimposed on the image texture to be processed according to the first deformation instruction in the sequence to determine the first deformed image texture; a second deformation displacement may then be superimposed on the first deformed image texture according to the second deformation instruction to determine the second deformed image texture; next, a third deformation displacement may also be superimposed on the second deformed image texture according to the third deformation instruction to determine a third deformed image texture, until the deformation displacements corresponding to all deformation instructions in the sequence have been superimposed, so as to determine the superimposed deformed image texture.

For example, when the above sequence of deformation instructions includes a face-slimming deformation instruction and an eye-enlarging deformation instruction input in sequence, the face-slimming deformation displacement can be superimposed on the image texture to be processed according to the face-slimming deformation instruction to determine the first deformed image texture, and the eye-enlarging deformation displacement can then be superimposed on the first deformed image texture according to the eye-enlarging deformation instruction to determine the superimposed deformed image texture.

In addition, the target object may also include a first target object and a second target object; in this case, the first deformation instruction is used to deform the first target object, and the second deformation instruction is used to deform the second target object.

For example, when the above sequence of deformation instructions includes a face-slimming deformation instruction and an eye-enlarging deformation instruction input in sequence, the face-slimming deformation instruction may be used to deform a first target face, and the eye-enlarging deformation instruction may be used to deform a second target face. The face-slimming deformation displacement can be superimposed on the image texture to be processed according to the face-slimming deformation instruction, i.e. superimposed on the region corresponding to the first target face, to determine the first deformed image texture; the eye-enlarging deformation displacement can then be superimposed on the first deformed image texture according to the eye-enlarging deformation instruction, i.e. superimposed on the region corresponding to the second target face, to determine the superimposed deformed image texture.

Step 205: Deform the target object in the image to be processed according to the superimposed deformed image texture, so as to generate a processed image.

After the superimposed deformed image texture is determined, the target object in the image to be processed can be deformed according to the superimposed deformed image texture to generate a processed image.

Specifically, the position of the corresponding original key point can be adjusted according to the superimposed-deformation key point position so that the adjusted position of the original key point coincides with the superimposed-deformation key point position, wherein the superimposed-deformation key point position is the position of a key point on the target object in the superimposed deformed image texture, the original key point position is the position of the key point on the target object in the image texture to be processed, and the above-mentioned processed image is the image generated after this position adjustment. It can be understood that in this position adjustment, the positions of the key points in the superimposed deformed image texture serve as the reference, and each corresponding key point in the image texture to be processed is moved to the position of the corresponding key point in the superimposed deformed image texture, thereby generating the processed image.

As can be seen, when multiple deformation effects are applied to a high-resolution image to be processed, the total deformation displacement information is accumulated on the low-resolution image texture to be processed, where the multiple superimposed deformation effects may be multiple deformation effects superimposed on one target object, or deformation effects superimposed on multiple target objects. Finally, a single interpolation operation is performed on the high-resolution image according to the displacement information in the superimposed deformed image texture, which reduces the computational overhead, so multiple effects can still run smoothly when superimposed on a mobile terminal; in addition, because only one interpolation operation is performed, the loss of image quality is also smaller.
图4为本公开根据再一示例实施例示出的图像处理方法的流程示意图。如图4所示,本实施例提供的图像处理方法,包括:
步骤301、获取待处理图像中目标对象的初始关键点集合。
在本步骤中,可以通过预设关键点模型得到待处理图像中目标对象的初始关键点集合,其中,初始关键点集合用于作为待处理图像纹理在叠加形变位移前的初始位置信息。
图5为本公开实施例中待处理图像示意图。如图5所示,对于输入的原图,即待处理图像,可以命名为TexSrc,通过预设关键点模型得到的人脸初始关键点集合,可以命名为PtsA。
步骤302、获取作用于待处理图像中目标对象上的形变指令序列。
用户在对待处理图像进行处理时,通常需要对多个部位进行形变处理,从而在进行处理时,可以获取作用于待处理图像中目标对象上的形变指令序列,其中,在该形变指令序列中包括依次输入的多个形变指令,例如,包括依次输入的瘦脸形变指令以及扩眼形变指令。
图6为本公开实施例中针对初始关键点集合所构建的网格示意图。如图6所示,当用户需要对待处理图像中的目标对象进行形变处理时,可以通过移动目标对象需要进行形变区域的关键点,从而来对该区域进行形变。
步骤303、响应于第一形变指令,对初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合。
在本步骤中,响应于第一形变指令,对初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合。可以以第一形变指令为瘦脸形变指令进行举例说明,通过将下颚的点向脸内部进行移动之后,可以实现瘦脸的效果,即对人脸初始关键点集合PtsA进行形变位移操作得到调整关键点集合PtsB。
其中,图7为本公开实施例中存在形变位移区域的位移示意图。如图7所示,响应于瘦脸形变指令,对初始关键点集合中下颚区域的关键点进行位置移动。其中,灰色(浅色)为无位移的区域,当前形变操作为瘦下颚,可见在下颚部分存在位移变化(即出现了深色的区域)。
步骤304、根据调整关键点集合生成网格。
在本步骤中,可以是根据调整关键点集合生成网格,其中,网格的顶点信息包括初始关键点集合的各个关键点的初始位置信息以及调整关键点集合的各个关键点的调整位置信息。
其中,图8为本公开实施例中针对调整关键点集合所构建的网格示意图。如图8所示,可以根据调整关键点集合PtsB,采用三角剖分算法构建网格,并使用人脸初始关键点集合PtsA作为网格顶点的uvA属性,使用调整关键点集合PtsB作为网格顶点的uvB属性。
步骤305、生成待处理图像所对应的待处理图像纹理。
具体的,可以是使用诸如OpenGL等的图形库(或其他图形库如metal、vulkan等),创建一个比高分辨率待处理图像TexSrc尺寸更小的待处理图像纹理TexMap。
步骤306、根据形变指令序列中的第一形变指令在待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理。
然后,响应于第一形变指令,对初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合,并根据调整关键点集合生成网格,网格的顶点信息包括初始关键点集合的各个关键点的初始位置信息以及调整关键点集合的各个关键点的调整位置信息。最后,在待处理图像纹理上叠加第一形变位移,其中,各个关键点的第一形变位移为调整位置信息与初始位置信息的差值。
此外,在一种实施例中,为兼容更多设备(其中,部分低端机不支持浮点纹理),会将浮点类型的偏移量转换为2个8位精度的数保存,即将网格的各个网格顶点对应的第一形变位移分别通过2个8位精度的数进行保存,其中,这2个8位精度的数分别用于保存第一形变位移的整数部分以及小数部分。例如:
float a=floor(offset*255.0)/255.0;
float b=fract(offset*255.0);
其中,offset=(uvA-uvB)。
值得说明的是,float为浮点型数据类型,floor为向下取整函数,fract为取该数的小数部分的函数,offset为各个关键点的位移。
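上述拆分与还原过程可以用如下示意代码验证(基于Python的非限定性示意;需要注意,实际写入8位纹理时b还会被量化到8位精度,会引入约1/65025的量化误差,此处仅演示数学上的精确往返):

```python
import math

def encode_offset(offset):
    # 对应 float a = floor(offset*255.0)/255.0; float b = fract(offset*255.0);
    scaled = offset * 255.0
    a = math.floor(scaled) / 255.0   # 保存"整数部分"(已归一化回位移尺度)
    b = scaled - math.floor(scaled)  # 保存"小数部分",即 fract(scaled)
    return a, b

def decode_offset(a, b):
    # 还原浮点位移:offset = a + b / 255.0
    return a + b / 255.0
```

可见,还原时b需要先除以255.0再与a合并,而非将两个数直接相加。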
步骤307、根据形变指令序列中的第二形变指令在第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理。
当有多个形变叠加时,重复步骤301-步骤306即可,例如,当形变指令序列中存在第二形变指令时,可以参照步骤301-步骤306中对于第一形变指令的对应处理步骤,以根据第二形变指令在第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理。
步骤308、从形变指令序列中的首个形变指令开始,循环执行步骤202至步骤203,直至叠加形变指令序列中的所有形变指令对应的形变位移,以确定叠加形变图像纹理。
具体的,可以是根据形变指令序列中的第一形变指令在待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理,再根据形变指令序列中的第二形变指令在第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理,接着,还可以是根据形变指令序列中的第三形变指令在第二形变图像纹理上叠加第三形变位移,以确定第三形变图像纹理,直至叠加形变指令序列中的所有形变指令对应的形变位移,以确定叠加形变图像纹理。
步骤309、根据叠加形变图像纹理对待处理图像中的目标对象进行形变,以生成处理后图像。
图9为本公开实施例中处理后图像的示意图。如图9所示,根据叠加形变图像纹理对待处理图像中的目标对象进行形变,以生成处理后图像。具体的,通过叠加形变图像纹理TexMap对待处理图像TexSrc进行偏移,得到处理后图像TexDst。具体为,假设TexMap在坐标CoordA的值为offsetA,则TexDst在坐标CoordA的值等于待处理图像TexSrc在坐标(CoordA+offsetA)的值。需要注意,获取叠加形变图像纹理TexMap的offset时,需要将2个8位精度的数转化回浮点数,即按offset=a+b/255.0的方式合并上述两个数(而非直接相加),即可获得该浮点数。
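上述"TexDst在坐标CoordA的值等于TexSrc在坐标(CoordA+offsetA)的值"的逐像素规则,可以用如下示意代码说明(基于numpy的非限定性示意,采样以最近邻取整代替实际的插值):

```python
import numpy as np

def warp(tex_src, tex_map):
    # tex_map[y, x] = (dy, dx):解码后的总位移(对应叠加形变图像纹理 TexMap)
    h, w = tex_src.shape[:2]
    out = np.empty_like(tex_src)
    for y in range(h):
        for x in range(w):
            dy, dx = tex_map[y, x]
            # TexDst 在 CoordA 的值等于 TexSrc 在 (CoordA + offsetA) 的值
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y, x] = tex_src[sy, sx]
    return out
```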
在本实施例中,针对高分辨率的待处理图像在应用多个形变效果叠加时,是先通过使用低分辨率的待处理图像纹理累计总的形变位移信息,再根据叠加形变图像纹理中的位移信息进行一次高分辨率图上的插值操作。由于形变效果位移的叠加是在较小分辨率图片上进行,开销较小,因此移动端叠加多个效果时仍可以流畅运行;此外,由于只进行了一次插值操作,对画质的损失也会更小。
图10为本公开根据一示例实施例示出的图像处理装置的结构示意图。如图10所示,本实施例提供的图像处理装置400,包括:
形变指令获取模块401,用于获取作用于待处理图像中目标对象上的形变指令序列,所述形变指令序列中包括依次输入的多个形变指令;
形变位移叠加模块402,用于在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,所述待处理图像纹理为所述待处理图像所对应的图像纹理,所述待处理图像纹理的分辨率小于所述待处理图像;
目标对象形变模块403,用于根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像。
在一种可能的设计中,所述形变位移叠加模块402,具体用于:
步骤1:根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理;
步骤2:根据所述形变指令序列中的第二形变指令在所述第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理,所述第二形变指令为所述第一形变指令的后一个指令;
从所述形变指令序列中的首个形变指令开始,循环执行步骤1至步骤2,直至叠加所述形变指令序列中的所有形变指令对应的形变位移,以确定所述叠加形变图像纹理。
在一种可能的设计中,若所述目标对象包括第一目标对象以及第二目标对象,则所述第一形变指令用于对所述第一目标对象进行形变,所述第二形变指令用于对所述第二目标对象进行形变。
在一种可能的设计中,所述形变位移叠加模块402,还具体用于:
根据叠加形变关键点位置对对应的原始关键点位置进行位置调整,以使所述原始关键点调整后的位置与所述叠加形变关键点位置相重合,所述叠加形变关键点位置为所述叠加形变图像纹理中所述目标对象上的关键点的位置,所述原始关键点位置为所述待处理图像纹理中所述目标对象上的关键点的位置,所述处理后图像为进行所述位置调整后所生成的图像。
在图10所示实施例的基础上,图11为本公开根据一示例实施例示出的图像处理装置的结构示意图。如图11所示,本实施例提供的图像处理装置400,还包括:
关键点确定模块404,用于获取所述待处理图像中所述目标对象的初始关键点集合,所述初始关键点集合用于作为所述待处理图像纹理在叠加形变位移前的初始位置信息。
在一种可能的设计中,所述形变位移叠加模块402,具体用于:
响应于所述第一形变指令,对所述初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合,所述第一形变指令作用于所述至少一个关键点,以使所述至少一个关键点发生位移;
根据所述调整关键点集合生成网格,所述网格的顶点信息包括所述初始关键点集合的各个关键点的初始位置信息以及所述调整关键点集合的各个关键点的调整位置信息;
获取所述调整位置信息与所述初始位置信息的差值作为各个关键点的第一形变位移;
在所述待处理图像纹理上叠加所述第一形变位移,以确定所述第一形变图像纹理。
值得说明的是,图10-图11所示实施例提供的图像处理装置,可用于执行上述任一方法实施例所提供的方法步骤,具体实现方式和技术效果类似,此处不再赘述。
图12为本公开根据一示例实施例示出的电子设备的结构示意图。如图12所示,其示出了适于用来实现本公开实施例的电子设备500的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(personal digital assistant,PDA)、平板电脑(portable android device,PAD)、便携式多媒体播放器(portable media player,PMP)、车载终端(例如车载导航终端)、可穿戴电子设备等等具有图像获取功能的移动终端以及诸如数字TV、台式计算机等等外接有具有图像获取设备的固定终端。图12示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图12所示,电子设备500可以包括处理器(例如中央处理器、图形处理器等)501,其可以根据存储在只读存储器(read only memory,ROM)502中的程序或者从存储器508加载到随机访问存储器(random access memory,RAM)503中的程序而执行各种适当的动作和处理。在RAM 503中,还存储有电子设备500操作所需的各种程序和数据。处理器501、ROM 502以及RAM 503通过总线504彼此相连。输入/输出(input/output,I/O)接口505也连接至总线504。存储器用于存储执行上述各个方法实施例所述图像处理方法的程序;处理器被配置为执行存储器中存储的程序。
通常,以下装置可以连接至I/O接口505:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置506;包括例如液晶显示器(liquid crystal display,LCD)、扬声器、振动器等的输出装置507;包括例如磁带、硬盘等的存储装置508;以及通信装置509。通信装置509可以允许电子设备500与其他设备进行无线或有线通信以交换数据。虽然图12示出了具有各种装置的电子设备500,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机可读存储介质,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行本公开实施例的流程图所示的图像处理方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置509从网络上被下载和安装,或者从存储装置508被安装,或者从ROM 502被安装。在该计算机程序被处理器501执行时,执行本公开实施例的方法中限定的上述图像处理功能。
需要说明的是,本公开上述的计算机可读存储介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(erasable programmable read-only memory,EPROM)、光纤、便携式紧凑磁盘只读存储器(compact disc read-only memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、射频(radio frequency,RF)等等,或者上述的任意合适的组合。
上述计算机可读存储介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读存储介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:获取作用于待处理图像中目标对象上的形变指令序列,形变指令序列中包括依次输入的多个形变指令;在待处理图像纹理上,依次叠加形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,待处理图像纹理为待处理图像所对应的图像纹理,待处理图像纹理的分辨率小于待处理图像;根据叠加形变图像纹理对待处理图像中的目标对象进行形变,以生成处理后图像。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(local area network,LAN)或广域网(wide area network,WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
在一些实施方式中,客户端、服务器可以利用诸如超文本传输协议(HyperText Transfer Protocol,HTTP)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(LAN),广域网(WAN),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块的名称在某种情况下并不构成对该单元本身的限定,例如,显示模块还可以被描述为“显示对象人脸以及人脸面具序列的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(field-programmable gate array,FPGA)、专用集成电路(application specific integrated circuit,ASIC)、专用标准产品(application specific standard parts,ASSP)、片上系统(system on chip,SOC)、复杂可编程逻辑设备(complex programmable logic device,CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
第一方面,根据本公开的一个或多个实施例,提供了一种图像处理方法,包括:
获取作用于待处理图像中目标对象上的形变指令序列,所述形变指令序列中包括依次输入的多个形变指令;
在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,所述待处理图像纹理为所述待处理图像所对应的图像纹理,所述待处理图像纹理的分辨率小于所述待处理图像;
根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像。
在一种可能的设计中,所述在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,包括:
步骤1:根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理;
步骤2:根据所述形变指令序列中的第二形变指令在所述第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理,所述第二形变指令为所述第一形变指令的后一个指令;
从所述形变指令序列中的首个形变指令开始,循环执行步骤1至步骤2,直至叠加所述形变指令序列中的所有形变指令对应的形变位移,以确定所述叠加形变图像纹理。
在一种可能的设计中,所述目标对象包括第一目标对象以及第二目标对象,所述第一形变指令用于对所述第一目标对象进行形变,所述第二形变指令用于对所述第二目标对象进行形变。
在一种可能的设计中,所述根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像,包括:
根据叠加形变关键点位置对对应的原始关键点位置进行位置调整,以使所述原始关键点调整后的位置与所述叠加形变关键点位置相重合,所述叠加形变关键点位置为所述叠加形变图像纹理中所述目标对象上的关键点的位置,所述原始关键点位置为所述待处理图像纹理中所述目标对象上的关键点的位置,所述处理后图像为进行所述位置调整后所生成的图像。
在一种可能的设计中,在所述根据所述形变指令序列中的形变指令在待处理图像纹理上依次叠加形变位移,以确定叠加形变图像纹理之前,还包括:
获取所述待处理图像中所述目标对象的初始关键点集合,所述初始关键点集合用于作为所述待处理图像纹理在叠加形变位移前的初始位置信息。
在一种可能的设计中,若所述第一形变指令为所述形变指令序列中的第一个指令,则所述根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理,包括:
响应于所述第一形变指令,对所述初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合,所述第一形变指令作用于所述至少一个关键点,以使所述至少一个关键点发生位移;
根据所述调整关键点集合生成网格,所述网格的顶点信息包括所述初始关键点集合的各个关键点的初始位置信息以及所述调整关键点集合的各个关键点的调整位置信息;
获取所述调整位置信息与所述初始位置信息的差值作为各个关键点的第一形变位移;
在所述待处理图像纹理上叠加所述第一形变位移,以确定所述第一形变图像纹理。
在一种可能的设计中,所述网格的各个网格顶点对应的所述第一形变位移分别通过2个8位精度的数进行保存,所述2个8位精度的数分别用于保存所述第一形变位移的整数部分以及小数部分。
第二方面,根据本公开的一个或多个实施例,提供了一种图像处理装置,包括:
形变指令获取模块,用于获取作用于待处理图像中目标对象上的形变指令序列,所述形变指令序列中包括依次输入的多个形变指令;
形变位移叠加模块,用于在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,所述待处理图像纹理为所述待处理图像所对应的图像纹理,所述待处理图像纹理的分辨率小于所述待处理图像;
目标对象形变模块,用于根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像。
在一种可能的设计中,所述形变位移叠加模块,具体用于:
步骤1:根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理;
步骤2:根据所述形变指令序列中的第二形变指令在所述第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理,所述第二形变指令为所述第一形变指令的后一个指令;
从所述形变指令序列中的首个形变指令开始,循环执行步骤1至步骤2,直至叠加所述形变指令序列中的所有形变指令对应的形变位移,以确定所述叠加形变图像纹理。
在一种可能的设计中,若所述目标对象包括第一目标对象以及第二目标对象,则所述第一形变指令用于对所述第一目标对象进行形变,所述第二形变指令用于对所述第二目标对象进行形变。
在一种可能的设计中,所述形变位移叠加模块,还具体用于:
根据叠加形变关键点位置对对应的原始关键点位置进行位置调整,以使所述原始关键点调整后的位置与所述叠加形变关键点位置相重合,所述叠加形变关键点位置为所述叠加形变图像纹理中所述目标对象上的关键点的位置,所述原始关键点位置为所述待处理图像纹理中所述目标对象上的关键点的位置,所述处理后图像为进行所述位置调整后所生成的图像。
在一种可能的设计中,所述图像处理装置,还包括:
关键点确定模块,用于获取所述待处理图像中所述目标对象的初始关键点集合,所述初始关键点集合用于作为所述待处理图像纹理在叠加形变位移前的初始位置信息。
在一种可能的设计中,所述形变位移叠加模块,具体用于:
响应于所述第一形变指令,对所述初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合,所述第一形变指令作用于所述至少一个关键点,以使所述至少一个关键点发生位移;
根据所述调整关键点集合生成网格,所述网格的顶点信息包括所述初始关键点集合的各个关键点的初始位置信息以及所述调整关键点集合的各个关键点的调整位置信息;
获取所述调整位置信息与所述初始位置信息的差值作为各个关键点的第一形变位移;
在所述待处理图像纹理上叠加所述第一形变位移,以确定所述第一形变图像纹理。
第三方面,本公开实施例提供一种电子设备,包括:
处理器;以及
存储器,用于存储所述处理器的计算机程序;
其中,所述处理器被配置为通过执行所述计算机程序来实现如上第一方面以及第一方面各种可能的设计中所述的图像处理方法。
第四方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如上第一方面以及第一方面各种可能的设计中所述的图像处理方法。
第五方面,本公开实施例提供一种计算机程序产品,包括计算机指令,所述计算机指令被处理器执行时实现如上第一方面以及第一方面各种可能的设计中所述的图像处理方法。
第六方面,本公开实施例提供一种计算机程序,所述计算机程序被处理器执行时实现如上第一方面以及第一方面各种可能的设计中所述的图像处理方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (12)

  1. 一种图像处理方法,其中,包括:
    获取作用于待处理图像中目标对象上的形变指令序列,所述形变指令序列中包括依次输入的多个形变指令;
    在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,所述待处理图像纹理为所述待处理图像所对应的图像纹理,所述待处理图像纹理的分辨率小于所述待处理图像;
    根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像。
  2. 根据权利要求1所述的图像处理方法,其中,所述在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,包括:
    步骤1:根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理;
    步骤2:根据所述形变指令序列中的第二形变指令在所述第一形变图像纹理上叠加第二形变位移,以确定第二形变图像纹理,所述第二形变指令为所述第一形变指令的后一个指令;
    从所述形变指令序列中的首个形变指令开始,循环执行步骤1至步骤2,直至叠加所述形变指令序列中的所有形变指令对应的形变位移,以确定所述叠加形变图像纹理。
  3. 根据权利要求2所述的图像处理方法,其中,所述目标对象包括第一目标对象以及第二目标对象,所述第一形变指令用于对所述第一目标对象进行形变,所述第二形变指令用于对所述第二目标对象进行形变。
  4. 根据权利要求1-3中任意一项所述的图像处理方法,其中,所述根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像,包括:
    根据叠加形变关键点位置对对应的原始关键点位置进行位置调整,以使所述原始关键点调整后的位置与所述叠加形变关键点位置相重合,所述叠加形变关键点位置为所述叠加形变图像纹理中所述目标对象上的关键点的位置,所述原始关键点位置为所述待处理图像纹理中所述目标对象上的关键点的位置,所述处理后图像为进行所述位置调整后所生成的图像。
  5. 根据权利要求2-4中任意一项所述的图像处理方法,其中,在所述根据所述形变指令序列中的形变指令在待处理图像纹理上依次叠加形变位移,以确定叠加形变图像纹理之前,还包括:
    获取所述待处理图像中所述目标对象的初始关键点集合,所述初始关键点集合用于作为所述待处理图像纹理在叠加形变位移前的初始位置信息。
  6. 根据权利要求5所述的图像处理方法,其中,若所述第一形变指令为所述形变指令序列中的第一个指令,则所述根据所述形变指令序列中的第一形变指令在所述待处理图像纹理上叠加第一形变位移,以确定第一形变图像纹理,包括:
    响应于所述第一形变指令,对所述初始关键点集合中至少一个关键点进行位置移动,以生成调整关键点集合,所述第一形变指令作用于所述至少一个关键点,以使所述至少一个关键点发生位移;
    根据所述调整关键点集合生成网格,所述网格的顶点信息包括所述初始关键点集合的各个关键点的初始位置信息以及所述调整关键点集合的各个关键点的调整位置信息;
    获取所述调整位置信息与所述初始位置信息的差值作为各个关键点的第一形变位移;
    在所述待处理图像纹理上叠加所述第一形变位移,以确定所述第一形变图像纹理。
  7. 根据权利要求6所述的图像处理方法,其中,所述网格的各个网格顶点对应的所述第一形变位移分别通过2个8位精度的数进行保存,所述2个8位精度的数分别用于保存所述第一形变位移的整数部分以及小数部分。
  8. 一种图像处理装置,其中,包括:
    形变指令获取模块,用于获取作用于待处理图像中目标对象上的形变指令序列,所述形变指令序列中包括依次输入的多个形变指令;
    形变位移叠加模块,用于在待处理图像纹理上,依次叠加所述形变指令序列中各个形变指令对应的形变位移,以确定叠加形变图像纹理,其中,所述待处理图像纹理为所述待处理图像所对应的图像纹理,所述待处理图像纹理的分辨率小于所述待处理图像;
    目标对象形变模块,用于根据所述叠加形变图像纹理对所述待处理图像中的所述目标对象进行形变,以生成处理后图像。
  9. 一种电子设备,其中,包括:
    处理器;以及
    存储器,用于存储计算机程序;
    其中,所述处理器被配置为通过执行所述计算机程序来实现权利要求1-7中任意一项所述的图像处理方法。
  10. 一种计算机可读存储介质,其中,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如权利要求1-7中任意一项所述的图像处理方法。
  11. 一种计算机程序产品,包括计算机指令,其中,所述计算机指令被处理器执行时实现权利要求1-7中任意一项所述的图像处理方法。
  12. 一种计算机程序,其中,所述计算机程序被处理器执行时实现权利要求1-7中任意一项所述的图像处理方法。
PCT/CN2022/110097 2021-08-16 2022-08-03 图像处理方法、装置、设备、介质及程序产品 WO2023020283A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/567,138 US20240221257A1 (en) 2021-08-16 2022-08-03 Image processing method and apparatus, device, medium and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110935096.0A CN115908104A (zh) 2021-08-16 2021-08-16 图像处理方法、装置、设备、介质及程序产品
CN202110935096.0 2021-08-16

Publications (1)

Publication Number Publication Date
WO2023020283A1 true WO2023020283A1 (zh) 2023-02-23

Family

ID=85240040

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/110097 WO2023020283A1 (zh) 2021-08-16 2022-08-03 图像处理方法、装置、设备、介质及程序产品

Country Status (3)

Country Link
US (1) US20240221257A1 (zh)
CN (1) CN115908104A (zh)
WO (1) WO2023020283A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859097A (zh) * 2019-01-08 2019-06-07 北京奇艺世纪科技有限公司 脸部图像处理方法、设备、图像处理设备、介质
US20200051303A1 (en) * 2018-08-13 2020-02-13 Pinscreen, Inc. Real-time avatars using dynamic textures
CN111652791A (zh) * 2019-06-26 2020-09-11 广州虎牙科技有限公司 人脸的替换显示、直播方法、装置、电子设备和存储介质
CN112241933A (zh) * 2020-07-15 2021-01-19 北京沃东天骏信息技术有限公司 人脸图像处理方法、装置、存储介质及电子设备
CN112562026A (zh) * 2020-10-22 2021-03-26 百果园技术(新加坡)有限公司 一种皱纹特效渲染方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
US20240221257A1 (en) 2024-07-04
CN115908104A (zh) 2023-04-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22857595

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18567138

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE