WO2022160914A1 - Effect processing method, apparatus, device, and storage medium - Google Patents

Effect processing method, apparatus, device, and storage medium

Info

Publication number
WO2022160914A1
Authority
WO
WIPO (PCT)
Prior art keywords
distance field
target
effect
field image
image
Prior art date
Application number
PCT/CN2021/133879
Other languages
English (en)
French (fr)
Inventor
徐力有
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2022160914A1
Priority to US18/357,651 (published as US20240005573A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G06T11/60: Editing figures and text; Combining figures or text

Definitions

  • the present disclosure relates to the field of data processing, and in particular, to an effect processing method, apparatus, device, and storage medium.
  • the present disclosure provides an effect processing method, apparatus, device, and storage medium, which can perform effect processing on an object to be processed based on a distance field image and meet users' effect processing needs.
  • the present disclosure provides an effect processing method, the method comprising:
  • a first target effect image of the object to be processed is drawn.
  • the method further includes:
  • the second effect distance field image and the first effect distance field image both belong to sequence frame distance field images and have different time stamps;
  • a sequence frame effect animation of the object to be processed is generated.
  • the sequence frame distance field images with timestamps include multiple consecutive frames of distance field images generated based on a preset transformation law, and the preset transformation law includes a sine transformation, a cosine transformation, or a pulse curve;
  • alternatively, the sequence frame distance field images include multiple consecutive frames of noise effect distance field images.
  • the object to be processed includes target text.
  • the drawing of the first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image includes:
  • a first target effect image of the object to be processed is drawn.
  • the drawing of the first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image includes:
  • a first target effect image of the object to be processed is drawn.
  • the present disclosure also provides an effect processing device, the device comprising:
  • a first acquisition module, used for acquiring the distance field image of the object to be processed and acquiring the first effect distance field image;
  • a first addition module, used to add the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image, and determine the value obtained after the addition as the distance field value of the pixel at the same position coordinates in a first target distance field image;
  • a first drawing module configured to draw a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
  • the present disclosure provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the terminal device is made to implement the above method.
  • the present disclosure provides a device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the above method is implemented.
  • the present disclosure provides a computer program product, wherein the computer program product includes a computer program/instruction, and the computer program/instruction implements the above method when executed by a processor.
  • An embodiment of the present disclosure provides an effect processing method. First, a distance field image of an object to be processed and a first effect distance field image are acquired. Then, the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image are added, and the value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in a first target distance field image. Finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image. By superimposing the distance field values of corresponding pixels in the distance field image of the object to be processed and the effect distance field image, the embodiment of the present disclosure can perform effect processing on the object to be processed and meet users' effect processing needs.
  • FIG. 1 is a flowchart of an effect processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a dynamic effect processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of one frame of effect images in a sequence of frame effect images provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an effect processing device according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an effect processing device according to an embodiment of the present disclosure.
  • a distance field image records, for each pixel inside the object in the image and in the surrounding area outside it, the shortest distance from that pixel to the object outline (also known as the boundary); this value is known as the distance field value.
  • the distance field values of pixels in the inner area are set to negative numbers, while the distance field values of pixels in the outer area around the object are set to positive numbers. The inventor found that performing effect processing on an object is easier when based on its distance field image.
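The sign convention above can be made concrete with a small sketch. This is an illustrative brute-force computation, not the patent's implementation; production systems would use a faster algorithm (e.g. a two-pass distance transform or jump flooding).

```python
# Illustrative sketch: computing a signed distance field for a tiny binary
# mask by brute force. Pixels inside the object get negative values and
# pixels outside get positive values, matching the sign convention above.
import math

def signed_distance_field(mask):
    """mask[y][x] is True for pixels inside the object."""
    h, w = len(mask), len(mask[0])
    sdf = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = mask[y][x]
            # Shortest distance to any pixel on the other side of the outline.
            best = math.inf
            for yy in range(h):
                for xx in range(w):
                    if mask[yy][xx] != inside:
                        best = min(best, math.hypot(xx - x, yy - y))
            sdf[y][x] = -best if inside else best
    return sdf

mask = [
    [False, False, False, False],
    [False, True,  True,  False],
    [False, True,  True,  False],
    [False, False, False, False],
]
sdf = signed_distance_field(mask)
print(sdf[1][1])  # inside pixel: negative
print(sdf[0][0])  # outside corner pixel: positive
```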
  • the present disclosure provides an effect processing method.
  • first, a distance field image of an object to be processed and a first effect distance field image are acquired; then, the distance field values of the pixels at the same position coordinates in the two images are added, and the value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in a first target distance field image.
  • a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image.
  • the present disclosure can realize the effect processing of the object to be processed and meet the user's effect processing needs.
  • an embodiment of the present disclosure provides an effect processing method.
  • a flowchart of an effect processing method provided by an embodiment of the present disclosure includes:
  • S101: Acquire a distance field image of the object to be processed, and acquire a first effect distance field image.
  • the object to be processed in the embodiment of the present disclosure may be a target text, a target image, or the like. Specifically, before effect processing is performed on the object to be processed, a distance field image of the object to be processed is first acquired.
  • the first effect distance field image may be a frame of distance field image determined from a sequence of frames of distance field images.
  • the sequence frame distance field images may include multiple consecutive frames of distance field images generated based on a preset transformation law, where the preset transformation law includes a sine transformation, a cosine transformation, or a pulse curve.
  • in addition, the sequence frame distance field images may also include multiple consecutive frames of noise effect distance field images.
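As a hedged illustration of how such timestamped sequence frames might be produced, the sketch below offsets a base distance field by a sine law. The base field, amplitude, and per-frame phase step are assumptions of mine; the disclosure only names the transformation types (sine, cosine, pulse curve).

```python
# Hedged sketch: generating consecutive effect distance field frames from a
# sine transformation law. Each frame shifts every distance field value by a
# sine wave whose phase advances per frame; the concrete parameters are
# illustrative, not taken from the patent.
import math

def sine_effect_frames(base_field, num_frames, amplitude=2.0):
    """Offset every distance field value by a per-frame sine offset."""
    frames = []
    for t in range(num_frames):
        phase = 2 * math.pi * t / num_frames
        offset = amplitude * math.sin(phase)
        frames.append([[v + offset for v in row] for row in base_field])
    return frames

base = [[0.0, 1.0], [-1.0, 2.0]]
frames = sine_effect_frames(base, num_frames=4)
print(len(frames))      # one distance field image per timestamp
print(frames[1][0][0])  # frame at t=1: 0.0 + 2.0 * sin(pi/2) = 2.0
```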
  • the first effect distance field image may be a distance field image generated based on a preset image, for example, a distance field image corresponding to the image is generated based on a frame image in a sequence frame of flame animation.
  • after the distance field image of the object to be processed and the first effect distance field image are acquired, the pixels at the same position coordinates in the two images are determined; hereinafter, they are referred to as the first pixel and the second pixel, respectively.
  • for example, if the distance field value of the first pixel is X and the distance field value of the second pixel is Y, then the distance field value of the third pixel (the pixel at the same position coordinates in the first target distance field image) is X+Y.
  • in practice, following the above procedure, the distance field values of the pixels at every pair of identical position coordinates in the distance field image of the object to be processed and the first effect distance field image are added, and each value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in the first target distance field image.
  • the distance field value of each pixel on the first target distance field image can be determined, that is, the sum of the distance field value of the distance field image of the object to be processed and the distance field value of the pixel point at the corresponding position on the first effect distance field image.
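The per-pixel addition described in step S102 can be sketched as follows (nested lists stand in for distance field images; any image representation would do):

```python
# Minimal sketch of S102: add the distance field values of pixels at the
# same position coordinates in the object's distance field image and the
# effect distance field image to form the target distance field image.
def add_distance_fields(object_field, effect_field):
    return [
        [obj + eff for obj, eff in zip(obj_row, eff_row)]
        for obj_row, eff_row in zip(object_field, effect_field)
    ]

object_field = [[-1.0, 0.5], [2.0, -0.5]]
effect_field = [[0.25, -0.25], [1.0, 0.5]]
target_field = add_distance_fields(object_field, effect_field)
print(target_field)  # [[-0.75, 0.25], [3.0, 0.0]]
```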
  • the first target effect image of the object to be processed is drawn based on the distance field value of each pixel on the first target distance field image.
  • since the distance field value of a pixel is the shortest distance from that pixel to the outline of the object to be processed, the embodiment of the present disclosure can determine the color value of the pixel based on its distance field value, and then draw the first target effect image of the object to be processed based on the pixels' color values.
  • the corresponding relationship between the distance field value and the color value may be preset, and then the color value corresponding to the distance field value of each pixel point on the first target distance field image is determined based on the corresponding relationship as the color value of the corresponding pixel point.
  • the correspondence between color values and distance field value ranges may be established in advance; for example, the distance field value range [-m, n] (where m and n are arbitrary values, e.g., m is 0.8 and n is 0) corresponds to the color white, and the distance field value range [j, k] (where j and k are arbitrary values, e.g., j is 0 and k is 1) corresponds to the color yellow.
  • a color value corresponding to the distance field value of each pixel point on the first target distance field image is determined as the color value of the corresponding pixel point based on the corresponding relationship.
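A minimal sketch of this range-to-color lookup follows; the concrete ranges and colors are illustrative, echoing the example above rather than anything the patent fixes.

```python
# Hedged sketch of the preset correspondence between distance field value
# ranges and color values: each range maps to a preset color, and a pixel's
# color is the color of the first range containing its distance field value.
def color_for_distance(value, ranges):
    """ranges: list of ((low, high), color); the first matching range wins."""
    for (low, high), color in ranges:
        if low <= value <= high:
            return color
    return None  # no preset color for this distance field value

ranges = [
    ((-0.8, 0.0), "white"),   # inner pixels near the outline
    ((0.0, 1.0), "yellow"),   # outer pixels near the outline
]
print(color_for_distance(-0.3, ranges))  # white
print(color_for_distance(0.5, ranges))   # yellow
```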
  • in another optional implementation, when the object to be processed is target text, a gradient color may be drawn for the text body of the target text to enhance its display effect. Specifically, since pixels with distance field values less than 0 are pixels in the inner area of the target text, the embodiment of the present disclosure determines the pixels on the first target distance field image whose distance field values are less than 0 as the first target pixels.
  • then, the gradient value of the first target pixel is determined based on the result of a dot product operation between the first target pixel's offset coordinates and the gradient direction vector, where the offset coordinates are the difference between the position coordinates of the first target pixel and the gradient origin; further, the color value of the first target pixel is determined according to its gradient value and the target gradient color; finally, the first target effect image of the object to be processed is drawn based on the color values of the first target pixels.
  • the target gradient color usually consists of two colors, such as white and black, and the gradient color effect is a gradient from white to black, or a gradient from black to white.
  • the origin of the gradient is usually the center point of the text body, which is specifically used to determine the point where the two colors in the target gradient color meet.
  • the numerical range of the gradient value can be set to 0 to 1.
  • since the gradient value of the first target pixel lies between 0 and 1, assuming the target gradient color is white and black, white may correspond to 0 and black may correspond to 1.
  • the gradient value of the first target pixel point may correspond to a color value between white and black, and the color value may be determined as the color value of the first target pixel point.
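The gradient computation described above can be sketched as follows. Clamping the dot product into [0, 1] and the scale of the direction vector are assumptions of mine; the disclosure only specifies the dot product of the offset coordinates with the gradient direction vector and a gradient value in 0..1.

```python
# Sketch of the gradient color: an inner pixel's gradient value is the dot
# product of its offset from the gradient origin with the gradient direction
# vector, then used to blend between the two target gradient colors.
def gradient_value(pixel, origin, direction):
    """Dot product of (pixel - origin) with the gradient direction vector."""
    offset = (pixel[0] - origin[0], pixel[1] - origin[1])
    t = offset[0] * direction[0] + offset[1] * direction[1]
    return max(0.0, min(1.0, t))  # clamp into the 0..1 gradient range

def blend(t, color_a, color_b):
    """t=0 gives color_a (e.g. white), t=1 gives color_b (e.g. black)."""
    return tuple(a + t * (b - a) for a, b in zip(color_a, color_b))

white, black = (255, 255, 255), (0, 0, 0)
origin = (50, 50)           # gradient origin, e.g. the text body center
direction = (0.01, 0.0)     # horizontal gradient, scaled so 100 px spans 0..1
t = gradient_value((75, 50), origin, direction)
print(t)                    # 0.25
print(blend(t, white, black))
```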
  • in another optional implementation, when the object to be processed is target text, a light-emitting (glow) effect may be drawn for the text body of the target text to enhance its display effect.
  • specifically, the embodiment of the present disclosure first determines the pixels on the first target distance field image whose distance field values are less than 0 as the second target pixels; then, the color transparency value of each second target pixel is determined based on its distance field value, where the absolute value of the distance field value of the second target pixel is inversely related to its color transparency value; finally, the first target effect image of the object to be processed is drawn based on the color transparency values of the second target pixels.
  • in this way, by setting an inverse relationship between the absolute value of the distance field value and the color transparency value, the embodiment of the present disclosure draws the text body of the target text with a light-emitting effect.
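A hedged sketch of the glow relationship: the exact falloff function is not given in the disclosure, so the reciprocal form below is only one way to make alpha decrease as the absolute distance field value grows.

```python
# Illustrative glow effect: an inner pixel's color transparency (alpha)
# falls off as the absolute distance field value grows, so pixels near the
# outline stay bright while pixels deep inside the text body fade. The
# reciprocal falloff and its coefficient are assumptions for illustration.
def glow_alpha(distance_field_value, falloff=0.1):
    """Alpha shrinks as |distance field value| grows (inverse relationship)."""
    return 1.0 / (1.0 + falloff * abs(distance_field_value))

print(glow_alpha(-1.0))   # near the outline: high alpha
print(glow_alpha(-20.0))  # deep inside the text body: low alpha
```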
  • to sum up, in the effect processing method provided by the embodiment of the present disclosure, the distance field image of the object to be processed and the first effect distance field image are acquired; the distance field values of the pixels at the same position coordinates in the two images are added, and the value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in the first target distance field image; finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image.
  • the embodiment of the present disclosure can realize the effect processing of the object to be processed and meet the user's effect processing requirements.
  • the embodiment of the present disclosure further provides a dynamic effect processing method.
  • a flowchart of the dynamic effect processing method provided by the embodiment of the present disclosure includes:
  • S201: Acquire a distance field image of the object to be processed, and acquire a first effect distance field image and a second effect distance field image.
  • the first effect distance field image and the second effect distance field image both belong to the sequence frame distance field image, and have different time stamps.
  • the sequence frame distance field images refer to images that are played sequentially in time sequence.
  • specifically, the first effect distance field image and the second effect distance field image may both belong to the same sequence of multiple consecutive frames of distance field images generated based on a preset transformation law, where the preset transformation law includes a sine transformation, a cosine transformation, or a pulse curve.
  • for example, the first effect distance field image may be the 1st frame of the sequence frame distance field images, played at the 1/20th second, and the second effect distance field image may be the 10th frame, played at the 10/20th second.
  • each frame of the sequence frame distance field images can be used as the first effect distance field image or the second effect distance field image, respectively, for performing effect processing on the object to be processed.
  • after the distance field image of the object to be processed and the second effect distance field image are obtained, the pixels at the same position coordinates in the two images are determined; hereinafter, they are referred to as the fourth pixel and the fifth pixel, respectively. The distance field values of the fourth pixel and the fifth pixel are then added, and the value obtained after the addition is used as the distance field value of the pixel at the same position coordinates in the second target distance field image (also called the sixth pixel).
  • in practice, the distance field values of the pixels at corresponding position coordinates in the distance field image of the object to be processed and the second effect distance field image are added, and the value obtained after the addition is determined as the distance field value of the pixel at the corresponding position coordinates in the second target distance field image.
  • the distance field value of each pixel on the second target distance field image can be determined, that is, the sum of the distance field value of the distance field image of the object to be processed and the distance field value of the pixel point at the corresponding position on the second effect distance field image.
  • the second target effect image of the object to be processed is drawn based on the distance field values of the pixels on the second target distance field image.
  • S206: Generate a sequence frame effect animation of the object to be processed based on the first target effect image and the second target effect image, according to the timestamps of the sequence frame distance field images.
  • in practice, an effect image of the object to be processed can be generated for each frame of the sequence frame distance field images to which the first effect distance field image and the second effect distance field image belong; then, based on the timestamps of the sequence frame distance field images, the effect images corresponding to the individual frames are combined so that they are displayed in chronological order, finally generating a sequence frame effect animation of the object to be processed.
  • the sequence frame effect animation is played according to the timestamps, displaying a dynamic effect of the object to be processed.
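Step S206 can be sketched as a simple chronological ordering of the generated effect images; representing each frame as a (timestamp, image) pair is my assumption, not the patent's data structure.

```python
# Sketch of S206: pair each target effect image with the timestamp of its
# source distance field frame and order the pairs chronologically to form
# the sequence frame effect animation.
def build_animation(effect_frames):
    """effect_frames: list of (timestamp_seconds, effect_image), any order."""
    return sorted(effect_frames, key=lambda frame: frame[0])

frames = [
    (10 / 20, "second_target_effect_image"),  # 10th frame, played at 10/20 s
    (1 / 20, "first_target_effect_image"),    # 1st frame, played at 1/20 s
]
animation = build_animation(frames)
print([name for _, name in animation])
```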
  • for example, the sequence frame distance field images may include the distance field image corresponding to each frame of a flame burning animation; after effect processing is performed on the target text (such as "you are") based on these sequence frame distance field images, a sequence frame effect animation with a dynamic effect can be obtained. As shown in FIG. 3, it is a schematic diagram of one frame of effect images in a sequence frame effect animation provided by an embodiment of the present disclosure.
  • to sum up, effect processing is performed on the object to be processed frame by frame, and the effect images after effect processing are then combined to finally generate a sequence frame effect animation that displays the object to be processed with a dynamic effect, which can meet users' demand for dynamic effect display.
  • based on the above method embodiments, the present disclosure also provides an effect processing apparatus.
  • referring to FIG. 4, a schematic structural diagram of an effect processing apparatus provided in an embodiment of the present disclosure, the apparatus includes:
  • the first acquisition module 401 is used for acquiring the distance field image of the object to be processed, and acquiring the first effect distance field image;
  • the first addition module 402 is configured to add the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image, and determine the value obtained after the addition as the distance field value of the pixel at the same position coordinates in the first target distance field image;
  • the first drawing module 403 is configured to draw the first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
  • the device further includes:
  • a second acquisition module configured to acquire a second effect distance field image; wherein, the second effect distance field image and the first effect distance field image belong to sequence frame distance field images and have different time stamps;
  • a second addition module, configured to add the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the second effect distance field image, and determine the value obtained after the addition as the distance field value of the pixel at the same position coordinates in the second target distance field image;
  • a second drawing module configured to draw the second target effect image of the object to be processed based on the distance field value of each pixel on the second target distance field image
  • a generating module configured to generate a sequence frame effect animation of the object to be processed based on the first target effect image and the second target effect image according to the sequence of the time stamps.
  • the sequence of frame distance field images includes continuous multiple frames of distance field images generated based on a preset transformation law, and the preset transformation law includes sine transformation, cosine transformation or pulse curve;
  • the sequence of frame distance field images includes consecutive multiple frames of noise effect distance field images.
  • the object to be processed includes target text.
  • the first drawing module includes:
  • a first determination sub-module configured to determine a pixel whose distance field value is less than 0 on the first target distance field image as the first target pixel
  • a second determination sub-module, configured to determine the gradient value of the first target pixel based on the result of the dot product operation between the offset coordinates of the first target pixel and the gradient direction vector; wherein the offset coordinates are the difference between the position coordinates of the first target pixel and the gradient origin;
  • a third determination sub-module configured to determine the color value of the first target pixel point according to the gradient value of the first target pixel point and the target gradient color
  • the first drawing sub-module is configured to draw the first target effect image of the object to be processed based on the color value of the first target pixel.
  • the first drawing module includes:
  • a fourth determination sub-module configured to determine the pixels whose distance field value is less than 0 on the first target distance field image as the second target pixel;
  • a fifth determination sub-module, used for determining the color transparency value of the second target pixel based on the distance field value of the second target pixel; wherein the distance field value of the second target pixel is inversely related to the color transparency value of the second target pixel;
  • the second drawing sub-module is configured to draw the first target effect image of the object to be processed based on the color transparency value of the second target pixel.
  • to sum up, in the effect processing apparatus provided by the embodiment of the present disclosure, the distance field image of the object to be processed and the first effect distance field image are acquired; the distance field values of the pixels at the same position coordinates in the two images are added, and the value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in the first target distance field image; finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image.
  • the embodiment of the present disclosure can realize the effect processing of the object to be processed and meet the user's effect processing requirements.
  • in addition, the embodiments of the present disclosure perform effect processing on the objects to be processed frame by frame and then combine the effect images after effect processing to finally generate a sequence frame effect animation that displays the objects to be processed with a dynamic effect, which can meet users' needs for dynamic effect display.
  • embodiments of the present disclosure also provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium; when the instructions are executed on a terminal device, the terminal device is made to implement the effect processing method described in the embodiments of the present disclosure.
  • Embodiments of the present disclosure also provide a computer program product, including computer programs/instructions, characterized in that, when the computer program/instructions are executed by a processor, the effect processing methods described in the embodiments of the present disclosure are implemented.
  • an embodiment of the present disclosure also provides an effect processing device, as shown in FIG. 5 , which may include:
  • a processor 501, a memory 502, an input device 503, and an output device 504.
  • the number of processors 501 in the effect processing device may be one or more, and one processor is taken as an example in FIG. 5 .
  • the processor 501 , the memory 502 , the input device 503 and the output device 504 may be connected by a bus or in other ways, wherein the connection by a bus is taken as an example in FIG. 5 .
  • the memory 502 can be used to store software programs and modules, and the processor 501 executes various functional applications and data processing of the effect processing device by running the software programs and modules stored in the memory 502 .
  • the memory 502 may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, an application program required for at least one function, and the like. Additionally, memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input device 503 may be used to receive input numerical or character information, and to generate signal input related to user settings and function control of the effect processing device.
  • specifically, the processor 501 loads the executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and then runs the application programs stored in the memory 502, so as to realize the various functions of the above effect processing device.


Abstract

The present disclosure provides an effect processing method, apparatus, device, and storage medium. The method includes: first, acquiring a distance field image of an object to be processed and a first effect distance field image; then, adding the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image, and determining the value obtained after the addition as the distance field value of the pixel at the same position coordinates in a first target distance field image; finally, drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image. By superimposing the distance field values of pixels at corresponding positions in the distance field images, the embodiments of the present disclosure can perform effect processing on the object to be processed and meet users' effect processing needs.

Description

Effect processing method, apparatus, device, and storage medium

This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on January 28, 2021, with application number 202110118987.7 and titled "Effect processing method, apparatus, device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field

The present disclosure relates to the field of data processing, and in particular, to an effect processing method, apparatus, device, and storage medium.

Background

At present, there is a growing demand for effect processing of objects such as text and images. For example, in applications such as scene rendering in online games, text display in videos, and subtitle display in variety shows, the requirements for text effect processing are high.

Therefore, how to meet users' effect processing needs is a technical problem that urgently needs to be solved.

Summary

To solve, or at least partially solve, the above technical problem, the present disclosure provides an effect processing method, apparatus, device, and storage medium, which can perform effect processing on an object to be processed based on a distance field image and meet users' effect processing needs.
In a first aspect, the present disclosure provides an effect processing method, the method comprising:

acquiring a distance field image of an object to be processed, and acquiring a first effect distance field image;

adding the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image, and determining the value obtained after the addition as the distance field value of the pixel at the same position coordinates in a first target distance field image;

drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.

In an optional implementation, the method further comprises:

acquiring a second effect distance field image; wherein the second effect distance field image and the first effect distance field image both belong to sequence frame distance field images and have different timestamps;

adding the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the second effect distance field image, and determining the value obtained after the addition as the distance field value of the pixel at the same position coordinates in a second target distance field image;

drawing a second target effect image of the object to be processed based on the distance field values of the pixels on the second target distance field image;

generating, in the order of the timestamps, a sequence frame effect animation of the object to be processed based on the first target effect image and the second target effect image.
In an optional implementation, the sequence frame distance field images with timestamps comprise multiple consecutive frames of distance field images generated based on a preset transformation law, the preset transformation law comprising a sine transformation, a cosine transformation, or a pulse curve;

alternatively, the sequence frame distance field images comprise multiple consecutive frames of noise effect distance field images.

In an optional implementation, the object to be processed comprises target text.

In an optional implementation, the drawing of the first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image comprises:

determining the pixels on the first target distance field image whose distance field values are less than 0 as first target pixels;

determining a gradient value of the first target pixel based on the result of a dot product operation between the offset coordinates of the first target pixel and a gradient direction vector; wherein the offset coordinates are the difference between the position coordinates of the first target pixel and a gradient origin;

determining a color value of the first target pixel according to the gradient value of the first target pixel and a target gradient color;

drawing the first target effect image of the object to be processed based on the color value of the first target pixel.

In an optional implementation, the drawing of the first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image comprises:

determining the pixels on the first target distance field image whose distance field values are less than 0 as second target pixels;

determining a color transparency value of the second target pixel based on the distance field value of the second target pixel; wherein the distance field value of the second target pixel is inversely related to the color transparency value of the second target pixel;

drawing the first target effect image of the object to be processed based on the color transparency value of the second target pixel.
In a second aspect, the present disclosure also provides an effect processing apparatus, the apparatus comprising:

a first acquisition module, configured to acquire a distance field image of an object to be processed and acquire a first effect distance field image;

a first addition module, configured to add the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image, and determine the value obtained after the addition as the distance field value of the pixel at the same position coordinates in a first target distance field image;

a first drawing module, configured to draw a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.

In a third aspect, the present disclosure provides a computer-readable storage medium having instructions stored therein; when the instructions are run on a terminal device, the terminal device is made to implement the above method.

In a fourth aspect, the present disclosure provides a device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the above method is implemented.

In a fifth aspect, the present disclosure provides a computer program product, the computer program product comprising a computer program/instructions which, when executed by a processor, implement the above method.

Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have at least the following advantages:

An embodiment of the present disclosure provides an effect processing method. First, a distance field image of an object to be processed and a first effect distance field image are acquired. Then, the distance field values of the pixels at the same position coordinates in the distance field image of the object to be processed and the first effect distance field image are added, and the value obtained after the addition is determined as the distance field value of the pixel at the same position coordinates in a first target distance field image. Finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image. By superimposing the distance field values of corresponding pixels in the distance field image of the object to be processed and the effect distance field image, the embodiment of the present disclosure can perform effect processing on the object to be processed and meet users' effect processing needs.
Brief Description of the Drawings

The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.

To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a flowchart of an effect processing method provided by an embodiment of the present disclosure;

FIG. 2 is a flowchart of a dynamic effect processing method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of one frame of effect images in a sequence of frame effect images provided by an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of an effect processing apparatus provided by an embodiment of the present disclosure;

FIG. 5 is a schematic structural diagram of an effect processing device provided by an embodiment of the present disclosure.
具体实施方式
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
A distance field image usually refers to a signed distance field image, which records, for each pixel inside an object in the image and in the exterior region around it, the shortest distance from that pixel to the object's contour (also called its boundary); this value is also called the distance field value. The distance field values of pixels in the interior region are set to negative numbers, while the distance field values of pixels in the exterior region around the object are set to positive numbers. The inventors found that effect processing of an object is more convenient when based on a distance field image.
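The sign convention above (negative inside, positive outside, magnitude equal to the distance to the nearest contour pixel) can be sketched with a brute-force computation over a binary mask. This is an illustrative sketch, not the patent's implementation; the contour definition and function names are assumptions.

```python
# Minimal sketch: build a signed distance field from a binary mask.
# Pixels inside the object (mask == 1) get negative values, pixels
# outside get positive values; magnitude is the Euclidean distance
# to the nearest contour pixel (brute force, for clarity not speed).
import math

def signed_distance_field(mask):
    """mask: 2D list of 0/1 values; returns a 2D list of signed distances."""
    h, w = len(mask), len(mask[0])
    # Treat any pixel with a differing 4-neighbour as a contour pixel
    # (a simple boundary definition assumed for this sketch).
    contour = []
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] != mask[y][x]:
                    contour.append((y, x))
                    break
    sdf = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(y - cy, x - cx) for cy, cx in contour)
            sdf[y][x] = -d if mask[y][x] else d  # negative inside the object
    return sdf

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
field = signed_distance_field(mask)
```

Here the centre pixel of the 3x3 square sits one pixel from the contour, so its value is -1.0, while every background pixel gets a positive value.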
To this end, the present disclosure provides an effect processing method. First, a distance field image of an object to be processed and a first effect distance field image are acquired. Then, the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image are added, and the value obtained by the addition is determined as the distance field value of the pixel located at the identical position coordinates in a first target distance field image. Finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image.
By superimposing the distance field values of correspondingly positioned pixels in the distance field image of the object to be processed and in the effect distance field image, the present disclosure achieves effect processing of the object to be processed and satisfies users' effect processing needs.
On this basis, an embodiment of the present disclosure provides an effect processing method. Referring to FIG. 1, a flowchart of an effect processing method provided by an embodiment of the present disclosure, the method comprises:
S101: acquiring a distance field image of an object to be processed, and acquiring a first effect distance field image.
The object to be processed in the embodiments of the present disclosure may be target text, a target image, or the like. Specifically, before effect processing is performed on the object to be processed, its distance field image is first acquired.
In addition, before effect processing is performed on the object to be processed, a first effect distance field image used for the effect processing also needs to be acquired. The first effect distance field image may be one distance field frame determined from a sequence of distance field frames.
Specifically, the sequence of distance field frames may comprise a plurality of consecutive distance field frames generated based on a preset transform rule, where the preset transform rule comprises a sine transform, a cosine transform, a pulse curve, or the like.
Alternatively, the sequence of distance field frames may comprise a plurality of consecutive noise-effect distance field frames.
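One way such a preset transform rule could generate consecutive effect frames is to modulate an offset field over time, e.g. with a sine wave. This is a sketch under assumed parameters; the patent does not specify how the rule is sampled into frames.

```python
# Sketch: generate n_frames constant-offset effect distance field
# frames whose amplitude follows one period of a sine wave. Each
# frame, when added to an object's distance field, shifts the
# apparent contour in and out over the sequence.
import math

def sine_effect_frames(shape, n_frames, amplitude=1.0):
    h, w = shape
    frames = []
    for i in range(n_frames):
        v = amplitude * math.sin(2 * math.pi * i / n_frames)
        frames.append([[v] * w for _ in range(h)])
    return frames

frames = sine_effect_frames((2, 3), 4)  # 4 frames of a 2x3 field
```

A cosine transform or pulse curve would only change how `v` is computed per frame; the per-frame field construction stays the same.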
In addition, the first effect distance field image may be a distance field image generated based on a preset image; for example, a distance field image corresponding to one frame of a sequence-frame flame animation may be generated from that frame.
S102: adding the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image, and determining the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a first target distance field image.
In the embodiment of the present disclosure, after the distance field image of the object to be processed and the first effect distance field image are acquired, the pixels located at identical position coordinates in the two images are determined; they are referred to below as a first pixel and a second pixel, respectively. The distance field values of the first pixel and the second pixel are then added, and the value obtained by the addition is taken as the distance field value of the pixel at the same position coordinates in the first target distance field image (which may also be called a third pixel).
For example, if the distance field value of the first pixel is X and that of the second pixel is Y, the distance field value of the third pixel is X+Y.
In practical applications, following the above processing, the distance field values of the pixels at each pair of identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image are added, and each value obtained by the addition is determined as the distance field value of the pixel at the corresponding position coordinates in the first target distance field image.
In this way, the distance field value of each pixel on the first target distance field image can be determined as the sum of the distance field values of the correspondingly positioned pixels in the distance field image of the object to be processed and in the first effect distance field image.
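The per-pixel addition of step S102 can be sketched as follows (function name illustrative; the fields are plain 2D lists of distance field values):

```python
# Sketch of S102: add the object's distance field and an effect
# distance field pixel by pixel at identical position coordinates,
# yielding the target distance field image.
def add_distance_fields(object_field, effect_field):
    assert len(object_field) == len(effect_field)
    return [
        [a + b for a, b in zip(row_obj, row_eff)]
        for row_obj, row_eff in zip(object_field, effect_field)
    ]

obj = [[-1.0, 2.0], [0.5, -0.5]]   # X values
eff = [[0.5, -0.25], [0.0, 1.0]]   # Y values
target = add_distance_fields(obj, eff)  # each pixel is X + Y
```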
S103: drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
In the embodiment of the present disclosure, after the distance field value of each pixel on the first target distance field image is determined, the first target effect image of the object to be processed is drawn based on these distance field values.
In an optional implementation, since a pixel's distance field value is the shortest distance from that pixel to the contour of the object to be processed, the embodiment of the present disclosure may determine the color value of a pixel based on its distance field value, and then draw the first target effect image of the object to be processed based on the pixels' color values.
Specifically, a correspondence between distance field values and color values may be set in advance, and the color value corresponding to the distance field value of each pixel on the first target distance field image is then determined based on this correspondence and used as the color value of that pixel.
In an optional implementation, a correspondence between color values and distance field value ranges may be established in advance; for example, the distance field value range [-m, n] (where m and n are arbitrary values, e.g. m is 0.8 and n is 0) corresponds to white, and the distance field value range [j, k] (where j and k are arbitrary values, e.g. j is 0 and k is 1) corresponds to yellow. The color value corresponding to the distance field value of each pixel on the first target distance field image is then determined based on this correspondence and used as the color value of that pixel.
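The range-to-color lookup described above can be sketched as a small table; the table contents and names below are illustrative, not prescribed by the patent.

```python
# Sketch: map a pixel's distance field value to a color via a
# pre-built table of (half-open range, color) pairs, matching the
# example in the text: [-0.8, 0) -> white, [0, 1) -> yellow.
COLOR_TABLE = [
    ((-0.8, 0.0), "white"),   # interior band near the contour
    ((0.0, 1.0), "yellow"),   # exterior band near the contour
]

def color_for(value, table=COLOR_TABLE, default=None):
    for (lo, hi), color in table:
        if lo <= value < hi:
            return color
    return default  # pixels outside every band keep the default
```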
In another optional implementation, when the object to be processed is target text, a gradient color may be drawn for the body of the target text to enhance its display effect. Specifically, since pixels with a distance field value less than 0 are pixels in the interior region of the target text, the embodiment of the present disclosure first determines the pixels on the first target distance field image whose distance field value is less than 0 as first target pixels. Then, the gradient value of each first target pixel is determined based on the result of a dot product between the offset coordinates of the first target pixel and a gradient direction vector, where the offset coordinates are the difference between the position coordinates of the first target pixel and a gradient origin. Further, the color value of the first target pixel is determined according to its gradient value and the target gradient colors. Finally, the first target effect image of the object to be processed is drawn based on the color values of the first target pixels.
The target gradient colors are usually two colors, for example white and black; the gradient effect is then a gradient from black to white, or a gradient from white to black. The gradient origin is usually the center point of the text body and is used to determine the point at which the two target gradient colors meet.
In practical applications, the range of the gradient value may be set to 0 to 1; in the embodiment of the present disclosure, the gradient value of a first target pixel is a value in [0, 1]. Assuming the target gradient colors are white and black, white may correspond to 0 and black to 1; the gradient value of a first target pixel then corresponds to a color value between white and black, which may be determined as the color value of that first target pixel.
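The dot-product gradient computation above can be sketched directly; the clamping to [0, 1] and the linear blend are illustrative choices consistent with the text, not mandated by it.

```python
# Sketch: gradient value of a pixel = dot(offset, direction), clamped
# to [0, 1], where offset = pixel position - gradient origin. The
# gradient value then blends two target gradient colors.
def gradient_value(pixel, origin, direction):
    off = (pixel[0] - origin[0], pixel[1] - origin[1])  # offset coordinates
    t = off[0] * direction[0] + off[1] * direction[1]   # dot product
    return max(0.0, min(1.0, t))                        # clamp to [0, 1]

def blend(t, c0, c1):
    # t = 0 -> c0 (e.g. white), t = 1 -> c1 (e.g. black)
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))
```

For example, with origin (0, 0) and direction (0.25, 0), a pixel at (2, 0) gets gradient value 0.5 and is colored mid-gray when blending white into black.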
In another optional implementation, when the object to be processed is target text, a glow effect may be drawn for the body of the target text to enhance its display effect. Specifically, since pixels with a distance field value less than 0 are pixels in the interior region of the target text, the embodiment of the present disclosure first determines the pixels on the first target distance field image whose distance field value is less than 0 as second target pixels; then, the color transparency value of each second target pixel is determined based on its distance field value, where the absolute value of the distance field value of the second target pixel is inversely proportional to its color transparency value; finally, the first target effect image of the object to be processed is drawn based on the color transparency values of the second target pixels.
By setting the absolute value of the distance field value to be inversely proportional to the color transparency value, the embodiment of the present disclosure draws the body of the target text and achieves a glowing text-body effect.
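One possible inverse relation for the glow alpha is sketched below; the exact falloff function is an assumption (the patent only requires that alpha decrease as the magnitude of the distance field value grows).

```python
# Sketch: glow alpha for interior pixels (distance field value < 0).
# Pixels right at the contour (value near 0) get alpha near 1.0;
# pixels deeper inside fade out, producing a rim-glow on the text body.
def glow_alpha(value, falloff=1.0):
    if value >= 0:  # outside the text body: fully transparent
        return 0.0
    return 1.0 / (1.0 + falloff * abs(value))  # inverse relation to |value|
```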
In the effect processing method provided by the embodiment of the present disclosure, first, a distance field image of an object to be processed and a first effect distance field image are acquired; then, the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image are added, and the value obtained by the addition is determined as the distance field value of the pixel located at the identical position coordinates in a first target distance field image. Finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image. By superimposing the distance field values of correspondingly positioned pixels in the distance field image of the object to be processed and in the effect distance field image, the embodiment of the present disclosure achieves effect processing of the object to be processed and satisfies users' effect processing needs.
On the basis of the above embodiment, an embodiment of the present disclosure further provides a dynamic effect processing method. Referring to FIG. 2, a flowchart of a dynamic effect processing method provided by an embodiment of the present disclosure, the method comprises:
S201: acquiring a distance field image of an object to be processed, and acquiring a first effect distance field image and a second effect distance field image.
In the embodiment of the present disclosure, the first effect distance field image and the second effect distance field image both belong to a sequence of distance field frames and have different timestamps, where a sequence of distance field frames refers to images played one after another in chronological order.
Specifically, the first effect distance field image and the second effect distance field image may both be frames of a plurality of consecutive distance field frames generated based on a preset transform rule, the preset transform rule comprising a sine transform, a cosine transform or a pulse curve. For example, the first effect distance field image may be the 1st frame of the sequence, played at 1/20 s, and the second effect distance field image may be the 10th frame, played at 10/20 s.
In this way, each frame of the sequence of distance field frames can be used as the first effect distance field image or the second effect distance field image for effect processing of the object to be processed.
S202: adding the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image, and determining the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a first target distance field image;
S203: drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
For the understanding of S202 and S203 in this embodiment, reference may be made to the description of S102 and S103 in the above embodiment, which is not repeated here.
S204: adding the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the second effect distance field image, and determining the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a second target distance field image.
In the embodiment of the present disclosure, after the distance field image of the object to be processed and the second effect distance field image are acquired, the pixels located at identical position coordinates in the two images are determined; they are referred to below as a fourth pixel and a fifth pixel, respectively. The distance field values of the fourth pixel and the fifth pixel are then added, and the value obtained by the addition is taken as the distance field value of the pixel at the same position coordinates in the second target distance field image (which may also be called a sixth pixel).
In practical applications, following the above processing, the distance field values of the correspondingly positioned pixels in the distance field image of the object to be processed and in the second effect distance field image are added, and each value obtained by the addition is determined as the distance field value of the pixel at the corresponding position coordinates in the second target distance field image.
In this way, the distance field value of each pixel on the second target distance field image can be determined as the sum of the distance field values of the correspondingly positioned pixels in the distance field image of the object to be processed and in the second effect distance field image.
S205: drawing a second target effect image of the object to be processed based on the distance field values of the pixels on the second target distance field image.
In the embodiment of the present disclosure, after the distance field values of the pixels on the second target distance field image are determined, the second target effect image of the object to be processed is drawn based on them.
S206: generating, according to the timestamps of the sequence of distance field frames, a sequence-frame effect animation of the object to be processed based on the first target effect image and the second target effect image.
In the embodiment of the present disclosure, after the drawing of the first target effect image and the second target effect image of the object to be processed is completed, the sequence-frame effect animation of the object to be processed is generated according to the timestamps of the sequence of distance field frames to which the first effect distance field image and the second effect distance field image belong.
In practical applications, an effect image of the object to be processed may be generated for each distance field frame of the sequence to which the first and second effect distance field images belong; the effect images corresponding to the individual distance field frames are then combined based on the timestamps of the sequence, so that they are displayed one after another in chronological order, finally generating the sequence-frame effect animation of the object to be processed.
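The per-frame loop of S206 can be sketched as follows; `render` stands in for whichever drawing step (gradient, glow, color lookup) is used, and all names are illustrative.

```python
# Sketch of S206: for each timestamped effect distance field frame,
# add it to the object's distance field and render the result; order
# the rendered frames by timestamp to form the effect animation.
def build_animation(object_field, effect_frames, render):
    """effect_frames: list of (timestamp, effect_field) pairs."""
    frames = []
    for ts, eff in sorted(effect_frames, key=lambda pair: pair[0]):
        target = [
            [a + b for a, b in zip(row_obj, row_eff)]
            for row_obj, row_eff in zip(object_field, eff)
        ]
        frames.append((ts, render(target)))
    return frames

# Trivial 1x1 example with a pass-through "renderer":
animation = build_animation(
    [[0.0]],
    [(2, [[1.0]]), (1, [[2.0]])],  # deliberately out of order
    lambda field: field[0][0],
)
```

Note that sorting by timestamp makes the output order independent of the order in which the effect frames were supplied.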
Playing the sequence-frame effect animation according to the timestamps presents the dynamic effect of the object to be processed.
For example, the sequence of distance field frames may comprise the distance field image corresponding to each frame of a burning-flame animation; after target text (e.g. "你是") is effect-processed based on this sequence, a sequence-frame effect animation with a dynamic effect can be obtained. FIG. 3 is a schematic diagram of one effect image of the sequence-frame effect animation provided by an embodiment of the present disclosure.
In the dynamic effect processing method provided by the embodiment of the present disclosure, the object to be processed is effect-processed separately based on effect distance field images belonging to the same sequence of distance field frames, and the resulting effect images are then combined to generate a sequence-frame effect animation that presents the object to be processed with a dynamic effect, satisfying users' needs for dynamic effect display.
Based on the same inventive concept as the above method embodiments, the present disclosure further provides an effect processing apparatus. Referring to FIG. 4, a schematic structural diagram of an effect processing apparatus provided by an embodiment of the present disclosure, the apparatus comprises:
a first acquisition module 401, configured to acquire a distance field image of an object to be processed and acquire a first effect distance field image;
a first addition module 402, configured to add the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image, and to determine the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a first target distance field image;
a first drawing module 403, configured to draw a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
In an optional implementation, the apparatus further comprises:
a second acquisition module, configured to acquire a second effect distance field image, wherein the second effect distance field image and the first effect distance field image both belong to a sequence of distance field frames and have different timestamps;
a second addition module, configured to add the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the second effect distance field image, and to determine the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a second target distance field image;
a second drawing module, configured to draw a second target effect image of the object to be processed based on the distance field values of the pixels on the second target distance field image;
a generation module, configured to generate, in chronological order of the timestamps, a sequence-frame effect animation of the object to be processed based on the first target effect image and the second target effect image.
In an optional implementation, the sequence of distance field frames comprises a plurality of consecutive distance field frames generated based on a preset transform rule, the preset transform rule comprising a sine transform, a cosine transform or a pulse curve;
alternatively, the sequence of distance field frames comprises a plurality of consecutive noise-effect distance field frames.
In an optional implementation, the object to be processed comprises target text.
In an optional implementation, the first drawing module comprises:
a first determination submodule, configured to determine pixels on the first target distance field image whose distance field value is less than 0 as first target pixels;
a second determination submodule, configured to determine a gradient value of each first target pixel based on the result of a dot product between the offset coordinates of the first target pixel and a gradient direction vector, wherein the offset coordinates are the difference between the position coordinates of the first target pixel and a gradient origin;
a third determination submodule, configured to determine a color value of the first target pixel according to the gradient value of the first target pixel and target gradient colors;
a first drawing submodule, configured to draw the first target effect image of the object to be processed based on the color values of the first target pixels.
In an optional implementation, the first drawing module comprises:
a fourth determination submodule, configured to determine pixels on the first target distance field image whose distance field value is less than 0 as second target pixels;
a fifth determination submodule, configured to determine a color transparency value of each second target pixel based on the distance field value of the second target pixel, wherein the absolute value of the distance field value of the second target pixel is inversely proportional to the color transparency value of the second target pixel;
a second drawing submodule, configured to draw the first target effect image of the object to be processed based on the color transparency values of the second target pixels.
In the effect processing apparatus provided by the embodiment of the present disclosure, first, a distance field image of an object to be processed and a first effect distance field image are acquired; then, the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image are added, and the value obtained by the addition is determined as the distance field value of the pixel located at the identical position coordinates in a first target distance field image. Finally, a first target effect image of the object to be processed is drawn based on the distance field values of the pixels on the first target distance field image. By superimposing the distance field values of correspondingly positioned pixels in the distance field image of the object to be processed and in the effect distance field image, the embodiment of the present disclosure achieves effect processing of the object to be processed and satisfies users' effect processing needs.
In addition, the embodiment of the present disclosure effect-processes the object to be processed separately based on effect distance field images belonging to the same sequence of distance field frames, and then combines the resulting effect images to generate a sequence-frame effect animation that presents the object to be processed with a dynamic effect, satisfying users' needs for dynamic effect display.
In addition to the above method and apparatus, an embodiment of the present disclosure further provides a computer-readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to implement the effect processing method described in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer program product comprising a computer program/instructions which, when executed by a processor, implement the effect processing method described in the embodiments of the present disclosure.
In addition, an embodiment of the present disclosure further provides an effect processing device. Referring to FIG. 5, the device may comprise:
a processor 501, a memory 502, an input apparatus 503 and an output apparatus 504. The number of processors 501 in the effect processing device may be one or more; one processor is taken as an example in FIG. 5. In some embodiments of the present disclosure, the processor 501, memory 502, input apparatus 503 and output apparatus 504 may be connected by a bus or in other ways, with connection by a bus taken as an example in FIG. 5.
The memory 502 may be used to store software programs and modules, and the processor 501 executes the various functional applications and data processing of the effect processing device by running the software programs and modules stored in the memory 502. The memory 502 may mainly comprise a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for at least one function, and the like. In addition, the memory 502 may comprise high-speed random access memory and may also comprise non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The input apparatus 503 may be used to receive input numeric or character information and to generate signal inputs related to user settings and function control of the effect processing device.
Specifically, in this embodiment, the processor 501 loads the executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application programs stored in the memory 502, thereby implementing the various functions of the above effect processing device.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device comprising that element.
The above are merely specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. An effect processing method, characterized in that the method comprises:
    acquiring a distance field image of an object to be processed, and acquiring a first effect distance field image;
    adding distance field values of pixels located at identical position coordinates in the distance field image and in the first effect distance field image, and determining the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a first target distance field image;
    drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
  2. The method according to claim 1, characterized in that the method further comprises:
    acquiring a second effect distance field image, wherein the second effect distance field image and the first effect distance field image both belong to a sequence of distance field frames and have different timestamps;
    adding the distance field values of pixels located at identical position coordinates in the distance field image and in the second effect distance field image, and determining the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a second target distance field image;
    drawing a second target effect image of the object to be processed based on the distance field values of the pixels on the second target distance field image;
    generating, in chronological order of the timestamps, a sequence-frame effect animation of the object to be processed based on the first target effect image and the second target effect image.
  3. The method according to claim 2, characterized in that the sequence of distance field frames comprises a plurality of consecutive distance field frames generated based on a preset transform rule, the preset transform rule comprising a sine transform, a cosine transform or a pulse curve, and the plurality of consecutive distance field frames comprising the first effect distance field image and the second effect distance field image;
    alternatively, the sequence of distance field frames comprises a plurality of consecutive noise-effect distance field frames.
  4. The method according to any one of claims 1-3, characterized in that the object to be processed comprises target text.
  5. The method according to claim 1, characterized in that the drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image comprises:
    determining pixels on the first target distance field image whose distance field value is less than 0 as first target pixels;
    determining a gradient value of each first target pixel based on the result of a dot product between the offset coordinates of the first target pixel and a gradient direction vector, wherein the offset coordinates are the difference between the position coordinates of the first target pixel and a gradient origin;
    determining a color value of the first target pixel according to the gradient value of the first target pixel and target gradient colors;
    drawing the first target effect image of the object to be processed based on the color values of the first target pixels.
  6. The method according to claim 1, characterized in that the drawing a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image comprises:
    determining pixels on the first target distance field image whose distance field value is less than 0 as second target pixels;
    determining a color transparency value of each second target pixel based on the distance field value of the second target pixel, wherein the absolute value of the distance field value of the second target pixel is inversely proportional to the color transparency value of the second target pixel;
    drawing the first target effect image of the object to be processed based on the color transparency values of the second target pixels.
  7. An effect processing apparatus, characterized in that the apparatus comprises:
    a first acquisition module, configured to acquire a distance field image of an object to be processed and acquire a first effect distance field image;
    a first addition module, configured to add the distance field values of pixels located at identical position coordinates in the distance field image of the object to be processed and in the first effect distance field image, and to determine the value obtained by the addition as the distance field value of the pixel located at the identical position coordinates in a first target distance field image;
    a first drawing module, configured to draw a first target effect image of the object to be processed based on the distance field values of the pixels on the first target distance field image.
  8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when run on a terminal device, cause the terminal device to implement the method according to any one of claims 1-6.
  9. A device, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-6 when executing the computer program.
  10. A computer program product, characterized in that the computer program product comprises a computer program/instructions which, when executed by a processor, implement the method according to any one of claims 1-6.
PCT/CN2021/133879 2021-01-28 2021-11-29 一种效果处理方法、装置、设备及存储介质 WO2022160914A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/357,651 US20240005573A1 (en) 2021-01-28 2023-07-24 Method for generating signed distance field image, method for generating text effect image, device and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110118987.7 2021-01-28
CN202110118987.7A CN114820834A (zh) 2021-01-28 2021-01-28 一种效果处理方法、装置、设备及存储介质

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135953 Continuation-In-Part WO2022160946A1 (zh) 2021-01-28 2021-12-07 一种文字的阴影效果处理方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/357,651 Continuation-In-Part US20240005573A1 (en) 2021-01-28 2023-07-24 Method for generating signed distance field image, method for generating text effect image, device and medium

Publications (1)

Publication Number Publication Date
WO2022160914A1 true WO2022160914A1 (zh) 2022-08-04

Family

ID=82525867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133879 WO2022160914A1 (zh) 2021-01-28 2021-11-29 一种效果处理方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114820834A (zh)
WO (1) WO2022160914A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385469A (zh) * 2022-09-08 2023-07-04 北京字跳网络技术有限公司 特效图像生成方法、装置、电子设备及存储介质
CN116363239A (zh) * 2022-12-20 2023-06-30 北京字跳网络技术有限公司 特效图的生成方法、装置、设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040189644A1 (en) * 2003-03-25 2004-09-30 Frisken Sarah F. Method for animating two-dimensional objects
CN1998023A (zh) * 2004-03-16 2007-07-11 三菱电机株式会社 用于绘制组合字形的区域的方法
US20150113372A1 (en) * 2013-10-18 2015-04-23 Apple Inc. Text and shape morphing in a presentation application


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, TING: "Computer Generation of 3D Inscriptions from 2D Images of Chinese Calligraphy", CHINESE JOURNAL OF COMPUTERS, vol. 37, no. 11, 1 November 2014 (2014-11-01), pages 2380 - 2388, XP055955865 *

Also Published As

Publication number Publication date
CN114820834A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
US10388004B2 (en) Image processing method and apparatus
WO2022160914A1 (zh) 一种效果处理方法、装置、设备及存储介质
WO2018196606A1 (zh) 一种文档配色方案生成方法及装置
WO2022037634A1 (zh) 一种图片处理方法、装置、设备及存储介质
TWI695295B (zh) 基於擴增實境的圖像處理方法、裝置及電子設備
CN110163831B (zh) 三维虚拟沙盘的物体动态展示方法、装置及终端设备
CN105005970A (zh) 一种增强现实的实现方法及装置
US8952981B2 (en) Subpixel compositing on transparent backgrounds
CN110363837B (zh) 游戏中纹理图像的处理方法及装置、电子设备、存储介质
WO2021203720A1 (zh) 页面显示方法和装置、电子设备和存储介质
CN110288670B (zh) 一种ui描边特效的高性能渲染方法
CN109472852A (zh) 点云图像的显示方法及装置、设备及存储介质
KR20160130455A (ko) 애니메이션 데이터 생성 방법, 장치, 및 전자 기기
CN101354789A (zh) 一种图像面具特效的实现方法和设备
CN111402354B (zh) 适用于光学穿透式头戴显示器的颜色对比增强绘制方法、装置以及系统
WO2022160946A1 (zh) 一种文字的阴影效果处理方法、装置、设备及存储介质
US20170017370A1 (en) Device and method for processing data
CN116489377A (zh) 图像处理方法和电子设备
WO2023071595A9 (zh) 一种音效展示方法及终端设备
CN109614064A (zh) 一种图片显示方法、图片显示装置及终端设备
WO2022160945A1 (zh) 一种渐变颜色效果处理方法、装置、设备及存储介质
CN103839217A (zh) 一种水印图片的实现方法
WO2022161237A1 (zh) 一种文字的轮廓效果处理方法、装置、设备及存储介质
WO2022160915A1 (zh) 一种文字的发光效果处理方法、装置、设备及存储介质
WO2018107601A1 (zh) 动态演示使用说明的方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922504

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922504

Country of ref document: EP

Kind code of ref document: A1