WO2024051756A1 - Special effect image drawing method, apparatus, device and medium - Google Patents

Special effect image drawing method, apparatus, device and medium

Info

Publication number
WO2024051756A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
sampling
trajectory
dimensional
point set
Prior art date
Application number
PCT/CN2023/117329
Other languages
English (en)
French (fr)
Inventor
袁琦
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024051756A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular, to a special effect image rendering method, device, equipment and medium.
  • special effects can be displayed for users in fields such as augmented reality (AR) virtual interaction and short video playback.
  • pre-generated special effect images, such as flowers, rockets, and other special effect scenes, can be added to faces in the video.
  • embodiments of the present disclosure provide a method for drawing special effects images, including:
  • the target special effect image corresponding to the movement trajectory of the target object is displayed in the target video.
  • embodiments of the present disclosure provide a special effects image rendering device, including:
  • a trajectory tracking unit, configured to perform, in response to a special effects drawing request triggered by the user, trajectory tracking on the target video provided by the user, and obtain a three-dimensional point set corresponding to the motion trajectory of the target object in the target video;
  • a grid generation unit, configured to perform grid processing on the three-dimensional point set to obtain a target grid structure;
  • a special effects generation unit, configured to perform mapping (texturing) processing on the target grid structure to obtain a target special effects image;
  • a special effects display unit, configured to display the target special effects image corresponding to the movement trajectory of the target object in the target video.
  • embodiments of the present disclosure provide an electronic device, including: a processor, a memory, and an output device;
  • the memory stores computer-executable instructions;
  • the processor executes the computer-executable instructions stored in the memory, so that the processor performs the special effect image rendering method described in the first aspect and its various possible designs, and the output device outputs the target video containing the target special effects image.
  • embodiments of the present disclosure provide a computer-readable storage medium.
  • Computer-executable instructions are stored in the computer-readable storage medium.
  • when the processor executes the computer-executable instructions, the special effects image drawing method described in the above first aspect and its various possible designs is implemented.
  • the present disclosure provides a computer program that, when executed by a processor, implements the special effect image rendering method described in the first aspect and various possible designs in the first aspect.
  • the present disclosure provides a computer program product, including a computer program that, when executed by a processor, implements the special effect image rendering method described in the first aspect and various possible designs in the first aspect.
  • the corresponding target video is obtained through interaction with the user; trajectory tracking is performed on the target video to obtain a three-dimensional point set corresponding to the movement trajectory of the target object in the target video.
  • the three-dimensional point set is gridded to obtain the target grid structure.
  • the target grid structure can then be mapped to obtain the target special effects image, so that the target special effects image is displayed corresponding to the movement trajectory of the target object in the target video.
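  • The four-stage pipeline described above (track the trajectory, grid the three-dimensional points, texture the grid, then display) can be sketched as follows. This is a hypothetical stand-in: every function name and body is illustrative, not the disclosed algorithm.

```python
def track_trajectory(video_frames):
    # Stand-in tracker: take each frame's stored keypoint as a 3D trajectory point.
    return [frame["keypoint_3d"] for frame in video_frames]

def build_grid(points_3d):
    # Stand-in grid step: pair consecutive trajectory points into "cells".
    return list(zip(points_3d, points_3d[1:]))

def apply_texture(grid, texture_name):
    # Stand-in mapping step: attach a texture label to the grid structure.
    return {"grid": grid, "texture": texture_name}

video = [{"keypoint_3d": (0, 0, 1)}, {"keypoint_3d": (1, 0, 1)},
         {"keypoint_3d": (1, 1, 1)}]
effect = apply_texture(build_grid(track_trajectory(video)), "moon")
```

The point of the sketch is only the data flow: a video yields a point set, the point set yields a grid, and the textured grid is the target special effects image.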
  • Figure 1 is an application example diagram of a special effect image drawing method provided by an embodiment of the present disclosure
  • Figure 2 is a flow chart of an embodiment of a special effects image rendering method provided by an embodiment of the present disclosure
  • Figure 3 is a flow chart of another embodiment of a special effects image rendering method provided by an embodiment of the present disclosure
  • Figure 4 is a resampling example diagram provided by an embodiment of the present disclosure.
  • Figure 5 is a sampling example diagram provided by an embodiment of the present disclosure.
  • Figure 6 is a flow chart of another embodiment of a special effects image rendering method provided by an embodiment of the present disclosure.
  • Figure 7 is an example diagram of triangulation of a three-dimensional point set provided by an embodiment of the present disclosure.
  • Figure 8 is a flow chart of another embodiment of a special effects image rendering method provided by an embodiment of the present disclosure.
  • FIG. 9 is an example diagram of a ring-shaped strip special effect diagram provided by an embodiment of the present disclosure.
  • Figure 10 is a schematic structural diagram of an embodiment of a special effects image rendering device provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • the technical solution of the present disclosure can be applied in AR technology scenarios.
  • the target special effect image is obtained through grid processing and mapping processing based on the movement trajectory of the target object, achieving special effects output that corresponds to the trajectory and improving the real-time performance and accuracy of the special effects output.
  • pre-generated special effects images are generally fused with image frames in the video, or special effects images are directly generated for regions of interest such as faces in the image frames; the generated image frames with special effects can then be output and displayed as a video with special effects.
  • the special effects processing in the above method adds special effects to image frames, which bears little relationship to the user's behavior and lacks interaction with the user; as a result, the application scenarios of such special effects images are relatively limited, and real-time interactivity between the special effects and the user is lacking.
  • the present disclosure considers tracking the trajectory of the target object in the video, using the tracked trajectory to generate special effects, and generating a three-dimensional point set for the movement trajectory of the target object in the video.
  • Special effects images are generated through three-dimensional point sets to obtain three-dimensional special effects images and improve the displayability of special effects images.
  • the present disclosure relates to technical fields such as computers, cloud computing, and virtual reality, and specifically relates to a special effects image rendering method, device, equipment, and medium.
  • the corresponding target video is obtained through interaction with the user; trajectory tracking is performed on the target video to obtain a three-dimensional point set corresponding to the movement trajectory of the target object in the target video; the three-dimensional point set is gridded to obtain the target grid structure. After the target grid structure corresponding to the movement trajectory is obtained, it can be mapped to obtain the target special effects image, so that the target special effects image is displayed corresponding to the movement trajectory of the target object in the video.
  • this technical solution uses the motion trajectory as the generation basis for the special effect's grid structure, obtains a target special effects image that matches the user's trajectory, and generates special effects that match the user's movement behavior, which broadens the application scenarios of special effects and improves their usefulness.
  • FIG. 1 is an application example diagram of a special effect image drawing method provided according to the present disclosure.
  • the special effect image drawing method can be applied to an electronic device 1 .
  • the electronic device 1 may include an AR device, for example.
  • the electronic device 1 can detect the special effect drawing request triggered by the user.
  • the electronic device 1 can configure the technical solution of the present disclosure, respond to the special effect drawing request, obtain the target special effect image, and display and output the target special effect image corresponding to the movement trajectory of the target object.
  • a moon 3 can be generated based on the curve 2.
  • moon 3 can be output at the position of curve 2, corresponding to the motion trajectory.
  • the trajectory of the target object U changes with the movement of the target object.
  • the target object U may remain unchanged during the trajectory tracking process.
  • the multiple target objects shown in the figure are merely intended to illustrate the changes in the movement trajectory of the target object U, and do not mean that there are multiple target objects.
  • a corresponding motion trajectory can be generated, that is, curve 2.
  • Figure 2 is a flow chart of an embodiment of a special effects image drawing method provided by an embodiment of the present disclosure.
  • the method can be configured in a special effects image drawing device.
  • the special effects image drawing device can be located in an electronic device.
  • the special effects image drawing method can include the following steps:
  • the special effects drawing request can refer to a Uniform Resource Locator (URL) request for special effects drawing.
  • the special effects drawing control can be set on the video playback page; when it is detected that the user triggers the special effects drawing control, the special effects drawing request is triggered.
  • the video playback page may refer to the webpage that plays the target video.
  • the special effects drawing request triggered by the user can be detected during the playback of the target video.
  • the target video can be a real-time video currently playing on the electronic device.
  • Web pages can be written in programming languages such as HyperText Markup Language 5 (HTML5), Objective-C, and Java.
  • a selection page or dialog box for the target video can also be provided, and the target video can be selected through the selection page or dialog box.
  • the specific uploading method of the target video is not particularly limited here.
  • the target video can include ordinary types of two-dimensional videos, and can also include videos played through AR devices.
  • the target object may include at least one object type such as a moving object, a vehicle, a pedestrian, or a human facial feature.
  • the target object during tracking can be any point on the target object.
  • the target object can refer to key points in the human face that are easily recognized, such as the center point of the forehead, the center of the eyes, the tip of the nose, and any other point as a key point.
  • the motion trajectory is generally a curve formed by sampling the key points of the image frames in the video and connecting the obtained key points.
  • obtaining the three-dimensional point set corresponding to the motion trajectory of the target object in the target video can specifically refer to: for each image frame of the target video in which the target object exists, determining the position point of the target object, that is, the key point or trajectory point, from that image frame.
  • the key points of the image frames form a motion trajectory, which can be resampled to obtain sampling points with a higher density than the original trajectory points.
  • the sampling points can be used to determine the three-dimensional point set.
  • the target grid structure may include a curved grid structure or an annular strip grid structure.
  • the target grid structure may be a grid surface determined according to the three-dimensional point set corresponding to the trajectory, and may include an ordinary curved surface or an annular surface formed by the trajectory contour.
  • the target grid structure initially has no texture; a texture image can be applied to the target grid structure, and once the texture image has been applied, the target special effect image is obtained.
  • mapping of grid structure can be achieved through mapping software.
  • the target special effect image can be displayed at a position corresponding to the movement trajectory of the target object in the target video.
  • the target special effect image can be a three-dimensional image.
  • the target special effects image can be displayed through the output device of the electronic device. Specifically, when the electronic device displays the target video, the target special effects image can be displayed in association with the position of the movement trajectory of the target object. Specifically, the center position of the motion trajectory can be used as the center position of the target special effects image, and the target special effects image is displayed at the center position.
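  • One plausible reading of "the center position of the motion trajectory" is the centroid of the trajectory's sampling points, which would then anchor the displayed effect. The function below is an illustrative assumption, not the disclosed formula.

```python
def trajectory_center(points):
    """Centroid of the 2D motion trajectory, used here as the display
    center of the target special effects image (illustrative)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example: a square trajectory centered on (1.0, 1.0).
center = trajectory_center([(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)])
```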
  • Electronic devices may include mobile phones, AR devices, virtual reality devices, etc.
  • the specific types of electronic devices are not particularly limited here.
  • Electronic devices can display three-dimensional target special effects images.
  • the target special effects image can be converted into a two-dimensional special effects image or special effects point cloud through coordinate conversion to perform special effects display.
  • trajectory tracking is performed on the target video to obtain a three-dimensional point set corresponding to the movement trajectory of the target object in the target video.
  • the three-dimensional point set is gridded to obtain the target grid structure, that is, the target grid structure of the motion trajectory is generated.
  • the target grid structure can then be mapped to obtain the target special effects image, so that the target special effects image is displayed corresponding to the movement trajectory of the target object in the video.
  • this technical solution uses the motion trajectory as the generation basis for the special effect's grid structure, obtains a target special effects image that matches the user's trajectory, and generates special effects that match the user's movement behavior, which broadens the application scenarios of special effects and improves their usefulness.
  • Figure 3 is a flow chart of another embodiment of the special effect image drawing method provided by an embodiment of the present disclosure.
  • the difference from the embodiment shown in Figure 2 lies in how trajectory tracking is performed on the target video provided by the user to obtain the three-dimensional point set corresponding to the motion trajectory of the target object in the target video, including:
  • 301 Identify the target object in the target video.
  • a target tracking algorithm can be used to identify target objects in the target video.
  • the key points of the target object can be determined.
  • taking the target object as a human face as an example, any point on the face can be used as a key point; for example, the tip of the nose or the centers of the eyes can be used as key points.
  • Mapping the target sampling points to the three-dimensional space coordinate system respectively may include: mapping the target sampling points from the two-dimensional space coordinate system to the three-dimensional space coordinate system.
  • the target object in the target video can be identified to sample the motion trajectory of the target object to obtain a two-dimensional point set, and by mapping the two-dimensional point set to a three-dimensional spatial coordinate system, a three-dimensional point set is obtained.
  • trajectory sampling and spatial mapping the three-dimensional point set corresponding to the motion trajectory can be accurately extracted.
  • sampling the movement trajectory of the target object in the target video to obtain at least one target sampling point may include:
  • the two-dimensional point set includes multiple first sampling points
  • resampling the two-dimensional point set to obtain multiple second sampling points may include: sampling between each pair of adjacent first sampling points in the two-dimensional point set to obtain second sampling points; once the two-dimensional point set has been resampled, multiple second sampling points are obtained.
  • the first sampling points collected are 401 to 404 respectively.
  • the spacing between two adjacent first sampling points can be relatively large; if the special effects are drawn directly based on these sampling points, a large contour error may occur.
  • the technical solution of the present disclosure adopts resampling: the fitted curve through the first sampling points can be divided into segmented curves, and each segmented curve can be sampled.
  • the sampling points obtained by this sampling are the second sampling points.
  • the segmented curve sampling corresponding to the first sampling points 401 and 402 can obtain the second sampling point 405.
  • the segmented curve sampling corresponding to the first sampling points 402 and 403 can obtain the second sampling point 406.
  • the segmented curve corresponding to the first sampling points 403 and 404 can be sampled to obtain three second sampling points 407.
  • the segmented curve can also be resampled using whole curve fitting and segmentation methods. For details, please refer to the description of the following embodiments.
  • the first sampling point may be a key point obtained by sampling from the target video.
  • the second sampling point may be a sampling point obtained by resampling based on the first sampling point, and the second sampling point may include the first sampling point.
  • the trajectory smoothing condition can refer to the distance between the sampling point and the trajectory being less than a distance threshold or located on the trajectory. Through the trajectory smoothing condition, sampling points that are significantly different from the trajectory can be removed, so that the trajectory formed by at least one target sampling point is smoothed, avoiding the occurrence of sharp points, and improving the accuracy of the trajectory.
  • a two-dimensional point set composed of a plurality of first sampling points is obtained; the two-dimensional point set is resampled to obtain denser second sampling points, so the density of the sampling points can be increased through resampling.
  • at least one target sampling point that satisfies the trajectory smoothing condition can then be determined from the second sampling points; optimizing the second sampling points with the trajectory smoothing condition makes the trajectory corresponding to the target sampling points smoother, improving the collection precision and accuracy of the target sampling points.
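  • As a rough sketch of the densification idea, linear interpolation between adjacent first sampling points can stand in for the spline-based resampling described here (the function name, the linear scheme, and the per-segment count are illustrative assumptions):

```python
def resample(first_points, inserts_per_segment):
    """Insert evenly spaced second sampling points between each pair of
    adjacent first sampling points (linear stand-in for spline resampling)."""
    second_points = []
    for (x0, y0), (x1, y1) in zip(first_points, first_points[1:]):
        second_points.append((x0, y0))  # keep the original first sampling point
        for i in range(1, inserts_per_segment + 1):
            t = i / (inserts_per_segment + 1)
            second_points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    second_points.append(first_points[-1])
    return second_points

first_points = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
second_points = resample(first_points, 1)
# yields 5 points: the 3 originals plus one midpoint per segment
```

Note that, as in the disclosure, the second sampling points include the original first sampling points.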
  • the motion trajectory of the target object is sampled from the target video to obtain a two-dimensional point set, including:
  • the trajectory points of the target object are collected from the image frames of the target video
  • collecting the trajectory points of the target object from the image frame of the target video according to the preset sampling frequency may include: performing image sampling from the target video according to the preset sampling frequency, obtaining the image frame, and identifying the target object from the image frame.
  • the key points of the target object are determined as track points.
  • the trajectory points may include multiple trajectory points, and the multiple trajectory points may be collected from multiple image frames respectively.
  • the sampling frequency can be preset, and specifically it can be set according to the sampling time interval.
  • the product of the sampling time interval and the sampling frequency can be 1, that is, the two are reciprocals of each other.
  • replacing the last first sampling point in the two-dimensional point set with the trajectory point includes: deleting the current last first sampling point in the two-dimensional point set and using the trajectory point as the new last first sampling point in the two-dimensional point set.
  • the trajectory points when sampling the motion trajectory of the target object, can be initially obtained based on the sampling of image frames in the target video. By judging whether the trajectory points satisfy the distance constraint, and under the constraints of the distance constraint, the trajectory points collected in different image frames are screened in more detail to improve the sampling accuracy of the first sampling point.
  • the trajectory point satisfies the distance constraint, including:
  • Determining the first distance between the trajectory point and the last first sampling point in the two-dimensional point set may include: calculating the pixel coordinate distance according to the pixel coordinates of the trajectory point and the pixel coordinates of the last first sampling point in the two-dimensional point set, The first distance is determined based on the pixel coordinate distance.
  • the distance unit of the pixel coordinate distance may be different from or the same as the distance unit of the first distance. In different situations, two distance units can be used for conversion to achieve distance comparison in the same unit to achieve accurate distance comparison.
  • the distance threshold can be a limit distance between two adjacent trajectory points.
  • the first distance between the trajectory point and the last first sampling point in the two-dimensional point set is determined, and a distance constraint is applied to the trajectory points through a distance threshold, so as to avoid collecting trajectory points that differ too little from the already collected sampling points.
  • in this way, the sampling points can be accurately extracted.
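  • One plausible reading of the distance constraint, sketched in Python: a newly collected trajectory point is kept only if its pixel distance to the last first sampling point reaches the threshold. The acceptance rule and all names are assumptions, not taken verbatim from the disclosure.

```python
def accept_trajectory_point(two_d_points, point, dist_threshold):
    """Append a trajectory point only if its pixel distance to the last
    first sampling point meets the distance threshold (illustrative)."""
    if not two_d_points:
        two_d_points.append(point)
        return True
    lx, ly = two_d_points[-1]
    px, py = point
    first_distance = ((px - lx) ** 2 + (py - ly) ** 2) ** 0.5
    if first_distance >= dist_threshold:
        two_d_points.append(point)
        return True
    return False  # too close to the last sampling point; discard

pts = []
accept_trajectory_point(pts, (0, 0), 5.0)
accept_trajectory_point(pts, (3, 4), 5.0)   # distance 5.0 -> accepted
accept_trajectory_point(pts, (4, 4), 5.0)   # distance 1.0 -> rejected
```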
  • the first sampling point is determined based on the trajectory points, including:
  • the trajectory point is added to the two-dimensional point set as the last first sampling point;
  • the mean point is obtained based on the weighted sum of the previous N first sampling points in the two-dimensional point set and the trajectory point; the mean point is added to the two-dimensional point set as the last first sampling point; N is a positive integer greater than 0.
  • closed-trajectory determination via the distance threshold may be triggered: that is, when the distance between the current trajectory point and its previous point is less than the distance threshold, the trajectory can be judged to be closed, resulting in invalid sampling or a sampling stop. Therefore, the number of sampling points can be checked after the distance check; through the constraint on the number of sampling points, weighted sampling is performed after the number of currently drawn sampling points reaches the sampling-number threshold, so that the sampling process is subject to the dual constraints of the distance threshold and the sampling-number threshold, improving sampling accuracy.
  • obtaining the mean point exerts a certain control over the trajectory-point drawing process, so that the final first sampling points are not limited to the raw trajectory points, avoiding unsmooth fitted curves caused by large fluctuations in the actual trajectory points.
  • the weighted summation can make the first sampling point collected smoother and more accurate.
  • the sampling number threshold is used to judge the number of sampling points, and it is possible to confirm whether the trajectory points can be used directly.
  • the sampling point and its previous sampling points can be weighted and summed, so that the accuracy of the first sampling point is higher, and the first sampling point is accurately sampled.
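  • A minimal sketch of the weighted-sum smoothing step: blend the new trajectory point with the mean of the previous N first sampling points. The weight value, N, and all names are illustrative assumptions rather than values from the disclosure.

```python
def weighted_mean_point(two_d_points, traj_point, n, weight=0.5):
    """Blend a new trajectory point with the mean of the previous N
    first sampling points (illustrative smoothing weights)."""
    prev = two_d_points[-n:]
    mx = sum(p[0] for p in prev) / len(prev)
    my = sum(p[1] for p in prev) / len(prev)
    tx, ty = traj_point
    return (weight * mx + (1 - weight) * tx,
            weight * my + (1 - weight) * ty)

pts = [(0.0, 0.0), (2.0, 0.0)]
mean_pt = weighted_mean_point(pts, (4.0, 4.0), n=2)
# mean of the previous two points is (1.0, 0.0); blended with (4.0, 4.0)
pts.append(mean_pt)  # the mean point becomes the last first sampling point
```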
  • resample the two-dimensional point set to obtain multiple second sampling points including:
  • the closed curve is resampled to obtain multiple second sampling points.
  • the curve fitting algorithm may include a Bezier curve algorithm or a Catmull-Rom algorithm.
  • curve fitting the first first sampling point and the last first sampling point in the two-dimensional point set to obtain the second spline may include: performing curve fitting on the first and last first sampling points in the two-dimensional point set according to the average curvature of the first spline, to obtain the second spline. Alternatively, the first and last first sampling points in the two-dimensional point set can be directly connected to form a line segment, and the second spline corresponding to that line segment is obtained.
  • the first spline 503, obtained by fitting the sampling points between the first sampling point 501 and the last sampling point 502, is not closed.
  • the second spline 504 can be obtained through curve fitting.
  • the first spline 503 and the second spline 504 can be spliced to form a closed curve.
  • curve fitting is performed on the multiple first sampling points in the two-dimensional point set to obtain the first spline.
  • curve fitting can then be performed on the first and last first sampling points in the two-dimensional point set to obtain the second spline.
  • a closed curve is obtained.
  • the closed curve corresponds to the motion trajectory.
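  • Since Catmull-Rom is one of the fitting algorithms named above, a minimal way to obtain a closed curve is to treat the first sampling points as a cyclic list, so the gap between the last and first points (the role played by the second spline) is also spanned. Sample counts and function names are illustrative.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment between p1 and p2."""
    def comp(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return (comp(p0[0], p1[0], p2[0], p3[0]),
            comp(p0[1], p1[1], p2[1], p3[1]))

def close_curve(points, samples=4):
    """Fit a closed curve through the sampling points by treating the
    point list as cyclic (illustrative closing strategy)."""
    n = len(points)
    closed = []
    for i in range(n):
        p0, p1 = points[(i - 1) % n], points[i]
        p2, p3 = points[(i + 1) % n], points[(i + 2) % n]
        for s in range(samples):
            closed.append(catmull_rom(p0, p1, p2, p3, s / samples))
    return closed

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
curve = close_curve(square)  # 4 points per segment, 4 cyclic segments
```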
  • the closed curve is resampled according to the segmented interval sampling strategy to obtain multiple second sampling points, including:
  • collecting the second sampling points from the segmented curve may include determining at least one curve sampling point from the segmented curve, and determining the at least one curve sampling point as the second sampling point respectively. It can also be determined that the original first sampling point is the second sampling point.
  • the plurality of second sampling points may include a plurality of first sampling points in the two-dimensional point set.
  • performing segmentation processing on the closed curve according to the number of segments may include: determining the segment length of the closed curve based on the number of segments, and dividing the closed curve into at least one segmented curve according to the segment length and the number of segments; the number of segmented curves can equal the number of segments.
  • At least one segmented curve is obtained by segmenting the closed curve in a segmented manner.
  • each segmented curve approximates a line segment; sampling from the segmented curves means that the multiple second sampling points on a given segment belong to the same line segment, giving higher accuracy.
  • determining the number of segments corresponding to the segment interval sampling strategy may include:
  • based on the maximum number of segments, the maximum number of input points, and the minimum number of segments, the number of segments corresponding to the segment interval sampling strategy is determined.
  • parameters such as the maximum number of segments, the maximum number of input points, and the minimum number of segments can be set.
  • the accurate number of segments is obtained according to the constraints of the maximum number of segments, the maximum number of input points, and the minimum number of segments.
  • determining the number of segments corresponding to the segment interval sampling strategy may include:
  • the curve length of the closed curve is determined, and the number of segments is calculated based on that curve length.
  • the determined curve length is used to calculate the number of segments, so that the number of segments and the curve length can be matched in real time, dynamic adjustment of the number of segments is realized, and an accurate number of segments is obtained.
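  • A hedged sketch of combining the two rules above: derive the segment count from the curve length, then clamp it by the configured bounds. The target spacing constant and all parameter names are assumptions for illustration.

```python
def segment_count(curve_length, max_segments, min_segments, max_input_points,
                  target_spacing=2.0):
    """Number of segments ~ curve length / desired spacing, clamped by
    the maximum number of segments, the maximum number of input points,
    and the minimum number of segments (illustrative constraint order)."""
    n = int(curve_length / target_spacing)
    n = min(n, max_segments, max_input_points)
    return max(n, min_segments)

long_curve = segment_count(100.0, max_segments=40, min_segments=8,
                           max_input_points=64)
short_curve = segment_count(10.0, max_segments=40, min_segments=8,
                            max_input_points=64)
# long curves hit the maximum (40); short curves are raised to the minimum (8)
```

This realizes the "dynamic adjustment" described above: the segment count tracks the curve length in real time while staying inside the configured limits.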
  • map at least one target sampling point to a three-dimensional spatial coordinate system respectively to obtain a three-dimensional point set including:
  • each target sampling point is converted into three-dimensional space coordinates, and the three-dimensional transformed coordinates corresponding to each target sampling point are obtained;
  • the camera transformation matrix can be obtained by matrix calculation using camera parameters, which can include camera intrinsic parameters and camera extrinsic parameters.
  • the target sampling point can be a point in the two-dimensional image coordinate system.
  • the target sampling point can be converted into the three-dimensional spatial coordinate system through data such as the camera transformation matrix to obtain the three-dimensional transformation coordinates.
  • the target sampling point can be converted into a three-dimensional space coordinate system based on the depth information of each target sampling point combined with the camera conversion distance of the camera coordinate system, thereby achieving accurate conversion of pixel points in the three-dimensional space coordinate system.
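• The conversion of a two-dimensional target sampling point into the three-dimensional spatial coordinate system using depth information and camera parameters can be illustrated with a standard pinhole back-projection; the 4x4 camera-to-world matrix, the function name, and the use of NumPy are assumptions.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, cam_to_world):
    """Back-project a pixel (u, v) with a known depth into world
    coordinates using intrinsics K and a 4x4 camera-to-world matrix."""
    # pixel -> camera-space ray (pinhole model)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth                     # point in camera space
    p_h = np.append(p_cam, 1.0)             # homogeneous coordinates
    return (cam_to_world @ p_h)[:3]
```

With identity extrinsics, the principal point at the given depth back-projects onto the optical axis.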
  • At least one target sampling point that satisfies the trajectory smoothing condition is determined from multiple second sampling points, including:
• if the dot product of the tangent vectors of the target group of sampling points is greater than or equal to zero, it is determined that the two second sampling points in the target group are both target sampling points;
  • the step of calculating the tangent vectors of the two second sampling points may include arc fitting based on the two sampling points, and using the fitted arcs to calculate the tangent vectors.
• two adjacent sampling points can be divided into a group to determine whether the dot product of the tangent vectors of each group of sampling points is greater than zero. If it is greater than zero, the difference between the two second sampling points in the group is small, and both sampling points can be retained. If it is less than zero, the difference between the two second sampling points in the group is large: one of the sampling points protrudes and may affect the display effect of the special effect, so either one of the two second sampling points can be retained. Through the judgment of tangent vectors, protruding points among adjacent sampling points can be processed to improve the processing efficiency of sampling points.
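• A minimal sketch of the tangent-vector judgment for adjacent sampling points follows; approximating each tangent by a finite difference (rather than the arc fitting mentioned above) is an assumption made for brevity.

```python
import numpy as np

def filter_sharp_points(points):
    """Drop protruding points: an interior point is kept only when its
    incoming and outgoing tangent vectors have a non-negative dot product."""
    pts = np.asarray(points, dtype=float)
    kept = [pts[0]]
    for i in range(1, len(pts) - 1):
        t_prev = pts[i] - pts[i - 1]        # tangent into point i
        t_next = pts[i + 1] - pts[i]        # tangent out of point i
        if np.dot(t_prev, t_next) >= 0:     # directions roughly agree
            kept.append(pts[i])             # keep the smooth point
        # else: skip the protruding point
    kept.append(pts[-1])
    return np.array(kept)
```

A backtracking pair along a straight polyline is removed, while the collinear points survive.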
  • At least one target sampling point that satisfies the trajectory smoothing condition is determined from the second sampling point, including:
• judging whether the second sampling point belongs to a point on the smooth curve may include: performing curve fitting on the second sampling points to obtain a fitting curve, and calculating the absolute distance from each second sampling point to the fitting curve. If the absolute distance is greater than the absolute threshold, it is determined that the second sampling point satisfies the trajectory smoothing condition; if the absolute distance is less than or equal to the absolute threshold, it is determined that the second sampling point does not satisfy the trajectory smoothing condition.
  • whether the second sampling point belongs to a point on the smooth curve can be judged based on the edge average sampling algorithm.
• by judging whether a sampling point belongs to the smooth curve, prominent sharp points can be detected, so that the target sampling points are points along a smooth curve. When special effects are generated based on the at least one target sampling point, the contour of the special effect image is smoother and has no sharp parts; thus at least one target sampling point with a smoother contour is obtained.
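• The curve-fitting check can be sketched as below. A polynomial fit stands in for the unspecified fitting curve, and the comparison direction (keeping points close to the fit, the usual outlier-rejection convention) is an assumption of this sketch.

```python
import numpy as np

def smooth_filter(points, threshold, degree=3):
    """Fit a polynomial y = f(x) to the sampled points and keep the
    points whose vertical distance to the fit stays within `threshold`."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)   # least-squares fit
    fitted = np.polyval(coeffs, pts[:, 0])
    dist = np.abs(pts[:, 1] - fitted)                   # absolute distance
    return pts[dist <= threshold]
```

A single large outlier on an otherwise linear trajectory is dropped by a degree-1 fit.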
  • FIG. 6 it is a flow chart of another embodiment of a special effect image drawing method provided by the embodiment of the present disclosure.
  • the difference between the special effects image drawing method and the previous embodiment is that the three-dimensional point set is gridded processing to obtain the target grid structure, including:
  • 603 Perform texture mapping on the three-dimensional mesh surface to obtain the target texture image for three-dimensional rendering
  • 604 Perform edge smoothing on the target texture image of the three-dimensional rendering to obtain the target special effect image.
• the triangular mesh surface can be obtained by connecting multiple points in the three-dimensional point set to form a mesh.
  • Smoothing the edges of the target texture image for three-dimensional rendering may refer to smoothing the edges of the grid of the target texture image for three-dimensional rendering.
• the three-dimensional point set can be connected according to the network topology to generate a triangular mesh surface, and the triangular mesh surface can be used as the target mesh structure for mapping processing to obtain a three-dimensional texture map.
• through network processing, accurate triangular mesh surfaces can be obtained.
  • edge smoothing processing can be performed on the target texture image for three-dimensional rendering to obtain the target special effect image. Through edge smoothing, the target special effects image can be made smoother and the display effect better.
  • triangulate the entire area contained in the three-dimensional point set to obtain a triangular mesh surface including:
  • the triangular mesh surface is determined.
  • determining the center point corresponding to the three-dimensional point set may include the following implementation methods:
  • Embodiment 1 Solve the internal center of gravity of the convex polygon corresponding to the three-dimensional point set and obtain the center point.
  • Embodiment 2 Perform circular fitting processing on the outline of the three-dimensional point set to obtain the center point of the fitting circle as the center point.
• Embodiment 3 Perform triangulation (Delaunay) decomposition on the three-dimensional point set. If the candidate center point is not within any of the divided triangles, it means the center of gravity is outside the convex polygon corresponding to the three-dimensional point set, and the mean of all points in the three-dimensional point set can be calculated and used as the center point.
  • a convex polygon can be a polygonal figure formed by connecting edge points in a three-dimensional point set.
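• Embodiment 1's center-of-gravity computation, with Embodiment 3's mean-of-all-points fallback for degenerate cases, might look like this sketch; the shoelace-based centroid formula for a simple polygon is an assumption of the sketch, not taken from the disclosure.

```python
import numpy as np

def polygon_centroid(points):
    """Area-weighted centroid of a simple polygon (Embodiment 1);
    falls back to the plain mean of all points for degenerate
    polygons (Embodiment 3's fallback)."""
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y                 # shoelace cross terms
    area = cross.sum() / 2.0
    if abs(area) < 1e-12:                   # degenerate: use the mean
        return p.mean(axis=0)
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])
```

For the unit square, the centroid is the geometric center.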
• Figure 7 shows an example diagram of the triangulation of a three-dimensional point set, in which the three-dimensional point set is mapped to a plane.
  • the point of the three-dimensional point set is 701 and the center point is 702.
• Two adjacent points 701 of the three-dimensional point set and the center point 702 can be connected in pairs to form a triangle, and the corresponding three-dimensional topological structure 703 is obtained.
  • the three-dimensional point set shown in Figure 7 is only exemplary. In practical applications, the three-dimensional point set can change along with the motion trajectory.
  • the three-dimensional topological structure 703 can be mapped to the UV coordinate system to obtain the UV value of each point to obtain a triangular mesh surface.
  • the boundary value of the three-dimensional topology structure may refer to the boundary size of the three-dimensional topology structure, such as the length and width of the three-dimensional topology structure.
  • the length and width are actually calibrated values.
  • the percentage coordinates of the image can refer to the UV coordinate system.
• when mapping a three-dimensional topological structure, in order to ensure that the image fits the three-dimensional topological structure closely without over-fitting, the boundary values of the three-dimensional topological structure can be determined and converted into calculated values in the UV coordinate system.
• the UV coordinate is the percentage coordinate of the image, with U in the horizontal direction and V in the vertical direction. For example, assume that the three-dimensional topological structure includes four boundary points: the upper left point, the upper right point, the lower left point, and the lower right point.
• It can be determined that U equals 0 and V equals 0 at the upper left corner, U equals 1 and V equals 1 at the lower right corner, U equals 1 and V equals 0 at the upper right corner, and U equals 0 and V equals 1 at the lower left corner.
• These are the calculated values of the four corners of the three-dimensional topological structure in the UV coordinate system, and the UV value of the center point of the three-dimensional topological structure can be U equal to 0.5 and V equal to 0.5.
• The calculated value of each point, that is, the UV value, can then be used for mapping.
  • a sector network topology when generating a triangular mesh surface, can be constructed based on the center point of a three-dimensional point set to obtain a three-dimensional topological structure.
• by converting the boundary values of the three-dimensional topological structure into UV calculation values, the three-dimensional topological structure can be expressed in the UV coordinate system, and a triangular mesh surface with a coordinate structure is obtained. Through surface construction and coordinate conversion, an accurate triangular mesh surface can be obtained, ensuring the accurate execution of subsequent special effects mapping.
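• The sector (fan) topology construction and the boundary-value-to-UV conversion can be sketched together as below; normalizing each vertex into the mesh's bounding box to obtain U and V in [0, 1] is an assumption consistent with the corner example above, not the disclosure's exact method.

```python
import numpy as np

def fan_mesh(boundary_pts):
    """Build a fan ('sector') triangle mesh from boundary points and
    their centroid, and assign UVs by normalizing each vertex into
    the mesh's bounding box (U, V in [0, 1])."""
    b = np.asarray(boundary_pts, dtype=float)
    center = b.mean(axis=0)
    verts = np.vstack([b, center])          # center is the last vertex
    n, c = len(b), len(b)                   # c = index of the center vertex
    # each triangle: two adjacent boundary points plus the center
    tris = [(i, (i + 1) % n, c) for i in range(n)]
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    uvs = (verts - lo) / np.where(hi - lo > 0, hi - lo, 1.0)
    return verts, tris, uvs
```

For a square boundary, four triangles share the center vertex, whose UV value is (0.5, 0.5), matching the example above.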
  • edge smoothing is performed on the target texture image of the three-dimensional rendering to obtain the target special effects image, including:
• the texture image and rendering texture can include four channels: red (Red, R), green (Green, G), blue (Blue, B), and alpha (alpha, A). Each channel has a corresponding channel value at each pixel point.
• the channel value of each channel of the rendering texture can be 0; that is, the initial value of each pixel is (0, 0, 0, 0), and when the channel values of each pixel are 0, the background of the rendering texture is transparent.
  • Grayscale rendering texture can refer to the grayscale value of the pixel value of the rendering texture being 0-1.
  • the alpha channel value can be used to convert the color value of the grayscale rendering texture.
  • the color value corresponding to an alpha channel value of 0 is (0, 0, 0), which is pure black;
• the color value corresponding to an alpha channel value of 1 is (1, 1, 1), which is pure white; an alpha channel value in the range of 0-1 corresponds to the grayscale color value (alpha, alpha, alpha).
  • the grayscale mask map can be used to mark the mask of the first texture map, so that the outline of the target special effect image is affected by the grayscale mask map and converted into a polygonal outline.
  • performing erosion processing and blurring processing on the grayscale mask image to obtain a smooth mask image may include performing erosion processing on the grayscale mask image to obtain the corrosion mask image, and performing blurring processing on the corrosion mask image to obtain Smooth mask map.
  • Blurring the corrosion mask map may include: blurring the corrosion mask map based on a Gaussian function.
• by drawing the target texture image of the three-dimensional rendering onto a rendering texture with an initial value of zero, the first texture map is obtained; by drawing the alpha channel values of the first texture map onto the grayscale rendering texture, the grayscale mask image is obtained; and through erosion processing and blurring processing of the grayscale mask image, a smoother smooth mask image is obtained.
• by using the smooth mask map to perform image fusion processing on the first texture map, the contour of the image can be smoothed to obtain a target special effect image with a smoother contour.
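• The erosion-plus-blur smoothing of the grayscale mask can be illustrated with plain NumPy; the 3x3 structuring element and the repeated box blur standing in for the Gaussian blur mentioned above are assumptions of this sketch.

```python
import numpy as np

def smooth_mask(alpha, erode_iters=1, blur_iters=2):
    """Erode a grayscale alpha mask with a 3x3 min filter, then blur
    it with repeated 3x3 box filters (a cheap stand-in for Gaussian
    blur, which repeated box blurs approximate)."""
    m = np.asarray(alpha, dtype=float)
    pad = lambda a: np.pad(a, 1, mode="edge")
    def neighborhood(a):
        # stack of the 9 shifted copies covering each 3x3 window
        p = pad(a)
        return np.stack([p[i:i + a.shape[0], j:j + a.shape[1]]
                         for i in range(3) for j in range(3)])
    for _ in range(erode_iters):
        m = neighborhood(m).min(axis=0)     # morphological erosion
    for _ in range(blur_iters):
        m = neighborhood(m).mean(axis=0)    # box blur
    return m
```

A solid square mask shrinks under erosion and its hard edge becomes a soft gradient after blurring.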
  • FIG. 8 it is a flow chart of another embodiment of a special effect image drawing method provided by the embodiment of the present disclosure.
  • the difference between the special effects image drawing method and the previous embodiment is that the three-dimensional point set is gridded processing to obtain the target grid structure, including:
• interpolating two adjacent sampling points in the three-dimensional point set may refer to interpolating between every two adjacent sampling points in the three-dimensional point set until all pairs of adjacent sampling points have been processed.
• interpolation processing based on a Catmull-Rom curve can be used to obtain a denser target point set.
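• The Catmull-Rom interpolation between two adjacent sampling points, using their neighbors as control points, can be sketched as follows; the uniform (non-centripetal) parameterization is an assumption of the sketch.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, n):
    """Catmull-Rom interpolation of `n` points between p1 and p2,
    using p0 and p3 as the neighboring control points."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n)[:, None]   # parameter along the segment
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

The curve passes exactly through p1 at t = 0 and p2 at t = 1, so densifying every adjacent pair keeps the original sampling points.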
  • FIG. 9 shows an example diagram of a ring-shaped strip special effect chart.
  • the three-dimensional bar trajectory chart 901 can be used to determine the target special effect image, showing a ring-shaped special effect.
  • the target effect image can have translucent properties.
• The rendering transparency can be preset according to usage requirements; it can be a value greater than 0 and less than or equal to 1, such as a decimal between 0 and 1.
  • a strip network corresponding to the target point set can be generated, a strip target network structure can be obtained, and the generation of a strip grid can be achieved.
  • a three-dimensional bar trajectory map can be obtained.
  • the three-dimensional bar trajectory chart can be used to determine the target special effects image.
  • determining the target special effects image based on the three-dimensional strip trajectory includes:
  • the trajectory mixed image is transparently processed according to the rendering transparency to obtain the target special effect image.
  • the second texture map may be a rendered image showing a three-dimensional bar trajectory.
• The rendering texture (RT) can be used to perform edge smoothing on the three-dimensional bar trajectory, so that the bar shape of the obtained target special effect image is smoother and the display effect is better.
  • One possible design also includes:
  • the texture image is used to map the target grid structure to obtain the target special effect image.
  • At least one festival type can be set and obtained, and the time period corresponding to each festival type is known.
  • the time period corresponding to the Mid-Autumn Festival is known.
  • the texture image can be pre-associated for each festival type.
  • the texture image for the Mid-Autumn Festival can be a moon texture image
  • the texture image for the Dragon Boat Festival can be a rice dumpling leaf texture image.
• the target holiday type is determined from the triggering time of the special effects drawing request, and the texture image corresponding to the target holiday type is used for mapping, so that the texture of the target special effects image matches the target holiday type in real time; the applicable time span is longer and higher display efficiency can be obtained.
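• The trigger-time-to-festival lookup can be sketched as below; the festival names, the date ranges in the table, and the function names are hypothetical placeholders, not values from the disclosure.

```python
from datetime import date

# Hypothetical festival table: type -> (start, end) period for one year.
FESTIVALS = {
    "mid_autumn": (date(2023, 9, 29), date(2023, 10, 1)),
    "dragon_boat": (date(2023, 6, 22), date(2023, 6, 24)),
}

def target_festival(trigger_day):
    """Return the festival type whose time period contains the
    trigger time, or None when no festival matches."""
    for festival, (start, end) in FESTIVALS.items():
        if start <= trigger_day <= end:
            return festival
    return None
```

The returned festival type would then select the pre-associated texture image (e.g. a moon texture for the Mid-Autumn Festival).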
• Application scenario 1 In traditional holiday scenarios, such as the Dragon Boat Festival, the Mid-Autumn Festival, and other holidays, the user initiates a special effects rendering request, and texture images suitable for the holiday can be obtained based on the technical solution of the present disclosure.
• the target grid structure obtained through the steps of the present disclosure can be used for mapping processing; for example, with moon texture mapping, a special-effects moon can be obtained and displayed corresponding to the motion trajectory of the video.
• Application scenario 2 In an AR scenario, the technical solution of the present disclosure can be used to generate a special effect image for a tracked vehicle or person.
  • the generated special effect image is a three-dimensional special effect image.
• the generated three-dimensional special effect image can be displayed on the AR device, realizing augmented reality in the display and improving the utilization rate of special effects.
  • the technical solution of the present disclosure can also be applied to the fields of video playback and AR. Specifically, it can include scene-based special effects for special holidays, special effects applications for user trajectory tracking, and other fields.
  • FIG. 10 it is a schematic structural diagram of an embodiment of a special effects image rendering device provided by an embodiment of the present disclosure.
  • the device can be located in an electronic device and can be configured with the above special effects image rendering method.
• the special effects image rendering device 1000 can include:
  • Trajectory tracking unit 1001 configured to respond to the special effects drawing request triggered by the user, perform trajectory tracking on the target video provided by the user, and obtain a three-dimensional point set corresponding to the motion trajectory of the target object in the target video.
  • Grid generation unit 1002 used to grid process the three-dimensional point set to obtain the target grid structure.
  • Special effects generation unit 1003 used for mapping the target grid structure to obtain the target special effects image.
  • Special effects display unit 1004 used to display the target special effects image corresponding to the movement trajectory of the target object in the target video.
  • the trajectory tracking unit includes:
  • Object recognition module used to identify target objects in target videos
  • the trajectory sampling module is used to sample the movement trajectory of the target object in the target video to obtain at least one target sampling point;
  • the spatial mapping module is used to respectively map at least one target sampling point to a three-dimensional spatial coordinate system to obtain a three-dimensional point set.
  • the trajectory sampling module includes:
  • the trajectory sampling submodule is used to sample the movement trajectory of the target object from the target video to obtain a two-dimensional point set; the two-dimensional point set includes multiple first sampling points;
  • the resampling submodule is used to resample the two-dimensional point set to obtain multiple second sampling points
  • the sampling selection submodule is used to determine at least one target sampling point that satisfies the trajectory smoothing condition from a plurality of second sampling points.
  • trajectory sampling submodule is specifically used for:
  • the trajectory points of the target object are collected from the image frames of the target video
  • trajectory sampling submodule is specifically used for:
  • trajectory sampling submodule is specifically used for:
• the trajectory point is added to the two-dimensional point set as the last first sampling point
• the mean point is obtained based on the weighted sum of the first N first sampling points in the two-dimensional point set and the trajectory point; the mean point is added to the two-dimensional point set as the last first sampling point; N is a positive integer greater than 0.
  • the resampling submodule is specifically used to:
  • the closed curve is resampled to obtain multiple second sampling points.
  • the resampling submodule is specifically used to:
  • sampling selection sub-module can be used for:
  • the dot product results are obtained by calculating the dot product of the tangent vectors of the two second sampling points;
  • the spatial mapping module may include:
• the matrix conversion submodule is used to convert each target sampling point into the three-dimensional spatial coordinate system according to the camera conversion matrix corresponding to the camera that renders the target video, to obtain the three-dimensional transformation coordinates corresponding to each target sampling point;
  • the point set determination submodule is used to obtain a three-dimensional point set composed of three-dimensional transformation coordinates corresponding to each target sampling point.
  • the grid generation unit may include:
  • the first processing module is used to triangulate the entire area contained in the three-dimensional point set to obtain a triangular mesh surface
  • the first determination module is used to determine that the triangular mesh surface is a target mesh structure corresponding to the surface special effect type.
  • Special effects generation unit including:
  • the first mapping module is used to perform texture mapping on the three-dimensional mesh surface to obtain the target texture image for three-dimensional rendering
  • the edge smoothing module is used to perform edge smoothing on the target texture image of 3D rendering to obtain the target special effects image.
  • the first processing module includes:
  • Center determination submodule used to determine the center point corresponding to the three-dimensional point set
  • the topology construction submodule is used to construct a sector grid topology based on the three-dimensional point set and the center point to obtain the three-dimensional topological structure;
  • the coordinate calculation submodule is used to convert the boundary value of the three-dimensional topology structure into the calculated value of the percentage coordinate system of the image
  • the surface determination submodule is used to determine the triangular mesh surface based on the three-dimensional topology structure and the calculated values of each point on the three-dimensional topology structure.
  • the edge smoothing module includes:
  • the first rendering submodule is used to render each pixel of the three-dimensional rendering target texture image into a rendering texture with each channel value being zero, to obtain the first texture map;
  • the second rendering submodule is used to draw the alpha channel of the first texture map into the grayscale rendering texture to obtain the grayscale mask map;
  • the corrosion blur sub-module is used to corrode and blur the grayscale mask image to obtain a smooth mask image
  • the image fusion sub-module is used to perform image fusion processing on the smooth mask map and the first texture map to obtain the target special effect image.
  • the grid generation unit includes:
  • the point set difference module is used to interpolate two adjacent sampling points in the three-dimensional point set to obtain the target point set;
  • the grid generation module is used to generate a strip network for the contour area corresponding to the target point set to obtain the target grid structure
  • Special effects generation unit including:
  • the trajectory rendering module is used to perform trajectory rendering on the target grid structure and obtain a three-dimensional bar trajectory graph
  • the trajectory processing module is used to determine the target special effects image based on the three-dimensional bar trajectory map.
  • the trajectory processing module includes:
  • the third rendering submodule is used to render the three-dimensional strip trajectory to a rendering texture with each channel value being zero to obtain the second texture map;
  • the transparency processing submodule is used to transparently process the second texture map according to the rendering transparency to obtain the target special effect image.
  • One possible design also includes:
  • the time determination unit is used to determine the trigger time of the special effects drawing request triggered by the user
  • a holiday determination unit configured to determine the target holiday type corresponding to the trigger time based on the time period corresponding to at least one holiday type
  • the texture association unit is used to obtain the pre-associated texture image of the target festival type
  • Special effects generation unit including:
  • the second mapping module is used to use texture images to map the target grid structure to obtain the target special effects image.
  • the device provided in this embodiment can be used to execute the technical solutions of the above method embodiments. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
  • embodiments of the present disclosure also provide an electronic device.
  • the electronic device 1100 may be a terminal device or a server.
• terminal devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital televisions (Television, TV) and desktop computers.
  • the electronic device shown in FIG. 11 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
• the electronic device 1100 may include a processing device (such as a central processing unit, a graphics processor, etc.) 1101, which may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 1102 or a program loaded from a storage device 1108 into a random access memory (Random Access Memory, RAM) 1103.
• In the RAM 1103, various programs and data required for the operation of the electronic device 1100 are also stored.
  • the processing device 1101, ROM 1102 and RAM 1103 are connected to each other via a bus 1104.
  • An input/output (I/O) interface 1105 is also connected to bus 1104.
• the following devices can be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1107 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; storage devices 1108 including a magnetic tape, a hard disk, etc.; and a communication device 1109.
  • the communication device 1109 may allow the electronic device 1100 to communicate wirelessly or wiredly with other devices to exchange data.
  • FIG. 11 illustrates an electronic device 1100 having various means, it should be understood that implementation or availability of all illustrated means is not required. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 1109, or from storage device 1108, or from ROM 1102.
  • the processing device 1101 When the computer program is executed by the processing device 1101, the above-mentioned functions defined in the method of the embodiment of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
• a computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
• Computer-readable storage media may include, but are not limited to: an electrical connection having one or more conductors, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), optical fiber, portable compact disc read-only memory (Compact Disc Read Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
• a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to:
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs.
  • the electronic device When the one or more programs are executed by the electronic device, the electronic device performs the method shown in the above embodiment.
• Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
• the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
• each block in the flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
• each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
  • the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • the first acquisition unit can also be described as "the unit that acquires at least two Internet Protocol addresses.”
• exemplary types of hardware logic components that can be used include: field-programmable gate arrays (Field-Programmable Gate Array, FPGA), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), application-specific standard products (Application Specific Standard Product, ASSP), systems on chip (System On Chip, SOC), complex programmable logic devices (Complex Programmable Logic Device, CPLD), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
• Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a special effect image rendering method including:
  • trajectory tracking is performed on the target video provided by the user to obtain a three-dimensional target point set corresponding to the motion trajectory of the target object in the target video, including:
  • Map at least one target sampling point to the three-dimensional spatial coordinate system respectively to obtain a three-dimensional point set.
  • sampling the movement trajectory of the target object in the target video to obtain at least one target sampling point includes:
  • the two-dimensional point set includes multiple first sampling points
  • sampling the motion trajectory of the target object from the target video to obtain a two-dimensional point set includes:
  • the trajectory points of the target object are collected from the image frames of the target video
  • determining whether the trajectory point satisfies the distance constraint condition based on the preset distance threshold includes:
  • determining the first sampling point based on the trajectory point includes:
  • the trajectory point is added to the two-dimensional point set as the last first sampling point
  • the mean point is obtained as a weighted sum of the previous N first sampling points in the two-dimensional point set and the trajectory point; the mean point is added to the two-dimensional point set as the last first sampling point; N is a positive integer greater than 0.
  • the two-dimensional point set is resampled to obtain a plurality of second sampling points, including:
  • the closed curve is resampled to obtain multiple second sampling points.
  • the closed curve is resampled according to the segmented interval sampling strategy to obtain a plurality of second sampling points, including:
  • determining at least one target sampling point that satisfies the trajectory smoothing condition from a plurality of second sampling points includes:
  • the dot product result is obtained by computing the dot product of the tangent vectors of the two second sampling points;
  • if the dot product result corresponding to the two second sampling points of a target group is greater than or equal to zero, both second sampling points in the target group are determined to be target sampling points.
  • mapping at least one target sampling point to a three-dimensional spatial coordinate system respectively to obtain a three-dimensional point set includes:
  • according to the camera transformation matrix corresponding to the camera that renders the target video, the target sampling points are converted into the spatial coordinate system to obtain the three-dimensional transformed coordinates corresponding to each target sampling point;
  • the three-dimensional point set is meshed to obtain a target mesh structure, including:
  • the entire area included in the three-dimensional point set is triangulated to obtain a triangular mesh surface, including:
  • the triangular mesh surface is determined.
  • performing edge smoothing processing on a three-dimensionally rendered target texture image to obtain a target special effect image includes:
  • the three-dimensional point set is meshed to obtain a target mesh structure, including:
  • the target special effect image is determined.
  • determining the target special effect image according to the three-dimensional strip trajectory includes:
  • it further includes:
  • the texture image is used to map the target grid structure to obtain the target special effect image.
  • a special effect image rendering device including:
  • the trajectory tracking unit is used to perform trajectory tracking on the target video provided by the user in response to the special effects drawing request triggered by the user, and obtain a three-dimensional point set corresponding to the motion trajectory of the target object in the target video;
  • the grid generation unit is used to grid process the three-dimensional point set to obtain the target grid structure
  • the special effects generation unit is used to map the target grid structure and obtain the target special effects image
  • the special effects display unit is used to display the target special effects image corresponding to the movement trajectory of the target object in the target video.
  • an electronic device including: a processor, a memory, and an output device;
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory, so that the processor is configured with the special effect image drawing method of the first aspect and its various possible designs, and the output device is used to output a target video carrying the target special effect image.
  • a computer-readable storage medium is provided.
  • Computer-executable instructions are stored in the computer-readable storage medium.
  • when the processor executes the computer-executable instructions, the special effect image drawing method of the first aspect and its various possible designs is implemented.
  • a computer program is provided.
  • when the computer program is executed by a processor, the special effect image drawing method of the first aspect and its various possible designs is implemented.
  • a computer program product including a computer program is provided.
  • when the computer program is executed by a processor, the special effect image drawing method of the first aspect and its various possible designs is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide a special effect image drawing method, apparatus, device, and medium. The method includes: in response to a special effect drawing request triggered by a user, performing trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video; meshing the three-dimensional point set to obtain a target mesh structure; texture-mapping the target mesh structure to obtain a target special effect image; and displaying, in the target video, the target special effect image corresponding to the motion trajectory of the target object. By generating the special effect image from a three-dimensional point set, the technical solution of the present disclosure obtains a stereoscopic special effect image and improves its display quality.

Description

Special effect image drawing method, apparatus, device, and medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202211098070.6, filed with the China National Intellectual Property Administration on September 8, 2022 and entitled "Special effect image drawing method, apparatus, device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a special effect image drawing method, apparatus, device, and medium.
Background
At present, special effects can be presented to users in fields such as augmented reality (AR) virtual interaction and short-video playback. For example, while a user plays a video, pre-generated special effect images, such as flowers or rockets, can be added to faces in the video.
Summary
In a first aspect, embodiments of the present disclosure provide a special effect image drawing method, including:
in response to a special effect drawing request triggered by a user, performing trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video;
meshing the three-dimensional point set to obtain a target mesh structure;
texture-mapping the target mesh structure to obtain a target special effect image;
displaying, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
In a second aspect, embodiments of the present disclosure provide a special effect image drawing apparatus, including:
a trajectory tracking unit, configured to, in response to a special effect drawing request triggered by a user, perform trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video;
a mesh generation unit, configured to mesh the three-dimensional point set to obtain a target mesh structure;
a special effect generation unit, configured to texture-map the target mesh structure to obtain a target special effect image;
a special effect display unit, configured to display, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
In a third aspect, embodiments of the present disclosure provide an electronic device, including: a processor, a memory, and an output device;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor is configured with the special effect image drawing method of the first aspect and its various possible designs, and the output device is used to output a target video carrying the target special effect image.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special effect image drawing method of the first aspect and its various possible designs.
In a fifth aspect, the present disclosure provides a computer program which, when executed by a processor, implements the special effect image drawing method of the first aspect and its various possible designs.
In a sixth aspect, the present disclosure provides a computer program product including a computer program which, when executed by a processor, implements the special effect image drawing method of the first aspect and its various possible designs.
According to the technical solution provided by the present disclosure, a corresponding target video is obtained through interaction with a user. Trajectory tracking is performed on the target video to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video. The three-dimensional point set is meshed to obtain a target mesh structure. After the target mesh structure is obtained, it can be texture-mapped to obtain a target special effect image, so that the target special effect image is displayed at the motion trajectory of the target object in the video.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings required by the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present disclosure; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an application example diagram of a special effect image drawing method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of an embodiment of a special effect image drawing method provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure;
Fig. 4 is a resampling example diagram provided by an embodiment of the present disclosure;
Fig. 5 is a sampling example diagram provided by an embodiment of the present disclosure;
Fig. 6 is a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure;
Fig. 7 is an example diagram of triangulation of a three-dimensional point set provided by an embodiment of the present disclosure;
Fig. 8 is a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure;
Fig. 9 is an example diagram of a ring-shaped strip special effect provided by an embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of an embodiment of a special effect image drawing apparatus provided by an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
本公开的技术方案可以应用于AR技术场景中,通过视频中的目标对象的轨迹追踪,以基于目标对象的运动轨迹所执行的网格处理、贴图处理,获得目标特效图像,实现对轨迹对应特效的输出,提高特效输出的实时性和准确性。
相关技术中,在常规的特效输出场景中,一般是,将预先生成的特效图像与视频中的图像帧进行融合,或者直接对图像帧中的人脸等关注区域进行特效图像的生成。生成 具备特效的图像帧之后可以通过输出方式展示具备特效的视频。但是,以上方式的特效处理,是采用对图像帧执行的特效增加,与用户的行为关系不大,缺乏与用户的交互互动,导致特效图像的应用场景较为受限,缺乏特效与用户的交互实时性。
为了解决上述技术问题,本公开考虑对视频中的目标对象进行轨迹追踪,利用追踪的轨迹进行特效生成,实现对视频中目标对象的运动轨迹生成三维点集。通过三维点集进行特效图像的生成,获得立体的特效图像,提高特效图像的展示性。
本公开涉及计算机、云计算、虚拟现实等技术领域,具体涉及一种特效图像绘制方法、装置、设备及介质。
本公开的技术方案中,通过与用户交互,获得相应的目标视频。对目标视频进行轨迹追踪,获得目标视频中的目标对象的运动轨迹所对应的三维点集。将三维点集进行网格化处理,获得目标网格结构。获得与运动轨迹对应的目标网格结构之后,可以对目标网格结构进行贴图处理,获得目标特效图像,从而在目标对象的运动轨迹对应显示目标特效图像,之后可以在视频的运动轨迹对应显示目标特效图像。本技术方案通过对视频中的目标对象的轨迹追踪,实现以运动轨迹作为特效的网格结构的生成基础,获得与用户轨迹相匹配的目标特效图像,获得与用户的运动行为的特效生成,扩展了特效的应用场景,提高特效使用效率。
下面将以具体实施例对本公开的技术方案以及本公开的技术方案如何解决上述技术问题进行详细说明。下面几个具体实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图对本公开的实施例进行详细描述。
图1是根据本公开提供的一种特效图像绘制方法的一个应用示例图,该特效图像绘制方法可以应用于电子设备1中。电子设备1例如可以包括AR设备。其中,电子设备1可以检测用户触发的特效绘制请求。电子设备1可以配置本公开的技术方案,对特效绘制请求进行响应,并获取目标特效图像,在目标对象的运动轨迹对应显示输出目标特效图像。
示例性地,假设目标视频中用户的运动轨迹为一个曲线2,则可以基于曲线2生成一个月亮3。月亮3即可以在运动轨迹所对应的曲线2处输出。
需要说明的是,图1所示的应用示例中,目标对象U的轨迹随着目标对象的运动而发生变化,轨迹追踪过程中目标对象U可以不变,各图中示出的多个目标对象仅是为了展示出目标对象U的移动轨迹的变化,并不代表轨迹对象是多个。参考图1,在目标对象U移动过程中,可以产生相应的运动轨迹,也即曲线2。
Referring to Fig. 2, a flowchart of an embodiment of a special effect image drawing method provided by an embodiment of the present disclosure, the method can be configured in a special effect image drawing apparatus, which may be located in an electronic device. The special effect image drawing method may include the following steps:
201: In response to a special effect drawing request triggered by a user, perform trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video.
Optionally, the special effect drawing request may be a Uniform Resource Locator (URL) request for special effect drawing. A special effect drawing control may be provided on the video playback page, and the request is triggered when the user is detected operating the control. The video playback page may be a web page that plays the target video, and the request triggered by the user may be detected during playback of the target video. The target video may be a real-time video being played by the electronic device.
The technical solution of the present disclosure can also be applied in web technology: the target video may be played in a player on a web page, and the page may be written in programming languages such as HyperText Markup Language 5 (HTML5), Objective-C, or Java.
After the special effect drawing request is triggered, a selection page or dialog box may also be provided through which the target video can be selected. The target video may be uploaded by the user or received from the user side; the specific upload manner is not overly limited in the embodiments of the present disclosure. The target video may be an ordinary two-dimensional video, or a video played by an AR device.
The target object may include at least one of the following object types: a moving object, a vehicle, a pedestrian, or a person's facial features. The object being tracked may be any point on the target object. Taking a face as an example, the target object may be an easily recognized key point of the face, such as the center of the forehead, the midpoint between the two eyes, or the tip of the nose.
The motion trajectory is generally a curve formed by connecting key points sampled from image frames of the video. Obtaining the three-dimensional point set corresponding to the motion trajectory of the target object specifically means: for image frames of the target video in which the target object appears, determining the position points of the target object, namely key points or trajectory points; forming the motion trajectory from the key points of multiple image frames; and optionally performing secondary sampling on the trajectory to obtain sampling points denser than the original trajectory points, which can be used to determine the three-dimensional point set.
202: Mesh the three-dimensional point set to obtain a target mesh structure.
Optionally, the target mesh structure may include a surface mesh structure or a ring-shaped strip mesh structure.
The target mesh structure may be a mesh surface determined from the three-dimensional point set corresponding to the trajectory, and may include an ordinary surface or a ring surface formed by the trajectory contour.
203: Texture-map the target mesh structure to obtain a target special effect image.
The target mesh structure initially carries no texture; a texture image can be pasted onto the target mesh structure, and once the texture image is applied, the target special effect image is obtained. The mapping of the mesh structure can be performed with texture-mapping software.
204: Display, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
The target special effect image may be displayed at the position in the target video corresponding to the motion trajectory of the target object.
The target special effect image may be a three-dimensional image, displayed through the output device of the electronic device. Specifically, while the electronic device displays the target video, the target special effect image may be displayed in association with the position of the motion trajectory, for example by taking the center of the motion trajectory as the center of the target special effect image.
The electronic device may include a mobile phone, an AR device, a virtual reality device, etc.; the specific type of electronic device is not overly limited in this embodiment. The electronic device may display a three-dimensional target special effect image. Of course, in practical applications, the target special effect image may also be converted, through coordinate transformation, into a two-dimensional special effect image or a special effect point cloud for display.
In this embodiment, trajectory tracking is performed on the target video to obtain a three-dimensional point set corresponding to the motion trajectory of the target object. The three-dimensional point set is meshed to obtain a target mesh structure, thereby generating a mesh structure based on the motion trajectory. After the target mesh structure corresponding to the motion trajectory is obtained, it is texture-mapped to obtain a target special effect image, which is displayed at the motion trajectory of the target object in the video. By tracking the trajectory of the target object in the video, the motion trajectory serves as the basis for generating the special effect mesh structure, yielding a special effect image matched to the user's trajectory and motion behavior, extending the application scenarios of special effects and improving their usage efficiency.
As shown in Fig. 3, a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure, the difference from the embodiment of Fig. 2 lies in that performing trajectory tracking on the target video provided by the user to obtain the three-dimensional point set corresponding to the motion trajectory of the target object in the target video includes:
301: Identify the target object in the target video.
Optionally, a target tracking algorithm may be used to identify the target object in the target video.
After the target object is identified, its key points can be determined. Taking a face as the target object as an example, any point of the face, such as the tip of the nose or the midpoint between the eyes, may serve as a key point.
302: Sample the motion trajectory of the target object in the target video to obtain at least one target sampling point.
303: Map each of the at least one target sampling point to a three-dimensional spatial coordinate system to obtain the three-dimensional point set.
Mapping the target sampling points to the three-dimensional spatial coordinate system may include: mapping the target sampling points from a two-dimensional coordinate system to the three-dimensional spatial coordinate system.
In the embodiments of the present disclosure, the target object in the target video is identified, its motion trajectory is sampled to obtain a two-dimensional point set, and the two-dimensional point set is mapped into the three-dimensional spatial coordinate system to obtain the three-dimensional point set. Trajectory sampling and spatial mapping enable accurate extraction of the three-dimensional point set corresponding to the motion trajectory.
As an embodiment, sampling the motion trajectory of the target object in the target video to obtain at least one target sampling point may include:
sampling the motion trajectory of the target object from the target video to obtain a two-dimensional point set, the two-dimensional point set including multiple first sampling points;
resampling the two-dimensional point set to obtain multiple second sampling points;
determining, from the multiple second sampling points, at least one target sampling point that satisfies a trajectory smoothing condition.
Optionally, resampling the two-dimensional point set to obtain multiple second sampling points may include: sampling between each pair of adjacent first sampling points in the two-dimensional point set to obtain second sampling points; when the resampling of the two-dimensional point set finishes, multiple second sampling points are obtained.
For ease of understanding, consider the resampling example of Fig. 4. Suppose the first sampling points collected from the motion trajectory of the target object are 401 to 404. Referring to Fig. 4, the distance between two adjacent sampling points is relatively large; drawing the special effect directly from these sampling points could produce a large contour error. To avoid this, the technical solution of the present disclosure uses resampling: the fitted curve of the first sampling points is divided into segments, and secondary sampling is performed on each segment; the points obtained by secondary sampling are the second sampling points. Referring to Fig. 4, sampling the segment between first sampling points 401 and 402 yields second sampling point 405, sampling the segment between 402 and 403 yields second sampling point 406, and sampling the segment between 403 and 404 yields three second sampling points 407. Of course, fitting a curve directly between adjacent sampling points is only exemplary and should not limit the technical solution of the present disclosure; the whole curve may also be fitted and then segmented, with resampling performed on the segments, as described in the following embodiments.
The first sampling points may be key points sampled from the target video. The second sampling points may be points resampled based on the first sampling points, and may include the first sampling points.
The trajectory smoothing condition may mean that a sampling point's distance to the trajectory is less than a distance threshold, or that the point lies on the trajectory. The trajectory smoothing condition removes sampling points that deviate significantly from the trajectory, so that the trajectory formed by the at least one target sampling point is smooth, spikes are avoided, and the trajectory accuracy is improved.
In the embodiments of the present disclosure, after the motion trajectory of the target object in the target video is sampled, a two-dimensional point set composed of multiple first sampling points is obtained. Resampling the two-dimensional point set yields denser second sampling points, enhancing the sampling density. After the second sampling points are obtained, at least one target sampling point satisfying the trajectory smoothing condition is determined from them; the trajectory smoothing condition optimizes the second sampling points so that the trajectory corresponding to the target sampling points is smoother, improving the precision and accuracy of sampling.
In some embodiments, sampling the motion trajectory of the target object from the target video to obtain a two-dimensional point set includes:
collecting trajectory points of the target object from image frames of the target video according to a preset sampling frequency;
judging, based on a preset distance threshold, whether a trajectory point satisfies a distance constraint condition;
if yes, determining a first sampling point based on the trajectory point;
if no, replacing the last first sampling point in the two-dimensional point set with the trajectory point;
obtaining, when the sampling of the target video finishes, the two-dimensional point set corresponding to the multiple first sampling points.
Optionally, collecting the trajectory points of the target object from the image frames according to the preset sampling frequency may include: sampling image frames from the target video according to the preset sampling frequency, identifying key points of the target object from the image frames, and determining the key points as trajectory points.
There may be multiple trajectory points, collected respectively from multiple image frames.
Optionally, the sampling frequency may be preset, and may specifically be derived from the sampling time interval; the product of the sampling time interval and the sampling frequency is 1, i.e., they are reciprocals of each other.
Optionally, replacing the last first sampling point in the two-dimensional point set with the trajectory point includes: deleting the existing last first sampling point in the two-dimensional point set and using the trajectory point as the new last first sampling point.
In the embodiments of the present disclosure, when sampling the motion trajectory of the target object, trajectory points are first obtained by sampling image frames of the target video. Judging whether a trajectory point satisfies the distance constraint condition allows a more detailed screening of trajectory points collected from different image frames, improving the accuracy of the first sampling points.
In a possible design, judging, based on a preset distance threshold, whether a trajectory point satisfies the distance constraint condition includes:
determining a first distance between the trajectory point and the last first sampling point in the two-dimensional point set;
if the first distance is greater than the distance threshold, determining that the trajectory point satisfies the distance constraint condition;
if the first distance is less than or equal to the distance threshold, determining that the trajectory point does not satisfy the distance constraint condition.
Determining the first distance between the trajectory point and the last first sampling point may include: computing a pixel-coordinate distance from the pixel coordinates of the trajectory point and of the last first sampling point, and determining the first distance from the pixel-coordinate distance. The units of the pixel-coordinate distance and the first distance may differ or be the same; when they differ, a conversion between the two units can be applied so that distances are compared in the same unit, enabling an accurate comparison.
The distance threshold may be a limiting distance between two adjacent trajectory points.
In the embodiments of the present disclosure, the first distance between the trajectory point and the last first sampling point in the two-dimensional point set is determined, and the distance threshold constrains the trajectory points in the two-dimensional point set, avoiding points whose distance from the already-collected sampling points is abnormal and enabling accurate extraction of sampling points.
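The distance-constraint check above can be sketched as a small helper. This is a minimal illustration, not the patent's implementation; the threshold value and the use of pixel Euclidean distance are assumptions the text leaves open.

```python
import math

# Assumed pixel-distance threshold; the patent does not fix a value.
DIST_THRESHOLD = 10.0

def satisfies_distance_constraint(track_point, points_2d, threshold=DIST_THRESHOLD):
    """Return True when the new trajectory point is farther than `threshold`
    from the last first sampling point already in the 2-D point set."""
    if not points_2d:          # no previous sample: accept the first point
        return True
    last = points_2d[-1]
    first_distance = math.dist(track_point, last)  # Euclidean pixel distance
    return first_distance > threshold
```

When the check fails, the caller replaces the last first sampling point with the new trajectory point, as described above.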
When the trajectory point satisfies the distance constraint condition, further sampling can be performed on it. Accordingly, in a possible design, determining a first sampling point based on the trajectory point includes:
obtaining the number of first sampling points already present in the two-dimensional point set;
if the number of sampling points is determined to be less than or equal to a sampling number threshold, adding the trajectory point to the two-dimensional point set as the last first sampling point;
if the number of sampling points is determined to be greater than the sampling number threshold, obtaining a mean point as a weighted sum of the previous N first sampling points in the two-dimensional point set and the trajectory point, and adding the mean point to the two-dimensional point set as the last first sampling point, where N is a positive integer greater than 0.
If two adjacent points are very close, a closure judgment against the distance threshold may be triggered: when the current trajectory point's distance to its predecessor is below the distance threshold, the trajectory may be judged as closed, causing invalid sampling or a sampling stop. Therefore, after the distance judgment, the number of sampling points can additionally be judged; only after the number of currently drawn sampling points reaches the sampling number threshold is weighted sampling performed, so the sampling process is doubly constrained by the distance threshold and the sampling number threshold, improving sampling accuracy.
Obtaining the mean point as a weighted sum of the previous N first sampling points and the trajectory point gives the drawing process a controlling effect, so that the final first sampling points are not restricted to the raw trajectory points. This avoids the problem of an unsmooth fitted curve caused by large fluctuations in the trajectory points; the weighted sum makes the collected first sampling points smoother and more accurate.
In the embodiments of the present disclosure, the sampling number threshold is used to judge whether a trajectory point can be used directly. The weighted sum over the point and its preceding sampling points makes the first sampling points more accurate, achieving precise sampling of the first sampling points.
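The count-threshold/weighted-mean rule above can be sketched as follows. This is an illustrative reading only: the count threshold, N, and equal weighting are assumed values, since the patent does not fix them.

```python
def next_first_sample(points_2d, track_point, count_threshold=3, n=2):
    """Sketch of the sampling rule: below the count threshold the raw
    trajectory point is appended; above it, a mean of the previous N first
    sampling points and the trajectory point is appended instead.
    `count_threshold` and `n` are illustrative values, and equal weights are
    used as one simple choice of the weighted sum."""
    if len(points_2d) <= count_threshold:
        points_2d.append(track_point)
        return track_point
    prev = points_2d[-n:]                      # previous N first sampling points
    xs = [p[0] for p in prev] + [track_point[0]]
    ys = [p[1] for p in prev] + [track_point[1]]
    mean = (sum(xs) / len(xs), sum(ys) / len(ys))
    points_2d.append(mean)
    return mean
```

A non-uniform weighting (e.g. weighting the raw trajectory point more heavily) would fit the same description.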
As yet another optional implementation, resampling the two-dimensional point set to obtain multiple second sampling points includes:
fitting a curve to the multiple first sampling points in the two-dimensional point set based on a curve-fitting algorithm to obtain a first spline;
fitting a curve between the first and the last first sampling point in the two-dimensional point set to obtain a second spline;
splicing the first spline and the second spline to obtain a closed curve;
resampling the closed curve according to a segmented interval sampling strategy to obtain multiple second sampling points.
Optionally, the curve-fitting algorithm may be any fitting algorithm such as the Bezier curve algorithm or the Catmull-Rom curve algorithm.
Fitting a curve between the first and the last first sampling point to obtain the second spline may include: fitting the curve between these two points according to the average curvature of the first spline. Alternatively, the first and last first sampling points may be directly connected to form a line segment, which serves as the second spline.
For ease of understanding, consider the sampling example of Fig. 5: the first spline 503, fitted through the sampling points between the first sampling point 501 and the last sampling point 502, is not closed; curve fitting yields a second spline 504, and splicing the first spline 503 with the second spline 504 forms a closed curve.
In the embodiments of the present disclosure, curve fitting over the multiple first sampling points in the two-dimensional point set yields the first spline. To obtain a closed curve, curve fitting between the first and last sampling points yields the second spline, and splicing the two splines yields the closed curve. The closed curve corresponds to the motion trajectory, and sampling it by segments yields more detailed second sampling points.
In a possible design, resampling the closed curve according to the segmented interval sampling strategy to obtain multiple second sampling points includes:
determining the number of segments corresponding to the segmented interval sampling strategy;
dividing the closed curve into segments according to the number of segments to obtain at least one segment curve;
collecting second sampling points from the segment curves to obtain multiple second sampling points corresponding to the at least one segment curve.
Optionally, collecting second sampling points from a segment curve may include determining at least one curve sampling point on the segment curve, each of which is a second sampling point. The original first sampling points may also be determined to be second sampling points; the multiple second sampling points may thus include the multiple first sampling points of the two-dimensional point set.
Optionally, dividing the closed curve into segments according to the number of segments may include: determining the segment length of the closed curve based on the number of segments, and dividing the closed curve into at least one segment curve according to the segment length and the number of segments. The number of segment curves may equal the number of segments.
In the embodiments of the present disclosure, the closed curve is divided into at least one segment curve. Each segment curve is a single piece of the curve, and sampling along a segment curve places the second sampling points on the same piece, giving higher precision.
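The segmented resampling above can be sketched on a closed polyline standing in for the fitted closed curve. This is a minimal equal-arc-length version; the patent's segmented interval strategy may weight segments differently.

```python
import math

def resample_closed(points, segments):
    """Evenly resample a closed polyline into `segments` points by arc length,
    one sampling point per segment (a stand-in for the fitted closed curve)."""
    ring = points + [points[0]]                    # close the curve
    cum = [0.0]                                    # cumulative arc length
    for a, b in zip(ring, ring[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    step = cum[-1] / segments
    out = []
    for i in range(segments):
        target = i * step
        # locate the ring edge containing this arc-length position
        j = max(k for k in range(len(cum) - 1) if cum[k] <= target)
        t = (target - cum[j]) / (cum[j + 1] - cum[j])
        (ax, ay), (bx, by) = ring[j], ring[j + 1]
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out
```

Raising `segments` directly raises the density of second sampling points along the closed trajectory.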
As an optional implementation, determining the number of segments corresponding to the segmented interval sampling strategy may include:
determining the maximum segment count, the maximum input point count, and the minimum segment value corresponding to a spline interpolation algorithm;
determining the number of segments corresponding to the segmented interval sampling strategy according to the maximum segment count, the maximum input point count, and the minimum segment count.
Parameters such as the maximum segment count, maximum input point count, and minimum segment count may be obtained by configuration.
In the embodiments of the present disclosure, an accurate number of segments is obtained under the constraints of the maximum segment count, the maximum input point count, and the minimum segment count.
As yet another optional implementation, determining the number of segments corresponding to the segmented interval sampling strategy may include:
determining the curve length according to the sampling frequency and the number of samples;
taking the curve length as the number of sampling intervals;
computing the number of segments from the number of sampling intervals and the curve length.
In the embodiments of the present disclosure, computing the number of segments from the determined curve length keeps the segment count matched to the curve length in real time, realizing dynamic adjustment of the segment count and yielding an accurate number of segments.
As an embodiment, mapping each of the at least one target sampling point to the three-dimensional spatial coordinate system to obtain the three-dimensional point set includes:
converting the target sampling points into the three-dimensional spatial coordinate system according to the camera transformation matrix corresponding to the camera that renders the target video, obtaining the three-dimensional transformed coordinates corresponding to each target sampling point;
obtaining the three-dimensional point set formed by the three-dimensional transformed coordinates corresponding to the target sampling points.
Optionally, the camera transformation matrix may be computed from the camera parameters, which may include camera intrinsics and extrinsics.
A target sampling point may be a point in the two-dimensional image coordinate system; using data such as the camera transformation matrix, the target sampling point is transformed into the three-dimensional spatial coordinate system to obtain the three-dimensional transformed coordinates. For the transformations between the image, camera, and spatial coordinate systems, reference may be made to the related art.
In the embodiments of the present disclosure, the depth information of each target sampling point combined with the camera transformation of the camera coordinate system can be used to convert the target sampling points into the three-dimensional spatial coordinate system, achieving accurate conversion of pixel points into the three-dimensional spatial coordinate system.
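The 2-D-to-3-D mapping above can be sketched with a pinhole intrinsic model. This is an assumed, simplified version: it back-projects a pixel with known depth into camera-space coordinates; a full pipeline would additionally apply the camera's extrinsic matrix to reach the world coordinate system, and the patent's exact transformation is not specified.

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera-space 3-D
    coordinates using pinhole intrinsics (focal lengths fx, fy and
    principal point cx, cy); all parameter names are illustrative."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)
```

Applying this to every target sampling point yields the three-dimensional point set referred to above.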
After the multiple second sampling points are obtained, whether any of them protrude can be checked.
In a possible design, determining at least one target sampling point satisfying the trajectory smoothing condition from the multiple second sampling points includes:
combining adjacent pairs among the multiple second sampling points to obtain at least one group of sampling points;
determining the tangent vectors corresponding to the two second sampling points in each group;
for the at least one group of sampling points, if the dot product of the tangent vectors of a target group is less than zero, determining either one of the two second sampling points in the target group to be a target sampling point;
if the dot product of the tangent vectors of the target group is greater than or equal to zero, determining both second sampling points in the target group to be target sampling points;
obtaining all target sampling points corresponding to the at least one group as the at least one target sampling point.
The computation of the tangent vectors of the two second sampling points may include fitting an arc through the two sampling points and computing the tangent vectors from the fitted arc.
In the embodiments of the present disclosure, for the second sampling points, adjacent pairs are grouped, and whether the dot product of each group's tangent vectors is greater than zero is judged. A non-negative dot product indicates that the two second sampling points in the group differ little, and both can be kept; a negative dot product indicates that the two points differ greatly, one of them protrudes and could degrade the display of the special effect, so only one of the two is kept. The tangent-vector judgment handles protruding points among adjacent sampling points and improves processing efficiency.
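The pairwise tangent test above can be sketched as a filter over a point sequence. This is an illustrative version: it approximates each point's tangent by the direction to its successor (the patent mentions arc fitting instead), so it is a sketch under that assumption.

```python
def filter_by_tangent(points):
    """Group adjacent second sampling points in pairs, approximate each
    point's tangent by the direction to its successor (wrapping around the
    closed trajectory), and keep both points of a pair when the dot product
    of their tangents is non-negative; otherwise keep only one of them."""
    def tangent(i):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % len(points)]
        return (x1 - x0, y1 - y0)
    kept = []
    for i in range(0, len(points) - 1, 2):       # adjacent pairs
        ta, tb = tangent(i), tangent(i + 1)
        dot = ta[0] * tb[0] + ta[1] * tb[1]
        if dot >= 0:
            kept.extend([points[i], points[i + 1]])
        else:
            kept.append(points[i])               # drop the protruding point
    return kept
```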
Of course, to improve the screening efficiency of the smoothing condition, in yet another possible design, determining at least one target sampling point satisfying the trajectory smoothing condition from the second sampling points includes:
judging whether a second sampling point lies on a smooth curve;
if yes, determining that the second sampling point satisfies the trajectory smoothing condition;
if no, determining that the second sampling point does not satisfy the trajectory smoothing condition, and returning to the step of judging, based on an edge-average sampling algorithm, whether a second sampling point lies on the smooth curve;
obtaining, when the traversal of the second sampling points finishes, the target sampling points satisfying the trajectory smoothing condition.
Optionally, judging whether a second sampling point lies on a smooth curve may include: fitting a curve to the second sampling points, computing the absolute distance from each second sampling point to the fitted curve, and, if the absolute distance is less than or equal to an absolute threshold, determining that the second sampling point satisfies the trajectory smoothing condition; if the absolute distance is greater than the absolute threshold, determining that it does not.
In the embodiments of the present disclosure, the edge-average sampling algorithm can be used to judge whether a second sampling point lies on the smooth curve. The smooth-curve judgment detects whether a sampling point is a protruding spike, so that the target sampling points lie along the smooth curve; when the special effect is generated from the at least one target sampling point, the contour of the special effect image is smooth without protruding parts, yielding at least one target sampling point with a smoother contour.
As shown in Fig. 6, a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure, this method differs from the preceding embodiments in that meshing the three-dimensional point set to obtain the target mesh structure includes:
601: Triangulate the entire region contained by the three-dimensional point set to obtain a triangular mesh surface;
602: Determine the triangular mesh surface to be the target mesh structure corresponding to a surface special effect type.
Texture-mapping the target mesh structure to obtain the target special effect image includes:
603: Perform texture mapping on the three-dimensional mesh surface to obtain a three-dimensionally rendered target texture image;
604: Perform edge smoothing on the three-dimensionally rendered target texture image to obtain the target special effect image.
Optionally, the triangular mesh surface may be obtained by connecting the multiple points of the three-dimensional point set into a mesh.
Performing edge smoothing on the three-dimensionally rendered target texture image may mean smoothing the mesh edges of the target texture image.
In the embodiments of the present disclosure, the three-dimensional point set can be turned into a triangular mesh surface according to the mesh topology, and the triangular mesh surface serves as the target mesh structure for texture mapping, yielding a three-dimensional texture map. Mesh processing yields an accurate triangular mesh surface. After the three-dimensionally rendered target texture image is obtained, edge smoothing is performed on it to obtain the target special effect image; edge smoothing makes the target special effect image smoother and improves its display.
As an embodiment, triangulating the entire region contained by the three-dimensional point set to obtain the triangular mesh surface includes:
determining a center point corresponding to the three-dimensional point set;
constructing a fan mesh topology based on the three-dimensional point set and the center point to obtain a three-dimensional topological structure;
converting the boundary values of the three-dimensional topological structure into computed values in the image's percentage coordinate system;
determining the triangular mesh surface based on the three-dimensional topological structure and the computed values of each point on it.
Optionally, determining the center point corresponding to the three-dimensional point set may include the following implementations:
Implementation 1: solving for the interior centroid of the convex polygon corresponding to the three-dimensional point set to obtain the center point.
Implementation 2: performing circle fitting on the contour of the three-dimensional point set and taking the center of the fitted circle as the center point.
Implementation 3: performing Delaunay triangulation on the three-dimensional point set; if a point of the set does not lie within any of the resulting triangles, the circle center/centroid lies outside the convex polygon corresponding to the set, and the mean of all points in the set can be computed as the center point.
The convex polygon may be the polygonal figure formed by connecting the edge points of the three-dimensional point set.
For ease of understanding, Fig. 7 shows an example of triangulating the three-dimensional point set. Mapping the point set into a plane figure and referring to Fig. 7, the points of the set are 701 and the center point is 702. Each pair of adjacent points 701 is connected with the center point 702 to form a triangle, yielding the corresponding three-dimensional topological structure 703. Of course, the point set shown in Fig. 7 is only exemplary; in practice it changes with the motion trajectory. The three-dimensional topological structure 703 can be mapped into the UV coordinate system to obtain the UV value of each point, thereby obtaining the triangular mesh surface.
The boundary values of the three-dimensional topological structure may refer to its boundary dimensions, such as its length and width; the length and width are in effect calibration values.
The image's percentage coordinates may refer to the UV coordinate system. When texturing the three-dimensional topological structure, to ensure the image fits the structure closely without over-fitting, the boundary values of the structure can be converted into computed values in the UV coordinate system. UV coordinates are the image's percentage coordinates, with U horizontal and V vertical. For example, suppose the three-dimensional topological structure has four boundary points: top-left, top-right, bottom-left, and bottom-right. The top-left may be set to U = 0, V = 0; the bottom-right to U = 1, V = 1; the top-right to U = 1, V = 0; and the bottom-left to U = 0, V = 1 as the computed values of the four corners in the UV coordinate system, while the center point of the structure may take U = 0.5, V = 0.5.
When texturing the three-dimensional mesh surface, the computed values of each point, i.e., their UV values, are used for mapping.
In the embodiments of the present disclosure, when the triangular mesh surface is generated, the fan mesh topology is constructed from the center point of the three-dimensional point set to obtain the three-dimensional topological structure, and converting its boundary values into UV computed values expresses the structure in the UV coordinate system, yielding a triangular mesh surface with a coordinate structure. Surface construction and coordinate conversion yield an accurate triangular mesh surface, ensuring the subsequent texture mapping is performed accurately.
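The fan topology and UV assignment above can be sketched as follows. This is an illustrative reading: the center is taken as the mean of the points (one of the implementations listed), and the UV values map the point set's bounding box to [0, 1] × [0, 1] with the center at (0.5, 0.5); the patent does not fix these exact choices.

```python
def fan_mesh(points_3d):
    """Build a fan mesh: a center point plus triangles (center, p_i, p_{i+1})
    over the closed ring of points, with a simple bounding-box UV assignment
    and the center mapped to (0.5, 0.5)."""
    n = len(points_3d)
    center = tuple(sum(p[k] for p in points_3d) / n for k in range(3))
    # triangle index n refers to the appended center point
    tris = [(n, i, (i + 1) % n) for i in range(n)]
    xs = [p[0] for p in points_3d]
    ys = [p[1] for p in points_3d]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    uvs = [((p[0] - min(xs)) / w, (p[1] - min(ys)) / h) for p in points_3d]
    uvs.append((0.5, 0.5))                      # center UV
    return points_3d + [center], tris, uvs
```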
In a possible design, performing edge smoothing on the three-dimensionally rendered target texture image to obtain the target special effect image includes:
rendering each pixel of the three-dimensionally rendered target texture image into a render texture whose initial values are zero, to obtain a first texture map;
drawing the alpha channel of the first texture map into a grayscale render texture to obtain a grayscale mask image;
performing erosion and blurring on the grayscale mask image to obtain a smoothed mask image;
performing image fusion on the first texture map using the smoothed mask image to obtain the target special effect image.
Optionally, the texture image and the render texture may include four channels: red (R), green (G), blue (B), and alpha (A), each with a channel value at every pixel.
Optionally, all channel values of the render texture (RT) may be 0, i.e., the initial value of each pixel is (0, 0, 0, 0); when all channel values of all pixels are 0, the background of the render texture is transparent.
A grayscale render texture means a render texture whose pixel values are grayscale values between 0 and 1. The alpha channel value can be converted into the color value of the grayscale render texture: for example, an alpha value of 0 corresponds to the color (0, 0, 0), pure black; an alpha value of 1 corresponds to (1, 1, 1); and alpha values in between correspond to the grayscale color (alpha, alpha, alpha).
The grayscale mask image can be used to mask the first texture map, so that the contour of the target special effect image is shaped by the grayscale mask into a polygonal contour.
Optionally, performing erosion and blurring on the grayscale mask image to obtain the smoothed mask image may include eroding the grayscale mask image to obtain an eroded mask image, and blurring the eroded mask image to obtain the smoothed mask image. Blurring the eroded mask image may include blurring it with a Gaussian function.
In the embodiments of the present disclosure, the three-dimensionally rendered target texture image is drawn onto a render texture with zero initial values to obtain the first texture map; the alpha channel values of the first texture map are drawn onto a grayscale render texture to obtain the grayscale mask image; and erosion and blurring of the grayscale mask yield a smoother mask image. Fusing the first texture map with the smoothed mask smooths the contour of the image, yielding a target special effect image with a smoother contour.
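The erosion-then-blur step above can be sketched on a tiny binary mask. This is a pure-Python illustration on a list-of-lists grid; real implementations would use image-processing primitives, and a 3×3 box blur stands in here for the Gaussian blur mentioned in the text.

```python
def erode(mask):
    """3x3 binary erosion of a 2-D mask (list of lists of 0/1): a pixel
    survives only if its full 3x3 neighbourhood is inside the mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def box_blur(mask):
    """3x3 mean blur, a simple stand-in for the Gaussian blur in the text;
    it softens the eroded mask's hard edge into a smooth alpha falloff."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = sum(vals) / len(vals)
    return out
```

The blurred mask can then weight the first texture map per pixel during image fusion.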
As shown in Fig. 8, a flowchart of another embodiment of a special effect image drawing method provided by an embodiment of the present disclosure, this method differs from the preceding embodiments in that meshing the three-dimensional point set to obtain the target mesh structure includes:
801: Interpolate between each pair of adjacent sampling points in the three-dimensional point set to obtain a target point set.
Optionally, interpolating between each pair of adjacent sampling points may mean interpolating between every two adjacent sampling points until all adjacent pairs in the three-dimensional point set have been processed.
Interpolation based on a Catmull curve may be used to obtain a denser target point set.
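The Catmull-curve interpolation mentioned above can be sketched with the standard uniform Catmull-Rom formula; this is one common form of that spline, evaluating a point between p1 and p2 from the four surrounding control points.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom point between control points p1 and p2 at
    parameter t in [0, 1]; p0 and p3 shape the tangents at the ends."""
    def axis(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return tuple(axis(a, b, c, d) for a, b, c, d in zip(p0, p1, p2, p3))
```

Evaluating this at several `t` values between each adjacent pair densifies the point set before the strip mesh is built.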
802: Generate a strip mesh over the contour region corresponding to the target point set to obtain the target mesh structure.
Texture-mapping the target mesh structure to obtain the target special effect image includes:
803: Perform trajectory rendering on the target mesh structure to obtain a three-dimensional strip trajectory map;
804: Determine the target special effect image according to the three-dimensional strip trajectory map.
For ease of understanding, Fig. 9 shows an example of a ring-shaped strip special effect: the three-dimensional strip trajectory map 901 can be used to determine the target special effect image, showing a ring-shaped effect.
Optionally, the target special effect image may be semi-transparent. The rendering transparency may be preset according to usage needs, and may be a value greater than 0 and less than or equal to 1, i.e., a decimal between 0 and 1.
In the embodiments of the present disclosure, a strip mesh corresponding to the target point set is generated, yielding a strip-shaped target mesh structure. Texture-mapping the strip-shaped target mesh structure yields the three-dimensional strip trajectory map, which can then be used to determine the target special effect image, achieving accurate generation of a strip-shaped target special effect image.
As an embodiment, determining the target special effect image according to the three-dimensional strip trajectory includes:
rendering the three-dimensional strip trajectory into a render texture whose initial values are zero to obtain a second texture map;
performing transparency processing on the second texture map according to the rendering transparency to obtain the target special effect image.
Optionally, the second texture map may be a rendered image displaying the three-dimensional strip trajectory.
In the embodiments of the present disclosure, the render texture (RT) can be used to smooth the edges of the three-dimensional strip trajectory, so that the strip shape of the resulting target special effect image is smoother and its display is better.
In a possible design, the method further includes:
determining the trigger time of the special effect drawing request triggered by the user;
determining, according to the time periods corresponding respectively to at least one festival type, the target festival type corresponding to the trigger time;
obtaining the texture image pre-associated with the target festival type;
and texture-mapping the target mesh structure to obtain the target special effect image includes:
texture-mapping the target mesh structure using the texture image to obtain the target special effect image.
Optionally, the at least one festival type may be obtained by configuration, with the time period of each festival type known; for example, the time period of the Mid-Autumn Festival is known. A texture image can be pre-associated with each festival type; for example, the Mid-Autumn Festival's texture image may be a moon texture, and the Dragon Boat Festival's may be a bamboo-leaf texture.
In the embodiments of the present disclosure, the target festival type is determined from the trigger time of the special effect drawing request, and texturing is performed with the texture image corresponding to the target festival type, so that the texture of the target special effect image matches the target festival type in real time, remains applicable for longer, and achieves higher display efficiency.
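The trigger-time-to-festival lookup above can be sketched as a table of date windows. The windows and texture file names below are illustrative assumptions; the real feature would use per-year (often lunar-calendar) date ranges and its own asset names.

```python
import datetime

# Illustrative festival windows for one year; names and date ranges are
# assumptions, not values taken from the patent.
FESTIVALS_2023 = [
    ("dragon_boat", datetime.date(2023, 6, 22), datetime.date(2023, 6, 24), "bamboo_leaf.png"),
    ("mid_autumn", datetime.date(2023, 9, 29), datetime.date(2023, 10, 1), "moon.png"),
]

def texture_for(trigger_time):
    """Map the trigger time of the effect-drawing request to the texture
    image pre-associated with the matching festival type, if any."""
    day = trigger_time.date()
    for name, start, end, texture in FESTIVALS_2023:
        if start <= day <= end:
            return name, texture
    return None
```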
The technical solution of the present disclosure can be applied to many practical scenarios; specific application scenarios are introduced in detail below.
Application scenario 1: during traditional festivals such as the Dragon Boat Festival or the Mid-Autumn Festival, a user initiates a special effect drawing request; based on the technical solution of the present disclosure, a texture image appropriate to the festival can be obtained and applied to the target mesh structure obtained through some steps of the present disclosure. For example, mapping a moon texture yields a special effect moon, which is displayed at the motion trajectory in the video.
Application scenario 2: in an AR scenario, a special effect trajectory image can be generated for a tracked vehicle or person based on the technical solution of the present disclosure; the generated three-dimensional special effect image can be displayed on an AR device, achieving augmented reality and increasing the usage rate of special effects.
In addition, the technical solution of the present disclosure can also be applied to video playback and AR fields, for example including scenario-based effects for special festivals and trajectory-tracking effects for users.
As shown in Fig. 10, a schematic structural diagram of an embodiment of a special effect image drawing apparatus provided by an embodiment of the present disclosure, the apparatus may be located in an electronic device and configured with the special effect image drawing method described above. The special effect image drawing apparatus 1000 may include:
a trajectory tracking unit 1001, configured to, in response to a special effect drawing request triggered by a user, perform trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video;
a mesh generation unit 1002, configured to mesh the three-dimensional point set to obtain a target mesh structure;
a special effect generation unit 1003, configured to texture-map the target mesh structure to obtain a target special effect image;
a special effect display unit 1004, configured to display, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
As an embodiment, the trajectory tracking unit includes:
an object recognition module, configured to identify the target object in the target video;
a trajectory sampling module, configured to sample the motion trajectory of the target object in the target video to obtain at least one target sampling point;
a spatial mapping module, configured to map each of the at least one target sampling point to a three-dimensional spatial coordinate system to obtain the three-dimensional point set.
In some embodiments, the trajectory sampling module includes:
a trajectory sampling submodule, configured to sample the motion trajectory of the target object from the target video to obtain a two-dimensional point set, the two-dimensional point set including multiple first sampling points;
a resampling submodule, configured to resample the two-dimensional point set to obtain multiple second sampling points;
a sample selection submodule, configured to determine, from the multiple second sampling points, at least one target sampling point satisfying a trajectory smoothing condition.
In a possible design, the trajectory sampling submodule is specifically configured to:
collect trajectory points of the target object from image frames of the target video according to a preset sampling frequency;
judge, based on a preset distance threshold, whether a trajectory point satisfies a distance constraint condition;
if yes, determine a first sampling point based on the trajectory point;
if no, replace the last first sampling point in the two-dimensional point set with the trajectory point;
obtain, when the sampling of the target video finishes, the two-dimensional point set corresponding to the multiple first sampling points.
In a possible design, the trajectory sampling submodule is specifically configured to:
determine a first distance between the trajectory point and the last first sampling point in the two-dimensional point set;
if the first distance is greater than the distance threshold, determine that the trajectory point satisfies the distance constraint condition;
if the first distance is less than or equal to the distance threshold, determine that the trajectory point does not satisfy the distance constraint condition.
In a possible design, the trajectory sampling submodule is specifically configured to:
obtain the number of first sampling points already present in the two-dimensional point set;
if the number of sampling points is determined to be less than or equal to a sampling number threshold, add the trajectory point to the two-dimensional point set as the last first sampling point;
if the number of sampling points is determined to be greater than the sampling number threshold, obtain a mean point as a weighted sum of the previous N first sampling points in the two-dimensional point set and the trajectory point, and add the mean point to the two-dimensional point set as the last first sampling point, where N is a positive integer greater than 0.
In some embodiments, the resampling submodule is specifically configured to:
fit a curve to the multiple first sampling points in the two-dimensional point set based on a curve-fitting algorithm to obtain a first spline;
fit a curve between the first and the last first sampling point in the two-dimensional point set to obtain a second spline;
splice the first spline and the second spline to obtain a closed curve;
resample the closed curve according to a segmented interval sampling strategy to obtain multiple second sampling points.
In some embodiments, the resampling submodule is specifically configured to:
determine the number of segments corresponding to the segmented interval sampling strategy;
divide the closed curve into segments according to the number of segments to obtain at least one segment curve;
collect second sampling points from the segment curves to obtain multiple second sampling points corresponding to the at least one segment curve.
As an embodiment, the sample selection submodule may be specifically configured to:
combine adjacent pairs among the multiple second sampling points to obtain at least one group of sampling points;
determine the dot product result corresponding to the two second sampling points in each group, the dot product result being obtained by computing the dot product of the tangent vectors of the two second sampling points;
for the at least one group of sampling points, if the dot product result corresponding to the two second sampling points of a target group is less than zero, determine either one of the two second sampling points in the target group to be a target sampling point;
if the dot product result corresponding to the two second sampling points of the target group is greater than or equal to zero, determine both second sampling points in the target group to be target sampling points;
obtain all target sampling points corresponding to the at least one group as the at least one target sampling point.
In some embodiments, the spatial mapping module may include:
a matrix conversion submodule, configured to convert the target sampling points into the spatial coordinate system according to the camera transformation matrix corresponding to the camera that renders the target video, to obtain the three-dimensional transformed coordinates corresponding to each target sampling point;
a point set determination submodule, configured to obtain the three-dimensional point set formed by the three-dimensional transformed coordinates corresponding to the target sampling points.
As yet another embodiment, the mesh generation unit may include:
a first processing module, configured to triangulate the entire region contained by the three-dimensional point set to obtain a triangular mesh surface;
a first determination module, configured to determine the triangular mesh surface to be the target mesh structure corresponding to a surface special effect type.
The special effect generation unit includes:
a first mapping module, configured to perform texture mapping on the three-dimensional mesh surface to obtain a three-dimensionally rendered target texture image;
an edge smoothing module, configured to perform edge smoothing on the three-dimensionally rendered target texture image to obtain the target special effect image.
In some embodiments, the first processing module includes:
a center determination submodule, configured to determine a center point corresponding to the three-dimensional point set;
a topology construction submodule, configured to construct a fan mesh topology based on the three-dimensional point set and the center point to obtain a three-dimensional topological structure;
a coordinate computation submodule, configured to convert the boundary values of the three-dimensional topological structure into computed values in the image's percentage coordinate system;
a surface determination submodule, configured to determine the triangular mesh surface based on the three-dimensional topological structure and the computed values of each point on it.
As an embodiment, the edge smoothing module includes:
a first rendering submodule, configured to render each pixel of the three-dimensionally rendered target texture image into a render texture whose channel values are all zero, to obtain a first texture map;
a second rendering submodule, configured to draw the alpha channel of the first texture map into a grayscale render texture to obtain a grayscale mask image;
an erosion-and-blur submodule, configured to perform erosion and blurring on the grayscale mask image to obtain a smoothed mask image;
an image fusion submodule, configured to perform image fusion on the smoothed mask image and the first texture map to obtain the target special effect image.
As yet another embodiment, the mesh generation unit includes:
a point set interpolation module, configured to interpolate between each pair of adjacent sampling points in the three-dimensional point set to obtain a target point set;
a mesh generation module, configured to generate a strip mesh over the contour region corresponding to the target point set to obtain the target mesh structure.
The special effect generation unit includes:
a trajectory rendering module, configured to perform trajectory rendering on the target mesh structure to obtain a three-dimensional strip trajectory map;
a trajectory processing module, configured to determine the target special effect image according to the three-dimensional strip trajectory map.
In a possible design, the trajectory processing module includes:
a third rendering submodule, configured to render the three-dimensional strip trajectory into a render texture whose channel values are all zero to obtain a second texture map;
a transparency submodule, configured to perform transparency processing on the second texture map according to the rendering transparency to obtain the target special effect image.
In a possible design, the apparatus further includes:
a time determination unit, configured to determine the trigger time of the special effect drawing request triggered by the user;
a festival determination unit, configured to determine, according to the time periods corresponding respectively to at least one festival type, the target festival type corresponding to the trigger time;
a texture association unit, configured to obtain the texture image pre-associated with the target festival type.
The special effect generation unit includes:
a second mapping module, configured to texture-map the target mesh structure using the texture image to obtain the target special effect image.
The apparatus provided in this embodiment can be used to execute the technical solutions of the method embodiments above; its implementation principles and technical effects are similar and are not repeated here.
To implement the embodiments above, an embodiment of the present disclosure further provides an electronic device.
Referring to Fig. 11, which shows a schematic structural diagram of an electronic device 1100 suitable for implementing the embodiments of the present disclosure, the electronic device 1100 may be a terminal device or a server. Terminal devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital televisions (TV) and desktop computers. The electronic device shown in Fig. 11 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 11, the electronic device 1100 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 1101, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage apparatus 1108 into a random access memory (RAM) 1103. The RAM 1103 also stores various programs and data needed for the operation of the electronic device 1100. The processing apparatus 1101, the ROM 1102, and the RAM 1103 are connected to one another via a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
In general, the following apparatuses can be connected to the I/O interface 1105: input apparatuses 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output apparatuses 1107 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage apparatuses 1108 including, for example, magnetic tape and hard disk; and a communication apparatus 1109. The communication apparatus 1109 allows the electronic device 1100 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 11 shows an electronic device 1100 with various apparatuses, it should be understood that not all of the shown apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication apparatus 1109, installed from the storage apparatus 1108, or installed from the ROM 1102. When the computer program is executed by the processing apparatus 1101, the above functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium can be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, radio frequency (RF), or any suitable combination of the foregoing.
The computer-readable medium may be included in the electronic device, or it may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the methods shown in the embodiments above.
Computer program code for executing the operations of the present disclosure can be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams can represent a module, program segment, or part of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the figures. For example, two blocks shown one after another may in fact execute substantially in parallel, and they may sometimes execute in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks, can be implemented with dedicated hardware-based systems that perform the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure can be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit can also be described as "the unit that acquires at least two Internet Protocol addresses".
The functions described above herein can be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, a special effect image drawing method is provided, including:
in response to a special effect drawing request triggered by a user, performing trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video;
meshing the three-dimensional point set to obtain a target mesh structure;
texture-mapping the target mesh structure to obtain a target special effect image;
displaying, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
According to one or more embodiments of the present disclosure, performing trajectory tracking on the target video provided by the user to obtain the three-dimensional point set corresponding to the motion trajectory of the target object in the target video includes:
identifying the target object in the target video;
sampling the motion trajectory of the target object in the target video to obtain at least one target sampling point;
mapping each of the at least one target sampling point to a three-dimensional spatial coordinate system to obtain the three-dimensional point set.
According to one or more embodiments of the present disclosure, sampling the motion trajectory of the target object in the target video to obtain at least one target sampling point includes:
sampling the motion trajectory of the target object from the target video to obtain a two-dimensional point set, the two-dimensional point set including multiple first sampling points;
resampling the two-dimensional point set to obtain multiple second sampling points;
determining, from the multiple second sampling points, at least one target sampling point that satisfies a trajectory smoothing condition.
According to one or more embodiments of the present disclosure, sampling the motion trajectory of the target object from the target video to obtain the two-dimensional point set includes:
collecting trajectory points of the target object from image frames of the target video according to a preset sampling frequency;
judging, based on a preset distance threshold, whether a trajectory point satisfies a distance constraint condition;
if yes, determining a first sampling point based on the trajectory point;
if no, replacing the last first sampling point in the two-dimensional point set with the trajectory point;
obtaining, when the sampling of the target video finishes, the two-dimensional point set corresponding to the multiple first sampling points.
According to one or more embodiments of the present disclosure, judging, based on the preset distance threshold, whether the trajectory point satisfies the distance constraint condition includes:
determining a first distance between the trajectory point and the last first sampling point in the two-dimensional point set;
if the first distance is greater than the distance threshold, determining that the trajectory point satisfies the distance constraint condition;
if the first distance is less than or equal to the distance threshold, determining that the trajectory point does not satisfy the distance constraint condition.
According to one or more embodiments of the present disclosure, determining a first sampling point based on the trajectory point includes:
obtaining the number of first sampling points already present in the two-dimensional point set;
if the number of sampling points is determined to be less than or equal to a sampling number threshold, adding the trajectory point to the two-dimensional point set as the last first sampling point;
if the number of sampling points is determined to be greater than the sampling number threshold, obtaining a mean point as a weighted sum of the previous N first sampling points in the two-dimensional point set and the trajectory point, and adding the mean point to the two-dimensional point set as the last first sampling point, where N is a positive integer greater than 0.
According to one or more embodiments of the present disclosure, resampling the two-dimensional point set to obtain multiple second sampling points includes:
fitting a curve to the multiple first sampling points in the two-dimensional point set based on a curve-fitting algorithm to obtain a first spline;
fitting a curve between the first and the last first sampling point in the two-dimensional point set to obtain a second spline;
splicing the first spline and the second spline to obtain a closed curve;
resampling the closed curve according to a segmented interval sampling strategy to obtain multiple second sampling points.
According to one or more embodiments of the present disclosure, resampling the closed curve according to the segmented interval sampling strategy to obtain multiple second sampling points includes:
determining the number of segments corresponding to the segmented interval sampling strategy;
dividing the closed curve into segments according to the number of segments to obtain at least one segment curve;
collecting second sampling points from the segment curves to obtain multiple second sampling points corresponding to the at least one segment curve.
According to one or more embodiments of the present disclosure, determining at least one target sampling point satisfying the trajectory smoothing condition from the multiple second sampling points includes:
combining adjacent pairs among the multiple second sampling points to obtain at least one group of sampling points;
determining the dot product result corresponding to the two second sampling points in each group, the dot product result being obtained by computing the dot product of the tangent vectors of the two second sampling points;
for the at least one group of sampling points, if the dot product result corresponding to the two second sampling points of a target group is less than zero, determining either one of the two second sampling points in the target group to be a target sampling point;
if the dot product result corresponding to the two second sampling points of the target group is greater than or equal to zero, determining both second sampling points in the target group to be target sampling points.
According to one or more embodiments of the present disclosure, mapping each of the at least one target sampling point to the three-dimensional spatial coordinate system to obtain the three-dimensional point set includes:
converting the target sampling points into the spatial coordinate system according to the camera transformation matrix corresponding to the camera that renders the target video, obtaining the three-dimensional transformed coordinates corresponding to each target sampling point;
obtaining the three-dimensional point set formed by the three-dimensional transformed coordinates corresponding to the target sampling points.
According to one or more embodiments of the present disclosure, meshing the three-dimensional point set to obtain the target mesh structure includes:
triangulating the entire region contained by the three-dimensional point set to obtain a triangular mesh surface;
determining the triangular mesh surface to be the target mesh structure corresponding to a surface special effect type.
Texture-mapping the target mesh structure to obtain the target special effect image includes:
performing texture mapping on the three-dimensional mesh surface to obtain a three-dimensionally rendered target texture image;
performing edge smoothing on the three-dimensionally rendered target texture image to obtain the target special effect image.
According to one or more embodiments of the present disclosure, triangulating the entire region contained by the three-dimensional point set to obtain the triangular mesh surface includes:
determining a center point corresponding to the three-dimensional point set;
constructing a fan mesh topology based on the three-dimensional point set and the center point to obtain a three-dimensional topological structure;
converting the boundary values of the three-dimensional topological structure into computed values in the image's percentage coordinate system;
determining the triangular mesh surface based on the three-dimensional topological structure and the computed values of each point on it.
According to one or more embodiments of the present disclosure, performing edge smoothing on the three-dimensionally rendered target texture image to obtain the target special effect image includes:
rendering each pixel of the three-dimensionally rendered target texture image into a render texture whose channel values are all zero, to obtain a first texture map;
drawing the alpha channel of the first texture map into a grayscale render texture to obtain a grayscale mask image;
performing erosion and blurring on the grayscale mask image to obtain a smoothed mask image;
performing image fusion on the smoothed mask image and the first texture map to obtain the target special effect image.
According to one or more embodiments of the present disclosure, meshing the three-dimensional point set to obtain the target mesh structure includes:
interpolating between each pair of adjacent sampling points in the three-dimensional point set to obtain a target point set;
generating a strip mesh over the contour region corresponding to the target point set to obtain the target mesh structure.
Texture-mapping the target mesh structure to obtain the target special effect image includes:
performing trajectory rendering on the target mesh structure to obtain a three-dimensional strip trajectory map;
determining the target special effect image according to the three-dimensional strip trajectory map.
According to one or more embodiments of the present disclosure, determining the target special effect image according to the three-dimensional strip trajectory includes:
rendering the three-dimensional strip trajectory into a render texture whose channel values are all zero to obtain a second texture map;
performing edge blending on the three-dimensional strip trajectory according to the second texture map to obtain a trajectory-blended image;
performing transparency processing on the trajectory-blended image according to the rendering transparency to obtain the target special effect image.
According to one or more embodiments of the present disclosure, the method further includes:
determining the trigger time of the special effect drawing request triggered by the user;
determining, according to the time periods corresponding respectively to at least one festival type, the target festival type corresponding to the trigger time;
obtaining the texture image pre-associated with the target festival type.
Texture-mapping the target mesh structure to obtain the target special effect image includes:
texture-mapping the target mesh structure using the texture image to obtain the target special effect image.
In a second aspect, according to one or more embodiments of the present disclosure, a special effect image drawing apparatus is provided, including:
a trajectory tracking unit, configured to, in response to a special effect drawing request triggered by a user, perform trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to the motion trajectory of a target object in the target video;
a mesh generation unit, configured to mesh the three-dimensional point set to obtain a target mesh structure;
a special effect generation unit, configured to texture-map the target mesh structure to obtain a target special effect image;
a special effect display unit, configured to display, in the target video, the target special effect image corresponding to the motion trajectory of the target object.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device is provided, including: a processor, a memory, and an output device;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor is configured with the special effect image drawing method of the first aspect and its various possible designs, and the output device is used to output a target video carrying the target special effect image.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, storing computer-executable instructions which, when executed by a processor, implement the special effect image drawing method of the first aspect and its various possible designs.
In a fifth aspect, according to one or more embodiments of the present disclosure, a computer program is provided which, when executed by a processor, implements the special effect image drawing method of the first aspect and its various possible designs.
In a sixth aspect, according to one or more embodiments of the present disclosure, a computer program product is provided, including a computer program which, when executed by a processor, implements the special effect image drawing method of the first aspect and its various possible designs.
The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed (but not limited to) in the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that the operations be executed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous.
Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or logical actions of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (17)

  1. 一种特效图像绘制方法,包括:
    响应于用户触发的特效绘制请求,对所述用户提供的目标视频进行轨迹追踪,获得所述目标视频中目标对象的运动轨迹对应的三维点集;
    将所述三维点集进行网格化处理,获得目标网格结构;
    对所述目标网格结构进行贴图处理,获得目标特效图像;
    在所述目标视频中显示与所述目标对象的运动轨迹对应的所述目标特效图像。
  2. 根据权利要求1所述的方法,其中,所述对所述用户提供的目标视频进行轨迹追踪,获得所述目标视频中的目标对象的运动轨迹对应的三维目标点集,包括:
    识别所述目标视频中所述目标对象;
    对所述目标对象在所述目标视频中的运动轨迹进行采样,获得至少一个目标采样点;
    将至少一个所述目标采样点分别映射到三维空间坐标系,获得所述三维点集。
  3. 根据权利要求2所述的方法,其中,所述对所述目标对象在所述目标视频中的运动轨迹进行采样,获得至少一个目标采样点,包括:
    从所述目标视频中对所述目标对象的运动轨迹进行采样,获得二维点集;所述二维点集包括多个第一采样点;
    对所述二维点集进行重采样,获得多个第二采样点;
    从多个所述第二采样点中确定满足轨迹平滑条件的至少一个所述目标采样点。
  4. 根据权利要求3所述的方法,其中,所述从所述目标视频中对所述目标对象的运动轨迹进行采样,获得二维点集,包括:
    根据预设采样频率,从所述目标视频的图像帧中采集所述目标对象的轨迹点;
    基于预设距离阈值,判断所述轨迹点是否满足距离约束条件;
    若是,则基于所述轨迹点,确定第一采样点;
    若否,则替换所述轨迹点为所述二维点集中的最后一个第一采样点;
    获得所述目标视频采集结束时获得的多个第一采样点对应的所述二维点集。
  5. 根据权利要求3或4所述的方法,其中,所述对所述二维点集进行重采样,获得多个第二采样点,包括:
    基于曲线拟合算法,对所述二维点集中的多个第一采样点进行曲线拟合,获得第一样条线;
    对所述二维点集中的第一个第一采样点和最后一个第一采样点进行曲线拟合,获得第二样条线;
    根据所述第一样条线和所述第二样条线的拼接,获得闭合曲线;
    根据分段间隔采样策略,对所述闭合曲线进行重采样,获得所述多个所述第二采样点。
  6. 根据权利要求3至5任一项所述的方法,其中,所述从多个所述第二采样点中确定满足轨迹平滑条件的至少一个所述目标采样点,包括:
    将多个所述第二采样点中相邻两个采样点组合,获得至少一组采样点;
    确定各组采样点中两个第二采样点对应的点乘结果,所述点乘结果为两个第二采样点的切线向量点乘计算获得;
    对于所述至少一组采样点,若目标组的两个第二采样点对应的点乘结果小于零,则确定目标组中两个第二采样点中的任意第二采样点为所述目标采样点;
    若所述目标组的两个第二采样点对应的点乘结果大于等于零,则确定目标组中两个第二采样点均为所述目标采样点;
    获得所述至少一组所述采样点对应所有目标采样点为至少一个所述目标采样点。
  7. The method according to any one of claims 1 to 6, wherein performing meshing processing on the three-dimensional point set to obtain the target mesh structure comprises:
    performing triangulation on the whole region enclosed by the three-dimensional point set to obtain a triangular mesh surface; and
    determining the triangular mesh surface as the target mesh structure corresponding to a surface special-effect type; and
    wherein performing texture-mapping processing on the target mesh structure to obtain the target special-effect image comprises:
    performing texture mapping on the triangular mesh surface to obtain a three-dimensionally rendered target texture image; and
    performing edge smoothing on the three-dimensionally rendered target texture image to obtain the target special-effect image.
  8. The method according to claim 7, wherein performing triangulation on the whole region enclosed by the three-dimensional point set to obtain the triangular mesh surface comprises:
    determining a center point corresponding to the three-dimensional point set;
    constructing a fan mesh topology based on the three-dimensional point set and the center point to obtain a three-dimensional topological structure;
    converting boundary values of the three-dimensional topological structure into computed values in a percentage coordinate system of an image; and
    determining the triangular mesh surface based on the three-dimensional topological structure and the computed values of the points on the three-dimensional topological structure.
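The fan-mesh construction of claim 8 can be illustrated as follows. Using the centroid as the fan apex and normalizing x/y into the unit square as the "percentage coordinate" conversion are both assumptions, since the claim leaves the center point and the conversion unspecified; `fan_mesh` is a hypothetical helper.

```python
import numpy as np

def fan_mesh(points):
    """Build a triangle-fan mesh over a 3-D boundary point set (sketch of claim 8).

    The centroid serves as the fan apex; each boundary edge plus the apex
    forms one triangle. UVs are a planar projection of x/y into [0, 1]
    percentage coordinates.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    vertices = np.vstack([center, pts])         # vertex 0 is the fan apex
    n = len(pts)
    # each triangle: apex, boundary point i, next boundary point (wrapping)
    triangles = [(0, 1 + i, 1 + (i + 1) % n) for i in range(n)]
    # percentage coordinates: normalize x/y of all vertices into the unit square
    lo = vertices[:, :2].min(axis=0)
    hi = vertices[:, :2].max(axis=0)
    uvs = (vertices[:, :2] - lo) / np.where(hi - lo > 0, hi - lo, 1.0)
    return vertices, triangles, uvs
```

The fan topology always yields exactly as many triangles as there are boundary points, which keeps the surface well-formed for any simple closed trajectory.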
  9. The method according to claim 7 or 8, wherein performing edge smoothing on the three-dimensionally rendered target texture image to obtain the target special-effect image comprises:
    rendering each pixel of the three-dimensionally rendered target texture image into a render texture whose channel values are all zero to obtain a first texture map;
    drawing the alpha channel of the first texture map into a grayscale render texture to obtain a grayscale mask;
    performing erosion and blurring on the grayscale mask to obtain a smoothed mask; and
    performing image fusion on the smoothed mask and the first texture map to obtain the target special-effect image.
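A NumPy-only sketch of the erosion-then-blur smoothing in claim 9, with a minimum filter and a box blur standing in for the unspecified erosion and blur operators (a production pipeline would more likely use GPU filters or OpenCV morphology):

```python
import numpy as np

def erode(mask, k=3):
    """k x k minimum-filter erosion on a 2-D grayscale mask."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.full_like(mask, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

def box_blur(mask, k=3):
    """k x k box blur: mean of each pixel's neighbourhood."""
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    acc = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return acc / (k * k)

def smooth_mask(mask):
    """Erosion followed by blur, as in claim 9's edge-smoothing step."""
    return box_blur(erode(mask))
```

Eroding first pulls the mask boundary inward so the subsequent blur feathers the edge without bleeding the effect outside its original silhouette.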
  10. The method according to any one of claims 1 to 6, wherein performing meshing processing on the three-dimensional point set to obtain the target mesh structure comprises:
    interpolating between each pair of adjacent sampling points in the three-dimensional point set to obtain a target point set; and
    generating a strip mesh for the contour region corresponding to the target point set to obtain the target mesh structure; and
    wherein performing texture-mapping processing on the target mesh structure to obtain the target special-effect image comprises:
    performing trajectory rendering on the target mesh structure to obtain a three-dimensional strip trajectory map; and
    determining the target special-effect image according to the three-dimensional strip trajectory map.
  11. The method according to claim 10, wherein determining the target special-effect image according to the three-dimensional strip trajectory map comprises:
    rendering the three-dimensional strip trajectory map into a render texture whose channel values are all zero to obtain a second texture map; and
    performing transparency processing on the second texture map according to a rendering opacity to obtain the target special-effect image.
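The transparency processing of claim 11 reduces to scaling the alpha channel of the rendered RGBA texture by a rendering opacity; a minimal sketch, with `apply_opacity` a hypothetical helper name:

```python
import numpy as np

def apply_opacity(rgba, opacity):
    """Scale the alpha channel of an RGBA texture by an opacity in [0, 1]
    (sketch of the transparency step in claim 11)."""
    out = rgba.astype(float)        # avoid uint8 overflow during scaling
    out[..., 3] *= opacity          # only the alpha channel is attenuated
    return out.astype(np.uint8)
```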
  12. The method according to any one of claims 1 to 11, further comprising:
    determining a trigger time of the special-effect drawing request triggered by the user;
    determining a target festival type corresponding to the trigger time according to time periods respectively corresponding to at least one festival type; and
    acquiring a texture image pre-associated with the target festival type;
    wherein performing texture-mapping processing on the target mesh structure to obtain the target special-effect image comprises:
    performing texture-mapping processing on the target mesh structure with the texture image to obtain the target special-effect image.
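The festival lookup of claim 12 could look like the following, where the `FESTIVALS` table, its names, and its date ranges are purely hypothetical placeholders for whatever periods an implementation pre-configures:

```python
from datetime import date

# Hypothetical festival periods as (month, day) ranges; the real mapping is
# implementation-defined and not fixed by the claim.
FESTIVALS = {
    "spring_festival": ((1, 20), (2, 10)),
    "mid_autumn": ((9, 10), (9, 20)),
    "christmas": ((12, 20), (12, 26)),
}

def festival_for(trigger: date):
    """Return the festival type whose period contains the trigger date, if any."""
    md = (trigger.month, trigger.day)
    for name, (start, end) in FESTIVALS.items():
        if start <= md <= end:          # tuple comparison: (month, day) order
            return name
    return None                         # trigger time falls in no festival period
```

The resolved festival type then selects which pre-associated texture image is applied in the texture-mapping step.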
  13. A special-effect image drawing apparatus, comprising:
    a trajectory tracking unit configured to, in response to a special-effect drawing request triggered by a user, perform trajectory tracking on a target video provided by the user to obtain a three-dimensional point set corresponding to a motion trajectory of a target object in the target video;
    a mesh generation unit configured to perform meshing processing on the three-dimensional point set to obtain a target mesh structure;
    a special-effect generation unit configured to perform texture-mapping processing on the target mesh structure to obtain a target special-effect image; and
    a special-effect display unit configured to display, in the target video, the target special-effect image corresponding to the motion trajectory of the target object.
  14. An electronic device, comprising: a processor, a memory, and an output apparatus;
    the memory storing computer-executable instructions; and
    the processor executing the computer-executable instructions stored in the memory, causing the processor to perform the special-effect image drawing method according to any one of claims 1 to 12, the output apparatus being configured to output the target video with the target special-effect image.
  15. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special-effect image drawing method according to any one of claims 1 to 12.
  16. A computer program which, when executed by a processor, implements the special-effect image drawing method according to any one of claims 1 to 12.
  17. A computer program product, comprising a computer program which, when executed by a processor, implements the special-effect image drawing method according to any one of claims 1 to 12.
PCT/CN2023/117329 2022-09-08 2023-09-06 Special-effect image drawing method, apparatus, device, and medium WO2024051756A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211098070.6A CN115578495A (zh) 2022-09-08 2022-09-08 Special-effect image drawing method, apparatus, device, and medium
CN202211098070.6 2022-09-08

Publications (1)

Publication Number Publication Date
WO2024051756A1 true WO2024051756A1 (zh) 2024-03-14

Family

ID=84580862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/117329 WO2024051756A1 (zh) 2022-09-08 2023-09-06 Special-effect image drawing method, apparatus, device, and medium

Country Status (2)

Country Link
CN (1) CN115578495A (zh)
WO (1) WO2024051756A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578495A (zh) * 2022-09-08 2023-01-06 北京字跳网络技术有限公司 Special-effect image drawing method, apparatus, device, and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020027555A1 (en) * 2000-06-28 2002-03-07 Kenichi Mori Method of rendering motion blur image and apparatus therefor
CN110047124A (zh) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method and apparatus for rendering video, electronic device, and computer-readable storage medium
CN112365601A (zh) * 2020-11-19 2021-02-12 连云港市拓普科技发展有限公司 Structured-light three-dimensional point cloud reconstruction method based on feature point information
CN112634401A (zh) * 2020-12-28 2021-04-09 深圳市优必选科技股份有限公司 Planar trajectory drawing method, apparatus, device, and storage medium
CN113706709A (zh) * 2021-08-10 2021-11-26 深圳市慧鲤科技有限公司 Text special-effect generation method and related apparatus, device, and storage medium
CN114401443A (zh) * 2022-01-24 2022-04-26 脸萌有限公司 Special-effect video processing method, apparatus, electronic device, and storage medium
CN114598824A (zh) * 2022-03-09 2022-06-07 北京字跳网络技术有限公司 Special-effect video generation method, apparatus, device, and storage medium
CN115063518A (zh) * 2022-06-08 2022-09-16 Oppo广东移动通信有限公司 Trajectory rendering method, apparatus, electronic device, and storage medium
CN115578495A (zh) * 2022-09-08 2023-01-06 北京字跳网络技术有限公司 Special-effect image drawing method, apparatus, device, and medium


Also Published As

Publication number Publication date
CN115578495A (zh) 2023-01-06

Similar Documents

Publication Publication Date Title
WO2020228405A1 (zh) Image processing method and apparatus, and electronic device
WO2024051756A1 (zh) Special-effect image drawing method, apparatus, device, and medium
WO2022237811A1 (zh) Image processing method, apparatus, and device
WO2021139382A1 (zh) Face image processing method and apparatus, readable medium, and electronic device
US11561651B2 Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN112163990B (zh) Saliency prediction method and system for 360-degree images
WO2023207522A1 (zh) Video synthesis method, apparatus, device, medium, and product
CN111652791B (zh) Face replacement display and live-streaming method, apparatus, electronic device, and storage medium
WO2024001360A1 (zh) Green-screen matting method and apparatus, and electronic device
CN113313832B (zh) Semantic generation method and apparatus for three-dimensional models, storage medium, and electronic device
CN108805849B (zh) Image fusion method, apparatus, medium, and electronic device
CN109523622A (zh) Unstructured light field rendering method
WO2023029893A1 (zh) Texture mapping method, apparatus, device, and storage medium
WO2021073443A1 (zh) Region-of-interest detection method and apparatus, electronic device, and readable storage medium
WO2023193639A1 (zh) Image rendering method and apparatus, readable medium, and electronic device
WO2023173727A1 (zh) Image processing method and apparatus, and electronic device
CN111462205B (zh) Image data deformation and live-streaming method, apparatus, electronic device, and storage medium
US20230133416A1 (en) Image processing method and apparatus, and device and medium
CN111127603B (zh) Animation generation method and apparatus, electronic device, and computer-readable storage medium
CN113129352A (zh) Sparse light field reconstruction method and apparatus
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN114596383A (zh) Line special-effect processing method, apparatus, electronic device, storage medium, and product
JP7387001B2 (ja) Image synthesis method, apparatus, and storage medium
CN111862342A (zh) Augmented-reality texture processing method, apparatus, electronic device, and storage medium
WO2020155908A1 (zh) Method and apparatus for generating information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862450

Country of ref document: EP

Kind code of ref document: A1