CN113709389A - Video rendering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113709389A
Authority
CN
China
Prior art keywords
target
image frame
output
key point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010436943.4A
Other languages
Chinese (zh)
Inventor
肖逸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010436943.4A
Publication of CN113709389A
Legal status: Pending

Classifications

    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N2005/2726 Means for inserting a foreground image in a background image for simulating a person's appearance, e.g. hair style, glasses, clothes
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The video rendering method stores the trajectory of a video's key points in a track record map: before each frame of the video is output, the key point position in that frame is rendered at the corresponding position in the track record map, and the track record map is then overlaid on the frame for output, producing a video that displays the key point trajectory. With the disclosed method, for each frame only the key points determined in that frame need to be rendered, rather than all key points determined in previously output frames, which reduces performance consumption and improves the fluency of video rendering.

Description

Video rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video rendering method and apparatus, an electronic device, and a storage medium.
Background
With the increasing popularity of short-video applications, more and more short-video special-effect rendering technologies have emerged. Most of them combine a deep learning algorithm to detect key points in video frames and render related special effects at the detected key point positions.
Among these is a class of special effects that must render the moving trajectory of a key point in real time during video recording, playback, or live streaming, such as the novelty effect "nose painting": the key point "nose tip" is treated as a brush. When the 1st video image frame is output, the nose-tip position K1 in that frame is detected and the specified effect is rendered at K1; when the 2nd frame is output, the nose-tip position K2 is detected and the specified effect is rendered at both K1 and K2; and so on, until the nth frame is output, the nose-tip position Kn is detected, and the specified effect is rendered at K1, K2, ..., Kn. In this way, the key point trajectory is rendered continuously in real time as the nose moves during recording, playback, or live streaming, achieving a drawing effect.
The problem with this existing approach is that as the video duration grows, the number of frames grows, and so does the number of key points that must be rendered in each frame. This seriously consumes computing performance, and the video frame rate drops noticeably.
Disclosure of Invention
In view of the above technical problems, embodiments of the present disclosure provide a video rendering method. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video rendering method, including:
acquiring a pre-created track record map having the same size as the image frames of a target video;
while the target video outputs its image frames in sequence, performing the following operations for any image frame to be output:
determining a current position of a target key point in the image frame to be output, and rendering at the same position in the track record map, wherein the target key point is a target pixel point in a target image region of the image frame, and the positions of the target key point across the image frames form a moving trajectory of the target key point;
and performing overlay display processing on the track record map and the image frame to be output, so as to output an image frame carrying the moving trajectory of the target key point.
Optionally, the determining the current position of the target key point in the image frame to be output includes:
determining a target part on which rendering is to be performed;
and performing image recognition on the image frame to be output, determining the region where the recognized target part is located as the target image region, and determining the current position of the target key point in the target image region.
Optionally, the track record map is a texture map, and the overlay display processing of the track record map and the image frame to be output includes:
displaying the image frame to be output, and rendering the track record map onto the image frame to be output.
Optionally, the track record map is a texture map, and the overlay display processing of the track record map and the image frame to be output includes:
superimposing the image frame to be output and the track record map to obtain a combined image frame carrying the moving trajectory of the target key point, and displaying the combined image frame.
Optionally, after determining the current position of the target key point in the image frame to be output, the method further includes:
determining, in the target video, the previous image frame of the image frame to be output, and acquiring the previous position of the target key point determined in that previous image frame;
determining a supplementary position of at least one target key point along the line connecting the current position of the target key point and the previous position of the target key point;
and, each time one such supplementary position is determined, rendering at the same position in the track record map.
Optionally, the determining a supplementary position of at least one target key point along the line connecting the current position of the target key point and the previous position of the target key point includes:
acquiring a preset interpolation distance;
and performing at least one linear interpolation calculation according to the current position of the target key point, the previous position of the target key point, and the interpolation distance, and determining the supplementary position of at least one key point from each interpolation result.
According to a second aspect of the embodiments of the present disclosure, there is provided a video rendering apparatus including:
a record acquisition module configured to acquire a pre-created track record map having the same size as the image frames of a target video;
a video output module configured to perform the following operations, while the target video outputs its image frames in sequence, for any image frame to be output:
a special effect rendering module configured to determine a current position of a target key point in the image frame to be output and render at the same position in the track record map, wherein the target key point is a target pixel point in a target image region of the image frame, and the positions of the target key point across the image frames form a moving trajectory of the target key point;
and a special effect display module configured to perform overlay display processing on the track record map and the image frame to be output, so as to output an image frame carrying the moving trajectory of the target key point.
Optionally, the special effect rendering module, when determining the current position of the target key point in the image frame to be output, is configured to:
determine a target part on which rendering is to be performed;
and perform image recognition on the image frame to be output, determine the region where the recognized target part is located as the target image region, and determine the current position of the target key point in the target image region.
Optionally, the track record map is a texture map, and the special effect display module, when performing overlay display processing on the track record map and the image frame to be output, is configured to:
display the image frame to be output, and render the track record map onto the image frame to be output.
Optionally, the track record map is a texture map, and the special effect display module, when performing overlay display processing on the track record map and the image frame to be output, is configured to:
superimpose the image frame to be output and the track record map to obtain a combined image frame carrying the moving trajectory of the target key point, and display the combined image frame.
Optionally, the apparatus further includes a position supplement module configured to:
determine, in the target video, the previous image frame of the image frame to be output, and acquire the previous position of the target key point determined in that previous image frame;
determine a supplementary position of at least one target key point along the line connecting the current position of the target key point and the previous position of the target key point;
and, each time one such supplementary position is determined, render at the same position in the track record map.
Optionally, the position supplement module, when determining a supplementary position of at least one target key point along the line connecting the current position of the target key point and the previous position of the target key point, is configured to:
acquire a preset interpolation distance;
and perform at least one linear interpolation calculation according to the current position of the target key point, the previous position of the target key point, and the interpolation distance, and determine the supplementary position of at least one key point from each interpolation result.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the method according to the first aspect.
Embodiments of the present disclosure provide a video rendering method and apparatus, an electronic device, and a storage medium. The method stores the trajectory of a video's key points in a track record map: before each frame of the video is output, the key point position in that frame is rendered at the corresponding position in the track record map, and the track record map is then overlaid on the frame for output, producing a video that displays the key point trajectory. For each frame, only the key points determined in that frame need to be rendered; the key points determined in all previously output frames do not need to be rendered again. On mobile terminals, where performance consumption must be strictly controlled, this optimizes performance and improves the fluency of video rendering.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Moreover, any one of the embodiments of the present disclosure need not achieve all of the effects described above.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a video rendering method shown in an exemplary embodiment of the present disclosure;
fig. 2 is another flowchart of a video rendering method shown in an exemplary embodiment of the present disclosure;
fig. 3 is another flowchart of a video rendering method shown in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a non-smooth trajectory shown in an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a smoothed trajectory shown in an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating linear interpolation according to an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a track recording shown in an exemplary embodiment of the present disclosure;
fig. 8 is another flowchart of a video rendering method shown in an exemplary embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a video rendering apparatus according to an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a computer device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
With the increasing popularity of short-video applications, more and more short-video special-effect rendering technologies have emerged. Most of them combine a deep learning algorithm to detect key points in video frames and render related special effects at the detected key point positions.
Among these is a class of special effects that must render the moving trajectory of a key point in real time during video recording, playback, or live streaming, such as the novelty effect "nose painting": the key point "nose tip" is treated as a brush. When the 1st video image frame is output, the nose-tip position K1 in that frame is detected and the specified effect is rendered at K1; when the 2nd frame is output, the nose-tip position K2 is detected and the specified effect is rendered at both K1 and K2; and so on, until the nth frame is output, the nose-tip position Kn is detected, and the specified effect is rendered at K1, K2, ..., Kn. In this way, the key point trajectory is rendered continuously in real time as the nose moves during recording, playback, or live streaming, achieving a drawing effect.
The problem with this existing approach is that as the video duration grows, the number of frames grows, and so does the number of key points that must be rendered in each frame. This seriously consumes computing performance, and the video frame rate drops noticeably.
To solve this problem, the present disclosure provides a video rendering method, and a video rendering apparatus that applies the method. The method is first described as a whole. Referring to fig. 1, the method includes the following steps S101 to S103:
in step S101, a pre-created track record map that is consistent with the size of the target video image frame is acquired;
the image frame is a single image frame of the smallest unit in the video image, and can be understood as each frame of a shot on a motion picture film. Each video is composed of a continuous picture, and each picture is an image frame. Specifically, the image frames of a video may be divided into key frames and transition frames.
Key frame: any video to be represented with motion or change at least has two different key states before and after, and the change of the intermediate state and the connection computer can be automatically completed, and the frames representing the key states are called key frames. Transition frame: between two key frames, the frames in which the computer automatically completes the transition picture are called transition frames.
In an embodiment of the present disclosure, the image frames of the target video refer to key frames in the video.
In the embodiment of the present disclosure, the track record map is a map whose size is consistent with the image frame size of the target video; that is, the length and width of the track record map are the same as those of the video frames of the target video, so any pixel point in any video frame of the target video has a point at the same position in the track record map corresponding to that video.
Specifically, the track record map may be created when the user's selection of a specified special effect is received. For example, when a user intends to record a short video using the special effect "nose painting", and the user device (e.g., a mobile phone) receives the user's selection of this specified special effect, a corresponding track record map is created for the video, and every image frame in the video uses this track record map to record the key point positions.
In step S102, determining a current position of a target keypoint in the image frame to be output, and rendering at the same position of the track recording map, where the target keypoint is a target pixel point in a target image region of the image frame, and a position change of the target keypoint in each image frame may form a moving track of the target keypoint;
the target video sequentially outputs the image frames in sequence, which may be a video recording/playing process, for example: in the short video recording process, the recorded image frames need to be continuously displayed on the screen in sequence, and the process can be regarded as a process of outputting the image frames.
In an embodiment of the present disclosure, the current positions of the key points in the image frames to be output of the target video may be determined by, but not limited to, the following methods:
(1-1) determining a target part needing to be rendered;
(1-2) performing image recognition on the image frame to be output, determining the region where the recognized target part is located as the target image region, and determining the current position of the target key point in the target image region.
For example: in the special effect "nose painting", the target part to which the special effect is attached is the nose tip. Image recognition can be performed on the image frame to be output using an established recognition model to recognize the nose tip, and the current position of the nose tip is determined as the current position of the key point.
As noted above, the track record map has the same size as the image frames of the target video, so any pixel point in any video frame of the target video has a counterpart at the same position in the track record map. Therefore, once the key point position in the image frame to be output is detected, the same position can be found in the track record map. For example, taking the top-left pixel of both the image frame to be output and the track record map as coordinate (0, 0), after the key point in the image frame is detected at coordinate (1, 1), the coordinate (1, 1) is likewise located in the track record map, and the specified special effect is rendered at coordinate (1, 1) of the track record map. In particular, rendering the specified special effect may amount to drawing a designated particle at the pixel at that position.
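The coordinate correspondence described above can be sketched as follows. This is a minimal illustration assuming frames and the track record map are plain 2D lists with one value per pixel (a real implementation would render into a GPU texture); all names are hypothetical.

```python
# Minimal sketch: because the track record map has exactly the same width
# and height as the video frames, a key point detected at (x, y) in a
# frame is rendered at the very same (x, y) in the track record map.

W, H = 8, 6                       # illustrative frame size

def create_track_map(w, h):
    """Track record map with the same width and height as the video frames."""
    return [[0] * w for _ in range(h)]

def render_keypoint(track_map, x, y):
    """Render the specified effect (here: mark the pixel) at frame coordinate (x, y)."""
    track_map[y][x] = 1           # identical (x, y) as in the image frame

track_map = create_track_map(W, H)
nose_tip = (1, 1)                 # detected key point position in this frame
render_keypoint(track_map, *nose_tip)
assert track_map[1][1] == 1       # rendered at the same position in the map
```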
In step S103, the trajectory recording map and the image frame to be output are subjected to overlay display processing to output an image frame having a movement trajectory of the target keypoint.
In an embodiment of the present disclosure, a manner of performing the overlay display processing on the track recording map and the image frame to be output may be: and normally displaying the image frame to be output on a screen, and rendering the track record graph on the image frame to be output.
In an embodiment of the present disclosure, after the track log is created, the track log is directly rendered on the screen, and the target video is normally output in the subsequent process, and meanwhile, the special effect track in the track log is continuously updated according to the positions of the key points in different image frames of the target video. The track log may be considered as a "mask" of the target video image frames.
Taking the special effect "nose painting" as an example: from the user's perspective, the track record map itself is imperceptible; the user only sees a trajectory effect that updates in real time as the nose moves on the screen. From the device's perspective, outputting each frame only requires rendering that frame's key point positions into the track record map, so the computational cost is low.
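The cost argument can be illustrated with a hedged sketch (function and variable names are illustrative, not from the patent): the track record map is created once and persists across frames like a "mask", so each frame performs a single render call, yet every past key point position is retained.

```python
# Sketch: one render call per output frame (constant work), while the
# persistent track record map accumulates the full trajectory. The naive
# approach of the background section would instead re-render 1+2+...+n points.

def output_with_trajectory(keypoints_per_frame, w, h):
    track_map = [[0] * w for _ in range(h)]   # created once for the video
    render_calls = 0
    for x, y in keypoints_per_frame:          # one detected key point per frame
        track_map[y][x] = 1                   # render ONLY this frame's point
        render_calls += 1
        # ...here the frame would be displayed with track_map overlaid...
    return track_map, render_calls

track_map, calls = output_with_trajectory([(1, 1), (1, 3), (2, 5)], w=8, h=6)
assert calls == 3                             # one render per frame, not 1+2+3
assert sum(map(sum, track_map)) == 3          # yet all 3 positions are kept
```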
Fig. 2 is a flowchart illustrating a method for creating a track log graph, which may be used on a platform capable of performing video rendering according to an exemplary embodiment and is based on the method illustrated in fig. 1, and as illustrated in fig. 2, the method may include the following steps S201 to S203:
in step S201, in the process that the target video sequentially outputs the image frames in sequence, receiving the selected information of the specified special effect from the user;
specifically, the specified special effect is a special effect which needs to display the movement track of the key point in real time, and the display effect of the specified special effect comprises the following steps: and taking a designated part as a key point, and rendering the moving track of the key point in real time in the output process of the video frame. For example, the following steps are carried out: in the video recording process, the nose tip of a person is used as a key point, the current position of the nose tip is determined in real time, and the current position and the historical position of the nose tip are rendered into a special effect to display the moving track of the nose tip.
In step S202, determining the size of the target video image frame, and creating a blank texture map having the same size as the target video image frame;
when the user selects the specified special effect, a blank texture map is created, and the blank texture map can be regarded as a fully transparent image, namely the transparency of each pixel point in the image is set to be the highest. The purpose of this setting is that, when the texture map (track record map) recorded with the special effect track is superposed with the target video image frame, only the display effect of the special effect is increased on the movement track of the key point of the image frame, and the adverse effects such as display occlusion and the like on other pixels of the image frame not in the movement track are not caused.
In step S203, a corresponding relationship is established between the blank texture map and the target video, and the blank texture map is determined as a track record map of the target video.
It can be appreciated that a specified special effect is a class of effects that display the real-time trajectory of a key point. Effects such as "nose painting" and "finger painting" are different concrete effects within this class.
For one target video, multiple special effects may be rendered simultaneously or in succession. For example: the user selects "nose painting" and the nose's moving-trajectory effect is displayed during subsequent video output; the user then switches to "finger painting", and the finger's moving-trajectory effect is displayed during subsequent video output.
In an embodiment of the present disclosure, a separate track record map may be created for each of the different specified special effects in the same video. On the one hand, this avoids the problems that could arise from repeatedly rendering the same position of a single track record map when the moving trajectories of different specified special effects overlap. On the other hand, when the user wants to remove the moving trajectory of one of the specified special effects from the picture, only the track record map corresponding to that effect needs to be removed; if the moving trajectories of different specified special effects in the same video were recorded in a single track record map, this would not be possible.
In an embodiment of the present disclosure, to make the track of the key points smoother and more coherent, a "position supplement algorithm" is designed: in addition to the detected key point positions, some supplementary positions of the key points are added. Referring to fig. 3, this includes the following steps S301 to S304:
in step S301, in the target video, determining a previous image frame of the image frames to be output, and acquiring a previous position of the target key point determined in the previous image frame;
in an embodiment of the present disclosure, the key point positions determined in each image frame of the target video may be stored in a predetermined variable, array, or the like; the storage format may refer to Table 1 below:

Frame number    Key point position
1               (1, 1)
2               (1, 3)
3               (1, 10)

TABLE 1
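The per-frame storage above can be sketched as a plain mapping from frame number to position (the dict-based layout and helper name are assumptions; the patent only says "a predetermined variable or array"):

```python
# Key point positions per frame, mirroring Table 1.
keypoint_positions = {1: (1, 1), 2: (1, 3), 3: (1, 10)}

def last_position(frame_no, positions):
    """Return the position determined in the previous frame (step S301),
    or None when there is no previous frame (the first frame)."""
    return positions.get(frame_no - 1)
```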
In step S302, at least one supplementary position of the target key point is determined along the line connecting the current position of the target key point and the last position of the target key point.
In step S303, each time one of the supplementary positions is determined, rendering is performed at the same position of the track recording map.
In some cases the key point moves quickly, so that its position in the last frame may be far from its position in this frame, as in frames 2 and 3 of Table 1. Gaps then appear in the recorded track of the key point, leaving blanks in the middle of the rendered track; the track is not smooth enough, which affects the visual effect. The track effect may refer to fig. 4, where fingertip drawing produces an intermittent track.
Therefore, in this embodiment, supplementary positions of the key point are added between the current position of the key point and the last position of the key point, and for each supplementary position the specified special effect is rendered at the same position of the track record map, filling in the possible blanks in the middle of the track and correcting its lack of smoothness. The track effect may refer to fig. 5, where fingertip drawing produces a continuous track.
In an embodiment of the present disclosure, the supplementary positions of the key point may be calculated by a linear interpolation algorithm, as follows:
(2-1) acquiring a preset interpolation distance;
(2-2) performing at least one linear interpolation calculation according to the current position of the target key point, the last position of the target key point, and the interpolation distance, and determining at least one supplementary position of the key point from the corresponding interpolation results.
Let the coordinates of the last position of the key point be (x₁, y₁) and the coordinates of the current position of the key point be (x₀, y₀). Referring to fig. 6, according to the two-point equation of a straight line:

(y − y₁) / (x − x₁) = (y₀ − y₁) / (x₀ − x₁)

the value of y can therefore be obtained as:

y = y₁ + (y₀ − y₁)(x − x₁) / (x₀ − x₁)
Values of x can be chosen based on the preset minimum interpolation distance, and the corresponding values of y obtained from the above formula, giving the coordinate data of the supplementary positions of the key point. The specific storage form may refer to Table 2 below:
TABLE 2 (coordinates of the supplementary key point positions; in the original publication this table appears only as an image)
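The interpolation above can be sketched as follows (the even spacing rule and the function name are assumptions; the patent only fixes a "preset interpolation distance"):

```python
import math

def supplementary_positions(last_pos, current_pos, interp_dist):
    """Supplementary key point positions along the straight line from
    last_pos (x1, y1) to current_pos (x0, y0), spaced roughly
    interp_dist apart; both endpoints are excluded."""
    (x1, y1), (x0, y0) = last_pos, current_pos
    dist = math.hypot(x0 - x1, y0 - y1)
    n = int(dist // interp_dist)       # segments that fit between the points
    return [(x1 + i * (x0 - x1) / n,   # same result as the line equation above
             y1 + i * (y0 - y1) / n) for i in range(1, n)]
```

For frames 2 and 3 of Table 1, a last position of (1, 3) and a current position of (1, 10) with an interpolation distance of 1 yield the six intermediate points (1, 4) through (1, 9), closing the kind of gap shown in fig. 4.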
The video rendering method provided by the present disclosure is described in more detail below; referring to fig. 8, it includes steps S801 to S806:
In step S801, selection information of a specified special effect is received from the user;
in step S802, a track record map is created in accordance with the size of the target video image frames;
in step S803, while the target video outputs its image frames in sequence, for any image frame to be output, the current position of the key point in the image frame to be output is determined, and the last position of the key point, determined in the last image frame, is acquired;
in step S804, at least one supplementary position of the key point is determined along the line connecting the current position of the key point and the last position of the key point;
in step S805, the specified special effect is rendered at the same positions of the track record map for the current position and the supplementary positions of the key point;
in step S806, the track record map and the image frame to be output are subjected to overlay display processing, so as to output an image frame carrying the track special effect of the key point.
Specifically, the video rendering method provided by the disclosure can be applied during the recording or playing of a video, and also during live broadcasting. In all of these processes, the embodiments of the present disclosure use a track record map to store the track of the key points of a video; fig. 7 illustrates how the track in the track record map changes over time (fig. 7 is not itself a track record map, but an illustration of the change of the recorded track). Before each frame of the video is output, the key point positions of that frame are rendered at the corresponding positions in the track record map, and the frame is then overlaid on the track record map carrying the rendered key point tracks for output, which yields the same track display effect. For each frame, the rendering operation only needs to be performed for the key point positions determined in that frame, not for all the key point positions determined in earlier frames. Where the performance of a mobile terminal must be strictly controlled, this optimizes performance consumption and improves the fluency of video rendering.
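The whole per-frame flow of steps S801 to S806 can be sketched as follows (the detector callback, the square effect brush, and the replace-style overlay are all illustrative assumptions, not details fixed by the patent):

```python
import math
import numpy as np

def render_video(frames, detect_keypoint, interp_dist=2):
    """One persistent track record map is kept for the whole video; each
    frame only the newly determined positions are rendered into it
    (S803-S805), then the map is composited over the frame (S806)."""
    h, w = frames[0].shape[:2]
    track_map = np.zeros((h, w, 4), dtype=np.uint8)  # blank, fully transparent
    last, out = None, []
    for frame in frames:
        x0, y0 = detect_keypoint(frame)              # current position (S803)
        points = [(x0, y0)]
        if last is not None:                         # supplementary positions (S804)
            x1, y1 = last
            n = int(math.hypot(x0 - x1, y0 - y1) // interp_dist)
            points = [(x1 + i * (x0 - x1) / n,
                       y1 + i * (y0 - y1) / n) for i in range(1, n)] + points
        for x, y in points:                          # render the effect (S805)
            track_map[int(y) - 1:int(y) + 2, int(x) - 1:int(x) + 2] = (255, 0, 0, 255)
        last = (x0, y0)
        opaque = track_map[..., 3:] > 0              # overlay for output (S806)
        out.append(np.where(opaque, track_map[..., :3], frame))
    return out
```

Note that the loop never re-renders positions from earlier frames: they persist in the track record map, which is the performance property emphasized above.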
Corresponding to the foregoing method embodiment, an embodiment of the present disclosure further provides a video rendering apparatus, and referring to fig. 9, the apparatus may include: a record obtaining module 910, a video output module 920, a special effect rendering module 930, and a special effect display module 940.
A record acquisition module 910 configured to acquire a track record map created in advance in accordance with the image frame size of the target video;
a video output module 920, configured to, in the process that the target video sequentially outputs image frames in sequence, perform the following operations for any image frame to be output:
a special effect rendering module 930 configured to determine a current position of a target key point in the image frame to be output, and render at the same position of the track recording map, where the target key point is a target pixel point in a target image region of the image frame, and a position change of the target key point in each image frame may form a moving track of the target key point;
a special effect display module 940, configured to perform an overlay display process on the track log and the image frame to be output, so as to output an image frame with a movement track of the target key point.
Optionally, the special effect rendering module, when determining the current position of the target keypoint in the image frame to be output, is configured to:
determining a target part needing to be rendered;
and performing image recognition on the image frame to be output, determining the region where the recognized target part is located as the target image region, and determining the current position of the target key point in the target image region.
Optionally, the track record map is a texture map, and the special effect display module, when performing overlay display processing on the track record map and the image frame to be output, is configured to:
display the image frame to be output, and render the track record map on the image frame to be output.
Optionally, the track record map is a texture map, and the special effect display module, when performing overlay display processing on the track record map and the image frame to be output, is configured to:
superpose the image frame to be output and the track record map to obtain a combined image frame carrying the moving track of the target key point, and display the combined image frame.
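For the merge-then-display variant, a standard alpha composite could serve as the superposition (assuming an 8-bit RGBA track record map; the blend rule itself is not spelled out in the patent):

```python
import numpy as np

def merge_frame_and_track_map(frame_rgb, track_map_rgba):
    """Alpha-composite the track record map over the frame, producing the
    combined image frame carrying the key point's moving track."""
    alpha = track_map_rgba[..., 3:4].astype(np.float32) / 255.0
    merged = alpha * track_map_rgba[..., :3] + (1.0 - alpha) * frame_rgb
    return merged.astype(np.uint8)
```

Fully transparent pixels (alpha = 0) pass the frame through unchanged, so only the rendered track differs from the original frame.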
Optionally, the apparatus further comprises: a location replenishment module configured to:
in the target video, determine the last image frame before the image frame to be output, and acquire the last position of the target key point determined in that last image frame;
determine at least one supplementary position of the target key point along the line connecting the current position of the target key point and the last position of the target key point; and
each time one of the supplementary positions is determined, render at the same position of the track record map.
Optionally, the position supplement module, when determining a supplement position of at least one target key point in a two-point connecting line direction between the current position of the target key point and the last position of the target key point, is configured to:
acquiring a preset interpolation distance;
and performing at least one linear interpolation calculation according to the current position of the target key point, the last position of the target key point and the interpolation distance, and correspondingly determining the supplementary position of at least one key point according to the interpolation calculation result.
The disclosed embodiments also provide an electronic device, which at least includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the aforementioned video rendering method when executing the program.
Fig. 10 shows a schematic block diagram of an electronic device according to an exemplary embodiment of the present disclosure. Referring to fig. 10, at the hardware level, the electronic device includes a processor 1002, an internal bus 1004, a network interface, a memory, and a non-volatile memory 1010, and may also include hardware required for other services. The processor 1002 reads the corresponding computer program from the non-volatile memory 1010 into the memory and runs it, thereby forming, at the logical level, a device that executes the video rendering method. Of course, besides a software implementation, the present disclosure does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices.
Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the aforementioned video rendering method.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
The foregoing is merely a detailed description of the embodiments of the disclosure. It should be noted that modifications and improvements can be made by those skilled in the art without departing from the principles of the embodiments of the disclosure, and such modifications and improvements also fall within the scope of protection of the embodiments of the disclosure.

Claims (10)

1. A method of video rendering, comprising:
acquiring a track record map which is created in advance and has the same size with the image frame of the target video;
in the process that the target video sequentially outputs the image frames in sequence, for any image frame to be output, the following operations are executed:
determining the current position of a target key point in the image frame to be output, and rendering at the same position of the track recording image, wherein the target key point is a target pixel point in a target image area of the image frame, and the position change of the target key point in each image frame can form a moving track of the target key point;
and performing overlapping display processing on the track recording image and the image frame to be output so as to output the image frame with the moving track of the target key point.
2. The method as claimed in claim 1, wherein said determining a current location of said target keypoint in said image frame to be output comprises:
determining a target part needing to be rendered;
and performing image recognition on the image frame to be output, determining the region where the recognized target part is positioned as a target image region, and determining the current position of a target key point in the target image region.
3. The method as claimed in claim 1, wherein the trace record map is a texture map, and the overlaying display process of the trace record map and the image frame to be output includes:
and displaying the image frame to be output, and rendering the track recording image on the image frame to be output.
4. The method as claimed in claim 1, wherein the trace record map is a texture map, and the overlaying display process of the trace record map and the image frame to be output includes:
and superposing the image frame to be output and the track recording image to obtain a combined image frame with the moving track of the target key point, and displaying the combined image frame.
5. The method as claimed in claim 1, wherein after determining the current position of the target keypoint in the image frame to be output, further comprising:
in the target video, determining a last image frame of the image frames to be output, and acquiring a last position of the target key point determined in the last image frame;
determining a supplementary position of at least one target key point in a two-point connecting line direction between the current position of the target key point and the last position of the target key point;
rendering is performed at the same position of the track log map each time one of the supplemental positions is determined.
6. The method of claim 5, wherein determining a supplemental location of at least one of the target keypoints in a direction of a two-point connection between the current location of the target keypoint and a last location of the target keypoint comprises:
acquiring a preset interpolation distance;
and performing at least one linear interpolation calculation according to the current position of the target key point, the last position of the target key point and the interpolation distance, and correspondingly determining the supplementary position of at least one key point according to the interpolation calculation result.
7. A video rendering apparatus, comprising:
the recording acquisition module is configured to acquire a pre-created track recording map which is consistent with the image frame size of the target video;
the video output module is configured to perform the following operations for any image frame to be output in the process that the target video sequentially outputs the image frames in sequence:
a special effect rendering module, configured to determine a current position of a target key point in the image frame to be output, and render at the same position of the track recording map, where the target key point is a target pixel point in a target image area of the image frame, and a position change of the target key point in each image frame may form a moving track of the target key point;
and the special effect display module is configured to perform overlapping display processing on the track recording image and the image frame to be output so as to output the image frame with the moving track of the target key point.
8. The apparatus of claim 7, wherein the special effects rendering module, when determining the current location of the target keypoint in the image frame to be output, is configured to:
determining a target part needing to be rendered;
and performing image recognition on the image frame to be output, determining the region where the recognized target part is positioned as a target image region, and determining the current position of a target key point in the target image region.
9. An electronic device, comprising: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-6.
CN202010436943.4A 2020-05-21 2020-05-21 Video rendering method and device, electronic equipment and storage medium Pending CN113709389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010436943.4A CN113709389A (en) 2020-05-21 2020-05-21 Video rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010436943.4A CN113709389A (en) 2020-05-21 2020-05-21 Video rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113709389A true CN113709389A (en) 2021-11-26

Family

ID=78646091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010436943.4A Pending CN113709389A (en) 2020-05-21 2020-05-21 Video rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113709389A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449336A (en) * 2022-01-20 2022-05-06 杭州海康威视数字技术股份有限公司 Vehicle track animation playing method, device and equipment
CN114742856A (en) * 2022-04-08 2022-07-12 北京字跳网络技术有限公司 Video processing method, device, equipment and medium
WO2023103720A1 (en) * 2021-12-10 2023-06-15 北京字跳网络技术有限公司 Video special effect processing method and apparatus, electronic device, and program product

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09200610A (en) * 1996-01-22 1997-07-31 Sony Corp Special effect device
US6600491B1 (en) * 2000-05-30 2003-07-29 Microsoft Corporation Video-based rendering with user-controlled movement
WO2007035878A2 (en) * 2005-09-20 2007-03-29 Jagrut Patel Method and apparatus for determining ball trajectory
CA2559783A1 (en) * 2006-09-15 2008-03-15 Institut National D'optique A system and method for graphically enhancing the visibility of an object/person in broadcasting
JP2009110536A (en) * 2004-05-19 2009-05-21 Sony Computer Entertainment Inc Image frame processing method and device, rendering processor and moving image display method
CN104394324A (en) * 2014-12-09 2015-03-04 成都理想境界科技有限公司 Special-effect video generation method and device
WO2016019770A1 (en) * 2014-08-06 2016-02-11 努比亚技术有限公司 Method, device and storage medium for picture synthesis
CN105739808A (en) * 2014-12-08 2016-07-06 阿里巴巴集团控股有限公司 Display method and apparatus for cursor movement on terminal device
CN106294474A (en) * 2015-06-03 2017-01-04 阿里巴巴集团控股有限公司 The processing method of video data, Apparatus and system
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN107077720A (en) * 2016-12-27 2017-08-18 深圳市大疆创新科技有限公司 Method, device and the equipment of image procossing
CN107888845A (en) * 2017-11-14 2018-04-06 腾讯数码(天津)有限公司 A kind of method of video image processing, device and terminal
CN108076375A (en) * 2017-11-28 2018-05-25 北京川上科技有限公司 The finger motion track special efficacy implementation method and device of a kind of video
CN108108707A (en) * 2017-12-29 2018-06-01 北京奇虎科技有限公司 Gesture processing method and processing device based on video data, computing device
CN108260011A (en) * 2018-01-24 2018-07-06 上海哇嗨网络科技有限公司 The method and system for writing picture is realized on the display device
CN108537867A (en) * 2018-04-12 2018-09-14 北京微播视界科技有限公司 According to the Video Rendering method and apparatus of user's limb motion
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture
CN108966031A (en) * 2017-05-18 2018-12-07 腾讯科技(深圳)有限公司 Method and device, the electronic equipment of broadcasting content control are realized in video session
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment
CN110035236A (en) * 2019-03-26 2019-07-19 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110047124A (en) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of render video
CN110515452A (en) * 2018-05-22 2019-11-29 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09200610A (en) * 1996-01-22 1997-07-31 Sony Corp Special effect device
US6600491B1 (en) * 2000-05-30 2003-07-29 Microsoft Corporation Video-based rendering with user-controlled movement
JP2009110536A (en) * 2004-05-19 2009-05-21 Sony Computer Entertainment Inc Image frame processing method and device, rendering processor and moving image display method
WO2007035878A2 (en) * 2005-09-20 2007-03-29 Jagrut Patel Method and apparatus for determining ball trajectory
CA2559783A1 (en) * 2006-09-15 2008-03-15 Institut National D'optique A system and method for graphically enhancing the visibility of an object/person in broadcasting
WO2016019770A1 (en) * 2014-08-06 2016-02-11 努比亚技术有限公司 Method, device and storage medium for picture synthesis
CN105739808A (en) * 2014-12-08 2016-07-06 阿里巴巴集团控股有限公司 Display method and apparatus for cursor movement on terminal device
CN104394324A (en) * 2014-12-09 2015-03-04 成都理想境界科技有限公司 Special-effect video generation method and device
CN106294474A (en) * 2015-06-03 2017-01-04 阿里巴巴集团控股有限公司 The processing method of video data, Apparatus and system
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN107077720A (en) * 2016-12-27 2017-08-18 深圳市大疆创新科技有限公司 Method, device and the equipment of image procossing
CN108966031A (en) * 2017-05-18 2018-12-07 腾讯科技(深圳)有限公司 Method and device, the electronic equipment of broadcasting content control are realized in video session
CN107888845A (en) * 2017-11-14 2018-04-06 腾讯数码(天津)有限公司 A kind of method of video image processing, device and terminal
CN108076375A (en) * 2017-11-28 2018-05-25 北京川上科技有限公司 The finger motion track special efficacy implementation method and device of a kind of video
CN108108707A (en) * 2017-12-29 2018-06-01 北京奇虎科技有限公司 Gesture processing method and processing device based on video data, computing device
CN108260011A (en) * 2018-01-24 2018-07-06 上海哇嗨网络科技有限公司 The method and system for writing picture is realized on the display device
CN108537867A (en) * 2018-04-12 2018-09-14 北京微播视界科技有限公司 According to the Video Rendering method and apparatus of user's limb motion
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture
CN110515452A (en) * 2018-05-22 2019-11-29 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment
CN110035236A (en) * 2019-03-26 2019-07-19 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110047124A (en) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of render video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李建明;吴云龙;何荣盛;钱昆明;: "基于粒子系统和GPU加速的喷泉实时仿真", 系统仿真学报 *
褚廷有;: "AE与Illusion粒子动画路径数据交换的实现", 辽宁工程技术大学学报(自然科学版) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103720A1 (en) * 2021-12-10 2023-06-15 北京字跳网络技术有限公司 Video special effect processing method and apparatus, electronic device, and program product
CN114449336A (en) * 2022-01-20 2022-05-06 杭州海康威视数字技术股份有限公司 Vehicle track animation playing method, device and equipment
CN114449336B (en) * 2022-01-20 2023-11-21 杭州海康威视数字技术股份有限公司 Vehicle track animation playing method, device and equipment
CN114742856A (en) * 2022-04-08 2022-07-12 北京字跳网络技术有限公司 Video processing method, device, equipment and medium

Similar Documents

Publication Publication Date Title
WO2020238560A1 (en) Video target tracking method and apparatus, computer device and storage medium
US9959903B2 (en) Video playback method
JP7147078B2 (en) Video frame information labeling method, apparatus, apparatus and computer program
CN113709389A (en) Video rendering method and device, electronic equipment and storage medium
US8295683B2 (en) Temporal occlusion costing applied to video editing
US10339629B2 (en) Method for providing indication in multi-dimensional media in electronic device
Kim et al. Recurrent temporal aggregation framework for deep video inpainting
WO2017092332A1 (en) Method and device for image rendering processing
CN107909022B (en) Video processing method and device, terminal equipment and storage medium
US20210042938A1 (en) Data processing method and computing device
WO2019218388A1 (en) Event data stream processing method and computing device
CN107924690A (en) For promoting the methods, devices and systems of the navigation in extended scene
CN110414514B (en) Image processing method and device
CN114419289B (en) Unity-based virtual scene shelf display method and system
CN112200035A (en) Image acquisition method and device for simulating crowded scene and visual processing method
JP7390495B2 (en) Hair rendering methods, devices, electronic devices and storage media
CN108140401B (en) Accessing video clips
CN115119014A (en) Video processing method, and training method and device of frame insertion quantity model
CN111583329B (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN113259742B (en) Video bullet screen display method and device, readable storage medium and computer equipment
CN117459661A (en) Video processing method, device, equipment and machine-readable storage medium
CN117152660A (en) Image display method and device
CN114095780A (en) Panoramic video editing method, device, storage medium and equipment
CN116993568A (en) Video image processing method, device, equipment and storage medium
CN114529587B (en) Video target tracking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211126