CN113660528A - Video synthesis method and device, electronic equipment and storage medium - Google Patents

Video synthesis method and device, electronic equipment and storage medium

Info

Publication number
CN113660528A
CN113660528A
Authority
CN
China
Prior art keywords
video
template
lens
user
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110565463.2A
Other languages
Chinese (zh)
Other versions
CN113660528B (en)
Inventor
董平
何建丰
叶侃
陈丰
李全亮
刘璐
邓曦澄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202110565463.2A priority Critical patent/CN113660528B/en
Publication of CN113660528A publication Critical patent/CN113660528A/en
Application granted granted Critical
Publication of CN113660528B publication Critical patent/CN113660528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8545 Content authoring for generating interactive applications
    • H04N21/8549 Creating video summaries, e.g. movie trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video synthesis method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a video template; in response to a user instruction selecting the video template, acquiring and presenting the selectable space types; in response to a user instruction selecting a space type, acquiring and presenting the selectable lens groups corresponding to the selected space type; acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group; calculating the path movement speed according to the lens movement trajectory, the picture settings, and the target duration of the corresponding template animation segment of the video template, and generating an animation segment; acquiring the end position of the lens movement path and the camera settings, and rendering a fixed-point rendered picture; and importing the animation segments and fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video. Because rendering effort is focused on a few freeze-frame pictures while the preset-path video clips are generated quickly, the invention produces home decoration display videos quickly and at low cost.

Description

Video synthesis method and device, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of video processing, and in particular relates to a video synthesis method and device, an electronic device, and a storage medium.
Background
In the field of video synthesis and production, standardized video design products already exist: short videos are generated quickly from video templates, so users do not need to edit video files manually with video processing software. A user only selects a suitable video template file according to the requirements and replaces some of its resources with recorded video clips or photos to obtain a new video file that meets the business need. Reusing the same template file many times yields videos in batches. However, existing video generation schemes cannot solve the problem, in the field of home decoration design, of quickly turning a design scheme into a display video for short-video promotion; roaming videos of a room or of an overall design scheme are still produced with traditional rendering techniques, which are costly and time-consuming.
Disclosure of Invention
In view of this, to address the high cost and long production time of roaming videos of a room or an overall design scheme, the present application provides a video synthesis method and apparatus, an electronic device, and a storage medium.
According to a first aspect of embodiments of the present application, there is provided a video synthesis method, including:
step 101, acquiring a video template;
step 102, in response to a user instruction selecting the video template, acquiring and presenting the selectable space types;
step 103, in response to a user instruction selecting a space type, acquiring and presenting the selectable lens groups corresponding to the selected space type;
step 104, acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group;
step 105, calculating the path movement speed according to the lens movement trajectory and picture settings generated in step 104 and the target duration of the corresponding template animation segment of the video template of step 101, and generating an animation segment;
step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and rendering a fixed-point rendered picture;
and step 107, importing the animation segment generated in step 105 and the fixed-point rendered picture generated in step 106 into the corresponding positions of the video template to synthesize the video.
In one possible implementation, the video template includes at least one template animation segment and at least one template stop-motion rendered picture.
In one possible implementation manner, the video template is obtained in step 101 by determining a video template to be used in response to an instruction from a user to select the video template, or by receiving a video template uploaded by the user.
In a possible implementation manner, step 101 further includes producing a video template according to user instructions, comprising:
acquiring a spatial model or a combination of spatial models selected by a user;
acquiring the total duration of the video template as set by the user;
acquiring at least one lens group established by a user and configuring lens information;
generating a template animation segment;
generating a template fixed-point rendering graph;
and synthesizing the template animation segments and the template fixed-point renderings into a video template.
In one possible implementation, the lens group includes lenses disposed at different positions;
the configuration information of the lens includes: position, camera parameters, path, spatial attributes, and model attributes;
the path of the lens is the movement path of the lens in space.
In one possible implementation, the method further includes calculating a cost according to the quality and/or quantity of the fixed-point rendering pictures.
In one possible implementation manner, the animation segment is a three-dimensional scene roaming animation segment, and the method for generating the animation segment includes:
calculating animation sampling point positions in a lens movement path according to a preset distance interval;
acquiring a spatial three-dimensional scene animation frame of each point location;
and recording the three-dimensional scene roaming animation segments of the shot according to the calculated moving speed of the shot path.
According to a second aspect of embodiments of the present application, there is provided a video compositing apparatus, including:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present selectable space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of the lens according to a preset rule of the lens group;
the animation segment recording unit is used for calculating the path motion speed according to the lens motion track and the picture setting and the target duration of the animation segment of the template corresponding to the video template and generating the animation segment;
the fixed point rendering picture generating unit is used for acquiring the end point position of the lens motion path and camera setting, and rendering to generate a fixed point rendering picture;
and the synthesis unit is used for importing the animation segments and the fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video compositing method of any of the preceding first aspects.
According to a fourth aspect of embodiments herein, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the video composition method of any one of the preceding first aspects.
According to a fifth aspect of embodiments herein, there is also provided a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video composition method of the foregoing first aspect or any implementation of the first aspect.
The video production task is decomposed into several freeze-frame rendering tasks, several path animation segment generation tasks, and a task that composites the freeze-frame pictures and animation segments into a video. This replaces the previous approach, in which every frame of a design-scheme roaming video had to be rendered: only a few freeze-frame pictures need full rendering, while the preset-path video clips are generated quickly, so short videos are produced quickly and at low cost.
Drawings
Fig. 1 is a flowchart of a video synthesis method provided in an embodiment of the present application;
fig. 2 is a schematic view of a video structure according to an embodiment of the present application.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only some of the structures relevant to the present disclosure are shown in the drawings, not all of them.
Fig. 1 is a flowchart of a video synthesis method according to an embodiment of the present application. The embodiment is applicable where a home decoration effect display video must be produced quickly from the multimedia presentation requirements of a home decoration design scheme. The method may be executed by a video synthesis apparatus, which may be implemented in software and/or hardware and configured in an electronic device, for example a terminal device or a server. As shown in fig. 1, the method can be used to produce a display video of a finished design and comprises the following steps:
step 101, acquiring a video template; specifically, determining the video template to be used in response to a user instruction selecting a video template.
When successive images change at more than 24 frames per second, human persistence of vision makes them appear as continuous motion; such a sequence of image frames is called a video. A video consists of a number of frames whose pictures, played in order, form the video.
The video template is a template file generated from an original video file stored in a database. The original video file may be a video file generated using video processing software. The video template may include resources such as pictures, text, video, and audio.
The video template includes video clips and music, and specifies the duration, overlay settings, and transition settings of each video clip. Overlay settings cover overlaid video, pictures, filters, and so on; if content is overlaid, its coordinate position on the screen, its layer within the video, and the like are configured. Transition settings cover whether a transition is applied and which transition is selected.
In one specific example, a video clip of a video template comprises frames, each frame configured with the following items (a data sketch follows the list):
a frame type, of which there are two: picture, meaning the current frame is a freeze-frame picture, and video, meaning the current frame is one frame of a space animation;
a transition type, either transition or no transition;
a transition duration, which must be configured when the transition type is transition and generally does not exceed 50% of the frame duration;
a frame duration, in milliseconds, for example 500 milliseconds;
a material type, which may include, for example, rendered, fixed, and random freeze frame;
a material link, which must also be configured when the material type is fixed;
an overlay video, with the address of the overlaid video resource;
and background music, with the address of the background music resource.
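Purely as an illustration, such a frame configuration could be held as plain data; the field names and values below are hypothetical, not a schema prescribed by this application:

```python
# A hypothetical configuration for one frame of a video-template clip; the field
# names are illustrative, not defined by this application.
frame_config = {
    "frame_type": "picture",        # "picture" = freeze frame, "video" = space-animation frame
    "transition_type": "fade",      # or None when no transition is applied
    "transition_duration_ms": 250,  # only set when there is a transition
    "frame_duration_ms": 500,       # frame duration in milliseconds
    "material_type": "fixed",       # e.g. "rendered", "fixed", "random_freeze_frame"
    "material_link": "materials/living_room_freeze_01.png",  # required for "fixed"
    "overlay_video_url": "https://example.com/overlay.mp4",
    "background_music_url": "https://example.com/bgm.mp3",
}

# The constraint from the description: a transition takes at most 50% of the frame.
assert frame_config["transition_duration_ms"] <= 0.5 * frame_config["frame_duration_ms"]
```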
In one particular example, the video template may be edited or customized by the user, for instance via a timeline and a timeline indicator that help the user assemble, edit, or define the template; the user selects a frame for configuration by moving the indicator's pointer to the point on the timeline corresponding to that frame.
In one embodiment, a method of making a video template includes:
selecting a spatial model or a combination of spatial models;
setting the total duration of the video template;
establishing at least one lens group and configuring lens information;
configuring at least one fixed point rendering graph position corresponding to each lens group;
configuring overlay content, including materials such as overlaid text, music, and video;
generating a video clip;
and synthesizing the video segments and the fixed-point renderings into a video template.
Depending on the presentation requirements of the design scheme, in some video templates every moving lens of the lens group is configured with a fixed-point rendering; in others only some moving lenses have one, and in still others none of the moving lenses may have one.
In one embodiment, in response to the user selecting a video template, information about the template (for example its name, preview image, matching space types, duration, and shot trajectory) is displayed to the user, and a preview animation is shown if necessary.
In one embodiment, the method further comprises calculating a fee according to the quality, quantity, and similar properties of the material, where quality covers size and resolution. Taking the fixed-point rendering as a typical material: the larger its size and the higher its resolution, the higher the fee; and the more fixed-point renderings, the higher the fee. Because the same video template can correspond to fixed-point renderings for several video paths and several resolutions, different fees can likewise be calculated for different video paths.
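A minimal sketch of such a fee rule, assuming a simple per-megapixel linear price; both the rate and the formula are illustrative, not taken from this application:

```python
def rendering_fee(resolutions, rate_per_megapixel=0.5):
    """Illustrative rule: each fixed-point rendering is priced by its pixel area,
    so larger size / higher resolution and larger counts both raise the fee."""
    return sum(w * h / 1_000_000 * rate_per_megapixel for w, h in resolutions)

# Two 1080p freeze frames plus one 4K freeze frame.
print(rendering_fee([(1920, 1080), (1920, 1080), (3840, 2160)]))  # about 6.22
```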
step 102, acquiring the selectable space types corresponding to the video template selected by the user, for the user to choose from.
In home decoration display short videos, different presentation methods are used for different space types; that is, targeted scripts are designed around each space's characteristics. Common spaces include the living room, dining room, and bedroom.
In one embodiment, the user's selection of the video template is obtained through a graphical user interface. For example, a display window of the candidate video template is set in a display area of the display device, and the candidate video template is presented to the user in the form of a name or an effect preview.
step 103, acquiring the selectable lens groups corresponding to the space type selected by the user, for the user to choose from.
One or more suitable lens groups are provided for a particular type of space. The user selects a lens group, or a lens group is recommended to the user. A lens group comprises lenses arranged at different positions, and the configuration information of a lens includes: position, camera parameters, path, spatial attributes, and model attributes. The spatial attributes indicate which space types the lens applies to, and the model attributes indicate which model types it applies to. For example, a living room model may contain a sofa and a television, while a dining room model may contain a dining table.
The path of a lens is its motion path when shooting a video, or when a video is generated from the design scheme by simulating the lens's shooting angle; it comprises a start position, an end position, and camera parameters. A video generated by simulating the shooting angle over the design scheme can be a sequence of rendered effect images.
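As a sketch only, the lens configuration just described might look like the following; every field name and value is illustrative rather than part of this application:

```python
from dataclasses import dataclass

@dataclass
class Lens:
    """One lens of a lens group, with the configuration items listed above."""
    position: tuple   # (x, y, z) placement of the lens in the space
    camera: dict      # camera parameters, e.g. field of view
    path: list        # ordered 3D points from start position to end position
    space_types: set  # spatial attributes: applicable space types
    model_types: set  # model attributes: applicable model types

# A hypothetical living-room push lens moving 5 m in a straight horizontal line.
push_lens = Lens(
    position=(0.0, 1.5, 2.0),
    camera={"fov_degrees": 60},
    path=[(0.0, 1.5, 2.0), (5.0, 1.5, 2.0)],
    space_types={"living_room"},
    model_types={"sofa", "tv"},
)
```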
For example, for a living room, a selectable lens group may consist of six lenses (represented as data after the list), with the video structure shown in fig. 2:
lens No. 1 is a living-room push lens: the midpoint of one side of the living room is the start point, the midpoint of the opposite side is the end point, the lens moves in a straight horizontal push along the line connecting the two points, and the duration is 3.5 s;
lens No. 2 generates a freeze-frame picture at the end point of lens No. 1, with a duration of 2.5 s;
lens No. 3 orbits the sofa through a certain angle, with a duration of 3.5 s;
lens No. 4 generates a freeze-frame picture at the end point of lens No. 3, with a duration of 3.5 s;
lens No. 5 moves parallel to the television background wall, with a duration of 3.3 s;
and lens No. 6 generates a freeze-frame picture at the end point of lens No. 5, with a duration of 2.5 s.
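The same six-shot group could be captured as data such as the following; again an illustrative sketch, not a format defined by this application:

```python
# The living-room lens group above as illustrative data ("push" = straight dolly,
# "orbit" = arc around a reference object, "freeze" = fixed-point rendered picture).
living_room_shots = [
    {"no": 1, "kind": "push",   "path": "side midpoint to opposite midpoint", "duration_s": 3.5},
    {"no": 2, "kind": "freeze", "anchor": "end point of shot 1",              "duration_s": 2.5},
    {"no": 3, "kind": "orbit",  "path": "arc around the sofa",                "duration_s": 3.5},
    {"no": 4, "kind": "freeze", "anchor": "end point of shot 3",              "duration_s": 3.5},
    {"no": 5, "kind": "pan",    "path": "parallel to the TV background wall", "duration_s": 3.3},
    {"no": 6, "kind": "freeze", "anchor": "end point of shot 5",              "duration_s": 2.5},
]
total_s = sum(s["duration_s"] for s in living_room_shots)  # 18.8 s before any trailer shot
```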
In one embodiment, a trailer special effect shot is added at the end of the video, such as a trailer video or picture showing promotional information of a designer, design company or design software company.
step 104, simulating the motion path, start position, and camera settings of the lens in the space, according to the preset rules of the lens group selected by the user in step 103 and based on the parameters and models of the space of the user's current scheme from step 102.
The spatial parameters include information such as the floor-plan structure, furniture layout, background walls, and door and window positions. The path is calculated from the path rule's lens type, movement rule, reference object, and orientation rule, and the camera path is clipped where occlusion occurs.
Several checks are needed here, such as the following (a minimal code sketch of these checks appears after the list):
checking the space against its models: a living room needs a sofa and a television, a dining room needs a dining table, a bedroom needs a bed, and so on;
verifying that the room size satisfies the lens path, that is, the lens movement path must not leave the room;
and ensuring that models along the lens movement path do not block the view: the portion of the path occluded by models must not exceed a certain proportion of its total length, such as 1/4.
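A minimal sketch of the three checks, under the simplifying assumptions that the room is an axis-aligned box and that the occluded length of the path has already been measured; all names are illustrative:

```python
def check_space_models(space_type, model_types):
    """A living room needs a sofa and a TV, a dining room a dining table, a bedroom a bed."""
    required = {"living_room": {"sofa", "tv"},
                "dining_room": {"dining_table"},
                "bedroom": {"bed"}}
    return required.get(space_type, set()) <= set(model_types)

def check_path_in_room(path_points, room_min, room_max):
    """The lens movement path must not leave the room (modelled here as a box)."""
    return all(all(lo <= c <= hi for c, lo, hi in zip(p, room_min, room_max))
               for p in path_points)

def check_occlusion(occluded_length, path_length, max_ratio=0.25):
    """The model-occluded portion must not exceed a set share (e.g. 1/4) of the path."""
    return occluded_length <= max_ratio * path_length

# Example: a living-room path inside a 6 m x 2.8 m x 4 m room, 0.8 m of it occluded.
ok = (check_space_models("living_room", ["sofa", "tv", "coffee_table"])
      and check_path_in_room([(1, 1.5, 1), (5, 1.5, 1)], (0, 0, 0), (6, 2.8, 4))
      and check_occlusion(0.8, 4.0))
```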
step 105, calculating the path movement speed and generating an animation segment, according to the lens movement trajectory and picture settings generated in step 104 and the target duration of the corresponding segment of the video template from step 101.
An animation segment is a kind of video segment, here specifically a roaming video segment.
In an embodiment, the path movement speed is calculated from the lens movement trajectory and picture settings of step 104 and the target segment duration of the template from step 101; a motion-effect preview is played, and at the same time a video segment is captured by screen recording, which serves as the animation segment in the subsequent synthesis.
Point positions in space are generated at a fixed speed according to the lens rule, an arc is fitted through the points, and the speed is calculated from the target duration and the arc length.
The shot is played on the display device while the screen is recorded and saved as a video. The picture of this video clip is an unrendered animation picture containing materials and wireframes.
In a specific example, the animation segment is generated by producing a three-dimensional scene roaming animation from the first-person viewpoint of the space, along the shot's motion trajectory and at the calculated speed, and then playing it as a preview. For each shot, the first-person roaming animation is generated as follows: calculate the animation sampling point positions along the lens movement path at a preset distance interval; acquire the spatial three-dimensional scene animation frame at each point; and record the shot's three-dimensional scene roaming animation at the calculated movement speed of the shot path.
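A minimal sketch of the sampling and speed computation just described, assuming the lens path is given as a polyline of 3D points; all names are illustrative:

```python
import math

def arc_length(points):
    """Total length of a polyline path through 3D sample points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def sample_points(points, interval):
    """Animation sampling point positions along the path at a preset distance interval."""
    samples, carried = [points[0]], 0.0
    for a, b in zip(points, points[1:]):
        seg = math.dist(a, b)
        d = interval - carried          # distance into this segment of the next sample
        while d <= seg:
            t = d / seg
            samples.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
            d += interval
        carried = (carried + seg) % interval  # distance covered past the last sample
    return samples

def path_speed(points, target_duration_s):
    """Movement speed that makes the shot fit the template's target duration."""
    return arc_length(points) / target_duration_s

# Example: a 5 m straight push squeezed into a 3.5 s template slot.
path = [(0.0, 1.5, 0.0), (5.0, 1.5, 0.0)]
print(len(sample_points(path, 0.1)))   # about 50 samples along the 5 m path
print(path_speed(path, 3.5))           # about 1.43 m/s
```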
step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and initiating rendering to generate a fixed-point rendered picture.
Stop-motion pictures (i.e., fixed-point rendered pictures) are rendered according to camera position, camera view angle, composition, and resolution.
step 107, importing the screen-recorded video clip from step 105 and the fixed-point rendered picture from step 106 into the corresponding positions of the video template selected by the user in step 101, completing the video synthesis.
The import operation replaces each corresponding position with the corresponding resource according to the defined video template structure.
In one specific example, the video is synthesized with FFmpeg.
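For illustration, stock FFmpeg options can expand a fixed-point rendered picture into a freeze-frame clip and concatenate it with the animation segments; the file names are hypothetical, and this is only one plausible way to drive such a composition:

```python
import subprocess

def still_to_clip(image_path, out_path, duration_s, fps=25):
    """Expand a fixed-point rendered picture into a freeze-frame clip."""
    subprocess.run([
        "ffmpeg", "-y", "-loop", "1", "-i", image_path,
        "-t", str(duration_s), "-r", str(fps), "-pix_fmt", "yuv420p", out_path,
    ], check=True)

def concat_clips(clip_paths, list_path, out_path):
    """Concatenate clips in template order with FFmpeg's concat demuxer
    (stream copy, so all clips must share codec, resolution, and frame rate)."""
    with open(list_path, "w") as f:
        f.writelines(f"file '{p}'\n" for p in clip_paths)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", list_path, "-c", "copy", out_path], check=True)

# Hypothetical usage: alternate roaming segments and freeze frames per the template.
still_to_clip("freeze_shot2.png", "clip2.mp4", 2.5)
concat_clips(["clip1.mp4", "clip2.mp4", "clip3.mp4"], "clips.txt", "final.mp4")
```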
In a specific example, if the user wants to upgrade a synthesized video to a fully rendered one, the animation segments may be rendered again to generate a new fully rendered video; its resolution may match that of the fixed-point renderings, and the fee is calculated from the video duration. The fee may also be calculated from the resolution selected by the user.
An embodiment of the present application further provides a video synthesizing apparatus, including:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present selectable space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of the lens according to a preset rule of the lens group;
the animation segment recording unit is used for calculating the path motion speed according to the lens motion track and the picture setting and the target duration of the animation segment of the template corresponding to the video template and generating the animation segment;
the fixed point rendering picture generating unit is used for acquiring the end point position of the lens motion path and camera setting, and rendering to generate a fixed point rendering picture;
and the synthesis unit is used for importing the animation segments and the fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video.
An embodiment of the present application further provides an electronic device, including: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video compositing method of any of the preceding first aspects.
The electronic device in this embodiment may include, but is not limited to, mobile terminals such as smart phones, notebook computers, PDAs (personal digital assistants), and PADs (tablet computers), and fixed terminals such as desktop computers. An electronic device may include a processing means (e.g., a central processing unit or a graphics processor) that performs various appropriate actions and processes according to a program stored in a read-only memory (ROM) or loaded from a storage means into a random access memory (RAM). The RAM also stores the various programs and data necessary for the operation of the electronic device. The processing means, ROM, and RAM are connected to each other by a bus, to which an input/output (I/O) interface is also connected.
Embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the video composition method according to any one of the foregoing first aspects.
Embodiments of the present application further provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the video composition method of the foregoing first aspect or any implementation manner of the first aspect.
In this embodiment, the video production task is converted into several freeze-frame rendering tasks and several low-cost path video clip generation tasks, after which the freeze-frame pictures and video clips are composited into a video. This replaces the previous approach, in which every frame of a design-scheme roaming video had to be rendered: only a few freeze-frame pictures need full rendering, while the preset-path video clips are generated quickly, so short videos are produced quickly and at low cost.
It is worthy to note that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the particular combination of features described above, but also covers other solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example solutions formed by replacing the above features with features of similar function disclosed (but not limited to those disclosed) herein.

Claims (10)

1. A method for video compositing, comprising:
step 101, acquiring a video template;
step 102, in response to a user instruction selecting the video template, acquiring and presenting selectable space types;
step 103, in response to a user instruction selecting a space type, acquiring and presenting the selectable lens groups corresponding to the selected space type;
step 104, acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group;
step 105, calculating the path movement speed according to the lens movement trajectory and picture settings generated in step 104 and the target duration of the corresponding template animation segment of the video template of step 101, and generating an animation segment;
step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and rendering a fixed-point rendered picture;
and step 107, importing the animation segment generated in step 105 and the fixed-point rendered picture generated in step 106 into the corresponding positions of the video template to synthesize the video.
2. A video synthesis method according to claim 1, wherein:
the video template includes at least one template animation segment and at least one template stop-motion rendered picture.
3. A video synthesis method according to claim 1, wherein:
the video template is obtained in step 101 by determining the video template to be used in response to an instruction from a user to select the video template, or by receiving a video template uploaded by the user.
4. A video synthesis method according to claim 1, wherein:
step 101 further includes producing a video template according to user instructions, comprising:
acquiring a spatial model or a combination of spatial models selected by a user;
acquiring the total duration of the video template as set by the user;
acquiring at least one lens group established by a user and configuring lens information;
generating a template animation segment;
generating a template fixed-point rendering graph;
and synthesizing the template animation segments and the template fixed-point renderings into a video template.
5. A video synthesis method according to claim 1, wherein:
the lens group includes lenses disposed at different positions;
the configuration information of the lens includes: position, camera parameters, path, spatial attributes, and model attributes;
the path of the lens is the movement path of the lens in space.
6. A video synthesis method according to claim 1, wherein:
further comprising calculating a cost based on the quality and/or quantity of the fixed-point rendered picture.
7. A video synthesis method according to claim 1, wherein:
the animation segments are three-dimensional scene roaming animation segments, and the method for generating the animation segments comprises the following steps:
calculating animation sampling point positions in a lens movement path according to a preset distance interval;
acquiring a spatial three-dimensional scene animation frame of each point location;
and recording the three-dimensional scene roaming animation segments of the shot according to the calculated moving speed of the shot path.
8. A video compositing apparatus, comprising:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present selectable space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of the lens according to a preset rule of the lens group;
the animation segment recording unit is used for calculating the path motion speed according to the lens motion track and the picture setting and the target duration of the animation segment of the template corresponding to the video template and generating the animation segment;
the fixed point rendering picture generating unit is used for acquiring the end point position of the lens motion path and camera setting, and rendering to generate a fixed point rendering picture;
and the synthesis unit is used for importing the animation segments and the fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video.
9. An electronic device, comprising: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video compositing method of any of claims 1-7.
10. A non-transitory computer-readable storage medium characterized in that:
the non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the video compositing method of any of claims 1-7.
CN202110565463.2A 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium Active CN113660528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565463.2A CN113660528B (en) 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110565463.2A CN113660528B (en) 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113660528A true CN113660528A (en) 2021-11-16
CN113660528B CN113660528B (en) 2023-08-25

Family

ID=78488918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565463.2A Active CN113660528B (en) 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113660528B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394530A (en) * 2008-10-29 2009-03-25 中兴通讯股份有限公司 Video freezing method and device in process of mobile phone television playing process
CN103814382A (en) * 2012-09-14 2014-05-21 华为技术有限公司 Augmented reality processing method and device of mobile terminal
CN105976416A (en) * 2016-05-06 2016-09-28 江苏云媒数字科技有限公司 Lens animation generating method and system
CN107801018A (en) * 2017-10-31 2018-03-13 深圳中广核工程设计有限公司 A kind of 3D animation three-dimensional video-frequency manufacturing systems and method played based on ring curtain
CN108257219A (en) * 2018-01-31 2018-07-06 广东三维家信息科技有限公司 A kind of method for realizing the roaming of panorama multiple spot
CN110771150A (en) * 2018-09-29 2020-02-07 深圳市大疆创新科技有限公司 Video processing method, video processing device, shooting system and computer readable storage medium
CN111475675A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video processing system
CN111640174A (en) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Furniture growth animation cloud rendering method and system based on fixed visual angle
CN111640173A (en) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Cloud rendering method and system for home-based roaming animation based on specific path

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286197A (en) * 2022-01-04 2022-04-05 土巴兔集团股份有限公司 Method and related device for rapidly generating short video based on 3D scene
CN114529638A (en) * 2022-02-22 2022-05-24 广东三维家信息科技有限公司 Movie animation generation method and device, electronic equipment and storage medium
CN115100327A (en) * 2022-08-26 2022-09-23 广东三维家信息科技有限公司 Method and device for generating animation three-dimensional video and electronic equipment
WO2024056023A1 (en) * 2022-09-14 2024-03-21 北京字跳网络技术有限公司 Video editing method and apparatus, and device and storage medium

Also Published As

Publication number Publication date
CN113660528B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN113660528B (en) Video synthesis method and device, electronic equipment and storage medium
CN111698390A (en) Virtual camera control method and device, and virtual studio implementation method and system
WO2021135320A1 (en) Video generation method and apparatus, and computer system
CN111491174A (en) Virtual gift acquisition and display method, device, equipment and storage medium
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
CN114025219B (en) Rendering method, device, medium and equipment for augmented reality special effects
KR20220093342A (en) Method, device and related products for implementing split mirror effect
CN113542624A (en) Method and device for generating commodity object explanation video
CN114245228B (en) Page link release method and device and electronic equipment
CN108846886A (en) A kind of generation method, client, terminal and the storage medium of AR expression
US20200104030A1 (en) User interface elements for content selection in 360 video narrative presentations
CN113781660A (en) Method and device for rendering and processing virtual scene on line in live broadcast room
CN110572717A (en) Video editing method and device
CN113806306A (en) Media file processing method, device, equipment, readable storage medium and product
CN102572219B (en) Mobile terminal and image processing method thereof
CN112839190A (en) Method for synchronously recording or live broadcasting video of virtual image and real scene
CN108140401B (en) Accessing video clips
EP3246921B1 (en) Integrated media processing pipeline
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
WO2020194973A1 (en) Content distribution system, content distribution method, and content distribution program
CN113301356A (en) Method and device for controlling video display
KR101221540B1 (en) Interactive media mapping system and method thereof
CN115049574A (en) Video processing method and device, electronic equipment and readable storage medium
CN111652986B (en) Stage effect presentation method and device, electronic equipment and storage medium
Méndez et al. Natural interaction in virtual TV sets through the synergistic operation of low-cost sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant