CN113660528B - Video synthesis method and device, electronic equipment and storage medium

Info

Publication number
CN113660528B
CN113660528B (granted publication); application number CN202110565463.2A; published as CN113660528A
Authority
CN
China
Prior art keywords
video
template
user
lens
acquiring
Prior art date
Legal status
Active
Application number
CN202110565463.2A
Other languages
Chinese (zh)
Other versions
CN113660528A (en)
Inventor
董平
何建丰
叶侃
陈丰
李全亮
刘璐
邓曦澄
Current Assignee
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202110565463.2A
Publication of CN113660528A
Application granted
Publication of CN113660528B
Status: Active

Classifications

    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44012: Processing of video elementary streams, involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/8545: Content authoring for generating interactive applications
    • H04N21/8549: Content authoring for creating video summaries, e.g. movie trailer
    (All classes fall under H04N21/00, Selective content distribution, e.g. interactive television or video on demand [VOD]; H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television.)

Abstract

The application provides a video synthesis method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a video template; in response to a user's instruction selecting the video template, acquiring and presenting the selectable space types; in response to the user's instruction selecting a space type, acquiring and presenting the selectable lens groups corresponding to that space type; acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group; calculating the path movement speed from the lens movement track, the picture settings, and the target duration of the corresponding template animation segment in the video template, and generating the animation segment; acquiring the end position of the lens movement path and the camera settings, and rendering a fixed-point rendered picture; and importing the animation segments and the fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video. With this method, only a few key freeze-frame pictures need to be fully rendered while the video segments along preset paths are generated quickly, so a home decoration display video can be produced rapidly and at low cost.

Description

Video synthesis method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of video, and particularly relates to a video synthesis method and apparatus, an electronic device, and a storage medium.
Background
In the field of video synthesis and production, standardized video design products already exist: a video template is used to generate a short video quickly, so that the user does not need to edit a video file manually in video processing software. The user only selects a suitable video template file according to the requirements and then replaces certain resources in the template with recorded video clips or photos to obtain a new video file that meets the business need. By reusing the same video template file many times, large numbers of videos can be generated in batches. However, the existing video generation schemes cannot quickly produce display and promotional short videos for home decoration design schemes: producing a roaming video of a room or of a whole design scheme still relies on traditional frame-by-frame rendering, which is costly and time-consuming.
Disclosure of Invention
In view of the above, the present application provides a video synthesizing method, apparatus, electronic device, and storage medium to solve the problems of high cost and long production time for roaming videos of a room or an overall design scheme.
According to a first aspect of an embodiment of the present application, there is provided a video compositing method, including:
step 101, obtaining a video template;
step 102, in response to a user's instruction selecting a video template, acquiring and presenting the selectable space types;
step 103, in response to the user's instruction selecting a space type, acquiring and presenting the selectable lens groups corresponding to the space type selected by the user;
step 104, acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group;
step 105, calculating the path movement speed from the lens movement track and picture settings generated in step 104 and the target duration of the corresponding template animation segment in the video template of step 101, and generating the animation segment;
step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and rendering to generate a fixed-point rendered picture;
step 107, importing the animation segment generated in step 105 and the fixed-point rendered picture generated in step 106 into the corresponding positions of the video template to synthesize the video.
In one possible implementation, a video template includes at least one template animation segment and at least one template fixed-point rendered picture.
In one possible implementation, the manner in which the video template is obtained in step 101 is to determine the video template to be used in response to an instruction from the user to select the video template, or to receive a video template uploaded by the user.
In one possible implementation, before step 101 the method further includes making a video template according to user instructions, including:
acquiring a spatial model or a combination of spatial models selected by a user;
acquiring the total duration of a video template set by a user;
acquiring at least one lens group established by a user and configuring lens information;
generating a template animation fragment;
generating a template fixed-point rendering diagram;
and synthesizing the template animation fragment and the template fixed-point rendering graph into a video template.
In one possible implementation, the lens group includes lenses disposed at different positions;
the configuration information of the lens includes: position, camera parameters, path, spatial properties, and model properties;
the path of the lens is the movement path of the lens in space.
In one possible implementation, the method further comprises calculating a cost according to the quality and/or quantity of the fixed-point rendered pictures.
In one possible implementation manner, the animation segment is a three-dimensional scene roaming animation segment, and the method for generating the animation segment is as follows:
calculating animation sampling points in a lens motion path according to a preset distance interval;
obtaining a space three-dimensional scene animation frame of each point position;
recording the three-dimensional scene roaming animation segment of the lens according to the calculated lens path movement speed.
According to a second aspect of an embodiment of the present application, there is provided a video compositing apparatus, comprising:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present optional space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of a lens according to a preset rule of the lens group;
the animation segment recording unit is used for setting according to the lens motion trail and the picture, calculating the path motion speed according to the target duration of the animation segment of the template corresponding to the video template, and generating the animation segment;
the fixed-point rendering picture generation unit is used for acquiring the end position of the lens motion path and the camera setting, and rendering to generate a fixed-point rendering picture;
and the synthesis unit is used for importing the animation fragments and the fixed-point rendering pictures into corresponding positions of the video templates to synthesize the video.
According to a third aspect of an embodiment of the present application, there is provided an electronic apparatus including: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the video compositing methods of the first aspect described above.
According to a fourth aspect of embodiments of the present application, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any of the video compositing methods of the preceding aspects.
According to a fifth aspect of embodiments of the present application, there is also provided a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video compositing method of any of the preceding or first aspects.
The application decomposes the video production task into several freeze-frame rendering tasks and several path-animation-segment generation tasks, plus a task that composes the freeze-frame pictures and animation segments into a video. This replaces the previous approach of rendering every frame of a design-scheme roaming video: only a few key freeze-frame pictures need to be rendered and the video segments along preset paths are generated quickly, so the short video is produced quickly and at low cost.
Drawings
FIG. 1 is a flow chart of a video compositing method according to an embodiment of the application;
fig. 2 is a schematic diagram of a video structure according to an embodiment of the present application.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the present disclosure and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present disclosure are shown in the drawings.
Fig. 1 is a flowchart of a video synthesizing method according to an embodiment of the present application. The embodiment is applicable to quickly producing and synthesizing a home-effect display video according to the multimedia presentation requirements of a home decoration design scheme. The method can be executed by a video synthesizing apparatus, which can be implemented in software and/or hardware and configured in an electronic device such as a terminal device or a server. As shown in Fig. 1, the method can be used to make a display video for a decoration design scheme, and comprises the following steps:
Step 101, acquiring a video template; specifically, this may be determining the video template to be used in response to the user's instruction selecting the video template.
When successive images change at more than 24 frames per second, the human persistence of vision makes them appear continuous; such a smoothly playing sequence of images is called video. A video comprises a number of frames, and playing the frames in sequence produces the video.
The video template is a template file generated from an original video file stored in a database. The original video file may be a video file generated using video processing software. The video template can comprise resources such as pictures, words, videos, audio and the like.
A video template includes video clips and music, and specifies the duration, overlay settings, and transition settings of the video clips. The overlay settings include overlaid video, pictures, filters, and the like. If there is an overlay, the coordinate position of the overlaid content in the frame and its layer position in the video are also configured. The transition settings include whether to apply a transition and which transition is selected.
In one specific example, a video clip of a video template comprises frames, each configured with the following fields (an illustrative encoding follows this list):
the frame type, one of two kinds: picture or video, where picture means the current frame is a freeze-frame picture and video means the current frame is one frame of a space animation;
the transition type, either with or without a transition;
the transition duration: if the transition type is with transition, the transition duration must also be configured, and generally should not exceed 50% of the frame duration;
the frame duration, a value in milliseconds, e.g., 500 milliseconds;
the material type, which may include, for example, rendered, fixed, or random freeze frame;
the material link, which must additionally be configured when the material type is fixed;
the overlay video, configuring the address of the overlay video resource;
the background music, configuring the address of the background music resource.
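As an illustration, such a frame configuration could be written as a plain data structure. The sketch below is a minimal Python rendition under stated assumptions; all field names and URLs are hypothetical, not a format defined by this application.

```python
# Hypothetical frame configuration mirroring the fields listed above.
# All keys, values, and URLs are illustrative assumptions.
frame_config = {
    "frame_type": "picture",   # "picture" (freeze frame) or "video" (space-animation frame)
    "transition": {
        "enabled": True,
        "duration_ms": 200,    # should not exceed 50% of the frame duration
    },
    "duration_ms": 500,        # frame duration in milliseconds
    "material": {
        "type": "fixed",       # e.g. "rendered", "fixed", "random_freeze_frame"
        "link": "https://example.com/materials/freeze_01.png",  # required when type is "fixed"
    },
    "overlay_video_url": "https://example.com/overlays/intro.mp4",
    "background_music_url": "https://example.com/audio/bgm.mp3",
}

# Consistency rule stated above: transition length <= 50% of frame length.
assert frame_config["transition"]["duration_ms"] <= 0.5 * frame_config["duration_ms"]
```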
In one embodiment, the video template may be edited or customized by the user, for example with a timeline and a timeline indicator that assist the user in combining, editing, or defining the video template; the user can configure a frame by moving the pointer of the timeline indicator to the point on the timeline corresponding to that frame.
In one embodiment, a method of making a video template includes:
selecting a spatial model or a combination of spatial models;
setting the total duration of the video template;
establishing at least one lens group and configuring lens information;
configuring at least one fixed-point rendering map position corresponding to each lens group;
configuring overlay content, including overlaid text, music, video, and other materials;
generating a video clip;
and synthesizing the video clips and the fixed-point rendering graph into a video template.
Depending on how the design scheme is to be presented, in some video templates every moving lens of the lens group is paired with a fixed-point rendering image, in others only some of the moving lenses are, and in still others none of the moving lenses may be paired with one.
In one specific example, in response to the user selecting a particular video template, information about the template is displayed to the user, including, for example, its name, a preview image, the matching space types, the duration, and the shot tracks, with a preview animation displayed if necessary.
In one embodiment, the method further comprises calculating a fee according to the quality and/or quantity of the material, where quality covers attributes such as size and resolution. Taking the fixed-point rendering image as a typical material: the larger its size and the higher its resolution, the higher the fee; and the more fixed-point rendering images there are, the higher the fee. Since the same video template can correspond to multiple video paths and to fixed-point rendering images at multiple resolutions, different fees can further be calculated for different video paths.
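A toy fee model along these lines might look as follows; the per-megapixel rate and the function shape are invented for illustration and are not figures from this application.

```python
def render_fee(width: int, height: int, count: int,
               price_per_megapixel: float = 0.5) -> float:
    """Illustrative fee model: the fee grows with resolution (quality)
    and with the number of fixed-point rendering images (quantity).
    The rate constant is a made-up example value."""
    megapixels = width * height / 1e6
    return round(megapixels * price_per_megapixel * count, 2)

# e.g. six 1920x1080 fixed-point rendering images
print(render_fee(1920, 1080, 6))  # 6.22 under the example rate
```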
Step 102, according to the video template selected by the user, obtaining a corresponding selectable space type for the user to select.
In home decoration display short videos, different display methods are adopted for different types of spaces, namely, a specific script is designed according to the characteristics of the spaces. The common spaces involved are: living room, dining room, bedroom.
In one embodiment, a user selection of a video template is obtained through a graphical user interface. For example, a display window of the candidate video templates is set in a display area of the display device, and the candidate video templates are presented to the user in the form of names or effect preview images.
Step 103, obtaining the selectable lens groups corresponding to the space type selected by the user, for the user to select from.
One or more applicable lens groups are provided for each particular type of space. The user selects a lens group, or a lens group is recommended to the user. A lens group comprises lenses arranged at different positions; the configuration information of a lens includes: position, camera parameters, path, spatial properties, and model properties. The spatial properties are the types of space the lens corresponds to, and the model properties are the types of model it applies to. For example, for living rooms the models may include a sofa and a television; for dining rooms, a dining table.
The path of a lens is the movement path along which the lens shoots the video, or along which a shooting angle is simulated to generate the video for the design scheme; it comprises a start position, an end position, and camera parameters. A video generated for the design scheme by simulating shot angles can be a sequence of effect images produced from multiple rendered effect images.
For example, for a living room, a selectable lens group may consist of 6 lenses, with the video structure shown in Fig. 2 (an illustrative encoding follows this list):
lens No. 1 is a living-room push shot: the midpoint of one side of the living room is the start point, the midpoint of the opposite side is the end point, and the lens moves linearly along the line connecting the two points for 3.5 s;
lens No. 2 generates a freeze-frame picture at the end point of lens No. 1, with a duration of 2.5 s;
lens No. 3 is a sofa-orbiting shot whose path arcs around the sofa through a fixed angle, with a duration of 3.5 s;
lens No. 4 generates a freeze-frame picture at the end point of lens No. 3, with a duration of 3.5 s;
lens No. 5 is a shot moving parallel to the television background wall, with a duration of 3.3 s;
lens No. 6 generates a freeze-frame picture at the end point of lens No. 5, with a duration of 2.5 s.
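The six-lens group above could be encoded declaratively, for example as below. This is a minimal sketch; the keys, kind names, and overall structure are assumptions made for illustration, not a format specified by this application.

```python
# Illustrative encoding of the six-lens living-room group described above.
# Keys and kind names are assumptions for this sketch.
living_room_lens_group = [
    {"id": 1, "kind": "push",         "path": "side midpoint to opposite midpoint", "duration_s": 3.5},
    {"id": 2, "kind": "freeze_frame", "source": "end position of lens 1",           "duration_s": 2.5},
    {"id": 3, "kind": "orbit",        "path": "arc around the sofa",                "duration_s": 3.5},
    {"id": 4, "kind": "freeze_frame", "source": "end position of lens 3",           "duration_s": 3.5},
    {"id": 5, "kind": "truck",        "path": "parallel to TV background wall",     "duration_s": 3.3},
    {"id": 6, "kind": "freeze_frame", "source": "end position of lens 5",           "duration_s": 2.5},
]

total = sum(shot["duration_s"] for shot in living_room_lens_group)
print(f"total duration: {total:.1f}s")  # 18.8s for this group
```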
In one embodiment, an ending special-effect shot is also added at the end of the video, such as an ending video or picture presenting information promoted by the designer, the design company, or the design software company.
Step 104, according to the preset rules of the lens group selected by the user in step 103, and based on the parameters and models of the space of the user's current scheme from step 102, simulating the motion path, start position, and camera settings of each lens within the space.
The parameters of the space include information such as the floor-plan structure, furniture distribution, background walls, and door and window positions. The path is calculated from the lens type, movement rule, reference object, and orientation rule in the path rules, and the camera path is clipped where it would be blocked.
Some checks are needed, for example (a code sketch of these checks follows the list):
checking the space and its models: a living room needs a sofa and a television, a dining room needs a dining table, a bedroom needs a bed, and so on;
checking whether the room size meets the requirements of the lens path, i.e., the lens movement path must not exceed the bounds of the room;
checking that models on the lens movement path do not block the view excessively: the portion of the path occluded by models must not exceed a certain proportion of the total path length, such as 1/4.
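A minimal sketch of these three checks, assuming axis-aligned room bounds and a precomputed occluded length in place of real geometry tests; the function name and signature are hypothetical.

```python
from typing import List, Tuple

def validate_shot_path(path_points: List[Tuple[float, float]],
                       room_bounds: Tuple[float, float, float, float],
                       blocked_length: float,
                       total_length: float,
                       required_models: List[str],
                       models_in_space: List[str],
                       max_blocked_ratio: float = 0.25) -> List[str]:
    """Return validation errors for a simulated lens path (simplified sketch)."""
    errors = []
    # 1. The space must contain the models the script requires.
    missing = [m for m in required_models if m not in models_in_space]
    if missing:
        errors.append(f"missing required models: {missing}")
    # 2. The lens movement path must stay inside the room.
    min_x, min_y, max_x, max_y = room_bounds
    if any(not (min_x <= x <= max_x and min_y <= y <= max_y) for x, y in path_points):
        errors.append("lens path leaves the room bounds")
    # 3. The occluded portion of the path must not exceed the allowed ratio (e.g. 1/4).
    if total_length > 0 and blocked_length / total_length > max_blocked_ratio:
        errors.append("occluded path exceeds allowed ratio")
    return errors

# Example: a living-room path occluded over 30% of its length, in a space without a TV.
print(validate_shot_path([(1, 1), (4, 1)], (0, 0, 5, 4), 0.9, 3.0,
                         ["sofa", "tv"], ["sofa"]))
```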
Step 105, calculating the path movement speed from the lens movement track and picture settings generated in step 104 and the target duration of the corresponding segment in the video template of step 101, and generating the animation segment.
An animation segment is a kind of video segment, here specifically a roaming video segment.
In one embodiment, the path movement speed can be calculated from the lens movement track and picture settings generated in step 104 and the target duration of the corresponding segment in the video template of step 101; a motion-effect preview is generated and played while the video segment is recorded, and the recorded segment serves as the animation segment in the subsequent synthesis.
Point locations are generated from the fixed speed and the lens rules (positions in space), an arc is generated through the points, and the speed is calculated from the target duration and the arc length.
The shot is played on the display device while the screen is recorded and saved. The frames of the video segment here are animation frames containing unrendered materials and wireframes.
In one embodiment, the animation segment is generated by producing a three-dimensional scene roaming animation from the first-person view of the space, along the lens movement track at the calculated speed, and then preview-playing it. For each lens, the first-person three-dimensional scene roaming animation is generated as follows: calculate animation sampling points along the lens movement path at a preset distance interval; obtain the spatial three-dimensional scene animation frame at each point; and record the lens's three-dimensional scene roaming animation at the calculated lens path movement speed.
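The speed and sampling computation can be sketched as below, assuming the arc is approximated by a polyline of waypoints; the function name and the 0.1 m sampling interval are illustrative assumptions.

```python
import math
from typing import List, Tuple

def path_speed_and_samples(waypoints: List[Tuple[float, float]],
                           target_duration_s: float,
                           sample_interval_m: float = 0.1):
    """Compute the path movement speed from the target clip duration, and
    sample animation points at a fixed distance interval along the path."""
    # Total arc length of the polyline path.
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    speed = length / target_duration_s  # metres per second

    # Resample points every sample_interval_m along the path.
    samples = [waypoints[0]]
    remaining = sample_interval_m
    for a, b in zip(waypoints, waypoints[1:]):
        seg = math.dist(a, b)
        t = 0.0
        while seg - t > remaining:
            t += remaining
            frac = t / seg
            samples.append((a[0] + frac * (b[0] - a[0]),
                            a[1] + frac * (b[1] - a[1])))
            remaining = sample_interval_m
        remaining -= seg - t
    samples.append(waypoints[-1])
    return speed, samples

speed, pts = path_speed_and_samples([(0.0, 0.0), (3.5, 0.0)], target_duration_s=3.5)
print(speed, len(pts))  # 1.0 m/s, 36 sample points including both endpoints
```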
Step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and initiating rendering to generate the fixed-point rendered picture.
The freeze frame (i.e., the fixed point rendered picture) is rendered according to camera position, camera view, composition, and resolution.
Step 107, importing the video clips of step 105 and the fixed-point rendering pictures of step 106 into the corresponding positions of the video templates selected by the user in step 101, and completing video synthesis.
The importing operation is to replace the corresponding position with the corresponding resource according to the defined video template structure.
In one specific example, the video synthesis is performed using FFmpeg.
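As an illustration of this composition step, FFmpeg's image-to-clip conversion and concat demuxer could be driven as below. A sketch only: the file names are placeholders, and the resolution, frame rate, and codec choices are assumptions rather than parameters given in this application.

```python
import subprocess

def picture_to_clip(image: str, seconds: float, out: str) -> None:
    """Turn a fixed-point rendered picture into a short clip of the given length."""
    subprocess.run([
        "ffmpeg", "-y", "-loop", "1", "-t", str(seconds), "-i", image,
        "-vf", "scale=1920:1080,format=yuv420p", "-r", "25", out,
    ], check=True)

def concat_with_music(clips: list, music: str, out: str) -> None:
    """Splice clips in template order with FFmpeg's concat demuxer and
    lay the template's background music under the result."""
    with open("list.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)
    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
        "-i", music, "-map", "0:v:0", "-map", "1:a:0",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest", out,
    ], check=True)

# e.g. alternate roaming segments with freeze-frame clips in template order:
# picture_to_clip("freeze_frame_2.png", 2.5, "clip_2.mp4")
# concat_with_music(["clip_1.mp4", "clip_2.mp4", "clip_3.mp4"], "bgm.mp3", "final.mp4")
```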
In a specific example, if the user wishes to upgrade a synthesized video into a fully rendered video, the animation segments can be re-rendered to generate a new fully rendered video; its resolution can be chosen to match the resolution of the fixed-point rendering images, and the fee is calculated from the video duration. The fee may also be calculated based on the resolution selected by the user.
The embodiment of the application also provides a video synthesis device, which comprises:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present optional space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of a lens according to a preset rule of the lens group;
the animation segment recording unit is used for setting according to the lens motion trail and the picture, calculating the path motion speed according to the target duration of the animation segment of the template corresponding to the video template, and generating the animation segment;
the fixed-point rendering picture generation unit is used for acquiring the end position of the lens motion path and the camera setting, and rendering to generate a fixed-point rendering picture;
and the synthesis unit is used for importing the animation fragments and the fixed-point rendering pictures into corresponding positions of the video templates to synthesize the video.
The embodiment of the application also provides electronic equipment, which comprises: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the video compositing methods of the first aspect described above.
The electronic device in the present embodiment may include, but is not limited to, a mobile terminal such as a smart phone, a notebook computer, a PDA (personal digital assistant), or a PAD (tablet computer), and a fixed terminal such as a desktop computer. The electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) that performs various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage means into a random-access memory (RAM). The RAM also stores the various programs and data required for the operation of the electronic device. The processing device, ROM, and RAM are connected to one another by a bus, to which an input/output (I/O) interface is also connected.
Embodiments of the present application also provide a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any of the video compositing methods of the foregoing first aspect.
Embodiments of the present application also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video compositing method of any of the implementations of the foregoing first aspect or the first aspect.
In this embodiment, the video production task is decomposed into several freeze-frame rendering tasks and several low-cost path-video-segment generation tasks, after which the freeze-frame pictures and the video segments are composed into a video. This replaces the previous approach of rendering every frame of a design-scheme roaming video: only a few key freeze-frame pictures need to be rendered and the video segments along preset paths are generated quickly, so the short video is produced quickly and at low cost.
It is noted that the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of this disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the features above or their equivalents without departing from the spirit of the disclosure, for example embodiments in which the features described above are interchanged with technical features of similar function disclosed herein (but not limited thereto).

Claims (9)

1. A method of video composition, comprising:
step 101, obtaining a video template;
step 102, responding to a selection instruction of a user on a video template, and acquiring and presenting optional space types;
step 103, responding to a selection instruction of a user for the space type, and acquiring and presenting an optional lens group corresponding to the space type selected by the user;
step 104, acquiring the lens group selected by the user, and simulating the movement path of the lens according to the preset rules of the lens group;
step 105, calculating the path movement speed from the lens movement track and picture settings generated in step 104 and the target duration of the corresponding template animation segment in the video template of step 101, and generating the animation segment;
step 106, acquiring the end position of the lens movement path and the camera settings from step 104, and rendering to generate a fixed-point rendered picture;
step 107, importing the animation segment generated in step 105 and the fixed-point rendered picture generated in step 106 into the corresponding positions of the video template to synthesize the video;
wherein step 101 further includes making a video template according to a user instruction, including:
acquiring a spatial model or a combination of spatial models selected by a user;
acquiring the total duration of a video template set by a user;
acquiring at least one lens group established by a user and configuring lens information;
generating a template animation fragment;
generating a template fixed-point rendering diagram;
and synthesizing the template animation fragment and the template fixed-point rendering graph into a video template.
2. The video compositing method of claim 1, wherein:
the video template comprises at least one template animation segment and at least one template freeze-rendered picture.
3. The video compositing method of claim 1, wherein:
the video templates are obtained in step 101 by determining the video templates to be used in response to a user selection instruction or by receiving a video template uploaded by the user.
4. The video compositing method of claim 1, wherein:
the lens group comprises lenses arranged at different positions;
the configuration information of the lens includes: position, camera parameters, path, spatial properties, and model properties;
the path of the lens is the movement path of the lens in space.
5. The video compositing method of claim 1, wherein:
further comprising calculating a cost based on the quality and/or number of the fixed-point rendered pictures.
6. The video compositing method of claim 1, wherein:
the animation segment is a three-dimensional scene roaming animation segment, and the method for generating the animation segment comprises the following steps:
calculating animation sampling points in a lens motion path according to a preset distance interval;
obtaining a space three-dimensional scene animation frame of each point position;
recording the three-dimensional scene roaming animation segment of the lens according to the calculated lens path movement speed.
7. A video compositing apparatus, comprising:
the video template acquisition unit is used for acquiring a video template and responding to a selection instruction of a user on the video template to acquire and present optional space types;
the lens group configuration unit is used for responding to a selection instruction of a user on the space type, and acquiring and presenting a selectable lens group corresponding to the space type selected by the user; acquiring a lens group selected by a user, and simulating a movement path of a lens according to a preset rule of the lens group;
the animation segment recording unit is used for setting according to the lens motion trail and the picture, calculating the path motion speed according to the target duration of the animation segment of the template corresponding to the video template, and generating the animation segment;
the fixed-point rendering picture generation unit is used for acquiring the end position of the lens motion path and the camera setting, and rendering to generate a fixed-point rendering picture;
the synthesis unit is used for importing the animation segments and the fixed-point rendered pictures into the corresponding positions of the video template to synthesize the video;
the video template obtaining unit is further configured to make a video template according to a user instruction, and includes:
acquiring a spatial model or a combination of spatial models selected by a user;
acquiring the total duration of a video template set by a user;
acquiring at least one lens group established by a user and configuring lens information;
generating a template animation fragment;
generating a template fixed-point rendering diagram;
and synthesizing the template animation fragment and the template fixed-point rendering graph into a video template.
8. An electronic device, comprising: at least one processor; a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video compositing method of any of claims 1-6.
9. A non-transitory computer readable storage medium characterized by:
the non-transitory computer readable storage medium stores computer instructions for causing a computer to perform the video compositing method of any of claims 1-6.
CN202110565463.2A 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium Active CN113660528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565463.2A CN113660528B (en) 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113660528A CN113660528A (en) 2021-11-16
CN113660528B (en) 2023-08-25

Family

ID=78488918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565463.2A Active CN113660528B (en) 2021-05-24 2021-05-24 Video synthesis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113660528B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286197A (en) * 2022-01-04 2022-04-05 土巴兔集团股份有限公司 Method and related device for rapidly generating short video based on 3D scene
CN115100327B (en) * 2022-08-26 2022-12-02 广东三维家信息科技有限公司 Method and device for generating animation three-dimensional video and electronic equipment
CN117749959A (en) * 2022-09-14 2024-03-22 北京字跳网络技术有限公司 Video editing method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394530A (en) * 2008-10-29 2009-03-25 中兴通讯股份有限公司 Video freezing method and device in process of mobile phone television playing process
CN103814382A (en) * 2012-09-14 2014-05-21 华为技术有限公司 Augmented reality processing method and device of mobile terminal
CN105976416A (en) * 2016-05-06 2016-09-28 江苏云媒数字科技有限公司 Lens animation generating method and system
CN107801018A (en) * 2017-10-31 2018-03-13 深圳中广核工程设计有限公司 A kind of 3D animation three-dimensional video-frequency manufacturing systems and method played based on ring curtain
CN108257219A (en) * 2018-01-31 2018-07-06 广东三维家信息科技有限公司 A kind of method for realizing the roaming of panorama multiple spot
CN110771150A (en) * 2018-09-29 2020-02-07 深圳市大疆创新科技有限公司 Video processing method, video processing device, shooting system and computer readable storage medium
CN111475675A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video processing system
CN111640173A (en) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Cloud rendering method and system for home-based roaming animation based on specific path
CN111640174A (en) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Furniture growth animation cloud rendering method and system based on fixed visual angle

Also Published As

Publication number Publication date
CN113660528A (en) 2021-11-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant