CN112004031B - Video generation method, device and equipment - Google Patents

Video generation method, device and equipment

Info

Publication number
CN112004031B
CN112004031B (application CN202010759141.7A)
Authority
CN
China
Prior art keywords: event, map, video, path, historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010759141.7A
Other languages
Chinese (zh)
Other versions
CN112004031A (en)
Inventor
何真
宋子璇
刘鹏展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Perfect Knowledge Technology Co ltd
Original Assignee
Beijing Perfect Knowledge Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Perfect Knowledge Technology Co ltd filed Critical Beijing Perfect Knowledge Technology Co ltd
Priority to CN202010759141.7A priority Critical patent/CN112004031B/en
Publication of CN112004031A publication Critical patent/CN112004031A/en
Application granted granted Critical
Publication of CN112004031B publication Critical patent/CN112004031B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

The application discloses a video generation method, device and equipment, relating to the field of computer technology. The method ensures the reasonable arrangement of multimedia materials in the generated video and improves the video playing effect, without requiring the user to adjust the materials manually. The method comprises the following steps: responding to a selection instruction of an event occurrence area in a target historical map, adding an event, and marking the position information of the event in the event occurrence area; connecting the position information in a plurality of event occurrence areas as an event path; taking the event path as the main time axis of animation editing, and inserting multimedia materials into the events and/or the event path on the main time axis; and editing the multimedia materials to generate a video of the historical map.

Description

Video generation method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video generation method, apparatus, and device.
Background
With the rise of the self-media industry, various mobile video editing tools have appeared on the market. Compared with computer-side video editing tools, mobile video editing tools provide a more convenient editing experience: the user can add content such as text, audio and pictures to a video, then clip, preview and upload it, thereby generating the video.
In the related art, when generating a video with a video editing tool, the user can upload edited multimedia materials, or add materials provided by the system, and generate a video from them. However, the video editing tool does not consider the order of association among the multimedia materials in the video, so the user must adjust the materials manually. Manually adjusted materials often lack accurate associations, making it difficult to restore the real video content; as a result, the arrangement of multimedia materials in the generated video is unreasonable, which degrades the video playing effect.
Disclosure of Invention
In view of this, the present application provides a video generation method, apparatus and device, mainly aiming to solve the prior-art problem that the arrangement of multimedia materials in a video manually adjusted by the user is unreasonable and degrades the video playing effect.
According to a first aspect of the present application, there is provided a video generation method, the method comprising:
responding to a selection instruction of an event occurrence area in a target historical map, adding an event, and marking the position information of the event in the event occurrence area;
connecting the position information in the event occurrence areas as an event path;
inserting multimedia materials into the events and/or the event paths on the main time axis by taking the event paths as the main time axis for animation editing;
and editing the multimedia material to generate a video of a historical map.
According to a second aspect of the present application, there is provided a video generating apparatus comprising:
the marking unit is used for responding to a selection instruction of an event occurrence area in the target historical map, adding an event and marking the position information of the event in the event occurrence area;
the adding unit is used for repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
a connection unit for connecting the position information in the event occurrence areas as an event path;
the inserting unit is used for taking the event path as a main time axis of animation editing and inserting multimedia materials into the events and/or the event path on the main time axis;
and the generating unit is used for carrying out editing operation on the multimedia material and generating a video of a history map.
According to a third aspect of the present application, there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the computer program is executed.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
By means of the above technical scheme, and compared with the existing approach of generating a video from the association order between materials, the video generation method, device and equipment provided by the present application add an event in response to a selection instruction for an event occurrence area in the target historical map and mark the position information of the event in that area; events are added repeatedly to obtain position information in a plurality of event occurrence areas, and the position information is connected as an event path. The event path fully considers the order of association among events on the historical map, so historical events can be restored more comprehensively. The event path is then taken as the main time axis of animation editing, multimedia materials are inserted into the events and/or the event path on the main time axis, and a video of the historical map is generated. The user no longer needs to spend time adjusting multimedia materials manually; combining the historical map with multimedia materials lets the user describe events in the map in a more diverse, clear and vivid way, ensures the reasonable arrangement of multimedia materials in the generated video, and improves the video playing effect.
The above description is only an overview of the technical solutions of the present application. To make the technical means of the present application more clearly understood, the application may be implemented in accordance with the content of the specification. To make the above and other objects, features and advantages of the present application more apparent, its detailed description is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart illustrating a video generation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating another video generation method provided in the embodiment of the present application;
FIG. 3 is a diagram illustrating an event path generation process provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a process of adding multimedia material to an event path according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a process of adding multimedia material to an event according to an embodiment of the present application;
fig. 6a is a schematic diagram illustrating a process of inserting video/pictures into events on a sub-time axis according to an embodiment of the present application;
FIG. 6b is a diagram illustrating a process of inserting text into an event on a sub-timeline according to an embodiment of the present application;
FIG. 6c is a diagram illustrating an audio insertion process for events on a sub-timeline according to an embodiment of the present application;
fig. 6d is a schematic diagram illustrating a process of inserting a picture-in-picture into an event on a sub-time axis according to an embodiment of the present application;
fig. 6e is a schematic diagram illustrating a video generation process provided by the embodiment of the present application;
FIG. 7 is a schematic flow chart diagram illustrating video playback logic provided by an embodiment of the present application;
fig. 8 shows a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram illustrating another video generating apparatus provided in an embodiment of the present application;
fig. 10 is a schematic device structure diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
The contents of the present invention will now be discussed with reference to several exemplary embodiments. It is to be understood that these examples are discussed only to enable those of ordinary skill in the art to better understand and thus implement the teachings of the present invention, and are not meant to imply any limitations on the scope of the invention.
As used herein, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment".
In a video editing scenario, a video editing tool can edit video content to generate a video with animation effects; specifically, users can add content such as text, audio and pictures to the video and perform functions such as clipping, previewing and uploading. However, the video editing tool only integrates the multimedia materials added to the video, and the user has to adjust the order of association among the materials manually during integration. If the user does not know the correct order of the video content, the materials are easily arranged in the wrong order and the real video content cannot be restored, so the arrangement of multimedia materials in the generated video is unreasonable and the video playing effect suffers.
To solve this problem, the present embodiment provides a video generation method, as shown in fig. 1, which is applied to a client of a video editing tool and includes the following steps:
101. in response to a selection instruction of an event occurrence area in the target history map, adding an event and marking position information of the event in the event occurrence area.
A historical map is a tool for indicating the positions and environments of historical phenomena at different historical stages; it reflects the geographic conditions of those stages and expresses both spatial and temporal phenomena. Different historical stages therefore usually have different historical maps, and the target historical map may be the map of any historical stage selected by the user, for example a historical map of 600 AD, or a historical map of the Northern Song period in 980 AD.
In the embodiment of the present invention, the historical map contains rich temporal and spatial information. The temporal information may be expressed as the era and year of different historical stages, and the spatial information may be expressed as region information and the distribution of entities within a region, for example the distribution of entities such as mountains, rivers and buildings in a region.
It can be understood that after the target historical map is determined, the user may use it as the background for video editing and select an event occurrence area in the map, specifically by dragging coordinates in the map. The user then adds an event, marks the position information of the event in the event occurrence area, and previews the coordinate position of that information; the range of the event occurrence area can also be adjusted to further define the event's position information within it. After the event is added, it can also be named, and its description can be edited or modified.
The execution subject of this embodiment may be a video generation apparatus or device configured at the client of a video editing tool. After the user triggers a selection instruction for the target historical map, the map is used as the background for video editing, an event occurrence area is framed in it, and an event is then added in that area.
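For illustration only, the data structure implied by steps 101 and 102 can be sketched as follows. This is a minimal sketch, not part of the claimed method; all names (Event, HistoricalMap, add_event) and field choices are assumptions made for exposition.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Event:
    """An event marked on the target historical map (step 101)."""
    name: str                                # e.g. "event 1", editable by the user
    description: str                         # user-edited event description
    area: Tuple[float, float, float, float]  # event occurrence area: (x, y, width, height)
    position: Tuple[float, float]            # marked coordinate position inside the area
    order: int                               # sequence label assigned when the event is added
    time: float = 0.0                        # optional time marked for the event in the map

@dataclass
class HistoricalMap:
    """Target historical map used as the background for video editing."""
    time_scale: str                          # e.g. "Northern Song, 980 AD"
    events: List[Event] = field(default_factory=list)

    def add_event(self, name: str, description: str,
                  area: Tuple[float, float, float, float],
                  position: Tuple[float, float]) -> Event:
        # Step 102: repeating this call adds further events, each receiving
        # a sequence label in the order it was added.
        event = Event(name, description, area, position, order=len(self.events) + 1)
        self.events.append(event)
        return event
```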
102. And repeating the steps to add the events to obtain the position information in the event occurrence areas.
In general, to ensure rich video content, the user selects a plurality of event occurrence areas during video editing and adds an event in each, forming a plurality of events distributed over the target historical map and obtaining the position information in the plurality of event occurrence areas.
103. And connecting the position information in the event occurrence areas as an event path.
In the embodiment of the invention, before editing the video the user can devise a video theme and customize a story line to organize the video content. Each event can correspond to different event cues when added; an event cue may be expressed as an event keyword, event number, event time, event position, event picture, event audio, and so on. The connection order of events may be formed from the relevance between event cues: for example, if keyword a appears at the end of event A1 and at the beginning of event A2, events A1 and A2 share a related cue and should be connected in series, and the position information in the event occurrence areas is connected as the event path according to this connection order.
Specifically, the connection order of the plurality of events may follow the time order in which the events are marked in the historical map: for example, if event A1 is marked at 9 o'clock, event A2 at 11 o'clock and event A3 at 6 o'clock, the preset connection order is A3-A1-A2. It may also follow the number order in which the events are added to the historical map: for example, if events A1, A2 and A3 are numbered (1), (2) and (3), the preset connection order is A1-A2-A3. It may also follow the spatial order of the event positions in the historical map: for example, if event A1 lies in the middle of the map, event A2 in the lower part and event A3 in the upper part, the events may be connected from top to bottom in the order A3-A1-A2.
It can be understood that, in the process of connecting the position information in the plurality of event occurrence areas as the event path, the position information in each area forms point coordinates; the point coordinates of adjacent events are connected according to the connection order, and the resulting path through the events' point coordinates is the event path.
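The three connection orders described above can each be expressed as a sort key over the marked events, after which the event path is simply the polyline through the sorted point coordinates. The following sketch assumes the Event structure outlined earlier; the helper name build_event_path is an illustrative assumption.

```python
from typing import List, Tuple

def build_event_path(events, order: str = "number") -> List[Tuple[float, float]]:
    """Connect event position coordinates into an event path (step 103).

    order: "time"    - time order in which events are marked in the map
           "number"  - number order in which events were added
           "spatial" - top-to-bottom spatial order of event positions
    """
    keys = {
        "time":    lambda e: e.time,         # e.g. 6 o'clock before 9 before 11
        "number":  lambda e: e.order,        # e.g. (1) before (2) before (3)
        "spatial": lambda e: e.position[1],  # screen coordinates: smaller y is higher
    }
    ordered = sorted(events, key=keys[order])
    # Adjacent events' point coordinates, joined in the chosen connection
    # order, form the path; the path itself is this sequence of points.
    return [e.position for e in ordered]
```

For the spatial example above (A1 in the middle, A2 lower, A3 upper), sorting top to bottom yields the order A3-A1-A2.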
104. And inserting multimedia materials into the events and/or the event paths on the main time axis by taking the event paths as the main time axis of the animation editing.
In the embodiment of the present invention, animation editing may be performed on the generated event path and/or the events on it: for example, a playback speed may be set for the event path, a switching manner may be set for the events on the path, and image special effects may be set for those events.
Taking the event path as the main time axis of animation editing, multimedia materials such as text, audio and picture-in-picture are inserted into the events and/or the event path on the main time axis, and the event path is played as an animation at an appropriate speed.
105. And editing the multimedia material to generate a video of a historical map.
The process of editing the multimedia material includes, but is not limited to, processing and designing effects of texts in the multimedia material, adjusting and converting formats of pictures in the multimedia material, and filtering and processing sound effects of audio in the multimedia material.
It is understood that the multimedia materials whose editing is complete form a video of the historical map along the event path. To facilitate the user's operations on the video, a video preview button, a video confirm button and a video return button may be provided in the interface, so that the user can preview the video content of the historical map, confirm it, or return to editing it.
Compared with the existing approach of generating a video from the association order between materials, the video generation method provided by the embodiment of the present application adds an event in response to a selection instruction for an event occurrence area in the target historical map and marks the position information of the event in that area; events are added repeatedly to obtain position information in a plurality of event occurrence areas, which is connected as an event path. The event path fully considers the order of association among events on the historical map, so historical events can be restored more comprehensively. The event path is then taken as the main time axis of animation editing, multimedia materials are inserted into the events and/or the event path on the main time axis, and a video of the historical map is generated. The user no longer needs to spend time adjusting multimedia materials manually; combining the historical map with multimedia materials lets the user describe events in the map in a more diverse, clear and vivid way, ensures the reasonable arrangement of multimedia materials in the generated video, and improves the video playing effect.
Further, as a refinement and extension of the specific implementation of the above embodiment, and in order to fully describe the implementation process, this embodiment provides another video generation method, as shown in fig. 2, which includes:
201. historical map data is collected in advance.
The historical map data may be imported into the video tool by the user, who may bring in the required map pictures or map data, or it may be map pictures or map data stored in the video tool in advance; this is not limited here.
202. And sorting the historical map data in the same time scale range into a map area by using a map tool to form historical maps mapped on different time scale ranges.
It can be understood that, since the distribution of regions differs at each historical stage, the historical map can reflect how the spatial information mapped by each region changes with temporal information. To accurately sort out the historical maps of each stage, a map tool can be used to gather the historical map data, determine the historical stage each datum belongs to, obtain the data belonging to the same stage based on the time scale range corresponding to that stage, and arrange the data within the same time scale range into one map area, thereby forming historical maps mapped onto different time scale ranges.
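A hedged sketch of this aggregation step follows, assuming each map data record carries a year and each historical stage is described by a (start_year, end_year) time scale range; the function and field names are assumptions for illustration.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def group_map_data(map_data: Iterable,
                   time_ranges: List[Tuple[int, int]]) -> Dict[Tuple[int, int], list]:
    """Sort historical map data into map areas by time scale range (step 202).

    map_data:    records, each assumed to expose a .year attribute
    time_ranges: (start_year, end_year) pairs, one per historical stage
    """
    maps = defaultdict(list)
    for record in map_data:
        for start, end in time_ranges:
            if start <= record.year <= end:
                # Data falling in the same time scale range is arranged into
                # one map area, forming the map mapped onto that range.
                maps[(start, end)].append(record)
                break
    return maps
```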
It should be noted that a time scale range need not correspond to an entire historical period; it may correspond to a particular year within a period, for example 890 AD in the Northern Song period or 1170 AD in the Southern Song period. Of course, the time scale range on the historical map can also be set individually.
203. And setting matching identification for the historical maps in different time scale ranges based on the characteristic information describing the places in the historical maps.
Because the time scale range of each historical map corresponds to a start-stop time or year, and historical maps in different time scale ranges differ in the distribution of their spatial information, a matching identifier can be set for the historical map in each time scale range based on the feature information describing places in that map. This distinguishes the maps in different time scale ranges and forms a mapping relationship between time scale ranges and historical maps.
The feature information describing a place may be significant events of the historical stage, for example an era of good governance in the Tang Dynasty or the unification of the six states by the Qin Dynasty; typical figures of the stage, for example Liu Bang, founding emperor of the Western Han, or the founding emperor of the Northern Song; or cultural features of the stage, for example the Hundred Schools of Thought of the Spring and Autumn period, or Tang poetry and Song lyrics. Keywords can then be extracted from the feature information as the matching identifiers of the historical maps in different time scale ranges; a matching identifier may be expressed as a keyword formed from the feature information, or as a number or serial number set for it.
204. And retrieving a target historical map from the historical maps in different time scale ranges by using the matching identifier.
It can be understood that, to facilitate retrieval of historical maps, the matching identifiers serve as labels for the maps in different time scale ranges. The labels can be arranged in chronological order into a historical map list displayed in the page, or into a historical map pull-down list attached to the search bar and displayed in the page.
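The matching identifiers can be pictured as keyword tags keyed to each time scale range; the sketch below shows tagging and retrieval (steps 203 and 204) under that assumption, with all names chosen for illustration.

```python
from typing import Dict, List, Optional, Tuple

TimeRange = Tuple[int, int]

def build_map_list(maps: Dict[TimeRange, object],
                   tags: Dict[TimeRange, List[str]]) -> List[Tuple[TimeRange, List[str]]]:
    """Arrange map labels in chronological order, as in the page's pull-down list."""
    return [(time_range, tags.get(time_range, [])) for time_range in sorted(maps)]

def retrieve_map(maps: Dict[TimeRange, object],
                 tags: Dict[TimeRange, List[str]],
                 query: str) -> Optional[object]:
    """Retrieve the target historical map whose matching identifier matches the query."""
    for time_range, keywords in tags.items():
        if any(query in keyword for keyword in keywords):
            return maps[time_range]
    return None
```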
205. In response to a selection instruction of an event occurrence area in the target history map, adding an event and marking position information of the event in the event occurrence area.
In the embodiment of the invention, after the user retrieves the target historical map, the video content is organized according to a theme devised by the user. The target historical map is taken as the background for generating the video, each event occurrence area is selected in the map, and, to record event information accurately, the position information of each event in its occurrence area is marked.
206. And repeating the steps to add the events to obtain the position information in the event occurrence areas.
207. And connecting the position information in the event occurrence areas as an event path.
In the embodiment of the invention, the position information in the plurality of event occurrence areas needs to be connected in order. Specifically, the position information may be connected as an event path according to the time order of the events marked in the historical map; according to the number order of the events added in the historical map; or according to the spatial order of the events' positions in the historical map.
In practical application, the event path may be generated as shown in fig. 3. First the page of the target historical map is determined: the user inputs feature information of the required map, such as a place name or an ancient state name, into the page's search field and retrieves the matching target historical map from the pull-down list. After previewing the map, the user selects an event occurrence area in it and defines the place where the event occurs. Clicking the "add event 1" button adds an event on the map, marks its coordinate position, and assigns the event name 1. In the same way, more event occurrence areas can be selected, more events added through the "add event" button, and the corresponding coordinate positions marked. Since each event receives a sequence label when added, adjacent events are automatically connected according to their labels to form a path; once two or more events have been added, the page generates the path laying out the events and completes the event path by connecting them in series.
208. And inserting a multimedia material into the event path by taking the event path as a main time axis of animation editing.
It can be understood that, after the event path is successfully created, animation editing may be performed on it using multimedia materials pre-configured in a material library. The event path serves as the main time axis of animation editing, and the events on it are played in the order of their labels at an appropriate speed.
In a specific application scenario, to help the user perform animation editing on the event path, a preview area, a material area and a function area may be provided in the interface of the video editing tool. The process of adding multimedia material to the event path may be as shown in fig. 4: the user previews the playing effect of the event path in the preview area, views the multimedia material inserted on the main time axis in the material area, and selects functions in the function area to complete editing operations on the event path.
209. And inserting a multimedia material into any event on the sub-time axis by selecting any event on the event path and taking the event as the sub-time axis for animation editing.
It can be understood that, to enrich the content of the generated video, any event on the event path can serve as a sub time axis of animation editing to which multimedia material is added. The sub time axis is effectively a branch of the main time axis, and the material inserted into an event on it is the video content played in the middle when playback of the event path reaches that event.
In a specific application scenario, to insert multimedia material into an event, the user selects the event from the event path on the main time axis and enters its editing interface by clicking the "add multimedia material" button. The process, shown in fig. 5, first locates the position nodes of the events distributed on the main time axis, for example event 1 and event 2, and then inserts the multimedia material at event 2; the inserted material may be chosen from the videos/pictures in the material library. When playback along the main time axis reaches event 2, the material inserted at event 2 is played accordingly.
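The main/sub timeline relationship implied here can be sketched as follows: each event on the main time axis may own a sub time axis holding clips, each with a start time and a display duration (the values adjusted by dragging in figs. 6a-6d). The Clip and SubTimeline names are assumptions for exposition.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    """One multimedia material placed on a sub time axis."""
    kind: str        # "video", "picture", "text", "audio" or "pip"
    source: str      # file path or material-library identifier
    start: float     # position on the sub time axis, i.e. the time point it appears
    duration: float  # display duration, adjusted by dragging the clip's two ends

@dataclass
class SubTimeline:
    """Branch of the main time axis attached to a single event."""
    clips: List[Clip] = field(default_factory=list)

    def insert(self, clip: Clip) -> None:
        self.clips.append(clip)
        self.clips.sort(key=lambda c: c.start)  # keep clips in playback order

# Hypothetical usage mirroring fig. 5: inserting a video at event 2.
# event2.sub_timeline.insert(Clip("video", "library/clip.mp4", start=0.0, duration=5.0))
```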
Illustratively, when inserting a video/picture into an event on the sub time axis, as shown in fig. 6a, the user clicks the video/picture button and selects the video/picture to insert from the local material library according to actual needs; it is added to the corresponding event on the sub time axis as the video played with that event. When a video/picture is selected, its display duration on the sub time axis can be adjusted by dragging its two ends, and its position on the sub time axis, that is, the time point at which it appears, can be adjusted by dragging its body. In addition, after a video/picture material is selected, various processing can be applied through the video/picture operation buttons displayed in the function area: clicking the split button divides the selected video/picture into two segments at the current frame, clicking the copy button duplicates it, clicking the restore button returns it to its initial state, and clicking the delete button removes it.
For example, when inserting text into an event on the sub time axis, as shown in fig. 6b, the user clicks the text button and selects the text or subtitle to insert from the local material library according to actual needs; it is added to the corresponding event on the sub time axis as content played with that event, and the position and size of the text can be adjusted in the preview area. After a text material is selected, the user can also edit its position and display duration on the sub time axis, and apply various processing through the text operation buttons in the function area; the specific processing is similar to the video/picture operations and is not repeated here.
For example, when inserting audio into an event on the sub time axis, as shown in fig. 6c, the user clicks the audio button and selects music or a recording from the local material library according to actual needs. When inserting music, the page jumps to a music list: the user can audition and use the recommended music, or select an audio file from the local material library. When inserting a recording, a recording panel slides up from the bottom of the page: the user presses the record button to start recording while the preview area automatically plays the video, so that the user can dub and narrate along with the picture; releasing the button pauses the recording, and clicking the done button in the panel ends it. After an audio material is selected, the user can also edit its position and display duration on the sub time axis and apply various processing through the audio operation buttons in the function area; the specific processing is similar to the video/picture operations and is not repeated here.
For example, when inserting a picture-in-picture into an event on the sub time axis, as shown in fig. 6d, the user clicks the picture-in-picture button and selects a video or picture from the local material library according to actual needs; it is added to the corresponding event on the sub time axis as content played with that event, overlaid on a layer above the main video, and its position and size can be adjusted in the preview area. After a picture-in-picture material is selected, the user can also edit its position and display duration on the sub time axis and apply various processing through the operation buttons in the function area; the specific processing is similar to the video/picture operations and is not repeated here.
210. And editing the multimedia material to generate a video of a historical map.
It can be understood that, for clipping operations during editing, switching between the main time axis and the sub time axis is taken into account: by selecting the event path or an event with the clip button, editing switches between the event path on the main time axis and the event on its sub time axis.
Illustratively, after the video editing task is completed, the video generation process is as shown in fig. 6e. The user gives the edited video a final preview through the next button at the upper right corner of the page; during the preview, the user can go back with the return button to continue editing, and once the video is finally confirmed and no longer modified, clicking the "creation complete" button generates the video of the historical map.
211. And responding to a video playing instruction, and sequentially playing the multimedia materials inserted in the event path according to the main time axis corresponding to the event path.
After finishing video editing, the user can check the video effect through the preview button. Specifically, while playing the multimedia materials inserted on the event path, it is detected whether an event on the path contains multimedia material; if so, a pause playing instruction is issued for the main time axis and playback switches to the sub time axis corresponding to the event. The materials inserted in the event are played in order along its sub time axis; when they finish, playback switches back to the main time axis corresponding to the event path and continues with the materials inserted on the path.
In an actual application scenario, considering the playing effect of the video, the specific playback logic is as shown in fig. 7. Starting from the main time axis, the events on the event path are played in number order. When event 1 is played, it is detected whether event 1 contains multimedia material; if so, the main time axis is paused, playback switches to event 1's sub time axis, and the material of event 1 is played there. When the sub time axis finishes, playback switches back to the main time axis and continues with event 2 on the event path, where the same detection is performed: if event 2 contains material, playback switches to event 2's sub time axis and back again when it finishes, and so on until all events on the main time axis have been played.
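The playback logic of fig. 7 reduces to a loop: traverse the events on the main time axis in number order, and whenever an event carries multimedia material, pause the main time axis, play out the event's sub time axis, then switch back. The sketch below assumes the structures from the earlier sketches; the callback names are illustrative.

```python
def play_video(events, animate_path_segment, play_clip):
    """Play the generated video following the logic of fig. 7.

    events:               events in their connection (number) order
    animate_path_segment: callback animating the event path up to an event
    play_clip:            callback playing one multimedia clip
    """
    for event in events:
        # Main time axis: animate the path up to the current event.
        animate_path_segment(event)
        # Detect whether the event contains multimedia material.
        sub = getattr(event, "sub_timeline", None)
        if sub and sub.clips:
            # Pause the main time axis and switch to the event's sub time axis.
            for clip in sub.clips:
                play_clip(clip)
            # Sub time axis finished: switch back to the main time axis and
            # continue with the next event on the event path.
    # All events on the main time axis have been played.
```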
Further, as a specific implementation of the methods in fig. 1 and fig. 2, an embodiment of the present application provides a video generating apparatus. As shown in fig. 8, the apparatus includes: a marking unit 31, an adding unit 32, a connecting unit 33, an inserting unit 34, and a generating unit 35.
A marking unit 31, which may be configured to add an event in response to a selection instruction of an event occurrence area in the target history map, and mark location information of the event in the event occurrence area;
an adding unit 32, which can be used to repeat the above steps to add events, and obtain location information in a plurality of event occurrence areas;
a connection unit 33 operable to connect the position information in the plurality of event occurrence areas as an event path;
an inserting unit 34, configured to insert multimedia material into the event and/or the event path on the main timeline with the event path as a main timeline for animation editing;
the generating unit 35 may be configured to perform an editing operation on the multimedia material to generate a video of a history map.
Compared with the existing approach of generating a video from the association order between materials, the video generating apparatus provided by the embodiment of the present invention adds an event in response to a selection instruction for an event occurrence area in the target historical map and marks the position information of the event in that area; events are added repeatedly to obtain position information in a plurality of event occurrence areas, which is connected as an event path. The event path fully considers the order of association among events in the historical map, so historical events can be restored more comprehensively. The event path is then taken as the main time axis of animation editing, multimedia materials are inserted into the events and/or the event path on the main time axis, and a video of the historical map is generated. The user no longer needs to spend time adjusting multimedia materials manually; combining the historical map with multimedia materials lets the user describe events in the map in a more diverse, clear and vivid way, ensures the reasonable arrangement of multimedia materials in the generated video, and improves the video playing effect.
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
a collecting unit 36, which may be configured to collect history map data in advance before adding an event in response to a selection instruction of an event occurrence area in the target history map and marking position information of the event in the event occurrence area;
the summarizing unit 37 may be configured to use a map tool to sort the historical map data in the same time scale range into map areas, so as to form historical maps mapped on different time scale ranges.
In a specific application scenario, as shown in fig. 9, the summarizing unit 37 includes:
a statistic module 371, configured to utilize a map tool to count a historical stage of the historical map data;
the summarizing module 372 may be configured to sort the historical map data in the same time scale range into a map area based on the time scale range corresponding to the historical stage, so as to form a historical map mapped on different time scale ranges.
In a specific application scenario, as shown in fig. 9, the summarizing unit 37 further includes:
a setting module 373, configured to set matching identifiers for historical maps in different time scale ranges based on feature information describing a place in the historical map after the historical map data in the same time scale range is arranged in a map area by using a map tool to form the historical map mapped in different time scale ranges;
a retrieving module 374, configured to retrieve the target history map from the history maps in the different time scale ranges by using the matching identifier.
In a specific application scenario, the connecting unit 33 may be specifically configured to connect the location information in the event occurrence areas as event paths according to a time sequence of the events marked in the history map; or
Connecting the position information in the event occurrence areas as event paths according to the number sequence of the events added in the history map; or
And connecting the position information in the event occurrence areas as event paths according to the spatial sequence of the positions of the events in the historical map.
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
the selecting unit 38 may be configured to, after inserting a multimedia material into the event path by using the event path as a main timeline for animation editing, select any event on the event path, and insert a multimedia material into any event on a sub timeline by using the any event as a sub timeline for animation editing.
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
the playing unit 39 may be configured to, after inserting a multimedia material into an event and/or an event path on the main timeline, in response to a video playing instruction, sequentially play the multimedia material inserted in the event path according to the main timeline corresponding to the event path.
In a specific application scenario, as shown in fig. 9, the playing unit 39 includes:
the detecting module 391 may be configured to detect, in the process of playing the multimedia material inserted in the event path, whether the event in the event path contains the multimedia material;
a switching module 392, configured to, if yes, issue a pause playing instruction of the main timeline, and switch to a sub timeline corresponding to the event;
the playing module 393 may be configured to sequentially play the multimedia materials inserted in the event according to the sub-time axis corresponding to the event, switch back to the main time axis corresponding to the event path after the playing of the multimedia materials is finished, and continue to play the multimedia materials inserted in the event path.
It should be noted that other corresponding descriptions of the functional units related to the video generating apparatus provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the methods shown in fig. 1-2, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video generation method shown in fig. 1-2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the method shown in fig. 1-2 and the virtual device embodiment shown in fig. 8-9, to achieve the above object, an embodiment of the present application further provides a video generation entity device, which may specifically be a computer, a smart phone, a tablet computer, a smart watch, a server, or a network device, and the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the video generation method as described above and shown in fig. 1-2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
In an exemplary embodiment, referring to fig. 10, the entity device 400 includes a communication bus, a processor, a memory and a communication interface, and may further include an input/output interface and a display device, where the functional units communicate with each other through the bus. The memory stores a computer program, and the processor executes the program stored in the memory to perform the video generation method in the above embodiment.
Those skilled in the art will appreciate that the video generation entity device structure provided by this embodiment does not limit the entity device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program for managing hardware and software resources of the actual device for store search information processing, and supports the operation of the information processing program and other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and other hardware and software in the information processing entity equipment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, or by hardware. Applying this technical scheme, compared with the existing approach: the event path fully considers the order of association among events on the historical map, so historical events can be restored more comprehensively; the event path is taken as the main time axis of animation editing, and multimedia materials are inserted into the events and/or the event path on the main time axis to generate the video of the historical map. The user need not spend time adjusting multimedia materials manually, and by combining the historical map with multimedia materials the user can describe events in the map in a more diverse, clear and vivid way, ensuring the reasonable arrangement of multimedia materials in the generated video and improving the video playing effect.
Those skilled in the art will appreciate that the figures are merely schematic diagrams of a preferred implementation scenario, and that the modules or flows in the figures are not necessarily required to practice the present application. The modules in the apparatus of the implementation scenario may be distributed as described, or may be relocated, with corresponding changes, into one or more apparatuses different from the present scenario. The modules may be combined into one module or further split into multiple sub-modules.
The above serial numbers are merely for description and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure covers only a few specific implementation scenarios of the present application; the application is not limited thereto, and any variation conceivable by those skilled in the art shall fall within the scope of the present application.

Claims (9)

1. A method of video generation, comprising:
responding to a selection instruction of an event occurrence area in a target historical map, adding an event, and marking position information of the event in the event occurrence area;
repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
connecting the position information in the event occurrence areas as an event path;
inserting multimedia materials into the events and/or the event paths on the main time axis by taking the event paths as the main time axis for animation editing;
editing the multimedia material to generate a video of a historical map;
responding to a video playing instruction, and sequentially playing the multimedia materials inserted in the event path according to a main time axis corresponding to the event path, which specifically comprises the following steps: detecting, in the process of playing the multimedia materials inserted on the event path, whether an event on the event path contains multimedia material; if yes, sending a pause playing instruction for the main time axis, and switching to a sub time axis corresponding to the event; and sequentially playing the multimedia materials inserted in the event according to the sub time axis corresponding to the event, switching back to the main time axis corresponding to the event path after the multimedia materials are played, and continuing to play the multimedia materials inserted in the event path.
2. The method according to claim 1, wherein before the adding an event in response to a selection instruction of an event occurrence area in the target history map and marking position information of the event in the event occurrence area, the method further comprises:
collecting historical map data in advance;
and sorting the historical map data in the same time scale range into map areas by using a map tool to form historical maps mapped on different time scale ranges.
3. The method according to claim 2, wherein the sorting of historical map data in the same time scale range into map areas by using a map tool to form historical maps mapped on different time scale ranges specifically includes:
utilizing a map tool to count the historical stage of the historical map data;
and sorting the historical map data in the same time scale range into map areas based on the time scale range corresponding to the historical stage to form the historical map mapped on different time scale ranges.
4. The method of claim 2, wherein after the sorting the historical map data on the same time scale range into map regions by using the map tool to form the historical map mapped on different time scale ranges, the method further comprises:
setting matching identifications for the historical maps in different time scale ranges based on characteristic information describing places in the historical maps;
and retrieving a target historical map from the historical maps in different time scale ranges by using the matching identifier.
5. The method according to any one of claims 1 to 4, wherein the connecting the location information in the event occurrence areas as an event path specifically comprises:
connecting the position information in the event occurrence areas as event paths according to the time sequence of the events marked in the history map; or
Connecting the position information in the event occurrence areas as event paths according to the number sequence of the events added in the history map; or
And connecting the position information in the event occurrence areas as event paths according to the spatial sequence of the positions of the events in the historical map.
6. The method according to claim 1, wherein after inserting multimedia material into the event and/or event path on the main timeline with the event path as the main timeline for animation editing, the method further comprises:
and inserting a multimedia material into any event on the sub-time axis by selecting any event on the event path and taking the event as the sub-time axis for animation editing.
7. A video generation apparatus, comprising:
the marking unit is used for responding to a selection instruction of an event occurrence area in the target historical map, adding an event and marking the position information of the event in the event occurrence area;
the adding unit is used for repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
a connection unit configured to connect the position information in the plurality of event occurrence areas as an event path;
the inserting unit is used for taking the event path as a main time axis of animation editing and inserting a multimedia material into the event and/or the event path on the main time axis;
the generating unit is used for carrying out editing operation on the multimedia material and generating a video of a historical map;
responding to a video playing instruction, and sequentially playing the multimedia materials inserted in the event path according to a main time axis corresponding to the event path, which specifically comprises the following steps: detecting, in the process of playing the multimedia materials inserted on the event path, whether an event on the event path contains multimedia material; if yes, sending a pause playing instruction for the main time axis, and switching to a sub time axis corresponding to the event; and sequentially playing the multimedia materials inserted in the event according to the sub time axis corresponding to the event until the multimedia materials are played, then switching back to the main time axis corresponding to the event path and continuing to play the multimedia materials inserted in the event path.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the video generation method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video generation method according to any one of claims 1 to 6.
CN202010759141.7A 2020-07-31 2020-07-31 Video generation method, device and equipment Active CN112004031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010759141.7A CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010759141.7A CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN112004031A CN112004031A (en) 2020-11-27
CN112004031B (en) 2023-04-07

Family

ID: 73464159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010759141.7A Active CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN112004031B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442639B (en) * 2021-06-03 2024-01-16 北京字跳网络技术有限公司 Method, device, equipment and medium for generating special effect configuration file
CN114268746B (en) * 2021-12-20 2023-04-28 北京百度网讯科技有限公司 Video generation method, device, equipment and storage medium
CN114466222B (en) * 2022-01-29 2023-09-26 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030016239A1 (en) * 2001-07-19 2003-01-23 Christopher Teresa Michelle Method and apparatus for providing a graphical depiction of events
US7149961B2 (en) * 2003-04-30 2006-12-12 Hewlett-Packard Development Company, L.P. Automatic generation of presentations from “path-enhanced” multimedia
US8065080B2 (en) * 2006-10-31 2011-11-22 At&T Intellectual Property I, Lp Location stamping and logging of electronic events and habitat generation
US8966402B2 (en) * 2011-06-29 2015-02-24 National Taipei University Of Education System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof
US9626365B2 (en) * 2013-03-15 2017-04-18 Ambient Consulting, LLC Content clustering system and method
US9886173B2 (en) * 2013-03-15 2018-02-06 Ambient Consulting, LLC Content presentation and augmentation system and method
US10162870B2 (en) * 2015-09-30 2018-12-25 International Business Machines Corporation Historical summary visualizer for news events

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005110111A (en) * 2003-10-01 2005-04-21 Sony Corp Image reproducer, method for editing image, program and recording medium
CN102243632A (en) * 2010-05-13 2011-11-16 成都索贝数码科技股份有限公司 Method and system for searching materials based on electronic map
CN104714960A (en) * 2013-12-13 2015-06-17 方正国际软件(北京)有限公司 Method and system for adsorbing multimedia information through track points
CN104835187A (en) * 2015-05-19 2015-08-12 北京三六三互动教育科技有限公司 Animation editor and editing method thereof
US10656797B1 (en) * 2019-02-06 2020-05-19 Snap Inc. Global event-based avatar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
视频地图及其生成方法研究 (Research on Video Maps and Their Generation Method); 吴宇翔 (Wu Yuxiang); 《中国优秀硕士论文全文数据库 信息科技辑》 (China Masters' Theses Full-text Database, Information Science & Technology); 2019-01-15; full text *

Also Published As

Publication number Publication date
CN112004031A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN112004031B (en) Video generation method, device and equipment
CN1610904B (en) Moving image data management apparatus and method
CN103702039B (en) image editing apparatus and image editing method
CN101150699B (en) Information processing apparatus, information processing method
US8700635B2 (en) Electronic device, data processing method, data control method, and content data processing system
CN101459801B (en) Information processing apparatus and method
KR20060052116A (en) Contents management system, contents management method, and computer program
JP4643735B1 (en) Electronic device and video processing method
TW200921497A (en) Method, apparatus and computer program product for hierarchical navigation with respect to content items of a media collection
EP1655736A1 (en) Data processing apparatus, information processing system, selection program and computer-readable recording medium recording the program
JP2004228779A (en) Information processor
CN103596020A (en) Method and system for mixed arrangement and playing of television programs
CN105872717A (en) Video processing method and system, video player and cloud server
WO2010084585A1 (en) Information guidance system
CN108924622A (en) A kind of method for processing video frequency and its equipment, storage medium, electronic equipment
KR101440168B1 (en) Method for creating a new summary of an audiovisual document that already includes a summary and reports and a receiver that can implement said method
CN113395605B (en) Video note generation method and device
CN101183380A (en) Content filtering method and device therefore, and recording medium having filtering program
CN110287464A (en) The methods of exhibiting, device of option data, computer equipment and computer storage medium in list
CN113918522A (en) File generation method and device and electronic equipment
CN112541323A (en) Method and device for processing reading materials
CN112887794B (en) Video editing method and device
JP5552987B2 (en) Search result output device, search result output method, and search result output program
US7844163B2 (en) Information editing device, information editing method, and computer product
CN106605412A (en) Improved interface for accessing television programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant