CN112004031A - Video generation method, device and equipment - Google Patents


Info

Publication number
CN112004031A
CN112004031A (application CN202010759141.7A)
Authority
CN
China
Prior art keywords
event
map
video
historical
path
Prior art date
Legal status
Granted
Application number
CN202010759141.7A
Other languages
Chinese (zh)
Other versions
CN112004031B
Inventor
何真
宋子璇
刘鹏展
Current Assignee
Beijing Perfect Knowledge Technology Co Ltd
Original Assignee
Beijing Perfect Knowledge Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Perfect Knowledge Technology Co Ltd filed Critical Beijing Perfect Knowledge Technology Co Ltd
Priority to CN202010759141.7A
Publication of CN112004031A
Application granted
Publication of CN112004031B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a video generation method, device and equipment, relating to the field of computer technology. The method improves the video playing effect while ensuring that the multimedia materials in the generated video are arranged reasonably, without requiring the user to adjust them manually. The method comprises the following steps: in response to a selection instruction for an event occurrence area in a target historical map, adding an event and marking the position information of the event within that area; connecting the position information of the event occurrence areas into an event path; taking the event path as the main timeline for animation editing and inserting multimedia materials into the events and/or the event path on the main timeline; and editing the multimedia materials to generate a video of the historical map.

Description

Video generation method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video generation method, apparatus, and device.
Background
With the rise of the media industry, a variety of video editing tools for mobile terminals have appeared on the market. Compared with video editing tools on desktop computers, mobile editing tools offer a more convenient workflow: a user can add text, audio, pictures and other content to a video, then clip, preview and upload it to produce the finished video.
In the related art, a user generates a video by uploading edited multimedia materials or adding materials provided by the system. However, the video editing tool does not consider the association order among the materials, so the user must adjust them manually, and manually adjusted materials often lack an accurate association order. It is therefore difficult to restore the real video content, the materials in the generated video are arranged unreasonably, and the playing effect suffers.
Disclosure of Invention
In view of this, the present application provides a video generation method, apparatus and device, mainly aiming to solve the prior-art problem that multimedia materials are arranged unreasonably in a video after manual adjustment by the user, which degrades the video playing effect.
According to a first aspect of the present application, there is provided a video generation method, the method comprising:
responding to a selection instruction of an event occurrence area in a target historical map, adding an event, and marking position information of the event in the event occurrence area;
connecting the position information in the event occurrence areas as an event path;
taking the event path as the main timeline for animation editing, and inserting multimedia materials into the events and/or the event path on the main timeline;
and editing the multimedia material to generate a video of a historical map.
According to a second aspect of the present application, there is provided a video generating apparatus comprising:
the marking unit is used for responding to a selection instruction of an event occurrence area in the target historical map, adding an event and marking the position information of the event in the event occurrence area;
the adding unit is used for repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
a connection unit configured to connect the position information in the plurality of event occurrence areas as an event path;
the inserting unit is used for taking the event path as a main time axis of animation editing and inserting multimedia materials into the events and/or the event path on the main time axis;
and the generating unit is used for carrying out editing operation on the multimedia material and generating a video of a history map.
According to a third aspect of the present application, there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the computer program is executed.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of the first aspect described above.
With the above technical solution, compared with the existing approach of generating a video from a manually arranged association order between materials, the video generation method, device and equipment of the present application add events in response to selection instructions for event occurrence areas in a target historical map, mark the position information of each event in its area, and repeat the addition to obtain position information for a plurality of areas. The position information is then connected into an event path. Because the event path fully considers the association order between events on the historical map, historical events can be restored more completely. The event path further serves as the main timeline for animation editing: multimedia materials are inserted into the events and/or the event path on the main timeline, and a video of the historical map is generated. The user does not need to spend time adjusting the materials manually, and can describe events on the historical map more diversely, clearly and vividly by combining the map with multimedia materials, improving the playing effect while keeping the arrangement of the materials reasonable.
The foregoing is only an overview of the technical solution of the present application. To make the technical means of the application clearer so that it can be implemented according to the description, and to make the above and other objects, features and advantages more understandable, a detailed description of the application follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart illustrating a video generation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating another video generation method provided in the embodiment of the present application;
FIG. 3 is a diagram illustrating an event path generation process provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a process of adding multimedia material to an event path according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a process of adding multimedia material to an event according to an embodiment of the present application;
fig. 6a is a schematic diagram illustrating a process of inserting video/pictures into events on a sub-time axis according to an embodiment of the present application;
FIG. 6b is a diagram illustrating a process of inserting text into an event on a sub-timeline according to an embodiment of the present application;
FIG. 6c is a diagram illustrating an audio insertion process for events on a sub-timeline according to an embodiment of the present application;
fig. 6d is a schematic diagram illustrating a picture-in-picture process for event insertion on a sub-time axis according to an embodiment of the present application;
fig. 6e is a schematic diagram illustrating a video generation process provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart diagram illustrating video playback logic provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram illustrating a video generating apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram illustrating another video generating apparatus provided in an embodiment of the present application;
fig. 10 is a schematic device structure diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
The content of the invention will now be discussed with reference to a number of exemplary embodiments. It is to be understood that these examples are discussed only to enable those of ordinary skill in the art to better understand and thus implement the teachings of the present invention, and are not meant to imply any limitations on the scope of the invention.
As used herein, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment".
In a video editing scene, a video editing tool can edit video content to generate a video with animation effects: a user can add text, audio, pictures and other content to the video and clip, preview and upload it. However, the tool only integrates the multimedia materials added to the video, and the user must manually adjust the association order between the materials during integration. If the user does not know the correct order of the video content, the materials are easily arranged in the wrong order and the real video content cannot be restored, so the arrangement of the materials in the generated video is unreasonable and the playing effect is affected.
In order to solve the problem, the present embodiment provides a video generation method, as shown in fig. 1, which is applied to a client of a video editing tool, and includes the following steps:
101. in response to a selection instruction of an event occurrence area in the target history map, adding an event and marking position information of the event in the event occurrence area.
A historical map is a tool for indicating the positions and environments of historical phenomena at different historical stages. It reflects the geographic conditions of those stages and indicates both spatial and temporal phenomena. Different historical stages therefore usually have different historical maps, and the target historical map may be the map of any stage selected by the user, for example a historical map for the year 600, or a Northern Song map for the year 980.
In the embodiment of the present invention, the historical map contains rich temporal and spatial information: the temporal information may be expressed as the period and year of different historical stages, and the spatial information as regional information and the distribution of entities within a region, for example the distribution of mountains, rivers and buildings in a region.
It can be understood that after the target historical map is determined, the user may use it as the background for video editing and select an event occurrence area in the map, specifically by dragging coordinates in the map to frame the area. The user then adds an event, marks the position information of the event in the area, and previews its coordinate position; the range of the event occurrence area can also be adjusted to refine the event's position information. After the event is added, it can be named and its description edited or modified.
The execution subject of this embodiment may be a video generation apparatus or device configured at the client of a video editing tool. After the user triggers a selection instruction for a target historical map, the map is used as the background for video editing, an event occurrence area is framed within it, and an event is added in that area.
102. And repeating the steps to add the events to obtain the position information in the event occurrence areas.
In general, to ensure rich video content, the user selects a plurality of event occurrence areas during editing and adds an event in each, forming a plurality of events distributed across the target historical map and obtaining the position information of each area.
103. And connecting the position information in the event occurrence areas as an event path.
In the embodiment of the invention, before editing the video the user can choose a video theme and customize a story line to organize the video content. Each event, when added, can correspond to different event clues, which may be expressed as event keywords, numbers, times, positions, pictures, audio and the like. The connection order of events can be derived from the relationships between clues: for example, if keyword a is described at the end of event A1 and again at the beginning of event A2, the two events share a clue and should be connected in series. The position information of the plurality of event occurrence areas is then connected into an event path according to this connection order.
Specifically, the connection order of the events may be the time order in which they are marked in the historical map: for example, if event A1 is marked at 9 o'clock, event A2 at 11 o'clock and event A3 at 6 o'clock, the preset connection order may be A3-A1-A2. It may also be the number order in which events are added to the map: for example, if A1 is numbered ①, A2 is numbered ② and A3 is numbered ③, the preset order may be A1-A2-A3. It may further be the spatial order of the events' positions in the map: for example, if A1 lies in the middle, A2 at the bottom and A3 at the top, the preset order may be A3-A1-A2 from top to bottom, or A2-A1-A3 from bottom to top, among other spatial distributions.
It can be understood that, in connecting the position information of the plurality of event occurrence areas into an event path, the position information of each area forms a point coordinate; the point coordinates of adjacent events are then connected according to the connection order, and the resulting path through the events' coordinates is the event path.
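The connection step just described can be sketched minimally in Python (all names here are illustrative; the patent prescribes no code): each marked event carries the sequence label assigned when it was added, and the event path is simply the events' point coordinates taken in that order.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    order: int       # sequence label assigned when the event was added
    position: tuple  # (x, y) point coordinate marked in the event area

def build_event_path(events):
    """Connect the point coordinates of adjacent events, in connection
    order, into a single path (a list of coordinate pairs)."""
    ordered = sorted(events, key=lambda e: e.order)
    return [e.position for e in ordered]

# Example with the A3-A1-A2 order from the text above
path = build_event_path([
    Event("A3", 1, (2, 8)),
    Event("A1", 2, (5, 5)),
    Event("A2", 3, (5, 2)),
])
# path is the sequence of point coordinates forming the event path
```

The path itself is just ordered coordinates; rendering it as an animated line on the map would be a separate concern.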
104. And inserting multimedia materials into the events and/or the event paths on the main time axis by taking the event paths as the main time axis of the animation editing.
In the embodiment of the present invention, for the generated event path, animation editing may be performed on the event path and/or the events on the event path, for example, setting a play speed for the event path, setting a switching manner for the events on the event path, and setting an image special effect for the events on the event path.
The event path is used as the main timeline of animation editing, and multimedia materials are inserted into the events and/or the event path on the main timeline. The materials may be text, audio, picture-in-picture and the like, and the event path is animated at a suitable speed.
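A minimal sketch of this main-timeline structure, under the assumption (names hypothetical) that a material is attached either to a specific event or to the path as a whole:

```python
class MainTimeline:
    """The event path as the main timeline: each event occupies a
    position node, and multimedia materials are attached either to an
    event or to the path (the whole timeline)."""

    def __init__(self, event_names, play_speed=1.0):
        self.events = list(event_names)
        self.play_speed = play_speed  # path animation speed
        self.materials = {name: [] for name in self.events}
        self.path_materials = []      # materials on the path itself

    def insert_material(self, material, event=None):
        # event=None means the material belongs to the event path
        if event is None:
            self.path_materials.append(material)
        else:
            self.materials[event].append(material)

tl = MainTimeline(["event 1", "event 2"])
tl.insert_material("narration.mp3")          # on the event path
tl.insert_material("battle.png", "event 2")  # on a specific event
```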
105. And editing the multimedia material to generate a video of a historical map.
The process of editing the multimedia material includes, but is not limited to, processing and designing effects of texts in the multimedia material, adjusting and converting formats of pictures in the multimedia material, and filtering and processing sound effects of audio in the multimedia material.
It is understood that multimedia materials whose editing is complete form a video of the historical map along the event path. To facilitate operation, preview, confirm and return buttons may be provided in the interface, so the user can preview the video content of the historical map, confirm it, or return to editing it.
Compared with the existing approach of generating a video from a manually arranged association order between materials, the video generation method provided by this embodiment adds events and marks their position information in response to selection instructions for event occurrence areas in the target historical map, repeats the addition to obtain the position information of a plurality of areas, and connects that information into an event path. The path fully considers the association order between events on the historical map, so historical events can be restored more completely. The event path then serves as the main timeline of animation editing; multimedia materials are inserted into the events and/or the path on that timeline, and a video of the historical map is generated. The user need not spend time adjusting the materials manually, can describe the events in the map more diversely, clearly and vividly by combining the map with multimedia materials, and obtains an improved playing effect while the arrangement of the materials remains reasonable.
Further, as a refinement and extension of the specific implementation of the foregoing embodiment, and to fully describe its implementation process, this embodiment provides another video generation method, as shown in fig. 2, which includes:
201. historical map data is collected in advance.
The historical map data may be imported into the video tool by the user, who can import a required map picture or map data, or it may be a map picture or map data stored in the tool in advance; this is not limited here.
202. And sorting the historical map data in the same time scale range into map areas by using a map tool to form historical maps mapped on different time scale ranges.
It can be understood that, since the distribution of regions differs at each historical stage, a historical map can reflect how the spatial information mapped by each region changes across different temporal information. To sort out the historical maps of each stage accurately, the historical map data can be gathered with a map tool, the historical stage of each datum determined, the data of the same stage collected based on the time-scale range corresponding to that stage, and the data within the same range sorted into a map area, forming historical maps mapped onto different time-scale ranges.
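The grouping step can be sketched as follows (a simplification under stated assumptions: each raw record is taken to already carry its time-scale range, and a "map" is just the collection of its area records rather than a stitched image):

```python
from collections import defaultdict

def build_historical_maps(map_records):
    """Group raw historical map data by time-scale range so that each
    range maps to one assembled historical map (here, a list of area
    records standing in for a sorted map area)."""
    maps = defaultdict(list)
    for record in map_records:
        maps[record["range"]].append(record["area"])
    return dict(maps)

records = [
    {"range": (960, 1127), "area": "area-a"},
    {"range": (960, 1127), "area": "area-b"},
    {"range": (1127, 1279), "area": "area-c"},
]
historical_maps = build_historical_maps(records)
# two maps: one per time-scale range
```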
It should be noted that the time-scale range mentioned above need not correspond to a whole historical period; it may be specific to a certain year within a period, for example the year 890 in the Northern Song or the year 1170 in the Southern Song. The time-scale range on the historical map may also be set individually.
203. And setting matching identification for the historical maps in different time scale ranges based on the characteristic information describing the places in the historical maps.
Because the time-scale range of each historical map corresponds to start and end times or year times, and maps of different ranges differ in their regional distribution, a matching identifier can be set for the historical map of each time-scale range, based on the characteristic information describing the places in that map, in order to distinguish the maps and form a mapping between time-scale ranges and historical maps.
The characteristic information of a place may be a significant event of the historical stage, such as the Reign of Zhenguan in the Tang Dynasty or the unification of the six states under the Qin; a typical figure of the stage, such as Liu Bang, founder of the Western Han, or the founding emperor of the Northern Song; or a cultural feature of the stage, such as the Hundred Schools of Thought of the Spring and Autumn period, or Tang poetry and Song lyrics. Keywords can then be extracted from the characteristic information as matching identifiers for the historical maps of different time-scale ranges; a matching identifier may be a keyword formed from the characteristic information, or a number or serial number assigned to it.
204. And retrieving a target historical map from the historical maps in different time scale ranges by using the matching identifier.
It can be understood that, to facilitate retrieval of historical maps, the matching identifiers serve as tags for the maps of different time-scale ranges. The tags can be arranged chronologically into a historical-map list displayed on the page, or into a pull-down list attached to the page's search bar.
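Retrieval by matching identifier can be sketched as a simple keyword lookup (names and the index shape are assumptions; the patent describes only tags, not a data structure):

```python
def retrieve_maps(query, map_index):
    """Return the time-scale ranges whose matching identifiers (keyword
    tags) contain the query string. map_index maps a time-scale range
    to its set of identifiers."""
    return [rng for rng, tags in map_index.items()
            if any(query in tag for tag in tags)]

index = {
    (960, 1127): {"northern song", "kaifeng"},
    (1127, 1279): {"southern song", "linan"},
}
# looking up "southern song" selects the Southern Song map's range
matches = retrieve_maps("southern song", index)
```

A real implementation would return the map object itself and might rank partial matches; the substring test here stands in for whatever matching the search bar performs.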
205. In response to a selection instruction of an event occurrence area in the target history map, adding an event and marking position information of the event in the event occurrence area.
In the embodiment of the invention, after the user retrieves a target historical map, the video content is organized according to a theme of the user's choosing, the map is used as the background for generating the video, each event occurrence area is selected in the map, and, to record event information accurately, the position of each event in its occurrence area is marked.
206. And repeating the steps to add the events to obtain the position information in the event occurrence areas.
207. And connecting the position information in the event occurrence areas as an event path.
In the embodiment of the invention, the position information of the plurality of event occurrence areas must be connected in order. Specifically, the position information may be connected into an event path according to the time order in which the events are marked in the historical map, according to the number order in which they are added to the map, or according to the spatial order of their positions in the map.
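The three alternative orderings can be expressed as interchangeable sort keys (a sketch; the spatial key here assumes a top-to-bottom reading, which is only one of the distributions the earlier example mentions):

```python
def connect_positions(events, order="number"):
    """Sort marked events by one of the three orders the method
    describes - time marked, number added, or spatial position - and
    return their positions as the event path."""
    keys = {
        "time":    lambda e: e["time"],
        "number":  lambda e: e["number"],
        "spatial": lambda e: -e["position"][1],  # top-to-bottom
    }
    return [e["position"] for e in sorted(events, key=keys[order])]

events = [
    {"time": 9,  "number": 1, "position": (5, 5)},  # event A1, middle
    {"time": 11, "number": 2, "position": (5, 2)},  # event A2, bottom
    {"time": 6,  "number": 3, "position": (2, 8)},  # event A3, top
]
# "time" and "spatial" both yield A3-A1-A2; "number" yields A1-A2-A3
```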
Specifically, in practical application, the event-path generation process may proceed as shown in fig. 3. First the page of the target historical map is determined; the user may enter the characteristic information of the desired map, such as a place name or the name of an ancient state, in the page's search field, retrieve the matching target map from the drop-down list, and preview it. The user then selects an event occurrence area in the map, delimits the place where the event occurs, and clicks the "add event 1" button to add an event, mark its coordinate position, and add the name of event 1 to the map. In the same manner, more event occurrence areas can be selected and more events added through the "add event" button, each with its coordinate position marked. Each event receives a sequence label when added, and adjacent events are automatically connected according to their labels to form the path.
208. And inserting a multimedia material into the event path by taking the event path as a main time axis of animation editing.
It can be understood that after the event path is created successfully, it can be animated using the multimedia materials preconfigured in the material library. With the event path as the main timeline of the animation editing, the events on the path are played in the order of their labels and at a suitable speed.
In a specific application scenario, in order to facilitate a user to perform animation editing on an event path, a preview area, a material area, and a function area may be set in an interface of a video editing tool platform, a process of adding a multimedia material to the event path may be as shown in fig. 4, the user may preview a playing effect of the event path in the preview area, view the multimedia material inserted into a main timeline in the material area, and select different functions in the function area to complete an editing operation on the event path.
Selecting any event on the event path, taking that event as a sub-timeline for animation editing, and inserting multimedia material into the event on the sub-timeline.
It can be understood that, to enrich the content of the generated video, any event on the event path can serve as a sub-timeline of the animation editing to which multimedia material is added. The sub-timeline is equivalent to a branch of the main timeline, and the material inserted into an event on the sub-timeline plays as interleaved video content when playback of the event path reaches that event.
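The main-timeline/sub-timeline playback relationship can be sketched like this (a toy model; "playing" a material is reduced to appending its name, and all names are illustrative):

```python
def play(main_timeline, sub_timelines):
    """Walk the main timeline in order; when an event has a
    sub-timeline, its inserted materials play at that point before the
    walk continues - the sub-timeline is a branch of the main one."""
    played = []
    for event in main_timeline:
        played.append(event)
        played.extend(sub_timelines.get(event, []))
    return played

sequence = play(
    ["event 1", "event 2"],
    {"event 2": ["clip.mp4", "caption"]},
)
# event 2's sub-timeline materials play when the walk reaches event 2
```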
In a specific application scenario, to insert multimedia material into an event, the user selects the event from the event path on the main timeline and clicks the "add multimedia material" button to enter the event's editing interface. The process of adding material to an event is shown in fig. 5: first the position nodes of the events distributed on the main timeline, for example event 1 and event 2, are located; material is then inserted at event 2, selected from the videos/pictures in the material library; and when playback along the main timeline reaches event 2, the inserted material plays accordingly.
Illustratively, when inserting a video/picture into an event on the sub-timeline, as shown in fig. 6a, the user clicks the video/picture button and selects the video/picture to insert from the local material library according to actual requirements, adding it to the corresponding event on the sub-timeline as the video played with that event. After selection, the display duration of the material on the sub-timeline can be adjusted by dragging either end of the video/picture, and its position on the sub-timeline, that is, the time point at which it appears, can be adjusted by dragging its body. In addition, once a video/picture is selected, various operations become available through the function area's buttons: the split button divides the selected video/picture into two segments at the current frame, the copy button duplicates it, the restore button returns it to its initial state, and the delete button removes it.
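The trim and split operations just described can be modeled on a clip represented as a (start, end) pair (a sketch under that assumption; real clips would also carry the source file and sub-timeline offset):

```python
def split_clip(clip, frame):
    """Split a (start, end) clip into two segments at the current
    frame, mirroring the split button described above."""
    start, end = clip
    if not start < frame < end:
        raise ValueError("split point must fall inside the clip")
    return (start, frame), (frame, end)

def trim_clip(clip, new_start, new_end):
    """Dragging either end of the material on the sub-timeline adjusts
    its display duration to the new bounds."""
    return (new_start, new_end)

first, second = split_clip((0, 10), 4)
# the two segments cover the original clip with no gap
```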
For example, when inserting text into an event on the sub-timeline, as shown in fig. 6b, the user can click the text button and select the text or subtitle to insert from a local material library according to actual needs, adding it to the corresponding event on the sub-timeline as content played with that event, and adjusting the position and size of the text in the preview area. In addition, after the text material is selected, the user can also edit the position and display duration of the text on the sub-timeline; meanwhile, when the text is selected, various processing can be performed on it through the operation buttons displayed for text in the functional area. The specific processing is similar to the operations on video/picture and is not repeated here.
For example, when inserting audio into an event on the sub-timeline, as shown in fig. 6c, the user can click the audio button and select music or a recording to insert from a local material library according to actual requirements. When choosing to insert music, the interface jumps to a music list; the user can preview and use tracks from the recommended music list, or select an audio file from the local material library. When choosing to insert a recording, a recording panel slides up from the bottom of the page; the user starts recording by pressing and holding the recording button, while the preview area automatically plays the video so that the user can dub and narrate along with the video picture. Recording pauses when the recording button is released, and once the recording is complete, clicking the finish button in the recording panel ends the recording. In addition, after the audio material is selected, the user can also edit the position and display duration of the audio on the sub-timeline; meanwhile, when the audio is selected, various processing can be performed on it through the operation buttons displayed for audio in the functional area. The specific processing is similar to the operations on video/picture and is not repeated here.
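The press-to-record, release-to-pause, finish-to-end interaction above can be sketched as a small state machine; the class and state names are hypothetical, introduced only for illustration:

```python
class RecordingPanel:
    """Toy sketch of the recording panel: press records, release pauses,
    finish ends the session."""

    def __init__(self):
        self.state = "idle"
        self.segments = 0   # number of recorded takes

    def press_record(self):
        if self.state in ("idle", "paused"):
            self.state = "recording"   # preview area plays the video for dubbing

    def release_record(self):
        if self.state == "recording":
            self.segments += 1
            self.state = "paused"

    def finish(self):
        """Clicking the finish button ends recording."""
        self.state = "done"
        return self.segments

panel = RecordingPanel()
panel.press_record()
panel.release_record()
panel.press_record()
panel.release_record()
total = panel.finish()
```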
For example, when inserting a picture-in-picture into an event on the sub-timeline, as shown in fig. 6d, the user can click the picture-in-picture button and select a video or picture to insert from a local material library according to actual needs, adding it to the corresponding event on the sub-timeline as content played with that event, displayed as a layer above the main video, with its position and size adjustable in the preview area. In addition, after the picture-in-picture material is selected, the user can also edit its position and display duration on the sub-timeline; meanwhile, when the picture-in-picture is selected, various processing can be performed on it through the operation buttons displayed in the functional area. The specific processing is similar to the operations on video/picture and is not repeated here.
210. And editing the multimedia material to generate a video of a historical map.
It can be understood that, for clipping operations during editing, given the switching between the main timeline and sub-timelines, switching between the event path on the main timeline and an event on a sub-timeline can be realized by selecting the event path or the event with the clip button.
Illustratively, after the video editing task is completed, the process of video generation is as shown in fig. 6e. The user can preview the edited video through the next button at the upper right corner of the page; during the preview, the user can go back to the previous step and continue editing the video through the return button. Once it is finally confirmed that the video needs no further modification, clicking the "finish" button generates the video of the history map.
211. And responding to a video playing instruction, and sequentially playing the multimedia materials inserted in the event path according to the main time axis corresponding to the event path.
After video editing is finished, the user can check the video effect through the preview button. Specifically, in the process of playing the multimedia material inserted on the event path, it can be detected whether an event on the event path contains multimedia material; if so, a pause instruction for the main timeline is issued and playback switches to the sub-timeline corresponding to that event; the multimedia materials inserted in the event are then played in sequence according to the event's sub-timeline, and when they finish playing, playback switches back to the main timeline corresponding to the event path and continues with the multimedia material inserted on the event path.
In an actual application scenario, considering the playing effect of the video, the specific playing logic is as shown in fig. 7. Starting from the main timeline, events on the event path are played in sequence according to event number. When event 1 is played, it is detected whether event 1 contains multimedia material; if so, the main timeline is paused, playback switches to the sub-timeline corresponding to event 1, and the multimedia material of event 1 is played on that sub-timeline. After the sub-timeline finishes, playback switches back to the main timeline and continues with event 2 on the event path, where it is again detected whether event 2 contains multimedia material; similarly, if so, playback switches to the sub-timeline of event 2 to play its material and switches back to the main timeline afterwards, until all events on the main timeline have been played.
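The playing logic of fig. 7 can be sketched as a simple loop; the dictionary layout and function name below are illustrative assumptions, not the patent's implementation:

```python
def play_event_path(events):
    """Play events in number order along the main timeline; whenever an
    event carries multimedia material, pause the main timeline, play the
    event's sub-timeline, then switch back and continue."""
    log = []
    for event in sorted(events, key=lambda e: e["number"]):
        log.append(f"main: event {event['number']}")
        if event.get("materials"):               # detect multimedia material
            log.append("pause main timeline")
            for material in event["materials"]:  # play the sub-timeline
                log.append(f"sub: {material}")
            log.append("resume main timeline")
    return log

timeline = [
    {"number": 1, "materials": ["intro.mp4"]},
    {"number": 2, "materials": []},
]
trace = play_event_path(timeline)
```

Here event 1 triggers a switch to its sub-timeline, while event 2, having no material, plays straight through on the main timeline.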
Further, as a specific implementation of the methods in fig. 1 and fig. 2, an embodiment of the present application provides a video generating apparatus, as shown in fig. 8. The apparatus includes: a marking unit 31, an adding unit 32, a connecting unit 33, an inserting unit 34 and a generating unit 35.
A marking unit 31, which may be configured to add an event in response to a selection instruction of an event occurrence area in the target history map, and mark location information of the event in the event occurrence area;
an adding unit 32, which can be used to repeat the above steps to add events, and obtain location information in a plurality of event occurrence areas;
a connection unit 33 operable to connect the position information within the plurality of event occurrence areas as an event path;
an inserting unit 34, configured to insert multimedia material into the event and/or the event path on the main timeline with the event path as a main timeline for animation editing;
the generating unit 35 may be configured to perform an editing operation on the multimedia material to generate a video of a history map.
Compared with the prior-art approach of generating a video from the association order between materials alone, the video generation apparatus provided by this embodiment of the invention adds events in response to selection instructions for event occurrence areas in a target historical map, marks the position information of each event within its occurrence area, repeats the addition to obtain position information in a plurality of event occurrence areas, and connects that position information into an event path. The association order between events on the historical map is thus fully considered, and historical events can be restored more comprehensively. Further, with the event path as the main timeline for animation editing, multimedia material is inserted into the events and/or the event path on the main timeline and a video of the historical map is generated, without the user having to spend time manually arranging the multimedia material. This makes it convenient for the user to describe the events in the historical map more richly, clearly and vividly by combining the historical map with multimedia materials, ensuring that the materials are reasonably arranged in the generated video while improving the video playing effect.
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
a collecting unit 36, which may be configured to collect history map data in advance before adding an event in response to a selection instruction of an event occurrence area in the target history map and marking position information of the event in the event occurrence area;
the summarizing unit 37 may be configured to use a map tool to sort the historical map data in the same time scale range into map areas, so as to form historical maps mapped on different time scale ranges.
In a specific application scenario, as shown in fig. 9, the summarizing unit 37 includes:
a statistic module 371, configured to utilize a map tool to count a historical stage of the historical map data;
the summarizing module 372 may be configured to sort the historical map data in the same time scale range into a map area based on the time scale range corresponding to the historical stage, so as to form a historical map mapped on different time scale ranges.
In a specific application scenario, as shown in fig. 9, the summarizing unit 37 further includes:
the setting module 373 may be configured to, after the historical map data in the same time scale range is sorted into the map area by using the map tool to form the historical map mapped on different time scale ranges, set matching identifiers for the historical map on different time scale ranges based on the feature information describing the location in the historical map;
a retrieving module 374, configured to retrieve the target history map from the history maps in the different time scale ranges by using the matching identifier.
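The summarizing and retrieving behaviour described for units 37 and 374 can be illustrated roughly as follows; the record layout (`"stage"`, `"place"`) and both function names are hypothetical assumptions, not the patent's implementation:

```python
from collections import defaultdict

def summarize(map_data):
    """Sort historical map data into map areas by time scale range
    (each record is assumed to name its historical stage)."""
    maps = defaultdict(list)
    for record in map_data:
        maps[record["stage"]].append(record["place"])
    return dict(maps)

def retrieve(history_maps, identifier):
    """Retrieve the target history map whose matching identifier
    (derived from place feature information) contains the query."""
    return {stage: places for stage, places in history_maps.items()
            if identifier in places}

data = [
    {"stage": "1600-1650", "place": "Shanhai Pass"},
    {"stage": "1600-1650", "place": "Beijing"},
    {"stage": "1850-1900", "place": "Beijing"},
]
maps = summarize(data)
hits = retrieve(maps, "Shanhai Pass")
```

Records in the same time scale range are grouped into one map area, and a query against the place-based identifiers selects the target history map.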
In a specific application scenario, the connecting unit 33 may be specifically configured to connect the location information in the event occurrence areas as event paths according to a time sequence of the events marked in the history map; or
Connecting the position information in the event occurrence areas as event paths according to the number sequence of the events added in the history map; or
And connecting the position information in the event occurrence areas as event paths according to the spatial sequence of the positions of the events in the historical map.
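The three connection orderings above can be sketched as a single sorting function; the field names (`"time"`, `"number"`, `"position"`) are illustrative assumptions:

```python
def connect_event_path(events, order="time"):
    """Connect event positions into an event path by one of the three
    orderings: event time, event number, or spatial position."""
    if order == "time":        # time order of the events marked in the map
        key = lambda e: e["time"]
    elif order == "number":    # order in which events were added
        key = lambda e: e["number"]
    elif order == "spatial":   # spatial order of event positions
        key = lambda e: e["position"]
    else:
        raise ValueError(f"unknown order: {order}")
    return [e["position"] for e in sorted(events, key=key)]

events = [
    {"number": 2, "time": 1644, "position": (40.0, 116.4)},
    {"number": 1, "time": 1616, "position": (41.8, 123.4)},
]
```

Any of the three keys yields a path, i.e. an ordered list of marked positions on the history map.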
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
the selecting unit 38 may be configured to, after inserting a multimedia material into the event path by using the event path as a main timeline for animation editing, select any event on the event path, and insert a multimedia material into any event on a sub timeline by using the any event as a sub timeline for animation editing.
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
the playing unit 39 may be configured to, after inserting a multimedia material into the event and/or the event path on the main timeline with the event path as the main timeline for the animation editing, sequentially play the multimedia material inserted in the event path according to the main timeline corresponding to the event path in response to a video playing instruction.
In a specific application scenario, as shown in fig. 9, the playing unit 39 includes:
the detecting module 391 may be configured to detect whether an event in the event path includes multimedia material during playing the multimedia material inserted in the event path;
a switching module 392, configured to, if yes, issue a pause playing instruction of the main timeline, and switch to a sub timeline corresponding to the event;
the playing module 393 may be configured to sequentially play the multimedia materials inserted in the event according to the sub-time axis corresponding to the event, switch back to the main time axis corresponding to the event path after the playing of the multimedia materials is finished, and continue to play the multimedia materials inserted in the event path.
It should be noted that other corresponding descriptions of the functional units related to the video generating apparatus provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the methods shown in fig. 1-2, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video generation method shown in fig. 1-2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the method shown in fig. 1-2 and the virtual device embodiment shown in fig. 8-9, to achieve the above object, an embodiment of the present application further provides a video generation entity device, which may specifically be a computer, a smart phone, a tablet computer, a smart watch, a server, or a network device, and the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the video generation method as described above and shown in fig. 1-2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
In an exemplary embodiment, referring to fig. 10, the entity device 400 includes a communication bus, a processor, a memory and a communication interface, and may further include an input/output interface and a display device, wherein the functional units may communicate with each other through the bus. The memory stores a computer program, and the processor is used for executing the program stored in the memory to perform the video generation method in the above embodiments.
Those skilled in the art will appreciate that the video generation entity device structure provided by this embodiment does not constitute a limitation on the entity device, which may include more or fewer components, combine certain components, or adopt a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the entity device and supports the operation of the information processing program and other software and/or programs. The network communication module is used to realize communication among the components inside the storage medium, as well as with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, or by hardware. Applying the technical scheme of this application, compared with existing approaches, the event path fully considers the association order between events on the historical map, so historical events can be restored more comprehensively; further, with the event path as the main timeline for animation editing, multimedia material is inserted into the events and/or the event path on the main timeline and a video of the historical map is generated, without the user having to spend time manually arranging the material. This makes it convenient for the user to describe the events in the historical map more richly, clearly and vividly by combining the historical map with multimedia materials, ensuring reasonable arrangement of the materials in the generated video and improving the video playing effect.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (11)

1. A method of video generation, comprising:
responding to a selection instruction of an event occurrence area in a target historical map, adding an event, and marking position information of the event in the event occurrence area;
repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
connecting the position information in the event occurrence areas as an event path;
inserting multimedia materials into the events and/or the event paths on the main time axis by taking the event paths as the main time axis for animation editing;
and editing the multimedia material to generate a video of a historical map.
2. The method according to claim 1, wherein before the adding an event in response to a selection instruction of an event occurrence area in the target history map and marking position information of the event in the event occurrence area, the method further comprises:
collecting historical map data in advance;
and sorting the historical map data in the same time scale range into map areas by using a map tool to form historical maps mapped on different time scale ranges.
3. The method according to claim 2, wherein the sorting of the historical map data on the same time scale range into map areas by using a map tool to form a historical map mapped on different time scale ranges comprises:
utilizing a map tool to count the historical stage of the historical map data;
and sorting the historical map data in the same time scale range into map areas based on the time scale range corresponding to the historical stage to form the historical map mapped on different time scale ranges.
4. The method of claim 2, wherein after the sorting the historical map data on the same time scale range into map regions by using the map tool to form the historical map mapped on different time scale ranges, the method further comprises:
setting matching identifications for the historical maps in different time scale ranges based on characteristic information describing places in the historical maps;
and retrieving a target historical map from the historical maps in different time scale ranges by using the matching identifier.
5. The method according to any one of claims 1 to 4, wherein the connecting the location information in the event occurrence areas as an event path specifically comprises:
connecting the position information in the event occurrence areas as event paths according to the time sequence of the events marked in the history map; or
Connecting the position information in the event occurrence areas as event paths according to the number sequence of the events added in the history map; or
And connecting the position information in the event occurrence areas as event paths according to the spatial sequence of the positions of the events in the historical map.
6. The method according to claim 1, wherein after inserting multimedia material into the event and/or event path on the main timeline with the event path as the main timeline for animation editing, the method further comprises:
and inserting a multimedia material into any event on the sub-time axis by selecting any event on the event path and taking the event as the sub-time axis for animation editing.
7. The method according to claim 1, wherein after inserting multimedia material into the event and/or event path on the main timeline with the event path as the main timeline for animation editing, the method further comprises:
and responding to a video playing instruction, and sequentially playing the multimedia materials inserted in the event path according to the main time axis corresponding to the event path.
8. The method according to claim 6 or 7, wherein the playing the multimedia material inserted in the event path in sequence according to the main time axis corresponding to the event path specifically comprises:
detecting whether the event on the event path contains multimedia materials or not in the process of playing the multimedia materials inserted on the event path;
if yes, sending a pause playing instruction of the main time shaft, and switching to a sub time shaft corresponding to the event;
and sequentially playing the multimedia materials inserted in the event according to the sub time axis corresponding to the event until the multimedia materials are played, switching back to the main time axis corresponding to the event path, and continuously playing the multimedia materials inserted in the event path.
9. A video generation apparatus, comprising:
the marking unit is used for responding to a selection instruction of an event occurrence area in the target historical map, adding an event and marking the position information of the event in the event occurrence area;
the adding unit is used for repeating the steps to add the events to obtain position information in a plurality of event occurrence areas;
a connection unit configured to connect the position information in the plurality of event occurrence areas as an event path;
the inserting unit is used for taking the event path as a main time axis of animation editing and inserting multimedia materials into the events and/or the event path on the main time axis;
and the generating unit is used for carrying out editing operation on the multimedia material and generating a video of a history map.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the video generation method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video generation method according to any one of claims 1 to 8.
CN202010759141.7A 2020-07-31 2020-07-31 Video generation method, device and equipment Active CN112004031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010759141.7A CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010759141.7A CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN112004031A true CN112004031A (en) 2020-11-27
CN112004031B CN112004031B (en) 2023-04-07

Family

ID=73464159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010759141.7A Active CN112004031B (en) 2020-07-31 2020-07-31 Video generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN112004031B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268746A (en) * 2021-12-20 2022-04-01 北京百度网讯科技有限公司 Video generation method, device, equipment and storage medium
CN114466222A (en) * 2022-01-29 2022-05-10 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium
WO2022252916A1 (en) * 2021-06-03 2022-12-08 北京字跳网络技术有限公司 Method and apparatus for generating special effect configuration file, device and medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030016239A1 (en) * 2001-07-19 2003-01-23 Christopher Teresa Michelle Method and apparatus for providing a graphical depiction of events
US20040218894A1 (en) * 2003-04-30 2004-11-04 Michael Harville Automatic generation of presentations from "path-enhanced" multimedia
JP2005110111A (en) * 2003-10-01 2005-04-21 Sony Corp Image reproducer, method for editing image, program and recording medium
US20080194268A1 (en) * 2006-10-31 2008-08-14 Robert Koch Location Stamping and Logging of Electronic Events and Habitat Generation
CN102243632A (en) * 2010-05-13 2011-11-16 成都索贝数码科技股份有限公司 Method and system for searching materials based on electronic map
US20130007669A1 (en) * 2011-06-29 2013-01-03 Yu-Ling Lu System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof
US20140280122A1 (en) * 2013-03-15 2014-09-18 Ambient Consulting, LLC Content clustering system and method
CN104714960A (en) * 2013-12-13 2015-06-17 方正国际软件(北京)有限公司 Method and system for adsorbing multimedia information through track points
CN104835187A (en) * 2015-05-19 2015-08-12 北京三六三互动教育科技有限公司 Animation editor and editing method thereof
US20170091291A1 (en) * 2015-09-30 2017-03-30 International Business Machines Corporation Historical summary visualizer for news events
US20170351392A1 (en) * 2013-03-15 2017-12-07 Ambient Consulting, LLC Content Presentation and Augmentation System and Method
US10656797B1 (en) * 2019-02-06 2020-05-19 Snap Inc. Global event-based avatar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴宇翔 (Wu Yuxiang): "Research on Video Maps and Their Generation Methods" (视频地图及其生成方法研究), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022252916A1 (en) * 2021-06-03 2022-12-08 北京字跳网络技术有限公司 Method and apparatus for generating special effect configuration file, device and medium
CN114268746A (en) * 2021-12-20 2022-04-01 北京百度网讯科技有限公司 Video generation method, device, equipment and storage medium
CN114268746B (en) * 2021-12-20 2023-04-28 北京百度网讯科技有限公司 Video generation method, device, equipment and storage medium
EP4198769A1 (en) * 2021-12-20 2023-06-21 Beijing Baidu Netcom Science Technology Co., Ltd. Video generation method and apparatus, electronic device, non-transitory computer-readable storage medium, and computer program product
CN114466222A (en) * 2022-01-29 2022-05-10 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium
CN114466222B (en) * 2022-01-29 2023-09-26 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112004031B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN112004031B (en) Video generation method, device and equipment
CN1610904B (en) Moving image data management apparatus and method
CN103702039B (en) image editing apparatus and image editing method
US7903927B2 (en) Editing apparatus and control method thereof, and program and recording medium
US7900161B2 (en) Data display apparatus, data display method, data display program and graphical user interface
CN101150699B (en) Information processing apparatus, information processing method
US8700635B2 (en) Electronic device, data processing method, data control method, and content data processing system
US7382973B2 (en) Data processing apparatus, information processing system and computer-readable recording medium recording selecting program
US20070101266A1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
CN106331869B (en) Video-based picture re-editing method and device
JPH08249348A (en) Method and device for video retrieval
JP2004228779A (en) Information processor
KR101440168B1 (en) Method for creating a new summary of an audiovisual document that already includes a summary and reports and a receiver that can implement said method
WO2010084585A1 (en) Information guidance system
US10848831B2 (en) Methods, systems, and media for providing media guidance
CN108924622A (en) A kind of method for processing video frequency and its equipment, storage medium, electronic equipment
CN113395605B (en) Video note generation method and device
US20030177493A1 (en) Thumbnail display apparatus and thumbnail display program
CN110287464A (en) The methods of exhibiting, device of option data, computer equipment and computer storage medium in list
CN113918522A (en) File generation method and device and electronic equipment
CN112887794B (en) Video editing method and device
JP5552987B2 (en) Search result output device, search result output method, and search result output program
US7844163B2 (en) Information editing device, information editing method, and computer product
CN106605412A (en) Improved interface for accessing television programs
WO2008087742A1 (en) Moving picture reproducing system, information terminal device and information display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant