CN113596557B - Video generation method and device - Google Patents

Video generation method and device

Info

Publication number
CN113596557B
CN113596557B (application CN202110772508.3A)
Authority
CN
China
Prior art keywords
subtitle
video
target
video generation
duration
Prior art date
Legal status
Active
Application number
CN202110772508.3A
Other languages
Chinese (zh)
Other versions
CN113596557A (en)
Inventor
潘重光
Current Assignee
Dalian Situne Technology Development Co ltd
Original Assignee
Dalian Situne Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Dalian Situne Technology Development Co., Ltd.
Priority to CN202110772508.3A
Publication of CN113596557A
Application granted
Publication of CN113596557B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services for displaying subtitles

Abstract

An embodiment of the application discloses a video generation method and apparatus. The method includes: displaying a video generation interface containing a shooting object; in response to a selection instruction for subtitle configuration parameters, determining a target subtitle to be generated and a video generation duration matched with the target subtitle; and in response to a selection instruction for a subtitle generation control, generating a target video based on the target subtitle and the matched video generation duration. With this method, a video segment matched with the target subtitle can be generated synchronously as the target subtitle is added through controls in the video generation interface, which greatly simplifies the clipping of video segments and improves video editing efficiency.

Description

Video generation method and device
Technical Field
The present application relates to the field of internet technologies, and in particular, to a video generation method and apparatus.
Background
At present, professional shooting teams are usually engaged for activities such as sports events, business conferences, academic forums, and entertainment shows to record on-site video. The recorded video is also edited into video segments, such as highlights of a sports event or short promotional clips of entertainment performances, for sharing and forwarding.
In a typical video editing scheme, an editor repeatedly browses the complete video to select material and then separately produces matched subtitles, voice-overs, and other post-production effects for the selected material. Current video editing schemes therefore involve a complex workflow and low production efficiency.
Disclosure of Invention
The embodiment of the application provides a video generation method and device, which are used for synchronously generating video segments related to video subtitles when the video subtitles are added, so that the video editing efficiency is improved.
In a first aspect, an embodiment of the present application provides a video generation method, including:
displaying a video generation interface containing a shooting object, wherein a subtitle editing area is superimposed in the video generation interface, and a subtitle generation control and a subtitle configuration parameter corresponding to the shooting object are loaded in the subtitle editing area;
responding to a selection instruction of subtitle configuration parameters, and determining a target subtitle to be generated and video generation duration matched with the target subtitle;
responding to a selection instruction of a subtitle generation control, and generating a target video based on the target subtitle and the video generation duration;
the target subtitles are superposed in the target video, and the duration of the target video is the video generation duration.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
the display module is used for displaying a video generation interface containing a shot object, wherein a subtitle editing area is superposed in the video generation interface, and a subtitle generation control and a subtitle configuration parameter corresponding to the shot object are loaded in the subtitle editing area;
the processing module is used for responding to a selection instruction of the subtitle configuration parameters and determining a target subtitle to be generated and video generation duration matched with the target subtitle; responding to a selection instruction of a subtitle generation control, and generating a target video based on the target subtitle and the video generation duration; the target subtitles are superimposed in the target video, and the duration of the target video is the video generation duration.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores executable code, and when the executable code is executed by the processor, the processor is enabled to implement at least the method in the first aspect.
In a fourth aspect, embodiments of the present application provide a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the method of the first aspect.
In an embodiment of the application, a video generation interface containing a shooting object is displayed; a subtitle editing area is superimposed on the video generation interface, and a subtitle generation control and subtitle configuration parameters corresponding to the shooting object are loaded in the subtitle editing area. Through the subtitle editing area, a user can quickly edit the target subtitle to be generated: the target subtitle is configured by selecting subtitle configuration parameters, and after the subtitle generation control is selected, the target video is generated based on the target subtitle and the matched video generation duration, with the target subtitle superimposed in the target video and the target video's length equal to the video generation duration. This realizes a new video generation mode in which the video segment matched with the target subtitle is generated synchronously as the subtitle is added, without repeatedly selecting material or separately producing post-production effects, greatly simplifying the clipping of video segments and improving video editing efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video generation method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a video generation interface according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of another video generation interface provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device corresponding to the video generation apparatus provided in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below completely and clearly with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The word "if", as used herein, may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined", "in response to determining", "when detected (a stated condition or event)", or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the article or system comprising that element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The following description will explain the operation principle of the video generation system provided in the present application with reference to the following embodiments.
With the popularization of terminal devices, video has become one of the important means by which people follow information and current topics. For example, professional shooting teams are commonly engaged for activities such as sporting events, business meetings, academic forums, and entertainment shows to record on-site video, and that video is also edited into segments, such as highlights of a sporting event or short promotional clips, for sharing and forwarding.
In a typical video editing scheme, material is selected by repeatedly browsing the complete video, and matched post-production effects such as subtitles and voice-overs are produced separately for the selected material; the current video editing scheme therefore involves a complex workflow and low production efficiency.
Taking golf as an example, a golf tournament is a large outdoor sporting event. A live-broadcast team is typically engaged to shoot and stream on-site video. Besides the live-broadcast team, a dedicated production team is also needed to select video material of each player from the live footage and separately produce matched subtitles for it, yielding highlight segments such as key moments and player replays.
The scoring rules of golf are complex and the number of players is large. Each player's progress is affected by factors such as their own pace of play and their route through the course, so multiple groups at different stages of play often proceed simultaneously. This complicates the selection and post-production of video material: producing the highlight segments described above often takes a great deal of time, making it difficult to guarantee that videos are shared in a timely manner.
In view of the foregoing technical problems, embodiments of the present application provide a video generation method and apparatus. The method provides a video generation interface containing a shooting object; a subtitle editing area is superimposed on the interface, and a subtitle generation control and subtitle configuration parameters corresponding to the shooting object are loaded in the subtitle editing area. Through the subtitle editing area, a user can quickly edit the target subtitle to be generated, and the target video is generated based on the target subtitle and the matched video generation duration. This realizes a new video generation mode in which the video segment matched with the target subtitle is generated synchronously as the subtitle is added, without repeatedly selecting material or separately producing post-production effects, greatly simplifying the clipping of video segments and improving video editing efficiency.
Technical solutions provided by embodiments of the present application are described in detail below with reference to the accompanying drawings.
The technical scheme provided by the embodiment of the application can be executed by an electronic device, and the electronic device can be a server. The server may be a physical server including an independent host, or may also be a virtual server carried by a host cluster, or may also be a cloud server. The electronic device may also be a terminal device such as a tablet computer, PC, notebook computer, etc. Of course, the technical solution may also be executed by the server in cooperation with the terminal device, or by cooperation of a plurality of electronic devices, which is not limited in this application.
An embodiment of the present application provides a video generation method; fig. 1 is a schematic flowchart of the video generation method provided in an exemplary embodiment of the present application. The steps of the video generation method may be executed by the same device or by different devices. As shown in fig. 1, the method includes:
101. displaying a video generation interface containing a shooting object;
102. responding to a selection instruction of the subtitle configuration parameters, and determining a target subtitle to be generated and video generation duration matched with the target subtitle;
103. and responding to a selection instruction of the subtitle generation control, and generating a target video based on the target subtitle and the matched video generation duration.
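The three-step flow of 101 to 103 can be sketched as a minimal event-driven controller. This is a hypothetical illustration only; the class and method names below are not from the patent, and real devices would render video rather than return dictionaries.

```python
# Minimal sketch of the claimed three-step flow (names are illustrative).
class VideoGenerator:
    def __init__(self):
        self.target_subtitle = None
        self.duration = None

    def display_interface(self, subject):
        """Step 101: show the video generation interface for a shooting object."""
        return {"subject": subject, "subtitle_edit_area": True}

    def on_select_config(self, subtitle_text, matched_duration):
        """Step 102: a subtitle-configuration selection fixes both the
        target subtitle and its matched video generation duration."""
        self.target_subtitle = subtitle_text
        self.duration = matched_duration

    def on_generate(self):
        """Step 103: generate the target video; the subtitle is superimposed
        and the video length equals the matched duration."""
        return {"subtitle": self.target_subtitle, "duration": self.duration}

gen = VideoGenerator()
gen.display_interface("player_1")
gen.on_select_config("Player is on the green", 10)
video = gen.on_generate()
```

The key property of the flow is that step 102 binds the subtitle and the clip length in a single selection, so step 103 needs no further editing input.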
According to the method, the target subtitle is configured by selecting the subtitle configuration parameters in the subtitle editing area, the target video is generated based on the target subtitle and the matched video generation duration, a brand-new video generation mode is achieved, the video segment matched with the target subtitle is synchronously generated when the target subtitle is added, the editing process of the video segment is greatly simplified, and the video editing efficiency is improved.
In practical applications, the above steps may be implemented by one electronic device or by several electronic devices in cooperation, for example mobile devices such as cell phones and cameras. Taking a mobile phone as an example, the method can be implemented by a dedicated application installed on the phone, by a mini-program embedded in an instant messaging application or other application, or by a phone application calling a cloud server.
Taking a golf game as an example, the shooting device may be a mobile terminal such as a mobile phone, a wearable device such as smart glasses, or professional shooting equipment such as a camera loaded with the applications and hardware modules that implement the shooting-end functions. The camera may be held by a professional camera operator, or by a caddie, a coach, or another player in the same group.
To better capture each player's performance, additional game-related video may be recorded. Optionally, terminal devices entering a preset spatial range can register with the service platform, so that the registered devices act as shooting ends and upload the videos they shoot to the platform.
For example, after players, related staff (caddies, coaches, course staff, etc.), or live spectators enter the competition venue or its vicinity, they can register with the service platform through the terminal devices they carry. The registered devices then act as shooting ends and upload videos of, for example, course conditions, the stands, player interactions on site, and player interviews.
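The "preset spatial range" registration check described above could be a simple geofence test. The sketch below is a hypothetical illustration: the venue coordinates, radius, and function names are assumptions, not values from the patent, and a real platform would use its own device-registration protocol.

```python
import math

# Hypothetical sketch: a terminal device may register as a shooting end
# only while it is inside the preset spatial range around the venue.
VENUE = (38.91, 121.61)  # illustrative venue coordinates (lat, lon)
RADIUS_KM = 2.0          # illustrative preset spatial range

def within_range(lat, lon, venue=VENUE, radius_km=RADIUS_KM):
    # Equirectangular approximation; adequate over a few kilometres.
    dlat = math.radians(lat - venue[0])
    dlon = math.radians(lon - venue[1]) * math.cos(math.radians(venue[0]))
    return 6371.0 * math.hypot(dlat, dlon) <= radius_km

registered = []

def register(device_id, lat, lon):
    """Register the device as a shooting end if it is inside the range."""
    if within_range(lat, lon):
        registered.append(device_id)
        return True
    return False
```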
The following describes each step in the video generation method with reference to a specific embodiment.
First, in 101, a video generation interface including a photographic subject is displayed.
The shooting object may be determined by the type of activity: for example, players, spectators, and referees at a sporting event, or actors, presenters, and spectators at an entertainment show. Besides the human participants in the examples above, scene elements of an activity can also be set as shooting objects, for example a tee, club, net, or playing field at a sporting event, or the panorama of a performance venue, set pieces, smoke, and fireworks at an entertainment show.
It should be noted that there are various ways to select the photographic subject. The shooting objects to be displayed on the video generation interface can be selected in the following modes:
In an optional embodiment, the shooting object in the video generation interface can be selected manually from multiple candidate objects. For example, one or more candidate objects in the interface are identified and outlined with bounding boxes, and the user taps the box corresponding to the desired shooting object. Optionally, the visual characteristics of candidate objects can be entered in advance; for example, before the game begins, an image of each player's face or apparel is captured.
In another optional embodiment, the shooting device can be moved so that a candidate object moves to a designated position in the video generation interface and is thereby identified as the shooting object. For example, when the focus mode is center focus, the candidate object at the center of the video generation interface is taken as the shooting object.
Optionally, a director conversation control, a volume adjustment control, a lens switching control, and a live-broadcast state switching control may also be loaded in the video generation interface. These are used to adjust the display state of the shooting object in the video generation interface.
After the shooting object is selected in the above mode, the target video can be generated through the subtitle editing area loaded in the video generation interface.
In the embodiment of the application, a subtitle editing area is superimposed on the video generation interface. The subtitle editing area is mainly used for editing subtitle content: the user configures the specific content of the subtitle by selecting subtitle configuration parameters in the area. The specific subtitle content includes, for example, preset subtitle text and a preset blank field for carrying lead-in information. Depending on actual requirements, the subtitle settings may further include font color, font size, font type, and so on.
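The subtitle configuration described above (preset text, a blank lead-in field, and font settings) could be modeled as a small data structure. This is a hypothetical sketch; the field names and defaults are illustrative assumptions, not definitions from the patent.

```python
from dataclasses import dataclass

# Hypothetical data model for the subtitle configuration parameters the
# editing area exposes (field names are illustrative, not from the patent).
@dataclass
class SubtitleConfig:
    preset_text: str = ""        # preset subtitle content
    lead_in: str = ""            # blank field carrying lead-in information
    font_color: str = "#FFFFFF"  # font color setting
    font_size: int = 24          # font size setting
    font_type: str = "sans-serif"

    def render(self):
        """Compose the final subtitle string from lead-in and preset text."""
        return f"{self.lead_in} {self.preset_text}".strip()

cfg = SubtitleConfig(preset_text="is on the green", lead_in="Player 1")
```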
Optionally, a subtitle editing control is also loaded in the video generation interface. The subtitle editing area can be called by a subtitle editing control of the video generation interface. For example, triggering a subtitle edit control may call out a subtitle edit region in a video generation interface.
Optionally, before 102, the step of loading the subtitle configuration parameter corresponding to the photographic subject in the subtitle editing region specifically includes: and identifying the shooting object currently displayed in the video generation interface, and loading the subtitle configuration parameters corresponding to the shooting object currently displayed in the subtitle editing area.
In another embodiment, before 102, if a plurality of object identifiers are loaded in the subtitle editing area, in response to a selection instruction for the object identifiers, subtitle configuration parameters corresponding to the selected object identifiers are loaded in the subtitle editing area.
In this application, the subtitle editing area can float above the video generation interface or occupy a set region of it. The layout of the subtitle editing area can be adapted to the video generation interface: for example, the subtitle editing area shown in fig. 2 is placed on the right or left side of the interface according to the user's habits (such as dominant hand), so subtitles can be edited with one hand. The subtitle editing area shown in fig. 3 may be divided into two parts placed on the left and right sides of the interface, so subtitles are edited with both hands. Alternatively, different portions of the subtitle editing area can be presented at different times; for example, in fig. 3, subtitle editing area 1 is displayed first, and subtitle editing area 2 is displayed after a player or team is selected.
It is to be understood that the layout of the subtitle editing area in the video generation interface is not limited to the above examples. The area can be arranged according to the interface form of different terminal devices or the habits of different users. For example, if the subtitle editing area has no fixed extent, the controls loaded in it may be distributed dynamically across the video generation interface: the controls are first laid out at initial positions, and the user can then drag or otherwise move them to suit personal habits or preferences.
In the present application, the controls loaded in the subtitle editing area may be determined by the activity type. For example, when the shooting object is a player at a sporting event, the controls are set according to the rules of the event, such as event rules, scoring rules, and score-setting rules. As another example, when the shooting object is a participant or scene in a commercial activity, the controls are set according to factors such as the activity schedule, the camera positions on site, and the shooting angles. Optionally, a subtitle generation control and subtitle configuration parameters corresponding to the shooting object are loaded in the subtitle editing area.
Taking a golf game as an example, the subtitle editing area shown in fig. 2 is loaded with a subtitle generation control, an undo control, and subtitle configuration parameters corresponding to the shooting object. Based on this interface, in 102, in response to a selection instruction for the subtitle configuration parameters, the target subtitle to be generated and the video generation duration matched with it are determined. For different activity types, the subtitle configuration parameters associated with the current activity type can be displayed, so the content of the subtitle editing area can be adjusted flexibly.
Optionally, the subtitle configuration parameter may be associated with a video generation duration. Thus, in 102, the target subtitle content to be generated and the associated video generation duration are determined based on the selected subtitle configuration parameters.
Specifically, in the subtitle editing area of fig. 2, the subtitle configuration parameters corresponding to the shooting object are the group, the players in the group (i.e., the shooting objects), the current stroke count, and the drop-point selection. The number of players in a group can be set according to the actual situation.
In fig. 2, the drop-point selection region contains a number of controls built from drop-point types, including but not limited to: fairway, green, holed, lost ball, water hazard, out of bounds, fairway bunker, greenside bunker, rough, obstruction, penalty, and drop without penalty. Optionally, the controls shown in the drop-point region may be set according to the player's state; for example, based on the state after player 1's last stroke, some controls such as green or obstruction may be hidden. The undo control cancels the currently selected subtitle configuration parameter.
Based on this subtitle editing area, in 102, the currently locked shooting object can be determined by selecting the group and the player within that group. Then, the drop-point type is selected to configure the specific content of the target subtitle. In practice, the specific format and wording of the target subtitle, and the matched video generation duration, can be preset.
For example, based on the interface shown in fig. 2, selecting controls such as player 1 and green determines the format and content of the target subtitle, e.g., "Player 1 is on the green". On this basis, the video generation duration matched with "green" is further determined to be, say, 10 s, thereby determining the video generation duration matched with the target subtitle.
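The association between a drop-point selection and its matched duration could be a simple preset lookup. In the sketch below, the 10 s value for "green" follows the example above; the other durations, the subtitle template, and all names are illustrative assumptions, not values from the patent.

```python
# Hypothetical preset mapping from drop-point type to matched video
# generation duration (seconds). Only the "green" value follows the
# worked example in the text; the rest are illustrative.
DROP_POINT_DURATIONS = {
    "fairway": 8,
    "green": 10,
    "holed": 15,
    "bunker": 8,
}

def target_subtitle_and_duration(player, drop_point, default=8):
    """Build the target subtitle and look up its matched duration."""
    subtitle = f"{player} is on the {drop_point}"
    return subtitle, DROP_POINT_DURATIONS.get(drop_point, default)

subtitle, duration = target_subtitle_and_duration("Player 1", "green")
```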
In this application, the display state of a control can optionally be updated according to the selected subtitle configuration parameters. For example, if the holed control shown in fig. 2 is selected, a "holed" label is appended after the current stroke-count control to indicate the player's current state, and the corresponding drop-point control is updated to the selected state.
In step 103, in response to a selection instruction of the subtitle generation control, a target video is generated based on the target subtitle and the matched video generation duration. The selection instruction can be generated by a user triggering the subtitle generation control, or generated automatically by the device. In the former case, a selection instruction issued by the user is received; in the latter, after a pre-registered photographic subject is recognized as appearing in the video generation interface, a selection instruction of the subtitle generation control is generated automatically.
Optionally, the target video to be stored is tagged based on the selected subtitle configuration parameters. For example, the target video is named based on the subtitle configuration parameters. Specifically, taking a golf game as an example, the naming rule of the target video covers, for example, the event identifier, player, stroke count, and location. For instance, in event 8599, the video of player Zhang San holing out with his third stroke on the 18th hole may be named Zhangthree_H18_S3Holed_01_8599.
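The naming rule above can be sketched as follows; the helper name, parameter order, and field layout are illustrative assumptions inferred from the example name, not part of the patent.

```python
def name_target_video(player, hole, stroke, result, seq, event_id):
    """Assemble a target-video name from the selected subtitle
    configuration parameters (field order follows the example above;
    all names here are illustrative)."""
    return f"{player}_H{hole}_S{stroke}{result}_{seq:02d}_{event_id}"

# Third-stroke hole-out of player "Zhangthree" on hole 18 in event 8599:
print(name_target_video("Zhangthree", 18, 3, "Holed", 1, "8599"))
# → Zhangthree_H18_S3Holed_01_8599
```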
In the above or following embodiments of the present application, there are a plurality of devices that display the video generation interface containing the photographic subject, and the plurality of devices include a master device and slave devices. For example, a plurality of accounts are bound in advance, with the primary account set as the master device and the secondary accounts set as slave devices. On this basis, in an embodiment of the present application, in response to a selection instruction of a subtitle generation control on a slave device, the master device is triggered to generate the target video based on the target subtitle and the video generation duration. For example, the primary account is responsible for shooting (and may also handle subtitles), while a slave device is responsible for subtitles. The subtitle operation interface of the slave device is the same as that of the primary account; when a subtitle is clicked, a corresponding instruction is automatically sent to the primary account, which is controlled to automatically superimpose the corresponding target subtitle and record the target video.
In the subtitle editing area shown in fig. 2, the subtitle generation controls include, but are not limited to: a track map control for triggering generation of track map subtitles, a composition score subtitle control for triggering generation of group score subtitles, a name subtitle control for triggering generation of name subtitles, and a stroke subtitle control for triggering generation of stroke subtitles. Specifically, the track map control is used to generate a drop-point subtitle for a player's stroke.
Specifically, in an alternative embodiment, assume that the picture containing the photographic subject shown in the video generation interface is a live video. In other words, the live picture is displayed in the video generation interface in real time, so that in subsequent steps the target video can be cut directly from the live picture.
Based on this, in 103, in response to a selection instruction of a subtitle generation control, a target video is generated based on the target subtitle and the video generation duration, which specifically includes:
generating a target subtitle based on the selected subtitle configuration parameter; superimposing the target subtitle in the live video; and cutting, from the live video, a video clip that carries the superimposed target subtitle and whose length equals the video generation duration, as the target video.
Specifically, a selection instruction of the stroke subtitle control is received, a target subtitle is generated based on the selected subtitle configuration parameters, and the target subtitle is superimposed onto the live video displayed on the current interface. At the same time, the selection instruction triggers a recording operation on the live video displayed on the current interface, with the recording length equal to the video generation duration matched with the target subtitle, thereby obtaining the target video matched with the target subtitle. The target subtitle is superimposed in the target video, and the length of the target video is the video generation duration matched with the target subtitle.
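The overlay-and-cut step can be sketched as below. This is a toy model, not the patented implementation: frames are list items, and a real system would drive a video pipeline (e.g. ffmpeg) instead.

```python
def cut_target_clip(live_frames, subtitle, duration_s, fps=25):
    """Superimpose the target subtitle on the live frames and cut a clip
    whose length equals the matched video generation duration.
    (Illustrative sketch: a frame is any list item; `fps` is assumed.)"""
    n_frames = int(duration_s * fps)
    return [(frame, subtitle) for frame in live_frames[:n_frames]]

# A 10 s target clip at 25 fps from a running live feed:
clip = cut_target_clip(range(1000), "Player 1 on the green", 10)
# len(clip) == 250, every frame carries the target subtitle
```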
In practical applications, different subtitle configuration parameters correspond to different video generation durations. Taking golf as an example, the duration corresponding to a tee-shot subtitle is 20 s, a normal stroke subtitle (e.g., via the stroke subtitle control) 15 s, a green putt subtitle 12 s, a today's-score subtitle (e.g., via the composition score subtitle control) 8 s, a hole-by-hole score subtitle 15 s, and a track map subtitle 25 s.
Alternatively, different subtitle generation controls correspond to different video generation durations. For example, the video generation duration corresponding to the track map control is 10 s, the composition score subtitle control 8 s, and the name subtitle control 5 s. Further, the video generation duration corresponding to the stroke subtitle control is determined according to the selected subtitle configuration parameter: for example, 6 s for the holed control, 7 s for the fairway control, and 6.5 s for the green control.
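The two-level duration lookup just described can be sketched as follows; the dictionary keys and values mirror the examples above, but the identifiers themselves are illustrative assumptions.

```python
# Durations per subtitle generation control (from the examples above)
CONTROL_DURATION_S = {"track_map": 10, "composition_score": 8, "name": 5}
# For the stroke subtitle control, the duration depends on the drop point
DROP_POINT_DURATION_S = {"holed": 6, "fairway": 7, "green": 6.5}

def video_generation_duration(control, drop_point=None):
    """Return the video generation duration matched with a subtitle
    generation control; the stroke control falls through to the
    selected drop-point parameter."""
    if control == "stroke":
        return DROP_POINT_DURATION_S[drop_point]
    return CONTROL_DURATION_S[control]
```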
Optionally, a generation delay control may also be loaded in the video generation interface. On this basis, after the selection instruction of the subtitle generation control is triggered, the generation delay control can be displayed in the current interface. In response to a selection instruction of the generation delay control, the video generation duration of the target video is increased; the increased duration is the sum of the video generation duration matched with the target subtitle and the increment duration corresponding to the generation delay control. For example, each selection of the generation delay control extends the recording time by 5 s. Further, the number of times each type of target video can be extended may also be capped.
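The delay-control arithmetic can be sketched in a few lines; the 5 s increment is from the example above, while the cap of 3 extensions and all names are illustrative assumptions.

```python
def extended_duration(base_s, clicks, increment_s=5, max_clicks=3):
    """Video generation duration after the generation delay control has
    been selected `clicks` times: each selection adds `increment_s`,
    and the number of extensions per video type may be capped
    (cap value assumed for illustration)."""
    return base_s + min(clicks, max_clicks) * increment_s

# One extension of a 10 s clip yields 15 s; extra clicks beyond the cap are ignored
```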
To save the space occupied by the subtitle editing area in the video generation interface, in an embodiment of the present application, after the subtitle generation control is triggered, the currently displayed video generation interface may optionally be switched to another video display interface in which the subtitle editing area is hidden. In practical applications, the generation delay control may optionally be loaded in this video display interface.
In the video generation method shown in fig. 1, the target subtitle is configured by selecting subtitle configuration parameters in the subtitle editing area, and the target video is generated based on the target subtitle and the matched video generation duration. This realizes a brand-new video generation mode in which a video clip matched with the target subtitle is generated synchronously as the subtitle is added, greatly simplifying the clip editing process and improving video editing efficiency.
In the foregoing or following embodiments, the present application may optionally further determine whether the target subtitle is associated with the photographic subject currently displayed in the video generation interface; if so, the video generation duration of the target video is increased. The increased duration is the sum of the video generation duration matched with the target subtitle and the increment duration corresponding to the type of the target subtitle.
For example, assume the target subtitle is a holed subtitle. If the photographic subject in the target video being recorded is the corresponding player, the video generation duration of the target video is automatically increased by the increment duration corresponding to the type of the holed subtitle. Optionally, different types of holed subtitles correspond to different increment durations: for example, the increment duration for a hole-by-hole score subtitle is 8 s per hole, while the increment duration for a subtitle announcing that a player holed out on a given hole is 5 s.
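The association check and per-type increment can be sketched as below; the increment values follow the examples above, and the type keys and function name are illustrative assumptions.

```python
# Increment duration per subtitle type (values from the examples above)
TYPE_INCREMENT_S = {"hole_by_hole_score": 8, "single_hole_holed": 5}

def final_generation_duration(base_s, subtitle_type,
                              recorded_subject, subtitle_subject):
    """Extend the matched duration by the increment for the subtitle's
    type when the target subtitle is associated with the subject
    currently being recorded; otherwise keep the base duration."""
    if recorded_subject == subtitle_subject:
        return base_s + TYPE_INCREMENT_S.get(subtitle_type, 0)
    return base_s
```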
Furthermore, if the target subtitle is not associated with the currently displayed photographic subject, the present application may also query the video clips associated with the target subtitle and merge the target video with those clips.
For example, continuing with the holed subtitle, the video clips corresponding to the photographic subject are queried automatically, and the target video is merged with the corresponding clips. Taking golf as an example, Zhang San's hole-out video is merged with the corresponding hole-out clips. Taking a basketball game as an example, a plurality of assist clips of Li Si can be merged to showcase Li Si's assist performance in the game.
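The merge step reduces to concatenation once the associated clips are queried; the sketch below again models frames as list items, and the function name is an illustrative assumption.

```python
def merge_associated_clips(target_video, associated_clips):
    """Concatenate the target video with the video clips associated with
    the target subtitle, in query order (frames modeled as list items)."""
    merged = list(target_video)
    for clip in associated_clips:
        merged.extend(clip)
    return merged

# e.g. appending two of Li Si's earlier assist clips to the current target video
```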
In the above or following embodiments, the present application may optionally further identify the scene in which the photographic subject is located, acquire the subtitle configuration parameters corresponding to the photographic subject from a preset database according to the correspondence between scene types and subtitle configuration parameters, and load them into the subtitle editing area. The correspondence between scene types and subtitle configuration parameters includes, but is not limited to, one or a combination of the following:
if the scene of the photographic subject is a golf game, the subtitle configuration parameters corresponding to the photographic subject include one or more of player identifier, stroke count, drop point, and player score;
if the scene of the photographic subject is a basketball game, the subtitle configuration parameters corresponding to the photographic subject include one or more of team, player, game progress, goals, rebounds, free throws, and fouls;
if the scene of the photographic subject is a racing game, the subtitle configuration parameters corresponding to the photographic subject include one or more of team, driver, lap count, race progress, driver rank, and car condition.
For example, the scene of the photographic subject is identified according to preset scene elements; a basketball game scene, for instance, is identified from a basketball captured in the shot. The subtitle configuration parameters associated with the basketball game scene are then automatically loaded in the subtitle editing area, as shown in fig. 3.
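The scene-to-parameters correspondence enumerated above maps naturally to a lookup table; the parameter identifiers are illustrative translations of the listed items, not names defined by the patent.

```python
# Correspondence between scene type and subtitle configuration parameters
SCENE_SUBTITLE_PARAMS = {
    "golf":       ["player_id", "stroke_count", "drop_point", "player_score"],
    "basketball": ["team", "player", "progress", "goal", "rebound",
                   "free_throw", "foul"],
    "racing":     ["team", "driver", "lap_count", "progress",
                   "driver_rank", "car_condition"],
}

def load_subtitle_params(scene_type):
    """Look up the subtitle configuration parameters to load into the
    subtitle editing area for the recognized scene type."""
    return SCENE_SUBTITLE_PARAMS[scene_type]
```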
In the above or following embodiments, the present application may optionally further upload the target video to the server side based on a preset period. Preset processing is performed on the target video at the server side, and the processed target video is pushed to viewer terminals. In practical applications, the preset processing includes one or more of transcoding, watermarking, and post-effect editing. For example, a download control may further be provided at the viewer terminal, so that a user who selects it downloads the processed target video.
For example, every 10 minutes, the target videos generated on the shooting side are uploaded to the cloud platform. The cloud platform then transcodes, watermarks, and applies post-effect editing to the target videos, and pushes the processed videos to viewer terminals, where the application program can retrieve them.
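The periodic upload can be sketched as a simple tick function; the 600 s period matches the 10-minute example, while the function signature and the `upload` callback are illustrative assumptions standing in for the real cloud-platform API.

```python
def periodic_upload(pending, last_upload_ts, now, upload, period_s=600):
    """If a full period has elapsed since the previous upload, push every
    queued target video to the server side via `upload` and return the
    new timestamp with an emptied queue; otherwise leave the queue as is."""
    if now - last_upload_ts < period_s:
        return last_upload_ts, pending
    for video in pending:
        upload(video)
    return now, []
```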
In the above or following embodiments, optionally, if a plurality of subtitle generation controls are loaded in the subtitle editing area and correspond to a plurality of subtitle types, the present application may further identify the state of the photographic subject, judge whether that state satisfies the conditions for adding the respective subtitle types, and determine the use permission of the subtitle generation controls based on the judgment result.
Specifically, based on the judgment result, a prompt is given through the subtitle generation control corresponding to a subtitle type whose conditions are satisfied. Taking a golf game as an example, assuming it is recognized that all players in the current group have just recorded the drop points of their first strokes, the track map control in the subtitle editing area may be set to a highlighted state, prompting the operator to trigger generation of the track map subtitle and the matched target video. For another example, assuming it is recognized that all players in the current group are, for the first time, within 50 yards of the green, the track map control may likewise be set to a highlighted state.
In another embodiment, based on the judgment result, the subtitle generation control corresponding to a subtitle type whose conditions are not satisfied is disabled. For example, if any player in the current group is recognized to be putting on the green, the track map control may be changed to a disabled state to avoid generating track map subtitles.
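The permission logic for the track map control, as described in the two paragraphs above, can be sketched as follows; the state strings are illustrative encodings of the conditions named in the text.

```python
def track_map_control_state(group_states):
    """Derive the track map control's use permission from the states of
    all players in the current group (state names are assumptions):
    disable while anyone is putting on a green, highlight once every
    player's first-stroke drop point has been recorded."""
    if any(s == "putting_on_green" for s in group_states):
        return "disabled"     # avoid generating track map subtitles
    if all(s == "first_drop_recorded" for s in group_states):
        return "highlighted"  # prompt the operator to generate one
    return "normal"
```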
In the above or following embodiments, the present application may optionally further provide a video preview interface loaded with the generated target videos. After a generated target video is selected in the target video list, it can be displayed in the video preview interface through a floating window, or displayed directly in the video preview interface, for browsing the various target videos.
Fig. 4 is a video generating apparatus according to an embodiment of the present application. As shown in fig. 4, wherein the video generating apparatus includes:
the display module 401 is configured to display a video generation interface including a shooting object, where a subtitle editing area is superimposed on the video generation interface, and a subtitle generation control and a subtitle configuration parameter corresponding to the shooting object are loaded in the subtitle editing area;
the processing module 402 is configured to determine, in response to a selection instruction of a subtitle configuration parameter, a target subtitle to be generated and a video generation duration matched with the target subtitle; responding to a selection instruction of a subtitle generation control, and generating a target video based on the target subtitle and the video generation duration; the target subtitles are superimposed in the target video, and the duration of the target video is the video generation duration.
Optionally, the image containing the shooting object displayed in the video generation interface is a live video.
The processing module 402, in response to a selection instruction of a subtitle generation control, generates a target video based on the target subtitle and the video generation duration, and is configured to:
generating the target caption based on the selected caption configuration parameter; overlaying the target subtitles in a live video; and intercepting a video clip which is superimposed with the target subtitle and has the duration of the video generation duration from a live video as the target video.
Optionally, the processing module 402 loads a subtitle configuration parameter corresponding to the photographic subject in the subtitle editing region, and is configured to:
identifying a shooting object currently displayed in a video generation interface, and loading a subtitle configuration parameter corresponding to the currently displayed shooting object in the subtitle editing area; or if a plurality of shooting object identifiers are loaded in the subtitle editing area, responding to a selection instruction of the shooting object identifiers, and loading subtitle configuration parameters corresponding to the selected shooting object identifiers in the subtitle editing area.
Optionally, a generation delay control is loaded in the video generation interface.
The processing module 402 is further configured to: responding to a selection instruction for generating a delay control, and increasing the video generation time of the target video; and the increased video generation duration is the sum of the video generation duration matched with the target caption and the increment duration corresponding to the generation delay control.
Optionally, there are a plurality of devices that display a video generation interface including a photographic subject, and the plurality of devices include a master device and a slave device.
The processing module 402 is further configured to: and responding to a selection instruction of a subtitle generation control in the slave equipment, and triggering the master equipment to generate a target video based on the target subtitle and the video generation duration.
Optionally, the processing module 402 is further configured to: judging whether the target subtitle is associated with a shooting object currently displayed in a video generation interface; and if the target subtitle is associated with the currently displayed shooting object, increasing the video generation duration of the target video.
And the increased video generation duration is the sum of the video generation duration matched with the target caption and the increment duration corresponding to the type of the target caption.
Optionally, if the target subtitle is not associated with the currently displayed shooting object, the processing module 402 is further configured to: inquiring the video clip associated with the target caption; and merging the target video and the video segment associated with the target subtitle.
Optionally, if there are multiple subtitle generating controls loaded in the subtitle editing region and the multiple subtitle generating controls correspond to multiple subtitle types, the processing module 402 is further configured to: identifying the state of a shooting object; judging whether the state of the shot object meets the condition of adding various caption types; and determining the use permission of the plurality of subtitle generating controls based on the judgment result.
The video generating apparatus may execute the systems or methods provided in the foregoing embodiments, and details of the embodiments may refer to relevant descriptions of the foregoing embodiments, which are not repeated herein.
In one possible design, the structure of the video generating apparatus may be implemented as an electronic device. As shown in fig. 5, the electronic device may include: a processor 21 and a memory 22. Wherein the memory 22 has stored thereon executable code which, when executed by the processor 21, at least makes the processor 21 capable of implementing the video generation method as provided in the preceding embodiments.
The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
In addition, the present application provides a non-transitory machine-readable storage medium having stored thereon executable code, which, when executed by a processor of a wireless router, causes the processor to perform the video generation method provided in the foregoing embodiments.
The above-described apparatus embodiments are merely illustrative, wherein the various modules illustrated as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by means of a necessary general hardware platform, and may also be implemented by means of a combination of hardware and software. With this understanding in mind, the above-described technical solutions may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A method of video generation, comprising:
displaying a video generation interface containing a shooting object, wherein a subtitle editing area is superimposed in the video generation interface, and a subtitle generation control and a subtitle configuration parameter corresponding to the shooting object are loaded in the subtitle editing area;
responding to a selection instruction of subtitle configuration parameters, and determining a target subtitle to be generated and video generation duration matched with the target subtitle;
responding to a selection instruction of a subtitle generation control, and generating a target video based on the target subtitle and the video generation duration;
the target subtitles are superimposed in the target video, and the duration of the target video is the video generation duration.
2. The method according to claim 1, wherein the image containing the photographic subject presented in the video generation interface is a live video;
the generating a target video based on the target caption and the video generation duration includes:
generating the target caption based on the selected caption configuration parameter;
overlaying the target caption in a live video;
and intercepting a video clip which is superimposed with the target subtitle and has the duration of the video generation duration from a live video as the target video.
3. The method according to claim 1, wherein the step of loading the subtitle configuration parameters corresponding to the photographic subject in the subtitle editing region comprises:
identifying a shooting object currently displayed in a video generation interface, and loading a subtitle configuration parameter corresponding to the currently displayed shooting object in the subtitle editing area; or alternatively
And if a plurality of shooting object identifiers are loaded in the subtitle editing area, responding to a selection instruction of the shooting object identifiers, and loading subtitle configuration parameters corresponding to the selected shooting object identifiers in the subtitle editing area.
4. The method of claim 1, wherein a video generation interface is loaded with a generation delay control;
responding to a selection instruction for generating a delay control, and increasing the video generation time of the target video;
and the increased video generation duration is the sum of the video generation duration matched with the target caption and the increment duration corresponding to the generation delay control.
5. The method according to claim 1, wherein there are a plurality of devices displaying the video generation interface including the photographic subject, the plurality of devices including a master device and a slave device;
the method further comprises the following steps:
and responding to a selection instruction of a subtitle generation control in the slave equipment, and triggering the master equipment to generate a target video based on the target subtitle and the video generation duration.
6. The method of claim 1, further comprising:
judging whether the target subtitle is associated with a shooting object currently displayed in a video generation interface;
if the target subtitle is associated with the currently displayed shooting object, increasing the video generation duration of the target video;
and the increased video generation duration is the sum of the video generation duration matched with the target caption and the increment duration corresponding to the type of the target caption.
7. The method of claim 6, wherein if the target caption is not associated with the currently displayed photographic subject, the method further comprises:
inquiring the video clip associated with the target caption;
and merging the target video and the video segment associated with the target subtitle.
8. The method according to claim 1, wherein if there are a plurality of subtitle generating controls loaded in the subtitle editing region, and the plurality of subtitle generating controls respectively correspond to a plurality of subtitle types, the method further comprises:
identifying the state of a shooting object;
judging whether the state of the shot object meets the condition of adding various caption types;
and determining the use permission of the plurality of subtitle generating controls based on the judgment result.
9. A video generation apparatus, comprising:
the display module is used for displaying a video generation interface containing a shooting object, wherein a subtitle editing area is superimposed in the video generation interface, and a subtitle generation control and a subtitle configuration parameter corresponding to the shooting object are loaded in the subtitle editing area;
the processing module is used for responding to a selection instruction of the subtitle configuration parameters and determining a target subtitle to be generated and video generation duration matched with the target subtitle; responding to a selection instruction of a subtitle generation control, and generating a target video based on the target subtitle and the video generation duration; the target subtitles are superposed in the target video, and the duration of the target video is the video generation duration.
10. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the video generation method according to any one of claims 1 to 8.
11. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the video generation method of any of claims 1 to 8.
CN202110772508.3A 2021-07-08 2021-07-08 Video generation method and device Active CN113596557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772508.3A CN113596557B (en) 2021-07-08 2021-07-08 Video generation method and device



Publications (2)

Publication Number Publication Date
CN113596557A CN113596557A (en) 2021-11-02
CN113596557B true CN113596557B (en) 2023-03-21

Family

ID=78246422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110772508.3A Active CN113596557B (en) 2021-07-08 2021-07-08 Video generation method and device

Country Status (1)

Country Link
CN (1) CN113596557B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202542A (en) * 2014-08-28 2014-12-10 深圳市大疆创新科技有限公司 Automatic subtitle generating method and device for video camera
CN109286850A (en) * 2017-07-21 2019-01-29 Tcl集团股份有限公司 A kind of video labeling method and terminal based on barrage
CN109874029A (en) * 2019-04-22 2019-06-11 腾讯科技(深圳)有限公司 Video presentation generation method, device, equipment and storage medium
CN110784759A (en) * 2019-08-12 2020-02-11 腾讯科技(深圳)有限公司 Barrage information processing method and device, electronic equipment and storage medium
CN111970577A (en) * 2020-08-25 2020-11-20 北京字节跳动网络技术有限公司 Subtitle editing method and device and electronic equipment
CN112533003A (en) * 2020-11-24 2021-03-19 大连三通科技发展有限公司 Video processing system, device and method
CN112653919A (en) * 2020-12-22 2021-04-13 维沃移动通信有限公司 Subtitle adding method and device
CN112929744A (en) * 2021-01-22 2021-06-08 北京百度网讯科技有限公司 Method, apparatus, device, medium and program product for segmenting video clips

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10963702B1 (en) * 2019-09-10 2021-03-30 Huawei Technologies Co., Ltd. Method and system for video segmentation


Also Published As

Publication number Publication date
CN113596557A (en) 2021-11-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant