CN109413478A - Video editing method, device, electronic equipment and storage medium - Google Patents
Video editing method, device, electronic equipment and storage medium
- Publication number
- CN109413478A CN109413478A CN201811125999.7A CN201811125999A CN109413478A CN 109413478 A CN109413478 A CN 109413478A CN 201811125999 A CN201811125999 A CN 201811125999A CN 109413478 A CN109413478 A CN 109413478A
- Authority
- CN
- China
- Prior art keywords
- video
- editing
- subtitle
- time point
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4858—End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Embodiments of the present disclosure provide a video editing method and device, an electronic equipment, and a storage medium. The method comprises: obtaining caption text to be added to a currently edited video; determining, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video; and adding each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, generating a video with subtitles. With the disclosed embodiments, the user does not need to input subtitles sentence by sentence and set a corresponding appearance time point and disappearance time point for each; the entire caption text can be obtained directly and added to the video, which simplifies user operation and improves the subtitle-adding efficiency of the video.
Description
Technical field
The present disclosure relates to video processing technology, and in particular to a video editing method, a video editing device, an electronic equipment, and a storage medium.
Background art
With the development of the mobile Internet, more and more application programs are available on mobile terminals, including many video-processing applications through which a user can add subtitles to a video.
In the related art, when adding subtitles to a video, the user must first input one subtitle sentence, then drag a time axis to set its appearance and disappearance time points, then input a second subtitle sentence and drag the time axis again to set its appearance and disappearance time points, and so on until the subtitles of the entire video are set. Taking a 10-minute video as an example, adding its subtitles, which may run to hundreds of sentences, generally takes several hours; the operation is both cumbersome and time-consuming for the user.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a video editing method, a video editing device, an electronic equipment, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a video editing method is provided, comprising:
obtaining caption text to be added to a currently edited video;
determining, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video;
adding each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generating a video with subtitles.
Optionally, the method further comprises:
generating one picture file for each subtitle sentence of the caption text.
Optionally, determining, according to the user's operation instructions on the editing interface of the currently edited video, the appearance time point and disappearance time point of each subtitle sentence of the caption text in the currently edited video comprises:
receiving the user's operation instructions during editing of the currently edited video;
determining, according to the operation instructions, an appearance time point and a disappearance time point of each picture file in the currently edited video;
and adding the subtitles of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generating a video with subtitles, comprises:
adding each picture file to the currently edited video according to its appearance time point and disappearance time point, and generating the video with subtitles.
Optionally, receiving the user's operation instructions during editing of the currently edited video comprises:
receiving a current operation instruction of the user during editing of the currently edited video, and, when the current operation instruction is a preset operation instruction, determining the time point at which the current operation instruction is received;
and determining, according to the operation instructions, the appearance time point and disappearance time point of each picture file in the currently edited video comprises:
determining the time point at which the current operation instruction is received as both the appearance time point of the current picture file and the disappearance time point of the previous picture file in the currently edited video.
Optionally, the method further comprises:
after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the currently edited video until its disappearance time point is determined.
Optionally, adding each picture file to the currently edited video according to the appearance time points and disappearance time points, and generating the video with subtitles, comprises:
synthesizing each picture file with the video frames of the currently edited video that lie between its appearance time point and disappearance time point, and generating the video with subtitles.
Optionally, the operation instruction is a click event instruction.
Optionally, obtaining the caption text to be added to the currently edited video comprises:
obtaining a keyword of the caption text, sending the keyword to a server, and receiving the caption text corresponding to the keyword returned by the server as the caption text to be added to the currently edited video; or
obtaining caption text pasted or input by the user as the caption text to be added to the currently edited video.
Optionally, the method further comprises:
performing sentence segmentation on the caption text to obtain the subtitle sentences that make up the caption text.
According to a second aspect of the embodiments of the present disclosure, a video editing device is provided, comprising:
a caption text obtaining module, configured to obtain caption text to be added to a currently edited video;
a time point determining module, configured to determine, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video;
a subtitle adding module, configured to add each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generate a video with subtitles.
Optionally, the device further comprises:
a picture file generating module, configured to generate one picture file for each subtitle sentence of the caption text.
Optionally, the time point determining module comprises:
an operation instruction receiving unit, configured to receive the user's operation instructions during editing of the currently edited video;
a time point determining unit, configured to determine, according to the operation instructions, an appearance time point and a disappearance time point of each picture file in the currently edited video;
and the subtitle adding module comprises:
a subtitle adding unit, configured to add each picture file to the currently edited video according to its appearance time point and disappearance time point, and generate the video with subtitles.
Optionally, the operation instruction receiving unit is specifically configured to:
receive a current operation instruction of the user during editing of the currently edited video, and, when the current operation instruction is a preset operation instruction, determine the time point at which the current operation instruction is received;
and the time point determining unit is specifically configured to:
determine the time point at which the current operation instruction is received as both the appearance time point of the current picture file and the disappearance time point of the previous picture file in the currently edited video.
Optionally, the device further comprises:
a subtitle display module, configured to continuously display the current picture file in the currently edited video after its appearance time point is determined, until its disappearance time point is determined.
Optionally, the subtitle adding unit is specifically configured to:
synthesize each picture file with the video frames of the currently edited video that lie between its appearance time point and disappearance time point, and generate the video with subtitles.
Optionally, the operation instruction is a click event instruction.
Optionally, the caption text obtaining module comprises:
a first obtaining unit, configured to obtain a keyword of the caption text, send the keyword to a server, and receive the caption text corresponding to the keyword returned by the server as the caption text to be added to the currently edited video; or
a second obtaining unit, configured to obtain caption text pasted or input by the user as the caption text to be added to the currently edited video.
Optionally, the device further comprises:
a sentence segmentation module, configured to perform sentence segmentation on the caption text to obtain the subtitle sentences that make up the caption text.
According to a third aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided. When instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a video editing method, the method comprising:
obtaining caption text to be added to a currently edited video;
determining, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video;
adding each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generating a video with subtitles.
According to a fourth aspect of the embodiments of the present disclosure, a computer program is provided, the method of the computer program comprising:
obtaining caption text to be added to a currently edited video;
determining, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video;
adding each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generating a video with subtitles.
According to a fifth aspect of the embodiments of the present disclosure, the present application provides an electronic equipment, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain caption text to be added to a currently edited video;
determine, according to a user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video;
add each subtitle sentence of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generate a video with subtitles.
The processor is further configured to:
generate one picture file for each subtitle sentence of the caption text.
Determining, according to the user's operation instructions on the editing interface of the currently edited video, the appearance time point and disappearance time point of each subtitle sentence of the caption text in the currently edited video comprises:
receiving the user's operation instructions during editing of the currently edited video;
determining, according to the operation instructions, an appearance time point and a disappearance time point of each picture file in the currently edited video;
and adding the subtitles of the caption text to the currently edited video according to the appearance time points and disappearance time points, and generating a video with subtitles, comprises:
adding each picture file to the currently edited video according to its appearance time point and disappearance time point, and generating the video with subtitles.
Receiving the user's operation instructions during editing of the currently edited video comprises:
receiving a current operation instruction of the user during editing of the currently edited video, and, when the current operation instruction is a preset operation instruction, determining the time point at which the current operation instruction is received;
and determining, according to the operation instructions, the appearance time point and disappearance time point of each picture file in the currently edited video comprises:
determining the time point at which the current operation instruction is received as both the appearance time point of the current picture file and the disappearance time point of the previous picture file in the currently edited video.
The processor is further configured to:
after the appearance time point of the current picture file is determined, continuously display the current picture file in the currently edited video until its disappearance time point is determined.
Adding each picture file to the currently edited video according to the appearance time points and disappearance time points, and generating the video with subtitles, comprises:
synthesizing each picture file with the video frames of the currently edited video that lie between its appearance time point and disappearance time point, and generating the video with subtitles.
The operation instruction is a click event instruction.
Obtaining the caption text to be added to the currently edited video comprises:
obtaining a keyword of the caption text, sending the keyword to a server, and receiving the caption text corresponding to the keyword returned by the server as the caption text to be added to the currently edited video; or
obtaining caption text pasted or input by the user as the caption text to be added to the currently edited video.
The processor is further configured to:
perform sentence segmentation on the caption text to obtain the subtitle sentences that make up the caption text.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: the user does not need to input subtitles sentence by sentence and set a corresponding appearance time point and disappearance time point for each; the entire caption text can be obtained directly and added to the video through interaction with the user, which simplifies user operation and improves the subtitle-adding efficiency of the video.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The drawings herein are incorporated into and form a part of this specification; they show embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a video editing method according to an exemplary embodiment;
Fig. 2 is a flowchart of another video editing method according to an exemplary embodiment;
Fig. 3 is a flowchart of yet another video editing method according to an exemplary embodiment;
Fig. 4 is a structural block diagram of a video editing device according to an exemplary embodiment;
Fig. 5 is a structural block diagram of a video editing device according to an exemplary embodiment;
Fig. 6 is a structural block diagram of an electronic equipment according to an exemplary embodiment.
Specific embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. In the following description, when drawings are referred to, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary description do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart of a video editing method according to an exemplary embodiment. The video editing method can be used in electronic equipment such as a terminal. As shown in Fig. 1, the method specifically comprises the following steps.
In step S11, caption text to be added to the currently edited video is obtained.
Here, the currently edited video is the video to which subtitles currently need to be added; it may be a video recorded by the user through the electronic equipment, or a video downloaded by the user over the network (such as a TV series, a film, or a network video). The caption text is the text of all the subtitles of the entire video, including video subtitles or song lyrics.
When the user adds subtitles to a video through the electronic equipment, the user first needs to open the video and enter the editing interface, so that the video becomes the currently edited video in the editing interface, and the caption text to be added to the currently edited video is obtained. When obtaining the caption text of the currently edited video, the user may directly input the entire caption text, or may provide a keyword of the caption text to search for and obtain the entire caption text.
In step S12, the appearance time point and disappearance time point of each subtitle sentence of the caption text in the currently edited video are determined according to the user's operation instructions on the editing interface of the currently edited video.
Here, the editing interface may be the entire playback area of the currently edited video. The operation instruction is an operation instruction that determines the appearance time point and disappearance time point of a picture file through interaction with the user; it may be, for example, a click event instruction, a double-click event instruction, or a swipe instruction. A click event instruction is preferred because it pinpoints the appearance and disappearance time points of a picture file accurately, which can solve the problem of subtitle time misalignment caused by manually dragging the time axis to determine time points in the related art.
Generally, the caption text of a video may contain one sentence or multiple sentences; when it contains multiple sentences, these can be displayed dispersed over different time periods.
To add the caption text to the currently edited video, the appearance time point and disappearance time point of each subtitle sentence in the currently edited video must first be determined. These can be determined through interaction with the user: the user's operation instructions are obtained on the editing interface of the currently edited video, and the appearance time point and disappearance time point of each subtitle sentence in the currently edited video are determined according to those instructions. For example, an operation instruction may be preset, and while the editing interface plays the video, the appearance time point and disappearance time point of a subtitle sentence are determined whenever the user's operation instruction on the editing interface matches the preset operation instruction; or a shortcut key may be provided on the editing interface, and the appearance time point and disappearance time point of a subtitle sentence are determined according to the user's triggering of that shortcut key during playback of the currently edited video.
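The click-driven timing scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: the class name `TapTimeline`, the `(appear, disappear)` pair representation, and the rule that the last subtitle stays visible until the end of the video are all assumptions; the behavior it shows is that each click stamps the appearance time of the current subtitle and, simultaneously, the disappearance time of the previous one.

```python
class TapTimeline:
    """Collects appearance/disappearance time points from successive clicks
    during playback (hypothetical helper, not from the patent)."""

    def __init__(self):
        self.spans = []            # finished (appear, disappear) pairs
        self._current_start = None

    def on_tap(self, playback_time):
        if self._current_start is not None:
            # The previous subtitle disappears at this click.
            self.spans.append((self._current_start, playback_time))
        # The current subtitle appears at this click.
        self._current_start = playback_time

    def finish(self, video_end_time):
        # Assume the last subtitle stays visible until the video ends.
        if self._current_start is not None:
            self.spans.append((self._current_start, video_end_time))
            self._current_start = None
        return self.spans


timeline = TapTimeline()
for t in (1.0, 3.5, 7.2):          # user clicks at these playback times
    timeline.on_tap(t)
spans = timeline.finish(10.0)
print(spans)                       # [(1.0, 3.5), (3.5, 7.2), (7.2, 10.0)]
```

Note that a single click does double duty, which is why no manual dragging of a time axis is needed: three clicks are enough to time three subtitle sentences.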
In step S13, each subtitle sentence of the caption text is added to the currently edited video according to the appearance time points and disappearance time points, and a video with subtitles is generated.
After the appearance time point and disappearance time point of each subtitle sentence in the currently edited video are determined, each subtitle sentence of the caption text can be added to the currently edited video according to its appearance time point and disappearance time point, thereby generating a video with subtitles from the currently edited video.
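The per-span addition in step S13 can be sketched with a deliberately simplified model. The assumptions here are loud: video frames are modeled as dicts carrying a timestamp `t`, spans as `(start, end, text)` tuples, and "synthesis" as attaching the subtitle string to each frame whose timestamp falls inside a span; a real implementation would instead alpha-blend a rendered picture file onto the decoded frames.

```python
def overlay_subtitles(frames, spans):
    """Attach each subtitle to the frames between its appearance and
    disappearance time points (simplified stand-in for frame synthesis)."""
    for frame in frames:
        frame["subtitle"] = None
        for start, end, text in spans:
            # A frame shows the subtitle from its appearance time point
            # (inclusive) up to its disappearance time point (exclusive).
            if start <= frame["t"] < end:
                frame["subtitle"] = text
                break
    return frames


frames = [{"t": t / 2} for t in range(8)]            # timestamps 0.0 .. 3.5
spans = [(0.5, 1.5, "Hello"), (2.0, 3.0, "World")]
out = overlay_subtitles(frames, spans)
print([f["subtitle"] for f in out])
# [None, 'Hello', 'Hello', None, 'World', 'World', None, None]
```

The half-open interval convention (`start <= t < end`) is one reasonable choice; it guarantees that adjacent spans sharing a boundary time point, as produced by the single-click scheme above, never overlap on any frame.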
In the video editing method provided by this exemplary embodiment, the caption text to be added to the currently edited video is obtained; the appearance time point and disappearance time point of each subtitle sentence of the caption text in the currently edited video are determined according to the user's operation instructions on the editing interface of the currently edited video; and each subtitle sentence is added to the currently edited video, generating a video with subtitles. The user does not need to input subtitles sentence by sentence and set a corresponding appearance time point and disappearance time point for each; the entire caption text can be obtained directly and added to the video during its playback, which simplifies user operation. Moreover, adding the subtitles is complete once the video finishes playing, which improves the subtitle-adding efficiency of the video. For example, for a 10-minute video with hundreds of subtitle sentences to add, the related-art scheme of inputting subtitles sentence by sentence and setting appearance and disappearance time points generally takes about an hour to subtitle the entire video, whereas the technical scheme of this exemplary embodiment needs only about 10 minutes, greatly improving subtitle-adding efficiency.
On the basis of the above embodiments, obtaining the caption text to be added to the currently edited video optionally comprises:
obtaining a keyword of the caption text, sending the keyword to a server, and receiving the caption text corresponding to the keyword returned by the server as the caption text to be added to the currently edited video; or
obtaining caption text pasted or input by the user as the caption text to be added to the currently edited video.
For relatively standard videos (such as movies, TV dramas, or song MVs), a subtitle search function can be provided: the terminal obtains a keyword of the subtitle text input by the user and sends it to the server, and the server searches for the subtitle text corresponding to the keyword and returns the result. This mode largely avoids the inconvenience of manual input and significantly improves subtitle addition efficiency. A subtitle input interface can also be provided, in which the user edits the entire subtitle text directly, avoiding sentence-by-sentence entry. Alternatively, the user can first edit the subtitle text elsewhere and paste the completed text into the interface as a whole; a one-tap paste likewise avoids the inconvenience of sentence-by-sentence input.
On the basis of the above embodiments, the method optionally further includes:
performing sentence segmentation on the subtitle text to obtain each subtitle sentence that makes up the subtitle text.
Sentence segmentation can be performed by recognizing the punctuation marks in the subtitle text, yielding the individual subtitle sentences so that each sentence can subsequently be added to the currently edited video at its corresponding time point.
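A minimal sketch of the punctuation-based segmentation, assuming a fixed set of Chinese and Western sentence-ending marks; the patent only says that punctuation marks are recognized, so the exact character set is an illustrative choice.

```python
import re

# Assumed sentence-ending punctuation (Chinese and Western); illustrative only.
SENTENCE_END = re.compile(r"[。！？!?\.;；]+")

def split_subtitles(subtitle_text):
    """Split subtitle text into individual subtitle sentences,
    dropping empty fragments and surrounding whitespace."""
    parts = SENTENCE_END.split(subtitle_text)
    return [p.strip() for p in parts if p.strip()]
```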
Fig. 2 is a flowchart of another video editing method according to an exemplary embodiment. This exemplary embodiment provides an optional scheme on the basis of the above embodiments. As shown in Fig. 2, the method specifically includes the following steps.
In step S21, the subtitle text to be added to the currently edited video is obtained.
Its specific content is identical to that of step S11 in the previous specific embodiment and is not repeated here.
In step S22, one picture file is generated for each subtitle sentence in the subtitle text.
When displaying a subtitle, it can be shown in the form of a picture; accordingly, when adding subtitles, the subtitle text is converted into picture files. One or more picture files can be generated according to the sentences in the subtitle text: for example, each subtitle sentence in the subtitle text may be rendered into one picture file, and the generated picture files are arranged in the order of the corresponding sentences so that each can subsequently be added to the currently edited video in sequence. The picture file can be a transparent picture, so that it does not block the video during playback.
Illustratively, the form in which the user inputs the subtitle text can be preset, for example one subtitle sentence per line, which further improves the efficiency of adding subtitles. Alternatively, no input form is preset: the user may input a whole paragraph of subtitle text whose sentences are recognized by the subsequent sentence segmentation, allowing free-form input and improving the user experience.
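The per-sentence picture generation of step S22 might look like the following sketch, assuming the Pillow imaging library is available. The canvas size, text position, and default font are illustrative choices; the fully transparent RGBA background corresponds to the transparent picture described above.

```python
from PIL import Image, ImageDraw, ImageFont  # Pillow is assumed

def render_subtitle_images(sentences, size=(640, 80)):
    """Render each subtitle sentence into one transparent (RGBA) image,
    returned in sentence order so each can later be added to the video
    at its corresponding time point."""
    font = ImageFont.load_default()
    images = []
    for sentence in sentences:
        img = Image.new("RGBA", size, (0, 0, 0, 0))  # fully transparent background
        draw = ImageDraw.Draw(img)
        # White text near the vertical center; position is an assumption.
        draw.text((10, size[1] // 2), sentence, font=font, fill=(255, 255, 255, 255))
        images.append(img)
    return images
```

Each returned image plays the role of one "picture file"; saving it to disk (e.g. as PNG, which preserves the alpha channel) is a separate step.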
In step S23, operation instructions of the user are received during editing of the currently edited video.
When subtitles are added to the currently edited video, the appearance time point and extinction time point of each subtitle sentence in the video must be determined by interacting with the user while the video plays; the user therefore operates in the editing interface, and the electronic device receives the user's operation instructions.
In step S24, the appearance time point and extinction time point of each picture file in the currently edited video are determined according to the operation instructions.
During playback of the currently edited video, the picture files are traversed, and each time an operation instruction is received one picture file is added to the video. On receiving an operation instruction, the time of the currently playing video frame is taken as both the appearance time point of the picture file to be added and the extinction time point of the previous picture file; thus, after a single playback of the currently edited video completes, the appearance and extinction time points of every picture file have been determined.
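The traversal in step S24 can be sketched as a mapping from operation-instruction timestamps to intervals. Closing the last picture's interval at the end of the video is an assumption, since the text leaves the final extinction point implicit.

```python
def assign_time_points(tap_times, video_end):
    """Map N operation timestamps to N (appear, disappear) intervals.

    tap_times[k] is the appearance point of picture k and, for k > 0,
    the extinction point of picture k - 1; the last picture is assumed
    to disappear at video_end.
    """
    intervals = []
    for k, appear in enumerate(tap_times):
        disappear = tap_times[k + 1] if k + 1 < len(tap_times) else video_end
        intervals.append((appear, disappear))
    return intervals
```

Because each timestamp simultaneously opens one interval and closes the previous one, consecutive subtitles can never overlap, which is the property the embodiment emphasizes.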
In step S25, each picture file is added to the currently edited video according to its appearance time point and extinction time point, generating the video with subtitles.
As soon as the appearance time point and extinction time point of a picture file are determined, that picture file is added to the currently edited video; that is, picture files are added while the time points are being determined. Once all picture files have been added, the currently edited video becomes the video with subtitles.
Optionally, adding each picture file to the currently edited video according to the appearance time point and extinction time point to generate the video with subtitles includes:
synthesizing, according to the appearance time point and extinction time point, the corresponding picture file with the video frames of the currently edited video that lie between the appearance time point and the extinction time point, generating the video with subtitles.
After the appearance time point of a picture file is determined, the picture file is synthesized with the video frames as they are played, until the extinction time point of the current picture file is determined. During synthesis, the picture file can be composited at a preset position of the video frame (for example, near the bottom of the screen), so that adding the picture files to the currently edited video generates the video with subtitles.
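A minimal sketch of the synthesis in step S25, with frames modelled only by their timestamps. A real implementation would alpha-blend the transparent picture onto each frame's pixels at the preset position; here the sketch only decides which picture, if any, belongs on each frame.

```python
def synthesize(frame_times, intervals):
    """For each frame timestamp, return the index of the subtitle picture
    whose [appear, disappear) interval contains it, or None if no subtitle
    is shown on that frame."""
    shown = []
    for t in frame_times:
        overlay = None
        for idx, (appear, disappear) in enumerate(intervals):
            if appear <= t < disappear:
                overlay = idx
                break
        shown.append(overlay)
    return shown
```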
In the video editing method provided by the present exemplary embodiment, on the basis of the above embodiments, one picture file is generated for each subtitle sentence in the subtitle text; the user's operation instructions are received during editing of the currently edited video; the appearance time point and extinction time point of each picture file in the video are determined according to those instructions; and each picture file is added to the currently edited video accordingly. The user does not need to input subtitles sentence by sentence and set corresponding appearance and extinction time points: the entire subtitle text is acquired directly and added to the video during playback, simplifying the user's operation. Subtitle addition completes when playback completes, improving the subtitle addition efficiency of the video.
Fig. 3 is a flowchart of another video editing method according to an exemplary embodiment. This exemplary embodiment provides an optional scheme on the basis of the above embodiments. As shown in Fig. 3, the method specifically includes the following steps.
In step S31, the subtitle text to be added to the currently edited video is obtained.
Its specific content is identical to that of step S21 in the previous specific embodiment and is not repeated here.
In step S32, one picture file is generated for each subtitle sentence in the subtitle text.
Its specific content is identical to that of step S22 in the previous specific embodiment and is not repeated here.
In step S33, a current operation instruction of the user is received during editing of the currently edited video, and the time point at which the current operation instruction is received is determined when the current operation instruction is a preset operation instruction.
Here, the preset operation instruction is an operation instruction preset for determining, through interaction with the user, the appearance time points and extinction time points of the picture files; for example, it may be a click event instruction, a double-click event instruction, or a slide instruction, optionally a click event instruction. This allows the appearance and extinction time points of the picture files to be determined accurately and solves the related-art problem of overlapping subtitle times caused by manually dragging the timeline to set time points.
A picture file containing a subtitle is added to the currently edited video according to the user's operation instruction, and each operation instruction determines both the appearance time point of one picture file in the currently edited video and the extinction time point of the previous picture file. Accordingly, during editing of the currently edited video, the user's current operation instruction is received and compared with the preset operation instruction; if the current operation instruction is the preset operation instruction, the time point at which it was received is determined, so that the appearance and extinction time points of the corresponding picture files can subsequently be derived from that time point.
In step S34, the time point at which the current operation instruction is received is determined as the appearance time point of the current picture file in the currently edited video and the extinction time point of the previous picture file in the currently edited video.
Here, each picture file is added to the currently edited video according to the time point at which its operation instruction was received.
The currently edited video is played in the editing interface, and the user's operation instructions are received during playback. The first time the current operation instruction is a preset operation instruction, the time point at which it is received (the time point of the frame currently being played) is determined as the appearance time point of the current picture file, i.e. the first picture file, in the currently edited video. Playback continues uninterrupted; the second time a preset operation instruction is detected, the time point of the current frame is determined as the appearance time point of the current picture file (the second picture file) and the extinction time point of the previous picture file (the first). Playback continues and the user's operation instructions continue to be detected; the third time a preset operation instruction is detected, the time point of the current frame is determined as the appearance time point of the third picture file and the extinction time point of the second. This continues until the video finishes playing and the appearance and extinction time points of all picture files have been determined.
Illustratively, while the currently edited video is played in the editing interface, the time point at which a click event instruction on the video playback area is received (the time point of the frame currently being played) is determined as the appearance time point of the current picture file and the extinction time point of the previous picture file; that is, determining one picture file's appearance and extinction time points in the video requires two click event instructions. For example, on the first click event instruction on the playback area, the time point of the current frame is determined as the appearance time point of the first picture file in the video; on the second, as the extinction time point of the first picture file and the appearance time point of the second; on the third, as the extinction time point of the second picture file and the appearance time point of the third; and so on, yielding the appearance and extinction time points of each picture file in the video. Determining the time points from the user's clicks on the playback area of the editing interface requires neither entering subtitles sentence by sentence nor setting time points sentence by sentence, which greatly improves subtitle addition efficiency; moreover, the determined time points are more accurate, and the subtitle times do not overlap.
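The click-driven interaction just described can be modelled as a small state machine (the `ClickTimePointRecorder` below is a hypothetical name). Each click records the current playback time as the appearance point of the next picture and the extinction point of the previous one, so N pictures are timed by N + 1 clicks.

```python
class ClickTimePointRecorder:
    """Advance one subtitle picture per click on the playback area."""

    def __init__(self, num_pictures):
        self.num_pictures = num_pictures
        self.appear = {}      # picture index -> appearance time point
        self.disappear = {}   # picture index -> extinction time point

    def on_click(self, playback_time):
        """Handle one click event at the given playback time."""
        k = len(self.appear)  # index of the next picture to appear
        if k > 0:
            # This click ends the previously shown picture.
            self.disappear[k - 1] = playback_time
        if k < self.num_pictures:
            # ...and starts the next one, if any remain.
            self.appear[k] = playback_time
```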
In step S35, each picture file is added to the currently edited video according to its appearance time point and extinction time point, generating the video with subtitles.
Its specific content is identical to that of step S25 in the previous specific embodiment and is not repeated here.
In the video editing method provided by the present exemplary embodiment, on the basis of the above embodiments, the user's current operation instruction is received during editing of the currently edited video; when the current operation instruction is a preset operation instruction, the time point at which it is received is determined as the appearance time point of the current picture file and the extinction time point of the previous picture file in the currently edited video. The user no longer needs to drag a timeline to set appearance and extinction time points for every subtitle: subtitle addition completes when video playback completes, improving subtitle addition efficiency. Because the time points are determined through interaction with the user, the determined time points are more accurate and the subtitle times do not overlap.
On the basis of the above embodiments, the method optionally further includes:
after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the currently edited video until the extinction time point of the current picture file is determined.
After the appearance time point of the current picture file is determined, the current picture file is synthesized with the video frames from that time point onward, and the added subtitle is previewed while the currently edited video plays in the editing interface, until the extinction time point of the current picture file is determined. Adding subtitles during video playback and previewing them lets the user check the effect of the added subtitles, improving the user experience.
Fig. 4 is a structural block diagram of a video editing apparatus according to an exemplary embodiment.
As shown in Fig. 4, the video editing apparatus includes a subtitle text obtaining module 41, a time point determining module 42, and a subtitle adding module 43.
The subtitle text obtaining module 41 is configured to obtain the subtitle text to be added to the currently edited video;
the time point determining module 42 is configured to determine, according to the user's operation instructions in the editing interface of the currently edited video, the appearance time point and extinction time point of each subtitle sentence of the subtitle text in the currently edited video;
the subtitle adding module 43 is configured to add each subtitle sentence of the subtitle text to the currently edited video according to the appearance time point and extinction time point, generating the video with subtitles.
Optionally, the apparatus further includes:
a picture file generation module, configured to generate one picture file for each subtitle sentence in the subtitle text.
Optionally, the time point determining module includes:
an operation instruction receiving unit, configured to receive the user's operation instructions during editing of the currently edited video;
a time point determination unit, configured to determine, according to the operation instructions, the appearance time point and extinction time point of each picture file in the currently edited video;
and the subtitle adding module includes:
a subtitle adding unit, configured to add each picture file to the currently edited video according to the appearance time point and extinction time point, generating the video with subtitles.
Optionally, the operation instruction receiving unit is specifically configured to:
receive the user's current operation instruction during editing of the currently edited video, and determine the time point at which the current operation instruction is received when the current operation instruction is a preset operation instruction;
and the time point determination unit is specifically configured to:
determine the time point at which the current operation instruction is received as the appearance time point of the current picture file in the currently edited video and the extinction time point of the previous picture file in the currently edited video.
Optionally, the apparatus further includes:
a subtitle display module, configured to continuously display the current picture file in the currently edited video after its appearance time point is determined, until its extinction time point is determined.
Optionally, the subtitle adding unit is specifically configured to:
synthesize, according to the appearance time point and extinction time point, the corresponding picture file with the video frames of the currently edited video that lie between the appearance time point and the extinction time point, generating the video with subtitles.
Optionally, the operation instruction is a click event instruction.
Optionally, the subtitle text obtaining module includes:
a first obtaining unit, configured to obtain a keyword of the subtitle text, send the keyword to a server, and receive the subtitle text corresponding to the keyword returned by the server as the subtitle text to be added to the currently edited video; or
a second obtaining unit, configured to obtain subtitle text pasted by the user or input by the user as the subtitle text to be added to the currently edited video.
Optionally, the apparatus further includes:
a sentence segmentation processing module, configured to perform sentence segmentation on the subtitle text to obtain each subtitle sentence that makes up the subtitle text.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and is not elaborated here.
In addition, the present disclosure also provides a computer program executable by an electronic device, whose detailed process is shown in Fig. 1, with the following specific steps:
obtaining the subtitle text to be added to the currently edited video;
determining, according to the user's operation instructions in the editing interface of the currently edited video, the appearance time point and extinction time point of each subtitle sentence of the subtitle text in the currently edited video;
adding each subtitle sentence of the subtitle text to the currently edited video according to the appearance time point and extinction time point, generating the video with subtitles.
Fig. 5 is a structural block diagram of a video editing apparatus according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 5, the apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls the overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components; for example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation on the apparatus 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 506 supplies power to the various components of the apparatus 500. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen providing an output interface between the apparatus 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP); if the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the panel; the touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the apparatus 500 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the apparatus 500 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the apparatus 500. For example, the sensor component 514 can detect the open/closed state of the apparatus 500 and the relative positioning of components (e.g. the display and keypad of the apparatus 500); the sensor component 514 can also detect a change in position of the apparatus 500 or one of its components, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and a change in its temperature. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 can access a wireless network based on a communication standard, such as WiFi, a carrier network (e.g. 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the operations shown in Fig. 1, Fig. 2, or Fig. 3.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 504 including instructions executable by the processor 520 of the apparatus 500 to complete the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present application also provides a computer program that includes the operation steps shown in Fig. 1, Fig. 2, or Fig. 3.
Fig. 6 is a structural block diagram of an electronic device according to an exemplary embodiment.
As shown in Fig. 6, the electronic device is provided with at least one processor 601 and further includes a memory 602, the two being connected by a data bus 603.
The memory is used to store a computer program or instructions, and the processor is used to obtain and execute the computer program or instructions, so that the electronic device performs the operations shown in Fig. 1, Fig. 2, or Fig. 3.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the disclosure. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.
Claims (10)
1. A video editing method, comprising:
obtaining subtitle text to be added to a currently edited video;
determining, according to a user's operation instructions in an editing interface of the currently edited video, an appearance time point and an extinction time point of each subtitle sentence of the subtitle text in the currently edited video;
adding each subtitle sentence of the subtitle text to the currently edited video according to the appearance time point and the extinction time point, generating a video with subtitles.
2. The method according to claim 1, further comprising:
generating one picture file for each subtitle sentence in the subtitle text.
3. The method according to claim 2, wherein determining, according to the user's operation instructions in the editing interface of the currently edited video, the appearance time point and the extinction time point of each subtitle sentence of the subtitle text in the currently edited video comprises:
receiving the user's operation instructions during editing of the currently edited video;
determining, according to the operation instructions, the appearance time point and the extinction time point of each picture file in the currently edited video;
and wherein adding the subtitles of the subtitle text to the currently edited video according to the appearance time point and the extinction time point, generating the video with subtitles, comprises:
adding each picture file to the currently edited video according to the appearance time point and the extinction time point, generating the video with subtitles.
4. The method according to claim 3, wherein receiving the user's operation instructions during editing of the currently edited video comprises:
receiving the user's current operation instruction during editing of the currently edited video, and determining a time point at which the current operation instruction is received when the current operation instruction is a preset operation instruction;
and wherein determining, according to the operation instructions, the appearance time point and the extinction time point of each picture file in the currently edited video comprises:
determining the time point at which the current operation instruction is received as the appearance time point of the current picture file in the currently edited video and the extinction time point of the previous picture file in the currently edited video.
5. The method according to claim 4, further comprising:
after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the currently edited video until the extinction time point of the current picture file is determined.
6. The method according to claim 3, wherein adding each picture file to the currently edited video according to the appearance time points and the disappearance time points to generate the video with subtitles comprises:
compositing each picture file with the video frames of the currently edited video that lie between its appearance time point and its disappearance time point, to generate the video with subtitles.
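The compositing step of claim 6 — merging each picture file only with the frames that fall between its appearance and disappearance time points — amounts to a per-frame interval lookup. A minimal, library-free Python sketch with hypothetical names; a real implementation would alpha-blend the picture onto each frame rather than just pair them:

```python
def composite_subtitles(frames, fps, intervals, pictures):
    """Pair each video frame with the subtitle picture (if any) whose
    [appearance, disappearance) interval contains the frame's timestamp."""
    result = []
    for idx, frame in enumerate(frames):
        t = idx / fps  # timestamp of this frame in seconds
        active = None
        for (start, end), picture in zip(intervals, pictures):
            if start <= t < end:
                active = picture  # this picture is visible on this frame
                break
        result.append((frame, active))
    return result
```

Frames outside every interval carry no subtitle, so subtitles appear and vanish exactly at the user-chosen time points.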
7. The method according to claim 1, wherein the operation instruction is a click event instruction.
8. A video editing apparatus, comprising:
a caption text acquisition module configured to acquire caption text to be added to a currently edited video;
a time point determination module configured to determine, according to operation instructions issued by the user in an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the caption text in the currently edited video; and
a subtitle adding module configured to add each subtitle sentence of the caption text to the currently edited video according to its appearance time point and disappearance time point, to generate a video with subtitles.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a video editing method comprising the steps of the method according to any one of claims 1 to 7.
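The three modules of the apparatus in claim 8 — caption text acquisition, time point determination, and subtitle adding — could be tied together as below. This is a hypothetical sketch only; the class and method names (`VideoEditor`, `record_tap`, `render`) are invented for illustration and do not appear in the patent:

```python
class VideoEditor:
    """Sketch of claim 8's apparatus: subtitles are supplied up front,
    taps during playback set the time points, render() adds subtitles."""

    def __init__(self, subtitles):
        self.subtitles = subtitles   # caption text acquisition module
        self.intervals = []          # time point determination module

    def record_tap(self, t):
        # A tap closes the previous subtitle's interval (its disappearance
        # time point) and opens the next subtitle's interval (appearance).
        if self.intervals:
            start, _ = self.intervals[-1]
            self.intervals[-1] = (start, t)
        if len(self.intervals) < len(self.subtitles):
            self.intervals.append((t, None))

    def render(self, duration):
        # Subtitle adding module: pair each sentence with its interval;
        # a still-open interval ends when the video does.
        return [
            (sub, start, duration if end is None else end)
            for sub, (start, end) in zip(self.subtitles, self.intervals)
        ]
```

Two taps at 1.0 s and 4.0 s on a 10-second video with subtitles "Hello" and "World" would produce ("Hello", 1.0, 4.0) and ("World", 4.0, 10.0), mirroring the interaction flow of claims 3 to 5.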
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811125999.7A CN109413478B (en) | 2018-09-26 | 2018-09-26 | Video editing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109413478A true CN109413478A (en) | 2019-03-01 |
CN109413478B CN109413478B (en) | 2020-04-24 |
Family
ID=65466296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811125999.7A Active CN109413478B (en) | 2018-09-26 | 2018-09-26 | Video editing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109413478B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080316370A1 (en) * | 2007-06-19 | 2008-12-25 | Buffalo Inc. | Broadcasting receiver, broadcasting reception method and medium having broadcasting program recorded thereon |
CN101917557A (en) * | 2010-08-10 | 2010-12-15 | 浙江大学 | Method for dynamically adding subtitles based on video content |
CN103179093A (en) * | 2011-12-22 | 2013-06-26 | 腾讯科技(深圳)有限公司 | Matching system and method for video subtitles |
CN105763949A (en) * | 2014-12-18 | 2016-07-13 | 乐视移动智能信息技术(北京)有限公司 | Audio video file playing method and device |
CN105979169A (en) * | 2015-12-15 | 2016-09-28 | 乐视网信息技术(北京)股份有限公司 | Video subtitle adding method, device and terminal |
CN205726069U (en) * | 2016-06-24 | 2016-11-23 | 谭圆圆 | Unmanned vehicle controls end and unmanned vehicle |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109788335A (en) * | 2019-03-06 | 2019-05-21 | 珠海天燕科技有限公司 | Video caption generation method and device |
CN110996167A (en) * | 2019-12-20 | 2020-04-10 | 广州酷狗计算机科技有限公司 | Method and device for adding subtitles in video |
CN112653932A (en) * | 2020-12-17 | 2021-04-13 | 北京百度网讯科技有限公司 | Subtitle generating method, device and equipment for mobile terminal and storage medium |
CN112653932B (en) * | 2020-12-17 | 2023-09-26 | 北京百度网讯科技有限公司 | Subtitle generating method, device, equipment and storage medium for mobile terminal |
CN113422996A (en) * | 2021-05-10 | 2021-09-21 | 北京达佳互联信息技术有限公司 | Subtitle information editing method, device and storage medium |
CN113422996B (en) * | 2021-05-10 | 2023-01-20 | 北京达佳互联信息技术有限公司 | Subtitle information editing method, device and storage medium |
CN114501098A (en) * | 2022-01-06 | 2022-05-13 | 北京达佳互联信息技术有限公司 | Subtitle information editing method and device and storage medium |
CN114501098B (en) * | 2022-01-06 | 2023-09-26 | 北京达佳互联信息技术有限公司 | Subtitle information editing method, device and storage medium |
CN115134659A (en) * | 2022-06-15 | 2022-09-30 | 阿里巴巴云计算(北京)有限公司 | Video editing and configuring method and device, browser, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109413478B (en) | 2020-04-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11030987B2 (en) | Method for selecting background music and capturing video, device, terminal apparatus, and medium | |
CN109413478A (en) | Video editing method, device, electronic equipment and storage medium | |
CN103685728B (en) | Mobile terminal and its control method | |
CN106165430A (en) | Net cast method and device | |
CN105791958A (en) | Method and device for live broadcasting game | |
CN104796781A (en) | Video clip extraction method and device | |
CN110602394A (en) | Video shooting method and device and electronic equipment | |
CN105511857A (en) | System language setting method and device | |
CN104090741A (en) | Statistical method and device for electronic book reading | |
KR20160024002A (en) | Method for providing visual sound image and electronic device implementing the same | |
CN109151537A (en) | Method for processing video frequency, device, electronic equipment and storage medium | |
CN104391711B (en) | A kind of method and device that screen protection is set | |
CN105095345A (en) | Method and device for prompting push message | |
CN105487863A (en) | Interface setting method and device based on scene | |
CN106375782A (en) | Video playing method and device | |
CN110891191B (en) | Material selection method, device and storage medium | |
CN104284240A (en) | Video browsing method and device | |
CN110636382A (en) | Method and device for adding visual object in video, electronic equipment and storage medium | |
CN110267054B (en) | Method and device for recommending live broadcast room | |
CN109951379A (en) | Message treatment method and device | |
CN107820006A (en) | Control the method and device of camera shooting | |
CN105550235A (en) | Information acquisition method and information acquisition apparatuses | |
CN108108671A (en) | Description of product information acquisition method and device | |
CN113411516A (en) | Video processing method and device, electronic equipment and storage medium | |
CN109388699A (en) | Input method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||