CN106454151A - Video image stitching method and device - Google Patents
- Publication number
- CN106454151A CN106454151A CN201610909409.4A CN201610909409A CN106454151A CN 106454151 A CN106454151 A CN 106454151A CN 201610909409 A CN201610909409 A CN 201610909409A CN 106454151 A CN106454151 A CN 106454151A
- Authority
- CN
- China
- Prior art keywords
- video pictures
- captions
- different
- video
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention provides a video picture stitching method. The method comprises: obtaining a plurality of video pictures from a video file; identifying, from the plurality of video pictures, the video pictures whose subtitles differ, wherein among these pictures the subtitle of each picture obtained by a later identification differs from the subtitle of the picture obtained by the previous identification; and stitching the video pictures whose subtitles differ into a single image. The invention also provides a video picture stitching device. The method and device can stitch together the video pictures carrying different subtitles, making it convenient for a user to view and share the subtitles of a video.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a video picture stitching method and device.
Background technology
At present, mobile terminals (such as mobile phones) are used ever more widely. When watching a video, a user sometimes wishes to stitch together the pictures in the video that carry different subtitles. Existing methods generally require the user to capture the video pictures manually and then stitch them with image-editing software, which is time-consuming and inconvenient, and makes it difficult for the user to view and share the subtitles of a video.
Summary of the invention
In view of the above, it is necessary to provide a video picture stitching method that can stitch together the video pictures with different subtitles in a video, making it convenient for a user to view and share the subtitles of the video.
A first aspect of the application provides a video picture stitching method, the method comprising:
obtaining a plurality of video pictures from a video file;
identifying, from the plurality of video pictures, the video pictures whose subtitles differ, wherein among these pictures the subtitle of each picture obtained by a later identification differs from the subtitle of the picture obtained by the previous identification; and
stitching the video pictures whose subtitles differ.
In another possible implementation, obtaining the plurality of video pictures from the video file comprises:
obtaining every frame from the video file; or
obtaining video pictures from the video file at a preset time interval.
In another possible implementation, stitching the video pictures whose subtitles differ comprises:
stitching the video pictures whose subtitles differ according to their chronological order in the video file.
In another possible implementation, stitching the video pictures whose subtitles differ comprises:
stitching the video pictures whose subtitles differ directly; or
cropping the subtitle region from each video picture whose subtitles differ, and stitching the cropped subtitle regions.
In another possible implementation, stitching the video pictures whose subtitles differ comprises:
when the similarity between the video pictures whose subtitles differ is less than a specified value, stitching those video pictures directly; and
when the similarity between the video pictures whose subtitles differ is not less than the specified value, cropping the subtitle region from each of those video pictures and stitching the cropped subtitle regions.
A second aspect of the application provides a video picture stitching device, the device comprising:
an acquiring unit, configured to obtain a plurality of video pictures from a video file;
a recognition unit, configured to identify, from the plurality of video pictures, the video pictures whose subtitles differ, wherein among these pictures the subtitle of each later picture differs from the subtitle of the previous picture; and
a stitching unit, configured to stitch the video pictures whose subtitles differ.
In another possible implementation, the acquiring unit is specifically configured to:
obtain every frame from the video file; or
obtain video pictures from the video file at a preset time interval.
In another possible implementation, the stitching unit is specifically configured to:
stitch the video pictures whose subtitles differ according to their chronological order in the video file.
In another possible implementation, the stitching unit is specifically configured to:
stitch the video pictures whose subtitles differ directly; or
crop the subtitle region from each video picture whose subtitles differ, and stitch the cropped subtitle regions.
In another possible implementation, the stitching unit is specifically configured to:
when the similarity between the video pictures whose subtitles differ is less than a specified value, stitch those video pictures directly; and
when the similarity between the video pictures whose subtitles differ is not less than the specified value, crop the subtitle region from each of those video pictures and stitch the cropped subtitle regions.
The present invention obtains a plurality of video pictures from a video file, identifies from them the video pictures whose subtitles differ, and automatically stitches those video pictures according to preset rules, thereby stitching video pictures quickly and efficiently and making it convenient for a user to view and share the subtitles of a video.
Brief description of the drawings
Fig. 1 is a flowchart of the video picture stitching method provided by embodiment one of the present invention.
Fig. 2 is a schematic diagram of stitching the cropped subtitle regions according to the chronological order of the video pictures with different subtitles in the video file.
Fig. 3 is a schematic diagram of stitching the video pictures with different subtitles from top to bottom.
Fig. 4 is a schematic diagram of stitching the video pictures with different subtitles from left to right.
Fig. 5 is a flowchart of embodiment two of the present invention, illustrating how different stitching approaches are adopted according to the similarity between video pictures.
Fig. 6 is a structural diagram of the video picture stitching device provided by embodiment three of the present invention.
Fig. 7 is a structural diagram of an electronic device implementing the video picture stitching method.
Description of main element symbols
Electronic equipment 1
Video pictures splicing apparatus 10
Storage device 20
Processing equipment 30
Display device 40
Acquiring unit 601
Recognition unit 602
Concatenation unit 603
The following detailed description will further illustrate the present invention with reference to the above drawings.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided they do not conflict, the embodiments of the application and the features of the embodiments may be combined with one another.
Many details are set forth in the following description to facilitate a full understanding of the present invention. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the technical field to which the present invention belongs. The terms used in the description of the present invention are intended only to describe specific embodiments, not to limit the present invention.
Preferably, the video picture stitching method of the present invention is applied in one or more electronic devices. An electronic device is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The electronic device may be, but is not limited to, any electronic product capable of human-machine interaction with a user via a keyboard, mouse, remote control, touchpad, voice-control device or the like, for example a personal computer, tablet computer, smartphone, personal digital assistant (Personal Digital Assistant, PDA), game console, Internet Protocol television (IPTV), or smart wearable device.
Embodiment one
Fig. 1 is a flowchart of the video picture stitching method provided by embodiment one of the present invention. The embodiment illustrates how the video pictures of a video file are stitched. As shown in Fig. 1, the method comprises the following steps:
101: Obtain a plurality of video pictures from a video file.
Every frame may be obtained from the video file. For example, if the length of the video file is 1 minute at 24 frames per second, 24*60=1440 video pictures are obtained.
Video pictures may also be obtained from the video file at a preset time interval. For example, if the length of the video file is 1 minute and a video picture is obtained every 1 second, 60 video pictures are obtained. Because subtitles in a video do not change very quickly, sampling at a time interval does not cause data to be missed. Obtaining video pictures at a preset time interval therefore keeps the subtitles complete while reducing the number of images to be processed, which reduces the subsequent computation and yields the stitched picture quickly.
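The frame-count arithmetic above can be sketched as follows. This is a minimal illustration (not part of the patent); the frame rate, duration and interval are the example values used in the text, and the function name is hypothetical.

```python
def sampled_frame_indices(duration_s, fps, interval_s=None):
    """Return the indices of the frames to grab from a video.

    With interval_s=None every frame is taken; otherwise one frame
    is taken every interval_s seconds (the preset time interval).
    """
    total_frames = int(duration_s * fps)
    if interval_s is None:
        return list(range(total_frames))  # every frame
    step = int(interval_s * fps)          # frames between samples
    return list(range(0, total_frames, step))

# The examples from the text: a 1-minute video at 24 frames per second.
print(len(sampled_frame_indices(60, 24)))     # every frame: 24*60 = 1440
print(len(sampled_frame_indices(60, 24, 1)))  # one per second: 60
```

Sampling at one frame per second reduces the workload by a factor of 24 here while still catching every subtitle, under the text's assumption that subtitles persist for at least the sampling interval.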
In this embodiment, the plurality of video pictures are obtained from a video file. In other embodiments, the plurality of video pictures may be captured from a video being played. For example, the plurality of video pictures are captured from the playing video at a specified time interval, such as capturing one video picture every 1 second.
102: Identify, from the plurality of video pictures, the video pictures whose subtitles differ.
Among the video pictures whose subtitles differ, the subtitle of the picture obtained by a later identification differs from the subtitle of the picture obtained by the previous identification.
The subtitles of the obtained video pictures may be recognized and compared to obtain the video pictures whose subtitles differ. For example, suppose 60 video pictures are obtained from a video file. Following the order in which the 60 pictures are displayed during playback, the first obtained picture is taken as the first identified picture with a distinct subtitle, and the subtitle of the second obtained picture is compared with the subtitle of the first identified picture. If the subtitles differ, the second obtained picture is taken as the second identified picture with a distinct subtitle, and the subtitle of the third obtained picture is then compared with the subtitle of the second identified picture. If instead the subtitle of the second picture is identical to that of the first identified picture, the second picture is deleted, and the subtitle of the third obtained picture is compared with the subtitle of the first identified picture. This continues until all obtained video pictures have been compared.
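The comparison loop described above amounts to keeping a picture only when its subtitle differs from the last picture kept. A minimal sketch, assuming subtitle recognition has already produced one text string per sampled picture (the function name is illustrative, not from the patent):

```python
def keep_distinct_subtitles(subtitles):
    """Keep each picture whose subtitle differs from the last kept one.

    `subtitles` holds the recognized subtitle text of each obtained
    picture in playback order; a picture whose subtitle repeats the
    previously kept subtitle is dropped, as in step 102.
    """
    kept = []
    for text in subtitles:
        if not kept or text != kept[-1]:
            kept.append(text)
    return kept

print(keep_distinct_subtitles(["Hi", "Hi", "Bye", "Bye", "Bye", "Hi"]))
# → ['Hi', 'Bye', 'Hi']
```

Note that each new picture is compared against the most recently kept picture, not its immediate neighbour, exactly as the text specifies when a duplicate is deleted.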
103: Stitch the video pictures whose subtitles differ.
The video pictures whose subtitles differ may be stitched according to their chronological order in the video file. For example, if 50 video pictures with different subtitles are identified, the 50 pictures are stitched according to their chronological order in the video file.
Optionally, in the embodiments of the present invention, the video pictures whose subtitles differ may be stitched in several specific ways; two relatively optimized implementations are presented below:
1. The video pictures whose subtitles differ may be stitched directly. For example, the 50 video pictures with different subtitles are stitched directly according to their chronological order in the video file.
2. Alternatively, the subtitle region may be cropped from each video picture whose subtitles differ, and the cropped subtitle regions are then stitched. For example, the cropped subtitle regions are stitched according to the chronological order of the 50 video pictures in the video file. Fig. 2 is a schematic diagram of this: on the left are the identified video pictures with different subtitles (the upper-right corner of each shows its time in the video file), in the middle are the subtitle regions cropped from those pictures, and on the right is the stitched picture obtained by stitching the cropped subtitle regions.
A stitching direction may also be specified, and the video pictures whose subtitles differ are stitched in the specified direction, for example from top to bottom or from left to right. Fig. 3 is a schematic diagram of stitching the video pictures with different subtitles from top to bottom, and Fig. 4 is a schematic diagram of stitching them from left to right.
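For equally sized pictures, the two stitching directions reduce to stacking image arrays vertically or horizontally. A sketch using NumPy arrays as stand-ins for decoded frames (the patent does not prescribe an implementation; frame sizes here are illustrative):

```python
import numpy as np

def stitch(pictures, direction="vertical"):
    """Stitch equally sized pictures top-to-bottom or left-to-right."""
    if direction == "vertical":
        return np.vstack(pictures)  # Fig. 3: top to bottom
    return np.hstack(pictures)      # Fig. 4: left to right

# Three hypothetical 90x160 RGB pictures with different subtitles.
frames = [np.zeros((90, 160, 3), dtype=np.uint8) for _ in range(3)]
print(stitch(frames, "vertical").shape)    # (270, 160, 3)
print(stitch(frames, "horizontal").shape)  # (90, 480, 3)
```

Top-to-bottom stitching is the natural choice when the stitched picture is scrolled on a phone screen, which may be why Fig. 3's layout is shown first.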
The video picture stitching method of embodiment one obtains a plurality of video pictures from a video file, identifies from them the video pictures whose subtitles differ, and stitches those pictures, thereby stitching the video pictures carrying different subtitles in a video and making it convenient for a user to view and share the subtitles of the video.
Embodiment two
Based on the video picture stitching method provided by embodiment one, the present invention provides embodiment two to illustrate how different stitching approaches are adopted according to the similarity between video pictures. As shown in Fig. 5, the method comprises the following steps:
501: Determine whether the similarity between the video pictures whose subtitles differ is less than a specified value. Various image similarity algorithms may be used to compute the similarity between the video pictures whose subtitles differ, for example the SIFT (Scale-Invariant Feature Transform) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the wavelet transform.
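SIFT, HOG and wavelet-based similarities are substantial algorithms in their own right; as a lightweight stand-in only, the thresholding step can be illustrated with a normalized histogram-intersection score. This is an assumption-laden sketch, not one of the algorithms the patent names:

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """A simple similarity score in [0, 1] between two grayscale pictures,
    using normalized histogram intersection (a lightweight stand-in for
    the SIFT/HOG/wavelet similarities named in the text)."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return float(np.minimum(h_a, h_b).sum())

a = np.full((90, 160), 100, dtype=np.uint8)  # uniform mid-gray picture
b = np.full((90, 160), 100, dtype=np.uint8)  # identical picture
c = np.full((90, 160), 200, dtype=np.uint8)  # disjoint intensities
print(histogram_similarity(a, b))  # identical pictures → 1.0
print(histogram_similarity(a, c))  # no overlapping intensities → 0.0
```

Whatever similarity measure is used, the method only needs a scalar score comparable against the specified value in steps 502 and 503.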
502: If the similarity between the video pictures whose subtitles differ is less than the specified value, stitch those video pictures directly.
A similarity below the specified value indicates that the video pictures with different subtitles are not very similar; in this case the pictures are stitched directly so that each picture is retained in full. For example, the 50 identified video pictures with different subtitles are stitched directly.
503: If the similarity between the video pictures whose subtitles differ is not less than the specified value, crop the subtitle region from each of those video pictures and stitch the cropped subtitle regions.
A similarity not less than (e.g. greater than) the specified value indicates that the video pictures with different subtitles are highly similar; in this case the subtitle region is cropped from each picture and the cropped regions are stitched, so that the subtitle of each picture is retained while the other regions are ignored. For example, a subtitle region is cropped from each of the 50 identified video pictures with different subtitles, and the cropped regions are stitched.
When the cropped subtitle regions are stitched, one of the video pictures whose subtitles differ (such as the first identified picture) may be retained in full, and the subtitle regions cropped from the other pictures are stitched onto the retained picture.
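The retained-picture variant above can be sketched as follows, assuming for illustration that the subtitle region is a fixed-height strip at the bottom of each frame (the patent does not specify how the region is located, and the function name and strip height are hypothetical):

```python
import numpy as np

def stitch_subtitle_regions(pictures, caption_height=18):
    """Retain the first picture in full and append, below it, the
    subtitle region (assumed here to be the bottom `caption_height`
    rows) cropped from each of the other pictures."""
    base = pictures[0]
    strips = [p[-caption_height:, :] for p in pictures[1:]]
    return np.vstack([base] + strips)

# Four hypothetical 90x160 grayscale pictures with different subtitles.
frames = [np.zeros((90, 160), dtype=np.uint8) for _ in range(4)]
out = stitch_subtitle_regions(frames)
print(out.shape)  # (90 + 3*18, 160) = (144, 160)
```

Keeping one full frame preserves the visual context of the video while the stacked strips below it carry only the changing subtitles, which is the data-redundancy saving the summary paragraph refers to.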
The video picture stitching method of embodiment two obtains a plurality of video pictures from a video file and identifies from them the video pictures whose subtitles differ; it stitches those pictures directly when the similarity between them is less than a specified value, and crops and stitches their subtitle regions when the similarity is not less than the specified value. It thus stitches the video pictures carrying different subtitles according to the similarity of the pictures, with little data redundancy, making it convenient for a user to view and share the subtitles of the video.
Embodiment three
Fig. 6 is a structural diagram of the video picture stitching device provided by embodiment three of the present invention. As shown in Fig. 6, the video picture stitching device 10 may include: an acquiring unit 601, a recognition unit 602 and a stitching unit 603.
The acquiring unit 601 is configured to obtain a plurality of video pictures from a video file.
The acquiring unit 601 may obtain every frame from the video file. For example, if the length of the video file is 1 minute at 24 frames per second, 24*60=1440 video pictures are obtained.
The acquiring unit 601 may also obtain video pictures from the video file at a preset time interval. For example, if the length of the video file is 1 minute and a video picture is obtained every 1 second, 60 video pictures are obtained. Because subtitles in a video do not change very quickly, sampling at a time interval does not cause data to be missed; obtaining video pictures at a preset time interval therefore keeps the subtitles complete while reducing the number of images to be processed, which reduces the subsequent computation and yields the stitched picture quickly.
In this embodiment, the acquiring unit 601 obtains the plurality of video pictures from a video file, or captures them from a video being played. For example, the plurality of video pictures are captured from the playing video at a specified time interval, such as capturing one video picture every 1 second.
The recognition unit 602 is configured to identify, from the plurality of video pictures, the video pictures whose subtitles differ.
Among the video pictures whose subtitles differ, the subtitle of the picture obtained by a later identification differs from the subtitle of the picture obtained by the previous identification.
The recognition unit 602 may recognize the subtitles of the obtained video pictures and compare them to obtain the video pictures whose subtitles differ. For example, suppose 60 video pictures are obtained from a video file. Following the order in which the 60 pictures are displayed during playback, the first obtained picture is taken as the first identified picture with a distinct subtitle, and the subtitle of the second obtained picture is compared with the subtitle of the first identified picture. If the subtitles differ, the second obtained picture is taken as the second identified picture with a distinct subtitle, and the subtitle of the third obtained picture is then compared with the subtitle of the second identified picture. If instead the subtitle of the second picture is identical to that of the first identified picture, the second picture is deleted, and the subtitle of the third obtained picture is compared with the subtitle of the first identified picture. This continues until all obtained video pictures have been compared.
The stitching unit 603 is configured to stitch the video pictures whose subtitles differ.
The stitching unit 603 may stitch the video pictures whose subtitles differ according to their chronological order in the video file. For example, if 50 video pictures with different subtitles are identified, the 50 pictures are stitched according to their chronological order in the video file.
The stitching unit 603 may stitch the video pictures whose subtitles differ directly. For example, the 50 identified video pictures with different subtitles are stitched directly according to their chronological order in the video file.
The stitching unit 603 may also crop the subtitle region from each video picture whose subtitles differ and stitch the cropped subtitle regions. For example, a subtitle region is cropped from each of the 50 identified video pictures with different subtitles, and the cropped regions are stitched according to the chronological order of the 50 pictures in the video file.
A stitching direction may also be specified, and the video pictures whose subtitles differ are stitched in the specified direction, for example from top to bottom or from left to right.
Optionally, the stitching unit 603 may select the stitching approach according to the similarity of the video pictures; in that case the stitching unit 603 is specifically configured to: determine whether the similarity between the video pictures whose subtitles differ is less than a specified value; when the similarity is less than the specified value, stitch those video pictures directly; and when the similarity is not less than the specified value, crop the subtitle region from each of those video pictures and stitch the cropped subtitle regions.
The stitching unit 603 may compute the similarity between the video pictures whose subtitles differ using various image similarity algorithms, for example the SIFT (Scale-Invariant Feature Transform) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the wavelet transform.
A similarity below the specified value indicates that the video pictures with different subtitles are not very similar; in this case the pictures are stitched directly so that each picture is retained in full.
A similarity not less than (e.g. greater than) the specified value indicates that the video pictures with different subtitles are highly similar; in this case the subtitle region is cropped from each picture and the cropped regions are stitched, so that the subtitle of each picture is retained while the regions whose other content is identical are omitted.
When the cropped subtitle regions are stitched, one of the video pictures whose subtitles differ (such as the first identified picture) may be retained in full, and the subtitle regions cropped from the other pictures are stitched onto the retained picture.
The video picture stitching device of embodiment three obtains a plurality of video pictures from a video file, identifies from them the video pictures whose subtitles differ, and stitches those pictures, thereby stitching the video pictures carrying different subtitles in a video and making it convenient for a user to view and share the subtitles of the video.
Embodiment 4
Fig. 7 is a structural diagram of an electronic device implementing the video picture splicing method of the present invention. The electronic device 1 includes the video picture splicing apparatus 10, and further includes a storage device 20, a processing device 30, and a display device 40.
Preferably, the video picture splicing method of the present invention is implemented by the video picture splicing apparatus 10 in the electronic device 1.
The electronic device 1 is a device capable of automatically performing numerical computation and/or information processing according to instructions set or stored in advance. Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The electronic device 1 may be, but is not limited to, any electronic product that can interact with a user through a keyboard, mouse, remote control, touchpad, voice-control device, or the like, for example a personal computer, a tablet computer, a smartphone, a personal digital assistant (Personal Digital Assistant, PDA), a game console, an Internet Protocol television (Internet Protocol Television, IPTV), or a smart wearable device.
The network in which the electronic device 1 resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
The storage device 20 is used to store the program code of each program segment of the video picture splicing apparatus 10. The storage device 20 may include a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
The processing device 30 may include one or more microprocessors or digital processors. The processing device 30 executes the program code of each program segment of the video picture splicing apparatus 10 to obtain multiple video pictures from a video file, identify among them the video pictures with different captions, and splice those pictures, thereby stitching together the video pictures bearing different captions in the video and making it convenient for the user to browse and share the captions in the video.
The display device 40 may be a touch screen or another device for displaying pictures, and can display the video pictures.
It should be understood that the apparatus and method disclosed in the several embodiments provided by the present invention may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in an actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically on their own, or two or more of them may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the invention is not limited to the details of the exemplary embodiments above, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. The embodiments should therefore be regarded in every respect as exemplary and non-restrictive, the scope of the present invention being defined by the appended claims rather than by the foregoing description; all changes that fall within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present invention. No reference sign in a claim should be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in a device claim may also be implemented by one and the same unit or device through software or hardware. Words such as "first" and "second" denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments merely illustrate, and do not restrict, the technical solution of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that modifications or equivalent substitutions may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
1. A video picture splicing method, characterized in that the method comprises:
obtaining multiple video pictures from a video file;
identifying, among the multiple video pictures, the video pictures with different captions, wherein the caption of each later-identified video picture differs from the caption of the previously identified video picture;
splicing the video pictures with different captions.
2. The video picture splicing method of claim 1, characterized in that obtaining the multiple video pictures from the video file comprises:
obtaining every frame of video picture from the video file; or
obtaining video pictures from the video file at a preset time interval.
3. The video picture splicing method of claim 1 or 2, characterized in that splicing the video pictures with different captions comprises:
splicing the video pictures with different captions according to their temporal order in the video file.
4. The video picture splicing method of claim 1 or 2, characterized in that splicing the video pictures with different captions comprises:
splicing the video pictures with different captions directly; or
cropping a caption region from each of the video pictures with different captions and splicing the cropped caption regions.
5. The video picture splicing method of claim 4, characterized in that splicing the video pictures with different captions comprises:
when the similarity between the video pictures with different captions is less than a designated value, splicing the video pictures with different captions directly;
when the similarity between the video pictures with different captions is not less than the designated value, cropping a caption region from each of the video pictures with different captions and splicing the cropped caption regions.
6. A video picture splicing apparatus, characterized in that the apparatus comprises:
an acquiring unit, configured to obtain multiple video pictures from a video file;
a recognition unit, configured to identify, among the multiple video pictures, the video pictures with different captions, wherein the caption of each later video picture differs from the caption of the previous video picture;
a concatenation unit, configured to splice the video pictures with different captions.
7. The video picture splicing apparatus of claim 6, characterized in that the acquiring unit is specifically configured to:
obtain every frame of video picture from the video file; or
obtain video pictures from the video file at a preset time interval.
8. The video picture splicing apparatus of claim 6 or 7, characterized in that the concatenation unit is specifically configured to:
splice the video pictures with different captions according to their temporal order in the video file.
9. The video picture splicing apparatus of claim 6 or 7, characterized in that the concatenation unit is specifically configured to:
splice the video pictures with different captions directly; or
crop a caption region from each of the video pictures with different captions and splice the cropped caption regions.
10. The video picture splicing apparatus of claim 9, characterized in that the concatenation unit is specifically configured to:
when the similarity between the video pictures with different captions is less than a designated value, splice the video pictures with different captions directly;
when the similarity between the video pictures with different captions is not less than the designated value, crop a caption region from each of the video pictures with different captions and splice the cropped caption regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610909409.4A CN106454151A (en) | 2016-10-18 | 2016-10-18 | Video image stitching method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610909409.4A CN106454151A (en) | 2016-10-18 | 2016-10-18 | Video image stitching method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106454151A true CN106454151A (en) | 2017-02-22 |
Family
ID=58176705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610909409.4A Withdrawn CN106454151A (en) | 2016-10-18 | 2016-10-18 | Video image stitching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106454151A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107484018A (en) * | 2017-07-31 | 2017-12-15 | 维沃移动通信有限公司 | A kind of video interception method, mobile terminal |
CN108259991A (en) * | 2018-03-14 | 2018-07-06 | 优酷网络技术(北京)有限公司 | Method for processing video frequency and device |
CN108347643A (en) * | 2018-03-05 | 2018-07-31 | 成都索贝数码科技股份有限公司 | A kind of implementation method of the subtitle superposition sectional drawing based on deep learning |
CN108495162A (en) * | 2018-03-14 | 2018-09-04 | 优酷网络技术(北京)有限公司 | Method for processing video frequency and device |
CN108924626A (en) * | 2018-08-17 | 2018-11-30 | 腾讯科技(深圳)有限公司 | Picture Generation Method, device, equipment and storage medium |
CN109040825A (en) * | 2018-10-29 | 2018-12-18 | 北京奇艺世纪科技有限公司 | A kind of subtitle intercept method and device |
CN109146789A (en) * | 2018-08-23 | 2019-01-04 | 北京优酷科技有限公司 | Picture splicing method and device |
CN109618224A (en) * | 2018-12-18 | 2019-04-12 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device, computer readable storage medium and equipment |
CN109803180A (en) * | 2019-03-08 | 2019-05-24 | 腾讯科技(深圳)有限公司 | Video preview drawing generating method, device, computer equipment and storage medium |
CN110248117A (en) * | 2019-06-25 | 2019-09-17 | 新华智云科技有限公司 | Video mosaic generation method, device, electronic equipment and storage medium |
CN110708589A (en) * | 2017-11-30 | 2020-01-17 | 腾讯科技(深圳)有限公司 | Information sharing method and device, storage medium and electronic device |
CN110968391A (en) * | 2019-11-28 | 2020-04-07 | 珠海格力电器股份有限公司 | Screenshot method, screenshot device, terminal equipment and storage medium |
WO2021164326A1 (en) * | 2020-02-17 | 2021-08-26 | 腾讯科技(深圳)有限公司 | Video processing method, apparatus and device, and computer readable storage medium |
CN113766149A (en) * | 2020-08-28 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Splicing method and device for subtitle spliced pictures, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101448100A (en) * | 2008-12-26 | 2009-06-03 | 西安交通大学 | Method for extracting video captions quickly and accurately |
CN101853381A (en) * | 2009-03-31 | 2010-10-06 | 华为技术有限公司 | Method and device for acquiring video subtitle information |
CN102833638A (en) * | 2012-07-26 | 2012-12-19 | 北京数视宇通技术有限公司 | Automatic video segmentation and annotation method and system based on caption information |
CN103634605A (en) * | 2013-12-04 | 2014-03-12 | 百度在线网络技术(北京)有限公司 | Processing method and device for video images |
CN103686418A (en) * | 2013-12-27 | 2014-03-26 | 联想(北京)有限公司 | Information processing method and electronic device |
CN105763949A (en) * | 2014-12-18 | 2016-07-13 | 乐视移动智能信息技术(北京)有限公司 | Audio video file playing method and device |
CN105872810A (en) * | 2016-05-26 | 2016-08-17 | 网易传媒科技(北京)有限公司 | Media content sharing method and device |
- 2016-10-18 CN CN201610909409.4A patent/CN106454151A/en not_active Withdrawn
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101448100A (en) * | 2008-12-26 | 2009-06-03 | 西安交通大学 | Method for extracting video captions quickly and accurately |
CN101853381A (en) * | 2009-03-31 | 2010-10-06 | 华为技术有限公司 | Method and device for acquiring video subtitle information |
CN102833638A (en) * | 2012-07-26 | 2012-12-19 | 北京数视宇通技术有限公司 | Automatic video segmentation and annotation method and system based on caption information |
CN103634605A (en) * | 2013-12-04 | 2014-03-12 | 百度在线网络技术(北京)有限公司 | Processing method and device for video images |
CN103686418A (en) * | 2013-12-27 | 2014-03-26 | 联想(北京)有限公司 | Information processing method and electronic device |
CN105763949A (en) * | 2014-12-18 | 2016-07-13 | 乐视移动智能信息技术(北京)有限公司 | Audio video file playing method and device |
CN105872810A (en) * | 2016-05-26 | 2016-08-17 | 网易传媒科技(北京)有限公司 | Media content sharing method and device |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107484018A (en) * | 2017-07-31 | 2017-12-15 | 维沃移动通信有限公司 | A kind of video interception method, mobile terminal |
CN110708589A (en) * | 2017-11-30 | 2020-01-17 | 腾讯科技(深圳)有限公司 | Information sharing method and device, storage medium and electronic device |
CN108347643A (en) * | 2018-03-05 | 2018-07-31 | 成都索贝数码科技股份有限公司 | A kind of implementation method of the subtitle superposition sectional drawing based on deep learning |
CN108347643B (en) * | 2018-03-05 | 2020-09-15 | 成都索贝数码科技股份有限公司 | Subtitle superposition screenshot realization method based on deep learning |
CN108259991A (en) * | 2018-03-14 | 2018-07-06 | 优酷网络技术(北京)有限公司 | Method for processing video frequency and device |
CN108495162A (en) * | 2018-03-14 | 2018-09-04 | 优酷网络技术(北京)有限公司 | Method for processing video frequency and device |
CN112866785B (en) * | 2018-08-17 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Picture generation method, device, equipment and storage medium |
CN108924626B (en) * | 2018-08-17 | 2021-02-23 | 腾讯科技(深圳)有限公司 | Picture generation method, device, equipment and storage medium |
US11223880B2 (en) | 2018-08-17 | 2022-01-11 | Tencent Technology (Shenzhen) Company Limited | Picture generation method and apparatus, device, and storage medium |
CN108924626A (en) * | 2018-08-17 | 2018-11-30 | 腾讯科技(深圳)有限公司 | Picture Generation Method, device, equipment and storage medium |
TWI776066B (en) * | 2018-08-17 | 2022-09-01 | 大陸商騰訊科技(深圳)有限公司 | Picture generating method, device, terminal, server and storage medium |
CN112866785A (en) * | 2018-08-17 | 2021-05-28 | 腾讯科技(深圳)有限公司 | Picture generation method, device, equipment and storage medium |
CN109146789A (en) * | 2018-08-23 | 2019-01-04 | 北京优酷科技有限公司 | Picture splicing method and device |
CN109040825A (en) * | 2018-10-29 | 2018-12-18 | 北京奇艺世纪科技有限公司 | A kind of subtitle intercept method and device |
CN109618224B (en) * | 2018-12-18 | 2021-03-09 | 腾讯科技(深圳)有限公司 | Video data processing method, device, computer readable storage medium and equipment |
CN109618224A (en) * | 2018-12-18 | 2019-04-12 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device, computer readable storage medium and equipment |
CN112929745A (en) * | 2018-12-18 | 2021-06-08 | 腾讯科技(深圳)有限公司 | Video data processing method, device, computer readable storage medium and equipment |
CN112929745B (en) * | 2018-12-18 | 2022-04-08 | 腾讯科技(深圳)有限公司 | Video data processing method, device, computer readable storage medium and equipment |
CN109803180B (en) * | 2019-03-08 | 2022-05-20 | 腾讯科技(深圳)有限公司 | Video preview generation method and device, computer equipment and storage medium |
CN109803180A (en) * | 2019-03-08 | 2019-05-24 | 腾讯科技(深圳)有限公司 | Video preview drawing generating method, device, computer equipment and storage medium |
CN110248117A (en) * | 2019-06-25 | 2019-09-17 | 新华智云科技有限公司 | Video mosaic generation method, device, electronic equipment and storage medium |
CN110968391A (en) * | 2019-11-28 | 2020-04-07 | 珠海格力电器股份有限公司 | Screenshot method, screenshot device, terminal equipment and storage medium |
WO2021164326A1 (en) * | 2020-02-17 | 2021-08-26 | 腾讯科技(深圳)有限公司 | Video processing method, apparatus and device, and computer readable storage medium |
CN113766149A (en) * | 2020-08-28 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Splicing method and device for subtitle spliced pictures, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106454151A (en) | Video image stitching method and device | |
CN109803180B (en) | Video preview generation method and device, computer equipment and storage medium | |
US9418280B2 (en) | Image segmentation method and image segmentation device | |
US9684818B2 (en) | Method and apparatus for providing image contents | |
US9100616B2 (en) | Relational display of images | |
US7916894B1 (en) | Summary of a video using faces | |
US9582610B2 (en) | Visual post builder | |
US7757172B2 (en) | Electronic equipment and method for displaying images | |
US9881215B2 (en) | Apparatus and method for identifying a still image contained in moving image contents | |
KR102193567B1 (en) | Electronic Apparatus displaying a plurality of images and image processing method thereof | |
CN111757175A (en) | Video processing method and device | |
WO2012124149A1 (en) | Image processing device, image processing method and control program | |
EP2824633A1 (en) | Image processing method and terminal device | |
RU2681364C1 (en) | System and method of hiding objects in a video archive on users requirement | |
JP2016085534A (en) | Image processing apparatus, control method of image processing apparatus, and program | |
US20200092444A1 (en) | Playback method, playback device and computer-readable storage medium | |
CN112822394A (en) | Display control method and device, electronic equipment and readable storage medium | |
CN111954076A (en) | Resource display method and device and electronic equipment | |
CN108268139B (en) | Virtual scene interaction method and device, computer device and readable storage medium | |
CN114598921B (en) | Video frame extraction method, device, terminal equipment and storage medium | |
CN107452067B (en) | Demonstration method and device based on augmented reality and terminal equipment | |
CN109640170B (en) | Speed processing method of self-shooting video, terminal and storage medium | |
CN114125297A (en) | Video shooting method and device, electronic equipment and storage medium | |
CN104469546B (en) | A kind of method and apparatus for handling video segment | |
US20170344204A1 (en) | Systems and methods for detecting and displaying graphic contents on digital media page |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20170222 |