CN104853223A - Video stream intercutting method and terminal equipment - Google Patents


Info

Publication number
CN104853223A
Authority
CN
China
Prior art keywords
indication information, video data, intercut
Prior art date
Legal status
Granted
Application number
CN201510213758.8A
Other languages
Chinese (zh)
Other versions
CN104853223B (en)
Inventor
梁鑫
刘洁
王兴超
Current Assignee
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201510213758.8A
Publication of CN104853223A
Application granted
Publication of CN104853223B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The invention relates to a video stream intercutting method and a terminal device. The method comprises: acquiring intercut indication information in a video stream to be played; acquiring, from a pre-stored information database, first video data corresponding to the intercut indication information; and using the first video data to intercut second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.

Description

Video stream intercutting method and terminal device
Technical Field
The present disclosure relates to the field of video display technology, and in particular to a video stream intercutting method and a terminal device.
Background
Smart terminals are becoming increasingly widespread and have become a major way for users to watch multimedia video. Taking a mobile phone as an example, a user can download video content of interest from the network side and watch it, or watch video content stored locally on the phone.
In the related art, video is played back exactly as it was recorded. If part of the content is to be changed, for example to insert a commercial, the change must be made to the original video stream during production. A user's need for real-time changes cannot be met, so this approach has certain limitations.
Summary
Embodiments of the present disclosure provide a video stream intercutting method and a terminal device. The technical solution is as follows.
According to a first aspect of the embodiments of the present disclosure, a video stream intercutting method is provided. The method includes:
acquiring intercut indication information in a video stream to be played; and
acquiring, from a pre-stored information database, first video data corresponding to the intercut indication information, and applying the first video data to intercut second video data in the video stream that corresponds to the intercut indication information.
According to a second aspect of the embodiments of the present disclosure, a terminal device is provided. The device includes:
a first acquisition module configured to acquire intercut indication information in a video stream to be played;
a second acquisition module configured to acquire, from a pre-stored information database, first video data corresponding to the intercut indication information; and
a playing module configured to apply the first video data to intercut second video data in the video stream that corresponds to the intercut indication information.
According to a third aspect of the embodiments of the present disclosure, a terminal device is provided. The device includes:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire intercut indication information in a video stream to be played;
acquire, from a pre-stored information database, first video data corresponding to the intercut indication information; and
apply the first video data to intercut second video data in the video stream that corresponds to the intercut indication information.
The technical solution provided by the embodiments of the present disclosure may include the following beneficial effects:
Intercut indication information in the video stream to be played is acquired, first video data corresponding to the intercut indication information is acquired from a pre-stored information database, and the first video data is applied to intercut the second video data in the video stream that corresponds to the intercut indication information. As a result, while the video stream is being played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a video stream intercutting method according to an exemplary embodiment;
Fig. 2 is a flowchart of a video stream intercutting method according to another exemplary embodiment;
Fig. 3 is a flowchart of a video stream intercutting method according to another exemplary embodiment;
Fig. 4A is a flowchart of a video stream intercutting method according to another exemplary embodiment;
Fig. 4B shows a terminal device screen displaying the original picture frame;
Fig. 4C shows a terminal device screen displaying the intercut picture frame;
Fig. 5A is a flowchart of a video stream intercutting method according to another exemplary embodiment;
Fig. 5B shows a terminal device screen displaying the original picture frame;
Fig. 5C shows a terminal device screen displaying the intercut picture frame;
Fig. 6 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 12 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 13 is a block diagram of a terminal device according to an exemplary embodiment.
The above drawings illustrate specific embodiments of the present disclosure, which are described in more detail below. These drawings and the textual description are not intended to limit the scope of the disclosure in any way, but rather to illustrate the concepts of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
Fig. 1 is a flowchart of a video stream intercutting method according to an exemplary embodiment. In this embodiment, the method is described as being applied to a terminal device that includes a display screen. The video stream intercutting method may include the following steps.
In step 101, intercut indication information in a video stream to be played is acquired.
The terminal device receives the video stream that the user has designated for playback. The designated video stream may be a video stream received by the terminal device from another network device, or a video stream stored locally on the terminal device in advance.
The terminal device acquires the intercut indication information from the video stream to be played. There are many ways of acquiring the intercut indication information, and many forms that the intercut indication information can take, as illustrated by the following examples.
Example one: the intercut indication information is represented by time information of the video stream. The corresponding acquisition process is as follows.
The terminal device obtains a first identifier of the video stream from the header file of the video stream to be played.
It then looks up a pre-stored intercut time information table according to the first identifier. The intercut time information table stores the correspondence between the first identifiers of video streams and intercut times, so the intercut indication information corresponding to the first identifier can be obtained from the table. This intercut indication information includes an intercut start time and an intercut end time.
It should be noted that the intercut time information table may be stored locally on the terminal device, in which case the terminal device queries the table locally, which improves processing efficiency. Alternatively, the table may be stored on another device, in which case the terminal device queries it through information exchange with that device; this allows all terminal devices to access it centrally and saves processing resources.
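As an illustration of example one, the following Python sketch looks up an intercut time entry by the stream identifier taken from the header file. The table layout, field names, and example values are assumptions made for illustration, not part of the disclosed method.

```python
# Minimal sketch of example one: looking up intercut times by stream identifier.
# The table structure and example identifiers are hypothetical.

from typing import Optional, Tuple

# Pre-stored intercut time information table:
# first identifier of the video stream -> (intercut start time, intercut end time), in seconds
INTERCUT_TIME_TABLE = {
    "stream-0001": (30.0, 45.0),
    "stream-0002": (120.0, 150.0),
}

def get_intercut_times(first_id: str) -> Optional[Tuple[float, float]]:
    """Return (start, end) intercut times for the stream, or None if no entry exists."""
    return INTERCUT_TIME_TABLE.get(first_id)

# Usage: the identifier would come from the header file of the video stream.
times = get_intercut_times("stream-0001")
if times is not None:
    start, end = times
    print(f"Intercut from {start}s to {end}s")
```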
Example two: the intercut indication information is represented by coordinate information on picture frames of the video stream. The corresponding acquisition process is as follows.
The terminal device obtains a second identifier of a picture frame of the video stream to be played.
It then looks up a pre-stored intercut coordinate information table according to the second identifier. The intercut coordinate information table stores the correspondence between the second identifiers of the picture frames in the video stream and intercut coordinates, so the intercut indication information corresponding to the second identifier can be obtained from the table. This intercut indication information includes a plurality of pieces of intercut coordinate information.
It should be noted that the intercut coordinate information table may be stored locally on the terminal device, in which case the terminal device queries the table locally, which improves processing efficiency. Alternatively, the table may be stored on another device, in which case the terminal device queries it through information exchange with that device; this allows all terminal devices to access it centrally and saves processing resources.
Example three: the intercut indication information is represented by pattern feature information on a picture frame of the video stream. The corresponding acquisition process is as follows.
The terminal device obtains pattern feature information of a picture frame of the video stream to be played. The concrete form of the pattern feature information depends on the type of pattern. For example, if examining the pattern content shows that the pattern contains a person's face, the pattern feature information corresponding to that face may include Haar features, FisherFace features, or LBPH features; if the pattern does not contain a face, the corresponding pattern feature information may include features such as a color histogram.
The terminal device then identifies, according to a pre-stored pattern feature database, whether the pattern feature information is intercut indication information pre-designated by the user. The pattern feature database stores sample pattern feature information corresponding to the intercut indication information.
It should be noted that the pattern feature database may be stored locally on the terminal device, in which case the matching is done locally, which improves processing efficiency. Alternatively, the database may be stored on another device, in which case the terminal device performs the matching through information exchange with that device; this allows all terminal devices to access it centrally and saves processing resources.
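A minimal sketch of the matching in example three, assuming color-histogram features and a simple distance threshold; the feature type, threshold value, and function names are illustrative assumptions, not the patented matching procedure.

```python
# Sketch of example three: matching a frame's pattern features against
# pre-stored sample features that stand for user-designated intercut indications.
# The histogram comparison and the threshold are illustrative assumptions.

from typing import List
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized per-channel color histogram of an H x W x 3 image array."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / max(h.sum(), 1.0)

def matches_indication(frame: np.ndarray,
                       sample_features: List[np.ndarray],
                       threshold: float = 0.1) -> bool:
    """True if the frame's features are close enough to any pre-stored sample feature."""
    feat = color_histogram(frame)
    return any(np.abs(feat - sample).sum() < threshold for sample in sample_features)
```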
Example four: the intercut indication information is represented by spectral feature information of the video stream. The corresponding acquisition process is as follows.
The terminal device obtains spectral feature information of the video stream to be played. The spectral feature information of the video stream may include spectral feature information corresponding to the pictures in the video stream, and/or spectral feature information corresponding to the sound.
The terminal device then identifies, according to a pre-stored spectral feature database, whether the spectral feature information is intercut indication information pre-designated by the user. The spectral feature database stores sample spectral feature information corresponding to the intercut indication information.
It should be noted that the spectral feature database may be stored locally on the terminal device, in which case the matching is done locally, which improves processing efficiency. Alternatively, the database may be stored on another device, in which case the terminal device performs the matching through information exchange with that device; this allows all terminal devices to access it centrally and saves processing resources.
In step 102, first video data corresponding to the intercut indication information is acquired from a pre-stored information database.
The terminal device acquires, from the pre-stored information database, the first video data corresponding to the acquired intercut indication information. The first video data is the content to be cut in; that is, while the video stream is being played, the second video data corresponding to the intercut indication information is not presented, and the first video data is presented instead.
The information database stores the correspondence between intercut indication information and first video data. This embodiment does not restrict how this correspondence is stored. For example, entries may be partitioned by the type of intercut indication information: corresponding to the four forms of intercut indication information illustrated in step 101, the database may store the correspondence between time information and first video data, between coordinate information and first video data, between pattern feature information and first video data, and between spectral feature information and first video data. The first video data corresponding to a given piece of intercut indication information can then be retrieved from the information database efficiently.
It should be noted that the information database may be stored locally on the terminal device, in which case the terminal device looks up the first video data locally, which improves processing efficiency. Alternatively, the database may be stored on another device, in which case the terminal device looks up the first video data through information exchange with that device; this allows all terminal devices to access it centrally and saves processing resources.
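A minimal sketch of the step-102 lookup, with the database partitioned by indication type as suggested above; the enum values, key types, and file paths are illustrative assumptions.

```python
# Sketch of step 102: an information database keyed by the type of
# intercut indication information. Keys, types, and paths are hypothetical.

from enum import Enum, auto
from typing import Optional

class IndicationType(Enum):
    TIME = auto()
    COORDINATE = auto()
    PATTERN = auto()
    SPECTRUM = auto()

# indication type -> { indication key -> path of the first video data to cut in }
INFORMATION_DATABASE = {
    IndicationType.TIME: {("stream-0001", 30.0, 45.0): "ads/clip_a.mp4"},
    IndicationType.PATTERN: {"machine_cat_face": "overlays/bear_face.png"},
}

def lookup_first_video(kind: IndicationType, key) -> Optional[str]:
    """Return the stored first video data for this indication, or None."""
    return INFORMATION_DATABASE.get(kind, {}).get(key)

print(lookup_first_video(IndicationType.PATTERN, "machine_cat_face"))
```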
In step 103, the first video data is applied to intercut the second video data in the video stream that corresponds to the intercut indication information.
While playing the video stream, when the terminal device reaches the second video data corresponding to the intercut indication information, it applies the first video data obtained from the information database to perform the intercut, so that the first video data is output to the user. In other words, during video rendering, the intercutting method provided by this embodiment outputs the cut-in content, i.e. the first video data, in real time according to the intercut indication information obtained in advance, thereby presenting personalized video content that meets the user's needs.
In summary, in the video stream intercutting method provided by this embodiment, intercut indication information in the video stream to be played is acquired, first video data corresponding to the intercut indication information is acquired from the pre-stored information database, and the first video data is applied to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
In the embodiment shown in Fig. 1, there are many ways in which the terminal device can apply the first video data to intercut the second video data in the video stream that corresponds to the intercut indication information, for example an overlay intercut mode or a switch intercut mode. The terminal device can choose the intercut mode according to the concrete intercut scenario, which this embodiment does not restrict. For example:
If the intercut indication information indicates that a sub-region of a picture frame of the video stream is to be intercut, the overlay intercut mode may be used; if it indicates that the entire region of consecutive picture frames of the video stream is to be intercut, the switch intercut mode may be used. These are described in detail below with reference to the embodiments shown in Fig. 2 and Fig. 3; a simple dispatch between the two modes is also sketched below.
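The following sketch dispatches between the two intercut modes based on whether the indication covers a sub-region or whole consecutive frames; the data class fields and handler behavior are assumptions made for illustration.

```python
# Sketch of choosing between the overlay and switch intercut modes.
# The Indication fields and the returned descriptions are hypothetical.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Indication:
    # (x, y, w, h) of a sub-region on the frame, or None if whole frames are replaced
    region: Optional[Tuple[int, int, int, int]]
    start_time: Optional[float] = None
    end_time: Optional[float] = None

def intercut(indication: Indication, first_video: str) -> str:
    if indication.region is not None:
        return f"overlay intercut of {first_video} on region {indication.region}"
    return f"switch intercut to {first_video} from {indication.start_time}s to {indication.end_time}s"

print(intercut(Indication(region=None, start_time=30.0, end_time=45.0), "ads/clip_a.mp4"))
```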
Fig. 2 is a flowchart of a video stream intercutting method according to another exemplary embodiment. In this embodiment, the method is again described as being applied to a terminal device that includes a display screen. This embodiment covers the case where the intercut indication information is represented by intercut times and the cut-in content spans the entire region of consecutive picture frames of the video stream, for example an advertisement that lasts for some time; the switch intercut mode is then used. The video stream intercutting method may include the following steps.
In step 201, a first identifier of the video stream is obtained from the header file of the video stream to be played.
In step 202, a pre-stored intercut time information table is looked up according to the first identifier, and the intercut indication information corresponding to the first identifier is obtained; the intercut indication information includes an intercut start time and an intercut end time.
In step 203, first video data corresponding to the intercut indication information is acquired from a pre-stored information database.
In this embodiment, steps 201-203 can be implemented as described for steps 101-102 of the embodiment shown in Fig. 1; the implementation process and technical principles are similar and are not repeated here.
In step 204, while the video stream is being played, playback of the second video data is stopped according to the intercut indication information, and the first video data is played in its place, in synchronization.
While playing the video stream, when the terminal device reaches the second video data corresponding to the intercut start time, it stops playing that second video data and, starting at the intercut start time, cuts in the first video data obtained from the information database. When the intercut end time arrives, it stops playing the first video data and resumes playing the second video data from the position corresponding to the intercut end time. Thus, during video rendering, the cut-in content, i.e. the first video data, is output in real time from the intercut start time to the intercut end time, presenting personalized video content that meets the user's needs.
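A minimal sketch of the switch intercut of step 204, assuming a frame-pull interface on both streams; the `VideoSource` interface and timing values are assumptions, not an actual player API.

```python
# Sketch of the switch intercut mode (step 204): between the intercut start
# and end times, frames come from the first video instead of the main stream.
# The VideoSource interface is hypothetical.

class VideoSource:
    def frame_at(self, t: float):
        """Return the frame of this source at time t (placeholder)."""
        return f"<frame@{t:.2f}>"

def render(frame) -> None:
    pass  # hand the frame to the display pipeline

def play(main: VideoSource, cut_in: VideoSource,
         start: float, end: float, duration: float, fps: float = 25.0) -> None:
    t = 0.0
    while t < duration:
        if start <= t < end:
            frame = cut_in.frame_at(t - start)   # synchronized cut-in content (first video data)
        else:
            frame = main.frame_at(t)             # original content (second video data)
        render(frame)
        t += 1.0 / fps

play(VideoSource(), VideoSource(), start=30.0, end=45.0, duration=60.0)
```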
In summary, in the video stream intercutting method provided by this embodiment, the intercut time indication information in the video stream to be played is acquired, the first video data corresponding to that indication information is acquired from the pre-stored information database, and the first video data is applied, in the switch intercut mode, to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, the switch intercut is performed in real time and personalized video content that meets the user's needs is presented to the user, without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 3 is a flowchart of a video stream intercutting method according to another exemplary embodiment. In this embodiment, the method is again described as being applied to a terminal device that includes a display screen. This embodiment covers the case where the intercut indication information is represented by pattern feature information and the cut-in content is a sub-region of a picture frame of the video stream, for example a person's face; the overlay intercut mode is then used. The video stream intercutting method may include the following steps.
In step 301, pattern feature information of a picture frame of the video stream to be played is obtained.
First, the pattern feature information of the picture frame of the video stream to be played is obtained. Concrete ways of obtaining it are illustrated as follows.
Mode one: using a preset unit window, for example a window 30 pixels long and 30 pixels wide, feature information is extracted region by region over the whole picture frame. For example, if the picture frame is a 900 by 900 pixel picture and a 30 by 30 pixel unit window is used for feature extraction, 900 pieces of feature information need to be extracted. This mode is very general and works for all types of patterns.
Mode two: if the pattern pre-designated by the user as intercut indication information is a person's face, processing models such as a neural-network face recognition model or a classifier comparison model can be used: the facial region is first located in the picture frame, and facial feature information is then extracted from that region only, avoiding the need to extract feature information from every region of the picture frame one by one. For target content whose local region is easy to locate, this mode improves processing efficiency; a sketch of locating a face region is given below.
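As an illustration of mode two, the sketch below locates face regions with an OpenCV Haar cascade before extracting features from them only; the cascade file and parameter values are illustrative choices, not part of the disclosed method.

```python
# Sketch of mode two: locate the facial region first, then restrict
# feature extraction to that region. Parameter values are illustrative.

import cv2

def face_regions(frame_bgr):
    """Return a list of (x, y, w, h) face rectangles found in the frame."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def face_features(frame_bgr):
    """Extract features only from detected face regions (here: the raw face crops)."""
    return [frame_bgr[y:y + h, x:x + w] for (x, y, w, h) in face_regions(frame_bgr)]
```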
In step 302, it is identified, according to a pre-stored pattern feature database, whether the pattern feature information is intercut indication information pre-designated by the user; the pattern feature database stores sample pattern feature information corresponding to the intercut indication information.
The terminal device uses the pattern feature database to identify whether the pattern feature information obtained from the picture frame is intercut indication information designated by the user. Since the pattern feature database stores sample pattern feature information corresponding to the intercut indication information, the terminal device matches the stored sample feature information one by one against the pattern feature information obtained from the picture frame. If the match succeeds, the picture frame contains intercut indication information pre-designated by the user; if the match fails, it does not.
It should be noted that the contents of the pattern feature database may be sample feature information pre-built by the service provider of the video stream. More flexibly, in addition to the pre-built sample feature information, the database may also contain sample pattern feature information generated in real time, for the video stream being delivered to the user, from the intercut indication information designated by the user.
In step 303, first video data corresponding to the intercut indication information is acquired from a pre-stored information database.
In this embodiment, step 303 can be implemented as described for step 102 of the embodiment shown in Fig. 1; the implementation process and technical principles are similar and are not repeated here.
In step 304, a first position region of the second video data corresponding to the intercut indication information is determined on the picture frame of the video stream.
If the terminal device determines that the picture frame contains intercut indication information pre-designated by the user, it uses an image boundary tracking algorithm to obtain the smoothness of the boundary region of the second video data corresponding to the intercut indication information on the picture frame. Image boundary tracking algorithms include binary-based boundary tracking algorithms, wavelet-based boundary tracking algorithms, and so on, and can be selected according to the needs of the actual application; the chosen algorithm yields the smoothness of the boundary of the second video data on the picture frame.
It is then judged whether this smoothness reaches a preset threshold. Note that different image boundary tracking algorithms have different preset thresholds; for example, the threshold corresponding to the binary-based algorithm may be A and the threshold corresponding to the wavelet-based algorithm may be B, so the obtained smoothness is compared against the threshold corresponding to the algorithm actually used. If the smoothness reaches the preset threshold, the region boundary is judged easy to segment, and the boundary of the second video data on the picture frame is used directly as the first position region. If the smoothness does not reach the preset threshold, the region boundary is judged hard to segment; a smoothed region corresponding to the region boundary is then determined according to a preset compensation parameter, and that smoothed region is used as the first position region.
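A minimal sketch of the step-304 decision, assuming the boundary is given as a polygon and smoothness is measured as the mean turning angle between successive edges; the smoothness measure, threshold, and padding compensation are illustrative assumptions.

```python
# Sketch of step 304: keep the detected boundary as the first position region
# if it is smooth enough, otherwise fall back to a padded (compensated) region.
# The smoothness measure and the compensation rule are hypothetical.

import math

def boundary_smoothness(polygon):
    """Mean turning angle (radians) between consecutive edges; smaller = smoother boundary."""
    n = len(polygon)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = polygon[i - 1], polygon[i], polygon[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        angles.append(abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi))
    return sum(angles) / n

def first_position_region(polygon, threshold=0.5, pad=5):
    if boundary_smoothness(polygon) <= threshold:
        return polygon  # boundary is easy to segment: use it directly
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    # Hard to segment: use a compensated (padded) bounding region instead.
    return [(min(xs) - pad, min(ys) - pad), (max(xs) + pad, min(ys) - pad),
            (max(xs) + pad, max(ys) + pad), (min(xs) - pad, max(ys) + pad)]
```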
In step 305, according to the first position region, a second position region is determined on the screen used to display the picture frame, at which the second video data is correspondingly displayed.
Based on the first position region of the second video data corresponding to the intercut indication information on the picture frame, the terminal device determines the second position region on the screen displaying the picture frame, i.e. the region where the target content is correspondingly displayed. There are many ways of determining the second position region on the screen from the first position region on the picture frame, illustrated as follows.
Mode one:
The picture frame is first scaled, with the first position region scaled synchronously.
When the picture frame has been scaled to the screen size, the scaled first position region information is recorded; this information can serve as the second position region, on the screen displaying the picture frame, at which the target content is correspondingly displayed.
Mode two:
A plurality of first coordinates on the first position region are obtained first. For example, if the first position region is a square, the first coordinates corresponding to it may be the coordinates of its four corners; if the first position region is a circle, the first coordinates corresponding to it may be the coordinates of the intersections of at least two diameters with the circular boundary.
According to the size ratio between the picture frame and the screen, the first coordinates on the first position region are scaled proportionally to obtain a plurality of second coordinates corresponding to the first coordinates.
The second position region, on the screen displaying the picture frame, at which the target content is correspondingly displayed can then be determined from the second coordinates. A sketch of this proportional mapping follows.
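The following sketch illustrates mode two of step 305: mapping first coordinates on the picture frame to second coordinates on the screen by the size ratio. The frame and screen sizes are example values.

```python
# Sketch of mode two of step 305: scale first-position coordinates on the
# picture frame into second-position coordinates on the screen.
# Frame and screen dimensions are example values.

def frame_to_screen(coords, frame_size, screen_size):
    """Map (x, y) points from frame pixel space to screen pixel space proportionally."""
    fw, fh = frame_size
    sw, sh = screen_size
    return [(x * sw / fw, y * sh / fh) for (x, y) in coords]

# First coordinates: the four corners of a square first position region on a 900 x 900 frame.
first_coords = [(300, 300), (600, 300), (600, 600), (300, 600)]
second_coords = frame_to_screen(first_coords, frame_size=(900, 900), screen_size=(1080, 1920))
print(second_coords)  # defines the second position region on the screen
```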
In step 306, a user interface (UI) layer is generated, and the first video data is drawn in the region of the UI layer that corresponds to the second position region.
The terminal device uses a UI control to generate a new blank user interface (UI) layer.
The file storing the first video data is then parsed to obtain a UI element of the first video data, and this UI element is added to the blank UI layer at the part coinciding with the second position region of the screen, where the first video data is to be shown.
In step 307, while the video stream is being played, the UI layer is overlaid on the second video data according to the intercut indication information.
While the terminal device is playing the video stream, when the picture frame is displayed on the screen, the UI layer, on which the first video data has been drawn at the part coinciding with the second position region of the screen, is overlaid on the picture frame. The first video data thereby covers the second video data corresponding to the intercut indication information, and personalized video content that meets the user's needs is presented to the user.
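A minimal sketch of steps 306-307 using Pillow: the first video data is drawn onto a transparent layer at the second position region and composited over the frame. The file names and region are illustrative, and a real player would do this per frame on its own UI stack rather than with still images.

```python
# Sketch of steps 306-307: draw the first video data on a blank UI layer at
# the second position region, then overlay the layer on the displayed frame.
# File names and the region are placeholders.

from PIL import Image

def overlay_intercut(frame_path, first_video_frame_path, second_region, screen_size):
    """second_region = (left, top, right, bottom) on the screen."""
    frame = Image.open(frame_path).convert("RGBA").resize(screen_size)
    # Blank (fully transparent) UI layer the size of the screen.
    ui_layer = Image.new("RGBA", screen_size, (0, 0, 0, 0))
    left, top, right, bottom = second_region
    patch = Image.open(first_video_frame_path).convert("RGBA").resize((right - left, bottom - top))
    ui_layer.paste(patch, (left, top))
    # Overlay the UI layer on the frame: the first video data covers the second video data.
    return Image.alpha_composite(frame, ui_layer)

# composited = overlay_intercut("frame.png", "bear_face.png", (300, 300, 600, 600), (1080, 1920))
```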
In summary, in the video stream intercutting method provided by this embodiment, the intercut indication information in the video stream to be played is acquired, the first video data corresponding to that indication information is acquired from the pre-stored information database, and the first video data is applied, in the overlay intercut mode, to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, the overlay intercut is performed in real time and personalized video content that meets the user's needs is presented to the user, without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
There are several ways of generating the UI layer and performing the overlay in the embodiment shown in Fig. 3. Different UI layer processing techniques can be selected, for example according to the proportion of the picture frame occupied by the second video data corresponding to the intercut indication information, or according to its layout, so as to improve processing efficiency. These are described in detail below with reference to the embodiments shown in Fig. 4 and Fig. 5.
Fig. 4A is a flowchart of a video stream intercutting method according to another exemplary embodiment. In this embodiment, the method is again described as being applied to a terminal device that includes a display screen. This embodiment covers the application scenario in which the intercut indication information designated by the user is a first person's face and the distribution region of that face on the picture frame is unique; it is implemented with the local processing mode of the UI layer. The video stream intercutting method may include the following steps.
In step 401, the facial region on a picture frame of the video stream to be played is determined according to a facial feature range obtained by training in advance.
Features corresponding to a preset unit window are extracted from the picture frame, and it is judged whether each feature falls within the facial feature range obtained by training in advance. If a feature falls within this range, the region corresponding to it is a facial region; if it does not, the region corresponding to it is not a facial region. The facial region on the picture frame can thus be located quickly. The facial features may include Haar features, FisherFace features, or LBPH features, selected according to the needs of the application.
In step 402, facial features are extracted from the facial region.
Facial feature extraction is performed on parts of the facial region such as the outline, eyebrows, eyes, nose, and lips.
In step 403, the facial features are matched against the sample facial features in the pattern feature database that correspond to the intercut indication information pre-designated by the user.
In step 404, if it is determined that the intercut indication information is present, the first position region of the second video data corresponding to the intercut indication information on the picture frame is determined.
In step 405, according to the first position region, a second position region is determined on the screen used to display the picture frame, at which the second video data is correspondingly displayed.
In step 406, a UI layer whose boundary coincides with the boundary of the second position region is generated, and the first video data is drawn over the whole UI layer.
The terminal device uses a UI control to generate a new blank UI layer whose region boundary coincides with the boundary of the second position region. The file storing the first video data is then parsed to obtain a UI element of the first video data, and this UI element is added over the whole blank UI layer.
In step 407, while the video stream is being played, the UI layer is fitted over the second position region according to the intercut indication information.
While the terminal device is playing the video stream, when the picture frame is displayed on the screen, the UI layer is fitted over the second position region used to display the picture frame, so that the first video data covers the second video data and personalized video content that meets the user's needs is presented to the user.
As an example, the screen of the terminal device shown in Fig. 4B displays the original picture frame, and the screen of the terminal device shown in Fig. 4C displays the intercut picture frame, as shown in Fig. 4B and Fig. 4C.
Suppose the second video data corresponding to the intercut indication information designated by the user is the "machine cat face" on the picture frame, and the first video data is a "little bear face". Concretely, the sample facial features corresponding to the "machine cat face" in the pattern feature database are matched against the facial features extracted from the facial region. If the match succeeds, the facial region is judged to be the "machine cat face"; the file storing the "little bear face" is then parsed to obtain a UI element, and this UI element is added to a blank UI layer whose boundary coincides with the boundary of the second position region.
While the terminal device is playing the video stream, when the picture frame is displayed on the screen, the UI layer is fitted over the machine cat face region on the picture frame, so that the "little bear face" covers the "machine cat face" and personalized video content that meets the user's needs is presented to the user.
In summary, the video stream intercutting method provided by this embodiment uses the local processing mode of the UI layer to implement the overlay intercut. Without tampering with the video stream data, personalized video content that meets the user's needs can be presented to the user in real time, which improves processing efficiency and saves processing resources.
Fig. 5A is a flowchart of a video stream intercutting method according to another exemplary embodiment. In this embodiment, the method is again described as being applied to a terminal device that includes a display screen. This embodiment covers the application scenario in which the intercut indication information designated by the user consists of multiple patterns whose distribution regions on the picture frame are scattered; it is implemented with the whole-layer processing mode of the UI layer. The video stream intercutting method may include the following steps.
In step 501, pattern regions on a picture frame of the video stream to be played are determined according to a boundary contour algorithm.
All pattern regions on the picture frame are determined based on the boundary contour algorithm.
In step 502, pattern features are extracted from the pattern regions.
Pattern features are extracted from the pattern regions; the pattern features include color histograms or gradient histograms.
In step 503, the pattern features are matched against the sample pattern features in the pattern feature database that correspond to the intercut indication information pre-designated by the user.
In step 504, if it is determined that the intercut indication information is present, the first position regions of the second video data corresponding to the intercut indication information on the picture frame are determined.
In step 505, according to the first position regions, second position regions are determined on the screen used to display the picture frame, at which the second video data is correspondingly displayed.
In step 506, a UI layer whose boundary coincides with the screen boundary is generated; the first video data is drawn in the third position region of the UI layer that coincides with the second position region, and the part of the UI layer outside the third position region is made transparent.
The terminal device uses a UI control to generate a new blank UI layer whose region boundary coincides with the screen boundary. The file storing the first video data is then parsed to obtain a UI element of the first video data, and this UI element is added to the UI layer in the third position region that coincides with the second position region on the screen, while the part of the UI layer outside the third position region is made transparent.
In step 507, while the video stream is being played, the whole UI layer is overlaid on the screen according to the intercut indication information.
While the terminal device is playing the video stream, when the picture frame is displayed on the screen, the whole UI layer is overlaid on the picture frame, so that the first video data covers the second video data and personalized video content that meets the user's needs is presented to the user.
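A minimal sketch of the whole-layer mode of steps 506-507, again using Pillow: one screen-sized transparent layer carries the cut-in content at each third position region and is composited over the frame in a single pass. Regions and file names are illustrative assumptions.

```python
# Sketch of steps 506-507: a single screen-sized UI layer, transparent except
# at the third position regions, overlaid on the frame in one pass.
# Regions and file names are placeholders.

from PIL import Image

def whole_layer_intercut(frame_path, replacements, screen_size):
    """replacements: list of (first_video_image_path, (left, top, right, bottom))."""
    frame = Image.open(frame_path).convert("RGBA").resize(screen_size)
    ui_layer = Image.new("RGBA", screen_size, (0, 0, 0, 0))  # transparent outside the regions
    for path, (left, top, right, bottom) in replacements:
        patch = Image.open(path).convert("RGBA").resize((right - left, bottom - top))
        ui_layer.paste(patch, (left, top))
    return Image.alpha_composite(frame, ui_layer)

# composited = whole_layer_intercut(
#     "frame.png",
#     [("mermaid_tail.png", (100, 800, 500, 1500)), ("copter_head.png", (600, 100, 900, 400))],
#     (1080, 1920),
# )
```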
As an example, the screen of the terminal device shown in Fig. 5B displays the original picture frame, and the screen of the terminal device shown in Fig. 5C displays the intercut picture frame, as shown in Fig. 5B and Fig. 5C.
Suppose the intercut indication information designated by the user includes a first pattern and a second pattern: on this picture frame, the second video data corresponding to the first pattern is the "lower body of the health husband" character and the corresponding first video data is a "mermaid tail", while the second video data corresponding to the second pattern is the "top of the machine cat's head" and the corresponding first video data is the "top of the machine cat's head with an aircraft". Concretely, the sample pattern features in the pattern feature database that correspond to the first pattern and the second pattern are matched against the pattern features extracted from the pattern regions. If the match succeeds, the pattern regions are judged to be the "lower body of the health husband" and the "top of the machine cat's head"; the files storing the "mermaid tail" and "top of the machine cat's head with an aircraft" patterns are then parsed to obtain UI elements, and these UI elements are added to the UI layer in the third position regions that coincide with the second position regions on the screen, while the part outside the third position regions is made transparent.
While the terminal device is playing the video stream, when the picture frame is displayed on the screen, the whole UI layer is overlaid on the picture frame, so that the "mermaid tail" pattern covers the "lower body of the health husband" pattern and the "top of the machine cat's head with an aircraft" pattern covers the "top of the machine cat's head" pattern, and personalized video content that meets the user's needs is presented to the user.
In summary, the video stream intercutting method provided by this embodiment uses the whole-layer processing mode of the UI layer to implement the overlay intercut. Without tampering with the video stream data, personalized video content that meets the user's needs can be presented to the user in real time, which improves processing efficiency and saves processing resources.
The following are device embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the present disclosure.
Fig. 6 is a block diagram of a terminal device according to an exemplary embodiment. As shown in Fig. 6, the terminal device includes a first acquisition module 11, a second acquisition module 12, and a playing module 13.
The first acquisition module 11 is configured to acquire intercut indication information in a video stream to be played.
The second acquisition module 12 is configured to acquire, from a pre-stored information database, first video data corresponding to the intercut indication information.
The playing module 13 is configured to apply the first video data to intercut second video data in the video stream that corresponds to the intercut indication information.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
With the terminal device provided by this embodiment, intercut indication information in the video stream to be played is acquired, first video data corresponding to the intercut indication information is acquired from the pre-stored information database, and the first video data is applied to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 7, based on the embodiment shown in Fig. 6, the first acquisition module 11 includes a first acquiring unit 111 and a first query unit 112.
The first acquiring unit 111 is configured to obtain a first identifier of the video stream from the header file of the video stream to be played.
The first query unit 112 is configured to look up the pre-stored intercut time information table according to the first identifier and obtain the intercut indication information corresponding to the first identifier; the intercut indication information includes an intercut start time and an intercut end time.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 8, based on the embodiment shown in Fig. 6, the first acquisition module 11 includes a second acquiring unit 113 and a second query unit 114.
The second acquiring unit 113 is configured to obtain a second identifier of a picture frame of the video stream to be played.
The second query unit 114 is configured to look up the pre-stored intercut coordinate information table according to the second identifier and obtain the intercut indication information corresponding to the second identifier; the intercut indication information includes a plurality of pieces of intercut coordinate information.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 9, based on the embodiment shown in Fig. 6, the first acquisition module 11 includes a third acquiring unit 115 and a first recognition unit 116.
The third acquiring unit 115 is configured to obtain pattern feature information of a picture frame of the video stream to be played.
The first recognition unit 116 is configured to identify, according to a pre-stored pattern feature database, whether the pattern feature information is intercut indication information pre-designated by the user; the pattern feature database stores sample pattern feature information corresponding to the intercut indication information.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 10, based on the embodiment shown in Fig. 6, the first acquisition module 11 includes a fourth acquiring unit 117 and a second recognition unit 118.
The fourth acquiring unit 117 is configured to obtain spectral feature information of the video stream to be played.
The second recognition unit 118 is configured to identify, according to a pre-stored spectral feature database, whether the spectral feature information is intercut indication information pre-designated by the user; the spectral feature database stores sample spectral feature information corresponding to the intercut indication information.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 11, based on the embodiment shown in Fig. 6, the playing module 13 includes a control unit 131 and a first intercut unit 132.
The control unit 131 is configured to stop playing the second video data according to the intercut indication information while the video stream is being played.
The first intercut unit 132 is configured to synchronously play the first video data.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
In summary, with the terminal device provided by this embodiment, the intercut time indication information in the video stream to be played is acquired, the first video data corresponding to that indication information is acquired from the pre-stored information database, and the first video data is applied, in the switch intercut mode, to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, the switch intercut is performed in real time and personalized video content that meets the user's needs is presented to the user, without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 12 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 12, based on the embodiment shown in Fig. 6, the playing module 13 includes a first positioning unit 133, a second positioning unit 134, a drawing unit 135, and a second intercut unit 136.
The first positioning unit 133 is configured to determine, on a picture frame of the video stream, the first position region of the second video data corresponding to the intercut indication information on that picture frame.
The second positioning unit 134 is configured to determine, according to the first position region, a second position region on the screen used to display the picture frame, at which the second video data is correspondingly displayed.
The drawing unit 135 is configured to generate a user interface (UI) layer, where the first video data is drawn in the region of the UI layer that corresponds to the second position region.
The second intercut unit 136 is configured to overlay the UI layer on the second video data according to the intercut indication information while the video stream is being played.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference can be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
In summary, with the terminal device provided by this embodiment, the intercut indication information in the video stream to be played is acquired, the first video data corresponding to that indication information is acquired from the pre-stored information database, and the first video data is applied, in the overlay intercut mode, to intercut the second video data in the video stream that corresponds to the intercut indication information. While the video stream is being played, the overlay intercut is performed in real time and personalized video content that meets the user's needs is presented to the user, without tampering with the video stream data. This avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Further, the first positioning unit 133 is configured to:
detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the second video data reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, take the region boundary corresponding to the second video data as the first position region;
if it is determined that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and take the smooth region as the first position region.
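Purely as an illustrative sketch (the embodiments do not prescribe a particular boundary-tracking algorithm or smoothness measure), the threshold check and the fallback to a smooth region could look as follows in Python. The smoothness metric (mean absolute turning angle along the traced boundary) and the bounding-box fallback are assumptions made for this example.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def boundary_smoothness(boundary: List[Point]) -> float:
    # Score in [0, 1]: 1.0 means no direction changes along the boundary,
    # lower values mean a more jagged contour (illustrative metric only).
    if len(boundary) < 3:
        return 1.0
    turns = []
    for a, b, c in zip(boundary, boundary[1:], boundary[2:]):
        ang1 = math.atan2(b[1] - a[1], b[0] - a[0])
        ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
        diff = abs(ang2 - ang1)
        turns.append(min(diff, 2 * math.pi - diff))  # turning angle at point b
    return 1.0 - (sum(turns) / len(turns)) / math.pi

def first_position_region(boundary: List[Point], threshold: float = 0.8) -> List[Point]:
    # If the boundary is smooth enough, use it directly as the first position
    # region; otherwise fall back to the axis-aligned bounding box as one
    # possible interpretation of the corresponding "smooth region".
    if boundary_smoothness(boundary) >= threshold:
        return boundary
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (max(xs), max(ys)), (min(xs), max(ys))]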
Further, the second positioning unit 134 is configured to:
scale, according to the size ratio between the picture frame and the screen, a plurality of first coordinates on the first position region, to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
determine the second position region on the screen according to the plurality of second coordinates.
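As a minimal sketch of the proportional adjustment described above (assuming simple axis-aligned scaling between the picture frame and the screen; the function and parameter names are illustrative):

from typing import List, Tuple

Point = Tuple[float, float]

def map_frame_to_screen(first_coords: List[Point],
                        frame_size: Tuple[int, int],
                        screen_size: Tuple[int, int]) -> List[Point]:
    # Scale each first coordinate on the picture frame by the frame-to-screen
    # size ratio to obtain the corresponding second coordinate on the screen.
    fw, fh = frame_size
    sw, sh = screen_size
    return [(x * sw / fw, y * sh / fh) for x, y in first_coords]

# Example: a 1920x1080 picture frame shown on a 1280x720 screen.
second_coords = map_frame_to_screen([(480.0, 270.0), (960.0, 540.0)],
                                    frame_size=(1920, 1080),
                                    screen_size=(1280, 720))
# second_coords == [(320.0, 180.0), (640.0, 360.0)]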
Further, the drawing unit 135 is configured to:
generate a UI layer that coincides with the boundary of the second position region, wherein the first video data is drawn over the whole UI layer, so that during playback of the video stream the entire UI layer is overlaid on the second position region according to the intercut indication information.
In summary, the terminal equipment provided by this embodiment realizes overlay intercutting through local processing of the UI layer: without tampering with the video stream data, personalized video content meeting the user's needs can be presented to the user in real time, which improves processing efficiency and saves processing resources.
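A minimal sketch of this local-processing variant, under the assumption that frames and the UI layer are represented as simple RGBA pixel grids (a representation chosen only for this illustration):

from typing import List, Tuple

RGBA = Tuple[int, int, int, int]

def build_region_ui_layer(first_frame: List[List[RGBA]],
                          region_w: int, region_h: int) -> List[List[RGBA]]:
    # Create a UI layer exactly the size of the second position region and
    # draw the first video data over the whole layer (nearest-neighbour resize).
    src_h, src_w = len(first_frame), len(first_frame[0])
    return [[first_frame[y * src_h // region_h][x * src_w // region_w]
             for x in range(region_w)]
            for y in range(region_h)]

Because the layer only spans the second position region, pixels outside that region never need to be generated or composited, which is what saves processing resources in this variant.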
Further, the drawing unit 135 is configured to:
generate a UI layer that coincides with the screen border, wherein the first video data is drawn in a third position region of the UI layer corresponding to the second position region, and the part outside the third position region is made transparent, so that during playback of the video stream the entire UI layer is overlaid on the screen according to the intercut indication information.
In summary, the terminal equipment provided by this embodiment realizes overlay intercutting through whole-layer processing of the UI layer: without tampering with the video stream data, personalized video content meeting the user's needs can be presented to the user in real time, which improves processing efficiency and saves processing resources.
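A comparable sketch of the whole-layer variant, under the same assumed RGBA-grid representation: the UI layer spans the whole screen, the first video data is drawn only inside the third position region, and everything outside that region is left fully transparent (alpha 0).

from typing import List, Tuple

RGBA = Tuple[int, int, int, int]
TRANSPARENT: RGBA = (0, 0, 0, 0)

def build_fullscreen_ui_layer(first_frame: List[List[RGBA]],
                              screen_w: int, screen_h: int,
                              region: Tuple[int, int, int, int]) -> List[List[RGBA]]:
    # Create a screen-sized UI layer; draw the first video data inside the
    # third position region (x, y, width, height) and leave the rest transparent.
    # The region is assumed to lie entirely within the screen.
    rx, ry, rw, rh = region
    src_h, src_w = len(first_frame), len(first_frame[0])
    layer = [[TRANSPARENT for _ in range(screen_w)] for _ in range(screen_h)]
    for y in range(rh):
        for x in range(rw):
            layer[ry + y][rx + x] = first_frame[y * src_h // rh][x * src_w // rw]
    return layer

Since the layer matches the screen border, it can be composited over the screen as a single overlay; the transparent margin keeps the rest of the stream visible.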
Figure 13 is a block diagram of a terminal equipment according to an exemplary embodiment. For example, the terminal equipment 1300 may be a mobile phone, a computer, a tablet device, and the like.
Referring to Figure 13, the terminal equipment 1300 may comprise one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 typically controls the overall operation of the terminal equipment 1300, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 1302 may comprise one or more processors 1320 to execute instructions so as to perform all or part of the steps of the above methods. In addition, the processing component 1302 may comprise one or more modules to facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may comprise a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support the operation of the terminal equipment 1300. Examples of such data include instructions for any application or method operated on the terminal equipment 1300, contact data, phonebook data, messages, pictures, videos, etc. The memory 1304 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The power component 1306 provides power to the various components of the terminal equipment 1300. The power component 1306 may comprise a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal equipment 1300.
The multimedia component 1308 includes a touch display screen providing an output interface between the terminal equipment 1300 and the user. In some embodiments, the touch display screen may include a liquid crystal display (LCD) and a touch panel (TP). The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the terminal equipment 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external video data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC), which is configured to receive external audio signals when the terminal equipment 1300 is in an operation mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 1304 or sent via the communication component 1316. In some embodiments, the audio component 1310 also includes a loudspeaker configured to output audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, such as a keyboard, a click wheel, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, a start button and a lock button.
The sensor component 1314 includes one or more sensors configured to provide status assessments of various aspects of the terminal equipment 1300. For example, the sensor component 1314 may detect the open/closed state of the terminal equipment 1300 and the relative positioning of components, such as the display and keypad of the terminal equipment 1300; the sensor component 1314 may also detect a change in position of the terminal equipment 1300 or of a component of the terminal equipment 1300, the presence or absence of user contact with the terminal equipment 1300, the orientation or acceleration/deceleration of the terminal equipment 1300, and a change in temperature of the terminal equipment 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, configured for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal equipment 1300 and other devices. The terminal equipment 1300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal equipment 1300 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, configured to perform the above video stream intercutting method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 1304 comprising instructions, which can be executed by the processor 1320 of the terminal equipment 1300 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
There is also provided a non-transitory computer-readable storage medium such that, when the instructions in the storage medium are executed by the processor of the terminal equipment 1300, the terminal equipment 1300 is enabled to perform the video stream intercutting method described above.
Those skilled in the art, after considering the specification and practising the invention disclosed herein, will readily conceive of other embodiments of the present disclosure. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

1. A video stream intercutting method, characterized in that the method comprises:
obtaining intercut indication information in a video stream to be played;
obtaining, from a pre-stored information database, first video data corresponding to the intercut indication information;
using the first video data to intercut second video data in the video stream corresponding to the intercut indication information.
2. The method according to claim 1, characterized in that obtaining the intercut indication information in the video stream to be played comprises:
obtaining first identification information of the video stream from a header file of the video stream to be played;
searching a pre-stored intercut time information table according to the first identification information, and obtaining the intercut indication information corresponding to the first identification information, wherein the intercut indication information comprises an intercut start time and an intercut end time.
3. The method according to claim 1, characterized in that obtaining the intercut indication information in the video stream to be played comprises:
obtaining second identification information of a picture frame of the video stream to be played;
searching a pre-stored intercut coordinate information table according to the second identification information, and obtaining the intercut indication information corresponding to the second identification information, wherein the intercut indication information comprises a plurality of intercut coordinates.
4. The method according to claim 1, characterized in that obtaining the intercut indication information in the video stream to be played comprises:
obtaining pattern feature information of a picture frame of the video stream to be played;
identifying, according to a pre-stored pattern feature database, whether the pattern feature information is intercut indication information pre-specified by the user; wherein the pattern feature database comprises sample pattern feature information corresponding to the intercut indication information.
5. The method according to claim 1, characterized in that obtaining the intercut indication information in the video stream to be played comprises:
obtaining spectrum feature information of the video stream to be played;
identifying, according to a pre-stored spectrum feature database, whether the spectrum feature information is intercut indication information pre-specified by the user; wherein the spectrum feature database comprises sample spectrum feature information corresponding to the intercut indication information.
6. The method according to any one of claims 1-5, characterized in that using the first video data to intercut the second video data in the video stream corresponding to the intercut indication information comprises:
during playback of the video stream, stopping playing the second video data according to the intercut indication information, and synchronously playing the first video data.
7. The method according to any one of claims 1-5, characterized in that using the first video data to intercut the second video data in the video stream corresponding to the intercut indication information comprises:
determining, on a picture frame of the video stream, a first position region occupied on the picture frame by the second video data corresponding to the intercut indication information;
determining, according to the first position region, a second position region on the screen displaying the picture frame, in which the second video data is correspondingly displayed;
generating a user interface (UI) layer, wherein the first video data is drawn in a region of the UI layer corresponding to the second position region;
during playback of the video stream, overlaying the UI layer on the second video data according to the intercut indication information.
8. The method according to claim 7, characterized in that determining, on the picture frame of the video stream, the first position region occupied on the picture frame by the second video data corresponding to the intercut indication information comprises:
detecting, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the second video data reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, taking the region boundary corresponding to the second video data as the first position region;
if it is determined that the smoothness does not reach the threshold, determining a smooth region corresponding to the region boundary, and taking the smooth region as the first position region.
9. The method according to claim 7, characterized in that determining, according to the first position region, the second position region on the screen displaying the picture frame, in which the second video data is correspondingly displayed, comprises:
scaling, according to the size ratio between the picture frame and the screen, a plurality of first coordinates on the first position region, to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
determining the second position region on the screen according to the plurality of second coordinates.
10. The method according to any one of claims 7-9, characterized in that generating the user interface (UI) layer comprises:
generating a UI layer that coincides with the boundary of the second position region, wherein the first video data is drawn over the whole UI layer, so that during playback of the video stream the entire UI layer is overlaid on the second position region according to the intercut indication information.
11. The method according to any one of claims 7-9, characterized in that generating the user interface (UI) layer comprises:
generating a UI layer that coincides with the screen border, wherein the first video data is drawn in a third position region of the UI layer corresponding to the second position region, and the part outside the third position region is made transparent, so that during playback of the video stream the entire UI layer is overlaid on the screen according to the intercut indication information.
12. A terminal equipment, characterized in that the equipment comprises:
a first acquisition module, configured to obtain intercut indication information in a video stream to be played;
a second acquisition module, configured to obtain, from a pre-stored information database, first video data corresponding to the intercut indication information;
a playing module, configured to use the first video data to intercut second video data in the video stream corresponding to the intercut indication information.
13. The equipment according to claim 12, characterized in that the first acquisition module comprises:
a first acquiring unit, configured to obtain first identification information of the video stream from a header file of the video stream to be played;
a first query unit, configured to search a pre-stored intercut time information table according to the first identification information and obtain the intercut indication information corresponding to the first identification information, wherein the intercut indication information comprises an intercut start time and an intercut end time.
14. The equipment according to claim 12, characterized in that the first acquisition module comprises:
a second acquiring unit, configured to obtain second identification information of a picture frame of the video stream to be played;
a second query unit, configured to search a pre-stored intercut coordinate information table according to the second identification information and obtain the intercut indication information corresponding to the second identification information, wherein the intercut indication information comprises a plurality of intercut coordinates.
15. The equipment according to claim 12, characterized in that the first acquisition module comprises:
a third acquiring unit, configured to obtain pattern feature information of a picture frame of the video stream to be played;
a first recognition unit, configured to identify, according to a pre-stored pattern feature database, whether the pattern feature information is intercut indication information pre-specified by the user; wherein the pattern feature database comprises sample pattern feature information corresponding to the intercut indication information.
16. The equipment according to claim 12, characterized in that the first acquisition module comprises:
a fourth acquiring unit, configured to obtain spectrum feature information of the video stream to be played;
a second recognition unit, configured to identify, according to a pre-stored spectrum feature database, whether the spectrum feature information is intercut indication information pre-specified by the user; wherein the spectrum feature database comprises sample spectrum feature information corresponding to the intercut indication information.
17. The equipment according to any one of claims 12-15, characterized in that the playing module comprises:
a control unit, configured to, during playback of the video stream, stop playing the second video data according to the intercut indication information;
a first intercut unit, configured to synchronously play the first video data.
18. The equipment according to any one of claims 12-15, characterized in that the playing module comprises:
a first positioning unit, configured to determine, on a picture frame of the video stream, a first position region occupied on the picture frame by the second video data corresponding to the intercut indication information;
a second positioning unit, configured to determine, according to the first position region, a second position region on the screen displaying the picture frame, in which the second video data is correspondingly displayed;
a drawing unit, configured to generate a user interface (UI) layer, wherein the first video data is drawn in a region of the UI layer corresponding to the second position region;
a second intercut unit, configured to, during playback of the video stream, overlay the UI layer on the second video data according to the intercut indication information.
19. The equipment according to claim 18, characterized in that the first positioning unit is configured to:
detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the second video data reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, take the region boundary corresponding to the second video data as the first position region;
if it is determined that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and take the smooth region as the first position region.
20. The equipment according to claim 18, characterized in that the second positioning unit is configured to:
scale, according to the size ratio between the picture frame and the screen, a plurality of first coordinates on the first position region, to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
determine the second position region on the screen according to the plurality of second coordinates.
21. The equipment according to any one of claims 18-20, characterized in that the drawing unit is configured to:
generate a UI layer that coincides with the boundary of the second position region, wherein the first video data is drawn over the whole UI layer, so that during playback of the video stream the entire UI layer is overlaid on the second position region according to the intercut indication information.
22. The equipment according to any one of claims 18-20, characterized in that the drawing unit is configured to:
generate a UI layer that coincides with the screen border, wherein the first video data is drawn in a third position region of the UI layer corresponding to the second position region, and the part outside the third position region is made transparent, so that during playback of the video stream the entire UI layer is overlaid on the screen according to the intercut indication information.
23. A terminal equipment, characterized in that the equipment comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain intercut indication information in a video stream to be played;
obtain, from a pre-stored information database, first video data corresponding to the intercut indication information;
use the first video data to intercut second video data in the video stream corresponding to the intercut indication information.
CN201510213758.8A 2015-04-29 2015-04-29 Video stream intercutting method and terminal equipment Active CN104853223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510213758.8A CN104853223B (en) 2015-04-29 2015-04-29 Video stream intercutting method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510213758.8A CN104853223B (en) 2015-04-29 2015-04-29 Video stream intercutting method and terminal equipment

Publications (2)

Publication Number Publication Date
CN104853223A true CN104853223A (en) 2015-08-19
CN104853223B CN104853223B (en) 2018-09-04

Family

ID=53852520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510213758.8A Active CN104853223B (en) 2015-04-29 2015-04-29 Video stream intercutting method and terminal equipment

Country Status (1)

Country Link
CN (1) CN104853223B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616288A (en) * 2008-06-27 2009-12-30 上海乐程文化传播有限公司 A kind of method and apparatus that fast video intercuts on the portable video terminal
CN101557464A (en) * 2009-04-01 2009-10-14 深圳市融创天下科技发展有限公司 Method for dynamically embedding other media segments in video program playback
CN102572558A (en) * 2011-12-31 2012-07-11 华为技术有限公司 Video inter-cut method, device and system
CN103634540A (en) * 2012-08-20 2014-03-12 联想(北京)有限公司 A control method, a data terminal and a data transmitting device
CN103595992A (en) * 2013-11-08 2014-02-19 深圳市奥拓电子股份有限公司 Court LED display screen system and advertisement insertion method thereof capable of achieving precise advertisement delivery

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847965A (en) * 2016-03-29 2016-08-10 百度在线网络技术(北京)有限公司 Video data processing method and video data processing system
CN106792003A (en) * 2016-12-27 2017-05-31 西安石油大学 A kind of intelligent advertisement inserting method, device and server
CN106792003B (en) * 2016-12-27 2020-04-14 西安石油大学 Intelligent advertisement insertion method and device and server
CN108537867A (en) * 2018-04-12 2018-09-14 北京微播视界科技有限公司 According to the Video Rendering method and apparatus of user's limb motion
CN108537867B (en) * 2018-04-12 2020-01-10 北京微播视界科技有限公司 Video rendering method and device according to user limb movement
CN111629253A (en) * 2020-06-11 2020-09-04 网易(杭州)网络有限公司 Video processing method and device, computer readable storage medium and electronic equipment
WO2022143374A1 (en) * 2020-12-31 2022-07-07 华为技术有限公司 Media playing method and electronic device

Also Published As

Publication number Publication date
CN104853223B (en) 2018-09-04

Similar Documents

Publication Publication Date Title
CN110662083A (en) Data processing method and device, electronic equipment and storage medium
CN105244048A (en) Audio play control method and apparatus
CN105160854A (en) Equipment control method, device and terminal equipment
KR102147329B1 (en) Video display device and operating method thereof
CN104486451B (en) Application program recommends method and device
CN104281432A (en) Method and device for regulating sound effect
CN105554581A (en) Method and device for bullet screen display
CN105426086A (en) Display processing method and device of searching functional block in page
CN105426386A (en) File synchronization method and apparatus, and terminal device
CN105242942A (en) Application control method and apparatus
CN104853223A (en) Video stream intercutting method and terminal equipment
CN104717293A (en) Method and device for showing information resources on conversation interface
CN104133956A (en) Method and device for processing pictures
CN103914148A (en) Function interface display method and device and terminal equipment
CN105786507A (en) Display interface switching method and device
CN104461348A (en) Method and device for selecting information
CN104837154A (en) Wireless access point control method and device
CN104484795A (en) Information prompting method and device
CN104020924A (en) Label establishing method and device and terminal
CN107330391A (en) Product information reminding method and device
CN104267881A (en) Toolbar operating method and device
CN108040280A (en) Content item display methods and device, storage medium
CN104902318A (en) Playing control method and terminal device
CN105488829A (en) Method and device for generating head portrait
CN104883603B (en) Control method for playing back, system and terminal device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant