CN104902318B - Control method for playing back and terminal device - Google Patents
- Publication number
- CN104902318B (application CN201510210000.9A)
- Authority
- CN
- China
- Prior art keywords
- picture frame
- content
- layers
- object content
- region
- Prior art date
- Legal status (assumed, not a legal conclusion)
- Active
Abstract
The disclosure relates to a playback control method and a terminal device. By detection, it is learned that target content specified by the user is present on an image frame of a video stream to be played; the first position region on the image frame corresponding to the target content is then determined, and from it the second position region where the target content is correspondingly displayed on the screen showing the frame. A UI layer is generated, and preset update content corresponding to the target content is drawn on the part of the UI layer coinciding with the second position region. When the screen displays the image frame during playback of the original video stream, the UI layer is overlaid on the frame, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented in real time without modifying the video stream data, improving the flexibility and efficiency of personalized video playback.
Description
Technical field
The present disclosure relates to the field of video playback technology, and in particular to a playback control method and terminal device.
Background technology
Intelligent terminals have become increasingly popular and are now the main way users watch multimedia video. Taking a mobile phone as an example, a user can download video content of interest from the network side, or watch video content stored locally.
In the related art, video is played back according to the image frames of the video stream, and the user can only control the playback mode, such as the playback progress or whether playback is full-screen. The user cannot control the playback content itself, and therefore cannot play video content of interest in a personalized way.
Summary of the invention
The embodiments of the present disclosure provide a playback control method and terminal device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a playback control method is provided, the method including:
detecting an image frame of a video stream to be played, and judging whether target content pre-specified by a user is present;
if the target content is present, determining a first position region on the image frame corresponding to the target content;
determining, according to the first position region, a second position region on the screen displaying the image frame where the target content is correspondingly displayed;
generating a user interface (UI) layer, and drawing preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region;
when the screen displays the image frame, overlaying the UI layer on the image frame, so that the update content covers the target content shown to the user.
According to a second aspect of the embodiments of the present disclosure, a terminal device is provided, the device including:
a detection module, configured to detect an image frame of a video stream to be played and judge whether target content pre-specified by a user is present;
a first locating module, configured to determine, when the target content is present, a first position region on the image frame corresponding to the target content;
a second locating module, configured to determine, according to the first position region, a second position region on the screen displaying the image frame where the target content is correspondingly displayed;
a processing module, configured to generate a user interface (UI) layer and draw preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region;
a display module, configured to overlay, when the screen displays the image frame, the UI layer on the image frame, so that the update content covers the target content shown to the user.
According to a third aspect of the embodiments of the present disclosure, a terminal device is provided, the device including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect an image frame of a video stream to be played and judge whether target content pre-specified by a user is present;
if the target content is present, determine a first position region on the image frame corresponding to the target content;
determine, according to the first position region, a second position region on the screen displaying the image frame where the target content is correspondingly displayed;
generate a user interface (UI) layer and draw preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region;
when the screen displays the image frame, overlay the UI layer on the image frame, so that the update content covers the target content shown to the user.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
By detecting that target content specified by the user is present on an image frame of the video stream to be played, the first position region on the frame corresponding to the target content is determined; from it, the second position region where the target content is correspondingly displayed on the screen showing the frame is determined; a UI layer is then generated, and preset update content corresponding to the target content is drawn on the part of the UI layer coinciding with the second position region; when the screen displays the frame during playback of the original video stream, the UI layer is overlaid on it, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented in real time while the video stream plays, without modifying the video stream data and without pre-modifying the original stream according to the user's needs and storing it at the cost of a large amount of storage space. This improves the flexibility and efficiency of personalized video playback.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the disclosure.
Description of the drawings
The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the disclosure, and together with the specification serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of a playback control method according to an exemplary embodiment;
Fig. 2 is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 3A is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 3B shows the screen of a terminal device displaying an image frame that contains the target content;
Fig. 3C shows the screen of a terminal device displaying an image frame in which the update content covers the target content;
Fig. 4A is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 4B shows the screen of a terminal device displaying an image frame that contains the target content;
Fig. 4C shows the screen of a terminal device displaying an image frame in which the update content covers the target content;
Fig. 5 is a block diagram of a terminal device according to an exemplary embodiment;
Fig. 6 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 12 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 13 is a block diagram of a terminal device according to an exemplary embodiment.
The above drawings show specific embodiments of the disclosure, which are described in more detail hereinafter. These drawings and the accompanying description are not intended to limit the scope of the disclosed concept in any way, but rather to illustrate the concept to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flow chart of a playback control method according to an exemplary embodiment. In this embodiment, the method is described as applied in a terminal device that includes a display screen. The playback control method may include the following steps:
In step 101, an image frame of the video stream to be played is detected, and it is judged whether target content pre-specified by the user is present.
First, the terminal device receives the video stream the user has specified for playback. This may be a video stream received from another network-side device, or a video stream stored locally on the terminal device in advance.
The terminal device then receives the target content the user has specified for this video stream, together with the update content the user provides for that target content. The target content specified by the user defines the customized personalized playback content: when the relevant frames of the video stream are played, the original target content in the stream is not presented; instead, the update content specified by the user is presented.
It should be noted that the target content pre-specified by the user includes one or more of a character's face, clothing, a color, text, or a pattern in the video stream, and the update content provided in advance by the user corresponds to the target content.
According to the user's personalized playback request for the selected video stream, the terminal device first detects an image frame of the stream and judges whether the frame contains the pre-specified target content. There are many ways to detect whether target content is present in an image frame, for example: comparing the pixels of the target content with the pixels of the frame, matching feature information of the target content against feature information of the frame, or comparing spectral information of the target content with spectral information of the frame. A suitable detection method can be selected according to the actual target content; this embodiment imposes no limitation.
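The pixel-comparison mode mentioned above can be pictured as a brute-force patch search. The following is only an illustrative sketch, not the patent's implementation; the function name, window scan, and mean-absolute-difference threshold are all assumptions:

```python
import numpy as np

def contains_target(frame: np.ndarray, target: np.ndarray, threshold: float = 10.0):
    """Slide the target patch over the frame; return the top-left (x, y) of the
    first region whose mean absolute pixel difference falls below `threshold`,
    or None if the target content is not present in the frame."""
    fh, fw = frame.shape[:2]
    th, tw = target.shape[:2]
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            if np.abs(patch.astype(int) - target.astype(int)).mean() < threshold:
                return (x, y)
    return None
```

In practice a feature- or spectrum-based match, as the text notes, would be far more robust to lighting and compression than raw pixel comparison.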
In step 102, if the target content is present, a first position region on the image frame corresponding to the target content is determined.
If, by detecting the image frame of the video stream to be played, the terminal device finds the pre-specified target content in the frame, it determines the first position region on the frame corresponding to that content. For example, if the pre-specified target content is a first character's face, the first position region is the region of that face; if the target content is the first character's face and that character's hat, the first position region comprises the face region and the hat region; and if the target content is a first character's face, a second character's face, and a first pattern, the first position region comprises the regions of both faces and the region of the pattern.
In step 103, a second position region on the screen displaying the image frame, where the target content is correspondingly displayed, is determined according to the first position region.
The terminal device determines, from the first position region on the frame corresponding to the user-specified target content, the second position region on the display screen where the target content appears. There are several ways to derive the on-screen second position region from the first position region of the frame; two are illustrated below.
Mode one:
The image frame is first scaled, with the first position region scaled synchronously.
When the frame has been scaled to the screen size, the scaled first-position region information is recorded; that region information can serve as the second position region, where the target content is correspondingly displayed, on the screen showing the frame.
Mode two:
First, multiple first coordinates on the first position region are obtained. For example, if the first position region is a square, the first coordinates corresponding to the region may be the coordinates of its four corners; if the region is a circle, they may be the coordinates of the intersections of at least two diameters with the circle's boundary.
The first coordinates on the first position region are then adjusted proportionally according to the size ratio between the image frame and the screen, yielding second coordinates corresponding to the first coordinates.
The second position region on the screen displaying the frame, where the target content is correspondingly displayed, can be determined from these second coordinates.
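Mode two's proportional adjustment amounts to scaling each first coordinate by the independent x and y ratios between screen size and frame size. A minimal sketch under that reading (the function name and rounding are assumptions):

```python
def frame_to_screen(points, frame_size, screen_size):
    """Map corner coordinates of the first position region on the image frame
    to the corresponding second position region on the screen, using the
    screen/frame size ratio (independent x and y scale factors)."""
    fw, fh = frame_size
    sw, sh = screen_size
    sx, sy = sw / fw, sh / fh
    return [(round(x * sx), round(y * sy)) for x, y in points]
```

For example, the corner (450, 900) of a 900x900 frame lands at (150, 300) on a 300x300 screen.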
In step 104, a user interface (UI) layer is generated, and preset update content corresponding to the target content is drawn on the part of the UI layer coinciding with the second position region.
The terminal device uses a UI control to generate a new blank UI layer. It then parses the file storing the update content corresponding to the target content to obtain the UI elements of the update content, and adds those UI elements to the blank UI layer, on the part coinciding with the second position region where the target content is displayed on the screen.
In step 105, when the screen displays the image frame, the UI layer is overlaid on the frame, so that the update content covers the target content shown to the user.
While playing the video stream, whenever the screen displays the frame, the terminal device overlays on it the UI layer on which the update content has been drawn at the part coinciding with the on-screen second position region, so that the update content covers the user-specified target content, and personalized video content meeting the user's needs is presented to the user.
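The covering in step 105 can be pictured as masked compositing: the UI layer replaces frame pixels only where update content was drawn, and the original frame shows through elsewhere, so the stream data itself is never modified. A minimal numpy sketch (the function name and the binary alpha mask are assumptions, not the patent's mechanism):

```python
import numpy as np

def composite(frame, ui_layer, alpha_mask):
    """Overlay the UI layer on a displayed frame: where alpha_mask is nonzero
    (the drawn update content), the UI pixel replaces the frame pixel;
    elsewhere the original frame remains visible. The input frame is left
    untouched, mirroring that the video stream data is not modified."""
    mask = alpha_mask.astype(bool)
    out = frame.copy()
    out[mask] = ui_layer[mask]
    return out
```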
In conclusion control method for playing back provided in this embodiment, the picture frame of video flowing to be played is known by detection
On there are the object contents that user specifies, it is determined that on the picture frame, the region of first position corresponding with object content, further according to
The determination of first position region is for showing the screen of the picture frame, the second position region of corresponding display target content, then
Generate UI layer, and on the UI layers and the corresponding part drafting that coincide of second position region is preset, corresponding with object content
More new content, to, when the screen display picture frame, the UI layers be covered on the picture frame when playing original video stream, with
More new content coverage goal content is set to be shown to user.When realizing broadcasting video flowing, video stream data need not distorted
In the case of, the personalized video content for meeting user's needs is presented to user in real time, avoids the need for needing to repair according to user in advance
Change original video stream data and occupy a large amount of memory space and stored, improves flexibility and the efficiency of personalized video broadcasting.
Fig. 2 is a flow chart of a playback control method according to another exemplary embodiment, again described as applied in a terminal device that includes a display screen. In this embodiment, target content in the image frame is detected by feature-information matching, and the first position region corresponding to the target content on the frame is located using an image boundary tracking algorithm; the implementation process of the playback control method is described in detail on that basis. The method may include the following steps:
In step 201, feature information is obtained from an image frame of the video stream to be played.
The terminal device receives the video stream the user has specified for playback, the target content specified for that stream, and the update content the user provides for the target content. The stream may be received from another network-side device or stored locally on the terminal device in advance.
It should be noted that the target content pre-specified by the user includes one or more of a character's face, clothing, a color, text, or a pattern in the video stream, and the update content provided in advance corresponds to the target content. According to the user's personalized playback request for the selected stream, the terminal device detects an image frame of the stream and judges whether the frame contains the pre-specified target content.
First, the feature information in the frame is obtained. Different feature acquisition methods can be selected according to the target content the user pre-specified; two are illustrated below:
Mode one: if the pre-specified target content is a first pattern distributed at multiple positions in the background, feature information is extracted region by region over the entire frame using a preset unit window, for example a window 30 pixels long and 30 pixels wide. For a frame 900 pixels long and 900 pixels wide, feature extraction with a 30x30 unit window requires extracting 900 feature vectors. This mode is highly general and can be applied to all types of target content.
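The unit-window arithmetic can be checked with a small sketch that enumerates the top-left corners of the non-overlapping windows tiling a frame (the function name is an assumption); a 900x900 frame tiled by a 30x30 window gives 30 x 30 = 900 windows, each scanned once for feature extraction:

```python
def unit_windows(frame_w, frame_h, win_w, win_h):
    """Enumerate the top-left corners of the non-overlapping unit windows
    that tile the frame; feature extraction runs once per window."""
    return [(x, y)
            for y in range(0, frame_h - win_h + 1, win_h)
            for x in range(0, frame_w - win_w + 1, win_w)]
```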
Mode two: if the pre-specified target content is a character's face, a face-recognition processing model, such as a neural network model or a classifier comparison model, can first locate the facial region in the frame, and facial feature information is then extracted from that region. This avoids extracting features one by one from every region of the frame; for target content that is easy to localize, this mode improves processing efficiency.
In step 202, it is identified, according to a feature database, whether the feature information is the target content pre-specified by the user; the feature database contains sample feature information corresponding to the target content.
The terminal device identifies, using the feature database, whether the feature information obtained from the frame is the user-specified target content. The sample feature information in the database corresponding to the target content is matched one by one against the feature information obtained from the frame. If a match succeeds, the frame contains the pre-specified target content; if matching fails, the frame does not contain it.
It should be noted that the content of the feature database may be fixed sample feature information provided by the video-stream service provider. More flexibly, in addition to previously fixed sample features, the database may also contain sample feature information generated in real time by processing the content the user specifies for a stream the user sends.
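The one-by-one matching of database sample features against frame features can be sketched as follows. The dictionary database, the exact-equality match, and all names are simplifying assumptions; a real matcher would use a similarity metric with a tolerance rather than equality:

```python
def identify_target(frame_features, feature_db):
    """Match each sample feature in the feature database against the features
    extracted from the frame; return the name of the first target content whose
    sample feature is found, or None if matching fails for all entries."""
    for name, sample in feature_db.items():
        if sample in frame_features:
            return name
    return None
```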
In step 203, if the target content is present, the smoothness of the region boundary corresponding to the target content is obtained using an image boundary tracking algorithm.
If, by detecting the image frame, the terminal device finds the pre-specified target content in the frame, it obtains the smoothness of the boundary of the region corresponding to that content via an image boundary tracking algorithm. Image boundary tracking algorithms include binary-based algorithms, wavelet-based algorithms, and others; a suitable one can be selected according to the actual application, and the smoothness of the boundary of the region corresponding to the target content is then obtained with it.
In step 204, it is judged whether the smoothness reaches a preset threshold. If the smoothness reaches the preset threshold, step 205 is executed; if it does not, step 206 is executed.
It should be noted that different image boundary tracking algorithms are preset with different thresholds; for example, the binary-based algorithm may have threshold A while the wavelet-based algorithm has threshold B. The obtained smoothness is therefore compared with the threshold corresponding to the algorithm in use.
In step 205, if the smoothness reaches the threshold, the region boundary corresponding to the target content is taken as the first position region.
When the smoothness of the boundary of the region corresponding to the target content reaches the preset threshold, the region is easy to segment, and the boundary corresponding to the target content is used directly as the first position region.
In step 206, if the smoothness does not reach the threshold, a smooth region corresponding to the region boundary is determined and taken as the first position region.
When the smoothness of the boundary of the region corresponding to the target content does not reach the preset threshold, the region is not easy to segment. A smooth region corresponding to the boundary can then be determined according to a preset compensating parameter, and that smooth region is used as the first position region.
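The decision in steps 204-206 can be sketched as: use the traced boundary directly when it is smooth enough to segment, otherwise fall back to a compensated smooth region. In this illustrative sketch a padded bounding box stands in for the preset compensating parameter, and all names are assumptions:

```python
def choose_first_region(boundary, smoothness, threshold, pad=2):
    """If the traced boundary's smoothness reaches the threshold, use the
    boundary itself as the first position region; otherwise return a padded
    bounding box of the boundary as the compensated smooth region."""
    if smoothness >= threshold:
        return boundary
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    return [(min(xs) - pad, min(ys) - pad), (max(xs) + pad, max(ys) + pad)]
```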
In step 207, the second position region on the screen displaying the image frame, where the target content is correspondingly displayed, is determined according to the first position region.
In step 208, a UI layer is generated, and preset update content corresponding to the target content is drawn on the part of the UI layer coinciding with the second position region.
In step 209, when the screen displays the image frame, the UI layer is overlaid on the frame, so that the update content covers the target content shown to the user.
For the specific implementation of steps 207 to 209 in this embodiment, refer to steps 103 to 105 of the embodiment shown in Fig. 1; details are not repeated here.
In conclusion control method for playing back provided in this embodiment, for the detection of object content in picture frame, using spy
Reference ceases matched detection mode, and for the positioning in first position corresponding with object content region on picture frame, uses
The implementation process of control method for playing back is described in detail in positioning method based on image boundary track algorithm, then according to first position
Region determines the second position region that display target content is used on screen, generates UI layers, and on the UI layers and the second position
More new content preset, corresponding with object content is drawn in coincide corresponding part of region, to work as when playing original video stream
When the screen display picture frame, the UI layers is covered on the picture frame, so that more new content coverage goal content is shown to use
Family.When realizing broadcasting video flowing, in the case where video stream data need not be distorted, is presented to user meet user's need in real time
The personalized video content wanted avoids the need for needing to change original video stream data according to user in advance and occupies a large amount of storage sky
Between stored, improve personalized video broadcasting flexibility and efficiency.
It should be added that, before step 201, the method may further include:
receiving image frames of multiple video streams;
obtaining, in each image frame, sample feature information corresponding to sample content pre-set by the user;
storing the correspondence between sample feature information and sample content in the feature database.
In this way, the feature database can be updated dynamically, and as usage accumulates, the personalized playback content offered to the user becomes more diverse.
In the above embodiments, a generated UI layer covers the image frame containing the target content, so that the update content covers the target content and the personalized playback effect is presented to the user on the screen. It should be noted that there are many ways to generate the UI layer and to overlay it; different UI-layer processing techniques can be selected according to, for example, the proportion of the frame the target content occupies or its arrangement, so as to improve processing efficiency. This is described in detail below with the embodiments shown in Fig. 3 and Fig. 4.
Fig. 3A is a flow chart of a playback control method according to another exemplary embodiment, again described as applied in a terminal device that includes a display screen.
In this embodiment, the target content specified by the user is a first character's face whose distribution region on the frame is unique, and the method is realized with a locally processed UI layer. The playback control method may include the following steps:
In step 301, a facial region on an image frame of the video stream to be played is determined according to a facial feature range obtained by prior training.
Features corresponding to unit windows on the frame are extracted using a preset unit window and checked against the facial feature range obtained by training: if a feature falls within the range interval, the corresponding region is a facial region; if not, it is not a facial region. Facial regions on the frame can thereby be located quickly. The facial features may include Haar features, FisherFace features, or LBPH features, selected according to the application.
In step 302, facial features are extracted from the facial region.
Facial features are extracted from positions such as the contour, eyebrows, eyes, nose, and lips within the facial region.
In step 303, the facial features are matched against the sample facial features in the feature database corresponding to the target content pre-specified by the user.
The sample facial features in the feature database corresponding to the target content are matched against the facial features extracted from the facial region. If the match succeeds, the facial region is judged to be the target content; if it fails, the facial region is judged not to be the target content.
In step 304, if the target content is present, the first position region on the image frame corresponding to the target content is determined.
In step 305, the second position region on the screen displaying the image frame, where the target content is correspondingly displayed, is determined according to the first position region.
For steps 304 and 305 in this embodiment, refer to steps 102 and 103 of the embodiment shown in Fig. 1, or steps 203 to 207 of the embodiment shown in Fig. 2.
Within step 306, the UI layers coincideing with the second position zone boundary are generated, are drawn on entire UI layers described in having
More new content;
Terminal device application UI controls generate UI layers of new free user interface, the UI layers of zone boundary and the second position
Zone boundary, which coincide, to be corresponded to, and then carrying out parsing to the file for being stored with more new content corresponding with object content obtains in update
The UI elements of appearance, and the UI elements are added on entire blank UI layers.
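The local UI-layer construction of step 306 can be sketched with plain data structures (the buffer-of-pixels representation is an illustration, not an actual UI-control API): the layer is created with exactly the bounds of the second position region, and the update content fills the whole layer.

```python
# Minimal sketch of step 306: the blank UI layer coincides with the second
# position region, and the parsed update content is drawn over all of it.

def make_local_ui_layer(second_region, update_pixel):
    """second_region: (x, y, width, height) on screen; returns (origin, buffer)."""
    x, y, w, h = second_region
    layer = [[update_pixel for _ in range(w)] for _ in range(h)]
    return (x, y), layer
```

Overlaying this layer at its origin then covers exactly the second position region, which is why no transparency handling is needed in the local mode.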
In step 307, when the screen displays the picture frame, the UI layer is overlaid, in registration, on the second position region of the picture frame used for displaying the object content, so that the update content covers the object content and is displayed to the user.
While the terminal device plays the video stream, whenever the screen displays the picture frame, the UI layer is overlaid on the second position region used for displaying the object content on the picture frame, so that the update content covers the object content specified by the user, and personalized video content meeting the user's demand is presented to the user.
As an example, the screen of the terminal device shown in Fig. 3B displays a picture frame containing the object content, and the screen of the terminal device shown in Fig. 3C displays the picture frame with the object content covered by the update content. Referring to Fig. 3B and Fig. 3C:
Assume the object content specified by the user is a "Doraemon face" on the picture frame, and the update content is a "little bear face". In detail, the sample facial features corresponding to the "Doraemon face" in the feature database are matched against the facial features extracted from the facial region. If the match succeeds, it is judged that the facial region is the "Doraemon face"; the file storing the "little bear face" is then parsed to obtain UI elements, and the UI elements are added to a blank UI layer whose boundary coincides with the boundary of the second position region.
While the terminal device plays the video stream, whenever the screen displays the picture frame, the UI layer is overlaid on the region of the picture frame displaying the Doraemon face, so that the "little bear face" covers the "Doraemon face", and personalized video content meeting the user's demand is presented to the user.
In conclusion control method for playing back provided in this embodiment, is the first personage for the object content that user specifies
Face, and the unique application scenarios in distributed areas of first character face on picture frame, using UI layers of Local treatment side
Formula is realized, to when playing original video stream, when the screen display picture frame, the UI layers be coincide and covered for showing
Show the second position region of object content, so that more new content coverage goal content is shown to user.Realize broadcasting video flowing
When, in the case where video stream data need not be distorted, the personalized video for meeting user's needs can be presented to user in real time
Content improves treatment effeciency, has saved process resource.
Fig. 4A is a flow chart of a playback control method according to another exemplary embodiment. The present embodiment is illustrated with the playback control method configured in a terminal device that includes a display screen.
The present embodiment addresses the application scenario in which the object content specified by the user comprises multiple patterns whose distribution regions are scattered over the picture frame; it is realized using a whole UI-layer processing mode, and may include the following steps:
In step 401, pattern regions on a picture frame of the video stream to be played are determined according to a boundary contour algorithm.
All pattern regions on the picture frame are determined based on the boundary contour algorithm.
In step 402, pattern features are extracted from the pattern regions.
Pattern features are extracted from the pattern regions; the pattern features include a color histogram or, alternatively, a gradient histogram.
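The color-histogram pattern feature of step 402 can be sketched as follows; the bin count and value range are illustrative choices, not fixed by the patent.

```python
# Illustrative sketch of step 402's pattern feature: a color histogram over a
# pattern region, with pixel values bucketed into a fixed number of bins.

def color_histogram(region, bins=4, max_value=256):
    hist = [0] * bins
    width = max_value // bins
    for row in region:
        for value in row:
            hist[min(value // width, bins - 1)] += 1
    return hist
```

Two pattern regions can then be compared by comparing their histograms, which is what the matching of the following step relies on.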
In step 403, the pattern features are matched against the sample pattern features that correspond, in the feature database, to the object content preassigned by the user.
The pattern features extracted from the pattern regions are matched against the sample pattern features corresponding to the object content in the feature database. If the match succeeds, it is judged that the pattern region is the object content; if the match fails, it is judged that the pattern region is not the object content.
In step 404, if it is judged that the object content exists, a first position region corresponding to the object content is determined on the picture frame.
In step 405, a second position region for correspondingly displaying the object content on the screen used for displaying the picture frame is determined according to the first position region.
For step 404 and step 405 in the present embodiment, reference may be made to step 102 and step 103 in the embodiment shown in Fig. 1, or to steps 203 to 207 in the embodiment shown in Fig. 2.
In step 406, a UI layer coinciding with the screen boundary is generated; the update content is drawn on a third position region of the UI layer that coincides with the second position region, and the part of the UI layer outside the third position region is made transparent.
The terminal device generates a new blank user-interface (UI) layer using a UI control, the boundary of the UI layer coinciding with the screen boundary. The file storing the update content corresponding to the object content is then parsed to obtain the UI elements of the update content; the UI elements are added to the third position region of the UI layer that coincides with the second position region on the screen, and the part of the UI layer outside the third position region is made transparent.
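The whole-layer construction of step 406 can be sketched as a full-screen RGBA buffer (a hypothetical representation; the patent names no concrete graphics API): only the third position region carries the update content, and every pixel outside it is fully transparent, so the underlying picture frame shows through everywhere else.

```python
# Hypothetical sketch of step 406: a full-screen RGBA layer in which only the
# third position region (coinciding with the second position region) carries
# the update content; the part outside it is made transparent.

TRANSPARENT = (0, 0, 0, 0)

def make_full_ui_layer(screen_w, screen_h, third_region, update_pixel):
    x0, y0, w, h = third_region
    layer = [[TRANSPARENT] * screen_w for _ in range(screen_h)]
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            layer[y][x] = update_pixel
    return layer
```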
In step 407, when the screen displays the picture frame, the whole UI layer is overlaid on the picture frame, so that the update content covers the object content and is displayed to the user.
While the terminal device plays the video stream, whenever the screen displays the picture frame, the whole UI layer is overlaid on the picture frame, so that the update content covers the object content specified by the user, and personalized video content meeting the user's demand is presented to the user.
As an example, the screen of the terminal device shown in Fig. 4B displays a picture frame containing the object content, and the screen of the terminal device shown in Fig. 4C displays the picture frame with the object content covered by the update content. Referring to Fig. 4B and Fig. 4C:
Assume the object content specified by the user includes a first pattern and a second pattern on the picture frame: the first pattern is "Kangfu's lower body", whose corresponding update content is "a mermaid's tail", and the second pattern is "the top of Doraemon's head", whose corresponding update content is "the top of Doraemon's head with an aircraft". In detail, the sample pattern features corresponding to the first pattern and the second pattern in the feature database are matched against the pattern features extracted from the pattern regions. If the matches succeed, it is judged that the pattern regions are "Kangfu's lower body" and "the top of Doraemon's head"; the files storing the "mermaid's tail" and "top of Doraemon's head with an aircraft" patterns are then parsed to obtain UI elements, and the UI elements are added to the third position regions of the UI layer that coincide with the second position regions on the screen, the parts outside the third position regions being made transparent.
While the terminal device plays the video stream, whenever the screen displays the picture frame, the whole UI layer is overlaid on the picture frame, so that the "mermaid's tail" pattern covers the "Kangfu's lower body" pattern and the "top of Doraemon's head with an aircraft" pattern covers the "top of Doraemon's head" pattern, and personalized video content meeting the user's demand is presented to the user.
In conclusion control method for playing back provided in this embodiment, is multiple patterns for the object content that user specifies,
The application scenarios of distributed areas dispersion of multiple patterns on picture frame, are realized using UI layers of disposed of in its entirety mode, from
And when playing original video stream when the screen display picture frame, UI layers of entirety are covered on the picture frame, so that update
Content coverage goal content is shown to user.When realizing broadcasting video flowing, in the case where video stream data need not be distorted,
The personalized video content for meeting user's needs can be presented to user in real time, improve treatment effeciency, saved process resource.
The following are device embodiments of the present disclosure, which can be configured to execute the method embodiments of the present disclosure. For details not disclosed in the device embodiments, reference may be made to the method embodiments of the present disclosure.
Fig. 5 is a block diagram of a terminal device according to an exemplary embodiment. As shown in Fig. 5, the terminal device includes: a detection module 11, a first locating module 12, a second locating module 13, a processing module 14 and a display module 15; wherein:
the detection module 11 is configured to detect a picture frame of a video stream to be played, and judge whether object content preassigned by a user exists;
the first locating module 12 is configured to, when it is judged that the object content exists, determine a first position region corresponding to the object content on the picture frame;
the second locating module 13 is configured to determine, according to the first position region, a second position region for correspondingly displaying the object content on a screen used for displaying the picture frame;
the processing module 14 is configured to generate a user-interface (UI) layer and draw preset update content corresponding to the object content on the part of the UI layer that coincides with the second position region;
the display module 15 is configured to, when the screen displays the picture frame, overlay the UI layer on the picture frame so that the update content covers the object content and is displayed to the user.
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
With the terminal device provided in this embodiment, it is learned by detection that object content specified by the user exists on a picture frame of the video stream to be played; a first position region corresponding to the object content is determined on the picture frame; a second position region for correspondingly displaying the object content on the screen used for displaying the picture frame is then determined according to the first position region; a UI layer is generated, and preset update content corresponding to the object content is drawn on the part of the UI layer that coincides with the second position region. Thus, when the original video stream is played and the screen displays the picture frame, the UI layer is overlaid on the picture frame so that the update content covers the object content and is displayed to the user. When the video stream is played, personalized video content meeting the user's needs is presented to the user in real time without tampering with the video-stream data, which avoids modifying the original video-stream data in advance according to the user's needs and occupying a large amount of storage space to store it, and improves the flexibility and efficiency of personalized video playback.
Fig. 6 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 6, based on the embodiment shown in Fig. 5, the first locating module 12 includes: a judging unit 121, a first determination unit 122 and a second determination unit 123; wherein:
the judging unit 121 is configured to judge, based on an image-boundary tracking algorithm, whether the smoothness of the boundary of the region corresponding to the object content reaches a preset threshold value;
the first determination unit 122 is configured to, when it is judged that the smoothness reaches the threshold value, take the region boundary corresponding to the object content as the first position region;
the second determination unit 123 is configured to, when it is judged that the smoothness does not reach the threshold value, determine a smooth region corresponding to the region boundary and take the smooth region as the first position region.
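The two branches of the first locating module can be sketched as follows. The smoothness metric (fraction of boundary steps that are unit axis-aligned moves) and the bounding-box fallback are assumptions for illustration; the patent fixes neither.

```python
# Illustrative sketch of the first locating module: if the traced boundary is
# smooth enough, use it directly as the first position region; otherwise fall
# back to a smooth region, here taken to be the boundary's bounding box.

def boundary_smoothness(points):
    # Fraction of consecutive boundary points that are axis-aligned neighbours.
    if len(points) < 2:
        return 1.0
    steps = list(zip(points, points[1:]))
    smooth = sum(1 for (x1, y1), (x2, y2) in steps
                 if abs(x1 - x2) + abs(y1 - y2) == 1)
    return smooth / len(steps)

def first_position_region(points, threshold=0.8):
    if boundary_smoothness(points) >= threshold:
        return ("boundary", points)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return ("bounding_box", (min(xs), min(ys), max(xs), max(ys)))
```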
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 7, based on the embodiment shown in Fig. 5, the second locating module 13 includes: a first acquisition unit 131 and a third determination unit 132; wherein:
the first acquisition unit 131 is configured to scale, in proportion to the dimension ratio between the picture frame and the screen, multiple items of first coordinate information on the first position region, to obtain multiple items of second coordinate information corresponding to the multiple items of first coordinate information;
the third determination unit 132 is configured to determine the second position region on the screen according to the multiple items of second coordinate information.
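The proportional scaling performed by the first acquisition unit can be sketched in a few lines: each first coordinate on the picture frame is scaled by the frame-to-screen dimension ratio to obtain the corresponding second coordinate on the screen.

```python
# Sketch of the second locating module: map first-position coordinates on the
# picture frame to second-position coordinates on the screen by the dimension
# ratio between frame and screen.

def map_region_to_screen(frame_size, screen_size, first_coords):
    fw, fh = frame_size
    sw, sh = screen_size
    return [(x * sw / fw, y * sh / fh) for x, y in first_coords]
```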
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
The terminal device provided in this embodiment describes in detail the implementation process of the playback control method, using a locating mode based on an image-boundary tracking algorithm to determine the first position region corresponding to the object content on the picture frame. The second position region for displaying the object content on the screen is then determined according to the first position region; a UI layer is generated, and preset update content corresponding to the object content is drawn on the part of the UI layer that coincides with the second position region, so that when the original video stream is played and the screen displays the picture frame, the UI layer is overlaid on the picture frame and the update content covers the object content and is displayed to the user. When the video stream is played, personalized video content meeting the user's needs is presented to the user in real time without tampering with the video-stream data, which avoids modifying the original video-stream data in advance according to the user's needs and occupying a large amount of storage space to store it, and improves the flexibility and efficiency of personalized video playback.
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 8, based on the embodiment shown in Fig. 5, the processing module 14 includes: a first generation unit 141 and a first drawing unit 142; wherein:
the first generation unit 141 is configured to generate a UI layer coinciding with the boundary of the second position region;
the first drawing unit 142 is configured to draw the update content over the entire UI layer;
the display module 15 is configured to overlay the UI layer, in registration, on the second position region used for displaying the object content.
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
The terminal device provided in this embodiment is realized using a local UI-layer processing mode: when the original video stream is played and the screen displays the picture frame, the UI layer is overlaid, in registration, on the second position region used for displaying the object content, so that the update content covers the object content and is displayed to the user. When the video stream is played, personalized video content meeting the user's needs can be presented to the user in real time without tampering with the video-stream data, which improves processing efficiency and saves processing resources.
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 9, based on the embodiment shown in Fig. 5, the processing module 14 includes: a second generation unit 143 and a second drawing unit 144; wherein:
the second generation unit 143 is configured to generate a UI layer coinciding with the screen boundary;
the second drawing unit 144 is configured to draw the update content on the third position region of the UI layer that coincides with the second position region, and to make the part of the UI layer outside the third position region transparent;
the display module 15 is configured to overlay the whole UI layer on the picture frame.
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
The terminal device provided in this embodiment is realized using a whole UI-layer processing mode: when the original video stream is played and the screen displays the picture frame, the whole UI layer is overlaid on the picture frame, so that the update content covers the object content and is displayed to the user. When the video stream is played, personalized video content meeting the user's needs can be presented to the user in real time without tampering with the video-stream data, which improves processing efficiency and saves processing resources.
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 10, based on the embodiment shown in Fig. 5, the detection module 11 includes: a second acquisition unit 111 and a recognition unit 112; wherein:
the second acquisition unit 111 is configured to acquire feature information in the picture frame;
the recognition unit 112 is configured to identify, according to a feature database, whether the feature information is the object content; wherein the feature database includes sample feature information corresponding to the object content.
Further, the device also includes:
a receiving module 16, configured to receive picture frames of multiple video streams;
an acquisition module 17, configured to acquire, in each picture frame, sample feature information corresponding to sample content preset by the user;
a memory module 18, configured to store the correspondence between the sample feature information and the sample content in the feature database.
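The receiving, acquisition and memory modules together maintain the feature database, which can be sketched as a simple mapping from sample content to its sample feature information (the class and method names below are illustrative; feature extraction itself is outside this sketch).

```python
# Illustrative sketch of the feature database maintained by the memory module:
# a mapping from sample content to the sample feature information acquired
# from received picture frames.

class FeatureDatabase:
    def __init__(self):
        self._db = {}

    def store(self, sample_content, sample_feature):
        # Memory module: keep the correspondence sample content -> feature.
        self._db[sample_content] = sample_feature

    def lookup(self, sample_content):
        # Recognition unit side: fetch the sample feature for matching.
        return self._db.get(sample_content)
```

Because entries can be added as new frames arrive, the database grows dynamically over usage time, which is what makes the personalized playback content increasingly diverse.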
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
In this embodiment, a detection mode based on feature-information matching is adopted for detecting the object content in a picture frame, and the feature database can be dynamically updated, so that as usage time accumulates, the personalized playback content provided to the user becomes more diversified.
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 11, based on the embodiment shown in Fig. 10, the second acquisition unit 111 includes: a first processing subunit 1111 and a first extraction subunit 1112; wherein:
the first processing subunit 1111 is configured to, if the object content is a first pattern, determine the pattern regions on the picture frame according to a boundary contour algorithm;
the first extraction subunit 1112 is configured to extract pattern features from the pattern regions;
the recognition unit 112 is configured to match the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
if the match succeeds, it is judged that the first pattern exists in the pattern region;
if the match fails, it is judged that the first pattern does not exist in the pattern region.
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
The terminal device provided in this embodiment addresses the application scenario in which the object content specified by the user comprises patterns whose distribution regions are scattered over the picture frame, and improves processing efficiency by using a detection mode based on pattern-feature matching.
Fig. 12 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 12, based on the embodiment shown in Fig. 10, the second acquisition unit 111 includes: a second processing subunit 1113 and a second extraction subunit 1114; wherein:
the second processing subunit 1113 is configured to, if the object content is a first character face, determine the facial region on the picture frame according to the facial features obtained by advance training in a classifier;
the second extraction subunit 1114 is configured to extract facial features from the facial region;
the recognition unit 112 is configured to match the facial features against the sample facial features corresponding to the first character face in the feature database;
if the match succeeds, it is judged that the first character face exists in the facial region;
if the match fails, it is judged that the first character face does not exist in the facial region.
For the functions and processing flows of the modules in the terminal device provided in this embodiment, reference may be made to the method embodiments shown above; the realization principles are similar and are not described here again.
The terminal device provided in this embodiment addresses the application scenario in which the object content specified by the user is a first character face whose distribution region on the picture frame is unique, and improves processing efficiency by using a detection mode based on facial-feature matching.
Fig. 13 is a block diagram of a terminal device according to an exemplary embodiment. For example, the terminal device 1300 may be a mobile phone, a computer, a tablet device, or the like.
Referring to Fig. 13, the terminal device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power supply component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314 and a communication component 1316.
The processing component 1302 typically controls the overall operation of the terminal device 1300, such as operations associated with display, telephone calls, data communication, camera operation and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 1302 may include one or more modules to facilitate interaction between the processing component 1302 and other components; for example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation on the terminal device 1300. Examples of such data include instructions for any application or method configured to operate on the terminal device 1300, contact data, phonebook data, messages, pictures, videos and the like. The memory 1304 may be realized by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power supply component 1306 provides electric power for the various components of the terminal device 1300. The power supply component 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing electric power for the terminal device 1300.
The multimedia component 1308 includes a touch display screen that provides an output interface between the terminal device 1300 and the user. In some embodiments, the touch display screen may include a liquid crystal display (LCD) and a touch panel (TP). The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the terminal device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC); when the terminal device 1300 is in an operation mode, such as a call mode, a recording mode or a speech-recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 1304 or sent via the communication component 1316. In some embodiments, the audio component 1310 also includes a loudspeaker configured to output audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 1314 includes one or more sensors configured to provide status assessments of various aspects of the terminal device 1300. For example, the sensor component 1314 can detect the open/closed state of the terminal device 1300 and the relative positioning of components, such as the display and the keypad of the terminal device 1300; the sensor component 1314 can also detect a change in position of the terminal device 1300 or of a component of the terminal device 1300, the presence or absence of user contact with the terminal device 1300, the orientation or acceleration/deceleration of the terminal device 1300, and a change in the temperature of the terminal device 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal device 1300 and other devices. The terminal device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1316 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 also includes a near-field communication (NFC) module to promote short-range communication. For example, the NFC module may be realized based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal device 1300 may be realized by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, configured to execute the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1304 including instructions; the above instructions can be executed by the processor 1320 of the terminal device 1300 to complete the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device or the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of the terminal device 1300, the terminal device 1300 is enabled to execute a playback control method.
Those skilled in the art will readily think of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed here. This application is intended to cover any variations, uses or adaptive changes of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (21)
1. A playback control method, characterized in that the method comprises:
detecting an image frame of a video stream to be played, and determining whether target content pre-specified by a user is present;
if it is determined that the target content is present, determining a first position region on the image frame corresponding to the target content;
determining, according to the first position region, a second position region on a screen used to display the image frame, the second position region corresponding to display of the target content;
generating a user interface (UI) layer, and drawing, on the part of the UI layer coinciding with the second position region, preset update content corresponding to the target content;
when the screen displays the image frame, overlaying the UI layer on the image frame, so that the update content covers the target content as shown to the user;
wherein generating the UI layer and drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises:
generating a new blank UI layer using a UI control;
then parsing a file storing the update content corresponding to the target content, obtaining UI elements of the update content, and adding the UI elements onto the blank UI layer at the part coinciding with the second position region used to display the target content on the screen;
wherein determining the first position region on the image frame corresponding to the target content comprises:
detecting, based on an image boundary tracking algorithm, whether a smoothness of a region boundary corresponding to the target content reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, taking the region boundary corresponding to the target content as the first position region;
if it is determined that the smoothness does not reach the threshold, determining a smooth region corresponding to the region boundary and taking the smooth region as the first position region.
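The smoothness test and smooth-region fallback in claim 1 can be sketched as follows. This is an illustrative sketch only, not part of the claims: the polyline boundary representation, the mean-turning-angle smoothness measure, and the axis-aligned bounding-box fallback are all assumptions standing in for the unspecified boundary tracking algorithm.

```python
import math

def boundary_smoothness(points):
    """Mean absolute turning angle (radians) along a closed boundary
    polyline; smaller means smoother. Assumes at least 3 points."""
    total, n = 0.0, len(points)
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        total += min(d, 2 * math.pi - d)  # handle angle wrap-around
    return total / n

def first_position_region(points, max_mean_turn=0.5):
    """Return the boundary itself if smooth enough; otherwise fall back
    to its axis-aligned bounding box as the 'smooth region'."""
    if boundary_smoothness(points) <= max_mean_turn:
        return points
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (max(xs), max(ys)), (min(xs), max(ys))]
```

A jagged outline (large mean turning angle) is thus replaced by its bounding box, matching the claim's fallback to a smooth region.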
2. The method according to claim 1, characterized in that determining, according to the first position region, the second position region on the screen used to display the image frame and corresponding to display of the target content comprises:
adjusting, in proportion to the dimension ratio between the image frame and the screen, a plurality of first coordinates of the first position region to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
determining the second position region on the screen according to the plurality of second coordinates.
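The proportional coordinate adjustment of claim 2 amounts to scaling each coordinate by the screen-to-frame dimension ratio. A minimal sketch, not part of the claims; the function name and (width, height) tuple convention are assumptions:

```python
def frame_to_screen(coords, frame_size, screen_size):
    """Scale (x, y) coordinates from image-frame space to screen space
    in proportion to the ratio of the two sets of dimensions."""
    fw, fh = frame_size
    sw, sh = screen_size
    return [(x * sw / fw, y * sh / fh) for x, y in coords]
```

For a 1920x1080 frame shown on a 960x540 screen, a frame point (100, 200) maps to the screen point (50.0, 100.0).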
3. The method according to claim 1, characterized in that:
generating the UI layer comprises: generating a UI layer coinciding with the boundary of the second position region;
drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises: drawing the update content on the entire UI layer;
overlaying the UI layer on the image frame comprises: overlaying the UI layer on the second position region of the screen used to display the image frame, coinciding with the target content.
4. The method according to claim 1, characterized in that:
generating the UI layer comprises: generating a UI layer coinciding with the screen boundary;
drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises: drawing the update content on a third position region of the UI layer coinciding with the second position region, and applying transparency processing to the part outside the third position region;
overlaying the UI layer on the image frame comprises: overlaying the entire UI layer on the image frame.
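Claim 4's full-screen layer with transparency outside the third position region can be illustrated with a simple RGBA buffer. This is a sketch, not the claimed implementation: treating the layer as a list of pixel rows and the region as a (left, top, right, bottom) rectangle are assumptions for illustration.

```python
def build_ui_layer(screen_w, screen_h, region, color):
    """Build a screen-sized RGBA layer: `color` (opaque) inside
    `region` = (left, top, right, bottom), alpha 0 (fully transparent)
    elsewhere, so overlaying the whole layer only covers the target."""
    left, top, right, bottom = region
    transparent = (0, 0, 0, 0)
    return [[color if left <= x < right and top <= y < bottom else transparent
             for x in range(screen_w)]
            for y in range(screen_h)]
```

Because every pixel outside the third position region has alpha 0, compositing the entire layer over the frame leaves the rest of the picture untouched.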
5. The method according to any one of claims 1-4, characterized in that detecting the image frame of the video stream to be played and determining whether the target content pre-specified by the user is present comprises:
obtaining feature information in the image frame;
identifying, according to a feature database, whether the feature information is the target content; wherein the feature database includes sample feature information corresponding to the target content.
6. The method according to claim 5, characterized in that the target content pre-specified by the user includes one or more of: a character's face, clothing, a color, text, or a pattern.
7. The method according to claim 6, characterized in that, before obtaining the feature information in the image frame, the method further comprises:
receiving image frames of a plurality of video streams;
obtaining, in each image frame, sample feature information corresponding to sample content preset by the user;
storing the correspondence between the sample feature information and the sample content in the feature database.
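The database-building step of claim 7 reduces to recording, per sample content, the feature information extracted from each received frame. A sketch under assumed interfaces; the `extract` callable is hypothetical, since the patent does not fix a particular feature extractor:

```python
def build_feature_database(frames, extract, sample_content):
    """Collect sample feature information for preset sample content
    across the image frames of several video streams.

    `extract(frame, content)` is a caller-supplied, hypothetical
    extractor returning feature info, or None when the content is
    absent from the frame."""
    db = {}
    for frame in frames:
        features = extract(frame, sample_content)
        if features is not None:
            db.setdefault(sample_content, []).append(features)
    return db
```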
8. The method according to claim 6, characterized in that, if the target content is a first pattern, obtaining the feature information in the image frame comprises:
determining a pattern region on the image frame according to a boundary contour algorithm;
extracting pattern features from the pattern region;
and identifying, according to the feature database, whether the feature information is the target content comprises:
matching the pattern features against sample pattern features in the feature database corresponding to the first pattern;
if the match succeeds, determining that the first pattern is present in the pattern region;
if the match fails, determining that the first pattern is not present in the pattern region.
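The matching step of claim 8 compares extracted pattern features with the stored sample features. The patent does not name a similarity measure; cosine similarity with a threshold is one common choice, assumed here purely for illustration (function names and the 0.9 threshold are mine):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_pattern(features, feature_db, target, thresh=0.9):
    """Match succeeds when the extracted features are close enough to
    the sample pattern features stored for `target`."""
    sample = feature_db[target]
    return cosine_sim(features, sample) >= thresh
```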
9. The method according to claim 6, characterized in that, if the target content is a first character's face, obtaining the feature information in the image frame comprises:
determining a facial region on the image frame according to facial features obtained by pre-training a classifier;
extracting facial features from the facial region;
and identifying, according to the feature database, whether the feature information is the target content comprises:
matching the facial features against sample facial features in the feature database corresponding to the first character's face;
if the match succeeds, determining that the first character's face is present in the facial region;
if the match fails, determining that the first character's face is not present in the facial region.
10. The method according to claim 9, characterized in that the character facial features include: Haar features, FisherFace features, or LBPH features.
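Of the facial features listed in claim 10, LBPH (local binary pattern histograms) is the simplest to illustrate: each pixel is encoded by comparing it with its 8 neighbours, and the region is summarized by a histogram of the resulting codes. A toy sketch, not the patented method, and deliberately ignoring the grid subdivision that full LBPH uses:

```python
def lbp_code(img, x, y):
    """8-neighbour local binary pattern code of pixel (x, y); `img` is a
    2D list of grayscale values. A neighbour >= the centre sets a 1 bit."""
    c = img[y][x]
    neigh = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
             img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
             img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, v in enumerate(neigh):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels: the 'H' in
    LBPH. Face regions are then compared by comparing histograms."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, x, y)] += 1
    return hist
```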
11. A terminal device, characterized in that the device comprises:
a detection module configured to detect an image frame of a video stream to be played and determine whether target content pre-specified by a user is present;
a first locating module configured to, upon determining that the target content is present, determine a first position region on the image frame corresponding to the target content;
a second locating module configured to determine, according to the first position region, a second position region on a screen used to display the image frame, the second position region corresponding to display of the target content;
a processing module configured to generate a user interface (UI) layer and draw, on the part of the UI layer coinciding with the second position region, preset update content corresponding to the target content;
a display module configured to, when the screen displays the image frame, overlay the UI layer on the image frame so that the update content covers the target content as shown to the user;
wherein generating the UI layer and drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises:
generating a new blank UI layer using a UI control;
then parsing a file storing the update content corresponding to the target content, obtaining UI elements of the update content, and adding the UI elements onto the blank UI layer at the part coinciding with the second position region used to display the target content on the screen;
wherein the first locating module comprises:
a judging unit configured to judge, based on an image boundary tracking algorithm, whether a smoothness of a region boundary corresponding to the target content reaches a preset threshold;
a first determination unit configured to, upon determining that the smoothness reaches the threshold, take the region boundary corresponding to the target content as the first position region;
a second determination unit configured to, upon determining that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and take the smooth region as the first position region.
12. The device according to claim 11, characterized in that the second locating module comprises:
a first acquisition unit configured to adjust, in proportion to the dimension ratio between the image frame and the screen, a plurality of first coordinates of the first position region to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
a third determination unit configured to determine the second position region on the screen according to the plurality of second coordinates.
13. The device according to claim 11, characterized in that the processing module comprises:
a first generation unit configured to generate a UI layer coinciding with the boundary of the second position region;
a first drawing unit configured to draw the update content on the entire UI layer;
wherein the display module is configured to overlay the UI layer on the second position region of the screen used to display the image frame, coinciding with the target content.
14. The device according to claim 11, characterized in that the processing module comprises:
a second generation unit configured to generate a UI layer coinciding with the screen boundary;
a second drawing unit configured to draw the update content on a third position region of the UI layer coinciding with the second position region, and to apply transparency processing to the part outside the third position region;
wherein the display module is configured to overlay the entire UI layer on the image frame.
15. The device according to any one of claims 11-14, characterized in that the detection module comprises:
a second acquisition unit configured to obtain feature information in the image frame;
a recognition unit configured to identify, according to a feature database, whether the feature information is the target content; wherein the feature database includes sample feature information corresponding to the target content.
16. The device according to claim 15, characterized in that the target content pre-specified by the user includes one or more of: a character's face, clothing, a color, text, or a pattern.
17. The device according to claim 16, characterized in that, before the feature information in the image frame is obtained, the device further comprises:
a receiving module configured to receive image frames of a plurality of video streams;
an acquisition module configured to obtain, in each image frame, sample feature information corresponding to sample content preset by the user;
a storage module configured to store the correspondence between the sample feature information and the sample content in the feature database.
18. The device according to claim 16, characterized in that the second acquisition unit comprises:
a first processing subunit configured to, if the target content is a first pattern, determine a pattern region on the image frame according to a boundary contour algorithm;
a first extraction subunit configured to extract pattern features from the pattern region;
wherein the recognition unit is configured to match the pattern features against sample pattern features in the feature database corresponding to the first pattern;
if the match succeeds, it is determined that the first pattern is present in the pattern region;
if the match fails, it is determined that the first pattern is not present in the pattern region.
19. The device according to claim 16, characterized in that the second acquisition unit comprises:
a second processing subunit configured to, if the target content is a first character's face, determine a facial region on the image frame according to facial features obtained by pre-training a classifier;
a second extraction subunit configured to extract facial features from the facial region;
wherein the recognition unit is configured to match the facial features against sample facial features in the feature database corresponding to the first character's face;
if the match succeeds, it is determined that the first character's face is present in the facial region;
if the match fails, it is determined that the first character's face is not present in the facial region.
20. The device according to claim 19, characterized in that the character facial features include: Haar features, FisherFace features, or LBPH features.
21. A terminal device, characterized in that the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect an image frame of a video stream to be played, and determine whether target content pre-specified by a user is present;
if it is determined that the target content is present, determine a first position region on the image frame corresponding to the target content;
determine, according to the first position region, a second position region on a screen used to display the image frame, the second position region corresponding to display of the target content;
generate a user interface (UI) layer, and draw, on the part of the UI layer coinciding with the second position region, preset update content corresponding to the target content;
when the screen displays the image frame, overlay the UI layer on the image frame so that the update content covers the target content as shown to the user;
wherein generating the UI layer and drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises:
generating a new blank UI layer using a UI control;
then parsing a file storing the update content corresponding to the target content, obtaining UI elements of the update content, and adding the UI elements onto the blank UI layer at the part coinciding with the second position region used to display the target content on the screen;
wherein determining the first position region on the image frame corresponding to the target content comprises:
detecting, based on an image boundary tracking algorithm, whether a smoothness of a region boundary corresponding to the target content reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, taking the region boundary corresponding to the target content as the first position region;
if it is determined that the smoothness does not reach the threshold, determining a smooth region corresponding to the region boundary and taking the smooth region as the first position region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510210000.9A CN104902318B (en) | 2015-04-29 | 2015-04-29 | Control method for playing back and terminal device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104902318A CN104902318A (en) | 2015-09-09 |
CN104902318B true CN104902318B (en) | 2018-09-18 |
Family
ID=54034669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510210000.9A Active CN104902318B (en) | 2015-04-29 | 2015-04-29 | Control method for playing back and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104902318B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106604105B (en) * | 2016-12-26 | 2019-10-29 | 深圳Tcl新技术有限公司 | Calculate the method and device of HBBTV application image size |
CN106713968B (en) * | 2016-12-27 | 2020-04-24 | 北京奇虎科技有限公司 | Live data display method and device |
CN106899892A (en) * | 2017-02-20 | 2017-06-27 | 维沃移动通信有限公司 | A kind of method and mobile terminal for carrying out video playback in a browser |
CN110661987A (en) * | 2018-06-29 | 2020-01-07 | 南京芝兰人工智能技术研究院有限公司 | Method and system for replacing video content |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622595A (en) * | 2011-01-28 | 2012-08-01 | 北京千橡网景科技发展有限公司 | Method and equipment used for positioning picture contained in image |
CN102893625A (en) * | 2010-05-17 | 2013-01-23 | 亚马逊技术股份有限公司 | Selective content presentation engine |
CN102982348A (en) * | 2012-12-25 | 2013-03-20 | 百灵时代传媒集团有限公司 | Identification method of advertisement image |
CN103442295A (en) * | 2013-08-23 | 2013-12-11 | 天脉聚源(北京)传媒科技有限公司 | Method and device for playing videos in image |
CN104038807A (en) * | 2014-06-13 | 2014-09-10 | Tcl集团股份有限公司 | Layer mixing method and device based on open graphics library (OpenGL) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1432708A (en) * | 2002-01-18 | 2003-07-30 | 胡新宜 | Two-storey multimedia network terminal service stall |
- 2015-04-29: CN application CN201510210000.9A filed; granted as patent CN104902318B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN104902318A (en) | 2015-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104850828B (en) | Character recognition method and device | |
US20170091551A1 (en) | Method and apparatus for controlling electronic device | |
CN106791893A (en) | Net cast method and device | |
CN107040646A (en) | Mobile terminal and its control method | |
CN104850432B (en) | Adjust the method and device of color | |
CN105760884B (en) | The recognition methods of picture type and device | |
CN105426079B (en) | The method of adjustment and device of picture luminance | |
CN104853223B (en) | The inserting method and terminal device of video flowing | |
TWI544336B (en) | Classes,electronic device and method of pairing thereof and seamless content playback method | |
CN106778773A (en) | The localization method and device of object in picture | |
CN104902318B (en) | Control method for playing back and terminal device | |
CN107392166A (en) | Skin color detection method, device and computer-readable recording medium | |
KR20160127606A (en) | Mobile terminal and the control method thereof | |
CN107529699A (en) | Control method of electronic device and device | |
CN107704190A (en) | Gesture identification method, device, terminal and storage medium | |
CN109523461A (en) | Method, apparatus, terminal and the storage medium of displaying target image | |
CN112052897A (en) | Multimedia data shooting method, device, terminal, server and storage medium | |
CN107330391A (en) | Product information reminding method and device | |
CN104883603B (en) | Control method for playing back, system and terminal device | |
CN108040280A (en) | Content item display methods and device, storage medium | |
CN107507128A (en) | Image processing method and equipment | |
CN105426904B (en) | Photo processing method, device and equipment | |
CN110286813A (en) | Picture mark position determines method and apparatus | |
CN103984476B (en) | menu display method and device | |
CN107463316A (en) | The method of adjustment and device of screen intensity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |