CN104902318A - Playing control method and terminal device - Google Patents

Playing control method and terminal device

Info

Publication number
CN104902318A
Authority
CN
China
Prior art keywords
picture frame
target content
content
layer
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510210000.9A
Other languages
Chinese (zh)
Other versions
CN104902318B (en)
Inventor
刘洁 (Liu Jie)
梁鑫 (Liang Xin)
王兴超 (Wang Xingchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510210000.9A
Publication of CN104902318A
Application granted
Publication of CN104902318B
Active legal status
Anticipated expiration


Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a playing control method and a terminal device. The playing control method includes the following steps: detecting whether target content specified by a user exists on a picture frame of a video stream to be played; determining a first position area on the picture frame corresponding to the target content; determining, according to the first position area, a second position area on the screen used for displaying the picture frame in which the target content is correspondingly displayed; generating a UI layer and drawing preset update content corresponding to the target content on the portion of the UI layer matching the second position area; and, when the original video stream is played and the screen displays the picture frame, covering the picture frame with the UI layer so that the update content covers the target content and is displayed to the user. With the playing control method and the terminal device, personalized video content that satisfies the user's requirements can be displayed to the user in real time without modifying the video stream data, and the flexibility and efficiency of personalized video playback are improved.

Description

Playing control method and terminal device
Technical field
The present disclosure relates to the field of video display technologies, and in particular to a playing control method and a terminal device.
Background
Intelligent terminals have become increasingly popular and are now a major way for users to watch multimedia video. Taking a mobile phone as an example, a user can download video content of interest from the network side and watch it, or watch video content stored locally.
In the related art, video playback proceeds frame by frame according to the picture frames of the video stream, and the user can only control the play mode, for example the playing progress or full-screen display. The user cannot control the played content itself, and therefore cannot perform personalized playback of video content of interest.
Summary
Embodiments of the present disclosure provide a playing control method and a terminal device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a playing control method is provided, the method including:
detecting a picture frame of a video stream to be played, and judging whether target content pre-specified by a user exists on the picture frame;
if it is determined that the target content exists, determining a first position area on the picture frame corresponding to the target content;
determining, according to the first position area, a second position area that correspondingly displays the target content on a screen used for displaying the picture frame;
generating a user interface (UI) layer, and drawing preset update content corresponding to the target content on the portion of the UI layer matching the second position area; and
when the screen displays the picture frame, covering the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
According to a second aspect of the embodiments of the present disclosure, a terminal device is provided, the device including:
a detection module configured to detect a picture frame of a video stream to be played and judge whether target content pre-specified by a user exists on the picture frame;
a first locating module configured to, when it is determined that the target content exists, determine a first position area on the picture frame corresponding to the target content;
a second locating module configured to determine, according to the first position area, a second position area that correspondingly displays the target content on a screen used for displaying the picture frame;
a processing module configured to generate a user interface (UI) layer and draw preset update content corresponding to the target content on the portion of the UI layer matching the second position area; and
a display module configured to, when the screen displays the picture frame, cover the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
According to a third aspect of the embodiments of the present disclosure, a terminal device is provided, the device including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a picture frame of a video stream to be played, and judge whether target content pre-specified by a user exists on the picture frame;
if it is determined that the target content exists, determine a first position area on the picture frame corresponding to the target content;
determine, according to the first position area, a second position area that correspondingly displays the target content on a screen used for displaying the picture frame;
generate a user interface (UI) layer, and draw preset update content corresponding to the target content on the portion of the UI layer matching the second position area; and
when the screen displays the picture frame, cover the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
It is detected that target content specified by the user exists on a picture frame of the video stream to be played; the first position area on the picture frame corresponding to the target content is determined; the second position area on the screen where the target content would be displayed is determined according to the first position area; a UI layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area; and when the original video stream is played and the screen displays that frame, the UI layer is covered on the picture frame so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented in real time without tampering with the video stream data, which avoids modifying the original video stream data in advance according to the user and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a playing control method according to an exemplary embodiment;
Fig. 2 is a flowchart of a playing control method according to another exemplary embodiment;
Fig. 3A is a flowchart of a playing control method according to another exemplary embodiment;
Fig. 3B shows the screen of a terminal device displaying a picture frame that contains the target content;
Fig. 3C shows the screen of the terminal device displaying the picture frame with the target content covered by the update content;
Fig. 4A is a flowchart of a playing control method according to another exemplary embodiment;
Fig. 4B shows the screen of a terminal device displaying a picture frame that contains the target content;
Fig. 4C shows the screen of the terminal device displaying the picture frame with the target content covered by the update content;
Fig. 5 is a block diagram of a terminal device according to an exemplary embodiment;
Fig. 6 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 12 is a block diagram of a terminal device according to another exemplary embodiment;
Fig. 13 is a block diagram of a terminal device according to an exemplary embodiment.
With the above drawings, specific embodiments of the present disclosure are illustrated and will be described in more detail hereinafter. The drawings and the textual descriptions are not intended to limit the scope of the concept of the present disclosure in any way, but to explain the concept of the present disclosure to those skilled in the art with reference to specific embodiments.
Detailed description
Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a playing control method according to an exemplary embodiment. The present embodiment is described with the playing control method applied in a terminal device that includes a display screen. The playing control method may include the following steps:
In step 101, a picture frame of a video stream to be played is detected, and it is judged whether target content pre-specified by the user exists on the picture frame.
First, the terminal device receives the video stream that the user specifies for playback; this may be a video stream that the terminal device receives from another network device, or a video stream stored locally on the terminal device in advance.
Then, the terminal device receives the target content specified by the user for this video stream, as well as the update content corresponding to the target content provided by the user. The target content specified by the user is the customized, personalized play content: when the relevant picture frame of the video stream is played, the original target content in the video stream is not presented; instead, the update content specified by the user is presented.
It should be noted that the target content pre-specified by the user includes at least one of a character's face, clothing, a color, text, or a pattern in the video stream, and the update content provided in advance by the user corresponds to the target content.
According to the user's personalized playback requirement for the selected video stream, the terminal device first detects the picture frame of the video stream to be played and judges whether the target content pre-specified by the user exists in the picture frame. It should be noted that there are many ways to detect whether the target content exists in the picture frame, for example: comparing the pixels of the target content with the pixels in the picture frame, matching feature information of the target content against feature information in the picture frame, or comparing spectral information of the target content with spectral information in the picture frame. A suitable detection approach can be selected according to the actual target content, which is not limited in this embodiment. A minimal pixel-comparison sketch is given below.
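As one illustration of the pixel-comparison approach mentioned above, the following sketch uses OpenCV template matching to test whether a reference image of the target content appears in a frame. The similarity threshold and the function name are assumptions made here for illustration and are not taken from the patent.

```python
import cv2

def frame_contains_target(frame_bgr, target_bgr, threshold=0.85):
    """Rough pixel-level check: slide the target image over the frame and
    look for a location where the normalized correlation exceeds a threshold."""
    result = cv2.matchTemplate(frame_bgr, target_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None          # target content not found in this frame
    h, w = target_bgr.shape[:2]
    x, y = max_loc
    return (x, y, w, h)      # bounding box usable as the first position area
```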
In step 102, if it is determined that the target content exists, a first position area on the picture frame corresponding to the target content is determined.
By detecting the picture frame of the video stream to be played, if the terminal device determines that the target content pre-specified by the user exists in the picture frame, it determines the first position area on the picture frame corresponding to the target content specified by the user. For example, if the target content pre-specified by the user includes a first character's face, the first position area is the area of the first character's face; if the target content includes the first character's face and the first character's hat, the first position area is the area of the first character's face and the area of the first character's hat; if the target content includes the first character's face, a second character's face, and a first pattern, the first position area is the areas of the first and second characters' faces and the area of the first pattern.
In step 103, a second position area that correspondingly displays the target content on the screen used for displaying the picture frame is determined according to the first position area.
According to the first position area on the picture frame corresponding to the target content specified by the user, the terminal device determines the second position area on the screen displaying the picture frame where the target content is correspondingly displayed. It should be noted that there are many ways to determine the second position area on the screen according to the first position area of the picture frame; two examples are as follows:
Mode one:
First, the picture frame is scaled, with the first position area scaled synchronously;
When the picture frame has been scaled to the screen size, the scaled first position area information is recorded; this scaled area information can serve as the second position area on the screen used for displaying the picture frame, in which the target content is correspondingly displayed.
Mode two:
First, a plurality of first coordinate information items on the first position area are obtained. For example, if the first position area is a square, the plurality of first coordinate information items corresponding to the area may be the coordinates of its four corners; if the first position area is a circle, they may be the coordinates of the intersections of at least two diameters with the circle's boundary;
According to the size ratio between the picture frame and the screen, the plurality of first coordinate information items on the first position area are adjusted proportionally to obtain a plurality of second coordinate information items corresponding to the plurality of first coordinate information items;
The second position area in which the target content is correspondingly displayed on the screen displaying the picture frame can then be determined according to the plurality of second coordinate information items. A small sketch of this proportional mapping follows.
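The following sketch illustrates mode two under the assumption that the picture frame fills the screen and both share the same top-left origin; the function and the rectangle representation are illustrative choices made here, not the patent's own formulation.

```python
def map_first_to_second_area(first_area, frame_size, screen_size):
    """Proportionally map a bounding box from picture-frame coordinates
    (first position area) to screen coordinates (second position area)."""
    x, y, w, h = first_area                            # first position area on the frame
    frame_w, frame_h = frame_size
    screen_w, screen_h = screen_size
    sx, sy = screen_w / frame_w, screen_h / frame_h    # frame-to-screen size ratios
    # Each first coordinate is scaled by the size ratio to obtain a second coordinate.
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# Example: a 100x80 face box on a 1920x1080 frame shown on a 1280x720 screen.
second_area = map_first_to_second_area((600, 300, 100, 80), (1920, 1080), (1280, 720))
# -> (400, 200, 67, 53)
```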
In step 104, a user interface (UI) layer is generated, and preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area.
The terminal device uses a UI control to generate a new blank user interface (UI) layer;
Then the file storing the update content corresponding to the target content is parsed to obtain a UI element of the update content, and the UI element is added to the portion of the blank UI layer matching the second position area used on the screen for displaying the target content.
In step 105, when the screen displays the picture frame, the UI layer is covered on the picture frame, so that the update content covers the target content and is displayed to the user.
During playback of the video stream by the terminal device, when the screen displays this picture frame, the UI layer on which the update content has been drawn at the portion matching the second position area is covered on the picture frame, so that the update content covers the target content specified by the user, and personalized video content meeting the user's demand is presented to the user. A compositing sketch is given below.
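To make steps 104 and 105 concrete, the sketch below builds a transparent screen-sized layer with Pillow, pastes an update-content image at the second position area, and composites it over the rendered frame. The use of Pillow and the file names are assumptions made here for illustration; the patent does not prescribe a particular drawing API.

```python
from PIL import Image

def compose_frame_with_ui_layer(frame_path, update_path, second_area, screen_size):
    """Draw the update content on a blank UI layer at the second position area,
    then cover the displayed picture frame with that layer."""
    screen_frame = Image.open(frame_path).convert("RGBA").resize(screen_size)
    x, y, w, h = second_area

    # Generate a blank (fully transparent) UI layer the size of the screen.
    ui_layer = Image.new("RGBA", screen_size, (0, 0, 0, 0))

    # Parse the stored update content and place it on the matching portion.
    update = Image.open(update_path).convert("RGBA").resize((w, h))
    ui_layer.paste(update, (x, y), update)

    # Covering the frame with the UI layer: the update content hides the target content.
    return Image.alpha_composite(screen_frame, ui_layer)

# composed = compose_frame_with_ui_layer("frame.png", "bear_face.png", (400, 200, 67, 53), (1280, 720))
# composed.show()
```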
To sum up, with the playing control method provided by this embodiment, it is detected that target content specified by the user exists on a picture frame of the video stream to be played; the first position area on the picture frame corresponding to the target content is determined; the second position area on the screen where the target content is correspondingly displayed is determined according to the first position area; a UI layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area; and when the original video stream is played and the screen displays the picture frame, the UI layer is covered on the picture frame so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which avoids modifying the original video stream data in advance according to the user and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 2 is a flowchart of a playing control method according to another exemplary embodiment. The present embodiment is described with the playing control method applied in a terminal device that includes a display screen. In this embodiment, the detection of target content in the picture frame uses feature-information matching, and the first position area corresponding to the target content on the picture frame is located based on an image boundary tracking algorithm; on this basis, the implementation process of the playing control method is described in detail. The playing control method may include the following steps:
In step 201, feature information in a picture frame of the video stream to be played is obtained.
The terminal device receives the video stream that the user specifies for playback, the target content specified by the user for this video stream, and the update content corresponding to the target content provided by the user. The video stream specified for playback may be a video stream that the terminal device receives from another network device, or a video stream stored locally on the terminal device in advance.
It should be noted that the target content pre-specified by the user includes at least one of a character's face, clothing, a color, text, or a pattern in the video stream, and the update content provided in advance by the user corresponds to the target content. According to the user's personalized playback requirement for the selected video stream, the terminal device needs to detect the picture frame of the video stream to be played and judge whether the target content pre-specified by the user exists in the picture frame.
First, the feature information in the picture frame is obtained. It should be noted that different feature acquisition manners can be selected according to the target content that the user has pre-specified for modification; two examples are as follows:
Mode one: if the target content pre-specified by the user is a first pattern distributed at multiple positions against the background, the feature information of all regions on the picture frame is extracted one by one according to a preset unit window, for example a window of 30 by 30 pixels. For example, for a picture frame of 900 by 900 pixels, extracting features with a 30-by-30-pixel unit window requires extracting 900 feature items (30 windows per side). This manner is highly general and can be applied to all types of target content (a window-tiling sketch is shown after mode two).
Mode two: if the target content pre-specified by the user is a character's face, a processing model such as a neural network model for face recognition or a classifier comparison model can be used. The facial region is first determined in the picture frame, and facial feature information is then extracted from that facial region. This avoids extracting feature information one by one from all regions of the picture frame, and improves processing efficiency for target content whose local area is easy to locate.
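As an illustration of mode one, the sketch below tiles a frame with the preset unit window and computes a simple per-window feature, here a small color histogram. The choice of histogram feature and the helper name are assumptions made for illustration; the patent leaves the per-window feature unspecified.

```python
import cv2

def extract_window_features(frame_bgr, win=30):
    """Slide a win x win unit window over the frame and compute one feature
    vector (an 8x8x8 color histogram) per window. For a 900x900 frame and a
    30-pixel window this yields 30 x 30 = 900 feature items."""
    features = []
    h, w = frame_bgr.shape[:2]
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = frame_bgr[y:y + win, x:x + win]
            hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            features.append(((x, y, win, win), cv2.normalize(hist, hist).flatten()))
    return features
```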
In step 202, whether the feature information is the target content pre-specified by the user is identified according to a feature database, wherein the feature database includes sample feature information corresponding to the target content.
The terminal device identifies, according to the feature database, whether the feature information obtained from the picture frame is the target content specified by the user. The feature database includes the sample feature information corresponding to the target content, so the terminal device matches the sample feature information corresponding to the target content in the feature database against the feature information obtained from the picture frame one by one. If the matching succeeds, the target content pre-specified by the user exists in the picture frame; if the matching fails, the target content pre-specified by the user does not exist in the picture frame.
It should be noted that the content of the feature database may be sample feature information pre-built by the service provider of the video stream. More flexibly, in addition to such previously built sample feature information, the feature database may also include sample feature information generated in real time, for the video stream sent by the user, from the processing content specified by the user.
In step 203, if it is determined that the target content exists, the smoothness of the region boundary corresponding to the target content is obtained based on an image boundary tracking algorithm.
By detecting the picture frame of the video stream to be played, if the terminal device determines that the target content pre-specified by the user exists in the picture frame, it obtains the smoothness of the region boundary corresponding to the target content through an image boundary tracking algorithm. Image boundary tracking algorithms include binary-image boundary tracking algorithms, wavelet-based boundary tracking algorithms and so on, which can be selected according to the actual application; the smoothness of the region boundary corresponding to the target content is then obtained through the selected algorithm.
In step 204, it is judged whether the smoothness reaches a preset threshold; if it is determined that the smoothness reaches the preset threshold, step 205 is executed; if it is determined that the smoothness does not reach the preset threshold, step 206 is executed.
It is judged whether the smoothness of the region boundary corresponding to the target content reaches the preset threshold. It should be noted that different image boundary tracking algorithms are preset with different thresholds; for example, the threshold corresponding to the binary-image boundary tracking algorithm is A, and the threshold corresponding to the wavelet-based boundary tracking algorithm is B. Therefore, the obtained smoothness is compared with the threshold corresponding to the adopted algorithm; if it is determined that the smoothness reaches the preset threshold, step 205 is executed; if it is determined that the smoothness does not reach the preset threshold, step 206 is executed.
In step 205, if it is determined that the smoothness reaches the threshold, the region boundary corresponding to the target content is taken as the first position area.
When it is determined that the smoothness of the region boundary corresponding to the target content reaches the preset threshold, the region boundary is judged to be easy to segment, and the region boundary corresponding to the target content is directly taken as the first position area.
In step 206, if it is determined that the smoothness does not reach the threshold, a smooth region corresponding to the region boundary is determined, and the smooth region is taken as the first position area.
When it is determined that the smoothness of the region boundary corresponding to the target content does not reach the preset threshold, the region boundary is judged to be difficult to segment; a smooth region corresponding to the region boundary can be determined according to a preset compensation parameter, and the smooth region is then taken as the first position area. A sketch of one possible smoothness check follows.
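The patent does not fix a particular smoothness measure, so the sketch below assumes one: the ratio of the convex-hull perimeter to the traced contour perimeter (close to 1 for smooth boundaries), with a fallback of padding the bounding box by a compensation margin when the boundary is too ragged. Both the measure and the padding are assumptions made here.

```python
import cv2

def first_position_area(mask, threshold=0.9, compensation=10):
    """mask: binary image in which the detected target content is non-zero.
    Returns either the traced boundary (smooth case) or a padded smooth
    region around it (non-smooth case) as the first position area."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)

    # Assumed smoothness: convex hull perimeter divided by contour perimeter.
    hull = cv2.convexHull(contour)
    smoothness = cv2.arcLength(hull, True) / max(cv2.arcLength(contour, True), 1e-6)

    if smoothness >= threshold:
        return ("boundary", contour)                 # step 205: use the boundary itself
    x, y, w, h = cv2.boundingRect(contour)
    pad = compensation                               # step 206: preset compensation parameter
    return ("smooth_region", (x - pad, y - pad, w + 2 * pad, h + 2 * pad))
```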
In step 207, the second position area that correspondingly displays the target content on the screen used for displaying the picture frame is determined according to the first position area.
In step 208, a user interface (UI) layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area.
In step 209, when the screen displays the picture frame, the UI layer is covered on the picture frame, so that the update content covers the target content and is displayed to the user.
For the implementation of steps 207 to 209 in this embodiment, reference may be made to steps 103 to 105 in the embodiment shown in Fig. 1, which will not be repeated here.
To sum up, in the playing control method provided by this embodiment, the detection of target content in the picture frame uses feature-information matching, and the first position area corresponding to the target content on the picture frame is located based on an image boundary tracking algorithm. The second position area used for displaying the target content on the screen is then determined according to the first position area, a UI layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area, so that when the original video stream is played and the screen displays the picture frame, the UI layer is covered on the picture frame and the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which avoids modifying the original video stream data in advance according to the user and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
It should be added that, before step 201, the method further includes the following (a sketch follows the list):
receiving picture frames of a plurality of video streams;
obtaining sample feature information corresponding to the sample content pre-set by the user in each picture frame; and
storing the correspondence between the sample feature information and the sample content in the feature database.
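The sketch below shows one way such a feature database could be kept, as an in-memory dictionary from sample-content labels to lists of feature vectors. The histogram feature reuses the assumption from the earlier window-tiling sketch, and the structure and thresholds are illustrative rather than the patent's actual storage format.

```python
import cv2

feature_database = {}   # sample content label -> list of sample feature vectors

def add_sample(label, sample_bgr):
    """Store the correspondence between sample feature information and sample content."""
    hist = cv2.calcHist([sample_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    feature_database.setdefault(label, []).append(cv2.normalize(hist, hist).flatten())

def matches_target(label, patch_bgr, min_similarity=0.8):
    """Match a candidate patch against the stored samples for one target content."""
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    feat = cv2.normalize(hist, hist).flatten()
    return any(cv2.compareHist(feat, sample, cv2.HISTCMP_CORREL) >= min_similarity
               for sample in feature_database.get(label, []))
```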
To sum up, with the playing control method provided by this embodiment, the feature database can be updated dynamically, and as usage time accumulates, the personalized play content provided to the user becomes more diversified.
In the above embodiments, the generated UI layer is used to cover the picture frame containing the target content, so that the update content covers the target content and the effect of personalized playback is presented to the user through the screen. It should be noted that, in order to implement the above process, there are multiple ways of generating the UI layer and of covering the picture frame with it. Different UI layer processing techniques can be selected according to aspects such as the proportion of the picture frame occupied by the target content or its arrangement, so as to improve processing efficiency. This is described in detail below with reference to the embodiments shown in Fig. 3 and Fig. 4.
Fig. 3A is a flowchart of a playing control method according to another exemplary embodiment. The present embodiment is described with the playing control method applied in a terminal device that includes a display screen.
In this embodiment, the target content specified by the user is a first character's face, and the application scenario is one in which the distribution area of the first character's face on the picture frame is unique; a local processing manner of the UI layer is adopted. The playing control method may include the following steps:
In step 301, the facial region on the picture frame of the video stream to be played is determined according to a facial feature range obtained by pre-training.
A feature corresponding to a preset unit window is extracted from the picture frame, and it is judged whether the feature falls within the facial feature range obtained by pre-training. If the feature falls within the range, the region corresponding to the feature is a facial region; if it does not, the region corresponding to the feature is not a facial region. The facial region on the picture frame can thus be located quickly. The facial features may include Haar features, FisherFace features, or LBPH features, selected according to the application's needs.
In step 302, facial features are extracted from the facial region.
Facial features are extracted from positions in the facial region such as the contour, the eyebrows, the eyes, the nose, and the lips.
In step 303, the facial features are matched against the sample facial features in the feature database corresponding to the target content pre-specified by the user.
The sample facial features corresponding to the target content in the feature database are matched against the facial features extracted from the facial region. If the matching succeeds, the facial region is determined to be the target content; if the matching fails, the facial region is determined not to be the target content. A face detection and matching sketch is given below.
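The sketch below illustrates steps 301 to 303 with a Haar cascade for locating the facial region and an LBPH recognizer (from opencv-contrib-python) trained on sample faces of the target character for the match. LBPH is one of the feature types the embodiment names as options; the label value, confidence threshold, and function names are assumptions made here.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv-contrib-python

def train_on_samples(sample_gray_faces):
    """Train on grayscale sample faces of the target character, all labeled 0."""
    recognizer.train(sample_gray_faces, np.array([0] * len(sample_gray_faces)))

def find_target_face(frame_bgr, max_distance=60.0):
    """Step 301: locate facial regions with the pre-trained cascade.
    Steps 302-303: match each region against the stored sample facial features."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if label == 0 and distance < max_distance:   # lower LBPH distance = closer match
            return (x, y, w, h)   # first position area for the first character's face
    return None
```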
In step 304, if it is determined that the target content exists, the first position area on the picture frame corresponding to the target content is determined.
In step 305, the second position area that correspondingly displays the target content on the screen used for displaying the picture frame is determined according to the first position area.
For steps 304 and 305 in this embodiment, reference may be made to steps 102 and 103 in the embodiment shown in Fig. 1, or to steps 203 to 207 in the embodiment shown in Fig. 2.
In step 306, a UI layer whose boundary matches the boundary of the second position area is generated, and the update content is drawn on the whole UI layer.
The terminal device uses a UI control to generate a new blank user interface (UI) layer whose boundary matches the boundary of the second position area; the file storing the update content corresponding to the target content is then parsed to obtain the UI element of the update content, and the UI element is added onto the whole blank UI layer.
In step 307, when the screen displays the picture frame, the UI layer is fitted over and covers the second position area used for displaying the target content of the picture frame, so that the update content covers the target content and is displayed to the user.
During playback of the video stream by the terminal device, when the screen displays this picture frame, the UI layer is fitted over and covers the second position area on the screen used for displaying the target content, so that the update content covers the target content specified by the user, and personalized video content meeting the user's demand is presented to the user.
As an example, Fig. 3B shows the screen of the terminal device displaying a picture frame that contains the target content, and Fig. 3C shows the screen displaying the picture frame with the target content covered by the update content. As shown in Fig. 3B and Fig. 3C:
Suppose the target content specified by the user is the "robot cat face" on the picture frame and the update content is a "little bear face". Specifically, the sample facial features in the feature database corresponding to the "robot cat face" are matched against the facial features extracted from the facial region. If the matching succeeds, the facial region is determined to be the "robot cat face"; the file storing the "little bear face" is then parsed to obtain the UI element, and the UI element is added onto a blank UI layer whose boundary matches the boundary of the second position area.
During playback of the video stream by the terminal device, when the screen displays this picture frame, the UI layer is fitted over and covers the robot cat facial region on the picture frame, so that the "little bear face" covers the "robot cat face", and personalized video content meeting the user's demand is presented to the user.
To sum up, in the playing control method provided by this embodiment, the target content specified by the user is a first character's face and the distribution area of the first character's face on the picture frame is unique, so a local processing manner of the UI layer is adopted: when the original video stream is played and the screen displays this picture frame, the UI layer is fitted over and covers the second position area used for displaying the target content, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 4A is a flowchart of a playing control method according to another exemplary embodiment. The present embodiment is described with the playing control method applied in a terminal device that includes a display screen.
In this embodiment, the target content specified by the user is a plurality of patterns, and the application scenario is one in which the distribution areas of the plurality of patterns on the picture frame are scattered; an overall processing manner of the UI layer is adopted. The playing control method may include the following steps:
In step 401, the pattern regions on the picture frame of the video stream to be played are determined according to a boundary contour algorithm.
All pattern regions on the picture frame are determined based on the boundary contour algorithm.
In step 402, pattern features are extracted from the pattern regions.
Pattern features are extracted from the pattern regions; the pattern features include a color histogram or a gradient histogram.
In step 403, the pattern features are matched against the sample pattern features in the feature database corresponding to the target content pre-specified by the user.
The sample pattern features corresponding to the target content in the feature database are matched against the pattern features extracted from the pattern regions. If the matching succeeds, the pattern region is determined to be the target content; if the matching fails, the pattern region is determined not to be the target content. A matching sketch follows.
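The sketch below illustrates steps 401 to 403 under stated assumptions: pattern regions are found as external contours of an edge map, the pattern feature is a color histogram (one of the two options the embodiment names), and matching is histogram correlation against a stored sample. The thresholds and helper names are illustrative, not taken from the patent.

```python
import cv2

def color_hist(patch_bgr):
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def find_pattern_regions(frame_bgr, sample_patch_bgr, min_similarity=0.8, min_area=400):
    """Step 401: boundary-contour pattern regions; steps 402-403: color-histogram match."""
    edges = cv2.Canny(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sample_feat = color_hist(sample_patch_bgr)

    matched = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        feat = color_hist(frame_bgr[y:y + h, x:x + w])
        if cv2.compareHist(feat, sample_feat, cv2.HISTCMP_CORREL) >= min_similarity:
            matched.append((x, y, w, h))   # each box is one first position area
    return matched
```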
In step 404, if it is determined that the target content exists, the first position area on the picture frame corresponding to the target content is determined.
In step 405, the second position area that correspondingly displays the target content on the screen used for displaying the picture frame is determined according to the first position area.
For steps 404 and 405 in this embodiment, reference may be made to steps 102 and 103 in the embodiment shown in Fig. 1, or to steps 203 to 207 in the embodiment shown in Fig. 2.
In step 406, a UI layer whose boundary matches the screen boundary is generated, the update content is drawn on a third position area of the UI layer matching the second position area, and the portion outside the third position area is made transparent.
The terminal device uses a UI control to generate a new blank user interface (UI) layer whose boundary matches the screen boundary; the file storing the update content corresponding to the target content is then parsed to obtain the UI element of the update content, the UI element is added to the third position area on the UI layer matching the second position area on the screen, and the portion of the UI layer outside the third position area is made transparent.
In step 407, when the screen displays the picture frame, the whole UI layer is covered on the picture frame, so that the update content covers the target content and is displayed to the user.
During playback of the video stream by the terminal device, when the screen displays this picture frame, the whole UI layer is covered on the picture frame, so that the update content covers the target content specified by the user, and personalized video content meeting the user's demand is presented to the user.
As an example, Fig. 4B shows the screen of the terminal device displaying a picture frame that contains the target content, and Fig. 4C shows the screen displaying the picture frame with the target content covered by the update content. As shown in Fig. 4B and Fig. 4C:
Suppose the target content specified by the user includes a first pattern and a second pattern. On this picture frame, the first pattern is "the lower half of Kangfu's body" and its corresponding update content is "a mermaid's tail"; the second pattern is "the top of the robot cat's head" and its corresponding update content is "the top of the robot cat's head with a small aircraft". Specifically, the sample pattern features in the feature database corresponding to the first pattern and the second pattern are matched against the pattern features extracted from the pattern regions. If the matching succeeds, the pattern regions are determined to be "the lower half of Kangfu's body" and "the top of the robot cat's head"; the files storing the "mermaid's tail" and "top of the robot cat's head with a small aircraft" patterns are then parsed to obtain the UI elements, the UI elements are added to the third position areas on the UI layer matching the second position areas on the screen, and the portion outside the third position areas is made transparent.
During playback of the video stream by the terminal device, when the screen displays this picture frame, the whole UI layer is covered on the picture frame, so that the "mermaid's tail" pattern covers the "lower half of Kangfu's body" pattern, the "top of the robot cat's head with a small aircraft" pattern covers the "top of the robot cat's head" pattern, and personalized video content meeting the user's demand is presented to the user.
To sum up, in the playing control method provided by this embodiment, the target content specified by the user is a plurality of patterns whose distribution areas on the picture frame are scattered, so an overall processing manner of the UI layer is adopted: when the original video stream is played and the screen displays this picture frame, the whole UI layer is covered on the picture frame, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
The following are device embodiments of the present disclosure, which can be configured to execute the method embodiments of the present disclosure. For details not disclosed in the device embodiments, reference may be made to the method embodiments of the present disclosure.
Fig. 5 is a block diagram of a terminal device according to an exemplary embodiment. As shown in Fig. 5, the terminal device includes: a detection module 11, a first locating module 12, a second locating module 13, a processing module 14 and a display module 15, wherein:
the detection module 11 is configured to detect a picture frame of a video stream to be played and judge whether target content pre-specified by a user exists on the picture frame;
the first locating module 12 is configured to, when it is determined that the target content exists, determine a first position area on the picture frame corresponding to the target content;
the second locating module 13 is configured to determine, according to the first position area, a second position area that correspondingly displays the target content on a screen used for displaying the picture frame;
the processing module 14 is configured to generate a user interface (UI) layer and draw preset update content corresponding to the target content on the portion of the UI layer matching the second position area; and
the display module 15 is configured to, when the screen displays the picture frame, cover the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, it is detected that target content specified by the user exists on a picture frame of the video stream to be played; the first position area on the picture frame corresponding to the target content is determined; the second position area on the screen where the target content is correspondingly displayed is determined according to the first position area; a UI layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area; and when the original video stream is played and the screen displays the picture frame, the UI layer is covered on the picture frame so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which avoids modifying the original video stream data in advance according to the user and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 6 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 6, based on the embodiment shown in Fig. 5, the first locating module 12 includes: a judging unit 121, a first determining unit 122 and a second determining unit 123, wherein:
the judging unit 121 is configured to judge, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
the first determining unit 122 is configured to, when it is determined that the smoothness reaches the threshold, take the region boundary corresponding to the target content as the first position area; and
the second determining unit 123 is configured to, when it is determined that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and take the smooth region as the first position area.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
Fig. 7 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 7, based on the embodiment shown in Fig. 5, the second locating module 13 includes: a first acquiring unit 131 and a third determining unit 132, wherein:
the first acquiring unit 131 is configured to adjust, in proportion, a plurality of first coordinate information items on the first position area according to the size ratio between the picture frame and the screen, to obtain a plurality of second coordinate information items corresponding to the plurality of first coordinate information items; and
the third determining unit 132 is configured to determine the second position area on the screen according to the plurality of second coordinate information items.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, the first position area corresponding to the target content on the picture frame is located based on an image boundary tracking algorithm, the second position area used for displaying the target content on the screen is then determined according to the first position area, a UI layer is generated, and the preset update content corresponding to the target content is drawn on the portion of the UI layer matching the second position area, so that when the original video stream is played and the screen displays the picture frame, the UI layer is covered on the picture frame and the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which avoids modifying the original video stream data in advance according to the user and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 8 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 8, based on the embodiment shown in Fig. 5, the processing module 14 includes: a first generation unit 141 and a first drawing unit 142, wherein:
the first generation unit 141 is configured to generate a UI layer whose boundary matches the boundary of the second position area;
the first drawing unit 142 is configured to draw the update content on the whole UI layer; and
the display module 15 is configured to fit the UI layer over and cover the second position area used for displaying the target content of the picture frame.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, a local processing manner of the UI layer is adopted: when the original video stream is played and the screen displays this picture frame, the UI layer is fitted over and covers the second position area used for displaying the target content, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 9 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 9, based on the embodiment shown in Fig. 5, the processing module 14 includes: a second generation unit 143 and a second drawing unit 144, wherein:
the second generation unit 143 is configured to generate a UI layer whose boundary matches the screen boundary;
the second drawing unit 144 is configured to draw the update content on a third position area of the UI layer matching the second position area, and to make the portion outside the third position area transparent; and
the display module 15 is configured to cover the whole UI layer on the picture frame.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, an overall processing manner of the UI layer is adopted: when the original video stream is played and the screen displays this picture frame, the whole UI layer is covered on the picture frame, so that the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 10 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 10, based on the embodiment shown in Fig. 5, the detection module 11 includes: a second acquiring unit 111 and a recognition unit 112, wherein:
the second acquiring unit 111 is configured to obtain the feature information in the picture frame; and
the recognition unit 112 is configured to identify, according to a feature database, whether the feature information is the target content, wherein the feature database includes the sample feature information corresponding to the target content.
Further, the device also includes:
a receiving module 16 configured to receive picture frames of a plurality of video streams;
an acquisition module 17 configured to obtain sample feature information corresponding to the sample content pre-set by the user in each picture frame; and
a storage module 18 configured to store the correspondence between the sample feature information and the sample content in the feature database.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, the detection of target content in the picture frame uses feature-information matching, and the feature database can be updated dynamically; as usage time accumulates, the personalized play content provided to the user becomes more diversified.
Fig. 11 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Fig. 11, based on the embodiment shown in Fig. 10, the second acquiring unit 111 includes: a first processing sub-unit 1111 and a first extraction sub-unit 1112, wherein:
the first processing sub-unit 1111 is configured to, if the target content is a first pattern, determine the pattern regions on the picture frame according to a boundary contour algorithm;
the first extraction sub-unit 1112 is configured to extract pattern features from the pattern regions; and
the recognition unit 112 is configured to match the pattern features against the sample pattern features in the feature database corresponding to the first pattern;
if the matching succeeds, it is determined that the first pattern exists in the pattern region;
if the matching fails, it is determined that the first pattern does not exist in the pattern region.
For the function of each module and the processing flow in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and will not be repeated here.
With the terminal device provided by this embodiment, for the application scenario in which the target content specified by the user is a plurality of patterns whose distribution areas on the picture frame are scattered, the detection manner of pattern-feature matching is adopted, which improves processing efficiency.
Figure 12 is a block diagram of a terminal device according to another exemplary embodiment. As shown in Figure 12, on the basis of the embodiment shown in Figure 10, the second acquiring unit 111 comprises a second processing subunit 1113 and a second extracting subunit 1114, wherein:
the second processing subunit 1113 is configured to, if the target content is a first character face, determine a face area on the picture frame according to facial features obtained through pre-training in a classifier;
the second extracting subunit 1114 is configured to extract facial features from the face area;
the recognition unit 112 is configured to match the facial features with the sample facial features corresponding to the first character face in the feature database;
if the matching succeeds, it is determined that the first character face exists in the face area;
if the matching fails, it is determined that the first character face does not exist in the face area.
For the functions and processing flows of the modules in the terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
The terminal device provided by this embodiment, for the application scenario in which the target content specified by the user is a first character face and the corresponding regions on the picture frame are dispersed, adopts a detection mode based on facial feature matching, which improves processing efficiency.
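As with the pattern case, the face pipeline can be sketched briefly. The patent names a pre-trained classifier for locating the face area and, in claim 11, LBPH among the candidate facial features; the example below uses OpenCV's bundled Haar cascade for detection and the LBPH recognizer from the opencv-contrib package for matching, with an illustrative confidence threshold and a single label 0 for the first character.

```python
# Sketch of the Figure 12 pipeline: a pre-trained classifier locates face areas
# and an LBPH recognizer compares each area with sample faces of the first
# character. Requires opencv-contrib-python for cv2.face; thresholds are
# illustrative assumptions.
import cv2
import numpy as np

def build_face_recognizer(sample_faces):
    """sample_faces: list of grayscale crops of the first character's face."""
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    labels = np.zeros(len(sample_faces), dtype=np.int32)  # one identity -> label 0
    recognizer.train(sample_faces, labels)
    return recognizer

def find_character_face(frame_gray, recognizer, cascade, max_confidence=70.0):
    faces = cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        label, confidence = recognizer.predict(frame_gray[y:y + h, x:x + w])
        if label == 0 and confidence < max_confidence:  # lower value = closer match
            return (x, y, w, h)  # first position area of the character face
    return None  # the specified face is not in this frame
```

A caller would build the cascade once, for example with cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml"), and reuse it together with the trained recognizer for every frame.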
Figure 13 is a block diagram of a terminal device according to an exemplary embodiment. For example, the terminal device 1300 may be a mobile phone, a computer, a tablet device, or the like.
Referring to Figure 13, the terminal device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 typically controls the overall operation of the terminal device 1300, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1302 may include one or more processors 1320 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 1302 may include one or more modules to facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation of the terminal device 1300. Examples of such data include instructions for any application or method operated on the terminal device 1300, contact data, phonebook data, messages, pictures, videos, and so on. The memory 1304 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 1306 provides power to the various components of the terminal device 1300. The power component 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device 1300.
The multimedia component 1308 includes a touch display screen that provides an output interface between the terminal device 1300 and the user. In some embodiments, the touch display screen may include a liquid crystal display (LCD) and a touch panel (TP). The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the terminal device 1300 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front and rear cameras may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC) that is configured to receive external audio signals when the terminal device 1300 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker configured to output audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1314 includes one or more sensors configured to provide status assessments of various aspects for the terminal device 1300. For example, the sensor component 1314 may detect the open/closed state of the terminal device 1300 and the relative positioning of components, such as the display and keypad of the terminal device 1300; the sensor component 1314 may also detect a change in position of the terminal device 1300 or a component of the terminal device 1300, the presence or absence of user contact with the terminal device 1300, the orientation or acceleration/deceleration of the terminal device 1300, and a change in temperature of the terminal device 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, configured for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal device 1300 and other devices. The terminal device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 1300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, configured to perform the above playing control method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1304 including instructions, which can be executed by the processor 1320 of the terminal device 1300 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the terminal device 1300, the terminal device 1300 is enabled to perform a playing control method.
Those skilled in the art, after considering the specification and practicing the invention disclosed herein, will readily conceive of other embodiments of the present disclosure. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

1. A playing control method, characterized in that the method comprises:
detecting a picture frame of a video stream to be played, and judging whether target content pre-specified by a user exists;
if it is determined that the target content exists, determining a first position area on the picture frame corresponding to the target content;
determining, according to the first position area, a second position area, on a screen used for displaying the picture frame, in which the target content is correspondingly displayed;
generating a user interface (UI) layer, and drawing, on the part of the UI layer that matches the second position area, preset update content corresponding to the target content;
when the screen displays the picture frame, covering the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
2. The method according to claim 1, characterized in that determining the first position area on the picture frame corresponding to the target content comprises:
detecting, based on an image boundary tracking algorithm, whether the smoothness of the area boundary corresponding to the target content reaches a preset threshold;
if it is determined that the smoothness reaches the threshold, taking the area boundary corresponding to the target content as the first position area;
if it is determined that the smoothness does not reach the threshold, determining a smooth area corresponding to the area boundary, and taking the smooth area as the first position area.
3. The method according to claim 1, characterized in that determining, according to the first position area, the second position area, on the screen used for displaying the picture frame, in which the target content is correspondingly displayed comprises:
scaling a plurality of first coordinates on the first position area according to the size ratio between the picture frame and the screen, to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
determining the second position area on the screen according to the plurality of second coordinates.
4. The method according to claim 1, characterized in that
generating the user interface UI layer comprises:
generating a UI layer that matches the boundary of the second position area;
drawing, on the part of the UI layer that matches the second position area, the preset update content corresponding to the target content comprises:
drawing the update content on the whole UI layer; and
covering the picture frame with the UI layer comprises:
covering, with the matched UI layer, the second position area used for displaying the target content of the picture frame.
5. The method according to claim 1, characterized in that
generating the user interface UI layer comprises:
generating a UI layer that matches the boundary of the screen;
drawing, on the part of the UI layer that matches the second position area, the preset update content corresponding to the target content comprises:
drawing the update content in a third position area of the UI layer corresponding to the second position area, and performing transparency processing on the part outside the third position area; and
covering the picture frame with the UI layer comprises:
covering the whole UI layer on the picture frame.
6. The method according to any one of claims 1 to 5, characterized in that detecting the picture frame of the video stream to be played and judging whether the target content pre-specified by the user exists comprises:
obtaining characteristic information in the picture frame;
identifying, according to a feature database, whether the characteristic information corresponds to the target content, wherein the feature database comprises sample characteristic information corresponding to the target content.
7. The method according to claim 6, characterized in that the target content pre-specified by the user comprises:
at least one or more of a character face, clothing, a color, text, and a pattern.
8. The method according to claim 7, characterized in that before obtaining the characteristic information in the picture frame, the method further comprises:
receiving picture frames of a plurality of video streams;
obtaining sample characteristic information corresponding to the sample content preset by the user in each picture frame;
storing the correspondence between the sample characteristic information and the sample content in the feature database.
9. The method according to claim 7, characterized in that if the target content is a first pattern, obtaining the characteristic information in the picture frame comprises:
determining a pattern area on the picture frame according to a boundary contour algorithm;
extracting pattern features from the pattern area;
and identifying, according to the feature database, whether the characteristic information corresponds to the target content comprises:
matching the pattern features with the sample pattern features corresponding to the first pattern in the feature database;
if the matching succeeds, determining that the first pattern exists in the pattern area;
if the matching fails, determining that the first pattern does not exist in the pattern area.
10. The method according to claim 7, characterized in that if the target content is a first character face, obtaining the characteristic information in the picture frame comprises:
determining a face area on the picture frame according to facial features obtained through pre-training in a classifier;
extracting facial features from the face area;
and identifying, according to the feature database, whether the characteristic information corresponds to the target content comprises:
matching the facial features with the sample facial features corresponding to the first character face in the feature database;
if the matching succeeds, determining that the first character face exists in the face area;
if the matching fails, determining that the first character face does not exist in the face area.
11. The method according to claim 10, characterized in that the character face features comprise:
Haar features, FisherFace features, or LBPH features.
12. A terminal device, characterized in that the device comprises:
a detection module, configured to detect a picture frame of a video stream to be played and judge whether target content pre-specified by a user exists;
a first positioning module, configured to, when it is determined that the target content exists, determine a first position area on the picture frame corresponding to the target content;
a second positioning module, configured to determine, according to the first position area, a second position area, on a screen used for displaying the picture frame, in which the target content is correspondingly displayed;
a processing module, configured to generate a user interface (UI) layer and draw, on the part of the UI layer that matches the second position area, preset update content corresponding to the target content;
a display module, configured to, when the screen displays the picture frame, cover the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
13. The device according to claim 12, characterized in that the first positioning module comprises:
a judging unit, configured to judge, based on an image boundary tracking algorithm, whether the smoothness of the area boundary corresponding to the target content reaches a preset threshold;
a first determining unit, configured to, when it is determined that the smoothness reaches the threshold, take the area boundary corresponding to the target content as the first position area;
a second determining unit, configured to, when it is determined that the smoothness does not reach the threshold, determine a smooth area corresponding to the area boundary and take the smooth area as the first position area.
14. The device according to claim 12, characterized in that the second positioning module comprises:
a first acquiring unit, configured to scale a plurality of first coordinates on the first position area according to the size ratio between the picture frame and the screen, to obtain a plurality of second coordinates corresponding to the plurality of first coordinates;
a third determining unit, configured to determine the second position area on the screen according to the plurality of second coordinates.
15. The device according to claim 12, characterized in that the processing module comprises:
a first generating unit, configured to generate a UI layer that matches the boundary of the second position area;
a first drawing unit, configured to draw the update content on the whole UI layer;
and the display module is configured to cover, with the matched UI layer, the second position area used for displaying the target content of the picture frame.
16. The device according to claim 12, characterized in that the processing module comprises:
a second generating unit, configured to generate a UI layer that matches the boundary of the screen;
a second drawing unit, configured to draw the update content in a third position area of the UI layer corresponding to the second position area, and to perform transparency processing on the part outside the third position area;
and the display module is configured to cover the whole UI layer on the picture frame.
17. The device according to any one of claims 12 to 16, characterized in that the detection module comprises:
a second acquiring unit, configured to obtain characteristic information in the picture frame;
a recognition unit, configured to identify, according to a feature database, whether the characteristic information corresponds to the target content, wherein the feature database comprises sample characteristic information corresponding to the target content.
18. The device according to claim 17, characterized in that the target content pre-specified by the user comprises:
at least one or more of a character face, clothing, a color, text, and a pattern.
19. The device according to claim 18, characterized in that, before the characteristic information in the picture frame is obtained, the device further comprises:
a receiving module, configured to receive picture frames of a plurality of video streams;
an acquiring module, configured to obtain sample characteristic information corresponding to the sample content preset by the user in each picture frame;
a storage module, configured to store the correspondence between the sample characteristic information and the sample content in the feature database.
20. The device according to claim 18, characterized in that the second acquiring unit comprises:
a first processing subunit, configured to, if the target content is a first pattern, determine a pattern area on the picture frame according to a boundary contour algorithm;
a first extracting subunit, configured to extract pattern features from the pattern area;
and the recognition unit is configured to match the pattern features with the sample pattern features corresponding to the first pattern in the feature database;
if the matching succeeds, it is determined that the first pattern exists in the pattern area;
if the matching fails, it is determined that the first pattern does not exist in the pattern area.
21. The device according to claim 18, characterized in that the second acquiring unit comprises:
a second processing subunit, configured to, if the target content is a first character face, determine a face area on the picture frame according to facial features obtained through pre-training in a classifier;
a second extracting subunit, configured to extract facial features from the face area;
and the recognition unit is configured to match the facial features with the sample facial features corresponding to the first character face in the feature database;
if the matching succeeds, it is determined that the first character face exists in the face area;
if the matching fails, it is determined that the first character face does not exist in the face area.
22. The device according to claim 21, characterized in that the character face features comprise:
Haar features, FisherFace features, or LBPH features.
23. A terminal device, characterized in that the device comprises:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a picture frame of a video stream to be played, and judge whether target content pre-specified by a user exists;
if it is determined that the target content exists, determine a first position area on the picture frame corresponding to the target content;
determine, according to the first position area, a second position area, on a screen used for displaying the picture frame, in which the target content is correspondingly displayed;
generate a user interface (UI) layer, and draw, on the part of the UI layer that matches the second position area, preset update content corresponding to the target content;
when the screen displays the picture frame, cover the picture frame with the UI layer, so that the update content covers the target content and is displayed to the user.
CN201510210000.9A 2015-04-29 2015-04-29 Control method for playing back and terminal device Active CN104902318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510210000.9A CN104902318B (en) 2015-04-29 2015-04-29 Control method for playing back and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510210000.9A CN104902318B (en) 2015-04-29 2015-04-29 Control method for playing back and terminal device

Publications (2)

Publication Number Publication Date
CN104902318A true CN104902318A (en) 2015-09-09
CN104902318B CN104902318B (en) 2018-09-18

Family

ID=54034669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510210000.9A Active CN104902318B (en) 2015-04-29 2015-04-29 Control method for playing back and terminal device

Country Status (1)

Country Link
CN (1) CN104902318B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1432708A (en) * 2002-01-18 2003-07-30 胡新宜 Two-storey multimedia network terminal service stall
CN102893625A (en) * 2010-05-17 2013-01-23 亚马逊技术股份有限公司 Selective content presentation engine
CN102622595A (en) * 2011-01-28 2012-08-01 北京千橡网景科技发展有限公司 Method and equipment used for positioning picture contained in image
CN102982348A (en) * 2012-12-25 2013-03-20 百灵时代传媒集团有限公司 Identification method of advertisement image
CN103442295A (en) * 2013-08-23 2013-12-11 天脉聚源(北京)传媒科技有限公司 Method and device for playing videos in image
CN104038807A (en) * 2014-06-13 2014-09-10 Tcl集团股份有限公司 Layer mixing method and device based on open graphics library (OpenGL)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106604105A (en) * 2016-12-26 2017-04-26 深圳Tcl新技术有限公司 Method and device for calculating image size of HBBTV application
CN106604105B (en) * 2016-12-26 2019-10-29 深圳Tcl新技术有限公司 Calculate the method and device of HBBTV application image size
CN106713968A (en) * 2016-12-27 2017-05-24 北京奇虎科技有限公司 Live broadcast data display method and device
CN106713968B (en) * 2016-12-27 2020-04-24 北京奇虎科技有限公司 Live data display method and device
CN106899892A (en) * 2017-02-20 2017-06-27 维沃移动通信有限公司 A kind of method and mobile terminal for carrying out video playback in a browser
CN110661987A (en) * 2018-06-29 2020-01-07 南京芝兰人工智能技术研究院有限公司 Method and system for replacing video content

Also Published As

Publication number Publication date
CN104902318B (en) 2018-09-18

Similar Documents

Publication Publication Date Title
CN110662083B (en) Data processing method and device, electronic equipment and storage medium
CN108037863B (en) Method and device for displaying image
CN104133956B (en) Handle the method and device of picture
CN105244048A (en) Audio play control method and apparatus
CN104486451B (en) Application program recommends method and device
CN105139415A (en) Foreground and background segmentation method and apparatus of image, and terminal
CN104281432A (en) Method and device for regulating sound effect
CN106331761A (en) Live broadcast list display method and apparatuses
CN104850828A (en) Person identification method and person identification device
CN107333170A (en) The control method and device of intelligent lamp
CN105554581A (en) Method and device for bullet screen display
KR20150144547A (en) Video display device and operating method thereof
CN104462418A (en) Page displaying method and device and electronic device
CN104853223B (en) The inserting method and terminal device of video flowing
CN104020924A (en) Label establishing method and device and terminal
CN104517271A (en) Image processing method and device
CN104902318A (en) Playing control method and terminal device
CN108108671A (en) Description of product information acquisition method and device
CN107330391A (en) Product information reminding method and device
CN105335714A (en) Photograph processing method, device and apparatus
CN104883603B (en) Control method for playing back, system and terminal device
CN108040280A (en) Content item display methods and device, storage medium
CN103885678A (en) Method and device for displaying object
CN107729530A (en) Map Switch method and device
CN105657325A (en) Method, apparatus and system for video communication

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant