CN104883603B - Playback control method, system and terminal device - Google Patents

Playback control method, system and terminal device

Info

Publication number
CN104883603B
Authority
CN
China
Prior art keywords
image frame
target content
content
terminal device
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510210500.2A
Other languages
Chinese (zh)
Other versions
CN104883603A (en)
Inventor
刘洁
梁鑫
王兴超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201510210500.2A
Publication of CN104883603A
Application granted
Publication of CN104883603B
Legal status: Active

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a playback control method, system and terminal device. According to the identification information of a second video stream to be played and the timestamp of a second image frame, a first terminal device obtains the first video stream corresponding to the identification information and the first image frame corresponding to the timestamp, and obtains the first position region, marked in advance on the first image frame, that corresponds to target content pre-specified by the user. The first position region is then sent to a second terminal device, which generates a user interface (UI) layer from the first position region and the update content corresponding to the target content, so that when the screen displays the second image frame the UI layer is overlaid on it and the update content covers the target content shown to the user. Personalized video content that meets the user's needs is thus presented to the user in real time without tampering with the video stream data, and the processing load on the playback terminal is reduced.

Description

Playback control method, system and terminal device
Technical field
The present disclosure relates to the field of video display technology, and in particular to a playback control method, system and terminal device.
Background art
Intelligent terminals are becoming increasingly popular and have become the main way users watch multimedia video. Taking a mobile phone as an example, a user can download video content of interest from the network side and watch it, or watch video content stored locally.
In the related art, video playback proceeds according to the image frames of the video stream, and the user can only control the playback mode, for example the playback progress or whether the video is shown full screen. The user cannot control the playback content, that is, cannot play video content of interest in a personalized way.
Summary of the invention
The embodiments of the present disclosure provide a playback control method, system and terminal device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a playback control method is provided, the method including:
sending a marking information acquisition request to a first terminal device storing a first video stream, the acquisition request including the identification information of a second video stream to be played on a second terminal device and the timestamp of a second image frame;
receiving a response message returned by the first terminal device that contains a first position region, where the first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first image frame corresponding to the timestamp, and acquiring, on the first image frame, the region corresponding to target content pre-specified by the user, the first image frame being identical to the second image frame;
determining, according to the first position region, a second position region on the screen used to display the second image frame, the second position region corresponding to where the target content is displayed;
generating a user interface (UI) layer, where preset update content corresponding to the target content is drawn on the part of the UI layer that matches the second position region;
when the screen displays the second image frame, covering the second image frame with the UI layer, so that the update content covers the target content shown to the user.
According to a second aspect of the embodiments of the present disclosure, a playback control method is provided, the method including:
detecting a first image frame in a first video stream and judging whether target content pre-specified by the user is present;
if the target content is determined to be present, determining the first position region on the first image frame corresponding to the target content and marking it on the first image frame;
when a marking information acquisition request sent by a second terminal device is received, the acquisition request including the timestamp of a second image frame in a second video stream to be played, acquiring from the first video stream the first image frame corresponding to the timestamp, the first image frame being identical to the second image frame;
if the first position region corresponding to the target content pre-specified by the user can be obtained from the first image frame, returning to the second terminal device a response message containing the first position region, so that the second terminal device generates a user interface (UI) layer from the first position region and the preset update content corresponding to the target content and, when the screen displays the second image frame, covers the second image frame with the UI layer so that the update content covers the target content shown to the user.
According to a third aspect of the embodiments of the present disclosure, a second terminal device is provided, the device including:
a sending module configured to send a marking information acquisition request to a first terminal device storing a first video stream, the acquisition request including the timestamp of a second image frame in a second video stream to be played on the second terminal device;
a first receiving module configured to receive a response message returned by the first terminal device that contains a first position region, where the first position region is the region corresponding to target content pre-specified by the user, acquired by the first terminal device on the first image frame that it obtained from the first video stream as corresponding to the timestamp, the first image frame being identical to the second image frame;
a first positioning module configured to determine, according to the first position region, a second position region on the screen used to display the second image frame, the second position region corresponding to where the target content is displayed;
a first processing module configured to generate a user interface (UI) layer, where preset update content corresponding to the target content is drawn on the part of the UI layer that matches the second position region;
a display module configured to, when the screen displays the second image frame, cover the second image frame with the UI layer so that the update content covers the target content shown to the user.
According to a fourth aspect of the embodiments of the present disclosure, a first terminal device is provided, the device including:
a detection module configured to detect a first image frame in a first video stream and judge whether target content pre-specified by the user is present;
a second positioning module configured to, if the target content is determined to be present, determine the first position region on the first image frame corresponding to the target content and mark it on the first image frame;
a first acquisition module configured to, when a marking information acquisition request sent by a second terminal device is received, the acquisition request including the timestamp of a second image frame in a second video stream to be played, acquire from the first video stream the first image frame corresponding to the timestamp, the first image frame being identical to the second image frame;
a second processing module configured to, if the first position region corresponding to the target content pre-specified by the user can be obtained from the first image frame, return to the second terminal device a response message containing the first position region, so that the second terminal device generates a user interface (UI) layer from the first position region and the preset update content corresponding to the target content and, when the screen displays the second image frame, covers the second image frame with the UI layer so that the update content covers the target content shown to the user.
According to a fifth aspect of the embodiments of the present disclosure, a playback control system is provided, the system including the second terminal device described above and a first terminal device.
According to a sixth aspect of the embodiments of the present disclosure, a second terminal device is provided, the device including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
send a marking information acquisition request to a first terminal device storing a first video stream, the acquisition request including the identification information of a second video stream to be played on the second terminal device and the timestamp of a second image frame;
receive a response message returned by the first terminal device that contains a first position region, where the first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first image frame corresponding to the timestamp, and acquiring, on the first image frame, the region corresponding to target content pre-specified by the user, the first image frame being identical to the second image frame;
determine, according to the first position region, a second position region on the screen used to display the second image frame, the second position region corresponding to where the target content is displayed;
generate a user interface (UI) layer, where preset update content corresponding to the target content is drawn on the part of the UI layer that matches the second position region; and
when the screen displays the second image frame, cover the second image frame with the UI layer so that the update content covers the target content shown to the user.
According to a seventh aspect of the embodiments of the present disclosure, a first terminal device is provided, the device including:
a processor; and a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a first image frame in a first video stream and judge whether target content pre-specified by the user is present;
if the target content is determined to be present, determine the first position region on the first image frame corresponding to the target content and mark it on the first image frame;
when a marking information acquisition request sent by a second terminal device is received, the acquisition request including the timestamp of a second image frame in a second video stream to be played, acquire from the first video stream the first image frame corresponding to the timestamp, the first image frame being identical to the second image frame; and
if the first position region corresponding to the target content pre-specified by the user can be obtained from the first image frame, return to the second terminal device a response message containing the first position region, so that the second terminal device generates a user interface (UI) layer from the first position region and the preset update content corresponding to the target content and, when the screen displays the second image frame, covers the second image frame with the UI layer so that the update content covers the target content shown to the user.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects:
According to the identification information of the second video stream to be played and the timestamp of the second image frame, the first terminal device obtains the first video stream corresponding to the identification information and the first image frame corresponding to the timestamp, and obtains the first position region, marked by the user in advance on the first image frame, that corresponds to the target content pre-specified by the user. The first position region is then sent to the second terminal device, which generates a UI layer from the first position region and the update content corresponding to the target content, so that when the screen displays the second image frame the UI layer is overlaid on it and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is thus presented to the user in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized video playback and reduces the processing load on the playback terminal.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, show embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of a playback control method according to an exemplary embodiment;
Fig. 2A is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 2B shows the screen of a second terminal device displaying a second image frame that contains the target content;
Fig. 2C shows the screen of a second terminal device displaying a second image frame in which the update content covers the target content;
Fig. 3A is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 3B shows the screen of a terminal device displaying a second image frame that contains the target content;
Fig. 3C shows the screen of a terminal device displaying a second image frame in which the update content covers the target content;
Fig. 4 is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 5 is a flow chart of a playback control method according to another exemplary embodiment;
Fig. 6 is a block diagram of a second terminal device according to an exemplary embodiment;
Fig. 7 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 8 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 9 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 10 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 11 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 12 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 13 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 14 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 15 is a block diagram of a playback control system according to an exemplary embodiment;
Fig. 16 is a block diagram of a terminal device according to an exemplary embodiment.
The above drawings show specific embodiments of the present disclosure, which are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the concepts of the disclosure in any way, but to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flow chart of a playback control method according to an exemplary embodiment. In this embodiment the playback control method is described as being configured in a second terminal device that includes a display screen. The playback control method may include the following steps:
In step 101, a marking information acquisition request is sent to the first terminal device storing the first video stream, the acquisition request including the identification information of the second video stream to be played on the second terminal device and the timestamp of the second image frame in the second video stream.
In this embodiment the first video stream is stored on the first terminal device, and each image frame in the first video stream is called a first image frame; the second video stream is stored on the second terminal device, and each image frame in the second video stream is called a second image frame. A first video stream and a second video stream with the same identification information are video streams with identical content. The first terminal device receives in advance the target content that the user has specified as being of interest in the first video stream, and the second terminal device receives in advance the update content, provided by the user, that corresponds to the target content.
First, the second terminal device receives the video stream that the user specifies for playback; this may be a video stream that the second terminal device receives from another network-side device, or a video stream stored locally on the second terminal device in advance.
Then, according to the user's personalized demand, the second terminal device, while playing the second video stream, sends a marking information acquisition request to the first terminal device storing the first video stream, the acquisition request including the identification information of the second video stream to be played and the timestamp of the second image frame in the second video stream.
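To make the exchange in steps 101 and 102 concrete, the sketch below models the acquisition request and its response as plain Python data classes. This is an illustration only: the patent does not prescribe a wire format, and the field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MarkingInfoRequest:
    """Marking information acquisition request sent by the second terminal device."""
    stream_id: str           # identification information of the second video stream
    frame_timestamp: float   # timestamp of the second image frame to be played

@dataclass
class MarkingInfoResponse:
    """Response returned by the first terminal device."""
    # First position region, here assumed to be key coordinates (x, y, width, height);
    # None means no target content was marked on the matching first image frame.
    first_position_region: Optional[Tuple[int, int, int, int]]

# e.g. the second terminal device asking about the frame at 12.48 s of "stream-001":
request = MarkingInfoRequest(stream_id="stream-001", frame_timestamp=12.48)
```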
In step 102, a response message returned by the first terminal device and containing the first position region is received, where the first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first image frame corresponding to the timestamp, and acquiring, on the first image frame, the region corresponding to the target content pre-specified by the user, the first image frame being identical to the second image frame.
The first terminal device parses the marking information acquisition request sent by the second terminal device and obtains the identification information of the second video stream to be played and the timestamp of the second image frame. The first terminal device then obtains locally the first video stream corresponding to the identification information and acquires from that first video stream the first image frame corresponding to the timestamp. It should be noted that this first image frame is identical to the second image frame.
The first terminal device then queries whether the first image frame carries a first position region corresponding to the target content pre-specified by the user. The first position region may be identified by key coordinate information, or identified by the region display mode of a graphics layer. The target content pre-specified by the user includes at least one of a character's face, clothing, a colour, text or a pattern in the video stream. If the query finds that the first image frame does carry a first position region corresponding to the target content pre-specified by the user, a response message containing the first position region is sent to the second terminal device, and the second terminal device parses the response message to obtain the first position region.
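A minimal sketch of how the first terminal device could service such a request, under the assumption that the regions marked in advance are kept in an in-memory mapping keyed by stream identifier and timestamp; the data structure and names are illustrative, not part of the patent.

```python
from typing import Dict, Optional, Tuple

# (identification information, timestamp) -> first position region marked in advance
# on the corresponding first image frame (x, y, width, height); example values only.
marked_regions: Dict[Tuple[str, float], Tuple[int, int, int, int]] = {
    ("stream-001", 12.48): (120, 80, 64, 64),
}

def handle_marking_info_request(stream_id: str,
                                timestamp: float) -> Optional[Tuple[int, int, int, int]]:
    """Look up the first image frame matching the identification information and
    timestamp, and return its first position region, or None if no target
    content was marked on that frame."""
    return marked_regions.get((stream_id, timestamp))
```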
In step 103, a second position region is determined according to the first position region, the second position region being the part of the screen used to display the second image frame that corresponds to where the target content is displayed.
According to the first position region on the first image frame corresponding to the target content specified by the user, the terminal device determines, on the screen used to display the second image frame, the second position region corresponding to where the target content is displayed. There are many ways to determine the second position region on the screen from the first position region; two are illustrated below.
Mode one:
First determine the position of the first position region on the second image frame, then scale the second image frame, scaling the first position region synchronously with it.
When the second image frame has been scaled to the screen size, record the scaled first position region information; this scaled region information can serve as the second position region on the screen used to display the image frame, i.e. the region corresponding to where the target content is displayed.
Mode two:
First determine the position of the first position region on the second image frame, then obtain a plurality of first coordinates on the first position region. For example, if the first position region is a square, the first coordinates corresponding to it can be the coordinates of its four corners; if the first position region is a circle, the first coordinates corresponding to it can be the coordinates of the intersections of at least two diameters with the circle's boundary.
According to the size ratio between the second image frame and the screen, adjust the first coordinates on the first position region proportionally to obtain a plurality of second coordinates corresponding to them.
The second position region on the screen used to display the second image frame, corresponding to where the target content is displayed, can then be determined from these second coordinates.
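Mode two amounts to rescaling the key coordinates of the first position region from image-frame pixels to screen pixels in proportion to the size ratio. A minimal sketch under the assumption that the region is given by a list of corner coordinates:

```python
def scale_region_to_screen(first_coords, frame_size, screen_size):
    """Map first coordinate information (points of the first position region, in
    image-frame pixels) to second coordinate information in screen pixels,
    using the size ratio between the second image frame and the screen."""
    frame_w, frame_h = frame_size
    screen_w, screen_h = screen_size
    sx, sy = screen_w / frame_w, screen_h / frame_h
    return [(round(x * sx), round(y * sy)) for x, y in first_coords]

# e.g. the four corners of a square first position region on a 1280x720 frame,
# mapped onto a 1920x1080 screen:
second_coords = scale_region_to_screen(
    [(120, 80), (184, 80), (184, 144), (120, 144)], (1280, 720), (1920, 1080))
```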
In step 104, a user interface (UI) layer is generated, and the preset update content corresponding to the target content is drawn on the part of the UI layer that matches the second position region.
The second terminal device uses a UI control to generate a new, blank UI layer.
It then parses the file storing the update content corresponding to the target content to obtain the UI element of the update content, and adds the UI element to the blank UI layer on the part that matches the second position region used to display the target content on the screen.
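One way to picture this is the sketch below, which uses the Pillow imaging library as a stand-in for the platform's UI controls (an assumption; the patent only speaks of UI controls and UI elements): a blank transparent layer is created, the update content is drawn over the part matching the second position region, and the layer can later be composited over the second image frame at display time (step 105, described next).

```python
from PIL import Image  # assumption: Pillow stands in for the platform's UI-layer facilities

def build_ui_layer(screen_size, second_region, update_content):
    """Generate a blank, transparent UI layer the size of the screen and draw the
    update content (an RGBA image) on the part matching the second position region."""
    x, y, w, h = second_region
    ui_layer = Image.new("RGBA", screen_size, (0, 0, 0, 0))   # blank UI layer
    patch = update_content.convert("RGBA").resize((w, h))
    ui_layer.paste(patch, (x, y), patch)                      # keep the patch's transparency
    return ui_layer

def display_frame(second_frame, ui_layer):
    """When the screen displays the second image frame (assumed already scaled to
    the screen size), cover it with the UI layer so the update content covers
    the target content shown to the user."""
    return Image.alpha_composite(second_frame.convert("RGBA"), ui_layer)
```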
In step 105, when the screen displays the second image frame, the UI layer is overlaid on the second image frame so that the update content covers the target content shown to the user.
While the second terminal device plays the video stream, when the screen displays the second image frame, the UI layer on which the update content has been drawn over the part matching the second position region on the screen is overlaid on the second image frame, so that the update content covers the target content specified by the user and personalized video content that meets the user's demand is presented to the user.
In summary, in the playback control method provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of the second image frame, the first terminal device obtains the first video stream corresponding to the identification information and the first image frame corresponding to the timestamp, and obtains the first position region, marked by the user in advance on the first image frame, that corresponds to the target content pre-specified by the user. The first position region is then sent to the second terminal device, which generates a UI layer from the first position region and the update content corresponding to the target content, so that when the screen displays the second image frame the UI layer is overlaid on it and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data, there is no need to modify the original video stream data in advance according to the user or to occupy a large amount of storage space, and the flexibility and efficiency of personalized video playback are improved.
In the above embodiment, the generated UI layer covers the second image frame containing the target content so that the update content covers the target content and a personalized playback effect is presented to the user on the screen. It should be noted that there are many ways to generate the UI layer and to implement the covering; different UI-layer processing techniques can be chosen according to the proportion of the second image frame occupied by the target content, or its arrangement, so as to improve processing efficiency. This is described in detail below with the embodiments shown in Fig. 2 and Fig. 3.
Fig. 2A is a flow chart of a playback control method according to another exemplary embodiment. In this embodiment the playback control method is again described as being configured in a second terminal device that includes a display screen.
This embodiment addresses the application scenario in which the target content specified by the user is a first character's face and the distribution region of that face on the second image frame is unique, and is implemented with a locally processed UI layer. The playback control method may include the following steps:
In step 201, a marking information acquisition request is sent to the first terminal device storing the first video stream, the acquisition request including the identification information of the second video stream to be played on the second terminal device and the timestamp of the second image frame in the second video stream.
In step 202, a response message returned by the first terminal device and containing the first position region is received, where the first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first image frame corresponding to the timestamp, and acquiring, on the first image frame, the region corresponding to the target content pre-specified by the user, the first image frame being identical to the second image frame.
In step 203, a second position region is determined according to the first position region, the second position region being the part of the screen used to display the second image frame that corresponds to where the target content is displayed.
For steps 201-203 in this embodiment, refer to steps 101-103 in the embodiment shown in Fig. 1.
In step 204, a UI layer matching the boundary of the second position region is generated, and the update content is drawn on the whole UI layer.
The second terminal device uses a UI control to generate a new, blank UI layer whose boundary matches the boundary of the second position region. It then parses the file storing the update content corresponding to the target content to obtain the UI element of the update content, and adds the UI element to the whole blank UI layer.
In step 205, when the screen displays the second image frame, the UI layer is overlaid, matching exactly, on the second position region of the image frame used to display the target content, so that the update content covers the target content shown to the user.
While the second terminal device plays the video stream, when the screen displays the second image frame, the UI layer is overlaid exactly on the second position region used to display the target content on the second image frame, so that the update content covers the target content specified by the user and personalized video content that meets the user's demand is presented to the user.
As an example, Fig. 2B shows the screen of the second terminal device displaying a second image frame that contains the target content, and Fig. 2C shows the screen of the second terminal device displaying a second image frame in which the update content covers the target content. Referring to Fig. 2B and Fig. 2C:
Assume the target content specified by the user is the "Doraemon face" on the second image frame and the update content is a "little bear face". In detail: the first position region corresponding to the target content is marked in advance on the first image frame sent by the first terminal device; according to the first position region on the second image frame, i.e. the "Doraemon face", the file storing the "little bear face" is parsed to obtain the UI element, and the UI element is added to a blank UI layer whose boundary matches the boundary of the second position region.
While the second terminal device plays the video stream, when the screen displays the second image frame, the UI layer is overlaid exactly on the region of the second image frame used to display the "Doraemon face", so that the "little bear face" covers the "Doraemon face" and personalized video content that meets the user's demand is presented to the user.
In conclusion control method for playing back provided in this embodiment, is the first personage for the object content that user specifies Face, and the unique application scenarios in distributed areas of first character face on picture frame, using UI layers of Local treatment side Formula is realized, so that when playing original video stream when the screen display picture frame, the UI layers is coincide and is covered for showing Show the second place region of object content, so that more new content coverage goal content is shown to user.Realize broadcasting video flowing When, in the case where video stream data need not be distorted, the personalized video for meeting user's needs can be presented to user in real time Content, improves treatment effeciency, has saved process resource.
Fig. 3A is a flow chart of a playback control method according to another exemplary embodiment. In this embodiment the playback control method is again described as being configured in a second terminal device that includes a display screen.
This embodiment addresses the application scenario in which the target content specified by the user consists of multiple patterns whose distribution regions on the second image frame are scattered, and is implemented with a UI layer processed as a whole. The playback control method may include the following steps:
In step 301, a marking information acquisition request is sent to the first terminal device storing the first video stream, the acquisition request including the identification information of the second video stream to be played on the second terminal device and the timestamp of the second image frame in the second video stream.
In step 302, a response message returned by the first terminal device and containing the first position region is received, where the first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first image frame corresponding to the timestamp, and acquiring, on the first image frame, the region corresponding to the target content pre-specified by the user, the first image frame being identical to the second image frame.
In step 303, a second position region is determined according to the first position region, the second position region being the part of the screen used to display the second image frame that corresponds to where the target content is displayed.
For steps 301-303 in this embodiment, refer to steps 101-103 in the embodiment shown in Fig. 1.
In step 304, a UI layer matching the screen boundary is generated, the update content is drawn on the third position region of the UI layer that matches the second position region, and the part of the UI layer outside the third position region is made transparent.
The second terminal device uses a UI control to generate a new, blank UI layer whose boundary matches the screen boundary. It then parses the file storing the update content corresponding to the target content to obtain the UI element of the update content, adds the UI element to the third position region of the UI layer that matches the second position region on the screen, and makes the part of the UI layer outside the third position region transparent.
In step 305, when the screen displays the second image frame, the UI layer is overlaid on the second image frame so that the update content covers the target content shown to the user.
While the second terminal device plays the video stream, when the screen displays the second image frame, the whole UI layer is overlaid on the second image frame, so that the update content covers the target content specified by the user and personalized video content that meets the user's demand is presented to the user.
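The whole-layer variant of steps 304 and 305 can be sketched in the same way: the UI layer matches the screen boundary, each update content is drawn on its third position region, and everything outside those regions stays fully transparent, so a single overall overlay leaves the rest of the image frame visible. Again a sketch under the Pillow assumption, with illustrative region values and solid-colour placeholder images.

```python
from PIL import Image  # same Pillow assumption as above

def build_full_screen_ui_layer(screen_size, regions_and_content):
    """Generate a screen-sized UI layer; draw each update content on its third
    position region and leave every pixel outside those regions transparent."""
    ui_layer = Image.new("RGBA", screen_size, (0, 0, 0, 0))   # transparent outside the regions
    for (x, y, w, h), content in regions_and_content:
        patch = content.convert("RGBA").resize((w, h))
        ui_layer.paste(patch, (x, y), patch)
    return ui_layer

# Illustrative usage: two scattered patterns, stood in for by placeholder images.
first_update = Image.new("RGBA", (200, 260), (0, 128, 255, 255))
second_update = Image.new("RGBA", (180, 140), (255, 200, 0, 255))
layer = build_full_screen_ui_layer(
    (1920, 1080),
    [((300, 600, 200, 260), first_update), ((900, 120, 180, 140), second_update)])
```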
As an example, Fig. 3B shows the screen of the terminal device displaying a second image frame that contains the target content, and Fig. 3C shows the screen of the terminal device displaying a second image frame in which the update content covers the target content. Referring to Fig. 3B and Fig. 3C:
Assume the target content specified by the user includes a first pattern and a second pattern: the first pattern is "Kangfu's lower body" on the second image frame, with corresponding update content "a mermaid's tail", and the second pattern is "the Doraemon crown", with corresponding update content "the Doraemon crown with an aircraft". In detail: the first position regions corresponding to the target content are marked in advance on the first image frame sent by the first terminal device; according to these first position regions on the second image frame, i.e. "Kangfu's lower body" and "the Doraemon crown", the files storing "a mermaid's tail" and "the Doraemon crown with an aircraft" are parsed to obtain the UI elements, the UI elements are added to the third position regions of the UI layer that match the second position regions on the screen, and the part of the UI layer outside the third position regions is made transparent.
While the second terminal device plays the video stream, when the screen displays the second image frame, the whole UI layer is overlaid on the second image frame, so that the "mermaid's tail" pattern covers the "Kangfu's lower body" pattern and the "Doraemon crown with an aircraft" pattern covers the "Doraemon crown" pattern, and personalized video content that meets the user's demand is presented to the user.
In conclusion control method for playing back provided in this embodiment, is multiple patterns for the object content that user specifies, The application scenarios that distributed areas of multiple patterns on picture frame disperse, are realized using UI layers of disposed of in its entirety mode, from And when playing original video stream when the screen display picture frame, UI layers of entirety are covered on the picture frame, so that renewal Content coverage goal content is shown to user.When realizing broadcasting video flowing, in the case where video stream data need not be distorted, The personalized video content for meeting user's needs can be presented to user in real time, improve treatment effeciency, saved process resource.
Fig. 4 is a flow chart of a playback control method according to another exemplary embodiment. In this embodiment the playback control method is described as being configured in a first terminal device that includes a display screen.
In this embodiment the first video stream is stored on the first terminal device, and each image frame in the first video stream is called a first image frame; the second video stream is stored on the second terminal device, and each image frame in the second video stream is called a second image frame. A first video stream and a second video stream with the same identification information are video streams with identical content. The first terminal device receives in advance the target content that the user has specified as being of interest in the first video stream, and the second terminal device receives in advance the update content, provided by the user, that corresponds to the target content.
In step 401, a first image frame in the first video stream is detected, and it is judged whether target content pre-specified by the user is present.
According to the user's personalized playback demand for the selected video stream, the first terminal device detects a first image frame of the first video stream specified by the user and judges whether the target content pre-specified by the user is present in that first image frame. It should be noted that there are many ways to detect whether the target content is present in the first image frame, for example: comparing the pixels of the target content with the pixels of the first image frame, matching the feature information of the target content against the feature information of the first image frame, or comparing the spectral information of the target content with the spectral information of the first image frame. A suitable detection mode can be selected according to the actual target content, and this embodiment does not limit it; a rough sketch of the first of these is given below.
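The first of these approaches, direct pixel comparison, can be pictured as a sliding-window search, here with NumPy arrays standing in for image data (an assumption; the patent does not fix any particular implementation, and this brute-force form is for illustration only).

```python
import numpy as np

def contains_target(frame: np.ndarray, target: np.ndarray,
                    max_mean_diff: float = 10.0) -> bool:
    """Return True if some window of the first image frame matches the pixels of
    the target content to within max_mean_diff (mean absolute difference)."""
    fh, fw = frame.shape[:2]
    th, tw = target.shape[:2]
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            window = frame[y:y + th, x:x + tw].astype(float)
            if np.abs(window - target.astype(float)).mean() <= max_mean_diff:
                return True
    return False
```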
In step 402, if the target content is determined to be present, the first position region on the first image frame corresponding to the target content is determined and marked on the first image frame.
If the first terminal device determines that the target content specified by the user is present, it determines the first position region on the first image frame corresponding to that target content and marks it on the first image frame. It should be noted that the first position region may be identified by key coordinate information, or identified by the region display mode of a graphics layer.
In step 403, when a marking information acquisition request sent by the second terminal device is received, the acquisition request including the identification information of the second video stream to be played and the timestamp of the second image frame, the first video stream corresponding to the identification information is obtained, and the first image frame corresponding to the timestamp is obtained from the first video stream, the first image frame being identical to the second image frame.
In step 404, if the first position region corresponding to the target content pre-specified by the user can be obtained from the first image frame, a response message containing the first position region is returned to the second terminal device, so that the second terminal device generates a user interface (UI) layer from the first position region and the preset update content corresponding to the target content and, when the screen displays the second image frame, covers the second image frame with the UI layer so that the update content covers the target content shown to the user.
For the implementation of steps 403 and 404 in this embodiment, refer to steps 101-105 in the embodiment shown in Fig. 1, which are not repeated here.
In summary, in the playback control method provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of the second image frame, the first terminal device obtains the first video stream corresponding to the identification information and the first image frame corresponding to the timestamp, and obtains the first position region, marked by the user in advance on the first image frame, that corresponds to the target content pre-specified by the user. The first position region is then sent to the second terminal device, which generates a UI layer from the first position region and the update content corresponding to the target content, so that when the screen displays the second image frame the UI layer is overlaid on it and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data, there is no need to modify the original video stream data in advance according to the user or to occupy a large amount of storage space, the flexibility and efficiency of personalized video playback are improved, and the processing load on the playback terminal is reduced.
Fig. 5 is a flow chart of a playback control method according to another exemplary embodiment. In this embodiment the playback control method is described as being configured in a first terminal device that includes a display screen. In this embodiment the detection of the target content in the first image frame uses a feature-information matching detection mode, and the positioning of the first position region on the first image frame corresponding to the target content uses a positioning mode based on an image boundary tracking algorithm; the implementation process of the playback control method is described in detail. The playback control method may include the following steps:
In step 501, the feature information of a first image frame in the first video stream is obtained.
The first terminal device receives the first video stream that the user specifies for playback, together with the target content specified for that first video stream. Different feature-information acquisition modes are selected according to the target content pre-specified by the user; two are illustrated below.
Mode one: if the target content pre-specified by the user is a first pattern distributed over multiple positions in the background, the feature information of all regions on the first image frame is extracted one by one according to a preset unit window, for example a unit window 30 pixels long and 30 pixels wide. For instance, if the first image frame is a picture 900 pixels long and 900 pixels wide and feature extraction over the image frame is performed with a unit window 30 pixels long and 30 pixels wide, 400 pieces of feature information need to be extracted. This mode has very strong universality and can be used for all types of target content; a sketch is given after mode two below.
Mode two: if the target content pre-specified by the user is a character's face, a face-recognition processing model, such as a neural network model or a classifier comparison model, can be used to first determine the facial region in the first image frame and then extract facial feature information from that facial region. This avoids extracting feature information from all regions of the picture one by one; for target content whose local region is easy to locate, this mode improves processing efficiency.
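A sketch of the unit-window pass of mode one, assuming NumPy arrays and using the per-window mean and standard deviation as placeholder feature information (the patent does not specify the kind of feature extracted). Mode two differs only in that the windows are restricted to the facial region returned by the face-recognition model.

```python
import numpy as np

def extract_window_features(frame: np.ndarray, win: int = 30):
    """Split the first image frame into unit windows of win x win pixels and
    compute one piece of feature information per window (here: the mean and
    standard deviation of the pixel values, as placeholder features)."""
    features = []
    h, w = frame.shape[:2]
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            window = frame[y:y + win, x:x + win]
            features.append((float(window.mean()), float(window.std())))
    return features
```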
In step 502, whether the feature information is the target content pre-specified by the user is identified according to a feature database, where the feature database contains sample feature information corresponding to the target content.
The first terminal device identifies, according to the feature database, whether the feature information obtained from the first image frame is the target content specified by the user. The feature database contains sample feature information corresponding to the target content, so the first terminal device matches the sample feature information corresponding to the target content in the feature database against the feature information obtained from the first image frame one by one. If the match succeeds, the target content pre-specified by the user is present in the first image frame; if the match fails, the target content pre-specified by the user is not present in the first image frame.
It should be noted that the content of the feature database may be sample feature information already cured by the service provider of the video stream. More flexibly, in addition to the previously cured sample feature information, the feature database may also contain sample feature information generated for video streams sent by the user in real time, according to content processing specified by the user.
As an example, if the target content is a first pattern: the pattern region of the first pattern on the image frame is determined according to a boundary contour algorithm; pattern features are extracted from the pattern region; the pattern features are matched against the sample pattern features corresponding to the first pattern in the feature database; if the match succeeds, the first pattern is determined to be present in the pattern region; if the match fails, the first pattern is determined not to be present in the pattern region.
As another example, if the target content is a first character's face: the facial region on the image frame is determined according to a facial feature range obtained by training in advance; facial features are extracted from the facial region; the facial features are matched against the sample facial features corresponding to the first character's face in the feature database; if the match succeeds, the first character's face is determined to be present in the facial region; if the match fails, the first character's face is determined not to be present in the facial region.
In short, by first locating a region and then extracting features from that region, whether the target content is present can be determined quickly, which improves processing efficiency. A minimal matching sketch follows.
In step 503, if the target content is determined to be present, the smoothness of the region boundary corresponding to the target content is obtained based on an image boundary tracking algorithm.
By detecting the first image frame, if the first terminal device determines that the target content pre-specified by the user is present in the image frame, it obtains the smoothness of the region boundary corresponding to that target content through an image boundary tracking algorithm. Image boundary tracking algorithms include binary-based image boundary tracking algorithms, wavelet-based image boundary tracking algorithms and the like, and can be chosen according to actual needs; the chosen algorithm yields the smoothness of the region boundary corresponding to the target content.
In step 504, it is judged whether the smoothness reaches a preset threshold; if the smoothness is determined to reach the preset threshold, step 505 is performed; if the smoothness is determined not to reach the preset threshold, step 506 is performed.
When judging whether the smoothness of the region boundary corresponding to the target content reaches the preset threshold, it should be noted that different image boundary tracking algorithms have different preset thresholds; for example, the threshold corresponding to the binary-based image boundary tracking algorithm is A, and the threshold corresponding to the wavelet-based image boundary tracking algorithm is B. The obtained smoothness is therefore compared with the threshold corresponding to the algorithm used: if the smoothness is determined to reach the preset threshold, step 505 is performed; if the smoothness is determined not to reach the preset threshold, step 506 is performed.
In step 505, if the smoothness is determined to reach the threshold, the region boundary corresponding to the target content is taken as the first position region and marked on the first image frame.
When the smoothness of the region boundary corresponding to the target content is determined to reach the preset threshold, the region boundary is easy to segment, so the region boundary corresponding to the target content is taken directly as the first position region and marked on the first image frame.
In step 506, if the smoothness is determined not to reach the threshold, a smooth region corresponding to the region boundary is determined, taken as the first position region, and marked on the first image frame.
When the smoothness of the region boundary corresponding to the target content is determined not to reach the preset threshold, the region boundary is not easy to segment; a smooth region corresponding to the region boundary can be determined according to a preset compensation parameter, and that smooth region is then taken as the first position region and marked on the first image frame. The sketch below summarizes this decision.
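Steps 503 to 506 boil down to a single threshold decision on the boundary returned by the image boundary tracking algorithm. The sketch below treats the smoothness measure, the threshold and the compensation parameter as opaque inputs, and uses a rectangular fallback purely as one illustration of a smoother region; the patent leaves both the concrete algorithm and the shape of the smooth region open.

```python
def choose_first_position_region(boundary, smoothness: float, threshold: float,
                                 compensation: int = 5):
    """If the region boundary around the target content is smooth enough, use it
    directly as the first position region (step 505); otherwise expand it by the
    compensation parameter into a smoother, rectangular region (step 506)."""
    if smoothness >= threshold:
        return boundary
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    return [(min(xs) - compensation, min(ys) - compensation),
            (max(xs) + compensation, min(ys) - compensation),
            (max(xs) + compensation, max(ys) + compensation),
            (min(xs) - compensation, max(ys) + compensation)]
```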
In step 507, when a marking information acquisition request sent by the second terminal device is received, the acquisition request including the identification information of the second video stream to be played and the timestamp of the second image frame, the first video stream corresponding to the identification information is obtained, and the first image frame corresponding to the timestamp is obtained from the first video stream, the first image frame being identical to the second image frame.
In step 508, if the first position region corresponding to the target content pre-specified by the user can be obtained from the first image frame, a response message containing the first position region is returned to the second terminal device, so that the second terminal device generates a user interface (UI) layer from the first position region and the preset update content corresponding to the target content and, when the screen displays the second image frame, covers the second image frame with the UI layer so that the update content covers the target content shown to the user.
For the implementation of steps 507 and 508 in this embodiment, refer to steps 101-105 in the embodiment shown in Fig. 1, which are not repeated here.
In summary, in the playback control method provided by this embodiment, the first terminal device uses a feature-information matching detection mode for detecting the target content in the first image frame, and a positioning mode based on an image boundary tracking algorithm for positioning the first position region on the first image frame corresponding to the target content, so that the first position region can be sent to the second terminal device as needed and the second terminal device can generate a UI layer from the first position region and the update content corresponding to the target content, covering the target content with the update content through the UI layer. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized video playback. Because the first position region is located on the first terminal device and the second terminal device only queries it in real time and directly generates the UI layer, processing efficiency is improved while detection is centralized, saving detection processing resources.
It should be added that, before step 501, the method further includes the following; a sketch is given after the list:
receiving the image frames of multiple video streams;
obtaining sample feature information corresponding to sample content pre-set by the user in each image frame;
storing the correspondence between the sample feature information and the sample content in the feature database.
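A sketch of this preparatory step, assuming a plain dictionary as the feature database and any feature-extraction function of the kind sketched earlier; names and structure are illustrative only.

```python
from typing import Callable, Dict, List, Sequence, Tuple

# Feature database: sample content label -> stored sample feature information.
feature_database: Dict[str, List[Tuple[float, float]]] = {}

def add_samples(frames: Sequence, sample_content: str, extract: Callable) -> None:
    """Extract sample feature information from the image frames of the received
    video streams and store its correspondence with the user's pre-set sample
    content in the feature database."""
    for frame in frames:
        feature_database.setdefault(sample_content, []).extend(extract(frame))
```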
In conclusion control method for playing back provided in this embodiment, can dynamically update property data base, during with using Between accumulation, the personalized content that plays provided to the user is more diversified.
The following are apparatus embodiments of the present disclosure, which can be configured to perform the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments of the present disclosure, refer to the method embodiments of the present disclosure.
Fig. 6 is a block diagram of a second terminal device according to an exemplary embodiment. As shown in Fig. 6, the second terminal device includes a sending module 11, a first receiving module 12, a first locating module 13, a first processing module 14 and a display module 15, wherein:
the sending module 11 is configured to send a label information acquisition request to the first terminal device storing the first video stream, the acquisition request including the identification information of the second video stream to be played on the second terminal device and the timestamp of the second picture frame;
the first receiving module 12 is configured to receive a response message returned by the first terminal device and including the first position region, the first position region being the region that the first terminal device obtains on the first picture frame and that corresponds to the target content specified in advance by the user, the first terminal device obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
the first locating module 13 is configured to determine, according to the first position region, the second position region on the screen used to display the second picture frame, the second position region corresponding to where the target content is displayed;
the first processing module 14 is configured to generate a user interface (UI) layer, the part of the UI layer coinciding with the second position region being drawn with the preset update content corresponding to the target content;
the display module 15 is configured to, when the screen displays the second picture frame, cover the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
The function and processing flow of each module in the second terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the second terminal device provided in this embodiment, the first terminal device obtains, according to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp, and obtains the first position region, marked in advance on the first picture frame, that corresponds to the target content specified in advance by the user. The first position region is then sent to the second terminal device, which generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data; there is no need to modify the original video stream data in advance according to the user's needs and to occupy a large amount of storage space to store it, which improves the flexibility and efficiency of personalized playback.
Fig. 7 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 7, based on the embodiment shown in Fig. 6, the first locating module 13 includes an adjustment unit 131 and a determination unit 132, wherein:
the adjustment unit 131 is configured to adjust, in proportion to the dimension ratio between the second picture frame and the screen, a plurality of first coordinate information items of the first position region, to obtain a plurality of second coordinate information items corresponding to the plurality of first coordinate information items;
the determination unit 132 is configured to determine the second position region on the screen according to the plurality of second coordinate information items.
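A minimal sketch of this proportional adjustment is shown below, assuming the coordinate information items are simple integer points and the frame and screen sizes are known; the method name adjust and the worked sizes in the comment are illustrative only.

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

/** Sketch of the adjustment unit: scales first-region coordinates by the frame-to-screen size ratio. */
public class CoordinateAdjuster {

    /**
     * Maps coordinates given on a picture frame of size frameW x frameH onto a screen of size
     * screenW x screenH. For example, with a 1280x720 frame shown on a 1920x1080 screen,
     * the point (100, 50) becomes (150, 75).
     */
    public static List<Point> adjust(List<Point> firstCoordinates,
                                     int frameW, int frameH, int screenW, int screenH) {
        double sx = (double) screenW / frameW;
        double sy = (double) screenH / frameH;
        List<Point> secondCoordinates = new ArrayList<>(firstCoordinates.size());
        for (Point p : firstCoordinates) {
            secondCoordinates.add(new Point((int) Math.round(p.x * sx), (int) Math.round(p.y * sy)));
        }
        return secondCoordinates;
    }
}
```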
The function and processing flow of each module in the second terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
Fig. 8 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 8, based on the embodiment shown in Fig. 6, the first processing module 14 includes a first generation unit 141 and a first drawing unit 142, wherein:
the first generation unit 141 is configured to generate a UI layer coinciding with the boundary of the second position region;
the first drawing unit 142 is configured to draw the update content on the whole UI layer;
the display module 15 is configured to align the UI layer with, and cover, the second position region that displays the target content on the second picture frame.
The function and processing flow of each module in the second terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the second terminal device provided in this embodiment, the UI layer is handled in a local, region-sized manner: when the original video stream is played and the screen displays the picture frame, the UI layer is aligned with and covers the second position region used to display the target content, so that the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs can be presented in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 9 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 9, based on the embodiment shown in Fig. 6, the first processing module 14 includes a second generation unit 143 and a second drawing unit 144, wherein:
the second generation unit 143 is configured to generate a UI layer coinciding with the screen border;
the second drawing unit 144 is configured to draw the update content on the third position region of the UI layer that coincides with the second position region, and to apply transparency processing to the part outside the third position region;
the display module 15 is configured to cover the second picture frame with the whole UI layer.
The function and processing flow of each module in the second terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the second terminal device provided in this embodiment, the UI layer is handled as a whole, screen-sized layer: when the original video stream is played and the screen displays the picture frame, the entire UI layer is laid over the picture frame, so that the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs can be presented in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
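The embodiments of Fig. 8 and Fig. 9 differ only in how large the generated UI layer is and where the update content is drawn on it. The sketch below contrasts the two modes; using java.awt BufferedImage objects as stand-ins for the UI layer is an assumption made only for illustration and is not the rendering pipeline mandated by the disclosure.

```java
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

/** Illustrative comparison of the region-sized (Fig. 8) and full-screen (Fig. 9) UI layer modes. */
public class UiLayerModes {

    /** Fig. 8 style: the UI layer is exactly the second position region and is fully painted. */
    public static BufferedImage regionSizedLayer(Rectangle secondRegion, BufferedImage updateContent) {
        BufferedImage layer = new BufferedImage(secondRegion.width, secondRegion.height,
                                                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = layer.createGraphics();
        g.drawImage(updateContent, 0, 0, secondRegion.width, secondRegion.height, null);
        g.dispose();
        return layer;   // later composited only over the second position region
    }

    /** Fig. 9 style: the UI layer matches the screen; pixels outside the region stay transparent. */
    public static BufferedImage fullScreenLayer(int screenW, int screenH,
                                                Rectangle secondRegion, BufferedImage updateContent) {
        BufferedImage layer = new BufferedImage(screenW, screenH, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = layer.createGraphics();   // an ARGB image starts fully transparent
        g.drawImage(updateContent, secondRegion.x, secondRegion.y,
                    secondRegion.width, secondRegion.height, null);
        g.dispose();
        return layer;   // later composited over the whole second picture frame
    }
}
```

The region-sized mode draws less but requires precise placement at composition time, while the full-screen mode can be composited blindly over the frame at the cost of a larger transparent buffer.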
Fig. 10 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Fig. 10, the first terminal device includes a detection module 21, a second locating module 22, a first acquisition module 23 and a second processing module 24, wherein:
the detection module 21 is configured to detect the first picture frame in the first video stream and judge whether the target content specified in advance by the user exists;
the second locating module 22 is configured to, if it is determined that the target content exists, determine the first position region corresponding to the target content on the first picture frame and mark it on the first picture frame;
the first acquisition module 23 is configured to, when a label information acquisition request sent by the second terminal device is received, the acquisition request including the identification information of the second video stream to be played and the timestamp of the second picture frame, obtain the first video stream corresponding to the identification information and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
the second processing module 24 is configured to, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message including the first position region, so that the second terminal device generates a user interface (UI) layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
The function and processing flow of each module in the first terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the first terminal device provided in this embodiment, the first terminal device obtains, according to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp, and obtains the first position region, marked in advance on the first picture frame, that corresponds to the target content specified in advance by the user. The first position region is then sent to the second terminal device, which generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data; there is no need to modify the original video stream data in advance according to the user's needs and to occupy a large amount of storage space to store it, which improves the flexibility and efficiency of personalized playback and relieves the processing load of the playback terminal.
Fig. 11 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Fig. 11, based on the embodiment shown in Fig. 10, the second locating module 22 includes a judging unit 221, a first determination unit 222 and a second determination unit 223, wherein:
the judging unit 221 is configured to detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
the first determination unit 222 is configured to, if it is determined that the smoothness reaches the threshold, use the region boundary corresponding to the target content as the first position region;
the second determination unit 223 is configured to, if it is determined that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and use the smooth region as the first position region.
Fig. 12 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Fig. 12, based on the embodiment shown in Fig. 10, the detection module 21 includes:
an acquiring unit 211, configured to obtain the feature information in the first picture frame;
a recognition unit 212, configured to identify, according to the feature database, whether the feature information is the target content, wherein the feature database includes sample feature information corresponding to the target content.
Further, the device also includes a second receiving module 25, a second acquisition module 26 and a storage module 27, wherein:
the second receiving module 25 is configured to receive picture frames of multiple video streams;
the second acquisition module 26 is configured to obtain, in each picture frame, sample feature information corresponding to the sample content preset by the user;
the storage module 27 is configured to store the correspondence between the sample feature information and the sample content in the feature database.
The function and processing flow of each module in the first terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the first terminal device provided in this embodiment, the first terminal device detects the target content in the first picture frame by feature-information matching, and locates the first position region corresponding to the target content on the first picture frame with a positioning method based on an image boundary tracking algorithm, so that the first position region can be sent to the second terminal device when needed. The second terminal device then generates a UI layer according to the first position region and the update content corresponding to the target content, and covers the target content with the update content through that UI layer. When the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized playback. Because the first position region is located on the first terminal device and the second terminal device only queries it in real time to generate the UI layer, processing efficiency is improved and detection is centralized, saving detection processing resources.
Fig. 13 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Fig. 13, based on the embodiment shown in Fig. 12, the acquiring unit 211 includes a first processing subunit 2111 and a first extraction subunit 2112, wherein:
the first processing subunit 2111 is configured to, if the target content is a first pattern, determine the pattern region on the first picture frame according to a boundary contour algorithm;
the first extraction subunit 2112 is configured to extract pattern features from the pattern region;
the recognition unit 212 is configured to match the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
if the match succeeds, it is determined that the first pattern exists in the pattern region;
if the match fails, it is determined that the first pattern does not exist in the pattern region.
The function and processing flow of each module in the first terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the first terminal device provided in this embodiment, for the application scenario in which the target content specified by the user is a first pattern, detection is carried out by pattern-feature matching, which improves processing efficiency.
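Purely as an illustration of the pattern branch, the sketch below compares an extracted pattern-feature vector against the stored sample features with cosine similarity; the feature representation and the acceptance threshold are assumptions, and the disclosure does not prescribe a particular matching metric.

```java
import java.util.List;

/** Hypothetical recognition-unit matcher for pattern features (cosine similarity, illustrative threshold). */
public class PatternMatcher {

    /** Cosine similarity of two feature vectors; the vectors are assumed to have the same length. */
    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Returns true when the extracted pattern features match any stored sample of the first pattern. */
    public static boolean containsFirstPattern(float[] patternFeatures,
                                               List<float[]> samplePatternFeatures,
                                               double threshold) {
        for (float[] sample : samplePatternFeatures) {
            if (cosine(patternFeatures, sample) >= threshold) {
                return true;   // match succeeds: the pattern region contains the first pattern
            }
        }
        return false;          // match fails: the first pattern is not present
    }
}
```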
Fig. 14 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Fig. 14, based on the embodiment shown in Fig. 12, the acquiring unit 211 includes a second processing subunit 2113 and a second extraction subunit 2114, wherein:
the second processing subunit 2113 is configured to, if the target content is a first character face, determine the facial region on the picture frame according to a facial feature range obtained by training in advance;
the second extraction subunit 2114 is configured to extract facial features from the facial region;
the recognition unit 212 is configured to match the facial features against the sample facial features corresponding to the first character face in the feature database;
if the match succeeds, it is determined that the first character face exists in the facial region;
if the match fails, it is determined that the first character face does not exist in the facial region.
The function and processing flow of each module in the first terminal device provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the first terminal device provided in this embodiment, for the application scenario in which the target content specified by the user is a first character face, detection is carried out by facial-feature matching, which improves processing efficiency.
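For the face branch, a minimal structural sketch is given below; the FaceDetector, FaceExtractor and FaceComparator interfaces are stand-ins invented for illustration, since the disclosure does not prescribe a particular face detection or feature extraction implementation.

```java
import java.awt.Rectangle;
import java.util.List;
import java.util.Optional;

/** Hypothetical face branch of the detection module; detector, extractor and comparator are stand-ins. */
public class FaceContentDetector {

    interface FaceDetector   { Optional<Rectangle> findFaceRegion(byte[] frame); }
    interface FaceExtractor  { float[] extractFeatures(byte[] frame, Rectangle faceRegion); }
    interface FaceComparator { boolean matches(float[] features, List<float[]> samples); }

    private final FaceDetector detector;
    private final FaceExtractor extractor;
    private final FaceComparator comparator;

    FaceContentDetector(FaceDetector d, FaceExtractor e, FaceComparator c) {
        this.detector = d; this.extractor = e; this.comparator = c;
    }

    /** Returns the facial region if the first character face is present on the frame, empty otherwise. */
    public Optional<Rectangle> detectFirstCharacterFace(byte[] frame, List<float[]> sampleFaceFeatures) {
        Optional<Rectangle> region = detector.findFaceRegion(frame);   // facial-feature-range detection
        if (region.isEmpty()) {
            return Optional.empty();
        }
        float[] features = extractor.extractFeatures(frame, region.get());
        return comparator.matches(features, sampleFaceFeatures) ? region : Optional.empty();
    }
}
```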
Fig. 15 is a block diagram of a playback control system according to an exemplary embodiment. As shown in Fig. 15, the playback control system includes a second terminal device 1 and a first terminal device 2, where the second terminal device 1 and the first terminal device 2 may be the second terminal device and the first terminal device provided in the embodiments described above.
The function and processing flow of each module in the playback control system provided in this embodiment can be found in the method embodiments described above; the implementation principles are similar and the details are not repeated here.
With the playback control system provided in this embodiment, the first terminal device obtains, according to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp, and obtains the first position region, marked in advance on the first picture frame, that corresponds to the target content specified in advance by the user. The first position region is then sent to the second terminal device, which generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame and the update content covers the target content shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized playback and relieves the processing load of the playback terminal.
Fig. 16 is a block diagram of a terminal device according to an exemplary embodiment. For example, the terminal device 1300 may be a mobile phone, a computer, a tablet device, or the like.
Referring to Fig. 16, the terminal device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power supply component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls the overall operation of the terminal device 1300, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 1302 may include one or more processors 1320 to execute instructions to complete all or part of the steps of the methods described above. In addition, the processing component 1302 may include one or more modules to facilitate interaction between the processing component 1302 and the other components; for example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support the operation of the terminal device 1300. Examples of such data include instructions for any application or method operated on the terminal device 1300, contact data, phone book data, messages, pictures, video, and the like. The memory 1304 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power supply component 1306 provides power to the various components of the terminal device 1300. The power supply component 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal device 1300.
The multimedia component 1308 includes a touch display screen providing an output interface between the terminal device 1300 and the user. In some embodiments, the touch display screen may include a liquid crystal display (LCD) and a touch panel (TP). The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the terminal device 1300 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC); when the terminal device 1300 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signal may be further stored in the memory 1304 or sent via the communication component 1316. In some embodiments, the audio component 1310 further includes a loudspeaker configured to output audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 1314 includes one or more sensors configured to provide status assessments of various aspects of the terminal device 1300. For example, the sensor component 1314 may detect the on/off state of the terminal device 1300 and the relative positioning of components, such as the display and the keypad of the terminal device 1300; the sensor component 1314 may also detect a change in position of the terminal device 1300 or of a component of the terminal device 1300, the presence or absence of user contact with the terminal device 1300, the orientation or acceleration/deceleration of the terminal device 1300, and a change in temperature of the terminal device 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal device 1300 and other devices. The terminal device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1316 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal device 1300 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, configured to perform the playback control methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1304 including instructions, where the instructions can be executed by the processor 1320 of the terminal device 1300 to complete the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the terminal device 1300, the terminal device 1300 is able to perform a playback control method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (26)

  1. A playback control method, characterized in that the method comprises:
    sending a label information acquisition request to a first terminal device storing a first video stream, the acquisition request including the identification information of a second video stream to be played on a second terminal device and the timestamp of a second picture frame;
    receiving a response message returned by the first terminal device and including a first position region, the first position region being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by a user, the first terminal device obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    determining, according to the first position region, a second position region on a screen used to display the second picture frame, the second position region corresponding to where the target content is displayed;
    generating a user interface (UI) layer, the part of the UI layer coinciding with the second position region being drawn with preset update content corresponding to the target content;
    when the screen displays the second picture frame, covering the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  2. The method according to claim 1, characterized in that determining, according to the first position region, the second position region on the screen used to display the second picture frame and corresponding to where the target content is displayed comprises:
    adjusting, in proportion to the dimension ratio between the second picture frame and the screen, a plurality of first coordinate information items of the first position region, to obtain a plurality of second coordinate information items corresponding to the plurality of first coordinate information items;
    determining the second position region on the screen according to the plurality of second coordinate information items.
  3. The method according to claim 1, characterized in that:
    generating the user interface UI layer comprises:
    generating a UI layer coinciding with the boundary of the second position region;
    drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises:
    drawing the update content on the whole UI layer;
    covering the second picture frame with the UI layer comprises:
    aligning the UI layer with, and covering, the second position region that displays the second picture frame and the target content.
  4. The method according to claim 1, characterized in that:
    generating the user interface UI layer comprises:
    generating a UI layer coinciding with the screen border;
    drawing, on the part of the UI layer coinciding with the second position region, the preset update content corresponding to the target content comprises:
    drawing the update content on a third position region of the UI layer that coincides with the second position region, and applying transparency processing to the part outside the third position region;
    covering the second picture frame with the UI layer comprises:
    covering the second picture frame with the whole UI layer.
  5. A playback control method, characterized in that the method comprises:
    detecting a first picture frame in a first video stream, and judging whether target content specified in advance by a user exists;
    if it is determined that the target content exists, determining a first position region corresponding to the target content on the first picture frame, and marking it on the first picture frame;
    when a label information acquisition request sent by a second terminal device is received, the acquisition request including the identification information of a second video stream to be played and the timestamp of a second picture frame, obtaining the first video stream corresponding to the identification information, and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, returning to the second terminal device a response message including the first position region, so that the second terminal device generates a user interface UI layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  6. The method according to claim 5, characterized in that determining the first position region corresponding to the target content on the first picture frame comprises:
    detecting, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
    if it is determined that the smoothness reaches the threshold, using the region boundary corresponding to the target content as the first position region;
    if it is determined that the smoothness does not reach the threshold, determining a smooth region corresponding to the region boundary, and using the smooth region as the first position region.
  7. The method according to claim 5 or 6, characterized in that detecting the first picture frame in the first video stream and judging whether the target content specified in advance by the user exists comprises:
    obtaining feature information in the first picture frame;
    identifying, according to a feature database, whether the feature information is the target content, wherein the feature database includes sample feature information corresponding to the target content.
  8. The method according to claim 7, characterized in that the target content specified in advance by the user includes:
    at least one or more of a character face, clothing, a color, text, and a pattern.
  9. The method according to claim 7, characterized in that, before obtaining the feature information in the first picture frame, the method further comprises:
    receiving picture frames of multiple video streams;
    obtaining, in each picture frame, sample feature information corresponding to the sample content preset by the user;
    storing the correspondence between the sample feature information and the sample content in the feature database.
  10. The method according to claim 8, characterized in that, if the target content is a first pattern, obtaining the feature information in the first picture frame comprises:
    determining a pattern region on the first picture frame according to a boundary contour algorithm;
    extracting pattern features from the pattern region;
    and identifying, according to the feature database, whether the feature information is the target content comprises:
    matching the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
    if the match succeeds, determining that the first pattern exists in the pattern region;
    if the match fails, determining that the first pattern does not exist in the pattern region.
  11. The method according to claim 8, characterized in that, if the target content is a first character face, obtaining the feature information in the first picture frame comprises:
    determining a facial region on the picture frame according to a facial feature range obtained by training in advance;
    extracting facial features from the facial region;
    and identifying, according to the feature database, whether the feature information is the target content comprises:
    matching the facial features against the sample facial features corresponding to the first character face in the feature database;
    if the match succeeds, determining that the first character face exists in the facial region;
    if the match fails, determining that the first character face does not exist in the facial region.
  12. A second terminal device, characterized in that the device comprises:
    a sending module, configured to send a label information acquisition request to a first terminal device storing a first video stream, the acquisition request including the identification information of a second video stream to be played on the second terminal device and the timestamp of a second picture frame;
    a first receiving module, configured to receive a response message returned by the first terminal device and including a first position region, the first position region being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by a user, the first terminal device obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    a first locating module, configured to determine, according to the first position region, a second position region on a screen used to display the second picture frame, the second position region corresponding to where the target content is displayed;
    a first processing module, configured to generate a user interface UI layer, the part of the UI layer coinciding with the second position region being drawn with preset update content corresponding to the target content;
    a display module, configured to, when the screen displays the second picture frame, cover the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  13. The device according to claim 12, characterized in that the first locating module comprises:
    an adjustment unit, configured to adjust, in proportion to the dimension ratio between the second picture frame and the screen, a plurality of first coordinate information items of the first position region, to obtain a plurality of second coordinate information items corresponding to the plurality of first coordinate information items;
    a determination unit, configured to determine the second position region on the screen according to the plurality of second coordinate information items.
  14. The device according to claim 12, characterized in that the first processing module comprises:
    a first generation unit, configured to generate a UI layer coinciding with the boundary of the second position region;
    a first drawing unit, configured to draw the update content on the whole UI layer;
    and the display module is configured to align the UI layer with, and cover, the second position region that displays the second picture frame and the target content.
  15. The device according to claim 12, characterized in that the first processing module comprises:
    a second generation unit, configured to generate a UI layer coinciding with the screen border;
    a second drawing unit, configured to draw the update content on a third position region of the UI layer that coincides with the second position region, and to apply transparency processing to the part outside the third position region;
    and the display module is configured to cover the second picture frame with the whole UI layer.
  16. A first terminal device, characterized in that the device comprises:
    a detection module, configured to detect a first picture frame in a first video stream and judge whether target content specified in advance by a user exists;
    a second locating module, configured to, if it is determined that the target content exists, determine a first position region corresponding to the target content on the first picture frame and mark it on the first picture frame;
    a first acquisition module, configured to, when a label information acquisition request sent by a second terminal device is received, the acquisition request including the identification information of a second video stream to be played and the timestamp of a second picture frame, obtain the first video stream corresponding to the identification information and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    a second processing module, configured to, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message including the first position region, so that the second terminal device generates a user interface UI layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  17. The device according to claim 16, characterized in that the second locating module comprises:
    a judging unit, configured to detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
    a first determination unit, configured to, if it is determined that the smoothness reaches the threshold, use the region boundary corresponding to the target content as the first position region;
    a second determination unit, configured to, if it is determined that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary and use the smooth region as the first position region.
  18. The device according to claim 16 or 17, characterized in that the detection module comprises:
    an acquiring unit, configured to obtain feature information in the first picture frame;
    a recognition unit, configured to identify, according to a feature database, whether the feature information is the target content, wherein the feature database includes sample feature information corresponding to the target content.
  19. The device according to claim 18, characterized in that the target content specified in advance by the user includes:
    at least one or more of a character face, clothing, a color, text, and a pattern.
  20. The device according to claim 18, characterized in that, before the feature information in the first picture frame is obtained, the device further comprises:
    a second receiving module, configured to receive picture frames of multiple video streams;
    a second acquisition module, configured to obtain, in each picture frame, sample feature information corresponding to the sample content preset by the user;
    a storage module, configured to store the correspondence between the sample feature information and the sample content in the feature database.
  21. The device according to claim 19, characterized in that the acquiring unit comprises:
    a first processing subunit, configured to, if the target content is a first pattern, determine a pattern region on the first picture frame according to a boundary contour algorithm;
    a first extraction subunit, configured to extract pattern features from the pattern region;
    and the recognition unit is configured to match the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
    if the match succeeds, it is determined that the first pattern exists in the pattern region;
    if the match fails, it is determined that the first pattern does not exist in the pattern region.
  22. The device according to claim 19, characterized in that the acquiring unit comprises:
    a second processing subunit, configured to, if the target content is a first character face, determine a facial region on the picture frame according to a facial feature range obtained by training in advance;
    a second extraction subunit, configured to extract facial features from the facial region;
    and the recognition unit is configured to match the facial features against the sample facial features corresponding to the first character face in the feature database;
    if the match succeeds, it is determined that the first character face exists in the facial region;
    if the match fails, it is determined that the first character face does not exist in the facial region.
  23. A playback control system, characterized in that the system comprises: the second terminal device according to any one of claims 12 to 15, and the first terminal device according to any one of claims 16 to 22.
  24. A second terminal device, characterized in that the device comprises:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    send a label information acquisition request to a first terminal device storing a first video stream, the acquisition request including the identification information of a second video stream to be played on the second terminal device and the timestamp of a second picture frame;
    receive a response message returned by the first terminal device and including a first position region, the first position region being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by a user, the first terminal device obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    determine, according to the first position region, a second position region on a screen used to display the second picture frame, the second position region corresponding to where the target content is displayed;
    generate a user interface UI layer, the part of the UI layer coinciding with the second position region being drawn with preset update content corresponding to the target content;
    when the screen displays the second picture frame, cover the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  25. A first terminal device, characterized in that the device comprises:
    a processor; a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    detect a first picture frame in a first video stream, and judge whether target content specified in advance by a user exists;
    if it is determined that the target content exists, determine a first position region corresponding to the target content on the first picture frame, and mark it on the first picture frame;
    when a label information acquisition request sent by a second terminal device is received, the acquisition request including the identification information of a second video stream to be played and the timestamp of a second picture frame, obtain the first video stream corresponding to the identification information, and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
    if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message including the first position region, so that the second terminal device generates a user interface UI layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
  26. A computer-readable storage medium, characterized in that instructions are stored on the computer-readable storage medium, and the instructions are loaded and executed by a processor to implement the playback control method according to any one of claims 1 to 4, or the playback control method according to any one of claims 5 to 11.
CN201510210500.2A 2015-04-29 2015-04-29 Control method for playing back, system and terminal device Active CN104883603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510210500.2A CN104883603B (en) 2015-04-29 2015-04-29 Control method for playing back, system and terminal device

Publications (2)

Publication Number Publication Date
CN104883603A CN104883603A (en) 2015-09-02
CN104883603B true CN104883603B (en) 2018-04-27

Family

ID=53950911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510210500.2A Active CN104883603B (en) 2015-04-29 2015-04-29 Control method for playing back, system and terminal device

Country Status (1)

Country Link
CN (1) CN104883603B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028097A (en) * 2015-12-09 2016-10-12 展视网(北京)科技有限公司 Vehicle-mounted terminal movie play device
US10629166B2 (en) * 2016-04-01 2020-04-21 Intel Corporation Video with selectable tag overlay auxiliary pictures
CN109963106B (en) * 2019-03-29 2020-01-10 宇龙计算机通信科技(深圳)有限公司 Video image processing method and device, storage medium and terminal
CN112583976B (en) * 2020-12-29 2022-02-18 咪咕文化科技有限公司 Graphic code display method, equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch
CN103634503A (en) * 2013-12-16 2014-03-12 苏州大学 Video manufacturing method based on face recognition and behavior recognition and video manufacturing method based on face recognition and behavior recognition
CN104376589A (en) * 2014-12-04 2015-02-25 青岛华通国有资本运营(集团)有限责任公司 Method for replacing movie and TV play figures

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Wang et al., "Video Collage: A Novel Presentation of Video Sequence", Proceedings of the 15th International Conference on Multimedia, 2007-09-29, pp. 461-464 *

Also Published As

Publication number Publication date
CN104883603A (en) 2015-09-02

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant