CN106911962A - Scene-based mobile video intelligent playback interaction control method - Google Patents
- Publication number: CN106911962A
- Application number: CN201710212145.1A
- Authority
- CN
- China
- Prior art keywords
- video
- face
- scene
- interactive behavior
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
Abstract
The invention discloses a scene-based mobile video intelligent playback interaction control method. A smart mobile device combines preset scenes with recognition of external objects and perception of internal states, automatically matches and outputs an interactive behavior, and thereby intelligently controls video playback. The preset scenes are as follows: the user continuously watches the video for more than a certain time; during playback, the device detects that the face has disappeared; during playback, the user is not watching the video with a correct posture; during playback, the face is detected again after having disappeared; different player states are perceived and handled; and the tilt angle of the device screen is recognized. The beneficial effects of the invention are: interaction between the video and the viewer, together with intelligent control of the playback process, is formed automatically; the care shown to the user is greatly improved; and the intelligent, interactive and varied playback brings a better user experience.
Description
Technical field
The present invention relates to the technical field of video processing, and in particular to a scene-based mobile video intelligent playback interaction control method.
Background technology
Video refers to the family of technologies by which a sequence of still images is captured, recorded, processed, stored, transmitted and reproduced as electrical signals. When the images change at more than 24 frames per second, the human eye, by the principle of persistence of vision, can no longer distinguish the individual still pictures; the sequence appears as smooth, continuous motion and is therefore called video. Video technology was first developed for television systems, but has since evolved into many formats that let consumers record video themselves. The growth of network technology also allows recorded video clips to exist on the Internet as streaming media that computers can receive and play. Video is a different technology from film, which captures moving images photographically as a series of still photographs.
Video playback has always been a passive, isolated process, with no relationship between the video clip and the viewer. The video itself cannot perceive the presence of the viewer, so playback lacks interactivity and cannot be made more interesting. Every change of playback state must be triggered manually by the viewer; all so-called playback interaction control is the result of human intervention.
Summary of the invention
To overcome the above shortcomings of the prior art, the present invention provides a scene-based mobile video intelligent playback interaction control method that enables interaction between the video and the viewer.
To achieve this goal, the present invention adopts the following technical scheme:
In the scene-based mobile video intelligent playback interaction control method, a smart mobile device combines preset scenes with recognition of external objects and perception of internal states, automatically matches and outputs an interactive behavior, and thereby intelligently controls video playback. The preset scenes are as follows:
(1) the user continuously watches the video for more than a certain time, producing an interactive behavior;
(2) during video playback, the device detects that the face has disappeared, producing an interactive behavior;
(3) during video playback, the user does not watch the video with a correct posture, producing an interactive behavior;
(4) during video playback, the face is detected again after having disappeared, producing an interactive behavior;
(5) different player states are perceived, and different interactive behaviors are produced for different states;
(6) the tilt angle of the smart mobile device's screen is recognized, producing an interactive behavior.
Here the smart mobile device includes smartphones, tablet computers and notebook computers, and is provided with a camera.
This patent seeks to establish a connection between video playback and the viewer and, based on several preset scenes, automatically forms interaction between the video and the viewer together with intelligent control of the playback process. The invention provides a more interesting, humanized playback experience: without disturbing the user's viewing, the whole playback process becomes livelier and less dull, and, particularly for videos with weak storylines (such as instructional videos), it stimulates the user's interest in watching. At the same time, automatic pause when the viewer leaves, automatic resumption when the viewer returns, and friendly prompts for the various states greatly improve the care shown to the user; the intelligent, interactive and varied playback raises the viewing experience to an unprecedented level.
Preferably, for preset scene (1), said smart mobile device is configured with a duration threshold for continuous viewing and angle thresholds for each angle of the user's head. First, using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result. Viewing within the configured angle thresholds is considered effective, which allows an accurate judgment of whether the user's viewing posture is in the effective viewing state. If it is, the viewing duration is recorded and tracked; if not, face recognition continues to judge the user's viewing posture. Then, by continuously recognizing and tracking the face, the viewing duration is computed; if the continuously and effectively recognized viewing duration reaches the configured threshold, an interactive behavior is produced. If the effective viewing state is not maintained continuously, face recognition continues to judge the user's viewing posture.
Preferably, for preset scene (2), said smart mobile device is configured with a face-recognition detection cycle and a detection limit X bounding the number of cycles. First, the device's camera combined with face recognition confirms that the user's face is detected. Later, if no face can be detected within a detection cycle, and X consecutive detection cycles all fail to detect a face, it is concluded that nobody is watching the video: an interactive behavior is produced and the video is automatically paused.
Preferably, for preset scene (3), said smart mobile device is configured with angle thresholds for each angle of the user's head and with an upper threshold N and a lower threshold M on the ratio of face area to screen area. Using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result; exceeding a configured angle threshold is considered invalid viewing behavior, producing an interactive behavior. The device's camera combined with face recognition also identifies the face area and the screen area; their ratio is computed and compared with the configured thresholds N and M. If the ratio of face area to screen area is greater than the upper threshold N, the user is judged to be too close to the screen; if it is less than the lower threshold M, the user is judged to be too far from the screen; in either case an interactive behavior is produced.
Preferably, for preset scene (4), said smart mobile device is configured with a face-recognition detection cycle. When, using the device's camera combined with face recognition, a face is correctly recognized within one detection cycle, the user is considered to have returned: the automatically paused video resumes playing and an interactive behavior is produced.
Preferably, for preset scene (5), said player states include: video starts playing, video paused, video finished playing, video buffering and video playback error.
Preferably, for preset scene (6), said smart mobile device is configured with a tilt threshold for the screen's inclination angle. Using the motion sensing of the device's gyroscope, the downward tilt angle of the screen is detected; when the tilt angle is below the configured threshold, the angle is judged unsuitable for viewing and an interactive behavior is produced.
Preferably, an interaction database is provided in said smart mobile device, containing interaction types, interactive behaviors and response data; the interaction types correspond to the preset scenes, and the interactive behaviors correspond to the response data. Each interaction type contains several interactive behaviors, and the behavior selected within a type is chosen at random. It should be emphasized that the interaction database can be preset inside an App that uses the invention, and the interaction data can be updated from the cloud, so the interactive behaviors can be refreshed online and made more varied.
Preferably, said interactive behaviors include controlling the video to start playing, pause, resume or stop, and displaying one or more of animation, voice, pictures, text or vibration.
Preferably, said pictures include static pictures and GIF pictures.
The beneficial effects of the invention are: interaction between the video and the viewer, together with intelligent control of the playback process, is formed automatically; the care shown to the user is greatly improved; and the intelligent, interactive and varied playback brings a better user experience.
Specific embodiments
The present invention will be further described below with reference to specific embodiments.
In the scene-based mobile video intelligent playback interaction control method, a smart mobile device combines preset scenes with recognition of external objects and perception of internal states, automatically matches and outputs an interactive behavior, and thereby intelligently controls video playback. The preset scenes are as follows:
(1) the user continuously watches the video for more than a certain time, producing an interactive behavior;
(2) during video playback, the device detects that the face has disappeared, producing an interactive behavior;
(3) during video playback, the user does not watch the video with a correct posture, producing an interactive behavior;
(4) during video playback, the face is detected again after having disappeared, producing an interactive behavior;
(5) different player states are perceived, and different interactive behaviors are produced for different states;
(6) the tilt angle of the smart mobile device's screen is recognized, producing an interactive behavior.
The smart mobile device includes smartphones, tablet computers and notebook computers, and is provided with a camera. An interaction database is provided in the device, containing interaction types, interactive behaviors and response data; the interaction types correspond to the preset scenes, and the interactive behaviors correspond to the response data. Each interaction type contains several interactive behaviors, and the behavior selected within a type is chosen at random. The interactive behaviors include controlling the video to start playing, pause, resume or stop, and displaying one or more of animation, voice, pictures, text or vibration; the pictures include static pictures and GIF pictures. It should be emphasized that the interaction database can be preset inside an App that uses this patent, and the interaction data can be updated from the cloud, so the interactive behaviors can be refreshed online and made more varied.
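As an illustration only, the per-type random selection of an interactive behavior and the cloud update path could be sketched as follows. The data model, type names and behavior entries are all hypothetical; the patent does not fix any of them:

```python
import random

# Hypothetical in-memory interaction database: each interaction type
# (one per preset scene) maps to several candidate behaviors, and one
# is drawn at random so repeated triggers feel varied.
INTERACTION_DB = {
    "long_viewing":  ["text:take_a_break", "voice:rest_reminder", "gif:stretch"],
    "face_lost":     ["voice:where_did_you_go", "text:are_you_there"],
    "bad_posture":   ["text:sit_up", "animation:posture_tip"],
    "face_returned": ["text:welcome_back", "animation:wave"],
}

def pick_behaviour(interaction_type, rng=random):
    """Return one randomly chosen behavior for the given interaction type."""
    return rng.choice(INTERACTION_DB[interaction_type])

def update_from_cloud(new_entries):
    """Merge behaviors pushed from the cloud, so that interactive
    behaviors can be refreshed online as the description suggests."""
    for itype, behaviours in new_entries.items():
        INTERACTION_DB.setdefault(itype, []).extend(behaviours)
```

In a real App the database would live in local storage and the cloud merge would run on a sync schedule; the dictionary here only shows the type-to-behaviors mapping and the random draw.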
For preset scene (1), the smart mobile device is configured with a duration threshold for continuous viewing and angle thresholds for each angle of the user's head. First, using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result. Viewing within the configured angle thresholds is considered effective, which allows an accurate judgment of whether the user's viewing posture is in the effective viewing state. If it is, the viewing duration is recorded and tracked; if not, face recognition continues to judge the user's viewing posture. Then, by continuously recognizing and tracking the face, the viewing duration is computed; if the continuously and effectively recognized viewing duration reaches the configured threshold, an interactive behavior is produced. If the effective viewing state is not maintained continuously, face recognition continues to judge the user's viewing posture. This scene livens up viewing sessions that are somewhat long and otherwise dull. For the same class of scene there can be multiple interactive behavior definitions, one of which is taken at random each time, so from the user's point of view every interaction looks and feels different. The interactive behaviors in this scene include, but are not limited to, one or a combination of prompting text, displaying pictures (static or GIF), displaying animation and playing voice. For example, after more than 3 minutes of continuously watching an instructional video, the user is prompted with a caring message, accompanied by animation or voice.
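The effective-viewing check and the continuous-duration tracking of scene (1) can be sketched as below. The concrete threshold values, the yaw/pitch angle names and the class structure are assumptions for illustration; the patent only specifies that such thresholds exist:

```python
# Hypothetical thresholds -- the patent leaves concrete values to the implementer.
ANGLE_THRESHOLD_DEG = 25.0    # max side-face / head-up / head-down angle
DURATION_THRESHOLD_S = 180.0  # e.g. 3 minutes of continuous viewing

def is_effective_viewing(yaw_deg, pitch_deg):
    """A recognized head pose counts as effective viewing only while
    every angle stays inside the configured threshold."""
    return abs(yaw_deg) <= ANGLE_THRESHOLD_DEG and abs(pitch_deg) <= ANGLE_THRESHOLD_DEG

class ContinuousViewingTracker:
    """Accumulates continuous effective-viewing time; resets whenever the
    pose leaves the effective range, and reports True once the duration
    threshold is reached (the point where an interactive behavior fires)."""
    def __init__(self, duration_threshold=DURATION_THRESHOLD_S):
        self.duration_threshold = duration_threshold
        self.accumulated = 0.0

    def update(self, yaw_deg, pitch_deg, dt):
        if is_effective_viewing(yaw_deg, pitch_deg):
            self.accumulated += dt      # still watching effectively
        else:
            self.accumulated = 0.0      # viewing interrupted: restart count
        return self.accumulated >= self.duration_threshold
```

`update` would be driven once per recognition frame with the pose angles estimated by the face recognizer and the elapsed time `dt` since the previous frame.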
For preset scene (2), the smart mobile device is configured with a face-recognition detection cycle and a detection limit X bounding the number of cycles. First, the camera combined with face recognition confirms that the user's face is detected. Later, if no face can be detected within a detection cycle, and X consecutive detection cycles all fail to detect a face, it is concluded that nobody is watching: an interactive behavior is produced and the video is automatically paused. This scene requires accurately detecting the transition from presence to absence, with a sensibly chosen detection cycle: when the configured number of consecutive cycles all fail to detect a face, the user is considered to have left, and an interactive behavior is matched and output. For example, prompts such as "Where did you go?" or "Dear, are you still there?" are made while the video automatically pauses, so that no exciting moment is missed. This solves the situation where the user has to leave because of a sudden event (such as answering a phone call) and misses an exciting part. In this scene, the voice prompt combined with automatically pausing the video brings a better user experience.
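The consecutive-cycle disappearance check of scene (2) amounts to a small counter. The state names and the shape of the per-cycle callback are illustrative assumptions:

```python
class FaceAbsenceWatcher:
    """Counts consecutive detection cycles with no face; after X empty
    cycles it reports that playback should auto-pause.  X and the face
    detector itself are implementation choices, not fixed by the patent."""
    def __init__(self, max_empty_cycles):
        self.max_empty_cycles = max_empty_cycles   # the limit X
        self.empty_cycles = 0

    def on_cycle(self, face_detected):
        if face_detected:
            self.empty_cycles = 0       # presence confirmed, keep playing
            return "playing"
        self.empty_cycles += 1
        if self.empty_cycles >= self.max_empty_cycles:
            return "pause"              # nobody watching: auto-pause + prompt
        return "playing"
```

One call per detection cycle with the detector's boolean result is enough; the caller pauses the player and emits the prompt when `"pause"` is returned.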
For preset scene (3), the smart mobile device is configured with angle thresholds for each angle of the user's head and with an upper threshold N and a lower threshold M on the ratio of face area to screen area. Using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result; exceeding a configured angle threshold is considered invalid viewing behavior, producing an interactive behavior. The camera combined with face recognition also computes the ratio of the identified face area to the screen area and compares it with the configured thresholds: if the ratio is greater than the upper threshold N, the user is judged to be too close to the screen; if it is less than the lower threshold M, the user is judged to be too far from the screen; in either case an interactive behavior is produced. In the face recognition result of normal viewing, the feature key points collected at the eyes lie almost on one horizontal line, whereas in the map of a turned face the collected feature key points form a definite angle; computing this angle and comparing it with the angle threshold determines whether the user is in the effective viewing state. For example, prompts such as "Where are you looking?" are made.
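The distance judgment of scene (3) reduces to comparing the face-to-screen area ratio against the thresholds N and M. The function name and units are assumptions; the thresholds are exactly the N and M of the description:

```python
def classify_viewing_distance(face_area, screen_area, ratio_upper_n, ratio_lower_m):
    """Compare the face-area / screen-area ratio with the upper threshold N
    and lower threshold M of preset scene (3) and classify the distance."""
    ratio = face_area / screen_area
    if ratio > ratio_upper_n:
        return "too_close"   # face fills too much of the screen
    if ratio < ratio_lower_m:
        return "too_far"     # face is too small relative to the screen
    return "ok"
```

Each non-`"ok"` result would trigger the corresponding interactive behavior (e.g. a text or voice prompt to move back or closer).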
For preset scene (4), the smart mobile device is configured with a face-recognition detection cycle. When a face is correctly recognized within one detection cycle by the camera combined with face recognition, the user is considered to have returned: the automatically paused video resumes playing and an interactive behavior is produced. For example, text prompts such as "Little master, I missed you!" are shown.
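Scene (4) is the inverse transition of scene (2): a paused player resumes on the first cycle that recognizes a face again. The state names and the `"welcome_back"` behavior key are hypothetical:

```python
class AutoResumePlayer:
    """Minimal sketch of preset scene (4): when a face is correctly
    recognized in a detection cycle while the video is paused, the
    player resumes and emits a greeting-style interactive behavior."""
    def __init__(self):
        self.state = "paused"
        self.last_prompt = None

    def on_detection_cycle(self, face_recognised):
        if self.state == "paused" and face_recognised:
            self.state = "playing"
            self.last_prompt = "welcome_back"   # hypothetical behavior key
        return self.state
```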
For preset scene (5), the player states include: video starts playing, video paused, video finished playing, video buffering and video playback error. For example, text prompts such as "Buffering..." are shown.
For preset scene (6), the smart mobile device is configured with a tilt threshold for the screen's inclination angle. Using the motion sensing of the device's gyroscope, the downward tilt angle of the screen is detected; when the tilt angle is below the configured threshold, the angle is judged unsuitable for viewing and an interactive behavior is produced. The interactive behaviors in this scene include, but are not limited to, phone vibration, animation, text and voice prompts. For example, text prompts such as "Are you watching lying down?" are shown.
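The tilt judgment of scene (6) is a single threshold comparison on the gyroscope-derived angle. The angle convention (degrees of downward tilt) and the function name are assumptions:

```python
def should_warn_tilt(screen_tilt_deg, tilt_threshold_deg):
    """Scene (6): the gyroscope reports the downward tilt of the screen;
    below the configured threshold the angle is judged unsuitable for
    viewing and an interactive behavior (vibration, text, voice) fires."""
    return screen_tilt_deg < tilt_threshold_deg
```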
The face recognition technology referred to here uses the eigenface method. Its basic idea is to find the basic components of the distribution of face images, i.e. the eigenvectors of the covariance matrix of the face image sample set, and to characterize face images approximately with them; these eigenvectors are called eigenfaces. In fact, the eigenfaces reflect the information hidden inside the face sample set and the structural relations of the face. The eigenvectors of the covariance matrices of the sample sets of the eyes, cheeks and jaw are likewise called eigen-eyes, eigen-jaw and eigen-lips, collectively the feature sub-faces. The feature sub-faces span a subspace of the image space, called the sub-face space. The projection distance of a test image window onto the sub-face space is computed; if the window satisfies the threshold comparison condition, it is judged to be a face.
In the eigenface method, attributes such as the size, position and distances of facial contour features, for example the iris, the nostril wings and the corners of the mouth, are first determined; their geometric feature quantities are then computed, and these quantities form a feature vector describing the face. The technical core is in fact "partial body feature analysis": a principal component subspace is constructed from a group of face training images, and since the principal components have face-like shapes they are also called eigenfaces. At recognition time the test image is projected onto the principal component subspace, yielding a set of projection coefficients that are compared with the face images of each known person for identification. According to statistics, a 95% correct recognition rate was obtained on 3000 images of 200 people. The present invention uses the tens of millions of distinct eigenfaces collected from children in the 6-12 age bracket and trains the model on them, greatly increasing the overall base of the eigenface database, so that the recognition rate for this age bracket reaches 99.9%.
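The eigenface pipeline described above (principal components of the sample covariance, projection coefficients, and the subspace-distance face test) can be sketched with NumPy. This is a generic eigenface implementation, not the patent's training code; the threshold and image sizes are placeholders:

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k eigenfaces, i.e. the leading
    eigenvectors of the sample covariance matrix (via SVD)."""
    mean_face = images.mean(axis=0)
    centred = images - mean_face
    # SVD of the centred data: rows of vt are the covariance eigenvectors
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:k]

def project(face, mean_face, eigenfaces):
    """Projection coefficients of a face in the eigenface subspace."""
    return eigenfaces @ (face - mean_face)

def is_face(window, mean_face, eigenfaces, threshold):
    """Distance from a test image window to the face subspace; below
    the threshold the window is judged to contain a face."""
    coeffs = project(window, mean_face, eigenfaces)
    reconstruction = mean_face + eigenfaces.T @ coeffs
    return np.linalg.norm(window - reconstruction) < threshold
```

Recognition then compares a probe's projection coefficients with the stored coefficients of each known person, typically by nearest neighbour in the subspace.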
The invention provides a more interesting, humanized playback experience: without disturbing the user's viewing, the whole playback process becomes livelier and less dull, and, particularly for videos with weak storylines (such as instructional videos), it stimulates the user's interest in watching. At the same time, automatic pause when the viewer leaves, automatic resumption when the viewer returns, and friendly prompts for the various states greatly improve the care shown to the user; the intelligent, interactive and varied playback raises the viewing experience to an unprecedented level.
Claims (10)
1. A scene-based mobile video intelligent playback interaction control method, characterized in that a smart mobile device combines preset scenes with recognition of external objects and perception of internal states, automatically matches and outputs an interactive behavior, and thereby intelligently controls video playback, the preset scenes being as follows:
(1) the user continuously watches the video for more than a certain time, producing an interactive behavior;
(2) during video playback, the device detects that the face has disappeared, producing an interactive behavior;
(3) during video playback, the user does not watch the video with a correct posture, producing an interactive behavior;
(4) during video playback, the face is detected again after having disappeared, producing an interactive behavior;
(5) different player states are perceived, and different interactive behaviors are produced for different states;
(6) the tilt angle of the smart mobile device's screen is recognized, producing an interactive behavior;
wherein said smart mobile device includes smartphones, tablet computers and notebook computers, and is provided with a camera.
2. The scene-based mobile video intelligent playback interaction control method according to claim 1, characterized in that, for preset scene (1), said smart mobile device is configured with a duration threshold for continuous viewing and angle thresholds for each angle of the user's head; first, using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result; viewing within the configured angle thresholds is considered effective, allowing an accurate judgment of whether the user's viewing posture is in the effective viewing state; if it is, the viewing duration is recorded and tracked; if not, face recognition continues to judge the user's viewing posture; then, by continuously recognizing and tracking the face, the viewing duration is computed, and if the continuously and effectively recognized viewing duration reaches the configured threshold, an interactive behavior is produced; if the effective viewing state is not maintained continuously, face recognition continues to judge the user's viewing posture.
3. The scene-based mobile video intelligent playback interaction control method according to claim 1, characterized in that, for preset scene (2), said smart mobile device is configured with a face-recognition detection cycle and a detection limit X bounding the number of cycles; first, the device's camera combined with face recognition confirms that the user's face is detected; later, if no face can be detected within a detection cycle and X consecutive detection cycles all fail to detect a face, it is concluded that nobody is watching the video, an interactive behavior is produced and the video is automatically paused.
4. The scene-based mobile video intelligent playback interaction control method according to claim 1, characterized in that, for preset scene (3), said smart mobile device is configured with angle thresholds for each angle of the user's head and with an upper threshold N and a lower threshold M on the ratio of face area to screen area; using the device's camera combined with face recognition, the angles of the recognized head pose are computed, and the side-face, head-up and head-down angles are derived from the result, with exceeding a configured angle threshold considered invalid viewing behavior and producing an interactive behavior; the camera combined with face recognition also computes the ratio of the identified face area to the screen area and compares it with the configured thresholds: if the ratio is greater than the upper threshold N, the user is judged to be too close to the screen; if it is less than the lower threshold M, the user is judged to be too far from the screen; in either case an interactive behavior is produced.
5. The scene-based mobile video intelligent playback interaction control method according to claim 1, characterized in that, for preset scene (4), said smart mobile device is configured with a face-recognition detection cycle; when a face is correctly recognized within one detection cycle by the camera combined with face recognition, the user is considered to have returned, the automatically paused video resumes playing, and an interactive behavior is produced.
6. The scene-based mobile video intelligent playing interaction control method according to claim 1, characterized in that, in preset scene (5), the player states include video playback started, video paused, video playback ended, video buffering and video playback abnormal.
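The five player states enumerated in claim 6 map naturally onto an enumeration type. This is a sketch only: the state names and the example `needs_interaction` rule are assumptions for illustration, not part of the claim.

```python
from enum import Enum, auto

class PlayerState(Enum):
    """The five player states of claim 6 (names are illustrative)."""
    PLAYBACK_STARTED = auto()
    PAUSED = auto()
    PLAYBACK_ENDED = auto()
    BUFFERING = auto()
    PLAYBACK_ERROR = auto()

def needs_interaction(state):
    # Example policy (an assumption): buffering and abnormal playback are
    # the states in which the user should be prompted.
    return state in (PlayerState.BUFFERING, PlayerState.PLAYBACK_ERROR)
```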
7. The scene-based mobile video intelligent playing interaction control method according to claim 1, characterized in that, in preset scene (6), a gradient threshold for the screen inclination angle is set on the intelligent mobile device; using the motion sensing of the gyroscope on the smart device, the downward inclination angle of the screen is detected; when the inclination angle is less than the set gradient threshold, it is determined that the angle is unsuitable for the user to watch at, and an interaction behavior is carried out.
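The tilt check of claim 7 reduces to a single comparison against the configured gradient threshold. The function name and the default threshold value are illustrative assumptions; the tilt angle itself would come from the device's gyroscope or orientation sensor.

```python
def angle_unsuitable(screen_tilt_deg, gradient_threshold=25.0):
    """Claim 7: a downward screen tilt below the configured gradient
    threshold is deemed unsuitable for viewing and should trigger an
    interaction behavior."""
    return screen_tilt_deg < gradient_threshold
```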
8. The scene-based mobile video intelligent playing interaction control method according to claim 1, characterized in that an interaction database is provided in the intelligent mobile device; interaction types, interaction behaviors and response data are stored in the interaction database; the interaction types correspond to the preset scenes, and the interaction behaviors correspond to the response data, wherein each interaction type contains several interaction behaviors, and the interaction behavior used within each interaction type is selected at random.
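The interaction database of claim 8, with its random pick among the behaviors of a type, can be sketched as a mapping plus `random.choice`. The database contents and key names below are hypothetical examples, not data from the patent.

```python
import random

# Hypothetical interaction database: each interaction type (keyed by the
# preset scene it corresponds to) holds several candidate behaviors, each
# paired in practice with its response data.
INTERACTION_DB = {
    "too-close": ["voice:step-back", "animation:shrink", "text:too-close"],
    "user-returned": ["animation:welcome-back", "voice:resume"],
}

def pick_interaction(interaction_type, rng=random):
    """Select one behavior at random from the type's candidates (claim 8)."""
    return rng.choice(INTERACTION_DB[interaction_type])
```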
9. The scene-based mobile video intelligent playing interaction control method according to any one of claims 1 to 8, characterized in that the interaction behaviors include controlling the video to start playing, pause, resume playing or stop playing, and displaying one or more of animation, voice, pictures, text or vibration.
10. The scene-based mobile video intelligent playing interaction control method according to claim 9, characterized in that the pictures include static pictures and GIF pictures.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710212145.1A CN106911962B (en) | 2017-04-01 | 2017-04-01 | Scene-based mobile video intelligent playing interaction control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710212145.1A CN106911962B (en) | 2017-04-01 | 2017-04-01 | Scene-based mobile video intelligent playing interaction control method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106911962A true CN106911962A (en) | 2017-06-30 |
CN106911962B CN106911962B (en) | 2020-03-13 |
Family
ID=59194381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710212145.1A Active CN106911962B (en) | 2017-04-01 | 2017-04-01 | Scene-based mobile video intelligent playing interaction control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106911962B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107396151A (en) * | 2017-08-24 | 2017-11-24 | 维沃移动通信有限公司 | A kind of video playing control method and electronic equipment |
CN109104630A (en) * | 2018-08-31 | 2018-12-28 | 北京优酷科技有限公司 | Video interaction method and device |
CN109213562A (en) * | 2018-09-18 | 2019-01-15 | 北京猎户星空科技有限公司 | Control method, device, equipment and the storage medium of smart machine |
CN109889901A (en) * | 2019-03-27 | 2019-06-14 | 深圳创维-Rgb电子有限公司 | Control method for playing back, device, equipment and the storage medium of playback terminal |
CN110162232A (en) * | 2018-02-11 | 2019-08-23 | 中国移动通信集团终端有限公司 | Screen display method, device, equipment and storage medium with display screen |
CN113891157A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN113905191A (en) * | 2021-11-15 | 2022-01-07 | 深圳市华瑞安科技有限公司 | Intelligent interactive education tablet computer and interaction method |
CN114125540A (en) * | 2021-11-11 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114866693A (en) * | 2022-04-15 | 2022-08-05 | 苏州清睿智能科技股份有限公司 | Information interaction method and device based on intelligent terminal |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100328492A1 (en) * | 2009-06-30 | 2010-12-30 | Eastman Kodak Company | Method and apparatus for image display control according to viewer factors and responses |
CN103369274A (en) * | 2013-06-28 | 2013-10-23 | 青岛歌尔声学科技有限公司 | Intelligent television regulating system and television regulating method thereof |
CN103747331A (en) * | 2013-12-23 | 2014-04-23 | 乐视致新电子科技(天津)有限公司 | Interactive method of watching videos and device thereof |
CN104090656A (en) * | 2014-06-30 | 2014-10-08 | 潘晓丰 | Eyesight protecting method and system for smart device |
CN104808946A (en) * | 2015-04-29 | 2015-07-29 | 天脉聚源(北京)传媒科技有限公司 | Image playing and controlling method and device |
CN105657500A (en) * | 2016-01-26 | 2016-06-08 | 广东欧珀移动通信有限公司 | Video playing control method and device |
CN105872757A (en) * | 2016-03-24 | 2016-08-17 | 乐视控股(北京)有限公司 | Method and apparatus for reminding safe television watching distance |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107396151A (en) * | 2017-08-24 | 2017-11-24 | 维沃移动通信有限公司 | A kind of video playing control method and electronic equipment |
CN110162232A (en) * | 2018-02-11 | 2019-08-23 | 中国移动通信集团终端有限公司 | Screen display method, device, equipment and storage medium with display screen |
CN109104630A (en) * | 2018-08-31 | 2018-12-28 | 北京优酷科技有限公司 | Video interaction method and device |
CN109213562A (en) * | 2018-09-18 | 2019-01-15 | 北京猎户星空科技有限公司 | Control method, device, equipment and the storage medium of smart machine |
CN109213562B (en) * | 2018-09-18 | 2022-03-11 | 北京金山安全软件有限公司 | Control method, device and equipment of intelligent equipment and storage medium |
CN109889901A (en) * | 2019-03-27 | 2019-06-14 | 深圳创维-Rgb电子有限公司 | Control method for playing back, device, equipment and the storage medium of playback terminal |
CN113891157A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114125540A (en) * | 2021-11-11 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN113905191A (en) * | 2021-11-15 | 2022-01-07 | 深圳市华瑞安科技有限公司 | Intelligent interactive education tablet computer and interaction method |
CN113905191B (en) * | 2021-11-15 | 2024-02-06 | 深圳市华瑞安科技有限公司 | Intelligent interaction education tablet computer and interaction method |
CN114866693A (en) * | 2022-04-15 | 2022-08-05 | 苏州清睿智能科技股份有限公司 | Information interaction method and device based on intelligent terminal |
CN114866693B (en) * | 2022-04-15 | 2024-01-05 | 苏州清睿智能科技股份有限公司 | Information interaction method and device based on intelligent terminal |
Also Published As
Publication number | Publication date |
---|---|
CN106911962B (en) | 2020-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106911962A (en) | Mobile video based on scene intelligently plays interaction control method | |
CN1905629B (en) | Image capturing apparatus and image capturing method | |
CN109976506B (en) | Awakening method of electronic equipment, storage medium and robot | |
US10798438B2 (en) | Determining audience state or interest using passive sensor data | |
CN106792177A (en) | A kind of TV control method and system | |
CN108521606A (en) | A kind of monitoring method, device, storage medium and the smart television of viewing TV | |
CN112272324B (en) | Follow-up mode control method and display device | |
WO2015196584A1 (en) | Smart recording system | |
US20180129871A1 (en) | Behavior pattern statistical apparatus and method | |
CN101674435B (en) | Image display apparatus and detection method | |
CN103760968A (en) | Method and device for selecting display contents of digital signage | |
CN106235931A (en) | Control the method and device of face cleaning instrument work | |
WO2015158087A1 (en) | Method and apparatus for detecting health status of human eyes and mobile terminal | |
CN103093124A (en) | Method for restricting the usage of electronic equipment and electronic equipment | |
CN109375765B (en) | Eyeball tracking interaction method and device | |
CN109416834A (en) | Attract angle value processing system and attracts angle value processing unit | |
CN107392159A (en) | A kind of facial focus detecting system and method | |
WO2023273500A1 (en) | Data display method, apparatus, electronic device, computer program, and computer-readable storage medium | |
CN115599219B (en) | Eye protection control method, system and equipment for display screen and storage medium | |
CN105072327A (en) | Eye-closing-preventing person photographing method and device thereof | |
CN110825220B (en) | Eyeball tracking control method, device, intelligent projector and storage medium | |
CN109167877A (en) | Terminal screen control method, device, terminal device and storage medium | |
EP3060317A1 (en) | Information processing device, recording medium, and information processing method | |
CN108668080A (en) | Prompt method and device, the electronic equipment of camera lens degree of fouling | |
US10820040B2 (en) | Television time shifting control method, system and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information | | Inventors after: Xu Jin; Huang Feifei; Jiang Jun. Inventors before: Xu Jin; Huang Feifei |