CN106911962B - Scene-based mobile video intelligent playing interaction control method - Google Patents


Info

Publication number
CN106911962B
CN106911962B
Authority
CN
China
Prior art keywords
video
playing
face
intelligent mobile
interaction
Prior art date
Legal status
Active
Application number
CN201710212145.1A
Other languages
Chinese (zh)
Other versions
CN106911962A (en)
Inventor
徐进 (Xu Jin)
黄飞飞 (Huang Feifei)
Current Assignee
Shanghai Xin Xin Network Technology Co Ltd
Original Assignee
Shanghai Xin Xin Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xin Xin Network Technology Co Ltd
Priority to CN201710212145.1A
Publication of CN106911962A
Application granted
Publication of CN106911962B
Status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Processing (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a scene-based mobile video intelligent playing interaction control method. An intelligent mobile device combines preset scenes with the recognition of external objects and the perception of internal states to automatically match and output interactive behaviors, thereby achieving intelligent control of video playing. The preset scenes are: the user watches the video continuously for longer than a set time; during playback, the mobile device detects that the viewer's face has disappeared; during playback, the user is not watching the video in a correct posture; during playback, the face is detected again after having disappeared; the state of the player is sensed and different states are handled differently; and the screen angle of the intelligent mobile device is recognized. The beneficial effects of the invention are: interaction between the video and the viewer, and intelligent control of the playing process, are formed automatically; the care shown to the user is greatly improved; and the intelligence and interactivity of playback bring a better user experience.

Description

Scene-based mobile video intelligent playing interaction control method
Technical Field
The invention relates to the field of video processing, and in particular to a scene-based mobile video intelligent playing interaction control method.
Background
Video generally refers to the various techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When a moving image changes at more than 24 frames per second, the human eye cannot distinguish individual still frames, owing to the persistence of vision; the sequence appears as a smooth, continuous visual effect, and such a continuous sequence of pictures is called video. Video technology was originally developed for television systems, but has since evolved into many different formats that let consumers record video themselves. Advances in networking have also enabled recorded video segments to be streamed over the internet and received and played by computers. Video and film are different technologies, although both use photography to capture dynamic scenes as a series of still images.
Video playing has always been a passive, isolated process: the playing interface is unrelated to the viewer. The video itself cannot sense the viewer's presence, so it lacks interaction and the playing process is not made more interesting. Every change of the playback state must be triggered manually by the viewer, and so-called playback interaction control is merely the result of human intervention.
Disclosure of Invention
To overcome these defects of the prior art, the invention provides a scene-based mobile video intelligent playing interaction control method that realizes interaction between the video and the viewer.
To achieve this purpose, the invention adopts the following technical scheme:
In the scene-based mobile video intelligent playing interaction control method, an intelligent mobile device combines preset scenes with the recognition of external objects and the perception of internal states to automatically match and output interactive behaviors, realizing intelligent control of video playing. The preset scenes are as follows:
(1) the user watches the video continuously for longer than a set time, which triggers an interactive behavior;
(2) during video playing, the mobile device detects that the viewer's face has disappeared, which triggers an interactive behavior;
(3) during video playing, the user is not watching the video in a correct posture, which triggers an interactive behavior;
(4) during video playing, the face is detected again after having disappeared, which triggers an interactive behavior;
(5) the state of the player is sensed and different interactive behaviors are generated for different states;
(6) the screen angle of the intelligent mobile device is recognized, which triggers an interactive behavior;
wherein the intelligent mobile device is a smartphone, tablet computer or notebook computer equipped with a camera.
The patent seeks to establish a connection between video playing and the viewer, and to form intelligent control of the interaction between the video and the viewer, and of the playing process, automatically on the basis of preset scenes. The invention brings a more interesting and humanized playing experience: without interfering with viewing, the whole playing process becomes livelier and less dry, and the user's interest in watching is stimulated, especially for videos with a weak plot (such as instructional videos). At the same time, the abilities to pause and resume automatically, together with friendly prompts in various states, greatly improve the care shown to the user, and the intelligence and interactivity of playback raise the viewing experience to an unprecedented level.
Preferably, in preset scene (1), the intelligent mobile device stores a duration threshold for continuous viewing and an angle threshold for each angle of the user's head. First, the camera on the device, combined with a face recognition technology, computes the angles of the recognized head pose; from the result, the side-face, head-up and head-down angles are obtained. Viewing within the set angle thresholds is considered effective, so whether the user's viewing posture is in an effective viewing state can be judged accurately. If it is, the viewing duration is recorded and tracked; if not, the user's viewing posture continues to be judged by face recognition. The viewing duration is then computed by continuously recognizing and tracking the face, and if the duration of continuous effective viewing reaches the set threshold, an interactive behavior is performed. If the effective viewing state is not continuous, the user's viewing posture again continues to be judged by face recognition.
Preferably, in preset scene (2), the intelligent mobile device stores a face-recognition detection period and a count X that limits the number of detection periods. First, the camera on the device, combined with a face recognition technology, confirms that the user's face is detected. Then, if no face can be detected within the set detection period, and none is detected for X consecutive detection periods, it is concluded that nobody is watching the video; an interactive behavior is performed and the video is automatically paused.
Preferably, in preset scene (3), the intelligent mobile device stores an angle threshold for each angle of the user's head, and an upper threshold N and a lower threshold M for the ratio of the face area to the screen area. The camera on the device, combined with a face recognition technology, computes the angles of the recognized head pose; from the result, the side-face, head-up and head-down angles are obtained, and if an angle exceeds the set threshold, the viewing is considered invalid and an interactive behavior is performed. The camera, combined with the face recognition technology, also computes the ratio of the recognized face area to the screen area and compares it with the set thresholds N and M: if the ratio is greater than the upper threshold N, the face is judged too close to the screen; if it is smaller than the lower threshold M, the face is judged too far from the screen; and an interactive behavior is performed.
Preferably, in preset scene (4), the intelligent mobile device stores a face-recognition detection period. If the camera on the device, combined with a face recognition technology, correctly recognizes a face within one detection period, the user is considered to have returned; the paused video automatically resumes and an interactive behavior is performed.
Preferably, in preset scene (5), the player states include: playing started, playing paused, playing finished, buffering, and playback error.
Preferably, in preset scene (6), the intelligent mobile device stores a tilt threshold for the screen's inclination angle. The downward tilt of the screen is detected through the motion sensing of the device's gyroscope; when the tilt angle is smaller than the set threshold, it is concluded that the user should not watch at this angle, and an interactive behavior is performed.
Preferably, an interaction database is provided in the intelligent mobile device, containing interaction types, interactive behaviors and response data; each interaction type corresponds to a preset scene, and each interactive behavior corresponds to response data. Each interaction type contains several interactive behaviors, and the behavior chosen within a type is random. It should be emphasized that the interaction database can be preset inside apps that use this method, and the interaction data can also be updated from the cloud, so that interactive behaviors can be updated online and become more diverse.
Preferably, the interactive behaviors include one or more of animation, voice, pictures, text or vibration, used to control the starting, pausing, resuming and ending of video playing and on-screen display.
Preferably, the pictures include still pictures and GIF pictures.
The beneficial effects of the invention are: interaction between the video and the viewer, and intelligent control of the playing process, are formed automatically; the care shown to the user is greatly improved; and the intelligence and interactivity of playback bring a better user experience.
Detailed Description
The invention is further described below with reference to specific embodiments.
In the scene-based mobile video intelligent playing interaction control method, an intelligent mobile device combines preset scenes with the recognition of external objects and the perception of internal states to automatically match and output interactive behaviors, realizing intelligent control of video playing. The preset scenes are as follows:
(1) the user watches the video continuously for longer than a set time, which triggers an interactive behavior;
(2) during video playing, the mobile device detects that the viewer's face has disappeared, which triggers an interactive behavior;
(3) during video playing, the user is not watching the video in a correct posture, which triggers an interactive behavior;
(4) during video playing, the face is detected again after having disappeared, which triggers an interactive behavior;
(5) the state of the player is sensed and different interactive behaviors are generated for different states;
(6) the screen angle of the intelligent mobile device is recognized, which triggers an interactive behavior.
Wherein: the intelligent mobile equipment comprises an intelligent mobile phone, a tablet personal computer and a notebook computer, and a camera is arranged on the intelligent mobile equipment. An interaction database is arranged in the intelligent mobile equipment, interaction types, interaction behaviors and response data are arranged in the interaction database, the interaction types correspond to preset scenes, the interaction behaviors correspond to the response data, and the interaction types are as follows: each interactive type contains a plurality of interactive behaviors, and the interactive behaviors selected in each interactive type are random. The interactive behavior comprises one or more of animation, voice, pictures, characters or vibration for controlling the starting playing, the pausing playing, the continuing playing, the ending playing and the displaying of the video. The pictures include still pictures and gif pictures. It is to be emphasized that: this interactive database can be by using some apps of this patent inside preset, and this interactive data can be updated through the high in the clouds simultaneously for the interactive action can be updated on line, thereby more diversified.
In preset scene (1), the intelligent mobile device stores a duration threshold for continuous viewing and an angle threshold for each angle of the user's head. First, the camera on the device, combined with a face recognition technology, computes the angles of the recognized head pose; from the result, the side-face, head-up and head-down angles are obtained. Viewing within the set angle thresholds is considered effective, so whether the user's viewing posture is in an effective viewing state can be judged accurately. If it is, the viewing duration is recorded and tracked; if not, the user's viewing posture continues to be judged by face recognition. The viewing duration is then computed by continuously recognizing and tracking the face, and if the duration of continuous effective viewing reaches the set threshold, an interactive behavior is performed. If the effective viewing state is not continuous, the user's viewing posture again continues to be judged by face recognition. This scene improves the experience of watching longer but relatively boring videos. For the same type of scene, several interactive behaviors can be defined and one is selected at random each time, so from the user's perspective every interaction feels different. The interactive behaviors in this scene include, but are not limited to, one or more of text prompts, picture display, animation display and voice playback (pictures include still pictures and GIF pictures).
For example, after the user has watched an instructional video continuously for more than 3 minutes, the device may prompt "You are watching very attentively!", accompanied by animation or speech.
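The continuous-viewing logic of scene (1) can be sketched as a small state machine. The concrete threshold values below are assumptions (the patent leaves them to the implementer), and `yaw_deg`/`pitch_deg` stand in for the side-face and head-up/head-down angles produced by whatever face recognizer is used:

```python
import time

ANGLE_THRESHOLD_DEG = 25.0     # hypothetical max side-face / head-up / head-down angle
DURATION_THRESHOLD_S = 180.0   # e.g. 3 minutes of continuous effective viewing

class ViewingTimer:
    """Tracks how long the viewer has continuously been in an effective pose."""

    def __init__(self):
        self.started_at = None   # None while not in an effective viewing state

    def update(self, face_detected, yaw_deg, pitch_deg, now=None):
        """Feed one face-recognition result per detection period; returns True
        when the continuous-viewing threshold is reached and an interactive
        behavior should fire."""
        now = time.monotonic() if now is None else now
        effective = (face_detected
                     and abs(yaw_deg) <= ANGLE_THRESHOLD_DEG
                     and abs(pitch_deg) <= ANGLE_THRESHOLD_DEG)
        if not effective:
            self.started_at = None       # pose broke: restart the clock
            return False
        if self.started_at is None:
            self.started_at = now        # effective viewing just began
        return now - self.started_at >= DURATION_THRESHOLD_S
```

Any break in the effective state resets the timer, matching the requirement that the viewing be continuously effective before the interaction fires.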
In preset scene (2), the intelligent mobile device stores a face-recognition detection period and a count X that limits the number of detection periods. First, the camera on the device, combined with a face recognition technology, confirms that the user's face is detected. Then, if no face can be detected within the set detection period, and none is detected for X consecutive detection periods, it is concluded that nobody is watching the video; an interactive behavior is performed and the video is automatically paused. This scene requires accurately detecting the transition from presence to absence, with a reasonably chosen detection period: when no face can be detected for the set number of detection periods, the user is judged to have left, and an interactive behavior is matched and output, for example a voice prompt such as "Hey, where did you go?" or "Are you still there?", while the video is automatically paused to avoid missing the highlights. This solves the problem of the user missing part of the video after leaving suddenly (for example, to answer a phone call). In this scene, a voice prompt combined with automatically pausing the video gives a better user experience.
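Scenes (2) and (4) together form a pause/resume controller driven by the per-period detection result. The sketch below assumes a `player` object with `pause()`/`resume()` methods and a `paused` flag, and the value of `MISS_LIMIT_X` is illustrative:

```python
MISS_LIMIT_X = 3  # hypothetical X: consecutive detection periods without a face

class PresenceController:
    """Pauses playback when the face disappears for X consecutive detection
    periods (scene 2) and resumes when it is detected again (scene 4)."""

    def __init__(self, player):
        self.player = player
        self.misses = 0   # consecutive detection periods without a face

    def on_detection_period(self, face_detected):
        if face_detected:
            if self.player.paused:                 # scene (4): viewer is back
                self.player.resume()
            self.misses = 0
        else:
            self.misses += 1
            if self.misses >= MISS_LIMIT_X and not self.player.paused:
                self.player.pause()                # scene (2): nobody watching
```

Counting consecutive misses rather than reacting to a single missed period makes the controller robust to a momentary detection failure.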
In preset scene (3), the intelligent mobile device stores an angle threshold for each angle of the user's head, and an upper threshold N and a lower threshold M for the ratio of the face area to the screen area. The camera on the device, combined with a face recognition technology, computes the angles of the recognized head pose; from the result, the side-face, head-up and head-down angles are obtained, and if an angle exceeds the set threshold, the viewing is considered invalid and an interactive behavior is performed. The camera, combined with the face recognition technology, also computes the ratio of the recognized face area to the screen area and compares it with the set thresholds N and M: if the ratio is greater than the upper threshold N, the face is judged too close to the screen; if it is smaller than the lower threshold M, the face is judged too far from the screen; and an interactive behavior is performed. According to the face-recognition result for normal viewing, the collected feature key points of the eye region lie almost on one horizontal line, whereas the key points collected from a side-face image form a definite angle; by computing this angle and comparing it with the angle threshold, it can be judged whether the user is in an effective viewing state. For example, a prompt such as "What are you looking at over there?" may be given.
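Both checks in scene (3) are straightforward to express. The sketch below implements the eye-line angle test described above and the face-area/screen-area ratio comparison; the default values for the thresholds N and M are assumptions for illustration:

```python
import math

def eye_line_angle_deg(left_eye, right_eye):
    """Angle of the line through the two eye key points: close to 0 for an
    upright frontal face, larger when the head is tilted or turned."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def distance_verdict(face_area, screen_area, upper_n=0.5, lower_m=0.05):
    """Compare the face/screen area ratio with the thresholds N and M:
    above N the viewer is judged too close, below M too far."""
    ratio = face_area / screen_area
    if ratio > upper_n:
        return "too close"
    if ratio < lower_m:
        return "too far"
    return "ok"
```

Either verdict ("too close", "too far", or an angle above the threshold) would then trigger the matched interactive behavior.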
In preset scene (4), the intelligent mobile device stores a face-recognition detection period. If the camera on the device, combined with a face recognition technology, correctly recognizes a face within one detection period, the user is considered to have returned; the paused video automatically resumes and an interactive behavior is performed. For example, a text prompt such as "Little master, I missed you so much!" may be shown.
In preset scene (5), the player states include: playing started, playing paused, playing finished, buffering, and playback error. For example, the text prompt "Buffering…" may be shown.
In preset scene (6), the intelligent mobile device stores a tilt threshold for the screen's inclination angle. Using the motion sensing of the device's gyroscope, the downward tilt of the screen is detected; when the tilt angle is smaller than the set threshold, the user is judged to be watching at an unsuitable angle, and an interactive behavior is performed. The interactive behaviors in this scene include, but are not limited to, phone vibration, animation, text and voice prompts, for example the text prompt "Are you watching lying down?"
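The tilt check of scene (6) reduces to a single comparison once the downward tilt angle has been read from the device's motion sensors. The threshold value here is an assumed placeholder:

```python
TILT_THRESHOLD_DEG = 30.0   # hypothetical cut-off for the screen's downward tilt

def should_warn_about_posture(screen_tilt_deg):
    """Scene (6): trigger vibration/animation/text/voice when the screen's
    downward tilt, reported by the gyroscope, falls below the threshold
    (e.g. the phone held nearly flat while the user lies down)."""
    return screen_tilt_deg < TILT_THRESHOLD_DEG
```

On Android or iOS the tilt angle itself would come from the platform's sensor APIs; this sketch only shows the decision rule.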
The face recognition technology mentioned here uses the eigenface method. Its basic idea is to find the basic elements of the distribution of face images, namely the eigenvectors of the covariance matrix of a set of face image samples, and to use them to represent face images approximately; these eigenvectors are called eigenfaces. The eigenfaces in fact reflect the information hidden in the face sample set and the structural relationships of the face. The eigenvectors of the covariance matrices of the eye, cheek and jaw sample sets are called eigen-eyes, eigen-jaws and eigen-lips, collectively the eigen sub-faces. The eigen sub-faces span a subspace of the image space, called the sub-face space. The projection distance of a test image window onto the sub-face space is computed; if the window satisfies the threshold comparison condition, it is judged to be a face.
In the eigenface method, the size, position and relative distances of facial contour features such as the iris, the sides of the nose and the corners of the mouth are first determined; their geometric feature quantities are then computed, and these quantities form a feature vector describing the face. The core of the technology is in fact local feature analysis of the face, and finally a principal-component subspace is constructed from a set of face training images. According to statistics, a correct recognition rate of 95% was obtained on 3000 images of 200 people. The invention uses the collected eigenfaces of tens of millions of children aged 6-12 and trains the model on them, greatly enlarging the overall base of the eigenface database and raising the recognition rate within this age range to 99.9%.
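The eigenface computation described above (eigenvectors of the sample covariance matrix, then a projection-distance test) can be sketched in a few lines. The training data and the notion of a "window" below are synthetic placeholders; a real system would use flattened grayscale face crops:

```python
import numpy as np

def train_eigenfaces(X, k):
    """X: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k eigenfaces, computed as the leading
    right singular vectors of the centered data (equivalently, eigenvectors
    of the sample covariance matrix)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def face_distance(window, mean, eigenfaces):
    """Projection distance between an image window and the face subspace:
    small distance -> the window resembles the training faces."""
    centered = window - mean
    coeffs = eigenfaces @ centered               # project into the subspace
    reconstruction = eigenfaces.T @ coeffs       # back-project
    return float(np.linalg.norm(centered - reconstruction))
```

A window is accepted as a face when `face_distance` falls below a threshold tuned on the training set, mirroring the threshold comparison condition in the description.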
The invention brings a more interesting and humanized playing experience: without interfering with viewing, the whole playing process becomes livelier and less dry, and the user's interest in watching is stimulated, especially for videos with a weak plot (such as instructional videos). At the same time, the abilities to pause and resume automatically, together with friendly prompts in various states, greatly improve the care shown to the user, and the intelligence and interactivity of playback raise the viewing experience to an unprecedented level.

Claims (9)

1. A scene-based mobile video intelligent playing interaction control method, characterized in that an intelligent mobile device combines preset scenes with the recognition of external objects and the perception of internal states to automatically match and output interactive behaviors, realizing intelligent control of video playing, and specifically comprising the following preset scenes:
(1) the user watches the video continuously for longer than a set time, which triggers an interactive behavior;
(2) during video playing, the mobile device detects that the viewer's face has disappeared, which triggers an interactive behavior;
(3) during video playing, the user is not watching the video in a correct posture, which triggers an interactive behavior;
wherein the intelligent mobile device stores an angle threshold for each angle of the user's head, and an upper threshold N and a lower threshold M for the ratio of the face area to the screen area; the camera on the intelligent mobile device, combined with a face recognition technology, computes the angles of the recognized head pose, from which the side-face, head-up and head-down angles are obtained, and if an angle exceeds the set threshold, the viewing is considered invalid and an interactive behavior is performed; the camera, combined with the face recognition technology, also computes the ratio of the recognized face area to the screen area and compares it with the set thresholds N and M: if the ratio is greater than the upper threshold N, the face is judged too close to the screen; if it is smaller than the lower threshold M, the face is judged too far from the screen; and an interactive behavior is performed;
(4) during video playing, the face is detected again after having disappeared, which triggers an interactive behavior;
(5) the state of the player is sensed and different interactive behaviors are generated for different states;
(6) the screen angle of the intelligent mobile device is recognized, which triggers an interactive behavior;
wherein the intelligent mobile device is a smartphone, tablet computer or notebook computer equipped with a camera.
2. The scene-based mobile video intelligent playing interaction control method according to claim 1, characterized in that in preset scene (1), the intelligent mobile device stores a duration threshold for continuous viewing and an angle threshold for each angle of the user's head; first, the camera on the intelligent mobile device, combined with a face recognition technology, computes the angles of the recognized head pose, from which the side-face, head-up and head-down angles are obtained; viewing within the set angle thresholds is considered effective, so whether the user's viewing posture is in an effective viewing state is judged accurately; if it is, the viewing duration is recorded and tracked; if not, the user's viewing posture continues to be judged by face recognition; the viewing duration is then computed by continuously recognizing and tracking the face, and if the duration of continuous effective viewing reaches the set threshold, an interactive behavior is performed; if the effective viewing state is not continuous, the user's viewing posture again continues to be judged by face recognition.
3. The scene-based mobile video intelligent playing interaction control method as claimed in claim 1, wherein in preset scene (2), a detection period for face recognition and a detection amount X limiting the number of detection periods are set in the intelligent mobile device. First, it is determined that the user's face is detected by the camera on the intelligent mobile device in combination with the face recognition technology; then, if no face can be detected within a set detection period, and none can be detected for X consecutive detection periods, it is considered that no one is watching the video, an interaction behavior is performed, and the video is automatically paused at the same time.
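The "X consecutive empty detection periods" rule of claim 3 amounts to a miss counter that resets whenever a face is seen. A minimal sketch, with the function name and input representation assumed:

```python
def should_pause(period_results, x):
    # period_results: one boolean per detection period (True = face seen).
    # Returns True once X consecutive periods pass without a face,
    # i.e. nobody is judged to be watching and playback should pause.
    misses = 0
    for face_detected in period_results:
        misses = 0 if face_detected else misses + 1
        if misses >= x:
            return True
    return False
```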
4. The scene-based mobile video intelligent playing interaction control method as claimed in claim 1, wherein in preset scene (4), a detection period for face recognition is set in the intelligent mobile device; when, within one detection period, a face is correctly recognized by the camera on the intelligent mobile device in combination with the face recognition technology, the user is considered to have returned, the paused video automatically resumes playing, and an interaction behavior is performed.
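Claims 3 and 4 together describe a pause/resume cycle driven by per-period face detection. The sketch below combines the two; the class, its event strings and the threshold parameter are illustrative assumptions:

```python
class PlaybackController:
    # Minimal sketch: pause after X consecutive face-less detection
    # periods (claim 3), resume in the first period where a face is
    # correctly recognized again (claim 4).
    def __init__(self, x):
        self.x = x
        self.misses = 0
        self.paused = False
        self.events = []

    def on_detection_period(self, face_detected):
        if face_detected:
            self.misses = 0
            if self.paused:
                self.paused = False
                self.events.append("resume+interact")  # user returned
        else:
            self.misses += 1
            if not self.paused and self.misses >= self.x:
                self.paused = True
                self.events.append("pause+interact")   # nobody watching
```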
5. The scene-based mobile video intelligent playing interaction control method as claimed in claim 1, wherein in preset scene (5), the player state includes video playback start, video playback pause, video playback end, video buffering and video playback exception.
6. The scene-based mobile video intelligent playing interaction control method as claimed in claim 1, wherein in preset scene (6), a tilt threshold for the tilt angle of the screen is set in the intelligent mobile device; the downward tilt angle of the screen is detected using the motion sensing of a gyroscope on the intelligent device, and when the tilt angle is smaller than the set tilt threshold, it is determined that the angle is unsuitable for the user to watch, and an interaction behavior is performed.
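The tilt check of claim 6 is a single threshold comparison on the sensed angle. A minimal sketch, assuming the gyroscope-derived downward tilt is available in degrees (the function name and default threshold are not from the patent):

```python
def is_angle_suitable(tilt_degrees, tilt_threshold=20.0):
    # tilt_degrees: downward tilt of the screen derived from the
    # device's gyroscope. Per claim 6, an angle smaller than the
    # threshold is judged unsuitable for watching, which would
    # trigger an interaction behavior.
    return tilt_degrees >= tilt_threshold
```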
7. The scene-based mobile video intelligent playing interaction control method as claimed in claim 1, wherein an interaction database is arranged in the intelligent mobile device; the interaction database stores interaction types, interaction behaviors and response data, the interaction types correspond to the preset scenes, and the interaction behaviors correspond to the response data, wherein each interaction type contains a plurality of interaction behaviors, and the interaction behavior selected within each interaction type is random.
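The interaction database of claim 7 can be sketched as a mapping from interaction type to candidate (behavior, response data) pairs, with a random pick per type. The type names and response-data entries below are illustrative, not taken from the patent:

```python
import random

INTERACTION_DB = {
    # interaction type (one per preset scene) -> candidate behaviors;
    # each behavior is paired with its response data. All names and
    # file references here are hypothetical examples.
    "long_viewing": [("voice", "rest_reminder.mp3"),
                     ("animation", "stretch.gif")],
    "no_viewer":    [("text", "Paused - nobody is watching"),
                     ("vibration", "short_buzz")],
}

def pick_behavior(interaction_type):
    # Per claim 7, the behavior chosen within a type is random.
    return random.choice(INTERACTION_DB[interaction_type])
```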
8. The scene-based mobile video intelligent playing interaction control method as claimed in any one of claims 1 to 7, wherein the interaction behavior comprises one or more of animation, voice, picture, text or vibration for controlling the start, pause, resumption, end and display of the video.
9. The scene-based mobile video intelligent playing interaction control method as claimed in claim 8, wherein the pictures comprise still pictures and GIF pictures.
CN201710212145.1A 2017-04-01 2017-04-01 Scene-based mobile video intelligent playing interaction control method Active CN106911962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710212145.1A CN106911962B (en) 2017-04-01 2017-04-01 Scene-based mobile video intelligent playing interaction control method

Publications (2)

Publication Number Publication Date
CN106911962A CN106911962A (en) 2017-06-30
CN106911962B true CN106911962B (en) 2020-03-13

Family

ID=59194381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710212145.1A Active CN106911962B (en) 2017-04-01 2017-04-01 Scene-based mobile video intelligent playing interaction control method

Country Status (1)

Country Link
CN (1) CN106911962B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107396151B (en) * 2017-08-24 2019-06-07 维沃移动通信有限公司 A kind of video playing control method and electronic equipment
CN110162232A (en) * 2018-02-11 2019-08-23 中国移动通信集团终端有限公司 Screen display method, device, equipment and storage medium with display screen
CN109104630A (en) * 2018-08-31 2018-12-28 北京优酷科技有限公司 Video interaction method and device
CN109213562B (en) * 2018-09-18 2022-03-11 北京金山安全软件有限公司 Control method, device and equipment of intelligent equipment and storage medium
CN109889901A (en) * 2019-03-27 2019-06-14 深圳创维-Rgb电子有限公司 Control method for playing back, device, equipment and the storage medium of playback terminal
CN113891157A (en) * 2021-11-11 2022-01-04 百度在线网络技术(北京)有限公司 Video playing method, video playing device, electronic equipment, storage medium and program product
CN114125540A (en) * 2021-11-11 2022-03-01 百度在线网络技术(北京)有限公司 Video playing method, video playing device, electronic equipment, storage medium and program product
CN113905191B (en) * 2021-11-15 2024-02-06 深圳市华瑞安科技有限公司 Intelligent interaction education tablet computer and interaction method
CN114866693B (en) * 2022-04-15 2024-01-05 苏州清睿智能科技股份有限公司 Information interaction method and device based on intelligent terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369274A (en) * 2013-06-28 2013-10-23 青岛歌尔声学科技有限公司 Intelligent television regulating system and television regulating method thereof
CN103747331A (en) * 2013-12-23 2014-04-23 乐视致新电子科技(天津)有限公司 Interactive method of watching videos and device thereof
CN104090656A (en) * 2014-06-30 2014-10-08 潘晓丰 Eyesight protecting method and system for smart device
CN104808946A (en) * 2015-04-29 2015-07-29 天脉聚源(北京)传媒科技有限公司 Image playing and controlling method and device
CN105657500A (en) * 2016-01-26 2016-06-08 广东欧珀移动通信有限公司 Video playing control method and device
CN105872757A (en) * 2016-03-24 2016-08-17 乐视控股(北京)有限公司 Method and apparatus for reminding safe television watching distance

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8154615B2 (en) * 2009-06-30 2012-04-10 Eastman Kodak Company Method and apparatus for image display control according to viewer factors and responses

Also Published As

Publication number Publication date
CN106911962A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
CN106911962B (en) Scene-based mobile video intelligent playing interaction control method
CN1905629B (en) Image capturing apparatus and image capturing method
US9025830B2 (en) Liveness detection system based on face behavior
US9288388B2 (en) Method and portable terminal for correcting gaze direction of user in image
CN107911736B (en) Live broadcast interaction method and system
JP5474062B2 (en) Content reproduction apparatus, content reproduction method, program, and integrated circuit
EP3264222B1 (en) An apparatus and associated methods
KR20190020779A (en) Ingestion Value Processing System and Ingestion Value Processing Device
TWI255141B (en) Method and system for real-time interactive video
US8958686B2 (en) Information processing device, synchronization method, and program
KR20140138798A (en) System and method for dynamic adaption of media based on implicit user input and behavior
CN115599219B (en) Eye protection control method, system and equipment for display screen and storage medium
WO2023273500A1 (en) Data display method, apparatus, electronic device, computer program, and computer-readable storage medium
CN109154862B (en) Apparatus, method, and computer-readable medium for processing virtual reality content
JP4911472B2 (en) Output device, output method, and program
CN102542300B (en) Method for automatically recognizing human body positions in somatic game and display terminal
KR101789153B1 (en) VR-based method for providing immersive eye movement of EMDR
US20210058609A1 (en) Information processor, information processing method, and program
US20150271465A1 (en) Audio/video system with user analysis and methods for use therewith
CN111444789A (en) Myopia prevention method and system based on video induction technology
Peng et al. A user experience model for home video summarization
US20220335246A1 (en) System And Method For Video Processing
US10946242B2 (en) Swing analysis device, swing analysis method, and swing analysis system
EP1944700A1 (en) Method and system for real time interactive video
CN114694200A (en) Method for supervising healthy film watching of children and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Jin

Inventor after: Huang Feifei

Inventor after: Jiang Jun

Inventor before: Xu Jin

Inventor before: Huang Feifei
