CN105760141A - Multi-dimensional control method, intelligent terminal and controllers - Google Patents

Multi-dimensional control method, intelligent terminal and controllers

Info

Publication number
CN105760141A
Authority
CN
China
Prior art keywords
intelligent terminal
controller
video
scene information
motion estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610206745.2A
Other languages
Chinese (zh)
Other versions
CN105760141B (en)
Inventor
赵秋林
黄宇轩
刘成刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201610206745.2A priority Critical patent/CN105760141B/en
Publication of CN105760141A publication Critical patent/CN105760141A/en
Priority to PCT/CN2017/079444 priority patent/WO2017173976A1/en
Application granted granted Critical
Publication of CN105760141B publication Critical patent/CN105760141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a multi-dimensional experience method, an intelligent terminal and controllers. The method includes: the intelligent terminal analyzes the currently played video content it receives and identifies the scene information corresponding to the video content; the intelligent terminal sends the scene information to the controllers, so that the controllers start multi-dimensional control according to the scene information. With this technical scheme, the intelligent terminal detects the audio and video to identify the currently played video scene, and the various controllers, controlled according to the various identified scenes, reconstruct the currently played scene, so that a multi-dimensional experience effect is added to the presented content in real time, and the scheme is suitable for ordinary families.

Description

Method, intelligent terminal and controller for realizing multi-dimensional control
Technical field
The present invention relates to, but is not limited to, smart technologies, and in particular to a method for realizing multi-dimensional control, an intelligent terminal and a controller.
Background technology
When a user watches TV or a film, effects such as vibration, wind, smoke, bubbles, scents, scenery and live performance can be introduced and simulated, forming a unique form of presentation. These on-site special effects, closely combined with the plot, build an environment consistent with the film content and allow the audience to experience a brand-new entertainment effect through multiple senses: sight, smell, hearing and touch.
However, at present this multi-dimensional user experience can only be enjoyed with special films, because the control instructions for the multi-dimensional experience have been synchronized with the film in advance; for example, control instructions are sent to the corresponding controllers at the corresponding projection time points, so that the controllers produce effects such as vibration, wind, smoke, bubbles, scents, scenery and live performance. In other words, the realization of this brand-new entertainment effect is currently limited and not readily usable by ordinary families.
Summary of the invention
The present invention provides a method for realizing multi-dimensional control, an intelligent terminal and a controller, which can add a multi-dimensional experience effect to the presented content in real time and are suitable for ordinary families.
In order to achieve the object of the invention, the present invention provides a method for realizing multi-dimensional control, including: an intelligent terminal analyzes the currently played video content it receives, so as to identify the scene information corresponding to the video content;
the intelligent terminal sends the scene information to a controller, so that the controller starts multi-dimensional control according to the scene information.
Alternatively, analyzing the obtained video content and identifying the scene information includes:
when the intelligent terminal plays the video, sampling and analyzing video frames to search for candidate objects: for each sampled frame, obtaining the motion estimation vectors, and delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
the intelligent terminal continuously detecting the key frames in the currently played video frames; if a marked region persists over a preset, relatively long video frame sequence, the intelligent terminal starting to sample and analyze the key frames in this video frame sequence, and locating the candidate objects and their positions in each sampled frame by recognition, so as to identify the scene information.
Alternatively, delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions includes:
using a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors;
delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; objects located outside the marked regions serve as reference objects.
The present invention also provides a method for realizing multi-dimensional control: a controller recognizes, according to the scene information corresponding to the currently played video content it receives, an instruction indicating that it needs to start multi-dimensional experience control, and performs the corresponding control.
Alternatively, the controller is preset with correspondences between different object categories and control information;
recognizing, according to the obtained scene information, the instruction indicating that the controller itself needs to start multi-dimensional experience control includes: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, determining the instruction to start the multi-dimensional experience control.
Alternatively, the controller includes: a vibration controller and/or an odor controller and/or a spray controller and/or a lighting controller and/or a sound controller.
Alternatively, the controllers are deployed in a distributed or centralized manner.
The present invention further provides a method for realizing a multi-dimensional experience, including:
an intelligent terminal analyzes the currently played video content it receives, so as to identify the scene information corresponding to the controller that initiated a request;
the intelligent terminal determines, according to the identified scene information, whether multi-dimensional experience control needs to be started;
when it is determined that multi-dimensional experience control needs to be started, the corresponding control information is delivered to the corresponding controller.
Alternatively, before the intelligent terminal analyzes the obtained video content, the method also includes:
the intelligent terminal listens for query commands from one or more controllers, and returns its own device description information to the controller that initiated the query request;
the controller that receives the query response initiates a session to the intelligent terminal as a client, and a session is established between the intelligent terminal and the controller.
Alternatively, analyzing the obtained video content and identifying the scene information corresponding to the controller that initiated the request includes:
when the intelligent terminal plays the video, sampling and analyzing video frames to search for candidate objects: for each sampled frame, obtaining the motion estimation vectors, and delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
continuously detecting the key frames in the obtained video frames; if a marked region persists over a preset, relatively long video frame sequence, starting to sample and analyze the key frames in this video frame sequence, and locating in each sampled frame the candidate objects, and their positions, that are relevant to the controller that initiated the query and established the session, so as to identify the scene information corresponding to that controller.
Alternatively, delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions includes:
using a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors;
delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; objects located outside the marked regions serve as reference objects.
Alternatively, the intelligent terminal is preset with correspondences between different object categories and control information;
the intelligent terminal determining, according to the obtained scene information, whether multi-dimensional experience control needs to be started includes: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, starting the corresponding multi-dimensional experience control and delivering the corresponding control information to the corresponding controller.
The present invention further provides an intelligent terminal, including a first analysis module and a broadcast module; wherein,
the first analysis module is configured to, after the multi-dimensional experience function is started, analyze the currently played video content it receives, so as to identify the scene information corresponding to the video content;
the broadcast module is configured to send the identified scene information to a controller, so that the controller starts multi-dimensional control according to the scene information.
Alternatively, the first analysis module is specifically configured to: when a video is played, sample and analyze video frames and, for each sampled frame, obtain the motion estimation vectors; use a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors; and delimit the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
continuously detect the key frames in the currently played video frames; if a marked region persists over a relatively long video frame sequence, start to sample and analyze the key frames in this video frame sequence, and locate the candidate objects and their positions in each sampled frame by recognition, so as to identify the scene information.
The present invention also provides an intelligent terminal, including a second analysis module and a determination module; wherein,
the second analysis module is configured to, after the multi-dimensional experience function is started, analyze the currently played video content it receives, so as to identify the scene information corresponding to the controller that initiated a request;
the determination module is configured to determine, according to the identified scene information, whether multi-dimensional experience control needs to be started, and, when it is determined that multi-dimensional experience control needs to be started, deliver the corresponding control information to the corresponding controller.
Alternatively, the intelligent terminal also includes an establishment module, configured to listen for query commands from one or more controllers, return the device description information of the intelligent terminal to which it belongs to the controller that initiated the query request, and establish a session with the controller that initiates the session.
Alternatively, the second analysis module is specifically configured to:
when a video is played, sample and analyze video frames and, for each sampled frame, obtain the motion estimation vectors; use a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors; delimit the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; objects located outside the marked regions are called reference objects;
continuously detect the key frames in the currently played video frames; if a marked region persists over a relatively long video frame sequence, start to sample and analyze the frames in this video frame sequence, and locate in each sampled frame the main objects, and their positions, that are relevant to the controller that initiated the query and established the session, so as to identify the scene information corresponding to that controller.
Alternatively, the determination module is specifically configured to: be preset with correspondences between different object categories and control information, and, when an object in the obtained scene information belongs to a preset object category that triggers control and a preset trigger condition is met, start the corresponding multi-dimensional experience control and deliver the corresponding control information to the corresponding controller.
The present invention further provides a controller, including an acquisition module and a control module; wherein,
the acquisition module is configured to obtain the scene information corresponding to the currently played video content;
the control module is configured to, when it is determined according to the obtained scene information that the controller itself needs to start multi-dimensional experience control, perform the corresponding control.
Alternatively, the control module is preset with correspondences between different object categories and control information;
the control module is specifically configured to: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, start the multi-dimensional experience control.
Alternatively, the acquisition module is also configured to: send a query command to query the device information of the intelligent terminal in the current network, and listen for the information broadcast by the intelligent terminal.
Compared with the prior art, the technical scheme of the present application includes: an intelligent terminal analyzes the currently played video content it receives, so as to identify the scene information corresponding to the video content; the intelligent terminal sends the scene information to a controller, so that the controller starts multi-dimensional control according to the scene information. Alternatively, it includes: after the multi-dimensional experience function is started, the intelligent terminal analyzes the currently played video content, so as to obtain the scene information corresponding to the controller that initiated a request; the intelligent terminal determines, according to the obtained scene information, whether multi-dimensional experience control needs to be started; and when it is determined that multi-dimensional experience control needs to be started, the corresponding control information is delivered to the corresponding controller. The technical scheme provided by the present invention uses the intelligent terminal to perform audio and video detection so as to identify the currently played video scene, and controls the various controllers according to the various identified scenes to reconstruct the currently played scene, thereby adding a multi-dimensional experience effect to the presented content in real time and being suitable for ordinary families.
Other features and advantages of the present invention will be set forth in the following description and will partly become apparent from the description or be understood by implementing the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings described herein are used to provide a further understanding of the present invention and constitute a part of the application. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a method for realizing a multi-dimensional experience according to the present invention;
Fig. 2 is a flowchart of another method for realizing a multi-dimensional experience according to the present invention;
Fig. 3 is a schematic diagram of the composition of an intelligent terminal according to the present invention;
Fig. 4 is a schematic diagram of the composition of another intelligent terminal according to the present invention;
Fig. 5 is a schematic diagram of the composition of a controller according to the present invention;
Fig. 6 is a schematic diagram of a networking architecture in which the controllers according to the present invention are deployed in a centralized manner;
Fig. 7 is a schematic diagram of a networking architecture in which the controllers according to the present invention are deployed in a distributed manner.
Detailed description of the invention
In order to make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, as long as there is no conflict, the embodiments in the application and the features in the embodiments may be combined with each other arbitrarily.
Fig. 1 is a flowchart of a method for realizing multi-dimensional control according to the present invention. As shown in Fig. 1, the method includes:
Step 100: the intelligent terminal analyzes the currently played video content it receives, so as to identify the scene information corresponding to the video content.
After the multi-dimensional experience function is started, first, when the intelligent terminal plays a video, it samples and analyzes video frames and attempts to search for candidate objects such as flowers (e.g. corresponding to wind), grass, or molten rock (e.g. corresponding to vibration): for each sampled frame, the motion estimation vectors are obtained, and a classification algorithm such as k-means clustering is used to divide the obtained motion estimation vectors into two classes, macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors. The regions where macroblocks with large motion estimation vectors are concentrated are delimited as marked regions. If the area of a marked region is too small, the marked region is discarded. Objects located outside the marked regions serve as reference objects for the overall background. In this way, the probable regions where the key candidate objects exist have been found. Here, for the whole video, if within a predetermined area such as a rectangular region the proportion of macroblocks with large motion vectors to the total number of macroblocks exceeds a preset threshold such as 80% (adjustable), the region is regarded as a marked region. If the area of a delimited marked region is smaller than a preset proportion of the total area, such as 10% (adjustable), the marked region is discarded.
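For illustration only, the following Python sketch shows one possible way to classify the motion estimation vectors with a two-class k-means and delimit the marked regions described above; the data layout is assumed, and the 80% density and 10% minimum-area thresholds are the adjustable example values given above.

```python
import numpy as np

def split_large_small(mv, iters=20):
    """Two-class k-means on motion-vector magnitudes; returns a boolean 'is large' mask."""
    mag = np.linalg.norm(mv, axis=-1).ravel()
    lo, hi = mag.min(), mag.max()                  # initial centroids
    for _ in range(iters):
        big = np.abs(mag - hi) < np.abs(mag - lo)  # assign each macroblock to the nearest centroid
        hi = mag[big].mean() if big.any() else hi
        lo = mag[~big].mean() if (~big).any() else lo
    return big.reshape(mv.shape[:2])

def marked_regions(is_big, cell=8, density=0.8, min_area=0.1):
    """Delimit cell x cell regions as 'marked' when the share of large-vector
    macroblocks exceeds `density` and the region covers at least `min_area`
    of the frame; smaller regions are discarded."""
    regions = []
    for y in range(0, is_big.shape[0], cell):
        for x in range(0, is_big.shape[1], cell):
            block = is_big[y:y + cell, x:x + cell]
            if block.mean() > density and block.size / is_big.size >= min_area:
                regions.append((y, x, block.shape[0], block.shape[1]))
    return regions

# Toy usage: a random motion field with one fast-moving patch.
mv = np.random.normal(0.0, 0.5, (16, 16, 2))
mv[8:16, 8:16] += 6.0
print(marked_regions(split_large_small(mv)))       # expected: [(8, 8, 8, 8)]
```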
Then, the intelligent terminal continuously detects the key frames (I frames) in the obtained video frames. If a marked region persists over a preset, relatively long video frame sequence, the intelligent terminal starts to sample and analyze the key frames in this video frame sequence, and uses an algorithm such as a neural network to locate the candidate objects and their positions in each sampled frame by recognition, thereby identifying the scene information. In this way, the recognition of the key candidate objects is achieved.
Specifically: if the previously obtained reference object exists in all frames of the currently sampled video frame sequence, a candidate object identified in the marked regions of the video frame sequence is labeled with its candidate object category only when the following conditions are met: 1) the object category exists in the marked regions of all consecutive frames of the video frame sequence; 2) for each object of this category, its position vector relative to the reference object of each video sequence keeps changing. Further, if there is more than one candidate object category, the scene information also records extra parameters such as the object duration, the relative speed of the object position movement, and the object count.
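As a non-limiting sketch of these labeling conditions, under an assumed per-frame data layout: a candidate category is kept only when it appears in every sampled frame and its position relative to the reference object keeps changing.

```python
from collections import defaultdict

def label_candidates(frames, eps=1e-3):
    """frames: per-frame dicts {"ref": (x, y), "objects": [(cls, (x, y)), ...]}."""
    if any(f.get("ref") is None for f in frames):
        return []                                    # reference object lost: abandon the search
    tracks = defaultdict(list)
    for f in frames:
        per_cls = defaultdict(list)
        for cls, (x, y) in f["objects"]:
            per_cls[cls].append((x - f["ref"][0], y - f["ref"][1]))
        for cls, rel in per_cls.items():
            tracks[cls].append(rel)
    labeled = []
    for cls, seq in tracks.items():
        if len(seq) != len(frames):
            continue                                 # condition 1: present in every sampled frame
        # condition 2: the position vector relative to the reference object keeps changing
        # (simplified here to the first detected object of the category in each frame)
        moved = any(abs(a[0][0] - b[0][0]) + abs(a[0][1] - b[0][1]) > eps
                    for a, b in zip(seq, seq[1:]))
        if moved:
            labeled.append(cls)
    return labeled

frames = [
    {"ref": (100, 200), "objects": [("flower", (140, 210))]},
    {"ref": (100, 200), "objects": [("flower", (150, 212))]},
    {"ref": (101, 200), "objects": [("flower", (160, 214))]},
]
print(label_candidates(frames))   # ['flower']
```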
For example, in a specific implementation, the neural network mentioned above may adopt an AlexNet-like structure: 8 layers in total, of which the first 5 are convolutional layers and the last 3 are fully connected layers, with the last layer using a softmax classifier. Specifically, among the first 5 convolutional layers, the 1st layer performs convolution with a specific template and stride, then uses ReLU as the activation function, followed by regularization and pooling, and its output is fed into the 2nd convolutional layer; the following 4 convolutional layers are similar to the 1st layer except that smaller convolution templates are used. In the last 3 fully connected layers, ReLU is followed by dropout before the next fully connected layer; finally, the softmax loss is used as the loss function.
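One possible rendering of such an AlexNet-like network is sketched below in PyTorch; the framework choice and the exact layer sizes are assumptions, since the description above only fixes the layer counts, ReLU activations, pooling, dropout and the softmax loss.

```python
import torch
import torch.nn as nn

class AlexNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(              # 5 convolutional layers
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, 2),
        )
        self.classifier = nn.Sequential(             # 3 fully connected layers
            nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),             # softmax is applied inside the loss
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Softmax loss, as described above (CrossEntropyLoss = log-softmax + NLL).
model, loss_fn = AlexNetLike(), nn.CrossEntropyLoss()
logits = model(torch.randn(2, 3, 224, 224))
print(loss_fn(logits, torch.tensor([1, 3])))
```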
In this step, if the previously obtained reference object does not exist in the currently sampled video frame sequence, this search is abandoned and the flow ends.
For example: if the neural network detects a large sea of flowers in the current picture, the edge contours of the flowers can be found; if it is also detected that the flowers sway to the right with a large amplitude, it can be inferred from the swaying direction that the wind blows from left to right, and the air-out grade can be calculated from the swaying amplitude. If, at the same time, persons are detected in the picture, the positions and number of the persons are marked, and the relative movement speed between persons is found from multiple frames. The information obtained in this way is the scene information needed in this step.
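By way of example only, a toy sketch of this wind inference; the normalization and the grade scale are assumptions, since the description only states that the direction follows the swaying direction and the grade follows the swaying amplitude.

```python
def infer_wind(swing_px_per_frame, frame_width):
    """swing_px_per_frame > 0 means the flowers lean right, < 0 means left."""
    direction = "left-to-right" if swing_px_per_frame > 0 else "right-to-left"
    amplitude = abs(swing_px_per_frame) / frame_width   # normalize the swing to the frame width
    grade = min(5, max(1, int(amplitude * 50) + 1))     # map the amplitude to an assumed 1..5 fan grade
    return direction, grade

print(infer_wind(swing_px_per_frame=24, frame_width=1920))  # ('left-to-right', 1)
```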
Step 101: the intelligent terminal sends the identified scene information to the controller, so that the controller starts multi-dimensional control according to the scene information.
The intelligent terminal sends the identified scene information to the controller, for example by broadcasting the identified scene information. For the example listed above, the scene information may include: the kind of flower and the approximate number of flowers; the wind direction and wind grade; the number of persons and their relative movement speed.
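As an illustrative sketch, such scene information could be broadcast on a home network as follows; UDP broadcast with a JSON payload, the port number and the field names are assumptions.

```python
import json, socket

scene_info = {
    "flower": {"kind": "osmanthus", "approx_count": 20},
    "wind": {"direction": "left-to-right", "grade": 2},
    "persons": {"count": 3, "relative_speed": 0.4},
}

# Broadcast the identified scene information to all controllers on the local network.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(json.dumps(scene_info).encode("utf-8"), ("255.255.255.255", 50000))
sock.close()
```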
Here, the control information is used by a controller that needs to start multi-dimensional experience control to perform the corresponding control.
For each controller, the method also includes: the controller recognizes, according to the scene information corresponding to the currently played video content it receives, that it itself needs to start multi-dimensional experience control, and performs the corresponding control.
The controllers in the present invention may include, but are not limited to: a vibration controller and/or an odor controller and/or a spray controller and/or a lighting controller and/or a sound controller, etc.
The controllers may be deployed in a distributed or centralized manner. With distributed deployment, each controller communicates with the intelligent terminal; with centralized deployment, the controllers may be arranged in one device, such as a wearable device, which is more convenient for the user experience. The controllers and the intelligent terminal may communicate via Ethernet, WiFi, Bluetooth or the like.
In this step, a controller is preset with correspondences between different object categories and control information; when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, the instruction to start the corresponding multi-dimensional experience control is determined.
For example, for the vibration controller, the correspondence may be set as: when an object in the obtained scene information belongs to an object category that triggers vibration, such as rocks, and a trigger condition is met, e.g. the number of objects is greater than 1 and the speed exceeds 1/8 of the screen per second for more than 3 seconds, the vibration controller is started to trigger a vibration effect;
for another example, for the odor controller, the correspondence may be set as: when an object in the obtained scene information belongs to an object category that triggers an odor, such as osmanthus flowers, and a trigger condition is met, e.g. the duration of appearance exceeds 6 seconds and the quantity exceeds 10, the odor controller is started to release a scent with osmanthus fragrance;
and for another example, for the sound controller, the correspondence may be: when an object in the obtained scene information belongs to an object category that triggers sound, e.g. a person appears in the picture, and a trigger condition such as the position, moving direction and moving speed of the person is met, the sound controller is started to trigger footsteps that grow or fade with the person's moving direction.
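Such correspondences can be expressed as simple controller-side rules, as sketched below; the field names are assumptions, and the numeric thresholds are the adjustable example values given above.

```python
def should_vibrate(scene):
    """Vibration rule: rocks, count > 1, speed > 1/8 screen per second, for more than 3 s."""
    rock = scene.get("rock")
    return bool(rock) and rock["count"] > 1 \
        and rock["speed_screens_per_s"] > 1 / 8 and rock["duration_s"] > 3

def should_release_scent(scene):
    """Odor rule: flowers present for more than 6 s and more than 10 of them."""
    flower = scene.get("flower")
    return bool(flower) and flower["duration_s"] > 6 and flower["count"] > 10

scene = {"flower": {"kind": "osmanthus", "count": 20, "duration_s": 8.0}}
print(should_vibrate(scene), should_release_scent(scene))   # False True
```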
Fig. 2 is a flowchart of another method for realizing multi-dimensional control according to the present invention. As shown in Fig. 2, the method includes:
Step 200: the intelligent terminal analyzes the currently played video content it receives, so as to identify the scene information corresponding to the controller that initiated a request.
Before this step, the method also includes: after starting, one or more controllers send a query command to the intelligent terminal to query the device information of the intelligent terminal in the current network, and listen for the information broadcast by the intelligent terminal;
the intelligent terminal, acting as a convergence point, listens for queries from the controllers and, when a query is heard, returns its own device description information to the controller that initiated the query request;
the controller that receives the query response initiates a session to the intelligent terminal as a client, and a session is established between the intelligent terminal and the controller.
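A simplified sketch of this discovery and session-establishment step follows; the port number, the query message and the JSON device description are assumptions, since the description only specifies a query command, a device-description response and a client-initiated session.

```python
import json, socket

def terminal_discovery_loop(description, udp_port=50001, once=True):
    """Terminal side: answer controller queries with the terminal's device description."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", udp_port))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"QUERY":
            sock.sendto(json.dumps(description).encode("utf-8"), addr)
        if once:
            break
    sock.close()

def controller_query(udp_port=50001, timeout=2.0):
    """Controller side: broadcast a query and collect the first response;
    the controller would then open a session to the responding address as a client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b"QUERY", ("255.255.255.255", udp_port))
    try:
        data, addr = sock.recvfrom(4096)
        return json.loads(data), addr
    except socket.timeout:
        return None, None
    finally:
        sock.close()
```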
The specific implementation of this step is consistent with step 100, with the difference that here the intelligent terminal collects the corresponding scene information for the request of a particular controller. For example, if the query request was initiated by a vibration controller, the intelligent terminal only identifies object categories that trigger vibration, such as rocks; that is, the objects in the scene information returned at this point only include object categories that trigger vibration.
Step 201: the intelligent terminal determines, according to the identified scene information, whether multi-dimensional experience control needs to be started.
In this step, the intelligent terminal is preset with correspondences between different object categories and control information; when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, the corresponding multi-dimensional experience control is started.
The specific implementation of this step is consistent with the description above and is not repeated here.
Step 202: when it is determined that multi-dimensional experience control needs to be started, the corresponding control information is delivered to the corresponding controller.
In this step, the intelligent terminal directly delivers the final control information to the controller, and the controller only needs to start and trigger the corresponding action according to the received control instruction.
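As a minimal sketch of this delivery step as seen by the controller; the JSON command format and the actuator names are assumptions.

```python
import json

def handle_command(raw, actuators):
    """Controller side: trigger the actuator named in the received control instruction."""
    cmd = json.loads(raw)
    action = actuators.get(cmd["controller"])
    if action:
        action(**cmd.get("params", {}))

actuators = {"vibration": lambda duration_s=1.0: print(f"vibrate for {duration_s}s")}
handle_command(json.dumps({"controller": "vibration",
                           "params": {"duration_s": 3}}), actuators)
```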
Fig. 3 is a schematic diagram of the composition of an intelligent terminal according to the present invention. As shown in Fig. 3, it at least includes a first analysis module and a broadcast module; wherein,
the first analysis module is configured to, after the multi-dimensional experience function is started, analyze the currently played video content it receives, so as to identify the scene information corresponding to the video content;
the broadcast module is configured to send the identified scene information to a controller, so that the controller starts multi-dimensional control according to the scene information.
Here, the first analysis module is specifically configured to:
when a video is played, sample and analyze video frames and attempt to search for candidate objects, i.e. for each sampled frame, obtain the motion estimation vectors; use a classification algorithm such as k-means clustering to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors; delimit the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; discard a marked region if its area is too small; objects located outside the marked regions are called reference objects;
continuously detect the key frames in the currently played video frames; if a marked region persists over a preset, relatively long video frame sequence, start to sample and analyze the key frames in this video frame sequence, and use an algorithm such as a neural network to locate the candidate objects and their positions in each sampled frame by recognition, thereby obtaining the scene information.
Fig. 4 is a schematic diagram of the composition of another intelligent terminal according to the present invention. As shown in Fig. 4, it at least includes a second analysis module and a determination module; wherein,
the second analysis module is configured to, after the multi-dimensional experience function is started, analyze the currently played video content it receives, so as to identify the scene information corresponding to the controller that initiated a request;
the determination module is configured to determine, according to the identified scene information, whether multi-dimensional experience control needs to be started, and, when it is determined that multi-dimensional experience control needs to be started, deliver the corresponding control information to the corresponding controller.
The intelligent terminal shown in Fig. 4 also includes an establishment module configured to: listen for query commands from one or more controllers, return the device description information of the intelligent terminal to which it belongs to the controller that initiated the query request, and establish a session with the controller that initiates the session.
Here, the second analysis module is specifically configured to:
continuously detect the key frames in the currently played video frames; if a marked region persists over a preset, relatively long video frame sequence, start to sample and analyze the key frames in this video frame sequence, and use an algorithm such as a neural network to locate in each sampled frame the candidate objects, and their positions, that are relevant to the controller that initiated the query and established the session, thereby identifying the scene information corresponding to that controller.
The determination module is specifically configured to: be preset with correspondences between different object categories and control information, and, when an object in the obtained scene information belongs to a preset object category that triggers control and a preset trigger condition is met, start the corresponding multi-dimensional experience control and deliver the corresponding control information to the corresponding controller.
Fig. 5 is a schematic diagram of the composition of a controller according to the present invention. As shown in Fig. 5, it at least includes an acquisition module and a control module; wherein,
the acquisition module is configured to obtain the scene information corresponding to the currently played video content;
the control module is configured to, when it is determined according to the obtained scene information that the controller itself needs to start multi-dimensional experience control, perform the corresponding control.
Here, the control module is preset with correspondences between different object categories and control information; the control module is specifically configured to: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, start the multi-dimensional experience control.
Here, the acquisition module is also configured to: send a query command to query the device information of the intelligent terminal in the current network, and listen for the information broadcast by the intelligent terminal.
The present invention is described in detail below with reference to specific embodiments.
Fig. 6 is a schematic diagram of a networking architecture in which the controllers according to the present invention are deployed in a centralized manner. As shown in Fig. 6, in the first embodiment it is assumed that the controllers are deployed in a centralized manner, e.g. arranged in one wearable device. In the first embodiment, the query request is initiated by the vibration controller, and the intelligent terminal determines whether the vibration controller should be started to trigger a vibration effect. The first embodiment specifically includes:
First, after the vibration controller starts, it sends a query command to the intelligent terminal to query the device description information of the intelligent terminal in the current network, and listens for the broadcast information of the intelligent terminal. The intelligent terminal, acting as a convergence point, reads its own device description information when it hears the vibration controller's query and returns it to the vibration controller through a query response. The vibration controller initiates a session as a client, and the intelligent terminal receives the session request and establishes a session between itself and the vibration controller.
Then, when the intelligent terminal plays a video, it first samples and analyzes video frames and attempts to search for candidate objects, i.e. for each sampled frame the motion estimation vectors are obtained. A classification algorithm is used to divide the obtained motion estimation vectors of the video frame into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors. The regions where macroblocks with large motion estimation vectors are concentrated are delimited as marked regions. If the area of a marked region is too small, the marked region is discarded. Objects located outside the marked regions are called reference objects.
If a marked region persists over a relatively long video frame sequence, the frames in this video frame sequence are sampled and analyzed, and an algorithm such as a neural network is used to locate the main objects and their positions in each sampled frame. For example, in a specific implementation, this neural network may adopt an AlexNet-like structure: 8 layers in total, of which the first 5 are convolutional layers and the last 3 are fully connected layers, with the last layer using a softmax classifier. Specifically, among the first 5 convolutional layers, the 1st layer performs convolution with a specific template and stride, then uses ReLU as the activation function, followed by regularization and pooling, and its output is fed into the 2nd convolutional layer; the following 4 convolutional layers are similar to the 1st layer except that smaller convolution templates are used. In the last 3 fully connected layers, ReLU is followed by dropout before the next fully connected layer; finally, the softmax loss is used as the loss function.
Then, if the previously obtained reference object exists in all frames of the currently sampled video frame sequence, a candidate object identified in the marked regions of the video frame sequence is labeled with its candidate object category only when the following conditions are met: 1) the object category exists in the marked regions of all consecutive frames of the video frame sequence; 2) for each object of this category, its position vector relative to the reference object of each video sequence keeps changing. Further, if there is more than one candidate object category, the scene information also records extra parameters such as the object duration, the relative speed of the object position movement, and the object count.
In the first embodiment, the intelligent terminal holds correspondences between different object categories and control information; when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, the corresponding multi-dimensional experience control is started. In the first embodiment it is assumed that several correspondences for triggering vibration are preset for the vibration controller: each trigger item specifies the triggering object category and the trigger condition, and the vibration effect is triggered when a trigger item is satisfied. For example, the correspondence may be set as: when an object in the obtained scene information belongs to an object category that triggers vibration, such as rocks, and a trigger condition is met, e.g. the number of objects is greater than 1 and the speed exceeds 1/8 of the screen per second for more than 3 seconds, the vibration controller is started to trigger a vibration effect.
Finally, in the first embodiment, the intelligent terminal only needs to deliver the corresponding control information, namely triggering the vibration effect, to the vibration controller.
In the second embodiment, taking the odor controller as an example, it is assumed that the intelligent terminal determines whether the odor controller needs to be started to release a scent, and then sends the generated control command to the odor controller. The second embodiment specifically includes:
First, after the odor controller starts, it sends a query command to the intelligent terminal to query the device description information of the intelligent terminal in the current network, and listens for the broadcast information of the intelligent terminal. The intelligent terminal, acting as a convergence point, reads its own device description information when it hears the odor controller's query and returns it to the odor controller through a query response. The odor controller initiates a session as a client, and the intelligent terminal receives the session request and establishes a session between itself and the odor controller.
Then, in the second embodiment, according to the object classification in the scene, the intelligent terminal needs to produce certain ambient odors under certain scenes to enrich the user experience; correspondingly, the recognizable objects and their corresponding odors are preset.
When the intelligent terminal plays a video, it samples one out of every several key frames in the video frames and uses an algorithm such as a convolutional neural network to recognize that a large number of flowers exist in the sampled frame and persist for quite a long time. The specific implementation is consistent with the first embodiment and is not repeated here.
In the second embodiment, the intelligent terminal holds correspondences between different scene information and control information; when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, the corresponding multi-dimensional experience control is started. In the second embodiment it is assumed that several correspondences for triggering fragrance are preset for the odor controller: each trigger item specifies the triggering object category and the trigger condition, and the odor effect is triggered when a trigger item is satisfied. For example: when an object in the obtained scene information belongs to an object category that triggers an odor, such as osmanthus flowers, and a trigger condition is met, e.g. the duration of appearance exceeds 6 seconds and the quantity exceeds 10, the odor controller is started to release a scent with osmanthus fragrance.
Finally, in the second embodiment, the intelligent terminal only needs to deliver the corresponding control information, namely releasing the scent with osmanthus fragrance, to the odor controller.
Fig. 7 is a schematic diagram of a networking architecture in which the controllers according to the present invention are deployed in a distributed manner. As shown in Fig. 7, in the third embodiment it is assumed that the controllers are deployed in a distributed manner. In the third embodiment, the intelligent terminal only needs to identify the configured object categories and broadcast the identified scene information, and each individual controller determines whether the scene information within its own scope of control requires the controller to be started to trigger a multi-dimensional effect. The third embodiment specifically includes:
First, the key frames in the currently played video frames are continuously detected. For example, if the neural network detects a large sea of flowers in the current picture and, after the edge contours of the flowers are found, also detects that the flowers sway to the right with a large amplitude, it can be inferred from the swaying direction that the wind blows from left to right, and the air-out grade can be calculated from the swaying amplitude. If, at the same time, persons are detected in the picture, the positions and number of the persons are marked, and the relative movement speed between persons is found from multiple frames. The information obtained in this way is the scene information.
Then, the intelligent terminal broadcasts the obtained scene information, namely the kind of flower and the approximate number of flowers; the wind direction and wind grade; and the number of persons and their relative movement speed.
Then, the processing for each controller is as follows:
Each wind controller decides, according to its own position, the obtained scene information and the correspondences between different scene information and control information, whether to trigger blowing and at what wind strength. For example: the scene information says the wind blows from left to right; if the wind controller is located on the left, it blows with the wind strength given in the scene information; if the wind controller is located on the right, it does not need to trigger blowing.
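An illustrative sketch of this local decision for a wind controller follows; the position encoding and the grade values are assumptions.

```python
def decide_blow(scene_wind, my_position):
    """scene_wind: {"direction": "left-to-right", "grade": 2}; my_position: 'left' or 'right'.
    Returns the wind strength to blow at, or 0 when this controller sits downwind."""
    upwind = "left" if scene_wind["direction"] == "left-to-right" else "right"
    return scene_wind["grade"] if my_position == upwind else 0

print(decide_blow({"direction": "left-to-right", "grade": 2}, "left"))   # 2
print(decide_blow({"direction": "left-to-right", "grade": 2}, "right"))  # 0
```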
Each fragrance controller triggers, according to the obtained scene information and the preset correspondences between different scene information and control information, the odor controller to release the fragrance of the flower category in the corresponding scene information.
Each sound controller selects, according to the obtained scene information, the corresponding background sound, such as the rustle of leaves in the wind. According to the movement speed and direction of the persons in the scene information, the preset correspondences between different scene information and control information, and the sound channel corresponding to the sound controller itself, the sound controller selects footsteps of stronger, weaker or gradually changing volume, and outputs the background sound superposed with the footsteps, thereby completing the sound output of this channel.
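A rough sketch of the superposition performed by the sound controller for one channel; the gain mapping and the stand-in signals are assumptions.

```python
import numpy as np

def mix_channel(background, footsteps, speed, toward_this_channel):
    """Superpose background sound with footsteps whose gain follows the person's
    speed and direction relative to this channel, then clip to avoid overflow."""
    gain = min(1.0, speed) * (1.0 if toward_this_channel else 0.3)
    return np.clip(background + gain * footsteps, -1.0, 1.0)

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
background = 0.2 * np.random.randn(sr)            # stand-in for rustling leaves
footsteps = 0.5 * np.sin(2 * np.pi * 2 * t)       # stand-in for a step pattern
print(mix_channel(background, footsteps, speed=0.8, toward_this_channel=True).shape)
```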
In this way, under the combined action of the various controllers, a scene in which the wind blows over a sea of flowers and people walk about is simulated for the user.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (21)

1. A method for realizing multi-dimensional control, characterized by including: an intelligent terminal analyzing the currently played video content it receives, so as to identify the scene information corresponding to the video content;
the intelligent terminal sending the scene information to a controller, so that the controller starts multi-dimensional control according to the scene information.
2. The method according to claim 1, characterized in that analyzing the obtained video content and identifying the scene information includes:
when the intelligent terminal plays the video, sampling and analyzing video frames to search for candidate objects: for each sampled frame, obtaining the motion estimation vectors, and delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
the intelligent terminal continuously detecting the key frames in the currently played video frames; if a marked region persists over a preset, relatively long video frame sequence, the intelligent terminal starting to sample and analyze the key frames in this video frame sequence, and locating the candidate objects and their positions in each sampled frame by recognition, so as to identify the scene information.
3. The method according to claim 2, characterized in that delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions includes:
using a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors;
delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; objects located outside the marked regions serving as reference objects.
4. A method for realizing multi-dimensional control, characterized in that a controller recognizes, according to the scene information corresponding to the currently played video content it receives, an instruction indicating that it needs to start multi-dimensional experience control, and performs the corresponding control.
5. The method according to claim 4, characterized in that the controller is preset with correspondences between different object categories and control information;
recognizing, according to the obtained scene information, the instruction indicating that the controller itself needs to start multi-dimensional experience control includes: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, determining the instruction to start the multi-dimensional experience control.
6. The method according to claim 4 or 5, characterized in that the controller includes: a vibration controller and/or an odor controller and/or a spray controller and/or a lighting controller and/or a sound controller.
7. The method according to claim 6, characterized in that the controllers are deployed in a distributed or centralized manner.
8. A method for realizing a multi-dimensional experience, characterized by including:
an intelligent terminal analyzing the currently played video content it receives, so as to identify the scene information corresponding to the controller that initiated a request;
the intelligent terminal determining, according to the identified scene information, whether multi-dimensional experience control needs to be started;
when it is determined that multi-dimensional experience control needs to be started, delivering the corresponding control information to the corresponding controller.
9. The method according to claim 8, characterized in that before the intelligent terminal analyzes the obtained video content, the method also includes:
the intelligent terminal listening for query commands from one or more controllers, and returning its own device description information to the controller that initiated the query request;
the controller that receives the query response initiating a session to the intelligent terminal as a client, and a session being established between the intelligent terminal and the controller.
10. The method according to claim 9, characterized in that analyzing the obtained video content and identifying the scene information corresponding to the controller that initiated the request includes:
when the intelligent terminal plays the video, sampling and analyzing video frames to search for candidate objects: for each sampled frame, obtaining the motion estimation vectors, and delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
continuously detecting the key frames in the obtained video frames; if a marked region persists over a preset, relatively long video frame sequence, starting to sample and analyze the key frames in this video frame sequence, and locating in each sampled frame the candidate objects, and their positions, that are relevant to the controller that initiated the query and established the session, so as to identify the scene information corresponding to that controller.
11. The method according to claim 10, characterized in that delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions includes:
using a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors;
delimiting the regions where macroblocks with large motion estimation vectors are concentrated as marked regions; objects located outside the marked regions serving as reference objects.
12. The method according to claim 9, characterized in that the intelligent terminal is preset with correspondences between different object categories and control information;
the intelligent terminal determining, according to the obtained scene information, whether multi-dimensional experience control needs to be started includes: when an object in the obtained scene information belongs to a preset object category that triggers control, and a preset trigger condition is met, starting the corresponding multi-dimensional experience control and delivering the corresponding control information to the corresponding controller.
13. An intelligent terminal, characterized by including a first analysis module and a broadcast module; wherein,
the first analysis module is configured to, after the multi-dimensional experience function is started, analyze the currently played video content it receives, so as to identify the scene information corresponding to the video content;
the broadcast module is configured to send the identified scene information to a controller, so that the controller starts multi-dimensional control according to the scene information.
14. The intelligent terminal according to claim 13, characterized in that the first analysis module is specifically configured to: when a video is played, sample and analyze video frames and, for each sampled frame, obtain the motion estimation vectors; use a classification algorithm to divide the obtained motion estimation vectors into two classes: macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors; delimit the regions where macroblocks with large motion estimation vectors are concentrated as marked regions;
continuously detect the key frames in the currently played video frames; if a marked region persists over a relatively long video frame sequence, start to sample and analyze the key frames in this video frame sequence, and locate the candidate objects and their positions in each sampled frame by recognition, so as to identify the scene information.
15. an intelligent terminal, it is characterised in that include the second analysis module, it is determined that module;Wherein,
Second analysis module, for, after starting multidimensional experience functions, the currently playing video content received being analyzed, the scene information corresponding to identify the controller obtained and initiate request;
Determine module, experience control for determining the need for starting multidimensional according to the scene information that identifies, when determine need to start multidimensional experience control time, corresponding control information is handed down to corresponding controllers.
16. The intelligent terminal according to claim 15, characterized by further comprising an establishment module configured to listen for query commands from one or more controllers, return the device description information of its own intelligent terminal to the controller that initiated the query request, and establish a session with the controller that initiated session establishment.
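A hedged sketch of the establishment module in claim 16: listen for controller query commands over UDP, answer with the terminal's device description, and record a session for controllers that request one. The port, message fields and JSON payloads are assumptions for illustration only.

```python
# Terminal-side discovery/session sketch: answer "query" with a device
# description and record sessions for "establish_session" requests.
import json
import socket

DISCOVERY_PORT = 39001      # assumed discovery port
DEVICE_DESCRIPTION = {
    "type": "intelligent_terminal",
    "name": "living-room-box",
    "capabilities": ["scene_broadcast", "session"],
}


def serve_discovery() -> None:
    sessions = {}           # controller address -> session state
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        msg = json.loads(data.decode("utf-8"))
        if msg.get("cmd") == "query":
            # return the terminal's device description to the querying controller
            sock.sendto(json.dumps(DEVICE_DESCRIPTION).encode("utf-8"), addr)
        elif msg.get("cmd") == "establish_session":
            sessions[addr] = {"controller": msg.get("controller_id")}
            sock.sendto(b'{"status": "session_established"}', addr)
```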
17. The intelligent terminal according to claim 16, characterized in that the second analysis module is specifically configured to:
while video is being played, sample and analyze video frames and obtain a motion estimation vector for each sampled frame; use a classification algorithm to divide the obtained motion estimation vectors into two classes, namely macroblocks with large motion estimation vectors and macroblocks with small motion estimation vectors; designate the regions where the macroblocks with large motion estimation vectors are concentrated as marked regions; take objects located outside the marked regions as reference objects;
perform continuous detection on the key frames of the currently playing video frames; if a marked region is present throughout a sufficiently long sequence of video frames, start sampling analysis of the frames in that sequence, locate in each sampled frame the primary objects, and their positions, that are relevant to the controller which initiated the query and established the session, so as to identify the scene information corresponding to that controller.
18. The intelligent terminal according to claim 16, characterized in that the determination module is specifically configured to: hold a preset correspondence between different object categories and control information, and, when an object in the obtained scene information belongs to a preset object category that triggers control and a preset trigger condition is met, start the corresponding multi-dimensional experience control and deliver the corresponding control information to the corresponding controller.
19. A controller, characterized by comprising an acquisition module and a control module, wherein:
the acquisition module is configured to obtain the scene information corresponding to the currently playing video content;
the control module is configured to, when it determines according to the obtained scene information that multi-dimensional experience control needs to be started on the controller itself, perform the corresponding control.
20. The controller according to claim 19, characterized in that a correspondence between different object categories and control information is preset in the control module;
the control module is specifically configured to start the multi-dimensional experience control when an object in the obtained scene information belongs to a preset object category that triggers control and a preset trigger condition is met.
21. The controller according to claim 19 or 20, characterized in that the acquisition module is further configured to: send a query command to query the device information of the intelligent terminals in the current network, and monitor the information broadcast by the intelligent terminal.
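As a counterpart to the terminal-side sketches above, the acquisition module of claim 21 could broadcast a query command to find intelligent terminals in the current network and then keep listening for the terminal's scene-information broadcasts; ports and message fields reuse the same assumptions and are not defined by the patent.

```python
# Controller-side sketch: query for terminals, then monitor scene broadcasts.
import json
import socket

DISCOVERY_PORT = 39001      # must match the terminal's assumed discovery port
BROADCAST_PORT = 39002      # assumed port for scene-information broadcasts


def discover_and_listen(timeout: float = 2.0) -> None:
    # 1) query the device information of intelligent terminals in the network
    q = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    q.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    q.settimeout(timeout)
    q.sendto(b'{"cmd": "query"}', ("255.255.255.255", DISCOVERY_PORT))
    try:
        data, addr = q.recvfrom(4096)
        print("terminal found:", addr, json.loads(data.decode("utf-8")))
    except socket.timeout:
        print("no terminal answered the query")

    # 2) monitor the information broadcast by the intelligent terminal
    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(("", BROADCAST_PORT))
    while True:
        data, _ = listener.recvfrom(4096)
        print("scene information:", json.loads(data.decode("utf-8")))
```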
CN201610206745.2A 2016-04-05 2016-04-05 Method for realizing multidimensional control, intelligent terminal and controller Active CN105760141B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610206745.2A CN105760141B (en) 2016-04-05 2016-04-05 Method for realizing multidimensional control, intelligent terminal and controller
PCT/CN2017/079444 WO2017173976A1 (en) 2016-04-05 2017-04-05 Method for realizing multi-dimensional control, intelligent terminal and controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610206745.2A CN105760141B (en) 2016-04-05 2016-04-05 Method for realizing multidimensional control, intelligent terminal and controller

Publications (2)

Publication Number Publication Date
CN105760141A true CN105760141A (en) 2016-07-13
CN105760141B CN105760141B (en) 2023-05-09

Family

ID=56333468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610206745.2A Active CN105760141B (en) 2016-04-05 2016-04-05 Method for realizing multidimensional control, intelligent terminal and controller

Country Status (2)

Country Link
CN (1) CN105760141B (en)
WO (1) WO2017173976A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200213662A1 (en) * 2018-12-31 2020-07-02 Comcast Cable Communications, Llc Environmental Data for Media Content
CN111031392A (en) * 2019-12-23 2020-04-17 广州视源电子科技股份有限公司 Media file playing method, system, device, storage medium and processor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8009923B2 (en) * 2006-03-14 2011-08-30 Celestial Semiconductor, Inc. Method and system for motion estimation with multiple vector candidates
CN105072483A (en) * 2015-08-28 2015-11-18 深圳创维-Rgb电子有限公司 Smart home equipment interaction method and system based on smart television video scene
CN105760141B (en) * 2016-04-05 2023-05-09 中兴通讯股份有限公司 Method for realizing multidimensional control, intelligent terminal and controller

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100138478A1 (en) * 2007-05-08 2010-06-03 Zhiping Meng Method of using information set in video resource
CN103559713A (en) * 2013-11-10 2014-02-05 深圳市幻实科技有限公司 Method and terminal for providing augmented reality
CN103679727A (en) * 2013-12-16 2014-03-26 中国科学院地理科学与资源研究所 Multi-dimensional space-time dynamic linkage analysis method and device
CN103970892A (en) * 2014-05-23 2014-08-06 无锡清华信息科学与技术国家实验室物联网技术中心 Method for controlling multidimensional film-watching system based on intelligent home device
CN105306982A (en) * 2015-05-22 2016-02-03 维沃移动通信有限公司 Sensory feedback method for mobile terminal interface image and mobile terminal thereof

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017173976A1 (en) * 2016-04-05 2017-10-12 中兴通讯股份有限公司 Method for realizing multi-dimensional control, intelligent terminal and controller
CN106657975A (en) * 2016-10-10 2017-05-10 乐视控股(北京)有限公司 Video playing method and device
CN108063701B (en) * 2016-11-08 2020-12-08 华为技术有限公司 Method and device for controlling intelligent equipment
CN108063701A (en) * 2016-11-08 2018-05-22 华为技术有限公司 A kind of method and device for controlling smart machine
CN107743205A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
EP3792731A4 (en) * 2018-05-10 2022-01-05 ZTE Corporation Multimedia information transmission method and apparatus, and terminal
CN109388719A (en) * 2018-09-30 2019-02-26 京东方科技集团股份有限公司 Multidimensional contextual data generating means and method based on Digitized Works
CN110245628A (en) * 2019-06-19 2019-09-17 成都世纪光合作用科技有限公司 A kind of method and apparatus that testing staff discusses scene
CN110493090A (en) * 2019-08-22 2019-11-22 三星电子(中国)研发中心 A kind of method and system for realizing Intelligent home theater
CN110493090B (en) * 2019-08-22 2022-01-28 三星电子(中国)研发中心 Method and system for realizing intelligent home theater
CN112040289A (en) * 2020-09-10 2020-12-04 深圳创维-Rgb电子有限公司 Video playing control method and device, video playing equipment and readable storage medium
CN114885189A (en) * 2022-04-14 2022-08-09 深圳创维-Rgb电子有限公司 Control method, device and equipment for opening fragrance and storage medium
WO2023197580A1 (en) * 2022-04-14 2023-10-19 深圳创维-Rgb电子有限公司 Perfume turn-on control method and apparatus, and device and storage medium

Also Published As

Publication number Publication date
WO2017173976A1 (en) 2017-10-12
CN105760141B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN105760141A (en) Multi-dimensional control method, intelligent terminal and controllers
CN114269448A (en) Information processing apparatus, information processing method, display apparatus equipped with artificial intelligence function, and reproduction system equipped with artificial intelligence function
JP4052556B2 (en) External device-linked content generation device, method and program thereof
CN104093078B (en) A kind of method and device of playing video file
US8604328B2 (en) Method and system for generating data for controlling a system for rendering at least one signal
CN106303555A (en) A kind of live broadcasting method based on mixed reality, device and system
KR102253374B1 (en) A controller for scent diffusing device and a server for supporting the controller
KR20100114857A (en) Method and apparatus for representation of sensory effects using user's sensory effect preference metadata
KR20100114858A (en) Method and apparatus for representation of sensory effects using sensory device capabilities metadata
CN102830723A (en) Control device, terminal and control method
KR20090038834A (en) Sensory effect media generating and consuming method and apparatus thereof
US20100178211A1 (en) Communication terminal device
CN109084432A (en) The regulation method and air conditioner of sleep environment
CN101971608A (en) Method and apparatus to provide a physical stimulus to a user, triggered by a motion detection in a video stream
CN104618446A (en) Multimedia pushing implementing method and device
KR20150144321A (en) Music washing machine and control method thereof
CN110392292A (en) A kind of synchronous synergetic method of multi-section intelligent electronic device and multimedia play system
CN107071518A (en) The video broadcasting method and system of adaptive mobile terminal study
US20220020053A1 (en) Apparatus, systems and methods for acquiring commentary about a media content event
JP2005303722A (en) Communications system for transmitting feeling of oneness
CN106564059B (en) A kind of domestic robot system
CN110493090A (en) A kind of method and system for realizing Intelligent home theater
KR102200239B1 (en) Real-time computer graphics video broadcasting service system
CN115782908A (en) Human-computer interaction method of vehicle, nonvolatile storage medium and vehicle
CN110472073A (en) Shuffle method, apparatus and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant