CN103777851B - Internet of Things video interactive method and system - Google Patents
Internet of Things video interactive method and system
- Publication number
- CN103777851B CN103777851B CN201410066271.7A CN201410066271A CN103777851B CN 103777851 B CN103777851 B CN 103777851B CN 201410066271 A CN201410066271 A CN 201410066271A CN 103777851 B CN103777851 B CN 103777851B
- Authority
- CN
- China
- Prior art keywords
- internet
- things
- video
- user terminal
- operation content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Telephonic Communication Services (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to the field of the Internet of Things and discloses an Internet of Things video interaction method and system. The method comprises the following steps: capturing video of an Internet of Things object; identifying the Internet of Things object in the video; converting an operation on the object in the video into operation content for the Internet of Things object corresponding to that object; sending the operation content to the Internet of Things object for execution; and displaying, in the video, the latest state of the Internet of Things object after it performs the operation content. With the above method and system, video of Internet of Things objects can be captured and transmitted to a user terminal, and the user-terminal video can then serve directly as the interactive interface between the user and the Internet of Things objects. This adds interaction capability to Internet of Things video, reduces the development cost of user-object interaction functions, makes the user's interaction with Internet of Things objects more intuitive and vivid, reduces its complexity, and improves the user experience.
Description
Technical field
The present invention relates to the field of the Internet of Things, and more particularly to an Internet of Things video interaction method and system.
Background technology
In Internet of Things technology, a user interacts with Internet of Things objects through an interactive interface, both to control the objects and to obtain their state. In traditional Internet of Things interaction techniques, a client menu or control panel represents each Internet of Things object with an icon or text, and operating a menu item or panel item triggers an interface call to that object, thereby realizing the interaction. For example, when a user on the way home wants to turn on the home air conditioner in advance over the Internet of Things, the client of the application typically runs on the user's mobile phone in the form of a menu or control panel, where a menu item or panel item represents the air conditioner's power-on operation with an icon or text. When the user clicks it, the interface call of the Internet of Things air conditioner associated with that item is triggered and sent to the air conditioner over the Internet of Things, which then turns on.
Traditional Internet of Things video techniques capture video of Internet of Things objects with cameras, mainly for video surveillance. Through the video, a user or a monitoring system can see or identify the real-time state of the objects. For example, in a smart-home Internet of Things, a user can check at any time, through Internet of Things video monitoring on a mobile phone, the real-time home video sent by a home camera.
In traditional Internet of Things interaction techniques, supporting interaction between the user and the objects requires manually designing, one by one, the text or images representing each Internet of Things object together with the control panels or menus in the interactive interface. This manual development is costly, and the text or images representing an object are not as intuitive and vivid as video of the object; the user must learn in advance which text or image represents which object, which makes the system complex to use.
In traditional Internet of Things video techniques, the video is not used to interact with the Internet of Things objects; the user cannot interact with an object through the video. For example, when a user on the way home wants to turn on the home air conditioner in advance over the Internet of Things, the client of the application typically runs on the user's mobile phone, but clicking the power button of the air conditioner shown in the surveillance video cannot actually turn on the real air conditioner that the video corresponds to.
Therefore, the prior art has yet to be improved and developed.
Summary of the invention
An object of the present invention is to provide an Internet of Things video interaction method, addressing the problems in the prior art that video is only used to monitor the state of Internet of Things objects and does not let the user interact with the objects through the video, and that interaction with the objects can only be carried out through text or images representing each object and through control panels or menus, which makes Internet of Things interaction techniques costly to develop and the interaction between user and object unintuitive, not vivid, and not easy to use. The invention proposes a method that allows a user to interact with an Internet of Things object through video of that object, extending the functions of Internet of Things video so that it also supports interaction, thereby reducing the development cost of Internet of Things interactive systems, simplifying the user's interaction with Internet of Things objects, and improving the user experience.
An Internet of Things video interaction method comprises the following steps:
converting an operation on an object in a video into operation content for the Internet of Things object corresponding to that object; and
sending the operation content to the Internet of Things object for execution.
Preferably, before the operation on the object in the video is converted into operation content for the corresponding Internet of Things object, the method further comprises the following steps:
capturing video of the Internet of Things object; and
identifying the Internet of Things object in the video.
Preferably, after the operation content is sent to the Internet of Things object for execution, the method further comprises the following step:
displaying, in the video, the latest state of the Internet of Things object after it performs the operation content.
Preferably, the operation on the object in the video is an operation on an action item popped up over the object in the video.
Preferably, the operation on the object in the video is an operation on an operable component of the object in the video.
An Internet of Things video interaction system comprises:
an interaction module, configured to convert an operation on an object in a video into operation content for the Internet of Things object corresponding to that object; and
an execution module, configured to send the operation content to the Internet of Things object for execution.
Preferably, the system further comprises:
a capture and transmission module, configured to capture video of the Internet of Things object; and
an identification module, configured to identify the Internet of Things object in the video.
Preferably, the system further comprises:
a display module, configured to display, in the video, the latest state of the Internet of Things object after it performs the operation content.
Preferably, the operation on the object in the video is an operation on an action item popped up over the object in the video.
Preferably, the operation on the object in the video is an operation on an operable component of the object in the video.
With the above Internet of Things video interaction method and system, video of Internet of Things objects is captured and transmitted to the user terminal, and the user-terminal video then serves directly as the interactive interface between the user and the Internet of Things objects. This adds interaction capability to Internet of Things video and thereby reduces the development cost of user-object interaction functions. At the same time, the user can intuitively operate objects in the user-terminal video and can see the state of the Internet of Things objects after the operation directly in the video, so that what the user sees is what the user gets: the user can interact immersively with the Internet of Things objects through the user-terminal video. This improves the intuitiveness and vividness of the user's interaction with Internet of Things objects, reduces its complexity, and thereby improves the user experience.
Description of the drawings
Fig. 1 is a flow chart of the Internet of Things video interaction method in one embodiment;
Fig. 2 is a flow chart of identifying the Internet of Things object in the video in one embodiment;
Fig. 3 is a flow chart of converting an operation on an object in the video into operation content for the corresponding Internet of Things object in one embodiment;
Fig. 4 is a flow chart of sending the operation content to the Internet of Things object for execution in one embodiment;
Fig. 5 is a flow chart of displaying, in the video, the latest state of the Internet of Things object after it performs the operation content in one embodiment;
Fig. 6 is a structural diagram of the Internet of Things video interaction system in one embodiment;
Fig. 7 is a structural diagram of the identification module in Fig. 6;
Fig. 8 is a structural diagram of the interaction module in Fig. 6;
Fig. 9 is a structural diagram of the execution module in Fig. 6;
Fig. 10 is a structural diagram of the display module in Fig. 6.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In one embodiment, as shown in Fig. 1, an Internet of Things video interaction method comprises the following steps:
Step S101: capture video of the Internet of Things object. Preferably, the video of the Internet of Things object is captured by a video capture device, and the captured video is then transmitted to the user terminal over the network in real time and displayed, forming the corresponding user-terminal video. For example, a camera is placed on a washing machine, aimed at its button panel, and its video is streamed to the user terminal in real time; in the user-terminal video the user can then operate the panel of that washing machine.
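The capture-and-transmit part of step S101 can be sketched in Python. This is a minimal in-process simulation, not the patented implementation: the `Frame` record, the synthetic frame source, and the `terminal_buffer` hand-off are illustrative assumptions standing in for a real camera and a real network link.

```python
import time
from dataclasses import dataclass

@dataclass
class Frame:
    """One captured video frame (synthetic stand-in for camera data)."""
    timestamp: float
    pixels: bytes

def capture_frames(n_frames):
    """Simulate a video capture device aimed at an IoT object."""
    for i in range(n_frames):
        yield Frame(timestamp=time.time(), pixels=bytes([i % 256]) * 16)

def stream_to_terminal(frames, terminal_buffer):
    """Relay captured frames to the user terminal's display buffer.

    In the described system this transmission happens over the network
    in real time; here it is a simple in-process handoff.
    """
    for frame in frames:
        terminal_buffer.append(frame)
    return terminal_buffer

buffer = stream_to_terminal(capture_frames(3), [])
print(len(buffer))  # 3 frames now form the user-terminal video
```

A real deployment would replace `capture_frames` with a camera driver and `stream_to_terminal` with a streaming protocol, but the data flow — capture, transmit, display as the user-terminal video — is the same.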
Step S102: identify the Internet of Things object in the video. Preferably, the objects in the user-terminal video are identified and put into one-to-one correspondence with the Internet of Things objects; likewise, object components in the user-terminal video are identified and put into one-to-one correspondence with the components of the Internet of Things objects. Preferably, when the user sees that some object or component has been identified incorrectly, pressing the shift key turns the recognition-result display into an editable mode that accepts the user's corrections. For example, if a video shows a washing machine and a refrigerator but recognition labels the washing machine as the refrigerator and the refrigerator as the washing machine, the user can enter the edit mode and simply swap the two recognition results. This is only a hypothetical case chosen for clarity; since a washing machine and a refrigerator look very different, such a mistake rarely occurs in practice. A more common mistake is an error in the identified object region, for example a stool next to the washing machine being included as part of it; the user can then enter edit mode and remove the stool from the identified washing-machine region.
Step S103: convert the operation on the object in the video into operation content for the Internet of Things object corresponding to that object. Preferably, the operation content includes the address at which the Internet of Things object receives operation content and the content of the operation the object is to perform. Preferably, the user's operation on an object in the user-terminal video is mapped to operation content for the Internet of Things object corresponding to that object, and the user's operation on an object component in the user-terminal video is mapped to operation content for the Internet of Things object corresponding to that component. Preferably, clicking is an operation type supported by all objects and their components in the video; for other operation types, such as text input, card swiping, or fingerprint scanning, right-clicking the corresponding object or component in the video (for example an access-control card reader) pops up a menu showing the operation types that the object or component supports. For example, the user can select the card-swiping type and then swipe a card on the card reader attached to the terminal where the user-terminal video runs; the card data is then transferred as operation content to the corresponding access-control system in the Internet of Things.
Step S104: send the operation content to the Internet of Things object for execution. Preferably, the operation content for the Internet of Things object is sent to that object, which executes it. For example, when an Internet of Things air conditioner receives power-on operation content over the Internet of Things, it turns itself on automatically.
Step S105: display, in the video, the latest state of the Internet of Things object after it performs the operation content. Preferably, the user-terminal video shows the latest state of the Internet of Things object after the operation content is executed. For example, if the user's operation is clicking the temperature display of the air conditioner in the video, the Internet of Things air conditioner needs to feed its temperature reading back to the user-terminal video; if the user's operation is clicking the air conditioner's power button in the video, the state of the Internet of Things air conditioner after it is turned on is shown in the user-terminal video.
As shown in Fig. 2, in one embodiment, step S102 of identifying the Internet of Things object in the video comprises:
Step S112: identify the type and position of each object in the user-terminal video;
Step S122: obtain over the Internet of Things, or set manually at the user terminal, the type and position of each Internet of Things object;
Step S132: fuzzily match the type and position of each Internet of Things object against the type and position of each object identified in the user-terminal video;
Step S142: for each successfully matched object in the user-terminal video, mark in the video the name and unique identification code of the corresponding Internet of Things object; also display in the user-terminal video the names and unique identification codes of the Internet of Things objects that were not matched;
Step S152: accept the user's corrections to the matching results in the user-terminal video, and manually identify in the user-terminal video the Internet of Things objects that were not matched;
Step S162: after identification is complete, when an object in the user-terminal video moves, relocate it in the user-terminal video using video object-motion-tracking technology.
In another embodiment (note: this embodiment differs from the previous one in that it also supports component-level identification of objects in the video), step S102 of identifying the Internet of Things object in the video comprises:
Step S'112: identify the type and position of each object and of its operable components in the user-terminal video;
Step S'122: obtain over the Internet of Things, or set manually at the user terminal, the type and position of each Internet of Things object and of its operable components;
Step S'132: fuzzily match the type and position of each Internet of Things object and its operable components against the type and position of each object and its operable components identified in the user-terminal video;
Step S'142: for each successfully matched object and operable component in the user-terminal video, mark in the video the name and unique identification code of the corresponding Internet of Things object or component; also display in the user-terminal video the names and unique identification codes of the objects and components that were not matched;
Step S'152: accept the user's corrections to the matching results in the user-terminal video, and manually identify in the user-terminal video the Internet of Things objects and operable components that were not matched;
Step S'162: after identification is complete, when an object or an operable component in the user-terminal video moves, relocate it in the user-terminal video using video object-motion-tracking technology.
As shown in Fig. 3, in one embodiment, step S103 of converting the operation on the object in the video into operation content for the corresponding Internet of Things object comprises:
Step S113: obtain over the Internet of Things, or set manually at the user terminal, the operation content supported by each Internet of Things object;
Step S123: use the operation content supported by each Internet of Things object as the operation content corresponding to the pop-up menu of the corresponding object in the user-terminal video;
Step S133: when the user moves the mouse onto an object in the user-terminal video, the menu pops up over that object in the video;
Step S143: when the user operates an action item on the pop-up menu, the operation is mapped to the operation content corresponding to that action item.
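Steps S113–S143 amount to a lookup chain: a table of supported operations per object feeds the pop-up menu, and the chosen menu item maps back to operation content. The table below, including the lock's address and command names, is hypothetical.

```python
# Hypothetical table of supported operations per IoT object (step S113).
SUPPORTED_OPERATIONS = {
    "coded_lock_office": {
        "address": "iot://office/lock/1",
        "menu": {"Open": "unlock", "Query state": "get_state"},
    },
}

def popup_menu_for(object_id):
    """Step S133: build the menu shown when the mouse hovers over the object."""
    return list(SUPPORTED_OPERATIONS[object_id]["menu"].keys())

def menu_item_to_operation_content(object_id, item):
    """Step S143: map the chosen action item to its operation content."""
    meta = SUPPORTED_OPERATIONS[object_id]
    return {"address": meta["address"], "command": meta["menu"][item]}

print(popup_menu_for("coded_lock_office"))
# ['Open', 'Query state']
print(menu_item_to_operation_content("coded_lock_office", "Open"))
# {'address': 'iot://office/lock/1', 'command': 'unlock'}
```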
In another embodiment (note: this embodiment differs from the previous one in that it also supports component-level operation of objects in the video), step S103 of converting the operation on the object in the video into operation content for the corresponding Internet of Things object comprises:
Step S'113: obtain over the Internet of Things, or set manually at the user terminal, the operation content supported by each Internet of Things object and by each of its operable components;
Step S'123: use the operation content supported by each Internet of Things object as the operation content corresponding to the pop-up menu of the corresponding object in the user-terminal video, and use the operation content supported by each operable component of the object as the operation content corresponding to that component when it is operated in the user-terminal video;
Step S'133: when the user moves the mouse onto a non-operable area of an object in the user-terminal video, the menu pops up over that object in the video; when the user operates an operable-component area of the object, the user's operation on that component is received;
Step S'143: when the user operates an action item on the pop-up menu, the operation is mapped to the operation content corresponding to that action item; when the user operates an operable component, the operation is mapped to the operation content corresponding to that component.
As shown in Fig. 4, in one embodiment, step S104 of sending the operation content to the Internet of Things object for execution comprises:
Step S114: when the operation content changes the state of the Internet of Things object, the object changes its own state;
Step S124: when the operation content obtains state content of the Internet of Things object, the object sends the state content back to the user-terminal video.
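Steps S114 and S124 split device-side handling into two cases: state-changing content mutates the device, and state-querying content sends state back toward the user-terminal video. A minimal sketch, with an invented `kind` field distinguishing the two cases and a list standing in for the return channel:

```python
class IoTDevice:
    """Toy device distinguishing state-changing and state-querying content."""
    def __init__(self):
        self.state = {"power": "off"}
        self.sent_back = []   # stands in for messages sent to the user-terminal video

    def handle(self, operation_content):
        kind = operation_content["kind"]
        if kind == "change_state":            # step S114: change own state
            self.state.update(operation_content["new_state"])
        elif kind == "get_state":             # step S124: report state back
            self.sent_back.append(dict(self.state))

device = IoTDevice()
device.handle({"kind": "change_state", "new_state": {"power": "on"}})
device.handle({"kind": "get_state"})
print(device.sent_back)  # [{'power': 'on'}]
```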
As shown in Fig. 5, in one embodiment, step S105 of displaying, in the video, the latest state of the Internet of Things object after it performs the operation content comprises:
Step S115: capture, with the video capture device, video of the latest state of the Internet of Things object after it performs the operation content, and transmit the captured video over the network to the user terminal for display;
Step S125: when the user-terminal video receives state content sent by the Internet of Things object, display the state content on the corresponding object in the user-terminal video.
In one embodiment, as shown in Fig. 6, an Internet of Things video interaction system comprises a capture and transmission module 101, an identification module 102, an interaction module 103, an execution module 104, and a display module 105 (note: the examples given for the embodiment of Fig. 1 apply equally to this embodiment and are not repeated), wherein:
the capture and transmission module 101 is configured to capture video of the Internet of Things object; preferably, to capture the video with a video capture device and transmit it over the network to the user terminal in real time for display, forming the corresponding user-terminal video;
the identification module 102 is configured to identify the Internet of Things object in the video; preferably, to identify the objects in the user-terminal video and put them into one-to-one correspondence with the Internet of Things objects, and likewise to identify object components in the user-terminal video and put them into one-to-one correspondence with the components of the Internet of Things objects;
the interaction module 103 is configured to convert the operation on the object in the video into operation content for the Internet of Things object corresponding to that object; preferably, the operation content includes the address at which the Internet of Things object receives operation content and the content of the operation the object is to perform; preferably, the module maps the user's operation on an object, or on an object component, in the user-terminal video to operation content for the corresponding Internet of Things object;
the execution module 104 is configured to send the operation content to the Internet of Things object for execution;
the display module 105 is configured to display, in the video, the latest state of the Internet of Things object after it performs the operation content; preferably, to show that latest state in the user-terminal video.
As shown in Fig. 7, in one embodiment, the identification module 102 comprises a video-object type and position identification module 112, an Internet of Things object type and position setting module 122, an object fuzzy-matching module 132, a recognition-result display module 142, a recognition-result correction module 152, and a video-object relocation module 162, wherein:
the video-object type and position identification module 112 is configured to identify the type and position of each object in the user-terminal video;
the Internet of Things object type and position setting module 122 is configured to obtain over the Internet of Things, or set manually at the user terminal, the type and position of each Internet of Things object;
the object fuzzy-matching module 132 is configured to fuzzily match the type and position of each Internet of Things object against the type and position of each object identified in the user-terminal video;
the recognition-result display module 142 is configured to mark, for each successfully matched object in the user-terminal video, the name and unique identification code of the corresponding Internet of Things object, and to also display in the user-terminal video the names and unique identification codes of the Internet of Things objects that were not matched;
the recognition-result correction module 152 is configured to accept the user's corrections to the matching results in the user-terminal video and to manually identify in the user-terminal video the Internet of Things objects that were not matched;
the video-object relocation module 162 is configured to relocate an object in the user-terminal video using video object-motion-tracking technology when the object moves after identification is complete.
In another embodiment (note: this embodiment differs from the previous one in that it also supports component-level identification of objects in the video), the identification module 102 comprises a video-object type and position identification module 112, an Internet of Things object type and position setting module 122, an object fuzzy-matching module 132, a recognition-result display module 142, a recognition-result correction module 152, and a video-object relocation module 162, wherein:
the video-object type and position identification module 112 is configured to identify the type and position of each object and of its operable components in the user-terminal video;
the Internet of Things object type and position setting module 122 is configured to obtain over the Internet of Things, or set manually at the user terminal, the type and position of each Internet of Things object and of its operable components;
the object fuzzy-matching module 132 is configured to fuzzily match the type and position of each Internet of Things object and its operable components against the type and position of each object and its operable components identified in the user-terminal video;
the recognition-result display module 142 is configured to mark, for each successfully matched object and operable component in the user-terminal video, the name and unique identification code of the corresponding Internet of Things object or component, and to also display in the user-terminal video the names and unique identification codes of the objects and components that were not matched;
the recognition-result correction module 152 is configured to accept the user's corrections to the matching results in the user-terminal video and to manually identify in the user-terminal video the Internet of Things objects and operable components that were not matched;
the video-object relocation module 162 is configured to relocate an object or operable component in the user-terminal video using video object-motion-tracking technology when it moves after identification is complete.
As shown in Fig. 8, in one embodiment, the interaction module 103 comprises an Internet of Things object operation-content setting module 113, an operation-content setting module 123, an operation trigger module 133, and an operation-content mapping module 143, wherein:
the Internet of Things object operation-content setting module 113 is configured to obtain over the Internet of Things, or set manually at the user terminal, the operation content supported by each Internet of Things object;
the operation-content setting module 123 is configured to use the operation content supported by each Internet of Things object as the operation content corresponding to the pop-up menu of the corresponding object in the user-terminal video;
the operation trigger module 133 is configured to pop up the menu over an object in the video when the user moves the mouse onto that object in the user-terminal video;
the operation-content mapping module 143 is configured to map the operation to the operation content corresponding to an action item when the user operates that item on the pop-up menu.
In another embodiment (this embodiment differs from the previous one in that it also supports component-level operation), the interaction module 103 comprises an Internet of Things object operation-content setting module 113, an operation-content setting module 123, an operation trigger module 133, and an operation-content mapping module 143, wherein:
the Internet of Things object operation-content setting module 113 is configured to obtain over the Internet of Things, or set manually at the user terminal, the operation content supported by each Internet of Things object and by each of its operable components;
the operation-content setting module 123 is configured to use the operation content supported by each Internet of Things object as the operation content corresponding to the pop-up menu of the corresponding object in the user-terminal video, and the operation content supported by each operable component of the object as the operation content corresponding to that component when it is operated in the user-terminal video;
the operation trigger module 133 is configured to pop up the menu over an object in the video when the user moves the mouse onto a non-operable area of that object in the user-terminal video, and to receive the user's operation on an operable component when the user operates that component's area;
the operation-content mapping module 143 is configured to map the operation to the operation content corresponding to an action item when the user operates that item on the pop-up menu, and to the operation content corresponding to an operable component when the user operates that component.
As shown in figure 9, in one embodiment the execution module 104 includes a state change execution module 114 and a state acquisition execution module 124, wherein:
State change execution module 114: when the operation content is to change the state of the Internet of Things object, the Internet of Things object changes its own state.
State acquisition execution module 124: when the operation content is to obtain state content of the Internet of Things object, the Internet of Things object sends the state content back to the user terminal video.
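The two execution paths of modules 114 and 124 can be sketched as a small dispatch on the operation content. The toy device class and the "SET key=value" / "GET key" string format below are illustrative assumptions:

```python
class IoTDevice:
    """Toy stand-in for an Internet of Things object that can change its
    own state (module 114) or report state content back toward the user
    terminal video (module 124)."""
    def __init__(self):
        self.state = {"power": "off"}

    def execute(self, operation: str):
        verb, _, rest = operation.partition(" ")
        if verb == "SET":                      # state change execution path
            key, _, value = rest.partition("=")
            self.state[key] = value
            return None
        if verb == "GET":                      # state acquisition execution path
            # In the patent this value is sent back to the user terminal
            # video and displayed on the corresponding object.
            return self.state.get(rest)
        raise ValueError(f"unknown operation content: {operation}")

device = IoTDevice()
device.execute("SET power=on")
print(device.execute("GET power"))  # the state content returned to the terminal
```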
As shown in Figure 10, in one embodiment the display module 105 includes a video update display module 115 and a state content display module 125, wherein:
Video update display module 115: the video capture device captures video of the latest state after the Internet of Things object has executed the operation content, and the captured video of the latest state is transmitted over the network to the user terminal for display.
State content display module 125: when the user terminal video receives state content sent by the Internet of Things object, the state content is displayed on the corresponding object in the user terminal video.
The effects of the above embodiments are illustrated below with specific examples:
Example 1: video of an Internet of Things air conditioner is captured in real time; clicking the air conditioner's on button in the video turns on the corresponding Internet of Things air conditioner.
Example 2: while a person is away on a business trip, a secretary needs the person to open the door of his office. The door has a combination lock that can only be opened by entering a password. The person simply clicks the digit keys of the combination lock remotely in the corresponding video to enter the password. The password (obtained by identifying which digit keys the mouse clicked in the video) is automatically transmitted to the combination lock in the Internet of Things, and as long as the password is correct, the door opens.
Example 3: when opening the Internet of Things combination lock through video interaction, the user may also right-click the lock; a password input box pops up at the lock's position in the video, and the user need only type the password into the box with the keyboard, which is equivalent to clicking the corresponding digit keys on the lock.
Example 4: if the Internet of Things door must be opened by fingerprint recognition, the user may right-click the fingerprint reader in the video; a prompt to scan a fingerprint pops up at the reader's position in the video, and the user need only scan his own fingerprint on the fingerprint scanner of the user terminal, which is equivalent to scanning on the fingerprint reader in the Internet of Things. If the fingerprints match, the door in the Internet of Things opens.
Example 5: in a smart home Internet of Things, once an Internet of Things camera is installed at home, the television, air conditioner, lamps, curtains, refrigerator and so on are all transmitted to the user terminal as video for user interaction, and the user can click the corresponding object in the video to perform the corresponding operation. For instance, when the user wants to turn on the air conditioner, he need only click the air conditioner's on button in the video at the client; after the click, he immediately sees from the video the true state of the air conditioner turning on, an experience identical to turning on the air conditioner in reality.
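The password entry of Examples 2 and 3 reduces to identifying which digit key of the lock each mouse click lands on in the video and concatenating the digits into the password sent to the lock. A minimal sketch under an assumed keypad layout (the coordinates below are invented purely for illustration):

```python
# Each digit key's bounding box in the user terminal video (assumed layout:
# three keys per row, 18x18-pixel keys on a 20-pixel grid).
KEYPAD = {str(d): (10 + 20 * (d % 3), 10 + 20 * (d // 3),
                   28 + 20 * (d % 3), 28 + 20 * (d // 3)) for d in range(10)}

def digit_at(x, y):
    """Identify which digit key a mouse click in the video lands on, if any."""
    for digit, (x1, y1, x2, y2) in KEYPAD.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return digit
    return None

def decode_password(clicks):
    """Turn a sequence of (x, y) clicks in the video into the password
    string transmitted to the combination lock in the Internet of Things."""
    return "".join(d for d in (digit_at(x, y) for x, y in clicks) if d)

pw = decode_password([(39, 19), (19, 39), (59, 39)])  # clicks land on keys 1, 3, 5
print(pw)
```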
The above Internet of Things video interactive method and system enable people to operate Internet of Things objects through Internet of Things video. This completely overturns the state of the art in which existing Internet of Things video technology can only monitor the real-time status of Internet of Things objects and cannot intervene in their state. This method of Internet of Things video interaction is also unprecedented in video interaction technology, moving video interaction from the existing single machine and Internet into the Internet of Things and breaking through its original boundaries.
The above Internet of Things video interactive method and system capture video of Internet of Things objects, transmit the video to the user terminal, and use it as the interface through which the user interacts with the Internet of Things objects. The cost of capturing and transmitting video is very low (an ordinary IP camera suffices), so adding an interactive function to Internet of Things video greatly reduces the development cost of user interaction with Internet of Things objects. At the same time, because video presents Internet of Things objects and their real-time status intuitively, vividly and realistically, interaction between the user and the Internet of Things objects becomes simpler, greatly improving the convenience of using the Internet of Things. Moreover, the latest state of the Internet of Things object after the interaction can be seen in the video immediately, greatly improving the quality of interaction and the user experience. For example, in a smart home Internet of Things, the user clicks the on button of the air conditioner in the Internet of Things monitoring video and the air conditioner at home actually turns on; this raises the value of the monitoring video beyond mere monitoring by adding interaction, and the real-time status after the air conditioner turns on can be seen in the video. This mode of interaction is simple and fast, and consistent with the way people interact with objects in reality, so it will greatly promote the development of Internet of Things user interaction technology and play an important role in the popularization of Internet of Things applications.
The embodiments described above express only several embodiments of the present invention, and although their description is relatively specific and detailed, they should not therefore be construed as limiting the scope of the claims of the present invention. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (4)
1. An Internet of Things video interactive method, comprising the following steps:
capturing video of Internet of Things objects to form a corresponding user terminal video;
identifying the type and position of each object and its operable components in the user terminal video;
obtaining through the Internet of Things, or setting manually in the user terminal, the type and position of each Internet of Things object and its operable components;
fuzzily matching the type and position of each Internet of Things object and its operable components against the type and position of each object and its operable components identified in the user terminal video, and marking each successfully matched object and its operable components in the user terminal video with the name and unique identification code of the corresponding Internet of Things object and its operable components;
converting the operation of an object in the user terminal video into operation content for the Internet of Things object corresponding to that object, including: obtaining through the Internet of Things, or setting manually in the user terminal, the operation content supported by each Internet of Things object and its operable components; and taking the operation content supported by each operable component of an Internet of Things object as the operation content corresponding to the operation of the corresponding object's operable component in the user terminal video;
when the user moves the mouse over the region of an operable component of an object in the user terminal video and operates it, receiving the user's operation on the operable component, and mapping the user's operation on the object's component in the user terminal video to the operation content for the corresponding component of the Internet of Things object corresponding to that object in the user terminal video; and sending the operation content to the Internet of Things object for execution.
2. The Internet of Things video interactive method according to claim 1, wherein after the operation content is sent to the Internet of Things object for execution, the method further comprises the following steps:
displaying in the user terminal video the latest state after the Internet of Things object executes the operation content;
when the operation content is to change the state of the Internet of Things object, the Internet of Things object changes its own state;
when the operation content is to obtain state content of the Internet of Things object, the Internet of Things object sends the state content of the Internet of Things object back to the user terminal video;
when an object in the user terminal video or its operable component moves, repositioning the object or its operable component in the user terminal video using video object motion tracking technology.
3. An Internet of Things video interactive system, comprising:
a capture and transmission module, for capturing video of Internet of Things objects to form a corresponding user terminal video;
a video object type and position identification module, for identifying the type and position of each object in the user terminal video;
an Internet of Things object type and position setting module, for obtaining through the Internet of Things, or setting manually in the user terminal, the type and position of each Internet of Things object;
an object fuzzy matching module, for fuzzily matching the type and position of each Internet of Things object against the type and position of each object identified in the user terminal video;
a recognition result display module, for marking each successfully matched object and its operable components in the user terminal video with the name and unique identification code of the corresponding Internet of Things object and its operable components;
an interactive module, for converting the operation of an object in the user terminal video into operation content for the Internet of Things object corresponding to that object, including: obtaining through the Internet of Things, or setting manually in the user terminal, the operation content supported by each Internet of Things object and its operable components; and taking the operation content supported by each operable component of an Internet of Things object as the operation content corresponding to the operation of that operable component in the user terminal video;
an execution module, for, when the user moves the mouse over the region of an operable component of an object in the user terminal video and operates it, receiving the user's operation on the operable component, mapping the user's operation on the object's component in the user terminal video to the operation content for the Internet of Things object corresponding to that component, and sending the operation content to the Internet of Things object for execution.
4. The Internet of Things video interactive system according to claim 3, wherein the system further comprises:
a display module, for displaying in the user terminal video the latest state after the Internet of Things object executes the operation content;
a state change execution module: when the operation content is to change the state of the Internet of Things object, the Internet of Things object changes its own state;
a state acquisition execution module: when the operation content is to obtain state content of the Internet of Things object, the Internet of Things object sends the state content of the Internet of Things object back to the user terminal video;
a video object repositioning module: after the identification is completed, when an object in the user terminal video or its operable component moves, the object or its operable component in the user terminal video is repositioned using video object motion tracking technology.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410066271.7A CN103777851B (en) | 2014-02-26 | 2014-02-26 | Internet of Things video interactive method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103777851A CN103777851A (en) | 2014-05-07 |
CN103777851B true CN103777851B (en) | 2018-05-29 |
Family
ID=50570169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410066271.7A Active CN103777851B (en) | 2014-02-26 | 2014-02-26 | Internet of Things video interactive method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103777851B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104597759B (en) * | 2014-12-26 | 2018-05-08 | 深圳市海蕴新能源有限公司 | Appliance control method and system and intelligent household management system based on Internet video |
CN105955040A (en) * | 2016-05-20 | 2016-09-21 | 深圳市大拿科技有限公司 | Intelligent household system according to real-time video picture visual control and control method thereof |
CN105955043B (en) * | 2016-05-27 | 2019-02-01 | 浙江大学 | A kind of visible i.e. controllable intelligent home furnishing control method of augmented reality type |
CN107168085B (en) * | 2017-06-28 | 2021-09-24 | 杭州登虹科技有限公司 | Intelligent household equipment remote control method, device, medium and computing equipment |
CN108769608B (en) * | 2018-06-14 | 2019-05-07 | 视云融聚(广州)科技有限公司 | A kind of video integration method of multi-dimensional data |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102577250A (en) * | 2009-10-05 | 2012-07-11 | 阿尔卡特朗讯公司 | Device for interaction with an augmented object |
CN102662378A (en) * | 2012-05-18 | 2012-09-12 | 天津申能科技有限公司 | Remote interaction system based on Internet of things technology |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020049978A1 (en) * | 2000-10-20 | 2002-04-25 | Rodriguez Arturo A. | System and method for access and placement of media content information items on a screen display with a remote control device |
US20020103898A1 (en) * | 2001-01-31 | 2002-08-01 | Moyer Stanley L. | System and method for using session initiation protocol (SIP) to communicate with networked appliances |
JP2007514494A (en) * | 2003-12-19 | 2007-06-07 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Interactive video |
US20090284476A1 (en) * | 2008-05-13 | 2009-11-19 | Apple Inc. | Pushing a user interface to a remote device |
- 2014-02-26: CN application CN201410066271.7A, patent CN103777851B/en, status active
Also Published As
Publication number | Publication date |
---|---|
CN103777851A (en) | 2014-05-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | Effective date of registration: 2018-03-22. Applicant after: Great Power Innovative Intelligent Technology (Dongguan) Co., Ltd., part of site No. 201, second floor, shopping mall B, Creative Life City, Songshan Lake Hi-tech Industrial Development Zone, Dongguan, Guangdong 523000. Applicant before: Zhu Dingju, South China Normal University, Shipai, Tianhe District, Guangzhou, Guangdong 510630 |
GR01 | Patent grant | ||
GR01 | Patent grant |