CN103777851A - Method and system for video interaction of internet of things - Google Patents


Info

Publication number
CN103777851A
CN103777851A (application CN201410066271.7A; granted publication CN103777851B)
Authority
CN
China
Prior art keywords
internet
things
video
content
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410066271.7A
Other languages
Chinese (zh)
Other versions
CN103777851B (en)
Inventor
朱定局
Current Assignee
Great power innovative Intelligent Technology (Dongguan) Co., Ltd.
Original Assignee
朱定局
Priority date
Filing date
Publication date
Application filed by 朱定局
Priority to CN201410066271.7A
Publication of CN103777851A
Application granted
Publication of CN103777851B
Legal status: Active
Anticipated expiration: (not listed)

Landscapes

  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the technical field of the Internet of Things (IoT) and discloses a method and system for IoT video interaction. The method comprises the following steps: collecting video of the IoT objects; identifying the IoT objects shown in the video; translating an operation on an object shown in the video into operation content for the corresponding IoT object; sending the operation content to the IoT object for execution; and displaying the latest status of the IoT object after it executes the operation content. By collecting the IoT video at the user side and using that video directly as the interface between the user and the IoT objects, the method and system add an interactive function to IoT video. This reduces the development cost of interaction between the user and the IoT objects, makes the interaction more intuitive and vivid, reduces its complexity, and improves the user's interactive experience.

Description

Internet of Things video interaction method and system
Technical field
The present invention relates to the field of Internet of Things (IoT) technology, and in particular to an IoT video interaction method and system.
Background art
In IoT technology, a user interacts with IoT objects through the interactive interface of the IoT. This interaction includes manipulating an IoT object and obtaining its state.
In traditional IoT interaction techniques, the interaction is triggered when the user activates a menu item or control-panel item in a client that represents an IoT object with an icon or text, which invokes the interface of that object. For example, when a user on the way home wants to turn on the home air conditioner in advance via the IoT, the client of the IoT application typically runs on the user's mobile phone as a menu or control panel in which a menu item or control-panel item represents the air conditioner's power-on operation with an icon or text. When the user taps it, the interface call of the IoT air conditioner associated with that item is triggered and sent to the air conditioner through the IoT, turning it on.
Traditional IoT video techniques capture video of IoT objects with cameras and are mainly used for video surveillance. Through the video, a user or a monitoring system can see or recognize the real-time status of the IoT objects. For example, in a smart-home IoT, the user can at any time check, on a mobile phone, the real-time home video sent by the home camera through IoT video surveillance.
In traditional IoT interaction techniques, supporting interaction between the user and the IoT objects requires manually designing, one by one, the text or images representing each IoT object and the corresponding control panels or menus in the user interface. The manual development cost is therefore high; text or images representing an object are less intuitive and vivid than video of the object; and the user must learn in advance which text or image represents which IoT object, which makes the system complex to use.
In traditional IoT video techniques, the video cannot be used to interact with the IoT objects; there is no support for user interaction with IoT objects through video. For example, when a user on the way home wants to turn on the home air conditioner in advance via the IoT, clicking the air conditioner's power button in the surveillance video on the phone cannot turn on the real air conditioner corresponding to the one shown in the video.
The prior art therefore remains to be improved and developed.
Summary of the invention
The object of the present invention is to provide an IoT video interaction method addressing the problems that, in the prior art, video can only be used to monitor the state of IoT objects and does not support interaction with those objects through video, so that user-object interaction can only be carried out through text or images representing each IoT object together with control panels or menus. This makes IoT interaction techniques costly to develop and the interaction between the user and the IoT objects neither vivid, intuitive, nor easy. The invention proposes a method that lets the user interact with an IoT object through the video of that object, adding an interactive function to IoT video, reducing the development cost of IoT interaction systems, simplifying the interaction between the user and the IoT objects, and improving the user's interactive experience.
An Internet of Things video interaction method comprises the following steps:
converting an operation on an object in a video into operation content for the IoT object corresponding to that object;
sending the operation content to the IoT object for execution.
Preferably, before the operation on the object in the video is converted into the operation content of the corresponding IoT object, the method further comprises the following steps:
collecting video of the IoT objects;
identifying the IoT objects in the video.
Preferably, after the operation content is sent to the IoT object for execution, the method further comprises the following step:
displaying in the video the latest state of the IoT object after it executes the operation content.
Preferably, the operation on the object in the video is an operation on an action item popped up over that object in the video.
Preferably, the operation on the object in the video is an operation on an operable component of that object in the video.
An Internet of Things video interaction system comprises:
an interaction module for converting an operation on an object in a video into operation content for the IoT object corresponding to that object; and
an execution module for sending the operation content to the IoT object for execution.
Preferably, the system further comprises:
a capture-and-transport module for collecting video of the IoT objects; and
an identification module for identifying the IoT objects in the video.
Preferably, the system further comprises:
a display module for displaying in the video the latest state of the IoT object after it executes the operation content.
Preferably, the operation on the object in the video is an operation on an action item popped up over that object in the video.
Preferably, the operation on the object in the video is an operation on an operable component of that object in the video.
With the above IoT video interaction method and system, IoT video is collected at the user side and the user-side video is used directly as the interface between the user and the IoT objects. This adds an interactive function to IoT video and thereby reduces the development cost of user-object interaction. At the same time, the user can intuitively operate the objects in the user-side video and directly see the state of the IoT objects after the operation, so that what the user sees is what the user gets: the user can interact with the IoT objects immersively through the user-side video. This improves the intuitiveness and vividness of the interaction, reduces its complexity, and improves the user's interactive experience.
Brief description of the drawings
Fig. 1 is a flowchart of the IoT video interaction method in one embodiment;
Fig. 2 is a flowchart of identifying the IoT objects in the video in one embodiment;
Fig. 3 is a flowchart of converting an operation on an object in the video into operation content for the corresponding IoT object in one embodiment;
Fig. 4 is a flowchart of sending the operation content to the IoT object for execution in one embodiment;
Fig. 5 is a flowchart of displaying in the video the latest state of the IoT object after it executes the operation content in one embodiment;
Fig. 6 is a structural diagram of the IoT video interaction system in one embodiment;
Fig. 7 is a structural diagram of the identification module in Fig. 6;
Fig. 8 is a structural diagram of the interaction module in Fig. 6;
Fig. 9 is a structural diagram of the execution module in Fig. 6;
Fig. 10 is a structural diagram of the display module in Fig. 6.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
In one embodiment, as shown in Fig. 1, an Internet of Things video interaction method comprises the following steps:
Step S101: collect video of the IoT objects. Preferably, the video of the IoT objects is captured by a video capture device and transmitted in real time over the network to the user side for display, forming the corresponding user-side video. For example, a camera is placed on a washing machine and aimed at its button panel, and its video is streamed to the user side in real time; at the user side, the user can then operate the washing machine's panel in the video.
Step S102: identify the IoT objects in the video. Preferably, the objects in the user-side video are identified in one-to-one correspondence with the IoT objects; preferably, object parts in the user-side video are likewise identified in one-to-one correspondence with the IoT object parts. Preferably, when the user sees that an object or component has been identified incorrectly, pressing a shift key switches the recognition-result bar into an editable mode that accepts the user's corrections. For example, if the video contains a washing machine and a refrigerator but recognition labels the washing machine as the refrigerator and vice versa, the user enters the edit mode and simply swaps the two results. (This hypothetical is chosen for clarity; since a washing machine and a refrigerator look very different, such a mistake rarely occurs during recognition.) A more common error is that the identified object region is slightly off, for example when a stool beside the washing machine is also identified as part of it; the user can then enter the edit mode and remove the stool from the identified washing-machine region.
Step S103: convert an operation on an object in the video into operation content for the IoT object corresponding to that object. Preferably, the operation content includes the address at which the IoT object receives operation content and the content of the operation the IoT object must execute. Preferably, the user's operation on an object in the user-side video is mapped to operation content for the corresponding IoT object; preferably, the user's operation on an object part is mapped to operation content for the corresponding IoT object part. Preferably, clicking is an operation type supported by all objects and components in the video, while other operation types, such as input, card swiping, or fingerprint scanning, require right-clicking the target object or component in the video (e.g., the access-control card reader shown in the video); a right-click popup menu then shows the operation types that the object or component supports. For example, the user can select the card-swiping operation type and then simply swipe a card on the card reader connected to the user-side terminal; the swipe data is transmitted as operation content to the corresponding access-control system in the IoT.
Step S104: send the operation content to the IoT object for execution. Preferably, the operation content for the IoT object is sent to that object for execution. For example, an IoT air conditioner that receives a power-on operation through the IoT turns itself on automatically.
Step S105: display in the video the latest state of the IoT object after it executes the operation content. Preferably, the user-side video is used to display this latest state. For example, if the user's operation is clicking the air conditioner's temperature display in the video, the IoT air conditioner feeds its temperature reading back to the user-side video; if the user's operation is clicking the air conditioner's power button in the video, the state of the air conditioner after it turns on is shown in the user-side video.
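The five steps S101-S105 above can be sketched as a single interaction pass. The sketch below is illustrative only: the `OperationContent` class, the injected stage callables, and the toy air-conditioner state are assumptions for this example, not part of the patent; the patent only fixes the role of each step.

```python
# A minimal sketch of one pass through steps S101-S105, assuming the
# five stages can be injected as callables. OperationContent's fields
# (device_address, payload) are illustrative: the text only says the
# operation content carries the object's receiving address and the
# operation to execute.
from dataclasses import dataclass

@dataclass
class OperationContent:
    device_address: str  # where the IoT object receives operation content
    payload: str         # the operation the IoT object must execute

def run_interaction(capture, recognize, map_operation, send, render):
    frame = capture()               # S101: collect video of the IoT object
    objects = recognize(frame)      # S102: identify IoT objects in the video
    op = map_operation(objects)     # S103: translate the on-video operation
    new_state = send(op)            # S104: deliver to the IoT object
    return render(new_state)        # S105: show the object's latest state

# Toy stand-ins for the air-conditioner example in the text.
device_states = {"ac-01": "off"}

def send_to_device(op):
    device_states[op.device_address] = op.payload  # device executes the op
    return device_states[op.device_address]

result = run_interaction(
    capture=lambda: "frame-0",
    recognize=lambda frame: ["ac-01"],
    map_operation=lambda objs: OperationContent(objs[0], "on"),
    send=send_to_device,
    render=lambda state: f"ac-01 is now {state}",
)
```

Injecting the stages keeps the loop itself independent of any particular capture device or device protocol, which mirrors how the later module embodiments (Figs. 6-10) split the same steps across modules.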
As shown in Fig. 2, in one embodiment, the step S102 of identifying the IoT objects in the video comprises:
Step S112: identify the type and position of each object in the user-side video.
Step S122: obtain through the IoT, or manually set, the type and position of each IoT object at the user side.
Step S132: fuzzily match the type and position of each IoT object against the type and position of each object identified in the user-side video.
Step S142: mark each successfully matched object in the user-side video with the name and unique identity code of the corresponding IoT object, so that these names and unique identity codes are also shown in the user-side video.
Step S152: accept the user's corrections to the matching result in the user-side video, thereby applying manual recognition to the successfully matched IoT objects.
Step S162: after the identification is complete, when an object in the user-side video moves, relocate it using video motion-tracking technology.
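The type-and-position fuzzy matching of step S132 can be sketched as nearest-neighbour matching within a tolerance. The tolerance value, tuple layouts, and function name below are assumptions for illustration; the patent does not specify the matching criterion beyond "fuzzy".

```python
# A sketch of step S132: each object recognized in the user-side video
# is paired with the configured IoT object of the same type whose
# position is nearest within a tolerance.
import math

def fuzzy_match(recognized, configured, tol=1.0):
    """recognized: list of (type, (x, y)); configured: list of
    (object_id, type, (x, y)); returns {recognized index: object_id}."""
    matches = {}
    for i, (rtype, rpos) in enumerate(recognized):
        best_id, best_dist = None, tol
        for oid, ctype, cpos in configured:
            if ctype != rtype:
                continue  # types must agree before positions are compared
            d = math.dist(rpos, cpos)
            if d <= best_dist:
                best_id, best_dist = oid, d
        if best_id is not None:
            matches[i] = best_id
    return matches
```

Under this sketch, step S142 would label object i in the video with the matched object_id, and step S152 would let the user edit the returned dictionary directly.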
(Note: this embodiment differs from the previous one in that it also supports component-level identification of objects in the video.) In another embodiment, the step S102 of identifying the IoT objects in the video comprises:
Step S'112: identify the type and position of each object, and of each operable component, in the user-side video.
Step S'122: obtain through the IoT, or manually set, the type and position of each IoT object and of its operable components at the user side.
Step S'132: fuzzily match the types and positions of the IoT objects and their operable components against the types and positions of the objects and operable components identified in the user-side video.
Step S'142: mark each successfully matched object and operable component in the user-side video with the name and unique identity code of the corresponding IoT object or component, so that these names and unique identity codes are also shown in the user-side video.
Step S'152: accept the user's corrections to the matching result in the user-side video, thereby applying manual recognition to the successfully matched IoT objects and their operable components.
Step S'162: after the identification is complete, when an object or one of its operable components moves in the user-side video, relocate it using video motion-tracking technology.
As shown in Fig. 3, in one embodiment, the step S103 of converting an operation on an object in the video into operation content for the corresponding IoT object comprises:
Step S113: obtain through the IoT, or manually set, the operation content supported by each IoT object at the user side.
Step S123: take the operation content supported by each IoT object as the operation content issued when the popup menu of the corresponding object in the user-side video is operated.
Step S133: when the user moves the mouse onto an object in the user-side video, pop up the menu over that object in the video.
Step S143: when the user operates an action item on the popup menu, map that operation to the operation content corresponding to the action item.
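The popup-menu mapping of steps S113-S143 can be sketched as a per-object table of supported operations. The table contents, identifiers, and function names below are illustrative assumptions; the patent only fixes that supported operations become the menu and that selecting an item yields the operation content.

```python
# A sketch of steps S113-S143: the operations each IoT object supports
# (obtained over the IoT or set manually) become that object's popup
# menu, and selecting a menu item maps to the object's operation content.
SUPPORTED_OPS = {
    "aircon-01": {"power on": "POWER_ON", "power off": "POWER_OFF"},
}

def popup_menu(object_id):
    # S123/S133: items shown when the mouse hovers over the object.
    return sorted(SUPPORTED_OPS.get(object_id, {}))

def select_item(object_id, item):
    # S143: map the clicked action item to the operation content.
    return SUPPORTED_OPS[object_id][item]
```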
(Note: this embodiment differs from the previous one in that it also supports component-level operation of objects in the video.) In another embodiment, the step S103 of converting an operation on an object in the video into operation content for the corresponding IoT object comprises:
Step S'113: obtain through the IoT, or manually set, the operation content supported by each IoT object and by its operable components at the user side.
Step S'123: take the operation content supported by each IoT object as the operation content issued when the popup menu of the corresponding object in the user-side video is operated, and the operation content supported by each operable component as the operation content issued when the corresponding component in the user-side video is operated.
Step S'133: when the user moves the mouse onto a region of an object in the user-side video that is not an operable component, pop up the menu over that object in the video; when the user moves the mouse onto an operable component of the object, accept the user's operation on that component.
Step S'143: when the user operates an action item on the popup menu, map that operation to the operation content corresponding to the action item; when the user operates an operable component, map that operation to the operation content corresponding to that component.
As shown in Fig. 4, in one embodiment, the step S104 of sending the operation content to the IoT object for execution comprises:
Step S114: when the operation content changes the state of the IoT object, the IoT object changes its own state.
Step S124: when the operation content obtains state content of the IoT object, the IoT object sends its state content back to the user-side video.
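The two execution branches S114 and S124 can be sketched as a dispatch on the kind of operation received by the device. The `Device` model and the "set"/"get" operation kinds are assumptions for illustration; the patent only distinguishes state-changing operations from state-querying ones.

```python
# A sketch of steps S114/S124: an operation either changes the device's
# state or queries it, in which case the state content is returned to
# the user-side video.
class Device:
    def __init__(self, state):
        self.state = state

    def execute(self, op):
        if op["kind"] == "set":   # S114: the object changes its own state
            self.state = op["value"]
            return None
        if op["kind"] == "get":   # S124: state content goes back to the video
            return self.state
        raise ValueError(f"unknown operation kind: {op['kind']}")
```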
As shown in Fig. 5, in one embodiment, the step S105 of displaying in the video the latest state of the IoT object after it executes the operation content comprises:
Step S115: capture, with the video capture device, video of the IoT object's latest state after it executes the operation content, and transmit the captured video over the network to the user side for display.
Step S125: when the user-side video receives the state content sent by the IoT object, display that state content on the corresponding object in the user-side video.
In one embodiment, as shown in Fig. 6, an Internet of Things video interaction system comprises a capture-and-transport module 101, an identification module 102, an interaction module 103, an execution module 104, and a display module 105 (note: the examples given for the embodiment corresponding to Fig. 1 apply equally here and are not repeated), wherein:
The capture-and-transport module 101 collects video of the IoT objects. Preferably, it captures the video with a video capture device and transmits the captured video in real time over the network to the user side for display, forming the corresponding user-side video.
The identification module 102 identifies the IoT objects in the video. Preferably, it identifies the objects in the user-side video in one-to-one correspondence with the IoT objects; preferably, it likewise identifies object parts in the user-side video in one-to-one correspondence with the IoT object parts.
The interaction module 103 converts an operation on an object in the video into operation content for the corresponding IoT object. Preferably, the operation content includes the address at which the IoT object receives operation content and the content of the operation the IoT object must execute. Preferably, the module maps the user's operation on an object in the user-side video to operation content for the corresponding IoT object, and the user's operation on an object part to operation content for the corresponding IoT object part.
The execution module 104 sends the operation content to the IoT object for execution.
The display module 105 displays in the video the latest state of the IoT object after it executes the operation content. Preferably, it uses the user-side video to display this latest state.
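The five modules above can be sketched as a composed pipeline mirroring steps S101-S105. The module interfaces (plain callables here) are illustrative assumptions; the patent only fixes each module's role, not its API.

```python
# A sketch of the five-module system of Fig. 6, wired in the same order
# as steps S101-S105.
class VideoInteractionSystem:
    def __init__(self, capture, identify, interact, execute, display):
        self.capture = capture    # capture-and-transport module 101
        self.identify = identify  # identification module 102
        self.interact = interact  # interaction module 103
        self.execute = execute    # execution module 104
        self.display = display    # display module 105

    def handle(self, user_action):
        frame = self.capture()
        objects = self.identify(frame)
        op = self.interact(objects, user_action)
        state = self.execute(op)
        return self.display(state)
```

Splitting the pipeline this way lets each module be replaced independently, which is how the later embodiments refine the identification and interaction modules into sub-modules (Figs. 7 and 8) without touching the others.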
As shown in Fig. 7, in one embodiment, the identification module 102 comprises a video type-and-position identification module 112, an IoT object type-and-position setting module 122, an object fuzzy-matching module 132, a recognition-result display module 142, a recognition-result correction module 152, and a video relocation module 162, wherein:
The video type-and-position identification module 112 identifies the type and position of each object in the user-side video.
The IoT object type-and-position setting module 122 obtains through the IoT, or manually sets, the type and position of each IoT object at the user side.
The object fuzzy-matching module 132 fuzzily matches the type and position of each IoT object against the type and position of each object identified in the user-side video.
The recognition-result display module 142 marks each successfully matched object in the user-side video with the name and unique identity code of the corresponding IoT object, so that these names and unique identity codes are also shown in the user-side video.
The recognition-result correction module 152 accepts the user's corrections to the matching result in the user-side video, thereby applying manual recognition to the successfully matched IoT objects.
The video relocation module 162 relocates an object in the user-side video using video motion-tracking technology when the object moves after the identification is complete.
(Note: this embodiment differs from the previous one in that it also supports component-level identification of objects in the video.) In another embodiment, the identification module 102 comprises a video type-and-position identification module 112, an IoT object type-and-position setting module 122, an object fuzzy-matching module 132, a recognition-result display module 142, a recognition-result correction module 152, and a video relocation module 162, wherein:
The video type-and-position identification module 112 identifies the type and position of each object, and of each operable component, in the user-side video.
The IoT object type-and-position setting module 122 obtains through the IoT, or manually sets, the type and position of each IoT object and of its operable components at the user side.
The object fuzzy-matching module 132 fuzzily matches the types and positions of the IoT objects and their operable components against the types and positions of the objects and operable components identified in the user-side video.
The recognition-result display module 142 marks each successfully matched object and operable component in the user-side video with the name and unique identity code of the corresponding IoT object or component, so that these names and unique identity codes are also shown in the user-side video.
The recognition-result correction module 152 accepts the user's corrections to the matching result in the user-side video, thereby applying manual recognition to the successfully matched IoT objects and their operable components.
The video relocation module 162 relocates an object, or one of its operable components, in the user-side video using video motion-tracking technology when it moves after the identification is complete.
As shown in Fig. 8, in one embodiment, the interaction module 103 comprises an IoT object operation-content setting module 113, an operation-content setting module 123, an operation trigger module 133, and an operation-content mapping module 143, wherein:
The IoT object operation-content setting module 113 obtains through the IoT, or manually sets, the operation content supported by each IoT object at the user side.
The operation-content setting module 123 takes the operation content supported by each IoT object as the operation content issued when the popup menu of the corresponding object in the user-side video is operated.
The operation trigger module 133 pops up the menu over an object in the video when the user moves the mouse onto that object in the user-side video.
The operation-content mapping module 143 maps the user's operation on an action item of the popup menu to the operation content corresponding to that action item.
(This embodiment differs from the previous one in that it also supports operations at the level of operable components.) In another embodiment, the interaction module 103 comprises an Internet of Things object operation content setting module 113, an operation content setting module 123, an operation trigger module 133 and an operation content mapping module 143, wherein:
The Internet of Things object operation content setting module 113 is configured to obtain over the Internet of Things, or to set manually, at the user side, the operation content supported by each Internet of Things object and by each of its operable components;
The operation content setting module 123 is configured to set the operation content supported by each Internet of Things object as the operation content that applies when the popup menu of the corresponding object in the user-side video is operated, and to set the operation content supported by each operable component as the operation content that applies when that component of the corresponding object in the user-side video is operated;
The operation trigger module 133 is configured to pop up the menu on an object when the user moves the mouse onto a region of the object in the user-side video that is not an operable component, and to receive the user's operation on an operable component when the user moves the mouse onto an operable-component region of the object in the user-side video;
The operation content mapping module 143 is configured to map the user's operation on an action item of the popup menu to the operation content corresponding to that action item, and to map the user's operation on an operable component to the operation content corresponding to that component.
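The component-level variant reduces to a hit test on the mouse position: inside an operable-component region the click maps directly to that component's operation content; elsewhere on the object it falls back to the popup menu. The rectangle-based region model below is an assumption for illustration.

```python
# Hypothetical sketch of the component-level embodiment: route a click either
# to an operable component's own operation content or to the popup menu.

def route_click(x, y, components, fallback="popup menu"):
    """components: {name: (left, top, right, bottom, operation_content)}"""
    for name, (l, t, r, b, op) in components.items():
        if l <= x <= r and t <= y <= b:
            return op            # click on the operable component itself
    return fallback              # non-component region: show the popup menu

# The air conditioner's power button occupies a small region of the video.
aircon_components = {"power_button": (10, 10, 20, 20, "SET power=on")}
assert route_click(15, 15, aircon_components) == "SET power=on"
assert route_click(50, 50, aircon_components) == "popup menu"
```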
As shown in Figure 9, in one embodiment, the execution module 104 comprises a state change execution module 114 and a state acquisition execution module 124, wherein:
State change execution module 114: when the operation content is to change the state of the Internet of Things object, the Internet of Things object changes its own state;
State acquisition execution module 124: when the operation content is to obtain the state content of the Internet of Things object, the Internet of Things object sends its state content back to the user-side video.
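The two execution paths amount to a dispatch on the kind of operation content: commands that change state (module 114) versus queries whose result is returned to the user-side video (module 124). The `SET`/`GET` message format below is an assumption, not specified by the patent.

```python
# Sketch of execution module 104 on the IoT object side: operation content
# either changes the object's own state or returns state content to the video.

class IoTObject:
    def __init__(self):
        self.state = {"power": "off"}

    def execute(self, content: str):
        kind, _, rest = content.partition(" ")
        if kind == "SET":                 # module 114: change own state
            key, _, value = rest.partition("=")
            self.state[key] = value
            return None
        if kind == "GET":                 # module 124: send state content back
            return self.state.get(rest)

aircon = IoTObject()
aircon.execute("SET power=on")
assert aircon.execute("GET power") == "on"
```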
As shown in Figure 10, in one embodiment, the display module 105 comprises a video update display module 115 and a state content display module 125, wherein:
Video update display module 115: a video capture device collects video of the latest state of the Internet of Things object after it has executed the operation content, and the collected video of that latest state is then transmitted over the network and displayed at the user side;
State content display module 125: when the user-side video receives the state content sent by the Internet of Things object, the state content is displayed on the corresponding object in the user-side video.
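On the user side, module 125 amounts to attaching the returned state content to the corresponding object's region in the video as an overlay label; the label-dictionary representation of the video frame below is a simplifying assumption.

```python
# Sketch of state content display module 125: state content returned by an
# IoT object is shown as an overlay on that object in the user-side video.

def overlay_state(frame_labels: dict, object_id: str, state: str) -> dict:
    """Attach the received state content to the object's video overlay."""
    frame_labels[object_id] = state
    return frame_labels

labels = {}
overlay_state(labels, "air_conditioner:ID001", "power=on, 26°C")
assert labels["air_conditioner:ID001"] == "power=on, 26°C"
```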
The effects of the above embodiments are described below with concrete examples:
Example 1: video of an Internet of Things air conditioner is collected in real time; clicking the power-on button of the air conditioner in the video turns on the corresponding Internet of Things air conditioner;
Example 2: while a person is away on a business trip, a secretary needs to enter that person's office to open its door, which has a combination lock that opens only when the correct password is entered. The person need only remotely click the digit keys of the lock in the corresponding video to enter the password; the password (obtained by recognizing which digit keys in the video the mouse clicked) is automatically sent to the combination lock over the Internet of Things, and the door opens as long as the password is correct;
Example 3: when opening an Internet of Things door lock through video interaction, the user may also right-click the lock; a password input box then pops up at the lock's position in the video, and the user need only type the password into the box with the keyboard, which has the same effect as clicking the corresponding digit keys on the lock;
Example 4: if an Internet of Things door is opened by fingerprint recognition, the user may right-click the fingerprint reader in the video; a prompt requesting a fingerprint scan then pops up at the reader's position in the video, and the user need only scan his or her fingerprint on the user-side fingerprint scanner, which has the same effect as scanning on the Internet of Things fingerprint reader; if the fingerprint matches, the door on the Internet of Things opens;
Example 5: in a smart-home Internet of Things, simply by installing Internet of Things cameras at home, the air conditioner, lamps, curtains, refrigerator and so on can all be transmitted to the user side as video and made available for interaction; the user clicks the corresponding object in the video to perform the corresponding operation. For example, when the user wants to turn on the air conditioner, he or she need only click its power-on button in the video at the client; having done so, the user immediately sees in the video the real state of the air conditioner turning on, just as when turning on an air conditioner in reality.
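The flow of Example 5 can be sketched end to end: a click on the air conditioner's power-on button in the video is converted into operation content, executed on the real Internet of Things object, and the latest state is shown back in the video. Every name and message format here is an illustrative assumption, not the patent's implementation.

```python
# End-to-end sketch of Example 5: click in video -> operation content ->
# execution on the IoT object -> latest state shown back in the video.

class AirConditioner:
    def __init__(self):
        self.power = "off"

    def execute(self, content: str) -> str:
        if content == "SET power=on":
            self.power = "on"
        return f"power={self.power}"      # state content sent back to video

def click_power_button(video_labels: dict, device: AirConditioner) -> dict:
    content = "SET power=on"              # operation content for the click
    state = device.execute(content)       # executed on the real IoT object
    video_labels["air_conditioner"] = state   # latest state shown in video
    return video_labels

aircon = AirConditioner()
labels = click_power_button({}, aircon)
assert aircon.power == "on"
assert labels["air_conditioner"] == "power=on"
```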
The above Internet of Things video interaction method and system enable people to operate Internet of Things objects through Internet of Things video. This overturns the state of the art, in which existing Internet of Things video technology can only monitor the real-time status of Internet of Things objects and cannot intervene in their state. The method is also unprecedented for video interaction technology, extending it from stand-alone machines and the Internet to the Internet of Things and breaking through its original boundaries.
The above Internet of Things video interaction method and system collect video of Internet of Things objects, transmit the video to the user side, and use it directly as the interface between the user and the objects. Since video collection and transmission are inexpensive — an ordinary IP camera suffices — adding an interactive function to Internet of Things video greatly reduces the development cost of interaction between users and Internet of Things objects. At the same time, because video presents the Internet of Things objects and their real-time status intuitively, vividly and realistically, interaction between the user and the objects becomes simpler and easier, greatly improving the convenience of using the Internet of Things. Moreover, after an interaction, the latest state of the object can be seen immediately in the video, greatly improving the quality of the interaction and the user's experience. For example, in a smart-home Internet of Things, when the user clicks the power-on button of the air conditioner in the monitoring video, the air conditioner at home really turns on; the monitoring video thus gains value beyond mere monitoring by also serving as an interactive interface, and the real-time status of the running air conditioner can then be seen in the video. This interaction mode is simple and fast, and matches the way people interact with objects in reality; it will greatly promote the development of Internet of Things interaction technology and play an important role in popularizing Internet of Things applications.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent claims. It should be noted that persons of ordinary skill in the art may make further variations and improvements without departing from the concept of the invention, and these all fall within the scope of protection of the invention. Therefore, the scope of protection of this patent shall be determined by the appended claims.

Claims (10)

1. An Internet of Things video interaction method, comprising the following steps:
converting an operation on an object in a video into operation content for the Internet of Things object corresponding to the object in the video;
sending the operation content to the Internet of Things object for execution.
2. The Internet of Things video interaction method according to claim 1, characterized in that, before the operation on the object in the video is converted into operation content for the Internet of Things object corresponding to the object in the video, the method further comprises the following steps:
collecting video of the Internet of Things object;
identifying the Internet of Things object in the video.
3. The Internet of Things video interaction method according to claim 1, characterized in that, after the operation content is sent to the Internet of Things object for execution, the method further comprises the following step:
displaying in the video the latest state of the Internet of Things object after it has executed the operation content.
4. The Internet of Things video interaction method according to claim 1, characterized in that the operation on the object in the video is an operation on an action item popped up on the object in the video.
5. The Internet of Things video interaction method according to claim 1, characterized in that the operation on the object in the video is an operation on an operable component of the object in the video.
6. An Internet of Things video interaction system, characterized in that it comprises:
an interaction module, configured to convert an operation on an object in a video into operation content for the Internet of Things object corresponding to the object in the video;
an execution module, configured to send the operation content to the Internet of Things object for execution.
7. The Internet of Things video interaction system according to claim 6, characterized in that the system further comprises:
a collection and transmission module, configured to collect video of the Internet of Things object;
an identification module, configured to identify the Internet of Things object in the video.
8. The Internet of Things video interaction system according to claim 6, characterized in that the system further comprises:
a display module, configured to display in the video the latest state of the Internet of Things object after it has executed the operation content.
9. The Internet of Things video interaction system according to claim 6, characterized in that the operation on the object in the video is an operation on an action item popped up on the object in the video.
10. The Internet of Things video interaction system according to claim 6, characterized in that the operation on the object in the video is an operation on an operable component of the object in the video.
CN201410066271.7A 2014-02-26 2014-02-26 Internet of Things video interactive method and system Active CN103777851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410066271.7A CN103777851B (en) 2014-02-26 2014-02-26 Internet of Things video interactive method and system


Publications (2)

Publication Number Publication Date
CN103777851A true CN103777851A (en) 2014-05-07
CN103777851B CN103777851B (en) 2018-05-29

Family

ID=50570169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410066271.7A Active CN103777851B (en) 2014-02-26 2014-02-26 Internet of Things video interactive method and system

Country Status (1)

Country Link
CN (1) CN103777851B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020103898A1 (en) * 2001-01-31 2002-08-01 Moyer Stanley L. System and method for using session initiation protocol (SIP) to communicate with networked appliances
WO2002103470A2 (en) * 2001-06-14 2002-12-27 Scientific-Atlanta, Inc. System and method for access and placement of media content information items on a screen display with a remote control device
CN1894012A (en) * 2003-12-19 2007-01-10 皇家飞利浦电子股份有限公司 Interactive video
CN101582053A (en) * 2008-05-13 2009-11-18 苹果公司 Pushing interface from portable media device to accessory
CN102577250A (en) * 2009-10-05 2012-07-11 阿尔卡特朗讯公司 Device for interaction with an augmented object
CN102662378A (en) * 2012-05-18 2012-09-12 天津申能科技有限公司 Remote interaction system based on Internet of things technology


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104597759A (en) * 2014-12-26 2015-05-06 深圳市兰丁科技有限公司 Network video based household control method and system and intelligent household management system
CN105955040A (en) * 2016-05-20 2016-09-21 深圳市大拿科技有限公司 Intelligent household system according to real-time video picture visual control and control method thereof
CN105955043A (en) * 2016-05-27 2016-09-21 浙江大学 Augmented-reality type visible controllable intelligent household control system and method
CN105955043B (en) * 2016-05-27 2019-02-01 浙江大学 A kind of visible i.e. controllable intelligent home furnishing control method of augmented reality type
CN107168085A (en) * 2017-06-28 2017-09-15 杭州登虹科技有限公司 A kind of intelligent home device long-range control method, device, medium and computing device
CN107168085B (en) * 2017-06-28 2021-09-24 杭州登虹科技有限公司 Intelligent household equipment remote control method, device, medium and computing equipment
CN108769608A (en) * 2018-06-14 2018-11-06 视云融聚(广州)科技有限公司 A kind of video integration method of multi-dimensional data

Also Published As

Publication number Publication date
CN103777851B (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN103777851A (en) Method and system for video interaction of internet of things
CN100456235C (en) Method and system for screen drawing-sectioning in instant messaging
CN103529762B (en) A kind of intelligent home furnishing control method based on sensor technology and system
CN105301971B (en) A kind of method and its system controlling smart machine based on image recognition
CN102637127B (en) Method for controlling mouse modules and electronic device
CN106575239A (en) Mobile application state identifier framework
CN107870997B (en) Conference blackboard-writing file management method and device, display device and storage medium
CN104503248A (en) Task setting method and device
CN101866226A (en) Mobile positioning operation device of portable electronic equipment and operation method
CN106533862A (en) Method and device for adapting of terminal and intelligent home equipment, terminal
CN108475160A (en) Image processing apparatus, method for displaying image and program
CN101346683A (en) Display data extraction methods, devices and computer systems utilizing the same
WO2022089431A1 (en) Device control method and apparatus, and electronic device
CN101281441A (en) System and method for constituting big screen multi-point touch screen
CN109581886A (en) Apparatus control method, device, system and storage medium
CN112270798A (en) Express delivery cabinet new user pickup method, device, server and system
CN107402762B (en) Fingerprint navigation implementation method and device
CN109120487A (en) Smart machine wireless control method and device
CN105955040A (en) Intelligent household system according to real-time video picture visual control and control method thereof
CN103067782B (en) A kind of bimanual input interactive operation processing method and system based on intelligent television
CN109492515B (en) Information processing apparatus, data structure of image file, and computer-readable medium
CN204315077U (en) A kind of intelligentized omnipotent infrared remote control system
CN109543384A (en) Using starting method and relevant device
CN102750094A (en) Image acquiring method
CN106339089B (en) A kind of interactive action identifying system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180322

Address after: 523000, Guangdong province Dongguan Songshan Lake hi tech Industrial Development Zone Creative Life City shopping mall B two floor shopping mall 2 part of the site (No. 201)

Applicant after: Great power innovative Intelligent Technology (Dongguan) Co., Ltd.

Address before: Tianhe District Shipai computer science and engineering South China Normal University Guangzhou 510630 Guangdong Province

Applicant before: Zhu Dingju

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant