CN104021183A - Multi-device interaction method and multi-device interaction system - Google Patents

Multi-device interaction method and multi-device interaction system

Info

Publication number
CN104021183A
CN104021183A (application CN201410255648.3A)
Authority
CN
China
Prior art keywords
equipment
machine
related information
readable
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410255648.3A
Other languages
Chinese (zh)
Inventor
刘嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd filed Critical Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201410255648.3A priority Critical patent/CN104021183A/en
Publication of CN104021183A publication Critical patent/CN104021183A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention disclose a multi-device interaction method and a multi-device interaction system. The multi-device interaction method comprises: obtaining at least one machine-readable feature corresponding to at least one piece of display content displayed by a first device, the machine-readable feature comprising associated information of the display content; and providing, to a second device, the associated information used for making the second device display the at least one piece of display content. With this technical scheme, display content can be shared between the first device and the second device even when no connection has been established between them in advance. The method and system are particularly applicable when neither the first device nor the second device can be moved conveniently, are easy to implement, and improve the user experience.

Description

Multi-device interaction method and multi-device interaction system
Technical field
The present application relates to multi-device interaction technology, and in particular to a multi-device interaction method and a multi-device interaction system.
Background
With the development of technology, a user may own several devices with a display function, such as a computer, a mobile phone, and a television. In some cases, the user may need content shown on one screen to be presented on another device. For example, a user watching a video program on a computer in the bedroom may wish to continue watching it on a television in the living room after moving there, which requires conveniently transferring information about the displayed content between different display devices.
Summary of the invention
An object of the present application is to provide a multi-device interaction solution.
In a first aspect, the present application provides a multi-device interaction method, comprising:
obtaining at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
providing, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
In a second aspect, the present application provides an interactive device, comprising:
a feature acquisition module, configured to obtain at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
an information providing module, configured to provide, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
In a third aspect, the present application provides a multi-device interaction system, comprising:
the interactive device described above, and a second device;
wherein the second device comprises:
a communication module, configured to obtain, from the interactive device, associated information of at least one piece of presentation content presented by a first device; and
a presentation module, configured to present the at least one piece of presentation content according to the associated information.
In at least one embodiment of the present application, an intermediate device obtains associated information related to content presented by a first device and provides it to a second device, so that the second device can present the content presented by the first device according to the associated information. Presentation content can thus be shared even when no connection has been established between the first device and the second device in advance. This is particularly suitable for situations in which neither the first device nor the second device can be moved conveniently; it is easy to implement and improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a multi-device interaction method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a multi-device interaction method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an interactive device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an interactive device according to an embodiment of the present application;
Fig. 4a-4c are schematic structural diagrams of the feature acquisition modules of three interactive devices according to embodiments of the present application;
Fig. 5a and 5b are schematic structural diagrams of two interactive devices according to embodiments of the present application;
Fig. 6 is a schematic structural diagram of a near-eye wearable device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a multi-device interaction system according to an embodiment of the present application;
Fig. 7a is a schematic structural diagram of a multi-device interaction system according to an embodiment of the present application;
Fig. 7b and Fig. 7c are schematic structural diagrams of the presentation modules of two multi-device interaction systems according to embodiments of the present application;
Fig. 8 is a schematic structural diagram of an interactive device according to an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in further detail below with reference to the accompanying drawings (in which identical reference numerals denote identical elements) and the embodiments. The following embodiments are intended to illustrate the present application, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the present application are used only to distinguish different steps, devices, modules, and the like; they neither denote any particular technical meaning nor imply a necessary logical order between them.
As shown in Fig. 1, an embodiment of the present application provides a multi-device interaction method, comprising:
S110: obtaining at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content;
S120: providing, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
For example, an interactive device provided by the present application executes S110 to S120 as the execution subject of this embodiment. Specifically, the interactive device may be arranged in a user equipment in the form of software, hardware, or a combination of software and hardware, or the interactive device may itself be the user equipment; the user equipment includes, but is not limited to, a smartphone, smart glasses, a smart helmet, a mobile phone, a tablet computer, a smart bracelet, a smart ring, and the like, where smart glasses are further divided into smart frame glasses and smart contact lenses.
In the embodiments of the present application, an intermediate device obtains associated information related to content presented by a first device and provides it to a second device, so that the second device can present the content presented by the first device according to the associated information. Presentation content can thus be shared even when no connection has been established between the first device and the second device in advance. This is particularly suitable for situations in which neither the first device nor the second device can be moved conveniently; it is easy to implement and improves the user experience.
The steps of the method of the embodiments of the present application are further described through the embodiments below.
S110: obtaining at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device.
In the embodiments of the present application, the at least one machine-readable feature may be an image tag such as a barcode, a QR code, a character, a symbol, or a watermark; it may also be an audio tag such as a sound outside the range of human hearing (for example ultrasound), a sound embedded in a segment of audio content, or a sound within the audio content itself; it may further be a feature such as a radio-frequency signal, a visible-light signal, or an invisible-light signal.
In the embodiments of the present application, the at least one piece of presentation content presented by the first device is, for example: display content such as at least one web page, picture, or document window shown by a display device; audio content such as music or speech played by an audio playback device; or display content and audio content presented simultaneously, such as the audio-video content presented by a multimedia playback device.
In one possible embodiment, the at least one piece of presentation content may be a single piece of presentation content, for example the display content or audio content the user is currently watching or listening to, such as the frontmost window among the web page windows opened by the user, or the window the user is currently watching as determined from the detected gaze point of the user. In another possible embodiment, the at least one piece of presentation content may be multiple pieces of presentation content, for example the multiple display windows the user currently has open.
Generally, each piece of presentation content corresponds to one machine-readable feature; therefore, when multiple pieces of presentation content are involved, the at least one machine-readable feature is generally also a corresponding number of machine-readable features.
In the embodiments of the present application, obtaining the at least one machine-readable feature in step S110 comprises:
collecting the at least one machine-readable feature, for example by means of a corresponding sensor (a camera, a microphone, a radio-frequency receiver, and so on), as illustrated below.
In one possible embodiment, the at least one piece of presentation content is at least one piece of display content. In this embodiment, the first device is a device with a display function, such as a television, a computer, or a mobile phone. The at least one machine-readable feature is at least one machine-readable image tag comprised in the at least one piece of display content.
Here, the machine-readable image tag may be embedded in the display content. For example, when the display content is a web page window, the machine-readable image tag may be a visible (for example, a QR code) or invisible (for example, a digital watermark) image tag embedded in a specific region or background portion of the web page window. The machine-readable image tag may be embedded in the web page window by a website server, or by the first device.
In this embodiment, the machine-readable image tag can be obtained by means of image acquisition, for example by obtaining an image containing the machine-readable image tag through a camera.
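Purely as an illustration of this image-acquisition step, the following Python sketch captures one camera frame and decodes a QR-code image tag with OpenCV. The choice of OpenCV and the assumption that the image tag is a QR code are made for the example only and are not prescribed by the embodiments.

```python
# Illustrative sketch only: assumes the machine-readable image tag is a QR code
# and that OpenCV (cv2) is available on the interactive device.
from typing import Optional
import cv2

def capture_image_tag(camera_index: int = 0) -> Optional[str]:
    """Grab one frame from the camera and try to decode a QR-code image tag."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None  # no frame captured
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)
    return payload or None  # an empty string means no QR code was found

if __name__ == "__main__":
    print("decoded image tag payload:", capture_image_tag())
```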
In another possible embodiment, the at least one piece of presentation content is at least one piece of audio content. In this embodiment, the first device is a device with an audio playback function, such as a music player, a television, a computer, or a mobile phone. The at least one machine-readable feature is at least one machine-readable audio tag comprised in the at least one piece of audio content.
In one possible embodiment, the machine-readable audio tag is embedded in the audio content, for example as ultrasound or another sound that is hardly perceptible to human hearing embedded in the audio content, or as an audible sound inserted in a gap of the audio content; alternatively, the machine-readable audio tag may be a part of the audio content itself.
In this embodiment, the machine-readable audio tag can be obtained by means of sound acquisition.
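A corresponding sound-acquisition sketch is given below. It records a short clip and checks for energy near an assumed 19 kHz near-ultrasonic carrier; the carrier frequency, the sounddevice recording library, and the detection threshold are all assumptions for illustration, since the embodiments do not fix a particular audio-tag encoding.

```python
# Illustrative sketch only: detects energy near an assumed 19 kHz carrier that
# might mark a machine-readable audio tag. A real tag would carry modulated data.
import numpy as np
import sounddevice as sd

FS = 48000           # sample rate (Hz); must be high enough for the carrier
CARRIER_HZ = 19000   # assumed near-ultrasonic carrier frequency
DURATION_S = 1.0

def audio_tag_present(threshold: float = 10.0) -> bool:
    clip = sd.rec(int(FS * DURATION_S), samplerate=FS, channels=1)
    sd.wait()  # block until the recording is finished
    spectrum = np.abs(np.fft.rfft(clip[:, 0]))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / FS)
    band = (freqs > CARRIER_HZ - 200) & (freqs < CARRIER_HZ + 200)
    noise_floor = np.median(spectrum) + 1e-12
    return spectrum[band].max() / noise_floor > threshold
```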
In yet another possible embodiment, the at least one piece of presentation content comprises both the at least one piece of display content and the at least one piece of audio content; in this case, the machine-readable features corresponding to these two kinds of content can be obtained by image acquisition and sound acquisition respectively.
In addition, in other possible embodiments, when the first device is a multimedia display device capable of presenting multiple kinds of content, the at least one piece of presentation content may also be a piece of display content whose corresponding machine-readable feature is a feature other than a machine-readable image tag, such as a machine-readable audio tag or a visible-light feature.
In one possible embodiment, the associated information of the at least one piece of presentation content comprises: link information of the at least one piece of presentation content.
Here, the link information of the at least one piece of presentation content is path information linking to the source of the at least one piece of presentation content. For example, when the presentation content is a web page window, the link information may be the web address of the web page window; alternatively, the link information may be path information for linking to the first device and copying the presentation content of the first device.
In another possible embodiment, the associated information of the at least one piece of presentation content may comprise: attribute information of the at least one piece of presentation content.
When no link information is available, link information of the at least one piece of presentation content can be obtained according to the attribute information, so that the second device can obtain the source of the at least one piece of presentation content according to the link information and perform the corresponding content presentation.
To further facilitate use, in one possible embodiment the associated information of the at least one piece of presentation content may further comprise: current presentation position information.
For example, when the presentation content is a piece of audio-video content that is 30 minutes long in total and is currently played to the 17th minute, the associated information may comprise the current presentation position information, so that the user can view the content on the second device directly from its presentation position on the first device without performing an extra fast-forward or rewind operation.
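To make the three kinds of associated information concrete, a purely illustrative data structure is sketched below; the field names are assumptions introduced for the example and are not terms defined by the present application.

```python
# Illustrative data structure for the associated information; all names are
# hypothetical. Any of the three fields may be absent in a given embodiment.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class AssociatedInfo:
    link: Optional[str] = None                 # link information (a URL, or a path to the first device)
    attributes: Dict[str, str] = field(default_factory=dict)  # attribute information (e.g. title, program id)
    position_seconds: Optional[float] = None   # current presentation position information

# Example for the 30-minute audio-video content currently played to the 17th minute
# (the URL is hypothetical):
info = AssociatedInfo(link="http://example.com/video/123",
                      attributes={"title": "some program"},
                      position_seconds=17 * 60)
```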
In one possible embodiment, the execution subject of the method of the embodiments of the present application already has a wired or wireless communication connection with the second device, and step S120 can be performed directly after step S110.
In another possible embodiment, the execution subject of the method of the embodiments of the present application has no communication connection with the second device, and the method further comprises, before step S120: establishing a connection with the second device. For example, by the connection method mentioned in the Chinese patent with publication number CN102595643A, the interactive device obtains a machine-readable marker on the second device to establish a connection with the second device. Alternatively, the connection may be established in another wired or wireless manner. For ease of use, the present application preferably establishes the connection wirelessly.
S120: providing, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
In the embodiments of the present application, the associated information may be provided to the second device in step S120 either by providing the associated information directly or by providing the machine-readable feature that comprises the associated information. Specifically:
In one possible embodiment, providing the associated information to the second device comprises:
sending the at least one machine-readable feature to the second device;
the associated information of the at least one piece of presentation content is then obtained by the second device from the at least one machine-readable feature.
In another possible embodiment, providing the associated information to the second device comprises: sending the associated information to the second device.
In this embodiment, before providing the associated information to the second device, the method further comprises:
processing the at least one machine-readable feature to obtain the associated information.
For example, the at least one machine-readable feature is decoded to obtain the associated information. When the machine-readable feature is a QR code, the QR code is decoded by a corresponding decoding method to obtain the associated information comprised in the QR code. As another example, when the machine-readable feature is a part of the audio content described above, pattern recognition may be performed on that part of the audio content to obtain the attribute information corresponding to the audio content.
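As a hedged illustration of this processing-and-sending variant, the sketch below assumes that the decoded QR payload is a small JSON object carrying the associated information and forwards it to the second device over a plain TCP connection; the JSON schema, the port number, and the transport are assumptions for the example only.

```python
# Illustrative sketch: decode the machine-readable feature into associated
# information and send it to the second device. Assumes the QR payload is JSON
# of the form {"link": ..., "attributes": ..., "position_seconds": ...} and
# that the second device listens on TCP port 9100; both are assumptions.
import json
import socket

def extract_associated_info(qr_payload: str) -> dict:
    """Processing step: here simply JSON decoding of the tag payload."""
    return json.loads(qr_payload)

def send_to_second_device(info: dict, host: str, port: int = 9100) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(json.dumps(info).encode("utf-8") + b"\n")

# Usage (hypothetical address of the second device):
# payload = capture_image_tag()
# if payload:
#     send_to_second_device(extract_associated_info(payload), "192.168.1.20")
```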
It can be seen from the above that in the embodiments of the present application, the execution subject of the method does not necessarily have a communication connection with the first device.
In another possible embodiment, the execution subject of the method of the embodiments of the present application may be the multi-device interaction system described below, which comprises the interactive device described above and the second device.
As shown in Fig. 2, in this embodiment, in addition to the steps on the interactive device side described above, the method of the embodiments of the present application further comprises, on the second device side:
S130: obtaining the associated information;
S140: presenting the at least one piece of presentation content according to the associated information.
In one possible embodiment, when what the interactive device sends to the second device is the associated information, obtaining the associated information comprises:
receiving the associated information.
As described above, in another possible embodiment, what the interactive device sends to the second device is the at least one machine-readable feature; in this case, obtaining the associated information on the second device side comprises:
receiving the at least one machine-readable feature from the interactive device; and
processing the at least one machine-readable feature to obtain the associated information.
For the specific method of processing the at least one machine-readable feature to obtain the associated information, refer to the corresponding description in the embodiments above.
In one possible embodiment, when the associated information is link information linking to an external source such as a website server as described above, presenting the at least one piece of presentation content according to the associated information may be: linking to the corresponding website server according to the link information to obtain the data of the at least one piece of presentation content, and presenting the at least one piece of presentation content according to the data.
In another possible embodiment, when the associated information is link information linking to the first device as described above, the second device may link to the first device according to the associated information, obtain the corresponding data, and present the at least one piece of presentation content. In one possible embodiment, after the link between the second device and the first device is established, the presentation is mirrored: when the content presented by either device changes, the content presented by the other device changes correspondingly, making it convenient for the user to switch between the two devices at any time.
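On the second-device side (steps S130 and S140), a matching sketch is shown below: it accepts the associated information over the same assumed TCP port, resolves the source named by the link information, and hands the link and presentation position to a placeholder presentation function that stands in for the device's actual presentation module.

```python
# Illustrative second-device sketch (S130/S140): receive the associated
# information, resolve the link, and present from the recorded position.
# The present_content body is a placeholder for the device's real player.
import json
import socket
import urllib.request

def present_content(link: str, position_seconds: float) -> None:
    # Placeholder: a real presentation module would open the stream in a
    # player and seek to position_seconds. Here we only verify the source.
    with urllib.request.urlopen(link, timeout=5) as resp:
        print("fetched", link, resp.status, "- would seek to", position_seconds, "s")

def serve_once(port: int = 9100) -> None:
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            info = json.loads(conn.makefile("r", encoding="utf-8").readline())
    if info.get("link"):
        present_content(info["link"], float(info.get("position_seconds") or 0.0))
```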
Those skilled in the art will understand that in the above method of the embodiments of the present application, the numbering of the steps does not imply an order of execution; the order of execution of the steps should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
As shown in Fig. 3, an embodiment of the present application provides an interactive device 300, comprising:
a feature acquisition module 310, configured to obtain at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
an information providing module 320, configured to provide, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
The device of the embodiments of the present application obtains associated information related to content presented by a first device and provides it to a second device, so that the second device can present the content presented by the first device according to the associated information. Presentation content can thus be shared even when no connection has been established between the first device and the second device in advance. This is particularly suitable for situations in which neither the first device nor the second device can be moved conveniently; it is easy to implement and improves the user experience.
The structure of each module of the device of the embodiments of the present application is further described below.
In the embodiments of the present application, the at least one machine-readable feature may be an image tag such as a barcode, a QR code, a character, a symbol, or a watermark; it may also be an audio tag such as a sound outside the range of human hearing (for example ultrasound), a sound embedded in a segment of audio content, or a sound within the audio content itself; it may further be a feature such as a radio-frequency signal, a visible-light signal, or an invisible-light signal.
In the embodiments of the present application, the at least one piece of presentation content presented by the first device is, for example: display content such as at least one web page, picture, or document window shown by a display device; audio content such as music or speech played by an audio playback device; or display content and audio content presented simultaneously, such as the audio-video content presented by a multimedia playback device.
In one possible embodiment, the at least one piece of presentation content may be a single piece of presentation content, for example the display content or audio content the user is currently watching or listening to, such as the frontmost window among the web page windows opened by the user, or the window the user is currently watching as determined from the detected gaze point of the user. In another possible embodiment, the at least one piece of presentation content may be multiple pieces of presentation content, for example the multiple display windows the user currently has open.
Generally, each piece of presentation content corresponds to one machine-readable feature; therefore, when multiple pieces of presentation content are involved, the at least one machine-readable feature is generally also a corresponding number of machine-readable features.
As shown in Fig. 4, in the embodiments of the present application the feature acquisition module 310 comprises:
a feature collection unit 311, configured to collect the at least one machine-readable feature.
Corresponding to the type of the at least one machine-readable feature, the feature collection unit 311 is, for example, one or more of the following: a camera, a microphone, an ultrasonic sensor, a radio-frequency receiver, a light sensor, and the like.
In one possible embodiment, the at least one piece of presentation content is at least one piece of display content. In this embodiment, the first device is a device with a display function, such as a television, a computer, or a mobile phone. The at least one machine-readable feature is at least one machine-readable image tag comprised in the at least one piece of display content.
Here, the machine-readable image tag may be embedded in the display content. For example, when the display content is a web page window, the machine-readable image tag may be a visible (for example, a QR code) or invisible (for example, a digital watermark) image tag embedded in a specific region or background portion of the web page window. The machine-readable image tag may be embedded in the web page window by a website server, or by the first device.
As shown in Fig. 4a, in this embodiment the feature collection unit 311 comprises:
an image feature collection subunit 3111, configured to collect, from the at least one piece of display content, an image containing the at least one machine-readable image tag.
The image feature collection subunit 3111 may be, for example, a camera, through which the image containing the machine-readable image tag is obtained.
In another possible embodiment, the at least one piece of presentation content is at least one piece of audio content. In this embodiment, the first device is a device with an audio playback function, such as a music player, a television, a computer, or a mobile phone. The at least one machine-readable feature is at least one machine-readable audio tag comprised in the at least one piece of audio content.
In one possible embodiment, the machine-readable audio tag is embedded in the audio content, for example as ultrasound or another sound that is hardly perceptible to human hearing embedded in the audio content, or as an audible sound inserted in a gap of the audio content; alternatively, the machine-readable audio tag may be a part of the audio content itself.
As shown in Fig. 4b, in this embodiment the feature collection unit 311 comprises:
a sound feature collection subunit 3112, configured to collect, from the audio content, a sound containing the at least one machine-readable audio tag.
In another possible embodiment, as shown in Fig. 4c, the at least one piece of presentation content comprises both the at least one piece of display content and the at least one piece of audio content, and the feature collection unit 311 may comprise both the image feature collection subunit 3111 and the sound feature collection subunit 3112, configured to collect the machine-readable features corresponding to these two kinds of content respectively.
Of course, in other possible embodiments of the present application, the feature collection unit 311 may also take other possible forms.
In one possible embodiment, the associated information of the at least one piece of presentation content comprises: link information of the at least one piece of presentation content.
Here, the link information of the at least one piece of presentation content is path information linking to the source of the at least one piece of presentation content. For example, when the presentation content is a web page window, the link information may be the web address of the web page window; alternatively, the link information may be path information for linking to the first device and copying the presentation content of the first device.
In another possible embodiment, the associated information of the at least one piece of presentation content may comprise: attribute information of the at least one piece of presentation content.
When no link information is available, link information of the at least one piece of presentation content can be obtained according to the attribute information, so that the second device can obtain the source of the at least one piece of presentation content according to the link information and perform the corresponding content presentation.
To further facilitate use, in one possible embodiment the associated information of the at least one piece of presentation content may further comprise: current presentation position information.
For example, when the presentation content is a piece of audio-video content that is 30 minutes long in total and is currently played to the 17th minute, the associated information may comprise the current presentation position information, so that the user can view the content on the second device directly from its presentation position on the first device without performing an extra fast-forward or rewind operation.
In one possible embodiment, the interactive device 300 further comprises a communication module 330, configured to establish a connection with the second device. The communication module 330 may be a conventional wired or wireless communication module; for ease of use, a wireless communication module is preferred. For example, by the connection means mentioned in the Chinese patent with publication number CN102595643A, the interactive device 300 obtains a machine-readable marker on the second device to establish a connection with the second device.
In the embodiments of the present application, the information providing module 320 may provide the associated information to the second device either by providing the associated information directly or by providing the machine-readable feature that comprises the associated information.
Accordingly, as shown in Fig. 5a, in one possible embodiment the information providing module 320 comprises:
a communication unit 321, configured to send the at least one machine-readable feature to the second device.
In this embodiment, the associated information of the at least one piece of presentation content is obtained by the second device from the at least one machine-readable feature.
As shown in Fig. 5b, in another possible embodiment the device 300 further comprises:
a feature processing module 340, configured to process the at least one machine-readable feature to obtain the associated information;
and the information providing module 320 comprises:
a communication unit 322, configured to send the associated information to the second device.
The processing of the machine-readable feature by the feature processing module 340 is, for example: decoding the at least one machine-readable feature to obtain the associated information. For example, when the machine-readable feature is a QR code, the QR code is decoded by a corresponding decoding method to obtain the associated information comprised in the QR code. As another example, when the machine-readable feature is a part of the audio content described above, pattern recognition may be performed on that part of the audio content to obtain the attribute information corresponding to the audio content.
It can be seen from the above that in the embodiments of the present application, the interactive device 300 does not necessarily have a communication connection with the first device.
In one possible embodiment, the interactive device 300 of the present application is a portable device, such as a mobile phone, a tablet computer, smart glasses, or a smart bracelet. A portable device is easy to move, and is therefore suitable for obtaining the machine-readable feature from the first device and then providing the corresponding associated information to the second device.
In another possible embodiment, as shown in Fig. 6, the interactive device 300 of the embodiments of the present application is a near-eye wearable device 600.
The near-eye wearable device 600 is a wearable device worn near the user's eyes, for example smart glasses or a smart helmet.
In one possible embodiment, the near-eye wearable device 600 may comprise a camera 610 whose shooting direction is consistent with the viewing direction of the user's eyes and which can capture images within the field of view of the human eye. Through this camera 610, the machine-readable image tag of the at least one piece of presentation content on the display device the user is watching can be obtained. In addition, the near-eye wearable device 600 may have a communication interface 620 for connecting to a second device and providing the corresponding associated information to it.
In addition, when the associated information needs to be sent directly to the second device, the near-eye wearable device 600 may further have a processor 630 configured, as the feature processing module, to process the corresponding machine-readable feature and obtain the associated information.
Those skilled in the art will appreciate that when the interactive device is the near-eye wearable device 600, the machine-readable feature can be obtained automatically by the camera 610 as soon as the user looks at the presentation content shown by the first device; and when the user needs the second device to present the presentation content, the user only has to look at the second device for the associated information corresponding to the presentation content to be provided to it, so that the second device can present the presentation content. This makes use very easy and natural, further improving the user experience.
One possible application scenario of the embodiments of the present application is:
A user wearing the near-eye wearable device 600 is viewing a picture on a mobile phone; when the user's eyes move from the mobile phone to a tablet computer, the picture is automatically displayed on the tablet computer for the user to view.
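This scenario can be sketched as a simple event loop on the wearable, reusing the capture and sending helpers from the earlier sketches; the gaze-detection and device-discovery helpers are entirely hypothetical, since the embodiments do not specify how the gaze target or the device address is determined.

```python
# Illustrative event loop for the near-eye wearable scenario. The helpers
# current_gaze_target() and device_address(target) are hypothetical stand-ins
# for gaze tracking and device discovery, which the embodiments leave open.
import time

def run_wearable_loop():
    cached_info = None
    last_target = None
    while True:
        target = current_gaze_target()          # e.g. "phone", "tablet", or None (hypothetical)
        if target != last_target:
            if target == "phone":
                payload = capture_image_tag()   # camera 610: grab the image tag (earlier sketch)
                if payload:
                    cached_info = extract_associated_info(payload)
            elif target == "tablet" and cached_info:
                # communication interface 620: hand the info to the second device (earlier sketch)
                send_to_second_device(cached_info, device_address(target))
            last_target = target
        time.sleep(0.2)
```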
As shown in Fig. 7, an embodiment of the present application provides a multi-device interaction system 700, comprising any one of the interactive devices 710 described with reference to Fig. 3 to Fig. 6, and a second device 720.
For the structure of the interactive device 710, refer to the description in the embodiments above; it is not repeated here.
The second device 720 comprises:
a communication module 721, configured to obtain, from the interactive device, associated information of at least one piece of presentation content presented by a first device; and
a presentation module 722, configured to present the at least one piece of presentation content according to the associated information.
The communication module 721 may be a wired or wireless communication module, with a wireless communication module being preferred.
The presentation module 722 may comprise a display module, a sound playback module, or both, for presenting the corresponding display content and audio content.
In one possible embodiment, when what the interactive device sends to the second device is the associated information, the communication module 721 is configured to:
receive the associated information.
As described above, in another possible embodiment, what the interactive device sends to the second device is the at least one machine-readable feature; in this case, the communication module 721 is configured to:
receive the at least one machine-readable feature from the interactive device.
As shown in Fig. 7a, in this embodiment the second device 720 further comprises:
a processing module 723, configured to process the at least one machine-readable feature to obtain the associated information.
For the specific method of processing the at least one machine-readable feature to obtain the associated information, refer to the corresponding description in the embodiments above.
As shown in Fig. 7b, in one possible embodiment, when the associated information is link information linking to an external source such as a website server as described above, the presentation module 722 comprises:
a communication unit 7221, configured to link to the corresponding website server according to the link information to obtain the data of the at least one piece of presentation content; and
a display unit 7222, configured to present the at least one piece of presentation content according to the data.
As shown in Fig. 7c, in another possible embodiment, when the associated information is link information linking to the first device as described above, the presentation module 722 comprises:
a communication unit 7223, configured to link to the first device according to the associated information and obtain the corresponding data; and
a display unit 7224, configured to present the at least one piece of presentation content according to the data.
In one possible embodiment, after the link between the second device and the first device is established, the presentation is mirrored: when the content presented by either device changes, the content presented by the other device changes correspondingly, making it convenient for the user to switch between the two devices at any time.
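One possible, purely illustrative way to realize such mirrored presentation is for the second device to poll the playback position reported by the first device and to re-seek when the two drift apart; the "/position" endpoint, the JSON shape, and the drift threshold below are assumptions, since the present application does not prescribe a synchronization protocol.

```python
# Illustrative mirroring sketch: poll the first device's reported position and
# correct local drift. The "/position" endpoint and JSON shape are assumptions.
import json
import time
import urllib.request

def mirror_playback(first_device_url: str, get_local_pos, seek_local,
                    max_drift_s: float = 2.0) -> None:
    """get_local_pos/seek_local are callables supplied by the second device's player."""
    while True:
        with urllib.request.urlopen(first_device_url + "/position", timeout=2) as resp:
            remote_pos = json.loads(resp.read()).get("position_seconds", 0.0)
        if abs(remote_pos - get_local_pos()) > max_drift_s:
            seek_local(remote_pos)  # re-align the second device's presentation
        time.sleep(1.0)
```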
Fig. 8 is a schematic structural diagram of another interactive device 800 provided by an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the interactive device 800. As shown in Fig. 8, the interactive device 800 may comprise:
a processor 810, a communications interface 820, a memory 830, and a communication bus 840, wherein:
the processor 810, the communications interface 820, and the memory 830 communicate with one another through the communication bus 840;
the communications interface 820 is used for communication with network elements such as a client;
the processor 810 is used to execute a program 832, and specifically may perform the relevant steps in the method embodiments above.
Specifically, the program 832 may comprise program code, and the program code comprises computer operation instructions.
The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 830 is used to store the program 832. The memory 830 may comprise high-speed RAM, and may also comprise non-volatile memory, for example at least one disk memory. The program 832 may specifically be used to cause the interactive device 800 to perform the following steps:
obtaining at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
providing, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
For the specific implementation of each step in the program 832, refer to the corresponding description of the corresponding steps and units in the embodiments above, which is not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may implement the described functions in different ways for each specific application, but such implementations should not be considered to go beyond the scope of the present application.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the present application, not to limit it. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present application; therefore, all equivalent technical solutions also fall within the scope of the present application, and the scope of patent protection of the present application shall be defined by the claims.

Claims (23)

1. A multi-device interaction method, characterized by comprising:
obtaining at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
providing, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
2. The method of claim 1, characterized in that obtaining the at least one machine-readable feature comprises:
collecting the at least one machine-readable feature.
3. The method of claim 1, characterized in that the at least one piece of presentation content comprises:
at least one piece of display content and/or at least one piece of audio content.
4. The method of claim 3, characterized in that the at least one machine-readable feature comprises:
at least one machine-readable image tag comprised in the at least one piece of display content.
5. The method of claim 4, characterized in that obtaining the at least one machine-readable feature comprises:
collecting, from the at least one piece of display content, an image containing the at least one machine-readable image tag.
6. The method of claim 3, characterized in that the at least one machine-readable feature comprises:
at least one machine-readable audio tag comprised in the at least one piece of audio content.
7. The method of claim 6, characterized in that obtaining the at least one machine-readable feature comprises:
collecting, from the audio content, a sound containing the at least one machine-readable audio tag.
8. The method of claim 1, characterized in that the associated information comprises at least one of the following:
link information, attribute information, and current presentation position information.
9. The method of claim 1, characterized in that providing the associated information to the second device comprises:
sending the at least one machine-readable feature to the second device.
10. The method of claim 1, characterized in that the method further comprises:
processing the at least one machine-readable feature to obtain the associated information;
and providing the associated information to the second device comprises:
sending the associated information to the second device.
11. The method of claim 1, characterized in that the method further comprises, before providing the associated information to the second device:
establishing a connection with the second device.
12. The method of claim 1, characterized in that the method further comprises, on the second device side:
obtaining the associated information; and
presenting the at least one piece of presentation content according to the associated information.
13. An interactive device, characterized by comprising:
a feature acquisition module, configured to obtain at least one machine-readable feature corresponding to at least one piece of presentation content presented by a first device, the at least one machine-readable feature comprising associated information of the at least one piece of presentation content; and
an information providing module, configured to provide, to a second device, the associated information used for causing the second device to present the at least one piece of presentation content.
14. The device of claim 13, characterized in that the feature acquisition module comprises:
a feature collection unit, configured to collect the at least one machine-readable feature.
15. The device of claim 13, characterized in that:
the at least one piece of presentation content comprises at least one piece of display content;
the at least one machine-readable feature comprises at least one machine-readable image tag comprised in the at least one piece of display content; and
the feature collection unit comprises:
an image feature collection subunit, configured to collect, from the at least one piece of display content, an image containing the at least one machine-readable image tag.
16. The device of claim 13 or 15, characterized in that:
the at least one piece of presentation content comprises at least one piece of audio content;
the at least one machine-readable feature comprises at least one machine-readable audio tag comprised in the at least one piece of audio content; and
the feature collection unit comprises:
a sound feature collection subunit, configured to collect, from the audio content, a sound containing the at least one machine-readable audio tag.
17. The device of claim 13, characterized in that the associated information comprises at least one of the following:
link information, attribute information, and current presentation position information.
18. The device of claim 13, characterized in that the information providing module comprises:
a communication unit, configured to send the at least one machine-readable feature to the second device.
19. The device of claim 13, characterized in that the device further comprises:
a feature processing module, configured to process the at least one machine-readable feature to obtain the associated information;
and the information providing module comprises:
a communication unit, configured to send the associated information to the second device.
20. The device of claim 13, characterized in that the device further comprises:
a communication module, configured to establish a connection with the second device.
21. The device of claim 13, characterized in that the device is a portable device.
22. The device of claim 13, characterized in that the device is a near-eye wearable device.
23. A multi-device interaction system, characterized by comprising the interactive device of any one of claims 13-22, and a second device;
wherein the second device comprises:
a communication module, configured to obtain, from the interactive device, associated information of at least one piece of presentation content presented by a first device; and
a presentation module, configured to present the at least one piece of presentation content according to the associated information.
CN201410255648.3A 2014-06-10 2014-06-10 Multi-device interaction method and multi-device interaction system Pending CN104021183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410255648.3A CN104021183A (en) 2014-06-10 2014-06-10 Multi-device interaction method and multi-device interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410255648.3A CN104021183A (en) 2014-06-10 2014-06-10 Multi-device interaction method and multi-device interaction system

Publications (1)

Publication Number Publication Date
CN104021183A true CN104021183A (en) 2014-09-03

Family

ID=51437937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410255648.3A Pending CN104021183A (en) 2014-06-10 2014-06-10 Multi-device interaction method and multi-device interaction system

Country Status (1)

Country Link
CN (1) CN104021183A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120217292A1 (en) * 2011-02-28 2012-08-30 Echostar Technologies L.L.C. Synching One or More Matrix Codes to Content Related to a Multimedia Presentation
CN103430565A (en) * 2011-02-28 2013-12-04 艾科星科技公司 Synching one or more matrix codes to content related to multimedia presentation
CN103024063A (en) * 2012-12-24 2013-04-03 腾讯科技(深圳)有限公司 Data sharing method, clients and data sharing system
CN103347079A (en) * 2013-07-08 2013-10-09 惠州Tcl移动通信有限公司 Method and mobile device for schedule event synchronization

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105610591A (en) * 2014-12-30 2016-05-25 Tcl集团股份有限公司 A system and a method for information sharing among multiple apparatuses
CN105610591B (en) * 2014-12-30 2021-06-22 Tcl科技集团股份有限公司 System and method for sharing information among multiple devices
CN104580493A (en) * 2015-01-21 2015-04-29 冯山泉 KTV remote interaction method and system
CN109963189A (en) * 2017-12-26 2019-07-02 三星电子株式会社 Electronic device and its method

Similar Documents

Publication Publication Date Title
CN109729420B (en) Picture processing method and device, mobile terminal and computer readable storage medium
US10235025B2 (en) Various systems and methods for expressing an opinion
CN103035134B (en) Image touch and talk playing system and mage touch and talk playing method
US9743119B2 (en) Video display system
CN105120325B (en) A kind of information transferring method and system
US9729864B2 (en) Camera based safety mechanisms for users of head mounted displays
CN107846629B (en) Method, device and server for recommending videos to users
US20130141313A1 (en) Wearable personal digital eyeglass device
CN107211180A (en) Spatial audio signal for the object with associated audio content is handled
WO2015031802A1 (en) Video display system
KR102238330B1 (en) Display device and operating method thereof
JP2016523014A (en) User-customized advertisement providing system based on sound signal output from TV, method for providing user-customized advertisement, and computer-readable recording medium recording MIM service program
CN102129636A (en) System and method for providing viewer identification-based advertising
CN104361847A (en) Advertising playing system and method capable of interacting through audio
CN105142000A (en) Information pushing method and system based on television playing content
CN103838374A (en) Message notification method and message notification device
CN109104619B (en) Image processing method and device for live broadcast
CN105894571B (en) Method and device for processing multimedia information
CN105117608A (en) Information interaction method and device
CN105721904B (en) The method of the content output of display device and control display device
CN104021183A (en) Multi-device interaction method and multi-device interaction system
CN107864272A (en) It is a kind of that the method and system of information, mobile terminal are obtained by a reading equipment
CN202976538U (en) Image touch-read playing system
US10268852B2 (en) Electronic device and reading method
CN104837046A (en) Multi-media file processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140903