WO2015032208A1 - Multi-device cooperation system, first device, second device, and cooperation method thereof - Google Patents

Multi-device cooperation system, first device, second device, and cooperation method thereof

Info

Publication number
WO2015032208A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
detection information
detection
collaboration
cooperation
Prior art date
Application number
PCT/CN2014/075920
Other languages
English (en)
French (fr)
Inventor
许阳坡
宋星光
刘欣
艾常权
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2015032208A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a multi-device cooperation system, a first device, a second device, and a cooperative method thereof.
  • Multi-device cooperation can provide synchronized playback for a shared viewing experience, combine several screens for a large-screen experience, or share pictures with synchronized on-screen display.
  • the first type achieves multi-device collaboration through screen collisions between devices, usually using contact sensors;
  • the second type obtains the target device's address or the information to be shared by photographing a QR code displayed on the target device;
  • the third type establishes a connection with the target device by moving one device toward it, thereby achieving multi-device cooperation;
  • the fourth type realizes multi-device cooperation through a touch-screen function, specifically by performing a pinch gesture across the screens of multiple devices.
  • the first method requires the screens of two devices to collide; with more than two devices, multiple collisions are needed, which is hard to operate, and colliding devices is not user-friendly;
  • the second method requires photographing the target device. If multiple devices collaborate, multiple shots are required, and a QR code must be used for each collaboration. For example, when a device shares its current content, such as a video being watched, with another device, it must interrupt the current content, bring up the QR code corresponding to the video, and let the other device photograph and decode it; this process is not simple enough, and must be repeated for each additional device;
  • the third method applies only to one-to-one collaboration; for one-to-many collaboration, the device must be moved toward each target device separately, so it is not convenient enough;
  • the fourth method likewise applies only to one-to-one collaboration; when multiple devices collaborate, for example to merge the screens of several devices, pinching repeatedly is not convenient enough.
  • the technical problem to be solved by the present invention is to provide a first device, a second device, a multi-device cooperation system, and cooperation methods for the first device and the second device that participate in multi-device cooperation. When multiple devices perform collaborative interaction, the user only needs to perform non-contact gestures near the devices, without touching them, so the operation is simple and user-friendly.
  • the first aspect provides a first device that participates in multi-device cooperation. The first device includes: a detecting module, configured to detect a non-contact gesture operation in the vicinity of the first device; a detection information generating module, configured to generate detection information of the first device according to the non-contact gesture operation in its vicinity; a detection information sending module, configured to send the detection information of the first device to a second device; a collaboration response receiving module, configured to receive collaboration response information sent by the second device, where the collaboration response information is generated by the second device according to the detection information of the first device; and a collaboration response processing module, configured to perform collaborative processing according to the collaboration response information.
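The module pipeline of this first aspect can be illustrated with a small Python sketch. All class names, method names, and the threshold-based gesture check below are hypothetical illustrations, not definitions from the patent:

```python
class FirstDevice:
    """Hypothetical sketch of the first device's module pipeline."""

    def __init__(self, transport):
        # transport stands in for the link to the second device.
        self.transport = transport

    def detect_gesture(self, sensor_samples):
        # Detecting module: a non-contact gesture is assumed here to be
        # any proximity reading above an illustrative threshold.
        return [s for s in sensor_samples if s > 0.5]

    def generate_detection_info(self, gesture_events):
        # Detection information generating module.
        if not gesture_events:
            return None
        return {"device": "first", "events": len(gesture_events)}

    def process_response(self, response):
        # Collaboration response processing module.
        return f"cooperating: {response['action']}"

    def run_once(self, sensor_samples):
        info = self.generate_detection_info(self.detect_gesture(sensor_samples))
        if info is None:
            return None
        # Detection information sending module + collaboration response
        # receiving module, collapsed into one call for the sketch.
        response = self.transport(info)
        return self.process_response(response)


# A stub second device that always answers with a positioning action.
device = FirstDevice(transport=lambda info: {"action": "positioning"})
print(device.run_once([0.1, 0.9, 0.8]))  # -> cooperating: positioning
```

The sketch collapses the send/receive modules into a single `transport` callback; in a real system that round trip would go over a network link.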
  • the non-contact gesture operation in the vicinity of the first device includes at least one of the following: waving a hand or other object past the vicinity of the first device one or more times starting from a predetermined direction; waving a hand or other object past the vicinity of the first device one or more times from a predetermined direction and then past it one or more times in the opposite direction; waving a hand or other object past the vicinity of at least two first devices one or more times; and waving a hand or other object so that it stays near the first device one or more times.
  • the detection information generating module further includes a determining unit and a detection information generating unit, wherein the determining unit is configured to determine whether the non-contact gesture operation in the vicinity of the first device conforms to a preset first device detection information generation condition, and when the result of the determination is YES, the detection information generating unit generates the detection information of the first device.
  • the first device detection information generation condition includes at least one of the following conditions: a preset cooperation mode switching process, a cooperation mode switching condition, a signal mode, a signal change mode, a mode in which an object is detected approaching or moving away, a gesture change mode, and an image change mode.
  • the detection information of the first device includes at least one of the following: signal information corresponding to the signal mode; signal change information corresponding to the signal change mode; object-approach or object-departure information corresponding to the mode in which an object is detected approaching or moving away; and cooperation mode switching information corresponding to the cooperation mode switching process and the cooperation mode switching condition.
  • the collaboration response information includes first collaboration response information or second collaboration response information, where: the first collaboration response information includes positioning information, sharing information, screen-combining information, split-screen information, screen-size switching information, or information for switching the played or displayed content, and if the first device receives the first collaboration response information, it accordingly performs positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content; the second collaboration response information includes mismatch information, or an instruction to correct the current cooperation mode, initialize the detection module, or restore the detection module to a predetermined state, and if the first device receives the second collaboration response information, it corrects the current cooperation mode, initializes the detection module, or restores the detection module to a predetermined state.
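As a rough illustration of how a first device might dispatch on the two response categories just described, here is a hedged Python sketch; the field names `kind` and `action` and the action vocabulary are invented for illustration:

```python
# Hypothetical action vocabulary for first collaboration responses.
FIRST_RESPONSE_ACTIONS = {"positioning", "sharing", "combine_screen",
                          "split_screen", "switch_size", "switch_content"}

def handle_collaboration_response(response):
    """Return the action a first device would take for a response dict."""
    if response["kind"] == "first":
        # First collaboration response: carry out the requested cooperation.
        action = response["action"]
        if action not in FIRST_RESPONSE_ACTIONS:
            raise ValueError(f"unknown action: {action}")
        return f"perform {action}"
    # Second collaboration response: mismatch, so recover to a known state.
    return "correct mode / reinitialize detection module"

print(handle_collaboration_response({"kind": "first", "action": "split_screen"}))
print(handle_collaboration_response({"kind": "second"}))
```

The second branch deliberately ignores the payload: per the description above, any second collaboration response means the gesture comparison failed and the device should reset its detection state.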
  • the second aspect provides a second device that participates in multi-device cooperation. The second device includes: a detection information receiving module, configured to receive detection information of at least one first device, where the detection information of a first device is generated by that first device according to a non-contact gesture operation in its vicinity; a collaboration response generation module, configured to generate collaboration response information according to the detection information of the at least one first device; and a collaboration response sending module, configured to send the collaboration response information to the at least one first device.
  • the collaboration response generation module further generates collaboration response information according to the detection information of the at least two first devices.
  • the second device further includes: a detecting module, configured to detect a non-contact gesture operation in the vicinity of the second device; and a detection information generating module, configured to generate detection information of the second device according to the non-contact gesture operation in its vicinity.
  • the collaboration response generation module further generates collaboration response information according to the detection information of the second device and the detection information of the at least one first device;
  • the second device further includes: a collaboration response processing module, configured to perform collaborative processing according to the collaboration response information.
  • the collaboration response generation module includes a comparison unit and a collaboration response generation unit, where the collaboration response information includes the first collaboration response information or the second collaboration response information. The comparison unit compares whether the detection information of at least two first devices, or the detection information of the second device and the detection information of at least one first device, satisfies a preset matching condition. When the comparison result satisfies the preset matching condition, the collaboration response generation unit generates the first collaboration response information; when the comparison result does not satisfy the matching condition, the collaboration response generation unit generates the second collaboration response information or does not generate collaboration response information.
  • the matching condition includes a time matching condition, where the time matching condition includes whether the detection information of the first device and the detection information of the second device occur within the same period of time, or whether they occur in a particular time order or within a particular time difference.
  • the matching condition includes a mode matching condition, where the mode matching condition is whether the pattern of the detection information of the first device matches the pattern of the detection information of the second device.
  • the first collaboration response information includes positioning information, sharing information, screen-combining information, split-screen information, screen-size switching information, or information for switching the played or displayed content;
  • the second collaboration response information includes mismatch information, instructing the first device to correct the current cooperation mode, initialize the detection module, or restore the detection module to a predetermined state.
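The comparison unit and the two response outcomes can be sketched as follows; the dict fields and the toy `same_pattern` matching condition are illustrative assumptions, not the patent's definitions:

```python
def generate_collaboration_response(detections, matcher):
    """Hypothetical comparison unit of the second device.

    detections: list of detection-info dicts from two or more devices.
    matcher: predicate deciding whether the set satisfies the preset
    matching condition.
    """
    if len(detections) < 2:
        return None  # nothing to compare yet
    if matcher(detections):
        # Matching condition satisfied: first collaboration response.
        return {"kind": "first", "action": "combine_screen"}
    # Matching condition not satisfied: second collaboration response.
    return {"kind": "second", "reason": "mismatch"}

# Toy matching condition: all detections report the same gesture pattern.
same_pattern = lambda ds: len({d["pattern"] for d in ds}) == 1

resp = generate_collaboration_response(
    [{"pattern": "wave"}, {"pattern": "wave"}], same_pattern)
print(resp["kind"])  # -> first
```

Passing the matcher in as a parameter mirrors the text's point that the matching condition (time-based, mode-based, or both) is configurable rather than fixed.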
  • a third aspect provides a multi-device collaboration system, the system comprising a second device and at least one first device, wherein the first device is the first device according to the first aspect or any one of the first to fifth possible implementations of the first aspect, and the second device is the second device according to the second aspect or any one of the first to seventh possible implementations of the second aspect.
  • a fourth aspect provides a cooperation method of a first device, where the method includes the following steps: detecting a non-contact gesture operation in the vicinity of the first device; generating detection information of the first device according to the non-contact gesture operation in the vicinity of the first device; sending the detection information of the first device to the second device; receiving collaboration response information sent by the second device, where the collaboration response information is generated by the second device according to the detection information of the first device; and performing collaborative processing according to the collaboration response information.
  • the step of generating the detection information of the first device according to the non-contact gesture operation in the vicinity of the first device further includes: determining whether the non-contact gesture operation in the vicinity of the first device conforms to the preset first device detection information generation condition; when the result of the determination is YES, generating the detection information of the first device; and when the result of the determination is NO, not generating the detection information of the first device.
  • a fifth aspect provides a cooperation method of a second device, the method comprising the steps of: receiving detection information of at least one first device, wherein the detection information of a first device is generated by that first device according to a non-contact gesture operation in its vicinity; generating collaboration response information according to the detection information of the at least one first device; and sending the collaboration response information to the at least one first device.
  • the step of generating the collaboration response information according to the detection information of the at least one first device comprises: generating the collaboration response information according to the detection information of the at least two first devices.
  • the step of generating the collaboration response information according to the detection information of the at least one first device includes: detecting a non-contact gesture operation in the vicinity of the second device; and generating detection information of the second device according to the non-contact gesture operation in its vicinity.
  • the step of generating the collaboration response information according to the detection information of the at least one first device further includes: generating the collaboration response information according to the detection information of the second device and the detection information of the at least one first device; and the step of sending the collaboration response information to the at least one first device comprises: the second device and the at least one first device performing collaborative processing according to the collaboration response information.
  • the collaboration response information includes the first collaboration response information or the second collaboration response information, wherein the step of generating the collaboration response information according to the detection information of the at least one first device further includes: comparing, by the second device, whether the detection information of at least two first devices, or the detection information of the second device and the detection information of at least one first device, satisfies a preset matching condition; when the comparison result satisfies the preset matching condition, generating the first collaboration response information; and when the comparison result does not satisfy the preset matching condition, generating the second collaboration response information or not generating collaboration response information.
  • the matching condition includes a time matching condition, where the time matching condition includes whether the detection information of the first device and the detection information of the second device occur within the same period of time, or whether they occur in a particular time order or within a particular time difference.
  • the matching condition includes a mode matching condition, where the mode matching condition is whether the pattern of the detection information of the first device matches the pattern of the detection information of the second device.
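A minimal sketch of the time matching condition, assuming timestamps in seconds; the window size and skew limit are illustrative values, not taken from the patent:

```python
def time_match(t_first, t_second, window=0.5, max_skew=None):
    """Hypothetical time matching condition.

    Returns True if the two detection timestamps (in seconds) fall within
    the same time window. If max_skew is given, additionally requires the
    first device's detection to precede the second device's by no more
    than max_skew seconds (the 'sequence of time / time difference' case).
    """
    if max_skew is not None:
        return 0 <= (t_second - t_first) <= max_skew
    return abs(t_first - t_second) <= window

print(time_match(10.00, 10.30))              # same 0.5 s window
print(time_match(10.00, 11.30))              # too far apart
print(time_match(10.0, 10.2, max_skew=0.3))  # ordered within allowed skew
```

Note that this assumes the two devices share a clock; the figure list below mentions a structure for eliminating the clock error between two devices, which such a predicate would depend on.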
  • in the first device of the present invention, the detecting module detects a non-contact gesture operation in the vicinity of the first device; the detection information generating module generates the detection information of the first device according to that non-contact gesture operation; the detection information sending module sends the detection information of the first device to the second device; the collaboration response receiving module receives the collaboration response information sent by the second device; and the collaboration response processing module performs collaborative processing according to the collaboration response information.
  • the invention thus only requires a non-contact gesture operation in the vicinity of the first device, without touching the devices involved, and the operation is simple and user-friendly.
  • FIG. 1 is a schematic diagram of a logical structure of a multi-device cooperation system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a logical structure of a first device participating in multi-device cooperation according to an embodiment of the present invention
  • FIG. 3 is another schematic structural diagram of a first device participating in multi-device cooperation according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a contactless gesture according to an embodiment of the present invention.
  • FIG. 5 is another schematic structural diagram of a contactless gesture according to an embodiment of the present invention.
  • FIG. 6 is still another schematic structural diagram of a contactless gesture according to an embodiment of the present invention.
  • FIG. 7 is still another schematic structural diagram of a contactless gesture according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of still another logical structure of a first device participating in multi-device cooperation according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a logical structure for eliminating a clock error between two devices according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of still another logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • FIG. 13 is still another logical structure diagram of a second device participating in multi-device cooperation according to an embodiment of the present invention;
  • FIG. 14 is a flowchart of a multi-device cooperation method according to an embodiment of the present invention.
  • FIG. 15 is a flowchart of a method for cooperation of a first device according to an embodiment of the present invention.
  • FIG. 16 is another flowchart of a method for cooperation of a first device according to an embodiment of the present invention.
  • FIG. 17 is a flowchart of a method for cooperation of a second device according to an embodiment of the present invention.
  • FIG. 18 is another flowchart of a method for cooperation of a second device according to an embodiment of the present invention.
  • FIG. 19 is still another flowchart of a cooperation method of a second device according to an embodiment of the present invention.
  • FIG. 20 is still another flowchart of a method for cooperation of a second device according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a logical structure of a multi-device cooperation system according to an embodiment of the present invention.
  • the multi-device collaboration system 10 of the present invention includes at least one first device 11 and a second device 12.
  • in this embodiment there are a plurality of first devices 11, and the second device 12 is configured to control the plurality of first devices 11 to perform cooperative processing.
  • the plurality of first devices 11 detect non-contact gesture operations in their vicinity, generate the detection information of the corresponding first device 11 when a non-contact gesture operation conforms to the preset first device detection information generation condition, and send the detection information to the second device 12. The second device 12 generates collaboration response information according to the detection information of the plurality of first devices 11 and sends it to the corresponding first devices 11, which then perform collaborative processing based on their respective collaboration response information.
  • the second device 12 controls the plurality of first devices 11 to perform collaborative processing, and may also participate in cooperative processing.
  • the second device 12 can also detect a non-contact gesture operation in its vicinity and generate detection information of the second device 12 when the non-contact gesture operation conforms to the preset second device detection information generation condition. It then generates the collaboration response information according to its own detection information and the detection information of the plurality of first devices 11, and sends the collaboration response information to the corresponding first devices 11; the plurality of first devices 11 and the second device 12 each perform collaborative processing according to their respective collaboration response information.
  • when multiple devices perform cooperative interaction, the user only needs to perform a non-contact gesture operation in the vicinity of the first devices 11, or of the first devices 11 and the second device 12, without touching them, so the operation is easy and user-friendly.
  • the first device 11 and the second device 12 of the present invention are classified only by function: each device in the multi-device cooperation system 10 can have the functions of both the first device 11 and the second device 12, and when performing a cooperative response can act as either the first device 11 or the second device 12 as the situation requires.
  • FIG. 2 is a schematic diagram of a logical structure of a first device participating in multi-device cooperation according to an embodiment of the present invention.
  • the first device 11 of the present invention includes:
  • the detecting module 110 is configured to detect a contactless gesture operation in the vicinity of the first device 11;
  • the detection information generating module 111 is configured to generate detection information of the first device 11 according to a non-contact gesture operation in the vicinity of the first device 11;
  • the detection information sending module 112 is configured to send the detection information of the first device 11 to the second device 12;
  • the collaboration response receiving module 113 is configured to receive the collaboration response information sent by the second device 12, where the collaboration response information is generated by the second device 12 according to the detection information of the first device 11;
  • the collaboration response processing module 114 is configured to perform collaborative processing according to the collaboration response information.
  • the first device 11 generates the detection information of the first device 11 according to the non-contact gesture operation, sends the detection information to the second device 12, receives the collaboration response information sent by the second device 12, and finally performs collaborative processing according to the collaboration response information. Therefore, when multiple devices perform cooperative interaction, only a non-contact gesture operation near the first device 11 is needed; the first device 11 does not need to be touched, and the operation is simple and user-friendly.
  • the embodiment of the present invention further provides another logical structure diagram of the first device 11, which is described in detail based on the first device 11 provided in the foregoing embodiment.
  • the detection information generating module 111 of the first device 11 further includes a determining unit 1110 and a detection information generating unit 1111, wherein the determining unit 1110 is configured to determine whether the non-contact gesture operation in the vicinity of the first device 11 conforms to the preset first device detection information generation condition, and when the result of the determination is YES, the detection information generating unit 1111 generates the detection information of the first device 11.
  • the non-contact gesture operation in the vicinity of the first device 11 includes at least one of the following operations: waving a hand or other object past the vicinity of the first device 11 one or more times from a predetermined direction; waving a hand or other object past the vicinity of the first device 11 one or more times from a preset direction and then past it one or more times in the opposite direction; waving a hand or other object past the vicinity of at least two first devices 11 one or more times; and waving a hand or other object so that it stays near the first device 11 one or more times.
  • the waving hand passes through the vicinity of the first device 11 once from the right;
  • the waving hand from the right passes through the vicinity of the first device 11 once (shown in FIG. 5.1) and then passes through the vicinity of the first device 11 in the opposite direction (shown in FIG. 5.2);
  • the waving hand passes through the vicinity of at least two first devices 11 at the same time;
  • the waving hand stays near the first device 11 once.
  • the non-contact gesture in the vicinity of the first device 11 of the present invention may further include hand motions such as flipping or circling over the first device 11; any preset non-contact gesture that the first device 11 can recognize falls within the scope of protection of the present invention and is not described further herein.
  • the first device detection information generation condition includes at least one of the following conditions: a preset cooperation mode switching process, a cooperation mode switching condition, a signal mode, a signal change mode, a mode in which an object is detected approaching or moving away, a gesture change mode, and an image change mode.
  • the cooperation mode switching process includes: first positioning, then sharing, then screen combining, and then screen splitting.
  • the cooperation mode switching condition includes: in the positioning state, if the first device 11 detects that an object approaches twice within a predetermined time, or detects that an object approaches and then leaves twice in succession within a predetermined time, the positioning condition is determined to be satisfied; in the screen-combined state, if the first device 11 detects that an object approaches, or detects that an object approaches and then leaves, the split-screen condition is determined to be satisfied.
  • the first device 11 includes both the cooperative mode switching process and the cooperative mode switching condition, the two are combined as the first device detection information generating condition.
  • the first device 11 performs cooperative processing according to the cooperation mode switching process.
  • take the contents of the cooperation mode switching process enumerated above as an example.
  • the first device 11 must first meet the positioning condition before the detection information of the corresponding first device 11 can be generated; next, the sharing condition must be met to generate the corresponding detection information of the first device 11; and so on, the screen-combining and screen-splitting conditions must be met in turn to generate the corresponding detection information of the first device 11. It is worth noting that the first device 11 must cooperate in sequence according to the cooperation mode switching process.
  • if the switching condition that is satisfied is not the condition of the next cooperation mode, the first device 11 continues the current cooperation and does not generate the corresponding detection information of the first device 11. For example, if the first device 11 satisfies the screen-splitting switching condition while the sharing cooperation is in progress, the first device 11 does not generate the corresponding detection information; the sharing cooperation continues until the screen-combining switching condition is met, at which point the corresponding detection information of the first device 11 is generated.
  • if only the cooperation mode switching condition is used as the first device detection information generation condition, the detection information of the corresponding first device 11 may be generated whenever the cooperation mode switching condition is satisfied.
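The fixed switching order described above behaves like a small state machine: a satisfied switching condition is acted on only when it corresponds to the next mode in the preset sequence. A hypothetical Python sketch, using the example mode order given earlier (names are illustrative):

```python
SWITCH_ORDER = ["positioning", "sharing", "combine_screen", "split_screen"]

class CooperationMode:
    """Hypothetical state machine for the cooperation mode switching process."""

    def __init__(self):
        self.index = 0  # currently in SWITCH_ORDER[0]

    @property
    def current(self):
        return SWITCH_ORDER[self.index]

    def on_condition(self, satisfied_mode):
        """Generate detection info only if the satisfied switching condition
        is the next mode in the preset order; otherwise keep cooperating."""
        nxt = self.index + 1
        if nxt < len(SWITCH_ORDER) and SWITCH_ORDER[nxt] == satisfied_mode:
            self.index = nxt
            return {"switch_to": satisfied_mode}  # detection info generated
        return None  # out-of-order condition is ignored

mode = CooperationMode()
print(mode.on_condition("split_screen"))  # out of order while positioning
print(mode.on_condition("sharing"))       # next in order, so info is generated
```

This mirrors the example in the text: a split-screen condition satisfied during sharing is ignored, and the current cooperation simply continues.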
  • the signal mode includes: detecting that the signal is less than or equal to a preset threshold, or detecting that the signal is greater than or equal to a preset threshold.
  • the signal change pattern includes: detecting that the signal is changed from small to large, or detecting that the signal is changed from large to small, or detecting that the signal is changed from one range to another, or a combination thereof.
  • the mode in which the object approaches or moves away includes: detecting that the object approaches, or detecting that the object moves away, or detecting that the object stays near for a period of time, or detecting that the object has been away for a period of time, or a combination thereof.
  • gesture change modes include: a gesture changing from flipping to looping, a gesture changing from a clenched fist to an open palm, or a combination thereof.
  • the image change mode includes: a change or motion of an object in the image, for example, the object in the image changing from large to small or from small to large.
  • the first device detection information generation condition is not limited to the modes listed above, nor are the modes limited to the contents listed above; those skilled in the art may enumerate other modes according to the situation, which is not limited herein.
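  • As a minimal sketch of the signal mode and signal change mode above (the threshold value, sample format, and label names are illustrative assumptions, not specified by the description):

```python
def classify_signal(samples, threshold=0.5):
    """Classify a sequence of sensor readings against the modes listed
    above: signal below a preset threshold, signal changing from small
    to large, or signal changing from large to small."""
    below = samples[-1] <= threshold      # signal mode: <= threshold
    rising = samples[-1] > samples[0]     # change mode: small to large
    falling = samples[-1] < samples[0]    # change mode: large to small
    if rising:
        return "small_to_large"
    if falling:
        return "large_to_small"
    return "below_threshold" if below else "steady"
```

A proximity sensor sweeping past an object would typically produce a rising-then-falling sequence, which this sketch would classify pass by pass.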
  • the detection information of the first device 11 includes at least one of the following: signal information corresponding to the signal mode, signal change information corresponding to the signal change mode, object approach or departure information corresponding to the mode in which the object approaches or moves away, and cooperation mode switching information corresponding to the cooperation mode switching process and the cooperation mode switching condition. Further, the detection information of the first device 11 may also include the current cooperation state of the first device 11, and the detection information sending module 112 may send the current cooperation state of the first device 11 to the second device 12.
  • the detection information generation conditions of the first device 11 and the detection information of the first device 11 may be in one-to-one correspondence; because the detection information generation conditions of the first device 11 are not limited to the modes listed above, the detection information of the first device 11 is likewise not limited to the detection information listed above.
  • the collaboration response information includes first collaboration response information or second collaboration response information. The first collaboration response information includes positioning information, sharing information, combined-screen information, split-screen information, screen-size switching information, or information for switching the played or displayed content. If the first device 11 receives the first collaboration response information, it performs positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content according to the first collaboration response information.
  • sharing includes sharing one's own content with other devices or obtaining shared content from other devices.
  • the second collaboration response information includes mismatch information, or an instruction to correct the current cooperation mode, to initialize the detection module, or to restore the detection module to a predetermined state; if the first device 11 receives the second collaboration response information, it corrects the current cooperation mode, initializes the detection module 110, or restores the detection module 110 to a predetermined state according to the second collaboration response information.
  • the first collaboration response information and the second collaboration response information are not limited to the above enumerated content, and those skilled in the art may list more other content according to the situation, which is not limited herein.
  • in summary, the first device 11 detects a non-contact gesture operation in its vicinity, generates the detection information of the first device 11 when the non-contact gesture operation conforms to the first device detection information generation condition, sends the detection information to the second device 12, then receives the collaboration response information sent by the second device 12, and finally performs cooperative processing according to the collaboration response information. Therefore, during multi-device cooperative interaction, the user only needs to perform a non-contact gesture operation near the first device 11 without touching it; the operation is simple and user-friendly.
  • FIG. 8 is a schematic diagram of still another logical structure of a first device participating in multi-device cooperation according to an embodiment of the present invention.
  • the first device 11 includes a sensor 1100, a processor 1101, a transmitter 1102, a receiver 1103, and a memory 1104.
  • the sensor 1100, the processor 1101, the transmitter 1102, the receiver 1103, and the memory 1104 are connected by a bus system 1105.
  • the sensor 1100 is configured to detect a contactless gesture operation in the vicinity of the first device 11; and generate detection information of the first device 11 according to a contactless gesture operation in the vicinity of the first device 11.
  • the sensor 1100 can also be replaced by a camera.
  • the transmitter 1102 is configured to send the detection information of the first device 11 to the second device 12.
  • the receiver 1103 is configured to receive the collaboration response information sent by the second device 12.
  • the memory 1104 is configured to store instructions that cause the processor 1101 to perform cooperative processing according to the collaboration response information.
  • the processor 1101 performs processing such as positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content according to the collaboration response information. The processor 1101 may also be referred to as a central processing unit (Central Processing Unit, CPU).
  • Memory 1104 can include read only memory and random access memory and provides instructions and data to processor 1101. A portion of the memory 1104 may also include non-volatile random access memory (NVRAM).
  • receiver 1103 and transmitter 1102 can be coupled to antenna 1106.
  • the various components of the first device 11 are coupled together by a bus system 1105, which may include, in addition to the data bus, a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, various buses are labeled as bus system 1105 in the figure.
  • the foregoing embodiment of the present invention discloses that the cooperative response processing module of the first device 11 can be implemented by the processor 1101.
  • the processor 1101 may be an integrated circuit chip with signal processing capabilities.
  • the cooperative response processing module of the first device 11 may be completed by an integrated logic circuit of hardware in the processor 1101 or an instruction in a form of software.
  • the processor 1101 described above may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor may implement or carry out the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 1104, and the processor 1101 reads the information in the memory 1104 and performs the functions of the cooperative response processing module of the first device 11 in conjunction with its hardware.
  • the sensor 1100 further determines whether the non-contact gesture operation in the vicinity of the first device 11 conforms to the preset first device detection information generation condition; when the result of the determination is yes, the generating unit generates the detection information of the first device 11.
  • FIG. 9 is a schematic diagram of a logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • the second device 12 includes:
  • the detection information receiving module 121 is configured to receive the detection information of the at least one first device 11, wherein the detection information of the first device 11 is generated by the first device 11 according to a non-contact gesture operation in the vicinity thereof;
  • the collaboration response generation module 122 is configured to generate collaboration response information according to the detection information of the at least one first device 11;
  • the collaboration response sending module 123 is configured to send the collaboration response information to the at least one first device 11.
  • in summary, the second device 12 receives the detection information of the at least one first device 11, generates corresponding collaboration response information, and sends it to the at least one first device 11. Therefore, during multi-device cooperative interaction, the user only needs to perform a non-contact gesture operation near the at least one first device 11, without touching the first device 11 or the second device 12; the operation is simple and user-friendly.
  • the embodiment of the present invention further provides another logical structure diagram of a second device participating in multi-device cooperation, which is described in detail based on the second device 12 provided in the foregoing embodiment. As shown in Figure 10.
  • when the second device 12 controls the first device 11 to cooperate, the second device 12 itself may not participate in the collaboration response; if it also participates in the collaboration response, the second device 12 further includes:
  • the detecting module 125 is configured to detect a contactless gesture operation in the vicinity of the second device 12.
  • the non-contact gesture operation in the vicinity of the second device 12 includes at least one of: waving a hand or other object past the vicinity of the second device 12 one or more times starting from a preset direction; waving a hand or other object past the second device 12 one or more times starting from a preset direction and then past the vicinity of the second device 12 one or more times in the opposite direction; waving a hand or other object past the second device 12 and the at least one first device 11 once or multiple times; or letting a waving hand or other object stay near the second device 12 one or more times.
  • the non-contact gesture in the vicinity of the second device 12 may also include other motions of a hand or object; any preset contactless gesture that the second device 12 can recognize falls within the protection scope of the present invention and is not repeated here.
  • the detection information generating module 126 is configured to generate detection information of the second device 12 according to the non-contact gesture operation in the vicinity of the second device 12.
  • the detection information generating module 126 further includes a determining unit 1260 and a detection information generating unit 1261, wherein the determining unit 1260 is configured to determine whether the non-contact gesture operation in the vicinity of the second device 12 conforms to the preset second device detection information generation condition; when the result of the determination is yes, the detection information generating unit 1261 generates the detection information of the second device 12.
  • the second device detection information generation condition may also include at least one of the following: a preset cooperation mode switching process, a cooperation mode switching condition, a signal mode, a signal change mode, a mode in which an object is detected to approach or move away, a gesture change mode, and an image change mode; the content of the second device detection information generation condition is the same as that of the first device detection information generation condition described above.
  • the second device detection information generation condition is not limited to the above-listed modes, and those skilled in the art may list more other modes according to the situation, which is not limited herein.
  • the criteria for the second device detection information generation condition and the first device detection information generation condition may be different.
  • for example, the first device 11 needs to detect that an object passes through its vicinity and then passes through its vicinity again in the opposite direction to meet the shared first device detection information generation condition, whereas the second device 12 only needs to detect that the object passes through its vicinity to meet the shared second device detection information generation condition.
  • the content of the detection information of the second device 12 is the same as that of the first device 11, that is, the detection information of the second device 12 includes at least one of the following: signal information corresponding to the signal mode, signal change information corresponding to the signal change mode, object approach or departure information corresponding to the mode in which the object approaches or moves away, or cooperation mode switching information corresponding to the cooperation mode switching process and the cooperation mode switching condition.
  • the collaboration response generation module 122 further generates collaboration response information according to the detection information of the second device 12 and the detection information of the at least one first device 11.
  • the detection information receiving module 121 can also receive the current collaboration state of the at least one first device 11, so that the collaboration response generation module 122 generates the collaboration response information more flexibly and reliably.
  • the second device 12 further includes a collaboration response processing module 124 for performing cooperative processing according to the collaboration response information.
  • the collaboration response generation module 122 includes a comparison unit 1221 and a collaboration response generation unit 1222, and the collaboration response information includes first collaboration response information or second collaboration response information.
  • the comparison unit 1221 compares whether the detection information of the second device 12 and the detection information of the at least one first device 11 meet a preset matching condition; when the comparison result satisfies the matching condition, the cooperation response generation unit 1222 generates the first collaboration response information, and when it does not, the cooperation response generation unit 1222 generates the second collaboration response information or does not generate collaboration response information.
  • the first collaboration response information includes positioning information, sharing information, combined-screen information, split-screen information, screen-size switching information, and information for switching the played or displayed content.
  • the second collaboration response information includes mismatch information, or instructs the first device 11 to correct the current cooperation mode, initialize the detection module, or restore the detection module to a predetermined state.
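  • A hedged sketch of how the comparison unit 1221 and the cooperation response generation unit 1222 could interact (the matching predicate is supplied by the caller, and the response payloads are illustrative assumptions, not the patent's wire format):

```python
def generate_collaboration_response(second_info, first_infos, matches):
    """Sketch of comparison unit 1221 + generation unit 1222: compare the
    second device's detection information against each first device's,
    then emit a first or second collaboration response accordingly.
    `matches` is a predicate standing in for the preset matching
    condition (time matching and pattern matching)."""
    if all(matches(second_info, info) for info in first_infos):
        # first collaboration response: e.g. positioning, sharing,
        # combined screen, split screen, screen size, or content switch
        return {"type": "first", "action": "share"}
    # second collaboration response: mismatch / correct mode / reset
    return {"type": "second", "action": "mismatch"}
```

In practice the predicate would combine the time and pattern matching conditions described below.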
  • the matching condition includes a time matching condition and a pattern matching condition.
  • the time matching condition includes whether the detection information of the first device 11 and the detection information of the second device 12 occur within the same period of time, or whether there is a time sequence or a time difference between them. It is mainly used to determine whether the two pieces of detection information were generated by non-contact gesture operations within the same period of time: for example, the receiving time of the first device's detection information is obtained, or the generation time is extracted from the detection information of the first device 11, the generation time is extracted from the detection information of the second device 12, and the two are checked for falling within the same period of time to avoid false detection or missed detection. The length of this period can be set according to experience or after repeated testing.
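  • A minimal sketch of the same-period check (the 0.5 s window is an illustrative value; as the description notes, it would in practice be set empirically):

```python
def in_same_period(t_first, t_second, window=0.5):
    """Return True if two detection timestamps (in seconds) fall within
    the same period of time, i.e. their difference is within the window.
    Used to decide whether two pieces of detection information were
    produced by the same non-contact gesture operation."""
    return abs(t_first - t_second) <= window
```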
  • because the times at which the two devices detect the gesture differ, suppose the detection times of a first non-contact gesture operation passing the first device 11 and the second device 12 are T1 and T2, respectively. If the clocks of the first device 11 and the second device 12 are calibrated, the two times can be obtained from the detection information of the first device 11 and the second device 12 and compared: if it is agreed in advance that the non-contact gesture operation is performed from right to left, the relative orientation of the devices can be identified; conversely, if the relative orientation of the devices is known, the direction and path of the non-contact gesture operation can be identified.
  • if the clocks are not calibrated, the time difference can be used to judge instead. Specifically, if a hand or other object is swept from a certain direction past the first device 11 and the second device 12, and then swept back past the second device 12 and the first device 11 in sequence, each device will detect two object approaches. As shown in FIG. 11, the time difference between the two approaches detected by the first device 11 is T3 - T2, and the time difference detected by the second device 12 is T4 - T1; this eliminates the clock error between the two devices.
  • if the non-contact gesture operation moves from right to left, then if (T3 - T2) > (T4 - T1), the first device 11 is on the right side of the second device 12; if (T3 - T2) < (T4 - T1), the first device 11 is on the left side of the second device 12. Positioning can thus be performed by comparing the time differences in the detection information of the first device 11 and the second device 12.
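  • The comparison above can be sketched as follows. This is a hypothetical helper; the function name and return strings are assumptions, but the arithmetic follows FIG. 11 directly. Each device compares only its own two timestamps, which is why unsynchronized clocks do not matter:

```python
def relative_position(t1, t2, t3, t4):
    """Infer relative device positions from per-device approach-time
    differences, assuming a back-and-forth gesture that first moves
    right to left. Per FIG. 11, the first device observes the interval
    T3 - T2 and the second device observes T4 - T1."""
    diff_first = t3 - t2    # interval seen by the first device
    diff_second = t4 - t1   # interval seen by the second device
    if diff_first > diff_second:
        return "first device is right of second device"
    elif diff_first < diff_second:
        return "first device is left of second device"
    return "indeterminate"
```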
  • whether there is a time sequence or a time difference can also be used to identify particular devices, such as the device with the smallest detection time, the device with the largest detection time, the device with the smallest detection time difference, or the device with the largest detection time difference.
  • the system can thereby perform special cooperation among multiple devices, such as sharing the content on the device with the smallest detection time or detection time difference to the other devices.
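  • One of these special selections can be sketched in a line (the mapping from device id to detection time is an illustrative data structure, not defined by the description):

```python
def pick_special_device(detections):
    """Identify the device with the smallest detection time (or, if the
    values are time differences, the smallest detection time difference)
    among a mapping of device id -> time in seconds."""
    return min(detections, key=detections.get)
```

The system could then, for example, share the selected device's content to all others.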
  • the pattern matching condition refers to whether the patterns in the detection information of the first device 11 and the second device 12 match, for example, whether their signal modes match, whether their signal change modes match, whether their object approach or stay modes match, or whether their cooperation mode switching conditions match.
  • more precisely, the pattern matching condition refers to whether the modes of the first device 11 and the second device 12 satisfy a predefined rule; for example, it may be predefined that two modes match only when they are identical. It is also possible to predefine that two different modes match: for example, the first device 11 detecting that the signal changes from large to small three times while the second device 12 detects that the signal changes from large to small may also count as a match.
  • the corresponding non-contact gesture operation may be to pass a hand or other object back and forth near the first device 11 once, and then past the first device 11 and the second device 12 once; the corresponding cooperative response is to share the content on the first device 11 to the second device 12.
  • the collaboration response information sent by the collaboration response sending module 123 to each first device 11 is not necessarily identical.
  • for example, the positioning information of each first device 11 differs, the portion displayed by each first device 11 in combined-screen display differs, and even the cooperative action of each first device 11 may differ.
  • when the collaboration response generation module 122 determines whether the preset matching condition is met, it is not necessary that all first devices 11 generate detection information within the same period of time. For example, when all devices are in combined-screen play, if a gesture sweeps over one or several first devices 11, those first devices 11 generate corresponding detection information. After receiving the detection information of that first device 11 or those first devices 11, the detection information receiving module 121 sends it to the collaboration response generation module 122; when the detection information of the first device 11 and the detection information of the second device 12 satisfy the matching condition, the collaboration response generation module 122 instructs the first device 11 to exit the combined-screen play mode.
  • the second device 12 then recalculates the content displayed by the other first devices 11 still participating in combined-screen play and sends them a collaboration response to update their displayed portions. It should be understood that the second device 12 also recalculates and updates its own displayed portion.
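  • The recalculation step can be sketched as follows. The even horizontal split is an illustrative policy of my own; the description does not fix a particular layout algorithm:

```python
def assign_screen_portions(device_ids, total_width=1.0):
    """Divide the shared content evenly among the devices still
    participating in combined-screen play, returning for each device the
    (start, end) fraction of the content width it should display."""
    n = len(device_ids)
    slice_w = total_width / n
    return {dev: (i * slice_w, (i + 1) * slice_w)
            for i, dev in enumerate(device_ids)}
```

When a device exits combined-screen play, the coordinating device would call this again with the reduced device list and push the new portions out in collaboration responses.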
  • in summary, the second device 12 generates corresponding collaboration response information according to the detection information of the second device 12 and the detection information of the at least one first device 11, sends it to the at least one first device 11, and performs cooperative processing according to the collaboration response information. Therefore, during multi-device cooperative interaction, the user only needs to perform non-contact gesture operations near the first device 11 and the second device 12 without touching either device; the operation is simple and user-friendly.
  • alternatively, the second device 12 may only control the first devices 11 to perform a cooperative response without participating itself; that is, the collaboration response processing module 124, the detecting module 125, and the detection information generating module 126 may be omitted, yielding lower cost and a simpler structure.
  • FIG. 12 is a schematic diagram of still another logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • the collaboration response generation module 122 generates collaboration response information according to the detection information of the at least two first devices 11.
  • the detection information receiving module 121 receives the detection information of the at least two first devices 11, and the comparison unit 1221 compares whether the detection information of the at least two first devices 11 meets a preset matching condition; when the comparison result satisfies the matching condition, the cooperation response generation unit 1222 generates the first collaboration response information, and when it does not, the cooperation response generation unit 1222 generates the second collaboration response information or does not generate collaboration response information.
  • FIG. 13 is a schematic diagram of still another logical structure of a second device participating in multi-device cooperation according to an embodiment of the present invention.
  • the second device 12 includes a processor 1201, a transmitter 1202, a receiver 1203, and a memory 1204.
  • the processor 1201, the transmitter 1202, the receiver 1203, and the memory 1204 are connected by a bus system 1205.
  • the receiver 1203 is configured to receive detection information of the at least one first device 11, wherein the detection information of the first device 11 is generated by the first device 11 according to a contactless gesture operation in the vicinity thereof.
  • the memory 1204 is configured to store an instruction causing the processor 1201 to: generate cooperative response information according to the detection information of the at least one first device 11.
  • the transmitter 1202 is configured to send the cooperative response information to the at least one first device 11.
  • processor 1201 may also be referred to as a central processing unit (Central Processing Unit, CPU).
  • Memory 1204 can include read only memory and random access memory and provides instructions and data to processor 1201.
  • a portion of the memory 1204 may also include non-volatile random access memory (NVRAM).
  • receiver 1203 and transmitter 1202 can be coupled to antenna 1206.
  • the various components of the second device 12 are coupled together by a bus system 1205, which may include, in addition to the data bus, a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, various buses are labeled as bus system 1205 in the figure.
  • the foregoing embodiment of the present invention discloses that the cooperative response generation module of the second device 12 can be implemented by the processor 1201.
  • the processor 1201 may be an integrated circuit chip with signal processing capabilities.
  • the cooperative response generation module of the second device 12 may be completed by an integrated logic circuit of hardware in the processor 1201 or an instruction in a form of software.
  • the processor 1201 described above may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor may implement or carry out the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 1204, and the processor 1201 reads the information in the memory 1204 and performs the functions of the cooperative response generation module of the second device 12 in conjunction with its hardware.
  • the memory 1204 further stores instructions that cause the processor 1201 to: generate cooperative response information according to the detection information of the at least two first devices 11.
  • the collaboration response information includes the first collaboration response information or the second collaboration response information, where the processor 1201 compares whether the detection information of the at least two first devices 11 meets a preset matching condition; when the comparison result satisfies the matching condition, the processor 1201 generates the first collaboration response information, and when it does not, the processor 1201 generates the second collaboration response information or does not generate collaboration response information.
  • the second device 12 further includes a sensor 1207, and the sensor 1207 is connected to the processor 1201 through the bus system 1205.
  • the sensor 1207 is configured to detect a non-contact gesture operation in the vicinity of the second device 12 and determine whether the non-contact gesture operation in the vicinity of the second device 12 meets the preset second device detection information generation condition; when the result of the determination is yes, the detection information of the second device 12 is generated.
  • the memory 1204 further stores instructions for causing the processor 1201 to: generate cooperative response information according to the detection information of the second device 12 and the detection information of the at least one first device 11, and perform cooperative processing according to the cooperation response information.
  • the processor 1201 performs processing such as positioning, sharing, closing, splitting, switching the screen size, or switching the played or displayed content according to the collaboration response information.
  • the cooperative response processing module of the second device 12 disclosed in the foregoing embodiment of the present invention may be implemented by the processor 1201.
  • the specific implementation is the same as the implementation of the cooperative response generating module by the processor 1201, and details are not described herein again.
  • the collaboration response information includes first collaboration response information or second collaboration response information, where the processor 1201 compares whether the detection information of the second device 12 and the detection information of the at least one first device 11 satisfy a preset matching condition; when the comparison result satisfies the preset matching condition, the processor 1201 generates the first collaboration response information, and when it does not, the processor 1201 generates the second collaboration response information or does not generate collaboration response information.
  • the sensor 1207 can also be replaced with a camera.
  • FIG. 14 is a flowchart of a multi-device cooperation method according to an embodiment of the present invention. As shown in FIG. 14, the multi-device cooperation method of the present invention includes the following steps:
  • Step S1 Determine the first device 11 and the second device 12 among the plurality of devices.
  • the first device 11 and the second device 12 can be determined by a static method or a dynamic method.
  • in the static method, the function of each device is preset: some devices function as the first device 11 and some as the second device 12.
  • in the dynamic method, all devices have the same functions; that is, each device can act as either the first device 11 or the second device 12. During collaboration, one device is selected as the second device 12 according to the specific situation, and the remaining devices act as first devices 11.
  • Step S2 detecting a non-contact gesture operation in the vicinity of at least one first device 11.
  • the contactless gesture is the same as the contactless gesture of the first device 11 described above, and details are not described herein again.
  • Step S3 Generate detection information of the first device 11 according to the non-contact gesture operation in the vicinity of the at least one first device 11.
  • Step S4 Send the detection information of the at least one first device 11 to the second device 12.
  • Step S5 The second device 12 generates collaboration response information according to the detection information of the at least one first device 11.
  • Step S6 The second device 12 transmits the cooperation response information to the at least one first device 11.
  • Step S7 The at least one first device 11 performs cooperative processing according to the cooperation response information.
  • the first device 11 generates its detection information according to the non-contact gesture operation and sends it to the second device 12; the second device 12 generates corresponding collaboration response information according to the detection information of the at least one first device 11 and sends it to the at least one first device 11; the first device 11 then receives the collaboration response information of the second device and finally performs cooperative processing according to it. Therefore, the present invention only requires a non-contact gesture operation in the vicinity of at least one first device 11 during multi-device cooperative interaction, without touching the first device 11 or the second device 12, and the operation is simple and user-friendly.
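The S1–S7 flow above can be sketched end to end. This is a minimal sketch, not the patent's implementation: the function names (`make_detection_info`, `generate_response`), the dictionary fields, and the one-second matching window are all illustrative assumptions.

```python
import time

def make_detection_info(device_id, gesture):
    # Step S3: a first device turns a detected non-contact gesture
    # into detection information (illustrative fields)
    return {"device": device_id, "gesture": gesture, "time": time.time()}

def generate_response(reports, window=1.0):
    # Step S5: the second device treats reports that fall inside the
    # same time window as one gesture operation (assumed condition)
    times = [r["time"] for r in reports]
    if max(times) - min(times) <= window:
        return {"type": "first", "action": "combine-screen"}
    return {"type": "second", "action": "reset-detection"}

# Steps S2-S4: two first devices report the same sweep
reports = [make_detection_info("A", "swipe-left"),
           make_detection_info("B", "swipe-left")]
response = generate_response(reports)   # Steps S5-S6
print(response["type"])                  # -> first
```

Steps S6 and S7 would then deliver `response` back to each first device, which acts on it.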
  • FIG. 15 is a flowchart of a method for cooperation of a first device according to an embodiment of the present invention. As shown in FIG. 15, the cooperation method of the first device 11 includes the following steps:
  • Step S11 detecting a non-contact gesture operation in the vicinity of the first device 11.
  • the non-contact gesture operation of the first device 11 is as described above, and details are not described herein again.
  • Step S12 Generate the detection information of the first device 11 according to the non-contact gesture operation in the vicinity of the first device 11.
  • Step S13 Send the detection information of the first device 11 to the second device 12.
  • Step S14 Receive the collaboration response information sent by the second device 12.
  • the cooperation response information is generated by the second device 12 according to the detection information of the first device 11.
  • the collaboration response information is the same as the collaboration response information described above, and is not described here.
  • the first device 11 detects a non-contact gesture operation in its vicinity, generates its detection information when the gesture operation meets the first-device detection information generation condition, sends the detection information to the second device 12, then receives the collaboration response information sent by the second device 12, and finally performs collaborative processing according to it. Therefore, during multi-device cooperative interaction, only a non-contact gesture operation near the first device 11 is needed, without touching the first device 11, and the operation is simple and user-friendly.
  • FIG. 16 is still another flowchart of a method for cooperating a first device according to an embodiment of the present invention, which is a detailed description of the step of generating the detection information of the first device 11, as shown in FIG.
  • the generating the detection information of the first device 11 specifically includes the following steps:
  • Step S120 It is determined whether the non-contact gesture operation in the vicinity of the first device 11 meets the preset first device detection information generation condition.
  • when the result of the determination is YES, step S121 is performed, and when the result of the determination is NO, step S122 is performed.
  • the first device detection information generation condition is the same as the first device detection information generation condition described above, and details are not described herein again.
  • Step S121 Generate detection information of the first device 11.
  • the detection information of the first device 11 is the same as the detection information of the first device 11 as described above, and details are not described herein again.
  • Step S122 The detection information of the first device 11 is not generated.
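The S120–S122 branch above can be sketched as follows. The condition `approach_then_leave` is an illustrative example of a first-device detection information generation condition, not one mandated by the patent.

```python
def generate_first_device_detection(gesture_events, condition):
    # Step S120: check the preset first-device detection information
    # generation condition against the observed gesture events
    if condition(gesture_events):
        # Step S121: generate the detection information
        return {"pattern": gesture_events}
    # Step S122: no detection information is generated
    return None

# illustrative condition: an object approaches and then leaves once
approach_then_leave = lambda events: events == ["approach", "leave"]

print(generate_first_device_detection(["approach", "leave"],
                                      approach_then_leave))  # -> {'pattern': ['approach', 'leave']}
print(generate_first_device_detection(["approach"],
                                      approach_then_leave))  # -> None
```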
  • FIG. 17 is a flowchart of a method for cooperation of a second device according to an embodiment of the present invention. As shown in FIG. 17, the method includes the following steps:
  • Step S21 Receive detection information of at least one first device 11, wherein the detection information of the first device 11 is generated by the first device 11 according to a contactless gesture operation in its vicinity.
  • Step S22 Generate collaboration response information according to the detection information of the at least one first device.
  • Step S23 Send the cooperation response information to the at least one first device 11.
  • the second device 12 receives the detection information of the at least one first device 11 and generates corresponding collaboration response information, and then sends the cooperation response information to the at least one first device 11. Therefore, when the multi-device performs the cooperative interaction, only the non-contact gesture operation needs to be performed in the vicinity of the at least one first device 11, and the first device 11 and the second device 12 do not need to be touched, and the operation is simple and user-friendly.
  • the second device 12 may not participate in the collaborative response in the cooperative operation, and may also participate in the collaborative response.
  • the cooperation method of the second device 12 specifically includes the following steps:
  • Step S31 Receive detection information of at least two first devices 11.
  • Step S32 Generate collaboration response information according to the detection information of the at least two first devices 11.
  • Step S33 Compare whether the detection information of the at least two first devices 11 meets a preset matching condition.
  • if the result of the comparison satisfies the preset matching condition, steps S34, S35, and S36 are performed; if the result of the comparison does not satisfy the preset matching condition, steps S37, S38, and S39 or step S310 are performed.
  • Matching conditions include time matching conditions or pattern matching conditions.
  • the time matching condition and the mode matching condition are respectively the time matching condition and the pattern matching condition as described above, and are not described herein again.
  • Step S34 Generate first collaboration response information.
  • the first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, and information for switching the played or displayed content.
  • Step S35 Send the first cooperation response information to the at least two first devices 11.
  • Step S36 The at least two first devices 11 perform cooperative processing according to the first cooperation response information.
  • Step S37 Generate second collaboration response information.
  • the second collaboration response information includes mismatch information, or an instruction for the first device 11 to correct the current cooperation mode, initialize the detection module, or restore the detection module to a predetermined state.
  • Step S38 Send the second collaboration response information to the at least two first devices 11.
  • Step S39 The at least two first devices 11 perform cooperative processing according to the second cooperation response information.
  • Step S310 No collaboration response information is generated.
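The S31–S310 flow above reduces to one comparison and two outcomes. A minimal sketch, with assumed names throughout; `same_pattern` stands in for whatever preset matching condition the system uses.

```python
def second_device_respond(detection_infos, matches):
    # Step S33: compare the detection information of at least two
    # first devices against the preset matching condition
    if matches(detection_infos):
        # Steps S34-S35: first collaboration response, e.g. combine screens
        return {"kind": "first", "action": "combine-screen"}
    # Steps S37-S38; step S310 would instead generate nothing at all
    return {"kind": "second", "action": "restore-detection-module"}

# illustrative pattern-matching condition: all reported patterns identical
same_pattern = lambda infos: len({i["pattern"] for i in infos}) == 1

r = second_device_respond([{"pattern": "swipe"}, {"pattern": "swipe"}],
                          same_pattern)
print(r["kind"])  # -> first
```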
  • the cooperation method of the second device 12 specifically includes the following steps:
  • Step S41 detecting a non-contact gesture operation in the vicinity of the second device 12.
  • the non-contact gesture operation in the vicinity of the second device 12 is as described above, and details are not described herein again.
  • Step S42 Generate detection information of the second device 12 according to the non-contact gesture operation in the vicinity of the second device 12.
  • Step S43 Send the detection information of the at least one first device 11 to the second device 12.
  • Step S44 Compare whether the detection information of the second device 12 and the detection information of the at least one first device 11 satisfy a preset matching condition.
  • if the result of the comparison satisfies the preset matching condition, steps S45, S46, and S47 are performed; if the result of the comparison does not satisfy the preset matching condition, steps S48, S49, and S410 or step S411 are performed.
  • Matching conditions include time matching conditions or pattern matching conditions.
  • the time matching condition and the mode matching condition are respectively the time matching condition and the pattern matching condition as described above, and are not described herein again.
  • Step S45 Generate first collaboration response information.
  • the first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, and information for switching the played or displayed content.
  • Step S46 Send the first cooperation response information to the at least one first device 11.
  • Step S47 The second device 12 and the at least one first device 11 perform cooperative processing according to the first cooperation response information.
  • Step S48 Generate second collaboration response information.
  • the second collaboration response information includes mismatch information, or an instruction for the first device to correct the current cooperation mode, initialize the detection module, or restore the detection module to a predetermined state.
  • Step S49 Send the second collaboration response information to the at least one first device 11.
  • Step S410 The second device 12 and the at least one first device 11 perform cooperative processing according to the second cooperation response information.
  • Step S411 No cooperation response information is generated.
  • FIG. 20 is still another flowchart of a method for cooperation of a second device according to an embodiment of the present invention, which is a detailed description of the steps of generating the detection information of the second device 12, as shown in FIG.
  • the generating the detection information of the second device 12 specifically includes the following steps:
  • Step S421 It is determined whether the non-contact gesture operation in the vicinity of the second device 12 meets the preset second device detection information generation condition.
  • when the result of the determination is YES, step S422 is performed, and when the result of the determination is NO, step S423 is performed.
  • the second device detection information generation condition is the second device detection information generation condition as described above, and details are not described herein again.
  • Step S422 Generate detection information of the second device 12.
  • Step S423 The detection information of the second device 12 is not generated.
  • the first device detects a non-contact gesture operation in its vicinity, generates its detection information when the gesture operation meets the first-device detection information generation condition, and sends the detection information to the second device.
  • the second device generates corresponding collaboration response information according to the detection information of the at least one first device and sends it to the at least one first device; the first device receives the collaboration response information sent by the second device and finally
  • performs cooperative processing according to it; in addition, the second device can also perform collaborative processing according to the collaboration response information. Therefore, in multi-device cooperation, the present invention only requires a non-contact gesture operation in the vicinity of at least one first device, without touching the first device or the second device, and the operation is simple and user-friendly.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)

Abstract

The present invention discloses a multi-device collaboration system, a first device, a second device, and methods therefor. The first device comprises a detection module, a detection information generation module, a detection information sending module, a collaboration response receiving module, and a collaboration response processing module. The detection module detects a non-contact gesture operation near the first device; the detection information generation module generates detection information of the first device according to that gesture operation; the detection information sending module sends the detection information of the first device to the second device; the collaboration response receiving module receives collaboration response information sent by the second device, where the collaboration response information is generated by the second device according to the detection information of the first device; and the collaboration response processing module performs collaborative processing according to the collaboration response information. In this way, multi-device collaborative interaction requires only a non-contact gesture operation near the devices, without touching them, making the operation simple and user-friendly.

Description

Multi-device collaboration system, first device, second device, and collaboration methods therefor
[Technical Field]
The present invention relates to the field of communication technologies, and in particular to a multi-device collaboration system, a first device, a second device, and collaboration methods therefor.
[Background]
With the popularization of smart terminals and the increase of network transmission rates, social collaboration among multiple devices has become increasingly common. For example, multiple devices may share music: each device plays the same content for louder playback, or each plays a different audio channel for a stereo experience. Multiple devices may also share video, playing it synchronously for a shared experience or in combined-screen mode for a large-screen experience, or share pictures and display them synchronously across combined screens.
In the prior art, there are generally four methods for achieving such social collaboration among multiple devices:
First: colliding the screens of multiple devices, typically using contact sensors;
Second: photographing the screen of the target device to obtain its two-dimensional code, and thereby the target device's address or the information to be shared;
Third: flicking one device toward the target device to establish a connection with it and thereby achieve multi-device collaboration;
Fourth: using the touch screen, specifically a pinch gesture across the screens of multiple devices.
However, none of these four prior-art methods is simple enough. Specifically:
The first method requires the screens of two devices to collide; with more devices, multiple collisions are needed, which is inconvenient, and collision is not a user-friendly interaction;
The second method requires photographing the target device. With multiple devices, multiple shots are needed, and a two-dimensional code must be available during collaboration. For example, to share currently playing content, such as a video being watched, with another device, the sharing device must interrupt the content, bring up the video's two-dimensional code, and have the other device photograph it. This process is cumbersome and must be repeated for each additional device;
The third method suits only one-to-one collaboration; for one-to-many collaboration, the device must be flicked toward each target separately, which is also inconvenient;
The fourth method likewise suits only one-to-one collaboration; to combine the screens of several devices, for example, multiple pinch gestures are needed, which is also inconvenient.
[Summary of the Invention]
The technical problem mainly solved by the present invention is to provide a first device and a second device participating in multi-device collaboration, a multi-device collaboration system, and collaboration methods for the first and second devices, such that multi-device collaborative interaction requires only a non-contact gesture operation near the devices, without touching them, making the operation simple and user-friendly.
A first aspect provides a first device participating in multi-device collaboration, the first device comprising: a detection module configured to detect a non-contact gesture operation near the first device; a detection information generation module configured to generate detection information of the first device according to the non-contact gesture operation near the first device; a detection information sending module configured to send the detection information of the first device to a second device; a collaboration response receiving module configured to receive collaboration response information sent by the second device, where the collaboration response information is generated by the second device according to the detection information of the first device; and a collaboration response processing module configured to perform collaborative processing according to the collaboration response information.
In a first possible implementation of the first aspect, the non-contact gesture operation near the first device includes at least one of the following: waving a hand or another object past the vicinity of the first device one or more times starting from a preset direction; waving a hand or another object past the vicinity of the first device one or more times starting from a preset direction and then back past it one or more times in the opposite direction; waving a hand or another object past the vicinity of at least two first devices simultaneously one or more times; and holding a waved hand or another object in the vicinity of the first device one or more times.
In a second possible implementation of the first aspect, the detection information generation module further includes a judgment unit and a detection information generation unit, where the judgment unit determines whether the non-contact gesture operation near the first device meets a preset first-device detection information generation condition, and when the result of the determination is yes, the detection information generation unit generates the detection information of the first device.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the first-device detection information generation condition includes at least one of: a preset collaboration mode switching procedure, a collaboration mode switching condition, a signal pattern, a signal change pattern, a pattern of detecting an object approaching or moving away, a gesture change pattern, and an image change pattern.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the detection information of the first device includes at least one of: signal information corresponding to the signal pattern; signal change information corresponding to the signal change pattern; object approach or departure information corresponding to the pattern of detecting an object approaching or moving away; or collaboration mode switching information corresponding to the collaboration mode switching procedure and the collaboration mode switching condition.
In a fifth possible implementation of the first aspect, the collaboration response information includes first collaboration response information or second collaboration response information, where: the first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, or information for switching the played or displayed content; if the first device receives the first collaboration response information, it performs positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content; the second collaboration response information includes mismatch information, or an instruction to correct the current collaboration mode, initialize the detection module, or restore the detection module to a predetermined state; if the first device receives the second collaboration response information, it corrects the current collaboration mode, initializes the detection module, or restores the detection module to the predetermined state.
A second aspect provides a second device participating in multi-device collaboration, the second device comprising: a detection information receiving module configured to receive detection information of at least one first device, where the detection information of a first device is generated by that first device according to a non-contact gesture operation in its vicinity; a collaboration response generation module configured to generate collaboration response information according to the detection information of the at least one first device; and a collaboration response sending module configured to send the collaboration response information to the at least one first device.
In a first possible implementation of the second aspect, the collaboration response generation module further generates the collaboration response information according to detection information of at least two first devices.
In a second possible implementation of the second aspect, the second device further comprises: a detection module configured to detect a non-contact gesture operation near the second device; and a detection information generation module configured to generate detection information of the second device according to the non-contact gesture operation near the second device.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the collaboration response generation module further generates the collaboration response information according to the detection information of the second device and the detection information of the at least one first device; the second device further comprises a collaboration response processing module configured to perform collaborative processing according to the collaboration response information.
With reference to the first or third possible implementation of the second aspect, in a fourth possible implementation, the collaboration response generation module includes a comparison unit and a collaboration response generation unit, and the collaboration response information includes first collaboration response information or second collaboration response information, where the comparison unit compares whether the detection information of at least two first devices, or the detection information of the second device and of at least one first device, satisfies a preset matching condition; when the comparison result satisfies the preset matching condition, the collaboration response generation unit generates the first collaboration response information, and when it does not, the unit generates the second collaboration response information or generates no collaboration response information.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation, the matching condition includes a time matching condition, which covers whether the detection information of the first device and of the second device occurred within the same time period, or whether there is an ordering in time or in time difference.
With reference to the fourth possible implementation of the second aspect, in a sixth possible implementation, the matching condition includes a pattern matching condition, namely whether the pattern of the first device's detection information matches that of the second device's detection information.
With reference to the fourth possible implementation of the second aspect, in a seventh possible implementation, the first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, or information for switching the played or displayed content; the second collaboration response information includes mismatch information, or an instruction for the first device to correct the current collaboration mode, initialize its detection module, or restore its detection module to a predetermined state.
A third aspect provides a multi-device collaboration system comprising a second device and at least one first device, where the first device is the first device of the first aspect or any of its first to fifth possible implementations, and the second device is the second device of the second aspect or any of its first to seventh possible implementations.
A fourth aspect provides a collaboration method for a first device, comprising: detecting a non-contact gesture operation near the first device; generating detection information of the first device according to the non-contact gesture near the first device; sending the detection information of the first device to a second device; receiving collaboration response information sent by the second device, where the collaboration response information is generated by the second device according to the detection information of the first device; and performing collaborative processing according to the collaboration response information.
In a first possible implementation of the fourth aspect, the step of generating the detection information of the first device further comprises: determining whether the non-contact gesture operation near the first device meets a preset first-device detection information generation condition; generating the detection information of the first device when the result of the determination is yes; and not generating it when the result is no.
A fifth aspect provides a collaboration method for a second device, comprising: receiving detection information of at least one first device, where the detection information of a first device is generated by that first device according to a non-contact gesture operation in its vicinity; generating collaboration response information according to the detection information of the at least one first device; and sending the collaboration response information to the at least one first device.
In a first possible implementation of the fifth aspect, the step of generating the collaboration response information comprises: generating the collaboration response information according to detection information of at least two first devices.
In a second possible implementation of the fifth aspect, before the step of generating the collaboration response information, the method comprises: detecting a non-contact gesture operation near the second device; and generating detection information of the second device according to that gesture operation.
With reference to the second possible implementation of the fifth aspect, in a third possible implementation, the step of generating the collaboration response information further comprises: generating the collaboration response information according to the detection information of the second device and of the at least one first device; and after the step of sending the collaboration response information, the method comprises: the second device and the at least one first device performing collaborative processing according to the collaboration response information.
With reference to the first or third possible implementation of the fifth aspect, in a fourth possible implementation, the collaboration response information includes first collaboration response information or second collaboration response information, and the step of generating it further comprises: the second device comparing whether the detection information of at least two first devices, or the detection information of the second device and of at least one first device, satisfies a preset matching condition; generating the first collaboration response information when the comparison result satisfies the condition; and generating the second collaboration response information, or no collaboration response information, when it does not.
With reference to the fourth possible implementation of the fifth aspect, in a fifth possible implementation, the matching condition includes a time matching condition, which covers whether the detection information of the first device and of the second device occurred within the same time period, or whether there is an ordering in time or in time difference.
With reference to the fourth possible implementation of the fifth aspect, in a sixth possible implementation, the matching condition includes a pattern matching condition, namely whether the pattern of the first device's detection information matches that of the second device's detection information.
The beneficial effects of the present invention are as follows. In contrast to the prior art, the first device of the present invention detects a non-contact gesture operation in its vicinity through the detection module; the detection information generation module generates the first device's detection information according to that gesture operation; the detection information sending module sends it to the second device; the collaboration response receiving module receives the collaboration response information sent by the second device; and the collaboration response processing module performs collaborative processing according to it. Thus, multi-device collaborative interaction requires only a non-contact gesture operation near the first device, without touching the devices, making the operation simple and user-friendly.
[Brief Description of the Drawings]
FIG. 1 is a schematic logical structural diagram of a multi-device collaboration system according to an embodiment of the present invention;
FIG. 2 is a schematic logical structural diagram of a first device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 3 is another schematic logical structural diagram of a first device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a non-contact gesture according to an embodiment of the present invention;
FIG. 5 is another schematic diagram of a non-contact gesture according to an embodiment of the present invention;
FIG. 6 is still another schematic diagram of a non-contact gesture according to an embodiment of the present invention;
FIG. 7 is still another schematic diagram of a non-contact gesture according to an embodiment of the present invention;
FIG. 8 is still another schematic logical structural diagram of a first device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 9 is a schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 10 is another schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 11 is a schematic logical diagram of eliminating the clock error between two devices according to an embodiment of the present invention;
FIG. 12 is still another schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 13 is still another schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention;
FIG. 14 is a flowchart of a multi-device collaboration method according to an embodiment of the present invention;
FIG. 15 is a flowchart of a collaboration method for a first device according to an embodiment of the present invention;
FIG. 16 is another flowchart of the collaboration method for a first device according to an embodiment of the present invention;
FIG. 17 is a flowchart of a collaboration method for a second device according to an embodiment of the present invention;
FIG. 18 is another flowchart of the collaboration method for a second device according to an embodiment of the present invention;
FIG. 19 is still another flowchart of the collaboration method for a second device according to an embodiment of the present invention;
FIG. 20 is still another flowchart of the collaboration method for a second device according to an embodiment of the present invention.
[Detailed Description]
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Referring to FIG. 1, FIG. 1 is a schematic logical structural diagram of a multi-device collaboration system according to an embodiment of the present invention. As shown in FIG. 1, the multi-device collaboration system 10 of the present invention includes at least one first device 11 and a second device 12. In FIG. 1 there are multiple first devices 11, and the second device 12 controls the multiple first devices 11 to perform collaborative processing. Specifically, the multiple first devices 11 detect non-contact gesture operations in their vicinity, generate corresponding detection information when a gesture operation meets the preset first-device detection information generation condition, and send the detection information to the second device 12; the second device 12 generates collaboration response information for each according to the received detection information of the multiple first devices 11 and sends it to the corresponding first devices 11, which perform collaborative processing according to their respective collaboration response information.
Optionally, while controlling the multiple first devices 11 to perform collaborative processing, the second device 12 may itself also participate. Specifically, the second device 12 may also detect a non-contact gesture operation in its vicinity and generate its own detection information when the gesture operation meets the preset second-device detection information generation condition; it then generates collaboration response information according to its own detection information and that of the multiple first devices 11 and sends the response information to the corresponding first devices 11; the multiple first devices 11 and the second device 12 perform collaborative processing according to their respective collaboration response information.
Therefore, during multi-device collaborative interaction, the user only needs to perform a non-contact gesture operation near the first devices 11, or near the first devices 11 and the second device 12, without touching them, which is convenient and user-friendly.
It should be noted that the first device 11 and the second device 12 of the present invention are classified only by function. In practice, every device in the multi-device collaboration system 10 may have the functions of both the first device 11 and the second device 12, and when a collaboration response is performed, a device may act as either according to the specific situation.
The structures and functions of the first device and the second device of the present invention are described in detail below.
Referring to FIG. 2, FIG. 2 is a schematic logical structural diagram of a first device participating in multi-device collaboration according to an embodiment of the present invention. As shown in FIG. 2, the first device 11 of the present invention includes:
a detection module 110 configured to detect a non-contact gesture operation near the first device 11;
a detection information generation module 111 configured to generate detection information of the first device 11 according to the non-contact gesture operation near the first device 11;
a detection information sending module 112 configured to send the detection information of the first device 11 to the second device 12;
a collaboration response receiving module 113 configured to receive collaboration response information sent by the second device 12, where the collaboration response information is generated by the second device 12 according to the detection information of the first device 11; and
a collaboration response processing module 114 configured to perform collaborative processing according to the collaboration response information.
In this embodiment, the first device 11 detects a non-contact gesture operation in its vicinity, generates its detection information according to the gesture operation, sends the information to the second device 12, then receives the collaboration response information sent by the second device 12, and finally performs collaborative processing according to it. Therefore, during multi-device collaborative interaction, only a non-contact gesture operation near the first device 11 is needed, without touching the first device 11, which is simple and user-friendly.
An embodiment of the present invention further provides another schematic logical structural diagram of the first device 11, described in detail on the basis of the first device 11 provided in the foregoing embodiment.
As shown in FIG. 3, the detection information generation module 111 of the first device 11 further includes a judgment unit 1110 and a detection information generation unit 1111, where the judgment unit 1110 determines whether the non-contact gesture operation near the first device 11 meets the preset first-device detection information generation condition, and when the result of the determination is yes, the detection information generation unit 1111 generates the detection information of the first device 11.
The non-contact gesture operation near the first device 11 includes at least one of the following: waving a hand or another object past the vicinity of the first device 11 one or more times starting from a preset direction; waving a hand or another object past the vicinity of the first device 11 one or more times starting from a preset direction and then back past it one or more times in the opposite direction; waving a hand or another object past the vicinity of at least two first devices 11 simultaneously one or more times; and holding a waved hand or another object in the vicinity of the first device 11 one or more times.
The non-contact gestures listed above are illustrated by the following examples:
As shown in FIG. 4, a hand is waved from the right past the vicinity of the first device 11 once;
As shown in FIG. 5, a hand is waved from the right past the vicinity of the first device 11 once (FIG. 5.1) and then back past it once in the opposite direction (FIG. 5.2);
As shown in FIG. 6, a hand is waved past the vicinity of at least two first devices 11 simultaneously once;
As shown in FIG. 7, a waved hand is held in the vicinity of the first device 11 once.
It should be noted that the non-contact gestures near the first device 11 of the present invention may also include hand or object movements such as flipping or circling over the first device 11; any preset non-contact gesture that the first device 11 can recognize falls within the protection scope of the present invention, and details are not repeated here.
The first-device detection information generation condition includes at least one of the following: a preset collaboration mode switching procedure, a collaboration mode switching condition, a signal pattern, a signal change pattern, a pattern of detecting an object approaching or moving away, a gesture change pattern, and an image change pattern.
For example, a collaboration mode switching procedure may be: first positioning, then sharing, then screen combining, then screen splitting.
Collaboration mode switching conditions may include: in the positioning state, if the first device 11 detects an object approaching twice in succession within a predetermined time, or detects an object approaching and then leaving twice in succession within a predetermined time, the positioning condition is judged to be satisfied; in the combined-screen state, if the first device 11 detects an object approaching, or detects an object approaching and then leaving, the screen-splitting condition is judged to be satisfied.
If the first device 11 has both a collaboration mode switching procedure and collaboration mode switching conditions, the two are combined as the first-device detection information generation condition, and the first device 11 performs collaborative processing according to the switching procedure. In detail, taking the switching procedure listed above as an example: at the start, the first device 11 must satisfy the positioning condition before it can generate the corresponding detection information; after positioning is achieved, it must satisfy the sharing condition to generate the corresponding detection information; and so on, after sharing, the screen-combining and then screen-splitting conditions must be satisfied in turn to generate the corresponding detection information. Notably, the first device 11 must collaborate in the order given by the switching procedure. If, during one collaboration, a switching condition is satisfied that is not the condition for the next collaboration in the procedure, the first device 11 continues the current collaboration and does not generate detection information. For example, if the first device 11 satisfies the screen-splitting condition while sharing, then, because the next collaboration is screen combining, the first device 11 does not generate detection information but continues the sharing collaboration until the screen-combining condition is satisfied.
It should be understood that if the first device 11 has only collaboration mode switching conditions, those conditions constitute the first-device detection information generation condition, and the corresponding detection information can be generated as soon as a switching condition is satisfied.
Signal patterns include: detecting a signal less than or equal to a preset threshold, or detecting a signal greater than or equal to a preset threshold.
Signal change patterns include: detecting a signal changing from small to large, or from large to small, or from one range to another, or combinations thereof.
Object approach/departure patterns include: detecting an object approaching, or leaving, or approaching for a period of time, or staying away for a period of time, or combinations thereof.
Gesture change patterns include: a gesture changing from flipping to circling, or from a clenched fist to an open palm, or combinations thereof.
Image change patterns include: changes or movement of objects in an image, for example an image shrinking or growing.
It should be understood that the first-device detection information generation condition is not limited to the patterns listed above, nor are those patterns limited to the listed contents; those skilled in the art may enumerate other patterns as appropriate, and no limitation is imposed here.
The detection information of the first device 11 includes at least one of the following: signal information corresponding to the signal pattern; signal change information corresponding to the signal change pattern; object approach or departure information corresponding to the pattern of detecting an object approaching or moving away; and collaboration mode switching information corresponding to the collaboration mode switching procedure and switching conditions. Further, the detection information of the first device 11 may also include the current collaboration state of the first device 11, and the detection information sending module 112 may further send the current collaboration state of the first device 11 to the second device 12. It should be understood that the generation conditions and the detection information of the first device 11 may correspond one to one; since the generation conditions are not limited to the patterns listed above, the detection information is likewise not limited to the items listed above.
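The detection-information fields listed above can be gathered into one container. This is a hypothetical sketch: the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionInfo:
    """Illustrative container for a first device's detection information."""
    signal: Optional[float] = None           # signal information (signal pattern)
    signal_change: Optional[str] = None      # e.g. "small-to-large"
    proximity: Optional[str] = None          # "approach" / "leave"
    mode_switch: Optional[str] = None        # collaboration-mode switch info
    cooperation_state: Optional[str] = None  # current collaboration state

# a device in the positioning state reporting one approach event
info = DetectionInfo(proximity="approach", cooperation_state="positioning")
print(info.proximity)  # -> approach
```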
The collaboration response information includes first collaboration response information or second collaboration response information, where: the first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, or information for switching the played or displayed content; if the first device 11 receives the first collaboration response information, it performs positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content accordingly. Sharing includes sharing one's own content with other devices or obtaining shared content from other devices. The second collaboration response information includes mismatch information, or an instruction to correct the current collaboration mode, initialize the detection module, or restore the detection module to a predetermined state; if the first device 11 receives the second collaboration response information, it corrects the current collaboration mode, initializes the detection module 110, or restores the detection module 110 to the predetermined state accordingly.
In this embodiment, the first and second collaboration response information are not limited to the contents listed above; those skilled in the art may enumerate other contents as appropriate, and no limitation is imposed here.
In this embodiment, the first device 11 detects a non-contact gesture operation in its vicinity, generates its detection information when the gesture operation meets the first-device detection information generation condition, sends the information to the second device 12, then receives the collaboration response information sent by the second device 12, and finally performs collaborative processing according to it. Therefore, during multi-device collaborative interaction, only a non-contact gesture operation near the first device 11 is needed, without touching the first device 11, which is simple and user-friendly.
Referring to FIG. 8, FIG. 8 is still another schematic logical structural diagram of a first device participating in multi-device collaboration according to an embodiment of the present invention. As shown in FIG. 8, the first device 11 includes a sensor 1100, a processor 1101, a transmitter 1102, a receiver 1103, and a memory 1104, which are connected through a bus system 1105.
The sensor 1100 detects a non-contact gesture operation near the first device 11 and generates the detection information of the first device 11 according to it. In this embodiment, the sensor 1100 may also be replaced with a camera.
The transmitter 1102 sends the detection information of the first device 11 to the second device 12.
The receiver 1103 receives the collaboration response information sent by the second device 12.
The memory 1104 stores instructions that cause the processor 1101 to perform collaborative processing according to the collaboration response information.
In addition, the processor 1101 performs processing such as positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content according to the collaboration response information. The processor 1101 may also be called a central processing unit (CPU). The memory 1104 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1101; a part of the memory 1104 may also include a non-volatile random access memory (NVRAM). In a specific application, the receiver 1103 and the transmitter 1102 may be coupled to an antenna 1106. The components of the first device 11 are coupled together through the bus system 1105, which, besides a data bus, may include a power bus, a control bus, a status signal bus, and the like; for clarity, the various buses are all labeled as the bus system 1105 in the figure.
The collaboration response processing module of the first device 11 disclosed in the foregoing embodiment of the present invention may be implemented by the processor 1101. The processor 1101 may be an integrated circuit chip with signal processing capability. During implementation, the collaboration response processing module of the first device 11 may be completed by an integrated logic circuit of hardware in the processor 1101 or by instructions in the form of software. The processor 1101 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1104, and the processor 1101 reads the information in the memory 1104 and completes the functions of the collaboration response processing module of the first device 11 in combination with its hardware.
Optionally, as an embodiment, the sensor 1100 further determines whether the non-contact gesture operation near the first device 11 meets the preset first-device detection information generation condition, and the detection information of the first device 11 is generated when the result of the determination is yes.
Referring to FIG. 9, FIG. 9 is a schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention. As shown in FIG. 9, the second device 12 includes:
a detection information receiving module 121 configured to receive detection information of at least one first device 11, where the detection information of a first device 11 is generated by that first device 11 according to a non-contact gesture operation in its vicinity;
a collaboration response generation module 122 configured to generate collaboration response information according to the detection information of the at least one first device 11; and
a collaboration response sending module 123 configured to send the collaboration response information to the at least one first device 11.
In this embodiment, the second device 12 receives the detection information of at least one first device 11, generates corresponding collaboration response information, and sends it to the at least one first device 11. Therefore, during multi-device collaborative interaction, only a non-contact gesture operation near at least one first device 11 is needed, without touching the first device 11 or the second device 12, which is simple and user-friendly.
An embodiment of the present invention further provides another schematic logical structural diagram of a second device participating in multi-device collaboration, described in detail on the basis of the second device 12 provided in the foregoing embodiment, as shown in FIG. 10.
When the second device 12 controls the first devices 11 to collaborate, it may itself stay out of the collaboration response; if it also participates, the second device 12 further includes:
a detection module 125 configured to detect a non-contact gesture operation near the second device 12.
The non-contact gesture operation near the second device 12 includes at least one of the following: waving a hand or another object past the vicinity of the second device 12 one or more times starting from a preset direction; waving a hand or another object past the vicinity of the second device 12 one or more times starting from a preset direction and then back past it one or more times in the opposite direction; waving a hand or another object past the vicinity of the second device 12 and at least one first device 11 simultaneously one or more times; and holding a waved hand or another object in the vicinity of the second device 12 one or more times.
It should be understood that the non-contact gestures near the second device 12 may also include other hand or object movements; any preset non-contact gesture that the second device 12 can recognize falls within the protection scope of the present invention, and details are not repeated here.
The second device 12 further includes a detection information generation module 126 configured to generate detection information of the second device 12 according to the non-contact gesture operation near the second device 12.
Specifically, the detection information generation module 126 further includes a judgment unit 1260 and a detection information generation unit 1261, where the judgment unit 1260 determines whether the non-contact gesture operation near the second device 12 meets the preset second-device detection information generation condition, and when the result of the determination is yes, the detection information generation unit 1261 generates the detection information of the second device 12.
The content of the second-device detection information generation condition is the same as that of the first-device condition described above; that is, it may also include at least one of: a preset collaboration mode switching procedure, a collaboration mode switching condition, a signal pattern, a signal change pattern, a pattern of detecting an object approaching or moving away, a gesture change pattern, and an image change pattern. Likewise, it is not limited to the patterns listed above; those skilled in the art may enumerate other patterns as appropriate, and no limitation is imposed here.
Notably, the criteria of the second-device and first-device detection information generation conditions may differ. For example, when judging whether the sharing condition is met, the first device 11 may need to detect an object passing its vicinity once and then once more in the opposite direction to meet the sharing condition, whereas the second device 12 only needs to detect an object passing its vicinity once.
The content of the detection information of the second device 12 is the same as that of the first device 11 described above; that is, it includes at least one of: signal information corresponding to the signal pattern, signal change information corresponding to the signal change pattern, object approach or departure information corresponding to the approach/departure pattern, or collaboration mode switching information corresponding to the collaboration mode switching procedure and conditions.
In this embodiment, the collaboration response generation module 122 further generates the collaboration response information according to the detection information of the second device 12 and of the at least one first device 11. The detection information receiving module 121 may also receive the current collaboration state of the at least one first device 11, so that the collaboration response generation module 122 can generate the collaboration response information more flexibly and reliably. The second device 12 further includes a collaboration response processing module 124 configured to perform collaborative processing according to the collaboration response information.
Further, the collaboration response generation module 122 includes a comparison unit 1221 and a collaboration response generation unit 1222, and the collaboration response information includes first or second collaboration response information. The comparison unit 1221 compares whether the detection information of the second device 12 and of the at least one first device 11 satisfies a preset matching condition; when the comparison result satisfies the condition, the collaboration response generation unit 1222 generates the first collaboration response information, and when it does not, the unit generates the second collaboration response information or generates no collaboration response information.
The first collaboration response information includes positioning information, sharing information, screen-combining information, screen-splitting information, screen-size switching information, and information for switching the played or displayed content. The second collaboration response information includes mismatch information, or an instruction for the first device 11 to correct the current collaboration mode, initialize its detection module, or restore its detection module to a predetermined state.
In this embodiment, the matching condition includes a time matching condition and a pattern matching condition.
The time matching condition covers whether the detection information of the first device 11 and of the second device 12 occurred within the same time period, or whether there is an ordering in time or in time difference. Whether they occurred in the same period is mainly used to judge whether the two pieces of detection information were produced by a single non-contact gesture operation: for example, obtain the reception time of the first device 11's detection information, or extract its generation time from the detection information itself, extract the generation time from the second device 12's detection information, and compare whether they fall within the same time period, so as to avoid false or missed detections; the length of that period can be set from experience or tuned through repeated testing. An ordering in time or in time difference is used to identify the relative positions of different devices, or the direction or path of the non-contact gesture operation, because when a hand or another object is waved from a certain direction past the first device 11 and then the second device 12 in turn, the two devices detect it at different times. For example, as shown in FIG. 11, the detection times of one non-contact gesture operation passing the first device 11 and the second device 12 are T1 and T2 respectively. If the clocks of the two devices are calibrated, the two times can be extracted from their detection information and compared; if by default the gesture moves from right to left, the relative positions of the devices can be identified, and if the relative positions are known, the direction and path of the gesture can be identified.
Because the clocks of the two devices may not be synchronized, relying on a single detection time may be unreliable, so time differences can be used instead. Specifically, if a hand or another object is waved from one direction past the first device 11 and then the second device 12, and then waved back past the second device 12 and then the first device 11, each device detects an object approaching twice. As shown in FIG. 11, the time difference between the two approaches detected by the first device 11 is T3 - T2, and that detected by the second device 12 is T4 - T1. This eliminates the clock error between the two devices. If the system assumes that during positioning the gesture starts from right to left, then if (T3 - T2) > (T4 - T1), the first device 11 is to the right of the second device 12, and if (T3 - T2) < (T4 - T1), the first device 11 is to its left. Positioning can thus be performed by comparing the time differences in the detection information of the first device 11 and the second device 12.
The ordering in time or time difference can also be used to identify particular devices, for example the device with the smallest or largest detection time, or the smallest or largest detection time difference. Through such identification, the system can perform special collaboration among the devices, for example sharing the content on the device with the smallest detection time or time difference with the other devices.
The pattern matching condition concerns whether the patterns of the detection information of the first device 11 and the second device 12 match, for example whether their signal patterns match, their signal change patterns match, their object approach/departure patterns match, or their collaboration mode switching conditions match.
The pattern matching condition means whether the patterns of the first device 11 and the second device 12 satisfy a predefined relation, for example a definition under which two patterns match only when identical. Of course, differing patterns may also be predefined as a match: for example, the first device 11 detecting a signal changing from large to small three times and the second device 12 detecting it once may also count as a match. The corresponding non-contact gesture operation may be waving a hand or another object back and forth past the first device 11 once and then past the first device 11 and the second device 12 in turn once, with the corresponding collaboration response being to share the content on the first device 11 with the second device 12.
Notably, in this embodiment, when there are multiple first devices 11, the collaboration response information sent by the collaboration response sending module 123 to each first device 11 is not necessarily identical. For example, the positioning information of each first device 11 differs, the portion each displays during combined-screen display differs, and even the collaborative actions each participates in may differ.
Furthermore, in this embodiment, when the collaboration response generation module 122 judges whether the preset matching condition is satisfied, not all first devices 11 are necessarily required to produce detection information within the same time period. For example, while all devices are playing in combined-screen mode, waving a gesture over one or several first devices 11 can cause those devices to produce corresponding detection information, such as detection information for exiting combined-screen playback. After receiving the detection information of one or several first devices 11, the detection information receiving module 121 passes it to the collaboration response generation module 122, which instructs those first devices 11 to exit combined-screen playback when their detection information and the second device 12's detection information satisfy the matching condition. At the same time, the second device 12 recomputes the content displayed by the other first devices 11 still in combined-screen playback and sends collaboration responses to them to update the portions they display. It should be understood that the second device 12 also recomputes and updates the portion it displays itself.
In this embodiment, the second device 12 generates corresponding collaboration response information according to its own detection information and that of at least one first device 11, sends the response information to the at least one first device 11, and performs collaborative processing according to it. Therefore, during multi-device collaborative interaction, only non-contact gesture operations near the first device 11 and the second device 12 are needed, without touching them, which is simple and user-friendly.
In other embodiments, the second device 12 may also only control the first devices 11 to perform the collaboration response without participating itself; that is, the collaboration response processing module 124, the detection module 125, and the detection information generation module 126 can be omitted, lowering cost and simplifying the structure. Referring to FIG. 12, FIG. 12 is still another schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention. In this embodiment, the collaboration response generation module 122 generates the collaboration response information according to the detection information of at least two first devices 11.
Specifically, the detection information receiving module 121 receives the detection information of at least two first devices 11; the comparison unit 1221 compares whether it satisfies the preset matching condition; when the comparison result satisfies the condition, the collaboration response generation unit 1222 generates the first collaboration response information, and when it does not, the unit generates the second collaboration response information or generates no collaboration response information.
The structures and functions of the other modules of the second device 12 in this embodiment are the same as those of the corresponding modules in the foregoing embodiment, and details are not repeated here.
Referring to FIG. 13, FIG. 13 is still another schematic logical structural diagram of a second device participating in multi-device collaboration according to an embodiment of the present invention. As shown in FIG. 13, the second device 12 includes a processor 1201, a transmitter 1202, a receiver 1203, and a memory 1204, which are connected through a bus system 1205.
The receiver 1203 receives the detection information of at least one first device 11, where the detection information of a first device 11 is generated by that first device 11 according to a non-contact gesture operation in its vicinity.
The memory 1204 stores instructions that cause the processor 1201 to generate collaboration response information according to the detection information of the at least one first device 11.
The transmitter 1202 sends the collaboration response information to the at least one first device 11.
In addition, the processor 1201 may also be called a central processing unit (CPU). The memory 1204 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1201; a part of the memory 1204 may also include a non-volatile random access memory (NVRAM). In a specific application, the receiver 1203 and the transmitter 1202 may be coupled to an antenna 1206. The components of the second device 12 are coupled together through the bus system 1205, which, besides a data bus, may include a power bus, a control bus, a status signal bus, and the like; for clarity, the various buses are all labeled as the bus system 1205 in the figure.
The collaboration response generation module of the second device 12 disclosed in the foregoing embodiment of the present invention may be implemented by the processor 1201. The processor 1201 may be an integrated circuit chip with signal processing capability. During implementation, the collaboration response generation module of the second device 12 may be completed by an integrated logic circuit of hardware in the processor 1201 or by instructions in the form of software. The processor 1201 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1204, and the processor 1201 reads the information in the memory 1204 and completes the functions of the collaboration response generation module of the second device 12 in combination with its hardware.
Optionally, as an embodiment, the memory 1204 further stores instructions that cause the processor 1201 to generate the collaboration response information according to the detection information of at least two first devices 11. Specifically, the collaboration response information includes first or second collaboration response information, where the processor 1201 compares whether the detection information of the at least two first devices 11 satisfies a preset matching condition; when the comparison result satisfies the condition, the processor 1201 generates the first collaboration response information, and when it does not, the processor 1201 generates the second collaboration response information or generates no collaboration response information.
Optionally, as an embodiment, the second device 12 further includes a sensor 1207 connected to the processor 1201 through the bus system 1205. The sensor 1207 detects a non-contact gesture operation near the second device 12, determines whether it meets the preset second-device detection information generation condition, and generates the detection information of the second device 12 when the result of the determination is yes. The memory 1204 further stores instructions that cause the processor 1201 to generate collaboration response information according to the detection information of the second device 12 and of the at least one first device 11, and to perform collaborative processing according to it. Specifically, the processor 1201 performs processing such as positioning, sharing, screen combining, screen splitting, screen-size switching, or switching of the played or displayed content according to the collaboration response information. The collaboration response processing module of the second device 12 disclosed in the foregoing embodiments may be implemented by the processor 1201, in the same manner as the processor 1201 implements the collaboration response generation module, and details are not repeated here.
The collaboration response information includes first or second collaboration response information, where the processor 1201 compares whether the detection information of the second device 12 and of the at least one first device 11 satisfies a preset matching condition; when the comparison result satisfies the condition, the processor 1201 generates the first collaboration response information, and when it does not, the processor 1201 generates the second collaboration response information or generates no collaboration response information.
In this embodiment, the sensor 1207 may also be replaced with a camera.
请参阅图14,图14是本发明实施例提供的一种多设备协作方法的流程图,如图14所示,本发明的多设备协作方法包括以下步骤:
步骤S1:确定多设备中的第一设备11和第二设备12。
确定第一设备11和第二设备12有静态法和动态法。
静态法为各设备的功能已经预先设置好,有的设备为第一设备11的功能,有的设备为第二设备12的功能。
动态法为各设备的功能都相同,即每个设备都可以作为第一设备11或者第二设备12,在进行协作时,根据具体情况选择其中一个设备为第二设备12,剩余的设备为第一设备11。
步骤S2:检测至少一个第一设备11附近的非接触式手势操作。
非接触式手势和前文所述的第一设备11的非接触式手势相同,在此不再赘述。
步骤S3:根据至少一个第一设备11附近的非接触式手势操作生成第一设备11的检测信息。
步骤S4:发送至少一个第一设备11的检测信息到第二设备12。
步骤S5:第二设备12根据至少一个第一设备11的检测信息生成协作响应信息。
步骤S6:第二设备12发送协作响应信息到至少一个第一设备11。
步骤S7:至少一个第一设备11根据协作响应信息进行协作处理。
在本实施例中,第一设备11通过检测其附近的非接触式手势操作,并根据该非接触式手势操作生成第一设备11的检测信息,然后发送给第二设备12;第二设备12根据至少一个第一设备11的检测信息生成相应的协作响应信息,然后发送该协作响应信息到至少一个第一设备11;第一设备11进一步接收第二设备12发送的协作响应信息,最后根据协作响应信息进行协作处理。因此,本发明在多设备进行协作交互时,只需要在至少一个第一设备11附近进行非接触式的手势操作,不需要接触第一设备11和第二设备12,操作简单且人性化。
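上述步骤S1至S7的消息流程可以用如下Python草图示意。需要说明的是,这只是一个说明性示例,并非本发明的具体实现;其中的函数名、字段名以及"share"等协作动作均为假设:

```python
def first_device_detect(device_id, gesture, timestamp):
    # 步骤S2-S4:第一设备检测附近的非接触式手势操作并生成检测信息(随后发送给第二设备)
    return {"device_id": device_id, "gesture": gesture, "timestamp": timestamp}

def second_device_respond(infos):
    # 步骤S5:第二设备根据至少一个第一设备的检测信息生成协作响应信息
    # (这里简单地把所有上报了检测信息的设备作为协作目标,动作"share"为假设)
    return {"targets": [i["device_id"] for i in infos], "action": "share"}

def first_device_handle(response, device_id):
    # 步骤S7:第一设备根据收到的协作响应信息判断自己是否参与协作处理
    return device_id in response["targets"]

# 用法示意:两个第一设备先后检测到手势,第二设备生成并"下发"协作响应(步骤S5-S6)
infos = [first_device_detect("A", "swipe", 0.0),
         first_device_detect("B", "swipe", 0.1)]
response = second_device_respond(infos)
```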
请参阅图15,图15是本发明实施例提供的一种第一设备的协作方法的流程图。如图15所示,第一设备11的协作方法包括以下步骤:
步骤S11:检测第一设备11附近的非接触式手势操作。
其中,第一设备11的非接触式手势操作如前文所述,在此不再赘述。
步骤S12:根据第一设备11附近的非接触式手势操作生成第一设备11的检测信息。
步骤S13:发送第一设备11的检测信息到第二设备12。
步骤S14:接收第二设备12发送的协作响应信息。
其中,该协作响应信息为第二设备12根据第一设备11的检测信息产生的。该协作响应信息和前文所述的协作响应信息相同,在此不再赘述。
在本实施例中,第一设备11通过检测其附近的非接触式手势操作,并在该非接触式手势操作符合第一设备检测信息生成条件时生成第一设备11的检测信息,然后发送给第二设备12,并进一步接收第二设备12发送的协作响应信息,最后根据协作响应信息进行协作处理。因此,在多设备进行协作交互时,只需要在第一设备11附近进行非接触式手势操作即可,不需要接触第一设备11,操作简单且人性化。
请参阅图16,图16是本发明实施例提供的第一设备的协作方法的又一流程图,其是对上述生成第一设备11的检测信息的步骤的详细描述,如图16所示,生成第一设备11的检测信息具体包括以下步骤:
步骤S120:判断第一设备11附近的非接触式手势操作是否符合预设的第一设备检测信息生成条件。
在判断的结果为是时,执行步骤S121,在判断的结果为否时,执行步骤S122。
第一设备检测信息生成条件和前文所述的第一设备检测信息生成条件相同,在此不再赘述。
步骤S121:生成第一设备11的检测信息。
第一设备11的检测信息和前文所述的第一设备11的检测信息相同,在此不再赘述。
步骤S122:不生成第一设备11的检测信息。
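图16所示的步骤S120至S122的判断流程可以草绘如下。其中,生成条件集合GENERATION_CONDITIONS及其中的手势模式名称均为本示例的假设,并非本发明限定的具体条件:

```python
# 预设的第一设备检测信息生成条件(假设:只有这些手势模式会触发生成检测信息)
GENERATION_CONDITIONS = {"swipe_left", "swipe_right", "hover"}

def generate_detection_info(device_id, gesture, timestamp):
    # 步骤S120:判断第一设备附近的非接触式手势操作是否符合预设的生成条件
    if gesture in GENERATION_CONDITIONS:
        # 步骤S121:判断结果为是,生成第一设备的检测信息
        return {"device_id": device_id, "gesture": gesture, "timestamp": timestamp}
    # 步骤S122:判断结果为否,不生成检测信息
    return None
```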
请参阅图17,图17是本发明实施例提供的一种第二设备的协作方法的流程图,如图17所示,该方法包括以下步骤:
步骤S21:接收至少一个第一设备11的检测信息,其中,第一设备11的检测信息为第一设备11根据其附近的非接触式手势操作生成的。
步骤S22:根据至少一个第一设备的检测信息生成协作响应信息。
步骤S23:发送协作响应信息到至少一个第一设备11。
在本实施例中,第二设备12通过接收至少一个第一设备11的检测信息,并生成相应的协作响应信息,然后发送该协作响应信息到至少一个第一设备11。因此,在多设备进行协作交互时,只需要在至少一个第一设备11附近进行非接触式的手势操作,不需要接触第一设备11和第二设备12,操作简单及人性化。
值得注意的是,第二设备12在协作操作中,其本身可以不参与协作响应,也可以参与协作响应。当其本身不参与协作响应时,请参阅图18,第二设备12的协作方法具体包括以下步骤:
步骤S31:接收至少两个第一设备11的检测信息。
步骤S32:根据至少两个第一设备11的检测信息生成协作响应信息。
步骤S33:比较至少两个第一设备11的检测信息是否满足预设的匹配条件。
如果比较的结果满足预设的匹配条件,则执行步骤S34、S35以及S36;如果比较的结果不满足预设的匹配条件,则执行步骤S37、S38以及S39或步骤S310。
匹配条件包括时间匹配条件或模式匹配条件。其中,时间匹配条件和模式匹配条件分别如前文所述的时间匹配条件和模式匹配条件,在此不再赘述。
步骤S34:生成第一协作响应信息。
第一协作响应信息包括定位信息、分享信息、合屏信息、分屏信息、切换屏幕大小信息以及切换所播放或所显示的内容的信息。
步骤S35:发送第一协作响应信息到至少两个第一设备11。
步骤S36:至少两个第一设备11根据第一协作响应信息进行协作处理。
步骤S37:生成第二协作响应信息。
第二协作响应信息包括不匹配信息、指示第一设备11校正当前的协作模式、初始化检测模块或将检测模块恢复到预定的状态。
步骤S38:发送第二协作响应信息到至少两个第一设备11。
步骤S39:至少两个第一设备11根据第二协作响应信息进行协作处理。
步骤S310:不产生协作响应信息。
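以时间匹配条件为例,步骤S33至S37的比较与响应生成逻辑可以草绘如下。时间窗window的取值以及各响应信息的字段均为本示例的假设:

```python
def time_match(infos, window=0.5):
    # 步骤S33:比较至少两个第一设备的检测信息是否发生在同一段时间内(时间匹配条件)
    timestamps = [i["timestamp"] for i in infos]
    return max(timestamps) - min(timestamps) <= window

def build_response(infos, window=0.5):
    # 步骤S34/S37:按比较结果生成第一协作响应信息或第二协作响应信息
    if time_match(infos, window):
        return {"type": "first", "action": "split_screen"}   # 第一协作响应信息(动作为假设)
    return {"type": "second", "reason": "mismatch"}          # 第二协作响应信息(不匹配信息)
```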
当第二设备12本身也参与协作响应时,请参阅图19,第二设备12的协作方法具体包括以下步骤:
步骤S41:检测第二设备12附近的非接触式手势操作。
第二设备12附近的非接触式手势操作如前文所述的第二设备12附近的非接触式手势操作,在此不再赘述。
步骤S42:根据第二设备12附近的非接触式手势操作生成第二设备12的检测信息。
步骤S43:接收至少一个第一设备11的检测信息。
步骤S44:比较第二设备12的检测信息和至少一个第一设备11的检测信息是否满足预设的匹配条件。
如果比较的结果满足预设的匹配条件,则执行步骤S45、S46以及S47;如果比较的结果不满足预设的匹配条件,则执行步骤S48、S49以及S410或步骤S411。
匹配条件包括时间匹配条件或模式匹配条件。其中,时间匹配条件和模式匹配条件分别如前文所述的时间匹配条件和模式匹配条件,在此不再赘述。
步骤S45:生成第一协作响应信息。
第一协作响应信息包括定位信息、分享信息、合屏信息、分屏信息、切换屏幕大小信息以及切换所播放或所显示的内容的信息。
步骤S46:发送第一协作响应信息到至少一个第一设备11。
步骤S47:第二设备12和至少一个第一设备11根据第一协作响应信息进行协作处理。
步骤S48:生成第二协作响应信息。
第二协作响应信息包括不匹配信息、指示第一设备校正当前的协作模式、初始化检测模块或将检测模块恢复到预定的状态。
步骤S49:发送第二协作响应信息到至少一个第一设备11。
步骤S410:第二设备12和至少一个第一设备11根据第二协作响应信息进行协作处理。
步骤S411:不产生协作响应信息。
请参阅图20,图20是本发明实施例提供的第二设备的协作方法的又一流程图,其是对上述生成第二设备12的检测信息的步骤的详细描述,如图20所示,生成第二设备12的检测信息具体包括以下步骤:
步骤S421:判断第二设备12附近的非接触式手势操作是否符合预设的第二设备检测信息生成条件。
在判断的结果为是时,执行步骤S422,在判断的结果为否时,执行步骤S423。
第二设备检测信息生成条件如前文所述的第二设备检测信息生成条件,在此不再赘述。
步骤S422:生成第二设备12的检测信息。
步骤S423:不生成第二设备12的检测信息。
综上所述,第一设备通过检测其附近的非接触式手势操作,并在该非接触式手势操作符合第一设备检测信息生成条件时生成第一设备的检测信息,然后发送给第二设备;第二设备根据至少一个第一设备的检测信息生成相应的协作响应信息,并发送该协作响应信息到至少一个第一设备;第一设备接收第二设备发送的协作响应信息后,根据协作响应信息进行协作处理;此外,第二设备也可以根据协作响应信息进行协作处理。因此,本发明在多设备协作时,只需要在至少一个第一设备附近进行非接触式的手势操作,不需要接触第一设备和第二设备,操作简单且人性化。
以上所述仅为本发明的实施方式,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (24)

  1. 一种参与多设备协作的第一设备,其特征在于,所述第一设备包括:
    检测模块,用于检测所述第一设备附近的非接触式手势操作;
    检测信息生成模块,用于根据所述第一设备附近的非接触式手势操作生成所述第一设备的检测信息;
    检测信息发送模块,用于发送所述第一设备的检测信息到第二设备;
    协作响应接收模块,用于接收所述第二设备发送的协作响应信息,其中,所述协作响应信息为所述第二设备根据所述第一设备的检测信息产生的;
    协作响应处理模块,用于根据所述协作响应信息进行协作处理。
  2. 根据权利要求1所述的第一设备,其特征在于,所述第一设备附近的非接触式手势操作包括如下操作中的至少一个:
    从一预设方向开始挥动手或其它物体经过所述第一设备附近一次或多次,从一预设方向开始挥动手或其它物体经过所述第一设备附近一次或多次后再反方向经过所述第一设备附近一次或多次,挥动手或其它物体同时经过至少两个所述第一设备附近一次或多次,挥动手或其它物体停留在所述第一设备附近一次或多次。
  3. 根据权利要求1所述的第一设备,其特征在于,所述检测信息生成模块进一步包括判断单元和检测信息生成单元,其中,所述判断单元用于判断所述第一设备附近的非接触式手势操作是否符合预设的第一设备检测信息生成条件,在判断的结果为是时,所述检测信息生成单元生成所述第一设备的检测信息。
  4. 根据权利要求3所述的第一设备,其特征在于,所述第一设备检测信息生成条件包括如下条件中的至少一个:
    预先设置的协作模式切换流程,协作模式切换条件,信号模式,信号变化模式,检测到物体靠近或远离的模式,手势变化模式,图像变化模式。
  5. 根据权利要求4所述的第一设备,其特征在于,所述第一设备的检测信息包括如下信息中的至少一个:
    与所述信号模式对应的信号信息,与所述信号变化模式对应的信号的变化信息,与所述检测到物体靠近或远离的模式对应的物体靠近或远离的信息,与所述协作模式切换流程和协作模式切换条件对应的协作模式切换信息。
  6. 根据权利要求1所述的第一设备,其特征在于,所述协作响应信息包括第一协作响应信息或第二协作响应信息,其中:
    所述第一协作响应信息包括定位信息、分享信息、合屏信息、分屏信息、切换屏幕大小信息或切换所播放或所显示的内容的信息,若所述第一设备接收到所述第一协作响应信息,则进行定位、分享、合屏、分屏、切换屏幕大小或切换所播放或所显示的内容;
    所述第二协作响应信息包括不匹配信息、校正当前的协作模式、初始化所述检测模块或将所述检测模块恢复到预定的状态,若所述第一设备接收到所述第二协作响应信息,则校正当前的协作模式、初始化所述检测模块或将所述检测模块恢复到预定的状态。
  7. 一种参与多设备协作的第二设备,其特征在于,所述第二设备包括:
    检测信息接收模块,用于接收至少一个第一设备的检测信息,其中,所述第一设备的检测信息为所述第一设备根据其附近的非接触式手势操作生成的;
    协作响应生成模块,用于根据至少一个所述第一设备的检测信息生成协作响应信息;
    协作响应发送模块,用于发送所述协作响应信息到至少一个所述第一设备。
  8. 根据权利要求7所述的第二设备,其特征在于,所述协作响应生成模块进一步根据至少两个所述第一设备的检测信息生成所述协作响应信息。
  9. 根据权利要求7所述的第二设备,其特征在于,所述第二设备还包括:
    检测模块,用于检测所述第二设备附近的非接触式手势操作;
    检测信息生成模块,用于根据所述第二设备附近的非接触式手势操作生成所述第二设备的检测信息。
  10. 根据权利要求9所述的第二设备,其特征在于,所述协作响应生成模块进一步根据所述第二设备的检测信息和至少一个所述第一设备的检测信息生成所述协作响应信息;
    所述第二设备还包括:
    协作响应处理模块,用于根据所述协作响应信息进行协作处理。
  11. 根据权利要求8或10任一项所述的第二设备,其特征在于,所述协作响应生成模块包括比较单元和协作响应生成单元,所述协作响应信息包括第一协作响应信息或第二协作响应信息,其中,所述比较单元比较至少两个所述第一设备的检测信息或者所述第二设备的检测信息和至少一个所述第一设备的检测信息是否满足预设的匹配条件,当比较的结果满足预设的匹配条件时,所述协作响应生成单元生成所述第一协作响应信息,当比较的结果不满足所述匹配条件时,所述协作响应生成单元生成所述第二协作响应信息或不产生协作响应信息。
  12. 根据权利要求11所述的第二设备,其特征在于,所述匹配条件包括时间匹配条件,所述时间匹配条件包括所述第一设备的检测信息和所述第二设备的检测信息是否发生在同一段时间内,或者是否在时间或时间差上有先后顺序。
  13. 根据权利要求11所述的第二设备,其特征在于,所述匹配条件包括模式匹配条件,所述模式匹配条件为所述第一设备的检测信息的模式和所述第二设备的检测信息的模式是否匹配。
  14. 根据权利要求11所述的第二设备,其特征在于,所述第一协作响应信息包括定位信息、分享信息、合屏信息、分屏信息、切换屏幕大小信息或切换所播放或所显示的内容的信息;
    所述第二协作响应信息包括不匹配信息、指示所述第一设备校正当前的协作模式、初始化检测模块或将检测模块恢复到预定的状态。
  15. 一种多设备协作系统,其特征在于,所述系统包括第二设备以及至少一个第一设备,其中,所述第一设备为如权利要求1-6所述的第一设备,所述第二设备为如权利要求7-14所述的第二设备。
  16. 一种第一设备的协作方法,其特征在于,所述方法包括以下步骤:
    检测所述第一设备附近的非接触式手势操作;
    根据所述第一设备附近的非接触式手势操作生成所述第一设备的检测信息;
    发送所述第一设备的检测信息到所述第二设备;
    接收所述第二设备发送的协作响应信息,其中,所述协作响应信息为所述第二设备根据所述第一设备的检测信息产生的;
    根据所述协作响应信息进行协作处理。
  17. 根据权利要求16所述的方法,其特征在于,所述根据所述第一设备附近的非接触式手势操作生成所述第一设备的检测信息的步骤进一步包括:
    判断所述第一设备附近的非接触式手势操作是否符合预设的第一设备检测信息生成条件;
    在判断的结果为是时,生成第一设备的检测信息;
    在判断的结果为否时,不生成第一设备的检测信息。
  18. 一种第二设备的协作方法,其特征在于,所述方法包括以下步骤:
    接收至少一个第一设备的检测信息,其中,所述第一设备的检测信息为所述第一设备根据其附近的非接触式手势操作生成的;
    根据至少一个所述第一设备的检测信息生成协作响应信息;
    发送所述协作响应信息到至少一个所述第一设备。
  19. 根据权利要求18所述的方法,其特征在于,所述根据至少一个所述第一设备的检测信息生成协作响应信息的步骤包括:
    根据至少两个所述第一设备的检测信息生成协作响应信息。
  20. 根据权利要求18所述的方法,其特征在于,所述根据至少一个所述第一设备的检测信息生成协作响应信息的步骤之前包括:
    检测所述第二设备附近的非接触式手势操作;
    根据所述第二设备附近的非接触式手势操作生成所述第二设备的检测信息。
  21. 根据权利要求20所述的方法,其特征在于,所述根据至少一个所述第一设备的检测信息生成协作响应信息的步骤进一步包括:
    根据所述第二设备的检测信息和至少一个所述第一设备的检测信息生成所述协作响应信息;
    所述发送所述协作响应信息到至少一个所述第一设备的步骤之后包括:
    所述第二设备和至少一个所述第一设备根据所述协作响应信息进行协作处理。
  22. 根据权利要求19或21任一项所述的方法,其特征在于,所述协作响应信息包括第一协作响应信息或第二协作响应信息,其中,所述根据至少一个所述第一设备的检测信息生成协作响应信息的步骤进一步包括:
    所述第二设备比较至少两个所述第一设备的检测信息或者所述第二设备的检测信息和至少一个所述第一设备的检测信息是否满足预设的匹配条件;
    当比较的结果满足预设的匹配条件时,生成所述第一协作响应信息;
    当比较的结果不满足预设的匹配条件时,生成所述第二协作响应信息或不产生协作响应信息。
  23. 根据权利要求22所述的方法,其特征在于,所述匹配条件包括时间匹配条件,所述时间匹配条件包括所述第一设备的检测信息和所述第二设备的检测信息是否发生在同一段时间内,或者是否在时间或时间差上有先后顺序。
  24. 根据权利要求22所述的方法,其特征在于,所述匹配条件包括模式匹配条件,所述模式匹配条件为所述第一设备的检测信息的模式和所述第二设备的检测信息的模式是否匹配。
PCT/CN2014/075920 2013-09-06 2014-04-22 多设备协作系统、第一设备、第二设备及其协作方法 WO2015032208A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310404992.X 2013-09-06
CN201310404992.XA CN104426986B (zh) 2013-09-06 2013-09-06 多设备协作系统、第一设备、第二设备及其协作方法

Publications (1)

Publication Number Publication Date
WO2015032208A1 true WO2015032208A1 (zh) 2015-03-12

Family

ID=52627766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075920 WO2015032208A1 (zh) 2013-09-06 2014-04-22 多设备协作系统、第一设备、第二设备及其协作方法

Country Status (2)

Country Link
CN (1) CN104426986B (zh)
WO (1) WO2015032208A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102866777A (zh) * 2012-09-12 2013-01-09 中兴通讯股份有限公司 一种数字媒体内容播放转移的方法及播放设备及系统
CN102984592A (zh) * 2012-12-05 2013-03-20 中兴通讯股份有限公司 一种数字媒体内容播放转移的方法、装置和系统
CN103137128A (zh) * 2011-11-18 2013-06-05 索尼公司 用于设备控制的手势和语音识别

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203383A1 (en) * 2015-01-14 2016-07-14 Lenovo (Singapore) Pte. Ltd. Method apparatus and program product for enabling two or more electronic devices to perform operations based on a common subject
US10929703B2 (en) * 2015-01-14 2021-02-23 Lenovo (Singapore) Pte. Ltd. Method apparatus and program product for enabling two or more electronic devices to perform operations based on a common subject

Also Published As

Publication number Publication date
CN104426986B (zh) 2018-10-19
CN104426986A (zh) 2015-03-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14841918

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14841918

Country of ref document: EP

Kind code of ref document: A1