WO2015070624A1 - Information interaction


Info

Publication number: WO2015070624A1 (PCT application PCT/CN2014/081495)
Authority: WIPO (PCT)
Prior art keywords: piece, input information, image, information, identification information
Other languages: French (fr)
Inventors: Lin Du, Kuifei Yu
Original assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd
Priority date: 2013-11-15 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Beijing Zhigu Rui Tuo Tech Co., Ltd
Publication of WO2015070624A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Abstract

An information interaction method comprises, on a second device side, acquiring an image related to a first device, acquiring identification information in the image, and providing the identification information to a server, the identification information including device input information corresponding to the first device. On a server side, the method comprises acquiring the identification information provided by the second device, acquiring the device input information related to the first device included in the identification information, and providing the device input information to the first device. Further, on a first device side, the method comprises embedding identification information into an image related to the first device, providing the image externally, receiving the device input information from an external device, and executing the corresponding operation. The user can thus perform the desired operations on the devices without remembering the device input information, thereby improving the user experience.

Description

INFORMATION INTERACTION
Related Application
[0001] This application claims priority to Chinese Patent Application No.
201310574246.5, filed on November 15, 2013 and entitled "INFORMATION INTERACTION METHOD AND INFORMATION INTERACTION DEVICE", which is hereby incorporated herein by reference in its entirety.
Technical Field
[0002] The present application relates to the technical field of device interaction, and, in particular, to interaction with information of a device.
Background
[0003] A mobile or wearable device generally sets up a screen lock to save energy and prevent accidental operations, and screen unlocking may be encrypted or unencrypted. To unlock an encrypted screen, however, a user generally needs to remember special passwords, patterns, actions, etc. that are easily forgotten. Although this manner ensures security, it brings inconvenience to the user. Beyond screen unlocking, the same problem also arises on other occasions where information, such as a password, must be entered before further operations.
[0004] By using digital watermarking technology, identification information (e.g., a digital watermark) may be embedded directly into a digital carrier without affecting the use of the original carrier, and the identification information is difficult to detect or modify. Digital watermarking can be applied in many fields, such as copyright protection, anti-counterfeiting, authentication, and information hiding. If digital watermarking is used to safely and privately help a user enter a password and acquire the corresponding authorization, the above problem of authentication failing because the user forgets the credentials can be solved, thereby improving the user experience.

Summary
[0005] The following presents a simplified summary in order to provide a basic understanding of some example embodiments disclosed herein. This summary is not an extensive overview. It is intended to neither identify key or critical elements nor delineate the scope of the example embodiments disclosed. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0006] An example purpose of the present application is to provide an information interaction technology.
[0007] In a first example embodiment, the present application provides a method, comprising:
acquiring, by a system comprising a processor, an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
acquiring the at least one piece of identification information in the image; and
initiating providing a server with the at least one piece of identification information.
[0008] In a second example embodiment, the present application provides a method, comprising:
acquiring, by a server comprising a processor, at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
initiating providing the first device with the at least one piece of device input information.
[0009] In a third example embodiment, the present application provides a method, comprising:
embedding, by a first device comprising a processor, at least one piece of identification information into an image related to the first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
initiating providing the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.
[0010] In a fourth example embodiment, the present application provides a device, comprising:
a processor that executes executable modules to perform operations of the device, the executable modules comprising:
an image acquisition module configured to acquire an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
an identification information acquisition module configured to acquire the at least one piece of identification information in the image; and
an identification information providing module configured to provide a server with the at least one piece of identification information.
[0011] In a fifth example embodiment, the present application provides a wearable device that comprises the information interaction device in the above fourth example embodiment.
[0012] In a sixth example embodiment, the present application provides a server, comprising:
a processor that executes or facilitates execution of executable modules to perform operations of the server, the executable modules comprising:
an identification information acquisition module, configured to acquire at least one piece of identification information from a second device;
an input information acquisition module, configured to acquire at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
an input information providing module, configured to provide the first device with the at least one piece of device input information.
[0013] In a seventh example embodiment, the present application provides a device, comprising:
an identification information embedding module, configured to embed at least one piece of identification information into an image related to the information interaction device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the information interaction device;
an image providing module, configured to provide the image externally;
an information input module, configured to receive the at least one piece of device input information from an external source; and
an execution module, configured to execute corresponding operations according to the at least one piece of device input information received by the information input module.
[0014] In an eighth example embodiment, the present application provides a computer readable memory device comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring an image related to a first device, the image comprising identification information that comprises device input information corresponding to the first device;
acquiring the identification information in the image; and
initiating sending of the identification information to a server device.
[0015] In a ninth example embodiment, the present application provides a first information interaction device, comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being connected via a communication bus, and the processor executing the executable instructions stored in the memory when the first information interaction device is in operation, to cause the first information interaction device to execute operations, comprising:
acquiring an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
acquiring the at least one piece of identification information in the image; and
sending the at least one piece of identification information to a server device.
[0016] In a tenth example embodiment, the present application provides a computer readable memory device comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
providing the first device with the at least one piece of device input information.
[0017] In an eleventh example embodiment, the present application provides a server comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being communicatively connected, and the processor executing the executable instructions stored in the memory when the server is in operation, causing the server to execute operations, comprising:
acquiring at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
initiating providing the first device with the at least one piece of device input information.
[0018] In a twelfth example embodiment, the present application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
embedding at least one piece of identification information into an image related to a first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
providing the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.
[0019] In a thirteenth example embodiment, the present application provides a second information interaction device, comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being communicatively connected, and the processor executing the executable instructions stored in the memory when the second information interaction device is in operation, to cause the second information interaction device to execute operations, comprising:
embedding at least one piece of identification information into an image related to a first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
sending the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.
[0020] In at least one technical solution according to the embodiments of the present application, an image related to a device is acquired, the device input information comprised in the image is obtained, and the device is automatically provided with the device input information, so that the user can execute the desired operations on the device without remembering the device input information, thereby greatly facilitating the user and improving the user experience.

Brief Description of the Drawings
[0021] Fig. 1 shows an example flowchart of steps of an information interaction method in an embodiment of the present application;
[0022] Fig. 2a shows an example schematic diagram of an image corresponding to an information interaction method in an embodiment of the present application;
[0023] Fig. 2b shows an example schematic diagram of an image corresponding to an information interaction method in an embodiment of the present application;
[0024] Fig. 3 shows an example flowchart of steps of another information interaction method in an embodiment of the present application;
[0025] Fig. 4a shows an example schematic diagram of a spot pattern used by an information interaction method in an embodiment of the present application;
[0026] Fig. 4b shows an example schematic diagram of a fundus pattern obtained by an information interaction method in an embodiment of the present application;
[0027] Fig. 5 shows an example structural block diagram of a first information interaction device in an embodiment of the present application;
[0028] Fig. 6a shows an example structural block diagram of another first information interaction device in an embodiment of the present application;
[0029] Fig. 6b shows an example structural block diagram of yet another first information interaction device in an embodiment of the present application;
[0030] Fig. 7a shows an example structural block diagram of a position detection module in a first information interaction device in an embodiment of the present application;
[0031] Fig. 7b shows an example structural block diagram of a position detection module in another first information interaction device in an embodiment of the present application;
[0032] Figs. 7c and 7d show an example schematic diagram of a corresponding optical path when a position detection module carries out position detection in an embodiment of the present application;
[0033] Fig. 8 shows an example schematic diagram of a first information interaction device applied to glasses in an embodiment of the present application;
[0034] Fig. 9 shows an example schematic diagram of another first information interaction device applied to glasses in an embodiment of the present application;
[0035] Fig. 10 shows an example schematic diagram of yet another first information interaction device applied to glasses in an embodiment of the present application;
[0036] Fig. 11 shows an example structural block diagram of another information interaction device in an embodiment of the present application;
[0037] Fig. 12 shows an example schematic diagram of a wearable device in an embodiment of the present application;
[0038] Fig. 13 shows an example flowchart of steps of a server information interaction method in an embodiment of the present application;
[0039] Fig. 14a shows an example flowchart of steps of a server information interaction method in an embodiment of the present application;
[0040] Fig. 14b shows an example flowchart of steps of a server information interaction method in an embodiment of the present application;
[0041] Fig. 15 shows an example schematic structural block diagram of a server in an embodiment of the present application;
[0042] Figs. 16a-16c show example schematic structural block diagrams of several other servers in an embodiment of the present application;
[0043] Fig. 17 shows an example schematic flow diagram of an information interaction method in an embodiment of the present application;
[0044] Fig. 18 shows an example structural block diagram of a second information interaction device in an embodiment of the present application;
[0045] Fig. 19 shows an example structural block diagram of an electronic terminal in an embodiment of the present application;
[0046] Fig. 20 shows an example structural block diagram of another second information interaction device in an embodiment of the present application; and
[0047] Fig. 21 shows an example schematic diagram of an application scenario of an information interaction device in an embodiment of the present application.

Detailed Description
[0048] The method and the device of the present application are described in detail below with reference to the accompanying drawings and the embodiments.
[0049] A user often needs to use various kinds of device input information. Device input information here is information that needs to be entered into a device to accomplish an operation, for example, a password or a specific gesture that a user needs to enter on a lock screen interface of an electronic device, passwords required when a user logs in to accounts of some websites or applications, or various kinds of user authentication information, such as password information, required by some access control devices. A user has to remember all these kinds of device input information; otherwise, considerable inconvenience results. The technical solutions provided in the following embodiments of the present application can help a user acquire such device input information without remembering it and automatically accomplish the corresponding operations.
[0050] In the following description of the embodiments of the present application, a "user environment" is a use environment related to a user. For example, a user logs in via a user login interface of an electronic terminal, such as a mobile phone or a computer, to enter the use environment of the terminal's system, which generally contains a plurality of applications; e.g., after a user enters the use environment of a mobile phone system via the lock screen interface of the mobile phone, the user can start the applications, such as phone, e-mail, message, and camera, corresponding to the functional modules of that system. Alternatively, a "user environment" may also be the use environment of an application after a user logs in via the login interface of that application, and this use environment may comprise a plurality of next-level applications; for example, after the above phone application in the mobile phone system is started, it may comprise next-level applications such as phone call, contacts, and call log.
[0051] As shown in Fig. 1, an embodiment of the present application provides an information interaction method, including:
[0052] S120: an image acquisition step of acquiring an image related to a first device, the image including at least one piece of identification information;
[0053] S140: an identification information acquisition step of acquiring the at least one piece of identification information in the image, the at least one piece of identification information including at least one piece of device input information corresponding to the first device; and
[0054] S160: an identification information providing step of providing a server with the at least one piece of identification information.
[0055] In the embodiment of the present application, an image related to a first device is acquired, and identification information included in the image is obtained and then is provided to a server, so that the server extracts device input information in the identification information and carries out information interaction with the first device, so as to facilitate operations of the user.
[0056] The steps are further described by using the following implementation manners according to an embodiment of the present application:
[0057] S120: an image acquisition step of acquiring an image related to a first device.
[0058] In the embodiment of the present application, the image related to a first device may be, for example, an image shown on the first device, for example, an image shown on a screen of an electronic terminal such as a mobile phone or a computer. In one implementation manner, the image is a login interface of a user environment shown on the first device. At this time, the device input information is login information of the user environment.
[0059] As shown in Fig. 2a and Fig. 2b, in a further implementation manner, the image is a lock screen interface 110 shown on the device. At this time, the device input information is unlocking information of the lock screen interface.
[0060] In other implementation manners of the embodiments of the present application, the image may also be, for example, an image shown on another device, or a still image printed on an object such as paper or a wall; in such cases, the image is still related to the above first device. For example, the image is shown on a picture posted near a door, the first device is the electronic access control device of the door, and the digital watermark of the image includes device input information of the electronic access control device (such as the password for opening the door).
[0061] By using digital watermarking technology, some identification information (i.e., a digital watermark) may be directly embedded into a digital carrier without affecting the use of an original carrier, and it is difficult to detect or modify the identification information; therefore, in the embodiment of the present application, the identification information may be a digital watermark. Certainly, a person skilled in the art may know that the identification information may also be other information embedded into the image or existing independently.
[0062] In an embodiment of the present application, the image is acquired in many ways, for example:
[0063] 1) Acquire the image by photographing:
[0064] in the embodiment of the present application, an object seen by the user may be photographed by an intelligent glasses device; for example, when the user looks at the image, the intelligent glasses device takes a shot of it.
[0065] 2) Acquire the image by receiving:
[0066] in a possible implementation manner of the embodiment of the present application, the image may also be acquired by other devices, or the image may be acquired by the interaction with a device that displays the image.
[0067] S140: An identification information acquisition step of acquiring the identification information in the image, the identification information including at least one piece of device input information corresponding to the first device.
[0068] In the step S140, the identification information in the image is extracted by performing corresponding processing on the image. In a case in which, for example, the identification information is a digital watermark, in a possible implementation manner, the digital watermark may be obtained by extracting the two lowest bits of the RGBA (a colour space of Red, Green, Blue, and Alpha) value of each pixel of the image and then combining them (i.e., the least significant bit (LSB) method).
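As a concrete illustration of the LSB extraction just described, the following minimal Python sketch (using the Pillow and NumPy libraries) collects the two lowest bits of every RGBA channel and reassembles them into bytes; the packing order of the 2-bit chunks and the fixed payload length are illustrative assumptions, as the application does not fix an encoding convention:

    # Minimal sketch of the LSB extraction described in [0068]: take the two
    # lowest bits of each RGBA channel of every pixel and pack them back into
    # bytes. The chunk ordering and payload length are assumptions.
    from PIL import Image
    import numpy as np

    def extract_lsb_watermark(image_path, payload_len):
        """Recover payload_len bytes embedded in the two lowest RGBA bits."""
        channels = np.asarray(Image.open(image_path).convert("RGBA")).reshape(-1)
        two_bit_chunks = (channels & 0b11).astype(np.uint8)
        chunks = two_bit_chunks[: payload_len * 4].reshape(-1, 4)
        payload = (chunks[:, 0] << 6) | (chunks[:, 1] << 4) \
                  | (chunks[:, 2] << 2) | chunks[:, 3]
        return payload.astype(np.uint8).tobytes()

Embedding is symmetric: the embedder overwrites the same two bits of each channel, which changes any channel value by at most 3 out of 255 and is therefore visually negligible.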
[0069] S160: An identification information providing step of providing a server with the at least one piece of identification information.
[0070] In the embodiment of the present application, the at least one piece of identification information is provided to a server, directly or indirectly, in a data transmission manner. Either the local device sends the information to the server, or the server actively retrieves it from the local device.
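For instance, under the "local device pushes to the server" variant, the exchange could look like the following hedged Python sketch; the endpoint URL and the JSON field name are hypothetical and not part of the application:

    # Hypothetical sketch of providing the extracted identification
    # information to a server over HTTP; the URL and field names are
    # invented for illustration only.
    import base64
    import requests

    def provide_identification_info(watermark_bytes,
                                    server_url="https://example.com/api/identification"):
        payload = {"identification_info":
                   base64.b64encode(watermark_bytes).decode("ascii")}
        resp = requests.post(server_url, json=payload, timeout=5)
        resp.raise_for_status()
        return resp.json()  # e.g., an acknowledgement from the server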
[0071] By the method in the embodiment of the present application, identification information included in a first device image can be acquired naturally and conveniently and provided to a server, and then, through the interaction between the server and the first device, the user can conveniently enable the function corresponding to the first device without manual operation.
[0072] Besides the step of providing the device with the device input information to execute corresponding operations, in order to enable the user to privately see the device input information, as shown in Fig. 3, in some embodiments, the method further includes the following steps:
[0073] S170: an input information acquisition step of acquiring the device input information provided by the server; and
[0074] S180: a projection step of projecting the device input information onto a user's fundus.
[0075] The device input information is generally acquired from the server in a data transmission manner in the step S170, and then the information is perceived by the user in the step S180.
[0076] The user herein may be a user who is looking at the image and desires to acquire the device input information.
[0077] Hence, on one hand, the user may know the corresponding information; on the other hand, in situations where some devices cannot receive the device input information (e.g., there are communication problems), the user may enter content into the device manually according to the acquired device input information. Certainly, when the device input information is projected onto the user's fundus, the device input information needs to be converted into corresponding display content.
[0078] In the embodiment of the present application, in order to enable the user to obtain the device input information on private occasions, the device input information may be projected onto the user's fundus.
[0079] In a possible implementation manner, the projection may be implemented by directly projecting the device input information onto the user's fundus by using a projection module.
[0080] In this implementation manner, the device input information is directly projected onto the user's fundus without an intermediate display, such that only the user can obtain the device input information while others cannot see it, thereby ensuring the information security of the user.
[0081] In another possible implementation manner, the projection may also be implemented by displaying the device input information at a position which only the user can see (e.g., a display surface of intelligent glasses), and projecting the device input information onto the user's fundus via the display surface.
[0082] When the device input information is displayed at a near-to-eye position by a device such as intelligent glasses, other users can hardly see it, and thus this implementation manner also effectively ensures the information security of the user.
[0083] Here, since the device input information directly reaches the user's fundus without an intermediate display, the first manner has higher privacy.
[0084] This implementation manner is further described as follows. The projection step includes:
an information projection step of projecting the device input information; and
a parameter adjustment step of adjusting at least one projection imaging parameter of an optical path between a projection position and eyes of the user, until the image of the device input information is formed clearly on the user's fundi.
[0085] In a possible implementation manner of an embodiment of the present application, the parameter adjustment step includes:
adjusting at least one imaging parameter of at least one optical device in the optical path between the projection position and the eyes of the user, and/or the position of the optical device in the optical path.
[0086] Here, the imaging parameter includes the focal length, the optical axis direction, etc. of the optical device. Through this adjustment, the device input information can be appropriately projected onto the user's fundus, for example, by adjusting the focal length of the optical device so that a clear image of the device input information is formed on the user's fundus. Alternatively, in the following implementation manner, when a stereo display is required, besides directly generating a left image and a right image with parallax when the device input information is generated, the same device input information can be projected to the two eyes separately with a certain offset to implement the stereo display effect, and in this case the effect may be achieved by adjusting the optical axis parameter of the optical device.
[0087] Since the gazing direction of the eyes may change when the user views the device input information, the device input information needs to be properly projected onto the user's fundus for different gazing directions of the user's eyes; therefore, in a possible implementation manner of the embodiment of the present application, the projection step S180 further includes:
transferring the device input information to the user's fundus correspondingly to the pupil positions for different optical axis directions of the eyes.
[0088] In a possible implementation manner of the embodiment of the present application, the function of the above step may need to be implemented by using a curved optical device such as a curved light splitter; however, the to-be-displayed content generally becomes distorted after being transferred by the curved optical device, and thus in a possible implementation manner of the embodiment of the present application, the projection step S180 further includes:
performing an anti-distortion processing, corresponding to the pupil positions for different optical axis directions of the eyes, on the device input information, so that the fundi receive the device input information as it is intended to be displayed.
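A loose sketch of such anti-distortion pre-processing is given below, assuming for illustration that the curved optic distorts image radii as r' = r(1 + k*r^2); the radial model and the coefficient k are assumptions, since the actual distortion of a curved light splitter is device-specific:

    # Pre-warp the content with the inverse of an assumed radial distortion,
    # so that passing through the curved optic cancels the warp. The model
    # r' = r * (1 + k * r^2) and the value of k are illustrative assumptions.
    import cv2
    import numpy as np

    def prewarp(content, k=-0.15):
        h, w = content.shape[:2]
        ys, xs = np.indices((h, w), dtype=np.float32)
        x = (xs - w / 2) / (w / 2)          # normalized coordinates,
        y = (ys - h / 2) / (h / 2)          # centred on the optical axis
        r2 = x * x + y * y
        # Sample the source where the optic will displace each pixel to, so
        # the optic's distortion maps the pre-warped image back to the original.
        map_x = (x * (1 + k * r2)) * (w / 2) + w / 2
        map_y = (y * (1 + k * r2)) * (h / 2) + h / 2
        return cv2.remap(content, map_x, map_y, interpolation=cv2.INTER_LINEAR)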
[0089] For example, the projected device input information is pre-processed to have an anti-distortion effect opposite to the distortion, and this anti-distortion effect counteracts the distortion effect of the curved optical device after passing through it, such that the user's fundus receives the device input information with the intended display effect.
[0090] In a possible implementation manner, the device input information projected into the eyes of the user is not required to align with the image; for example, when the user is required to enter a piece of password information in a certain order, such as "1234", in an input box displayed in the image, the user may view it simply by projecting this information onto the user's fundi. However, in some cases, for example, when the device input information is information generated by a specific action at a specific position, e.g., information generated by drawing a specific track at a specific position on a screen showing the image, the device input information needs to be displayed in alignment with the image. Therefore, in a possible implementation manner of the embodiment of the present application, in the projection step S180, the device input information is aligned with the image seen by the user and then projected onto the user's fundi.
[0091] In order to implement the above alignment function, in a possible implementation manner, the method further includes:
a position detection step of detecting the position of a gaze point of the user relative to the user's position.
[0092] Then, the projection step S180 further includes: aligning, on the user's fundus, the projected device input information with the image seen by the user according to the position of the gaze point of the user relative to the user's position.
[0093] Here, since the user is looking at the image, e.g., a lock screen interface of the user's mobile phone, the position corresponding to the user's gaze point is the position of the image.
[0094] In this implementation manner, there are many manners of detecting the position of the user's gaze point, including, for example, one or more manners as follows:
[0095] i) detecting the optical axis direction of one eye by using a pupil direction detector, and obtaining the depth of the scene the eye is gazing at by using a depth sensor (e.g., an infrared distance meter), so as to obtain the position of the gaze point of the eye's sight; this is an existing technology, and therefore no detail is given in this implementation manner;
[0096] ii) detecting the optical axis directions of the two eyes separately, obtaining the gazing directions of the two eyes from those optical axis directions, and obtaining the position of the gaze point of the eyes' sight from the intersection point of the two gazing directions (a minimal numeric sketch of this triangulation is given after this list); this is also an existing technology, and therefore no detail is given herein; and
[0097] iii) according to the optical parameters of the optical path between the image acquisition position and the eyes and the optical parameters of eyes when the clearest image is formed on the imaging surface of the eyes, obtaining the position of gaze point of sight of eyes. The embodiment of the present application will give a detailed description of this method below, and therefore no detail is given herein.
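The sketch for manner ii) follows, assuming each eye's optical axis is available as a ray (pupil-centre origin plus direction) in one common coordinate frame; because two measured 3-D rays rarely intersect exactly, the gaze point is approximated by the midpoint of their closest points:

    # Triangulate the gaze point from the two eyes' optical axes.
    # o_*: ray origins (pupil centres), d_*: ray directions; NumPy arrays.
    import numpy as np

    def gaze_point(o_left, d_left, o_right, d_right):
        d1 = d_left / np.linalg.norm(d_left)
        d2 = d_right / np.linalg.norm(d_right)
        w0 = o_left - o_right
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b               # ~0 when the rays are near-parallel
        if abs(denom) < 1e-9:
            raise ValueError("gaze rays are parallel: gazing at infinity")
        s = (b * e - c * d) / denom         # closest-point parameter, left ray
        t = (a * e - b * d) / denom         # closest-point parameter, right ray
        return (o_left + s * d1 + o_right + t * d2) / 2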
[0098] Certainly, a person skilled in the art may know that besides the above gaze point detection manners, other manners for detecting the gaze point of the eyes of the user may also be used in the methods in the embodiments of the present application.
[0099] Here, the step of detecting the current position of gaze point by the third method includes:
a fundus image acquisition step of acquiring an image on the user's fundus;
an adjustable imaging step of adjusting at least one imaging parameter of the optical path between the fundus image acquisition position and the user's eyes until a clearest image is acquired; and
an image processing step of analyzing the acquired image of the fundi to obtain the imaging parameters, corresponding to the clearest image, of the optical path between the fundus image acquisition position and the eyes and obtain at least one optical parameter of the eyes, and calculating the position of the current gaze point of the user relative to the user's position.
[00100] By analyzing and processing the image on the fundus of the eye, the optical parameter of the eye in force when the clearest image is acquired is obtained, so that the current gaze point position of the sight can be calculated, thereby providing a foundation for further detecting the observer's observation behaviour based on this precise gaze position.
[00101] Here, the image displayed on the "fundus" is an image displayed on the retina, and may be an image of the fundus itself or an image of another object that is projected onto the fundus, e.g., a spot pattern described below.
[00102] In the adjustable imaging step, the focal length of the optical device in the optical path between the eye and the acquisition position and/or its position in the optical path is adjusted, so that the clearest image on the fundus is obtained when the optical device is at a certain position or in a certain state. The adjustment may be continuous and in real time.
[00103] In a possible implementation manner of the method in the embodiment of the present application, this optical device may be a lens with adjustable focal length, for accomplishing the adjustment of its focal length by adjusting the refractive index and/or shape of the optical device, which specifically includes: 1) adjusting the focal length by adjusting the curvature of at least one face of the lens with adjustable focal length, for example, adding or reducing a liquid medium in a cavity formed by a double-layer transparent layer to adjust the curvature of the lens with adjustable focal length; and 2) adjusting the focal length by changing the refractive index of the lens with adjustable focal length, for example, when the lens with adjustable focal length is filled with a specific liquid crystal medium, adjusting the arrangement mode of the liquid crystal medium by adjusting the voltage of an electrode corresponding to the liquid crystal medium so as to change the refractive index of the lens with adjustable focal length.
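The effect of both adjustments can be seen from the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2); the following small sketch, with illustrative numbers, shows how changing a surface curvature or the refractive index changes the focal length:

    # Thin-lens lensmaker's equation: changing curvature (r1, r2) or
    # refractive index n changes the focal length f. Values are illustrative.
    def focal_length(n, r1, r2):
        return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

    print(focal_length(1.5, 0.10, -0.10))   # ~0.100 m for a symmetric lens
    print(focal_length(1.4, 0.10, -0.10))   # ~0.125 m after lowering the index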
[00104] In another possible implementation manner of the method in the embodiment of the present application, the optical device may be a lens assembly that is used to accomplish the adjustment of the focal length of the lens assembly by adjusting the relative position between the lenses in the lens assembly. Alternatively, one or more lenses in the lens assembly are the above lens with adjustable focal length.
[00105] Besides the above two manners of changing optical path parameters of the system through the properties of the optical device, the optical path parameters of the system may also be changed by adjusting the position of the optical device in the optical path.
[00106] In addition, in a method in the embodiment of the present application, the image processing step further includes: analyzing the image that is acquired in the step of acquiring an image on the fundus, to find the clearest image; and
calculating the optical parameters of the eye according to the clearest image and the imaging parameter that is known when the clearest image is obtained.
[00107] The adjustment in the adjustable imaging step enables the acquisition of the clearest image; however, that clearest image needs to be found by the image processing step, and the optical parameters of the eye can then be calculated according to the clearest image and the known optical path parameters.
[00108] In the method in the embodiment of the present application, the image processing step further includes:
projecting a spot onto the fundus, wherein the projected spot may have no specific pattern and be used only for illuminating the fundus, or may include a feature-rich pattern, since rich pattern features facilitate detection and improve the detection accuracy. A schematic diagram of a spot pattern P is shown in Fig. 4a; the pattern may be formed by a spot pattern generator such as ground glass. Fig. 4b shows an image acquired on the fundus when the spot pattern P is projected.
[00109] In order not to affect the normal viewing of the eyes, the spot may be an invisible infrared spot. In this case, in order to reduce interference from other spectra, light outside the spot's invisible band can be filtered out.
[00110] Correspondingly, the method implemented by the present application may also include the following step:
controlling the brightness of the projected spot according to the result of the analysis in the above step. The analysis result includes, for example, features of the acquired image, such as the contrast of image features, textural features, etc.
[00111] It should be noted that a special case of controlling the brightness of the projected spot is starting or stopping the projection; for example, the projection may be stopped periodically when the observer keeps gazing at one point, or may be stopped when the observer's fundus is bright enough, in which case the fundus information can be used to detect the distance between the eye and the current focus point of the eye's sight.
[00112] Furthermore, the brightness of the projected spot may also be controlled according to the ambient light.
[00113] In the method in the embodiment of the present application, the image processing step may also include:
performing calibration on the fundus image to obtain at least one reference image corresponding to the image displayed on the fundus. In particular, the acquired image can be compared with the reference image to find the clearest image. Here, the clearest image may be the acquired image with the minimum difference from the reference image. In the method of this implementation manner, the difference between the acquired image and the reference image can be calculated by an existing image processing algorithm, e.g., the classic phase-difference auto-focusing algorithm.
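A minimal sketch of this selection step follows, assuming the frames captured during the focal sweep and the imaging parameter in force for each frame are available; the mean-squared difference used here is one simple stand-in for the comparison with the reference image:

    # Pick the clearest frame as the one differing least from the calibrated
    # reference image, and return the imaging parameter recorded with it.
    import numpy as np

    def clearest_frame(frames, params, reference):
        ref = reference.astype(np.float64)
        diffs = [np.mean((f.astype(np.float64) - ref) ** 2) for f in frames]
        best = int(np.argmin(diffs))
        return frames[best], params[best]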
[00114] The optical parameter of the eye may include the eye's optical axis direction, obtained from a feature of the eye at the time the clearest image is acquired. Here, the eye's feature may be acquired from the clearest image itself or in another manner. The gazing direction of the user's sight may be obtained from the eye's optical axis direction. In particular, the eye's optical axis direction may be obtained from a feature of the fundus at the time the clearest image is acquired, and determining the optical axis direction through a fundus feature offers higher precision.
[00115] When the spot pattern is projected onto the fundus, the size of the spot pattern may be larger or smaller than the visible region of the fundus, wherein:
[00116] when the area of the spot pattern is smaller than or equal to the visible region of the fundus, the eye's optical axis direction may be determined through a classic feature-point matching algorithm (e.g., the scale-invariant feature transform (SIFT) algorithm) by detecting the position of the spot pattern in the image relative to the fundus position, as illustrated by the sketch after this passage; and
[00117] when the area of the spot pattern is larger than or equal to the visible region of the fundus, the eye's optical axis direction, and hence the observer's gazing direction, may be determined from the position of the detected spot pattern in the image relative to the original spot pattern (obtained by image calibration).
[00118] In another possible implementation manner of the method in the embodiment of the present application, the eye's optical axis direction may also be obtained from a feature of the eye pupil at the time the clearest image is acquired. Here, the pupil feature may be acquired from the clearest image itself or in another manner. Obtaining the eye's optical axis direction through the feature of the eye pupil is an existing technology, and therefore no detail is given herein.
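Returning to the spot-pattern matching of paragraph [00116], the following hedged sketch uses OpenCV's SIFT implementation to locate the projected pattern within the captured fundus image; grayscale uint8 inputs are assumed, and the averaged keypoint displacement is only a simple proxy for the pattern's offset:

    # Locate the spot pattern in the fundus image with SIFT plus Lowe's
    # ratio test; the mean keypoint displacement approximates the pattern's
    # offset, from which the optical axis direction would be inferred.
    import cv2
    import numpy as np

    def match_spot_pattern(fundus_img, spot_pattern):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(spot_pattern, None)
        kp2, des2 = sift.detectAndCompute(fundus_img, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        offsets = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                   for m in good]
        return np.mean(offsets, axis=0) if offsets else None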
[00119] In addition, the method in the embodiment of the present application may also include a step of calibrating the eye's optical axis direction to facilitate the determination of the eye's optical axis direction.
[00120] In the method in the embodiment of the present application, the known imaging parameters include a fixed imaging parameter and a real-time imaging parameter, where the real-time imaging parameter is the parameter information of the optical device at the time the clearest image is acquired and can be recorded in real time at that moment.
[00121] After the current optical parameter of the eye is obtained, the distance from the eye to the eye's gaze point may be calculated (the particular process will be described in detail with reference to the device part), so as to obtain the position of the eye's gaze point.
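Although the particular process is described with the device part, the core relation is the classic Gaussian imaging equation 1/d_o + 1/d_i = 1/f; a small sketch, with an assumed nominal retina-to-lens distance, shows how the gaze distance would follow from the eye's recovered equivalent focal length:

    # Gaze distance from the eye's equivalent focal length via the Gaussian
    # imaging equation 1/d_o + 1/d_i = 1/f; d_image = 17 mm is a nominal,
    # assumed retina-to-lens distance.
    def gaze_distance(f_eye, d_image=0.017):
        inv_do = 1.0 / f_eye - 1.0 / d_image
        if inv_do <= 0:
            return float("inf")             # relaxed eye: gazing at infinity
        return 1.0 / inv_do

    print(gaze_distance(0.0165))            # ~0.56 m for a slightly accommodated eye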
[00122] In order to give the device input information viewed by the user a stereo display effect and make it more realistic, in a possible implementation manner of the embodiment of the present application, in the projection step S180, the device input information is projected stereoscopically onto the user's fundus.
[00123] As described above, in a possible implementation manner, the stereo projection may be implemented by adjusting projection positions of the same projection information, such that the two eyes of the user may see the information having parallax to form a stereo display effect.
[00124] In another possible implementation manner, the device input information includes stereo information corresponding to the two eyes of the user respectively, and in the projection step, the corresponding device input information is projected onto each of the user's two eyes. That is, the device input information includes left-eye information corresponding to the user's left eye and right-eye information corresponding to the user's right eye, and during the projection, the left-eye information is projected onto the user's left eye and the right-eye information onto the user's right eye, such that the user sees the device input information with an appropriate stereo display effect, bringing a better user experience. Furthermore, when the device input information includes three-dimensional information, the user may see the three-dimensional information through the above stereo projection. For example, when the user needs to make a specific gesture at a specific position in three-dimensional space to correctly input the device input information, the above method enables the user to see the stereo device input information, acquire the specific position and the specific gesture, and then make the prompted gesture at that position, while others cannot acquire the spatial information even if they see the user make the gesture, thereby improving the privacy of the device input information.
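A simplified sketch of the first stereo variant discussed above follows; it shifts one rendered prompt horizontally in opposite directions for the two eyes under a pinhole model, with the inter-pupillary distance and focal length as illustrative assumptions:

    # Generate a left/right pair for stereo projection by giving the same
    # prompt equal and opposite horizontal disparity; ipd_m and focal_px are
    # illustrative. np.roll wraps at the border, acceptable for a sketch.
    import numpy as np

    def stereo_pair(prompt, depth_m, ipd_m=0.063, focal_px=1200):
        disparity_px = int(round(focal_px * ipd_m / depth_m / 2))
        left = np.roll(prompt, disparity_px, axis=1)
        right = np.roll(prompt, -disparity_px, axis=1)
        return left, right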
[00125] It should be noted that, in the embodiment of the present application, the sequence numbers of the above processes do not imply an execution sequence, and the execution sequence of the processes should be determined according to the functions and internal logic, which is not intended to limit the implementation processes of the embodiments of the present application in any way.
[00126] As shown in Fig. 5, an embodiment of the present application provides a first information interaction device 500, including:
an image acquisition module 510, for acquiring an image related to a first device, the image including at least one piece of identification information;
an identification information acquisition module 520, for acquiring the at least one piece of identification information in the image, the at least one piece of identification information including at least one piece of device input information corresponding to the first device; and
an identification information providing module 530, for providing a server with the at least one piece of identification information.
[00127] In the embodiment of the present application, an image related to a first device is acquired, and identification information included in the image is obtained and then provided to a server, so that the server extracts device input information in the identification information and carries out information interaction with the first device, so as to facilitate operations of the user.
[00128] The modules of the first information interaction device 500 are further described through the following implementation manners according to the embodiment of the present application:
[00129] In the implementation manners of the embodiment of the present application, the image acquisition module 510 may take various forms, for example:
[00130] As shown in Fig. 6a, the image acquisition module 510 includes an image acquisition submodule 511 for acquiring the image by photographing.
[00131] Here, the image acquisition submodule 511 may be, for example, a camera of intelligent glasses for shooting the images that the user is seeing.
[00132] As shown in Fig. 6b, in another implementation manner of the embodiment of the present application, the image acquisition module 510 includes:
a first communication submodule 512, for acquiring the image by receiving it.
[00133] In this implementation manner, the image may be acquired by other devices and sent to the device of the embodiment of the present application; or the image is acquired by interaction with the first device displaying the image (i.e., the first device sends the displayed image information to the device of the embodiment of the present application).
[00134] In the embodiment of the present application, besides the step of providing the device with the device input information to execute corresponding operations, in order to enable the user to privately view the device input information, as shown in Fig. 6b, the device 500 further includes:
an input information acquisition module 550, for acquiring the device input information provided by the server; and
a projection module 560, for projecting the device input information onto the user's fundus.
[00135] Here, in one implementation manner, the input information acquisition module 550 may be a communication module for receiving the device input information returned from the server. The functions of this communication module and of the first communication submodule 512 may be implemented by the same device.
[00136] Hence, on one hand, the user may know the corresponding information; on the other hand, in situations where some devices cannot receive the device input information (e.g., there are communication problems), the user may enter content into the device manually according to the acquired device input information.
[00137] As shown in Fig. 6b, in this implementation manner, the projection module 560 includes:
an information projection submodule 561, for projecting the device input information; and
a parameter adjustment submodule 562, for adjusting at least one projection imaging parameter of an optical path between a projection position and eyes of the user, until the image of the device input information is formed clearly on the user's fundi.
[00138] In one implementation manner, the parameter adjustment submodule 562 includes:
[00139] at least one adjustable lens device, with an adjustable focal length and/or an adjustable position on the optical path between the projection position and the user's eyes.
[00140] As shown in Fig. 6b, in an implementation manner, the projection module 560 includes:
a curved light splitter 563, for transferring the device input information to the user's fundi respectively corresponding to the pupil positions in different optical axis directions of eyes.
[00141] In one implementation manner, the projection module 560 includes:
an anti-distortion processing submodule 564, for performing anti-distortion processing corresponding to the pupil positions in different optical axis directions of eyes for the device input information, to enable the fundi to receive the to-be-displayed device input information.
[00142] In one implementation manner, the projection module 560 includes:
an alignment adjustment submodule 565, for aligning, on the user's fundus, the projected device input information with an image seen by the user.
[00143] In one implementation manner, the device 500 further includes:
a position detection module 540, for detecting the position of a gaze point of the user relative to the user's position;
[00144] the alignment adjustment submodule 565 then aligns, on the user's fundus, the projected device input information with the image seen by the user according to the position of the gaze point of the user relative to the user's position.
[00145] For the functions of the above submodules of the projection module, reference is made to the description of the corresponding steps in the above method embodiment, and examples are given in the embodiments shown in Figs. 7a-7d, Fig. 8 and Fig. 9 below.
[00146] In the embodiments of the present application, the position detection module 540 may have multiple implementation manners, such as devices corresponding to the manners i)-iii) in the method embodiments. The position detection module corresponding to manner iii) is further described below through the implementation manners corresponding to Figs. 7a-7d, Fig. 8 and Fig. 9:
[00147] As shown in Fig. 7a, in a possible implementation manner of an embodiment of the present application, the position detection module 700 includes:
a fundus image acquisition submodule 710, for acquiring an image on the user's fundus;
an adjustable imaging submodule 720, for adjusting at least one imaging parameter of the optical path between the fundus image acquisition position and the user's eyes until a clearest image is acquired; and
an image processing submodule 730, for analyzing the acquired image of the fundi, to obtain the imaging parameters, corresponding to the clearest image, of the optical path between the fundus image acquisition position and the eyes and obtain at least one optical parameter of the eyes, and calculating the position of the current gaze point of the user relative to the user's position.
[00148] The position detection module 700 can perform analytical processing on the fundus image to obtain the eye's optical parameter in force when the fundus image acquisition submodule acquires the clearest image, so as to calculate the current gaze point position of the eyes.
[00149] Here, the image displayed on the "fundus" is an image displayed on the retina, and may be an image of the fundus itself or an image of another object projected onto the fundus. The eyes here may be human eyes or the eyes of other animals.
[00150] As shown in Fig. 7b, in a possible implementation manner of the embodiment of the present application, the fundus image acquisition submodule 710 may be a miniature camera; in another possible implementation manner of the embodiment of the present application, the fundus image acquisition submodule 710 may also directly use a photosensitive imaging device, such as a CCD or a CMOS.
[00151] In a possible implementation manner of the embodiment of the present application, the adjustable imaging submodule 720 includes: an adjustable lens device 721 that is located in an optical path between the eyes and the fundus image acquisition submodule 710, with adjustable focal length and/or adjustable position in the optical path. With the adjustable lens device 721, the system equivalent focal length between the eyes and the fundus image acquisition submodule 710 is adjustable, and by the adjustment of the adjustable lens device 721, the fundus image acquisition submodule 710 may acquire the clearest image on the fundus at a certain position or state of the adjustable lens device 721. In this implementation manner, the adjustable lens device 721 may perform continuous and real-time adjustment during the detection process.
[00152] In a possible implementation manner of the embodiment of the present application, the adjustable lens device 721 may be a lens with adjustable focal length for accomplishing the adjustment of its focal length by adjusting its refractive index and/or shape, which specifically includes: 1) adjusting the focal length by adjusting the curvature of at least one face of the lens with adjustable focal length, for example, adding or reducing a liquid medium in a cavity formed by a double-layer transparent layer to adjust the curvature of the lens with adjustable focal length; and 2) adjusting the focal length by changing the refractive index of the lens with adjustable focal length, for example, when the lens with adjustable focal length is filled with a specific liquid crystal medium, adjusting the arrangement mode of the liquid crystal medium by adjusting the voltage of an electrode corresponding to the liquid crystal medium so as to change the refractive index of the lens with adjustable focal length.
[00153] In another possible implementation manner of the embodiment of the present application, the adjustable lens device 721 includes a lens assembly consisting of a plurality of lenses, for accomplishing adjustment of the focal length of the lens assembly by adjusting the relative positions of the lenses in the lens assembly. The lens assembly may also include lenses with adjustable imaging parameters such as their own focal lengths.
[00154] Besides the above two methods for changing optical path parameters of the system by adjusting the properties of the adjustable lens device 721, the optical path parameters of the system may also be changed by adjusting the position of the adjustable lens device 721 in the optical path.
[00155] In a possible implementation manner of the embodiment of the present application, in order not to affect the viewing experience of the observer on the observed object, and in order to portably apply the system to a wearable device, the adjustable imaging submodule 720 may also include a light splitting unit 722 for forming light transfer paths between the eyes and the observed object and between the eyes and the fundus image acquisition submodule 710. Thus, the optical paths can be overlapped to reduce the system size while affecting other visual experiences of the user as little as possible.
[00156] In this implementation manner, the light splitting unit may include a first light splitting unit that is located between the eyes and the observed object and used to transmit the light from the observed object to the eyes and transfer the light from the eyes to the fundus image acquisition submodule.
[00157] The first light splitting unit may be a light splitter, a light- splitting optical waveguide (including optical fibre), or another suitable light splitting device.
[00158] In a possible implementation manner of the embodiment of the present application, the image processing submodule 730 of the system includes an optical path calibration unit for calibrating the optical path of the system, for example, performing alignment calibration of the optical axis of the optical path, to ensure the measurement accuracy.
[00159] In a possible implementation manner of the embodiment of the present application, the image processing submodule 730 includes:
an image analysis unit 731, for analyzing the image acquired by the fundus image acquisition submodule to find the clearest image; and
a parameter calculation unit 732, for calculating the optical parameter of the eye according to the clearest image and the known imaging parameter of the system when the clearest image is obtained.
[00160] In this implementation manner, with the adjustable imaging submodule 720, the fundus image acquisition submodule 710 may obtain the clearest image, but it is required to find the clearest image by using the image analysis unit 731, and at this time, the optical parameter of the eye may be calculated according to the clearest image and the known optical path parameter of the system. The optical parameter of the eye here may include the eye's optical axis direction.
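By way of a non-limiting illustration, the search for the clearest image may be sketched as the following loop, in which set_focal_length() and capture_fundus_image() are hypothetical stand-ins for the adjustable lens device 721 and the fundus image acquisition submodule 710, and the variance-of-Laplacian sharpness score is merely one possible focus measure (a difference-from-reference metric, as described for the image calibration unit below, could be used instead):

```python
import cv2
import numpy as np

def sharpness(image_gray):
    # Variance of the Laplacian: higher values indicate a sharper image.
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()

def find_clearest_image(focal_lengths, set_focal_length, capture_fundus_image):
    best = None
    for f in focal_lengths:
        set_focal_length(f)            # drive the adjustable lens device 721 (hypothetical)
        img = capture_fundus_image()   # grayscale fundus frame (hypothetical)
        score = sharpness(img)
        if best is None or score > best[0]:
            best = (score, f, img)
    # Return the clearest image together with the real-time imaging parameter
    # (the focal length at which it was acquired), as needed by formula (3) below.
    return best[2], best[1]
```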
[00161] In a possible implementation manner of the embodiment of the present application, the system may also include a projection submodule 740 for projecting a spot onto the fundus. In a possible implementation manner, the function of the projection submodule may be implemented by using a mini projector.
[00162] The projected spot here may have no specific pattern and is only used for lightening the fundus.
[00163] In an implementation manner of the embodiment of the present application, the projected spot may include a feature-rich pattern. The rich features of the pattern may facilitate detection and improve the detection accuracy. A schematic diagram of a spot pattern P is shown in Fig. 4a. The pattern may be formed by a spot pattern generator such as ground glass. Fig. 4b shows an image on the fundus that is shot when there is a projected spot pattern P.
[00164] In order not to affect the normal viewing of the eyes, the spot may be an invisible infrared spot.
[00165] At this time, in order to reduce the interference of other spectra:
[00166] the projection submodule may be provided on the emergent surface with an invisible light transmission filter.
The fundus image acquisition submodule is provided on the incident surface with an invisible light transmission filter.

[00168] In a possible implementation manner of the embodiment of the present application, the image processing submodule 730 may also include:
[00169] a projection control unit 734, for controlling the brightness of the projected spot of the projection submodule 740 according to the result obtained by the image analysis unit 731.
[00170] For example, the projection control unit 734 may adaptively adjust the brightness according to the features of the image obtained by the fundus image acquisition submodule 710. The features of the image here include contrast features, texture features, etc.
[00171] Here, a special case of controlling the brightness of the projected spot of the projection submodule 740 is starting or stopping the projection submodule 740; for example, the projection submodule 740 may be stopped periodically when the user continuously gazes at one point; and when the user's fundus is bright enough, the light emitting source may be turned off, and the distance between the current gaze point of the eyes and the eyes may be detected by using only the fundus information.
[00172] Furthermore, the projection control unit 734 may also control the brightness of the projected spot of the projection submodule 740 according to the ambient light.
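A minimal sketch of such adaptive brightness control follows; set_spot_brightness() is a hypothetical hardware interface, and the thresholds are illustrative rather than values specified by this disclosure:

```python
import numpy as np

def control_spot_brightness(fundus_image, ambient_lux, set_spot_brightness,
                            min_contrast=15.0, bright_fundus_mean=120.0):
    contrast = float(np.std(fundus_image))      # crude contrast feature
    mean_level = float(np.mean(fundus_image))
    if mean_level > bright_fundus_mean:
        set_spot_brightness(0.0)                # fundus bright enough: turn the source off
    elif contrast < min_contrast:
        # Low contrast: brighten the spot, more so under low ambient light.
        boost = 1.0 + (min_contrast - contrast) / min_contrast
        ambient_factor = 1.0 if ambient_lux < 50 else 0.7
        set_spot_brightness(min(1.0, 0.5 * boost * ambient_factor))
    else:
        set_spot_brightness(0.5)                # nominal level
```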
[00173] In a possible implementation manner of the embodiment of the present application, the image processing submodule 730 may also include an image calibration unit 733 that is used to calibrate the image on the fundus and obtain at least one reference image corresponding to the image displayed on the fundus.
[00174] The image analysis unit 731 performs contrast calculation between the image acquired by the fundus image acquisition submodule 710 and the reference image to obtain the clearest image. Here, the clearest image may be the acquired image with the minimum difference from the reference image. In this implementation manner, the difference between the acquired image and the reference image may be calculated by an existing image processing algorithm, e.g., the classic phase-difference auto-focusing algorithm.
[00175] In a possible implementation manner of the embodiment of the present application, the parameter calculation unit 732 may include: an optical axis direction determination subunit 7321, for obtaining the eye's optical axis direction according to the obtained eye's feature when the clearest image is acquired.
[00176] Here, the eye's feature may be acquired from the clearest image, or may be acquired in another manner. The gazing direction of the user's eye sight may be obtained according to the eye's optical axis direction.
[00177] In a possible implementation manner of the embodiment of the present application, the optical axis direction determination subunit 7321 may include: a first determination subunit for obtaining the eye's optical axis direction according to the feature of the fundus when the clearest image is obtained. Compared with obtaining the eye's optical axis direction according to the features of the pupil and the eyeball surface, determining it according to the feature of the fundus provides higher precision.
[00178] When the spot pattern is projected onto the fundus, the size of the spot pattern may be larger or smaller than the visible region of the fundus, wherein:
[00179] when the area of the spot pattern is smaller than or equal to the visible region of the fundus, the eye's optical axis direction may be determined by detecting the position of the spot pattern on the image relative to the fundus position, using a classic feature point matching algorithm (e.g., the scale invariant feature transform (SIFT) algorithm); and
[00180] when the area of the spot pattern is larger than or equal to the visible region of the fundus, the user's gazing direction may be determined by determining the eye's optical axis direction according to the position of the spot pattern on the image relative to the original spot pattern (obtained by an image calibration unit).
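As a non-authoritative sketch of the first case, the offset of the spot pattern on the fundus image may be estimated with SIFT feature matching (here via OpenCV); converting the pixel offset into an optical axis direction requires a calibration that is outside the scope of this sketch:

```python
import cv2
import numpy as np

def spot_pattern_offset(reference_pattern, fundus_image):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference_pattern, None)
    kp2, des2 = sift.detectAndCompute(fundus_image, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    if not matches:
        return None
    # Median displacement of matched keypoints approximates the pattern shift.
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.median(np.asarray(shifts), axis=0)
```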
[00181] In another possible implementation manner of the embodiment of the present application, the optical axis direction determination subunit 7321 includes: a second determination subunit for obtaining the eye's optical axis direction according to the feature of the eye pupil when the clearest image is obtained. Here, the feature of the pupil may be acquired from the clearest image, or may be acquired in another manner. The method for obtaining the eye's optical axis direction through the feature of the eye pupil is an existing technology, and therefore no detail is given herein.
[00182] In a possible implementation manner of the embodiment of the present application, the image processing submodule 730 may also include an eye's optical axis direction calibration unit 735 for calibrating the eye's optical axis direction to determine the above eye's optical axis direction more accurately.
[00183] In this implementation manner, the known imaging parameter of the system includes a fixed imaging parameter and a real-time imaging parameter, wherein the real-time imaging parameter is the parameter information of the adjustable lens device when the clearest image is acquired, and this parameter information can be obtained by recording in real time when the clearest image is acquired.
[00184] The distance between the eye's gaze point and the eyes may be calculated as follows:
[00185] Fig. 7c shows a schematic diagram of imaging of the eyes, and the following formula (1) may be obtained from Fig. 7c with reference to the lens imaging formula in the classic optical theory:
1/d_o + 1/d_e = 1/f_e        (1)

[00186] wherein d_o and d_e are the distances between the current observed object 7010 and the eye equivalent lens 7030 and between the real image 7020 on the retina and the eye equivalent lens 7030, respectively, f_e is the equivalent focal length of the eye equivalent lens 7030, and X is the eye's gazing direction (which may be obtained from the eye's optical axis direction).
[00187] Fig. 7d shows a schematic diagram of the distance between the eye's gaze point and the eyes, the gaze point being obtained according to the known optical parameter of the system and the optical parameter of the eye, wherein a spot 7040 in Fig. 7d is converged into a virtual image by the adjustable lens device 721 (not shown in Fig. 7d). Assuming that the distance between the virtual image and the lens is x (not shown in Fig. 7d), the following equation set may be obtained with reference to the formula (1):
1/d_p − 1/x = 1/f_p
1/(d_i + x) + 1/d_e = 1/f_e        (2)
[00188] wherein d_p is an optical equivalent distance between the spot 7040 and the adjustable lens device 721, d_i is an optical equivalent distance between the adjustable lens device 721 and the eye equivalent lens 7030, and f_p is a focal length value of the adjustable lens device 721.
[00189] The distance d_o between the current observed object 7010 (the eye's gaze point) and the eye equivalent lens 7030 may be obtained from (1) and (2), as shown in formula (3):

d_o = d_i + d_p·f_p / (f_p − d_p)        (3)
[00190] The position of the eye's gaze point can be easily obtained according to the distance between the observed object 7010 and the eyes calculated above and the eye's optical axis direction obtained earlier, providing a basis for subsequent further eye-related interaction.
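The calculation may be illustrated with the following sketch of formulas (1)-(3); the numeric values are illustrative only and are not calibration data from this disclosure:

```python
import numpy as np

def gaze_point_distance(d_p, d_i, f_p):
    # d_o = d_i + d_p * f_p / (f_p - d_p)   -- formula (3)
    return d_i + (d_p * f_p) / (f_p - d_p)

def gaze_point_position(d_o, optical_axis_unit_vector):
    # Gaze point relative to the eye, along the gazing direction.
    return d_o * np.asarray(optical_axis_unit_vector)

# Illustrative values in metres: d_p = 0.03, d_i = 0.02, f_p = 0.04
d_o = gaze_point_distance(d_p=0.03, d_i=0.02, f_p=0.04)   # -> 0.14 m
print(gaze_point_position(d_o, [0.0, 0.0, 1.0]))
```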
[00191] Fig. 8 shows an embodiment in which a position detection module 800 is applied to glasses G in a possible implementation manner of the embodiment of the present application, which includes the content disclosed in the implementation manner shown in Fig. 7b. Specifically, it can be seen from Fig. 8 that in this implementation manner, the module 800 is integrated on the right side of the glasses G (not so limited) and includes:
[00192] a miniature camera 810, which has the same function as the fundus image acquisition submodule disclosed in the implementation manner in Fig. 7b, and is arranged on the right outer side of the glasses G;
[00193] a first light splitter 820, which has the same function as the first light splitting unit disclosed in the implementation manner in Fig. 7b, and is arranged at a certain inclination on the intersection point of the gazing direction of eye A and the emergent direction of the camera 810, to transmit the light of the observed object into the eye A and reflect light from the eye to the camera 810; and
[00194] a lens 830 with adjustable focal length, which has the same function as the lens with adjustable focal length disclosed in the implementation manner in
Fig. 7b, and is located between the first light splitter 820 and the camera 810 to adjust the focal length value in real time, such that the camera 810 can take a shot of the clearest image on the fundus at a certain focal length value.
[00195] In this implementation manner, the image processing submodule is not shown in Fig. 8, and has the same function as the image processing submodule shown in Fig. 7b.
[00196] Since the brightness of the fundus is generally not enough, it is better to lighten the fundus, and in this implementation manner, the fundus is lightened by a light emitting source 840. In order not to affect the user experience, the light emitting source 840 here may be an invisible light emitting source, such as a near-infrared light emitting source which has little influence on the eye A and to which the camera 810 is sensitive.
[00197] In this implementation manner, the light emitting source 840 is located on the outer side of the right glasses frame, and thus a second light splitter 850 and the first light splitter 820 are both needed to accomplish the transfer of the light emitted from the light emitting source 840 to the fundus. In this implementation manner, the second light splitter 850 is located in front of the incident surface of the camera 810, and thus is also required to transmit the light from the fundus to the camera 810.
[00198] It can be seen therefrom that in this implementation manner, in order to improve the user experience and improve the acquisition definition of the camera 810, the first light splitter 820 may have features of high infrared reflectivity and high visible light transmittance. For example, the above feature may be implemented by arranging an infrared reflective film on the side of the first light splitter 820 facing the eye A.
[00199] It can be seen from Fig. 8 that in this implementation manner, the position detection module 800 is located on the side of the lens of the glasses G far away from the eye A, and thus the lens may be regarded as a part of the eye A when the optical parameter of the eye is calculated, and at this time, it is not required to know the optical feature of the lens.
[00200] In other implementation manners of the embodiment of the present application, the position detection module 800 may be located on the side of the lens of the glasses G close to the eye A, and at this time, it is required to obtain the optical feature parameter of the lens in advance and consider the influencing factor of the lens when the gaze point distance is calculated.
[00201] In this embodiment, the light emitted from the light emitting source
840 is reflected by the second light splitter 850, projected by the lens 830 with adjustable focal length, reflected by the first light splitter 820, then transmitted into the user's eyes via the lens of the glasses G, and finally reaches the retina of the fundus; and the camera 810 takes a shot of the image on the fundus through the pupil of the eye A, via an optical path constituted by the first light splitter 820, the lens 830 with adjustable focal length and the second light splitter 850.
[00202] In a possible implementation manner, the other parts of the device according to the embodiment of the present application are also implemented on the glasses G. Moreover, the position detection module and the projection module may both include a device having a projection function (such as the above information projection submodule of the projection module, and the projection submodule of the position detection module) and an imaging device having adjustable imaging parameters (such as the above parameter adjustment submodule of the projection module, and the adjustable imaging submodule of the position detection module), etc.; therefore, in a possible implementation manner of the embodiment of the present application, the functions of the position detection module and the projection module may be implemented by the same device.
[00203] As shown in Fig. 8, in a possible implementation manner of the embodiment of the present application, besides the function of illuminating the position detection module, the light emitting source 840 may also be used as a light source of the information projection submodule of the projection module to assist the projection of the device input information. In a possible implementation manner, the light emitting source 840 may project invisible light for illuminating the position detection module, and project visible light for assisting the projection of the device input information at the same time. In another possible implementation manner, the light emitting source 840 may perform time-sharing switch of the projection of the invisible light and the visible light. In yet another possible implementation manner, the position detection module may use the device input information to accomplish the function of illuminating the fundus.
[00204] In a possible implementation manner of the embodiment of the present application, besides serving as the parameter adjustment submodule of the projection module, the first light splitter 820, the second light splitter 850 and the lens 830 with adjustable focal length may be used as the adjustable imaging submodule of the position detection module. Here, in a possible implementation manner, the focal length of the lens 830 with adjustable focal length may be adjusted region by region, with different regions respectively corresponding to the position detection module and the projection module, in which case the focal lengths of the regions may differ. Alternatively, the focal length of the lens 830 with adjustable focal length is adjusted as a whole; however, the miniature camera 810 of the position detection module is then provided with another optical device on the front end of its photosensitive unit (such as a CCD) to implement auxiliary adjustment of the imaging parameter of the position detection module. Furthermore, in another possible implementation manner, it can be configured so that the optical distance from the light emitting surface (i.e., the outgoing position of the device input information) of the light emitting source 840 to the eyes is equal to the optical distance from the eyes to the miniature camera 810, and thus when the lens 830 with adjustable focal length is adjusted to enable the miniature camera 810 to receive the clearest image on the fundus, the device input information projected by the light emitting source 840 is exactly clearly imaged on the fundus.
[00205] It can be seen from the above that the functions of the position detection module and the projection module of the first information interaction device according to the embodiment of the present application may be implemented by one device, such that the entire system has simple structure and small size and is more portable.
[00206] Fig. 9 shows a schematic structural diagram of a position detection module 900 in another implementation manner according to an embodiment of the present application. It can be seen from Fig. 9 that this implementation manner is similar to the implementation manner shown in Fig. 8, and includes a miniature camera 910, a second light splitter 920, and a lens 930 with adjustable focal length, except that in this implementation manner, the projection submodule 940 projects a spot pattern, and a curved light splitter 950 substitutes for the first light splitter in the implementation manner in Fig. 8.
[00207] Here, the curved light splitter 950 is used to transfer the image displayed on the fundus to the fundus image acquisition submodule, corresponding to the positions of the pupil when the eye's optical axis directions are different. Thus the camera may photograph images that are mixed and overlapped from various angles of the eyeball; however, only the part of the fundus seen through the pupil forms a clear image on the camera, while other parts are out of focus and cannot form clear images, and thus do not greatly interfere with the imaging of the part on the fundus, whose features can still be detected. Therefore, compared with the implementation manner shown in Fig. 8, the image on the fundus can be well obtained in this implementation manner even when the eye looks in different directions, and thus the position detection module in this implementation manner has a wider application range and higher detection accuracy.
[00208] In a possible implementation manner of the embodiment of the present application, other parts of the first information interaction device according to the embodiment of the present application may also be implemented on the glasses G. In this implementation manner, the position detection module and the projection module may also be multiplexed. Similar to the embodiment shown in Fig. 8, the projection submodule 940 may project the spot pattern and the device input information simultaneously or by time-sharing switching; alternatively, the position detection module may detect the projected device input information itself as the spot pattern. Similar to the embodiment shown in Fig. 8, in a possible implementation manner of the embodiment of the present application, besides serving as the parameter adjustment submodule of the projection module, the second light splitter 920, the curved light splitter 950 and the lens 930 with adjustable focal length may be used as the adjustable imaging submodule of the position detection module.
[00209] At this time, the curved light splitter 950 may also be used for transfer in the optical path between the projection module and the fundus, corresponding to the positions of the pupil when the eye's optical axis directions are different. Since the device input information projected by the projection submodule 940 may be distorted after passing through the curved light splitter 950, in this implementation manner, the projection module includes:
an anti-distortion processing module (not shown in Fig. 9) for performing anti-distortion processing for the device input information corresponding to the curved light splitter to enable the fundus to receive the desired to-be-displayed device input information.
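A minimal sketch of such anti-distortion processing is given below; it assumes that a calibrated per-pixel inverse mapping (map_x, map_y) for the curved light splitter is already available, which in practice would come from a separate calibration procedure:

```python
import cv2

def predistort(device_input_image, map_x, map_y):
    # map_x/map_y are float32 arrays giving, for each output pixel, the source
    # coordinate in the undistorted image (the convention used by cv2.remap).
    return cv2.remap(device_input_image, map_x, map_y, cv2.INTER_LINEAR)
```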
[00210] In one implementation manner, the projection module is used for projecting the device input information stereoscopically to the user's fundus.
[00211] The device input information includes stereo information respectively corresponding to the two eyes of the user, and the projection module projects the corresponding device input information to the two eyes of the user.
[00212] As shown in Fig. 10, when it is required to perform stereo display, the first information interaction device 1000 needs to be arranged with two projection modules respectively corresponding to the two eyes of the user, including:
a first projection module corresponding to the user's left eye; and a second projection module corresponding to the user's right eye.
[00213] Here, the structure of the second projection module is similar to the structure combining the function of the position detection module disclosed in the embodiment of Fig. 8, i.e., a structure that may implement the function of the position detection module and the function of the projection module at the same time, and includes a miniature camera 1021, a second light splitter 1022, a second lens 1023 with adjustable focal length, and a first light splitter 1024 (the image processing submodule of the position detection module is not shown in Fig. 10), which have the same functions as in the embodiment shown in Fig. 8, except that the projection submodule in this implementation manner is a second projection submodule 1025 for projecting the device input information corresponding to the right eye. At the same time, the second projection submodule 1025 can be used for detecting the eye's gaze point position of the user, and for clearly projecting the device input information corresponding to the right eye onto the fundus of the right eye.
[00214] The structure of the first projection module is similar to the structure of the second projection module 1020; however, the first projection module is not provided with a miniature camera, and does not combine the function of the position detection module. As shown in Fig. 10, the first projection module includes:
a first projection submodule 1011, for projecting the device input information corresponding to the left eye to the fundus of left eye;
a first lens 1013 with adjustable focal length, for adjusting the imaging parameter between the first projection submodule 1011 and the fundus, to clearly display the corresponding device input information on the fundus of the left eye and enable the user to see the device input information displayed on the image;
a third light splitter 1012, for performing optical path transfer between the first projection submodule 1011 and the first lens 1013 with adjustable focal length; and
a fourth light splitter 1014, for performing optical path transfer between the first lens 1013 with adjustable focal length and the fundus of the left eye.
[00215] Through this embodiment, the device input information seen by the user has an appropriate stereo display effect, bringing a better user experience. Furthermore, when the device input information presented to the user includes three-dimensional information, the user can see the three-dimensional information through the above stereo projection. For example, when a user needs to make a specific gesture at a specific position in a three-dimensional space to correctly input the device input information, with the above method in the embodiment of the present application, the user can see the stereo device input information, acquire the specific position and the specific gesture, and further make the gesture prompted by the device input information at the specific position, while others cannot acquire the space information even if they see the user make the gesture, thereby improving the privacy of the device input information.
[00216] Fig. 11 is a schematic structural diagram of yet another first information interaction device 1100 according to an embodiment of the present application, and the specific embodiment of the present application does not limit the specific implementation of the first information interaction device 1100. As shown in Fig. 11, this first information interaction device 1100 may include:
[00217] a processor 1110, a communications interface 1120, a memory 1130 and a communication bus 1140.
[00218] The processor 1110, the communications interface 1120 and the memory 1130 communicate with each other via the communication bus 1140.
[00219] The communications interface 1120 is used for network element communication with, for example, a client.
[00220] The processor 1110 is used for executing a program 1132, and in particular, executing the related steps in the above method of the embodiment.
[00221] Specifically, the program 1132 may include program code, the program code including computer operation instructions.
[00222] The processor 1110 may be a central processing unit CPU, or may be an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits according to the embodiment of the present application.
[00223] The memory 1130 is used for storing the program 1132. The memory
1130 may include a high speed RAM memory, and may also include a non- volatile memory, for example, at least one magnetic disk memory. The program 1132 may be specifically used to enable the first information interaction device 1100 to execute the steps as follows:
acquiring an image related to a first device, the image including at least one piece of identification information;
acquiring the at least one piece of identification information in the image, the at least one piece of identification information including at least one piece of device input information corresponding to the first device; and
providing a server with the at least one piece of identification information.
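By way of illustration, the three steps above may be sketched as follows; capture_image() and extract_identification() are hypothetical stand-ins for the image acquisition and watermark extraction described earlier, and the server URL is illustrative:

```python
import requests

SERVER_URL = "https://example.com/identification"   # illustrative endpoint

def second_device_flow(capture_image, extract_identification):
    image = capture_image()                          # image related to the first device
    identification = extract_identification(image)   # e.g., decoded digital watermark
    if identification is None:
        return
    # Provide the server with the identification information.
    requests.post(SERVER_URL, json={"identification": identification}, timeout=5)
```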
[00224] For the specific implementation of the steps in the program 1132, reference may be made to the corresponding steps and the corresponding descriptions in the units in the above embodiments, so the details are not described herein again. A person skilled in the art may clearly understand that, for the purpose of convenient and brief description, for the specific operation process of the devices and modules described above, reference may be made to the description of the corresponding process in the above method embodiments, so the details are not described herein again.
[00225] Furthermore, also provided is a computer-readable medium that includes computer readable instructions for performing the following operations when being executed: executing operations of steps S120, S140 and S160 of the method in the above embodiment.
[00226] As shown in Fig. 12, an embodiment of the present application also provides a wearable device 1200 that includes the first information interaction device 1210 disclosed in the above embodiment.
[00227] The wearable device may be a pair of glasses. In some implementation manners, the glasses may be, for example, the structures in Fig. 8-Fig. 10.
[00228] As shown in Fig. 13, an embodiment of the present application also provides a server information interaction method, including the following steps:
[00229] S1320: an identification information acquisition step of acquiring at least one piece of identification information provided by a second device;
[00230] S1340: an input information acquisition step of acquiring at least one piece of device input information related to a first device and included in the at least one piece of identification information; and
[00231] S1360: an input information providing step of providing the first device with the at least one piece of device input information.
[00232] In the embodiment of the present application, the device input information is obtained from at least one piece of identification information acquired from a second device, and the device input information is then transferred to the corresponding first device, such that the first device may execute the function corresponding to the device input information; therefore, the user can execute corresponding operations on the first device as desired without remembering the device input information, thereby greatly facilitating the user and improving the user experience.
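A minimal sketch of the control flow of steps S1320-S1360 follows; the watermark decoding and the channel to the first device are abstracted behind hypothetical callables:

```python
def server_flow(receive_identification, extract_device_input, send_to_first_device):
    identification = receive_identification()            # S1320: from the second device
    device_input = extract_device_input(identification)  # S1340: e.g., decode the watermark
    if device_input is not None:
        send_to_first_device(device_input)               # S1360: provide to the first device
```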
[00233] In the embodiment of the present application, the step S1340 of acquiring the device input information from the identification information may be implemented in many ways:
[00234] 1) extracting the device input information included in the identification information:
[00235] for example, in the case that the identification information is a digital watermark, in this implementation manner, for example, the digital watermark may be analyzed through an individual private key and a disclosed or private watermark extraction method so as to extract the device input information; and
[00236] 2) sending the identification information to external, and receiving the device input information included in the identification information from the external:
[00237] in this implementation manner, the identification information may be sent to the external, for example, to another server and/or a third party authority, such that the other server and/or the third party authority extracts the device input information from the identification information.
[00238] In this implementation manner, when the image corresponding to the identification information is a login interface of a user environment displayed on the device, the device input information is the login information of the user environment. For example, when the image is a login page of a website displayed on an electronic device, the device input information is information such as a user name and a password corresponding to the website.
[00239] In this implementation manner, when the image is the device lock screen interface shown in Fig. 2a or Fig. 2b, the device input information is the unlocking information of the lock screen interface. Fig. 2a shows a lock screen interface of a touch screen mobile phone device. In the prior art, the user needs to draw a corresponding track on the screen of the mobile phone to unlock it, so that the user enters the user environment of the mobile phone system to perform further operations. In this implementation manner, however, the lock screen interface is embedded with the digital watermark, via which the corresponding unlocking information can be acquired and then sent to the device, such that the device may be automatically unlocked after receiving this unlocking information.
[00240] As shown in Fig. 14a, in some possible implementation manners, besides the steps shown in Fig. 13, for the purpose of improving the use security of the device, the method further includes:
[00241] S1330: an authorization determination step of determining whether a user is an authorized user, wherein the input information acquisition step is performed only when the user is an authorized user.
[00242] Specifically, the acquisition of the device input information is performed only when the user is the authorized user. In this implementation manner, after the server acquires the identification information, for the purpose of ensuring the user information security, it is required to perform authentication for the user, such that only the authorized user can perform corresponding operations for the first device, and an unauthorized user cannot perform corresponding operations with the method according to the embodiment of the present application. This authorization determination step S1330 may be performed on the remote end (e.g., another server), that is, the corresponding user information is sent to the remote end and the determination result is returned to the local server after the remote end makes a determination; alternatively, the step may be performed directly on the local server.
[00243] In other possible implementation manners, the authorization determination step may also be arranged before the input information providing step. As shown in Fig. 14b, in this implementation manner, the method further includes:
[00244] S1350: an authorization determination step of determining whether the user is an authorized user, wherein the input information providing step is performed only when the user is an authorized user.
[00245] In the embodiment of the present application, after the device input information corresponding to the first device is extracted via the identification information, it is required to perform authentication for the user, such that the device input information is provided to the first device only when the user is an authorized user. Similar to the embodiment shown in Fig. 14a, in this embodiment, the authorization determination step may be performed on the remote end or locally.
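The difference between the two placements of the authorization determination step may be sketched as follows, with is_authorized(), extract() and provide() as hypothetical stand-ins for the corresponding steps:

```python
def flow_fig_14a(user, identification, is_authorized, extract, provide):
    if not is_authorized(user):     # S1330: gate the input information acquisition itself
        return
    provide(extract(identification))

def flow_fig_14b(user, identification, is_authorized, extract, provide):
    device_input = extract(identification)
    if is_authorized(user):         # S1350: gate only the input information providing step
        provide(device_input)
```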
[00246] In a possible implementation manner of the embodiment of the present application, to enable the user to acquire the device input information for manual use if desired, the method further includes:
providing the second device with the device input information.
[00247] It should be noted that, in the embodiments of the present application, the sequence numbers of the above processes do not imply an execution sequence, and the execution sequence of the processes should be determined according to the functions and internal logic, which is not intended to limit the implementation processes of the embodiments of the present application in any way.
[00248] As shown in Fig. 15, an embodiment of the present application also provides a server 1500, including:
an identification information acquisition module 1510, for acquiring identification information provided by a second device;
an input information acquisition module 1520, for acquiring at least one piece of device input information related to a first device in the identification information; and
an input information providing module 1530, for providing the first device with the device input information.
[00249] In the embodiment of the present application, the device input information is obtained from at least one piece of identification information acquired from a second device, and the device input information is then transferred to the corresponding first device, such that the first device may execute the function corresponding to the device input information; therefore, the user can execute corresponding operations on the first device as desired without remembering the device input information, thereby greatly facilitating the user and improving the user experience.
[00250] In the embodiment of the present application, the input information acquisition module 1520 may be in various forms, for example:
[00251] As shown in Fig. 16a, the input information acquisition module 1520 includes:
an information extraction submodule 1521, for extracting the device input information included in the identification information.
[00252] In this implementation manner, the identification information may be, for example, a digital watermark, and the information extraction submodule 1521 may, for example, analyze the digital watermark through an individual private key and a disclosed or private watermark extraction method, so as to extract the device input information.
[00253] As shown in Fig. 16b, in another implementation manner of the embodiment of the present application, the input information acquisition module 1520 includes:
a second communication submodule 1522 used for:
sending the identification information to external; and
receiving the device input information included in the identification information from the external.
[00254] In this implementation manner, the identification information may be sent to the external, for example, to another server and/or a third party authority, such that the other server and/or the third party authority extracts the device input information from the identification information and then sends it back to the second communication submodule 1522 of the embodiment of the present application.
[00255] In the embodiment of the present application, the identification information acquisition module 1510 and/or the input information providing module 1530 may also be a communication module, and their functions and the function of the second communication submodule 1522 may also be implemented via the same device.
[00256] As shown in Fig. 16a, in a possible implementation manner of the embodiment of the present application, the server 1500 further includes:
[00257] an authorization determination module 1550, for determining whether the user is an authorized user, and for starting the input information acquisition module 1520 to perform the corresponding operations only when the user is the authorized user; specifically, the input information acquisition module 1520 performs the acquisition of the device input information only when the user is the authorized user.
[00258] In this implementation manner, after the server acquires the identification information, for the purpose of ensuring the user information security, it is required to perform authentication for the user, such that only the authorized user can perform corresponding operations for the first device, and the unauthorized user cannot perform corresponding operations with the method according to the embodiment of the present application.
[00259] As shown in Fig. 16b, in other possible implementation manners of the embodiments of the present application, the authorization determination module 1550 determines whether the user is an authorized user, and starts the input information providing module to perform the corresponding operations when the user is the authorized user.
[00260] In the embodiment of the present application, after the device input information corresponding to the first device is extracted via the identification information, it is required to perform authentication for the user, such that the device input information is provided for the first device only when the user is an authorized user.
[00261] In the embodiment of the present application, the authorization determination may be performed on the remote end (e.g., another server) by using the authorization determination module 1550, that is, the corresponding user information is sent to the remote end and the determination result is returned to the local after the remote end makes a determination. At this time, as shown in Fig. 16b, the authorization determination module 1550 includes:
a third communication submodule 1551 used for:
sending the corresponding information of the user; and receiving the result indicating whether the user is an authorized user.
[00262] The third communication submodule 1551 may be an individual communications interface device, or its function may be implemented by a same device that also implements the function of the second communication submodule 1522.
[00263] Certainly, a person skilled in the art may know that, when there is no need to perform authentication for a user, the device may not include the authorization determination module.

[00264] In a possible implementation manner of the embodiment of the present application, to enable the user to acquire the device input information for manual use if desired, the input information providing module 1530 may also be used for:
providing the second device with the device input information.
[00265] Fig. 16c shows a schematic structural diagram of yet another server
1600 according to an embodiment of the present application, and the specific embodiment of the present application does not limit the specific implementation of the server 1600. As shown in Fig. 16c, the server 1600 may include:
a processor 1610, a communications interface 1620, a memory 1630 and a communication bus 1640.
[00266] The processor 1610, the communications interface 1620 and the memory 1630 communicate with each other via the communication bus 1640.
[00267] The communications interface 1620 is used for network element communication with, for example, a client.
[00268] The processor 1610 is used for executing a program 1632, and in particular, executing the related steps in the method embodiment shown in Fig. 13.
[00269] Specifically, the program 1632 may include program code, the program code including computer operation instructions.
[00270] The processor 1610 may be a central processing unit CPU, or may be an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits according to the embodiment of the present application.
[00271] The memory 1630 is used for storing the program 1632. The memory
1630 may include a high speed RAM memory, and may also include a non- volatile memory, for example, at least one magnetic disk memory. The program 1632 may be specifically used to enable the server 1600 to execute the steps as follows:
acquiring at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and included in the at least one piece of identification information; and providing the first device with the at least one piece of device input information.

[00272] For the specific implementation of the steps in the program 1632, reference may be made to the corresponding steps and the corresponding descriptions in the units in the above embodiments, so the details are not described herein again. A person skilled in the art may clearly understand that, for the purpose of convenient and brief description, for the specific operation process of the devices and modules described above, reference may be made to the description of the corresponding process in the above method embodiments, so the details are not described herein again.
[00273] Furthermore, also provided is a computer-readable medium that includes computer readable instructions for performing the following operations when being executed: executing operations of steps S1320, S1340 and S1360 of the method in the above embodiment.
[00274] As shown in Fig. 17, an embodiment of the present application provides an information interaction method, including:
[00275] S1710: an identification information embedding step of embedding at least one piece of identification information into an image related to a first device, the at least one piece of identification information including at least one piece of device input information corresponding to the first device;
[00276] S1720: an image providing step of providing the image to external;
[00277] S1730: an information input step of receiving the at least one piece of device input information from the external; and
[00278] S1740: an execution step of executing operations corresponding to the at least one piece of device input information.
[00279] In the method according to the embodiment of the present application, the identification information is embedded into the image and then provided to the external, such that an external device may obtain the corresponding device input information according to the image and return the device input information to the device performing the method. In the information input step of the method, the device input information is received from the external, and the corresponding operation is then performed automatically without manual user operations, so as to facilitate the use by the user.
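By way of illustration, steps S1710-S1740 may be sketched as follows; embed_watermark(), display(), receive_input() and unlock() are hypothetical stand-ins for the corresponding device functions:

```python
def first_device_flow(lock_screen_image, expected_input,
                      embed_watermark, display, receive_input, unlock):
    watermarked = embed_watermark(lock_screen_image, expected_input)  # S1710
    display(watermarked)                                              # S1720: provide the image
    received = receive_input()                                        # S1730: from the external
    if received == expected_input:
        unlock()                                                      # S1740: execute the operation
```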
[00280] In the embodiment of the present application, the identification information may be the digital watermark. Digital watermarks may be classified into symmetric watermarks and asymmetric watermarks according to their symmetry. The embedding and detection keys of a conventional symmetric watermark are the same, such that once the detection method and the key are publicized, it is easy to remove the watermark from a digital carrier. By contrast, the asymmetric watermark technology embeds the watermark by means of a private key and extracts and verifies the watermark by means of a public key, such that an attacker can hardly damage or remove, via the public key, the watermark that is embedded by means of the private key. Therefore, in the embodiment of the present application, an asymmetric digital watermark may be used.
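The private-key/public-key asymmetry described above may be illustrated with an ordinary digital signature (here via the Python cryptography package) standing in for the asymmetric watermark; the pixel-level embedding of the signed payload into the image is omitted, and this is not the watermarking scheme of the disclosure itself:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

payload = b"device-input-information"   # illustrative payload
# Embedding side: only the private-key holder can produce the signature.
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# Verification side: anyone holding the public key can verify, but cannot forge;
# raises cryptography.exceptions.InvalidSignature if payload or signature differ.
public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
```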
[00281] In the embodiment of the present application, the device input information that is required to be embedded in the identification information may be pre-set by a user according to individual needs, or may be actively configured for the user by the system.
[00282] In a possible implementation manner of the embodiment of the present application, the step S1720 may include:
displaying the image.
[00283] Certainly, in other implementation manners of the embodiment of the present application, the step S1720 may also include: sending the image to the corresponding device through the interaction between devices in the method according to the embodiment of the present application.
[00284] In a possible implementation manner of the embodiment of the present application, the image may be a login interface of a user environment.
[00285] The operation corresponding to the device input information is the operation of logging in to the user environment according to the device input information.
[00286] For example, if the image is a login interface of a user's electronic bank account, the device input information is the account name and the password of the electronic bank account; the operation of logging in to the user's electronic bank account is then performed after the device input information is received, such that the user may enter the user environment of the electronic bank account and further use the corresponding functions.

[00287] Furthermore, in a possible implementation manner of the embodiment of the present application, the image is a lock screen interface;
and the operation corresponding to the device input information is: unlocking the corresponding screen according to the device input information.
[00288] For example, when the image is a lock screen interface of a mobile phone shown in Fig. 2a, the device input information is the unlocking information corresponding to the lock screen interface, the screen of the mobile phone is unlocked after the device input information is received, and the user may use the corresponding functions in the user environment of the mobile phone system.
[00289] In a possible implementation manner of the embodiment of the present application, the method further includes before executing the step S1740:
an authorization determination step of determining whether a user is an authorized user, wherein the execution step is performed only when the user is the authorized user.
[00290] Specifically, not every piece of device input information received by the device triggers the execution of the corresponding operation; the corresponding operation is executed only when the user is an authorized user. In a special situation, the current setting of the device is that no operation will be executed for any received device input information, and in this case, no user is an authorized user.
[00291] It should be noted that, in the embodiments of the present application, the sequence numbers of the above processes do not imply an execution sequence, and the execution sequence of the processes should be determined according to the functions and internal logic, which is not intended to limit the implementation processes of the embodiments of the present application in any way.
[00292] As shown in Fig. 18, an embodiment of the present application provides a second information interaction device 1800, including:
an identification information embedding module 1810, for embedding at least one piece of identification information into an image related to the second information interaction device 1800, the at least one piece of identification information including at least one piece of device input information corresponding to the second information interaction device 1800; an image providing module 1820, for providing the image to external;
an information input module 1830, for receiving the at least one piece of device input information from the external; and
an execution module 1840, for executing corresponding operations according to the received at least one piece of device input information.
[00293] With the device according to the embodiment of the present application, the identification information is embedded into the image and then provided to the external, such that an external device may obtain the corresponding device input information according to the image and return the device input information to the device according to the embodiment of the present application; the device according to the embodiment of the present application receives the device input information from the external and then automatically performs the corresponding operations by using the execution module 1840 without manual user operations, so as to facilitate the use by the user.
[00294] Corresponding to the description in the method embodiment shown in Fig. 17, the image providing module 1820 according to the embodiment of the present application includes:
a displaying submodule 1821 for displaying the image.
[00295] Corresponding to the method shown in Fig. 17, the image providing module 1820 may be, for example, an interaction interface, and may transfer the image to other devices (e.g., the above first information interaction device) in an interaction manner.
[00296] In a possible implementation manner of the embodiment of the present application, the image may be a login interface of a user environment.
[00297] The execution module 1840 is used for logging in to the user environment according to the device input information.
[00298] In a possible implementation manner of the embodiment of the present application, the image may be a lock screen interface;
[00299] The execution module 1840 is used for unlocking the corresponding screen according to the device input information.
[00300] In a possible implementation manner of the embodiment of the present application, the device 1800 may also include:
an authorization determination module 1850, for determining whether the user is an authorized user, and when the user is the authorized user, triggering execution module to perform the corresponding operation.
[00301] For the implementation of the functions of the above modules, reference may be made to the corresponding description in the method embodiment shown in Fig. 17, so the details are not described in detail herein again.
[00302] As shown in Fig. 19, an embodiment of the present application also provides an electronic terminal 1900, including the above second information interaction device 1910.
[00303] In a possible implementation manner of the embodiment of the present application, the electronic terminal 1900 is an electronic device, such as a mobile phone, a tablet computer, a computer, an electronic access control, or a vehicle-carried electronic device.
[00304] Fig. 20 is a schematic structural diagram of yet another second information interaction device 2000 according to an embodiment of the present application, and the specific embodiment of the present application does not limit the specific implementation of the second information interaction device 2000. As shown in Fig. 20, this second information interaction device 2000 may include:
a processor 2010, a communications interface 2020, a memory 2030 and a communication bus 2040.
[00305] The processor 2010, the communications interface 2020 and the memory 2030 communicate with each other via the communication bus 2040.
[00306] The communications interface 2020 is used for network element communication with, for example, a client.
[00307] The processor 2010 is used for executing a program 2032, and in particular, executing the related steps in the method embodiment shown in Fig. 17.
[00308] Specifically, the program 2032 may include program code, the program code including computer operation instructions.
[00309] The processor 2010 may be a central processing unit CPU, or may be an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits according to the embodiment of the present application.

[00310] The memory 2030 is used for storing the program 2032. The memory
2030 may include a high speed RAM memory, and may also include a non- volatile memory, for example, at least one magnetic disk memory. The program 2032 may be specifically used to enable the second information interaction device 2000 to execute the steps as follows:
embedding at least one piece of identification information into an image related to the first device, the at least one piece of identification information including at least one piece of device input information corresponding to the first device;
providing the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing operations corresponding to the at least one piece of device input information.
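The application leaves the concrete watermarking scheme for the identification information open; as a minimal stand-in only, the following sketch embeds and extracts device input information in the least significant bits (LSBs) of an image's pixel values, and then walks the four steps above in order. All names and the LSB scheme itself are assumptions made for concreteness.

# Toy stand-in for "embedding identification information into an image".
# An LSB scheme is only one possible choice; the application does not fix one.

def embed_watermark(pixels, payload: bytes):
    """Embed payload bits into the LSBs of a list of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract_watermark(pixels, n_bytes: int) -> bytes:
    """Recover n_bytes previously embedded by embed_watermark."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# The four steps of program 2032, in order: embed, provide, receive, execute.
lock_screen = [128] * 1024                       # stand-in lock screen image
marked = embed_watermark(lock_screen, b"1234")   # step 1: embed the unlocking information
displayed = marked                               # step 2: provide (display) the image
received = extract_watermark(displayed, 4)       # steps 3-4 would use the returned information
assert received == b"1234"

In practice the extraction in the last line would be performed by the server rather than locally, as described for the application example below.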
[00311] For the specific implementation of the steps in the program 2032, reference may be made to the corresponding steps and units in the above embodiments, and the details are not described herein again. A person skilled in the art may clearly understand that, for the purpose of a convenient and brief description, for the specific operation processes of the devices and modules described above, reference may be made to the descriptions of the corresponding processes in the above method embodiments, and the details are not described herein again.
[00312] Furthermore, a computer-readable medium is also provided, including computer-readable instructions which, when executed, perform the operations of steps S1710, S1720, S1730 and S1740 of the method in the above embodiment.
[00313] Fig. 21 shows a schematic diagram of an application example of the first and the second information interaction devices according to an embodiment of the present application. This embodiment involves an electronic device, namely the phone device 2110 disclosed in the embodiment shown in Fig. 19; a wearable device, namely the intelligent glasses 2120 disclosed in the embodiment shown in Fig. 12; and the server 2130 disclosed in the embodiment shown in Fig. 15.
[00314] In the embodiment of the present application, the intelligent glasses 2120 include the first information interaction device according to the embodiments of Fig. 5 to Fig. 11. The function of the image acquisition module (specifically, the image acquisition submodule) of the first information interaction device is implemented by the camera 2121 on the intelligent glasses 2120, and the identification information acquisition module (not shown in Fig. 21) and the identification information providing module (not shown in Fig. 21) of the information interaction device may be integrated on the original processing module of the intelligent glasses 2120, or may be arranged on the frame of the intelligent glasses 2120 in another manner (e.g., arranged on a temple of the glasses, or formed as a part of the frame).
[00315] In the embodiment of the present application, the phone device 2110 includes the second information interaction device according to the embodiment shown in Fig. 18. The function of the displaying submodule of the second information interaction device is implemented by the display module of the phone device 2110, and the identification information embedding module, the information input module and the execution module may be integrated in the existing processing module and communication module of the phone device 2110, or may be arranged as individual modules in the phone device 2110. In the embodiment of the present application, the image is the lock screen interface 2111 (e.g., the image shown in Fig. 2a) of the phone device 2110, and the device input information is the corresponding unlocking information.
[00316] In the embodiment of the present application, the identification information is a digital watermark.
[00317] In this embodiment, the digital watermark including the unlocking information may be embedded into the lock screen interface 2111 of the phone device 2110 in advance by using the identification information embedding module. When the user needs to use the phone device 2110, the lock screen interface carrying the digital watermark is displayed by the display module of the phone device 2110 upon a specific operation (e.g., pressing a power button of the phone device 2110). Generally, at this time, the user looks at the display screen of the phone device 2110, such that the camera 2121 of the intelligent glasses 2120 may acquire the image of the displayed lock screen interface 2111, automatically acquire the digital watermark from the image by means of the identification information acquisition module of the first information interaction device, and then send the identification information to the server 2130 via the identification information providing module (e.g., over a wireless communications interface between the devices). The server 2130 extracts the unlocking information from the digital watermark with the corresponding watermark extraction method, and then sends the unlocking information to the phone device 2110 via its input information providing module. After receiving the unlocking information, the phone device 2110 performs the corresponding unlocking operation with the execution module, such that the user may unlock the phone device 2110 without any other action and enter the user environment of the mobile phone system.
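As a hedged, illustrative model of the unlock flow just described, the following sketch wires the three parties together. All class and method names are hypothetical, the wireless transports are elided, and the watermark is modeled as carrying the unlocking information directly (a real implementation would apply a watermark extraction method on the server, as stated above).

# Illustrative three-party model of Fig. 21: phone (second device),
# glasses (first device) and server. Names are not from the application.

class Phone:
    def __init__(self, unlocking_information: bytes):
        self.unlocking_information = unlocking_information
        self.locked = True

    def display_lock_screen(self):
        # The displayed lock screen interface carries the digital watermark.
        return {"watermark": self.unlocking_information}

    def receive_device_input_information(self, info: bytes):
        # Execution module: perform the unlocking operation on a match.
        if info == self.unlocking_information:
            self.locked = False

class Glasses:
    def capture_and_forward(self, displayed_image, server):
        # Camera 2121 photographs the lock screen; the identification
        # information acquisition module pulls out the watermark, and the
        # identification information providing module sends it to the server.
        server.handle_identification_information(displayed_image["watermark"])

class Server:
    def __init__(self, phone: Phone):
        self.phone = phone

    def handle_identification_information(self, watermark: bytes):
        # Watermark extraction would happen here; in this toy model the
        # watermark is the unlocking information itself.
        self.phone.receive_device_input_information(watermark)

phone = Phone(b"1234")
Glasses().capture_and_forward(phone.display_lock_screen(), Server(phone))
assert not phone.locked   # the phone unlocks when the user merely looks at it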
[00318] It can be seen from the above that, by means of the information interaction device and method according to the embodiments of the present application, the user may naturally and conveniently perform the corresponding operations (the mobile phone is automatically unlocked when the user merely looks at its lock screen interface), thereby improving the user experience.
[00319] Persons skilled in the art should appreciate that, with reference to the examples described in the embodiments herein, the units and algorithm steps can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and the design constraints of the technical solution. Persons skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.
[00320] When implemented in the form of a software functional unit and sold or used as a separate product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solutions of the present application that contributes over the prior art, or a part of the technical solutions, may be embodied in the form of a computer software product. The computer software product is stored in a storage medium, and includes instructions for causing a computer apparatus (which may be a personal computer, a server, a network apparatus, or the like) to execute all or part of the steps of the methods of the individual embodiments of the present application. The storage medium includes any medium that may store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
[00321] The above implementation manners are intended only to explain the present application, and are not intended to limit it. Persons skilled in the art may make various changes and modifications without departing from the scope and spirit of the present application, and therefore all equivalent technical solutions also fall within the scope of the present application. The scope of protection of the present application should be defined by the claims.

Claims
1. A method, comprising:
acquiring, by a system comprising a processor, an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
acquiring the at least one piece of identification information in the image; and
initiating providing a server with the at least one piece of identification information.
2. The method of claim 1, wherein the image is a login interface of a user environment displayed by the first device.
3. The method of claim 2, wherein the at least one piece of device input information is at least one piece of login information of the user environment.
4. The method of claim 1, wherein the image is a lock screen interface displayed by the first device.
5. The method of claim 4, wherein the at least one piece of device input information is at least one piece of unlocking information of the lock screen interface.
6. The method of claim 1, wherein the acquiring the image related to the first device comprises:
acquiring the image by photographing.
7. The method of claim 1, wherein the acquiring the image related to the first device comprises:
acquiring the image by receiving the image from an external device.
8. The method of claim 1, wherein the at least one piece of identification information is at least one digital watermark.
9. The method of claim 1, further comprising:
acquiring the at least one piece of device input information from the server; and
initiating projecting the at least one piece of device input information onto a fundus of a user.
10. The method of claim 9, wherein the projecting the at least one piece of device input information onto the fundus comprises:
aligning the at least one piece of device input information with another image seen by the user, and then projecting the at least one piece of device input information onto the fundus.
11. A method, comprising:
acquiring, by a server comprising a processor, at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
initiating providing the first device with the at least one piece of device input information.
12. The method of claim 11, wherein the acquiring the at least one piece of device input information related to the first device and comprised in the at least one piece of identification information comprises:
extracting the at least one piece of device input information comprised in the at least one piece of identification information.
13. The method of claim 11, wherein the acquiring the at least one piece of device input information related to the first device and comprised in the at least one piece of identification information comprises:
sending the at least one piece of identification information to an external interface; and
receiving, from the external interface, the at least one piece of device input information comprised in the at least one piece of identification information.
14. The method of claim 11, further comprising:
determining whether a user is an authorized user,
wherein the acquiring the at least one piece of device input information related to the first device and comprised in the at least one piece of identification information comprises:
in response to the user being determined to be the authorized user, acquiring the at least one piece of device input information related to the first device and comprised in the at least one piece of identification information; or
wherein the initiating the providing of the first device with the at least one piece of device input information comprises:
in response to the user being determined to be the authorized user, providing the first device with the at least one piece of device input information.
15. The method of claim 11, wherein the at least one piece of identification information is at least one digital watermark.
16. The method of claim 11, further comprising:
initiating providing the second device with the at least one piece of device input information.
17. A method, comprising:
embedding, by a second device comprising a processor, at least one piece of identification information into an image related to a first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
initiating providing the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.
18. The method of claim 17, wherein the initiating the providing of the image to the external device comprises:
displaying the image.
19. The method of claim 18, wherein the image is a login interface of a user environment, and
wherein the executing the operation corresponding to the at least one piece of device input information comprises logging in to the user environment according to the at least one piece of device input information.
20. The method of claim 18, wherein the image is a lock screen interface, and
wherein the executing the operation corresponding to the at least one piece of device input information comprises unlocking the corresponding screen according to the at least one piece of device input information.
21. The method of claim 17, wherein the at least one piece of identification information is at least one digital watermark.
22. The method of claim 17, further comprising, prior to the executing the operation corresponding to the at least one piece of device input information,
determining whether a user is an authorized user,
wherein the executing the operation corresponding to the at least one piece of device input information comprises, in response to the user being determined to be the authorized user, executing the operation corresponding to the at least one piece of device input information.
23. A device, comprising:
a processor that executes executable modules to perform operations of the device, the executable modules comprising:
an image acquisition module configured to acquire an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
an identification information acquisition module configured to acquire the at least one piece of identification information in the image; and
an identification information providing module configured to provide a server with the at least one piece of identification information.
24. The device of claim 23, wherein the image acquisition module comprises:
an image acquisition submodule configured to acquire the image by photographing.
25. The device of claim 23, wherein the image acquisition module comprises:
a first communication submodule configured to acquire the image by receiving the image from an external device.
26. The device of claim 23, wherein the executable modules further comprise:
an input information acquisition module configured to acquire the at least one piece of device input information provided by the server; and
a projection module configured to project the at least one piece of device input information onto a fundus of a user.
27. The device of claim 26, wherein the projection module comprises:
an alignment adjustment submodule configured to align the projected at least one piece of device input information with another image seen by the user on the fundus.
28. The device of claim 23, wherein the device is included in a wearable device.
29. The device of claim 28, wherein the wearable device is a pair of glasses.
30. A server, comprising:
a processor that executes or facilitates execution of executable modules to perform operations of the server, the executable modules comprising:
an identification information acquisition module configured to acquire at least one piece of identification information from a second device;
an input information acquisition module configured to acquire at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
an input information providing module configured to provide the first device with the at least one piece of device input information.
31. The server of claim 30, wherein the input information acquisition module comprises:
an information extraction submodule configured to extract the at least one piece of device input information comprised in the at least one piece of identification information.
32. The server of claim 30, wherein the input information acquisition module comprises:
a second communication submodule configured to:
send the at least one piece of identification information to an external device; and
receive, from the external device, the at least one piece of device input information comprised in the at least one piece of identification information.
33. The server of claim 30, wherein the server further comprises:
an authorization determination module configured to determine whether a user is an authorized user, and
wherein the input information acquisition module is further configured to, in response to the user being determined to be the authorized user, acquire at least one piece of device input information related to the first device and comprised in the at least one piece of identification information; or
wherein the input information providing module is further configured to, in response to the user being determined to be the authorized user, provide the first device with the at least one piece of device input information.
34. The server of claim 31, wherein the at least one piece of identification information is at least one digital watermark, and
wherein the information extraction submodule is configured to extract the at least one piece of device input information comprised in the at least one digital watermark.
35. The server of claim 30, wherein the input information providing module is further configured to:
provide the second device with the at least one piece of device input information.
36. A device, comprising:
a processor that executes executable modules to perform operations of the device, the executable modules comprising:
an identification information embedding module configured to embed at least one piece of identification information into an image related to the device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the device;
an image providing module configured to provide the image externally;
an information input module configured to receive the at least one piece of device input information from an external device; and
an execution module configured to execute corresponding operations according to the at least one piece of device input information received by the information input module.
37. The device of claim 36, wherein the image providing module comprises:
a displaying submodule configured to display the image.
38. The device of claim 37, wherein the image is a login interface of a user environment, and
wherein the execution module is configured to log in to the user environment according to the at least one piece of device input information.
39. The device of claim 37, wherein the image is a lock screen interface, and
wherein the execution module is configured to unlock the corresponding screen according to the at least one piece of device input information.
40. The device of claim 36, wherein the executable modules further comprise:
an authorization determination module configured to determine whether a user is an authorized user,
wherein the execution module is further configured to:
in response to the user being determined to be the authorized user, execute the operations corresponding to the at least one piece of device input information received by the information input module.
41. The device of claim 36, wherein the device is an electronic terminal.
42. A computer readable memory device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring an image related to a first device, the image comprising identification information that comprises device input information corresponding to the first device;
acquiring the identification information in the image; and
initiating sending of the identification information to a server device.
43. A first information interaction device, comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being connected via a communication bus, and the processor executing the executable instructions stored in the memory when the first information interaction device is in operation, to cause the first information interaction device to execute operations, comprising:
acquiring an image related to a first device, the image comprising at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
acquiring the at least one piece of identification information in the image; and
sending the at least one piece of identification information to a server device.
44. A computer readable memory device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
providing the first device with the at least one piece of device input information.
45. A server, comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being communicatively connected, and the processor executing the executable instructions stored in the memory when the server is in operation, causing the server to execute operations, comprising:
acquiring at least one piece of identification information from a second device;
acquiring at least one piece of device input information related to a first device and comprised in the at least one piece of identification information; and
initiating providing the first device with the at least one piece of device input information.
46. A computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
embedding at least one piece of identification information into an image related to a first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
providing the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.
47. A second information interaction device, comprising a processor and a memory, the memory storing executable instructions, the processor and the memory being communicatively connected, and the processor executing the executable instructions stored in the memory when the second information interaction device is in operation, to cause the second information interaction device to execute operations, comprising:
embedding at least one piece of identification information into an image related to a first device, the at least one piece of identification information comprising at least one piece of device input information corresponding to the first device;
sending the image to an external device;
receiving the at least one piece of device input information from the external device; and
executing an operation corresponding to the at least one piece of device input information.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310574246.5 2013-11-15
CN201310574246.5A CN103677631A (en) 2013-11-15 2013-11-15 Information interaction method and information interaction device

Publications (1)

Publication Number Publication Date
WO2015070624A1 (en) 2015-05-21

Family

ID=50315346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/081495 WO2015070624A1 (en) 2013-11-15 2014-07-02 Information interaction

Country Status (2)

Country Link
CN (1) CN103677631A (en)
WO (1) WO2015070624A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677631A (en) * 2013-11-15 2014-03-26 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction device
WO2016055317A1 (en) * 2014-10-06 2016-04-14 Koninklijke Philips N.V. Docking system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075868A (en) * 2006-05-19 2007-11-21 华为技术有限公司 Long-distance identity-certifying system, terminal, servo and method
CN102970307A (en) * 2012-12-21 2013-03-13 网秦无限(北京)科技有限公司 Password safety system and password safety method
CN103116717A (en) * 2013-01-25 2013-05-22 东莞宇龙通信科技有限公司 User login method and system
CN103631503A (en) * 2013-11-15 2014-03-12 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction device
CN103677631A (en) * 2013-11-15 2014-03-26 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8117458B2 (en) * 2006-05-24 2012-02-14 Vidoop Llc Methods and systems for graphical image authentication
US8601589B2 (en) * 2007-03-05 2013-12-03 Microsoft Corporation Simplified electronic messaging system
CN101321066B (en) * 2008-05-20 2012-03-07 北京深思洛克软件技术股份有限公司 Information safety device for internetwork communication
CN101729256B (en) * 2008-10-24 2012-08-08 深圳宝嘉电子设备有限公司 Security certificate method based on fingerprint, cryptographic technology and fragile digital watermark


Also Published As

Publication number Publication date
CN103677631A (en) 2014-03-26

Similar Documents

Publication Publication Date Title
US10341113B2 (en) Password management
US9877015B2 (en) User information extraction method and user information extraction apparatus
CN104360484B (en) A kind of light wave medium, glasses and its imaging method
WO2015027599A1 (en) Content projection system and content projection method
CN106682540A (en) Intelligent peep-proof method and device
US20160173864A1 (en) Pickup of objects in three-dimensional display
KR20140124209A (en) Head-mounted display apparatus with enhanced secuirity and method for accessing encrypted information by the apparatus
WO2015051605A1 (en) Image collection and locating method, and image collection and locating device
US9838588B2 (en) User information acquisition method and user information acquisition apparatus
JP2008241822A (en) Image display device
WO2015070623A1 (en) Information interaction
CN111670455A (en) Authentication device
WO2017113286A1 (en) Authentication method and apparatus
KR20180134280A (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
KR20140053647A (en) 3d face recognition system and method for face recognition of thterof
WO2016008341A1 (en) Content sharing methods and apparatuses
CN105515777A (en) Dual authentication system and method for USBKEY equipment
WO2015078182A1 (en) Anti-counterfeiting for determination of authenticity
WO2015070624A1 (en) Information interaction
CN105844138A (en) Wired and wireless state switchable multi-mode mouse with iris recognition and USB Key functions
CN103761653B (en) Method for anti-counterfeit and false proof device
CN206696852U (en) A kind of iris authentication annex
US9836857B2 (en) System, device, and method for information exchange
US20210133910A1 (en) Image processing circuitry and image processing method
US11948402B2 (en) Spoof detection using intraocular reflection correspondences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14863000

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14863000

Country of ref document: EP

Kind code of ref document: A1