WO2015070623A1 - Information interaction - Google Patents

Information interaction

Info

Publication number
WO2015070623A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
input information
user
piece
information
Prior art date
Application number
PCT/CN2014/081494
Other languages
English (en)
Inventor
Lin Du
Kuifei Yu
Original Assignee
Beijing Zhigu Rui Tuo Tech Co., Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Rui Tuo Tech Co., Ltd
Publication of WO2015070623A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 - Constructional details or arrangements
    • G06F1/1613 - Constructional details or arrangements for portable computers
    • G06F1/163 - Wearable computers, e.g. on a belt
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
    • G06F21/35 - User authentication involving the use of external additional devices, e.g. dongles or smart cards communicating wirelessly
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 - Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03 - Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/031 - Protect user input by software means

Definitions

  • This application relates to the technical field of device interaction and, more particularly, to information interaction with a device.
  • A screen lock is usually provided on a mobile or wearable device to save energy and prevent accidental operation; the screen can be unlocked in an encrypted or unencrypted way.
  • With encrypted unlocking, a user usually needs to remember special passwords, patterns, actions, etc. Although this ensures safety, such credentials are easily forgotten, which is inconvenient for the user.
  • Similar problems exist wherever information such as a password must be inputted before further operation.
  • With the digital watermarking technique, identifying information can be embedded directly into a digital carrier without affecting the use of the original carrier, and the watermark cannot easily be detected or modified.
  • The digital watermarking technique is applicable to many areas, such as copyright protection, anti-counterfeiting, authentication and information hiding. If it can be used to help users enter a password and acquire the corresponding authorization safely and secretly, the above-mentioned problem that authentication fails because the user has forgotten the password can be solved, thereby enhancing the user experience.
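This application does not specify a watermarking algorithm. As a purely illustrative sketch, the snippet below uses least-significant-bit (LSB) embedding on a list of 8-bit pixel values to show how input information can ride inside an image without visibly altering it; the function names are our own.

```python
def embed_watermark(pixels, payload_bits):
    """Hide payload bits in the least significant bit of successive pixels.

    LSB embedding changes each carrier pixel value by at most 1, so the
    watermarked image is visually indistinguishable from the original.
    """
    out = list(pixels)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the payload bit
    return out


def extract_watermark(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [pixels[i] & 1 for i in range(n_bits)]
```

A deployed scheme would more likely use a keyed, transform-domain watermark so the payload survives compression and resists tampering; LSB is shown only for brevity.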
  • An example aim of this application is to provide a method for information interaction.
  • this application provides a method, comprising:
  • this application provides a method, comprising:
  • this application provides a device, comprising:
  • a processor coupled to the memory, that executes the executable modules to perform operations of the device, the executable modules comprising:
  • an image acquisition module configured to acquire an image related to a device, the image comprising at least one digital watermark
  • an information acquisition module configured to acquire at least one piece of input information corresponding to the device included in the at least one digital watermark
  • an information providing module configured to send the at least one piece of input information to the device.
  • this application provides a wearable device; the wearable device contains the device for information interaction in the above-mentioned third example embodiment.
  • this application provides a device, comprising:
  • a processor that executes executable modules to perform operations of the device, the executable modules comprising:
  • a watermark embedding module configured to embed at least one digital watermark into an image related to the device for information interaction, wherein the at least one digital watermark comprises at least one piece of input information corresponding to the device for the information interaction;
  • an image providing module configured to provide the image to an external device
  • an information input module configured to receive the at least one piece of input information provided from the external device
  • an execution module configured to execute a corresponding operation according to the at least one piece of input information received by the information input module.
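The device-side modules above (watermark embedding, image providing, information input, execution) can be sketched as a single class. The class name, password field, and the trivial dictionary "watermark" slot are illustrative inventions; a real device would hide the payload imperceptibly inside the image.

```python
class SecondDevice:
    """Hypothetical sketch of the device-side modules described above."""

    def __init__(self, password):
        self._password = password
        self.unlocked = False

    def embed_and_provide_image(self):
        # Watermark embedding + image providing modules: attach the input
        # information to the lock-screen image (here, visibly, for brevity).
        return {"pixels": [0] * 64, "watermark": self._password}

    def receive_input(self, info):
        # Information input + execution modules: act on the returned info.
        if info == self._password:
            self.unlocked = True
```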
  • this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
  • this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions and the processing device being connected with the memory through a communication bus; when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, causing the device for information interaction to perform operations comprising:
  • this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
  • this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions and the processing device being connected with the memory through a communication bus; when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, causing the device for information interaction to perform operations comprising:
  • At least one technical solution of the embodiment of this application acquires an image related to a device and obtains input information contained in the image, and then automatically provides the input information to the device. Therefore, the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
  • FIG. 1 is an example flow diagram of a method for information interaction of an embodiment of this application;
  • FIG. 2a is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
  • FIG. 2b is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
  • FIG. 3a is an example flow diagram of another method for information interaction of an embodiment of this application;
  • FIG. 3b is an example flow diagram of another method for information interaction of an embodiment of this application;
  • FIG. 4a is an example diagram of a light spot pattern used in a method for information interaction of an embodiment of this application;
  • FIG. 4b is an example diagram of an eye fundus pattern acquired by a method for information interaction of an embodiment of this application;
  • FIG. 5 is an example structural diagram of a first device for information interaction of an embodiment of this application;
  • FIG. 6a is an example structural diagram of another first device for information interaction of an embodiment of this application;
  • FIG. 6b is an example structural diagram of still another first device for information interaction of an embodiment of this application;
  • FIG. 7a is an example structural diagram of a position detection module in a first device for information interaction of an embodiment of this application;
  • FIG. 7b is an example structural diagram of a position detection module in another first device for information interaction of an embodiment of this application;
  • FIG. 7c and FIG. 7d are example diagrams of the corresponding optical paths of the position detection modules during position detection in the embodiments of this application;
  • FIG. 8 is an example diagram showing a first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 9 is an example diagram showing another first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 10 is an example diagram showing still another first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 11 is an example structural diagram of another device for information interaction of an embodiment of this application;
  • FIG. 12 is an example diagram of a wearable device of an embodiment of this application;
  • FIG. 13 is an example flow diagram of a method for information interaction of an embodiment of this application;
  • FIG. 14 is an example structural diagram of a second device for information interaction of an embodiment of this application;
  • FIG. 15 is an example structural diagram of an electronic terminal of an embodiment of this application;
  • FIG. 16 is an example structural diagram of another second device for information interaction of an embodiment of this application;
  • FIG. 17 is an example schematic application scenario of a device for information interaction of an embodiment of this application.
  • A user often needs various kinds of input information in daily life; here, input information is the information that must be inputted to a device to complete a certain operation, such as: user authentication information, for example a user password or a specific hand gesture required by the screen-locking interface of an electronic device; a user password required when logging in to an account of a website or application; or password information required by an access control device.
  • The technical solution provided in the following embodiments of this application can help the user acquire the input information without having to remember it, and complete the corresponding operations automatically.
  • A "user environment" is the operational environment related to the user, for example the operational environment of an electronic terminal system that the user enters after logging in through the login interface of an electronic terminal (such as a cell phone or computer).
  • The operational environment of the electronic terminal system generally comprises multiple applications; for example, after entering the operational environment of the cell phone system through the screen-locking interface, the user can start the applications corresponding to various functional modules in the system (such as phone, e-mail, message and camera applications).
  • The "user environment" can also be the operational environment of a certain application that the user enters after logging in through the application's login interface; the operational environment of the application may also comprise multiple next-level applications (such as the cell phone applications in the above cell phone system), which, after starting, may themselves comprise further next-level applications such as phone calling, contacts and call records.
  • the embodiment of this application provides a method for information interaction, comprising:
  • S120 Image acquisition step: acquiring an image related to a device, the image containing at least one digital watermark;
  • S140 Information acquisition step: acquiring at least one piece of input information contained in the at least one digital watermark and corresponding to the device;
  • S160 Information providing step: providing the at least one piece of input information to the device.
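The three steps can be sketched end to end from the user-side device's perspective. The `device` interface (`current_image()`, `receive()`) and the decoder callback are hypothetical stand-ins, not APIs defined by this application.

```python
def information_interaction(device, extract_watermark):
    """Run steps S120/S140/S160 against a device.

    `device` is assumed to expose `current_image()` and `receive(info)`;
    `extract_watermark` is whatever watermark decoder is in use.
    """
    image = device.current_image()      # S120: image acquisition
    info = extract_watermark(image)     # S140: information acquisition
    device.receive(info)                # S160: information providing
    return info
```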
  • The embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and provides the input information automatically to the device. Therefore the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves the user experience.
  • S120 Image acquisition step: acquiring an image related to a device.
  • the image related to a device can be, for example, an image displayed on the device, such as an image displayed on the electronic terminal screen of the cell phone, computer, etc.
  • the image is a login interface of a user environment displayed on the device.
  • the image is a screen-locking interface 110 displayed on the device.
  • The image can also be, for example, an image displayed on another device, or a static image printed on an object (such as paper or a wall), provided the image is related to the device mentioned above.
  • the image is an image displayed on a picture posted near a door
  • the device is an electronic access control device for the door
  • the digital watermark of the image contains the input information for the electronic access control device (such as password information for opening the door).
  • The object seen by the user can be photographed by a smart glasses device; for example, when the user looks at the image, the smart glasses device photographs it.
  • The image can also be acquired through other devices, or through interaction with the device displaying the image.
  • S140 Information acquisition step: acquiring at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device.
  • The digital watermark in the image can be analyzed to extract the input information, for example by a watermark extraction method using a personal private key and a public or private watermark.
  • The image can also be sent externally, for example to a cloud server and/or a third-party authority, so that the input information in the at least one digital watermark is extracted by the cloud server or the third-party authority.
  • the input information is the login information about the user environment.
  • the image is the login interface of a website displayed on an electronic device
  • the input information is the login information, such as the user name, password, etc., corresponding to the website.
  • the input information is the unlock information about the screen-locking interface.
  • Fig. 2a shows the screen-locking interface of a touch screen cell phone device.
  • The user needs to draw a corresponding track on the cell phone screen to unlock the phone, so that the user can enter the user environment of the cell phone system for further operations.
  • The screen-locking interface is embedded with a digital watermark; this implementation acquires the corresponding unlock information through the digital watermark and sends it back to the device, so that the device can unlock automatically after receiving the unlock information.
  • S160 Information providing step: providing the input information to the device.
  • the input information can be sent directly to the device through the interaction between devices locally.
  • the input information can also be sent to a remote end server, and the remote end server provides the input information to the device.
  • In this way, the input information for a device can be acquired naturally and conveniently and provided to the device, so that the corresponding function of the device can be started conveniently without user operations.
  • In addition to the steps of the embodiment shown in Fig. 1, in order to improve operational security, the method also comprises, before the information acquisition step S140:
  • S130 Authorization determining step: determining whether the user is an authorized user, and conducting the information acquisition step only when the user is an authorized user.
  • the user can be a user who is using the device.
  • After the user acquires a digitally watermarked image related to a device, in order to guarantee the security of user information, the user needs to be authorized, so that only authorized users can acquire the corresponding input information in the digital watermark, while unauthorized users only see the image.
  • The authorization determining step S130 can be conducted at the remote end (such as a cloud server): the corresponding user information is sent to the remote end, which makes the determination and sends the result back locally. The step can also be conducted directly on the local device.
  • the authorization determining step can also be set before the information providing step.
  • Before the information providing step, the method also comprises:
  • S150 Authorization determining step: determining whether the user is an authorized user, and conducting the information providing step only when the user is an authorized user.
  • the authorization determining step can also be conducted at the remote end or locally.
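A minimal sketch of the authorization gate, assuming a simple set of authorized user IDs; the registry and function names are invented, and the check could equally be delegated to a cloud server as the text notes.

```python
def provide_input_information(user_id, authorized_ids, info, send):
    """S130/S150 sketch: forward the input information only for an
    authorized user; an unauthorized user only ever sees the image."""
    if user_id not in authorized_ids:
        return False
    send(info)  # e.g. transmit the unlock information to the device
    return True
```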
  • The first device for information interaction described in the embodiment of this application is a device provided at the user's side; a person skilled in the art will appreciate that the first device for information interaction can also be a cloud device, such as a server.
  • In that case, after receiving the image sent from the user's side, the input information in the image can be extracted locally or at another server side, the user authorization determining step is conducted, and the input information is then sent to the device related to the image once the user is confirmed to be an authorized user.
  • the method also comprises:
  • S180 Projecting step: projecting the input information to the eye fundus of the user.
  • In this way the user can know the corresponding information; moreover, when a device is unable to receive the input information (for example, because of communication problems), the user can input it into the device manually according to the acquired input information.
  • the input information needs to be converted into corresponding display content.
  • the corresponding input information can be provided to the user by projecting the input information to the user's eye fundus.
  • the projection can be performed by directly projecting the input information to the user eye fundus through a projection module.
  • The input information can be projected directly to the user's eye fundus without an intermediate display; therefore only the user can acquire the input information, while other people are unable to see it, thereby guaranteeing information security for the user.
  • The projection can also be performed by displaying the input information in a position visible only to the user (for example on the display surface of smart glasses) and projecting it to the user's eye fundus through that display surface.
  • the projecting step comprises:
  • an information projecting step: projecting the input information; and a parameter adjusting step: adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes until the input information is imaged clearly on the user's eye fundus.
  • the parameter adjusting step comprises:
  • The imaging parameter comprises the focal length, the direction of the optical axis, etc., of the optical device.
  • the input information can be properly projected to the eye fundus of the user through this adjustment, for example, the input information is imaged clearly on the user's eye fundus by adjusting the focal length of the optical devices.
  • the three-dimensional display effect of the input information can also be achieved by projecting the same input information to the two eyes respectively with some deviations, and at this time, for example, the effect can be achieved by adjusting the parameter of the optical axis of the optical device.
  • the projecting step S180 also comprises:
  • the projecting step S180 also comprises:
  • The projected input information is pre-processed so that it carries a reverse deformation opposite to the deformation introduced by the curved optical device; when it passes through that curved optical device, the reverse deformation offsets the device's deformation, so the input information received on the user's eye fundus appears as intended.
  • The input information projected to the user's eye need not be aligned with the image; for example, when the user must input a set of password information such as "1234" in a certain order into an input box displayed in the image, the set of information only needs to be projected to the user's eye fundus so that the user can see it.
  • When the input information is information generated by completing a specific action at a specific position, for example by drawing a specific track at a specific position on the screen displaying the image,
  • the input information needs to be aligned with the image when displayed. Therefore, in a possible implementation of the embodiment of this application, in the projecting step S180, the input information can be projected to the user's eye fundus after being aligned with the image seen by the user.
  • the method also comprises:
  • a position detecting step for detecting the position of the user's gazing point relative to the user
  • the projecting step S180 aligns the projected input information with the image seen by the user on the eye fundus of the user, according to the position of the user's gazing point relative to the user.
  • the position corresponding to the user's gazing point is the position of the image.
  • a depth sensor such as infrared distance measurement
  • Detecting the position of the user's current gazing point through method iii) comprises:
  • an eye fundus image collection step for collecting an image of the user's eye fundus
  • an adjustable imaging step for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
  • an image processing step for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the image collection position of the eye fundus corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
  • An image presented on the "eye fundus" is primarily an image presented on the retina; it can be an image of the eye fundus itself, or an image of another object projected onto the eye fundus, such as the light spot pattern mentioned below.
  • the clearest image of the eye fundus can be obtained when the optical device is in a certain position or state by adjusting the focal length of an optical device on the optical path between the eye and the collection position and/or its position in the optical path.
  • the adjustment can be continuous and in real time.
  • The optical device can be a lens with an adjustable focal length, which adjusts its focal length by changing its own refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the lens, for example by increasing or decreasing the liquid medium in a cavity formed by two transparent layers; 2) the focal length is adjusted by changing the refractive index of the lens, for example, when the lens is filled with a specific liquid crystal medium, by adjusting the voltage on the corresponding electrodes so as to change the arrangement of the liquid crystal medium and thereby the refractive index of the lens.
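The effect of the focal-length adjustment can be reasoned about with the thin-lens equation. The sketch below is our own simplification (both distances positive, measured from the lens, in millimetres): it computes the focal length the adjustable lens must assume so that an object at one distance is imaged sharply at another.

```python
def required_focal_length(object_distance_mm, image_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for f.

    The symmetric sign convention here is an illustrative simplification
    of the projector/eye geometry discussed in the text.
    """
    return 1.0 / (1.0 / object_distance_mm + 1.0 / image_distance_mm)
```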
  • The optical device can also be a set of lenses, whose focal length is adjusted by adjusting the relative positions of the lenses in the set.
  • One or more lenses in the set can be the lens with an adjustable focal length mentioned above.
  • The optical path parameters of the system can also be changed by adjusting the position of the optical device on the optical path.
  • the image processing step further comprises:
  • The clearest image can be collected through adjustment in the adjustable imaging step, but it needs to be identified by the image processing step, so that the optical parameters of the eye can be calculated from the clearest image and the known optical path parameters.
  • the image processing step may also comprise:
  • The projected light spot may have no specific pattern and be used only to illuminate the eye fundus.
  • The projected light spot may also comprise a feature-rich pattern; rich pattern features make detection easier and increase detection accuracy.
  • Fig. 4a shows an example of a light spot pattern P, which can be formed by a light spot pattern generator, such as frosted glass;
  • Fig. 4b shows an image of the eye fundus collected while the light spot pattern P is projected.
  • The light spot can be an infrared light spot invisible to the eye. In that case, to reduce interference from other spectra, light outside the invisible band can be filtered out of the projected light spot.
  • the method of the embodiment of this application may also comprise the following steps:
  • The analysis result comprises, for example, features of the collected image, such as feature contrast and texture features.
  • For example, the projection can be stopped periodically when the observer gazes at one point continually.
  • The projection can be stopped when the observer's eye fundus is bright enough, in which case the distance from the eyes to the current focusing point of the eyes can be detected using eye fundus information.
  • The brightness of the projected light spot can also be controlled according to the ambient light.
  • the image processing step may also comprise:
  • The calibration of the eye fundus image acquires at least one reference image corresponding to the image presented on the eye fundus; specifically, the collected images are compared with the reference image to obtain the clearest image.
  • The clearest image can be the collected image having the minimum difference from the reference image.
  • The difference between the currently obtained image and the reference image can be calculated with an existing image processing algorithm, for example a classical phase-difference automatic focusing algorithm.
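As an illustration of the "minimum difference" criterion, a sum of absolute pixel differences can serve as the comparison metric; the metric choice is ours, since the text only requires a difference measure and mentions phase-difference autofocus as one option.

```python
def image_difference(img_a, img_b):
    """Sum of absolute per-pixel differences between equal-sized images
    (each image a flat sequence of pixel intensities)."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b))


def clearest_image(candidates, reference):
    """Pick the candidate with the minimum difference from the reference,
    as the calibration step above describes."""
    return min(candidates, key=lambda img: image_difference(img, reference))
```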
  • The optical parameters of the eye may also comprise the direction of the eye's optical axis, obtained from the characteristics of the eye when the clearest image is collected.
  • The characteristics of the eye can be acquired from the clearest image or by other means.
  • The gazing direction of the user's eyes can be obtained from the directions of the optical axes of the eyes.
  • The directions of the optical axes of the eyes can be obtained from the characteristics of the eye fundus when the clearest image is obtained; determining them from eye fundus characteristics provides higher accuracy.
  • the size of the light spot pattern may be larger than or smaller than the visible area in the eye fundus, wherein:
  • The directions of the optical axes of the eyes and the observer's line-of-sight direction can be determined from the position of the light spot pattern on the obtained image relative to the original light spot pattern (obtained through image calibration).
  • The direction of the eye's optical axis can also be obtained from the characteristics of the pupil when the clearest image is obtained.
  • The characteristics of the pupil can be acquired from the clearest image or by other means. Obtaining the direction of the optical axis of the eye from the characteristics of the pupil is prior art and is therefore not explained here.
  • the method of the embodiment of this application also comprises the calibration step of the direction of optical axis of eye, in order to determine the direction of optical axis of eye more precisely.
  • The known imaging parameters comprise fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are the parameters of the optical device at the moment the clearest image is acquired; they can be recorded in real time when the clearest image is acquired.
  • The position of the eye's gazing point can then be obtained by combining these parameters with the calculated distance from the eye to its focusing point.
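Combining the gaze distance with the optical-axis direction can be sketched as a spherical-to-Cartesian conversion; the axis conventions below are our own assumptions, since the text only states that the two quantities are combined.

```python
import math

def gazing_point(distance_mm, azimuth_deg, elevation_deg):
    """Convert gaze distance and optical-axis direction into a 3-D point
    relative to the eye (x right, y up, z forward)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_mm * math.cos(el) * math.sin(az)
    y = distance_mm * math.sin(el)
    z = distance_mm * math.cos(el) * math.cos(az)
    return (x, y, z)
```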
  • The input information can be projected to the user's eye fundus three-dimensionally in the projecting step S180.
  • the three-dimensional projection can be realized by adjusting the projection position in such a manner that the user can see the information with parallax, thereby forming the three-dimensional display effect of the same projection information.
  • the input information comprises three-dimensional information respectively corresponding to the user's two eyes, and in the projecting step, the corresponding input information can be projected to the two eyes of the user respectively. That is, the input information comprises left eye information corresponding to the user's left eye and right eye information corresponding to the user's right eye, wherein the left eye information can be projected to the user's left eye and the right eye information can be projected to the user's right eye, so that the input information seen by the user has a suitable three-dimensional display effect, bringing a better user experience.
  • the user can see the three-dimensional space information through the three-dimensional projection.
  • the input information can only be inputted correctly when the user makes a specific hand gesture in a specific position in the three-dimensional space.
  • the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, so that the user can make the hand gesture prompted by the input information in the specific position, while other people are unable to know the spatial information even if they see the hand gesture made by the user, thereby improving the secrecy effect of the input information.
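As an illustration of the secrecy mechanism just described, the following sketch accepts the input only when a specific hand gesture is made at a specific position in three-dimensional space. All names, coordinates and tolerances here are invented for the example; the embodiment does not specify a data format.

```python
def gesture_accepted(made_gesture, made_position, required_gesture,
                     required_position, tolerance=0.05):
    """Sketch of the spatial-gesture secrecy mechanism (names invented):
    an onlooker who sees the hand gesture but not the projected spatial
    information cannot reproduce the required position."""
    if made_gesture != required_gesture:
        return False
    # Euclidean distance between the made and required 3D positions
    dist = sum((a - b) ** 2
               for a, b in zip(made_position, required_position)) ** 0.5
    return dist <= tolerance
```

A correct gesture at the wrong place, or a wrong gesture at the right place, is rejected either way.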
  • the embodiment of this application provides a first device for information interaction 500, comprising:
  • an image acquisition module 510 used for acquiring an image related to a device, the image containing at least one digital watermark;
  • an information acquisition module 520 used for acquiring the at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device;
  • an information providing module 530 used for providing the at least one piece of input information to the device.
  • the device of the embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and provides the input information to the device automatically. Therefore the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves the user experience.
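The module chain described above can be sketched as follows. The class and method names are hypothetical, and the image and device are reduced to simple dictionaries purely for illustration:

```python
class ImageAcquisitionModule:
    """Corresponds to module 510: acquires an image related to a device."""
    def acquire(self, source):
        # In practice this would photograph or receive the image;
        # here we simply pass the stub image through.
        return source

class InformationAcquisitionModule:
    """Corresponds to module 520: obtains the input information
    contained in the digital watermark."""
    def extract(self, image):
        # Stub: the "watermark" payload is stored under a dict key.
        return image.get("watermark_payload")

class InformationProvidingModule:
    """Corresponds to module 530: provides the input information to the device."""
    def provide(self, device, info):
        device["received"] = info
        return device

def interact(source, device):
    """Acquire -> extract -> provide, as in the described pipeline."""
    image = ImageAcquisitionModule().acquire(source)
    info = InformationAcquisitionModule().extract(image)
    return InformationProvidingModule().provide(device, info)
```

The user never has to read or remember the payload; the chain hands it to the target device directly.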
  • the image acquisition module 510 comprises an image collection sub-module 511 used for acquiring the image by photographing.
  • the image collection sub-module 511 can for example be a camera of the smart glasses used for photographing the image seen by the user.
  • the image acquisition module 510 comprises:
  • a first communication sub-module 512 used for obtaining the image by receiving the same.
  • the image can be acquired by another device and then sent to the device of the embodiment of this application; alternatively, the image can be acquired through interaction with a device displaying the image (that is, the device transmits the displayed image information to the device of embodiment of this application).
  • There are various forms of the information acquisition module 520, for example:
  • the information acquisition module 520 comprises: an information extraction sub-module 521 used for extracting the input information from the image.
  • the information extraction sub-module 521 can analyze the at least one digital watermark in the image, for example by means of a personal private key and a public or private watermark extraction method, and extract the input information.
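The embodiment does not fix a particular watermarking algorithm. As one classical illustration only, a least-significant-bit (LSB) watermark can carry a short piece of input information in an image, as in the sketch below; a real implementation would involve the personal private key and a far more robust watermarking scheme.

```python
def embed_lsb(pixels, bits):
    """Embed a bit list into the least significant bits of pixel values."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

def text_to_bits(text):
    """8-bit big-endian encoding of each character."""
    return [int(b) for c in text for b in format(ord(c), "08b")]

def bits_to_text(bits):
    chars = []
    for i in range(0, len(bits), 8):
        byte = bits[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)
```

A round trip embeds, for instance, a device passcode into pixel values and recovers it unchanged.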
  • the information acquisition module 520 comprises: a third communication sub-module 522 used for:
  • the image can be sent to the external, for example, to a cloud server and/or a third party authority, and after the cloud server or the third party authority extracts the input information contained in the at least one digital watermark, the input information is sent back to the third communication sub-module 522 of the embodiment of this application.
  • the functions of the first communication sub-module 512 and the third communication sub-module 522 can be achieved through the same communication module.
  • the device 500 also comprises:
  • an authorization determination module 550 used for determining whether the user is an authorized user, and starting corresponding operations when the user is an authorized user; specifically, the information acquisition module 520 can acquire the input information only when the user is an authorized user.
  • the authorization determination module 550 determines whether the user is an authorized user, and starts corresponding operations when the user is an authorized user. In the embodiment of this application, after the input information is extracted from the image, the user needs to be authorized, so that the input information can be provided to the device only when the user is an authorized user.
  • the authorization determination by the authorization determination module 550 can also be conducted at the remote end (such as a cloud server), that is, by sending the corresponding user information to the remote end and sending the result back to the local end after the determination.
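One minimal way to sketch an authorization determination of this kind, assuming a shared secret between the device and the remote end, is a keyed-hash token check. The scheme, the secret, and all names below are invented for illustration and are not taken from the embodiment.

```python
import hmac
import hashlib

SECRET = b"device-shared-secret"   # placeholder secret, for illustration only

def sign_user(user_id):
    """What the remote end (e.g. a cloud server) might issue to an
    authorized user after verifying the user information."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def is_authorized(user_id, token):
    """Local check standing in for the authorization determination
    module 550: the token must match the keyed hash of this user."""
    expected = sign_user(user_id)
    return hmac.compare_digest(expected, token)
```

`compare_digest` is used instead of `==` so the comparison time does not leak how many characters matched.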
  • the authorization determination module 550 comprises:
  • a second communication sub-module 551 used for:
  • the second communication sub-module 551 can be a separate communication interface device, or can be the same module as the first communication sub-module 512 and/or the third communication sub-module 522.
  • the device may comprise no authorization determination module.
  • In addition to providing the input information to the device to perform corresponding operations, in order to ensure that the user can see the input information secretly, as shown in Fig. 6b, the device 500 also comprises:
  • a projection module 560 used for projecting the input information to the user's eye fundus.
  • On one hand, the user can know the corresponding information; on the other hand, in occasions where some devices are unable to receive the input information (for example, due to communication problems), the user can input the information into the device manually according to the acquired input information.
  • the projection module 560 comprises:
  • an information projecting sub-module 561 used for projecting the input information
  • a parameter adjustment sub-module 562 used for adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes, until the input information is imaged clearly on the user's eye fundus.
  • the parameter adjustment sub-module 562 comprises:
  • At least one adjustable lens device with the focal length thereof being adjustable and/or the position thereof on the optical path between the projection position and the user's eyes being adjustable.
  • the projection module 560 comprises:
  • a curved spectral device 563 used for transmitting the input information to the user's eye fundus respectively corresponding to the positions of pupil in different directions of optical axis of the eye.
  • the projection module 560 comprises:
  • a reversed deformation processing sub-module 564 used for conducting the reversed deformation processing on the input information corresponding to the positions of the pupil in different directions of the optical axis of the eye, so that the eye fundus receives the input information to be presented.
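The reversed deformation processing can be illustrated as follows, with a simple radial scaling standing in for the (unspecified) deformation introduced by the curved device: the sub-module pre-distorts each point so that, after the known deformation, it lands where intended. The distortion model and constants are assumptions made for this sketch.

```python
def distort(point, k):
    """Stand-in for the deformation introduced by the curved spectral
    device: a simple radial scaling by (1 + k * r^2)."""
    x, y = point
    r2 = x * x + y * y
    s = 1.0 + k * r2
    return (x * s, y * s)

def predistort(point, k, iterations=20):
    """Reversed deformation processing (cf. sub-module 564): find the
    point which, after the known distortion, lands on the desired point.
    Solved here by fixed-point iteration."""
    x, y = point
    u, v = x, y
    for _ in range(iterations):
        r2 = u * u + v * v
        s = 1.0 + k * r2
        u, v = x / s, y / s
    return (u, v)
```

Applying `distort` to the pre-distorted point reproduces the intended point, which is exactly the property the sub-module needs.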
  • the projection module 560 comprises:
  • an alignment and adjustment sub-module 565 used for aligning the projected input information with the image seen by the user on the user's eye fundus.
  • the device 500 also comprises:
  • a position detection module 540 used for detecting the position of the user's gazing point relative to the user
  • the alignment and adjustment sub-module 565 is used for aligning the projected input information and the image seen by the user on the user's eye fundus according to the position of the user's gazing point relative to the user.
  • There are various embodiments of the position detection module 540, such as devices corresponding to methods i) to iii) in the method embodiments.
  • the embodiment of this application further illustrates the position detection module corresponding to method iii) through the implementations corresponding to Figs. 7a-7d, 8 and 9:
  • the position detection module 700 comprises:
  • an image collection sub-module for eye fundus 710 used for collecting an image on the user's eye fundus
  • an adjustable imaging sub-module 720 used for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
  • an image processing sub-module 730 used for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the eye fundus image collection position corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
  • By analyzing and processing the image on the eye fundus, the position detection module 700 obtains the optical parameters of the eye at the moment when the image collection sub-module for eye fundus obtains the clearest image, and thus the current position of the gazing point of the eyes can be obtained by calculation.
  • the image presented on the "eye fundus" is primarily the image presented on the retina, which can be the image of the eye fundus itself or the image of another object projected to the eye fundus.
  • the eye can be human eyes or the eye of other animals.
  • the image collection sub-module for eye fundus 710 is a micro camera; in another possible implementation of the embodiment of this application, the image collection sub-module for eye fundus 710 can also use a sensing imaging device directly, such as a CCD or a CMOS device.
  • the adjustable imaging sub-module 720 comprises: an adjustable lens device 721 located on the optical path between the eyes and the image collection sub-module for eye fundus 710, with the focal length thereof being adjustable and/or the position thereof in the optical path being adjustable. The equivalent focal length of the system from the eyes to the image collection sub-module for eye fundus 710 can be adjusted through the adjustable lens device 721, and the image collection sub-module for eye fundus 710 can acquire the clearest image on the eye fundus under a certain position or state of the adjustable lens device 721 by adjustment of the adjustable lens device 721.
  • the adjustable lens device 721 can be adjusted continuously and in real time in the detection process.
  • the adjustable lens device 721 can be: a lens with adjustable focal length, used for completing the adjustment of the focal length thereof by adjusting the refractive index and/or shape thereof. Specifically: 1) adjusting the focal length by adjusting the curvature of at least one surface of the lens with adjustable focal length, for example, by increasing or decreasing the liquid medium in the cavity composed of two transparent layers to adjust the curvature of the lens with adjustable focal length; 2) adjusting the focal length by changing the refractive index of the lens with adjustable focal length, for example, since the lens with adjustable focal length is filled with a specific liquid crystal medium, the arrangement mode of the liquid crystal medium can be adjusted by adjusting the voltage of the respective electrode of the liquid crystal medium, thereby changing the refractive index of the lens with adjustable focal length.
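Both adjustment mechanisms just listed can be seen in the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2): changing the surface curvature changes R1 and R2, while changing the refractive index changes n. A small numerical sketch (illustrative values only, under the thin-lens assumption):

```python
def focal_length(n, r1, r2):
    """Thin-lens lensmaker's equation: 1/f = (n - 1) * (1/r1 - 1/r2).
    r1, r2 are the signed radii of curvature of the two surfaces (metres)."""
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2)
    return 1.0 / inv_f

# Mechanism 1: changing the surface curvature
# (e.g. by increasing/decreasing the liquid medium in the cavity)
f_flat   = focal_length(1.5, 0.10, -0.10)   # gentler curvature -> longer f
f_curved = focal_length(1.5, 0.05, -0.05)   # stronger curvature -> shorter f

# Mechanism 2: changing the refractive index
# (e.g. by adjusting the voltage on a liquid crystal medium)
f_low_n  = focal_length(1.5, 0.05, -0.05)
f_high_n = focal_length(1.7, 0.05, -0.05)   # higher index -> shorter f
```

Either mechanism alone suffices to sweep the focal length during the search for the clearest fundus image.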
  • the adjustable lens device 721 comprises: a set of lenses composed of multiple lenses, used for completing the adjustment of focal length of the set of lenses by adjusting the relative positions between the lenses in the set of lenses.
  • the set of lenses can also comprise lenses with adjustable imaging parameters, such as focal length.
  • the optical path parameters of the system can be further changed by adjusting the position of the lens device 721 on the optical path.
  • the adjustable imaging sub-module 720 can also comprise: a spectroscopic unit 722, used for forming the light transmission paths between the eye and the observation object as well as between the eye and the image acquisition sub-module for eye fundus 710. This folds the optical path, reducing the system volume while avoiding influencing other viewing experiences of the user as far as possible.
  • the spectroscopic unit can comprise: a first spectroscopic unit located between the eye and the observation object, used for transmitting the light from the observation object to the eye and used for transferring the light from the eye to the image acquisition sub-module for eye fundus.
  • the first spectroscopic unit can be a spectroscope, a spectroscopic optical waveguide (comprising optical fibers) or other suitable spectroscopic devices.
  • the image processing sub-module 730 of the system comprises an optical path calibrating unit, used for calibrating the optical path of the system; for example, an alignment calibration for the optical axis of the optical path is performed to guarantee the accuracy of the measurement.
  • the image processing sub-module 730 comprises:
  • an image analysis unit 731 used for analyzing the image acquired by the image acquisition sub-module for eye fundus to find the clearest image
  • a parameter calculation unit 732 used for calculating the optical parameters of the eyes based on the clearest image as well as the known imaging parameters of the system when the clearest image is acquired.
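The search for the clearest image performed by the image analysis unit 731 can be sketched as a focus sweep with a simple sharpness measure. The actual unit may use any focus metric; the gradient-energy measure below is purely illustrative.

```python
def sharpness(image):
    """Simple focus measure: sum of squared horizontal gradients.
    A sharper (better focused) image has stronger local gradients."""
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
    return total

def find_clearest(images):
    """Return the index of the clearest image in a focus sweep,
    standing in for the role of the image analysis unit 731."""
    return max(range(len(images)), key=lambda i: sharpness(images[i]))
```

The index of the winning frame identifies which state of the adjustable lens device produced the clearest fundus image, and the imaging parameters recorded for that state feed the parameter calculation unit.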
  • the optical parameters of the eyes can be obtained by calculating based on the clearest image and the known optical path parameters of the system.
  • the optical parameters of the eye herein can comprise an optical axis direction of the eye.
  • the system can also comprise: a projection sub-module 740 used for projecting the light spot to the eye fundus.
  • the functions of the projection sub-module can be implemented through a micro-projector.
  • the light spot projected herein may have no specific pattern and is only used for illuminating the eye fundus.
  • the projected light spot can comprise a pattern with rich features.
  • the rich features of the pattern can be easy to detect, increasing the accuracy of detection.
  • Fig. 4a is a diagram of a light spot pattern P, and the pattern can be formed by a light spot pattern generator, such as a frosted glass;
  • Fig. 4b shows the eye fundus image taken when the light spot pattern P is projected.
  • the light spot can be an infrared light spot invisible to eyes.
  • a transmission filter for light invisible to eyes can be arranged on the emergent surface of the projection sub-module.
  • a transmission filter for light invisible to eyes can be arranged on the incident surface of the image acquisition sub-module for eye fundus.
  • the image processing sub-module 730 can also comprise:
  • a projection control unit 734 used for controlling the brightness of the projected light spot of the projection sub-module 740 based on the obtained results of the image analysis unit 731.
  • the projection control unit 734 can adaptively adjust the brightness based on the features of the image obtained by the image collection sub-module for eye fundus 710.
  • the features of the image herein comprise the contrast of the image features as well as the texture feature, etc.
  • a special condition of controlling the brightness of the projected light spot of the projection sub-module 740 is turning on or turning off the projection sub-module 740; for example, the projection sub-module 740 can be periodically turned off when the user continues to focus on a point.
  • the light emitting source can be turned off when the user eye fundus is bright enough, and the distance from the eyes' current gazing point to the eyes can be detected only by the eye fundus information.
  • the projection control unit 734 can further control the brightness of projected light spot of the projection sub-module 740 based on an ambient light.
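A minimal sketch of the adaptive brightness control described for the projection control unit 734 might look like the following; the target value, gain, and thresholds are invented for illustration and would be tuned for a real device.

```python
def next_brightness(current, fundus_mean, ambient, target=120,
                    gain=0.5, off_threshold=200):
    """Illustrative control step for the projected light spot brightness:
    - turn the source off entirely when the fundus is already bright enough;
    - otherwise take a proportional step toward a target fundus image
      brightness, compensated for ambient light."""
    if fundus_mean >= off_threshold:
        return 0.0                       # fundus bright enough: source off
    step = gain * (target - fundus_mean) - 0.1 * ambient
    return max(0.0, current + step)
```

Each captured fundus frame feeds its mean brightness (and an ambient light reading) back into the controller before the next projection.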
  • the image processing sub-module 730 can also comprise: an image calibration unit 733 used for performing the calibration for the eye fundus image to obtain at least one reference image corresponding to the image presented on the eye fundus.
  • the image analysis unit 731 compares the image acquired by the image collection sub-module for eye fundus 710 with the reference image and performs calculation, thereby acquiring the clearest image.
  • the clearest image can be an obtained image having the minimum difference from the reference image.
  • the difference between the current obtained image and the reference image can be calculated by an existing image processing algorithm such as a classical automatic focusing algorithm of phase difference.
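For instance, taking the sum of squared differences as an (illustrative) difference measure, the clearest image is the candidate with the minimum difference from the reference image; a production system would use a proper focusing algorithm such as the phase-difference method the text mentions.

```python
def image_difference(candidate, reference):
    """Sum of squared differences between a candidate fundus image and the
    reference image produced by the image calibration unit 733."""
    return sum((c - r) ** 2
               for crow, rrow in zip(candidate, reference)
               for c, r in zip(crow, rrow))

def clearest_by_reference(candidates, reference):
    """The clearest image is taken as the candidate with the minimum
    difference from the reference image."""
    return min(range(len(candidates)),
               key=lambda i: image_difference(candidates[i], reference))
```

The sweep over candidates corresponds to the sweep over states of the adjustable lens device: the minimizing index identifies the lens state whose imaging parameters are then used downstream.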
  • the parameter calculation unit 732 can comprise:
  • a determination subunit 7321 for the direction of optical axis of eye used for obtaining the direction of optical axis of eye according to the characteristics of eye when the clearest image is acquired.
  • the characteristics of eyes herein can be obtained from the clearest image or by other means.
  • the gazing direction of line-of-sight of the user's eye can be obtained according to direction of optical axis of eye.
  • the determination subunit 7321 for the direction of optical axis of eye can comprise: a first determination subunit used for obtaining the direction of optical axis of eye according to the characteristics of the light spot when the clearest image is acquired. Compared with the direction of optical axis of eye obtained through the characteristics of pupil and the surface of eyeball, the direction of optical axis of eyes can be determined by the characteristics of the eye fundus at a higher accuracy.
  • the size of the light spot pattern may be larger than or smaller than the visible area of the eye fundus, wherein:
  • the classical matching algorithm of feature points can be used to determine the direction of optical axis of eyes by detecting the position of the light spot pattern in the image relative to the eye fundus; and
  • the direction of optical axis of eyes can be determined through the position of the light spot pattern in the obtained image relative to the original light spot pattern (obtained by the image calibration unit), thereby determining the line-of-sight direction of the user.
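A basic stand-in for such matching is exhaustive template matching, which estimates where the known light spot pattern lies inside the collected image; a classical feature-point matcher would be used in practice, as the text notes.

```python
def best_offset(image, pattern):
    """Estimate the translation of a known light spot pattern inside a
    larger image by exhaustive template matching (sum of squared
    differences at every candidate offset)."""
    ph, pw = len(pattern), len(pattern[0])
    ih, iw = len(image), len(image[0])
    best, best_score = None, None
    for dy in range(ih - ph + 1):
        for dx in range(iw - pw + 1):
            score = sum((image[dy + y][dx + x] - pattern[y][x]) ** 2
                        for y in range(ph) for x in range(pw))
            if best_score is None or score < best_score:
                best, best_score = (dy, dx), score
    return best
```

The recovered offset of the spot pattern relative to its calibrated position is the raw quantity from which the direction of the optical axis, and hence the line of sight, is derived.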
  • the determination subunit 7321 for the direction of optical axis of eye comprises: a second determination subunit used for obtaining the direction of the optical axis of the eye according to the characteristics of the pupil when the clearest image is acquired.
  • the characteristics of pupil herein can be obtained from the clearest image or by other means. Obtaining the direction of optical axis of eye through characteristics of pupil is prior art, therefore it is not explained here.
  • the image processing sub-module 730 can also comprise: a calibration unit 735 for the direction of optical axis of eye used for calibrating the direction of optical axis of eye to more precisely determine the direction of optical axis of eye mentioned above.
  • the known imaging parameters of the system comprise the fixed imaging parameters and the real-time imaging parameters, wherein the real-time imaging parameters are the information about the parameters of the adjustable lens device when the clearest image is obtained, and the information about the parameters can be obtained by real-time recording when the clearest image is acquired.
  • the distance from eyes' gazing point to the eye can be obtained by calculation as follows, particularly:
  • Fig. 7c is a diagram of the eye imaging, and by combining a lens imaging formula in the classical optics theory, formula (1) can be obtained from Fig. 7c: 1/do + 1/de = 1/fe, wherein:
  • do and de are respectively the distances from the current observation object 7010 of the eyes and from the real image 7020 on the retina to the eye-equivalent lens 7030; fe is the equivalent focal length of the eye-equivalent lens 7030; and X is the line-of-sight direction of the eye (which may be obtained through the direction of the optical axis of the eye).
  • Fig. 7d is a diagram of the distance from the eyes' gazing point to the eye obtained based on the known optical parameters of the system and the optical parameters of the eyes; the light spot 7040 in Fig. 7d is converted into a virtual image (not shown in Fig. 7d) through the adjustable lens device 721, and, assuming that the distance from the virtual image to the lens is x (not shown in Fig. 7d), the following system of equations can be obtained by combining with formula (1):
  • dp is the optical equivalence distance from the light spot 7040 to the adjustable lens device 721; di is the optical equivalence distance from the adjustable lens device 721 to the eye-equivalent lens 7030; and fp is the focal length value of the adjustable lens device 721.
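The explicit system of equations is not reproduced in the extracted text above. Under ordinary thin-lens assumptions and the variable definitions just given, one consistent reading, offered here as a sketch and not necessarily the patent's exact formula, is that the adjustable lens device forms a virtual image of the light spot at distance x = dp*fp/(fp - dp) (valid for dp < fp), and the eye's object distance in formula (1) is then do = di + x:

```python
def gaze_distance(d_p, d_i, f_p):
    """Sketch of the gazing-point distance calculation under thin-lens
    assumptions (an interpretation, not the patent's verbatim formula):
    the adjustable lens device 721 (focal length f_p) turns the light spot
    at optical distance d_p into a virtual image at
        x = d_p * f_p / (f_p - d_p)     (requires d_p < f_p),
    and the eye, at optical distance d_i from that lens, focuses on the
    virtual image, so the object distance of formula (1) is
        d_o = d_i + x.
    All distances in metres."""
    if d_p >= f_p:
        raise ValueError("a virtual image requires d_p < f_p")
    x = d_p * f_p / (f_p - d_p)
    return d_i + x
```

For example, with dp = 0.02 m, fp = 0.03 m and di = 0.01 m, the virtual image sits 0.06 m behind the lens and the eye is focused at 0.07 m.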
  • the position of the eyes' gazing point can be easily obtained based on the distance from the observation object 7010 to the eye obtained according to the above-mentioned calculations, as well as the direction of the optical axis of the eye recorded previously, thereby providing a basis for a subsequent further interaction related to the eye.
  • Fig. 8 is an embodiment where the position detection module 800 is applied to the glasses G in a possible implementation of the embodiment of this application, which comprises the recorded contents of the implementation shown in Fig. 7b; specifically, as seen from Fig. 8, the module 800 of this implementation is integrated on the right side (not limited thereto) of the glasses G, comprising:
  • a micro camera 810 whose function is the same as that of the image collection sub-module for eye fundus recorded in the implementation of Fig. 7b, and which is located on the right outer position of the glasses G in order not to influence the sight when the user views the object normally;
  • a first spectroscope 820 whose function is the same as that of the first spectroscopic unit recorded in the implementation of Fig. 7b, and which is located at the intersection point of the gazing direction of the eye A and the incidence direction of the camera 810 at a certain angle of inclination, so as to transmit the light of the observation object entering the eye A and reflect the light from the eye to the camera 810;
  • a lens with adjustable focal length 830 whose function is the same as that of the lens with adjustable focal length recorded in the implementation of Fig. 7b, and which is located between the first spectroscope 820 and the camera 810 to adjust the focal length value in real time, such that the camera 810 can take the clearest eye fundus image at a certain focal length value.
  • Because the eye fundus brightness is usually insufficient, it is preferable to illuminate the eye fundus; in this implementation, the eye fundus is illuminated by a light emitting source 840.
  • the light emitting source 840 herein may be a light emitting source of light invisible to the human eyes, such as a near-infrared light emitting source, which has only a slight impact on eye A and to which the camera 810 is relatively sensitive.
  • the light emitting source 840 is located outside of the right eyeglasses frame, therefore the transmission of the light emitted by the light emitting source 840 to the eye fundus requires a second spectroscope 850 along with the first spectroscope 820.
  • the second spectroscope 850 is located in front of the incident surface of the camera 810; therefore the second spectroscope 850 is also required to transmit the light from the eye fundus to the camera 810.
  • the first spectroscope 820 may have the properties of high reflectivity to infrared light and high transmissivity to visible light.
  • the above properties can be achieved by arranging an infrared reflective film on the side of the first spectroscope 820 toward the eye A.
  • the position detection module 800 is located on the side of the glasses G away from the eye A, therefore the lens may be regarded as a part of the eye A when the optical parameters of the eye are calculated, without needing to know the optical property of the lens.
  • the position detection module 800 may be located on the side of glasses G near the eye A; in this case, it is required to obtain the optical property parameters of the lens in advance and take the influencing factors of the lens into consideration when the distance of the gazing point is calculated.
  • the light emitted from the light emitting source 840 passes through the lens of the glasses G after the reflection of the second spectroscope 850, the transmission through the lens with adjustable focal length 830 and the reflection of the first spectroscope 820, then enters the user's eyes and finally arrives at the retina of the eye fundus.
  • the eye fundus image is taken by the camera 810, through the pupil of eye A via the optical path composed of the first spectroscope 820, the lens with adjustable focal length 830 and the second spectroscope 850.
  • the other parts of the device of the embodiment of this application are also embodied on the glasses G, and because the position detection module and the projection module may simultaneously comprise a device having projection function (the information projection sub-module of the projection module and the projection sub-module of the position detection module, as described above) and an imaging device with adjustable imaging parameters (the parameter adjustment sub-module of the projection module and the adjustable imaging sub-module of the position detection module, as described above), accordingly, in a possible implementation of the embodiment of this application, the functions of the position detection module and the projection module are achieved by the same device.
  • the light emitting source 840 may be used for aiding the projection of the input information as the information projection sub-module of the projection module in addition to the illumination of the position detection module.
  • the light emitting source 840 may simultaneously project the invisible light for illuminating the position detection module and the visible light for aiding the projection of the input information, respectively; in another possible implementation, the light emitting source 840 may also switch between the projection of the invisible light and the visible light asynchronously; and in still another possible implementation, the position detection module may use the input information to achieve the function of illuminating the eye fundus.
  • the first spectroscope 820, the second spectroscope 850 and the lens with adjustable focal length 830 may be used as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module.
  • In a possible implementation, the focal length of the lens with adjustable focal length 830 may be adjusted region by region; different regions correspond respectively to the position detection module and the projection module, and the focal lengths may also be different.
  • Alternatively, the focal length of the lens with adjustable focal length 830 is adjusted as a whole, and other optical devices are arranged at the front end of a light sensing unit (such as a CCD) of the micro camera 810 of the position detection module, to achieve the auxiliary adjustment of the imaging parameters of the position detection module.
  • it is configured such that the optical path from the light emitting plane (from which the input information is projected) of the light emitting source 840 to the eyes is the same as the optical path from the eyes to the micro camera 810, and when the lens with adjustable focal length 830 is adjusted such that the micro camera 810 receives the clearest eye fundus image, the input information projected by the light emitting source 840 is exactly imaged clearly on the eye fundus.
  • the functions of the position detection module and the projection module of the first device for information interaction of the embodiment of this application may be achieved by a set of means, such that the overall system has simple structure, small volume, and improved portability.
  • The structural diagram of the position detection module 900 of another implementation of the embodiment of this application is shown in Fig. 9. It can be seen from Fig. 9 that this implementation is similar to the implementation shown in Fig. 8, comprising the micro camera 910, the second spectroscope 920, and the lens with adjustable focal length 930, except that the projection sub-module 940 of this implementation is a projection sub-module 940 for projecting a light spot pattern, and that the first spectroscope of the implementation shown in Fig. 8 is replaced by a curved spectroscope 950 as the curved spectroscopic device.
  • the curved spectroscope 950 corresponds respectively to the positions of the pupil when the optical axes of the eyes are in different directions, and transmits the image presented on the eye fundus to the image collection sub-module for eye fundus.
  • the camera may take images mixed and superimposed from all angles of the eyeball, but only the eye fundus part passing through the pupil can be imaged clearly in the camera, while the other parts will be out of focus and unable to be imaged clearly; therefore, the imaging of the eye fundus part will not be seriously interfered with, and the features of the eye fundus part may still be detected.
  • the eye fundus image may be acquired well when the eyes gaze in different directions, such that the position detection module of this implementation has a wider range of application and higher detection accuracy.
  • the other parts of the first device for information interaction of the embodiment of this application are embodied on the glasses G.
  • the position detection module and the projection module may also be reused.
  • the projection sub-module 940 may switch between the projection of the light spot pattern and the input information synchronously or asynchronously; alternatively, the projected input information is used by the position detection module as the light spot pattern for detection.
  • the first spectroscope 920, the second spectroscope 950 and the lens with adjustable focal length 930 may be used as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module.
  • the second spectroscope 950 is also used, respectively corresponding to the positions of the pupil in different directions of the eye's optical axis, to transmit light in the optical path between the projection module and the eye fundus. Because the input information projected by the projection sub-module 940 is deformed after passing through the second curved spectroscope 950, in this implementation, the projection module comprises:
  • a reversed deformation processing module (not shown in Fig. 9) used for performing the reversed deformation processing corresponding to the curved spectroscopic device on the input information so that the input information to be presented is received by the eye fundus.
  • the projection module is used for projecting the input information to the user eye fundus in a three-dimensional way.
  • the input information comprises the three-dimensional information respectively corresponding to the two eyes of the user, and the projection module projects respectively the corresponding input information to the two eyes of the user.
  • the first device for information interaction 1000 needs to provide two sets of projection modules respectively corresponding to the two eyes of the user and comprises:
  • the structure of the second projection module is similar to the structure recorded in the embodiment shown in Fig. 10, which integrates the function of the position detection module; it has a structure which may simultaneously achieve the function of the position detection module and the function of the projection module, and comprises the micro camera 1021, the second spectroscope 1022, the second lens with adjustable focal length 1023, and the first spectroscope 1024 (the position detection sub-module is not shown in Fig. 10), with their functions being the same as in the embodiment shown in Fig.
  • the projection sub-module of this implementation is the second projection sub-module 1025 which may project the input information corresponding to the right eye. It may be used for detecting the position of the gazing point of the user's eye and clearly projecting the input information corresponding to the right eye to the right eye fundus.
  • the structure of the first projection module is similar to that of the second projection module 1020, except that it neither has the micro camera nor integrates the function of the position detection module.
  • the first projection module comprises:
  • a first projection sub-module 1011 used for projecting the input information corresponding to the left eye to the left eye fundus
  • a first lens with adjustable focal length 1013 used for adjusting the imaging parameters between the first projection sub-module 1011 and the eye fundus, such that the corresponding input information may be clearly presented on the left eye fundus and that the user can see the input information presented in the image;
  • a third spectroscope 1012 used for transmitting in the optical path between the first projection sub-module 1011 and the first lens with adjustable focal length 1013;
  • a fourth spectroscope 1014 used for transmitting in the optical path between the first lens with adjustable focal length 1013 and the left eye fundus.
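The reversed deformation processing described for the projection module can be sketched as follows. This is a minimal, invented illustration (the cyclic-shift "distortion" and all function names are assumptions, not the patent's actual optics): the input information is pre-distorted with the inverse of the curved spectroscope's mapping, so the two deformations cancel and the eye fundus receives the image as intended.

```python
def distort(x, width):
    """Hypothetical, invertible warp applied by the curved optic:
    a cyclic shift stands in for the real optical deformation."""
    return (x + 2) % width

def pre_distort(desired):
    """Reversed deformation processing: sample the desired image at the
    forward-warped coordinate, so that projecting the result through
    the optic reproduces `desired` at the fundus."""
    width = len(desired)
    return [desired[distort(x, width)] for x in range(width)]

def project(image):
    """Simulate the optic: pixel x of the projected image lands at
    position distort(x) on the fundus."""
    width = len(image)
    fundus = [0] * width
    for x in range(width):
        fundus[distort(x, width)] = image[x]
    return fundus

row = [10, 20, 30, 40, 50]
assert project(pre_distort(row)) == row   # the two deformations cancel
```

Any invertible distortion model would work in place of the cyclic shift; the point is only that the pre-distortion is the inverse of the optic's forward mapping.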
  • the input information seen by the user has the appropriate three-dimensional display effect, bringing better user experience. Furthermore, when the input information inputted to the user contains three-dimensional space information, the user may see the three-dimensional space information by means of the three-dimensional projection.
  • the input information can only be inputted correctly when the user makes a specific hand gesture in a specific position in the three-dimensional space
  • the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, so that the user can make the hand gesture prompted by the input information in the specific position, while other people are unable to know the spatial information even if they see the hand gesture made by the user, thereby improving the secrecy effect of the input information.
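The secrecy effect described above can be sketched as a position-gated gesture check. This is an illustrative sketch only; the gesture label, coordinates, and tolerance are invented, and real gesture recognition is outside this fragment:

```python
import math

# Assumed secret prompt, visible only to the wearer via 3-D projection.
SECRET_GESTURE = "pinch"                 # hypothetical gesture label
SECRET_POSITION = (0.30, 0.10, 0.45)     # metres, in the glasses' frame
TOLERANCE = 0.05                         # 5 cm acceptance radius

def accept(gesture, position):
    """Input is accepted only when the right gesture is made close
    enough to the secret 3-D position; an onlooker sees the gesture
    but not the projected position."""
    if gesture != SECRET_GESTURE:
        return False
    return math.dist(position, SECRET_POSITION) <= TOLERANCE

assert accept("pinch", (0.31, 0.09, 0.44))        # right gesture, right place
assert not accept("pinch", (0.60, 0.10, 0.45))    # right gesture, wrong place
assert not accept("swipe", (0.30, 0.10, 0.45))    # wrong gesture
```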
  • Fig. 11 is the structural diagram of still another first device for information interaction 1100 provided by the embodiment of this application; and the specific embodiments of this application have no restriction on the specific realization of the first device for information interaction 1100.
  • the first device for information interaction 1100 may comprise:
  • a processor 1110, a communications interface 1120, a memory 1130, and a communication bus 1140.
  • the processor 1110, the communications interface 1120, and the memory 1130 communicate with each other via the communication bus 1140.
  • the communications interface 1120 is used for communicating with network elements such as a client.
  • the processor 1110 is used for executing a program 1132, specifically executing the relevant steps of the above-mentioned method embodiment.
  • the program 1132 may comprise program codes which comprise computer operating instructions.
  • the processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of this application.
  • the memory 1130 is used for storing the program 1132.
  • the memory 1130 may contain a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk storage.
  • the program 1132 may be used to make the first device for information interaction 1100 perform the following steps:
  • a computer readable medium comprising computer readable instructions which, when executed, perform the following operations: executing the steps S120, S140 and S160 of the method in the above-mentioned embodiment.
  • the embodiment of this application also provides a wearable device 1200 containing the first device for information interaction 1210 recorded by the above-mentioned embodiment.
  • the wearable device may be a pair of glasses.
  • this pair of glasses may have a structure as shown in Figs 8 to 10.
  • the embodiment of this application provides a method for information interaction, comprising:
  • S1310, a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the input information corresponding to the device;
  • S1320, an image providing step: providing the image to the external;
  • S1330, an information input step: receiving the at least one piece of input information provided from the external;
  • S1340, an execution step: executing the operation corresponding to the at least one piece of input information.
  • the image with the embedded digital watermark is provided to the external, such that an external device may acquire the corresponding input information according to the image and then return it to the method of this application; after the input information provided from the external is received in the information input step, the corresponding operation is carried out automatically, without manual operation by the user, which is convenient for the user.
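The embedding and extraction idea behind steps S1310 and S1330 can be sketched with a minimal least-significant-bit scheme. This is an illustration only; the patent does not fix a specific watermarking algorithm, and the flat pixel list and payload below are invented:

```python
def embed(pixels, payload: bytes):
    """Watermark embedding step (sketch): hide the payload in the
    least significant bit of successive grey values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the LSB only
    return out

def extract(pixels, n_bytes: int):
    """Information acquisition step (sketch): read the hidden payload
    back out of the least significant bits."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

image = [128] * 256                      # toy 16x16 grey image
marked = embed(image, b"unlock:1234")    # hypothetical input information
assert extract(marked, 11) == b"unlock:1234"
```

Changing only the least significant bits leaves the visible image essentially unchanged, which is why the watermark is invisible to an onlooker while remaining machine-readable.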
  • digital watermarks may be classified, according to their symmetry, into symmetrical and asymmetrical watermarks. The embedding key and detection key of a conventional symmetrical watermark are identical, so that the watermark can easily be removed from a digital carrier once the detection method and key are disclosed.
  • asymmetrical watermarking technology uses a private key to embed a watermark and a public key to extract and verify it, such that it is difficult for an attacker to use the public key to destroy or remove the watermark embedded with the private key. Therefore, in the embodiment of this application, an asymmetrical digital watermark may be used.
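The private-embed/public-verify asymmetry can be illustrated with textbook RSA signing of the watermark payload. The tiny primes, payload, and function names below are invented for exposition only and would never be used for real security; the point is that verification needs only the public key `(n, e)`, while forging requires the private exponent `d`:

```python
import hashlib

p, q = 61, 53
n = p * q            # 3233, public modulus
e = 17               # public exponent
d = 2753             # private exponent (e*d = 1 mod lcm(p-1, q-1))

def digest(payload: bytes) -> int:
    """Reduce a payload hash into the RSA message space."""
    return int.from_bytes(hashlib.sha256(payload).digest(), "big") % n

def sign(payload: bytes) -> int:
    """Embedder side: sign with the PRIVATE key."""
    return pow(digest(payload), d, n)

def verify(payload: bytes, signature: int) -> bool:
    """Detector side: verify with the PUBLIC key only."""
    return pow(signature, e, n) == digest(payload)

sig = sign(b"account:alice")             # hypothetical watermark payload
assert verify(b"account:alice", sig)
assert not verify(b"account:alice", (sig + 1) % n)   # tampered signature fails
```

Knowing `(n, e)` lets anyone check the watermark, but producing a valid signature for a new payload would require recovering `d`, which mirrors why the public key cannot be used to forge or strip a watermark embedded with the private key.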
  • the embedded input information to be contained in the digital watermark may be preset by the user according to his or her personalized requirement or actively configured for the user by the system.
  • S1320 may comprise: displaying the image.
  • in the method of the embodiment of this application, the step S1320 may also be as follows: sending the image to the corresponding device through interaction between devices.
  • the image is a login interface of a user environment
  • the image is a login interface of a user's electronic bank account, the input information being the name and password of the electronic bank account; after the input information is received, the user's electronic bank account is logged in such that the user can enter the user environment of the electronic bank account and in turn use the corresponding function.
  • the image is a screen-locking interface
  • the operation corresponding to the input information is unlocking the corresponding screen according to the input information.
  • the input information is the unlock information corresponding to the screen-locking interface; after the input information is received, the cell phone screen is unlocked, and the user may use the corresponding function of the cell phone system in the user environment.
  • the method may also comprise:
  • an authorization determining step for determining whether a user is an authorized user, and conducting the execution step only when the user is an authorized user.
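The authorization determining step amounts to gating the execution step on the user's identity. A minimal sketch, with an assumed whitelist standing in for real user identification (which this fragment does not specify):

```python
AUTHORIZED_USERS = {"alice", "bob"}      # hypothetical authorized-user set

def handle_input(user: str, input_information: str) -> str:
    """Run the authorization determining step, then the execution step
    only for an authorized user."""
    if user not in AUTHORIZED_USERS:     # authorization determining step
        return "rejected"
    return f"executed:{input_information}"  # execution step

assert handle_input("alice", "unlock") == "executed:unlock"
assert handle_input("eve", "unlock") == "rejected"
```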
  • the embodiment of this application provides a second device for information interaction 1400, comprising:
  • a watermark embedding module 1410 used for embedding at least one digital watermark into an image related to the second device for information interaction 1400, the at least one digital watermark containing the input information corresponding to the second device for information interaction 1400;
  • an image providing module 1420 used for providing the image to the external;
  • an information input module 1430 used for receiving the input information provided from the external
  • an execution module 1440 used for executing the corresponding operation according to the received input information.
  • the device of the embodiment of this application provides the image with the embedded digital watermark to the external, such that an external device may acquire the corresponding input information according to the image and then return it to the device of the embodiment of this application; after the input information provided from the external is received, the execution module 1440 automatically carries out the corresponding operation, without manual operation by the user, which is convenient for the user.
  • the image providing module 1420 of the embodiment of this application comprises:
  • a display sub-module 1421 used for displaying the image.
  • the image providing module 1420 may also be, for example, an interaction interface, and the image is transferred to other devices (such as the above-mentioned first device for information interaction) by interaction.
  • the image may be a login interface of a user environment
  • the execution module 1440 is used for logging in to the user environment according to the input information.
  • the image may be a screen-locking interface
  • the execution module 1440 is used for unlocking the corresponding screen according to the input information.
  • the device 1400 may also comprise:
  • an authorization determination module 1450 used for determining whether a user is an authorized user, and triggering the corresponding operation by the execution module only when the user is an authorized user.
  • the embodiment of this application also provides an electronic terminal 1500 comprising the above-mentioned device for information interaction 1510.
  • the electronic terminal 1500 is an electronic device such as a cell phone, a tablet computer, a computer, an electronic access control device, or an on-board electronic device.
  • Fig. 16 is the structural diagram of still another second device for information interaction 1600 provided by the embodiment of this application; and the specific embodiments of this application have no restriction on the specific realization of the second device for information interaction 1600.
  • the second device for information interaction 1600 may comprise:
  • a processor 1610, a communications interface 1620, a memory 1630, and a communication bus 1640.
  • the processor 1610, the communications interface 1620, and the memory 1630 communicate with each other via the communication bus 1640.
  • the communications interface 1620 is used for communicating with network elements such as a client.
  • the processor 1610 is used for executing a program 1632, specifically executing the relevant steps of the method embodiment shown in Fig. 13.
  • the program 1632 may comprise program codes which comprise computer operating instructions.
  • the processor 1610 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of this application.
  • the memory 1630 is used for storing the program 1632.
  • the memory 1630 may contain a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk storage.
  • the program 1632 may be used to make the second device for information interaction 1600 perform the following steps:
  • a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the at least one piece of input information corresponding to the device;
  • an image providing step: providing the image to the external;
  • an information input step: receiving the at least one piece of input information provided from the external and executing the operation corresponding to the input information.
  • a computer readable medium comprising computer readable instructions which, when executed, perform the following operations: executing the steps S1310, S1320, S1330 and S1340 of the method in the above-mentioned embodiment.
  • Fig. 17 is an application example diagram of a first and a second device for information interaction of an embodiment of this application.
  • it comprises the electronic device recorded in the embodiment shown in Fig. 15, namely the cell phone device 1710, and the wearable device recorded in the embodiment shown in Fig. 12, namely the smart glasses 1720.
  • the smart glasses 1720 comprise the first device for information interaction described in the embodiments shown in Figs. 5 to 11; the function of the image acquisition module (mainly the image collection sub-module) of the first device for information interaction is achieved by the camera 1721 on the smart glasses 1720; the information acquisition module (not shown in Fig. 17) and the information providing module (not shown in Fig. 17) of the device for information interaction may be integrated in the original processing module of the smart glasses 1720 or arranged on the frame of the smart glasses 1720 (for example, on the temples, or as a part of the frame), for realizing their functions.
  • the cell phone device 1710 comprises the second device for information interaction shown in Fig. 14.
  • the function of the display sub-module of the second device for information interaction is achieved by the display module of the cell phone device 1710; and the watermark embedding module, the information input module, and the execution module may be integrated in the existing processing module and communication module of the cell phone device 1710 or arranged in the cell phone device 1710 as a separate module, for realizing their functions.
  • the image is the screen-locking interface 1711 (for example, the image shown in Fig. 2a) of the cell phone device 1710, and the input information is the corresponding unlock information.
  • the watermark embedding module embeds the digital watermark carrying the unlock information into the screen-locking interface 1711 of the cell phone device 1710 in advance; when a user needs to use the cell phone device 1710, the screen-locking interface is displayed by the display module of the cell phone device 1710 upon a specific operation (for example, pressing the power button of the cell phone device 1710).
  • the user will look at the display screen of the cell phone device 1710, so that the camera 1721 of the smart glasses 1720 can acquire the image displayed on the screen-locking interface 1711; the information acquisition module of the first device for information interaction then automatically acquires the unlock information according to the image, and the information providing module (for example, a wireless communication interface between devices) sends the unlock information to the cell phone device 1710.
  • the corresponding unlock operation is carried out by the execution module such that the cell phone device 1710 is released from the locked status without any other actions, so as to enter the user environment of the cell phone system.
  • the device and method for information interaction of the embodiments of this application make the corresponding operations natural and convenient for the user (the cell phone is automatically unlocked merely by the user glancing at its screen-locking interface), providing a better user experience.
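The Fig. 17 scenario can be sketched end to end. All class and attribute names here are invented, and the "watermark" is represented as a plain payload carried alongside the image rather than actually embedded in pixels; the sketch only shows the message flow between the two devices:

```python
class Phone:
    """Stands in for the cell phone device (second device)."""
    def __init__(self, secret):
        self.secret = secret             # hypothetical unlock information
        self.locked = True

    def lock_screen_image(self):
        # image providing step: the lock screen carries the hidden payload
        return ("lock-screen-pixels", self.secret)

    def receive(self, input_information):
        # information input step + execution step: unlock on a match
        if input_information == self.secret:
            self.locked = False

class Glasses:
    """Stands in for the smart glasses (first device)."""
    def glance(self, image):
        # camera capture + watermark extraction, collapsed into one step
        _pixels, watermark = image
        return watermark

phone = Phone(secret="unlock:1234")
glasses = Glasses()
phone.receive(glasses.glance(phone.lock_screen_image()))
assert phone.locked is False             # unlocked by a glance
```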
  • when implemented in the form of software functional units and sold or used as an independent product, the functions may be stored in a computer-readable storage medium.
  • the corresponding software product is stored in a readable storage medium and comprises several instructions for causing a computer apparatus (which may be a personal computer, a server, a network apparatus, or the like) to execute all or some of the steps of the method in the individual embodiments of the present invention.
  • the aforementioned storage medium comprises any medium that may store program codes, such as a USB-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and a device for information interaction are disclosed. On one side, the method comprises: acquiring an image related to a device, the image containing at least one digital watermark; acquiring at least one piece of input information corresponding to the device contained in the at least one digital watermark; and providing the input information to the device. On the other side, the method comprises: embedding at least one digital watermark into an image related to a device, the digital watermark containing the input information corresponding to the device; providing the image to the external; and receiving the input information obtained from the external and executing the operation corresponding to the input information. In the embodiments of the invention, a user can use a device when needed without having to remember the input information, which greatly facilitates use and improves the user experience.
PCT/CN2014/081494 2013-11-15 2014-07-02 Information interaction WO2015070623A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310573092.8 2013-11-15
CN201310573092.8A CN103631503B (zh) 2013-11-15 2013-11-15 信息交互方法及信息交互装置

Publications (1)

Publication Number Publication Date
WO2015070623A1 true WO2015070623A1 (fr) 2015-05-21

Family

ID=50212630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/081494 WO2015070623A1 (fr) 2013-11-15 2014-07-02 Interaction d'informations

Country Status (2)

Country Link
CN (1) CN103631503B (fr)
WO (1) WO2015070623A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3445077A1 * 2017-08-16 2019-02-20 Beijing Xiaomi Mobile Software Co., Ltd. Unlocking mobile terminal in augmented reality
CN113116358A (zh) * 2019-12-30 2021-07-16 Huawei Technologies Co., Ltd. Electrocardiogram display method and apparatus, terminal device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677631A (zh) * 2013-11-15 2014-03-26 Beijing Zhigu Rui Tuo Tech Co., Ltd. Information interaction method and information interaction device
CN103631503B (zh) * 2013-11-15 2017-12-22 Beijing Zhigu Rui Tuo Tech Co., Ltd. Information interaction method and information interaction device
KR20170011617A (ko) * 2015-07-23 2017-02-02 LG Electronics Inc. Mobile terminal and control method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970307A (zh) * 2012-12-21 2013-03-13 NetQin Wuxian (Beijing) Technology Co., Ltd. Password security system and password security method
CN103116717A (zh) * 2013-01-25 2013-05-22 Dongguan Yulong Telecommunication Technology Co., Ltd. User login method and system
CN103616998A (zh) * 2013-11-15 2014-03-05 Beijing Zhigu Rui Tuo Tech Co., Ltd. User information acquisition method and user information acquisition device
CN103631503A (zh) * 2013-11-15 2014-03-12 Beijing Zhigu Rui Tuo Tech Co., Ltd. Information interaction method and information interaction device
CN103678971A (zh) * 2013-11-15 2014-03-26 Beijing Zhigu Rui Tuo Tech Co., Ltd. User information extraction method and user information extraction device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6978376B2 (en) * 2000-12-15 2005-12-20 Authentica, Inc. Information security architecture for encrypting documents for remote access while maintaining access control
CN101449265A (zh) * 2006-03-15 2009-06-03 Jerry M. Whitaker Mobile global virtual browser with a head-mounted display for browsing and interacting with the World Wide Web
JP5158007B2 (ja) * 2009-04-28 2013-03-06 Sony Corporation Information processing apparatus, information processing method, and program
CN103368617A (zh) * 2013-06-28 2013-10-23 Dongguan Yulong Telecommunication Technology Co., Ltd. Smart device interaction system and smart device interaction method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3445077A1 * 2017-08-16 2019-02-20 Beijing Xiaomi Mobile Software Co., Ltd. Unlocking mobile terminal in augmented reality
US11051170B2 (en) 2017-08-16 2021-06-29 Beijing Xiaomi Mobile Software Co., Ltd. Unlocking mobile terminal in augmented reality
CN113116358A (zh) * 2019-12-30 2021-07-16 Huawei Technologies Co., Ltd. Electrocardiogram display method and apparatus, terminal device, and storage medium

Also Published As

Publication number Publication date
CN103631503B (zh) 2017-12-22
CN103631503A (zh) 2014-03-12

Similar Documents

Publication Publication Date Title
EP0922271B1 Personal identification apparatus
WO2018040307A1 In vivo detection method and device based on infrared binocular visible images
US20170323167A1 (en) Systems And Methods Of Biometric Analysis With A Specularity Characteristic
WO2015070623A1 Information interaction
CN106682540A (zh) 一种智能防偷窥方法及装置
CN106503680B (zh) 用于移动终端虹膜识别的引导指示人机接口系统和方法
KR101645084B1 (ko) 실외 및 실내에서 홍채인식이 가능한 손 부착형 웨어러블 장치
CN103678971B (zh) 用户信息提取方法及用户信息提取装置
CN109726694B (zh) 一种虹膜图像采集方法及装置
JP2008241822A (ja) 画像表示装置
JP2007135149A (ja) 移動携帯端末
KR101231068B1 (ko) 생체 정보 수집 장치 및 방법
CN103616998B (zh) 用户信息获取方法及用户信息获取装置
CN108140114A (zh) 虹膜识别
WO2017113286A1 Authentication method and apparatus
KR20180134280A (ko) 3차원 깊이정보 및 적외선정보에 기반하여 생체여부의 확인을 행하는 얼굴인식 장치 및 방법
KR20150139183A (ko) 정맥인식이 가능한 손목용 웨어러블 장치
US20160155000A1 (en) Anti-counterfeiting for determination of authenticity
KR20090132839A (ko) 전자 id 카드 발급 시스템 및 방법
CN108135468A (zh) 使用光场显微镜检查的眼科手术
WO2015070624A1 Information interaction
CN103761653B (zh) 防伪方法及防伪装置
JP2001215109A (ja) 虹彩画像入力装置
TWM463878U (zh) 活體辨識系統以及身份認證裝置
JP2004233425A (ja) 画像表示装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14862935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14862935

Country of ref document: EP

Kind code of ref document: A1