CN103984413A - Information interaction method and information interaction device - Google Patents


Info

Publication number
CN103984413A
Authority
CN
China
Prior art keywords
user
information
eyes
gaze
sight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410209581.XA
Other languages
Chinese (zh)
Other versions
CN103984413B (en)
Inventor
杜琳 (Du Lin)
张宏江 (Zhang Hongjiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd filed Critical Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201410209581.XA priority Critical patent/CN103984413B/en
Publication of CN103984413A publication Critical patent/CN103984413A/en
Application granted granted Critical
Publication of CN103984413B publication Critical patent/CN103984413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

Embodiments of the invention disclose an information interaction method and an information interaction device. The method comprises: obtaining eye-contact information between a user and another user; determining whether the eye-contact information satisfies a set interaction condition; and, in correspondence with the interaction condition being satisfied, performing information interaction concerning the user and/or the other user with an external device corresponding to the other user. According to embodiments of the invention, personal information is exchanged naturally between users according to the eye contact between them: no other ongoing communication between the users needs to be interrupted, personal information is exchanged only between users who have made eye contact, and the security of the personal information exchange is thereby ensured.

Description

Information interaction method and information interaction device
Technical field
The present application relates to the technical field of information interaction, and in particular to an information interaction method and an information interaction device.
Background technology
In business and meeting settings, when people meet for the first time they typically decide during the conversation whether to exchange business cards. When an exchange is needed, they interrupt their current conversation, find their own paper or electronic business card (an electronic business card may include, for example, the user's organization name, name, and mobile-phone number) to give to the other party, and receive the other party's card or enter the other party's contact information.
Summary of the invention
The object of the present application is to provide an information interaction scheme.
In a first aspect, the present application provides an information interaction method, comprising:
obtaining eye-contact information between a user and another user;
determining whether the eye-contact information satisfies a set interaction condition; and
in correspondence with the interaction condition being satisfied, performing information interaction concerning the user and/or the other user with an external device corresponding to the other user.
In a second aspect, the present application provides an information interaction device, comprising:
an acquisition module, configured to obtain eye-contact information between a user and another user;
a confirmation module, configured to determine whether the eye-contact information satisfies a set interaction condition; and
an interaction module, configured to perform, in correspondence with the interaction condition being satisfied, information interaction concerning the user and/or the other user with an external device corresponding to the other user.
In at least one embodiment of the present application, personal information is exchanged naturally between users according to the eye contact between them: no other ongoing communication between the users needs to be interrupted, personal information is exchanged only between users who have made eye contact, and the security of the exchange is thereby ensured.
Brief description of the drawings
Fig. 1 is a flowchart of an information interaction method according to an embodiment of the present application;
Fig. 2 is a flowchart of another information interaction method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of obtaining the eye-contact duration between the user and another user in an information interaction method according to an embodiment of the present application;
Fig. 4 is a schematic structural block diagram of an information interaction device according to an embodiment of the present application;
Fig. 5a is a schematic structural block diagram of another information interaction device according to an embodiment of the present application;
Fig. 5b is a schematic structural block diagram of a gaze confirmation unit of an information interaction device according to an embodiment of the present application;
Fig. 5c is a schematic structural block diagram of a gaze confirmation unit of another information interaction device according to an embodiment of the present application;
Fig. 6 is a schematic structural block diagram of a near-eye wearable device according to an embodiment of the present application;
Fig. 7 is a schematic structural block diagram of smart glasses according to an embodiment of the present application;
Fig. 8 is a schematic structural block diagram of yet another information interaction device according to an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in further detail below with reference to the accompanying drawings (in which identical reference numerals denote identical elements) and the following embodiments. The following embodiments illustrate the application but do not limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in this application are used only to distinguish different steps, devices, or modules; they denote neither any particular technical meaning nor any necessary logical order between them.
As shown in Fig. 1, an embodiment of the present application provides an information interaction method, comprising:
S110: obtaining eye-contact information between a user and another user;
S120: determining whether the eye-contact information satisfies a set interaction condition;
S130: in correspondence with the interaction condition being satisfied, performing information interaction concerning the user and/or the other user with an external device corresponding to the other user.
For example, an information interaction device provided by the present application serves as the execution body of this embodiment and performs S110 to S130. Specifically, the information interaction device may be provided in a user device as software, hardware, or a combination of the two, or may itself be that user device. Such user devices include, but are not limited to, smartphones, smart glasses, and smart helmets, where smart glasses further divide into smart frame glasses and smart contact lenses. In the embodiments of the present application, the user is the user of the information interaction device, and the other user is the user of the external device.
In the embodiments of the present application, the information interaction mainly comprises exchanging personal information of the user and/or the other user.
In the embodiments of the present application, personal information is exchanged naturally between users according to the eye contact between them: no other ongoing communication needs to be interrupted, personal information is exchanged only between users who have made eye contact, and the security of the exchange is thereby ensured.
The steps of the embodiments of the present application are further illustrated by the following embodiments:
As shown in Fig. 2, in a possible embodiment, before step S110 the method further comprises:
S100: determining whether the user is in a face-to-face exchange state;
and proceeding to step S110 when the user is confirmed to be in the face-to-face exchange state.
If the user is not communicating with another user, step S110 cannot obtain the eye-contact information and would have to attempt to obtain it repeatedly. In the present embodiment, therefore, the method does not try to obtain the eye-contact information continuously; instead, step S110 is triggered only once the user is confirmed to be communicating with another user.
In a possible embodiment, determining whether the user is in a face-to-face exchange state comprises:
obtaining sound information around the user; and
analyzing the sound information by speech detection to determine whether the user is in a face-to-face exchange state.
In a possible implementation of this embodiment, the sound information around the user can be obtained by a sound sensor, which may be part of the information interaction device. In another possible implementation, the sound information may also be obtained from another device by means of communication.
In the present embodiment, when the user communicates face to face with another user, the user's voice and at least one other user's voice are generally present. By using a speech detection algorithm to determine whether the sound information contains the user's voice and at least one other user's voice, it can be determined whether the user is in a face-to-face exchange state.
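As a rough illustration of the control flow described above (and not the patent's own algorithm), the following sketch decides from ambient audio frames whether a conversation appears to be in progress. A real implementation would use a proper speech detection algorithm with speaker discrimination; the frame-energy feature, function names, and all thresholds here are illustrative assumptions.

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame (a list of samples)."""
    return sum(s * s for s in samples) / len(samples)

def in_face_to_face_exchange(frames, energy_threshold=0.01, min_voiced_ratio=0.3):
    """Return True if enough frames look voiced to suggest a conversation.

    Crude stand-in for speech detection: counts frames whose energy
    exceeds a threshold. All thresholds are illustrative assumptions.
    """
    if not frames:
        return False
    voiced = sum(1 for f in frames if frame_energy(f) > energy_threshold)
    return voiced / len(frames) >= min_voiced_ratio
```

In practice the decision would also require distinguishing the user's own voice from at least one other voice, which this sketch deliberately omits.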
In another possible embodiment of the present application, determining whether the user is in a face-to-face exchange state comprises:
obtaining head-motion posture information of the user; and
analyzing the head-motion posture information by head-motion pattern recognition to determine whether the user is in a face-to-face exchange state.
In the embodiments of the present application, the head-motion posture information of the user can be obtained by a motion posture sensor arranged at the user's head, or may be obtained from another device by means of communication.
In the embodiments of the present application, because a user exhibits specific head-motion and posture features when in a face-to-face exchange state, head-motion pattern recognition can determine whether the user is in such a state. Unlike the speech-detection embodiment, the present embodiment can make this determination even when no spoken exchange takes place between the users.
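The head-motion-pattern idea can be sketched in the same hedged spirit: during a face-to-face exchange the head tends to stay roughly level and fairly still. The pitch-angle feature and both thresholds below are illustrative assumptions, not values from the patent.

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def head_pose_suggests_exchange(pitch_deg, max_abs_pitch=20.0, max_variance=25.0):
    """True if sampled head pitch angles (degrees) look like a steady,
    roughly level head, as one might expect during conversation."""
    if not pitch_deg:
        return False
    return abs(mean(pitch_deg)) <= max_abs_pitch and variance(pitch_deg) <= max_variance
```

A real recognizer would learn richer motion patterns (nodding cadence, gaze stabilization) rather than a single variance threshold.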
In another possible embodiment, the sound information and the head-motion posture information described above can both be used to determine whether the user is in a face-to-face exchange state, which can further improve the accuracy of the determination.
Of course, those skilled in the art will recognize that other methods for detecting whether a user is in a conversation state can also be applied in the embodiments of the present application.
S110: obtaining the eye-contact information between a user and another user.
In the embodiments of the present application, the eye-contact information comprises an eye-contact duration.
Here, the eye-contact duration is the time during which the user is gazing at the other user's eyes while the other user is also gazing at the user's eyes.
In the embodiments of the present application, obtaining the eye-contact duration between the user and the other user comprises:
obtaining first time information during which the user gazes at the other user's eyes;
obtaining second time information during which the other user gazes at the user's eyes; and
obtaining the eye-contact information from the first time information and the second time information.
As shown in Fig. 3, from the first time information and the second time information, the overlap between the time period corresponding to the first time information and the time period corresponding to the second time information can be obtained; this overlapping period is the eye-contact duration between the user and the other user.
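Interpreting each user's gaze record as a list of (start, end) intervals, the overlap computation of Fig. 3 can be sketched as follows. This is a minimal sketch, assuming (as an illustration) that the intervals within each list do not overlap one another, so a pairwise sum does not double-count.

```python
def overlap(a, b):
    """Length of the intersection of two (start, end) intervals; 0 if disjoint."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def eye_contact_duration(first_time_info, second_time_info):
    """Total time during which both users' gaze intervals overlap.

    Each argument is a list of (start, end) intervals during which one
    user gazes at the other's eyes. The pairwise sum is correct under the
    assumption that intervals within each list are non-overlapping.
    """
    return sum(overlap(a, b) for a in first_time_info for b in second_time_info)
```

For example, gaze intervals [(0, 5), (10, 15)] against [(3, 12)] overlap for 2 seconds in each pair, giving a total eye-contact duration of 4.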
In the embodiments of the present application, obtaining the first time information during which the user gazes at the other user's eyes comprises:
determining whether the user is gazing at the other user's eyes; and
recording the time during which the user gazes at the other user's eyes as the first time information.
In the embodiments of the present application, determining whether the user is gazing at the other user's eyes is mainly realized by detecting whether the user's gaze point coincides with the other user's eyes.
Of course, since detection may involve some error, in a possible embodiment determining whether the user is gazing at the other user's eyes may instead detect whether the user's gaze point falls within a preset region containing the other user's eyes. The preset region here may be, for example, the region within the other user's eye sockets, or even the other user's facial region.
In a possible embodiment, determining whether the user is gazing at the other user's eyes comprises:
obtaining an image corresponding to the user's field of view;
obtaining the user's gaze direction; and
determining whether the object on the image corresponding to the user's gaze direction is the other user's eyes.
In the embodiments of the present application, the image corresponding to the user's field of view refers to an image of the objects in the user's viewing area (the viewing area may be a subset of the user's field of view or may coincide with it). For example, the image can be captured by an image capture module of a near-eye device shooting along the direction the user's eyes face. The relationship between the image and the user's viewing area (fully coinciding, partially overlapping, and so on) can be determined by calibration.
In the embodiments of the present application, the user's gaze direction can be obtained by a gaze tracking method.
In the embodiments of the present application, determining whether the object on the image corresponding to the user's gaze direction is the other user's eyes may comprise:
calibrating the correspondence between the user's gaze direction and the images captured by the image capture module;
analyzing the image, for example with a face detection algorithm, to identify the eye regions of the faces in the image; and
determining from the calibrated correspondence whether the user's gaze direction corresponds to an identified eye region, and thus whether the user's gaze direction points at the other user's eyes.
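One way to picture the calibrated correspondence is as a mapping from gaze angles to image pixel coordinates, followed by a point-in-box test against the eye regions returned by face detection. The affine calibration, coordinate conventions, and all numbers below are illustrative assumptions.

```python
def gaze_to_pixel(yaw_deg, pitch_deg, calib):
    """Map a gaze direction to an image pixel via an affine calibration.
    calib = (ax, bx, ay, by): coefficients found in a calibration step."""
    ax, bx, ay, by = calib
    return (ax * yaw_deg + bx, ay * pitch_deg + by)

def point_in_box(point, box):
    """box = (x0, y0, x1, y1) bounding an eye region detected in the image."""
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def gazing_at_eyes(yaw_deg, pitch_deg, calib, eye_boxes):
    """True if the gaze direction maps into any detected eye region."""
    p = gaze_to_pixel(yaw_deg, pitch_deg, calib)
    return any(point_in_box(p, b) for b in eye_boxes)
```

The preset-region variant mentioned above would simply enlarge each box to cover the eye sockets or the whole face.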
Because the user's gaze direction coinciding with an object does not absolutely confirm that the user is gazing at it (for example, the user may be lost in thought with unfocused eyes, or may be looking at a transparent object such as content on a see-through display), in an optional embodiment, to improve the accuracy of the gaze determination, determining whether the user is gazing at the other user's eyes further comprises:
obtaining a first distance of the user's gaze point relative to the user;
obtaining a second distance of the other user relative to the user; and
determining whether the first distance matches the second distance.
In the embodiments of the present application, when the first distance matches the second distance, and the user's gaze direction additionally corresponds to the region of the other user's eyes on the image, it can be confirmed with greater accuracy that the user is gazing at the other user's eyes.
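A minimal sketch of the distance-matching check, with the relative tolerance as an illustrative assumption:

```python
def distances_match(first_distance, second_distance, rel_tol=0.2):
    """True if the gaze-point depth roughly equals the other user's depth.

    first_distance: distance of the user's gaze point from the user;
    second_distance: distance of the other user from the user (e.g. from a
    depth sensor). The 20% tolerance is an assumption for illustration.
    """
    if first_distance <= 0 or second_distance <= 0:
        return False
    return abs(first_distance - second_distance) <= rel_tol * second_distance
```

When the user stares through a see-through display, the gaze-point depth would differ sharply from the other user's depth, and the check fails as intended.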
In the embodiments of the present application, the first distance can be obtained in several ways, for example one of the following:
1) detecting the gaze directions of the user's two eyes respectively, obtaining the position of the intersection of the two gaze directions relative to the user, and thereby obtaining the first distance;
2) detecting the gaze direction of the user's eyes together with a depth map along that gaze direction, and obtaining the first distance from the correspondence between the gaze direction and the objects in the depth map;
3) capturing images of the fundus of the eye, and obtaining the first distance from, among other things, the imaging parameters of the fundus-image capture module at the moment a clear fundus image is collected.
In a possible embodiment, the second distance of the other user relative to the user can be obtained by depth detection, for example by a depth sensor arranged at the user's side.
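Option 1) above, triangulating the fixation distance from the two eyes' gaze directions, can be sketched for the simple case of a fixation point near the midline between the eyes. The interpupillary distance and angle values are illustrative assumptions.

```python
import math

def vergence_distance(ipd_m, left_inward_deg, right_inward_deg):
    """Estimate the fixation distance (the 'first distance') from binocular
    vergence, assuming the fixation point lies near the midline.

    ipd_m: interpupillary distance in metres; the two angles are each
    eye's inward rotation from straight ahead, in degrees. Geometry:
    d = ipd / (tan(theta_L) + tan(theta_R)).
    """
    t = math.tan(math.radians(left_inward_deg)) + math.tan(math.radians(right_inward_deg))
    if t <= 0:
        return math.inf  # parallel or diverging gaze: focus at infinity
    return ipd_m / t
```

With a 64 mm interpupillary distance, inward rotations of about 0.9 degrees per eye correspond to a fixation distance of roughly 2 metres.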
Since during a conversation a user generally gazes at the other party along a certain specific direction, the predetermined region of the image corresponding to the user's gaze direction during conversation can be determined in advance or by machine learning. In an optional, simplified embodiment, therefore, determining whether the user is gazing at the other user's eyes may comprise:
obtaining an image corresponding to the user's field of view; and
determining whether the other user's eyes in the image lie within the predetermined region.
When the user later reads the other user's personal information, a facial photograph of the other user makes it easier to associate that personal information with the other user. Moreover, according to the embodiments described above, when the user is gazing at the other user's eyes while step S110 is performed, the image corresponding to the user's field of view may contain the other user (for example, the other user's face). In an optional embodiment, therefore, the method further comprises:
in correspondence with the user gazing at the other user's eyes, saving a user image containing the other user from the image.
In the embodiments of the present application, obtaining the second time information during which the other user gazes at the user's eyes comprises:
obtaining the second time information from the outside.
The "outside" here may be, for example, the external device corresponding to the other user, which likewise obtains the other user's second time information by the gaze tracking techniques described above and sends it to the execution body of the method of the present application. Alternatively, the outside may be an external server: the external device sends the obtained second time information to the external server, and the execution body of the method obtains it from the server by means of communication.
In a possible embodiment, the facial features of the user and of the other user each have a correspondence with their respective devices; that is, a recognized facial feature can be mapped to the corresponding user device. In this embodiment, after determining that the user is gazing at the other user's eyes, the method can perform facial-feature recognition on the captured image containing the other user's eyes, map the recognized facial features of the other user to the other user's corresponding external device, establish communication with that external device to obtain the second time information, and may also send the first time information to the external device.
In another possible embodiment, the voice features of the user and of the other user may likewise each have a correspondence with their respective devices; that is, a recognized voice feature can be mapped to the corresponding user device. The method can then connect to the external device corresponding to the other user according to the other user's collected voice features, and obtain the second time information.
The communication between the execution body of the method of the present application and the external device can be accomplished in several ways, for example wireless communication (such as Bluetooth or WiFi), visual communication (such as each side presenting a corresponding QR code), or acoustic communication (such as by ultrasound).
S120: determining whether the eye-contact information satisfies the set interaction condition.
As described above, in one embodiment, when the eye-contact information is the eye-contact duration, the interaction condition may for example comprise:
the eye-contact duration between the user and the other user reaching a set threshold.
As shown in Fig. 3, in a possible embodiment the eye-contact duration may be the accumulated eye-contact duration between the user and the other user: when eye contact occurs several times between them, the accumulated duration is the sum of the durations of those contacts, t1+t2+t3 in Fig. 3. In another possible embodiment, the eye-contact duration may be the duration of a single eye contact between the user and the other user, such as t1, t2, or t3 in Fig. 3; or it may be the maximum of the several contact durations, t3 in Fig. 3.
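The three variants of the interaction condition (accumulated, single, and maximum eye-contact duration) can be sketched together. The mode names, and the choice of the most recent episode for the "single" variant, are illustrative assumptions.

```python
def meets_interaction_condition(contact_durations, threshold, mode="cumulative"):
    """contact_durations: lengths t1, t2, ... of successive eye-contact episodes.

    mode selects which quantity is compared against the threshold:
    the sum of all episodes, the most recent single episode (an assumed
    reading of the 'single contact' variant), or the longest episode.
    """
    if not contact_durations:
        return False
    if mode == "cumulative":
        value = sum(contact_durations)
    elif mode == "single":
        value = contact_durations[-1]
    elif mode == "max":
        value = max(contact_durations)
    else:
        raise ValueError(mode)
    return value >= threshold
```

For episodes of 1.0, 0.5, and 2.0 seconds, a 3-second threshold is met cumulatively but not by the longest single contact.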
S130: in correspondence with the interaction condition being satisfied, performing information interaction concerning the user and/or the other user with the external device corresponding to the other user.
In a possible embodiment of the present application, performing the information interaction with the external device corresponding to the other user comprises:
sending first personal information of the user to the external device; and
receiving second personal information of the other user from the external device.
In another possible embodiment, only the first personal information of the user may be sent to the external device, or only the second personal information of the other user may be received from it.
In the embodiments of the present application, the first personal information may comprise at least one of: attribute information of the user (such as the user's name, organization name, and post) and contact information of the user (such as the user's telephone number, e-mail address, and instant-messaging account). Of course, it may also comprise other information the user wishes to present to other users (such as a photograph of the user).
The second personal information comprises information the other user wishes to present to the user, for example the other user's attribute information and/or contact information.
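As an illustration of what the exchanged payload might look like (the field names and the JSON encoding are assumptions for the sketch, not the patent's format), a business-card-style record could be serialized and parsed like this:

```python
import json

def make_card(name, organization, phone=None, email=None):
    """Serialize illustrative 'first personal information' as JSON."""
    card = {"name": name, "organization": organization}
    if phone is not None:
        card["phone"] = phone
    if email is not None:
        card["email"] = email
    return json.dumps(card)

def read_card(payload):
    """Parse a received card payload back into a dictionary."""
    return json.loads(payload)
```

Optional fields are simply omitted, matching the idea that each user chooses which information to present.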
In a possible embodiment, the method further comprises:
in correspondence with the interaction condition being satisfied, obtaining a user image containing the other user; and
associating the user image with the second personal information.
As described above, associating the other user's user image with the second personal information helps the user better identify the other user.
The user image may be obtained from the user images stored as described above, or it may be obtained from the outside by means of communication.
Those skilled in the art will appreciate that, in the above methods of the embodiments of the present application, the sequence numbers of the steps do not imply an order of execution; the order of execution should be determined by the functions and internal logic of the steps, and should not limit the implementation of the embodiments of the present application in any way.
As shown in Fig. 4, an embodiment of the present application provides an information interaction device 400, comprising:
an acquisition module 410, configured to obtain eye-contact information between a user and another user;
a confirmation module 420, configured to determine whether the eye-contact information satisfies a set interaction condition; and
an interaction module 430, configured to perform, in correspondence with the interaction condition being satisfied, information interaction concerning the user and/or the other user with an external device corresponding to the other user.
In the embodiments of the present application, personal information is exchanged naturally between users according to the eye contact between them: no other ongoing communication needs to be interrupted, personal information is exchanged only between users who have made eye contact, and the security of the exchange is thereby ensured.
The functions of the modules of the embodiments of the present application are further illustrated by the following embodiments.
As shown in Fig. 5a, in a possible embodiment the device 400 further comprises:
an exchange confirmation module 440, configured to determine whether the user is in a face-to-face exchange state; and
the acquisition module 410 is further configured to:
obtain the eye-contact information in correspondence with the user being in the face-to-face exchange state.
In the present embodiment, to avoid the acquisition module 410 constantly attempting to obtain the eye-contact information while the user is not communicating with another user (attempts that in fact could not succeed), the acquisition module 410 is triggered only after the exchange confirmation module 440 has confirmed that the user is communicating with another user.
Because a user exhibits certain specific motion characteristics when communicating face to face with another user, the exchange confirmation module 440 can determine in several ways whether the user is in the exchange state; its structure may for example be one or more of the following:
1) The exchange confirmation module 440 may comprise:
a sound acquisition submodule 441, configured to obtain sound information around the user; and
a speech analysis submodule 442, configured to analyze the sound information by speech detection to determine whether the user is in a face-to-face exchange state.
In the present embodiment, when the user communicates face to face with another user, the user's voice and at least one other user's voice are generally present. By using a speech detection algorithm to determine whether the sound information contains the user's voice, it can be determined whether the user is in a face-to-face exchange state.
In the embodiments of the present application, the sound acquisition submodule 441 may be a sound sensor; or it may be a communication device configured to obtain the sound information from another device (for example, another portable device carried by the user).
2) The exchange confirmation module 440 may comprise:
a head information acquisition submodule 443, configured to acquire head motion and posture information of the user; and
a head pattern analysis submodule 444, configured to analyze the head motion and posture information by head motion pattern recognition, confirming whether the user is in the face-to-face exchange state.
In an embodiment of the present application, because a user exhibits characteristic head motions and postures when in the face-to-face exchange state, head motion pattern recognition can confirm whether the user is in that state. Unlike the speech detection embodiment above, this embodiment can confirm whether the user is in the face-to-face exchange state even when no spoken exchange takes place between the users.
In an embodiment of the present application, the head information acquisition submodule 443 may comprise a motion and posture sensor (for example, an accelerometer and a gyroscope) arranged on the user's head and configured to acquire the user's head motion and posture information; alternatively, the head information acquisition submodule 443 may acquire the head motion and posture information from other equipment by communication.
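The head motion pattern recognition is likewise left unspecified. One illustrative heuristic (hypothetical names and thresholds) counts oscillations in the gyroscope pitch rate, since nodding during conversation produces a characteristic up/down alternation:

```python
# Hypothetical sketch: detect conversational nodding from gyroscope pitch-rate
# samples (rad/s) by counting sign alternations above a noise threshold.

def count_direction_changes(pitch_rates, threshold=0.5):
    """Count sign alternations in the pitch rate, ignoring sub-threshold noise."""
    changes, last_sign = 0, 0
    for r in pitch_rates:
        if abs(r) < threshold:
            continue
        sign = 1 if r > 0 else -1
        if last_sign and sign != last_sign:
            changes += 1
        last_sign = sign
    return changes

def looks_like_conversation_nodding(pitch_rates, min_changes=4):
    """Heuristic: repeated up/down head motion suggests a face-to-face exchange."""
    return count_direction_changes(pitch_rates) >= min_changes
```

A real system would train a classifier on labeled motion data rather than rely on fixed thresholds, but the structure — sensor samples in, a binary exchange-state decision out — is the same.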
As shown in Fig. 5a, in this embodiment the exchange confirmation module 440 comprises the sound acquisition submodule 441, the speech analysis submodule 442, the head information acquisition submodule 443, and the head pattern analysis submodule 444 described above, so that the acoustic information and the head motion and posture information can be used simultaneously to confirm whether the user is in the face-to-face exchange state, further improving the accuracy of the confirmation.
Of course, those skilled in the art will appreciate that other structures for detecting whether a user is in a talking state may also be applied in the embodiments of the present application.
In an embodiment of the present application, the sight line exchange information comprises a sight line contact duration. Here, the sight line contact duration is the time during which the user gazes at the other user's eyes while the other user is also gazing at the user's eyes.
In an embodiment of the present application, the acquisition module 410 is further configured to acquire the sight line contact duration between the user and the other user.
As shown in Fig. 5a, in an embodiment of the present application, the acquisition module 410 comprises:
a first acquisition submodule 411, configured to acquire first time information of the user gazing at the other user's eyes;
a second acquisition submodule 412, configured to acquire second time information of the other user gazing at the user's eyes; and
a processing submodule 413, configured to obtain the sight line exchange information according to the first time information and the second time information.
As shown in Fig. 5a, in a possible embodiment, the first acquisition submodule 411 comprises:
a gaze confirmation unit 4111, configured to confirm whether the user gazes at the other user's eyes; and
a time recording unit 4112, configured to record the time during which the user gazes at the other user's eyes as the first time information.
In an embodiment of the present application, confirming whether the user gazes at the other user's eyes is achieved mainly by detecting whether the user's gaze point coincides with the other user's eyes.
In the embodiment shown in Fig. 5b, the gaze confirmation unit 4111 comprises:
an image acquisition subunit 4111a, configured to acquire an image corresponding to the user's field of view;
an eye tracking subunit 4111b, configured to acquire the user's sight line direction; and
a sight line confirmation subunit 4111c, configured to confirm whether the object on the image corresponding to the user's sight line direction is the other user's eyes.
In an embodiment of the present application, the image acquisition subunit 4111a may be an image collector arranged near the eyes (for example, a camera on a near-eye wearable device) that captures the image along the direction the user's eyes face. A calibration unit may confirm the relation between the image and the user's viewing area (for example, whether the image and the viewing area coincide completely or only partially). Alternatively, in other embodiments, the image acquisition subunit 4111a may be a communication device configured to acquire the image from other equipment (for example, other near-eye equipment).
In an embodiment of the present application, the function of the sight line confirmation subunit 4111c corresponds to the description in the embodiment shown in Fig. 2 and is not repeated here.
To improve the accuracy of gaze confirmation, optionally, in a possible embodiment, the gaze confirmation unit 4111 may further comprise:
a first distance acquisition subunit 4111d, configured to acquire a first distance of the user's gaze point relative to the user;
a second distance acquisition subunit 4111e, configured to acquire a second distance of the other user relative to the user; and
a distance confirmation subunit 4111f, configured to confirm whether the first distance matches the second distance.
In an embodiment of the present application, when the first distance matches the second distance and, in addition, the user's sight line direction corresponds to the region of the other user's eyes on the image, it can be confirmed with higher accuracy that the user gazes at the other user's eyes.
In an embodiment of the present application, the structure of the first distance acquisition subunit 4111d may take multiple forms, for example one of the following:
1) the first distance acquisition subunit 4111d comprises two eye-tracking devices, configured to detect the sight line directions of the user's two eyes respectively; from the two sight line directions, the first distance acquisition subunit 4111d obtains the position of their intersection point relative to the user, and thereby obtains the first distance;
2) the first distance acquisition subunit 4111d comprises one eye-tracking device, configured to detect the sight line direction of one of the user's eyes, and a depth sensing device, configured to acquire a depth map along the user's sight line direction; the first distance acquisition subunit 4111d obtains the first distance according to the correspondence between the sight line direction and objects on the depth map;
3) the first distance acquisition subunit 4111d comprises a fundus image collector, configured to collect an image of the fundus of the eye; the first distance acquisition subunit 4111d obtains the first distance from the imaging parameters of the fundus image acquisition module at the moment a clear fundus image is collected.
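Option 1) above can be illustrated geometrically: with the two eyes separated by the interpupillary distance and each eye's sight line direction known, the gaze point is the intersection of the two sight lines. A simplified 2-D top-down sketch (hypothetical; angles measured from straight ahead, positive toward the right), together with the distance match of subunit 4111f:

```python
import math

def first_distance(ipd, theta_left, theta_right):
    """Depth of the gaze point, from the intersection of the two eyes' sight
    lines in a top-down 2-D model. The left eye sits at x = -ipd/2, the right
    eye at x = +ipd/2; angles are in radians from straight ahead."""
    denom = math.tan(theta_left) - math.tan(theta_right)
    if abs(denom) < 1e-9:
        return float("inf")  # parallel sight lines: gazing into the distance
    return ipd / denom

def distances_match(first, second, tolerance=0.3):
    """Distance confirmation: first and second distance agree within a
    relative tolerance (threshold is an illustrative assumption)."""
    if not (math.isfinite(first) and math.isfinite(second)):
        return False
    return abs(first - second) <= tolerance * second
```

For symmetric convergence on a point 2 m away with a 60 mm interpupillary distance, each eye rotates inward by `atan(0.03 / 2.0)`, and the computed first distance recovers 2 m.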
In a possible embodiment, the second distance acquisition subunit 4111e may comprise a depth sensor that obtains the second distance of the other user relative to the user by depth detection, for example a depth sensor arranged on the user's side.
Because during a talk a user generally gazes at the other party in a certain specific direction, the predetermined region on the image corresponding to the user's sight line direction during a talk can be confirmed by presetting, machine learning, or similar means. In this case, in the embodiment shown in Fig. 5c, the gaze confirmation unit 4111 may comprise:
an image acquisition subunit 4111g, configured to acquire an image corresponding to the user's field of view; and
a region confirmation subunit 4111h, configured to confirm whether the other user's eyes in the image are within a predetermined region.
As shown in Fig. 5a, in an embodiment of the present application, the second acquisition submodule 412 comprises:
a communication unit 4121, configured to acquire the second time information from the outside.
Here, the outside may be, for example, the external device corresponding to the other user: the other user's external device likewise obtains the second time information, for example by the eye-tracking techniques described above, and sends it to the communication unit 4121. Alternatively, the outside may be an external server: the external device sends the acquired second time information to the external server, and the communication unit 4121 of the embodiment of the present application obtains the second time information from the external server.
The communication between the communication unit 4121 and the external device may be completed in several ways, for example wireless communication (such as Bluetooth or WiFi), visual communication (such as each device presenting a corresponding two-dimensional code), or acoustic communication (such as by ultrasound).
In a possible embodiment, there is a correspondence between the facial features of the user and of the other user and their respective devices; that is, a recognized facial feature can be mapped to a corresponding user device. In this embodiment, after the gaze confirmation unit 4111 determines that the user gazes at the other user's eyes, facial feature recognition can be performed on the acquired image containing the other user's eyes; the recognized facial features of the other user are mapped to the corresponding external device of the other user, and the communication unit 4121 establishes communication with that external device to obtain the second time information, and may also send the first time information to the external device through the communication unit 4121.
In addition, in another possible embodiment, there may be a correspondence between the voice features of the user and of the other user and their respective devices; that is, a recognized voice feature can be mapped to a corresponding user device. The communication unit 4121 may also connect, according to the collected voice features of the other user, to the external device corresponding to the other user and obtain the second time information.
In a possible embodiment, the processing submodule 413 may obtain, from the first time information and the second time information, the time period in which the two overlap (as shown in Fig. 3); this time period is the sight line contact duration between the user and the other user.
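The overlap computation of the processing submodule 413 reduces to plain interval intersection. In this sketch the first and second time information are modeled as sorted lists of (start, end) gaze intervals — an assumed representation, since the patent does not fix one:

```python
def overlap_intervals(first_info, second_info):
    """Intersect two sorted lists of (start, end) gaze intervals; the result
    is the list of periods of mutual gaze (sight line contact)."""
    out, i, j = [], 0, 0
    while i < len(first_info) and j < len(second_info):
        start = max(first_info[i][0], second_info[j][0])
        end = min(first_info[i][1], second_info[j][1])
        if start < end:
            out.append((start, end))
        # advance whichever interval ends first
        if first_info[i][1] < second_info[j][1]:
            i += 1
        else:
            j += 1
    return out
```

For example, gaze intervals [(0, 5), (8, 12)] from the user and [(3, 9)] from the other user overlap in (3, 5) and (8, 9).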
When a user reads another user's personal information, a corresponding photo of that other user's face can help the user associate the personal information with the other user more easily. In addition, according to the embodiments described above, when the user gazes at the other user's eyes, the acquired image corresponding to the user's field of view may contain the other user (for example, including the other user's face). Therefore, in an optional embodiment, the device 400 further comprises:
a storage module 450, configured to save, corresponding to the user gazing at the other user's eyes, a user image in the image that contains the other user.
In a possible embodiment, the interaction condition comprises:
the sight line contact duration between the user and the other user reaching a set threshold.
That is, the confirmation module 420 is configured to confirm whether the sight line contact duration between the user and the other user reaches the set threshold.
In a possible embodiment, the sight line contact duration may be the accumulated sight line contact duration between the user and the other user; in another possible embodiment, it may be the duration of a single continuous sight line contact between the user and the other user.
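Both variants of the condition reduce to simple functions over the mutual-gaze periods. Modeling the sight line contact as a list of (start, end) intervals (an assumed representation; function names are hypothetical):

```python
def accumulated_contact(intervals):
    """Accumulated sight line contact duration across all mutual-gaze intervals."""
    return sum(end - start for start, end in intervals)

def longest_single_contact(intervals):
    """Duration of the longest single continuous sight line contact."""
    return max((end - start for start, end in intervals), default=0)

def meets_interaction_condition(intervals, threshold, accumulate=True):
    """Confirmation-module check: does the contact duration reach the threshold?"""
    if accumulate:
        duration = accumulated_contact(intervals)
    else:
        duration = longest_single_contact(intervals)
    return duration >= threshold
```

With intervals [(0, 2), (5, 6)] and a 3-second threshold, the accumulated variant (2 + 1 = 3 s) meets the condition while the single-continuous variant (2 s) does not.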
In a possible embodiment, the interaction module 430 comprises:
a communication submodule 431, configured to:
send the user's first personal information to the external device; or
receive the other user's second personal information from the external device; or
send the user's first personal information to the external device and receive the other user's second personal information from the external device.
In an embodiment of the present application, the first personal information may comprise at least one of the following: the user's attribute information (such as the user's name, organization, post, and so on) and the user's contact information (such as the user's telephone number, mail address, instant messaging account, and so on). Of course, it may also comprise other information the user wishes to present to other users (such as a photo of the user).
The second personal information comprises the information the other user wishes to present to the user, for example the other user's attribute information and/or contact information.
As shown in Fig. 5a, in a possible embodiment, the device 400 further comprises:
an image acquisition module 460, configured to acquire, corresponding to the interaction condition being met, a user image containing the other user; and
an association module 470, configured to associate the user image with the second personal information.
As described above, associating the other user's user image with the second personal information can help the user confirm the other user better.
The user image may be obtained from the user images saved by the storage module 450 described above; alternatively, it may be obtained from the outside by a communication device.
As shown in Fig. 6, an embodiment of the present application provides a near-eye wearable device 600 comprising the information interaction device 610 according to any one of Fig. 4 and Figs. 5a-5c.
Alternatively, in a possible embodiment, the near-eye wearable device itself is the information interaction device.
For example, in a possible embodiment, the near-eye wearable device is a pair of smart glasses 700.
The smart glasses 700 comprise a camera 710 that implements the function of the image acquisition subunit in the embodiments shown in Fig. 5b or Fig. 5c above, acquiring the image corresponding to the user's field of view.
The smart glasses 700 further comprise an eye tracking device 720, configured to acquire the user's sight line direction.
In a possible embodiment, the camera 710 further comprises a depth sensor 711, configured to acquire the second distance of the other user relative to the user.
The smart glasses 700 further comprise a communication module 730, configured to communicate with the external device corresponding to the other user: to obtain the second time information and the second personal information, and to send the first time information and the first personal information to the external device.
The smart glasses 700 further comprise a processing module 740, configured to:
determine the first time information according to the image and the second distance obtained by the camera 710 and the sight line direction obtained by the eye tracking device 720; obtain the sight line exchange information between the user and the other user in combination with the second time information obtained by the communication module 730; confirm whether the sight line exchange information meets the set interaction condition; and, when the interaction condition is met, carry out the information interaction of the user and/or the other user between the communication module 730 and the external device.
For the function implementation of each module of the smart glasses 700 of the embodiment of the present application, reference may be made to the corresponding description in the embodiments of Figs. 5a-5c, which is not repeated here.
Fig. 8 is a schematic structural diagram of another information interaction device 800 provided by an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the information interaction device 800. As shown in Fig. 8, the information interaction device 800 may comprise:
a processor 810, a communications interface 820, a memory 830, and a communication bus 840, wherein:
the processor 810, the communications interface 820, and the memory 830 communicate with one another through the communication bus 840;
the communications interface 820 is used for communicating with network elements such as clients;
the processor 810 is used for executing a program 832, and may specifically carry out the relevant steps in the method embodiments above.
Specifically, the program 832 may comprise program code, and the program code comprises computer operation instructions.
The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 830 is used for storing the program 832. The memory 830 may comprise high-speed RAM, and may further comprise non-volatile memory, for example at least one disk memory. Specifically, the program 832 may cause the information interaction device 800 to carry out the following steps:
acquiring sight line exchange information between a user and another user;
confirming whether the sight line exchange information meets a set interaction condition; and
corresponding to the interaction condition being met, carrying out information interaction of the user and/or the other user with the external device corresponding to the other user.
For the specific implementation of each step in the program 832, reference may be made to the corresponding description of the corresponding steps and units in the embodiments above, which is not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the equipment and modules described above may refer to the corresponding processes in the method embodiments above and are not repeated here.
Those of ordinary skill in the art may realize that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application — in essence, or the part contributing to the prior art, or a part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium comprises various media that can store program code, such as a USB flash disk, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the present application and are not a limitation on it. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present application, so all equivalent technical solutions also belong to the category of the present application, and the patent protection scope of the present application shall be defined by the claims.

Claims (31)

1. An information interaction method, characterized by comprising:
acquiring sight line exchange information between a user and another user;
confirming whether the sight line exchange information meets a set interaction condition; and
corresponding to the interaction condition being met, carrying out information interaction of the user and/or the other user with an external device corresponding to the other user.
2. The method of claim 1, characterized in that the sight line exchange information comprises: a sight line contact duration.
3. The method of claim 2, characterized in that the acquiring sight line exchange information between a user and another user comprises:
acquiring first time information of the user gazing at the other user's eyes;
acquiring second time information of the other user gazing at the user's eyes; and
obtaining the sight line exchange information according to the first time information and the second time information.
4. The method of claim 3, characterized in that the acquiring first time information of the user gazing at the other user's eyes comprises:
confirming whether the user gazes at the other user's eyes; and
recording the time during which the user gazes at the other user's eyes as the first time information.
5. The method of claim 4, characterized in that the confirming whether the user gazes at the other user's eyes comprises:
acquiring an image corresponding to the user's field of view;
acquiring the user's sight line direction; and
confirming whether the object on the image corresponding to the user's sight line direction is the other user's eyes.
6. The method of claim 5, characterized in that the confirming whether the user gazes at the other user's eyes further comprises:
acquiring a first distance of the user's gaze point relative to the user;
acquiring a second distance of the other user relative to the user; and
confirming whether the first distance matches the second distance.
7. The method of claim 4, characterized in that the confirming whether the user gazes at the other user's eyes comprises:
acquiring an image corresponding to the user's field of view; and
confirming whether the other user's eyes in the image are within a predetermined region.
8. The method of claim 5 or 7, characterized in that the method further comprises:
corresponding to the user gazing at the other user's eyes, saving a user image in the image that contains the other user.
9. The method of claim 3, characterized in that the acquiring second time information of the other user gazing at the user's eyes comprises:
acquiring the second time information from the outside.
10. The method of claim 2, characterized in that the interaction condition comprises:
the sight line contact duration between the user and the other user reaching a set threshold.
11. The method of claim 1, characterized in that the method further comprises:
confirming whether the user is in a face-to-face exchange state;
wherein the acquiring sight line exchange information between a user and another user comprises:
corresponding to the user being in the face-to-face exchange state, acquiring the sight line exchange information.
12. The method of claim 11, characterized in that the confirming whether the user is in a face-to-face exchange state comprises:
acquiring acoustic information around the user; and
analyzing the acoustic information by speech detection, confirming whether the user is in the face-to-face exchange state.
13. The method of claim 11 or 12, characterized in that the confirming whether the user is in a face-to-face exchange state comprises:
acquiring head motion and posture information of the user; and
analyzing the head motion and posture information by head motion pattern recognition, confirming whether the user is in the face-to-face exchange state.
14. The method of claim 1, characterized in that the carrying out information interaction of the user and/or the other user with the external device corresponding to the other user comprises:
sending the user's first personal information to the external device; and/or
receiving the other user's second personal information from the external device.
15. The method of claim 14, characterized in that the method further comprises:
corresponding to the interaction condition being met, acquiring a user image containing the other user; and
associating the user image with the second personal information.
16. An information interaction device, characterized by comprising:
an acquisition module, configured to acquire sight line exchange information between a user and another user;
a confirmation module, configured to confirm whether the sight line exchange information meets a set interaction condition; and
an interaction module, configured to carry out, corresponding to the interaction condition being met, information interaction of the user and/or the other user with an external device corresponding to the other user.
17. The device of claim 16, characterized in that the sight line exchange information comprises: a sight line contact duration; and
the acquisition module is further configured to acquire the sight line contact duration between the user and the other user.
18. The device of claim 17, characterized in that the acquisition module comprises:
a first acquisition submodule, configured to acquire first time information of the user gazing at the other user's eyes;
a second acquisition submodule, configured to acquire second time information of the other user gazing at the user's eyes; and
a processing submodule, configured to obtain the sight line exchange information according to the first time information and the second time information.
19. The device of claim 18, characterized in that the first acquisition submodule comprises:
a gaze confirmation unit, configured to confirm whether the user gazes at the other user's eyes; and
a time recording unit, configured to record the time during which the user gazes at the other user's eyes as the first time information.
20. The device of claim 19, characterized in that the gaze confirmation unit comprises:
an image acquisition subunit, configured to acquire an image corresponding to the user's field of view;
an eye tracking subunit, configured to acquire the user's sight line direction; and
a sight line confirmation subunit, configured to confirm whether the object on the image corresponding to the user's sight line direction is the other user's eyes.
21. The device of claim 20, characterized in that the gaze confirmation unit further comprises:
a first distance acquisition subunit, configured to acquire a first distance of the user's gaze point relative to the user;
a second distance acquisition subunit, configured to acquire a second distance of the other user relative to the user; and
a distance confirmation subunit, configured to confirm whether the first distance matches the second distance.
22. The device of claim 19, characterized in that the gaze confirmation unit comprises:
an image acquisition subunit, configured to acquire an image corresponding to the user's field of view; and
a region confirmation subunit, configured to confirm whether the other user's eyes in the image are within a predetermined region.
23. The device of claim 20 or 22, characterized in that the device further comprises:
a storage module, configured to save, corresponding to the user gazing at the other user's eyes, a user image in the image that contains the other user.
24. The device of claim 18, characterized in that the second acquisition submodule comprises:
a communication unit, configured to acquire the second time information from the outside.
25. The device of claim 17, characterized in that the interaction condition comprises:
the sight line contact duration between the user and the other user reaching a set threshold.
26. devices as claimed in claim 16, is characterized in that, described device also comprises:
Exchange and confirm module, for confirming that whether described user is in face-to-face exchange state;
Described acquisition module is further used for:
Corresponding in described face-to-face exchange state with described user, obtain described sight line exchange of information.
27. devices as claimed in claim 26, is characterized in that, described interchange confirms that module comprises:
Sound obtains submodule, for obtaining described user acoustic information around;
Speech analysis submodule, for described acoustic information being analyzed by speech detection, confirms that whether described user is in face-to-face exchange state.
28. devices as described in claim 26 or 27, is characterized in that, described interchange confirms that module comprises:
Header information obtains submodule, for obtaining described user's head movement attitude information;
Header pattern is analyzed submodule, for by the identification of head movement gesture mode, described head movement attitude information being analyzed, confirms that whether described user is in face-to-face exchange state.
29. devices as claimed in claim 16, is characterized in that, described interactive module comprises:
Communicator module, for:
To described external unit, send described user's first man information; And/or
From described external unit, receive the second personal information of described other user.
30. The device according to claim 29, wherein the device further comprises:
an image acquisition module, configured to obtain a user image comprising the other user when the interaction condition is met;
an association module, configured to associate the user image with the second personal information.
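Claims 29 and 30 describe the exchange itself: send the user's first personal information to the other user's external device, receive their second personal information back, and associate a captured image of the other user with that received information. The sketch below uses an in-memory stand-in for the external device; every class and method name is a hypothetical illustration, not from the patent.

```python
class ExternalDevice:
    """In-memory stand-in for the other user's device."""
    def __init__(self, personal_info):
        self.personal_info = personal_info  # the "second personal information"
        self.inbox = []

    def receive(self, info):
        """Accept the first personal information sent by the user."""
        self.inbox.append(info)

    def send(self):
        """Return this device owner's personal information."""
        return self.personal_info

class InteractionModule:
    def __init__(self):
        self.associations = {}  # user image -> associated personal info

    def exchange(self, device, first_info):
        """Claim 29: send the user's first personal information and
        receive the other user's second personal information."""
        device.receive(first_info)
        return device.send()

    def associate(self, user_image, second_info):
        """Claim 30: associate the captured user image (containing the
        other user) with the received second personal information."""
        self.associations[user_image] = second_info
```

In the patent's setting the transport would be a wireless link triggered only after the gaze condition of claim 25 is met, which is what keeps the personal-information exchange private to the two users in eye contact.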
31. A near-eye wearable device, comprising the information interaction device according to any one of claims 16 to 30.
CN201410209581.XA 2014-05-19 2014-05-19 Information interaction method and information interaction device Active CN103984413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410209581.XA CN103984413B (en) 2014-05-19 2014-05-19 Information interaction method and information interaction device

Publications (2)

Publication Number Publication Date
CN103984413A true CN103984413A (en) 2014-08-13
CN103984413B CN103984413B (en) 2017-12-08

Family

ID=51276423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410209581.XA Active CN103984413B (en) 2014-05-19 2014-05-19 Information interaction method and information interaction device

Country Status (1)

Country Link
CN (1) CN103984413B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754497A (en) * 2015-04-02 2015-07-01 清华大学 Visual attention driven communication connection establishment method
CN105824419A (en) * 2016-03-18 2016-08-03 苏州佳世达电通有限公司 Wearing device interaction system and wearing device interaction method
CN106774919A (en) * 2016-12-28 2017-05-31 苏州商信宝信息科技有限公司 Information transmission system based on smart glasses locking onto a target object
CN106774918A (en) * 2016-12-28 2017-05-31 苏州商信宝信息科技有限公司 Information retrieval system based on smart glasses
CN107646112A (en) * 2015-03-20 2018-01-30 高等教育自主非营利组织斯科尔科沃科学和技术研究所 Method for correcting eye image using machine learning and method for machine learning
WO2019076105A1 (en) * 2017-10-20 2019-04-25 华为技术有限公司 Identification code identification method, apparatus and device
CN111240471A (en) * 2019-12-31 2020-06-05 维沃移动通信有限公司 Information interaction method and wearable device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8184983B1 (en) * 2010-11-12 2012-05-22 Google Inc. Wireless directional identification and subsequent communication between wearable electronic devices
CN102566756A (en) * 2010-12-16 2012-07-11 微软公司 Comprehension and intent-based content for augmented reality displays
CN102906623A (en) * 2010-02-28 2013-01-30 奥斯特豪特集团有限公司 Local advertising content on an interactive head-mounted eyepiece
CN102981616A (en) * 2012-11-06 2013-03-20 中兴通讯股份有限公司 Identification method and identification system and computer capable of enhancing reality objects
WO2013049248A2 (en) * 2011-09-26 2013-04-04 Osterhout Group, Inc. Video display modification based on sensor input for a see-through near-to-eye display
WO2013173148A2 (en) * 2012-05-18 2013-11-21 Microsoft Corporation Interaction and management of devices using gaze detection
US20140081634A1 (en) * 2012-09-18 2014-03-20 Qualcomm Incorporated Leveraging head mounted displays to enable person-to-person interactions

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107646112A (en) * 2015-03-20 2018-01-30 高等教育自主非营利组织斯科尔科沃科学和技术研究所 Method for correcting eye image using machine learning and method for machine learning
CN107646112B (en) * 2015-03-20 2022-04-12 高等教育自主非营利组织斯科尔科沃科学和技术研究所 Method for correcting eye image using machine learning and method for machine learning
CN104754497A (en) * 2015-04-02 2015-07-01 清华大学 Visual attention driven communication connection establishment method
CN104754497B (en) * 2015-04-02 2018-10-16 清华大学 Visual attention driven communication connection establishment method
CN105824419A (en) * 2016-03-18 2016-08-03 苏州佳世达电通有限公司 Wearable device interaction system and wearable device interaction method
CN105824419B (en) * 2016-03-18 2018-12-11 苏州佳世达电通有限公司 Wearable device interaction system and wearable device interaction method
CN106774919A (en) * 2016-12-28 2017-05-31 苏州商信宝信息科技有限公司 Information transmission system based on smart glasses locking onto a target object
CN106774918A (en) * 2016-12-28 2017-05-31 苏州商信宝信息科技有限公司 Information retrieval system based on smart glasses
WO2019076105A1 (en) * 2017-10-20 2019-04-25 华为技术有限公司 Identification code identification method, apparatus and device
CN111240471A (en) * 2019-12-31 2020-06-05 维沃移动通信有限公司 Information interaction method and wearable device

Also Published As

Publication number Publication date
CN103984413B (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN103984413A (en) Information interaction method and information interaction device
CN105940411B (en) Privacy information is shown on personal device
CN104036169B (en) Biological authentication method and biological authentication apparatus
CN117356116A (en) Beacon for locating and delivering content to a wearable device
JP2022078168A (en) Head mounted display system configured to exchange biometric information
KR20160068830A (en) Eye tracking
US20180349690A1 (en) Mobile terminal and control method therefor
US20170156589A1 (en) Method of identification based on smart glasses
CN108965954A (en) Use the terminal of the intellectual analysis of the playback duration for reducing video
CN105684045A (en) Display control device, display control method and program
US20140285402A1 (en) Social data-aware wearable display system
CN103618806A (en) Method for exchanging electronic business cards and information by virtue of handshaking actions
CN104238752B (en) Information processing method and first wearable device
CN104460955B (en) A kind of information processing method and wearable electronic equipment
CN108021902A (en) Head pose recognition methods, head pose identification device and storage medium
US11808941B2 (en) Augmented image generation using virtual content from wearable heads up display
CN110546596A (en) Sight tracking method and terminal for executing same
CN106101376A (en) A kind of message pusher, method and mobile terminal
CN104182043A (en) Object picking-up method, object picking-up device and user equipment
CN104202556B (en) Information acquisition method, information acquisition device and user equipment
CN103941953B (en) Information processing method and device
US10664689B2 (en) Determining user activity based on eye motion
WO2015165330A1 (en) Information processing method and apparatus
CN104156069A (en) Object picking method and device and user equipment
CN107479809A (en) Based reminding method and Related product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant