US20170262696A1 - Wearable apparatus and information processing method and device thereof - Google Patents
- Publication number
- US20170262696A1 (application US15/326,114)
- Authority
- US
- United States
- Prior art keywords
- pupil
- analysis
- wearable apparatus
- images
- analysis result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06K9/00288—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G06K9/00335—
-
- G06K9/20—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Definitions
- the present disclosure relates to a wearable apparatus as well as an information processing method and an information processing device for a wearable apparatus.
- Intelligent wearable apparatuses can bring great convenience to their users' lives.
- Intelligent glasses, for example, can be used by their users to take pictures of scenes seen by them and share the pictures through networks.
- A lie detector can be used to determine whether what someone has said is true. For example, while the contacts of a lie detector are coupled to the body of the subject to be tested, the subject is asked questions, and the variation of his brain waves or heart rate is observed to comprehensively determine an index of lying.
- a wearable apparatus comprising: an image collector configured to acquire facial images and body images of an interaction partner who is interacting with the user of the apparatus; a controller connected with the image collector and configured to analyze the facial images and the body images to achieve analysis results; and an output device connected with the controller and configured to output the analysis results.
- the image collector is a first camera to acquire facial images and body images of the interaction partner; or the image collector comprises a second camera to acquire body images of the interaction partner and a receiver to receive facial images of the interaction partner.
- the first camera and/or the second camera is a binocular camera.
- the output device is a display screen for display of analysis results; or the output device is a voice displayer for playback of analysis results in voice.
- the wearable apparatus is intelligent glasses.
- the intelligent glasses further comprise a glasses body and the image collector, controller and output device are all disposed on the glasses body.
- the analysis results comprise an analysis result of behavior obtained through analyzing the body images as well as an analysis result of heart rate and an analysis result of pupil obtained by analyzing the facial images.
- the wearable apparatus further comprises a selecting device, which is connected with the controller and configured to receive a selecting instruction from the user and send it to the controller so that the controller may obtain analysis results corresponding to the selecting instruction.
- the selecting device comprises a first selecting unit to receive an analysis instruction for interest from the user and a second selecting unit to receive an analysis instruction for credibility from the user.
- the wearable apparatus further comprises: a wireless module connected with the controller and operable to communicate with network equipment through a wireless network; and/or a GPS module connected with the controller and operable to locate the wearable apparatus; and/or an image collector connected with the controller and operable to acquire facial images of the user of the wearable apparatus and a transmitter connected with the controller and operable to transmit the images acquired by the image collector.
- the wearable apparatus comprises a head-mounted wearable apparatus.
- the interaction partner comprises a conversation partner.
- An information processing method for a wearable apparatus comprising: acquiring image information of the interaction partner who is interacting with the user of the wearable apparatus, the information comprising facial images and body images of the interaction partner; analyzing the body images to achieve an analysis result of behavior and analyzing the facial images to achieve an analysis result of heart rate and an analysis result of pupil; determining an output result of the conversation contents given by the interaction partner in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil; and outputting the output result of the conversation contents to the user of the wearable apparatus.
- the method further comprises the step of receiving a selecting instruction; if the received selecting instruction is an analysis instruction for interest, the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the interaction contents; and if the received selecting instruction is an analysis instruction for credibility, the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating the credibility rate of the interaction contents.
- the step of analyzing the body images to achieve an analysis result of behavior comprises: comparing the body image at the current point in time with the body image at the first point in time; and determining an analysis result of behavior indicating that the conversation partner is interested in the current interaction contents if the body image at the current point in time is nearer to the user relative to the body image at the first point in time; wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- the step of analyzing the facial images to achieve analysis results of heart rate and pupil comprises: analyzing the facial images using contactless pulse oximetry to achieve the analysis result of heart rate; obtaining the pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time; and comparing the pupil area at the current point in time with the pupil area at the second point in time to achieve the analysis result of pupil; wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- the step of determining the output result in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil comprises: determining the output result of the interaction contents of the interaction partner to be positive when at least two of the analysis result of behavior, analysis result of heart rate and analysis result of pupil are positive; or in accordance with predetermined weighting factors of individual analysis results, obtaining the output result of the interaction contents of the interaction partner by multiplying the analysis results with their own weighting factors and adding the products together; or obtaining the average value of all the analysis results and taking it as the output result of the interaction contents of the interaction partner.
- the wearable apparatus comprises a head-mounted wearable apparatus.
- the interaction partner comprises a conversation partner.
- an information processing device for a wearable apparatus comprising: an acquiring unit configured to acquire image information of the interaction partner who is interacting with the user of the wearable apparatus, the information comprising facial images and body images of the interaction partner; a processing unit configured to analyze the body images to achieve an analysis result of behavior and analyze the facial images to achieve analysis results of heart rate and pupil; an analyzing unit configured to determine an output result of the interaction contents given by the interaction partner in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil; and an output unit configured to output the output result of the interaction contents to the user of the wearable apparatus.
- the device further comprises: a receiving unit operable to receive a selecting instruction from the user before the body images and the facial images are analyzed by the processing unit; wherein if the received selecting instruction is an analysis instruction for interest, the analysis results of behavior, heart rate and pupil from the processing unit are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the interaction contents; and if the received selecting instruction is an analysis instruction for credibility, the analysis results of behavior, heart rate and pupil from the processing unit are respectively analysis results of behavior, heart rate and pupil indicating the credibility rate of the interaction contents.
- the processing unit is further configured to compare the body image at the current point in time with the body image at the first point in time; and determine an analysis result of behavior indicating that the conversation partner is interested in the current interaction contents if the body image at the current point in time is nearer to the user relative to the body image at the first point in time; wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- the processing unit is further configured to analyze the facial images using contactless pulse oximetry to achieve an analysis result of heart rate; obtain the pupil area in the facial image at the current point in time and the pupil area in the facial image at a second point in time; and compare the pupil area at the current point in time with the pupil area at the second point in time to achieve an analysis result of pupil; wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- the analyzing unit is further configured to determine the output result of the interaction contents of the interaction partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive; or in accordance with predetermined weighting factors of individual analysis results, obtain an output result of the interaction contents of the interaction partner by multiplying the analysis results with their own weighting factors and adding the products together; or obtain the average value of all the analysis results and taking it as the output result of the interaction contents of the interaction partner.
- the wearable apparatus comprises a head-mounted wearable apparatus.
- the interaction partner comprises a conversation partner.
- FIGS. 1a-1b are structural diagrams of a head-mounted wearable apparatus provided in an embodiment of the present disclosure.
- FIG. 2 is a structural diagram of a head-mounted wearable apparatus provided in another embodiment of the present disclosure.
- FIG. 3 is a flowchart of an information processing method for a head-mounted wearable apparatus provided in an embodiment of the present disclosure.
- FIG. 4 is a flowchart of an information processing method for a head-mounted wearable apparatus provided in another embodiment of the present disclosure.
- FIG. 5 shows pictures of a scene for a head-mounted wearable apparatus provided in an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of an analysis area in a facial image provided in an embodiment of the present disclosure.
- FIG. 7 is a structural diagram of an information processing device for a head-mounted wearable apparatus provided in an embodiment of the present disclosure.
- FIGS. 1 a -1 b show structural diagrams of a head-mounted wearable apparatus provided in an embodiment of the present disclosure.
- the head-mounted wearable apparatus in the present embodiment includes an image collector 11 , a controller 12 connected with the image collector 11 and an output device 13 connected with the controller 12 .
- the image collector 11 may be a camera
- the controller 12 may be a central processing unit, a microprocessor chip, etc.
- the output device 13 may be a display, a speaker, etc.
- the head-mounted wearable apparatus is only an example of the present disclosure, and other wearable apparatuses such as intelligent watches, intelligent clothes, intelligent accessories, etc. may also be used in embodiments of the present disclosure.
- the above-mentioned image collector 11 is operable to acquire images of the face and body of the conversation partner who is interacting with the user of the head-mounted wearable apparatus.
- the controller 12 is operable to analyze the images of the face and body of the conversation partner and get analysis results.
- the output device 13 is operable to output the analysis results.
- the interaction with the user of the head-mounted wearable apparatus includes a variety of other ways of interactions or combinations thereof in addition to conversation. For example, interaction may proceed through body language such as gestures, through facial expression, etc.
- the above-mentioned analysis results include analysis results of content and manner of their conversation.
- the above-mentioned analysis results accordingly include analysis results of content and manner of the interaction.
- the present embodiment of the disclosure is described only in the case of the conversation partner who interacts with the user of the head-mounted wearable apparatus.
- the image collector 11 in the present embodiment may be a first camera to acquire images of the face and body of the conversation partner.
- the first camera may be a binocular one, which has a high resolution and can acquire two images of the conversation partner from different directions/locations so as to capture minor changes in facial images of the conversation partner for subsequent analysis by the controller.
- the above-mentioned image collector 11 may include two or more cameras with angles formed therebetween horizontally. It is to be noted that two of them may be spaced apart by a preset distance, can both capture high-resolution images, and can capture two images of the conversation partner simultaneously from two different angles/directions/locations, so that minor changes in facial images of the conversation partner can be captured for subsequent analysis by the controller to achieve accurate results.
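The binocular arrangement described above can be read as a standard stereo rig. The patent does not give the depth computation, but under the usual pinhole stereo model the partner's distance follows from the disparity between the two views. A minimal sketch, assuming a calibrated pair with a known focal length (in pixels) and baseline (in metres); all names are illustrative:

```python
# Hypothetical sketch: recovering the conversation partner's distance from a
# calibrated binocular (stereo) camera pair. Not from the patent; standard
# stereo geometry only.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6 cm baseline, 40 px disparity -> 1.2 m
print(depth_from_disparity(800.0, 0.06, 40.0))
```

A larger baseline between the two lenses improves depth resolution at conversational distances, which is consistent with the preset spacing the embodiment mentions.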
- the above-mentioned output device 13 may be a display for display of analysis results, such as a liquid crystal display.
- Alternatively, the output device 13 may be a voice player, such as a speaker, to play back analysis results in voice.
- The head-mounted wearable apparatus in the present embodiment can acquire images of the conversation partner by means of a head-mounted image collector, analyze the images through the controller to determine the truthfulness/interest of the conversation partner with regard to the current conversation, and output the results through the output device 13. Furthermore, the head-mounted wearable apparatus of the present disclosure is convenient to carry and low in cost, and can be used more widely and improve the user experience.
- the head-mounted wearable apparatus further includes a wireless module 14 and/or a Global Positioning System (GPS) module 15 .
- the wireless module 14 and the GPS module 15 are both connected with the controller 12 .
- the wireless module 14 is used to enable the controller to communicate with other network equipment (e.g. intelligent terminals), and it may be, for example, a communication module including a wireless router, an antenna, etc.
- the controller may send analysis results through the wireless module 14 to an intelligent terminal for display or other purposes.
- the GPS module 15 may be used to locate the head-mounted wearable apparatus and provide location information and the like.
- the above-mentioned head-mounted wearable apparatus may be intelligent glasses that include a glasses body, and the image collector 11 , the controller 12 and the output device 13 in the above FIG. 1 a can all be mounted on the glasses body.
- the head-mounted wearable apparatus shown in FIG. 1 and described above may include a selecting device 16 that is connected with the controller 12 and used to receive a selecting instruction from the user and send it to the controller 12, which then gets analysis results corresponding to the selecting instruction.
- the selecting device 16 may be keys receiving user input, a microphone receiving voice commands from the user or the like.
- the selecting device 16 includes a first selecting unit 161 to receive an analysis instruction for interest from the user and a second selecting unit 162 to receive an analysis instruction for credibility from the user.
- the selecting device 16 described above may be selection buttons, such as buttons disposed on the glasses body of the intelligent glasses and connected with the controller for the user to select an analysis aspect.
- When a button is activated, the controller 12 will obtain analysis results in the aspect corresponding to the activated button.
- buttons operable to enable or disable the intelligent mode are disposed on the glasses body of the intelligent glasses, and if the button operable to enable the intelligent mode is selected by the user, the controller 12 will obtain analysis results in a default analysis aspect.
- the default analysis aspect is about interest in the conversation.
- the intelligent glasses described above may be worn by A to acquire and analyze images of the face and body of B in real time and output analysis results to A, so that A can determine whether B is interested in the current conversation or determine the credibility of what B has said.
- the controller 12 as shown in FIG. 1 b may obtain corresponding analysis results in accordance with the selecting instruction. For example, the body image at the current point in time is compared with that at a first point in time, and if the former is nearer to the user relative to the latter, it can be determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior.
- the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- the controller 12 can also analyze facial images, and obtain analysis results of heart rate and pupil.
- the controller analyzes facial images with contactless pulse oximetry to obtain an analysis result of heart rate; the controller obtains the pupil area in the facial image at the current point in time and that in the facial image at the second point in time, and makes comparison between them to get an analysis result of pupil.
- the second point in time is prior to the current point in time, and there is a preset interval between the two points.
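The two comparisons the controller performs can be sketched as follows, assuming the partner's distance and pupil area have already been extracted from the body and facial images at the two points in time; the threshold ratio and all names are assumptions for illustration, not values from the patent:

```python
# Illustrative per-signal analyses: behavior (did the partner lean closer?)
# and pupil (did the pupil dilate beyond a tolerance ratio?). Hypothetical
# thresholds and names.

def analyze_behavior(dist_now_m: float, dist_then_m: float) -> bool:
    """Positive (interested) if the partner is nearer now than at the
    first point in time, a preset interval earlier."""
    return dist_now_m < dist_then_m

def analyze_pupil(area_now_px: float, area_then_px: float, ratio: float = 1.1) -> bool:
    """Negative (dilated, possible lying) if the pupil area grew by more
    than the tolerance ratio since the second point in time."""
    return area_now_px <= area_then_px * ratio

print(analyze_behavior(0.8, 1.0))   # partner leaned closer: positive
print(analyze_pupil(130.0, 100.0))  # clear dilation: negative
```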
- controller 12 in the present embodiment may also include an analyzing module, which is operable to determine an output result of the conversation contents given by the conversation partner in accordance with the analysis results of behavior, heart rate and pupil.
- the analyzing module may determine the output result of the conversation contents given by the conversation partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive.
- the analyzing module may determine the output result of the conversation contents given by the conversation partner by multiplying the analysis results with their own weighting factors and adding the products together.
- the analyzing module may obtain the average value of all the analysis results and take it as the output result of the conversation contents of the conversation partner.
- the positive result may be understood as the result desired by the user.
- misjudgments are avoided effectively during analysis of the various analysis results.
- the output result of the conversation contents of the conversation partner is determined to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive.
- For example, a stable heart rate is taken to mean no lying (a positive result); dilated pupils are taken to mean lying (a negative result); and the conversation partner moving nearer to the user is taken to mean no lying (a positive result).
- misjudgments can be avoided, which otherwise may be caused by analyzing only facial images or body images.
- misjudgments can be eliminated according to difference between the weighting factors of individual analysis results, which follows a principle similar to that described above, and no further details will be described herein.
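The three combination rules described above (majority vote, weighted sum, plain average) can be sketched directly. Representing a positive analysis result as 1 and a negative one as 0, and the example weighting factors, are assumptions for illustration:

```python
# Sketch of the three ways the analyzing module may combine the analysis
# results of behavior, heart rate and pupil.

def majority(results: list) -> bool:
    """Positive when at least two of the three analysis results are positive."""
    return sum(results) >= 2

def weighted(results: list, weights: list) -> float:
    """Multiply each analysis result by its weighting factor and add the products."""
    return sum(r * w for r, w in zip(results, weights))

def average(results: list) -> float:
    """Take the average value of all the analysis results."""
    return sum(results) / len(results)

# behavior positive, heart rate positive, pupil negative
r = [1, 1, 0]
print(majority(r))                    # positive overall
print(weighted(r, [0.3, 0.4, 0.3]))   # about 0.7
print(average(r))                     # about 0.67
```

The majority rule matches the worked example above: one negative signal (dilated pupils) is outvoted by two positive ones, avoiding a misjudgment from any single signal.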
- the intelligent glasses in the present embodiment may have other functionality, such as taking photographs, navigation, etc.
- the intelligent glasses in the present embodiment may be intelligent social glasses, whose hardware includes a glasses frame (i.e. a glasses body) as well as a binocular camera, a controller/processor, a wireless module, a GPS module, a power module, and the like mounted on the glasses frame.
- the head-mounted image collector acquires images of the conversation partner, and the controller analyzes the images to determine truthfulness/interest of the conversation partner with respect to the current conversation and provides an output result through the output device.
- The head-mounted wearable apparatus in the present disclosure is convenient to carry and low in cost.
- By analyzing images of the conversation partner, the head-mounted wearable apparatus can determine variations in his heart rate, eyeball movement and pupil size so as to obtain the credibility of what he has said or his degree of interest in the conversation; the apparatus is thus convenient to operate, can be applied widely, and improves the user experience.
- the image collector shown in FIG. 1 may include a second camera to acquire body images of the conversation partner and a receiver to receive facial images of the conversation partner.
- the conversation partner may also wear a head-mounted wearable apparatus, through which the user's own facial images can be acquired and sent to other user equipments.
- the receiver may receive the facial images sent by the conversation partner.
- FIG. 2 shows a structure diagram of a head-mounted wearable apparatus provided in another embodiment of the present disclosure.
- the head-mounted wearable apparatus includes a second camera 21 , a controller 22 (e.g. a CPU or a microprocessor) connected with the second camera 21 , an output device 23 (e.g. a display or a speaker) connected with the controller 22 and a receiving module 24 (e.g. a receiving antenna or a memory) connected with the controller 22 .
- the second camera 21 may acquire body images of the conversation partner interacting with the user of the head-mounted wearable apparatus and the receiving module 24 may receive facial images of the conversation partner, such as those sent by the head-mounted wearable apparatus worn by the conversation partner.
- the controller 22 is operable to analyze the facial images and body images of the conversation partner and get analysis results, and the output device is operable to output the analysis results.
- the head-mounted wearable apparatus is only an example of the present disclosure, and other wearable apparatuses such as smart watches, intelligent clothes, intelligent accessories, or the like may be used in embodiments of the present disclosure.
- the interaction with the user of the head-mounted wearable apparatus may include a variety of other ways of interactions or combinations thereof in addition to conversation.
- the interaction may proceed through body language such as gestures or through facial expression.
- the above-mentioned analysis results include analysis results of conversation content and conversation manner.
- the above-mentioned analysis results accordingly include analysis results of the contents and manner of the other ways of interactions.
- the present embodiment of the disclosure is described only in the case of the conversation partner who interacts with the user of the head-mounted wearable apparatus.
- What the receiving module 24 receives in the present embodiment is facial images of the conversation partner sent by the head-mounted wearable apparatus worn by him.
- the receiving module 24 may also receive facial images of the conversation partner from any intelligent apparatus as long as the intelligent apparatus can acquire and send facial images of the conversation partner in real time.
- the second camera in the present embodiment is preferably a binocular camera, which has a relatively high resolution and can acquire two body images of the conversation partner from different directions/locations for subsequent analysis by the controller.
- the head-mounted wearable apparatus of the present embodiment may further include an image collector such as the third camera 25 shown in FIG. 2 and a transmitting module 26 (e.g. a transmitter), both of which are connected with the controller.
- the transmitting module 26 may transmit the facial images of the user to the head-mounted wearable apparatus worn by the conversation partner.
- the head-mounted wearable apparatus a worn by A acquires body images of B and facial images of A
- the head-mounted wearable apparatus b worn by B acquires facial images of B and body images of A
- the head-mounted wearable apparatus a worn by A receives the facial images of B sent by the head-mounted wearable apparatus b worn by B and analyzes the facial images and body images of B so as to get a result indicating whether B is interested in the current conversation or get the credibility of what B has just said or other information.
- the head-mounted wearable apparatus as shown in FIG. 2 may be intelligent glasses.
- the intelligent glasses further include a glasses body, and the above-mentioned second camera 21, image collector, controller 22, output device 23, transmitting module 26 and receiving module 24 are all located on the glasses body.
- the head-mounted wearable apparatus shown in FIG. 2 may further include a selecting device connected with the controller 22 , which is the same as the one in FIG. 1 b and used to receive selecting instructions from the user for the controller to get analysis results corresponding to the selecting instructions.
- the above-mentioned selecting device may be selection buttons connected with the controller 22 , such as the buttons disposed on the glasses body for the user to select an analysis aspect.
- buttons operable to enable or disable the intelligent mode may further be disposed on the glasses body, and if the user selects to activate the button operable to enable the intelligent mode, the controller will get analysis results in the default analysis aspect.
- The intelligent glasses in the present embodiment can perform qualitative analysis on the conversation contents of the conversation partner; they have a compact configuration, are convenient to carry and low in cost, can be applied widely, and improve the user experience.
- FIG. 3 shows a flowchart of an information processing method for a head-mounted wearable apparatus in one embodiment of the present disclosure. As shown in FIG. 3 , the head-mounted wearable apparatus in the present embodiment operates as follows.
- step 301 image information of the conversation partner is acquired, which includes facial images and body images of the conversation partner.
- step 301 image information of the conversation partner will be acquired in real time with the image collector such as a binocular camera.
- step 301 body images of the conversation partner will be acquired in real time with the first image collector, and facial images of the conversation partner (e.g. those sent by the head-mounted wearable apparatus worn by the conversation partner) will be received by the receiving module.
- step 302 the body images are analyzed to get an analysis result of behavior and the facial images are analyzed to get analysis results of heart rate and pupil.
- step 302 and the following step 303 may be performed through the controller of the head-mounted wearable apparatus.
- the controller may get an analysis result of heart rate through contactless pulse oximetry.
- if the analysis aspect selected by the user is interest, the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the conversation contents.
- if the analysis aspect selected by the user is credibility, the analysis results of behavior, heart rate and pupil will respectively be analysis results of behavior, heart rate and pupil indicating the credibility rate of the conversation contents.
- step 303 an output result of the conversation contents of the conversation partner is determined in accordance with the analysis results of behavior, heart rate and pupil.
- step 304 the output result of conversation contents is output to the user of the head-mounted wearable apparatus.
- step 304 may be performed by the output device of the head-mounted wearable apparatus.
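Steps 301 to 304 above can be sketched as a single processing pass. All names below are hypothetical placeholders for the modules described in the text, not an API from the disclosure:

```python
def process_frame(acquire, analyze_behavior, analyze_face, fuse, output):
    """One pass of steps 301-304: acquire image information, analyze the
    body and facial images, combine the three analysis results, and
    output the result to the wearer of the apparatus."""
    facial_images, body_images = acquire()           # step 301
    behavior = analyze_behavior(body_images)         # step 302
    heart_rate, pupil = analyze_face(facial_images)  # step 302
    result = fuse(behavior, heart_rate, pupil)       # step 303
    output(result)                                   # step 304
    return result
```

Stub analyzers and a majority-vote `fuse` are enough to exercise the loop; the real apparatus would supply the camera, controller and output-device callables.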
- the eyeball area and the pupil area in the facial images may be analyzed in the above-mentioned embodiment to get an analysis result.
- the head-mounted wearable apparatus in the present embodiment may be the above-mentioned intelligent glasses, which can determine analysis results of variation of heart rate, eyeball movement and change of pupil by acquiring image information of the conversation partner so as to get the credibility of what the conversation partner has said or the degree of interest of the conversation partner in the conversation.
- FIG. 4 shows a flowchart of an information processing method for intelligent glasses in one embodiment of the present disclosure.
- the information processing method for intelligent glasses in the present embodiment is as follows. It should be noted that the intelligent glasses in the present embodiment may be the head-mounted wearable apparatus shown in FIG. 1 or FIG. 2 .
- step 401 the selecting device of the intelligent glasses receives a selecting instruction.
- the selecting instruction may be an analyzing instruction for interest or credibility.
- the selecting device may also be a receiving unit; the name of the module/unit that receives selecting instructions is not limited, as long as it provides the functionality of receiving selecting instructions.
- step 402 the image collector of the intelligent glasses acquires image information of the conversation partner, which includes body images and facial images of the conversation partner.
- step 403 the controller of the intelligent glasses analyzes the body images to get an analysis result of behavior.
- step 403 may include the following sub-steps.
- sub-step 4031 the controller of the intelligent glasses compares the body image at the current point in time with that at a first point in time.
- sub-step 4032 if the body image at the current point in time is nearer to the user than that at the first point in time, it can be determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior.
- the first point in time is prior to the current point in time, and there is a preset interval between the two points.
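Sub-steps 4031-4032 can be reduced to a minimal distance check between the two points in time. The function name, the distance inputs (e.g. derived from binocular-camera disparity) and the threshold value are illustrative assumptions, not part of the original disclosure:

```python
def analyze_behavior(distance_at_first_point, distance_now, min_change=0.05):
    """Compare the partner's estimated distance (in meters) at the current
    point in time with that at the first point in time; if the partner has
    moved measurably nearer to the user, interpret this as interest in the
    contents of the current conversation."""
    if distance_at_first_point - distance_now > min_change:
        return "interested"
    return "no_conclusion"
```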
- step 404 the controller of the intelligent glasses analyzes the facial images to achieve analysis results of heart rate and pupil.
- step 404 may include the following sub-steps.
- the controller of the intelligent glasses analyzes the facial images using contactless pulse oximetry to achieve an analysis result of heart rate.
- the variation value of heart rate, and in turn the variation curve of heart rate, of the conversation partner are achieved through contactless pulse oximetry. For example, if the variation value is above a set threshold, the credibility of what the conversation partner has said is determined to be low. Generally, when a common person is lying, his heart rate will vary significantly.
- the controller of the intelligent glasses obtains the pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time.
- the controller of the intelligent glasses compares the pupil area at the current point in time with that at the second point in time to get an analysis result of pupil.
- the second point in time is prior to the current point in time, and there is a preset interval between the two points.
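The pupil comparison in the two sub-steps above can be sketched as an area-ratio test. The 10% dilation threshold and the return labels are assumed values, since the disclosure does not fix them; pupils tend to dilate under emotional arousal or interest:

```python
def analyze_pupil(area_at_second_point, area_now, ratio=1.10):
    """Compare the pupil area (in pixels) in the facial image at the
    current point in time with that at the second point in time."""
    if area_now >= area_at_second_point * ratio:
        return "dilated"        # often read as arousal/interest
    if area_now * ratio <= area_at_second_point:
        return "constricted"
    return "unchanged"
```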
- step 405 the controller of the intelligent glasses determines an output result of the conversation contents of the conversation partner in accordance with the analysis results of behavior, heart rate and pupil.
- the controller of the intelligent glasses may determine the output result of the conversation contents of the conversation partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive.
- the positive result may be understood as the result desired by the user.
- for example, if the selecting instruction is an analysis instruction for interest, the output result will be one indicating interest.
- for another example, if the selecting instruction is an analysis instruction for credibility, and the analysis result of behavior indicates low credibility, the analysis result of heart rate indicates low credibility, while the analysis result of pupil indicates high credibility, then the output result will be one indicating low credibility.
- the controller of the intelligent glasses may multiply the analysis results by their respective weighting factors and add the products together to get the output result of the conversation contents given by the conversation partner.
- the controller of the intelligent glasses may calculate the average value of all the analysis results and take it as the output result of the conversation contents of the conversation partner.
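The three combination strategies just described (a positive output when at least two results are positive, a weighted sum, and a plain average) can each be sketched in a few lines. Encoding each analysis result as a number (1 for positive, 0 otherwise, or a score) is an assumption made for illustration:

```python
def fuse_majority(behavior, heart_rate, pupil):
    """Positive output when at least two of the three analysis
    results (each 1 for positive, 0 otherwise) are positive."""
    return behavior + heart_rate + pupil >= 2

def fuse_weighted(results, weights):
    """Multiply each analysis result by its predetermined weighting
    factor and add the products together."""
    return sum(r * w for r, w in zip(results, weights))

def fuse_average(results):
    """Take the average value of all the analysis results."""
    return sum(results) / len(results)
```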
- step 406 the output device of the intelligent glasses outputs the output result of the conversation contents to the user of the intelligent glasses.
- the intelligent glasses in the present embodiment may also determine variation of heart rate, eyeball movement and change of pupil of the conversation partner and in turn the credibility of what the conversation partner has said and the degree of interest of the conversation partner in the conversation.
- the intelligent glasses in the present embodiment of the disclosure are convenient to operate, of low cost, and significantly improve the user experience.
- contactless pulse oximetry, an SpO2 photographic technique, may detect human heart rate using a common optical camera: for example, a video including facial images of a person is taken, and the same analysis area (e.g. the area in the dashed line box) is determined from each image of the video.
- an average value of the pixels in the G (green) channel and an average value of the pixels in the B (blue) channel are extracted for the analysis area in each image of the video.
- the variation curve of the person's heart rate can be obtained from the variation curve over time of the average G-channel pixel value and the variation curve over time of the average B-channel pixel value in the analysis areas of all the images of the video.
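The channel-averaging procedure above, plus a frequency-domain readout of the resulting curve, can be sketched as follows. The analysis-area coordinates, the 0.7-4 Hz pulse band and the function names are assumptions layered on top of the described G/B-channel averaging:

```python
import numpy as np

def channel_mean_curves(frames, box):
    """For each RGB frame (an H x W x 3 array), average the pixels of the
    G and B channels inside the fixed analysis area
    box = (top, bottom, left, right), yielding two variation curves."""
    t, b, l, r = box
    g_curve = np.array([f[t:b, l:r, 1].mean() for f in frames])
    b_curve = np.array([f[t:b, l:r, 2].mean() for f in frames])
    return g_curve, b_curve

def heart_rate_bpm(curve, fps):
    """Estimate heart rate as the dominant frequency of the variation
    curve within a plausible pulse band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = curve - curve.mean()                      # remove the DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

On a synthetic 30 fps curve oscillating at 1.2 Hz, this readout reports roughly 72 beats per minute, which is the kind of variation value the threshold check above would consume.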
- the head-mounted wearable apparatus in the present embodiment can find application in a variety of scenes such as a lie detecting scene, a blind date scene, a question and answer scene, etc.
- an information processing device for a head-mounted wearable apparatus is further provided.
- the information processing device for a head-mounted wearable apparatus in the present embodiment includes an acquiring unit 71 , a processing unit 72 , an analyzing unit 73 and an output unit 74 .
- the acquiring unit 71 is operable to acquire image information of the conversation partner including facial images and body images of the conversation partner.
- the processing unit 72 is operable to analyze the body images to achieve an analysis result of behavior and analyze the facial images to achieve analysis results of heart rate and pupil.
- the analyzing unit 73 is operable to determine an output result of the conversation contents of the conversation partner in accordance with the analysis results of behavior, heart rate and pupil.
- the output unit 74 is operable to output the output result of the conversation contents to the user of the head-mounted wearable apparatus.
- the body image at the current point in time is compared with that at the first point in time; if the former is nearer to the user, it is determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior.
- the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- the facial images are analyzed using contactless pulse oximetry to achieve an analysis result of heart rate.
- the pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time are obtained.
- the pupil area at the current point in time is compared with that at the second point in time to achieve an analysis result of pupil.
- the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- the above-mentioned analyzing unit 73 is operable to determine the output result of the conversation contents of the conversation partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive.
- an output result of the conversation contents of the conversation partner may be obtained by multiplying the analysis results by their respective weighting factors and adding the products together.
- an average value of all the analysis results is obtained and taken as the output result of the conversation contents of the conversation partner.
- the information processing device for a head-mounted wearable apparatus described above may include a receiving unit not shown in the figure, which will be described in the following.
- the receiving unit is operable to receive a selecting instruction from the user before the body images and the facial images are analyzed by the processing unit 72 .
- if the received selecting instruction is an analysis instruction for interest, the analysis results of behavior, heart rate and pupil from the processing unit 72 are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the conversation contents.
- the receiving unit is operable to receive an analysis instruction for credibility before the body images and the facial images are analyzed by the processing unit 72 .
- if the received selecting instruction is an analysis instruction for credibility, the analysis results of behavior, heart rate and pupil from the processing unit 72 are respectively analysis results of behavior, heart rate and pupil indicating the credibility rate of the conversation contents.
- the information processing device for a head-mounted wearable apparatus in the present embodiment may be implemented through software, which may be integrated into a physical structure of the head-mounted wearable apparatus to execute the process described above.
- the information processing device in the present embodiment may also be implemented through physical circuit structures, which constitutes no limitation on the present embodiment and depends on specific circumstances.
- the information processing device for a head-mounted wearable apparatus in the present embodiment of the disclosure is convenient to operate, of low cost, and significantly improves the user experience.
Abstract
Description
- The present disclosure relates to a wearable apparatus as well as an information processing method and an information processing device for a wearable apparatus.
- With the development of internet technologies, a growing number of intelligent wearable apparatuses have come into our life. Intelligent wearable apparatuses can bring great convenience to their users' lives. For example, intelligent glasses can be used by their users to take pictures of scenes seen by them and share the pictures through networks.
- Nowadays, communications between two parties in society are often mixed with a lot of falsehood/lies and a lie detector can be used to determine whether what someone has said is true. For example, while the contacts of a lie detector are coupled to the body of the subject to be detected, the subject is asked questions and then the variation of his brain wave or heart rate is observed to determine comprehensively his index of lie.
- However, the test using the above-mentioned lie detector has to be repeated; besides, the lie detector must be coupled to the body of the subject, and is complex to operate, of high cost and inconvenient to carry. Therefore, how the inner state of a communicator (e.g. the truthfulness of what he has said) can be monitored using a simple and portable intelligent wearable apparatus has become a problem to be solved urgently.
- According to one aspect of this disclosure, a wearable apparatus is provided, comprising: an image collector configured to acquire facial images and body images of an interaction partner who is interacting with the user of the apparatus; a controller connected with the image collector and configured to analyze the facial images and the body images to achieve analysis results; and an output device connected with the controller and configured to output the analysis results.
- For example, wherein the image collector is a first camera to acquire facial images and body images of the interaction partner; or the image collector comprises a second camera to acquire body images of the interaction partner and a receiver to receive facial images of the interaction partner.
- For example, wherein the first camera and/or the second camera are/is binocular camera(s).
- For example, wherein the output device is a display screen for display of analysis results; or the output device is a voice displayer for playback of analysis results in voice.
- For example, wherein the wearable apparatus is intelligent glasses.
- For example, wherein the intelligent glasses further comprise a glasses body and the image collector, controller and output device are all disposed on the glasses body.
- For example, wherein the analysis results comprise an analysis result of behavior obtained through analyzing the body images as well as an analysis result of heart rate and an analysis result of pupil obtained by analyzing the facial images.
- For example, wherein the wearable apparatus further comprises a selecting device, which is connected with the controller and configured to receive a selecting instruction from the user and send it to the controller so that the controller may obtain analysis results corresponding to the selecting instruction.
- For example, wherein the selecting device comprises a first selecting unit to receive an analysis instruction for interest from the user and a second selecting unit to receive an analysis instruction for credibility from the user.
- For example, wherein the wearable apparatus further comprises: a wireless module connected with the controller and operable to communicate with network equipment through a wireless network; and/or a GPS module connected with the controller and operable to locate the wearable apparatus; and/or an image collector connected with the controller and operable to acquire facial images of the user of the wearable apparatus and a transmitter connected with the controller and operable to transmit the images acquired by the image collector.
- For example, wherein the wearable apparatus comprises a head-mounted wearable apparatus.
- For example, wherein the interaction partner comprises a conversation partner.
- According to one aspect of this disclosure, an information processing method for a wearable apparatus is provided, comprising: acquiring image information of the interaction partner who is interacting with the user of the wearable apparatus, the information comprising facial images and body images of the interaction partner; analyzing the body images to achieve an analysis result of behavior and analyzing the facial images to achieve an analysis result of heart rate and an analysis result of pupil; determining an output result of the conversation contents given by the interaction partner in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil; and outputting the output result of the conversation contents to the user of the wearable apparatus.
- For example, before the body images are analyzed, the method further comprises the step of receiving a selecting instruction; if the received selecting instruction is an analysis instruction for interest, the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the interaction contents; and if the received selecting instruction is an analysis instruction for credibility, the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating the credibility rate of the interaction contents.
- For example, wherein the step of analyzing the body images to achieve an analysis result of behavior comprises: comparing the body image at the current point in time with the body image at the first point in time; and determining an analysis result of behavior indicating that the conversation partner is interested in the current interaction contents if the body image at the current point in time is nearer to the user relative to the body image at the first point in time; wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- For example, wherein the step of analyzing the facial images to achieve analysis results of heart rate and pupil comprises: analyzing the facial images using contactless pulse oximetry to achieve the analysis result of heart rate; obtaining the pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time; and comparing the pupil area at the current point in time with the pupil area at the second point in time to achieve the analysis result of pupil; wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- For example, wherein the step of determining an output result of the interaction contents given by the interaction partner in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil comprises: determining the output result of the interaction contents of the interaction partner to be positive when at least two of the analysis result of behavior, analysis result of heart rate and analysis result of pupil are positive; or in accordance with predetermined weighting factors of individual analysis results, obtaining the output result of the interaction contents of the interaction partner by multiplying the analysis results by their respective weighting factors and adding the products together; or obtaining the average value of all the analysis results and taking it as the output result of the interaction contents of the interaction partner.
- For example, wherein the wearable apparatus comprises a head-mounted wearable apparatus.
- For example, wherein the interaction partner comprises a conversation partner.
- According to one aspect of this disclosure, an information processing device for a wearable apparatus is provided, the device comprising: an acquiring unit configured to acquire image information of the interaction partner who is interacting with the user of the wearable apparatus, the information comprising facial images and body images of the interaction partner; a processing unit configured to analyze the body images to achieve an analysis result of behavior and analyze the facial images to achieve analysis results of heart rate and pupil; an analyzing unit configured to determine an output result of the interaction contents given by the interaction partner in accordance with the analysis result of behavior, analysis result of heart rate and analysis result of pupil; and an output unit configured to output the output result of the interaction contents to the user of the wearable apparatus.
- For example, wherein the device further comprises: a receiving unit operable to receive a selecting instruction from the user before the body images and the facial images are analyzed by the processing unit; wherein if the received selecting instruction is an analysis instruction for interest, the analysis results of behavior, heart rate and pupil from the processing unit are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the interaction contents; and if the received selecting instruction is an analysis instruction for credibility, the analysis results of behavior, heart rate and pupil from the processing unit are respectively analysis results of behavior, heart rate and pupil indicating credibility rate of the interaction contents.
- For example, wherein the processing unit is further configured to compare the body image at the current point in time with the body image at the first point in time; and determine an analysis result of behavior indicating that the conversation partner is interested in the current interaction contents if the body image at the current point in time is nearer to the user relative to the body image at the first point in time; wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- For example, wherein the processing unit is further configured to analyze the facial images using contactless pulse oximetry to achieve an analysis result of heart rate; obtain the pupil area in the facial image at the current point in time and the pupil area in the facial image at a second point in time; and compare the pupil area at the current point in time with the pupil area at the second point in time to achieve an analysis result of pupil; wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- For example, wherein the analyzing unit is further configured to determine the output result of the interaction contents of the interaction partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive; or in accordance with predetermined weighting factors of individual analysis results, obtain an output result of the interaction contents of the interaction partner by multiplying the analysis results by their respective weighting factors and adding the products together; or obtain the average value of all the analysis results and take it as the output result of the interaction contents of the interaction partner.
- For example, wherein the wearable apparatus comprises a head-mounted wearable apparatus.
- For example, wherein the interaction partner comprises a conversation partner.
-
FIGS. 1a-1b are structure diagrams of a head-mounted wearable apparatus provided in an embodiment of the present disclosure; -
FIG. 2 is a structure diagram of a head-mounted wearable apparatus provided in another embodiment of the present disclosure; -
FIG. 3 is a flowchart of an information processing method for a head-mounted wearable apparatus provided in an embodiment of the present disclosure; -
FIG. 4 is a flowchart of an information processing method for a head-mounted wearable apparatus provided in another embodiment of the present disclosure; -
FIG. 5 are pictures of a scene for a head-mounted wearable apparatus provided in an embodiment of present disclosure; -
FIG. 6 is a schematic diagram of an analysis area in a facial image provided in an embodiment of present disclosure; and -
FIG. 7 is a structure diagram of an information processing device for a head-mounted wearable apparatus provided in an embodiment of present disclosure. - The technical solutions of the embodiments will be described in a clearly and fully understandable way in connection with the drawings related to the embodiments of the disclosure. Apparently, the described embodiments are just a part but not all of the embodiments of the disclosure. Based on the described embodiments herein, those skilled in the art can obtain other embodiment(s), without any inventive work, which should be within the scope of the disclosure.
-
FIGS. 1a-1b show structural diagrams of a head-mounted wearable apparatus provided in an embodiment of the present disclosure. As shown in FIG. 1a, the head-mounted wearable apparatus in the present embodiment includes an image collector 11, a controller 12 connected with the image collector 11 and an output device 13 connected with the controller 12. For example, the image collector 11 may be a camera, the controller 12 may be a central processing unit, a microprocessor chip, etc., and the output device 13 may be a display, a speaker, etc. Of course, those of ordinary skill in the art should understand that the head-mounted wearable apparatus is only an example of the present disclosure, and other wearable apparatuses such as intelligent watches, intelligent clothes, intelligent accessories, etc. may also be used in embodiments of the present disclosure. - Wherein the above-mentioned
image collector 11 is operable to acquire images of the face and body of the conversation partner who is interacting with the user of the head-mounted wearable apparatus. The controller 12 is operable to analyze the images of the face and body of the conversation partner and get analysis results. The output device 13 is operable to output the analysis results. It should be noted that the interaction with the user of the head-mounted wearable apparatus includes a variety of other ways of interactions or combinations thereof in addition to conversation. For example, interaction may proceed through body language such as gestures, through facial expression, etc. When the user of the head-mounted wearable apparatus is interacting with his conversation partner through conversation, the above-mentioned analysis results include analysis results of content and manner of their conversation. When the interaction between the user of the head-mounted wearable apparatus and his conversation partner takes another way, the above-mentioned analysis results accordingly include analysis results of content and manner of the interaction. The present embodiment of the disclosure is described only in the case of the conversation partner who interacts with the user of the head-mounted wearable apparatus. - The
image collector 11 in the present embodiment may be a first camera to acquire images of the face and body of the conversation partner. For example, the first camera may be a binocular one, which has a high resolution and can acquire two images of the conversation partner from different directions/locations so as to capture minor changes in facial images of the conversation partner for subsequent analysis by the controller. - Of course, in other embodiments, the above-mentioned
image collector 11 may include two or more cameras with angles formed therebetween horizontally. It is to be noted that two of them may be spaced apart by a preset distance, can both capture high-resolution images, and can capture two images of the conversation partner simultaneously from two different angles/directions/locations, so that minor changes in facial images of the conversation partner can be captured for subsequent analysis by the controller to achieve accurate results. - The above-mentioned
output device 13 may be a display for display of analysis results, such as a liquid crystal display. Alternatively, in other embodiments, the output device 13 may be a voice displayer, such as a speaker, to play back analysis results in voice. - The head-mounted wearable apparatus in the present embodiment can acquire images of the conversation partner by means of a head-mounted image collector, analyze the images through the controller to determine the truthfulness/interest of the conversation partner with regard to the current conversation, and output the results through the
output device 13. Furthermore, the head-mounted wearable apparatus of the present disclosure is convenient to carry and of low costs, and can be used more widely and improve user experiences. - For example, with reference to
FIG. 1b, the head-mounted wearable apparatus further includes a wireless module 14 and/or a Global Positioning System (GPS) module 15. The wireless module 14 and the GPS module 15 are both connected with the controller 12. The wireless module 14 is used to enable the controller to communicate with other network equipment (e.g. intelligent terminals, etc.), and it may be, for example, a communication module including a wireless router, an antenna, etc. The controller may send analysis results through the wireless module 14 to an intelligent terminal for display or other purposes. - The
GPS module 15 may be used to locate the head-mounted wearable apparatus and provide location information and the like. - By way of example, the above-mentioned head-mounted wearable apparatus may be intelligent glasses that include a glasses body, and the
image collector 11, the controller 12 and the output device 13 in the above FIG. 1a can all be mounted on the glasses body. - Furthermore, in practical applications, with reference to
FIG. 1b, the head-mounted wearable apparatus shown in FIG. 1 and described above may include a selecting device 16 that is connected with the controller 12 and used to receive a selecting instruction from the user and send it to the controller 12, which then gets analysis results corresponding to the selecting instruction. The selecting device 16 may be keys receiving user input, a microphone receiving voice commands from the user, or the like. - Furthermore, the selecting
device 16 includes a first selecting unit 161 to receive an analysis instruction for interest from the user and a second selecting unit 162 to receive an analysis instruction for credibility from the user. - By way of example, the selecting
device 16 described above may be selection buttons, such as buttons disposed on the glasses body of the intelligent glasses and connected with the controller for the user to select an analysis aspect. When a button is activated, the controller 12 will obtain analysis results in the analysis aspect corresponding to the activated button. - Alternatively, buttons operable to enable or disable the intelligent mode are disposed on the glasses body of the intelligent glasses, and if the button operable to enable the intelligent mode is selected by the user, the
controller 12 will obtain analysis results in a default analysis aspect. Generally, the default analysis aspect is about interest in the conversation. - For example, when A is conversing with B, the intelligent glasses described above may be worn by A to acquire and analyze images of the face and body of B in real time and output analysis results to A, so that A can determine whether B is interested in the current conversation or determine the credibility of what B has said.
- It is to be noted that when the selecting
device 16 receives a selecting instruction, the controller 12 as shown in FIG. 1b may obtain corresponding analysis results in accordance with the selecting instruction. For example, the body image at the current point in time is compared with that at a first point in time, and if the body appears nearer to the user in the former than in the latter, it can be determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior. - Wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
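As a non-limiting illustration (not part of the original disclosure), the body-image comparison described above can be sketched as follows. The use of a bounding-box area as the proximity cue and the growth threshold are assumed for the example; the disclosure only specifies that the body appearing nearer is taken as interest.

```python
# Hypothetical sketch: infer "moved nearer" by comparing the apparent size of
# the partner's detected body region between two frames taken a preset
# interval apart. A larger region at the current time suggests the partner
# has leaned toward or approached the wearer.

def behavior_result(box_now, box_then, growth_threshold=1.05):
    """Return True (interested) if the body appears nearer now.

    box_now, box_then: (width, height) of the detected body region in pixels.
    growth_threshold: relative area growth treated as 'nearer' (assumed value).
    """
    area_now = box_now[0] * box_now[1]
    area_then = box_then[0] * box_then[1]
    return area_now >= area_then * growth_threshold

# Example: the body region grew from 200x400 to 220x440 pixels.
assert behavior_result((220, 440), (200, 400)) is True
```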
- The
controller 12 can also analyze facial images to obtain analysis results of heart rate and pupil. For example, the controller analyzes facial images with contactless pulse oximetry to obtain an analysis result of heart rate; the controller obtains the pupil area in the facial image at the current point in time and that in the facial image at the second point in time, and compares them to get an analysis result of pupil. - Wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- Furthermore, the
controller 12 in the present embodiment may also include an analyzing module, which is operable to determine an output result of the conversation contents given by the conversation partner in accordance with the analysis results of behavior, heart rate and pupil. - For example, the analyzing module may determine the output result of the conversation contents given by the conversation partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive.
- Alternatively, in accordance with predetermined weighting factors of individual analysis results, the analyzing module may determine the output result of the conversation contents given by the conversation partner by multiplying the analysis results with their own weighting factors and adding the products together.
- Alternatively, the analyzing module may obtain the average value of all the analysis results and take it as the output result of the conversation contents of the conversation partner. It is to be noted that the positive result, as used herein, may be understood as the result desired by the user. As such, misjudgments are avoided effectively during analysis of the various analysis results. The output result of the conversation contents of the conversation partner is determined to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive. In an example to illustrate this, a stable heart rate is analyzed to mean no lying during heart rate analysis (a positive result); dilated pupils are analyzed to mean lying during pupil analysis (a negative result); and the conversation partner's moving nearer to the user is analyzed to mean no lying during behavior analysis (a positive result). As a result, misjudgments can be avoided, which otherwise may be caused by analyzing only facial images or body images. Alternatively, misjudgments can be eliminated according to differences between the weighting factors of individual analysis results, which follows a principle similar to that described above, and no further details will be described herein.
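The three fusion strategies of the analyzing module (majority vote, weighted sum, average) can be sketched as follows. This is an illustrative, non-limiting example: the encoding of results as scores in [0, 1], the 0.5 positivity cutoff, and the weight values are all assumptions, not taken from the disclosure.

```python
# Illustrative sketch of the three fusion strategies. Each analysis result
# (behavior, heart rate, pupil) is encoded as a score in [0, 1], where a
# score at or above the cutoff counts as "positive" (the result desired by
# the user, per the text above).

def majority_positive(behavior, heart_rate, pupil, cutoff=0.5):
    """Positive output when at least two of the three results are positive."""
    votes = sum(score >= cutoff for score in (behavior, heart_rate, pupil))
    return votes >= 2

def weighted_output(behavior, heart_rate, pupil, weights=(0.3, 0.4, 0.3)):
    """Multiply each result by its predetermined weighting factor and sum."""
    return behavior * weights[0] + heart_rate * weights[1] + pupil * weights[2]

def average_output(behavior, heart_rate, pupil):
    """Take the average of all the analysis results as the output."""
    return (behavior + heart_rate + pupil) / 3

# The worked example from the text: stable heart rate (positive), dilated
# pupils (negative), partner moving nearer (positive) -> overall positive.
assert majority_positive(behavior=0.9, heart_rate=0.8, pupil=0.2) is True
```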
- The intelligent glasses in the present embodiment may have other functionality, such as taking photographs, navigation, etc. The intelligent glasses in the present embodiment may be intelligent social glasses, whose hardware includes a glasses frame (i.e. a glasses body) as well as a binocular camera, a controller/processor, a wireless module, a GPS module, a power module, and the like mounted on the glasses frame.
- It can be seen from the above technical solution that in the head-mounted wearable apparatus and the information processing method and device thereof in the present disclosure, the head-mounted image collector acquires images of the conversation partner, and the controller analyzes the images to determine the truthfulness/interest of the conversation partner with respect to the current conversation and provides an output result through the output device. Moreover, the head-mounted wearable apparatus in the present disclosure is convenient to carry and low in cost. Furthermore, merely by acquiring images of the conversation partner, the head-mounted wearable apparatus can determine the partner's heart rate variation, eyeball movement and pupil change so as to get the credibility of what the partner has said or the partner's degree of interest in the conversation, so that the apparatus is convenient to operate, can be applied widely, and improves user experience.
- A second embodiment of the present disclosure will be presented in the following description, which is similar to the first embodiment, but is still different from it in that the image collector shown in
FIG. 1 may include a second camera to acquire body images of the conversation partner and a receiver to receive facial images of the conversation partner. For example, the conversation partner may also wear a head-mounted wearable apparatus, through which the partner's own facial images can be acquired and sent to other user equipment. In this way, the receiver may receive the facial images sent by the conversation partner. -
FIG. 2 shows a structure diagram of a head-mounted wearable apparatus provided in another embodiment of the present disclosure. As shown in FIG. 2, the head-mounted wearable apparatus includes a second camera 21, a controller 22 (e.g. a CPU or a microprocessor) connected with the second camera 21, an output device 23 (e.g. a display or a speaker) connected with the controller 22 and a receiving module 24 (e.g. a receiving antenna or a memory) connected with the controller 22. - The
second camera 21 may acquire body images of the conversation partner interacting with the user of the head-mounted wearable apparatus and the receiving module 24 may receive facial images of the conversation partner, such as those sent by the head-mounted wearable apparatus worn by the conversation partner. The controller 22 is operable to analyze the facial images and body images of the conversation partner and get analysis results, and the output device is operable to output the analysis results. It is to be noted that the head-mounted wearable apparatus is only an example of the present disclosure, and other wearable apparatuses such as smart watches, intelligent clothes, intelligent accessories, or the like may be used in embodiments of the present disclosure. Moreover, the interaction with the user of the head-mounted wearable apparatus may include a variety of other ways of interaction or combinations thereof in addition to conversation. For example, the interaction may proceed through body language such as gestures or through facial expression. When the user of the head-mounted wearable apparatus is interacting with a conversation partner through conversation, the above-mentioned analysis results include analysis results of conversation content and conversation manner. When the user of the head-mounted wearable apparatus is interacting with his conversation partner in other ways, the above-mentioned analysis results accordingly include analysis results of the contents and manner of the other ways of interaction. The present embodiment of the disclosure is described only in the case of the conversation partner who interacts with the user of the head-mounted wearable apparatus. - It is to be noted that what the receiving
module 24 can receive in the present embodiment is facial images of the conversation partner sent by the head-mounted wearable apparatus worn by him. In other embodiments, the receiving module 24 may also receive facial images of the conversation partner from any intelligent apparatus as long as the intelligent apparatus can acquire and send facial images of the conversation partner in real time. - Furthermore, the second camera in the present embodiment is preferably a binocular camera, which has a relatively high resolution and can acquire two body images of the conversation partner from different directions/locations for subsequent analysis by the controller.
- Optionally, the head-mounted wearable apparatus of the present embodiment may further include an image collector such as the
third camera 25 shown in FIG. 2 and a transmitting module 26 (e.g. a transmitter), both of which are connected with the controller. - The image collector, i.e. the corresponding
third camera 25, may be used to acquire facial images of the user of the head-mounted wearable apparatus and the transmitting module 26 may transmit the facial images of the user to the head-mounted wearable apparatus worn by the conversation partner. - That is to say, when two parties are conversing, they may each use their own head-mounted wearable apparatuses as shown in
FIG. 2 to determine the value of interest (e.g. an interest index) of the other party in the current conversation contents, the credibility of what the other party has said, or other information. - For example, when A and B are conversing and both wear head-mounted wearable apparatuses as shown in
FIG. 2 , the head-mounted wearable apparatus a worn by A acquires body images of B and facial images of A, and the head-mounted wearable apparatus b worn by B acquires facial images of B and body images of A; then the head-mounted wearable apparatus a worn by A receives the facial images of B sent by the head-mounted wearable apparatus b worn by B and analyzes the facial images and body images of B so as to get a result indicating whether B is interested in the current conversation or get the credibility of what B has just said or other information. - In practical applications, the head-mounted wearable apparatus as shown in
FIG. 2 may be intelligent glasses. As shown in FIG. 5, when conversing, both of the two parties of the interaction wear intelligent glasses respectively to learn about the interest of the other party in the current conversation contents and the credibility of what the other party has said. The intelligent glasses further include a glasses body, and the above-mentioned second camera 21, image collector, controller 22, output device 23, transmitting module 26 and receiving module 24 are all located on the glasses body. - Of course, the head-mounted wearable apparatus shown in
FIG. 2 may further include a selecting device connected with the controller 22, which is the same as the one in FIG. 1b and used to receive selecting instructions from the user for the controller to get analysis results corresponding to the selecting instructions. - By way of example, the above-mentioned selecting device may be selection buttons connected with the
controller 22, such as the buttons disposed on the glasses body for the user to select an analysis aspect. - Wherein when a button is activated by the user, the controller will get analysis results in the analysis aspect belonging to the activated button.
- Of course, buttons operable to enable or disable the intelligent mode may further be disposed on the glasses body, and if the user selects to activate the button operable to enable the intelligent mode, the controller will get analysis results in the default analysis aspect.
- The intelligent glasses in the present embodiment can perform qualitative analysis on the conversation contents of the conversation partner; they have a compact configuration, are convenient to carry, are low in cost, can be applied widely, and improve user experience.
-
FIG. 3 shows a flowchart of an information processing method for a head-mounted wearable apparatus in one embodiment of the present disclosure. As shown in FIG. 3, the head-mounted wearable apparatus in the present embodiment operates as follows. - In
step 301, image information of the conversation partner is acquired, which includes facial images and body images of the conversation partner. - For example, if the head-mounted wearable apparatus shown in
FIG. 1 is used by a user, in step 301 image information of the conversation partner will be acquired in real time with the image collector such as a binocular camera. - If the head-mounted wearable apparatus shown in
FIG. 2 is used by a user, in step 301 body images of the conversation partner will be acquired in real time with the image collector (the second camera), and facial images of the conversation partner, such as those sent by the head-mounted wearable apparatus worn by the conversation partner, will be received by the receiving module. - In
step 302, the body images are analyzed to get an analysis result of behavior and the facial images are analyzed to get analysis results of heart rate and pupil. - When the method shown in
FIG. 3 uses the head-mounted wearable apparatus shown in FIG. 1 or FIG. 2, step 302 and the following step 303 may be performed through the controller of the head-mounted wearable apparatus. By way of example, the controller may get an analysis result of heart rate through contactless pulse oximetry. - In the present embodiment, the analysis aspect of interest is selected by the user and thus the analysis results of behavior, heart rate and pupil are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the conversation contents.
- If the analysis aspect of credibility is selected, the analysis results of behavior, heart rate and pupil will respectively be analysis results of behavior, heart rate and pupil indicating credibility rate of the conversation contents.
- In
step 303, an output result of the conversation contents of the conversation partner is determined in accordance with the analysis results of behavior, heart rate and pupil. - In
step 304, the output result of conversation contents is output to the user of the head-mounted wearable apparatus. - When the method shown in
FIG. 3 uses the head-mounted wearable apparatus shown in FIG. 1 or FIG. 2, step 304 may be performed by the output device of the head-mounted wearable apparatus. - Generally, when a person is lying, significant changes will occur in his gestures, eye movement, use of words or other aspects, and therefore the eyeball area and the pupil area in the facial images may be analyzed in the above-mentioned embodiment to get an analysis result.
- The head-mounted wearable apparatus in the present embodiment may be the above-mentioned intelligent glasses, which can determine analysis results of variation of heart rate, eyeball movement and change of pupil by acquiring image information of the conversation partner so as to get the credibility of what the conversation partner has said or the degree of interest of the conversation partner in the conversation.
-
FIG. 4 shows a flowchart of an information processing method for intelligent glasses in one embodiment of the present disclosure. The information processing method for intelligent glasses in the present embodiment is as follows. It should be noted that the intelligent glasses in the present embodiment may be the head-mounted wearable apparatus shown in FIG. 1 or FIG. 2. - In
step 401, the selecting device of the intelligent glasses receives a selecting instruction. - Of course, in practical applications, the selecting instruction may be an analyzing instruction for interest or credibility. Furthermore, in other embodiments, the selecting device may be a receiving unit, the module/unit receiving selecting instructions is not limited in terms of name, as long as it has the functionality of receiving selecting instructions.
- In
step 402, the image collector of the intelligent glasses acquires image information of the conversation partner, which includes body images and facial images of the conversation partner. - That is to say, when the intelligent glasses are worn by a user and a button for social function is activated (a button as illustrated above), more than two cameras will take facial images and body images of the conversation partner automatically.
- In
step 403, the controller of the intelligent glasses analyzes the body images to get an analysis result of behavior. - By way of example, step 403 may include the following sub-steps.
- In sub-step 4031, the controller of the intelligent glasses compares the body image at the current point in time with that at a first point in time.
- In sub-step 4032, if the body image at the current point in time is nearer to the user relative to that at the first point in time, it can be determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior.
- Wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- In
step 404, the controller of the intelligent glasses analyzes the facial images to achieve analysis results of heart rate and pupil. - By way of example, step 404 may include the following sub-steps.
- In sub-step 4041, the controller of the intelligent glasses analyzes the facial images using contactless pulse oximetry to achieve an analysis result of heart rate.
- That is to say, the variation value of heart rate, and in turn the variation curve of heart rate, of the conversation partner are achieved through contactless pulse oximetry. For example, if the variation is above a set threshold, the credibility of what the conversation partner has said is low. Generally, when a common person is lying, his heart rate will vary significantly.
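The thresholding described above can be sketched as follows. This is an illustrative, non-limiting example: the disclosure states only that variation above a set threshold indicates low credibility; the peak-to-peak measure and the 15 bpm threshold are assumed for the example.

```python
# Hypothetical sketch of the credibility decision from heart-rate variation.

def credibility_from_heart_rate(rates, threshold=15.0):
    """Return 'low' credibility when heart-rate variation exceeds a threshold.

    rates: heart-rate samples (beats per minute) over the conversation.
    threshold: allowed peak-to-peak variation in bpm (assumed value).
    """
    variation = max(rates) - min(rates)
    return "low" if variation > threshold else "high"

# A spike from ~72 to 95 bpm exceeds the threshold -> low credibility.
assert credibility_from_heart_rate([72, 74, 95]) == "low"
```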
- In sub-step 4042, the controller of the intelligent glasses obtains the pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time.
- In sub-step 4043, the controller of the intelligent glasses compares the pupil area at the current point in time with that at the second point in time to get an analysis result of pupil.
- Wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
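The pupil comparison of sub-steps 4042-4043 can be sketched as follows. This is a non-limiting illustration: the disclosure specifies only that the pupil area at the current point in time is compared with that at the second point in time; the ratio-based classification and the dilation/contraction cutoffs are assumed.

```python
# Hypothetical sketch: classify the pupil change between the facial image at
# the current point in time and the one taken a preset interval earlier.

def pupil_result(area_now, area_then, dilate=1.2, contract=0.8):
    """Return 'dilated', 'contracted', or 'stable' from the pupil-area ratio.

    area_now, area_then: pupil areas (e.g. in pixels) in the two images.
    dilate, contract: ratio cutoffs for the two changes (assumed values).
    """
    ratio = area_now / area_then
    if ratio >= dilate:
        return "dilated"
    if ratio <= contract:
        return "contracted"
    return "stable"

# Per the text below, dilation can accompany pleasant or exciting topics,
# while contraction can accompany repulsive or irritating ones.
assert pupil_result(150.0, 100.0) == "dilated"
```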
- Generally, most people have their eyeballs move toward the upper right when they are lying, and move toward the upper left when they are trying to remember something that really happened. Therefore, it can be determined whether the conversation partner is lying in accordance with the direction of eyeball movement in the pupil area of his facial images.
- In addition, when a person is caught in a repulsive, irritating or provoking topic, his pupil will contract involuntarily. On the contrary, when a person is involved in a pleasant topic, his pupil will dilate involuntarily, and if a person feels panic, cheerful, fond of something or excited, his pupil may dilate to four or more times its normal size.
- In
step 405, the controller of the intelligent glasses determines an output result of the conversation contents of the conversation partner in accordance with the analysis results of behavior, heart rate and pupil. - By way of example, if at least two of the analysis results of behavior, heart rate and pupil are positive, the controller of the intelligent glasses may determine the output result of the conversation contents of the conversation partner to be positive. The positive result, as used herein, may be understood as the result desired by the user.
- That is to say, if the selecting instruction is one for interest, and the analysis result of heart rate indicates interest, the analysis result of pupil indicates interest and the analysis result of behavior indicates no interest, the output result will be one indicating interest.
- Alternatively, if the selecting instruction is an analysis instruction for credibility, and the analysis result of behavior indicates low credibility, the analysis result of heart rate indicates low credibility and the analysis result of pupil indicates high credibility, the output result will be one indicating low credibility.
- In another possible implementation, in accordance with predetermined weighting factors of individual analysis results, the controller of the intelligent glasses may multiply the analysis results with their own weighting factors and add the products together to get the output result of the conversation contents given by the conversation partner.
- In a third possible implementation, the controller of the intelligent glasses may calculate the average value of all the analysis results and take it as the output result of the conversation contents of the conversation partner.
- In
step 406, the output device of the intelligent glasses outputs the output result of the conversation contents to the user of the intelligent glasses. - By acquiring image information of the conversation partner, the intelligent glasses in the present embodiment may also determine variation of heart rate, eyeball movement and change of pupil of the conversation partner and in turn the credibility of what the conversation partner has said and the degree of interest of the conversation partner in the conversation. The intelligent glasses in the present embodiment of the disclosure are convenient to operate, of low costs, and make user experiences well improved.
- To better illustrate embodiments of the present disclosure, the method of analyzing the facial images through contactless pulse oximetry by the controller of the intelligent glasses to get an analysis result of heart rate in sub-step 4041 will be described in detail as follows.
- At present, contactless pulse oximetry based on SpO2 photographic technology may detect human heart rate using a common optical camera, wherein, for example, a video including facial images of a person is taken and the same analysis area (e.g. the area in the dashed line box) is determined in each image of the video.
An average value of the pixels in the G (green) channel and the B (blue) channel is extracted for the analysis area in each image of the video.
The variation curve of the person's heart rate can then be derived from the variation curves over time of the average G-channel and B-channel pixel values in the analysis areas of all the images of the video.
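The per-frame channel averaging described above can be sketched as follows. This is an illustrative, non-limiting example that stops at producing the raw G/B time series a pulse estimator would consume; frames are modeled as nested lists of (R, G, B) pixels to keep the sketch dependency-free, and the region layout is assumed.

```python
# Sketch of the contactless (rPPG-style) measurement: for a fixed analysis
# region in each video frame, average the green and blue channel intensities.
# The resulting time series varies with blood volume under the skin and can
# be turned into a heart-rate curve by downstream processing.

def channel_averages(frame, region):
    """Mean G and B intensity inside region = (top, left, bottom, right)."""
    top, left, bottom, right = region
    g_total = b_total = count = 0
    for row in frame[top:bottom]:
        for r, g, b in row[left:right]:
            g_total += g
            b_total += b
            count += 1
    return g_total / count, b_total / count

def gb_signals(frames, region):
    """Per-frame (G, B) averages: the raw signal a pulse estimator consumes."""
    return [channel_averages(frame, region) for frame in frames]

# Two tiny 2x2 "frames" with a uniform analysis region covering everything.
frames = [
    [[(10, 100, 50), (10, 100, 50)], [(10, 100, 50), (10, 100, 50)]],
    [[(10, 120, 60), (10, 120, 60)], [(10, 120, 60), (10, 120, 60)]],
]
signal = gb_signals(frames, region=(0, 0, 2, 2))
assert signal == [(100.0, 50.0), (120.0, 60.0)]
```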
- The head-mounted wearable apparatus in the present embodiment can find application in a variety of scenarios, such as lie detection, blind dates, question-and-answer sessions, etc.
- In the third aspect of the present embodiment of the disclosure, an information processing device for a head-mounted wearable apparatus is further provided. As shown in
FIG. 7, the information processing device for a head-mounted wearable apparatus in the present embodiment includes an acquiring unit 71, a processing unit 72, an analyzing unit 73 and an output unit 74. - The acquiring
unit 71 is operable to acquire image information of the conversation partner including facial images and body images of the conversation partner. - The
processing unit 72 is operable to analyze the body images to achieve an analysis result of behavior and analyze the facial images to achieve analysis results of heart rate and pupil. - The analyzing
unit 73 is operable to determine an output result of the conversation contents of the conversation partner in accordance with the analysis results of behavior, heart rate and pupil. - The
output unit 74 is operable to output the output result of the conversation contents to the user of the head-mounted wearable apparatus. - By way of example, the above-mentioned analyzing the body images to achieve an analysis result of behavior by the
processing unit 72 will be described as follows. - The body image at the current point in time is compared with that at the first point in time.
- If the body image at the current point in time is nearer to the user relative to that at the first point in time, it can be determined that the conversation partner is interested in the contents of the current conversation, resulting in an analysis result of behavior.
- Wherein the first point in time is prior to the current point in time, and there is a preset interval between the two points.
- Furthermore, the above mentioned analyzing the facial images to achieve analysis results of heart rate and pupil by the
processing unit 72 will be described as follows. - The facial images are analyzed using contactless pulse oximetry to achieve an analysis result of heart rate.
- The pupil area in the facial image at the current point in time and the pupil area in the facial image at the second point in time are obtained.
- The pupil area at the current point in time is compared with that at the second point in time to achieve an analysis result of pupil.
- Wherein the second point in time is prior to the current point in time, and there is a preset interval between the two points.
- Furthermore, the above-mentioned
analyzing unit 73 is operable to determine the output result of the conversation contents of the conversation partner to be positive when at least two of the analysis results of behavior, heart rate and pupil are positive. - Alternatively, in accordance with predetermined weighting factors of individual analysis results, an output result of the conversation contents of the conversation partner may be obtained by multiplying the analysis results with their own weighting factors and adding the products together.
- Alternatively, an average value of all the analysis results is obtained and taken as the output result of the conversation contents of the conversation partner.
- Moreover, in an optional implementation scene, the information processing device for a head-mounted wearable apparatus described above may include a receiving unit not shown in the figure, which will be described in the following.
- The receiving unit is operable to receive an analysis instruction for interest from the user before the body images and the facial images are analyzed by the
processing unit 72. - Accordingly, the analysis results of behavior, heart rate and pupil from the
processing unit 72 are respectively analysis results of behavior, heart rate and pupil indicating whether there is interest in the conversation contents. - In another optional implementation scene, the receiving unit is operable to receive an analysis instruction for credibility before the body images and the facial images are analyzed by the
processing unit 72. - Accordingly, the analysis results of behavior, heart rate and pupil from the
processing unit 72 are respectively analysis results of behavior, heart rate and pupil indicating the credibility rate of the conversation contents. - In an implementation process, the information processing device for a head-mounted wearable apparatus in the present embodiment may be implemented through software, which may be integrated into a physical structure of the head-mounted wearable apparatus to execute the process described above. Of course, in combination with the configurations of the head-mounted wearable apparatuses shown in
FIG. 1 and FIG. 2, the information processing device in the present embodiment may also be implemented through physical circuit structures, which constitutes no limitation on the present embodiment and depends on specific circumstances. - In the present embodiment, by acquiring image information of the conversation partner, the variation of heart rate, eyeball movement and change of pupil of the conversation partner, and in turn the credibility of what the conversation partner has said and the degree of interest of the conversation partner in the conversation, can be determined. The information processing device for a head-mounted wearable apparatus in the present embodiment of the disclosure is convenient to operate, low in cost and improves user experience.
- Those skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope thereof. Thus, if these modifications and variations of the present disclosure are within the scope of the claims of the disclosure as well as their equivalents, the present disclosure is also intended to include these modifications and variations.
- The present application claims priority of China patent application No. 201510609800.8 filed on Sep. 22, 2015, the disclosure of which is incorporated herein in its entirety by reference as a part of the present application.
Claims (26)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510609800 | 2015-09-22 | ||
CN201510609800.8 | 2015-09-22 | ||
CN201510609800.8A CN105183170B (en) | 2015-09-22 | 2015-09-22 | Wear-type wearable device and its information processing method, device |
PCT/CN2016/073629 WO2017049843A1 (en) | 2015-09-22 | 2016-02-05 | Wearable device, and information processing method and information processing apparatus therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170262696A1 true US20170262696A1 (en) | 2017-09-14 |
US10325144B2 US10325144B2 (en) | 2019-06-18 |
Family
ID=54905296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/326,114 Active 2036-02-12 US10325144B2 (en) | 2015-09-22 | 2016-02-05 | Wearable apparatus and information processing method and device thereof |
Country Status (3)
Country | Link |
---|---|
US (1) | US10325144B2 (en) |
CN (1) | CN105183170B (en) |
WO (1) | WO2017049843A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180160959A1 (en) * | 2016-12-12 | 2018-06-14 | Timothy James Wilde | Modular electronic lie and emotion detection systems, methods, and devices |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105183170B (en) | 2015-09-22 | 2018-09-25 | 京东方科技集团股份有限公司 | Wear-type wearable device and its information processing method, device |
CN107625527B (en) * | 2016-07-19 | 2021-04-20 | 杭州海康威视数字技术股份有限公司 | Lie detection method and device |
CN106236062B (en) * | 2016-08-09 | 2019-10-29 | 浙江大学 | A kind of police equipment of real-time monitoring policeman vital sign and field conditions on duty |
CN107396849A (en) * | 2017-07-28 | 2017-11-28 | 深圳市沃特沃德股份有限公司 | Obtain the method and device and pet wearable device of pet hobby |
US11119573B2 (en) * | 2018-09-28 | 2021-09-14 | Apple Inc. | Pupil modulation as a cognitive control signal |
CN109829927B (en) * | 2019-01-31 | 2020-09-01 | 深圳职业技术学院 | Electronic glasses and high-altitude scene image reconstruction method |
US11386804B2 (en) * | 2020-05-13 | 2022-07-12 | International Business Machines Corporation | Intelligent social interaction recognition and conveyance using computer generated prediction modeling |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040208496A1 (en) * | 2003-04-15 | 2004-10-21 | Hewlett-Packard Development Company, L.P. | Attention detection |
US7301648B2 (en) * | 2000-01-28 | 2007-11-27 | Intersense, Inc. | Self-referenced tracking |
US20130137076A1 (en) * | 2011-11-30 | 2013-05-30 | Kathryn Stone Perez | Head-mounted display based education and instruction |
US20130245396A1 (en) * | 2010-06-07 | 2013-09-19 | Affectiva, Inc. | Mental state analysis using wearable-camera devices |
US20160170584A1 (en) * | 2014-12-12 | 2016-06-16 | Samsung Electronics Co., Ltd. | Device and method for arranging contents displayed on screen |
US20160191995A1 (en) * | 2011-09-30 | 2016-06-30 | Affectiva, Inc. | Image analysis for attendance query evaluation |
US20160275817A1 (en) * | 2015-03-20 | 2016-09-22 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for storing and playback of information for blind users |
US20170095192A1 (en) * | 2010-06-07 | 2017-04-06 | Affectiva, Inc. | Mental state analysis using web servers |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4367663B2 (en) * | 2007-04-10 | 2009-11-18 | Sony Corporation | Image processing apparatus, image processing method, and program |
CN103941574A (en) | 2014-04-18 | 2014-07-23 | Deng Weiting | Intelligent spectacles |
CN104820495B (en) * | 2015-04-29 | 2019-06-21 | Jiang Zhenyu | Abnormal micro-expression recognition and reminding method and device |
CN105183170B (en) | 2015-09-22 | 2018-09-25 | BOE Technology Group Co., Ltd. | Head-mounted wearable apparatus and information processing method and device thereof |
2015
- 2015-09-22 CN CN201510609800.8A patent/CN105183170B/en active Active

2016
- 2016-02-05 US US15/326,114 patent/US10325144B2/en active Active
- 2016-02-05 WO PCT/CN2016/073629 patent/WO2017049843A1/en active Application Filing
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180160959A1 (en) * | 2016-12-12 | 2018-06-14 | Timothy James Wilde | Modular electronic lie and emotion detection systems, methods, and devices |
Also Published As
Publication number | Publication date |
---|---|
CN105183170A (en) | 2015-12-23 |
CN105183170B (en) | 2018-09-25 |
US10325144B2 (en) | 2019-06-18 |
WO2017049843A1 (en) | 2017-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10325144B2 (en) | Wearable apparatus and information processing method and device thereof | |
US10565763B2 (en) | Method and camera device for processing image | |
US10068130B2 (en) | Methods and devices for querying and obtaining user identification | |
WO2020216054A1 (en) | Sight line tracking model training method, and sight line tracking method and device | |
US9817235B2 (en) | Method and apparatus for prompting based on smart glasses | |
US11205426B2 (en) | Information processing device, information processing method, and program | |
US20170279898A1 (en) | Method for Accessing Virtual Desktop and Mobile Terminal | |
US10379602B2 (en) | Method and device for switching environment picture | |
US20180276281A1 (en) | Information processing system, information processing method, and storage medium | |
CN114466128B (en) | Target-user focus-tracking photographing method, electronic device, and storage medium |
CN108833262B (en) | Session processing method, device, terminal and storage medium | |
CN108564943B (en) | Voice interaction method and system | |
US20160231890A1 (en) | Information processing apparatus and phrase output method for determining phrases based on an image |
TW202009761A (en) | Identification method and apparatus and computer-readable storage medium | |
CN106227424A (en) | Picture display processing method and device |
CN106774849B (en) | Virtual reality equipment control method and device | |
WO2021047069A1 (en) | Face recognition method and electronic terminal device | |
CN106375178A (en) | Message display method and device based on instant messaging | |
US20130308829A1 (en) | Still image extraction apparatus | |
US20210264766A1 (en) | Anti-lost method and system for wearable terminal and wearable terminal | |
US20170034347A1 (en) | Method and device for state notification and computer-readable storage medium | |
CN105277193B (en) | Prompt information output method, apparatus and system | |
CN108632391B (en) | Information sharing method and device | |
CN109788367A (en) | Information prompting method and device, electronic device, and storage medium |
CN108922495A (en) | Screen luminance adjustment method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
2016-10-13 | AS | Assignment | Owner name: BOE TECHNOLOGY GROUP CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: ZHANG, JINGYU; REEL/FRAME: 040969/0635. Effective date: 20161013 |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |