CN104460955A - Information processing method and wearable electronic equipment - Google Patents

Information processing method and wearable electronic equipment

Info

Publication number
CN104460955A
Authority
CN
China
Prior art keywords
electronic equipment
sensed parameter
wearable electronic
sensing unit
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310421592.XA
Other languages
Chinese (zh)
Other versions
CN104460955B (en)
Inventor
董超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201310421592.XA priority Critical patent/CN104460955B/en
Publication of CN104460955A publication Critical patent/CN104460955A/en
Application granted granted Critical
Publication of CN104460955B publication Critical patent/CN104460955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements

Abstract

The invention discloses an information processing method for reducing the information loss rate. The method includes: acquiring, through a sensing unit, a sensed parameter that characterizes a head or facial motion of a user of a wearable electronic device; determining whether the sensed parameter meets a predetermined condition and producing a judgment result; if it does, generating a control instruction; and responding to the control instruction to control the wearable electronic device. The invention further discloses a wearable electronic device for implementing the method.

Description

Information processing method and wearable electronic device
Technical field
The present invention relates to the field of computers and embedded systems, and in particular to an information processing method and a wearable electronic device.
Background art
Smart glasses, also called smart eyewear, can function like a smartphone: they have an independent operating system, allow the user to install software, games, and other programs provided by software vendors, can be controlled by voice or motion to add schedule entries, navigate maps, interact with friends, take photos and videos, and start video calls with friends, and can access wireless networks through a mobile communication network.
With the arrival of the smart-glasses era, the way such electronic devices are controlled is gradually shifting from keyboards and touch screens to voice. Voice control needs no manual operation and is more convenient for the user. But it has an obvious drawback: when the user issues a voice command, the people nearby can also hear it. Even though the voice signal reaches the smart glasses by bone conduction, the user still has to speak the command aloud, which not only compromises privacy but also disturbs the people around.
For this problem, the prior-art solution is to replace voice commands with motion-sensing commands. For example, the user can tilt the head back, with different tilt angles corresponding to different commands.
The drawback of that scheme: the smart glasses may need fairly large user movements to acquire the motion information accurately. When the user's movement is slight, the smart glasses may fail to collect it; that is, information is easily lost in transmission, and the information that is collected may be inaccurate and not the command the user really intended, which in turn makes the error-response rate high when the smart glasses execute commands. Besides, a user constantly making motions to control the smart glasses looks very odd to the people around.
Summary of the invention
Embodiments of the present invention provide an information processing method and a wearable electronic device, to solve the prior-art technical problem that smart glasses easily lose information in the process of receiving commands.
An information processing method is applied to a wearable electronic device. The wearable electronic device includes a fixing unit and a sensing unit; the fixing unit maintains the relative position of the wearable electronic device and the user's head, and the sensing unit is arranged on the fixing unit or at a position near the fixing unit. The method includes:
acquiring a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device;
determining whether the sensed parameter meets a predetermined condition, and producing a judgment result;
when the judgment result indicates that the sensed parameter meets the predetermined condition, generating a control instruction;
responding to the control instruction, to control the wearable electronic device.
Preferably, a third sensing unit is arranged at a first position of the fixing unit, the first position being such that, when the user wears the wearable electronic device, the third sensing unit at the first position of the fixing unit faces the glabella of the user's head;
acquiring a sensed parameter through the sensing unit includes: acquiring the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth, transmitted through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth, transmitted through the air.
Preferably, the wearable electronic device includes a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit and the second sensing unit at a second position of the fixing unit, the third position and the second position being arranged symmetrically;
acquiring a sensed parameter through the sensing unit includes: acquiring a first sensed parameter through the first sensing unit, and acquiring a second sensed parameter through the second sensing unit; the first sensed parameter or the second sensed parameter is an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air, or a facial parameter characterizing a change in the user's facial features.
Preferably, before determining whether the sensed parameter meets a predetermined condition and producing a judgment result, the method further includes:
determining first amplitude information corresponding to the first sensed parameter, and determining second amplitude information corresponding to the second sensed parameter;
determining that, of the first amplitude information and the second amplitude information, the first amplitude information corresponds to the larger amplitude;
determining to select the first sensed parameter corresponding to the first amplitude information;
determining whether the sensed parameter meets a predetermined condition and producing a judgment result then includes: determining whether the first sensed parameter meets the predetermined condition, and producing the judgment result.
Preferably, before acquiring a sensed parameter through the sensing unit, the method further includes: determining selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic device can be controlled to select the first option or the second option.
Preferably, when the judgment result indicates that the sensed parameter meets the predetermined condition, generating a control instruction includes: determining, according to a correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
Preferably, responding to the control instruction to control the wearable electronic device includes: responding to the first control instruction and selecting the first option, so as to control the wearable electronic device with the first execution result corresponding to the first option.
A wearable electronic device includes a fixing unit and a sensing unit; the fixing unit maintains the relative position of the wearable electronic device and the user's head, and the sensing unit is arranged on the fixing unit or at a position near the fixing unit. The wearable electronic device includes:
an acquisition module, configured to acquire a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device;
a first determination module, configured to determine whether the sensed parameter meets a predetermined condition and produce a judgment result;
a generation module, configured to generate a control instruction when the judgment result indicates that the sensed parameter meets the predetermined condition;
a response module, configured to respond to the control instruction, to control the wearable electronic device.
Preferably, a third sensing unit is arranged at a first position of the fixing unit, the first position being such that, when the user wears the wearable electronic device, the third sensing unit at the first position of the fixing unit faces the glabella of the user's head;
the acquisition module is specifically configured to acquire the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air.
Preferably, the wearable electronic device includes a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit and the second sensing unit at a second position of the fixing unit, the third position and the second position being arranged symmetrically;
the acquisition module is specifically configured to acquire a first sensed parameter through the first sensing unit and a second sensed parameter through the second sensing unit; the first sensed parameter or the second sensed parameter is an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air, or a facial parameter characterizing a change in the user's facial features.
Preferably, the wearable electronic device further includes a second determination module, a third determination module, and a fourth determination module;
the second determination module is configured to determine first amplitude information corresponding to the first sensed parameter and second amplitude information corresponding to the second sensed parameter;
the third determination module is configured to determine that, of the first amplitude information and the second amplitude information, the first amplitude information corresponds to the larger amplitude;
the fourth determination module is configured to determine to select the first sensed parameter corresponding to the first amplitude information;
the first determination module is specifically configured to determine whether the first sensed parameter meets the predetermined condition and produce the judgment result.
Preferably, the wearable electronic device further includes a fifth determination module, configured to determine selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic device can be controlled to select the first option or the second option.
Preferably, the generation module is specifically configured to determine, according to the correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
Preferably, the response module is specifically configured to respond to the first control instruction and select the first option, so as to control the wearable electronic device with the first execution result corresponding to the first option.
The information processing method in embodiments of the present invention can be applied to a wearable electronic device that includes a fixing unit and a sensing unit, where the fixing unit maintains the relative position of the wearable electronic device and the user's head, and the sensing unit is arranged on the fixing unit or at a position near it. The method can include the following steps: acquiring a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device; determining whether the sensed parameter meets a predetermined condition and producing a judgment result; when the judgment result indicates that the sensed parameter meets the predetermined condition, generating a control instruction; and responding to the control instruction to control the wearable electronic device.
In embodiments of the present invention, when the user performs a facial motion, the wearable electronic device can acquire the sensed parameter associated with that motion. For example, multiple predetermined conditions can be preset in the electronic device; after acquiring the sensed parameter, the electronic device can judge whether the sensed parameter meets one of the predetermined conditions and, if it does, generate the control instruction corresponding to that predetermined condition, thereby controlling the wearable electronic device.
For example, if the wearable electronic device is a glasses-type electronic device, a user wearing it who wants to control it can do so directly through facial motion, without issuing voice commands, so bystanders are not disturbed and the user's privacy is protected as much as possible. Moreover, because a glasses-type electronic device sits on the user's face, the wearable electronic device can easily detect the sensed parameter when the user performs a facial motion, and control can thus be realized more accurately. This effectively avoids the prior-art problems of information loss when body motions are used for control and the wearable electronic device cannot receive accurate information, and of erroneous responses caused by inaccurate received information. It improves the reliability and security of information transmission, reduces the information loss rate and the error-response rate of the wearable electronic device, and is convenient for the user, improving the user experience.
Brief description of the drawings
Fig. 1 is the main flowchart of the information processing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the wearable electronic device in an embodiment of the present invention;
Fig. 3 is a schematic diagram of selection information in an embodiment of the present invention;
Fig. 4 is a structural diagram of the wearable electronic device in an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In embodiments of the present invention, the wearable electronic device can be, for example, smart glasses, or shutter 3D (three-dimensional) glasses, or another electronic device.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it.
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment one
Referring to Fig. 1, an embodiment of the present invention provides an information processing method. The method can be applied to a wearable electronic device that includes a fixing unit and a sensing unit; the fixing unit maintains the relative position of the wearable electronic device and the user's head, and the sensing unit is arranged on the fixing unit or at a position near it. The main flow of the method is as follows:
Step 101: acquire a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device.
Referring to Fig. 2, which is a schematic diagram of one possible wearable electronic device.
The wearable electronic device is, for example, a glasses-type electronic device, such as shutter-type 3D glasses. The glasses-type electronic device includes a structural part 200; the structural part includes a nose pad 201 and ear mounts 202 for wearing the glasses-type electronic device on the user's body. The glasses-type electronic device can also include a display unit 204.
In this embodiment, the display unit 204 is, for example, the lens of the glasses-type electronic device itself, so the user can watch external scenery through the display unit 204.
In embodiments of the present invention, if the wearable electronic device is a glasses-type electronic device, the fixing unit can be the spectacle frame.
When wearing the wearable electronic device, the user may want to control it.
When the user wants to control the wearable electronic device, the user can perform a facial motion.
For example, the facial motion can be a tooth motion, such as a left-tooth motion or a right-tooth motion.
A left-tooth motion can be a relative collision movement between the upper teeth and the lower teeth on the left side, and a right-tooth motion can be a relative collision movement between the upper teeth and the lower teeth on the right side.
Alternatively, the facial motion can be a movement of the facial features, such as an eyelid movement.
When the user performs the facial motion, the wearable electronic device can acquire the sensed parameter.
The facial motion can produce several kinds of sensed parameters. The sensed parameter can be, for example, an audio parameter: although the corresponding sound may be faint, the wearable electronic device sits on the user's face and can therefore still acquire this audio parameter accurately. Or it can be a facial motion parameter; because this motion parameter is produced by the user's facial motion, it can also be called a facial parameter.
Preferably, in embodiments of the present invention, the wearable electronic device can have at least one sensing unit, and the at least one sensing unit can be used to collect the sensed parameter.
Optionally, the wearable electronic device can be a glasses-type electronic device with one sensing unit, which can be called, for example, the third sensing unit. The third sensing unit can be arranged at a first position of the fixing unit, the first position being such that, when the user wears the wearable electronic device, the third sensing unit at the first position of the fixing unit faces the glabella of the user's head.
In embodiments of the present invention, acquiring a sensed parameter through the sensing unit can be: acquiring the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth, transmitted through the user's bones, or an audio parameter of that sound transmitted through the air.
Preferably, the sensed parameter can be the audio parameter of the tooth-collision sound transmitted through the user's bones, because if the sound is transmitted through the air, the sensing unit may not collect the audio parameter easily when the user is in a noisy environment.
Optionally, the wearable electronic device can be a glasses-type electronic device with two sensing units, a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit and the second sensing unit at a second position of the fixing unit, and the third position and the second position are, for example, arranged symmetrically. For example, the first sensing unit and the second sensing unit can be located on the two temples of the glasses-type electronic device, so that they can collect the sensed parameters from the left side and the right side of the user's face, respectively.
Acquiring a sensed parameter through the sensing unit can be: acquiring a first sensed parameter through the first sensing unit, and acquiring a second sensed parameter through the second sensing unit. The first sensed parameter or the second sensed parameter can be an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air, or a facial parameter characterizing a change in the user's facial features.
In embodiments of the present invention, the sensed parameter can be an audio parameter, a facial motion parameter, or another parameter, as long as it is a sensed parameter produced by the facial motion; the present invention places no restriction on this. When the sensed parameter is an audio parameter, the audio parameter can be transmitted through the user's bones or through the air.
In embodiments of the present invention, the sensing unit can be a sensor. Which sensor to use can be determined by the sensed parameter to be collected: for example, if the sensed parameter is an audio parameter, the sensor can be a sound sensor; if the sensed parameter is a facial motion parameter, the sensor can be a proximity sensor; and so on.
In embodiments of the present invention, if the electronic device has only one sensing unit, only one sensed parameter can be acquired through that sensing unit.
If the electronic device has the first sensing unit and the second sensing unit, the first sensed parameter can be acquired through the first sensing unit and/or the second sensed parameter through the second sensing unit. That is, the sensed parameter can then include the first sensed parameter and/or the second sensed parameter.
In embodiments of the present invention, the sound corresponding to the audio parameter can be the sound produced by the user's tooth motion; the facial action corresponding to the facial motion parameter can be the skin near the temple bulging, driven by the user's tooth motion, or the facial features changing because the user blinks or purses the lips, and so on.
A preferred application scenario of the embodiments of the present invention is: when the wearable electronic device is in use, if a choice must be made in selection information with two options, the control method in the embodiments of the present invention can be applied.
Selection information with two options can take many forms: for example, for a video file, the user can be prompted to select "play" or "stop"; after an audio file is recorded, the user can be prompted to select "save" or "cancel"; when browsing a photo album, the user can be prompted to select "previous" or "next"; when new information is received, the user can be prompted to select "view" or "cancel"; and so on. The two options lead to two different, and generally opposite, execution results.
Preferably, in another embodiment of the present invention, before the sensed parameter is acquired through the sensing unit, i.e., before the user performs the facial motion, the wearable electronic device can first determine selection information having a first option and a second option that lead to different execution results; for example, the selection information can be as described above: for a video file, prompting the user to select "play" or "stop"; after an audio file is recorded, prompting the user to select "save" or "cancel"; and so on.
The selection information has the first option and the second option.
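For illustration only (this sketch is not part of the patent disclosure), the selection information described above might be modeled as follows in Python; all names and the callback design are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SelectionInfo:
    """Two-option prompt shown on the wearable device's display.

    Hypothetical structure: the patent only requires a first and a
    second option that lead to different execution results.
    """
    first_option: str                  # e.g. "save"
    second_option: str                 # e.g. "cancel"
    on_first: Callable[[], None]       # first execution result
    on_second: Callable[[], None]      # second execution result

# Example: the prompt generated after recording a piece of video.
save_or_cancel = SelectionInfo(
    first_option="save",
    second_option="cancel",
    on_first=lambda: print("video saved"),
    on_second=lambda: print("recording discarded"),
)
```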
Step 102: determine whether the sensed parameter meets a predetermined condition, and produce a judgment result.
In embodiments of the present invention, after the sensed parameter is acquired, whether it meets the predetermined condition can be determined.
At least one predetermined condition can be preset in the wearable electronic device, and each predetermined condition can correspond to a control instruction. That is, a correspondence set can be preset in the wearable electronic device; the correspondence set can contain at least one pair of a predetermined condition and a control instruction, and within the correspondence set, predetermined conditions and control instructions can correspond one to one.
For example, if there is only one sensed parameter, whether the sensed parameter meets one of the predetermined conditions can be determined directly.
For example, two predetermined conditions are stored in the wearable electronic device: the first predetermined condition is the sound of a single collision, and the second predetermined condition is the sound of a double collision. The first control instruction, corresponding to the first predetermined condition, controls selection of the "yes" option, and the second control instruction, corresponding to the second predetermined condition, controls selection of the "no" option.
Then, after the sensed parameter is obtained, it can be determined whether the sensed parameter is the sound of a single collision or the sound of a double collision, i.e., whether the sensed parameter meets one of the predetermined conditions.
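As a hedged sketch of this step (again not part of the patent; the threshold, gap, and instruction names are invented for illustration), the two predetermined conditions can be modeled as a count of collision transients in the captured audio, and the correspondence set as a mapping from that count to a control instruction:

```python
import numpy as np

# Hypothetical correspondence set: collision count -> control instruction.
CORRESPONDENCE_SET = {
    1: "SELECT_YES",   # first predetermined condition: single collision
    2: "SELECT_NO",    # second predetermined condition: double collision
}

def count_collisions(samples: np.ndarray, threshold: float = 0.3,
                     min_gap: int = 800) -> int:
    """Count collision transients: runs of above-threshold samples
    separated by at least `min_gap` samples (values are assumptions)."""
    loud = np.flatnonzero(np.abs(samples) > threshold)
    if loud.size == 0:
        return 0
    # A new collision starts wherever the gap to the previous loud
    # sample exceeds min_gap.
    return int(1 + np.sum(np.diff(loud) > min_gap))

def match_condition(samples: np.ndarray):
    """Return the control instruction whose predetermined condition the
    sensed parameter meets, or None when no condition is met."""
    return CORRESPONDENCE_SET.get(count_collisions(samples))
```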
For example, if the first sensed parameter and the second sensed parameter are both acquired, then, before determining whether the sensed parameter meets the predetermined condition and producing the judgment result, the first amplitude information corresponding to the first sensed parameter and the second amplitude information corresponding to the second sensed parameter can be determined. After the first amplitude information and the second amplitude information are determined, it can be determined which of the two corresponds to the larger amplitude, i.e., the magnitude relationship between the first amplitude corresponding to the first amplitude information and the second amplitude corresponding to the second amplitude information. If, for example, the first amplitude is greater than the second amplitude, it can be determined to select the first sensed parameter corresponding to the first amplitude information.
For example, when the user's left-side teeth move, whether the resulting audio parameter is transmitted through the user's bones or through the air, the first sensing unit corresponding to the left side of the user's face will very likely acquire the first sensed parameter, and the second sensing unit corresponding to the right side of the user's face can also acquire the second sensed parameter. To judge whether the sensed parameter meets the predetermined condition, one sensed parameter must first be selected from the first and the second. The embodiments of the present invention compare amplitudes: the larger the amplitude, the closer the sensing unit is to the sound source, and thus the closer the result is to the user's true selection.
Determining whether the sensed parameter meets the predetermined condition and producing the judgment result can then be: determining whether the first sensed parameter meets the predetermined condition, and producing the judgment result.
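The amplitude comparison itself can be sketched as below; taking the amplitude to be the RMS of each captured signal is an assumption of this sketch, since the patent only requires comparing the two amplitudes and keeping the larger:

```python
import numpy as np

def select_louder(first_param: np.ndarray, second_param: np.ndarray) -> np.ndarray:
    """Keep the sensed parameter with the larger amplitude, i.e. the one
    captured by the sensing unit closer to the sound source."""
    first_amplitude = np.sqrt(np.mean(first_param ** 2))    # first amplitude information
    second_amplitude = np.sqrt(np.mean(second_param ** 2))  # second amplitude information
    return first_param if first_amplitude >= second_amplitude else second_param
```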
Step 103: when the judgment result indicates that the sensed parameter meets the predetermined condition, generate a control instruction.
Specifically, in embodiments of the present invention, when the judgment result indicates that the sensed parameter meets the predetermined condition, generating a control instruction can be: determining, according to the correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; for example, the control instruction corresponds to the first option.
Multiple predetermined conditions can be preset in the wearable electronic device, and the wearable electronic device can judge whether the sensed parameter meets one of them. If the judgment result shows that the sensed parameter meets one of the predetermined conditions, the wearable electronic device can determine the control instruction corresponding to that predetermined condition from the correspondence set, and can thus generate the control instruction.
Step 104: respond to the control instruction, to control the wearable electronic device.
In embodiments of the present invention, if, for example, the control instruction corresponds to the first option, responding to the control instruction to control the wearable electronic device can specifically be: responding to the first control instruction and selecting the first option, so as to control the wearable electronic device with the first execution result corresponding to the first option.
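Putting steps 101-104 together, a minimal end-to-end sketch, reusing `match_condition` and `SelectionInfo` from the sketches above; `read_sensor` and the instruction strings are hypothetical:

```python
def process_once(read_sensor, selection: "SelectionInfo") -> None:
    """Steps 101-104: acquire, judge, generate, respond."""
    samples = read_sensor()                 # step 101: acquire the sensed parameter
    instruction = match_condition(samples)  # steps 102-103: judge, then look up the instruction
    if instruction is None:                 # judgment result: no predetermined condition met
        return
    # Step 104: respond to the control instruction to control the device.
    if instruction == "SELECT_YES":
        selection.on_first()
    elif instruction == "SELECT_NO":
        selection.on_second()
```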
A concrete example follows.
Embodiment two
The wearable electronic device is as shown in Fig. 2.
For example, after a piece of video is recorded, the wearable electronic device generates the selection information, which includes the first option, "save", and the second option, "cancel", as shown in Fig. 3.
As can be seen from Fig. 2, the wearable electronic device can have a display unit 204, so the user can see the selection information through the display unit 204, and naturally also the first option and the second option. The user can then choose between the first option and the second option through the facial motion.
The wearable electronic device is a glasses-type electronic device with one sensor on each of its two temples; for example, both sensors are sound sensors, used to collect the audio parameter from the left side of the user's face and the audio parameter from the right side, respectively.
For example, the user performs a left-tooth motion, a relative collision movement between the upper teeth and the lower teeth on the left side. Generally, although the user moves the teeth on one side, the teeth on the other side are sometimes affected and also move relative to each other. Therefore, from the user's tooth motion, the wearable electronic device may acquire the first sensed parameter through the first sensing unit and the second sensed parameter through the second sensing unit.
After acquiring the first sensed parameter and the second sensed parameter, the wearable electronic device can first select one of them.
For example, the wearable electronic device can determine the first amplitude information corresponding to the first sensed parameter and the second amplitude information corresponding to the second sensed parameter, and can then determine the first amplitude corresponding to the first amplitude information and the second amplitude corresponding to the second amplitude information.
After determining the first amplitude and the second amplitude, the wearable electronic device can compare their sizes. In this embodiment, because the user performed a left-tooth motion, the sound from the left side of the user's face is louder than the sound from the right side, so the first amplitude is greater than the second amplitude.
After determining that the first amplitude is greater than the second amplitude, the wearable electronic device can determine to select the first sensed parameter corresponding to the first amplitude.
Multiple predetermined conditions can be pre-stored in the wearable electronic device; for example, a first predetermined condition and a second predetermined condition are pre-stored, the first predetermined condition being the sound of a single collision and the second being the sound of a double collision.
The correspondence set between predetermined conditions and control instructions can also be pre-stored in the wearable electronic device; for example, the correspondence set can contain the correspondence between the first predetermined condition and a first control instruction, and between the second predetermined condition and a second control instruction. For example, the first control instruction controls the wearable electronic device to select the first option, and the second control instruction controls it to select the second option.
After determining to select the first sensed parameter, the wearable electronic device can judge whether the first sensed parameter meets the first predetermined condition. Suppose the judgment determines that it does not; the wearable electronic device can then continue to judge whether the first sensed parameter meets the second predetermined condition, and suppose the judgment result determines that it does.
After determining that the first sensed parameter meets the second predetermined condition, the wearable electronic device can determine from the correspondence set the control instruction corresponding to the second predetermined condition, in this embodiment the second control instruction, and can then generate the second control instruction.
After generating the second control instruction, the wearable electronic device can respond to the second control instruction and control the wearable electronic device. In this embodiment, by executing the second control instruction, the wearable electronic device is controlled with the second execution result corresponding to the second option.
Since the second option in this embodiment is "cancel", executing the second control instruction cancels the recorded piece of video.
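To connect this embodiment to the sketches above (`select_louder`, `match_condition`, `save_or_cancel`), a synthetic walk-through; the signal shapes and numbers are fabricated purely for illustration:

```python
import numpy as np

# Two collision transients 2000 samples apart, louder on the left temple.
click = np.zeros(4000)
click[100], click[2100] = 1.0, 1.0                 # synthetic double collision
left_signal = 0.6 * click                          # first sensed parameter (left side)
right_signal = 0.1 * click                         # second sensed parameter (right side)

chosen = select_louder(left_signal, right_signal)  # the left signal wins on amplitude
assert match_condition(chosen) == "SELECT_NO"      # double collision -> second instruction
save_or_cancel.on_second()                         # "cancel": discard the recording
```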
Embodiment three
Referring to Fig. 4, an embodiment of the present invention provides a wearable electronic device. The wearable electronic device includes a fixing unit and a sensing unit; the fixing unit maintains the relative position of the wearable electronic device and the user's head, and the sensing unit is arranged on the fixing unit or at a position near it. The wearable electronic device can include an acquisition module 401, a first determination module 402, a generation module 403, and a response module 404.
Preferably, the wearable electronic device can also include a second determination module, a third determination module, a fourth determination module, and a fifth determination module.
The acquisition module 401 can be used to acquire a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device.
The first determination module 402 can be used to determine whether the sensed parameter meets a predetermined condition and produce a judgment result.
The generation module 403 can be used to generate a control instruction when the judgment result indicates that the sensed parameter meets the predetermined condition.
The response module 404 can be used to respond to the control instruction, to control the wearable electronic device.
In embodiments of the present invention, the third sensing unit is arranged at a first position of the fixing unit, the first position being such that, when the user wears the wearable electronic device, the third sensing unit at the first position of the fixing unit faces the glabella of the user's head. The acquisition module 401 can specifically be used to acquire the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air.
In embodiments of the present invention, the wearable electronic device includes a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit and the second sensing unit at a second position of the fixing unit, arranged symmetrically. The acquisition module 401 can specifically be used to acquire a first sensed parameter through the first sensing unit and a second sensed parameter through the second sensing unit; the first sensed parameter or the second sensed parameter is an audio parameter of the tooth-collision sound transmitted through the user's bones, or an audio parameter of the tooth-collision sound transmitted through the air, or a facial parameter characterizing a change in the user's facial features.
The second determination module is configured to determine the first amplitude information corresponding to the first sensed parameter and the second amplitude information corresponding to the second sensed parameter;
the third determination module is configured to determine that, of the first amplitude information and the second amplitude information, the first amplitude information corresponds to the larger amplitude;
the fourth determination module is configured to determine to select the first sensed parameter corresponding to the first amplitude information;
the first determination module 402 can specifically be used to determine whether the first sensed parameter meets the predetermined condition and produce the judgment result.
The fifth determination module is used to determine selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic device can be controlled to select the first option or the second option.
The generation module 403 can specifically be used to determine, according to the correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
The response module 404 can specifically be used to respond to the first control instruction and select the first option, so as to control the wearable electronic device with the first execution result corresponding to the first option.
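For illustration, the module structure of Fig. 4 might be sketched as a single class; the class, its predicate-keyed correspondence set, and all method names are hypothetical, not from the patent:

```python
class WearableDevice:
    """Modules 401-404 of Fig. 4 as methods (hypothetical design)."""

    def __init__(self, sensing_unit, correspondence_set):
        self.sensing_unit = sensing_unit              # sensor on the fixing unit
        # Here each predetermined condition is a predicate over the
        # sensed parameter, mapped to a callable control instruction.
        self.correspondence_set = correspondence_set

    def acquire(self):
        """Acquisition module 401: read the sensed parameter."""
        return self.sensing_unit.read()

    def judge(self, sensed):
        """First determination module 402: produce the judgment result,
        i.e. the matched predetermined condition or None."""
        for condition in self.correspondence_set:
            if condition(sensed):
                return condition
        return None

    def generate(self, condition):
        """Generation module 403: look up the control instruction."""
        return self.correspondence_set[condition]

    def respond(self, instruction):
        """Response module 404: execute the control instruction."""
        instruction()

    def run_once(self):
        sensed = self.acquire()
        condition = self.judge(sensed)
        if condition is not None:
            self.respond(self.generate(condition))
```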
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is used for illustration. In practical applications, the above functions can be assigned as needed to different functional modules; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, device, and units described above, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, such as combining multiple units or components or integrating them into another system, or ignoring or not executing some features. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed can be through some interfaces, and the indirect couplings or communication connections between devices or units can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application can be integrated into one processing unit, or each unit can exist alone physically, or two or more units can be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of this application, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are intended only to describe the technical solutions of this application in detail. The description of the above embodiments merely helps in understanding the method of the present invention and its core ideas and should not be construed as limiting the present invention. Changes or substitutions that those skilled in the art can easily think of within the technical scope disclosed by the present invention shall all be covered within the protection scope of the present invention.

Claims (14)

1. An information processing method, applied to a wearable electronic device, the wearable electronic device including a fixing unit and a sensing unit, the fixing unit maintaining the relative position of the wearable electronic device and the user's head, the sensing unit being arranged on the fixing unit or at a position near the fixing unit, the method comprising:
acquiring a sensed parameter through the sensing unit, the sensed parameter characterizing a head or facial motion of the user wearing the wearable electronic device;
determining whether the sensed parameter meets a predetermined condition, and producing a judgment result;
when the judgment result indicates that the sensed parameter meets the predetermined condition, generating a control instruction;
responding to the control instruction, to control the wearable electronic device.
2. the method for claim 1, it is characterized in that, described 3rd sensing unit is arranged on the primary importance of described fixed cell, described primary importance, to make when user wears described Wearable electronic equipment, faces the place between the eyebrows of the head of described user when described 3rd sensing unit is positioned at the described primary importance of described fixed cell;
Sensed parameter is obtained by described sensing unit, comprise: obtain described sensed parameter by described sensing unit, the sound that produces collides by the audio parameter after the bone conduction of described user in the tooth portion that described sensed parameter is described user, or the sound that produces collides by the audio parameter after air transmitted in the described sensed parameter tooth portion that is described user.
3. the method for claim 1, it is characterized in that, described Wearable electronic equipment comprises the first sensing unit and the second sensing unit, and described first sensing unit is arranged on the 3rd position of described fixed cell, and described second sensing unit is arranged on the second place of described fixed cell; Described 3rd position and the second place are symmetrical arranged;
Obtain sensed parameter by described sensing unit, comprising: obtain the first sensed parameter by described first sensing unit, and obtain the second sensed parameter by described second sensing unit; Described first sensed parameter or described second sensed parameter are for: the tooth portion of described user colliding the sound that produces by the audio parameter after the bone conduction of described user, or for the tooth portion of described user colliding the sound that produces by the audio parameter after air transmitted, or be the facial parameters that the face feature of described user changes.
4. The method according to claim 3, wherein, before determining whether the sensing parameter satisfies a predetermined condition and producing a judgment result, the method further comprises (see the amplitude-selection sketch after the claims):
determining first amplitude information corresponding to the first sensing parameter, and determining second amplitude information corresponding to the second sensing parameter;
determining that, of the first amplitude information and the second amplitude information, the first amplitude information corresponds to the larger amplitude; and
selecting the first sensing parameter corresponding to the first amplitude information;
and wherein determining whether the sensing parameter satisfies a predetermined condition and producing a judgment result comprises: determining whether the first sensing parameter satisfies the predetermined condition, and producing the judgment result.
5. the method for claim 1, is characterized in that, before obtaining sensed parameter by described sensing unit, also comprise: determine a selection information, described selection information has first option and the second option that can cause different execution result; Wherein, by performing described steering order, described electronic equipment can be controlled and select described first option or described second option.
6. The method according to claim 5, wherein generating a control instruction when the judgment result indicates that the sensing parameter satisfies the predetermined condition comprises: determining, according to a set of correspondences between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition, the control instruction corresponding to the first option.
7. The method according to claim 6, wherein responding to the control instruction, so as to control the wearable electronic equipment, comprises: responding to the control instruction by selecting the first option, so as to control the wearable electronic equipment through a first execution result corresponding to the first option.
8. Wearable electronic equipment, comprising a fixing unit and a sensing unit, the fixing unit being configured to maintain a relative positional relationship between the wearable electronic equipment and a user's head, and the sensing unit being arranged on the fixing unit or at a position near the fixing unit, the wearable electronic equipment further comprising:
an acquisition module, configured to obtain a sensing parameter through the sensing unit, the sensing parameter characterizing a facial movement of the head of a user wearing the wearable electronic equipment;
a first determination module, configured to determine whether the sensing parameter satisfies a predetermined condition and to produce a judgment result;
a generation module, configured to generate a control instruction when the judgment result indicates that the sensing parameter satisfies the predetermined condition; and
a response module, configured to respond to the control instruction, so as to control the wearable electronic equipment.
9. The wearable electronic equipment according to claim 8, wherein the sensing unit is arranged at a first position of the fixing unit, the first position being such that, when the user wears the wearable electronic equipment and the sensing unit is located at the first position of the fixing unit, the sensing unit faces the area between the eyebrows of the user's head; and
the acquisition module is specifically configured to obtain the sensing parameter through the sensing unit, the sensing parameter being an audio parameter of a sound produced by collision of the user's teeth and transmitted through bone conduction of the user, or an audio parameter of a sound produced by collision of the user's teeth and transmitted through the air.
10. The wearable electronic equipment according to claim 8, wherein the wearable electronic equipment comprises a first sensing unit and a second sensing unit, the first sensing unit being arranged at a third position of the fixing unit and the second sensing unit being arranged at a second position of the fixing unit, the third position and the second position being arranged symmetrically; and
the acquisition module is specifically configured to obtain a first sensing parameter through the first sensing unit and a second sensing parameter through the second sensing unit, wherein each of the first sensing parameter and the second sensing parameter is an audio parameter of a sound produced by collision of the user's teeth and transmitted through bone conduction of the user, an audio parameter of such a sound transmitted through the air, or a facial parameter of a change in a facial feature of the user.
11. The wearable electronic equipment according to claim 10, further comprising a second determination module, a third determination module, and a fourth determination module, wherein:
the second determination module is configured to determine first amplitude information corresponding to the first sensing parameter, and to determine second amplitude information corresponding to the second sensing parameter;
the third determination module is configured to determine that, of the first amplitude information and the second amplitude information, the first amplitude information corresponds to the larger amplitude;
the fourth determination module is configured to select the first sensing parameter corresponding to the first amplitude information; and
the first determination module is specifically configured to determine whether the first sensing parameter satisfies the predetermined condition and to produce the judgment result.
12. The wearable electronic equipment according to claim 8, further comprising a fifth determination module, configured to determine selection information, the selection information having a first option and a second option that lead to different execution results; wherein the electronic equipment can be controlled, by executing the control instruction, to select the first option or the second option.
13. The wearable electronic equipment according to claim 12, wherein the generation module is specifically configured to determine, according to a set of correspondences between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition, the control instruction corresponding to the first option.
14. The wearable electronic equipment according to claim 13, wherein the response module is specifically configured to respond to the control instruction by selecting the first option, so as to control the wearable electronic equipment through a first execution result corresponding to the first option.
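Claims 4 and 11 above describe selecting, of two sensing parameters captured by symmetrically placed sensing units, the one whose amplitude information is larger, and applying the predetermined condition only to it. The following Python sketch illustrates that selection under stated assumptions: signals are lists of normalized samples, amplitude information is modeled as the peak absolute sample value, and the 0.6 threshold is invented.

    def amplitude(samples):
        # Amplitude information, modeled as the peak absolute sample value.
        return max(abs(s) for s in samples)

    def select_and_judge(first_parameter, second_parameter, threshold=0.6):
        first_amplitude = amplitude(first_parameter)    # first amplitude information
        second_amplitude = amplitude(second_parameter)  # second amplitude information
        # Keep the sensing parameter with the larger amplitude, then judge it.
        chosen = first_parameter if first_amplitude >= second_amplitude else second_parameter
        return amplitude(chosen) >= threshold  # judgment result

    # Example: the first (say, left-side) unit hears the tooth click more strongly.
    print(select_and_judge([0.1, 0.7, -0.4], [0.05, 0.2, -0.1]))  # True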
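Claims 5 to 7 describe a correspondence set mapping a predetermined condition to a control instruction, with the instruction selecting one option of the selection information. The sketch below shows that dispatch; the condition names and the answer/ignore options are invented examples, not taken from the disclosure.

    CORRESPONDENCE_SET = {
        # predetermined condition -> control instruction
        "single_tooth_click": "SELECT_FIRST_OPTION",
        "double_tooth_click": "SELECT_SECOND_OPTION",
    }

    EXECUTION_RESULTS = {
        "SELECT_FIRST_OPTION": "answer call",   # first execution result
        "SELECT_SECOND_OPTION": "ignore call",  # second execution result
    }

    def respond(condition):
        # Map the satisfied condition to its control instruction, then
        # select the option and return its execution result.
        instruction = CORRESPONDENCE_SET.get(condition)
        return EXECUTION_RESULTS[instruction] if instruction else None

    print(respond("single_tooth_click"))  # answer call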
CN201310421592.XA 2013-09-16 2013-09-16 Information processing method and wearable electronic equipment Active CN104460955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310421592.XA CN104460955B (en) Information processing method and wearable electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310421592.XA CN104460955B (en) Information processing method and wearable electronic equipment

Publications (2)

Publication Number Publication Date
CN104460955A 2015-03-25
CN104460955B CN104460955B (en) 2018-08-10

Family

ID=52907158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310421592.XA Active CN104460955B (en) 2013-09-16 2013-09-16 A kind of information processing method and wearable electronic equipment

Country Status (1)

Country Link
CN (1) CN104460955B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2034442U (en) * 1987-05-13 1989-03-22 龚鹓文 Double-guiding type hearing aid series for deaf-mute
CN1531676A * 2001-06-01 2004-09-22 Sony Corp User input apparatus
CN1512490A (en) * 2002-12-30 2004-07-14 吕小麟 Method and device for inputing control signal
CN101272727A (en) * 2005-09-27 2008-09-24 潘尼公司 A device for controlling an external unit
CN101785327A (en) * 2007-07-23 2010-07-21 艾瑟斯技术有限责任公司 Diaphonic acoustic transduction coupler and ear bud
CN102906623A (en) * 2010-02-28 2013-01-30 奥斯特豪特集团有限公司 Local advertising content on an interactive head-mounted eyepiece

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165052A1 (en) * 2015-04-13 2016-10-20 Empire Technology Development Llc Detecting facial expressions
US20180107275A1 (en) * 2015-04-13 2018-04-19 Empire Technology Development Llc Detecting facial expressions
US10515474B2 (en) 2017-01-19 2019-12-24 Mindmaze Holding Sa System, method and apparatus for detecting facial expression in a virtual reality system
US10521014B2 (en) 2017-01-19 2019-12-31 Mindmaze Holding Sa Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system
US10943100B2 (en) 2017-01-19 2021-03-09 Mindmaze Holding Sa Systems, methods, devices and apparatuses for detecting facial expression
US11195316B2 (en) 2017-01-19 2021-12-07 Mindmaze Holding Sa System, method and apparatus for detecting facial expression in a virtual reality system
US11495053B2 (en) 2017-01-19 2022-11-08 Mindmaze Group Sa Systems, methods, devices and apparatuses for detecting facial expression
US11709548B2 (en) 2017-01-19 2023-07-25 Mindmaze Group Sa Systems, methods, devices and apparatuses for detecting facial expression
US11328533B1 (en) 2018-01-09 2022-05-10 Mindmaze Holding Sa System, method and apparatus for detecting facial expression for motion capture
CN109144245A (en) * 2018-07-04 2019-01-04 Oppo(重庆)智能科技有限公司 Apparatus control method and related product
CN114304800A (en) * 2022-03-16 2022-04-12 江苏环亚医用科技集团股份有限公司 Helmet with adjustable video shooting transmission module
CN114304800B (en) * 2022-03-16 2022-05-10 江苏环亚医用科技集团股份有限公司 Helmet with adjustable video shooting transmission module

Also Published As

Publication number Publication date
CN104460955B (en) 2018-08-10

Similar Documents

Publication Publication Date Title
CN104460955A (en) Information processing method and wearable electronic equipment
US10356398B2 (en) Method for capturing virtual space and electronic device using the same
US20190164394A1 (en) System with wearable device and haptic output device
US10776618B2 (en) Mobile terminal and control method therefor
CN104205880B (en) Audio frequency control based on orientation
CN110536665A (en) Simulating spatial perception using virtual echolocation
CN114885274B (en) Spatialization audio system and method for rendering spatialization audio
CN108965954A (en) Terminal using intelligent analysis to reduce video playback duration
CN110506249A (en) Information processing equipment, information processing method and recording medium
CN106575156A (en) Smart placement of virtual objects to stay in the field of view of a head mounted display
CN104750245A (en) Systems and methods for recording and playing back point-of-view videos with haptic content
CN106502377A (en) Mobile terminal and its control method
CN108495045A (en) Image capturing method, device, electronic device and storage medium
US10984571B2 (en) Preventing transition shocks during transitions between realities
EP4254353A1 (en) Augmented reality interaction method and electronic device
CN105653020A (en) Time traveling method and apparatus and glasses or helmet using same
CN106341521B (en) Mobile terminal
CN106406537A (en) Display method and device
US20220270315A1 (en) Systems configured to control digital characters utilizing real-time facial and/or body motion capture and methods of use thereof
CN106937143A (en) Playback control method, apparatus and device for virtual reality video
CN108924534A (en) Panoramic image display method, client, server and storage medium
CN106687944A (en) Activity based text rewriting using language generation
US11416075B1 (en) Wearable device and user input system for computing devices and artificial reality environments
CN107408186A (en) Display of privacy content
CN115150555B (en) Video recording method, device, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant