CN104460955B - Information processing method and wearable electronic device - Google Patents
- Publication number
- CN104460955B (application CN201310421592.XA)
- Authority
- CN
- China
- Prior art keywords
- electronic equipment
- sensed parameter
- wearable electronic
- user
- sensing unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an information processing method for reducing the information loss rate. The method includes: obtaining a sensed parameter through the sensing unit, the sensed parameter characterizing a facial movement of the head of a user wearing the wearable electronic device; determining whether the sensed parameter meets a predetermined condition, and generating a judging result; when the judging result indicates that the sensed parameter meets the predetermined condition, generating a control instruction; and responding to the control instruction to control the wearable electronic device. The invention also discloses a wearable electronic device for implementing the method.
Description
Technical field
The present invention relates to the field of computers and embedded systems, and in particular to an information processing method and a wearable electronic device.
Background art

Smart glasses, also known as smart eyewear, can run an independent operating system like a smartphone. The user can install programs provided by software vendors, such as applications and games, and can, through voice or gesture control, add calendar entries, navigate with maps, interact with friends, take photos and videos, make video calls, and access the internet through a wireless network.

With the arrival of the smart-glasses era, the way this kind of electronic device is controlled has gradually shifted from keyboards, touch screens, and the like to voice. Voice control spares the user manual operation and is therefore more convenient. However, voice control has an obvious drawback: when the user issues a voice command, the people nearby can hear it too. Even if the glasses receive the voice signal through bone conduction, the user must still speak the command aloud, which not only compromises privacy but also disturbs the people around.

The prior-art solution to this problem is to replace voice instructions with motion-sensing instructions. For example, the user can tilt the head back, with different tilt angles corresponding to different commands.

This solution has disadvantages. The smart glasses may need a fairly large movement from the user in order to capture the motion-sensing information accurately; when the user's movement is slight, the glasses may fail to capture it at all. In other words, information is easily lost in transmission, and the information the glasses do capture may be inaccurate and not the command the user really intended, resulting in a high rate of erroneous responses when commands are executed. Moreover, a user who keeps making such movements to control the glasses looks very odd to the people around.
Summary of the invention

Embodiments of the present invention provide an information processing method and a wearable electronic device, to solve the prior-art technical problem that smart glasses tend to lose information while receiving commands.
An information processing method is applied to a wearable electronic device. The wearable electronic device includes a fixing unit and a sensing unit; the fixing unit is used to maintain the relative position between the wearable electronic device and the user's head; the sensing unit is arranged on the fixing unit or at a position close to the fixing unit. The method includes:

obtaining a sensed parameter through the sensing unit, the sensed parameter characterizing a facial movement of the head of the user wearing the wearable electronic device;

determining whether the sensed parameter meets a predetermined condition, and generating a judging result;

when the judging result indicates that the sensed parameter meets the predetermined condition, generating a control instruction;

responding to the control instruction to control the wearable electronic device.
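The four claimed steps (obtain, judge, generate, respond) can be sketched as a minimal pipeline. This is an illustrative sketch only, not the patent's implementation; the names `process`, `conditions`, and the instruction strings are assumptions.

```python
def process(sensed_parameter, predetermined_conditions):
    """Judge the sensed parameter and return a control instruction, or None.

    predetermined_conditions maps each predetermined condition to the
    control instruction it should generate (a hypothetical encoding).
    """
    for condition, instruction in predetermined_conditions.items():
        if condition == sensed_parameter:  # the "judging result" is positive
            return instruction             # generate the control instruction
    return None                            # no condition met: no instruction

# Example: a single tooth click selects the first option, a double click the second.
conditions = {"single_click": "select_first_option",
              "double_click": "select_second_option"}
assert process("single_click", conditions) == "select_first_option"
assert process("cough", conditions) is None  # unrecognized movement is ignored
```

Responding to the returned instruction (the fourth step) would then be device-specific.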
Preferably, the third sensing unit is arranged at a first position of the fixing unit, the first position being such that when the user wears the wearable electronic device, the third sensing unit, located at the first position of the fixing unit, faces the area between the eyebrows on the user's head.

Obtaining the sensed parameter through the sensing unit includes: obtaining the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air.
Preferably, the wearable electronic device includes a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit, and the second sensing unit is arranged at a second position of the fixing unit; the third position and the second position are arranged symmetrically.

Obtaining the sensed parameter through the sensing unit includes: obtaining a first sensed parameter through the first sensing unit, and obtaining a second sensed parameter through the second sensing unit. The first sensed parameter or the second sensed parameter is: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air, or a facial parameter of a change in the user's facial features.
Preferably, before determining whether the sensed parameter meets the predetermined condition and generating the judging result, the method further includes:

determining first amplitude information corresponding to the first sensed parameter, and determining second amplitude information corresponding to the second sensed parameter;

determining, of the first amplitude information and the second amplitude information, the first amplitude information as the one whose corresponding amplitude is larger;

determining to select the first sensed parameter corresponding to the first amplitude information.

Determining whether the sensed parameter meets the predetermined condition and generating the judging result includes: determining whether the first sensed parameter meets the predetermined condition, and generating the judging result.
Preferably, before the sensed parameter is obtained through the sensing unit, the method further includes: determining selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic device can be controlled to select the first option or the second option.
Preferably, when the judging result indicates that the sensed parameter meets the predetermined condition, generating the control instruction includes: determining, according to a correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
Preferably, responding to the control instruction to control the wearable electronic device includes: responding to the first control instruction and selecting the first option, so as to control the wearable electronic device through a first execution result corresponding to the first option.
A wearable electronic device includes a fixing unit and a sensing unit. The fixing unit is used to maintain the relative position between the wearable electronic device and the user's head; the sensing unit is arranged on the fixing unit or at a position close to the fixing unit. The wearable electronic device includes:

an acquisition module, configured to obtain a sensed parameter through the sensing unit, the sensed parameter characterizing a facial movement of the head of the user wearing the wearable electronic device;

a first determining module, configured to determine whether the sensed parameter meets a predetermined condition and generate a judging result;

a generation module, configured to generate a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition;

a response module, configured to respond to the control instruction to control the wearable electronic device.
Preferably, the third sensing unit is arranged at a first position of the fixing unit, the first position being such that when the user wears the wearable electronic device, the third sensing unit, located at the first position of the fixing unit, faces the area between the eyebrows on the user's head.

The acquisition module is specifically configured to: obtain the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air.
Preferably, the wearable electronic device includes a first sensing unit and a second sensing unit; the first sensing unit is arranged at a third position of the fixing unit, and the second sensing unit is arranged at a second position of the fixing unit; the third position and the second position are arranged symmetrically.

The acquisition module is specifically configured to: obtain a first sensed parameter through the first sensing unit, and obtain a second sensed parameter through the second sensing unit. The first sensed parameter or the second sensed parameter is: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air, or a facial parameter of a change in the user's facial features.
Preferably, the wearable electronic device further includes a second determining module, a third determining module, and a fourth determining module.

The second determining module is configured to determine first amplitude information corresponding to the first sensed parameter, and determine second amplitude information corresponding to the second sensed parameter.

The third determining module is configured to determine, of the first amplitude information and the second amplitude information, the first amplitude information as the one whose corresponding amplitude is larger.

The fourth determining module is configured to determine to select the first sensed parameter corresponding to the first amplitude information.

The first determining module is specifically configured to: determine whether the first sensed parameter meets the predetermined condition, and generate the judging result.
Preferably, the wearable electronic device further includes a fifth determining module, configured to determine selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic device can be controlled to select the first option or the second option.
Preferably, the generation module is specifically configured to: determine, according to a correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
Preferably, the response module is specifically configured to: respond to the first control instruction and select the first option, so as to control the wearable electronic device through the first execution result corresponding to the first option.
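The module structure of the claimed device can be sketched as a toy class. This is a hedged illustration under assumed names (the class, method names, and instruction strings are not the patent's); each method stands in for one claimed module.

```python
class WearableElectronicDevice:
    """Toy model of the claimed acquisition/judging/generation/response modules."""

    def __init__(self, correspondence_set):
        # Correspondence set: predetermined condition -> control instruction.
        self.correspondence_set = correspondence_set

    def acquire(self, sensing_unit):
        # Acquisition module: obtain the sensed parameter from the sensing unit
        # (here the unit is modeled as a callable returning one parameter).
        return sensing_unit()

    def judge(self, sensed_parameter):
        # First determining module: does the parameter meet a predetermined condition?
        return sensed_parameter in self.correspondence_set

    def generate(self, sensed_parameter):
        # Generation module: produce the corresponding control instruction.
        return self.correspondence_set[sensed_parameter]

    def respond(self, sensed_parameter):
        # Response module: run judge/generate and return the instruction, or None.
        if self.judge(sensed_parameter):
            return self.generate(sensed_parameter)
        return None

device = WearableElectronicDevice({"one_click": "select_yes",
                                   "two_clicks": "select_no"})
assert device.respond("one_click") == "select_yes"
assert device.respond("shake_head") is None  # unmatched input: no instruction
```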
The information processing method in the embodiments of the present invention can be applied to a wearable electronic device. The wearable electronic device includes a fixing unit and a sensing unit; the fixing unit is used to maintain the relative position between the wearable electronic device and the user's head; the sensing unit is arranged on the fixing unit or at a position close to the fixing unit. The method may include the following steps: obtaining a sensed parameter through the sensing unit, the sensed parameter characterizing a facial movement of the head of the user wearing the wearable electronic device; determining whether the sensed parameter meets a predetermined condition, and generating a judging result; when the judging result indicates that the sensed parameter meets the predetermined condition, generating a control instruction; responding to the control instruction to control the wearable electronic device.

In the embodiments of the present invention, when the user performs a facial movement, the wearable electronic device can obtain a sensed parameter related to that facial movement. For example, multiple predetermined conditions can be preset in the electronic device; after obtaining the sensed parameter, the electronic device can judge whether the sensed parameter meets one of the predetermined conditions, and if so, generate the control instruction corresponding to that predetermined condition in order to control the wearable electronic device.

For example, if the wearable electronic device is a glasses-type electronic device, a user wearing it who wants to control it can do so directly by performing a facial movement, without issuing a voice instruction; this neither disturbs bystanders nor compromises the user's privacy. Moreover, because a glasses-type electronic device sits on the user's face, it can easily detect the sensed parameter when the user performs a facial movement, so control can be realized fairly accurately. This effectively avoids the prior-art technical problem that, when control is performed through body movements, the wearable electronic device may fail to receive accurate information, causing information to be lost or responded to in error. The reliability and safety of information transmission are improved, the information loss rate and the device's erroneous-response rate are reduced, and operation becomes very convenient for the user, improving the user experience.
Description of the drawings
Fig. 1 is a general flowchart of the information processing method in an embodiment of the present invention;

Fig. 2 is a schematic diagram of the wearable electronic device in an embodiment of the present invention;

Fig. 3 is a schematic diagram of the selection information in an embodiment of the present invention;

Fig. 4 is a structural diagram of the wearable electronic device in an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the embodiments of the present invention, the wearable electronic device may be, for example, smart glasses, or shutter 3D (three-dimensional) glasses, or some other electronic device.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" can mean: A alone, both A and B, or B alone. Furthermore, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
The preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings.
Embodiment one
Referring to Fig. 1, an embodiment of the present invention provides an information processing method. The method can be applied to a wearable electronic device; the wearable electronic device may include a fixing unit and a sensing unit; the fixing unit is used to maintain the relative position between the wearable electronic device and the user's head; the sensing unit is arranged on the fixing unit or at a position close to the fixing unit. The main flow of the method is as follows:
Step 101: Obtain a sensed parameter through the sensing unit, the sensed parameter characterizing a facial movement of the head of the user wearing the wearable electronic device.
Referring to Fig. 2, which is a schematic diagram of one possible wearable electronic device.

The wearable electronic device is, for example, a glasses-type electronic device, such as shutter 3D glasses. The glasses-type electronic device includes a structural component 200; the structural component includes a nose pad 201 and ear mounts 202, which are used to wear the glasses-type electronic device on the user's body. The glasses-type electronic device may also include a display unit 204.

In this example, the display unit 204 is, for example, the lens of the glasses-type electronic device itself, so the user can view the outside scenery through the display unit 204.

In the embodiment of the present invention, if the wearable electronic device is a glasses-type electronic device, the fixing unit can refer to the frame.
In the embodiment of the present invention, when the user wears the wearable electronic device, he or she may want to control it.

In the embodiment of the present invention, when the user wants to control the wearable electronic device, he or she can perform the facial movement.

For example, the facial movement can be a tooth movement, such as a left-tooth movement or a right-tooth movement. The left-tooth movement can be a collision between the upper and lower teeth on the left side; the right-tooth movement can be a collision between the upper and lower teeth on the right side.

For example, the facial movement can also be a facial action such as an eyelid movement.
In the embodiment of the present invention, when the user performs the facial movement, the wearable electronic device can obtain the sensed parameter.

The facial movement can generate many kinds of sensed parameters. For example, the sensed parameter can be an audio parameter; although the sound corresponding to the audio parameter may be rather faint, the wearable electronic device sits on the user's face and can therefore still collect the audio parameter fairly accurately. Or the sensed parameter can be a facial movement parameter; because this movement parameter is produced by the user's facial movement, the facial movement parameter may also be called a facial parameter.
Preferably, in the embodiment of the present invention, the wearable electronic device can have at least one sensing unit, which can be used to collect the sensed parameter.

Optionally, the wearable electronic device can be a glasses-type electronic device with one sensing unit, which can be called, for example, a third sensing unit. The third sensing unit can be arranged at a first position of the fixing unit, the first position being such that when the user wears the wearable electronic device, the third sensing unit, located at the first position of the fixing unit, faces the area between the eyebrows on the user's head.
In the embodiment of the present invention, obtaining the sensed parameter through the sensing unit can be: obtaining the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air.

Preferably, the sensed parameter can be the audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, because if the sound is conducted through the air, the sensing unit may fail to collect the audio parameter when the user is in a noisy environment.
Optionally, the wearable electronic device can be a glasses-type electronic device with two sensing units, namely a first sensing unit and a second sensing unit. The first sensing unit is arranged at a third position of the fixing unit, and the second sensing unit is arranged at a second position of the fixing unit; for example, the third position and the second position are arranged symmetrically. For example, the first sensing unit and the second sensing unit can be located on the two temples of the glasses-type electronic device, and can be used respectively to collect the sensed parameters from the left side and the right side of the user's face.
Obtaining the sensed parameter through the sensing unit can be: obtaining a first sensed parameter through the first sensing unit, and obtaining a second sensed parameter through the second sensing unit. The first sensed parameter or the second sensed parameter can be: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air, or a facial parameter of a change in the user's facial features.
In the embodiment of the present invention, the sensed parameter can be an audio parameter, or a facial movement parameter, or some other parameter, as long as the sensed parameter is generated by the facial movement; the invention is not limited in this regard. When the sensed parameter is an audio parameter, the sound can be conducted through the user's bones or through the air.
In the embodiment of the present invention, the sensing unit can be a sensor. Exactly which sensor can be determined by the sensed parameter to be collected. For example, if the sensed parameter is an audio parameter, the sensor can be a sound sensor; if the sensed parameter is a facial movement parameter, the sensor can be a proximity sensor; and so on.
In the embodiment of the present invention, if the electronic device has only the one sensing unit, only one sensed parameter can be obtained through it.

If the electronic device has the first sensing unit and the second sensing unit, the first sensed parameter can be obtained through the first sensing unit, and/or the second sensed parameter can be obtained through the second sensing unit. That is, the sensed parameter may then include the first sensed parameter and/or the second sensed parameter.
In the embodiment of the present invention, the sound corresponding to the audio parameter can be the sound generated by the movement of the user's teeth; the facial action corresponding to the facial movement parameter can be, for example, the skin near the temple bulging because the user's teeth move, or the facial features changing because the user blinks or purses the lips, and so on.
A preferable application scenario of the embodiment of the present invention is: when the wearable electronic device is used, if a selection has to be made within selection information that has two options, the control method in the embodiment of the present invention can be applied.

There can be many kinds of selection information with two options. For example, for a video file, the user can be prompted to select "Play" or "Stop"; after an audio file has been recorded, the user can be prompted to select "Save" or "Cancel"; when browsing an album, the user can be prompted to select "Previous" or "Next"; when a new message is received, the user can be prompted to select "View" or "Dismiss"; and so on. The two options can cause two different execution results, generally two opposite execution results.
Preferably, in another embodiment of the invention, before the sensed parameter is obtained through the sensing unit, that is, before the user performs the facial movement, the wearable electronic device can first determine selection information; the selection information can have a first option and a second option that lead to different execution results. For example, the selection information can be as described above: for a video file, the user can be prompted to select "Play" or "Stop"; after an audio file has been recorded, the user can be prompted to select "Save" or "Cancel"; and so on.

The selection information has the first option and the second option.
Step 102: Determine whether the sensed parameter meets a predetermined condition, and generate a judging result.

In the embodiment of the present invention, after the sensed parameter is obtained, it may be determined whether the sensed parameter meets the predetermined condition.
In the embodiment of the present invention, at least one predetermined condition can be preset in the wearable electronic device, where each predetermined condition can correspond to a control instruction. That is, a correspondence set can be preset in the wearable electronic device, and the correspondence set may include at least one pair of a predetermined condition and a control instruction; within the correspondence set, predetermined conditions can correspond one-to-one with control instructions.
For example, if there is only one sensed parameter, it can be determined directly whether that sensed parameter meets one of the predetermined conditions.
Suppose two predetermined conditions are stored in the wearable electronic equipment: the first predetermined condition is the sound of a single collision, and the second predetermined condition is the sound of two successive collisions. The first control instruction, corresponding to the first predetermined condition, controls selection of the "Yes" option; the second control instruction, corresponding to the second predetermined condition, controls selection of the "No" option. Then, after the sensed parameter is obtained, it may be determined whether the sensed parameter is the sound of a single collision or the sound of two successive collisions — that is, whether the sensed parameter meets one of the predetermined conditions.
For another example, if a first sensed parameter and a second sensed parameter are both obtained, then before determining whether the sensed parameter meets the predetermined condition and generating the judging result, first amplitude information corresponding to the first sensed parameter and second amplitude information corresponding to the second sensed parameter may be determined. After the first amplitude information and the second amplitude information are determined, the one with the larger corresponding amplitude may be identified: the first amplitude corresponding to the first amplitude information is compared with the second amplitude corresponding to the second amplitude information, and if, for example, the first amplitude is greater than the second amplitude, the first sensed parameter corresponding to the first amplitude information is selected.
For example, when the teeth on the left side of the user's mouth move, the resulting audio parameter — whether conducted through the user's bones or transmitted through the air — is likely to be picked up both by the first sensing unit on the left side of the user's face, yielding the first sensed parameter, and by the second sensing unit on the right side of the user's face, yielding the second sensed parameter. Before judging whether a sensed parameter meets the predetermined condition, one of the two sensed parameters must therefore be selected. The embodiment of the present invention selects by amplitude: the larger the amplitude, the closer the sensing unit is to the sound source, and hence the closer the selection is to the user's true intention.
Thus, determining whether the sensed parameter meets the predetermined condition and generating the judging result may consist of: determining whether the first sensed parameter meets the predetermined condition, and generating the judging result accordingly.
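The amplitude-based selection between the two sensed parameters might look like the following sketch. The RMS measure and the sample values are illustrative assumptions, since the patent does not specify how amplitude information is computed.

```python
# A minimal sketch (assumed details, not from the patent) of selecting
# between the two sensed parameters by amplitude: the signal with the larger
# amplitude is taken to come from the sensing unit nearer the sound source.

import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of a list of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def select_sensed_parameter(first, second):
    """Return the sensed parameter whose amplitude is larger."""
    return first if rms_amplitude(first) >= rms_amplitude(second) else second

left = [0.6, -0.5, 0.7, -0.6]    # louder: nearer the moving teeth
right = [0.1, -0.1, 0.2, -0.1]   # attenuated across the head
```

Here `select_sensed_parameter(left, right)` returns the left-side signal, mirroring the left-tooth example above.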
Step 103: When the judging result indicates that the sensed parameter meets the predetermined condition, generate a control instruction.
Specifically, in the embodiment of the present invention, generating a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition may be: determining, according to the correspondence set between predetermined conditions and control instructions, the control instruction corresponding to that predetermined condition — for example, the control instruction corresponding to the first option.
In the embodiment of the present invention, multiple predetermined conditions may be preset in the wearable electronic equipment, and the wearable electronic equipment may judge whether the sensed parameter meets one of them. If the judging result shows that the sensed parameter meets one of the predetermined conditions, the wearable electronic equipment can determine the control instruction corresponding to that predetermined condition from the correspondence set, and thereby generate the control instruction.
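The correspondence set can be pictured as a simple one-to-one mapping. The condition and instruction names below are hypothetical labels for illustration only; they do not appear in the patent.

```python
# Illustrative sketch of the correspondence set between predetermined
# conditions and control instructions (one-to-one, as described above).

CORRESPONDENCE_SET = {
    "single_collision": "select_first_option",    # e.g. "Yes" / "save"
    "double_collision": "select_second_option",   # e.g. "No" / "cancel"
}

def generate_control_instruction(met_condition):
    """Look up the control instruction that corresponds one-to-one with the
    predetermined condition the sensed parameter met; None if no match."""
    return CORRESPONDENCE_SET.get(met_condition)
```

A lookup table keeps the condition-to-instruction relation explicit and easy to extend with further conditions.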
Step 104: Respond to the control instruction, so as to control the wearable electronic equipment.
In the embodiment of the present invention, if, for example, the control instruction corresponds to the first option, then responding to the control instruction to control the wearable electronic equipment may specifically be: responding to the first control instruction and selecting the first option, so that the wearable electronic equipment is controlled through the first execution result corresponding to the first option.
This is illustrated below.
Embodiment two
The wearable electronic equipment is as shown in Figure 2.
For example, after recording a section of video, the wearable electronic equipment generates the selection information, which includes the first option and the second option; the first option is "save" and the second option is "cancel", as shown in Figure 3.
As can be seen from Figure 2, the wearable electronic equipment may have a display unit 204, through which the user can see the selection information and, naturally, the first option and the second option. At this point, the user can choose between the first option and the second option by means of a facial movement.
The wearable electronic equipment is a glasses-type electronic equipment, with one sensor on each of its two temples. For example, both sensors are sound sensors, used respectively to collect the audio parameter from the left side of the user's face and the audio parameter from the right side of the user's face.
Suppose the user performs a left-tooth movement, in which the upper and lower teeth on the left side collide with each other. Normally, even though the user moves the teeth on one side only, the teeth on the other side are sometimes affected as well and also undergo relative motion. Therefore, in response to the user's tooth movement, the wearable electronic equipment may obtain the first sensed parameter through the first sensing unit and the second sensed parameter through the second sensing unit.
After obtaining the first sensed parameter and the second sensed parameter, the wearable electronic equipment may first select one of them.
For example, the wearable electronic equipment may determine the first amplitude information corresponding to the first sensed parameter and the second amplitude information corresponding to the second sensed parameter, and from these determine the first amplitude and the second amplitude.
Having determined the first amplitude and the second amplitude, the wearable electronic equipment compares their sizes. In the embodiment of the present invention, because the user performs a left-tooth movement, the sound from the left side of the user's face is louder than the sound from the right side; hence the first amplitude is greater than the second amplitude.
After determining that the first amplitude is greater than the second amplitude, the wearable electronic equipment determines to select the first sensed parameter corresponding to the first amplitude.
Multiple predetermined conditions may be stored in advance in the wearable electronic equipment. For example, a first predetermined condition and a second predetermined condition are stored, where the first predetermined condition is the sound of a single collision and the second predetermined condition is the sound of two successive collisions.
The correspondence set between predetermined conditions and control instructions may also be stored in advance. For example, the correspondence set includes the correspondence between the first predetermined condition and a first control instruction, and the correspondence between the second predetermined condition and a second control instruction, where the first control instruction controls the wearable electronic equipment to select the first option and the second control instruction controls it to select the second option.
After determining to select the first sensed parameter, the wearable electronic equipment may judge whether the first sensed parameter meets the first predetermined condition. Suppose the judgment determines that it does not; the wearable electronic equipment then continues to judge whether the first sensed parameter meets the second predetermined condition, and suppose the judgment determines that it does.
After determining that the first sensed parameter meets the second predetermined condition, the wearable electronic equipment can determine from the correspondence set the control instruction corresponding to the second predetermined condition — in this embodiment, the second control instruction — and generate it.
After generating the second control instruction, the wearable electronic equipment responds to it to control itself. In the embodiment of the present invention, by executing the second control instruction, the wearable electronic equipment is controlled through the second execution result corresponding to the second option.
Since the second option in the embodiment of the present invention is "cancel", executing the second control instruction cancels the section of video just recorded.
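Under the same illustrative assumptions, embodiment two can be condensed into one end-to-end sketch: select the louder of the two sensed signals, count its collisions, and map the matched predetermined condition to an option. All names, thresholds, and signals here are hypothetical, chosen only to make the flow concrete.

```python
# End-to-end sketch of embodiment two (assumed details, not the patent's
# implementation): amplitude selection, condition matching, and dispatch.

import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def collisions(samples, threshold=0.5, refractory=3):
    count, last = 0, -refractory
    for i, s in enumerate(samples):
        if s >= threshold and i - last >= refractory:
            count, last = count + 1, i
    return count

def handle_tooth_movement(first, second):
    chosen = first if rms(first) >= rms(second) else second  # select by amplitude
    n = collisions(chosen)                                   # test predetermined conditions
    if n == 1:
        return "save"    # first control instruction -> first option
    if n == 2:
        return "cancel"  # second control instruction -> second option
    return None          # no predetermined condition met

left = [0.9, 0, 0, 0.8, 0, 0]    # two successive collisions, near side
right = [0.2, 0, 0, 0.1, 0, 0]   # same clicks, attenuated
```

With these signals, `handle_tooth_movement(left, right)` yields "cancel", matching the walk-through above.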
Embodiment three
Referring to Fig. 4, the embodiment of the present invention provides a wearable electronic equipment that includes a fixed unit and a sensing unit. The fixed unit is used to maintain the relative position relationship between the wearable electronic equipment and the user's head; the sensing unit is arranged on the fixed unit or at a position close to the fixed unit. The wearable electronic equipment may include an acquisition module 401, a first determining module 402, a generation module 403, and a response module 404.
Preferably, the wearable electronic equipment may further include a second determining module, a third determining module, a fourth determining module, and a fifth determining module.
The acquisition module 401 may be used to obtain a sensed parameter through the sensing unit, the sensed parameter being used to characterize the facial movement of the head of the user wearing the wearable electronic equipment.
The first determining module 402 may be used to determine whether the sensed parameter meets a predetermined condition, and to generate a judging result.
The generation module 403 may be used to generate a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition.
The response module 404 may be used to respond to the control instruction, so as to control the wearable electronic equipment.
In the embodiment of the present invention, the sensing unit may be arranged at a first position of the fixed unit, the first position being such that, when the user wears the wearable electronic equipment, the sensing unit at the first position of the fixed unit faces the area between the eyebrows of the user's head. The acquisition module 401 may specifically be used to obtain the sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air.
In the embodiment of the present invention, the wearable electronic equipment includes a first sensing unit and a second sensing unit, the first sensing unit being arranged at a third position of the fixed unit and the second sensing unit at a second position of the fixed unit, the third position and the second position being arranged symmetrically. The acquisition module 401 may specifically be used to obtain a first sensed parameter through the first sensing unit and a second sensed parameter through the second sensing unit, the first sensed parameter or the second sensed parameter being: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of that sound after transmission through the air, or a facial parameter of a change in the user's facial features.
The second determining module is used to determine first amplitude information corresponding to the first sensed parameter, and to determine second amplitude information corresponding to the second sensed parameter.
The third determining module is used to determine, of the first amplitude information and the second amplitude information, the first amplitude information whose corresponding amplitude is the larger.
The fourth determining module is used to determine to select the first sensed parameter corresponding to the first amplitude information.
The first determining module 402 may specifically be used to determine whether the first sensed parameter meets the predetermined condition, and to generate the judging result.
The fifth determining module is used to determine a piece of selection information, the selection information having a first option and a second option that lead to different execution results; by executing the control instruction, the electronic equipment can be controlled to select the first option or the second option.
The generation module 403 may specifically be used to determine, according to the correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition; the control instruction corresponds to the first option.
The response module 404 may specifically be used to respond to the first control instruction and select the first option, so as to control the wearable electronic equipment through the first execution result corresponding to the first option.
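One possible way to mirror this module division in code is the following sketch. The module names follow the patent, while the internals — the correspondence-set dictionary and the string results — are hypothetical, added only to illustrate how the four modules cooperate.

```python
# Illustrative object sketch of the module division of embodiment three
# (module names from the patent; implementation details are assumptions).

class WearableElectronicEquipment:
    def __init__(self, correspondence_set):
        # Preset correspondence set: predetermined condition -> control instruction.
        self.correspondence_set = correspondence_set

    def acquire(self, sensing_unit):
        """Acquisition module 401: obtain the sensed parameter from a sensing unit."""
        return sensing_unit()

    def judge(self, sensed_parameter):
        """First determining module 402: return the condition met, or None."""
        return sensed_parameter if sensed_parameter in self.correspondence_set else None

    def generate(self, met_condition):
        """Generation module 403: look up the corresponding control instruction."""
        return self.correspondence_set[met_condition]

    def respond(self, instruction):
        """Response module 404: execute the instruction's result."""
        return f"executed {instruction}"

device = WearableElectronicEquipment({"single_collision": "select_first_option"})
```

A typical flow would call `acquire`, pass the result to `judge`, and, if a condition was met, feed it through `generate` and `respond`.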
The information processing method in the embodiment of the present invention can be applied to a wearable electronic equipment that includes a fixed unit and a sensing unit, the fixed unit being used to maintain the relative position relationship between the wearable electronic equipment and the user's head, and the sensing unit being arranged on the fixed unit or at a position close to it. The method may include the following steps: obtaining a sensed parameter through the sensing unit, the sensed parameter being used to characterize the facial movement of the head of the user wearing the wearable electronic equipment; determining whether the sensed parameter meets a predetermined condition, and generating a judging result; generating a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition; and responding to the control instruction, so as to control the wearable electronic equipment.
In the embodiment of the present invention, when the user performs a facial movement, the wearable electronic equipment can obtain the sensed parameter related to that movement. For example, multiple predetermined conditions may be preset in the electronic equipment; after obtaining the sensed parameter, the electronic equipment judges whether the sensed parameter meets one of them, and if so, generates the control instruction corresponding to that predetermined condition so as to control the wearable electronic equipment.
For example, if the wearable electronic equipment is a glasses-type electronic equipment, a user wearing it who wants to control the equipment can do so directly by performing a facial movement, without issuing voice instructions; bystanders are therefore not disturbed, and the user's privacy is protected as far as possible. Moreover, because a glasses-type electronic equipment sits on the user's face, the wearable electronic equipment can easily detect the sensed parameter when the user performs a facial movement, and can therefore realize control relatively accurately. This effectively avoids the technical problems of the prior art, where control through body-sensing actions makes it easy for the wearable electronic equipment to fail to receive accurate information — causing information loss — or to receive inaccurate information — causing erroneous responses. It improves the reliability and safety of information transmission, reduces the information loss rate and the erroneous response rate of the wearable electronic equipment, is convenient to operate, and improves the user experience.
It is apparent to those skilled in the art that, for convenience and brevity of description, the division into the functional modules above is merely an example; in practical applications, the functions above may be allocated to different functional modules as needed — that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the system, apparatus, and units described above, reference may be made to the corresponding process in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely exemplary; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation — for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections displayed or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application — in essence, the part that contributes to the prior art, or all or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments merely describe the technical solution of this application in detail; the description of these embodiments is only intended to help understand the method of the present invention and its core idea, and should not be construed as limiting the present invention. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (12)
1. An information processing method, applied to a wearable electronic equipment, the wearable electronic equipment comprising a fixed unit and a sensing unit, the fixed unit being used to maintain the relative position relationship between the wearable electronic equipment and a user's head, and the sensing unit being arranged on the fixed unit or at a position close to the fixed unit, the method comprising:
obtaining a sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air, the sensed parameter being used to characterize the facial movement of the head of the user wearing the wearable electronic equipment;
determining whether the sensed parameter meets a predetermined condition, and generating a judging result;
generating a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition; and
responding to the control instruction, so as to control the wearable electronic equipment;
wherein the wearable electronic equipment comprises a first sensing unit and a second sensing unit, the first sensing unit being arranged at a third position of the fixed unit and the second sensing unit at a second position of the fixed unit, the third position and the second position being arranged symmetrically; and
obtaining a sensed parameter through the sensing unit comprises: obtaining a first sensed parameter through the first sensing unit, and obtaining a second sensed parameter through the second sensing unit, the first sensed parameter or the second sensed parameter being: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air, or a facial parameter of a change in the user's facial features.
2. The method according to claim 1, wherein the sensing unit is arranged at a first position of the fixed unit, the first position being such that, when the user wears the wearable electronic equipment, the sensing unit at the first position of the fixed unit faces the area between the eyebrows of the user's head.
3. The method according to claim 1, wherein, before determining whether the sensed parameter meets a predetermined condition and generating a judging result, the method further comprises:
determining first amplitude information corresponding to the first sensed parameter, and determining second amplitude information corresponding to the second sensed parameter;
determining, of the first amplitude information and the second amplitude information, the first amplitude information whose corresponding amplitude is the larger; and
determining to select the first sensed parameter corresponding to the first amplitude information;
wherein determining whether the sensed parameter meets a predetermined condition and generating a judging result comprises: determining whether the first sensed parameter meets the predetermined condition, and generating the judging result.
4. The method according to claim 1, wherein, before obtaining a sensed parameter through the sensing unit, the method further comprises: determining a piece of selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic equipment can be controlled to select the first option or the second option.
5. The method according to claim 4, wherein generating a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition comprises: determining, according to a correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition, the control instruction corresponding to the first option.
6. The method according to claim 5, wherein responding to the control instruction, so as to control the wearable electronic equipment, comprises: responding to the control instruction and selecting the first option, so as to control the wearable electronic equipment through a first execution result corresponding to the first option.
7. A wearable electronic equipment, comprising a fixed unit and a sensing unit, the fixed unit being used to maintain the relative position relationship between the wearable electronic equipment and a user's head, and the sensing unit being arranged on the fixed unit or at a position close to the fixed unit, the wearable electronic equipment comprising:
an acquisition module, configured to obtain a sensed parameter through the sensing unit, the sensed parameter being an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air, the sensed parameter being used to characterize the facial movement of the head of the user wearing the wearable electronic equipment;
a first determining module, configured to determine whether the sensed parameter meets a predetermined condition, and generate a judging result;
a generation module, configured to generate a control instruction when the judging result indicates that the sensed parameter meets the predetermined condition; and
a response module, configured to respond to the control instruction, so as to control the wearable electronic equipment;
wherein the wearable electronic equipment comprises a first sensing unit and a second sensing unit, the first sensing unit being arranged at a third position of the fixed unit and the second sensing unit at a second position of the fixed unit, the third position and the second position being arranged symmetrically; and
the acquisition module is specifically configured to: obtain a first sensed parameter through the first sensing unit, and obtain a second sensed parameter through the second sensing unit, the first sensed parameter or the second sensed parameter being: an audio parameter of the sound produced by the collision of the user's teeth after conduction through the user's bones, or an audio parameter of the sound produced by the collision of the user's teeth after transmission through the air, or a facial parameter of a change in the user's facial features.
8. The wearable electronic equipment according to claim 7, wherein the sensing unit is arranged at a first position of the fixed unit, the first position being such that, when the user wears the wearable electronic equipment, the sensing unit at the first position of the fixed unit faces the area between the eyebrows of the user's head.
9. The wearable electronic equipment according to claim 7, further comprising a second determining module, a third determining module, and a fourth determining module, wherein:
the second determining module is configured to determine first amplitude information corresponding to the first sensed parameter, and determine second amplitude information corresponding to the second sensed parameter;
the third determining module is configured to determine, of the first amplitude information and the second amplitude information, the first amplitude information whose corresponding amplitude is the larger;
the fourth determining module is configured to determine to select the first sensed parameter corresponding to the first amplitude information; and
the first determining module is specifically configured to: determine whether the first sensed parameter meets the predetermined condition, and generate the judging result.
10. The wearable electronic equipment according to claim 7, further comprising a fifth determining module, configured to determine a piece of selection information, the selection information having a first option and a second option that lead to different execution results; wherein, by executing the control instruction, the electronic equipment can be controlled to select the first option or the second option.
11. The wearable electronic equipment according to claim 10, wherein the generation module is specifically configured to: determine, according to a correspondence set between predetermined conditions and control instructions, the control instruction corresponding to the predetermined condition, the control instruction corresponding to the first option.
12. The wearable electronic equipment according to claim 11, wherein the response module is specifically configured to: respond to the control instruction, and select the first option, so as to control the wearable electronic equipment through a first execution result corresponding to the first option.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310421592.XA CN104460955B (en) | 2013-09-16 | 2013-09-16 | A kind of information processing method and wearable electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104460955A CN104460955A (en) | 2015-03-25 |
CN104460955B true CN104460955B (en) | 2018-08-10 |
Family
ID=52907158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310421592.XA Active CN104460955B (en) | 2013-09-16 | 2013-09-16 | A kind of information processing method and wearable electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104460955B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180107275A1 (en) * | 2015-04-13 | 2018-04-19 | Empire Technology Development Llc | Detecting facial expressions |
WO2018142228A2 (en) | 2017-01-19 | 2018-08-09 | Mindmaze Holding Sa | Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location including for at least one of a virtual and augmented reality system |
US10515474B2 (en) | 2017-01-19 | 2019-12-24 | Mindmaze Holding Sa | System, method and apparatus for detecting facial expression in a virtual reality system |
US10943100B2 (en) | 2017-01-19 | 2021-03-09 | Mindmaze Holding Sa | Systems, methods, devices and apparatuses for detecting facial expression |
US11328533B1 (en) | 2018-01-09 | 2022-05-10 | Mindmaze Holding Sa | System, method and apparatus for detecting facial expression for motion capture |
CN109144245B (en) * | 2018-07-04 | 2021-09-14 | Oppo(重庆)智能科技有限公司 | Equipment control method and related product |
CN114304800B (en) * | 2022-03-16 | 2022-05-10 | 江苏环亚医用科技集团股份有限公司 | Helmet with adjustable video shooting transmission module |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2034442U (en) * | 1987-05-13 | 1989-03-22 | 龚鹓文 | Double-guiding type hearing aid series for deaf-mute |
CN1531676A (en) * | 2001-06-01 | 2004-09-22 | 索尼公司 (Sony Corporation) | User input apparatus |
CN101272727A (en) * | 2005-09-27 | 2008-09-24 | 潘尼公司 | A device for controlling an external unit |
CN102906623A (en) * | 2010-02-28 | 2013-01-30 | 奥斯特豪特集团有限公司 | Local advertising content on an interactive head-mounted eyepiece |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1512490A (en) * | 2002-12-30 | 2004-07-14 | 吕小麟 | Method and device for inputing control signal |
US8340310B2 (en) * | 2007-07-23 | 2012-12-25 | Asius Technologies, Llc | Diaphonic acoustic transduction coupler and ear bud |
- 2013-09-16: CN application CN201310421592.XA filed (patent CN104460955B/en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN104460955A (en) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104460955B (en) | A kind of information processing method and wearable electronic equipment | |
US10356398B2 (en) | Method for capturing virtual space and electronic device using the same | |
US10326922B2 (en) | Wearable apparatus and method for capturing image data using multiple image sensors | |
US20180011682A1 (en) | Variable computing engine for interactive media based upon user biometrics | |
JP6361649B2 (en) | Information processing apparatus, notification state control method, and program | |
US20130346168A1 (en) | Wearable augmented reality eyeglass communication device including mobile phone and mobile computing via virtual touch screen gesture control and neuron command | |
KR102655147B1 (en) | Interactive animation character head system and method | |
CN108028957A (en) | Information processor, information processing method and program | |
CN106502377A (en) | Mobile terminal and its control method | |
CN105323378A (en) | Mobile terminal and controlling method thereof | |
JP6402718B2 (en) | Information processing apparatus, control method, and program | |
CN103576853A (en) | Method and display apparatus for providing content | |
US20200221218A1 (en) | Systems and methods for directing audio output of a wearable apparatus | |
CN108495045A (en) | Image capturing method, device, electronic device and storage medium | |
CN105657249A (en) | Image processing method and user terminal | |
CN104182041A (en) | Wink type determining method and wink type determining device | |
CN112506336A (en) | Head mounted display with haptic output | |
CN106406537A (en) | Display method and device | |
CN106292994A (en) | The control method of virtual reality device, device and virtual reality device | |
CN206161960U (en) | Virtual reality glasses | |
CN106067833A (en) | Mobile terminal and control method thereof | |
CN104238756B (en) | A kind of information processing method and electronic equipment | |
WO2021145452A1 (en) | Information processing device and information processing terminal | |
US11328187B2 (en) | Information processing apparatus and information processing method | |
WO2021145454A1 (en) | Information processing device, information processing terminal, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||