CN104615252A - Control method, control device, wearable electronic device and electronic equipment

Info

Publication number
CN104615252A
CN104615252A (application number CN201510083677.0A); granted publication CN104615252B
Authority
CN
China
Prior art keywords
electronic equipment
unit
conditioned
audio
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510083677.0A
Other languages
Chinese (zh)
Other versions
CN104615252B (en)
Inventor
武永贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201510083677.0A
Publication of CN104615252A
Application granted
Publication of CN104615252B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a control method for an electronic device. The control method includes: when it is detected that a data output unit is outputting multimedia data, acquiring a face image of the device user captured by an image acquisition unit; judging whether a preset condition is met, the preset condition including that the face image of the user meets a first preset condition; and, based on the judgment result, adjusting an output parameter of the data output unit or adjusting the output state of the multimedia data. The invention thus provides a new way of controlling the operation of an electronic device which, compared with existing key-press control, is more convenient to operate. The invention further discloses a corresponding control device, a wearable electronic device comprising the control device, and an electronic device comprising the control device.

Description

Control method, control device, wearable electronic device and electronic equipment
Technical field
The present invention belongs to the technical field of electronic device control, and in particular relates to a control method, a control device, a wearable electronic device and an electronic device.
Background art
Current electronic devices provide a wide range of functions, such as playing multimedia information (audio and video). A user can control the operation of an electronic device through its keys (physical keys, or virtual keys displayed on a touch control unit). For devices with a voice control function, the user can also issue voice commands.
However, the available ways of controlling an electronic device remain rather limited. How to further enrich the control modes of electronic devices is a problem that those skilled in the art urgently need to solve.
Summary of the invention
In view of this, the object of the present invention is to provide a control method and a control device applied to an electronic device, offering a new way of controlling the device. The invention also discloses a wearable electronic device and an electronic device.
To achieve the above object, the invention provides the following technical solutions:
The invention discloses a control method applied to an electronic device, the electronic device comprising a data output unit and an image acquisition unit, the control method comprising:
when it is detected that the data output unit outputs multimedia data, acquiring a face image of the device user captured by the image acquisition unit;
judging whether a preset condition is met, the preset condition including that the face image of the device user meets a first preset condition;
based on the judgment result, adjusting an output parameter of the data output unit or adjusting the output state of the multimedia data.
Preferably, the electronic device further comprises an audio acquisition unit, and when it is detected that the data output unit outputs multimedia data, the method further comprises: acquiring the audio information collected by the audio acquisition unit. Judging whether the preset condition is met then comprises: judging whether the face image of the device user meets the first preset condition, and judging whether the audio information meets a second preset condition.
Preferably, judging whether the face image of the device user meets the first preset condition comprises: extracting the expression features of two face-image frames separated by a preset time interval; and when the difference between the expression features of the two frames is greater than a threshold, determining that the face image of the device user meets the first preset condition.
Preferably, judging whether the face image of the device user meets the first preset condition comprises: extracting the expression features of the face image; matching the extracted expression features against a pre-stored first-class expression sample model to obtain a confidence value; and when the obtained confidence is higher than a preset threshold, determining that the face image of the device user meets the first preset condition.
Preferably, judging whether the audio information meets the second preset condition comprises: detecting the audio information and extracting the voice information it contains; analysing the voice information to obtain the words it contains; and when those words include a specific word, determining that the audio information meets the second preset condition.
Preferably, judging whether the audio information meets the second preset condition comprises: detecting the audio information and extracting the voice information it contains; extracting the voiceprint features of the voice information; comparing the voiceprint features of the voice information with the voiceprint features of the device user; and when the comparison shows that the two match, determining that the audio information meets the second preset condition.
Preferably, the data output unit comprises an audio output unit and a display unit; adjusting the output parameter of the data output unit comprises: when the audio output unit is outputting audio data, reducing its output volume.
Preferably, the data output unit comprises an audio output unit and a display unit; adjusting the output state of the multimedia data comprises: pausing playback of the multimedia data.
Preferably, after adjusting the output parameter of the data output unit, the method further comprises: after a preset time, if the preset condition is no longer met, restoring the output parameter of the data output unit;
after adjusting the output state of the multimedia data, the method further comprises: after a preset time, if the preset condition is no longer met, restoring the output state of the multimedia data.
The invention also discloses a control device applied to an electronic device, the electronic device comprising a data output unit and an image acquisition unit, the control device comprising:
an image acquiring unit, configured to acquire, when it is detected that the data output unit outputs multimedia data, the face image of the device user captured by the image acquisition unit;
a judging unit, configured to judge whether a preset condition is met, the preset condition including that the face image of the device user meets a first preset condition;
a first control unit, configured to adjust, based on the judgment result, an output parameter of the data output unit or the output state of the multimedia data.
Preferably, the electronic device further comprises an audio acquisition unit, and the control device further comprises an audio acquiring unit configured to acquire, when it is detected that the data output unit outputs multimedia data, the audio information collected by the audio acquisition unit. The judging unit comprises a first judging subunit for judging whether the face image of the device user meets the first preset condition, and a second judging subunit for judging whether the audio information meets a second preset condition.
Preferably, the first judging subunit comprises: a first expression-feature extraction module, for extracting the expression features of two face-image frames separated by a preset time interval; and a first processing module, for determining that the face image of the device user meets the first preset condition when the difference between the expression features of the two frames is greater than a threshold.
Preferably, the first judging subunit comprises: a second expression-feature extraction module, for extracting the expression features of the face image; an expression-feature matching module, for matching the extracted expression features against a pre-stored first-class expression sample model to obtain a confidence value; and a second processing module, for determining that the face image of the device user meets the first preset condition when the obtained confidence is higher than a preset threshold.
Preferably, the second judging subunit comprises: a voice-information extraction module, for detecting the audio information and extracting the voice information it contains; a speech analysis module, for analysing the voice information and obtaining the words it contains; and a third processing module, for determining that the audio information meets the second preset condition when those words include a specific word.
Preferably, the second judging subunit comprises: a voice-information extraction module, for detecting the audio information and extracting the voice information it contains; a voiceprint-feature extraction module, for extracting the voiceprint features of the voice information; a comparison module, for comparing the voiceprint features of the voice information with those of the device user; and a fourth processing module, for determining that the audio information meets the second preset condition when the comparison shows that the two match.
Preferably, the data output unit comprises an audio output unit and a display unit; the first control unit comprises a first control module, which is configured to reduce the output volume of the audio output unit when the audio output unit is outputting audio data.
Preferably, the data output unit comprises an audio output unit and a display unit; the first control unit comprises a second control module, which is configured to pause playback of the multimedia data.
Preferably, a second control unit is further included, which restores the output parameter of the data output unit if, a preset time after that parameter was adjusted, the preset condition is no longer met, and restores the output state of the multimedia data if, a preset time after that state was adjusted, the preset condition is no longer met.
The invention also discloses a wearable electronic device comprising a support for maintaining the relative position of the wearable electronic device and the head of the device user, the wearable electronic device further comprising a lens module, an audio output unit, an audio acquisition unit, an image acquisition unit, and any one of the control devices described above.
The invention also discloses an electronic device comprising an audio output unit, a display unit, an audio acquisition unit, an image acquisition unit, and any one of the control devices described above.
It can thus be seen that the beneficial effects of the invention are as follows. In the disclosed control method, when the data output unit of an electronic device outputs multimedia data, the face image of the device user is acquired and used as a basis for controlling the operation of the device; when the preset condition is determined to be met, an output parameter of the data output unit or the output state of the multimedia data is adjusted. This provides a new way of controlling the operation of an electronic device and, compared with existing key-press control, is more convenient to operate.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flow chart of a control method of an electronic device disclosed by the invention;
Fig. 2 is a flow chart of one way, disclosed by the invention, of judging whether the face image of a device user meets the first preset condition;
Fig. 3 is a flow chart of another way, disclosed by the invention, of judging whether the face image of a device user meets the first preset condition;
Fig. 4 is a flow chart of one way, disclosed by the invention, of judging whether the audio information meets the second preset condition;
Fig. 5 is a flow chart of another way, disclosed by the invention, of judging whether the audio information meets the second preset condition;
Fig. 6 is a structural schematic diagram of a pair of smart glasses disclosed by the invention;
Fig. 7 is a structural schematic diagram of a control device of an electronic device disclosed by the invention;
Fig. 8 is a structural schematic diagram of another control device of an electronic device disclosed by the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the scope of protection of the invention.
The invention discloses a control method applied to an electronic device, which provides a new way of controlling the device compared with the prior art.
Refer to Fig. 1, which is a flow chart of a control method of an electronic device disclosed by the invention. The electronic device comprises a data output unit and an image acquisition unit and is capable of playing multimedia data; it may be a mobile phone, a tablet computer or a wearable electronic device. The control method comprises:
Step S11: when it is detected that the data output unit outputs multimedia data, acquire the face image of the device user captured by the image acquisition unit.
The electronic device is provided with an image acquisition unit configured so that it can capture the face image of the device user. For example, the image acquisition unit may be arranged in the main body of the device; by adjusting the placement of the device, the image acquisition unit is positioned in front of the user so that it can capture images of the user's face. Alternatively, the image acquisition unit may be arranged at a position away from the device body, chosen so that the unit is in front of the user while the device is in use. For example, a mounting bracket can be fitted to the frame of a pair of smart glasses and extended to the front of the user's face, with the image acquisition unit of the glasses arranged at the end of the bracket; when the user wears the glasses, the image acquisition unit is positioned in front of the user's face and can capture images of it, as shown in Fig. 6.
The data output unit of the electronic device may be an audio output unit, a display unit, or both. The multimedia data may be audio data or video data (comprising image data and audio data bound together). While the device is switched on, as soon as it is detected that the data output unit is outputting multimedia data, acquisition of the face images captured by the image acquisition unit starts.
Step S12: judge whether a preset condition is met.
The preset condition at least includes that the face image of the device user meets a first preset condition. When it is determined that the data output unit of the device is outputting multimedia data, the face image of the device user is acquired and used as the basis for controlling the operation of the device.
Step S13: based on the judgment result, adjust an output parameter of the data output unit or adjust the output state of the multimedia data.
When the preset condition is determined to be met, an output parameter of the data output unit in the device is adjusted, or the output state of the multimedia data is adjusted; that is, the running state of the device is adjusted.
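By way of illustration only, the following Python sketch shows one possible way of organising steps S11 to S13 in software; the MediaPlayer stub, the function names and the fixed polling loop are assumptions made for the sketch and are not part of the disclosure.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class MediaPlayer:
    """Minimal stand-in for the data output unit (illustrative only)."""
    playing: bool = True
    volume: int = 70

    def reduce_volume(self, step: int = 40) -> None:
        self.volume = max(0, self.volume - step)

    def pause(self) -> None:
        self.playing = False

def control_loop(player: MediaPlayer,
                 capture_face_image: Callable[[], object],
                 meets_first_condition: Callable[[object], bool],
                 ticks: int = 10,
                 interval_s: float = 0.5) -> None:
    """Steps S11-S13: while multimedia is being output, repeatedly acquire the
    face image and, if the preset condition is met, adjust the output."""
    for _ in range(ticks):
        if not player.playing:                  # S11: only while outputting media
            break
        face_image = capture_face_image()       # S11: acquire the face image
        if meets_first_condition(face_image):   # S12: is the preset condition met?
            player.reduce_volume()              # S13: adjust an output parameter
            # alternatively: player.pause()     #      or adjust the output state
        time.sleep(interval_s)
```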
In the control method disclosed by the invention, when the data output unit of an electronic device outputs multimedia data, the face image of the device user is acquired and used as the basis for controlling the operation of the device; when the preset condition is determined to be met, an output parameter of the data output unit or the output state of the multimedia data is adjusted. This provides a new way of controlling the operation of an electronic device and, compared with existing key-press control, is more convenient to operate.
In implementation, judging whether the preset condition is met can be configured as: judging whether the face image of the device user meets the first preset condition. Accordingly, as soon as it is determined that the face image of the device user meets the first preset condition, the output parameter of the data output unit or the output state of the multimedia data can be adjusted.
There are various ways of judging whether the face image of the device user meets the first preset condition. Two of them are described below with reference to Fig. 2 and Fig. 3.
Refer to Fig. 2, which is a flow chart of one way, disclosed by the invention, of judging whether the face image of a device user meets the first preset condition. It comprises:
Step S21: extract the expression features of two face-image frames separated by a preset time interval.
A person's facial expression is mainly reflected in changes of the eyebrows, eyes, nose and mouth, and in the relations between these changes. The expression features of a face image characterise the person's facial expression; when the facial expression changes, the expression features contained in the face image also change considerably.
In implementation, two frames are taken from the face images captured by the image acquisition unit; they can be two consecutive frames, or two frames several frames apart. The expression features of these two frames are then extracted. It should be noted that facial expression recognition is a relatively mature technique, and existing expression-feature extraction algorithms can be used here.
Step S22: when the difference between the expression features of the two frames is greater than a threshold, determine that the face image of the device user meets the first preset condition.
If the expression features of the two frames differ considerably, the user's expression has changed considerably, and the facial expression of the device user is determined to meet the first preset condition.
In the method shown in Fig. 2, the first preset condition is configured as: the difference between the expression features of two face-image frames of the device user is greater than a threshold; in other words, the user's expression has changed considerably. In concrete implementation, two frames separated by a preset time interval are taken from the face images captured by the image acquisition unit, their expression features are extracted, and if the difference between these expression features is greater than the threshold, the face image of the device user is determined to meet the first preset condition.
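A minimal sketch of the threshold test of step S22, assuming the expression features have already been extracted as numeric vectors by an existing expression-recognition algorithm; the Euclidean distance and the threshold value are illustrative choices of the sketch, not prescribed by the method.

```python
import numpy as np

def expression_change_detected(features_t0: np.ndarray,
                               features_t1: np.ndarray,
                               threshold: float = 1.0) -> bool:
    """First preset condition per Fig. 2: the distance between the expression
    features of two frames taken a preset interval apart exceeds a threshold."""
    difference = np.linalg.norm(features_t1 - features_t0)
    return difference > threshold

# Illustrative use with made-up feature vectors:
calm = np.array([0.1, 0.0, 0.2])
surprised = np.array([0.9, 1.1, 0.8])
assert expression_change_detected(calm, surprised, threshold=1.0)
```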
Refer to Fig. 3, which is a flow chart of another way, disclosed by the invention, of judging whether the face image of a device user meets the first preset condition. It comprises:
Step S31: extract the expression features of the face image.
When it is determined that the data output unit of the device is outputting multimedia data, the face image of the device user captured by the image acquisition unit is acquired and its expression features are extracted.
Step S32: match the extracted expression features against a pre-stored first-class expression sample model to obtain a confidence value.
The electronic device pre-stores a first-class expression sample model. In implementation, a large number of face images of the device user are collected, and images of specific expressions (such as surprise, laughter or anger) are selected from them. The selected images are used as positive samples and the remaining images as negative samples to train the first-class expression sample model; in essence, this model is also a set of expression features.
Step S33: when the obtained confidence is higher than a preset threshold, determine that the face image of the device user meets the first preset condition.
The expression features extracted from the face image of the device user are matched against the first-class expression sample model; if the confidence between the two is higher than the preset threshold, the facial expression of the device user is determined to meet the first preset condition. Note that the higher the confidence between the expression features and the sample model, the higher the similarity between them.
In the method shown in Fig. 3, the first preset condition is configured as: the confidence between the expression features of the user's face image and the first-class expression sample model is higher than a preset threshold; in other words, the user's face presents a specific expression. In concrete implementation, one frame is taken from the face images captured by the image acquisition unit, its expression features are extracted and matched against the pre-stored first-class expression sample model, and if the confidence between the two is higher than the preset threshold, the face image of the device user is determined to meet the first preset condition.
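A minimal sketch of the matching of steps S32 and S33, assuming the expression features and the first-class expression sample model are available as numeric vectors; using cosine similarity as the confidence measure is an assumption of the sketch, since the method does not prescribe a particular measure.

```python
import numpy as np

def expression_confidence(features: np.ndarray, sample_model: np.ndarray) -> float:
    """Confidence between the extracted expression features and the pre-stored
    first-class expression sample model, here approximated by cosine similarity."""
    denom = np.linalg.norm(features) * np.linalg.norm(sample_model)
    return float(features @ sample_model / denom) if denom else 0.0

def specific_expression_detected(features: np.ndarray,
                                 sample_model: np.ndarray,
                                 threshold: float = 0.9) -> bool:
    """First preset condition per Fig. 3: confidence above a preset threshold."""
    return expression_confidence(features, sample_model) > threshold
```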
In the control method disclosed by the invention, when judging whether the preset condition is met is configured as judging whether the face image of the device user meets the first preset condition, either of the approaches shown in Fig. 2 and Fig. 3 can be used. While the user is using the device, if its data output unit is outputting multimedia data, the user can control the device through his or her expression to adjust an output parameter of the data output unit or the output state of the multimedia data.
It should be noted that the user can control the operation of the device by deliberately adjusting his or her facial expression. In addition, the device can also adjust its own operation based on the unconscious facial expression the user shows when some event occurs.
While the data output unit of the device is outputting multimedia data, the user's expression is usually rather calm; if an event occurs that requires the user's attention, the user's expression changes, and based on the method disclosed by the invention the device can automatically adjust the output parameter of the data output unit or the output state of the multimedia data.
For example, the device is playing music through earphones, and the user's facial expression stays calm while listening. When someone starts talking to the user and the user responds, the user's facial expression changes; after the device detects this change of expression, it lowers the volume or pauses the music, making it easy for the user to converse without taking off the earphones or adjusting the volume manually.
In implementation, if the device is also provided with an audio acquisition unit, the control device can further be configured so that, when it detects that the data output unit outputs multimedia data, it also acquires the audio information collected by the audio acquisition unit.
Accordingly, judging whether the preset condition is met can be configured as: judging whether the face image of the device user meets the first preset condition, and judging whether the audio information collected by the audio acquisition unit meets a second preset condition. Only when the face image of the device user meets the first preset condition and the collected audio information meets the second preset condition is the preset condition determined to be met, and the subsequent adjustment of the output parameter of the data output unit or the output state of the multimedia data performed. This reduces the probability of the device being operated by mistake.
It should be noted that, in concrete implementation, the order in which the two judgment operations are performed is not limited. For example: first judge whether the face image of the device user meets the first preset condition and, once it is determined to be met, judge whether the audio information collected by the audio acquisition unit meets the second preset condition. Or: first judge whether the collected audio information meets the second preset condition and, once it is determined to be met, judge whether the face image of the device user meets the first preset condition. Or: use multiple threads, with one thread judging whether the face image meets the first preset condition and another thread judging whether the collected audio information meets the second preset condition, and then combine the results of the two threads to decide whether to adjust the output parameter of the data output unit or the output state of the multimedia data, as sketched below.
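A minimal sketch of the multithreaded variant just mentioned, assuming the two judgment functions exist; the thread-pool mechanism shown is one possible implementation, not part of the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def preset_condition_met(face_image, audio_info,
                         meets_first_condition, meets_second_condition) -> bool:
    """Run the two judgments in parallel threads and combine the results;
    both must hold before any output adjustment is performed."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        face_ok = pool.submit(meets_first_condition, face_image)
        audio_ok = pool.submit(meets_second_condition, audio_info)
        return face_ok.result() and audio_ok.result()
```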
The concrete process of judging whether the face image of the device user meets the first preset condition has been discussed above and is not repeated here. The process of judging whether the audio information meets the second preset condition is described below with reference to Fig. 4 and Fig. 5.
Refer to Fig. 4, which is a flow chart of one way, disclosed by the invention, of judging whether the audio information meets the second preset condition. It comprises:
Step S41: detect the audio information and extract the voice information it contains.
The audio information collected by the audio acquisition unit of the device contains both background noise and voice information. Since the background noise is useless, the collected audio information is first detected and the voice information in it is extracted, which reduces the amount of data to be processed when the words are subsequently extracted.
Step S42: analyse the voice information and obtain the words it contains.
Step S43: when those words include a specific word, determine that the audio information meets the second preset condition.
An existing speech analysis algorithm is used to analyse the voice information extracted in step S41 and obtain the words it contains. The obtained words are then compared with the pre-stored specific words; if the words obtained in step S42 contain a specific word, the audio information is determined to meet the second preset condition.
In implementation, the specific words pre-stored in the device can be configured according to the user's speech habits, for example short response words such as "what".
In the method shown in Fig. 4, the second preset condition is configured as: the audio information collected by the audio acquisition unit contains a specific word. In concrete implementation, the voice information is extracted from the collected audio information and analysed to obtain the words it contains; if those words include a specific word, the audio information collected by the audio acquisition unit is determined to meet the second preset condition.
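A minimal sketch of the check of steps S42 and S43, assuming the voice information has already been transcribed into words by an existing speech analysis algorithm; the example trigger word simply echoes the "what" example above.

```python
def audio_meets_second_condition(transcribed_words: list[str],
                                 trigger_words: set[str]) -> bool:
    """Second preset condition per Fig. 4: the words recognised in the voice
    information include a pre-stored specific word."""
    return any(word in trigger_words for word in transcribed_words)

# Illustrative use (the trigger list would be configured from the user's habits):
print(audio_meets_second_condition(["what", "did", "you", "say"], {"what"}))  # True
```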
Refer to Fig. 5, which is a flow chart of another way, disclosed by the invention, of judging whether the audio information meets the second preset condition. It comprises:
Step S51: detect the audio information and extract the voice information it contains.
Step S52: extract the voiceprint features of the voice information.
The production of human speech is a complex physiological and physical process involving the language centre of the brain and the vocal organs. The vocal organs people use when speaking (tongue, teeth, larynx, nasal cavity) differ in size and shape from person to person, and these small differences change the airflow during phonation, causing differences in voice quality and pitch. People also differ in the habitual speed and force of their speech, causing differences in intensity and duration. The voiceprints of any two people therefore differ, and a voiceprint can be used for identification.
Step S53: compare the voiceprint features of the voice information with the voiceprint features of the device user.
Step S54: when the comparison shows that the voiceprint features of the voice information match the voiceprint features of the device user, determine that the audio information meets the second preset condition.
After the voice information in the audio information has been extracted, its voiceprint features are extracted and compared with the voiceprint features of the device user; this determines whether it is the device user who is speaking. When the two match, the audio information is determined to meet the second preset condition.
In the method shown in Fig. 5, the second preset condition is configured as: the voiceprint features of the voice information in the collected audio information match the voiceprint features of the device user; in other words, the collected audio information contains voice information uttered by the device user.
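A minimal sketch of the comparison of steps S53 and S54, assuming the voiceprint features are available as numeric vectors; the cosine-similarity measure and the matching threshold are assumptions of the sketch, as the method only requires that the two voiceprints be compared.

```python
import numpy as np

def voiceprint_matches(sample_voiceprint: np.ndarray,
                       user_voiceprint: np.ndarray,
                       threshold: float = 0.8) -> bool:
    """Second preset condition per Fig. 5: the voiceprint extracted from the
    voice information matches the enrolled voiceprint of the device user."""
    denom = np.linalg.norm(sample_voiceprint) * np.linalg.norm(user_voiceprint)
    similarity = float(sample_voiceprint @ user_voiceprint / denom) if denom else 0.0
    return similarity > threshold
```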
In the control method disclosed by the invention, when judging whether the preset condition is met is configured as judging whether the face image of the device user meets the first preset condition and whether the audio information collected by the audio acquisition unit meets the second preset condition, the approach of Fig. 2 or Fig. 3 can be used for the former and the approach of Fig. 4 or Fig. 5 for the latter. While the user is using the device, if its data output unit is outputting multimedia data, the user can control the device through his or her expression and voice to adjust an output parameter of the data output unit or the output state of the multimedia data.
It should be noted that the user can control the operation of the device by deliberately adjusting his or her facial expression and voice. In addition, the device can also adjust its own operation based on the unconscious facial expression and utterances the user produces when some event occurs.
While the data output unit of the device is outputting multimedia data, the user's expression is usually calm and the user is usually quiet. If an event occurs that requires the user's attention, the user's expression changes and the user may speak; based on the method disclosed by the invention, the device can then automatically adjust the output parameter of the data output unit or the output state of the multimedia data.
For example, the device is playing music through earphones; the user's facial expression stays calm while listening, and the user does not speak. When someone starts talking to the user and the user responds, the user's facial expression changes; after the device detects the change of expression and the voice, it lowers the volume or pauses the music, making it easy for the user to converse without taking off the earphones or adjusting the volume manually.
It should also be noted that, in the control method disclosed by the invention, once it is determined that the data output unit is outputting multimedia data, the operation of judging whether the preset condition is met is performed repeatedly at a preset time interval.
In addition, when the data output unit of the device comprises an audio output unit and a display unit, and the audio output unit is outputting audio data, adjusting the output parameter of the data output unit can be: reducing the output volume of the audio output unit.
When the data output unit of the device comprises an audio output unit and a display unit, adjusting the output state of the multimedia data can be: pausing playback of the multimedia data.
This is illustrated with examples.
When the device is only playing an audio file, in order to reduce power consumption usually only the audio output unit is outputting audio information; if it is determined that the output parameter of the data output unit needs to be adjusted, the volume of the audio output unit can be reduced.
When the device is playing a video file, the audio output unit is outputting audio information and the display unit is displaying images at the same time; if it is determined that the output parameter of the data output unit needs to be adjusted, the volume of the audio output unit is reduced.
Of course, whether the device is playing a video file or only an audio file, after the preset condition is determined to be met the output state of the multimedia data can also be adjusted, specifically by pausing its playback.
In addition, the concrete ways of adjusting the output parameter of the data output unit and the output state of the multimedia data are not limited to those described above. Adjusting the output parameter of the data output unit can also be configured as: turning the screen of the display unit off. When the user does not want others to see the content shown on the display unit, the user can quickly blank the screen by his or her expression, or by expression and voice together.
As a preferred solution, after the output parameter of the data output unit has been adjusted, the method further comprises: after a preset time, if the preset condition is no longer met, restoring the output parameter of the data output unit.
Taking the case where the volume of the audio output unit is reduced when the preset condition is met: after the volume has been reduced, if after the preset time the preset condition is no longer met, the volume of the audio output unit is restored.
As a preferred solution, after the output state of the multimedia data has been adjusted, the method further comprises: after a preset time, if the preset condition is no longer met, restoring the output state of the multimedia data.
Taking the case where playback of the multimedia data is paused when the preset condition is met: after playback has been paused, if after the preset time the preset condition is no longer met, playback of the multimedia data is continued. A timed-restore sketch is given below.
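A minimal sketch of the timed restore just described, reusing the MediaPlayer stub from the earlier sketch; the preset-time value and the blocking wait are assumptions made for illustration.

```python
import time

def adjust_with_timed_restore(player, condition_still_met, preset_time_s: float = 3.0) -> None:
    """Reduce the volume when the preset condition is met, then restore it if the
    condition no longer holds after the preset time (player is any object with
    a `volume` attribute and a `reduce_volume()` method, e.g. the MediaPlayer stub)."""
    original_volume = player.volume
    player.reduce_volume()                 # adjust the output parameter
    time.sleep(preset_time_s)              # wait for the preset time
    if not condition_still_met():
        player.volume = original_volume    # restore the output parameter
```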
The control method applied to an electronic device has been disclosed above; correspondingly, the invention also discloses a control device applied to an electronic device.
Refer to Fig. 7, which is a structural schematic diagram of a control device of an electronic device disclosed by the invention. The electronic device comprises a data output unit and an image acquisition unit and is capable of playing multimedia data; it may be a mobile phone, a tablet computer or a wearable electronic device. The control device comprises an image acquiring unit 10, a judging unit 20 and a first control unit 30.
Wherein:
the image acquiring unit 10 is configured to acquire, when it is detected that the data output unit outputs multimedia data, the face image of the device user captured by the image acquisition unit. The device is provided with an image acquisition unit configured so that it can capture the face image of the device user; its concrete mounting arrangements are described above.
The judging unit 20 is configured to judge whether a preset condition is met, the preset condition including that the face image of the device user meets a first preset condition. When it is determined that the data output unit of the device is outputting multimedia data, the face image of the device user is acquired and used as the basis for controlling the operation of the device.
The first control unit 30 is configured to adjust, based on the judgment result, an output parameter of the data output unit or the output state of the multimedia data. When the judging unit 20 determines that the preset condition is met, the first control unit 30 adjusts the output parameter of the data output unit in the device, or the output state of the multimedia data; that is, it adjusts the running state of the device.
The control device disclosed by the invention thus acquires, while the data output unit of the device outputs multimedia data, the face image of the device user as the basis for controlling the operation of the device, and adjusts an output parameter of the data output unit or the output state of the multimedia data when the preset condition is determined to be met. This provides a new way of controlling the operation of an electronic device and, compared with existing key-press control, is more convenient to operate.
In implementation, judging whether the preset condition is met can be configured as: judging whether the face image of the device user meets the first preset condition. Accordingly, the judging unit 20 then comprises only a first judging subunit, which judges whether the face image of the device user meets the first preset condition; when the first judging subunit determines that the face image meets the first preset condition, the preset condition is determined to be met and the output parameter of the data output unit or the output state of the multimedia data can be adjusted.
When the device is also provided with an audio acquisition unit, the control device is further provided with an audio acquiring unit 40, and judging whether the preset condition is met is configured as: judging whether the face image of the device user meets the first preset condition, and judging whether the audio information collected by the audio acquisition unit meets a second preset condition. Accordingly, the judging unit 20 comprises a first judging subunit 21 and a second judging subunit 22, and the structure of the control device is as shown in Fig. 8.
Wherein:
the image acquiring unit 10 is configured to acquire, when it is detected that the data output unit outputs multimedia data, the face image of the device user captured by the image acquisition unit;
the audio acquiring unit 40 is configured to acquire, when it is detected that the data output unit outputs multimedia data, the audio information collected by the audio acquisition unit;
the judging unit 20 comprises the first judging subunit 21 and the second judging subunit 22. The first judging subunit 21 judges whether the face image of the device user acquired by the image acquiring unit 10 meets the first preset condition, and the second judging subunit 22 judges whether the audio information acquired by the audio acquiring unit 40 meets the second preset condition. When the results of both the first judging subunit 21 and the second judging subunit 22 are affirmative, the judging unit 20 determines that the preset condition is met;
the first control unit 30 adjusts, based on the judgment result of the judging unit 20, an output parameter of the data output unit or the output state of the multimedia data. When the judging unit 20 determines that the preset condition is met, the first control unit 30 adjusts the output parameter of the data output unit in the device, or the output state of the multimedia data; that is, it adjusts the running state of the device.
In the control device shown in Fig. 8, the first control unit performs the adjustment of the output parameter of the data output unit or the output state of the multimedia data only when the face image of the device user meets the first preset condition and the audio information meets the second preset condition, which reduces the probability of the device being operated by mistake.
As one embodiment, the first judging subunit comprises a first expression-feature extraction module and a first processing module.
Wherein:
the first expression-feature extraction module is connected to the image acquiring unit and extracts the expression features of two face-image frames separated by a preset time interval; the first processing module is connected to the first expression-feature extraction module and, when the difference between the expression features of the two frames is greater than a threshold, determines that the face image of the device user meets the first preset condition.
As another embodiment, the first judging subunit comprises a second expression-feature extraction module, an expression-feature matching module and a second processing module.
Wherein:
the second expression-feature extraction module is connected to the image acquiring unit and extracts the expression features of the face image; the expression-feature matching module is connected to the second expression-feature extraction module and matches the extracted expression features against the pre-stored first-class expression sample model to obtain a confidence value; the second processing module is connected to the expression-feature matching module and, when the confidence obtained by the expression-feature matching module is higher than a preset threshold, determines that the face image of the device user meets the first preset condition.
In implementation, the second judging subunit can adopt the following structure: a voice-information extraction module, a speech analysis module and a third processing module.
Wherein:
the voice-information extraction module is connected to the audio acquiring unit, detects the audio information and extracts the voice information it contains; the speech analysis module is connected to the voice-information extraction module, analyses the extracted voice information and obtains the words it contains; the third processing module is connected to the speech analysis module and, when the words obtained by the speech analysis module include a specific word, determines that the audio information meets the second preset condition.
Alternatively, the second judging subunit can adopt the following structure: a voice-information extraction module, a voiceprint-feature extraction module, a comparison module and a fourth processing module.
Wherein:
the voice-information extraction module is connected to the audio acquiring unit, detects the audio information and extracts the voice information it contains; the voiceprint-feature extraction module is connected to the voice-information extraction module and extracts the voiceprint features of that voice information; the comparison module is connected to the voiceprint-feature extraction module and compares the voiceprint features of the voice information with those of the device user; the fourth processing module is connected to the comparison module and, when the comparison shows that the voiceprint features of the voice information match those of the device user, determines that the audio information meets the second preset condition.
When the data output unit of the device comprises an audio output unit and a display unit, as a preferred implementation the first control unit comprises a first control module, which reduces the output volume of the audio output unit when the audio output unit is outputting audio data.
When the data output unit of the device comprises an audio output unit and a display unit, as a preferred implementation the first control unit comprises a second control module, which pauses playback of the multimedia data.
On the basis of the control devices shown in Fig. 7 and Fig. 8, a second control unit can further be provided. If, a preset time after the first control unit has adjusted the output parameter of the data output unit, the preset condition is no longer met, the second control unit restores the output parameter of the data output unit; if, a preset time after the first control unit has adjusted the output state of the multimedia data, the preset condition is no longer met, the second control unit restores the output state of the multimedia data.
The invention also discloses a wearable electronic device comprising a support for maintaining the relative position of the wearable electronic device and the head of the device user. The wearable electronic device further comprises a lens module, an audio output unit, an audio acquisition unit, an image acquisition unit, and any one of the control devices disclosed above. Fig. 6 shows one structure of such a wearable electronic device. The wearable electronic device disclosed by the invention uses the face image of the user as one basis of control and thus provides a new control mode.
In addition, the invention also discloses an electronic device comprising an audio output unit, a display unit, an audio acquisition unit, an image acquisition unit, and any one of the control devices disclosed above. The electronic device disclosed by the invention uses the face image of the user as one basis of control and thus provides a new control mode.
Finally, it should also be noted that, in this text, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising that element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments can be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively simple, and the relevant parts can be found in the description of the method.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1. a control method, is applied to electronic equipment, and described electronic equipment comprises data outputting unit and image acquisition units, it is characterized in that, described control method comprises:
When detecting that described data outputting unit exports multi-medium data, obtain the face-image of the electronic equipment user that described image acquisition units collects;
Judge whether to meet pre-conditioned, the described pre-conditioned face-image comprising described electronic equipment user meets first pre-conditioned;
Based on judged result, adjust the output parameter of described data outputting unit or adjust the output state of described multi-medium data.
2. The control method according to claim 1, characterized in that the electronic device further comprises an audio collection unit, and, when it is detected that the data output unit is outputting multimedia data, the method further comprises: acquiring audio information collected by the audio collection unit;
the judging whether a preset condition is met comprises: judging whether the face image of the user of the electronic equipment meets the first preset condition, and judging whether the audio information meets a second preset condition.
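Purely as an illustration, and not as part of the claims, the following Python sketch shows one way the overall flow of claims 1 and 2 could be organised. All object names (camera, microphone, player) and helper callables are hypothetical placeholders; the claims fix only the flow itself, not any particular API.

```python
import time

def control_loop(camera, microphone, player,
                 face_condition_met, audio_condition_met,
                 poll_interval_s=0.5):
    """Minimal sketch of the claimed method, under assumed device objects."""
    while player.is_outputting_multimedia():            # detection step
        face_image = camera.capture_face_image()         # image acquisition unit
        audio_clip = microphone.capture_audio()          # audio collection unit
        condition = (face_condition_met(face_image)      # first preset condition
                     and audio_condition_met(audio_clip))  # second preset condition
        if condition:
            player.lower_volume()    # adjust an output parameter, or ...
            # player.pause()         # ... adjust the output state instead
        time.sleep(poll_interval_s)
```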
3. The control method according to claim 1, characterized in that judging whether the face image of the user of the electronic equipment meets the first preset condition comprises:
extracting expression features from two frames of face images separated by a preset time interval;
when the difference between the expression features of the two frames of face images is greater than a threshold, determining that the face image of the user of the electronic equipment meets the first preset condition.
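The following sketch, offered only as an illustration of the check in claim 3, compares the expression features of two frames against a threshold. The feature extractor shown here (plain downsampling) and the threshold value are placeholders I have assumed for self-containment; the claim does not prescribe any particular extractor or distance measure.

```python
import numpy as np

def extract_expression_features(frame: np.ndarray) -> np.ndarray:
    # Placeholder extractor: a real system would use facial landmarks or an
    # expression-recognition model; here the frame is simply downsampled and
    # normalised so the sketch stays self-contained.
    coarse = frame[::16, ::16].astype(np.float32).ravel()
    return coarse / (np.linalg.norm(coarse) + 1e-9)

def first_condition_met(frame_t0: np.ndarray, frame_t1: np.ndarray,
                        threshold: float = 0.5) -> bool:
    # A large change between the expression features of two frames captured a
    # preset time interval apart satisfies the first preset condition.
    diff = np.linalg.norm(extract_expression_features(frame_t1)
                          - extract_expression_features(frame_t0))
    return diff > threshold
```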
4. The control method according to claim 1, characterized in that judging whether the face image of the user of the electronic equipment meets the first preset condition comprises:
extracting an expression feature from the face image;
matching the extracted expression feature against a prestored sample model of a first type of expression to obtain a confidence value;
when the obtained confidence value is higher than a predetermined threshold, determining that the face image of the user of the electronic equipment meets the first preset condition.
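As an illustrative sketch of the template-matching variant in claim 4, the code below scores an extracted feature vector against prestored sample templates and compares the resulting confidence to a threshold. The cosine-similarity scoring and the 0.8 threshold are assumptions for the example, not requirements of the claim.

```python
import numpy as np

def expression_confidence(feature: np.ndarray, sample_templates: list) -> float:
    # Best cosine similarity against the prestored templates of the first
    # type of expression, rescaled from [-1, 1] to a [0, 1] confidence.
    best = max(
        float(np.dot(feature, t) /
              (np.linalg.norm(feature) * np.linalg.norm(t) + 1e-9))
        for t in sample_templates
    )
    return (best + 1.0) / 2.0

def first_condition_met(feature: np.ndarray, sample_templates: list,
                        confidence_threshold: float = 0.8) -> bool:
    # The first preset condition is met when the matching confidence exceeds
    # the predetermined threshold.
    return expression_confidence(feature, sample_templates) > confidence_threshold
```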
5. The control method according to claim 2, 3 or 4, characterized in that judging whether the audio information meets the second preset condition comprises:
detecting the audio information and extracting voice information from the audio information;
analyzing the voice information to obtain the vocabulary contained in the voice information;
when the vocabulary contains a specific word, determining that the audio information meets the second preset condition.
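For illustration only, the sketch below shows a keyword check in the spirit of claim 5. The phrase list and the transcribe callable are hypothetical; the claim does not name a specific speech recogniser or vocabulary.

```python
SPECIFIC_VOCABULARY = {"hello", "excuse me", "one moment"}  # example phrases only

def second_condition_met_by_keyword(audio_clip, transcribe) -> bool:
    # transcribe is any speech-to-text callable returning plain text.
    text = transcribe(audio_clip).lower()
    return any(phrase in text for phrase in SPECIFIC_VOCABULARY)
```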
6. The control method according to claim 2, 3 or 4, characterized in that judging whether the audio information meets the second preset condition comprises:
detecting the audio information and extracting voice information from the audio information;
extracting a voiceprint feature of the voice information;
comparing the voiceprint feature of the voice information with a voiceprint feature of the user of the electronic equipment;
when the comparison result shows that the voiceprint feature of the voice information matches the voiceprint feature of the user of the electronic equipment, determining that the audio information meets the second preset condition.
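The following sketch illustrates one possible voiceprint comparison for claim 6, assuming that some speaker-verification front end (not specified by the patent) has already produced embedding vectors for the captured speech and for the enrolled user; the cosine threshold is likewise an assumed example value.

```python
import numpy as np

def second_condition_met_by_voiceprint(query_embedding: np.ndarray,
                                       enrolled_embedding: np.ndarray,
                                       threshold: float = 0.75) -> bool:
    # Cosine similarity between the voiceprint of the captured speech and
    # the enrolled voiceprint of the user of the electronic equipment.
    cos = float(np.dot(query_embedding, enrolled_embedding) /
                (np.linalg.norm(query_embedding) *
                 np.linalg.norm(enrolled_embedding) + 1e-9))
    return cos > threshold
```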
7. The control method according to claim 1 or 2, characterized in that the data output unit comprises an audio output unit and a display unit;
the adjusting an output parameter of the data output unit comprises: when the audio output unit is outputting audio data, reducing the output volume of the audio output unit.
8. The control method according to claim 1 or 2, characterized in that the data output unit comprises an audio output unit and a display unit;
the adjusting an output state of the multimedia data comprises: pausing playback of the multimedia data.
9. The control method according to claim 1 or 2, characterized in that:
after adjusting the output parameter of the data output unit, the method further comprises: after a predetermined time, if the preset condition is no longer met, restoring the output parameter of the data output unit;
after adjusting the output state of the multimedia data, the method further comprises: after a predetermined time, if the preset condition is no longer met, restoring the output state of the multimedia data.
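As a final illustration, the sketch below ties together the adjust-and-restore behaviour of claims 7 to 9 in a small controller. The player object and its get_volume/set_volume methods, the lowered volume level, and the restore delay are all assumptions made for the example; an implementation could equally pause and resume playback instead of changing the volume.

```python
import time

class OutputController:
    """Sketch of the adjust/restore behaviour, under an assumed player API."""

    def __init__(self, player, lowered_volume=0.2, restore_after_s=5.0):
        self.player = player
        self.lowered_volume = lowered_volume
        self.restore_after_s = restore_after_s
        self.saved_volume = None
        self.adjusted_at = None

    def on_condition(self, condition_met: bool) -> None:
        now = time.monotonic()
        if condition_met and self.saved_volume is None:
            # Preset condition met: lower the output volume (claim 7);
            # pausing playback (claim 8) would be the alternative.
            self.saved_volume = self.player.get_volume()
            self.player.set_volume(self.lowered_volume)
            self.adjusted_at = now
        elif (not condition_met and self.saved_volume is not None
              and now - self.adjusted_at >= self.restore_after_s):
            # Condition no longer met after the predetermined time:
            # restore the original output parameter (claim 9).
            self.player.set_volume(self.saved_volume)
            self.saved_volume = None
            self.adjusted_at = None
```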
10. A control device, applied to an electronic device, the electronic device comprising a data output unit and an image acquisition unit, characterized in that the control device comprises:
an image acquiring unit, configured to, when it is detected that the data output unit is outputting multimedia data, acquire a face image of the user of the electronic equipment collected by the image acquisition unit;
a judging unit, configured to judge whether a preset condition is met, the preset condition comprising that the face image of the user of the electronic equipment meets a first preset condition;
a first control unit, configured to, based on the judgment result, adjust an output parameter of the data output unit or adjust an output state of the multimedia data.
11. The control device according to claim 10, characterized in that the electronic device further comprises an audio collection unit;
the control device further comprises an audio acquiring unit, the audio acquiring unit being configured to, when it is detected that the data output unit is outputting multimedia data, acquire audio information collected by the audio collection unit;
the judging unit comprises a first judging subunit and a second judging subunit, the first judging subunit being configured to judge whether the face image of the user of the electronic equipment meets the first preset condition, and the second judging subunit being configured to judge whether the audio information meets a second preset condition.
12. The control device according to claim 11, characterized in that the first judging subunit comprises:
a first expression feature extraction module, configured to extract expression features from two frames of face images separated by a preset time interval;
a first processing module, configured to, when the difference between the expression features of the two frames of face images is greater than a threshold, determine that the face image of the user of the electronic equipment meets the first preset condition.
13. The control device according to claim 11, characterized in that the first judging subunit comprises:
a second expression feature extraction module, configured to extract an expression feature from the face image;
an expression feature matching module, configured to match the extracted expression feature against a prestored sample model of a first type of expression to obtain a confidence value;
a second processing module, configured to, when the obtained confidence value is higher than a predetermined threshold, determine that the face image of the user of the electronic equipment meets the first preset condition.
14. The control device according to claim 11, 12 or 13, characterized in that the second judging subunit comprises:
a voice information extraction module, configured to detect the audio information and extract voice information from the audio information;
a voice analysis module, configured to analyze the voice information to obtain the vocabulary contained in the voice information;
a third processing module, configured to, when the vocabulary contains a specific word, determine that the audio information meets the second preset condition.
15. The control device according to claim 11, 12 or 13, characterized in that the second judging subunit comprises:
a voice information extraction module, configured to detect the audio information and extract voice information from the audio information;
a voiceprint feature extraction module, configured to extract a voiceprint feature of the voice information;
a comparison module, configured to compare the voiceprint feature of the voice information with a voiceprint feature of the user of the electronic equipment;
a fourth processing module, configured to, when the comparison result shows that the voiceprint feature of the voice information matches the voiceprint feature of the user of the electronic equipment, determine that the audio information meets the second preset condition.
16. The control device according to claim 10 or 11, characterized in that the data output unit comprises an audio output unit and a display unit;
the first control unit comprises a first control module, the first control module being configured to, when the audio output unit is outputting audio data, reduce the output volume of the audio output unit.
17. The control device according to claim 10 or 11, characterized in that the data output unit comprises an audio output unit and a display unit;
the first control unit comprises a second control module, the second control module being configured to pause playback of the multimedia data.
18. The control device according to claim 10 or 11, characterized by further comprising a second control unit, wherein the second control unit is configured to:
after a predetermined time following the adjustment of the output parameter of the data output unit, if the preset condition is no longer met, restore the output parameter of the data output unit; and
after a predetermined time following the adjustment of the output state of the multimedia data, if the preset condition is no longer met, restore the output state of the multimedia data.
19. A wearable electronic device, comprising a support, the support being configured to maintain the relative position between the wearable electronic device and the head of the user of the electronic equipment, the wearable electronic device further comprising a lens module, an audio output unit, an audio collection unit and an image acquisition unit, characterized in that the wearable electronic device further comprises the control device according to any one of claims 10 to 18.
20. An electronic device, comprising an audio output unit, a display unit, an audio collection unit and an image acquisition unit, characterized in that the electronic device further comprises the control device according to any one of claims 10 to 18.
CN201510083677.0A 2015-02-16 2015-02-16 Control method, control device, wearable electronic equipment and electronic equipment Active CN104615252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510083677.0A CN104615252B (en) 2015-02-16 2015-02-16 Control method, control device, wearable electronic equipment and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510083677.0A CN104615252B (en) 2015-02-16 2015-02-16 Control method, control device, wearable electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN104615252A 2015-05-13
CN104615252B 2019-02-05

Family

ID=53149738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510083677.0A Active CN104615252B (en) 2015-02-16 2015-02-16 Control method, control device, wearable electronic equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN104615252B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906939A (en) * 2004-04-07 2007-01-31 松下电器产业株式会社 Communication terminal and communication method
CN1908965A (en) * 2005-08-05 2007-02-07 索尼株式会社 Information processing apparatus and method, and program
CN102566740A (en) * 2010-12-16 2012-07-11 富泰华工业(深圳)有限公司 Electronic device with emotion recognition function, and output control method of such electronic device
TW201301868A (en) * 2011-06-17 2013-01-01 Inventec Appliances Shanghai A simulation of the remote control system and its method of operation
CN102355527A (en) * 2011-07-22 2012-02-15 深圳市无线开锋科技有限公司 Mood induction apparatus of mobile phone and method thereof
US20130050642A1 (en) * 2011-08-30 2013-02-28 John R. Lewis Aligning inter-pupillary distance in a near-eye display system
CN103186326A (en) * 2011-12-27 2013-07-03 联想(北京)有限公司 Application object operation method and electronic equipment
CN103491432A (en) * 2012-06-12 2014-01-01 联想(北京)有限公司 Method, device and system for multimedia information output control
CN103823548A (en) * 2012-11-19 2014-05-28 联想(北京)有限公司 Electronic equipment, wearing-type equipment, control system and control method
WO2014164901A1 (en) * 2013-03-11 2014-10-09 Magic Leap, Inc. System and method for augmented and virtual reality

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104954555A (en) * 2015-05-18 2015-09-30 百度在线网络技术(北京)有限公司 Volume adjusting method and system
US10516776B2 (en) 2015-05-18 2019-12-24 Baidu Online Network Technology (Beijing) Co., Ltd. Volume adjusting method, system, apparatus and computer storage medium
CN105242942A (en) * 2015-09-17 2016-01-13 小米科技有限责任公司 Application control method and apparatus
WO2018121463A1 (en) * 2016-12-30 2018-07-05 Changchun Ruixinboguan Technology Development Co., Ltd. Systems and methods for interaction with an application
US11020654B2 (en) 2016-12-30 2021-06-01 Suzhou Yaoxinyan Technology Development Co., Ltd. Systems and methods for interaction with an application
CN107861708A (en) * 2017-12-21 2018-03-30 广东欧珀移动通信有限公司 Volume method to set up, device, terminal device and storage medium
CN111803936A (en) * 2020-07-16 2020-10-23 网易(杭州)网络有限公司 Voice communication method and device, electronic equipment and storage medium
CN111803936B (en) * 2020-07-16 2024-05-31 网易(杭州)网络有限公司 Voice communication method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104615252B (en) 2019-02-05

Similar Documents

Publication Publication Date Title
CN103426275B (en) The device of detection eye fatigue, desk lamp and method
CN104615252A (en) Control method, control device, wearable electronic device and electronic equipment
JP5323770B2 (en) User instruction acquisition device, user instruction acquisition program, and television receiver
CN108460334A (en) A kind of age forecasting system and method based on vocal print and facial image Fusion Features
CN111128157B (en) Wake-up-free voice recognition control method for intelligent household appliance, computer readable storage medium and air conditioner
EP3890342B1 (en) Waking up a wearable device
CN202150884U (en) Handset mood-induction device
JP2010511958A (en) Gesture / voice integrated recognition system and method
WO2020244257A1 (en) Method and system for voice wake-up, electronic device, and computer-readable storage medium
CN103700370A (en) Broadcast television voice recognition method and system
WO2016173132A1 (en) Method and device for voice recognition, and user equipment
CN111105796A (en) Wireless earphone control device and control method, and voice control setting method and system
WO2008069519A1 (en) Gesture/speech integrated recognition system and method
CN111583937A (en) Voice control awakening method, storage medium, processor, voice equipment and intelligent household appliance
CN103945140B (en) The generation method and system of video caption
CN104883503A (en) Customized shooting technology based on voice
WO2020125038A1 (en) Voice control method and device
CN110946554A (en) Cough type identification method, device and system
WO2017143951A1 (en) Expression feedback method and smart robot
CN109994129A (en) Speech processing system, method and apparatus
EP3793275A1 (en) Location reminder method and apparatus, storage medium, and electronic device
CN113727242A (en) Online pickup main power unit and method and wearable device
CN112672120B (en) Projector with voice analysis function and personal health data generation method
WO2023185007A1 (en) Sleep scene setting method and apparatus
CN111182456B (en) Audio playing switching method and system based on intelligent sound box and related equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant