CN104423547B - A kind of input method and electronic equipment - Google Patents
- Publication number
- CN104423547B (application number CN201310381900.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- description item
- user
- feature
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
Abstract
The present invention provides an input method and an electronic device. The method includes: capturing a first image, where the first image contains at least information about a part of a user's body; determining feature information of the user according to the first image; and obtaining, from preset description items, a target description item that matches the feature information, and inputting the target description item. The method can improve the input speed of description items such as characters or expressions.
Description
Technical field
The present invention relates to the field of content input technology, and in particular to an input method and an electronic device.
Background technology
When performing input operations on an electronic device, the content entered in the input area may involve, in addition to text, a large number of special characters or animated expressions, in order to make the input more vivid or to make the input process more interesting. For example, while communicating through an instant messaging tool, people may input expressions composed of special characters, or animated expressions, and so on.
However, as the number of these special characters and expressions in the database grows, finding a particular expression or character during input takes longer and longer, which hurts input speed.
Summary of the invention
In view of this, the present invention provides an input method and an electronic device, so as to improve the input speed of description items such as characters or expressions.
To achieve the above object, the present invention provides the following technical solution. An input method includes:
capturing a first image, where the first image contains at least information about a part of a user's body;
determining feature information of the user according to the first image; and
obtaining, from preset description items, a target description item that matches the feature information, and inputting the target description item.
Optionally, capturing the first image includes:
capturing a first image that contains at least facial information of the user.
Correspondingly, determining the feature information of the user according to the first image includes:
identifying a facial expression feature of the user according to the facial information in the first image.
Obtaining, from the preset description items, the description item that matches the feature information, and inputting the description item, includes:
obtaining, from the preset description items, a target description item that matches the facial expression feature, and inputting the target description item.
Optionally, identifying the facial expression feature of the user according to the facial information in the first image includes:
performing feature extraction on the facial organs in the facial information of the first image, to obtain feature information of the facial organs; and
performing expression category recognition according to the feature information of the facial organs, to determine a facial expression category of the user.
Optionally, capturing the first image includes:
capturing a first image that contains at least a body posture of the user.
Correspondingly, determining the feature information of the user according to the first image includes:
determining a limb action feature of the user according to the first image.
Obtaining, from the preset description items, the target description item that matches the feature information, and inputting the target description item, includes:
obtaining, from the preset description items, a target description item that matches the limb action feature, and inputting the target description item.
Optionally, capturing the first image includes:
capturing the first image when a click or touch on a preset description-item input button is detected.
Optionally, the description items include one or more of the following:
expression pictures, action pictures, characters, and symbol expressions composed of characters.
Optionally, obtaining, from the preset description items, the target description item that matches the feature information, and inputting the target description item, includes:
obtaining, from the preset description items and according to a preset correspondence between feature information and description items, the target description item that matches the feature information, and inputting the target description item.
In another aspect, the present invention also provides an electronic device, including:
an image capture unit, configured to capture a first image, where the first image contains at least information about a part of a user's body;
an image analysis unit, configured to determine feature information of the user according to the first image; and
an input unit, configured to obtain, from preset description items, a target description item that matches the feature information, and to input the target description item.
Optionally, the image capture unit includes:
a first image capture unit, configured to capture a first image that contains at least facial information of the user.
Correspondingly, the image analysis unit includes:
a first image analysis unit, configured to identify a facial expression feature of the user according to the facial information in the first image.
The input unit includes:
a first input unit, configured to obtain, from the preset description items, a target description item that matches the facial expression feature, and to input the target description item.
Optionally, the first image analysis unit includes:
a feature extraction unit, configured to perform feature extraction on the facial organs according to the facial information in the first image, to obtain feature information of the facial organs; and
an expression classification unit, configured to perform expression category recognition according to the feature information of the facial organs, to determine a facial expression category of the user.
Optionally, the image capture unit includes:
a second image capture unit, configured to capture a first image that contains at least a body posture of the user.
Correspondingly, the image analysis unit includes:
a second image analysis unit, configured to determine a limb action feature of the user according to the first image.
The input unit includes:
a second input unit, configured to obtain, from the preset description items, a target description item that matches the limb action feature, and to input the target description item.
Optionally, the image capture unit includes:
an image capture subunit, configured to capture the first image when a click or touch on a preset description-item input button is detected.
Optionally, the description items include one or more of the following:
expression pictures, action pictures, characters, and symbol expressions composed of characters.
Optionally, the input unit includes:
an input subunit, configured to obtain, from the preset description items and according to a preset correspondence between feature information and description items, the target description item that matches the feature information, and to input the target description item.
As can be seen from the above technical solution, compared with the prior art, the present disclosure provides an input method and an electronic device. After capturing a first image that contains at least information about a part of the user's body, the method determines the feature information of the user from the first image and, according to that feature information, determines from the preset description items a target description item that matches it, and inputs the target description item. This spares the user from manually selecting the item to be entered from a large number of description items, a selection process that is complex and time-consuming, and thereby improves input speed.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from the provided drawings without creative effort.
Fig. 1 shows a schematic flowchart of one embodiment of an input method of the present application;
Fig. 2 shows a schematic flowchart of another embodiment of an input method of the present application;
Fig. 3 shows a schematic diagram of a user's facial expression in a first image captured by the present application;
Fig. 4 shows a schematic diagram of an expression description item determined based on the facial expression shown in Fig. 3;
Fig. 5 shows a schematic flowchart of yet another embodiment of an input method of the present application;
Fig. 6 shows a schematic structural diagram of one embodiment of an electronic device of the present application.
Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiments of the present application disclose an input method, so as to improve the input speed of special description items such as characters or expressions.
Referring to Fig. 1, a schematic flowchart of one embodiment of an input method of the present invention is shown. The method of this embodiment may be applied to an electronic device with image acquisition capability, such as a laptop, a desktop computer, a mobile phone, or a handheld computer. The method of this embodiment may include:
Step 101: Capture a first image, where the first image contains at least information about a part of a user's body.
A part of the user's body is captured by an image acquisition unit installed on or connected to the electronic device, so that the captured image contains information about a part of the user's body. For example, the image may contain some or all of the image information of the user's face, head, limbs, or torso.
Step 102: Determine feature information of the user according to the first image.
Since the first image contains image information of a part of the user's body, the feature information of the user at the moment the first image was captured can be derived by analyzing the first image. Here, the feature information of the user is information that can reflect the composition of the captured body parts, and the body posture, expression, and action of the user at the time the first image was captured.
For example, if the first image contains the user's arm, the feature information includes the arm and the posture of the arm, and the action the user is currently performing can also be inferred from the posture of the arm.
Step 103: Obtain, from the preset description items, a target description item that matches the feature information, and input the target description item.
Here, a description item is an input option that intuitively describes a scene, the user's mood, the user's action, or some specific meaning. Introducing description items into the input process alleviates the problems of purely textual input: the entered content is lengthy, the expression is not intuitive, and the result does not easily engage the reader.
Optionally, the description item may be a picture, such as an expression picture containing a user expression, or an action picture or motion picture containing a human action. It should be understood that the picture may be an animated picture or a static picture. The description item may also be a character, such as "→", or a symbol expression composed of characters, such as "O(∩_∩)O" or "(^o^)/~".
Introducing description items into the input process makes input more interesting and is also more conducive to emotional expression; description items are widely used, especially in instant messaging and on social networking sites.
In this embodiment, once the feature information of the user has been determined, a target description item that matches the feature information can be selected from the preset description items according to that feature information. The target description item contains at least content related to some or all of the feature information, or is a description item that can reflect the current features of the user.
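The three steps above can be sketched as a small pipeline. Everything below is an illustrative placeholder under stated assumptions: the item library, the feature names, and the overlap-based matcher are invented for the sketch and are not the patent's actual implementation.

```python
# Illustrative sketch of steps 101-103; all names and the matching
# logic are placeholders, not the patent's method.
PRESET_ITEMS = [
    {"kind": "picture", "content": "laugh.gif", "features": {"laugh"}},
    {"kind": "kaomoji", "content": "O(∩_∩)O", "features": {"smile"}},
    {"kind": "character", "content": "←", "features": {"point_left"}},
]

def determine_features(first_image):
    # Step 102 placeholder: a real system would run face/pose analysis
    # on the captured frame here.
    return set(first_image.get("detected", []))

def match_target_item(features, items=PRESET_ITEMS):
    # Step 103: return the first preset item whose features overlap the
    # user's feature information, or None when nothing matches.
    for item in items:
        if item["features"] & features:
            return item
    return None

captured = {"detected": ["laugh"]}  # stand-in for a captured first image
target = match_target_item(determine_features(captured))
print(target["content"])  # laugh.gif
```

In a real device the two placeholder functions would wrap the image analysis unit and the description-item store; the control flow stays the same.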
In the embodiments of the present application, after a first image containing at least information about a part of the user's body is captured, the feature information of the user is determined from the first image, and according to that feature information a matching target description item is determined from the preset description items and input. This spares the user from manually selecting the item to be entered from a large number of description items, a selection process that is complex and time-consuming, and thereby improves input speed.
The input method of the present application can be applied to the input of word-processing documents or other text content, and can also be applied to content input in instant messaging or on social networking sites. When an input operation is detected, an image containing information about a part of the user's body can be captured in real time, and the feature information of the user analyzed from this first image. If a description item matching the feature information exists among the preset description items, the matching description item is taken as the target description item and input. In this case, to avoid accidental input, the target description item can first be placed in a preset staging area, such as a region of the text candidate bar, and committed to the screen only when the user confirms the input of the target description item.
Optionally, to reduce accidental operation, the first image containing information about a part of the user's body can be captured when a click or touch on a preset description-item input button is detected. For example, a button can be preset as the description-item input button, and when the user clicks or touches it, the capture of the first image is triggered. As another example, instant messaging clients and web pages are usually provided with an expression input button, and the image capture is triggered when a click on it is detected.
It should be understood that, in any embodiment of the present application, there may be multiple description items among the preset description items that match the feature information; the description item with the highest matching degree can be selected as the target description item. Optionally, to improve input accuracy, all description items that match the feature information can be taken as target description items, the multiple target description items presented to the user, and the one selected by the user input.
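The candidate-selection policy just described can be sketched as a scoring pass over the preset items. The feature-overlap score and all names here are assumptions made for the sketch, not values from the patent.

```python
# Hypothetical candidate ranking: score each preset item by feature
# overlap, then take the best match or offer the top few to the user.
ITEMS = [
    {"content": "laugh.gif", "features": {"laugh", "smile"}},
    {"content": "grin.png", "features": {"smile"}},
    {"content": "wave.gif", "features": {"wave"}},
]

def rank_candidates(user_features, items=ITEMS):
    scored = [(len(i["features"] & user_features), i) for i in items]
    scored = [(s, i) for s, i in scored if s > 0]       # drop non-matches
    scored.sort(key=lambda pair: pair[0], reverse=True)  # best-first
    return [i for _, i in scored]

candidates = rank_candidates({"laugh", "smile"})
best = candidates[0] if candidates else None  # highest matching degree
shortlist = candidates[:3]                    # or offer several to the user
print(best["content"])  # laugh.gif
```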
Referring to Fig. 2, a schematic flowchart of another embodiment of an input method of the present application is shown. The method of this embodiment likewise applies to electronic devices that have an image acquisition unit or can be connected to an image acquisition device, such as desktop computers and mobile phones. The method of this embodiment may include:
Step 201: Capture a first image that contains at least facial information of the user.
In this embodiment of the present application, the captured first image contains at least facial information of the user. For example, the first image may be an image of a human face, or an image containing a face together with part of the body's limbs.
Of course, in this embodiment the first image may likewise be captured when an input operation is detected, or the capture of a first image containing at least a facial image may be triggered when a click or touch on a preset description-item input button is detected.
Step 202: Identify a facial expression feature of the user according to the facial information in the first image.
After the first image is obtained, face detection is performed on it to obtain the facial information in the first image. The facial expression feature of the user is then determined according to that facial information.
Here, the facial expression feature is a feature characterizing the user's facial expression or the state of facial organs, for example, the shape of the mouth, whether the eyebrows are raised or lowered, or whether the eyes are open or closed.
The facial expression feature may also reveal feature or category information about the user's current mood. For example, facial expression features may include emotional characteristics such as happiness, sadness, surprise, fear, pain, anger, disgust, and contempt. Of course, each emotional characteristic may in turn include several different states; for example, happiness may include smiling, laughing, and smirking, while sadness may include feeling wronged, crying, wailing, and so on.
Step 203: Obtain, from the preset description items, a target description item that matches the facial expression feature, and input the target description item.
After the facial expression feature of the user has been identified, a target description item that matches the expression of the user in the first image can be matched from the preset description items, and then input.
In this embodiment, the description item may be any of those in the above embodiments. Optionally, the target description item in this embodiment is an expression description item, such as a picture containing an expression, a character, or a symbol expression.
It should be understood that, in this embodiment, any existing facial expression recognition method may be used to identify the facial feature information of the user from the facial information in the first image; no limitation is imposed here.
Optionally, after face detection is performed on the first image and the facial information in it is determined, feature extraction can be performed on the facial organs according to the facial information in the first image, to obtain feature information of the facial organs. When a person is in different states, the state of each facial organ changes correspondingly, and these facial-organ changes include changes in the shape of the face. Each facial organ can take various states. For example, the state of a person's eyebrows may include the eyebrows dropping, the inner ends of the eyebrows lifting, the outer ends of the eyebrows lifting, and so on. As another example, the state of a person's lips can be reflected in many aspects, which may specifically include: lips pressed together, upper lip raised, lip corners raised, lip corners pulled down, lower lip tightened, lips parted, lip biting, and so on.
According to the feature information of the facial organs, the facial expression feature of the user can be determined, and then a description item matching the feature information of the facial organs can be determined. Optionally, expression category recognition can be performed according to the feature information of the facial organs to determine the facial expression category of the user. Correspondingly, the target description item matching that facial expression category is obtained according to the expression meaning expressed by each preset description item.
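The mapping from facial-organ states to an expression category can be sketched as a few rules. The organ feature names and the rules themselves are invented for illustration; a real recognizer would be trained, as the text notes any existing method may be used.

```python
def classify_expression(organs):
    # Map facial-organ feature states to a coarse expression category.
    # Keys and rules are illustrative assumptions, not the patent's.
    if organs.get("mouth_corners") == "raised" and organs.get("lips") == "parted":
        return "laugh" if organs.get("teeth_visible") else "smile"
    if organs.get("mouth_corners") == "pulled_down":
        return "sad"
    if organs.get("inner_brows") == "raised" and organs.get("lips") == "parted":
        return "surprised"
    return "neutral"

laughing = {"mouth_corners": "raised", "lips": "parted",
            "teeth_visible": True, "eyes": "narrowed"}
print(classify_expression(laughing))  # laugh
```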
For ease of understanding, take the facial information of a user in a captured first image as shown in Fig. 3. As can be seen from Fig. 3, when the first image was captured the user was laughing.
After the first image is obtained, it can be determined from the facial information of the user in the first image that the corners of the mouth are pulled back, raised, and widened, the lips are parted, the teeth (not drawn in the figure) are exposed, the eyes are narrowed, and the eyelids are slit-like. According to the feature information of these facial organs, the facial expression category of the user is determined, and it can then be concluded that the user is in a laughing state.
Correspondingly, the target description item determined from the preset description items should also possess features that can embody the laughing state. For example, Fig. 4 shows the expression picture determined by the system for the facial expression shown in Fig. 3. The expression in this picture is likewise a laugh, and the picture also contains features such as raised mouth corners, parted lips, and exposed teeth. Of course, Fig. 4 is only one possible target description item; for the image shown in Fig. 3, the determined target description item could also be a symbol expression, such as "O(∩_∩)O haha~" or "(*^◎^*)", or another animated picture that can embody laughing.
Referring to Fig. 5, a schematic flowchart of yet another embodiment of an input method of the present application is shown. The method of this embodiment likewise applies to electronic devices that have an image acquisition unit or can be connected to an image acquisition device, such as desktop computers and mobile phones. The method of this embodiment may include:
Step 501: Capture a first image that contains at least a body posture of the user.
In this embodiment, the first image contains at least a body posture of the user, such as the state of the user's torso, or the state of part of the user's limbs or joints.
For example, the first image may contain image information of a gesture made by the user's hand, or it may show the degree to which the user's body is bent.
Of course, in this embodiment the capture of the first image may be triggered when an input operation by the user is detected, or when the user is detected clicking a preset description-item input button; no limitation is imposed here.
Step 502: Determine a limb action feature of the user according to the first image.
Using image analysis techniques, the image of the human limb parts contained in the first image is analyzed, and the limb action feature of the user is derived according to the shapes of the human joints, the connection relationships between the joints, and the current positional relationships of the limb parts. Any existing method for analyzing images of human actions may be used to analyze the first image and determine the limb action feature of the user; no limitation is imposed here.
Optionally, in this step, the state features of the user's limbs can be analyzed according to the body posture of the user contained in the first image, and the category of the action the user is currently performing can then be determined according to those limb state features.
For example, the first image may contain a gesture image of the user; if the user's two palms are close to or touching each other, the user's current action category can be determined to be applauding. As another example, the first image may contain action information of the user's body and arms; by analyzing the first image it is determined that the body is rotating and that the angle between the arms and the torso exceeds a predetermined angle, from which it can be concluded that the user is currently spinning or dancing.
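The two examples above can be written as toy classification rules. The measurement names and thresholds are assumptions made for the sketch, not values given by the patent.

```python
def classify_action(pose):
    # Toy rules mirroring the two examples in the text: near palms mean
    # applauding; a rotating torso plus a wide arm angle means dancing.
    if pose.get("palm_distance_cm", float("inf")) < 5:
        return "applaud"
    if pose.get("torso_rotating") and pose.get("arm_torso_angle_deg", 0) > 45:
        return "dance"
    return "unknown"

print(classify_action({"palm_distance_cm": 2}))                              # applaud
print(classify_action({"torso_rotating": True, "arm_torso_angle_deg": 70}))  # dance
```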
Step 503: Obtain, from the preset description items, a target description item that matches the limb action feature, and input the target description item.
The target description item determined according to the limb action feature may be a picture showing a body posture, or a picture showing a human action. For example, when the user's body is determined to be in a spinning posture, an action picture with a body-spinning feature can be taken from the preset description items as the target description item.
In this embodiment, the target description item may also be a character or a character expression. For example, if the first image contains a gesture of the user pointing to the left, the target description item can be determined to be the character "←".
The embodiments of Fig. 2 and Fig. 5 are described using, respectively, the example of the first image containing facial information and the example of it containing a body posture. In practice, in the embodiments of the present application, the first image may contain both the facial information and the body posture information of the user. When the first image contains both, the face and the body posture in the image can be analyzed separately to obtain the facial expression feature and the limb action feature of the user, and the target description item determined by combining the two. For example, if the facial expression category is puzzlement and the limb action is scratching the head, the determined target description item may be one that contains a puzzled expression together with head-scratching.
Of course, when the first image contains both the facial information and the body posture information of the user, priorities can also be set for the facial expression feature and the limb action feature. For example, the facial expression feature can be set as the priority for matching the target; if no target description item corresponding to the facial expression is matched, matching can then be performed according to the limb action feature.
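The priority scheme just described, facial expression first with the limb action as fallback, can be sketched as follows. The item library and feature names are illustrative placeholders.

```python
def match_with_priority(face_feature, limb_feature, items):
    # Try the facial-expression feature first; fall back to the limb
    # action only when no facial match exists.
    for feature in (face_feature, limb_feature):
        if feature is None:
            continue
        for item in items:
            if feature in item["features"]:
                return item
    return None

ITEMS = [
    {"content": "doubt_scratch.gif", "features": {"doubt", "scratch_head"}},
    {"content": "dance.gif", "features": {"dance"}},
]
print(match_with_priority("doubt", "dance", ITEMS)["content"])  # doubt_scratch.gif
print(match_with_priority("angry", "dance", ITEMS)["content"])  # dance.gif
```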
It should be noted that, in any of the above embodiments, the captured first image may be a single image, or several first images may be captured continuously, with the one or more captured first images subsequently analyzed to determine the feature information of the user.
It should be understood that, in any of the above embodiments, matching the target description item from the preset description items may be done by analyzing the information contained in the preset description items against the analyzed feature information of the user, to determine the target description item that matches the user's feature information. To further reduce mismatches and improve matching speed, training analysis can be performed in advance on a large number of images containing information about parts of users' bodies, such as users' facial expressions and body postures; according to the analysis results, the description items corresponding to different user feature information are determined, and a correspondence between user feature information and description items is established. In this way, after the feature information of the user contained in the first image has been analyzed from the captured first image, the target description item matching the feature information can be obtained from the preset description items according to the preset correspondence between feature information and description items.
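The pre-established correspondence reduces matching at input time to a table lookup. The table below is a hand-written stand-in; a real one would be produced by the offline training analysis the text describes.

```python
# Stand-in for the offline-built correspondence between user feature
# information and description items; contents are invented examples.
CORRESPONDENCE = {
    "laugh": "laugh.gif",
    "doubt": "doubt.png",
    "point_left": "←",
}

def lookup_target_item(feature_info):
    # Direct lookup replaces per-item analysis at input time, which is
    # what makes this variant faster and less error-prone.
    return CORRESPONDENCE.get(feature_info)

print(lookup_target_item("laugh"))  # laugh.gif
```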
Corresponding to the input method of the present invention, the present invention also provides an electronic device. Referring to Fig. 6, a schematic structural diagram of one embodiment of an electronic device of the present invention is shown. The electronic device of this embodiment may be a mobile phone, a laptop, or a desktop computer, and may include: an image capture unit 601, an image analysis unit 602, and an input unit 603.
The image capture unit 601 is configured to capture a first image, where the first image contains at least information about a part of a user's body.
The image analysis unit 602 is configured to determine feature information of the user according to the first image.
The input unit 603 is configured to obtain, from preset description items, a target description item that matches the feature information, and to input the target description item.
Here, the preset description items available to the input unit, and the finally determined description item, may include one or more of the following: expression pictures, action pictures, characters, and symbol expressions composed of characters.
Optionally, the image capture unit may include:
a first image capture unit, configured to capture a first image that contains at least facial information of the user.
Correspondingly, the image analysis unit may include:
a first image analysis unit, configured to identify a facial expression feature of the user according to the facial information in the first image.
The input unit may include:
a first input unit, configured to obtain, from the preset description items, a target description item that matches the facial expression feature, and to input the target description item.
Further, the first image analysis unit may include:
a feature extraction unit, configured to perform feature extraction on the facial organs according to the facial information in the first image, to obtain feature information of the facial organs; and
an expression classification unit, configured to perform expression category recognition according to the feature information of the facial organs, to determine a facial expression category of the user.
On the basis of any of the above electronic device embodiments, the image capture unit may include:
a second image capture unit, configured to capture a first image that contains at least a body posture of the user.
Correspondingly, the image analysis unit includes:
a second image analysis unit, configured to determine a limb action feature of the user according to the first image.
The input unit includes:
a second input unit, configured to obtain, from the preset description items, a target description item that matches the limb action feature, and to input the target description item.
Optionally, in any of the above embodiments, the image capture unit may include:
an image capture subunit, configured to capture the first image when a click or touch on a preset description-item input button is detected.
Optionally, the input unit may include:
an input subunit, configured to obtain, from preset description items, the target description item that matches the characteristic information according to a preset correspondence between characteristic information and description items, and to input the target description item.
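The preset correspondence between characteristic information and description items described above can be modelled as a simple lookup table. A hedged sketch, assuming a dictionary keyed by expression category; the table contents are illustrative placeholders, not from the patent:

```python
# Sketch of a preset correspondence between characteristic information
# (here, an expression category) and description items. The table
# contents are illustrative placeholders.

PRESET_DESCRIPTION_ITEMS = {
    "happy": ":-)",    # emoticon composed of characters
    "sad": ":-(",
    "neutral": ":-|",
}

def get_target_description_item(characteristic_info, table=PRESET_DESCRIPTION_ITEMS):
    """Return the description item matching the characteristic information,
    or None when no preset item matches."""
    return table.get(characteristic_info)

print(get_target_description_item("happy"))  # -> :-)
```

A real implementation could just as well map categories to expression pictures or action pictures; the lookup structure is the same, with the matched item then inserted into the input field.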
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are identical or similar among the embodiments, reference may be made to one another. Since the electronic device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
- 1. An input method, characterized by comprising: capturing a first image, wherein the first image includes at least information on a part of the body of a user; determining characteristic information of the user according to the first image; and obtaining, from preset description items, a target description item that matches the characteristic information, and inputting the target description item; wherein capturing the first image comprises: capturing a first image that includes at least the facial information of the user and the body posture of the user; correspondingly, determining the characteristic information of the user according to the first image comprises: identifying the facial expression feature of the user according to the facial information in the first image, and determining the limb action feature of the user according to the first image; and obtaining, from the preset description items, the description item that matches the characteristic information and inputting the description item comprises: obtaining, from the preset description items, the target description item that matches the facial expression feature and the target description item that matches the limb action feature, and inputting both the obtained target description item that matches the facial expression feature and the obtained target description item that matches the limb action feature.
- 2. The method according to claim 1, characterized in that identifying the facial expression feature of the user according to the facial information in the first image comprises: performing feature extraction on the facial organs according to the facial information in the first image, to obtain characteristic information of the facial organs; and performing expression classification and recognition according to the characteristic information of the facial organs, to determine the facial expression category of the user.
- 3. The method according to claim 1, characterized in that capturing the first image comprises: capturing the first image when a click or touch on a preset description-item input button is detected.
- 4. The method according to any one of claims 1-3, characterized in that the description item includes one or more of the following: expression pictures, action pictures, characters, and emoticons composed of characters.
- 5. The method according to claim 1, characterized in that obtaining, from the preset description items, the target description item that matches the characteristic information and inputting the target description item comprises: obtaining, from the preset description items, the target description item that matches the characteristic information according to a preset correspondence between characteristic information and description items, and inputting the target description item.
- 6. An electronic device, characterized by comprising: an image capture unit, configured to capture a first image, wherein the first image includes at least information on a part of the body of a user; an image analysis unit, configured to determine characteristic information of the user according to the first image; and an input unit, configured to obtain, from preset description items, a target description item that matches the characteristic information, and to input the target description item; wherein the image capture unit is specifically configured to capture a first image that includes at least the facial information of the user and the body posture of the user; correspondingly, the image analysis unit includes: a first image analysis unit, configured to identify the facial expression feature of the user according to the facial information in the first image; and a second image analysis unit, configured to determine the limb action feature of the user according to the first image; and the input unit is specifically configured to obtain, from the preset description items, the target description item that matches the facial expression feature and the target description item that matches the limb action feature, and to input both the obtained target description item that matches the facial expression feature and the obtained target description item that matches the limb action feature.
- 7. The electronic device according to claim 6, characterized in that the first image analysis unit includes: a feature extraction unit, configured to perform feature extraction on the facial organs according to the facial information in the first image, to obtain characteristic information of the facial organs; and an expression classification unit, configured to perform expression classification and recognition according to the characteristic information of the facial organs, to determine the facial expression category of the user.
- 8. The electronic device according to claim 6, characterized in that the image capture unit includes: an image capture subunit, configured to capture the first image when a click or touch on a preset description-item input button is detected.
- 9. The electronic device according to any one of claims 6-8, characterized in that the description item includes one or more of the following: expression pictures, action pictures, characters, and emoticons composed of characters.
- 10. The electronic device according to claim 6, characterized in that the input unit includes: an input subunit, configured to obtain, from the preset description items, the target description item that matches the characteristic information according to a preset correspondence between characteristic information and description items, and to input the target description item.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310381900.0A CN104423547B (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
CN201810289019.0A CN108762480A (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310381900.0A CN104423547B (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810289019.0A Division CN108762480A (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104423547A CN104423547A (en) | 2015-03-18 |
CN104423547B true CN104423547B (en) | 2018-04-27 |
Family
ID=52972837
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310381900.0A Active CN104423547B (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
CN201810289019.0A Pending CN108762480A (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810289019.0A Pending CN108762480A (en) | 2013-08-28 | 2013-08-28 | A kind of input method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN104423547B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105022480A (en) * | 2015-07-02 | 2015-11-04 | 深圳市金立通信设备有限公司 | Input method and terminal |
CN105262676A (en) * | 2015-10-28 | 2016-01-20 | 广东欧珀移动通信有限公司 | Method and apparatus for transmitting message in instant messaging |
CN106293131A (en) * | 2016-08-16 | 2017-01-04 | 广东小天才科技有限公司 | expression input method and device |
CN110222210A (en) * | 2019-05-13 | 2019-09-10 | 深圳传音控股股份有限公司 | User's smart machine and its mood icon processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102193620A (en) * | 2010-03-02 | 2011-09-21 | 三星电子(中国)研发中心 | Input method based on facial expression recognition |
CN102637071A (en) * | 2011-02-09 | 2012-08-15 | 英华达(上海)电子有限公司 | Multimedia input method applied to multimedia input device |
CN102890777A (en) * | 2011-07-21 | 2013-01-23 | 爱国者电子科技(天津)有限公司 | Computer system capable of identifying facial expressions |
CN102955569A (en) * | 2012-10-18 | 2013-03-06 | 北京天宇朗通通信设备股份有限公司 | Method and device for text input |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1328908C (en) * | 2004-11-15 | 2007-07-25 | 北京中星微电子有限公司 | A video communication method |
US8584031B2 (en) * | 2008-11-19 | 2013-11-12 | Apple Inc. | Portable touch screen device, method, and graphical user interface for using emoji characters |
CN102890776B (en) * | 2011-07-21 | 2017-08-04 | 爱国者电子科技有限公司 | The method that expression figure explanation is transferred by facial expression |
CN103297742A (en) * | 2012-02-27 | 2013-09-11 | 联想(北京)有限公司 | Data processing method, microprocessor, communication terminal and server |
-
2013
- 2013-08-28 CN CN201310381900.0A patent/CN104423547B/en active Active
- 2013-08-28 CN CN201810289019.0A patent/CN108762480A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102193620A (en) * | 2010-03-02 | 2011-09-21 | 三星电子(中国)研发中心 | Input method based on facial expression recognition |
CN102637071A (en) * | 2011-02-09 | 2012-08-15 | 英华达(上海)电子有限公司 | Multimedia input method applied to multimedia input device |
CN102890777A (en) * | 2011-07-21 | 2013-01-23 | 爱国者电子科技(天津)有限公司 | Computer system capable of identifying facial expressions |
CN102955569A (en) * | 2012-10-18 | 2013-03-06 | 北京天宇朗通通信设备股份有限公司 | Method and device for text input |
Also Published As
Publication number | Publication date |
---|---|
CN104423547A (en) | 2015-03-18 |
CN108762480A (en) | 2018-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103049761B (en) | Sign Language Recognition Method based on sign language glove and system | |
Wexelblat | An approach to natural gesture in virtual environments | |
Turk et al. | Perceptual interfaces | |
Kapuscinski et al. | Recognition of hand gestures observed by depth cameras | |
Aslan et al. | Mid-air authentication gestures: An exploration of authentication based on palm and finger motions | |
CN104423547B (en) | A kind of input method and electronic equipment | |
KR102148151B1 (en) | Intelligent chat based on digital communication network | |
Kour et al. | Sign language recognition using image processing | |
Basori | Emotion walking for humanoid avatars using brain signals | |
Chowdhury et al. | Gesture recognition based virtual mouse and keyboard | |
CN108829239A (en) | Control method, device and the terminal of terminal | |
Vivek Veeriah et al. | Robust hand gesture recognition algorithm for simple mouse control | |
Jiang et al. | independent hand gesture recognition with Kinect | |
CN107450717A (en) | A kind of information processing method and Wearable | |
Ozer et al. | Vision-based single-stroke character recognition for wearable computing | |
TW202016881A (en) | Program, information processing device, quantification method, and information processing system | |
CN110149427A (en) | Camera control method and Related product | |
KR20200081529A (en) | HMD based User Interface Method and Device for Social Acceptability | |
CN108628454A (en) | Visual interactive method and system based on visual human | |
Dhamanskar et al. | Human computer interaction using hand gestures and voice | |
Bevacqua et al. | Multimodal sensing, interpretation and copying of movements by a virtual agent | |
Younas et al. | Air-Writing Segmentation using a single IMU-based system | |
Canedo et al. | Mood estimation based on facial expressions and postures | |
Hassemer | Towards a theory of gesture form analysis | |
Bérci et al. | Vision based human-machine interface via hand gestures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||