CN107249100A - Photographing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN107249100A CN107249100A CN201710527676.XA CN201710527676A CN107249100A CN 107249100 A CN107249100 A CN 107249100A CN 201710527676 A CN201710527676 A CN 201710527676A CN 107249100 A CN107249100 A CN 107249100A
- Authority
- CN
- China
- Prior art keywords
- point
- face image
- expression features
- each face
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The embodiment of the invention discloses a photographing method, a photographing device, electronic equipment and a storage medium. Wherein, the method comprises the following steps: carrying out face detection by using a camera to obtain at least one face image; identifying the expression characteristics of each facial image in the at least one facial image; and when the expression characteristic of each face image is detected to be a photographing characteristic, performing photographing operation. By adopting the embodiment of the invention, the shooting effect is improved.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to a photographing method and apparatus, an electronic device and a storage medium.
Background
With the popularization of electronic devices such as smartphones, the various functions and applications of electronic devices have become increasingly mature. For example, electronic devices initially provided only a basic camera function, but now offer face recognition, beautification and other functions, greatly enriching people's entertainment and leisure.
However, when a photo is taken, the shot is mainly triggered by tapping a virtual camera button on the screen or pressing an external physical button of the terminal, which distracts the photographer and makes it impossible to capture a photo with a good effect.
Summary of the invention
Embodiments of the present invention provide a photographing method and apparatus, an electronic device and a storage medium, which can solve the problem that a terminal cannot capture photos with a good effect, and help improve the photographing effect.
In a first aspect, an embodiment of the invention provides a photographing method, including:
carrying out face detection using a camera to obtain at least one face image;
identifying the expression feature of each face image in the at least one face image;
when the expression feature of each face image is a smile feature, performing a photographing operation.
Optionally, identifying the expression feature of each face image in the at least one face image includes:
extracting a feature point set of each face image in the at least one face image;
extracting target feature points from the feature point set of each face image;
determining the expression feature according to the target feature points of each face image.
Optionally, determining the expression feature according to the target feature points of each face image includes:
obtaining position information of lip feature points in the target feature points of each face image;
determining the expression feature of each face image according to the position information of the lip feature points.
Optionally, determining the expression feature according to the target feature points of each face image includes:
obtaining position information of eye feature points in the target feature points of each face image;
determining the expression feature of each face image according to the position information of the eye feature points.
Optionally, determining the expression feature according to the target feature points of each face image includes:
obtaining position information of nose feature points in the target feature points of each face image;
determining the expression feature of each face image according to the position information of the nose feature points.
Optionally, the photographing feature is a feature for which the difference value between a preset expression feature and the expression feature of each face image is less than a preset threshold.
In a second aspect, an embodiment of the invention provides a photographing apparatus, including:
an acquisition module, configured to carry out face detection using a camera to obtain at least one face image;
an identification module, configured to identify the expression feature of each face image in the at least one face image;
a photographing module, configured to perform a photographing operation when the expression feature of each face image is a smile feature.
Optionally, the identification module is specifically configured to extract a feature point set of each face image in the at least one face image, extract target feature points from the feature point set of each face image, and determine the expression feature according to the target feature points of each face image.
Optionally, the identification module includes:
a first obtaining unit, configured to obtain position information of lip feature points in the target feature points of each face image;
a first determining unit, configured to determine the expression feature of each face image according to the position information of the lip feature points.
Optionally, the identification module includes:
a second obtaining unit, configured to obtain position information of eye feature points in the target feature points of each face image;
a second determining unit, configured to determine the expression feature of each face image according to the position information of the eye feature points.
Optionally, the identification module includes:
a third obtaining unit, configured to obtain position information of nose feature points in the target feature points of each face image;
a third determining unit, configured to determine the expression feature of each face image according to the position information of the nose feature points.
Optionally, the photographing feature is a feature for which the difference value between a preset expression feature and the expression feature of each face image is less than a preset threshold.
In a third aspect, an embodiment of the invention provides an electronic device, including a processor, a memory, a communication interface and a communication bus; the processor, the memory and the communication interface are connected by the communication bus and communicate with each other; the memory stores executable program code; the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform a photographing method, the method including:
carrying out face detection using a camera to obtain at least one face image;
identifying the expression feature of each face image in the at least one face image;
when the expression feature of each face image is a smile feature, performing a photographing operation.
In a fourth aspect, an embodiment of the invention provides a computer program product, wherein, when instructions in the computer program product are executed by a processor, the photographing method described in the first aspect is performed.
In a fifth aspect, an embodiment of the invention provides a storage medium for storing an application program, the application program being configured to perform, at runtime, the photographing method described in the first aspect.
In the embodiments of the present invention, the electronic device can carry out face detection using a camera to obtain at least one face image, identify the expression feature of each face image in the at least one face image, and perform a photographing operation when the expression feature of each face image is detected to be the photographing feature. Compared with the prior art, in which the electronic device takes photos by tapping the screen or pressing a button, the embodiments of the invention can perform the photographing operation when the expression feature of each detected face image is the photographing feature, making photographing automatic and intelligent. Moreover, by detecting the expression features, the captured photo can achieve the expected effect. In addition, when multiple people are photographed and the shot is taken only when the expression feature of every detected face image is the photographing feature, the expressions in the captured photo are more uniform, the photographing effect is improved, and the probability of repeated shots caused by a few people's expressions being out of place is effectively reduced.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a schematic flowchart of a photographing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a photographing method provided by another embodiment of the present invention;
Fig. 3 is a face image with marked feature points provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a photographing method provided by yet another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a photographing apparatus provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description
The embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to Fig. 1, which is a schematic flowchart of a photographing method provided by an embodiment of the present invention. The method can be applied to various electronic devices with a photographing function, including but not limited to smartphones, tablet computers and smart cameras. The method may include the following steps:
S101: The electronic device receives a user's operation of turning on the camera.
In the embodiment of the invention, the turn-on operation includes but is not limited to a tap operation on a camera icon in the user interface of the electronic device, or a press operation on a physical button of the electronic device used to turn on the camera.
It should be noted that after the user taps the camera icon or presses the corresponding physical button, the electronic device can receive the user's turn-on operation for the camera and enter the photographing interface, so as to perform step S102.
S102: The electronic device receives a user's operation of turning on the smile mode.
It should be noted that after the electronic device enters the photographing interface, a photographing mode can be set; for example, a beautification mode or an expression mode can be set. The expression mode can include a smile mode, and the mode can be presented in the form of an icon or in the form of an option.
In the embodiment of the invention, the user's operation of turning on the smile mode in step S102 includes but is not limited to a tap operation on the icon of the smile mode, or a selection operation on the smile mode.
S103: The electronic device collects an image through the camera and carries out face image detection on the collected image.
S104: The electronic device judges whether at least one face image is detected. If not, step S103 is performed; if so, step S105 is performed.
Generally, when the electronic device takes a photo using the camera, the image to be photographed can be collected by the camera. For example, in step S103, the electronic device can call the camera to collect an image after the smile mode is selected.
Optionally, when the electronic device collects the current image through the camera, face detection can also be carried out using the camera. The face detection method includes but is not limited to feature-based methods, statistics-based methods and template matching.
In the embodiment of the invention, the number of cameras is not limited, and includes but is not limited to a single camera or dual cameras.
It should be noted that in step S104, the electronic device can decide whether a face image is detected. If a face image is detected, step S105 can be performed; if no face image is detected, step S103 is performed to re-detect the face image.
Optionally, if no face image is detected, a prompt message can also be output to prompt the user to adjust the photographing posture or photographing expression.
S105: The electronic device preprocesses the at least one face image and extracts features.
Optionally, the electronic device can perform preprocessing such as denoising and filtering on the detected image, and can extract features from the obtained face image. The electronic device can identify the specific expression feature based on the extracted features.
In the embodiment of the invention, the extracted feature can be a global feature, such as a facial feature, or one or more of the following local features: an eye feature, a lip feature, a nose feature, an eyebrow feature, etc.
Optionally, identifying the expression feature based on the extracted features includes but is not limited to: extracting a feature point set of each face image in the at least one face image; extracting target feature points from the feature point set of each face image; and determining the expression feature according to the target feature points of each face image.
In the embodiment of the invention, the feature point set can be the global feature points of each face image, such as facial feature points, or the local feature points of each face image, such as lip feature points, eye feature points, nose feature points, eyebrow feature points or feature points of other facial regions. The target feature points can be multiple feature points used to determine the expression feature, and include but are not limited to the lip feature points, eye feature points, nose feature points, eyebrow feature points or other facial region feature points of the face image.
The extracted feature points can be obtained by feature extraction from the facial regions corresponding to the respective parts of the face image. The feature point set can include feature points at corresponding positions and in corresponding numbers extracted according to actual requirements, including but not limited to 60, 64, 68 or 76 points.
S106: The electronic device detects whether each face image in the at least one face image matches the smile feature. If so, step S107 is performed; if not, step S105 is performed.
It should be noted that in step S105, the electronic device can extract features to identify the expression features, which may specifically include: extracting a feature point set of each face image in the at least one face image; extracting target feature points from the feature point set of each face image; and determining the expression feature according to the target feature points of each face image.
It should be noted that in step S106, detecting whether each face image in the at least one face image matches the smile feature includes but is not limited to at least one of the following: determining the expression feature according to the distance value between target feature points; determining the expression feature according to the change value of the distance between target feature points; determining the expression feature according to the size of the angle formed by the lines between target feature points; or determining the expression feature according to the change value of the angle formed by the target feature points.
For example, if the target feature points are lip feature points, the lip feature points can at least include: a feature point at the centre of the lower edge of the upper lip, a feature point at the left lip corner of the upper lip, and a feature point at the midpoint of the lower edge of the upper lip between the left lip corner and the centre point. If the change value of the distance between the feature point at the centre of the lower edge of the upper lip and the feature point at that midpoint is greater than a preset distance change threshold, for example more than 3 mm, the expression feature can be determined to be a smile feature.
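The distance-change criterion above can be sketched as follows. The 3 mm threshold comes from the example; the two-frame comparison and the helper names are our own assumptions, since the patent does not specify how the reference distance is obtained:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two 2-D feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_smile_by_lip_distance(prev_center, prev_mid, cur_center, cur_mid,
                             change_threshold_mm=3.0):
    """Smile if the distance between the upper-lip lower-edge centre point
    and the midpoint toward the left lip corner changes by more than the
    preset threshold (3 mm in the patent's example)."""
    prev_d = euclidean(prev_center, prev_mid)
    cur_d = euclidean(cur_center, cur_mid)
    return abs(cur_d - prev_d) > change_threshold_mm

# Neutral frame: points 5 mm apart; candidate frame: 10 mm apart -> change 5 mm > 3 mm.
print(is_smile_by_lip_distance((0, 0), (5, 0), (0, 0), (10, 0)))  # True
```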
Optionally, if a face image in the detected at least one face image does not match the smile feature, a prompt message can be output to prompt the user to adjust the photographing posture or photographing expression.
S107: The electronic device performs the photographing operation and outputs the corresponding picture.
For example, the image collected by the camera includes face image A, face image B, face image C and face image D. If the detected face images include face image A, face image B and face image C, the electronic device performs the photographing operation and outputs the corresponding picture when detecting that face image A, face image B and face image C all match the smile feature.
It can be seen that in the embodiment shown in Fig. 1, the electronic device can carry out face detection using the camera, detect whether each face image in the obtained at least one face image matches the smile feature, and perform the photographing operation if they all match, making photographing more intelligent and helping improve the photographing effect.
Referring to Fig. 2, which is a schematic flowchart of a photographing method provided by another embodiment of the present invention. The method can be applied to various electronic devices with a photographing function, including but not limited to smartphones, tablet computers and smart cameras. The method includes:
S201: Carry out face detection using a camera to obtain at least one face image.
Generally, when the electronic device takes a photo using the camera, the image to be photographed can be collected by the camera.
For example, on some smartphones, the camera icon can be tapped to enter the photographing interface and display the image collected by the camera. On some smart cameras, a physical button can be pressed to enter the photographing interface and display the image collected by the camera.
Optionally, when the electronic device collects the current image through the camera, face detection can also be carried out using the camera. The face detection method includes but is not limited to feature-based methods, statistics-based methods and template matching.
In the embodiment of the invention, the number of cameras is not limited, and includes but is not limited to a single camera or dual cameras.
It should be noted that in step S201, the electronic device can carry out face detection using the camera and obtain at least one face image.
For example, the image collected by the camera includes face image A, face image B, face image C and face image D. If the face images detected by the camera include face image A, face image B and face image C, the electronic device obtains face image A, face image B and face image C.
S202: Identify the expression feature of each face image in the at least one face image.
It should be noted that in step S201, the electronic device can obtain at least one face image. In step S202, the electronic device can identify the expression feature of each face image in the at least one face image, so that the electronic device can take photos according to different expressions.
In the embodiment of the invention, the expression feature can be used to determine whether to perform the photographing operation. The expression feature can be a smile feature, or a feature of various other moods such as an angry feature, a sad feature or a frightened feature. Features of moods other than smiling can also be handled with a method similar to that provided by the embodiments of the invention, to determine whether to perform the photographing operation.
Optionally, in step S201, after the electronic device obtains the at least one face image, it can extract features from each face image in the at least one face image, so as to identify the expression feature of each face image.
The extracted feature can be a global feature of each face image, such as a facial feature, or a local feature of each face image, such as a lip feature or an eye feature. The extracted feature includes but is not limited to being represented by feature points at corresponding positions.
Optionally, in step S202, identifying the expression feature of each face image in the at least one face image includes but is not limited to the following: the electronic device extracts a feature point set of each face image in the at least one face image; extracts target feature points from the feature point set of each face image; and determines the expression feature according to the target feature points of each face image.
In the embodiment of the invention, the feature point set can be the global feature points of each face image, such as facial feature points, or the local feature points of each face image, such as lip feature points, eye feature points, nose feature points, eyebrow feature points or feature points of other facial regions. The target feature points can be multiple feature points used to determine the expression feature, and include but are not limited to the lip feature points, eye feature points, nose feature points, eyebrow feature points or other facial region feature points of the face image.
It should be noted that the electronic device determines the expression feature according to the extracted target feature points, including but not limited to at least one of the following: determining the expression feature according to the distance value between target feature points; determining the expression feature according to the change value of the distance between target feature points; determining the expression feature according to the size of the angle formed by the lines between target feature points; or determining the expression feature according to the change value of the angle formed by the target feature points.
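The angle-based criteria can be made concrete with a small helper that computes the angle at a vertex feature point formed by the lines to two other feature points. This is a generic sketch: the patent's Fig. 3 example does not survive with its specific point labels, so the points and the 150° threshold below are illustrative assumptions; the smile/sad decision direction follows the Fig. 3 example:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex`, formed by the lines vertex->p1 and vertex->p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))

def classify_by_angle(angle_deg, threshold_deg):
    """Per the Fig. 3 example: below the preset threshold -> smile, above -> sad."""
    return "smile" if angle_deg < threshold_deg else "sad"

# Lip corners raised relative to the lower-lip midpoint give a narrower angle.
a = angle_at((0.0, 0.0), (-10.0, 5.0), (10.0, 5.0))
print(round(a, 1), classify_by_angle(a, threshold_deg=150.0))  # 126.9 smile
```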
For example, as shown in Fig. 3, the feature point set extracted from face image A can be the 60 feature points marked in the figure. The electronic device can extract target feature points from the 60 feature points, and the expression feature can be determined according to the size of the angle formed by the selected feature points. If the angle is less than a preset angle threshold, the expression feature can be determined to be a smile feature; if the angle is greater than the preset angle threshold, the expression feature can be determined to be a sad feature.
For another example, if the extracted target feature points are the feature points at the two ends of a pupil and the feature points at the upper lip shown in Fig. 3, then when the change value of the distance between the feature points at the two ends of the pupil is greater than a preset pupil-distance change threshold, and the angle at the lip is greater than a preset angle threshold, the expression feature can be determined to be a frightened feature.
It should be noted that the feature point set includes but is not limited to the 60 feature points marked in Fig. 3, and can also be 64, 68 or 76 feature points, etc. The extracted feature point set can be adjusted according to the required precision of expression recognition and other demands. The embodiment of the invention does not limit the numbering order of the feature points marked in the feature point set.
Optionally, since the left half and the right half of a face image have a certain similarity and symmetry, when extracting feature points the electronic device can extract the feature point set from the image of one half of the face, and extract the target feature points based on that feature point set.
Optionally, depending on which target feature points are extracted from the feature point set, the electronic device can determine the expression feature comprehensively based on combinations of different extracted target feature points, so as to reduce the decision error of the expression feature. The electronic device can comprehensively judge whether the expression feature is a smile feature based on any two or more of the lip feature points, nose feature points, eye feature points and other feature points among the target feature points.
Further optionally, taking the lip feature points and the eye feature points as an example, when the value of the angle between the eye feature points is greater than a preset angle threshold, and the value of the angle between the lip feature points is less than a preset angle threshold, the expression feature can be determined to be a smile feature.
For example, as shown in Fig. 3, for the selected eye feature points the electronic device can determine the value of the angle they form and check whether it is greater than a preset angle threshold, for example c°; and for the selected lip feature points it can determine the value of the angle they form and check whether it is less than a preset angle threshold, for example b°. When the eye angle is greater than its preset angle threshold and the lip angle is less than its preset angle threshold, the expression feature can be determined to be a smile feature.
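The combined eye-and-lip decision can be sketched as follows. The threshold values stand in for the patent's unspecified b° and c° and are illustrative assumptions only:

```python
def is_smile(eye_angle_deg, lip_angle_deg,
             eye_threshold_deg=30.0, lip_threshold_deg=160.0):
    """Comprehensive judgement from two cues, as in the example:
    the eye angle must exceed its preset threshold (c deg) AND the lip
    angle must be below its preset threshold (b deg).
    The default thresholds are illustrative assumptions."""
    return (eye_angle_deg > eye_threshold_deg
            and lip_angle_deg < lip_threshold_deg)

print(is_smile(eye_angle_deg=35.0, lip_angle_deg=140.0))  # True
print(is_smile(eye_angle_deg=20.0, lip_angle_deg=140.0))  # False
```

Requiring both cues at once is what reduces the decision error compared with using the lip angle alone.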
S203: When the expression feature of each face image is detected to be the photographing feature, perform the photographing operation.
It should be noted that in step S202, the electronic device can identify the expression feature of the face. In step S203, the electronic device can perform the photographing operation when detecting that the expression feature of each face image is the photographing feature. For example, when the photographing feature is a preset smile feature, the photographing operation can be performed.
For example, the image collected by the camera includes face image A, face image B, face image C and face image D. If the face images detected by the camera include face image A, face image B and face image C, the electronic device obtains face image A, face image B and face image C, and performs the photographing operation when identifying that the expression features of face image A, face image B and face image C are all smile features.
In the embodiment of the invention, the photographing feature can be a preset expression feature and/or a feature for which the detected difference value between the expression features of the face images is less than a preset threshold. The preset feature includes but is not limited to a preset smile feature, angry feature, sad feature, happy feature or frightened feature.
Optionally, the difference value can be obtained by matching the expression features of the face images against each other, so as to obtain a matching result for each expression feature. The matching result includes but is not limited to being presented in numerical form, such as a percentage or a number.
As an example it is assumed that special to smile in the expressive features for recognizing facial image A, facial image B, facial image C
When levying, and the difference value of the expressive features between facial image A, facial image B, facial image C is 10%.If predetermined threshold value
For 25%, this 20%<25%, then perform and take pictures.
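The percentage-based check above can be sketched as follows (the per-face match scores and function names are assumptions for illustration):

```python
def pairwise_difference(scores):
    # Largest spread (in percentage points) among per-face match scores.
    return max(scores) - min(scores)

def all_consistent(scores, threshold=25.0):
    # Photographing proceeds only when the expressions differ by less
    # than the preset threshold.
    return pairwise_difference(scores) < threshold

# Smile-match scores for faces A, B, C: the spread is 10% < 25%.
print(all_consistent([90.0, 85.0, 80.0]))  # True
```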
Optionally, the difference value can be obtained by matching the expressive feature of each facial image against a preset expressive feature.
Still optionally, the difference value can be the number of facial images in the at least one facial image whose expressive features do not match the preset expressive feature.
As an example, assume that the expressive features of facial image A and facial image B are recognized as smile features, while the expressive feature of facial image C is not a smile feature; the difference value is then 1. If the preset threshold is 4, then since 1 < 4, the photographing operation is performed.
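The count-based variant can be sketched as follows (the names and the preset threshold are illustrative):

```python
def count_mismatches(expressions, preset="smile"):
    # Number of faces whose expression does not match the preset feature.
    return sum(1 for e in expressions if e != preset)

def should_shoot(expressions, preset="smile", max_mismatches=4):
    # Photograph while fewer than max_mismatches faces deviate.
    return count_mismatches(expressions, preset) < max_mismatches

print(should_shoot(["smile", "smile", "neutral"]))  # True: 1 mismatch < 4
```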
Optionally, when it is determined that the expressive feature of each facial image is a smile feature, prompt information can also be output to prompt the user.
It can be seen that, in the embodiment shown in Fig. 2, the electronic device can perform face detection using the camera and can recognize the expressive features of each facial image in the at least one facial image; when the expressive feature of each facial image is detected to be the photographing feature, the photographing operation is performed, making photographing more intelligent.
Referring to Fig. 4, which is a schematic flowchart of a photographing method provided by another embodiment of the present invention. The method can be applied to various electronic devices, such as intelligent electronic devices with a photographing function, including but not limited to smartphones, tablet computers and smart cameras. The method may include the following steps:
S401: perform face detection using a camera to obtain at least one facial image;
S402: extract a feature point set of each facial image in the at least one facial image.
It should be noted that in step S401, when the electronic device collects the image currently being photographed through the camera, it can also perform face detection using the camera and obtain at least one facial image. Face detection methods include, but are not limited to, feature-based methods, statistics-based methods and template matching.
In this embodiment of the present invention, the number of cameras provided on the electronic device is not limited, and includes but is not limited to a single camera or dual cameras.
It should be noted that in step S402 the electronic device extracts the feature point set of each facial image in the at least one facial image. The extracted feature point set can be obtained according to the features at the corresponding positions of the facial image.
In this embodiment of the present invention, the feature point set can be the global feature points of each facial image, such as feature points of the five sense organs. The feature point set can also be local feature points of each facial image, such as lip feature points, eye feature points, nose feature points, eyebrow feature points or feature points of other facial regions. The target feature points can be a plurality of feature points used for determining the expressive feature.
In this embodiment of the present invention, the expressive feature includes but is not limited to any one or more of the following: a smile feature, an anger feature, a fear feature or a sadness feature.
S403: extract target feature points from the feature point set of each facial image.
It should be noted that in step S403 the electronic device can extract the target feature points from the feature point set of each facial image. After step S403 is executed, steps S404, S406 and S408 can be performed in parallel.
It should be noted that after extracting the target feature points, the electronic device can determine the expressive feature according to the target feature points, including but not limited to any one or more of the following: determining the expressive feature according to the distance value between target feature points, according to the change in distance between target feature points, according to the size of the angle formed by the lines between target feature points, or according to the change in the angle formed by the target feature points.
The target feature points include, but are not limited to, the lip feature points, eye feature points, nose feature points, eyebrow feature points and feature points of other facial regions of the facial image.
S404: obtain the positional information of the lip feature points in the target feature points of each facial image.
It should be noted that when the positional information of the lip feature points is extracted, the positional information of every feature point of the whole lips can be obtained.
Optionally, since the human face is substantially symmetrical, the electronic device can also obtain only the positional information of the feature points of the left half of the lips or of the right half of the lips. The positional information can be represented by setting corresponding coordinates or vectors, which will not be described here in this embodiment of the present invention.
Still optionally, the lip feature points can at least include: a feature point at the center of the lower edge of the upper lip, a feature point at the left corner of the upper lip, and a feature point on the lower edge of the upper lip between the left lip corner and the center of the lower edge of the upper lip, typically the midpoint.
For example, as shown in Fig. 3, the lip feature points can include three feature points, which are respectively the feature point at the left corner of the upper lip, the feature point at the center of the lower edge of the upper lip, and the feature point at the midpoint of the lower edge of the upper lip between the left lip corner and the center of the lower edge of the upper lip.
Still optionally, the lip feature points can also at least include: a feature point at the center of the lower edge of the upper lip, a feature point at the right corner of the upper lip, and a feature point on the lower edge of the upper lip between the right lip corner and the center of the lower edge of the upper lip, typically the midpoint.
For example, as shown in Fig. 3, the lip feature points can include three feature points, which are respectively the feature point at the right corner of the upper lip, the feature point at the center of the lower edge of the upper lip, and the feature point at the midpoint of the lower edge of the upper lip between the right lip corner and the center of the lower edge of the upper lip.
It should be noted that the sequence numbers marked in Fig. 3 are only one way of marking the feature points so as to better extract them; the numbers and order of the sequence numbers can be adjusted according to the feature points actually needed, and this embodiment of the present invention does not limit them.
S405: determine the expressive feature of each facial image according to the positional information of the lip feature points.
Optionally, the electronic device can determine the change in the distance value between the lip feature points according to their positional information, thereby determining the degree of stretching of the lip muscles from the change in distance, and thus determining the expressive feature.
For example, as shown in Fig. 3, if the positional information of the lip feature points is respectively position 1, position 2 and position 3, the electronic device can determine the change in the distance value between the first and third lip feature points according to position 1 and position 3.
Still optionally, when the change in the distance value between the lip feature points exceeds a preset threshold for the change in distance between lip feature points, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the change in the distance value between two lip feature points exceeds a preset distance-change threshold, for example 3 mm, it can be determined that the expressive feature is a smile feature.
As another example, as shown in Fig. 3, if the change in the distance value between another pair of lip feature points exceeds a preset distance-change threshold, for example 2 mm, it can be determined that the expressive feature is a smile feature.
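A sketch of this distance-change rule, assuming 2-D landmark coordinates in millimetres (the coordinates and threshold below are made up for illustration, not taken from the patent):

```python
import math

def dist(p, q):
    # Euclidean distance between two 2-D landmark positions.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def smile_by_lip_stretch(corner_before, corner_after,
                         center_before, center_after, threshold_mm=3.0):
    # Smile if the lip-corner-to-center distance grew by more than the threshold.
    change = dist(corner_after, center_after) - dist(corner_before, center_before)
    return change > threshold_mm

# The lip corner moves outward by ~4 mm relative to the lower-edge center.
print(smile_by_lip_stretch((0, 0), (-4, 0), (10, 0), (10, 0)))  # True
```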
Optionally, the electronic device can determine the value of the included angle between the lip feature points according to their positional information, thereby determining the expressive feature according to the value of the angle.
For example, as shown in Fig. 3, if the positional information of the lip feature points is respectively position 1, position 2 and position 3, the electronic device can determine the value of the angle formed by the three points according to position 1, position 2 and position 3. Here the symbol "∠" denotes an angle in mathematics.
Still optionally, when the value of the angle between the lip feature points exceeds a preset angle threshold, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the value of the angle exceeds some preset angle threshold, for example b°, it can be determined that the expressive feature is a smile feature.
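The included angle between three landmark positions can be computed from their coordinates; this is one plausible implementation (the positions are illustrative), not the patent's own:

```python
import math

def angle_deg(a, vertex, b):
    # Angle in degrees at `vertex`, formed by the rays toward a and b.
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# A right angle at "position 2" between "position 1" and "position 3".
print(round(angle_deg((1, 0), (0, 0), (0, 1))))  # 90
```

The same helper works for the eye and nose angle checks in the later steps, since only the landmark positions change.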
Optionally, the electronic device can determine the change in the angle between the lip feature points according to their positional information, thereby determining the expressive feature according to the change in the angle.
For example, as shown in Fig. 3, if the positional information of the lip feature points is respectively position 1, position 2 and position 3, the electronic device can determine the change in the angle formed by the three points according to position 1, position 2 and position 3.
Still optionally, when the change in the angle between the lip feature points exceeds a preset angle-change threshold, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the change in the angle exceeds some preset angle-change threshold, for example Δa°, it can be determined that the expressive feature is a smile feature.
S406: obtain the positional information of the eye feature points in the target feature points of each facial image.
It should be noted that when the positional information of the eye feature points is extracted, the positional information of every feature point of both eyes can be obtained.
Optionally, since the human face is substantially symmetrical, the electronic device can also obtain only the positional information of the left-eye feature points or of the right-eye feature points. Still optionally, the electronic device can also obtain the positional information of the feature points of the left half of the left eye, or of the right half of the left eye. Correspondingly, the electronic device can also obtain the positional information of the feature points of the left half of the right eye, or of the right half of the right eye. The positional information can be represented by setting corresponding coordinates or vectors, which will not be described here in this embodiment of the present invention.
In this embodiment of the present invention, the left-eye feature points can at least include: a feature point at the middle of the upper eyeline of the left eye and a feature point at the middle of the lower eyeline of the left eye. Optionally, the left-eye feature points can also include eye-corner feature points, such as an inner-corner feature point and/or an outer-corner feature point of the left eye.
For example, as shown in Fig. 3, the left-eye feature points at least include two feature points, which are respectively the feature point at the middle of the upper eyeline of the left eye and the feature point at the middle of the lower eyeline of the left eye. The left-eye feature points can also include the inner-corner feature point and the outer-corner feature point of the left eye.
Still optionally, the eye feature points can also at least include: a feature point at the middle of the upper eyeline of the right eye and a feature point at the middle of the lower eyeline of the right eye. Optionally, the right-eye feature points can also include eye-corner feature points, such as an inner-corner feature point and/or an outer-corner feature point of the right eye.
For example, as shown in Fig. 3, the right-eye feature points at least include two feature points, which are respectively the feature point at the middle of the upper eyeline of the right eye and the feature point at the middle of the lower eyeline of the right eye. The right-eye feature points can also include the inner-corner feature point and the outer-corner feature point of the right eye.
S407: determine the expressive feature of each facial image according to the positional information of the eye feature points.
Optionally, the electronic device can determine the distance value between the eye feature points according to their positional information, thereby determining the expressive feature according to the distance value.
For example, as shown in Fig. 3, if the positional information of two eye feature points is respectively position 4 and position 5, the electronic device can determine the distance value between these two eye feature points according to position 4 and position 5.
As another example, if the positional information of two eye feature points is respectively position 6 and position 7, the electronic device can determine the distance value between these two eye feature points according to position 6 and position 7.
Still optionally, when the distance value between the eye feature points exceeds a preset threshold for the distance between eye feature points, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the distance value between two eye feature points exceeds a preset distance threshold, for example 3 mm, it can be determined that the expressive feature is a smile feature; likewise for the distance value between another pair of eye feature points.
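Following the patent's rule that a sufficiently large eye-feature distance indicates a smile, the check can be sketched like this (the threshold and coordinates are illustrative assumptions):

```python
import math

def eyelid_opening(upper_mid, lower_mid):
    # Distance between the upper- and lower-eyeline midpoint landmarks,
    # in whatever units the landmarks use (mm here).
    return math.hypot(upper_mid[0] - lower_mid[0], upper_mid[1] - lower_mid[1])

def smiling_eye(upper_mid, lower_mid, threshold_mm=3.0):
    # Per the text: distance exceeding the preset threshold counts as a smile.
    return eyelid_opening(upper_mid, lower_mid) > threshold_mm

print(smiling_eye((0.0, 4.0), (0.0, 0.0)))  # True: 4 mm opening > 3 mm
```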
Optionally, the electronic device can determine the value of the angle between the eye feature points according to their positional information, thereby determining the expressive feature according to the value of the angle.
For example, as shown in Fig. 3, if the positional information of the eye feature points is respectively position 4, position 5 and position 8, the electronic device can determine the value of the angle formed by the three points according to position 4, position 8 and position 5.
As another example, as shown in Fig. 3, if the positional information of the eye feature points is respectively position 6, position 9 and position 7, the electronic device can determine the value of the angle formed by the three points according to position 6, position 9 and position 7.
Still optionally, when the value of the angle between the eye feature points exceeds a preset angle threshold, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the value of either of these angles exceeds some preset angle threshold, for example c°, it can be determined that the expressive feature is a smile feature.
S408: obtain the positional information of the nose feature points in the target feature points of each facial image.
It should be noted that when the positional information of the nose feature points is extracted, the positional information of every feature point of the whole nose can be obtained.
Optionally, since the human face is substantially symmetrical, the electronic device can also obtain only the positional information of the feature points of the left half of the nose or of the right half of the nose. The positional information can be represented by setting corresponding coordinates or vectors, which will not be described here in this embodiment of the present invention.
Still optionally, the nose feature points can at least include: a feature point at the lower left edge of the left half of the nose, and a feature point at the midpoint between the lower left edge of the left half of the nose and the lower right edge of the right half of the nose. The nose feature points can also include a feature point at the lower right edge of the right half of the nose.
For example, as shown in Fig. 3, the nose feature points can include two feature points, which are respectively the feature point at the lower left edge of the left half of the nose and the feature point at the midpoint between the lower left edge of the left half of the nose and the lower right edge of the right half of the nose.
As another example, as shown in Fig. 3, the nose feature points can include two feature points, which are respectively the feature point at the lower right edge of the right half of the nose and the feature point at the midpoint between the lower left edge of the left half of the nose and the lower right edge of the right half of the nose.
S409: determine the expressive feature of each facial image according to the positional information of the nose feature points.
Optionally, the electronic device can determine the change in the distance value between the nose feature points according to their positional information, thereby determining the expressive feature according to the change in distance.
For example, as shown in Fig. 3, if the positional information of two nose feature points is respectively position 10 and position 11, the electronic device can determine the change in the distance value between these two nose feature points according to position 10 and position 11.
As another example, if the positional information of two nose feature points is respectively position 12 and position 11, the electronic device can determine the change in the distance value between these two nose feature points according to position 12 and position 11.
As another example, if the positional information of two nose feature points is respectively position 12 and position 10, the electronic device can determine the change in the distance value between these two nose feature points according to position 12 and position 10.
Still optionally, when the change in the distance value between the nose feature points exceeds a preset threshold for the change in distance between nose feature points, it can be determined that the expressive feature is a smile feature.
For example, as shown in Fig. 3, if the change in the distance value between two nose feature points exceeds a preset distance-change threshold, for example Δd mm, it can be determined that the expressive feature is a smile feature. Likewise, if the change in the distance value between another pair of nose feature points exceeds a preset distance-change threshold, for example Δd mm or Δe mm, it can be determined that the expressive feature is a smile feature.
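A sketch of the nose distance-change rule under the same assumptions as the lip example (2-D coordinates in millimetres; the illustrative threshold plays the role of Δd):

```python
import math

def width_change(left_before, right_before, left_after, right_after):
    # Change in the separation of two nose-edge landmarks between two frames.
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return d(left_after, right_after) - d(left_before, right_before)

# The nose base widens by 2 mm as the cheeks rise; with an assumed
# threshold of 1.5 mm this counts as a smile.
print(width_change((0, 0), (20, 0), (-1, 0), (21, 0)) > 1.5)  # True
```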
S410: when the expressive features of each facial image are detected to be the photographing feature, perform a photographing operation.
It should be noted that in steps S402 to S409 the electronic device can recognize the expressive features of the faces. In step S410, the electronic device can perform the photographing operation when it detects that the expressive feature of each facial image is the photographing feature. For example, when the photographing feature is a smile feature, the photographing operation can be performed.
For example, the image collected by the camera includes facial image A, facial image B, facial image C and facial image D, of which the camera detects facial image A, facial image B and facial image C. The electronic device then obtains facial image A, facial image B and facial image C, and performs the photographing operation upon recognizing that the expressive features of facial image A, facial image B and facial image C are all smile features.
In this embodiment of the present invention, the photographing feature may be a feature for which the difference value between the preset expressive feature and/or the detected expressive features of the facial images is less than a preset threshold. The preset feature includes, but is not limited to, a preset smile feature, anger feature, sadness feature, happiness feature or fear feature.
Optionally, the difference value can be obtained by matching the expressive features of the facial images against one another, thereby obtaining a matching result for each expressive feature. The matching result includes, but is not limited to, being presented in numerical form, such as a percentage or a number.
Optionally, the difference value can be obtained by matching the expressive feature of each facial image against a preset expressive feature.
Still optionally, the difference value can be the number of facial images in the at least one facial image whose expressive features do not match the preset expressive feature.
Optionally, when it is determined that the expressive feature of each facial image is a smile feature, prompt information can also be output to prompt the user.
In one embodiment, steps S404, S406 and S408 can also be combined, that is, the expressive feature of each facial image can be determined according to any two or more of the positional information of the lip feature points, the positional information of the eye feature points and the positional information of the nose feature points.
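One plausible way to combine the per-region decisions of steps S404, S406 and S408, as the paragraph above suggests (the voting scheme is an assumption, not specified by the patent):

```python
def classify_expression(lip_smile, eye_smile, nose_smile, min_votes=2):
    # Majority vote over the per-region smile decisions
    # from the lip, eye and nose feature points.
    votes = sum([lip_smile, eye_smile, nose_smile])
    return "smile" if votes >= min_votes else "other"

print(classify_expression(True, True, False))  # smile
```

Requiring agreement between at least two regions makes the decision more robust than any single cue, which is presumably why the patent allows the steps to be combined.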
It can be seen that, in the embodiment shown in Fig. 4, the electronic device can perform face detection using the camera and can recognize the expressive features of each facial image in the at least one facial image; when the expressive feature of each facial image is detected to be the photographing feature, the photographing operation is performed, making photographing more intelligent.
Referring to Fig. 5, which is a schematic structural diagram of a photographing apparatus provided by a first apparatus embodiment of the present invention. Specifically, the apparatus can be applied to various electronic devices, including but not limited to intelligent terminals with a photographing function such as tablet computers, smartphones and smart cameras. Specifically, the apparatus can include:
an acquisition module 501, configured to perform face detection using a camera and obtain at least one facial image;
an identification module 502, configured to recognize the expressive features of each facial image in the at least one facial image;
a photographing module 503, configured to perform a photographing operation when the expressive features of each facial image are detected to be the photographing feature.
In this embodiment of the present invention, the expressive features can be used to determine whether to perform the photographing operation. The expressive feature can be a smile feature, or a feature of various other moods such as an anger feature, a sadness feature or a fear feature. The apparatus provided by this embodiment of the present invention can also perform the corresponding functions for mood features other than the smile feature.
Optionally, the photographing feature is a feature for which the difference value between the preset expressive feature and the expressive feature of each facial image is less than a preset threshold.
Optionally, the identification module 502 is specifically configured to extract the feature point set of each facial image in the at least one facial image; extract target feature points from the feature point set of each facial image; and determine the expressive features according to the target feature points of each facial image.
In this embodiment of the present invention, the feature point set can be the global feature points of each facial image, such as feature points of the five sense organs. The feature point set can also be local feature points of each facial image, such as lip feature points, eye feature points, nose feature points, eyebrow feature points or feature points of other facial regions. The target feature points can be a plurality of feature points used for determining the expressive feature. Moreover, the target feature points include, but are not limited to, the lip feature points, eye feature points, nose feature points, eyebrow feature points and feature points of other facial regions of the facial image.
Optionally, the identification module 502 includes: a first acquisition unit 5021, configured to obtain the positional information of the lip feature points in the target feature points of each facial image; and a first determination unit 5022, configured to determine the expressive feature of each facial image according to the positional information of the lip feature points.
Still optionally, the lip feature points can at least include: a feature point at the center of the lower edge of the upper lip, a feature point at the left corner of the upper lip, and a feature point on the lower edge of the upper lip between the left lip corner and the center of the lower edge of the upper lip, typically the midpoint.
Still optionally, the lip feature points can also at least include: a feature point at the center of the lower edge of the upper lip, a feature point at the right corner of the upper lip, and a feature point on the lower edge of the upper lip between the right lip corner and the center of the lower edge of the upper lip, typically the midpoint.
Optionally, the identification module 502 includes: a second acquisition unit 5023, configured to obtain the positional information of the eye feature points in the target feature points of each facial image; and a second determination unit 5024, configured to determine the expressive feature of each facial image according to the positional information of the eye feature points.
Since the human face is substantially symmetrical, the identification module 502 can also obtain only the positional information of the left-eye feature points or of the right-eye feature points. Still optionally, the identification module 502 can also obtain the positional information of the feature points of the left half of the left eye, or of the right half of the left eye. Correspondingly, the identification module 502 can also obtain the positional information of the feature points of the left half of the right eye, or of the right half of the right eye. The positional information can be represented by setting corresponding coordinates or vectors, which will not be described here in this embodiment of the present invention.
Optionally, the identification module 502 includes: a third acquisition unit 5025, configured to obtain the positional information of the nose feature points in the target feature points of each facial image; and a third determination unit 5026, configured to determine the expressive feature of each facial image according to the positional information of the nose feature points.
Still optionally, the identification module 502 can also obtain only the positional information of the feature points of the left half of the nose or of the right half of the nose. The positional information can be represented by setting corresponding coordinates or vectors, which will not be described here in this embodiment of the present invention. The nose feature points can at least include: a feature point at the lower left edge of the left half of the nose, and a feature point at the midpoint between the lower left edge of the left half of the nose and the lower right edge of the right half of the nose. The nose feature points can also include a feature point at the lower right edge of the right half of the nose.
It can be seen that, in the embodiment shown in Fig. 5, the terminal can perform face detection using the camera and can recognize the expressive features of each facial image in the at least one facial image; when the expressive features of each facial image are detected to be the photographing feature, the photographing operation is performed, making photographing more intelligent.
Referring to Fig. 6, which is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. The electronic device includes, but is not limited to, electronic devices with a photographing function, such as tablet computers, smartphones, and smart cameras. Specifically, the electronic device may include: at least one processor 601, such as a central processing unit (Central Processing Unit, CPU); at least one communication interface 602; at least one communication bus 603; and a memory 604. The communication interface 602 may include a camera, a display screen (Display), and a keyboard (Keyboard); optionally, the communication interface 602 may also include a standard wired interface or wireless interface. The communication bus 603 is used to implement connection and communication between these components. The memory 604 may be a random access memory (Random Access Memory, RAM) or a non-volatile memory (non-volatile memory), for example, at least one disk memory. Optionally, the memory 604 may also be at least one storage device located remotely from the aforementioned processor 601. The memory 604 may store a set of program code, and the processor 601 may, in conjunction with the device described in Fig. 5, call the program code stored in the memory 604 to perform a photographing method, that is, to perform the following operations:
Performing face detection using a camera to obtain at least one face image;
Recognizing the expression features of each face image in the at least one face image;
When the expression features of each face image are detected to be a photographing feature, performing a photographing operation.
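The three operations above can be sketched as a simple capture loop. The sketch below is illustrative only: the helpers `detect_faces`, `recognize_expression`, and `is_photo_feature` are hypothetical stand-ins (the embodiment does not fix a particular detector or recognizer), so they are injected as callables.

```python
# Minimal sketch of the capture loop: (1) detect faces, (2) recognize each
# face's expression feature, (3) trigger the shutter only when every face's
# expression is a "photographing feature". The three helper callables are
# hypothetical placeholders for a real detector/recognizer.

def auto_capture(frame, detect_faces, recognize_expression, is_photo_feature):
    """Return True (i.e. perform the photographing operation) only if at
    least one face is found and every face's expression matches."""
    faces = detect_faces(frame)                              # step 1
    if not faces:
        return False
    expressions = [recognize_expression(f) for f in faces]   # step 2
    return all(is_photo_feature(e) for e in expressions)     # step 3
```

A driver would call `auto_capture` on each preview frame and fire the shutter the first time it returns `True`.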
Optionally, the processor 601 calls the program code in the memory 604 and, when recognizing the expression features of each face image in the at least one face image, performs the following operations:
Extracting a feature point set of each face image in the at least one face image;
Extracting target feature points from the feature point set of each face image;
Determining the expression features according to the target feature points of each face image.
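Extracting target feature points from a full feature point set can be sketched as index selection over a landmark array. The index ranges below assume the common 68-point facial landmark annotation scheme; this is an assumption for illustration, as the patent does not fix a specific landmark scheme.

```python
# Sketch: select the "target feature points" (eyes, nose, lips) out of a
# full landmark set. Index ranges assume a 68-point annotation scheme
# (an illustrative assumption, not specified by the patent).

TARGET_REGIONS = {
    "eyes": range(36, 48),   # both eye contours
    "nose": range(27, 36),   # nose bridge and lower edge
    "lips": range(48, 68),   # outer and inner lip contours
}

def extract_target_points(landmarks):
    """landmarks: list of 68 (x, y) tuples -> dict mapping region name to
    the list of points belonging to that region."""
    return {name: [landmarks[i] for i in idx]
            for name, idx in TARGET_REGIONS.items()}
```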
Optionally, the processor 601 calls the program code in the memory 604 and, when determining the expression features according to the target feature points of each face image, performs the following operations:
Obtaining the position information of the lip feature points in the target feature points of each face image;
Determining the expression features of each face image according to the position information of the lip feature points.
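One simple way to turn lip feature point positions into an expression decision is a geometric smile cue: in image coordinates (y grows downward), both mouth corners sitting above the lip midline suggests a smile. The three chosen points and the pixel margin below are illustrative assumptions, not the patent's exact rule.

```python
# Sketch of a smile decision from lip feature point positions. Points are
# (x, y) in image coordinates, where smaller y means higher in the image.
# The margin of 2 pixels is an illustrative default.

def is_smiling(left_corner, right_corner, lip_center, margin=2.0):
    """True if both mouth corners are at least `margin` pixels higher
    (smaller y) than the lip center point."""
    return (lip_center[1] - left_corner[1] >= margin and
            lip_center[1] - right_corner[1] >= margin)
```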
Optionally, the processor 601 calls the program code in the memory 604 and, when determining the expression features according to the target feature points of each face image, is further configured to perform the following operations:
Obtaining the position information of the eye feature points in the target feature points of each face image;
Determining the expression features of each face image according to the position information of the eye feature points.
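A common way to derive an expression cue from eye feature point positions is the eye aspect ratio (EAR): the vertical eyelid distances divided by the horizontal eye width, which drops toward zero when the eye closes. The 6-point eye layout and the 0.2 threshold below are common defaults assumed here for illustration, not values taken from the patent.

```python
# Sketch of an eye-openness test from eye feature points using the eye
# aspect ratio (EAR). pts is a list of 6 (x, y) points around one eye:
# [left corner, upper 1, upper 2, right corner, lower 2, lower 1].
import math

def eye_aspect_ratio(pts):
    d = math.dist
    # two vertical lid distances over twice the horizontal eye width
    return (d(pts[1], pts[5]) + d(pts[2], pts[4])) / (2.0 * d(pts[0], pts[3]))

def eyes_open(pts, threshold=0.2):
    """True if the eye aspect ratio exceeds the (assumed) threshold."""
    return eye_aspect_ratio(pts) > threshold
```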
Optionally, the processor 601 calls the program code in the memory 604 and, when determining the expression features according to the target feature points of each face image, is further configured to perform the following operations:
Obtaining the position information of the nose feature points in the target feature points of each face image;
Determining the expression features of each face image according to the position information of the nose feature points.
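Using the three nose feature points named earlier (lower-left edge, midpoint, lower-right edge), one possible cue is symmetry: if the detected midpoint drifts far from the geometric middle of the two edge points, the nose region is deformed (for example by a scrunched expression). The tolerance value is an illustrative assumption.

```python
# Sketch of a nose-symmetry check over the three nose feature points.
# Each point is (x, y); `tol` (pixels) is an illustrative default.

def nose_symmetric(left_edge, mid, right_edge, tol=3.0):
    """True if `mid` lies within `tol` of the midpoint of the segment
    joining the lower-left and lower-right nose edge points."""
    cx = (left_edge[0] + right_edge[0]) / 2.0
    cy = (left_edge[1] + right_edge[1]) / 2.0
    return abs(mid[0] - cx) <= tol and abs(mid[1] - cy) <= tol
```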
Optionally, the photographing feature is a preset expression feature for which the difference value from the expression feature of each face image is less than a preset threshold.
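The threshold test just described can be sketched as comparing each face's expression feature vector against the preset expression feature. Euclidean distance is used as the difference value here purely for illustration; the patent does not specify the metric.

```python
# Sketch of the "photographing feature" test: shoot only when every face's
# expression feature vector is within a preset threshold of the preset
# expression feature. Euclidean distance is an assumed choice of metric.
import math

def all_faces_match(face_features, preset_feature, threshold):
    """face_features: list of feature vectors, one per detected face."""
    return all(math.dist(f, preset_feature) < threshold
               for f in face_features)
```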
An embodiment of the present invention provides a computer program product, wherein, when the instructions in the computer program product are executed by a processor, the photographing method shown in the embodiments of Fig. 1, Fig. 2, and Fig. 4 of the present application can be performed.
An embodiment of the present invention further provides a storage medium, the storage medium being used to store an application program, and the application program being used, when run, to perform the photographing method described in the embodiments of the present invention.
It can be seen that, in the embodiment shown in Fig. 6, the terminal can perform face detection using a camera and recognize the expression features of each face image in at least one face image; when the expression features of each face image are detected to be a photographing feature, the photographing operation is performed, making photographing more intelligent and improving the photographing effect.
The photographing method, apparatus, electronic device, and storage medium disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
The terms used in the embodiments of the present application are for the purpose of describing specific embodiments only and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in the embodiments of the present application to describe various connection ports, identification information, and the like, these connection ports and identification information should not be limited by these terms. These terms are only used to distinguish connection ports, identification information, and the like from one another. For example, without departing from the scope of the embodiments of the present application, a first connection port may also be referred to as a second connection port; similarly, a second connection port may also be referred to as a first connection port.
Depending on the context, the word "if" as used herein may be construed as "when", "upon", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be construed as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
Through the above description of the embodiments, it will be clear to those skilled in the art that, for convenience and brevity of description, only the division of the above functional modules is taken as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the modules or units is only a division by logical function, and there may be other ways of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The above are only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be readily conceived by those familiar with the technical field within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A photographing method, characterized in that the method comprises:
performing face detection using a camera to obtain at least one face image;
recognizing expression features of each face image in the at least one face image;
when the expression features of each face image are detected to be a photographing feature, performing a photographing operation.
2. The method according to claim 1, characterized in that the recognizing expression features of each face image in the at least one face image comprises:
extracting a feature point set of each face image in the at least one face image;
extracting target feature points from the feature point set of each face image;
determining the expression features according to the target feature points of each face image.
3. The method according to claim 2, characterized in that the determining the expression features according to the target feature points of each face image comprises:
obtaining position information of lip feature points in the target feature points of each face image;
determining the expression features of each face image according to the position information of the lip feature points.
4. The method according to claim 2, characterized in that the determining the expression features according to the target feature points of each face image comprises:
obtaining position information of eye feature points in the target feature points of each face image;
determining the expression features of each face image according to the position information of the eye feature points.
5. The method according to claim 2, characterized in that the determining the expression features according to the target feature points of each face image comprises:
obtaining position information of nose feature points in the target feature points of each face image;
determining the expression features of each face image according to the position information of the nose feature points.
6. The method according to any one of claims 1 to 5, characterized in that the photographing feature is a preset expression feature for which the difference value from the expression feature of each face image is less than a preset threshold.
7. A photographing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to perform face detection using a camera to obtain at least one face image;
a recognition module, configured to recognize expression features of each face image in the at least one face image;
a photographing module, configured to perform a photographing operation when the expression features of each face image are detected to be a photographing feature.
8. The apparatus according to claim 7, characterized in that the recognition module is specifically configured to: extract a feature point set of each face image in the at least one face image; extract target feature points from the feature point set of each face image; and determine the expression features according to the target feature points of each face image.
9. An electronic device, characterized by comprising: a processor, a memory, a communication interface, and a communication bus; the processor, the memory, and the communication interface are connected through the communication bus and communicate with each other; the memory stores executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, for performing the method according to any one of claims 1 to 6.
10. A storage medium, characterized in that the storage medium is used to store an application program, and the application program is used, when run, to perform the photographing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710527676.XA CN107249100A (en) | 2017-06-30 | 2017-06-30 | Photographing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107249100A true CN107249100A (en) | 2017-10-13 |
Family
ID=60015245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710527676.XA Pending CN107249100A (en) | 2017-06-30 | 2017-06-30 | Photographing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107249100A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1764238A (en) * | 2004-10-18 | 2006-04-26 | 欧姆龙株式会社 | Image pickup unit |
CN102100062A (en) * | 2008-07-17 | 2011-06-15 | 日本电气株式会社 | Imaging device, imaging method and program |
US20110261220A1 (en) * | 2010-04-26 | 2011-10-27 | Kyocera Corporation | Terminal device and control method |
CN104519261A (en) * | 2013-09-27 | 2015-04-15 | 联想(北京)有限公司 | Information processing method and electronic device |
CN106817541A (en) * | 2017-01-10 | 2017-06-09 | 惠州Tcl移动通信有限公司 | A kind of method and system taken pictures based on facial expression control |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108495031A (en) * | 2018-03-22 | 2018-09-04 | 广东小天才科技有限公司 | A kind of photographic method and wearable device based on wearable device |
CN108737729A (en) * | 2018-05-04 | 2018-11-02 | Oppo广东移动通信有限公司 | Automatic photographing method and device |
CN108765264A (en) * | 2018-05-21 | 2018-11-06 | 深圳市梦网科技发展有限公司 | Image U.S. face method, apparatus, equipment and storage medium |
CN108765264B (en) * | 2018-05-21 | 2022-05-20 | 深圳市梦网科技发展有限公司 | Image beautifying method, device, equipment and storage medium |
CN108924410A (en) * | 2018-06-15 | 2018-11-30 | Oppo广东移动通信有限公司 | Camera control method and relevant apparatus |
CN108924410B (en) * | 2018-06-15 | 2021-01-29 | Oppo广东移动通信有限公司 | Photographing control method and related device |
CN108769537A (en) * | 2018-07-25 | 2018-11-06 | 珠海格力电器股份有限公司 | A kind of photographic method, device, terminal and readable storage medium storing program for executing |
CN109151325A (en) * | 2018-10-26 | 2019-01-04 | 昆山亿趣信息技术研究院有限公司 | A kind of processing method and processing unit synthesizing smiling face |
CN112843690A (en) * | 2020-12-31 | 2021-05-28 | 上海米哈游天命科技有限公司 | Shooting method, device, equipment and storage medium |
CN112843731A (en) * | 2020-12-31 | 2021-05-28 | 上海米哈游天命科技有限公司 | Shooting method, device, equipment and storage medium |
CN113657188A (en) * | 2021-07-26 | 2021-11-16 | 浙江大华技术股份有限公司 | Face age identification method, system, electronic device and storage medium |
CN114363516A (en) * | 2021-12-28 | 2022-04-15 | 苏州金螳螂文化发展股份有限公司 | Interactive photographing system based on human face recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107249100A (en) | Photographing method and device, electronic equipment and storage medium | |
US20230394775A1 (en) | Electronic device for generating image including 3d avatar reflecting face motion through 3d avatar corresponding to face and method of operating same | |
CN109815845B (en) | Face recognition method and device and storage medium | |
Fang et al. | Towards computational models of kinship verification | |
CN107450708A (en) | Solve lock control method and Related product | |
CN108197250B (en) | Picture retrieval method, electronic equipment and storage medium | |
CN108229369A (en) | Image capturing method, device, storage medium and electronic equipment | |
CN108182714B (en) | Image processing method and device and storage medium | |
WO2019033573A1 (en) | Facial emotion identification method, apparatus and storage medium | |
CN107437067A (en) | Human face in-vivo detection method and Related product | |
EP3410258B1 (en) | Method for pushing picture, mobile terminal and storage medium | |
CN109117760A (en) | Image processing method, device, electronic equipment and computer-readable medium | |
WO2021003964A1 (en) | Method and apparatus for face shape recognition, electronic device and storage medium | |
CN107451535A (en) | Living iris detection method and Related product | |
CN107483834A (en) | A kind of image processing method, continuous shooting method and device and related media production | |
CN109413326A (en) | Camera control method and Related product | |
CN107527046A (en) | Solve lock control method and Related product | |
CN107622483A (en) | A kind of image combining method and terminal | |
CN110427108A (en) | Photographic method and Related product based on eyeball tracking | |
Furnari et al. | Recognizing personal contexts from egocentric images | |
WO2019033567A1 (en) | Method for capturing eyeball movement, device and storage medium | |
CN108021905A (en) | image processing method, device, terminal device and storage medium | |
CN106471440A (en) | Eye tracking based on efficient forest sensing | |
Chalup et al. | Simulating pareidolia of faces for architectural image analysis | |
CN106406527A (en) | Input method and device based on virtual reality and virtual reality device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20171013 |