CN106713764A - Photographic method and mobile terminal - Google Patents
- Publication number
- CN106713764A CN106713764A CN201710054919.2A CN201710054919A CN106713764A CN 106713764 A CN106713764 A CN 106713764A CN 201710054919 A CN201710054919 A CN 201710054919A CN 106713764 A CN106713764 A CN 106713764A
- Authority
- CN
- China
- Prior art keywords
- mobile terminal
- front camera
- human face
- preview area
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Abstract
An embodiment of the present invention provides a photographic method applied to a mobile terminal having a front camera and a rear camera. The method comprises: receiving a trigger operation from the user of the mobile terminal and turning on the front camera and the rear camera simultaneously; detecting, through the front camera, whether the user's line of sight falls within a first preview area on the display screen of the mobile terminal; if the user's line of sight is detected within the first preview area, recognizing a first facial expression in a first image captured by the front camera; determining whether the first facial expression matches at least one prestored second facial expression; and, if the first facial expression matches a second facial expression, generating a photo from a second image captured by the rear camera according to the shooting mode associated with that second facial expression. An embodiment of the present invention also provides a mobile terminal. With the embodiments of the present invention, no manual trigger is needed during shooting, and the photos generated are sharp.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a photographic method and a mobile terminal.
Background technology
Most mobile terminals in the prior art have a camera function. When taking a photo with a mobile terminal, the user usually has to grip the terminal with one hand to hold it steady, tap the screen to focus, and then complete the shot by tapping the camera icon on the display screen. However, tapping the display screen with a finger easily shakes the mobile terminal, so the generated image is of poor definition; in weak ambient light, the definition of the generated photo is even worse.
How to avoid the shake of the mobile terminal caused by tapping to shoot in existing photographic methods, and the resulting poor definition of the generated photos, has therefore become a technical problem to be solved urgently.
Summary of the invention
An embodiment of the present invention provides a photographic method, to solve the technical problem that photographic methods of the prior art easily shake the mobile terminal, so that the generated image is of poor definition.
An embodiment of the present invention also provides a mobile terminal, to solve the technical problem that mobile terminals of the prior art are easily shaken while taking photos, so that the generated image is of poor definition.
In a first aspect, a photographic method is provided, applied to a mobile terminal having a front camera and a rear camera. The method comprises: receiving a trigger operation from the user of the mobile terminal and turning on the front camera and the rear camera simultaneously; detecting, through the front camera, whether the user's line of sight falls within a first preview area on the display screen of the mobile terminal; if the user's line of sight is detected within the first preview area, recognizing a first facial expression in a first image captured by the front camera, wherein the first image is displayed in the first preview area; determining whether the first facial expression matches at least one prestored second facial expression; and, if the first facial expression matches a second facial expression, generating a photo from a second image captured by the rear camera according to the shooting mode associated with that second facial expression, wherein the second image is displayed in a second preview area on the display screen of the mobile terminal.
In a second aspect, a mobile terminal is provided, comprising a front camera and a rear camera. The mobile terminal further comprises: a first opening module, configured to receive a trigger operation from the user of the mobile terminal and turn on the front camera and the rear camera simultaneously; a detection module, configured to detect, through the front camera, whether the user's line of sight falls within a first preview area on the display screen of the mobile terminal; a first acquisition module, configured to recognize, if the user's line of sight is detected within the first preview area, a first facial expression in a first image captured by the front camera, wherein the first image is displayed in the first preview area; a determining module, configured to determine whether the first facial expression matches at least one prestored second facial expression; and a generation module, configured to generate, if the first facial expression matches a second facial expression, a photo from a second image captured by the rear camera according to the shooting mode associated with that second facial expression, wherein the second image is displayed in a second preview area on the display screen of the mobile terminal.
In the embodiments of the present invention, once the user's line of sight falls within the first preview area, the front camera is triggered to capture the user's first facial expression, which can then be matched against the prestored second facial expressions. When the match succeeds, a photo is generated in the shooting mode associated with the matched second facial expression. The user can keep gripping the mobile terminal with both hands throughout, which avoids the shake caused by tapping the terminal to shoot and the resulting poor definition, increases the stability of the terminal while the user holds it to take photos, and improves the definition of the generated photos.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the photographic method of the first embodiment of the present invention;
Fig. 2 is a flowchart of the photographic method of the second embodiment of the present invention;
Fig. 3 is a structural block diagram of the mobile terminal of the third embodiment of the present invention;
Fig. 4 is a structural block diagram of the mobile terminal of the fourth embodiment of the present invention;
Fig. 5 is a structural block diagram of the mobile terminal of the fifth embodiment of the present invention;
Fig. 6 is a structural block diagram of the mobile terminal of the sixth embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
First embodiment
The first embodiment of the present invention discloses a photographic method, applied to a mobile terminal having a front camera and a rear camera. As shown in Fig. 1, the method includes the following steps:
Step S101: Receive a trigger operation from the user of the mobile terminal, and turn on the front camera and the rear camera simultaneously.
The user can issue the trigger operation through a voice instruction, a key instruction, a touch gesture instruction, an air gesture instruction, or the like, to turn on the front camera and the rear camera.
Step S102: Detect, through the front camera, whether the user's line of sight falls within a first preview area on the display screen of the mobile terminal.
Specifically, eyeball-tracking technology can be used to detect whether the user's line of sight falls within the first preview area on the display screen. For example, the detection may recognize changes in the features of the eyeball and the area around it; or it may recognize changes in the iris angle; or it may actively project a light beam such as infrared light onto the eye and extract iris features. When infrared light is actively projected, the front camera can be an infrared front camera fitted with an additional infrared LED.
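The patent does not give code for this check; the sketch below is a minimal illustration, assuming the eye tracker has already reduced the line of sight to a single screen coordinate and the first preview area is an axis-aligned rectangle (the function name and coordinate convention are hypothetical):

```python
def gaze_in_region(gaze_point, region):
    """Return True when the estimated gaze point lies inside the region.

    gaze_point: (x, y) screen coordinate estimated by the eye tracker.
    region: (left, top, right, bottom) bounds of the first preview area.
    """
    x, y = gaze_point
    left, top, right, bottom = region
    return left <= x <= right and top <= y <= bottom

# A first preview area floating near the top-left of the screen.
first_preview = (40, 60, 340, 360)

print(gaze_in_region((120, 200), first_preview))  # gaze inside -> True
print(gaze_in_region((500, 200), first_preview))  # gaze outside -> False
```

A real implementation would feed this function from the tracking loop of step S102, re-evaluating it on every frame captured by the front camera.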
Step S103: If the user's line of sight is detected within the first preview area, recognize a first facial expression in a first image captured by the front camera.
The first image is displayed in the first preview area. If the user's line of sight is detected within the first preview area, it indicates that the user is satisfied with the second image captured by the rear camera and that a photo can be generated from it; therefore the first facial expression in the first image is recognized. The first facial expression can be, but is not limited to: a blink, an open mouth, a wink, a frown, a grimace, and so on.
If the user's line of sight is not within the first preview area, it indicates that the user does not yet want to generate a photo from the second image; even if a first facial expression appears in the first image, the front camera will not capture it.
Step S104: Determine whether the first facial expression matches at least one prestored second facial expression.
Each second facial expression is associated with a shooting mode, and different second facial expressions are associated with different shooting modes. Through this step it can be determined whether the first facial expression matches at least one second facial expression, and therefore whether it corresponds to a specific shooting mode, and to which one.
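The patent leaves the matching criterion open; many implementations reduce an expression to a feature vector and compare by similarity. The sketch below is one hedged illustration under that assumption — the toy vectors, the threshold, and the function names are all invented for the example:

```python
import math

def similarity(a, b):
    """Cosine similarity between two expression feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_prestored(first, prestored, threshold=0.95):
    """Index of the first prestored second expression that the first
    expression matches, or None when it matches none of them."""
    for i, second in enumerate(prestored):
        if similarity(first, second) >= threshold:
            return i
    return None

# Toy 3-dimensional vectors standing in for real landmark features.
stored = [(1.0, 0.0, 0.2), (0.0, 1.0, 0.1)]
print(matches_prestored((0.98, 0.02, 0.21), stored))  # -> 0 (matches)
print(matches_prestored((0.5, 0.5, 0.5), stored))     # -> None
```

The returned index could then select the associated shooting mode, mirroring the determination made in this step.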
Step S105: If the first facial expression matches a second facial expression, generate a photo from the second image captured by the rear camera according to the shooting mode associated with that second facial expression.
The second image is displayed in a second preview area on the display screen of the mobile terminal. For example, the second preview area can be the whole display screen, with the first preview area floating over it as a window.
If the first facial expression matches a second facial expression, the shooting mode is determined to be the one corresponding to that second facial expression, and the photo is generated in that mode without the user tapping a shutter key or performing any similar triggering action. The user can therefore keep gripping the mobile terminal with both hands while the photo is generated, which increases the stability of the hand-held terminal and prevents the rear camera from shaking.
In summary, with the photographic method of the first embodiment of the present invention, once the user's line of sight falls within the first preview area, the front camera is triggered to capture the user's first facial expression, which is then compared with the prestored second facial expressions. When the match succeeds, a photo is generated in the shooting mode corresponding to the matched second facial expression. The user can keep gripping the mobile terminal with both hands, which increases stability while shooting, prevents the rear camera from shaking and the generated photo from blurring, and thus improves the definition of the generated photos.
Second embodiment
The second embodiment of the present invention discloses a photographic method, applied to a mobile terminal having a front camera and a rear camera. As shown in Fig. 2, the method includes the following steps:
Step S201: Capture a second facial expression of the user of the mobile terminal.
The second facial expression can be captured by the front camera or by the rear camera; generally the front camera is used, so that the user can check the expression conveniently. The second facial expression can be, but is not limited to: a blink, an open mouth, a wink, a frown, a grimace, and so on.
Step S202: Establish an association between the second facial expression and a shooting mode.
The shooting mode can include, but is not limited to, a normal mode, a timer mode, a burst mode, a beauty mode, and so on. In this step each second facial expression can be associated with a shooting mode. For example: if the second facial expression is a blink, the shooting mode is the normal mode; if it is an open mouth, the timer mode; if it is a wink, the burst mode; if it is a frown, the beauty mode.
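The association can be pictured as a small lookup table. The class and method names below are hypothetical, but the expression-to-mode pairs follow the examples given for this step:

```python
class ExpressionModeRegistry:
    """Stores the associations built in steps S201-S202: each captured
    second facial expression is mapped to a shooting mode."""

    def __init__(self):
        self._modes = {}

    def associate(self, expression, mode):
        """Record that this second expression selects this shooting mode."""
        self._modes[expression] = mode

    def mode_for(self, expression):
        """Shooting mode for a matched expression, or None if unknown."""
        return self._modes.get(expression)

registry = ExpressionModeRegistry()
registry.associate("blink", "normal")       # blink -> normal mode
registry.associate("open_mouth", "timer")   # open mouth -> timer mode
registry.associate("wink", "burst")         # wink -> burst mode
registry.associate("frown", "beauty")       # frown -> beauty mode
print(registry.mode_for("open_mouth"))      # -> timer
```

In step S214, looking up the matched expression in such a registry would yield the mode in which the rear-camera photo is generated.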
Step S203: Receive a trigger operation from the user of the mobile terminal, and turn on the front camera and the rear camera simultaneously.
The user can issue the trigger operation through a voice instruction, a key instruction, a touch gesture instruction, an air gesture instruction, or the like, to turn on the front camera and the rear camera.
Step S204: Obtain the light brightness value sensed by the front camera.
A photosensitive element is arranged inside the front camera. It senses the brightness of the light entering the camera; when the brightness value is below a preset brightness threshold, the fill lamp of the front camera is turned on automatically.
Through this step the brightness value is obtained, to determine whether it is sufficient for the front camera to capture a first image of adequate definition.
Step S205: If the brightness value is below the preset brightness threshold, turn on the fill lamp of the front camera.
The preset brightness threshold can be set by the system of the mobile terminal or by the user. It should be chosen so that the front camera can capture a clear first image from which the first facial expression can be reliably recognized. If the brightness value is not below the threshold, the light is bright enough and the fill lamp need not be turned on. If it is below the threshold, the light is insufficient to capture a clear first image, so the fill lamp of the front camera is turned on to raise the brightness.
For example, suppose the brightness threshold is set to 100. When the brightness value is not below 100, a clear first image can be captured and the first facial expression on it can be recognized. If the brightness value sensed by the front camera is 50, it is below the threshold, so the fill lamp of the front camera is turned on. If the sensed brightness value is not below 100, the fill lamp need not be turned on.
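The decision in this example reduces to a single comparison. As a minimal sketch, using the threshold of 100 from the example (the function name and the unit of the brightness value are assumptions):

```python
LIGHT_THRESHOLD = 100  # preset brightness threshold from the example

def fill_lamp_needed(brightness):
    """Turn on the front camera's fill lamp only when the sensed
    brightness falls below the preset threshold (step S205)."""
    return brightness < LIGHT_THRESHOLD

print(fill_lamp_needed(50))   # -> True: too dark, lamp turns on
print(fill_lamp_needed(100))  # -> False: ambient light suffices
```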
Step S206: Adjust a first display location of the first preview area on the display screen of the mobile terminal.
The first preview area is the display area of the first image captured by the front camera. The user can adjust its position according to their own needs, habits, and preferences. Specifically, the user can drag the window of the first preview area with a finger to adjust its first display location on the display screen. It should be appreciated that the mobile terminal can also provide other ways to adjust the first display location of the first preview area on the display screen.
Step S207: Obtain a first display location parameter according to the first display location of the first preview area on the display screen of the mobile terminal.
For example, if the first preview area is a square, the first display location parameter can be the coordinates of its four corners.
Step S208: Track the user's line of sight in real time through the front camera.
Specifically, eyeball-tracking technology can track the user's line of sight in real time. For example, the tracking may recognize changes in the eyeball and the area around it; or it may recognize changes in the iris angle; or it may actively project a light beam such as infrared light and extract iris features.
Step S209: Obtain a second display location parameter according to the second display location where the user's line of sight intersects the display screen of the mobile terminal.
Once the second display location parameter is obtained, it can be compared with the first display location parameter.
The second display location parameter is the coordinate of the intersection of the user's line of sight with the display screen. For example, the intersection can be the point where the visual axis, extending from the centre of the user's eyeball, meets the plane of the display screen.
Specifically, the second display location parameter can be obtained as follows. Based on several key points of the user's face, facial recognition locates the eye socket region relevant to the user's visual focus on the display screen, and the pupil region and eyeball region are then determined by matching preset pupil and eyeball shapes. The three-dimensional coordinates of the current pupil centre and eyeball centre are obtained and compared; from the offset of the pupil centre relative to the eyeball centre, the mobile terminal calculates the user's direction of sight. The terminal collects eyeball feature data containing this direction of sight and, through image processing, converts it into the location data of the corresponding visual focus. In the embodiments of the present invention, the visual focus can be regarded as the intersection of the visual axis, centred on the eyeball, with the plane of the display screen. In fact, there is a one-to-one relation between the direction of sight of the pupil centre and the coordinate of the visual focus on the display screen. Before the eyeball focus is located for the first time, the system marks the midpoints of the four edges of the display screen and the centre point of the display screen, and asks the user to gaze at these five points in turn. The terminal captures the user's eyeball feature data while the user gazes at each of the five points; this operation defines five reference points (the midpoints of the four edges, and the centre of the display screen), which the terminal can use to delimit the range of visual focus coordinates on the display screen corresponding to the direction of sight of the user's eyeball. When the terminal converts the eyeball feature data it reads into the location data of the required visual focus, it refers to these five reference points and locates the real-time visual focus within a certain range. The terminal then treats the location data of the visual focus as a visual focus on the touch screen.
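As a rough sketch of the five-point calibration described above — the 1080x1920 resolution and the function name are assumptions, and a real tracker would interpolate rather than snap to the nearest point — the reference points and a coarse positioning step could look like:

```python
# Assumed screen resolution; the five reference points are the midpoints
# of the four screen edges plus the centre of the display screen.
WIDTH, HEIGHT = 1080, 1920
REFERENCE_POINTS = [
    (WIDTH // 2, 0),            # top edge midpoint
    (WIDTH // 2, HEIGHT),       # bottom edge midpoint
    (0, HEIGHT // 2),           # left edge midpoint
    (WIDTH, HEIGHT // 2),       # right edge midpoint
    (WIDTH // 2, HEIGHT // 2),  # screen centre
]

def nearest_reference(focus):
    """Coarsely position a real-time visual focus by the closest of the
    five calibration reference points (squared-distance comparison)."""
    return min(REFERENCE_POINTS,
               key=lambda p: (p[0] - focus[0]) ** 2 + (p[1] - focus[1]) ** 2)

print(nearest_reference((500, 980)))   # near the centre -> (540, 960)
print(nearest_reference((1000, 940)))  # near the right edge -> (1080, 960)
```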
Step S210: Judge whether the second display location parameter lies within the range delimited by the first display location parameter.
Through this step it can be judged whether the user's line of sight falls within the first preview area. If the second display location parameter lies outside the range delimited by the first display location parameter, the user's line of sight is not within the first preview area, and the front camera will not capture the user's first facial expression.
Step S211: If the second display location parameter lies within the range delimited by the first display location parameter, determine that the user's line of sight is within the first preview area.
Through this step it can be determined that the user's line of sight is within the first preview area, which indicates that the front camera can capture the user's first facial expression.
Step S212: If the user's line of sight is detected within the first preview area, recognize a first facial expression in the first image captured by the front camera.
The first image is displayed in the first preview area. If the user's line of sight is detected within the first preview area, it indicates that the user is satisfied with the second image captured by the rear camera and that a photo can be generated from it; therefore the first facial expression in the first image is recognized. The first facial expression can be, but is not limited to: a blink, an open mouth, a wink, a frown, a grimace, and so on.
Step S213: Determine whether the first facial expression matches at least one prestored second facial expression.
Each second facial expression is associated with a shooting mode, and different second facial expressions are associated with different shooting modes. Through the comparison in this step it can be determined whether the first facial expression matches at least one second facial expression, and therefore whether it corresponds to a specific shooting mode, and to which one.
Step S214: If the first facial expression matches a second facial expression, generate a photo from the second image captured by the rear camera according to the shooting mode associated with that second facial expression.
The second image is displayed in a second preview area on the display screen of the mobile terminal. For example, the second preview area can be the whole display screen, with the first preview area floating over it as a window.
If the first facial expression matches a second facial expression, the shooting mode is determined to be the one corresponding to that second facial expression, and the photo is generated in that mode without the user tapping a shutter key or performing any similar triggering action. The user can therefore keep gripping the mobile terminal with both hands while the photo is generated, which increases the stability of the hand-held terminal and prevents the rear camera from shaking.
In summary, with the photographic method of the second embodiment of the present invention, the user can adjust the position of the first preview area according to their own needs, habits, or preferences, and when the brightness value is below the preset brightness threshold, the fill lamp of the front camera is turned on to raise the brightness and capture a clear first image. Once the user's line of sight falls within the first preview area, the front camera is triggered to capture the user's first facial expression, which is compared with the prestored second facial expressions. When the match succeeds, a photo is generated in the shooting mode corresponding to the matched second facial expression. The user can keep gripping the mobile terminal with both hands, which increases stability while shooting, prevents the rear camera from shaking and the generated photo from blurring, and thus improves the definition of the generated photos.
Third embodiment
The third embodiment of the present invention discloses a mobile terminal that can implement the details of the photographic methods in the above embodiments and achieve the same effect. The mobile terminal can be, but is not limited to, a mobile phone, a tablet computer, an MP3/MP4 player, a smart watch, a smart bracelet, a personal digital assistant (PDA), a vehicle-mounted computer, and so on.
The mobile terminal 300 includes a front camera 301 and a rear camera 302. The front camera 301 is used to capture a first image; the rear camera 302 is used to capture a second image.
In addition, the mobile terminal 300 also includes:
A first opening module 303, configured to receive a trigger operation from the user of the mobile terminal 300 and turn on the front camera 301 and the rear camera 302 simultaneously.
The user of the mobile terminal 300 can issue the trigger operation through a voice instruction, a key instruction, a touch gesture instruction, an air gesture instruction, or the like, to make the first opening module 303 turn on the front camera 301 and the rear camera 302.
A detection module 304, configured to detect, through the front camera 301, whether the line of sight of the user of the mobile terminal 300 falls within a first preview area on the display screen of the mobile terminal 300.
Specifically, the detection module 304 can use eyeball-tracking technology to detect whether the user's line of sight falls within the first preview area on the display screen of the mobile terminal 300. For example, the detection may recognize changes in the eyeball and the area around it; or it may recognize changes in the iris angle; or it may actively project a light beam such as infrared light onto the eye and extract iris features. When infrared light is actively projected, the front camera 301 can be an infrared front camera fitted with an additional infrared LED.
A first acquisition module 305, configured to recognize, if the user's line of sight is detected within the first preview area, a first facial expression in the first image captured by the front camera 301.
The first image is displayed in the first preview area. If the user's line of sight is detected within the first preview area, it indicates that the user is satisfied with the second image captured by the rear camera 302 and that a photo can be generated from it; therefore the first acquisition module 305 recognizes the first facial expression in the first image. The first facial expression can be, but is not limited to: a blink, an open mouth, a wink, a frown, a grimace, and so on.
If the user's line of sight is not within the first preview area, it indicates that the user does not yet want to generate a photo from the second image; even if a first facial expression appears in the first image, the front camera 301 will not capture it.
A determining module 306, configured to determine whether the first facial expression matches at least one prestored second facial expression.
Each second facial expression is associated with a shooting mode, and different second facial expressions are associated with different shooting modes. Through this module it can be determined whether the first facial expression matches at least one second facial expression, and therefore whether it corresponds to a specific shooting mode, and to which one.
A generation module 307, configured to generate, if the first facial expression matches a second facial expression, a photo from the second image captured by the rear camera 302 according to the shooting mode associated with that second facial expression.
The second image is displayed in a second preview area on the display screen of the mobile terminal 300. For example, the second preview area can be the whole display screen, with the first preview area floating over it as a window.
If the first facial expression matches a second facial expression, the shooting mode is determined to be the one corresponding to that second facial expression, and the photo is generated in that mode without the user of the mobile terminal 300 tapping a shutter key or performing any similar triggering action. The user can therefore keep gripping the mobile terminal 300 with both hands while the photo is generated, which increases the stability of the hand-held terminal and prevents the rear camera 302 from shaking.
In summary, with the mobile terminal 300 of the third embodiment of the present invention, once the line of sight of the user is located within the first preview area, the front camera 301 is triggered to collect the first facial expression of the user, so that the first facial expression can be compared with the prestored second facial expressions. When the match succeeds, a photo can be generated using the photographing mode corresponding to the matched second facial expression while the user keeps holding the mobile terminal 300 in hand. This increases stability, prevents the rear camera 302 from shaking, prevents the generated photo from being blurred by such shaking, and thereby improves the sharpness of the generated photo.
Fourth embodiment
The fourth embodiment of the present invention discloses a mobile terminal that can implement the details of the photographing method in the above embodiments and achieve the same effects. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, an MP3/MP4 player, a smart watch, a smart bracelet, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, and so on.
The mobile terminal 400 includes: a front camera 401, a rear camera 402, a first opening module 403, a detection module 404, a first acquisition module 405, a determining module 406 and a generation module 407. These modules have the same functions as the corresponding modules of the third embodiment, and the details are not repeated here.
Preferably, the mobile terminal 400 further includes:
A second acquisition module 408, configured to collect the second facial expression of the user of the mobile terminal 400 before the step of receiving the trigger operation of the user and simultaneously turning on the front camera 401 and the rear camera 402.
This module may collect the second facial expression through the front camera 401 or through the rear camera 402. In general, the front camera 401 is used so that the second facial expression can be checked conveniently. The second facial expression may be, but is not limited to: blinking, opening the mouth, winking, frowning, making a funny face, and so on.
An establishing module 409, configured to establish the association between second facial expressions and photographing modes.
The photographing modes may include, but are not limited to, a normal photographing mode, a timer photographing mode, a burst mode, a beauty photographing mode, and so on. This module can associate each second facial expression with a photographing mode. For example, if the second facial expression is blinking, the photographing mode is the normal photographing mode; if the second facial expression is opening the mouth, the photographing mode is the timer photographing mode; if the second facial expression is winking, the photographing mode is the burst mode; if the second facial expression is frowning, the photographing mode is the beauty photographing mode.
Through the design of the above modules, the mobile terminal 400 can preset the second facial expressions associated with different photographing modes, to be compared with the first facial expression recognized during photographing, so as to confirm which photographing mode is used to generate the photo.
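The example associations above can be sketched as a simple lookup table. The mode names and the `photo_mode_for` helper are illustrative assumptions, not part of the disclosed implementation.

```python
# Illustrative lookup table; the short mode names are assumptions,
# standing in for the photographing modes named in the example above.
EXPRESSION_TO_MODE = {
    "blink": "normal",       # normal photographing mode
    "open_mouth": "timer",   # timer photographing mode
    "wink": "burst",         # burst mode
    "frown": "beauty",       # beauty photographing mode
}

def photo_mode_for(expression):
    """Look up the photographing mode associated with a second expression."""
    return EXPRESSION_TO_MODE.get(expression)

print(photo_mode_for("wink"))   # burst
print(photo_mode_for("smile"))  # None
```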
Preferably, the mobile terminal 400 further includes:
An obtaining module 410, configured to obtain the light brightness value collected by the front camera 401 after the step of receiving the trigger operation of the user of the mobile terminal 400 and simultaneously turning on the front camera 401 and the rear camera 402.
A photosensitive element is provided inside the front camera 401. The photosensitive element can sense the brightness of the light entering the front camera 401; when the light brightness value is lower than a preset light brightness threshold, the fill light of the front camera 401 is turned on automatically.
Through this module, the light brightness value can be obtained to determine whether it is sufficient for the front camera 401 to collect a first image of adequate sharpness.
A second opening module 411, configured to turn on the fill light of the front camera 401 if the light brightness value is lower than the preset light brightness threshold.
The preset light brightness threshold may be set by the system of the mobile terminal 400 or by its user, and should be chosen such that the front camera 401 can collect a first image clear enough for the first facial expression to be recognized reliably. If the light brightness value is not lower than the preset threshold, the light is bright enough and the fill light of the front camera 401 need not be turned on. If the light brightness value is lower than the preset threshold, the light is insufficient and a clear first image cannot be collected, so the fill light of the front camera 401 must be turned on to increase the brightness and collect a sufficiently clear first image.
For example, suppose the light brightness threshold is set to 100. When the light brightness value is not lower than 100, the front camera 401 can collect a clear first image, and the first facial expression in that image can be recognized. If the light brightness value collected by the front camera 401 is 50, which is lower than the threshold, the fill light of the front camera 401 is turned on; if the collected light brightness value is not lower than 100, the fill light need not be turned on.
Through the design of the above modules, the fill light of the front camera 401 can be turned on when the light is insufficient, so that the front camera 401 can collect a clear first image, which facilitates recognition of the first facial expression.
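The threshold comparison in this numeric example can be sketched as follows; the `should_open_fill_light` helper and the threshold constant are illustrative, not part of the disclosed implementation.

```python
LIGHT_THRESHOLD = 100  # the preset brightness threshold from the example

def should_open_fill_light(brightness, threshold=LIGHT_THRESHOLD):
    """Open the front camera's fill light only when the light is too dim."""
    return brightness < threshold

print(should_open_fill_light(50))   # True  (50 < 100, fill light needed)
print(should_open_fill_light(100))  # False (bright enough)
```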
Preferably, the mobile terminal 400 further includes:
An adjusting module 412, configured to adjust the first display location of the first preview area on the display screen of the mobile terminal 400 before the step of detecting, through the front camera 401, whether the line of sight of the user is located within the first preview area on the display screen.
The first preview area is the display area of the first image collected by the front camera 401. The user of the mobile terminal 400 can adjust the position of the first preview area according to his or her own needs, habits or preferences. Specifically, the user can move the window of the first preview area by dragging it with a finger, thereby adjusting its first display location on the display screen of the mobile terminal 400. It should be appreciated that other ways of adjusting the first display location of the first preview area on the display screen may also be provided.
Through the design of the above module, the position of the first preview area can be adjusted according to the needs, habits or preferences of the user of the mobile terminal 400.
Preferably, the detection module 404 specifically includes the following submodules:
A first acquisition submodule 4041, configured to obtain a first display location parameter according to the first display location of the first preview area on the display screen of the mobile terminal 400.
For example, if the first preview area is rectangular, the first display location parameter may be the coordinates of its four corners.
A tracking submodule 4042, configured to track the line of sight of the user of the mobile terminal 400 in real time through the front camera 401.
Specifically, the tracking submodule 4042 can track the user's line of sight in real time using eye-tracking technology. For example, the line of sight may be tracked by recognizing changes in the eyeball and the area around it; or by recognizing changes in the angle of the iris; or by actively projecting a beam such as infrared light onto the iris and extracting its features.
A second acquisition submodule 4043, configured to obtain a second display location parameter according to the second display location at which the line of sight of the user intersects the display screen of the mobile terminal 400.
By obtaining the second display location parameter, it can be compared with the first display location parameter.
The second display location parameter is the coordinate of the intersection point of the user's line of sight and the display screen of the mobile terminal 400. For example, the intersection point may be the intersection of the visual axis extending from the center of the user's eyeball with the plane in which the display screen lies.
Specifically, the second acquisition submodule 4043 may obtain the second display location parameter using the following method. Based on several specific key points of the user's face, facial recognition techniques determine the eye socket region relevant to the user's visual focus on the display screen, and the user's pupil region and eyeball are further determined by matching preset pupil and eyeball shapes. The three-dimensional coordinates of the current pupil center and eyeball center of the user are then obtained and compared. By comparing the two, the mobile terminal 400 obtains the offset of the pupil center relative to the eyeball center, from which the user's gaze direction is calculated. Eyeball feature data including the gaze direction are collected and converted, through image processing techniques, into the location data of the corresponding visual focus. In the embodiment of the present invention, the visual focus can be regarded as the intersection of the visual axis centered on the eyeball with the plane in which the display screen lies. In fact, because the gaze direction exists, there is a one-to-one relation between the pupil center of the eyes and the coordinate of the visual focus on the display screen. Before eyeball focusing is performed for the first time, the system displays on the screen the midpoints of the four screen edges and the center of the screen, and asks the user of the mobile terminal 400 to gaze at each of these five points in turn, capturing the user's eyeball feature data while each point is gazed at. This operation defines five reference datum points (the midpoints of the four edges and the center of the display screen), which can be used to delimit the range of visual-focus coordinates on the display screen corresponding to the gaze directions of the user's eyeballs. When the eyeball feature data that are read are converted into visual-focus location data and the required visual focus is located, the five datum points are consulted so that the real-time visual focus is positioned within a certain range, and the location data of the visual focus are processed into a visual focus on the touch screen.
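Under the stated assumptions, and assuming further that the five datum points allow an approximately linear mapping from the pupil-centre offset to screen coordinates, the positioning step might be sketched as below. All offsets, screen dimensions and names here are hypothetical; a real system derives them from the captured eyeball feature data.

```python
# Hypothetical calibration pairs: (pupil-centre offset relative to the
# eyeball centre) -> (screen coordinate of the gazed datum point).
# The five datum points are the screen centre and the midpoints of the
# four screen edges, as described above; all numbers are invented.
CALIBRATION = [
    ((0.0, 0.0), (540, 960)),     # screen centre
    ((-0.3, 0.0), (0, 960)),      # midpoint of left edge
    ((0.3, 0.0), (1080, 960)),    # midpoint of right edge
    ((0.0, -0.3), (540, 0)),      # midpoint of top edge
    ((0.0, 0.3), (540, 1920)),    # midpoint of bottom edge
]

def gaze_point(offset):
    """Linearly interpolate the visual focus from the calibration pairs."""
    (c_ox, c_oy), (cx, cy) = CALIBRATION[0]
    left, right = CALIBRATION[1], CALIBRATION[2]
    top, bottom = CALIBRATION[3], CALIBRATION[4]
    # Horizontal and vertical scale factors from the edge-midpoint pairs.
    sx = (right[1][0] - left[1][0]) / (right[0][0] - left[0][0])
    sy = (bottom[1][1] - top[1][1]) / (bottom[0][1] - top[0][1])
    ox, oy = offset
    return (cx + (ox - c_ox) * sx, cy + (oy - c_oy) * sy)

print(gaze_point((0.15, -0.15)))  # roughly (810.0, 480.0)
```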
A judging submodule 4044, configured to judge whether the second display location parameter is located within the range delimited by the first display location parameter.
Through this submodule it can be judged whether the line of sight of the user of the mobile terminal 400 is located within the first preview area. If the second display location parameter is outside the range delimited by the first display location parameter, the user's line of sight is not within the first preview area, and the front camera 401 will not collect the first facial expression of the user.
A determination submodule 4045, configured to determine that the line of sight of the user of the mobile terminal 400 is located within the first preview area if the second display location parameter is located within the range delimited by the first display location parameter.
Through this submodule it can be determined that the user's line of sight is located within the first preview area, indicating that the front camera 401 can collect the first facial expression of the user.
Through the design of the above submodules, it can be detected whether the line of sight of the user of the mobile terminal 400 is located within the first preview area on the display screen of the mobile terminal 400.
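Treating the first display location parameter as the four corner coordinates of a rectangular preview window, the judging and determination submodules reduce to a point-in-rectangle test. This sketch, with hypothetical coordinates, is illustrative only.

```python
def gaze_in_preview(corners, gaze):
    """Return True if the gaze intersection lies inside the preview window."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    gx, gy = gaze
    return min(xs) <= gx <= max(xs) and min(ys) <= gy <= max(ys)

# Hypothetical first display location parameter: four corner coordinates.
preview_corners = [(0, 0), (300, 0), (300, 400), (0, 400)]

print(gaze_in_preview(preview_corners, (150, 200)))  # True: collect expression
print(gaze_in_preview(preview_corners, (500, 200)))  # False: do not collect
```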
In summary, with the mobile terminal 400 of the fourth embodiment of the present invention, the user can adjust the position of the first preview area according to his or her own needs, habits or preferences; when the light brightness value is lower than the preset light brightness threshold, the fill light of the front camera 401 is turned on to increase the brightness so that a clear first image can be collected; and once the user's line of sight is located within the first preview area, the front camera 401 is triggered to collect the first facial expression of the user, so that it can be compared with the prestored second facial expressions. When the match succeeds, a photo can be generated using the photographing mode corresponding to the matched second facial expression while the user keeps holding the mobile terminal 400 in hand, which increases stability during photographing, prevents the rear camera 402 from shaking, prevents the generated photo from being blurred by such shaking, and thereby improves the sharpness of the generated photo.
Fifth embodiment
Fig. 5 is a structural block diagram of the mobile terminal of the fifth embodiment of the present invention. The mobile terminal 500 shown in Fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504, a user interface 503, a front camera 506 and a rear camera 507. The components of the mobile terminal 500 are coupled together through a bus system 505. It can be understood that the bus system 505 is used to realize connection and communication between these components. In addition to a data bus, the bus system 505 also includes a power bus, a control bus and a status signal bus. For clarity of explanation, however, the various buses are all labeled as the bus system 505 in Fig. 5.
The user interface 503 may include a display, a keyboard or a pointing device such as a mouse, a trackball, a touch-sensitive pad or a touch screen.
It can be understood that the memory 502 in the embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which serves as an external cache. By way of exemplary but not restrictive illustration, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (SynchLink DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 502 of the systems and methods described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some implementations, the memory 502 stores the following elements: executable modules or data structures, or a subset or superset thereof, namely an operating system 5021 and application programs 5022.
The operating system 5021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 5022 contain various applications, such as a media player (MediaPlayer) and a browser (Browser), for realizing various application services. The program implementing the method of the embodiment of the present invention may be included in the application programs 5022.
In the embodiment of the present invention, by calling the program or instructions stored in the memory 502, specifically the program or instructions stored in the application programs 5022, the processor 501 is configured to: receive the trigger operation of the user of the mobile terminal 500, and simultaneously turn on the front camera 506 and the rear camera 507; detect, through the front camera 506, whether the line of sight of the user is located within the first preview area on the display screen of the mobile terminal 500; if it is detected that the line of sight of the user is located within the first preview area, recognize the first facial expression in the first image collected by the front camera 506, the first image being displayed in the first preview area; determine whether the first facial expression matches at least one prestored second facial expression; and if the first facial expression matches a second facial expression, generate the second image collected by the rear camera 507 as a photo according to the photographing mode associated with that second facial expression, the second image being displayed in a second preview area on the display screen of the mobile terminal 500.
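The conditional flow executed by the processor can be summarized in a short sketch; the helper names and the expression-to-mode table are illustrative stand-ins for the camera and recognition hardware, not the claimed implementation.

```python
# Illustrative stand-ins; real inputs come from the gaze-detection and
# expression-recognition steps described above.
EXPRESSION_TO_MODE = {"blink": "normal", "wink": "burst"}

def take_photo(gaze_in_preview, first_expression, capture_rear_image):
    """Generate a photo only when the gaze and expression conditions hold."""
    if not gaze_in_preview:
        return None                      # line of sight outside preview area
    mode = EXPRESSION_TO_MODE.get(first_expression)
    if mode is None:
        return None                      # no matching second expression
    return (mode, capture_rear_image())  # photo in the associated mode

print(take_photo(True, "wink", lambda: "second_image"))   # ('burst', 'second_image')
print(take_photo(False, "wink", lambda: "second_image"))  # None
```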
The methods disclosed in the above embodiments of the present invention can be applied in, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above methods can be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or perform the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described herein may be implemented with hardware, software, firmware, middleware, microcode or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described in the embodiments of the present invention may be implemented through modules (e.g., processes or functions) that perform the functions described herein. The software code may be stored in the memory 502 and executed by the processor. The memory 502 may be implemented within the processor 501 or external to the processor 501.
Optionally, in the step of detecting, through the front camera 506, whether the line of sight of the user of the mobile terminal 500 is located within the first preview area on the display screen of the mobile terminal 500, the processor 501 is specifically configured to: obtain a first display location parameter according to the first display location of the first preview area on the display screen of the mobile terminal 500; track the line of sight of the user of the mobile terminal 500 in real time through the front camera 506; obtain a second display location parameter according to the second display location at which the user's line of sight intersects the display screen of the mobile terminal 500; judge whether the second display location parameter is located within the range delimited by the first display location parameter; and if the second display location parameter is located within that range, determine that the line of sight of the user is located within the first preview area.
Optionally, before the step of receiving the trigger operation of the user of the mobile terminal 500 and simultaneously turning on the front camera 506 and the rear camera 507, the processor 501 is further configured to: collect the second facial expression of the user of the mobile terminal 500; and establish the association between the second facial expression and a photographing mode.
Optionally, after the step of receiving the trigger operation of the user of the mobile terminal 500 and simultaneously turning on the front camera 506 and the rear camera 507, the processor 501 is further configured to: obtain the light brightness value collected by the front camera 506; and if the light brightness value is lower than the preset light brightness threshold, turn on the fill light of the front camera 506.
Optionally, before the step of detecting, through the front camera 506, whether the line of sight of the user of the mobile terminal 500 is located within the first preview area on the display screen of the mobile terminal 500, the processor 501 is further configured to: adjust the first display location of the first preview area on the display screen of the mobile terminal 500.
The mobile terminal 500 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not described here again.
With the mobile terminal 500 of the embodiment of the present invention, the user can adjust the position of the first preview area according to his or her own needs, habits or preferences; when the light brightness value is lower than the preset light brightness threshold, the fill light of the front camera 506 is turned on to increase the brightness so that a clear first image can be collected; and once the user's line of sight is located within the first preview area, the front camera 506 is triggered to collect the first facial expression of the user, so that it can be compared with the prestored second facial expressions. When the match succeeds, a photo can be generated using the photographing mode corresponding to the matched second facial expression while the user keeps holding the mobile terminal 500 in hand, which increases the stability of the mobile terminal 500, prevents the rear camera 507 from shaking, and prevents the generated photo from being blurred by such shaking.
Sixth embodiment
Fig. 6 is a structural schematic diagram of the mobile terminal of the sixth embodiment of the present invention. Specifically, the mobile terminal 600 in Fig. 6 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer or the like.
The mobile terminal 600 in Fig. 6 includes a radio frequency (Radio Frequency, RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a WiFi (Wireless Fidelity) module 680, a power supply 690, a front camera 650 and a rear camera 651.
The input unit 630 may be used to receive numeric or character information input by the user and to generate signal inputs related to user settings and function control of the mobile terminal 600. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also referred to as a touch screen, can collect touch operations of the user on or near it (such as operations performed by the user on the touch panel 631 with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 660, and can receive and execute commands sent by the processor 660. The touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 631, the input unit 630 may also include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick.
The display unit 640 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 600. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED) or the like.
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides the corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area may be used to display the interfaces of application programs. Each interface may contain interface elements such as icons of at least one application program and/or widget desktop controls. The application program interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, an interface number, a scroll bar and a phone book icon.
The processor 660 is the control center of the mobile terminal 600. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the mobile terminal 600 and processes data by running or executing the software programs and/or modules stored in the first memory 621 and calling the data stored in the second memory 622, thereby monitoring the mobile terminal 600 as a whole. Optionally, the processor 660 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 621 and/or the data stored in the second memory 622, the processor 660 is configured to: receive the trigger operation of the user of the mobile terminal 600, and simultaneously turn on the front camera 650 and the rear camera 651; detect, through the front camera 650, whether the line of sight of the user is located within the first preview area on the display screen of the mobile terminal 600; if it is detected that the line of sight of the user is located within the first preview area, recognize the first facial expression in the first image collected by the front camera 650, the first image being displayed in the first preview area; determine whether the first facial expression matches at least one prestored second facial expression; and if the first facial expression matches a second facial expression, generate the second image collected by the rear camera 651 as a photo according to the photographing mode associated with that second facial expression, the second image being displayed in a second preview area on the display screen of the mobile terminal 600.
Optionally, in the step of detecting, through the front camera 650, whether the line of sight of the user of the mobile terminal 600 is located within the first preview area on the display screen of the mobile terminal 600, the processor 660 is specifically configured to: obtain a first display location parameter according to the first display location of the first preview area on the display screen of the mobile terminal 600; track the line of sight of the user of the mobile terminal 600 in real time through the front camera 650; obtain a second display location parameter according to the second display location at which the user's line of sight intersects the display screen of the mobile terminal 600; judge whether the second display location parameter is located within the range delimited by the first display location parameter; and if the second display location parameter is located within that range, determine that the line of sight of the user is located within the first preview area.
Alternatively, processor 660 is used for:The trigger action of the user of mobile terminal 600 is received, while opening front camera
650 and the step of rear camera 651 before, processor 660 is additionally operable to:Gather the second face table of the user of mobile terminal 600
Feelings;Set up the incidence relation of the second human face expression and exposal model.
Optionally, after the step of receiving the trigger operation of the user of the mobile terminal 600 and simultaneously turning on the front camera 650 and the rear camera 651, the processor 660 is further configured to: obtain a light brightness value collected by the front camera 650; and, if the light brightness value is lower than a preset light brightness threshold, turn on the fill light of the front camera 650.
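The fill-light decision is a simple threshold comparison, sketched below. The threshold value, its units, and the function names are illustrative assumptions; the patent does not specify them.

```python
# Assumed preset threshold, in arbitrary sensor brightness units.
BRIGHTNESS_THRESHOLD = 50.0

def should_enable_fill_light(brightness: float,
                             threshold: float = BRIGHTNESS_THRESHOLD) -> bool:
    """Turn on the front camera's fill light only when the scene is too dark
    to collect a clear first image for expression recognition."""
    return brightness < threshold

print(should_enable_fill_light(20.0))  # dark scene: fill light on
print(should_enable_fill_light(80.0))  # bright enough: fill light stays off
```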
Optionally, before the step of detecting, through the front camera 650, whether the line of sight of the user of the mobile terminal 600 is located in the first preview area on the display screen of the mobile terminal 600, the processor 660 is further configured to: adjust the first display location of the first preview area on the display screen of the mobile terminal 600.
It can be seen that, with the mobile terminal 600, the user can adjust the position of the first preview area according to his or her own needs, habits, or preferences. When the light brightness is lower than the preset threshold, the fill light of the front camera 650 can be turned on to increase the brightness so that a clear first image is collected. Once the line of sight of the user is located in the first preview area, the front camera 650 is triggered to collect the first facial expression of the user, which is then compared with the prestored second facial expression; if the match succeeds, a photo is generated according to the photographing mode associated with the second facial expression. Throughout this process, the user can keep holding the mobile terminal 600 with both hands, which improves the stability of the mobile terminal 600, prevents the rear camera 651 from shaking, and prevents the generated photo from being blurred by such shaking.
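The overall flow summarized above can be sketched end to end as follows. All camera and recognizer callables are stand-ins supplied by the caller; nothing here reflects a real camera API, and every name is a hypothetical placeholder for the corresponding step in the method.

```python
from typing import Callable, Dict, Optional

def auto_capture(
    gaze_in_area: Callable[[], bool],          # front-camera gaze detector
    recognize_expression: Callable[[], str],   # first expression from front-camera image
    stored_modes: Dict[str, str],              # second expression -> photographing mode
    capture: Callable[[str], str],             # rear-camera capture with a given mode
) -> Optional[str]:
    """Return the generated photo, or None if the gaze or expression does not qualify."""
    if not gaze_in_area():
        return None                            # user is not looking at the first preview area
    expression = recognize_expression()
    mode = stored_modes.get(expression)
    if mode is None:
        return None                            # first expression matches no stored expression
    return capture(mode)                       # no manual trigger: hands stay on the phone

photo = auto_capture(
    gaze_in_area=lambda: True,
    recognize_expression=lambda: "smile",
    stored_modes={"smile": "beauty"},
    capture=lambda mode: f"photo[{mode}]",
)
print(photo)  # the rear camera captures using the matched "beauty" mode
```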
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, apparatus, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections between apparatuses or units through some interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily conceived by those familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
1. A photographing method, applied to a mobile terminal having a front camera and a rear camera, wherein the method comprises:
receiving a trigger operation of a mobile terminal user, and simultaneously turning on the front camera and the rear camera;
detecting, through the front camera, whether a line of sight of the mobile terminal user is located in a first preview area on a display screen of the mobile terminal;
if it is detected that the line of sight of the mobile terminal user is located in the first preview area, recognizing a first facial expression in a first image collected by the front camera, wherein the first image is displayed in the first preview area;
determining whether the first facial expression matches at least one prestored second facial expression; and
if the first facial expression matches the second facial expression, generating, according to a photographing mode associated with the second facial expression, a photo from a second image collected by the rear camera, wherein the second image is displayed in a second preview area on the display screen of the mobile terminal.
2. The method according to claim 1, wherein the step of detecting, through the front camera, whether the line of sight of the mobile terminal user is located in the first preview area on the display screen of the mobile terminal comprises:
obtaining a first display location parameter according to a first display location of the first preview area on the display screen of the mobile terminal;
tracking the line of sight of the mobile terminal user in real time through the front camera;
obtaining a second display location parameter according to a second display location at which the line of sight of the mobile terminal user intersects the display screen of the mobile terminal;
determining whether the second display location parameter is within a range defined by the first display location parameter; and
if the second display location parameter is within the range defined by the first display location parameter, determining that the line of sight of the mobile terminal user is located in the first preview area.
3. The method according to claim 1, wherein before the step of receiving the trigger operation of the mobile terminal user and simultaneously turning on the front camera and the rear camera, the method further comprises:
collecting the second facial expression of the mobile terminal user; and
establishing an association between the second facial expression and the photographing mode.
4. The method according to claim 1, wherein after the step of receiving the trigger operation of the mobile terminal user and simultaneously turning on the front camera and the rear camera, the method further comprises:
obtaining a light brightness value collected by the front camera; and
if the light brightness value is lower than a preset light brightness threshold, turning on a fill light of the front camera.
5. The method according to claim 1, wherein before the step of detecting, through the front camera, whether the line of sight of the mobile terminal user is located in the first preview area on the display screen of the mobile terminal, the method further comprises:
adjusting the first display location of the first preview area on the display screen of the mobile terminal.
6. A mobile terminal, comprising a front camera and a rear camera, wherein the mobile terminal further comprises:
a first opening module, configured to receive a trigger operation of a mobile terminal user and simultaneously turn on the front camera and the rear camera;
a detection module, configured to detect, through the front camera, whether a line of sight of the mobile terminal user is located in a first preview area on a display screen of the mobile terminal;
a first acquisition module, configured to, if it is detected that the line of sight of the mobile terminal user is located in the first preview area, recognize a first facial expression in a first image collected by the front camera, wherein the first image is displayed in the first preview area;
a determining module, configured to determine whether the first facial expression matches at least one prestored second facial expression; and
a generation module, configured to, if the first facial expression matches the second facial expression, generate, according to a photographing mode associated with the second facial expression, a photo from a second image collected by the rear camera, wherein the second image is displayed in a second preview area on the display screen of the mobile terminal.
7. The mobile terminal according to claim 6, wherein the detection module comprises:
a first acquisition submodule, configured to obtain a first display location parameter according to a first display location of the first preview area on the display screen of the mobile terminal;
a tracking submodule, configured to track the line of sight of the mobile terminal user in real time through the front camera;
a second acquisition submodule, configured to obtain a second display location parameter according to a second display location at which the line of sight of the mobile terminal user intersects the display screen of the mobile terminal;
a judging submodule, configured to determine whether the second display location parameter is within a range defined by the first display location parameter; and
a determination submodule, configured to, if the second display location parameter is within the range defined by the first display location parameter, determine that the line of sight of the mobile terminal user is located in the first preview area.
8. The mobile terminal according to claim 6, further comprising:
a second acquisition module, configured to collect the second facial expression of the mobile terminal user before the step of receiving the trigger operation of the mobile terminal user and simultaneously turning on the front camera and the rear camera; and
an establishing module, configured to establish an association between the second facial expression and the photographing mode.
9. The mobile terminal according to claim 6, further comprising:
an obtaining module, configured to obtain a light brightness value collected by the front camera after the step of receiving the trigger operation of the mobile terminal user and simultaneously turning on the front camera and the rear camera; and
a second opening module, configured to turn on a fill light of the front camera if the light brightness value is lower than a preset light brightness threshold.
10. The mobile terminal according to claim 6, further comprising:
an adjusting module, configured to adjust the first display location of the first preview area on the display screen of the mobile terminal before the step of detecting, through the front camera, whether the line of sight of the mobile terminal user is located in the first preview area on the display screen of the mobile terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710054919.2A CN106713764A (en) | 2017-01-24 | 2017-01-24 | Photographic method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710054919.2A CN106713764A (en) | 2017-01-24 | 2017-01-24 | Photographic method and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106713764A true CN106713764A (en) | 2017-05-24 |
Family
ID=58909613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710054919.2A Pending CN106713764A (en) | 2017-01-24 | 2017-01-24 | Photographic method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106713764A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106973237A (en) * | 2017-05-25 | 2017-07-21 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN107333059A (en) * | 2017-06-26 | 2017-11-07 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN107483814A (en) * | 2017-08-09 | 2017-12-15 | 广东欧珀移动通信有限公司 | Exposal model method to set up, device and mobile device |
CN107491685A (en) * | 2017-09-27 | 2017-12-19 | 维沃移动通信有限公司 | A kind of face identification method and mobile terminal |
CN108156387A (en) * | 2018-01-12 | 2018-06-12 | 深圳奥比中光科技有限公司 | Terminate the device and method of camera shooting automatically by detecting eye sight line |
CN108650454A (en) * | 2018-04-23 | 2018-10-12 | 联想(北京)有限公司 | A kind of information processing method, electronic equipment and computer readable storage medium |
CN109257649A (en) * | 2018-11-28 | 2019-01-22 | 维沃移动通信有限公司 | A kind of multimedia file producting method and terminal device |
CN110099219A (en) * | 2019-06-13 | 2019-08-06 | Oppo广东移动通信有限公司 | Panorama shooting method and Related product |
CN110139022A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of camera control method and wearable device, storage medium |
CN110139043A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of camera control method and wearable device of wearable device |
CN110139025A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of social user's recommended method and wearable device based on the behavior of taking pictures |
CN111771372A (en) * | 2018-12-21 | 2020-10-13 | 华为技术有限公司 | Method and device for determining camera shooting parameters |
CN112672053A (en) * | 2020-12-23 | 2021-04-16 | 深圳创维-Rgb电子有限公司 | Photographing method, photographing device, terminal equipment and computer-readable storage medium |
CN114173061A (en) * | 2021-12-13 | 2022-03-11 | 深圳万兴软件有限公司 | Multi-mode camera shooting control method and device, computer equipment and storage medium |
WO2022267861A1 (en) * | 2021-06-24 | 2022-12-29 | 荣耀终端有限公司 | Photographing method and device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101686273A (en) * | 2008-09-26 | 2010-03-31 | 深圳富泰宏精密工业有限公司 | Portable communication device |
CN103079034A (en) * | 2013-01-06 | 2013-05-01 | 北京百度网讯科技有限公司 | Perception shooting method and system |
CN103297696A (en) * | 2013-05-24 | 2013-09-11 | 北京小米科技有限责任公司 | Photographing method, photographing device and photographing terminal |
CN103685940A (en) * | 2013-11-25 | 2014-03-26 | 上海斐讯数据通信技术有限公司 | Method for recognizing shot photos by facial expressions |
CN103763476A (en) * | 2014-02-20 | 2014-04-30 | 北京百纳威尔科技有限公司 | Shooting parameter setting method and device |
CN103914150A (en) * | 2014-04-08 | 2014-07-09 | 小米科技有限责任公司 | Camera control method and device |
CN104065880A (en) * | 2014-06-05 | 2014-09-24 | 惠州Tcl移动通信有限公司 | Processing method and system for automatically taking pictures based on eye tracking technology |
US20140300538A1 (en) * | 2013-04-08 | 2014-10-09 | Cogisen S.R.L. | Method for gaze tracking |
CN104869318A (en) * | 2015-06-12 | 2015-08-26 | 上海鼎讯电子有限公司 | Automatic shooting subassembly |
CN105704369A (en) * | 2016-01-20 | 2016-06-22 | 努比亚技术有限公司 | Information-processing method and device, and electronic device |
CN105892685A (en) * | 2016-04-29 | 2016-08-24 | 广东小天才科技有限公司 | Method and device for searching subjects of intelligent equipment |
CN105979141A (en) * | 2016-06-03 | 2016-09-28 | 北京奇虎科技有限公司 | Image shooting method, device and mobile terminal |
CN106303193A (en) * | 2015-05-25 | 2017-01-04 | 展讯通信(天津)有限公司 | Image capturing method and device |
- 2017
- 2017-01-24 CN CN201710054919.2A patent/CN106713764A/en active Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101686273A (en) * | 2008-09-26 | 2010-03-31 | 深圳富泰宏精密工业有限公司 | Portable communication device |
CN103079034A (en) * | 2013-01-06 | 2013-05-01 | 北京百度网讯科技有限公司 | Perception shooting method and system |
US20140300538A1 (en) * | 2013-04-08 | 2014-10-09 | Cogisen S.R.L. | Method for gaze tracking |
CN103297696A (en) * | 2013-05-24 | 2013-09-11 | 北京小米科技有限责任公司 | Photographing method, photographing device and photographing terminal |
CN103685940A (en) * | 2013-11-25 | 2014-03-26 | 上海斐讯数据通信技术有限公司 | Method for recognizing shot photos by facial expressions |
CN103763476A (en) * | 2014-02-20 | 2014-04-30 | 北京百纳威尔科技有限公司 | Shooting parameter setting method and device |
CN103914150A (en) * | 2014-04-08 | 2014-07-09 | 小米科技有限责任公司 | Camera control method and device |
CN104065880A (en) * | 2014-06-05 | 2014-09-24 | 惠州Tcl移动通信有限公司 | Processing method and system for automatically taking pictures based on eye tracking technology |
CN106303193A (en) * | 2015-05-25 | 2017-01-04 | 展讯通信(天津)有限公司 | Image capturing method and device |
CN104869318A (en) * | 2015-06-12 | 2015-08-26 | 上海鼎讯电子有限公司 | Automatic shooting subassembly |
CN105704369A (en) * | 2016-01-20 | 2016-06-22 | 努比亚技术有限公司 | Information-processing method and device, and electronic device |
CN105892685A (en) * | 2016-04-29 | 2016-08-24 | 广东小天才科技有限公司 | Method and device for searching subjects of intelligent equipment |
CN105979141A (en) * | 2016-06-03 | 2016-09-28 | 北京奇虎科技有限公司 | Image shooting method, device and mobile terminal |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106973237A (en) * | 2017-05-25 | 2017-07-21 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN107333059A (en) * | 2017-06-26 | 2017-11-07 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN107483814A (en) * | 2017-08-09 | 2017-12-15 | 广东欧珀移动通信有限公司 | Exposal model method to set up, device and mobile device |
CN107491685A (en) * | 2017-09-27 | 2017-12-19 | 维沃移动通信有限公司 | A kind of face identification method and mobile terminal |
CN107491685B (en) * | 2017-09-27 | 2020-06-26 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN108156387A (en) * | 2018-01-12 | 2018-06-12 | 深圳奥比中光科技有限公司 | Terminate the device and method of camera shooting automatically by detecting eye sight line |
CN108650454A (en) * | 2018-04-23 | 2018-10-12 | 联想(北京)有限公司 | A kind of information processing method, electronic equipment and computer readable storage medium |
CN110139022A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of camera control method and wearable device, storage medium |
CN110139043A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of camera control method and wearable device of wearable device |
CN110139025A (en) * | 2018-09-29 | 2019-08-16 | 广东小天才科技有限公司 | A kind of social user's recommended method and wearable device based on the behavior of taking pictures |
CN109257649A (en) * | 2018-11-28 | 2019-01-22 | 维沃移动通信有限公司 | A kind of multimedia file producting method and terminal device |
CN109257649B (en) * | 2018-11-28 | 2021-12-24 | 维沃移动通信有限公司 | Multimedia file generation method and terminal equipment |
CN111771372A (en) * | 2018-12-21 | 2020-10-13 | 华为技术有限公司 | Method and device for determining camera shooting parameters |
CN110099219A (en) * | 2019-06-13 | 2019-08-06 | Oppo广东移动通信有限公司 | Panorama shooting method and Related product |
CN112672053A (en) * | 2020-12-23 | 2021-04-16 | 深圳创维-Rgb电子有限公司 | Photographing method, photographing device, terminal equipment and computer-readable storage medium |
WO2022267861A1 (en) * | 2021-06-24 | 2022-12-29 | 荣耀终端有限公司 | Photographing method and device |
CN114173061A (en) * | 2021-12-13 | 2022-03-11 | 深圳万兴软件有限公司 | Multi-mode camera shooting control method and device, computer equipment and storage medium |
CN114173061B (en) * | 2021-12-13 | 2023-09-29 | 深圳万兴软件有限公司 | Multi-mode camera shooting control method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106713764A (en) | Photographic method and mobile terminal | |
CN108234891B (en) | A kind of photographic method and mobile terminal | |
CN106131341B (en) | A kind of photographic method and mobile terminal | |
CN105827952B (en) | A kind of photographic method and mobile terminal removing specified object | |
CN106210526A (en) | A kind of image pickup method and mobile terminal | |
CN106527693A (en) | Application control method and mobile terminal | |
CN106231178B (en) | A kind of self-timer method and mobile terminal | |
CN105933607A (en) | Photographing effect adjusting method of mobile terminal and mobile terminal | |
CN107133005A (en) | The display methods and mobile terminal of a kind of flexible screen | |
CN106713780A (en) | Control method for flash lamp and mobile terminal | |
CN107197170A (en) | A kind of exposal control method and mobile terminal | |
CN106406710A (en) | Screen recording method and mobile terminal | |
CN106060406A (en) | Photographing method and mobile terminal | |
CN105915782A (en) | Picture obtaining method based on face identification, and mobile terminal | |
CN106973222A (en) | The control method and mobile terminal of a kind of Digital Zoom | |
CN107566717A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium | |
CN106534685A (en) | Panoramic image generation method and mobile terminal | |
CN106791393A (en) | A kind of image pickup method and mobile terminal | |
CN107317993A (en) | A kind of video call method and mobile terminal | |
CN107124543A (en) | A kind of image pickup method and mobile terminal | |
CN107222737B (en) | A kind of processing method and mobile terminal of depth image data | |
CN108632543A (en) | Method for displaying image, device, storage medium and electronic equipment | |
CN106778670A (en) | Gesture identifying device and recognition methods | |
CN106713659A (en) | Panoramic shooting method and mobile terminal | |
CN110245607A (en) | Eyeball tracking method and Related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170524 |