CN107147852B - Image photographing method, mobile terminal and computer-readable storage medium - Google Patents


Info

Publication number
CN107147852B
CN107147852B (application CN201710518229.8A)
Authority
CN
China
Prior art keywords
user, shooting, face image, image, expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710518229.8A
Other languages
Chinese (zh)
Other versions
CN107147852A (en)
Inventor
芮元乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710518229.8A priority Critical patent/CN107147852B/en
Publication of CN107147852A publication Critical patent/CN107147852A/en
Application granted granted Critical
Publication of CN107147852B publication Critical patent/CN107147852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The invention provides an image shooting method, a mobile terminal, and a computer-readable storage medium, the method comprising the following steps: determining a shooting angle of a user; calling a front camera to obtain a face image of the user; acquiring expression information of the user from the face image; and prompting the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard. The image shooting method of the embodiment of the invention provides a personalized shooting assistance service: the captured expression approaches the user's optimal expression and the shooting angle approaches the optimal shooting angle, so that a high-quality image is finally captured and the user's shooting experience is improved.

Description

Image photographing method, mobile terminal and computer-readable storage medium
Technical Field
The present invention relates to the field of image capturing technologies, and in particular, to an image capturing method, a mobile terminal, and a computer-readable storage medium.
Background
With the continuous development of electronic products, mobile terminals with shooting functions are increasingly popular. Users can shoot with a mobile terminal anytime and anywhere, which is convenient and fast, and the captured images are often uploaded to the Internet to share with others.
At present, when shooting an image, a mobile terminal can only provide shooting auxiliary lines, such as a nine-square (rule-of-thirds) grid, which help the user position the shot object on the screen. Such auxiliary lines can only correct the position of the shot object; they cannot provide personalized shooting assistance, such as correcting a poor shooting angle, so the quality of the captured image suffers and the user's shooting experience is affected.
Disclosure of Invention
The invention provides an image shooting method, a mobile terminal and a computer readable storage medium, which are used for solving the problem that the existing mobile terminal cannot provide personalized shooting auxiliary service for a user.
According to an aspect of the present invention, there is provided an image photographing method applied to a mobile terminal, the method including: determining a shooting angle of a user; calling a front camera to obtain a face image of the user; acquiring expression information of the user from the face image; and prompting the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard. The optimal shooting parameters include the correspondence between the optimal expression and the optimal shooting angle, and the face images historically shot by the user whose satisfaction reaches the preset standard include: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
According to another aspect of the present invention, there is provided a mobile terminal including: an angle determining module, configured to determine a shooting angle of a user; an image acquisition module, configured to call a front camera to acquire a face image of the user; a recognition module, configured to acquire expression information of the user from the face image; and a prompting module, configured to prompt the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard. The optimal shooting parameters include the correspondence between the optimal expression and the optimal shooting angle, and the face images historically shot by the user whose satisfaction reaches the preset standard include: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
According to still another aspect of the present invention, there is provided a mobile terminal including: a memory, a processor and an image capture program stored on the memory and executable on the processor, the image capture program when executed by the processor implementing the steps of any of the image capture methods as claimed.
According to still another aspect of the present invention, there is provided a computer-readable storage medium having stored thereon an image capturing program which, when executed by a processor, implements the steps of any one of the image capturing methods as set forth in the claims.
Compared with the prior art, the invention has the following advantages:
according to the image shooting method, the mobile terminal, and the computer-readable storage medium provided by the embodiments of the invention, optimal shooting parameters matching the user's shooting habits are trained in advance on face images historically shot by the user whose satisfaction reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and, using the correspondence between the optimal expression and the optimal shooting angle in the optimal shooting parameters, the user is prompted to adjust the current expression and/or the current shooting angle. This personalized shooting assistance brings the captured expression close to the optimal expression and the shooting angle to the optimal shooting angle, so a high-quality image is finally captured and the user's shooting experience is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating steps of an image capturing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of an image capturing method according to a second embodiment of the present invention;
fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 4 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 5 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention;
fig. 6 is a block diagram of a mobile terminal according to a sixth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
Referring to fig. 1, a flowchart illustrating steps of an image capturing method according to a first embodiment of the present invention is shown.
The image shooting method of the embodiment of the invention comprises the following steps:
step 101: the photographing angle of the user is determined.
The shooting angle is the angle of the mobile terminal's front camera relative to the user's eyes. A preferred way to determine it is:
detect, via a distance sensor built into the mobile terminal, the distance between the display screen of the mobile terminal and the user's face, as well as the horizontal distance between the mobile terminal and the user; compute the angle between the screen of the mobile terminal and the user's face from these two distances; and take this angle as the angle of the front camera relative to the user's eyes.
Step 102: and calling a front camera to acquire a face image of the user.
Since the shooting preview interface is already open, the front camera can acquire a face image of the user by capturing a single frame.
Step 103: and acquiring expression information of the user from the face image.
Facial expressions can be roughly divided into eight categories: excitement, happiness, surprise, sadness, fear, shyness, disgust, and anger. Generally, the muscle groups near the eyes and the mouth are the most expressive parts of the face.
In this step, the category of the current expression can be determined from the states of the muscle groups near the eyes and the mouth in the face image.
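As a toy illustration only, the region-based classification described above might be sketched as a rule-based classifier over two scalar features from the mouth and eye regions; a real system would use a trained model, and the thresholds, categories, and feature names here are invented for the sketch:

```python
def classify_expression(mouth_corner_lift: float, eye_openness: float) -> str:
    """Toy rule-based expression classifier.

    Both inputs are assumed to be normalized features extracted from the
    face image: mouth_corner_lift in [-1, 1] (positive = corners raised)
    and eye_openness in [0, 1]. Thresholds are illustrative.
    """
    if mouth_corner_lift > 0.5 and eye_openness > 0.7:
        return "excitement"   # big smile with wide-open eyes
    if mouth_corner_lift > 0.5:
        return "happiness"    # raised mouth corners alone
    if mouth_corner_lift < -0.5:
        return "sadness"      # drooping mouth corners
    return "neutral"          # fallback when no rule fires
```

A production recognizer would replace these hand-set rules with a model trained on labeled face images, but the interface (features in, category out) stays the same.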
Step 104: prompting the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard.
In a specific implementation, the user may be prompted to adjust only the current expression or only the current shooting angle; of course, the user may also be prompted to adjust both.
The optimal shooting parameters are obtained by analyzing the face images historically shot by the user whose satisfaction reaches the preset standard. They match the user's shooting habits and include the correspondence between the optimal expression and the optimal shooting angle for at least one expression category; of course, correspondences for multiple expression categories can also be obtained through training.
The face images historically shot by the user whose satisfaction reaches the preset standard include, but are not limited to: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
When providing the personalized shooting assistance service based on the optimal shooting parameters, the expression category of the user's current expression is determined first, together with the optimal expression and optimal shooting angle corresponding to that category in the optimal shooting parameters. The current expression is then compared with the optimal expression and an expression adjustment prompt is given, for example a personalized hint to raise the corners of the mouth slightly and lift the face slightly. The current shooting angle is likewise compared with the optimal shooting angle and a shooting angle adjustment prompt is given, for example a personalized hint to slightly increase the shooting angle. By adjusting the expression and the shooting angle according to these personalized prompts, the user's expression approaches the optimal expression and the shooting angle approaches the optimal shooting angle, so a high-quality image is captured.
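The compare-and-prompt logic above can be sketched minimally as follows; the prompt wording, tolerance value, and function name are illustrative, not specified by the patent:

```python
def adjustment_prompts(current_angle: float, best_angle: float,
                       current_expr: str, best_expr: str,
                       angle_tolerance: float = 2.0) -> list[str]:
    """Compare the current state with the trained optimum and produce
    personalized adjustment hints (thresholds and wording are assumptions)."""
    prompts = []
    if current_expr != best_expr:
        prompts.append(
            f"try a '{best_expr}' expression, e.g. raise the corners of your mouth"
        )
    # Only prompt for the angle when it is outside a small tolerance band.
    if current_angle < best_angle - angle_tolerance:
        prompts.append("slightly increase the shooting angle")
    elif current_angle > best_angle + angle_tolerance:
        prompts.append("slightly decrease the shooting angle")
    return prompts
```

When the current state already matches the optimum, the function returns no prompts, which corresponds to the case where the user needs no adjustment.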
According to the image shooting method provided by the embodiment of the invention, optimal shooting parameters matching the user's shooting habits are trained in advance on face images historically shot by the user whose satisfaction reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and, using the correspondence between the optimal expression and the optimal shooting angle in the optimal shooting parameters, the user is prompted to adjust the current expression and/or the current shooting angle. This personalized shooting assistance brings the captured expression close to the optimal expression and the shooting angle to the optimal shooting angle, so a high-quality image is finally captured and the user's shooting experience is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of an image capturing method according to a second embodiment of the present invention is shown.
The image shooting method of the embodiment of the invention specifically comprises the following steps:
step 201: and determining the face image to be trained with the user satisfaction degree reaching the preset standard from the face images shot by the user history.
An image whose user satisfaction reaches the preset standard may include, but is not limited to: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value; and a second face image stored locally whose viewing duration within the preset time period exceeds a preset duration.
Each face image corresponds to a shooting angle.
In a specific implementation, the preset time period may be set by a person skilled in the art according to actual requirements and is not specifically limited in the embodiment of the present invention; for example, it may be set to about 1 month, 2 months, or 3 months.
An image that has been uploaded to a social networking site and whose positive-rating rate reaches the preset value is, without doubt, an image the user is satisfied with. The social networking site may be QQ Zone (Qzone), WeChat Moments, and the like.
The longer an image is viewed, the more satisfied the user is with it; therefore a second face image stored locally whose viewing duration within the preset time period exceeds the preset duration can also be used as a face image to be trained. The preset duration can be set by a person skilled in the art according to actual requirements, for example 30 seconds, 60 seconds, or 90 seconds.
In a specific implementation, either only the first face images may be used as the face images to be trained, or both the first and second face images may be used. Using both improves the reliability of the training result, because more face images participate in the training.
When both the first and second face images are used as the face images to be trained, the same face image may appear in both groups; in that case, only one copy is kept.
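The selection and de-duplication described above could be sketched as follows, assuming each image record carries an identifier, a like count, and a viewing duration (all field names and the function name are hypothetical):

```python
def training_set(uploaded: list[dict], viewed: list[dict],
                 min_likes: int, min_view_seconds: float) -> list[dict]:
    """Build the set of face images to train on.

    `uploaded` holds images posted to a social site (with a "likes" count
    standing in for the positive-rating criterion); `viewed` holds locally
    stored images with a recorded viewing duration. Images appearing in
    both filtered groups are kept only once, keyed by "id".
    """
    first = {img["id"]: img for img in uploaded
             if img["likes"] >= min_likes}
    second = {img["id"]: img for img in viewed
              if img["view_seconds"] > min_view_seconds}
    merged = {**second, **first}  # duplicate ids collapse to one entry
    return list(merged.values())
```

The dictionary merge keyed on the image identifier implements the "keep only one copy" rule directly.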
Step 202: and performing expression recognition on each facial image to be trained.
Performing expression recognition on the face images identifies the expression corresponding to each face image, yielding a sequence of expressions.
Step 203: and determining the optimal shooting parameters based on the shooting angles and the expressions corresponding to the facial images to be trained.
When determining the optimal shooting parameters, the expression of each face image is assigned to an expression category, which may include, but is not limited to: excitement, happiness, surprise, sadness, fear, shyness, disgust, and anger.
For each expression category, the face images belonging to that category are collected, the optimal shooting angle and optimal expression are determined from the expressions and shooting angles of those face images, and a correspondence between the optimal shooting angle and the optimal expression is established. In this way, the optimal shooting angle and optimal expression corresponding to each expression category are determined.
A feasible way to determine the optimal shooting angle and optimal expression for a given expression category is: take the most frequently recurring expression among the recognized expressions of the face images as the optimal expression, and take the most frequently recurring shooting angle among the corresponding shooting angles as the optimal shooting angle.
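The most-frequent-value rule above can be sketched with a frequency count; the pair-based data layout and function name are assumptions for illustration:

```python
from collections import Counter

def best_parameters(samples: list[tuple[str, float]]) -> tuple[str, float]:
    """Pick the most frequent expression and the most frequent shooting
    angle among the face images of one expression category.

    `samples` is a list of (expression, shooting_angle) pairs, one per
    training image in the category.
    """
    expressions = Counter(expr for expr, _ in samples)
    angles = Counter(angle for _, angle in samples)
    best_expression, _ = expressions.most_common(1)[0]
    best_angle, _ = angles.most_common(1)[0]
    return best_expression, best_angle
```

Note that the expression and the angle are voted on independently, which matches the patent's description of taking the mode of each quantity separately.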
In the embodiment of the present invention, steps 201 to 203 show a single run of determining the optimal shooting parameters that match the user's shooting habits. In a specific implementation, the mobile terminal may periodically determine and update the optimal shooting parameters, replacing those determined in the previous period with the newly determined ones. The update period may be set by a person skilled in the art according to actual requirements and is not specifically limited in the embodiment of the present invention; for example, it may be 1 day, 3 days, or 1 week.
Step 204: the photographing angle of the user is determined.
A preferred way of determining the current shooting angle is as follows:
firstly, a distance sensor is called to detect the horizontal distance between a mobile terminal and a user and the distance between a human face and a display screen of the mobile terminal;
secondly, determining the current shooting angle based on the horizontal distance and the distance between the human face and the display screen of the mobile terminal.
For example, if the horizontal distance is X, the distance between the face and the display screen of the mobile terminal is Y, and the current shooting angle is A, then A is determined from cos A = X / Y.
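As a sketch of this computation, the relation cos A = X / Y can be inverted with an arc-cosine; the function name and the clamping against sensor noise are illustrative additions, not from the patent:

```python
import math

def shooting_angle_degrees(horizontal_distance: float,
                           face_to_screen_distance: float) -> float:
    """Estimate the camera-to-eyes angle A from the two sensor distances,
    using cos(A) = X / Y where X is the horizontal distance between the
    terminal and the user and Y is the face-to-screen distance."""
    if face_to_screen_distance <= 0:
        raise ValueError("face-to-screen distance must be positive")
    ratio = horizontal_distance / face_to_screen_distance
    # acos is only defined on [-1, 1]; clamp to guard against noisy readings.
    ratio = max(-1.0, min(1.0, ratio))
    return math.degrees(math.acos(ratio))
```

For instance, X = 0.2 m and Y = 0.4 m gives cos A = 0.5, i.e. an angle of 60 degrees.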
Step 205: and calling a front camera to acquire a face image of the user.
Step 206: and acquiring expression information of the user from the face image.
In this step, the category of the current expression can be determined from the acquired expression information, such as the states of the muscles near the eyes and the mouth in the face image.
Step 207: prompting the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard.
For the specific implementation of this step, reference may be made to the relevant description in the first embodiment, which is not described again in this embodiment of the present invention.
Step 208: adjusting the brightness of the fill light according to the light intensity of the current shooting environment and shooting the face image.
The user adjusts the expression and shooting angle according to the personalized prompts, so that the captured expression approaches the optimal expression and the shooting angle approaches the optimal shooting angle. Before the mobile terminal shoots the portrait image, it automatically adjusts the brightness of the fill light according to the light intensity of the current shooting environment, so that the captured image has moderate brightness and matches the user's shooting habits. Automatically adjusting the fill-light brightness during shooting ensures the brightness quality of the captured image.
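One way to realize such an adjustment is a simple inverse mapping from ambient light to fill-light level; the linear shape and the 400-lux ceiling are assumptions for the sketch, since the patent only states that the brightness is adapted to the ambient light:

```python
def fill_light_brightness(ambient_lux: float, max_lux: float = 400.0) -> float:
    """Map ambient light intensity to a fill-light brightness in [0, 1].

    The darker the scene, the brighter the fill light. Scenes at or above
    max_lux need no fill light at all.
    """
    level = 1.0 - min(ambient_lux, max_lux) / max_lux
    return round(level, 3)
```

A real device would likely use a calibrated, possibly non-linear curve tuned to its LED and sensor, but the monotonically decreasing mapping is the essential idea.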
Step 209: storing the correspondence among the captured face image, its shooting angle, and its shooting time.
The time period to which the image belongs can be determined by the shooting time.
If a stored face image is later uploaded by the user to a social networking site and its positive-rating rate reaches the preset value, or its viewing duration exceeds the preset duration, it is used as one of the face images to be trained the next time the optimal shooting parameters are trained.
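A minimal sketch of storing the image-angle-time correspondence follows; the dictionary schema and function name are illustrative, not taken from the patent:

```python
import time

def store_capture(store: dict, image_id: str, shooting_angle: float,
                  shot_at: float = None) -> None:
    """Record the (image, shooting angle, shooting time) correspondence so
    the image can later qualify as training data. The timestamp defaults
    to the current time when not supplied."""
    store[image_id] = {
        "shooting_angle": shooting_angle,
        "shot_at": shot_at if shot_at is not None else time.time(),
    }
```

Keeping the shooting time with each record is what lets a later training pass select only images from the preset time period.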
According to the image shooting method provided by the embodiment of the invention, optimal shooting parameters matching the user's shooting habits are trained in advance on face images historically shot by the user whose satisfaction reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and, using the correspondence between the optimal expression and the optimal shooting angle in the optimal shooting parameters, the user is prompted to adjust the current expression and/or the current shooting angle. This personalized shooting assistance brings the captured expression close to the optimal expression and the shooting angle to the optimal shooting angle, so a high-quality image is finally captured and the user's shooting experience is improved.
EXAMPLE III
Referring to fig. 3, a block diagram of a mobile terminal according to a third embodiment of the present invention is shown.
The mobile terminal of the embodiment of the invention may comprise: an angle determining module 301, configured to determine a shooting angle of a user; an image acquisition module 302, configured to call a front camera to acquire a face image of the user; a recognition module 303, configured to obtain expression information of the user from the face image; and a prompting module 304, configured to prompt the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard, where the optimal shooting parameters include the correspondence between the optimal expression and the optimal shooting angle, and the face images historically shot by the user whose satisfaction reaches the preset standard include: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
According to the mobile terminal provided by the embodiment of the invention, optimal shooting parameters matching the user's shooting habits are trained in advance on face images historically shot by the user whose satisfaction reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and, using the correspondence between the optimal expression and the optimal shooting angle in the optimal shooting parameters, the user is prompted to adjust the current expression and/or the current shooting angle. This personalized shooting assistance brings the captured expression close to the optimal expression and the shooting angle to the optimal shooting angle, so a high-quality image is finally captured and the user's shooting experience is improved.
Example four
Referring to fig. 4, a block diagram of a mobile terminal according to a fourth embodiment of the present invention is shown.
The mobile terminal of the embodiment of the present invention further optimizes the mobile terminal of the third embodiment, and the optimized mobile terminal includes: an angle determining module 401, configured to determine a shooting angle of a user; an image obtaining module 402, configured to invoke a front camera to obtain a face image of the user; an identifying module 403, configured to obtain expression information of the user from the face image; and a prompting module 404, configured to prompt the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction reaches a preset standard, where the optimal shooting parameters include the correspondence between the optimal expression and the optimal shooting angle, and the face images historically shot by the user whose satisfaction reaches the preset standard include: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
Preferably, the mobile terminal in the embodiment of the present invention may further include: the image determining module 405 is configured to determine a to-be-trained face image of which the user satisfaction reaches a preset standard from face images shot by a user history, where the face images correspond to shooting angles; the training module 406 is used for performing expression recognition on each facial image to be trained; and determining the optimal shooting parameters based on the shooting angles and the expressions corresponding to the facial images to be trained.
Preferably, the face images shot by the user history and having the satisfaction degree reaching the preset standard further include: and the second face image is locally stored, and the checked time length of the second face image exceeds the preset time length in the preset time period. Preferably, the angle determining module 401 includes: the calling sub-module 4011 is configured to call a distance sensor to detect a horizontal distance between the mobile terminal and the user and a distance between a human face and a display screen of the mobile terminal; an angle determination sub-module 4012, configured to determine a shooting angle of the user based on the horizontal distance and the distance.
Preferably, the mobile terminal of the embodiment of the present invention may further include: a shooting module 407, configured to adjust the brightness of the fill light according to the light intensity of the current shooting environment and shoot the face image after the prompting module 404 prompts the user to adjust the current expression and/or the current shooting angle; and a storage module 408, configured to store the correspondence among the face image, its shooting angle, and its shooting time.
The mobile terminal of the embodiment of the present invention implements the corresponding image capturing methods of the first and second embodiments above and has the corresponding beneficial effects of those method embodiments, which are not repeated here.
EXAMPLE five
Referring to fig. 5, a block diagram of a mobile terminal according to a fifth embodiment of the present invention is shown.
The mobile terminal 700 of the embodiment of the present invention includes: at least one processor 701, a memory 702, at least one network interface 704, and other user interfaces 703. The various components in the mobile terminal 700 are coupled together by a bus system 705, which is used to enable communications among these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled in FIG. 5 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, track ball, touch pad, or touch screen, etc.).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described in this embodiment is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, by calling a program or instructions stored in the memory 702, specifically a program or instructions stored in the application 7022, the processor 701 is configured to: determine the shooting angle of the user; call a front camera to obtain a face image of the user; acquire expression information of the user from the face image; and prompt the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction degree reaches a preset standard, wherein the optimal shooting parameters include a correspondence between optimal expressions and optimal shooting angles, and the face images historically shot by the user whose satisfaction degree reaches the preset standard include: first face images that were shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
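The processing flow described above — determine the shooting angle, capture a face image, recognize the expression, then prompt the user against trained optimal parameters — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the data structure mapping expressions to angles, the tolerance threshold, and the prompt wording are all assumptions.

```python
# Hypothetical sketch of the prompting step. Optimal shooting parameters
# are modeled as a mapping from expression label to the best shooting
# angle (in degrees) learned from the user's well-received photos.

ANGLE_TOLERANCE = 5.0  # degrees; an assumed threshold, not from the patent


def prompt_adjustments(current_expression, current_angle, optimal_params):
    """Return user-facing prompts comparing the current expression and
    shooting angle against the trained expression/angle correspondence."""
    prompts = []
    if current_expression in optimal_params:
        best_expr = current_expression
    else:
        # Fall back to the trained expression whose angle is closest.
        best_expr = min(optimal_params,
                        key=lambda e: abs(optimal_params[e] - current_angle))
        prompts.append(f"try a '{best_expr}' expression")
    best_angle = optimal_params[best_expr]
    if abs(current_angle - best_angle) > ANGLE_TOLERANCE:
        direction = "raise" if best_angle > current_angle else "lower"
        prompts.append(f"{direction} the camera toward {best_angle:.0f} degrees")
    return prompts


params = {"smile": 30.0, "neutral": 15.0}
print(prompt_adjustments("smile", 50.0, params))
# → ['lower the camera toward 30 degrees']
```

The same comparison could equally drive an on-screen overlay rather than text prompts; the patent leaves the prompt modality open.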
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this embodiment of the invention may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this embodiment of the invention. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the optimal shooting parameters are trained by the processor 701 in the following manner: determining, from face images historically shot by the user, the face images to be trained whose user satisfaction reaches a preset standard, wherein each face image corresponds to a shooting angle; performing expression recognition on each face image to be trained; and determining the optimal shooting parameters based on the shooting angles and expressions corresponding to the face images to be trained.
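One way to realize this training step — purely illustrative, since the patent does not fix a concrete algorithm — is to filter the historical images by a satisfaction threshold, group them by recognized expression, and derive one representative angle per expression. The satisfaction score, the 0.8 threshold, and averaging as the aggregation rule are all assumptions.

```python
from collections import defaultdict

SATISFACTION_THRESHOLD = 0.8  # an assumed "preset standard"


def train_optimal_params(history):
    """history: iterable of (expression, shooting_angle, satisfaction)
    tuples for the user's historically shot face images. Returns
    {expression: optimal_angle}, averaging the angles of images whose
    satisfaction reaches the preset standard."""
    buckets = defaultdict(list)
    for expression, angle, satisfaction in history:
        if satisfaction >= SATISFACTION_THRESHOLD:  # images to be trained
            buckets[expression].append(angle)
    return {expr: sum(angles) / len(angles) for expr, angles in buckets.items()}


shots = [("smile", 28.0, 0.9), ("smile", 32.0, 0.85),
         ("neutral", 10.0, 0.95), ("smile", 60.0, 0.3)]
print(train_optimal_params(shots))
# → {'smile': 30.0, 'neutral': 10.0}
```

In the patent's terms, the satisfaction signal could come from the positive-rating rate of images uploaded to a social networking site, or from the viewed duration of locally stored images.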
Optionally, the face images historically shot by the user whose satisfaction degree reaches the preset standard further include: a second face image that is stored locally and whose viewed duration within the preset time period exceeds a preset duration.
Optionally, when determining the shooting angle of the user, the processor 701 is specifically configured to: call a distance sensor to detect the horizontal distance between the mobile terminal and the user and the distance between the user's face and the display screen of the mobile terminal; and determine the shooting angle of the user based on the horizontal distance and the face-to-screen distance.
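Given the two distances from the distance sensor, the shooting angle could, for example, be recovered with basic trigonometry, treating the two distances as the legs of a right triangle. This trigonometric model is an assumption; the patent only states that the angle is determined from the two distances, without specifying the geometry.

```python
import math


def shooting_angle(horizontal_distance, face_to_screen_distance):
    """Estimate the camera's angle (in degrees) relative to the face,
    treating the horizontal terminal-to-user distance and the
    face-to-screen distance as two legs of a right triangle."""
    if face_to_screen_distance <= 0:
        raise ValueError("face-to-screen distance must be positive")
    return math.degrees(math.atan2(horizontal_distance,
                                   face_to_screen_distance))


# Equal legs give a 45-degree shooting angle.
print(round(shooting_angle(0.3, 0.3), 1))  # → 45.0
```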
Optionally, after prompting the user to adjust the current expression and/or the current shooting angle, the processor 701 is further configured to: adjust the brightness of a fill light according to the light intensity of the current shooting environment and shoot the face image; and store the correspondence among the face image, the shooting angle of the face image, and the shooting time.
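The fill-light adjustment can be sketched as a simple mapping from measured ambient illuminance to lamp brightness: darker scenes get a brighter lamp. The lux breakpoint and the linear mapping below are invented for illustration; the patent does not specify the adjustment curve.

```python
def fill_light_brightness(ambient_lux):
    """Map the measured ambient light intensity (lux) to a fill-light
    brightness level in [0.0, 1.0]. Above an assumed 300-lux cutoff the
    scene is bright enough and the lamp stays off; below it, brightness
    rises linearly as the scene darkens."""
    if ambient_lux < 0:
        raise ValueError("illuminance cannot be negative")
    if ambient_lux >= 300:  # bright scene: no fill light needed
        return 0.0
    return round(1.0 - ambient_lux / 300.0, 2)


print(fill_light_brightness(0))    # → 1.0 (full brightness in darkness)
print(fill_light_brightness(150))  # → 0.5
print(fill_light_brightness(600))  # → 0.0 (bright scene, lamp off)
```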
The mobile terminal 700 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
According to the mobile terminal provided by this embodiment of the invention, optimal shooting parameters matched to the user's shooting habits are trained in advance based on face images historically shot by the user whose satisfaction degree reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and the user is prompted to adjust the current expression and/or current shooting angle according to the correspondence between expressions and optimal shooting angles in the optimal shooting parameters, providing a personalized shooting assistance service. The shooting expression thus approaches the optimal expression and the shooting angle reaches the optimal shooting angle, so that a high-quality image is finally shot and the user's shooting experience is improved.
EXAMPLE six
Referring to fig. 6, a block diagram of a mobile terminal according to a sixth embodiment of the present invention is shown.
The mobile terminal in the embodiment of the present invention may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal in FIG. 6 includes a Radio Frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a processor 860, an audio circuit 870, a WiFi (Wireless Fidelity) module 880, and a power supply 890.
The input unit 830 may be used to receive numeric or character information input by the user and to generate signal inputs related to user settings and function control of the mobile terminal. Specifically, in the embodiment of the present invention, the input unit 830 may include a touch panel 831. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by the user on or near it (e.g., operations performed by the user on the touch panel 831 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 860, and can also receive and execute commands sent by the processor 860. In addition, the touch panel 831 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal. The display unit 840 may include a display panel 841, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
It should be noted that the touch panel 831 can cover the display panel 841 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is passed to the processor 860 to determine the type of the touch event, and the processor 860 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged in any manner that distinguishes them, such as up-down or left-right arrangement. The application interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application. The application interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons such as the phone book icon.
The processor 860 is a control center of the mobile terminal, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the first memory 821 and calling data stored in the second memory 822, thereby performing overall monitoring of the mobile terminal. Optionally, processor 860 may include one or more processing units.
In the embodiment of the present invention, by calling a software program and/or module stored in the first memory 821 and/or data stored in the second memory 822, the processor 860 is configured to: determine the shooting angle of the user; call a front camera to obtain a face image of the user; acquire expression information of the user from the face image; and prompt the user to adjust the current expression and/or the current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction degree reaches a preset standard, wherein the optimal shooting parameters include a correspondence between optimal expressions and optimal shooting angles, and the face images historically shot by the user whose satisfaction degree reaches the preset standard include: first face images that were shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
Optionally, the optimal shooting parameters are trained by the processor 860 in the following manner: determining, from face images historically shot by the user, the face images to be trained whose user satisfaction reaches a preset standard, wherein each face image corresponds to a shooting angle; performing expression recognition on each face image to be trained; and determining the optimal shooting parameters based on the shooting angles and expressions corresponding to the face images to be trained.
Optionally, the face images historically shot by the user whose satisfaction degree reaches the preset standard further include: a second face image that is stored locally and whose viewed duration within the preset time period exceeds a preset duration.
Optionally, when determining the shooting angle of the user, the processor 860 is specifically configured to: call a distance sensor to detect the horizontal distance between the mobile terminal and the user and the distance between the user's face and the display screen of the mobile terminal; and determine the shooting angle of the user based on the horizontal distance and the face-to-screen distance.
Optionally, after prompting the user to adjust the current expression and/or the current shooting angle, the processor 860 is further configured to: adjust the brightness of a fill light according to the light intensity of the current shooting environment and shoot the face image; and store the correspondence among the face image, the shooting angle of the face image, and the shooting time.
According to the mobile terminal provided by this embodiment of the invention, optimal shooting parameters matched to the user's shooting habits are trained in advance based on face images historically shot by the user whose satisfaction degree reaches a preset standard. When an image is shot, the user's current shooting angle and current expression are determined, and the user is prompted to adjust the current expression and/or current shooting angle according to the correspondence between expressions and optimal shooting angles in the optimal shooting parameters, providing a personalized shooting assistance service. The shooting expression thus approaches the optimal expression and the shooting angle reaches the optimal shooting angle, so that a high-quality image is finally shot and the user's shooting experience is improved.
The embodiment of the invention also provides a mobile terminal, which comprises: the image capturing system comprises a memory, a processor and an image capturing program stored on the memory and capable of running on the processor, wherein the image capturing program realizes the steps of any one of the image capturing methods shown in the invention when being executed by the processor.
The embodiment of the invention also provides a computer readable storage medium, wherein an image shooting program is stored on the computer readable storage medium, and the image shooting program is executed by a processor to realize the steps of any one of the image shooting methods shown in the invention.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The image capture schemes provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein, and the above descriptions of specific languages are provided to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the image capturing method, the mobile terminal, and the computer readable storage medium according to the embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc. does not indicate any ordering. These words may be interpreted as names.

Claims (12)

1. An image shooting method is applied to a mobile terminal, and is characterized by comprising the following steps:
determining a shooting angle of a user;
calling a front camera to obtain a face image of a user;
acquiring expression information of a user from the face image;
based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction degree reaches a preset standard, prompting the user to adjust a current expression and/or a current shooting angle, wherein the optimal shooting parameters comprise a correspondence between an optimal expression, corresponding to the current expression among multiple types of expressions, and an optimal shooting angle, and the face images historically shot by the user whose satisfaction degree reaches the preset standard comprise: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
2. The method of claim 1, wherein the optimal shooting parameters are trained by:
determining, from face images historically shot by the user, the face images to be trained whose user satisfaction reaches a preset standard, wherein each face image corresponds to a shooting angle;
performing expression recognition on each facial image to be trained;
and determining the optimal shooting parameters based on the shooting angles and the expressions corresponding to the facial images to be trained.
3. The method of claim 1, wherein the face images historically shot by the user whose satisfaction degree reaches the preset standard further comprise:
a second face image that is stored locally and whose viewed duration within the preset time period exceeds a preset duration.
4. The method of claim 1, wherein the step of determining the user's shooting angle comprises:
calling a distance sensor to detect a horizontal distance between a mobile terminal and a user and a distance between a human face and a display screen of the mobile terminal;
and determining the shooting angle of the user based on the horizontal distance and the distance.
5. The method of claim 1, wherein after the step of prompting the user to adjust the current expression and/or current camera angle, the method further comprises:
adjusting the brightness of the light supplement lamp to shoot the face image according to the light intensity of the current shooting environment;
and storing the corresponding relation among the face image, the shooting angle of the face image and the shooting time.
6. A mobile terminal, comprising:
the angle determining module is used for determining the shooting angle of the user;
the image acquisition module is used for calling the front camera to acquire a face image of the user;
the recognition module is used for acquiring expression information of the user from the face image;
the prompting module is used for prompting the user to adjust a current expression and/or a current shooting angle based on optimal shooting parameters determined by training on face images historically shot by the user whose satisfaction degree reaches a preset standard, wherein the optimal shooting parameters comprise a correspondence between an optimal expression, corresponding to the current expression among multiple types of expressions, and an optimal shooting angle, and the face images historically shot by the user whose satisfaction degree reaches the preset standard comprise: a first face image shot by the user within a preset time period, uploaded to a social networking site, and whose positive-rating rate reaches a preset value.
7. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the image determining module is used for determining a face image to be trained, the user satisfaction of which reaches a preset standard, from face images shot by a user history, wherein the face images correspond to a shooting angle;
the training module is used for carrying out expression recognition on each facial image to be trained; and determining the optimal shooting parameters based on the shooting angles and the expressions corresponding to the facial images to be trained.
8. The mobile terminal of claim 6, wherein the face images historically shot by the user whose satisfaction degree reaches the preset standard further comprise:
a second face image that is stored locally and whose viewed duration within the preset time period exceeds a preset duration.
9. The mobile terminal of claim 6, wherein the angle determining module comprises:
the calling submodule is used for calling a distance sensor to detect the horizontal distance between the mobile terminal and a user and the distance between a human face and a display screen of the mobile terminal;
and the angle determining submodule is used for determining the shooting angle of the user based on the horizontal distance and the distance.
10. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the shooting module is used for adjusting the brightness of the light supplement lamp to shoot the face image according to the light intensity of the current shooting environment after the prompt module prompts a user to adjust the current expression and/or the current shooting angle;
and the storage module is used for storing the face image, the shooting angle of the face image and the shooting time.
11. A mobile terminal, comprising: memory, a processor and an image capturing program stored on the memory and executable on the processor, the image capturing program realizing the steps of the image capturing method as claimed in any one of claims 1 to 5 when executed by the processor.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an image capturing program which, when executed by a processor, implements the steps of the image capturing method as claimed in any one of claims 1 to 5.
CN201710518229.8A 2017-06-29 2017-06-29 Image photographing method, mobile terminal and computer-readable storage medium Active CN107147852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710518229.8A CN107147852B (en) 2017-06-29 2017-06-29 Image photographing method, mobile terminal and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN107147852A CN107147852A (en) 2017-09-08
CN107147852B true CN107147852B (en) 2019-12-31

Family

ID=59785324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710518229.8A Active CN107147852B (en) 2017-06-29 2017-06-29 Image photographing method, mobile terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN107147852B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685755B (en) * 2017-10-17 2023-04-18 阿里巴巴集团控股有限公司 Electronic photo generation method, device, equipment and computer storage medium
CN108055461B (en) * 2017-12-21 2020-01-14 Oppo广东移动通信有限公司 Self-photographing angle recommendation method and device, terminal equipment and storage medium
CN108229369B (en) * 2017-12-28 2020-06-02 Oppo广东移动通信有限公司 Image shooting method and device, storage medium and electronic equipment
CN108363750B (en) * 2018-01-29 2022-01-04 Oppo广东移动通信有限公司 Clothing recommendation method and related products
CN108174108A (en) * 2018-03-08 2018-06-15 广州三星通信技术研究有限公司 The method and apparatus and mobile terminal for effect of taking pictures are adjusted in the terminal
CN111415301B (en) * 2019-01-07 2024-03-12 珠海金山办公软件有限公司 Image processing method, device and computer readable storage medium
CN109840515B (en) * 2019-03-06 2022-01-25 百度在线网络技术(北京)有限公司 Face posture adjusting method and device and terminal
CN110248450B (en) * 2019-04-30 2021-11-12 广州富港生活智能科技有限公司 Method and device for controlling light by combining people
CN113259581B (en) * 2020-02-13 2022-11-04 深圳市万普拉斯科技有限公司 Photographing prompting method and device, computer equipment and storage medium
CN112004022B (en) * 2020-08-26 2022-03-22 三星电子(中国)研发中心 Method and device for generating shooting prompt information
CN113674234A (en) * 2021-08-13 2021-11-19 扬州大学 Pressure damage detection method and system
CN114173061B (en) * 2021-12-13 2023-09-29 深圳万兴软件有限公司 Multi-mode camera shooting control method and device, computer equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
JP2004151812A (en) * 2002-10-29 2004-05-27 Yokogawa Electric Corp Face image processor
CN103971131A (en) * 2014-05-13 2014-08-06 华为技术有限公司 Preset facial expression recognition method and device
CN104394315A (en) * 2014-11-07 2015-03-04 深圳市金立通信设备有限公司 A method for photographing an image
CN105578027A (en) * 2015-07-28 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN105677025A (en) * 2015-12-31 2016-06-15 宇龙计算机通信科技(深圳)有限公司 Terminal application starting method and device, and terminal
CN106101550A (en) * 2016-07-15 2016-11-09 珠海市魅族科技有限公司 Electronic equipment and filming control method thereof and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CA2925865A1 (en) * 2013-10-04 2015-04-09 Honda Motor Co., Ltd. In-vehicle picture storage device for motorcycle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant