CN107124543B - Shooting method and mobile terminal


Info

Publication number
CN107124543B
CN107124543B (application CN201710090640.XA)
Authority
CN
China
Prior art keywords
face
image
target
target image
mobile terminal
Prior art date
Legal status
Active
Application number
CN201710090640.XA
Other languages
Chinese (zh)
Other versions
CN107124543A (en)
Inventor
卢异龄
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201710090640.XA
Publication of CN107124543A
Application granted
Publication of CN107124543B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces

Abstract

The embodiment of the invention provides a shooting method applied to a mobile terminal with a camera, which specifically comprises the following steps: acquiring a frame of preview image collected by the camera of the mobile terminal; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and storing the first target image and the second target image in association. The embodiment of the invention can amplify faces with a long shooting distance, clearly display the facial features of the target person, and improve the imaging effect of the image; it also provides both the unprocessed second target image and the amplified first target image for the user to choose from, enriching the user's options.

Description

Shooting method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method and a mobile terminal.
Background
With the rapid development of electronic device technologies, mobile terminals such as smart phones and tablet computers are becoming increasingly popular, and the number of camera pixels on these terminals keeps rising, so taking group photos or group selfies with a mobile terminal to record daily life has gradually become a habit.
When an existing mobile terminal shoots a group photo of multiple people, if a person is far from the camera, that person's head in the captured image is small, and the facial features may not even be discernible, which distorts the image. Moreover, because the camera lens of the mobile terminal is an optical lens, it suffers from lens distortion, so the captured image is also distorted, and the degree of distortion increases gradually from the center toward the edge of the image; the heads at the edge of a group photo are therefore noticeably distorted. In addition, when a group photo is taken, the ambient lighting differs from person to person, so the brightness of each person's head in the captured image also differs; if the ambient light at someone's position is dark when the group photo is taken, that person's head in the captured image will also be dark, and it may even be difficult to recognize the facial features from it, which distorts the picture.
Disclosure of Invention
The present invention provides a shooting method and a mobile terminal, aiming to solve the problems that, when the shooting distance is long, the head of a distant person in the image is small and difficult to recognize, and the image is easily distorted.
In a first aspect, an embodiment of the present invention provides a shooting method, where the method is applied to a mobile terminal, and the method includes:
acquiring a frame of preview image acquired by a camera of the mobile terminal;
acquiring the shooting distance and the face size value of each face in the preview image;
determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face;
amplifying the at least one target face to generate a first target image;
and generating a second target image from the preview image, and performing associated storage on the first target image and the second target image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the preview image acquisition module is used for acquiring a frame of preview image acquired by a camera of the mobile terminal;
the face information acquisition module is used for acquiring the shooting distance and the face size value of each face in the preview image;
the target face determining module is used for determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face;
the target face amplification module is used for amplifying the at least one target face to generate a first target image;
and the image storage module is used for generating a second target image from the preview image and storing the first target image and the second target image in a correlation manner.
In this way, in the embodiment of the present invention, a frame of preview image collected by the camera of the mobile terminal is obtained; the shooting distance and the face size value of each face in the preview image are acquired; at least one target face to be amplified in the preview image is determined based on the shooting distance and the face size value of each face; the at least one target face is amplified to generate a first target image; and a second target image is generated from the preview image, and the first target image and the second target image are stored in association. Faces with a long shooting distance can thereby be amplified so that the face size value of each face in the first target image is appropriate, the facial features of the target person are clearly displayed, and the imaging effect of the image is improved. In addition, the embodiment of the invention can provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a photographing method according to a first embodiment of the present invention;
FIG. 2a is a flowchart of a photographing method according to a second embodiment of the present invention;
FIG. 2b is a flowchart of a photographing method according to a second embodiment of the present invention;
FIG. 2c is a flowchart of a photographing method according to a second embodiment of the present invention;
FIG. 2d is a flowchart of a photographing method according to a second embodiment of the present invention;
Fig. 3 is a flowchart of a method for obtaining a shooting distance according to a second embodiment of the present invention;
FIG. 4 is a flowchart of a method for determining a target face according to a second embodiment of the present invention;
FIG. 5 is a flowchart of a method for magnifying a target face according to a second embodiment of the present invention;
fig. 6a is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 6b is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 6c is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 6d is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 6e is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 7 is a block diagram of a structure of a face information obtaining module in the third embodiment of the present invention;
fig. 8 is a block diagram of a target face determining module according to a third embodiment of the present invention;
FIG. 9 is a block diagram of a target face magnification module in an embodiment of the present invention;
fig. 10 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 11 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The shooting method and the mobile terminal provided by the present invention are described in detail below through several specific embodiments.
Example one
Referring to fig. 1, a flowchart of a shooting method in a first embodiment of the present invention is shown, and is applied to a mobile terminal with a camera, where the method specifically includes the following steps:
step 101: acquiring a frame of preview image acquired by a camera of the mobile terminal.
In the embodiment of the invention, the mobile terminal can shoot the target person through the camera and acquire the preview image of the target person. For example, the mobile terminal can take a picture of a plurality of people through the camera and acquire a preview image of the plurality of people. In step 101, after entering the shooting preview mode, the camera starts to acquire each frame of preview image, and acquires one frame of preview image acquired by the camera of the mobile terminal.
Step 102: and acquiring the shooting distance and the face size value of each face in the preview image.
In the embodiment of the present invention, an infrared distance measuring sensor arranged on the mobile terminal measures the shooting distance of each face in the preview image, and the face size value of each face in the preview image is also acquired.
Step 103: and determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face.
In the embodiment of the present invention, based on the shooting distance and the face size value of each face in the preview image, the faces can be examined in turn. When the shooting distance of a face in the preview image exceeds a preset distance threshold and its face size value is smaller than the standard value, that face is determined to be a target face to be amplified. By judging and comparing the shooting distance and the face size value of each face in the preview image of the group photo in turn, at least one target face to be amplified in the preview image can be determined.
For example, in a preview image of a multiple-person group photograph, the photographing distance of the 1 st face is a1, the face size value is B1, first, it is determined whether the photographing distance a1 is greater than a preset distance threshold a0, if the photographing distance a1 is greater than a preset distance threshold a0, the face size value B1 of the 1 st face is compared with a previously stored standard size value B0 of the face corresponding to the photographing distance a1, and if the face size value B1 is less than the standard size value B0, the 1 st face is determined as a target face.
Step 104: amplifying the at least one target face to generate a first target image
In the embodiment of the present invention, for the at least one target face, a pre-stored standard size value of a face corresponding to the shooting distance of the at least one target face may be obtained, and the at least one target face is amplified with reference to the standard size value to obtain a first target face image.
Step 105: and generating a second target image from the preview image, and performing associated storage on the first target image and the second target image.
In the embodiment of the present invention, a second target image may further be generated from the preview image obtained in step 101, and the first target image and the second target image are stored in association. The embodiment of the present invention can thus finally provide two images: one is the second target image generated from the preview image obtained in step 101, and the other is the first target image obtained by amplifying the preview image. Because the first target image and the second target image are associated, a user can select at least one of the two images as needed, which enriches the user's choices.
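The description does not prescribe how the association between the two images is recorded. As a minimal sketch in Python (the shared base name and the JSON side-car convention are assumptions for illustration only), the associated storage of step 105 might look like this:

```python
import json
import time
from pathlib import Path

def save_associated(first_target_image: bytes, second_target_image: bytes,
                    output_dir: str = "DCIM") -> Path:
    """Store the amplified image and the unprocessed image under a shared base
    name and record the association in a side-car JSON file (illustrative
    convention only; the description does not prescribe a format)."""
    base = Path(output_dir)
    base.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    first_path = base / f"IMG_{stamp}_amplified.jpg"   # first target image
    second_path = base / f"IMG_{stamp}_original.jpg"   # second target image
    first_path.write_bytes(first_target_image)
    second_path.write_bytes(second_target_image)
    link_path = base / f"IMG_{stamp}.json"
    link_path.write_text(json.dumps({
        "first_target_image": first_path.name,
        "second_target_image": second_path.name,
    }))
    return link_path
```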
In this way, in the embodiment of the present invention, a frame of preview image collected by the camera of the mobile terminal is obtained; the shooting distance and the face size value of each face in the preview image are acquired; at least one target face to be amplified in the preview image is determined based on the shooting distance and the face size value of each face; the at least one target face is amplified to generate a first target image; and a second target image is generated from the preview image, and the first target image and the second target image are stored in association. Faces with a long shooting distance can thereby be amplified so that the face size value of each face in the first target image is appropriate, the facial features of the target person are clearly displayed, and the imaging effect of the image is improved. In addition, the embodiment of the invention can provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
Example two
Referring to fig. 2a, a flowchart of a shooting method in the second embodiment of the present invention is shown, and is applied to a mobile terminal with a camera, where the method specifically includes the following steps:
step 201: acquiring a frame of preview image acquired by a camera of the mobile terminal.
In the embodiment of the invention, the mobile terminal can shoot the target person through the camera and acquire the preview image of the target person. After the mobile terminal enters a shooting preview mode, the camera starts to collect each frame of preview image, and one frame of preview image collected by the camera of the mobile terminal is obtained.
For example, consider a scenario in which a mobile terminal user takes a group photo of 50 people whose distances from the mobile terminal differ, some near and some far. After the mobile terminal enters the shooting preview mode, the camera starts to collect preview images of the 50 people, and one frame of the preview image of the 50 people collected by the camera of the mobile terminal is obtained.
Step 202: and acquiring the shooting distance and the face size value of each face in the preview image.
In the embodiment of the present invention, an infrared distance measuring sensor arranged on the mobile terminal measures the shooting distance of each face in the preview image, and the face size value of each face in the preview image is acquired.
For example, if the preview image is a group photo of 50 persons, the shooting distances of the 50 persons from the mobile terminal differ, and the size of each face in the preview image also differs. The shooting distances a1 and a2 … a50 of the 50 persons can be respectively measured by the infrared distance measuring sensor arranged on the mobile terminal. The image area where each face is located is identified through face recognition, and the face size values B1 and B2 … B50 of the 50 faces can be obtained by calculating the area of the image region where each face is located.
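As a minimal sketch of how a face size value could be represented (the bounding-box structure and the pixel-to-centimetre scale are assumptions; the description only requires a lateral distance L and a longitudinal distance H per face):

```python
from dataclasses import dataclass

@dataclass
class FaceRegion:
    x: int       # left edge of the detected face region, in pixels
    y: int       # top edge of the detected face region, in pixels
    width: int   # lateral extent L of the face region, in pixels
    height: int  # longitudinal extent H of the face region, in pixels

def face_size_value(region: FaceRegion, pixels_per_cm: float) -> tuple[float, float]:
    """Return the face size value as (L, H) in centimetres, assuming a known
    pixel-to-centimetre scale for the preview image."""
    return (region.width / pixels_per_cm, region.height / pixels_per_cm)
```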
Referring to fig. 3, the step of obtaining the shooting distance of each face in the preview image may specifically include the following steps:
step 2021: and carrying out face recognition on the preview image, and determining a face image of each face.
Specifically, the mobile terminal may perform face recognition on the preview image, and determine a face image of each face.
For example, if the preview image is a group of 50 persons, the mobile terminal may perform face recognition on the preview image, determine face images of each of the 50 persons, and obtain 50 face images.
Step 2022: and sending an infrared ranging signal to the face image of each face through an infrared sensor of the mobile terminal.
In this embodiment, after the face image of each face in the preview image is determined, the infrared sensor on the mobile terminal may send an infrared ranging signal to the face image of each face.
For example, if the preview image is a merged photograph of 50 people, after 50 face images are determined, the infrared sensor on the mobile terminal may send an infrared ranging signal, which is usually infrared light, to the 50 face images.
Step 2023: and receiving a feedback signal reflected by the infrared ranging signal.
Specifically, the mobile terminal may receive a feedback signal reflected by the infrared ranging signal.
For example, 50 human face images in the preview image reflect a feedback signal after receiving an infrared ranging signal from an infrared sensor, and the feedback signal is generally reflected light.
Step 2024: and determining the shooting distance of each face based on the feedback signal.
According to the distance measurement principle of the existing infrared distance measurement sensor, the shooting distance of each face can be calculated according to the feedback signals reflected by the infrared distance measurement signals. Specifically, in this embodiment, the mobile terminal may determine the shooting distance of each face in the preview image after the preview image is acquired.
For example, if the feedback signal is reflected light, based on the principle of infrared distance measurement, a specific value of the shooting distance of each face in the preview image can be calculated, for example, the shooting distance a1 of the first face is 60cm, and the shooting distance a2 of the second face is 50 cm.
In the embodiment of the invention, the infrared distance measuring sensor is arranged on the mobile terminal, and the shooting distance of each face in the preview image is determined according to the infrared distance measuring principle.
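One common infrared ranging principle is time-of-flight, in which the distance equals half the round-trip time of the signal multiplied by its propagation speed. A minimal sketch under that assumption (the description does not fix a particular ranging formula):

```python
SPEED_OF_LIGHT_CM_PER_S = 2.998e10  # propagation speed of the infrared signal

def shooting_distance_cm(round_trip_time_s: float) -> float:
    """Distance to the face = (propagation speed * round-trip time) / 2,
    assuming the sensor reports the round-trip time of the feedback signal."""
    return SPEED_OF_LIGHT_CM_PER_S * round_trip_time_s / 2.0
```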
Step 203: and determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face.
In the embodiment of the present invention, based on the shooting distance and the face size value of each face in the preview image, the faces can be examined in turn. When the shooting distance of a face in the preview image exceeds a preset distance threshold and its face size value is smaller than the standard value, that face is determined to be a target face to be amplified. By judging and comparing the shooting distance and the face size value of each face in the preview image of the group photo in turn, at least one target face to be amplified in the preview image can be determined.
Referring to fig. 4, the step of determining at least one target face to be enlarged in the preview image based on the shooting distance and the face size value of each face may specifically include the following steps:
step 2031: and judging whether the shooting distance of each face is greater than a preset distance threshold value.
In practical applications, the preset distance threshold may be preset according to actual conditions, and in step 2031, it is determined whether the shooting distance of the face is greater than the preset distance threshold for each face in the preview image.
For example, the preset distance threshold may be set to 40 cm.
By setting a preset distance threshold and comparing the shooting distance of each face with it, it is judged whether the shooting distance of each face is greater than the preset distance threshold; faces in the preview image whose shooting distance is less than the preset distance threshold can thus be excluded, reducing the subsequent workload.
Step 2032: and if the shooting distance of the face is greater than a preset distance threshold, acquiring a pre-stored standard size value of the face corresponding to the shooting distance of the face.
In practical applications, the pre-stored standard size values of faces corresponding to shooting distances are contained in a preset database, in which each shooting distance has a corresponding standard face size value. If the shooting distance of the face is greater than the preset distance threshold, the pre-stored standard size value of the face corresponding to that shooting distance can be obtained by querying the preset database.
For example, assuming that the preset distance threshold is 40cm, if the shooting distance a1 of the first face in the preview image is 60cm, that is, the shooting distance of the face is greater than the preset distance threshold, the preset database may be called to obtain a standard size value of the face, which is stored in advance and corresponds to the shooting distance a1 of the face.
Step 2033: and comparing the face size value of the face with the standard size value.
In practical applications, the size value of a human face is usually described by the lateral distance L × the longitudinal distance H of the face. Specifically, in the embodiment of the present invention, the face size value of the target face may be described by a target lateral distance L1 × a target longitudinal distance H1, and the standard size value of the face corresponding to the shooting distance of the target face, determined by face recognition technology and stored in the preset database, may be described by a standard lateral distance L2 × a standard longitudinal distance H2. In step 2033, the mobile terminal compares the face size value of the face with the standard size value.
Step 2034: and if the face size value of the face is smaller than the standard size value, determining the face as a target face.
In this embodiment, if the face size value of the face is smaller than the standard size value, the face is determined as the target face.
For example, if the face size value of the face is 0.4cm × 0.6cm, the standard size value is 0.6cm × 0.9cm, and the face size value of the face is smaller than the standard size value, the face is determined as the target face.
In the method of the embodiment of the invention, the shooting distance and the face size value of each face in the preview image are compared in sequence, so that the target face can be accurately selected, and the target face can be further operated conveniently.
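A minimal sketch of steps 2031 to 2034 in Python; the lookup table standing in for the preset database and the threshold value are illustrative assumptions:

```python
def standard_size_value(distance_cm: float) -> tuple[float, float]:
    """Illustrative stand-in for the preset database: return the standard
    (L, H) face size, in cm, stored for a given shooting distance."""
    # Hypothetical table; the description only requires that each shooting
    # distance has a pre-stored standard face size value.
    table = {40: (0.9, 1.35), 60: (0.6, 0.9), 80: (0.45, 0.68)}
    key = min(table, key=lambda d: abs(d - distance_cm))
    return table[key]

def select_target_faces(faces, distance_threshold_cm: float = 40.0):
    """faces: iterable of (face_id, shooting_distance_cm, (L, H)) tuples.
    Return the ids of faces to be amplified (steps 2031-2034)."""
    targets = []
    for face_id, distance, (l, h) in faces:
        if distance <= distance_threshold_cm:          # step 2031: exclude near faces
            continue
        std_l, std_h = standard_size_value(distance)   # step 2032: query the database
        if l < std_l and h < std_h:                    # steps 2033-2034: compare sizes
            targets.append(face_id)
    return targets
```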
Step 204: and amplifying the at least one target face to generate a first target image.
In the embodiment of the present invention, for the at least one target face, a pre-stored standard size value of a face corresponding to the shooting distance of the at least one target face may be obtained, and the at least one target face is amplified with reference to the standard size value to obtain a first target face image.
Referring to fig. 5, the step of performing an amplification process on the at least one target face to generate a first target image may specifically include the following steps:
step 2041: and for each target face, acquiring a standard size value of the face corresponding to the shooting distance of the target face, which is stored in advance.
In practical applications, the pre-stored standard size values of faces corresponding to shooting distances are contained in a preset database, with a standard face size value for each shooting distance. The standard size value of the face corresponding to the shooting distance of the target face can therefore be obtained by querying the preset database with that shooting distance.
For example, if the shooting distance of the target face a1 is 60cm, the preset database may be called to obtain a standard size value of a face that is stored in advance and corresponds to the shooting distance of the target face of 60 cm.
Step 2042: and determining the amplification ratio based on the face size value of the target face and the standard size value.
In practical application, the amplification scale may be determined based on a proportional relationship between the face size value of the target face and the standard size value.
For example, the face size value of the target face is 0.4cm × 0.6cm, the standard size value is 0.6cm × 0.9cm, and the ratio of the face size value of the target face to the standard size value is 2:3, that is, the standard size value is 1.5 times the face size value of the target face, so that the enlargement ratio can be determined to be 1.5.
Step 2043: and carrying out amplification processing on the face image of the target face according to the amplification scale.
Specifically, the face image of the target face is amplified according to the amplification scale.
For example, if the magnification ratio determined in step 2042 is 1.5, the face image of the target face is magnified at the magnification ratio of 1.5.
Step 2044: and generating a first target image after the amplification processing of each target face is completed.
In this embodiment, after each target face is sequentially amplified, a first target image can be generated.
For example, if the target faces include A1, A2, and A3, the first target image may be generated after the amplification processing is performed on each of the target faces A1, A2, and A3.
In the embodiment of the present invention, the first target image is generated by amplifying at least one target face to be amplified in the preview image. Faces with a long shooting distance can thereby be amplified, ensuring that the face size value of each face in the first target image is appropriate, so that the facial features of the target persons are clearly displayed and the imaging effect of the image is improved. Especially for applications with a long shooting distance, the embodiment of the present invention can ensure that the faces of persons far from the camera have a better image effect.
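A minimal sketch of steps 2041 to 2044; taking the ratio from the lateral dimension and using a nearest-neighbour resize are simplifications for illustration, not requirements of the description:

```python
def amplification_ratio(face_size: tuple[float, float],
                        standard_size: tuple[float, float]) -> float:
    """Step 2042: ratio between the pre-stored standard size value and the
    measured face size value (taken here from the lateral dimension)."""
    return standard_size[0] / face_size[0]

def amplify_face_region(face_pixels: list[list[int]], ratio: float) -> list[list[int]]:
    """Step 2043: amplify the face image by the ratio.  A nearest-neighbour
    resize is used purely for illustration; a real implementation would use
    an image library and blend the amplified region back into the frame."""
    h, w = len(face_pixels), len(face_pixels[0])
    new_h, new_w = round(h * ratio), round(w * ratio)
    return [[face_pixels[min(int(r / ratio), h - 1)][min(int(c / ratio), w - 1)]
             for c in range(new_w)] for r in range(new_h)]

# Example from the description: 0.4 cm x 0.6 cm face, 0.6 cm x 0.9 cm standard
ratio = amplification_ratio((0.4, 0.6), (0.6, 0.9))   # -> 1.5
```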
Step 205: and extracting the face features in the face image of each target face.
Specifically, through face feature detection, face features in a face image of the target face may be extracted, where the face features specifically include face feature points and feature components between the face feature points.
For example, the face feature points may be shape features of face organs such as eyes, nose, mouth, and the like, and euclidean distances, curvatures, and angles between the face feature points may be used as feature components between the face feature points.
Step 206: and comparing the face features with standard face features in a preset face feature library, and judging whether the face image of the target face has distortion.
In the embodiment of the invention, the mobile terminal stores a preset human face feature library in advance, the preset human face feature library is mainly used for storing data of standard facial features, and whether the human face image of the target human face has distortion or not can be judged by comparing the human face features with the standard facial features in the preset human face feature library.
For example, whether the face image of the target face has distortion or not is judged by comparing the eyes, the nose and the mouth of the face feature point with the eyes, the nose and the mouth of the standard face feature point in a preset face feature library.
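One simplified way to realise the comparison of step 206 is to compute feature components (normalised distances between feature points) for both the target face and the standard facial features, and to flag distortion when they deviate beyond a tolerance. The landmark naming and the tolerance below are assumptions:

```python
import math

def feature_components(points: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Euclidean distances between selected face feature points, normalised by
    the eye distance so the comparison is scale-independent."""
    def dist(a, b):
        return math.dist(points[a], points[b])
    eye = dist("left_eye", "right_eye") or 1.0
    return {
        "eye_to_nose": dist("left_eye", "nose") / eye,
        "nose_to_mouth": dist("nose", "mouth") / eye,
    }

def has_distortion(face_points, standard_points, tolerance: float = 0.15) -> bool:
    """Step 206 (simplified): compare feature components against the standard
    facial features from the preset face feature library."""
    measured = feature_components(face_points)
    standard = feature_components(standard_points)
    return any(abs(measured[k] - standard[k]) > tolerance for k in measured)
```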
Step 207: and if the face image of the target face has distortion, performing distortion correction on the face image of the target face according to the standard facial features to generate the first target image subjected to distortion correction.
Specifically, if the face image of the target face has distortion, distortion correction is performed on the face features in the face image of the target face according to the standard facial features, and the target face image whose face features have been corrected is used to generate the distortion-corrected first target image.
For example, if the mouth of the target face is distorted, the mouth is corrected with reference to the shape of the mouth in the standard facial features, and the distortion-corrected first target image is generated.
In the embodiment of the invention, the distortion correction is carried out on the face image of the target face to generate the first target image after the distortion correction, so that the distortion of each face feature in the face image of the target face can be corrected, the face image distortion of the target face is avoided, and the image quality of the first target image is improved.
Step 208: generating a second target image from the preview image, and performing associated storage on the first target image and the second target image
This step is similar in principle to step 105 in the first embodiment and will not be described in detail here.
In this way, in the embodiment of the present invention, a frame of preview image collected by a camera of the mobile terminal is obtained; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance of each face and the face size value; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and storing the first target image and the second target image in a correlation manner. The method and the device can amplify the faces with longer shooting distance so as to ensure that the face size value of each face in the first target image is proper, can clearly display the facial features of the target person, and improve the imaging effect of the image.
In addition, in the implementation of the present invention, the distortion correction is performed on the face image of the target face to generate the first target image after the distortion correction, so that the distortion of each face feature in the face image of the target face can be corrected, the face image distortion of the target face is avoided, and the image quality of the first target image is improved.
In addition, the embodiment of the invention can finally provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
In an optional embodiment of the present invention, after step 204, a step of correcting the body image and the background image of each target face may further be included, so as to correct distortion existing in the body image and the background image of the target face, enhance harmony between the face image, the body image, and the background image of the target face, and improve the overall image quality of the first target image.
Referring to fig. 2b, which shows a flowchart of a shooting method in the second embodiment of the present invention, applied to a mobile terminal with a camera, after step 204, the method may further include the following steps:
step 209: and extracting a body image of each target face and a background image which is away from the target face within a preset range.
In the embodiment of the present invention, the body image of the target face may be extracted by an image extraction technique. For the background image of the target face, a preset range can be preset, and an image within the preset range from the target face can be regarded as the background image of the target face, and the background image is extracted by an image extraction technology.
For example, if the preset range is 0.8cm, an image 0.8cm away from the target face may be regarded as a background image of the target face, and the background image is extracted by an image extraction technology.
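A minimal sketch of extracting the background region of step 209 as the band of pixels within a preset margin around the face bounding box (the pixel margin stands in for the preset range, which the description expresses in centimetres):

```python
def background_box(face_box: tuple[int, int, int, int],
                   margin_px: int,
                   image_w: int, image_h: int) -> tuple[int, int, int, int]:
    """Return the bounding box of the region within `margin_px` of the face;
    the background image is this box minus the face box itself."""
    x, y, w, h = face_box
    bx = max(0, x - margin_px)
    by = max(0, y - margin_px)
    bw = min(image_w, x + w + margin_px) - bx
    bh = min(image_h, y + h + margin_px) - by
    return (bx, by, bw, bh)
```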
Step 210: and judging whether the body image and the background image have distortion or not.
In the embodiment of the present invention, after the face image of the target face is amplified according to the amplification scale, distortion may appear at the junctions between the target face image region, the surrounding background, and the image region where the target person's clothing is located, and distortion correction is therefore needed.
Step 211: and if the body image and the background image have distortion, performing distortion correction on the body image and the background image to generate the first target image after distortion correction.
Specifically, if the body image and the background image of the target face have distortion, distortion correction is performed on the body image and the background image by an image distortion correction technique, and the target face image whose body image and background image have been corrected is used to generate the distortion-corrected first target image.
Therefore, in the embodiment of the invention, the distortion of the body image and the background image of the target face can be corrected, the harmony of the face image, the body image and the background image of the target face is enhanced, and the overall image quality of the first target image is improved.
For example, if the shape of the clothing where it joins the target face is distorted, the clothing shape is corrected, and the distortion-corrected first target image is generated.
In this way, in the embodiment of the present invention, a frame of preview image collected by a camera of the mobile terminal is obtained; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance of each face and the face size value; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and storing the first target image and the second target image in a correlation manner. The method and the device can amplify the faces with longer shooting distance so as to ensure that the face size value of each face in the first target image is proper, can clearly display the facial features of the target person, and improve the imaging effect of the image.
In addition, in the embodiment of the invention, a body image of each target face and a background image which is away from the target face within a preset range are extracted; judging whether the body image and the background image have distortion or not; and if the body image and the background image are distorted, performing distortion correction on the body image and the background image to generate the first target image after distortion correction, so that the distortion of the body image and the background image of the target face can be corrected, the harmony of the face image, the body image and the background image of the target face is enhanced, and the overall image quality of the first target image is improved.
In addition, the embodiment of the invention can finally provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
In an optional embodiment of the present invention, after step 204, a step of adjusting shooting parameters of the first target image may be further included, so as to improve a brightness uniformity degree of the first target image, so that the first target image has a better imaging effect.
Referring to fig. 2c, which shows a flowchart of a shooting method in the second embodiment of the present invention, applied to a mobile terminal with a camera, after step 204, the method may further include the following steps:
step 212: and carrying out shooting parameter detection on the first target image.
Specifically, the first target image is divided into different image regions, and shooting parameter detection is performed on the first target image, that is, the shooting parameters of the different image regions in the first target image are detected. For example, the shooting parameter detection may be performed for each face image region in the first target image.
The shooting parameters can be brightness, contrast or color temperature.
For example, the brightness of the images in different areas in the first target image is detected, and whether the brightness of each different area is different or not is judged.
Step 213: and if the shooting parameter abnormity is detected, adjusting the shooting parameters of the first target image, and generating the first target image after the shooting parameters are adjusted.
In the embodiment of the present invention, a shooting parameter anomaly means that at least one of the brightness, contrast, and color temperature of different image regions in the first target image is detected to differ. If such an anomaly is detected, the color level and/or contrast of the first target image is adjusted, and the first target image with adjusted shooting parameters is generated. Specifically, at least one of the brightness, contrast, and color temperature of each image region in the first target image is compared to determine whether there are differences in lighting or color cast between regions; when such differences are detected, at least one of the color level and the contrast of the first target image is adjusted according to the detected difference data. This improves the brightness uniformity of the first target image and yields a better imaging effect. In this way, faces with a long shooting distance can be amplified so that the face size value of each face in the first target image is appropriate and the facial features of the target person are clearly displayed, while the brightness of the captured image remains uniform.
For example, if it is detected that there is a difference in brightness of different areas in the first target image, the first target image with the shooting parameters adjusted is generated by adjusting the brightness difference of different areas in the first target image.
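A minimal sketch of steps 212 and 213 for the brightness parameter only; the grey-level representation, the spread threshold, and the gain-based adjustment are illustrative assumptions:

```python
def mean_brightness(region: list[list[int]]) -> float:
    """Average pixel value (0-255 grey levels) of one image region."""
    values = [v for row in region for v in row]
    return sum(values) / len(values)

def detect_brightness_anomaly(regions: list[list[list[int]]],
                              max_spread: float = 30.0) -> bool:
    """Step 212 (brightness only): flag an anomaly when region brightnesses
    differ by more than `max_spread` grey levels."""
    means = [mean_brightness(r) for r in regions]
    return max(means) - min(means) > max_spread

def equalise_region(region: list[list[int]], target_mean: float) -> list[list[int]]:
    """Step 213 (simplified): scale a region's pixels toward the target mean."""
    gain = target_mean / (mean_brightness(region) or 1.0)
    return [[min(255, round(v * gain)) for v in row] for row in region]
```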
In this way, in the embodiment of the present invention, a frame of preview image collected by a camera of the mobile terminal is obtained; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance of each face and the face size value; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and storing the first target image and the second target image in a correlation manner. The method and the device can amplify the faces with longer shooting distance so as to ensure that the face size value of each face in the first target image is proper, can clearly display the facial features of the target person, and improve the imaging effect of the image.
Furthermore, in the embodiment of the present invention, the first target image is subjected to shooting parameter detection; and if the shooting parameter abnormity is detected, adjusting the shooting parameter of the first target image, and generating the first target image after the shooting parameter is adjusted. The brightness uniformity of the first target image can be improved, and the first target image can have a better imaging effect.
In addition, the embodiment of the invention can finally provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
In an optional embodiment of the present invention, after step 208, steps of displaying the first target image and the second target image may further be included.
referring to fig. 2d, which shows a flowchart of a shooting method in the second embodiment of the present invention, applied to a mobile terminal with a camera, after step 208, the method may further include the following steps:
step 214: and receiving a viewing instruction of the mobile terminal user to the first target image.
In the embodiment of the present invention, after shooting is completed, of the generated first target image and second target image, the first target image is displayed by default. When the mobile terminal user needs to view the first target image, a viewing instruction can be input; the viewing instruction may be a click operation, a touch screen operation, or shaking the mobile terminal, and the mobile terminal can receive the viewing instruction of the mobile terminal user for the first target image.
For example, the mobile terminal user may use a click on a "photo" control of the mobile terminal as the viewing instruction for viewing the first target image, and the mobile terminal receives this viewing instruction.
Step 215: and displaying the first target image.
Specifically, the mobile terminal displays the first target image on a screen.
Step 216: and detecting the touch operation of the mobile terminal user on the first target image.
Specifically, when the mobile terminal user needs to view the second target image, a touch operation may be performed on the first target image. In a specific application, the mobile terminal user may select the touch operation mode according to the actual situation, and the touch operation mode is not specifically limited in the embodiments of the present invention. The mobile terminal can detect the touch operation of the mobile terminal user on the first target image.
For example, if the touch operation is a particular air gesture, it may be detected whether the mobile terminal user performs that air gesture on the first target image.
Step 217: and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image.
Specifically, the trigger operation may be pre-stored in the mobile terminal, and when the mobile terminal detects a touch operation matched with the pre-stored trigger operation, the second target image may be displayed on a screen of the mobile terminal.
For example, if the pre-stored trigger operation is a particular air gesture, once the mobile terminal detects a touch operation on the first target image that matches that gesture, the second target image can be displayed on the screen of the mobile terminal.
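A minimal sketch of the viewing and switching behaviour of steps 214 to 217; the gesture name and the class interface are assumptions, since the description does not fix a particular trigger operation:

```python
class AssociatedImageViewer:
    """Shows the first target image by default and switches to the associated
    second target image when the pre-stored trigger operation is detected."""

    def __init__(self, first_target_image, second_target_image,
                 trigger_operation: str = "swipe_left"):
        self.images = {"first": first_target_image, "second": second_target_image}
        self.trigger_operation = trigger_operation
        self.current = "first"          # default display after shooting

    def on_view_instruction(self):
        """Steps 214-215: display the first target image on a viewing instruction."""
        self.current = "first"
        return self.images[self.current]

    def on_touch(self, operation: str):
        """Steps 216-217: switch the display only when the detected touch
        operation matches the pre-stored trigger operation."""
        if operation == self.trigger_operation:
            self.current = "second" if self.current == "first" else "first"
        return self.images[self.current]
```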
Therefore, the mobile terminal can display the first target image or the second target image according to the viewing instruction or the touch operation of the user of the mobile terminal and the requirement of the user, and the convenience of viewing the first target image or the second target image by the user of the mobile terminal is improved.
In this way, in the embodiment of the present invention, a frame of preview image collected by a camera of the mobile terminal is obtained; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance of each face and the face size value; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and storing the first target image and the second target image in a correlation manner. The method and the device can amplify the faces with longer shooting distance so as to ensure that the face size value of each face in the first target image is proper, can clearly display the facial features of the target person, and improve the imaging effect of the image.
In addition, in the embodiment of the invention, the viewing instruction of the mobile terminal user to the first target image is received; displaying the first target image; detecting touch operation of the mobile terminal user on the first target image; and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image. The mobile terminal can display the first target image or the second target image according to the user requirement according to the viewing instruction or the touch operation of the user of the mobile terminal, and the convenience of viewing the first target image or the second target image by the user of the mobile terminal is improved.
In addition, the embodiment of the invention can finally provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
It should be noted that, in the second embodiment, each of the shooting methods shown in fig. 2a, fig. 2b, fig. 2c and fig. 2d is described as an independent shooting method for simplicity of description. However, those skilled in the art will understand that embodiments combining these shooting methods also fall within the protection scope of the present invention, and when combined they are not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the embodiments of the present invention.
It is further noted that, for simplicity of explanation, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will appreciate that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
In summary, in the second embodiment, a frame of preview image collected by the camera of the mobile terminal is obtained; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and performing association storage on the first target image and the second target image. The method and the device can amplify the faces with longer shooting distance so as to ensure that the face size value of each face in the first target image is proper, can clearly display the facial features of the target person, and improve the imaging effect of the image.
In addition, in the implementation of the present invention, the distortion correction is performed on the face image of the target face to generate the first target image after the distortion correction, so that the distortion of each face feature in the face image of the target face can be corrected, the face image distortion of the target face is avoided, and the image quality of the first target image is improved.
Secondly, in the embodiment of the invention, a body image of each target face and a background image which is away from the target face within a preset range are extracted; judging whether the body image and the background image have distortion or not; and if the body image and the background image are distorted, performing distortion correction on the body image and the background image to generate the first target image after distortion correction, so that the distortion of the body image and the background image of the target face can be corrected, the harmony of the face image, the body image and the background image of the target face is enhanced, and the overall image quality of the first target image is improved.
Thirdly, in the embodiment of the invention, shooting parameter detection is carried out on the first target image; and if the shooting parameter abnormity is detected, adjusting the shooting parameter of the first target image, and generating the first target image after the shooting parameter is adjusted. The brightness uniformity of the first target image can be improved, and the first target image can have a better imaging effect.
In addition, the embodiment of the invention can finally provide two images: one is the second target image generated from the preview image, which is unprocessed and preserves the original look of the scene; the other is the first target image obtained by amplifying the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, which is convenient to operate and enriches the user's choices.
In addition, in the embodiment of the invention, a viewing instruction of the mobile terminal user to the first target image is received; displaying the first target image; detecting touch operation of the mobile terminal user on the first target image; and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image. The mobile terminal can display the first target image or the second target image according to the user requirement according to the viewing instruction or the touch operation of the user of the mobile terminal, and the convenience of viewing the first target image or the second target image by the user of the mobile terminal is improved.
Example three
Referring to fig. 6a, a block diagram of a mobile terminal according to a third embodiment of the present invention is shown.
The mobile terminal 600 shown in fig. 6a may include: a preview image acquisition module 601, a face information acquisition module 602, a target face determination module 603, a target face amplification module 604, an image storage module 605 and a camera 606.
A preview image obtaining module 601, configured to obtain a frame of preview image acquired by a camera of the mobile terminal;
a face information obtaining module 602, configured to obtain a shooting distance and a face size value of each face in the preview image;
a target face determining module 603, configured to determine at least one target face to be enlarged in the preview image based on the shooting distance and the face size value of each face;
a target face amplifying module 604, configured to amplify the at least one target face to generate a first target image;
an image storage module 605, configured to generate a second target image from the preview image, and store the first target image and the second target image in an associated manner.
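For orientation only (this sketch is not part of the original disclosure), the cooperation of modules 601-605 can be summarized roughly as follows in Python; the type names, threshold, and standard size used here are illustrative assumptions, not values from the patent.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedFace:
    size_px: int        # face size value measured in the preview image
    distance_m: float   # shooting distance obtained for this face

DISTANCE_THRESHOLD_M = 2.0   # hypothetical preset distance threshold
STANDARD_SIZE_PX = 160       # hypothetical standard face size value

def capture(preview: dict, faces: List[DetectedFace]) -> Tuple[dict, dict]:
    # Target faces: farther than the threshold and smaller than the standard size.
    targets = [f for f in faces
               if f.distance_m > DISTANCE_THRESHOLD_M and f.size_px < STANDARD_SIZE_PX]
    first_target_image = dict(preview, amplified_faces=len(targets))  # stand-in for the amplified frame
    second_target_image = dict(preview)                               # unprocessed copy of the preview
    return first_target_image, second_target_image                    # stored in association

first, second = capture({"frame": 1}, [DetectedFace(90, 4.0), DetectedFace(210, 1.2)])
print(first, second)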
Referring to fig. 7, on the basis of fig. 6a, the face information acquisition module 602 may further include a face recognition unit 6021, a ranging signal transmitting unit 6022, a feedback signal receiving unit 6023, and a photographing distance calculating unit 6024.
A face recognition unit 6021, configured to perform face recognition on the preview image, and determine a face image of each face;
a ranging signal sending unit 6022, configured to send an infrared ranging signal to the face image of each face through an infrared sensor of the mobile terminal;
a feedback signal receiving unit 6023, configured to receive a feedback signal reflected by the infrared ranging signal;
a shooting distance calculation unit 6024 for determining a shooting distance of each face based on the feedback signal.
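As a rough illustration only (not part of the original disclosure), the shooting distance computed by unit 6024 can be read as a time-of-flight estimate: the infrared signal travels to the face and back, so the one-way distance is half the round trip. The sketch below assumes the emission and feedback timestamps are available.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def shooting_distance_m(emit_time_s: float, feedback_time_s: float) -> float:
    # One-way distance = propagation speed * round-trip time / 2.
    round_trip_s = feedback_time_s - emit_time_s
    return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2.0

# A feedback delay of 20 ns corresponds to a face roughly 3 m away.
print(round(shooting_distance_m(0.0, 20e-9), 3))  # 2.998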
Referring to fig. 8, on the basis of fig. 6a, the target face determining module 603 may further include a shooting distance determining unit 6031, a first standard face obtaining unit 6032, a face comparing unit 6033 and a target face determining unit 6034.
A shooting distance determination unit 6031 configured to determine, for each face, whether a shooting distance of the face is greater than a preset distance threshold;
a first standard face obtaining unit 6032, configured to obtain a standard size value of a face, which is stored in advance and corresponds to a shooting distance of the face, if the shooting distance of the face is greater than a preset distance threshold;
a face comparison unit 6033, configured to compare a face size value of the face with the standard size value;
a target face determining unit 6034, configured to determine the face as a target face if the face size value of the face is smaller than the standard size value.
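To make the comparison performed by units 6031-6034 concrete (purely illustrative; the stored table and threshold below are assumptions, not values from the disclosure), the decision could look like this:

# Hypothetical pre-stored standard face size values (pixels), keyed by
# shooting-distance buckets in metres.
STANDARD_FACE_SIZE_PX = {2.0: 220, 3.0: 160, 4.0: 120, 5.0: 90}
DISTANCE_THRESHOLD_M = 2.0

def is_target_face(distance_m: float, face_size_px: int) -> bool:
    # Only faces beyond the preset distance threshold are considered.
    if distance_m <= DISTANCE_THRESHOLD_M:
        return False
    # Look up the standard size stored for the nearest distance bucket.
    nearest = min(STANDARD_FACE_SIZE_PX, key=lambda d: abs(d - distance_m))
    # The face is a target face when it is smaller than the standard size.
    return face_size_px < STANDARD_FACE_SIZE_PX[nearest]

print(is_target_face(4.2, 95))  # True: far away and smaller than 120 px
print(is_target_face(1.5, 95))  # False: within the distance threshold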
Referring to fig. 9, on the basis of fig. 6a, the target face enlargement module 604 may further include a first standard face acquisition unit 6041, an enlargement ratio determination unit 6042, a target face enlargement unit 6043, and a first target image generation unit 6044.
A first standard face obtaining unit 6041 configured to obtain, for each target face, a standard size value of a face corresponding to a shooting distance of the target face, which is stored in advance;
an enlargement ratio determining unit 6042 configured to determine an enlargement ratio based on the face size value of the target face and the standard size value;
a target face enlarging unit 6043, configured to perform enlargement processing on the face image of the target face according to the enlargement ratio;
a first target image generation unit 6044 configured to generate a first target image after the enlargement processing is completed for each target face.
Referring to fig. 6b, a block diagram of a mobile terminal in a third embodiment of the present invention is shown.
The mobile terminal 600 shown in fig. 6b further includes, on the basis of fig. 6 a: a face feature extraction module 607, a face feature comparison module 608 and a distortion correction module 609.
A face feature extraction module 607, configured to extract a face feature in a face image of each target face;
a face feature comparison module 608, configured to compare the face features with standard face features in a preset face feature library, and determine whether a face image of the target face has distortion;
and a distortion correction module 609, configured to perform distortion correction on the face image of the target face according to the standard facial feature if the face image of the target face has distortion, and generate the first target image after distortion correction.
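A minimal sketch of the comparison performed by modules 607-609 follows; the specific feature measurements and tolerance are illustrative assumptions rather than part of the disclosure.

def has_distortion(measured: dict, standard: dict, tolerance: float = 0.15) -> bool:
    # Flag distortion when any normalized facial-feature measurement
    # deviates from its standard value by more than the tolerance.
    return any(abs(measured[key] - standard[key]) / standard[key] > tolerance
               for key in standard)

standard_features = {"eye_spacing": 0.46, "nose_length": 0.33}  # ratios of face width
measured_features = {"eye_spacing": 0.58, "nose_length": 0.34}
print(has_distortion(measured_features, standard_features))  # True: eye spacing deviates by ~26%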
Referring to fig. 6c, a block diagram of a mobile terminal in a third embodiment of the present invention is shown.
The mobile terminal 600 shown in fig. 6c further includes, on the basis of fig. 6 a: an image extraction module 610, an image distortion judgment module 611 and an image distortion correction module 612.
An image extraction module 610, configured to extract a body image of each target face and a background image within a preset range from the target face;
an image distortion determining module 611, configured to determine whether the body image and the background image have distortion;
an image distortion correction module 612, configured to perform distortion correction on the body image and the background image if the body image and the background image have distortion, and generate the first target image after the distortion correction.
Referring to fig. 6d, a block diagram of a mobile terminal in a third embodiment of the present invention is shown.
The mobile terminal 600 shown in fig. 6d further includes, on the basis of fig. 6 a: a shooting parameter detection module 613 and a shooting parameter amplifying module 614.
A shooting parameter detecting module 613, configured to perform shooting parameter detection on the first target image;
and a shooting parameter amplifying module 614, configured to adjust a shooting parameter of the first target image if it is detected that there is shooting parameter abnormality, and generate the first target image after the shooting parameter is adjusted.
Optionally, the shooting parameter amplifying module 614 may further include:
and the shooting parameter amplifying unit is used for adjusting the color level and/or the contrast of the first target image and generating the first target image with the shooting parameters adjusted if at least one of the brightness, the contrast and the color temperature of different image areas in the first target image is detected to have a difference.
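As an illustration of the kind of check module 614 describes (a sketch under assumed numbers, not the claimed implementation): compare the brightness of different image regions and, when they differ too much, stretch the color levels.

from statistics import mean

def brightness_abnormal(regions, max_spread: float = 40.0) -> bool:
    # A shooting parameter is treated as abnormal when the mean brightness
    # of the image regions differs by more than max_spread levels.
    levels = [mean(region) for region in regions]
    return max(levels) - min(levels) > max_spread

def stretch_levels(pixels, low: int = 0, high: int = 255):
    # A very small color-level adjustment: linearly stretch pixel values.
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)
    return [int((p - lo) * (high - low) / (hi - lo) + low) for p in pixels]

regions = [[30, 40, 50], [200, 210, 220]]
print(brightness_abnormal(regions))          # True: the regions differ strongly
print(stretch_levels([30, 40, 50, 200]))     # [0, 15, 30, 255]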
Referring to fig. 6e, a block diagram of a mobile terminal in a third embodiment of the present invention is shown.
The mobile terminal 600 shown in fig. 6e further includes, on the basis of fig. 6 a:
a viewing instruction receiving module 615, configured to receive a viewing instruction of the mobile terminal user for the first target image;
a first target image display module 616, configured to display the first target image;
a touch operation detection module 617, configured to detect a touch operation of the mobile terminal user on the first target image;
the second target image display module 618 is configured to display the second target image when a touch operation matching a pre-stored trigger operation is detected.
Optionally, the touch operation includes: at least one of a long press operation, a touch screen gesture operation, and an air gesture operation.
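A small sketch of the viewing behaviour described by modules 615-618 (the operation names are illustrative assumptions): the first target image is shown first, and a touch operation matching a pre-stored trigger switches the display to the second target image.

from typing import Optional

TRIGGER_OPERATIONS = {"long_press", "touch_screen_gesture", "air_gesture"}

def image_to_display(current: str, touch_operation: Optional[str]) -> str:
    # Switch between the processed and unprocessed images on a trigger match.
    if touch_operation in TRIGGER_OPERATIONS:
        return ("second_target_image" if current == "first_target_image"
                else "first_target_image")
    return current

view = "first_target_image"                  # opened via the viewing instruction
view = image_to_display(view, "long_press")  # matching trigger detected
print(view)                                  # second_target_image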
In this way, in the embodiment of the present invention, the mobile terminal obtains a frame of preview image collected by a camera of the mobile terminal; acquires the shooting distance and the face size value of each face in the preview image; determines at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifies the at least one target face to generate a first target image; and generates a second target image from the preview image and stores the first target image and the second target image in association with each other. Faces with a longer shooting distance can thus be amplified so that the face size value of each face in the first target image is appropriate, the facial features of the target person can be displayed clearly, and the imaging effect of the image is improved.
In addition, in the embodiment of the present invention, the mobile terminal can provide two images: one is the second target image generated from the preview image, which is left unprocessed and retains the original, as-captured look; the other is the first target image obtained by amplifying the target faces in the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, so the operation is convenient and the user's choices are enriched.
Example four
Fig. 10 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 1000 shown in fig. 10 includes: at least one processor 1001, a memory 1002, at least one network interface 1004, other user interfaces 1003, and a camera 1006. The various components in the mobile terminal 1000 are coupled together by a bus system 1005. It is understood that the bus system 1005 is used to enable communication among the connected components. In addition to a data bus, the bus system 1005 includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are collectively labeled in fig. 10 as the bus system 1005.
The user interface 1003 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen).
It is to be understood that the memory 1002 in embodiments of the present invention may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 1002 of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 1002 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 10021 and applications 10022.
The operating system 10021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 10022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. The program implementing the method according to the embodiment of the present invention may be included in the application program 10022.
In the embodiment of the present invention, by calling a program or an instruction stored in the memory 1002, specifically, a program or an instruction stored in the application program 10022, the camera 1006 is configured to acquire a preview image, and the processor 1001 is configured to acquire a frame of preview image acquired by the camera 1006 of the mobile terminal; acquiring the shooting distance and the face size value of each face in the preview image; determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifying the at least one target face to generate a first target image; and generating a second target image from the preview image, and performing associated storage on the first target image and the second target image.
The method disclosed by the embodiment of the invention can be applied to the processor 1001 or implemented by the processor 1001. The processor 1001 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1001. The processor 1001 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 1002, and the processor 1001 reads the information in the memory 1002 and performs the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 1001 is configured to perform face recognition on the preview image, and determine a face image of each face; sending an infrared ranging signal to the face image of each face through an infrared sensor of the mobile terminal; receiving a feedback signal reflected by the infrared ranging signal; and determining the shooting distance of each face based on the feedback signal.
Optionally, the processor 1001 is configured to, for each face, determine whether a shooting distance of the face is greater than a preset distance threshold; if the shooting distance of the face is greater than a preset distance threshold, acquiring a standard size value of the face corresponding to the shooting distance of the face, which is stored in advance; comparing the face size value of the face with the standard size value; and if the face size value of the face is smaller than the standard size value, determining the face as a target face.
Optionally, the processor 1001 is configured to, for each target face, obtain a standard size value of a face, which is stored in advance and corresponds to a shooting distance of the target face; determining a magnification ratio based on the face size value of the target face and the standard size value; according to the amplification scale, amplifying the face image of the target face; and generating a first target image after the amplification processing of each target face is completed.
Optionally, the processor 1001 is configured to extract a face feature in a face image of each target face; comparing the face features with standard face features in a preset face feature library, and judging whether a face image of the target face has distortion or not; and if the face image of the target face has distortion, performing distortion correction on the face image of the target face according to the standard face characteristics to generate the first target image subjected to distortion correction.
Optionally, the processor 1001 is configured to extract a body image of each target face and a background image within a preset range from the target face; judging whether the body image and the background image have distortion or not; and if the body image and the background image have distortion, performing distortion correction on the body image and the background image to generate the first target image after distortion correction.
Optionally, the processor 1001 is configured to perform shooting parameter detection on the first target image; and if the shooting parameter abnormity is detected, adjusting the shooting parameter of the first target image, and generating the first target image after the shooting parameter is adjusted.
Optionally, the processor 1001 is configured to adjust a color level and/or a contrast of the first target image if it is detected that at least one of brightness, contrast, and color temperature of different image areas in the first target image is different, and generate the first target image with the shooting parameters adjusted.
Optionally, the processor 1001 is configured to receive a viewing instruction of the mobile terminal user for the first target image; displaying the first target image; detecting touch operation of the mobile terminal user on the first target image; and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image.
Optionally, the touch operation includes: at least one of a long press operation, a touch screen gesture operation, and an air gesture operation.
In this way, in the embodiment of the present invention, the mobile terminal obtains a frame of preview image collected by a camera of the mobile terminal; acquires the shooting distance and the face size value of each face in the preview image; determines at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifies the at least one target face to generate a first target image; and generates a second target image from the preview image and stores the first target image and the second target image in association with each other. Faces with a longer shooting distance can thus be amplified so that the face size value of each face in the first target image is appropriate, the facial features of the target person can be displayed clearly, and the imaging effect of the image is improved.
In addition, in the embodiment of the present invention, the mobile terminal can provide two images: one is the second target image generated from the preview image, which is left unprocessed and retains the original, as-captured look; the other is the first target image obtained by amplifying the target faces in the preview image, which has a better imaging effect. A user can select at least one of the two images as needed, which enriches the user's choices.
Fig. 11 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 1100 in fig. 11 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 1100 in fig. 11 includes a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a processor 1160, an audio circuit 1170, a WiFi (Wireless Fidelity) module 1180, a power supply 1190, and a camera 1101.
The input unit 1130 may be used to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 1100. Specifically, in the embodiment of the present invention, the input unit 1130 may include a touch panel 1131. The touch panel 1131, also referred to as a touch screen, can collect touch operations of a user on or near it (for example, operations performed by the user on the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 1160, and can receive and execute commands sent by the processor 1160. In addition, the touch panel 1131 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 1131, the input unit 1130 may further include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or an on/off key), a trackball, a mouse, and a joystick.
Among other things, the display unit 1140 may be used to display information input by or provided to the user and various menu interfaces of the mobile terminal 1100. The display unit 1140 may include a display panel 1141, and optionally, the display panel 1141 may be configured in the form of an LCD or an organic light-emitting diode (OLED).
It should be noted that the touch panel 1131 may cover the display panel 1141 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 1160 to determine the type of the touch event, and the processor 1160 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of the two display areas is not limited and may be any arrangement that distinguishes them, such as an up-down or left-right arrangement. The application interface display area may be used to display the interface of an application, and each interface may contain at least one interface element such as an icon and/or a widget desktop control of an application; the application interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high utilization rate, for example, buttons, interface numbers, scroll bars, and application icons such as a phone book icon.
The processor 1160 is the control center of the mobile terminal 1100; it connects the various parts of the entire mobile phone through various interfaces and lines, and executes the various functions of the mobile terminal 1100 and processes data by running or executing software programs and/or modules stored in the first memory 1121 and calling data stored in the second memory 1122, thereby monitoring the mobile terminal 1100 as a whole. Optionally, the processor 1160 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 1121 and/or the data stored in the second memory 1122, the processor 1160 is configured to obtain a frame of preview image collected by the camera 1101 of the mobile terminal; acquire the shooting distance and the face size value of each face in the preview image; determine at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplify the at least one target face to generate a first target image; and generate a second target image from the preview image and store the first target image and the second target image in association with each other.
Optionally, the processor 1160 is configured to perform face recognition on the preview image, and determine a face image of each face; sending an infrared ranging signal to the face image of each face through an infrared sensor of the mobile terminal; receiving a feedback signal reflected by the infrared ranging signal; and determining the shooting distance of each face based on the feedback signal.
Optionally, the processor 1160 is configured to determine, for each face, whether a shooting distance of the face is greater than a preset distance threshold; if the shooting distance of the face is greater than a preset distance threshold, acquiring a standard size value of the face corresponding to the shooting distance of the face, which is stored in advance; comparing the face size value of the face with the standard size value; and if the face size value of the face is smaller than the standard size value, determining the face as a target face.
Optionally, the processor 1160 is configured to, for each target face, obtain a standard size value of a face, which is stored in advance and corresponds to a shooting distance of the target face; determining a magnification ratio based on the face size value of the target face and the standard size value; according to the amplification scale, amplifying the face image of the target face; and generating a first target image after the amplification processing of each target face is completed.
Optionally, the processor 1160 is configured to extract a face feature in a face image of each target face; comparing the face features with standard face features in a preset face feature library, and judging whether a face image of the target face has distortion or not; and if the face image of the target face has distortion, performing distortion correction on the face image of the target face according to the standard face characteristics to generate the first target image subjected to distortion correction.
Optionally, the processor 1160 is configured to extract a body image of each target face and a background image within a preset range from the target face; judging whether the body image and the background image have distortion or not; and if the body image and the background image have distortion, performing distortion correction on the body image and the background image to generate the first target image after distortion correction.
Optionally, the processor 1160 is configured to perform shooting parameter detection on the first target image; and if the shooting parameter abnormity is detected, adjusting the shooting parameter of the first target image, and generating the first target image after the shooting parameter is adjusted.
Optionally, the processor 1160 is configured to adjust a color level and/or a contrast of the first target image if it is detected that at least one of brightness, contrast, and color temperature of different image areas in the first target image is different, and generate the first target image with adjusted shooting parameters.
Optionally, the processor 1160 is configured to receive a viewing instruction of the mobile terminal user for the first target image; displaying the first target image; detecting touch operation of the mobile terminal user on the first target image; and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image.
Optionally, the touch operation includes: at least one of a long press operation, a touch screen gesture operation, and an air gesture operation.
In this way, in the embodiment of the present invention, the mobile terminal obtains a frame of preview image collected by a camera of the mobile terminal; acquires the shooting distance and the face size value of each face in the preview image; determines at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face; amplifies the at least one target face to generate a first target image; and generates a second target image from the preview image and stores the first target image and the second target image in association with each other. Faces with a longer shooting distance can thus be amplified so that the face size value of each face in the first target image is appropriate, the facial features of the target person can be displayed clearly, and the imaging effect of the image is improved.
In addition, in the embodiment of the present invention, the mobile terminal can provide two images: one is the second target image generated from the preview image, which is left unprocessed and retains the original, as-captured look; the other is the first target image obtained by amplifying the target faces in the preview image, which has a better imaging effect. A user can select at least one of the two images as needed and can conveniently view and switch between the images before and after processing, so the operation is convenient and the user's choices are enriched.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

1. A shooting method is applied to a mobile terminal with a camera, and is characterized by comprising the following steps:
acquiring a frame of preview image acquired by a camera of the mobile terminal;
acquiring the shooting distance and the face size value of each face in the preview image;
determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face;
amplifying the at least one target face to generate a first target image;
extracting the face features in the face image of each target face;
comparing the face features with standard face features in a preset face feature library, and judging whether a face image of the target face has distortion or not;
if the face image of the target face has distortion, carrying out distortion correction on the face image of the target face according to the standard facial features to generate a first target image after distortion correction;
generating a second target image from the preview image, and performing associated storage on the first target image and the second target image;
the step of determining at least one target face to be magnified in the preview image based on the shooting distance and the face size value of each face comprises:
for each face, judging whether the shooting distance of the face is greater than a preset distance threshold value;
if the shooting distance of the face is larger than a preset distance threshold, acquiring a pre-stored standard size value of the face corresponding to the shooting distance of the face;
comparing the face size value of the face with the standard size value;
and if the face size value of the face is smaller than the standard size value, determining the face as a target face.
2. The method according to claim 1, wherein the step of obtaining the shooting distance of each face in the preview image comprises:
performing face recognition on the preview image, and determining a face image of each face;
sending an infrared ranging signal to the face image of each face through an infrared sensor of the mobile terminal;
receiving a feedback signal reflected by the infrared ranging signal;
and determining the shooting distance of each face based on the feedback signal.
3. The method of claim 2, wherein the step of generating the first target image by magnifying the at least one target face comprises:
for each target face, acquiring a standard size value of the face corresponding to the shooting distance of the target face, which is stored in advance;
determining an amplification ratio based on the face size value of the target face and the standard size value;
according to the amplification scale, amplifying the face image of the target face;
and generating a first target image after the amplification processing of each target face is completed.
4. The method of claim 3, wherein after the step of generating the first target image after the completion of the magnification processing of each target face, the method further comprises:
extracting a body image of each target face and a background image which is away from the target face within a preset range;
judging whether the body image and the background image have distortion or not;
and if the body image and the background image have distortion, performing distortion correction on the body image and the background image to generate the first target image after distortion correction.
5. The method of claim 3, wherein after the step of generating the first target image after the completion of the magnification processing of each target face, the method further comprises:
shooting parameter detection is carried out on the first target image;
and if the shooting parameter abnormity is detected, adjusting the shooting parameter of the first target image, and generating the first target image after the shooting parameter is adjusted.
6. The method according to claim 5, wherein the step of adjusting the shooting parameters of the first target image and generating the first target image with the adjusted shooting parameters if the shooting parameter abnormality is detected comprises:
and if at least one of the brightness, the contrast and the color temperature of different image areas in the first target image is detected to be different, adjusting the color level and/or the contrast of the first target image, and generating the first target image with the shooting parameters adjusted.
7. The method of claim 1, wherein after the step of generating the preview image into the second target image and storing the first target image and the second target image in association, the method further comprises:
receiving a viewing instruction of the mobile terminal user for the first target image;
displaying the first target image;
detecting touch operation of the mobile terminal user on the first target image;
and when the touch operation matched with the pre-stored trigger operation is detected, displaying the second target image.
8. The method of claim 7, wherein the touch operation comprises: at least one of a long press operation, a touch screen gesture operation, and an air gesture operation.
9. A mobile terminal, comprising:
the preview image acquisition module is used for acquiring a frame of preview image acquired by a camera of the mobile terminal;
the face information acquisition module is used for acquiring the shooting distance and the face size value of each face in the preview image;
the target face determining module is used for determining at least one target face to be amplified in the preview image based on the shooting distance and the face size value of each face;
the target face amplification module is used for amplifying the at least one target face to generate a first target image;
the face feature extraction module is used for extracting the face features in the face image of each target face;
the human face feature comparison module is used for comparing the human face features with standard facial features in a preset human face feature library and judging whether the human face image of the target human face has distortion or not;
the distortion correction module is used for carrying out distortion correction on the face image of the target face according to the standard face characteristics if the face image of the target face has distortion, and generating the first target image after distortion correction;
the image storage module is used for generating a second target image from the preview image and storing the first target image and the second target image in a related manner;
the target face determination module comprises:
the shooting distance judging unit is used for judging whether the shooting distance of each face is greater than a preset distance threshold value or not;
the first standard face acquisition unit is used for acquiring a standard size value of a face corresponding to the shooting distance of the face, which is stored in advance, if the shooting distance of the face is greater than a preset distance threshold;
the face comparison unit is used for comparing the face size value of the face with the standard size value;
and the target face determining unit is used for determining the face as a target face if the face size value of the face is smaller than the standard size value.
10. The mobile terminal of claim 9, wherein the face information obtaining module comprises:
the face recognition unit is used for carrying out face recognition on the preview image and determining a face image of each face;
the distance measurement signal sending unit is used for sending an infrared distance measurement signal to the face image of each face through an infrared sensor of the mobile terminal;
the feedback signal receiving unit is used for receiving a feedback signal reflected by the infrared ranging signal;
and the shooting distance calculation unit is used for determining the shooting distance of each face based on the feedback signal.
11. The mobile terminal of claim 9, wherein the target face enlarging module comprises:
the first standard face acquisition unit is used for acquiring a standard size value of a face corresponding to the shooting distance of the target face, which is stored in advance, for each target face;
the amplification ratio determining unit is used for determining the amplification ratio based on the face size value of the target face and the standard size value;
the target face amplifying unit is used for amplifying the face image of the target face according to the amplification proportion;
and the first target image generating unit is used for generating a first target image after the amplification processing of each target face is finished.
12. The mobile terminal of claim 11, wherein the mobile terminal further comprises:
the image extraction module is used for extracting a body image of each target face and a background image which is away from the target face within a preset range;
the image distortion judging module is used for judging whether the body image and the background image have distortion or not;
and the image distortion correction module is used for performing distortion correction on the body image and the background image if the body image and the background image have distortion, and generating the first target image after the distortion correction.
13. The mobile terminal of claim 11, wherein the mobile terminal further comprises:
the shooting parameter detection module is used for carrying out shooting parameter detection on the first target image;
and the shooting parameter amplifying module is used for adjusting the shooting parameters of the first target image and generating the first target image after the shooting parameters are adjusted if the shooting parameter abnormity is detected.
14. The mobile terminal of claim 13, wherein the shooting parameter amplifying module comprises:
and the shooting parameter amplifying unit is used for adjusting the color level and/or the contrast of the first target image and generating the first target image with the shooting parameters adjusted if at least one of the brightness, the contrast and the color temperature of different image areas in the first target image is detected to have a difference.
15. The mobile terminal of claim 9, wherein the mobile terminal further comprises:
the viewing instruction receiving module is used for receiving a viewing instruction of the mobile terminal user to the first target image;
the first target image display module is used for displaying the first target image;
the touch operation detection module is used for detecting the touch operation of the mobile terminal user on the first target image;
and the second target image display module is used for displaying the second target image when the touch operation matched with the pre-stored trigger operation is detected.
16. The mobile terminal of claim 15, wherein the touch operation comprises: at least one of a long press operation, a touch screen gesture operation, and an air gesture operation.
CN201710090640.XA 2017-02-20 2017-02-20 Shooting method and mobile terminal Active CN107124543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710090640.XA CN107124543B (en) 2017-02-20 2017-02-20 Shooting method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710090640.XA CN107124543B (en) 2017-02-20 2017-02-20 Shooting method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107124543A CN107124543A (en) 2017-09-01
CN107124543B true CN107124543B (en) 2020-05-29

Family

ID=59717953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710090640.XA Active CN107124543B (en) 2017-02-20 2017-02-20 Shooting method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107124543B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107820007A (en) * 2017-11-09 2018-03-20 维沃移动通信有限公司 The picture processing method and mobile terminal of a kind of camera
CN108510084B (en) * 2018-04-04 2022-08-23 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108965697A (en) * 2018-06-28 2018-12-07 努比亚技术有限公司 A kind of filming control method, terminal and computer readable storage medium
CN110232667B (en) * 2019-06-17 2021-06-04 厦门美图之家科技有限公司 Image distortion correction method, device, electronic equipment and readable storage medium
CN110166696B (en) * 2019-06-28 2021-03-26 Oppo广东移动通信有限公司 Photographing method, photographing device, terminal equipment and computer-readable storage medium
CN110933293A (en) * 2019-10-31 2020-03-27 努比亚技术有限公司 Shooting method, terminal and computer readable storage medium
CN110896451B (en) * 2019-11-20 2022-01-28 维沃移动通信有限公司 Preview picture display method, electronic device and computer readable storage medium
CN113850726A (en) * 2020-06-28 2021-12-28 华为技术有限公司 Image transformation method and device
WO2022001630A1 (en) * 2020-06-29 2022-01-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and system for capturing at least one smart media
CN112004054A (en) * 2020-07-29 2020-11-27 深圳宏芯宇电子股份有限公司 Multi-azimuth monitoring method, equipment and computer readable storage medium
CN112258393A (en) * 2020-10-26 2021-01-22 珠海格力电器股份有限公司 Method, device and equipment for displaying picture

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013025719A (en) * 2011-07-25 2013-02-04 Olympus Corp Object detection device, imaging device, and image processing program
CN103945121A (en) * 2014-03-24 2014-07-23 联想(北京)有限公司 Information processing method and electronic equipment
CN104732210A (en) * 2015-03-17 2015-06-24 深圳超多维光电子有限公司 Target human face tracking method and electronic equipment
CN105095893A (en) * 2014-05-16 2015-11-25 北京天诚盛业科技有限公司 Image acquisition device and method
CN105959563A (en) * 2016-06-14 2016-09-21 北京小米移动软件有限公司 Image storing method and image storing apparatus
CN106101545A (en) * 2016-06-30 2016-11-09 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8711265B2 (en) * 2008-04-24 2014-04-29 Canon Kabushiki Kaisha Image processing apparatus, control method for the same, and storage medium
KR101108835B1 (en) * 2009-04-28 2012-02-06 삼성전기주식회사 Face authentication system and the authentication method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013025719A (en) * 2011-07-25 2013-02-04 Olympus Corp Object detection device, imaging device, and image processing program
CN103945121A (en) * 2014-03-24 2014-07-23 联想(北京)有限公司 Information processing method and electronic equipment
CN105095893A (en) * 2014-05-16 2015-11-25 北京天诚盛业科技有限公司 Image acquisition device and method
CN104732210A (en) * 2015-03-17 2015-06-24 深圳超多维光电子有限公司 Target human face tracking method and electronic equipment
CN105959563A (en) * 2016-06-14 2016-09-21 北京小米移动软件有限公司 Image storing method and image storing apparatus
CN106101545A (en) * 2016-06-30 2016-11-09 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Also Published As

Publication number Publication date
CN107124543A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN107124543B (en) Shooting method and mobile terminal
CN106406710B (en) Screen recording method and mobile terminal
KR102529120B1 (en) Method and device for acquiring image and recordimg medium thereof
US10715761B2 (en) Method for providing video content and electronic device for supporting the same
CN107566717B (en) Shooting method, mobile terminal and computer readable storage medium
CN106060406B (en) Photographing method and mobile terminal
CN107613203B (en) Image processing method and mobile terminal
WO2019001152A1 (en) Photographing method and mobile terminal
CN107172347B (en) Photographing method and terminal
CN107172346B (en) Virtualization method and mobile terminal
CN107659722B (en) Image selection method and mobile terminal
CN107613202B (en) Shooting method and mobile terminal
CN107959789B (en) Image processing method and mobile terminal
CN107566749B (en) Shooting method and mobile terminal
CN106791437B (en) Panoramic image shooting method and mobile terminal
CN107360375B (en) Shooting method and mobile terminal
US20210084228A1 (en) Tracking shot method and device, and storage medium
KR102655625B1 (en) Method and photographing device for controlling the photographing device according to proximity of a user
KR102584187B1 (en) Electronic device and method for processing image
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN107370758B (en) Login method and mobile terminal
CN107592458B (en) Shooting method and mobile terminal
CN107480500B (en) Face verification method and mobile terminal
CN106502614B (en) Font adjusting method and mobile terminal
US9560272B2 (en) Electronic device and method for image data processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant