CN108616687B - Photographing method and device and mobile terminal - Google Patents

Photographing method and device and mobile terminal

Info

Publication number
CN108616687B
Authority
CN
China
Prior art keywords
image
target area
area
light
picture
Prior art date
Legal status
Active
Application number
CN201810243314.2A
Other languages
Chinese (zh)
Other versions
CN108616687A (en)
Inventor
Lin Xiongzhou (林雄周)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810243314.2A
Publication of CN108616687A
Application granted
Publication of CN108616687B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80: Camera processing pipelines; Components thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses a photographing method, a photographing apparatus, and a mobile terminal. The method comprises the following steps: after a photographing instruction is received, identifying a target area in the photographing area; determining a light-metering point in the target area, metering light, and taking a photo to obtain a first image; determining a light-metering point in a non-target area, metering light, and taking a photo to obtain a second image, wherein the luminance difference between the target area and the non-target area is greater than a set threshold; and stitching the image of the target area in the first image together with the image of the non-target area in the second image to obtain a third image as the photo. Because the target area and the non-target area of the stitched third image are each metered separately, the photo is guaranteed normal brightness, with neither overexposure nor underexposure, so photo quality is improved over the prior art.

Description

Photographing method and device and mobile terminal
Technical Field
The present invention relates to the field of communications terminals, and in particular to a photographing method, a photographing apparatus, and a mobile terminal.
Background
With the popularization of digital products, more and more digital devices have a photographing function. In some shooting scenes, both a strong light area and a weak light area are often in the viewing range.
When light metering is based on the weak-light area, the metering algorithm raises the brightness of objects in that area to a normal level as far as possible, but the strong-light area may then be overexposed, so that its details cannot be distinguished and the resulting photo quality is low.
Disclosure of Invention
The embodiment of the invention provides a photographing method, which aims to solve the problem that photos obtained by conventional photographing methods suffer from overexposure.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a photographing method is provided, which includes:
after receiving a photographing instruction, identifying a target area in a photographing area;
determining a light metering point in the target area for metering light, and taking a picture to obtain a first image;
determining a light measuring point in a non-target area for light measurement, and taking a picture to obtain a second image, wherein the light brightness difference value between the target area and the non-target area is greater than a set threshold value;
and splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo.
In a second aspect, a terminal device is provided, which includes:
the identification module is used for identifying a target area in the shooting area after receiving the shooting instruction;
the first shooting module is used for determining a light metering point in the target area to perform light metering and shooting a picture to obtain a first image;
the second shooting module is used for determining the light metering point in a non-target area to perform light metering and shooting a picture to obtain a second image, wherein the light brightness difference value between the target area and the non-target area is greater than a set threshold value;
and the synthesis module is used for splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect.
In this embodiment of the invention, the target area and the non-target area of the shooting area are metered separately and photographed, so the two areas of the stitched third image are each metered. The photo's brightness is thus guaranteed to be normal, with neither overexposure nor underexposure, and photo quality is improved over the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a first flowchart of a photographing method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a photographing method according to an embodiment of the present invention;
fig. 3 is a schematic view of a first structure of a photographing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In some shooting scenes, both a strong-light area and a weak-light area fall within the viewing range. When a photo is taken, the default focus and metering point are usually at the center of the frame or in some other fixed area, so when metering is based on a weak-light area, the background is overexposed and its details cannot be distinguished.
For example, when taking a self-portrait against strong background light, the brightness of the portrait area is much lower than that of the background. If metering is based on the portrait area, the portrait is exposed normally, but the background becomes too bright and is overexposed.
Conversely, when metering is based on the strong-light area, the weak-light area comes out too dark and its details cannot be distinguished.
In one or more embodiments of the present description, after a photographing instruction is received, a target area in the photographing area may be identified; a metering point is determined in the target area, light is metered, and a photo is taken to obtain a first image. A metering point is then determined in a non-target area, light is metered, and a photo is taken to obtain a second image. Finally, the image of the target area in the first image and the image of the non-target area in the second image are stitched together to obtain a third image as the photo. In this way, both the target area and the non-target area of the resulting photo are clear.
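As a rough, hedged sketch (not the patent's actual implementation), the flow above can be modeled on toy grayscale images: "metering" a region is approximated by choosing an exposure gain that brings that region's mean brightness to a target value, and the third image takes the target area from the first capture and everything else from the second. All names and the target value of 128 are illustrative assumptions.

```python
# Toy model of the dual-metering flow; values and names are illustrative.
TARGET_BRIGHTNESS = 128.0  # assumed "normal display" level, not from the patent

def region_mean(img, mask, inside):
    # Mean pixel value over the target area (inside=1) or non-target area (inside=0).
    vals = [img[y][x]
            for y in range(len(img)) for x in range(len(img[0]))
            if mask[y][x] == inside]
    return sum(vals) / len(vals)

def capture_metered(scene, mask, meter_inside):
    # Simulate a capture whose exposure gain is chosen from the metered region;
    # pixels clip at 255, which is exactly the overexposure the patent avoids.
    gain = TARGET_BRIGHTNESS / region_mean(scene, mask, meter_inside)
    return [[min(255.0, p * gain) for p in row] for row in scene]

def composite(first, second, mask):
    # Third image: target-area pixels from the first image, the rest from the second.
    return [[first[y][x] if mask[y][x] else second[y][x]
             for x in range(len(first[0]))] for y in range(len(first))]

# Dark subject (40) occupying columns 1-2 of a bright background (220).
mask = [[1 if 1 <= x <= 2 else 0 for x in range(4)] for _ in range(4)]
scene = [[40 if mask[y][x] else 220 for x in range(4)] for y in range(4)]

first = capture_metered(scene, mask, meter_inside=1)   # subject ok, background clips
second = capture_metered(scene, mask, meter_inside=0)  # background ok, subject dark
third = composite(first, second, mask)                 # both areas near 128
```

In `first`, the background clips to 255 (overexposed); in `second`, the subject drops to about 23 (too dark); the composite keeps both areas near the target brightness, which is the effect claimed for the third image.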
For convenience of description, in one or more embodiments of the present specification, the executing subject of the photographing method may be a terminal device with a photographing function, such as a smartphone, a tablet computer, or a digital camera. The executing subject may also be the software that controls the camera to take photos. The following description takes a camera as the executing subject; this is merely an example and should not be construed as limiting the method.
In one or more embodiments of the present specification, an implementation flow diagram of a photographing method is shown in fig. 1, and includes the following steps:
step S100: after receiving a photographing instruction, identifying a target area in a photographing area;
the photographing instruction may be used to control the camera to photograph, for example, the photographing instruction may be triggered by a user pressing a shutter key of the camera, or in the intelligent terminal device, the photographing instruction may also be triggered by the user clicking a photographing button preset in the human-computer interaction interface. The triggering mode of the photographing instruction is not specifically limited in this specification.
The photographing area may be the view area of the camera, and it is generally fed back to the user in real time as a preview image.
The target area may be the area occupied by a target object in the photographing area and may be determined from that object. In a specific implementation, the features of the target object may be predetermined, so that the area where the target object is located in the photographing area, that is, the target area, can be identified from those features at photographing time.
When identifying the area where the target object is located according to its features, a preview image of the photographing area may be acquired first and image recognition performed on it, that is, the target object is identified in the preview image according to its features. If the target object is recognized, the area where it is located may be taken as the target area.
In one embodiment, the target object may be a person, and the target area is an area where the portrait is located, and when the target area is identified, the target area may be identified based on the portrait characteristics. Specifically, in the recognition, a preview image of the shooting area may be acquired first, then image recognition may be performed in the preview image based on a preset portrait feature, and if a portrait is recognized in the preview image, an area where the portrait is located may be used as a target area.
For example, in the self-timer mode of a camera, the photographed target object is usually a person, so the target area in this mode may be preset as the area where the portrait is located. When the photo is taken, the region where the portrait is located can then be identified.
If a target area is identified in the photographing area, the flow proceeds to step S102; otherwise, the method continues trying to identify a target area in the photographing area.
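The identification step can be sketched generically. The predicate `is_target_pixel` below is a hypothetical stand-in for whatever portrait or object features are preset; the patent does not specify the recognition algorithm, so this is only an illustration of "recognize the object, then take its area as the target area":

```python
def identify_target_area(preview, is_target_pixel):
    # Return the bounding box (x0, y0, x1, y1) of pixels matching the feature
    # predicate, or None when no target object is recognized in the preview.
    xs, ys = [], []
    for y, row in enumerate(preview):
        for x, px in enumerate(row):
            if is_target_pixel(px):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

preview = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
box = identify_target_area(preview, lambda p: p == 9)  # → (1, 1, 2, 2)
```

When the predicate matches nothing, `None` is returned, mirroring the branch above that keeps trying until a target area is found.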
Step S102: determining a light metering point in the target area for metering light, and taking a picture to obtain a first image;
when the light metering point is determined to be at a certain position for light metering, the brightness of the light metering position reaches a certain preset value by adjusting the optical parameters during photographing, and the preset value is usually a value which enables the light metering area to display normally and is preset by developers. So-called normal display, generally, under visual observation, the brightness of an image is neither too high to cause overexposure nor too dark to cause image blurring.
Then, when the light measuring point is determined to measure light in the target area, the brightness of the target area is displayed normally, and at this time, the brightness of the target area is displayed normally in the first image obtained by shooting.
Step S104: determining the light metering point in a non-target area for light metering, and taking a picture to obtain a second image;
the non-target area includes an area other than the target area in the photographing area, that is, when the non-target area performs photometry, photometry may be performed and a photograph may be taken in a part of the photographing area other than the target area, for example, photometry may be performed in a position other than the target area and a photograph may be taken, or photometry may be performed and a photograph may be taken in a distribution of a plurality of positions other than the target area.
The difference value of the luminance of the target area and the luminance of the non-target area can be larger than a set threshold value, so that the images of the target area and the non-target area can be ensured to be displayed normally without over exposure or over darkness by respectively carrying out photometry on the target area and the non-target area. The specific value of the set threshold may be preset by a developer, and this embodiment is not particularly limited to this. Generally, if the brightness difference is greater than the set threshold, the non-target area will be overexposed when the photo is taken based on the target area, and if the brightness difference is not greater than the set threshold, the non-target area will not be overexposed.
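A minimal sketch of this threshold check, assuming grayscale pixel values and a hypothetical threshold of 60 (the patent leaves the actual value to the developer):

```python
DELTA_THRESHOLD = 60.0  # hypothetical; the patent leaves the value to developers

def needs_dual_metering(img, mask, threshold=DELTA_THRESHOLD):
    # True when the mean luminance of the target area (mask == 1) and of the
    # non-target area (mask == 0) differ by more than the set threshold.
    target, rest = [], []
    for y, row in enumerate(img):
        for x, p in enumerate(row):
            (target if mask[y][x] else rest).append(p)
    return abs(sum(target) / len(target) - sum(rest) / len(rest)) > threshold

img = [[220, 40], [220, 40]]    # bright background, dark subject
mask = [[0, 1], [0, 1]]
needs_dual_metering(img, mask)  # → True, since |40 - 220| = 180 > 60
```

When the difference stays below the threshold, a single metered shot suffices and the dual-capture path can be skipped.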
In one embodiment, the light metering point may be determined at a position of the non-target area, and then light metering is performed for the position and a photograph is taken, resulting in the second image. When the light measuring point is determined to measure light in the non-target area, the brightness of the non-target area is displayed normally, and at the moment, the brightness of the non-target area is displayed normally in the shot second image.
Of course, the brightness of the non-target area may not be uniform, in which case metering may be performed separately at several different positions within it. In one embodiment, metering points are determined at different positions of the non-target area, metering and photographing are performed at each position to obtain one second sub-image per metering point, and the second sub-images are then combined into the second image. Metering at multiple points in this way improves the quality of the second image.
That is, when the non-target area is metered several times to take several photos, each photo is called a second sub-image, and the image obtained by combining them is called the second image. For convenience, an image obtained by metering and photographing the non-target area is referred to below as a second image in either case. Terms such as "first", "second", and "sub" are used only to distinguish different things for clarity and should not be construed as absolute limitations of the invention.
When combining the second sub-images, the average of the pixels at the same position across the sub-images can be computed: for each coordinate, take the pixel values of the co-located pixels in all second sub-images and average them, giving a pixel average for every position.
The pixel value at each position of the second image may then be set to the pixel average computed for that position across the second sub-images.
Averaging makes the brightness of the non-target area in the second image more uniform overall, avoiding patches that are too bright or too dark, so the image as a whole looks better.
In one embodiment, the target area is a portrait area. When metering and photographing the non-target area, metering may be performed at a first position in the upper-left corner outside the portrait area and a photo taken, and at a second position in the upper-right corner outside the portrait area and another photo taken; the order of the two operations is not limited.
When a person is photographed, the background generally occupies the upper-left and upper-right corners around the portrait, and these corners are also where high-brightness light sources such as lamps or the sun tend to appear, so metering at both corners samples the background of a portrait scene better and yields a clearer background picture.
In a specific implementation, the first position may be placed vertically at 3/4 of the portrait height measured up from the lowest point of the portrait area, and horizontally at the midpoint between the right edge of the portrait area and the right edge of the image at that height. The second position may use the same vertical coordinate, with its horizontal coordinate at the midpoint between the left edge of the portrait area and the left edge of the image.
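Under the usual image convention (origin at the top-left, y growing downward), which is an assumption since the text does not state one, the two positions can be computed from the portrait's bounding box:

```python
def metering_points(img_w, box):
    # box = (x0, y0, x1, y1) of the portrait area; y1 is its lowest point.
    x0, y0, x1, y1 = box
    # Vertically: 3/4 of the portrait height, measured up from the lowest point.
    py = y1 - 0.75 * (y1 - y0)
    # Horizontally: midpoints between the portrait edges and the image edges.
    first = ((x1 + img_w) / 2.0, py)   # portrait right edge to image right edge
    second = (x0 / 2.0, py)            # image left edge to portrait left edge
    return first, second

# Portrait box (40, 20, 60, 80) in a 100-pixel-wide image:
first, second = metering_points(100, (40, 20, 60, 80))
# first  → (80.0, 35.0)
# second → (20.0, 35.0)
```

The function name and coordinate layout are illustrative assumptions, not the patent's notation.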
Step S106: and splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo.
Based on the above steps, a first image in which the target area has normal brightness and a second image in which the non-target area has normal brightness are obtained. The image of the target area in the first image and the image of the non-target area in the second image can then be stitched together to obtain a third image as the photo.
In one embodiment, the image of the target area may be extracted from the first image and overlaid onto the corresponding area of the second image to obtain the third image. In this way, both the target area and the non-target area of the third image are shown at normal brightness.
In one embodiment, since the target area was already identified in step S100, the first image may be segmented into target and non-target areas according to that identification to obtain the image of the target area.
In another embodiment, considering that the target area identified in step S100 may deviate from the target area in the subsequently captured first image, the target area may be re-identified in the first image according to the features of the target object, and the first image then segmented into target and non-target areas to obtain the image of the target area.
After the image of the target area is obtained, it can be stitched together with the image of the non-target area in the second image. Because the target-area and non-target-area parts of the resulting third image were each metered separately, the photo's brightness is normal, with neither overexposure nor underexposure, and photo quality is improved over the prior art.
In the present application, the first image, the second sub-images, the second image, and so on are captured at the same default size so that segmentation and merging can be performed; if their sizes differ, the images may first be aligned. In addition, the shots should be taken at intervals as short as possible so that their fields of view differ as little as possible and are not shifted by shake or other external causes.
Based on the inventive concept above, and to aid understanding of the technical features, means, and effects of the present application, the photographing method in one or more embodiments of this specification is further described below; for concepts not described below, refer to the foregoing description.
The photographing method in one or more embodiments of the present description may be applied to a portrait photographing scene, for example, a self-portrait photographing scene in which a target area is an area where a portrait is located. In a portrait photographing scene, photographing synthesis can be realized based on a portrait segmentation technology, and an implementation flow diagram of a specific photographing method is shown in fig. 2, and includes the following steps:
step S201: receiving a photographing instruction;
step S202: identifying whether a portrait exists in a shooting area;
for a specific identification method, reference is made to the related description in step S100, which is not described herein again. If the portrait is recognized in the shooting area, the step S203 is proceeded to, otherwise, whether the portrait exists in the shooting area is continuously recognized.
Step S203: if the portrait is recognized in the shooting area, determining a light metering point in the area where the portrait is located to perform light metering, and shooting a photo to obtain a photo A;
the light measuring point can be determined at the center point of the portrait for light measurement.
Step S204: respectively measuring light at the upper left corner and the upper right corner outside the portrait area and taking a picture to obtain a picture B and a picture C;
step S205: mixing the photos B and C to obtain a new photo D;
in one embodiment, the specific mixing comprises: and solving the pixel average value of the pixel points at the same position of the picture B and the picture C, and then taking the pixel average value as the pixel value of the pixel point at the same position in the picture D.
For example, in the RGB color mode, the averages of the R, G, and B components can be computed for each pixel position:

R = (R1 + R2) / 2
G = (G1 + G2) / 2
B = (B1 + B2) / 2

where (R1, G1, B1) is the pixel value of a pixel in photo B, (R2, G2, B2) is the pixel value of the co-located pixel in photo C, and (R, G, B) is the resulting pixel value at that position in photo D. Applying this to every position yields all pixel values of photo D.
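The mixing step just described can be sketched on RGB tuples as follows; the integer division is an arbitrary rounding choice for illustration, not specified by the patent:

```python
def mix_photos(photo_b, photo_c):
    # Photo D: per-channel average of co-located pixels of photos B and C.
    return [[tuple((c1 + c2) // 2 for c1, c2 in zip(p1, p2))
             for p1, p2 in zip(row_b, row_c)]
            for row_b, row_c in zip(photo_b, photo_c)]

b = [[(200, 100, 50), (0, 0, 0)]]
c = [[(100, 100, 150), (255, 255, 255)]]
d = mix_photos(b, c)  # → [[(150, 100, 100), (127, 127, 127)]]
```

Each channel stays within 0..255 because the average of two in-range values is in range, so no clipping is needed here.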
Step S206: obtaining a portrait in the photo A through portrait segmentation;
that is, image data of the portrait area in photograph a is acquired.
Step S207: the portrait in photograph A is overlaid into photograph D to obtain photograph E.
Photo E is stored as the photo in response to the photographing instruction received in step S201.
Because the final photo E is obtained by metering the portrait area (foreground) and the area outside it (background) separately, the photo's brightness is guaranteed to be normal: metering only the foreground would overexpose the background in a backlit scene, and metering only the background would leave the foreground too dark. Both problems are avoided, so photo quality is improved over the prior art.
Based on the same idea as the photographing method provided by the embodiments of the present application, a corresponding photographing apparatus is further provided, as shown in fig. 3. The apparatus specifically includes:
the identification module 301, used for identifying a target area in the shooting area after receiving the shooting instruction;
the first shooting module 302 is used for determining a light metering point in the target area to perform light metering and shooting a picture to obtain a first image;
the second shooting module 303, used for determining the light metering point in a non-target area to perform light metering and shooting a picture to obtain a second image;
and a synthesis module 304, which is used for splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo.
In an embodiment, the second capturing module 303 is specifically configured to:
determining light measuring points at different positions of the non-target area, respectively measuring light at each position and taking a picture to obtain each second sub-image shot under each light measuring point;
and synthesizing each second sub-image to obtain the second image.
In an embodiment, the second capturing module 303 is specifically configured to:
determining the pixel average value of pixel points at the same position of the second sub-images;
and setting the pixel value of the pixel point at the first position in the second image as the pixel average value of the pixel points at the first position in the second sub-images.
In an embodiment, the target area is a portrait area, and the second capturing module 303 is specifically configured to:
performing photometry at a first position of a non-target area and taking a picture, wherein the first position is positioned at the upper left corner outside the portrait area;
and performing photometry at a second position of the non-target area and taking a picture, wherein the second position is positioned at the upper right corner outside the portrait area.
In an embodiment, if the target area is a portrait area, the identifying module 301 is specifically configured to:
acquiring a preview image of the shooting area;
performing image recognition on the preview image based on portrait features;
and if the portrait is identified in the preview image, taking the area where the portrait is as the target area.
In one embodiment, the synthesis module 304 is specifically configured to:
extracting an image of a target area in the first image;
and covering the extracted image of the target area to the target area in the second image to obtain a third image.
The photographing apparatus provided in this embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 2; details are not repeated here. Because the apparatus meters the target area and the non-target area of the shooting area separately and photographs each, the two areas of the stitched third image are each metered, so the photo's brightness is guaranteed to be normal, with neither overexposure nor underexposure, and photo quality is improved over the prior art.
Fig. 4 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 4 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 is configured to identify a target area in the shooting area after receiving the shooting instruction;
determining a light metering point in the target area for metering light, and taking a picture to obtain a first image;
determining a light metering point in a non-target area for light metering, and taking a picture to obtain a second image;
splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo;
light is metered separately for the target area and the non-target area of the shooting area and a photo is taken for each, so that in the stitched third image the images of the target area and the non-target area are each correctly metered; this ensures normal photo brightness, avoids overexposure and underexposure, and improves photo quality compared with the prior art.
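For the second image, the claims further describe metering the non-target area at several positions, shooting one second sub-image per metering point, and synthesizing the sub-images — with claim 3 specifying a per-pixel average. A minimal sketch of that averaging step, under the assumption that all sub-images have identical dimensions; names are illustrative, not from the patent:

```python
def synthesize_second_image(sub_images):
    """Per-pixel average of the second sub-images, one per metering point.

    sub_images is a non-empty list of equally sized row-major images
    (lists of lists of numeric pixel values). Integer division is used
    here for simplicity; the patent does not specify the rounding rule.
    """
    count = len(sub_images)
    height, width = len(sub_images[0]), len(sub_images[0][0])
    return [
        # Average the pixel values at the same (y, x) position
        # across all sub-images.
        [sum(img[y][x] for img in sub_images) // count for x in range(width)]
        for y in range(height)
    ]
```

Averaging differently metered exposures of the same background is one plausible way to even out the brightness before the target area is overlaid on top.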
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process; specifically, after downlink data is received from a base station, it is sent to the processor 110 for processing, and uplink data is sent to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 for output.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near the touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects a user's touch position, detects a signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 4, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 is an interface through which an external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the mobile terminal 100 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above photographing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the above-mentioned photographing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the descriptions thereof are omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method of taking a picture, comprising:
after receiving a photographing instruction, identifying a target area in a photographing area, wherein the target area is an area where a target object is located in the photographing area and is identified according to the characteristics of the target object;
determining a light metering point in the target area for metering light, and taking a picture to obtain a first image;
determining a light measuring point in a non-target area for light measurement, and taking a picture to obtain a second image, wherein the light brightness difference value between the target area and the non-target area is greater than a set threshold value;
splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo;
the determining a light metering point in a non-target area for light metering and taking a picture to obtain a second image specifically includes:
determining light measuring points at different positions of the non-target area, respectively measuring light at each position and taking a picture to obtain each second sub-image shot under each light measuring point;
and synthesizing each second sub-image to obtain the second image.
2. The method according to claim 1, wherein if the target area is a portrait area, the determining light metering points at different positions of the non-target area, performing light metering at each position, and taking a picture to obtain each second sub-image taken at each light metering point includes:
performing light metering at a first position of the non-target area and taking a picture, wherein the first position is located at the upper left corner outside the portrait area;
and performing light metering at a second position of the non-target area and taking a picture, wherein the second position is located at the upper right corner outside the portrait area.
3. The method according to claim 1, wherein synthesizing each of the second sub-images to obtain the second image comprises:
determining the pixel average value of the pixel points at the same position in the second sub-images;
and setting the pixel value of the pixel point at a first position in the second image to the pixel average value of the pixel points at the first position in the second sub-images.
4. The method according to any one of claims 1 to 3, wherein if the target area is a portrait area, the identifying the target area in the shooting area specifically includes:
acquiring a preview image of the shooting area;
performing image recognition on the preview image based on portrait features;
and if a portrait is identified in the preview image, taking the area where the portrait is located as the target area.
5. The method according to claim 1, wherein the splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image specifically comprises:
extracting an image of a target area in the first image;
and overlaying the extracted image of the target area onto the target area in the second image to obtain a third image.
6. A photographing apparatus, comprising:
the identification module is used for identifying a target area in a shooting area after receiving a shooting instruction, wherein the target area is an area where a target object is located in the shooting area and is identified according to the characteristics of the target object;
the first shooting module is used for determining a light metering point in the target area to perform light metering and shooting a picture to obtain a first image;
the second shooting module is used for determining a light metering point in a non-target area to perform light metering and taking a picture to obtain a second image, wherein the light brightness difference value between the target area and the non-target area is greater than a set threshold value; the determining a light metering point in a non-target area for light metering and taking a picture to obtain a second image specifically includes: determining light metering points at different positions of the non-target area, performing light metering at each position and taking a picture to obtain each second sub-image taken at each light metering point; and synthesizing each second sub-image to obtain the second image;
and the synthesis module is used for splicing and synthesizing the image of the target area in the first image and the image of the non-target area in the second image to obtain a third image as a photo.
7. The apparatus of claim 6, wherein the second capture module is specifically configured to:
determining the pixel average value of the pixel points at the same position in the second sub-images;
and setting the pixel value of the pixel point at a first position in the second image to the pixel average value of the pixel points at the first position in the second sub-images.
8. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201810243314.2A 2018-03-23 2018-03-23 Photographing method and device and mobile terminal Active CN108616687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810243314.2A CN108616687B (en) 2018-03-23 2018-03-23 Photographing method and device and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810243314.2A CN108616687B (en) 2018-03-23 2018-03-23 Photographing method and device and mobile terminal

Publications (2)

Publication Number Publication Date
CN108616687A CN108616687A (en) 2018-10-02
CN108616687B true CN108616687B (en) 2020-07-21

Family

ID=63658769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810243314.2A Active CN108616687B (en) 2018-03-23 2018-03-23 Photographing method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN108616687B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109490311B (en) * 2018-10-25 2021-09-10 武汉精立电子技术有限公司 Backlight panel defect detection system and method based on multi-angle shooting
CN110018167B (en) * 2019-04-04 2021-10-29 武汉精立电子技术有限公司 Method and system for rapidly detecting appearance defects of curved screen
WO2022001615A1 (en) * 2020-06-29 2022-01-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and system for automatically removing glare regions
CN111815531B (en) * 2020-07-09 2024-03-01 Oppo广东移动通信有限公司 Image processing method, device, terminal equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905732A (en) * 2014-04-02 2014-07-02 深圳市中兴移动通信有限公司 Shooting method and shooting device
CN107343140A (en) * 2017-06-14 2017-11-10 努比亚技术有限公司 A kind of image processing method and mobile terminal
CN107392087A (en) * 2017-05-27 2017-11-24 华勤通讯技术有限公司 A kind of image processing method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8107762B2 (en) * 2006-03-17 2012-01-31 Qualcomm Incorporated Systems, methods, and apparatus for exposure control
JP2010268441A (en) * 2009-04-16 2010-11-25 Sanyo Electric Co Ltd Image processor, imaging device, and image reproducing device
CN106550184B (en) * 2015-09-18 2020-04-03 中兴通讯股份有限公司 Photo processing method and device
CN105554411B (en) * 2016-02-15 2018-03-06 广东欧珀移动通信有限公司 A kind of photographic method based on screen light filling, device and mobile terminal
CN105959559A (en) * 2016-06-08 2016-09-21 维沃移动通信有限公司 Night scene shooting method and mobile terminal
CN107241559B (en) * 2017-06-16 2020-01-10 Oppo广东移动通信有限公司 Portrait photographing method and device and camera equipment
CN107197169B (en) * 2017-06-22 2019-12-06 维沃移动通信有限公司 high dynamic range image shooting method and mobile terminal


Also Published As

Publication number Publication date
CN108616687A (en) 2018-10-02

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN107592466B (en) Photographing method and mobile terminal
CN108111754B (en) Method for determining image acquisition mode and mobile terminal
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN108616687B (en) Photographing method and device and mobile terminal
CN109218626B (en) Photographing method and terminal
CN110602401A (en) Photographing method and terminal
CN107566749B (en) Shooting method and mobile terminal
CN107948505B (en) Panoramic shooting method and mobile terminal
CN108307106B (en) Image processing method and device and mobile terminal
CN107623818B (en) Image exposure method and mobile terminal
CN108924414B (en) Shooting method and terminal equipment
CN111064895B (en) Virtual shooting method and electronic equipment
CN110213484B (en) Photographing method, terminal equipment and computer readable storage medium
CN111246102A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111601032A (en) Shooting method and device and electronic equipment
CN110798621A (en) Image processing method and electronic equipment
CN109246351B (en) Composition method and terminal equipment
CN108718388B (en) Photographing method and mobile terminal
CN111050069B (en) Shooting method and electronic equipment
CN111447365B (en) Shooting method and electronic equipment
CN111083386B (en) Image processing method and electronic device
CN107807488B (en) Camera assembly, aperture adjusting method and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant