CN107948538B - Imaging method, imaging device, mobile terminal and storage medium - Google Patents


Info

Publication number
CN107948538B
Authority
CN
China
Prior art keywords
brightness
portrait
camera
scene
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711124811.2A
Other languages
Chinese (zh)
Other versions
CN107948538A (en)
Inventor
袁全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711124811.2A priority Critical patent/CN107948538B/en
Publication of CN107948538A publication Critical patent/CN107948538A/en
Application granted granted Critical
Publication of CN107948538B publication Critical patent/CN107948538B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Stroboscope Apparatuses (AREA)

Abstract

The application discloses an imaging method, an imaging device, a mobile terminal and a storage medium, wherein the method comprises the following steps: when the view-finding picture belongs to a target scene, whether the view-finding picture contains a portrait or not is identified, wherein the background of the target scene is in a low-light state; if the view-finding picture contains the portrait, the light sensitivity ISO of the camera is reduced, the flash lamp is started to supplement light for the portrait, and the camera is adopted to shoot the view-finding picture in the state that the flash lamp is started. When a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of the camera, and light is supplemented for the person by starting the flash lamp, so that the effects of simultaneously reducing the background brightness and improving the person brightness are achieved, the shooting effect is improved, and the technical problem that the effects of simultaneously reducing the background brightness and improving the person brightness cannot be achieved in the prior art is solved.

Description

Imaging method, imaging device, mobile terminal and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an imaging method, an imaging apparatus, a mobile terminal, and a storage medium.
Background
In recent years, the technology of camera modules of mobile terminals has been greatly developed, and the quality of photos taken by mobile terminals is getting better. However, because the mobile terminal and the camera module have smaller sizes, the shooting effect of the mobile terminal is still different from that of professional camera equipment.
In some scenes in daily life, videos or photos shot by the mobile terminal cannot achieve ideal effects. For example, in a sunset scene, when people are photographed with sunset as a background, if the camera module automatically adjusts the photographing parameters according to the photographing scene, the background may be overexposed, and the portrait may be underexposed, which may affect the overall photographing effect.
Summary
The application provides an imaging method, an imaging device, a mobile terminal and a storage medium, when a person is shot in a target scene with a background in a low-light state, background brightness is reduced by reducing light sensitivity ISO of a camera, and a flash lamp is turned on to supplement light for the person, so that the effects of reducing the background brightness and improving the person brightness are achieved, and the shooting effect is improved.
The embodiment of the application provides an imaging method, which comprises the following steps:
when the framing picture belongs to a target scene, identifying whether the framing picture contains a portrait or not; the background of the target scene is in a low-light state;
if the framing picture contains a portrait, the light sensitivity ISO of the camera is reduced;
starting a flash lamp to supplement light for the portrait;
and shooting the framing picture by adopting the camera under the state that the flash lamp is turned on.
According to the imaging method, when the view-finding picture belongs to the target scene, whether the view-finding picture contains a portrait or not is recognized, wherein the background of the target scene is in a low-light state; if the view-finding picture contains the portrait, the light sensitivity ISO of the camera is reduced, the flash lamp is started to supplement light for the portrait, and the camera is adopted to shoot the view-finding picture in the state that the flash lamp is started. In the embodiment, when a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of the camera, and light is supplemented for the person by turning on the flash lamp, so that the effects of simultaneously reducing the background brightness and improving the person brightness are achieved, the shooting effect is improved, and the technical problem that the effects of reducing the background brightness and improving the person brightness cannot be simultaneously achieved in the prior art is solved.
Another embodiment of the present application provides an image forming apparatus including:
the identification module is used for identifying whether the framing picture contains a portrait or not when the framing picture belongs to a target scene; the background of the target scene is in a low-light state;
the adjusting module is used for reducing the light sensitivity ISO of the camera if the framing picture contains a portrait;
the light supplementing module is used for starting the flash lamp to supplement light for the portrait;
and the control module is used for shooting the framing picture by adopting the camera under the state that the flash lamp is started.
The imaging device provided by the embodiment of the application identifies whether the framing picture contains a portrait or not when the framing picture belongs to a target scene, wherein the background of the target scene is in a low-light state; if the framing picture contains the portrait, the light sensitivity ISO of the camera is reduced, the flash lamp is started to supplement light for the portrait, and the camera is adopted to shoot the framing picture in a state that the flash lamp is started. In the embodiment, when a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of the camera, and light is supplemented for the person by turning on the flash lamp, so that the effects of simultaneously reducing the background brightness and improving the person brightness are achieved, the shooting effect is improved, and the technical problem that the effects of reducing the background brightness and improving the person brightness cannot be simultaneously achieved in the prior art is solved.
Another embodiment of the present application provides a mobile terminal, including: a camera, a flash lamp, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the program, the imaging method according to the above embodiment of the present application is implemented.
Yet another embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the imaging method according to the above-described embodiment of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of an imaging method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another imaging method provided by embodiments of the present application;
FIG. 3 is a schematic flow chart of another imaging method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of triangulation;
fig. 5 is a schematic structural diagram of an imaging device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another imaging device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An imaging method, an apparatus, a mobile terminal, and a storage medium of embodiments of the present application are described below with reference to the accompanying drawings.
In some scenes in daily life, the background is in a low light state, and such scenes with the background in the low light state are referred to as target scenes hereinafter. In the target scene, the background needs to be imaged clearly although it is in a low light state. Moreover, the background usually includes a light source, so that when the portrait is in a backlight state in a target scene, detailed features cannot be clearly imaged, and a video or a photo including the portrait taken by the mobile terminal cannot achieve an ideal effect.
For example, when the target scene is a sunset scene or a sunward scene, both the background and the portrait need to be imaged clearly. In practice, however, the background contains a light source and the person stands between the light source and the camera, so the person appears backlit. This backlight differs from the ordinary case: although the brightness of the background is larger than that of the portrait, the background as a whole is dark and is in a low-light state in which its average brightness is lower than the first threshold brightness. If the camera module automatically adjusts the shooting parameters according to this average brightness, the light source area in the background whose brightness is higher than the second threshold brightness may be overexposed and the portrait may be underexposed; at the same time, the overexposure distorts the overall background in the imaged picture, so the overall shooting effect is affected.
In order to solve the problem, the application provides an imaging method, when a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of a camera, and a flash lamp is turned on to supplement light to the person, so that the effects of reducing the background brightness and improving the person brightness are achieved simultaneously, and the shooting effect is improved.
Fig. 1 is a schematic flowchart of an imaging method according to an embodiment of the present disclosure.
As shown in fig. 1, the imaging method includes:
step 101, when a framing picture belongs to a target scene, identifying whether the framing picture contains a portrait or not; the background of the target scene is in a low light state.
In a target scene such as a sunset scene or a sunward scene, the background is dark as a whole, yet when a person is shot the background brightness is often larger than the portrait brightness, so the brightness difference between the portrait and the background is large. If only the background brightness is reduced, the background is captured well but the portrait comes out dark; if only the portrait brightness is raised, the background may be overexposed.
In this embodiment, when the finder screen belongs to a scene in which the background is in a low-light state, whether a portrait is included in the finder screen is identified.
As a possible implementation manner, the contour data of the human body is pre-stored in a mobile terminal such as a mobile phone or a tablet computer, wherein the contour data of the human body may be multiple, including the contour data of the human body corresponding to the contour of the human body from different angles such as the front side and the side. When a user shoots with a camera of the mobile terminal, after the camera acquires a view-finding picture, the outline of an object in the view-finding picture is extracted and then compared with the pre-stored human body outline. If the extracted contour is consistent with at least one of the plurality of pre-stored human body contours, the fact that the viewfinder picture contains the portrait can be determined.
As another possible implementation manner, a face detection manner may be adopted, and when it is detected that a face exists in the viewfinder image, it is determined that the viewfinder image contains a portrait. As an example, face contour data including contour data of a front face, a side face, and the like, and contour data of different face types may be stored in the mobile terminal in advance. And extracting the object contour in the view-finding picture to compare with the prestored face contour, if the extracted contour is consistent with at least one of the prestored face contours, extracting key feature points from the region surrounded by the face contour in the view-finding picture, and comparing the key feature points with prestored data of the feature points such as a nose, eyes, a mouth and the like to further determine whether the face exists in the view-finding picture. Optionally, the face detection may also be performed by using the principle based on opencv face detection to determine whether the viewfinder frame contains a portrait.
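As a non-authoritative sketch of the OpenCV-based face detection mentioned above, the snippet below uses the Haar cascade bundled with opencv-python; the cascade file and the detection parameters are illustrative choices, not values prescribed by this application.

```python
import cv2

def viewfinder_contains_portrait(frame_bgr):
    """Return True if at least one face is detected in the viewfinder frame (BGR image)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # scaleFactor / minNeighbors / minSize are typical values; tune per device.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(40, 40))
    return len(faces) > 0
```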
In the present embodiment, the shooting may be a picture taken by a mobile terminal with an image capturing function, or a video or the like.
And 102, if the framing picture contains a portrait, reducing the sensitivity ISO of the camera.
The sensitivity ISO is used to measure the sensitivity of the film to light, and in a digital camera, to measure the sensitivity of the photosensitive device to light. When other conditions such as the aperture and the shutter speed are the same, the higher the sensitivity ISO, the brighter the captured screen.
In this embodiment, when it is determined that a finder image belonging to a target scene includes a portrait, the sensitivity ISO of the camera is reduced to reduce the background brightness, so that the background becomes dark. The current sensitivity ISO has values of 100, 200, 400, 800, and the like, and the larger the value of the sensitivity ISO, the brighter the imaged image is under the same other conditions such as aperture, shutter speed, and the like. In particular implementations, the value of the camera's sensitivity ISO may be adjusted down by one or more levels to suppress background brightness.
For example: 4 ISO classes can be set, with values of 100, 200, 400, 800 respectively. When the measured ISO should be 800, ISO may be set to 400.
It should be noted that the ISO class and the corresponding value-taking method in this embodiment are only examples, and are not used to limit this embodiment, and those skilled in the art may know that the ISO class and the corresponding value-taking method are not limited to the method provided in this embodiment, and other ISO classes and corresponding value-taking methods may also be used.
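A minimal sketch of the level-based ISO reduction described above, assuming the example levels 100/200/400/800 from this embodiment; the function name and the single-step default are illustrative.

```python
ISO_LEVELS = [100, 200, 400, 800]  # example levels from this embodiment

def reduce_iso(metered_iso, steps=1):
    """Drop the metered ISO by `steps` levels to suppress the background brightness."""
    # Snap the metered value to the nearest defined level, then step down,
    # never going below the lowest level.
    idx = min(range(len(ISO_LEVELS)), key=lambda i: abs(ISO_LEVELS[i] - metered_iso))
    return ISO_LEVELS[max(idx - steps, 0)]

# e.g. reduce_iso(800) -> 400, matching the example above
```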
And 103, turning on a flash lamp to supplement light for the portrait.
Because the brightness of the portrait is lower than that of the background, when the framing picture of the target scene contains a portrait, reducing the sensitivity ISO lowers the background brightness further but leaves the portrait unable to be imaged clearly; the flash lamp can therefore be turned on to supplement light for the portrait. Generally, the light supplementing range of the flash lamp is small, so the fill light applied to the portrait has little influence on the imaging of the background.
Specifically, the camera module in the camera is controlled to stop operations such as automatic metering and parameter adjustment, wherein the operations of automatic metering and parameter adjustment of the camera module comprise automatic metering calculation of a focus or a metering point such as a current portrait so as to adjust numerical values such as an aperture, a shutter speed and sensitivity ISO. After the camera module stops automatic photometry, parameter adjustment and other operations, the mobile terminal sends a starting command to the flash lamp. After the flash lamp receives the opening command, the flash lamp is opened to supplement light to the portrait and improve the brightness of the portrait.
And step 104, taking a framing picture by adopting the camera under the state that the flash lamp is turned on.
In the state that the flash lamp is started, after a user clicks a shooting button, the mobile terminal receives a shooting instruction, and the mobile terminal shoots a framing picture by adopting a camera according to the shooting instruction. Or after the flash lamp is turned on for a preset time, for example, after 0.1s, in the state that the flash lamp is turned on, the mobile terminal shoots a framing picture by using the camera. Therefore, the shooting effect of reducing the background brightness and improving the human image brightness can be realized.
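The sketch below strings steps 101 to 104 together. The camera and flash objects and their methods (metered_iso, set_iso, stop_auto_metering, turn_on, capture) are hypothetical device interfaces introduced only for illustration, not a real mobile API, and the helpers viewfinder_contains_portrait and reduce_iso are the sketches shown earlier.

```python
def capture_in_target_scene(camera, flash, frame_bgr):
    """Sketch of steps 101-104 for a frame already classified as the target scene."""
    if not viewfinder_contains_portrait(frame_bgr):      # step 101
        return camera.capture()                          # no portrait: shoot normally
    camera.set_iso(reduce_iso(camera.metered_iso()))     # step 102: lower ISO, darken background
    camera.stop_auto_metering()                          # freeze auto metering / parameter adjustment
    flash.turn_on()                                      # step 103: fill light for the portrait
    return camera.capture()                              # step 104: shoot with the flash on
```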
To make the above embodiment clearer, in this embodiment the target scene is specifically a sunset scene or a sunward scene. For such scenes, the key point is to confirm whether the shooting scene really is a sunset scene or a sunward scene. The imaging method proposed by the present application is explained below by another example. Fig. 2 is a schematic flow chart of another imaging method provided in an embodiment of the present application.
As shown in fig. 2, the imaging method includes:
in step 201, the average brightness of the background in the viewfinder frame is obtained.
When a user takes a picture with a mobile terminal having a camera, the mobile terminal can display a preview view-finding picture. After the viewfinder frame is obtained, the background of the viewfinder frame can be divided into a plurality of areas, the brightness of each area is obtained, and then the average brightness of the background in the viewfinder frame is calculated.
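A possible way to compute the region-wise and average background brightness described above, assuming the luma (Y) channel is used as the brightness measure and a simple 4x4 grid; both are assumptions for illustration.

```python
import cv2
import numpy as np

def background_average_brightness(frame_bgr, grid=(4, 4)):
    """Return per-region mean brightness and the overall average (0-255 scale)."""
    luma = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]  # Y channel as brightness
    h, w = luma.shape
    rows, cols = grid
    region_means = []
    for r in range(rows):
        for c in range(cols):
            block = luma[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            region_means.append(float(block.mean()))
    return region_means, float(np.mean(region_means))
```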
In step 202, if the average brightness of the background is lower than the first threshold brightness, it is identified whether the background in the viewfinder image includes a light source area with brightness higher than the second threshold brightness.
In sunset or sunward scenes, the background often includes the sun and the sky, and sometimes clouds appear in the sky, so that the brightness of each part area is not uniform even when the background is dark as a whole. In the embodiment of the application, the average brightness of the background is compared with a preset first threshold brightness, when the background brightness is lower than the first threshold brightness, the background is relatively dim, and whether the background in the framing picture includes a light source area with brightness higher than a second threshold brightness is further identified. The second threshold brightness is greater than the first threshold brightness, and the light source area can be understood as an area with relatively high brightness in the background.
Specifically, the position, size, and the like of the light source region in the finder screen are set in advance, and for example, if the set light source region is a rectangle, the size may be the length and width of the rectangle, or if the set light source is a circle, the size may be the radius of the circle. And obtaining a limited range of the light source area according to the set size at the set position in the view finding picture, acquiring the pixel value of each pixel point in the view finding picture in the limited range, and judging whether one or more continuous area ranges exist according to the pixel value. If so, the continuous area range may be determined to be the light source area.
Then, the brightness of the light source region in the framing picture is acquired and compared with the second threshold brightness. If the brightness of the light source region is above the second threshold brightness, the light source region may be caused by sunrise or sunset.
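An illustrative sketch of the light source area check described in step 202 and the two paragraphs above: it thresholds the preset region at the second threshold brightness and keeps the largest continuous bright area. The threshold and minimum-area values are assumptions, and the patent's two-stage check (find the continuous range, then compare its brightness) is folded into one thresholding pass here.

```python
import cv2

def find_light_source_region(frame_bgr, roi, second_threshold=200, min_area=500):
    """Look for a continuous bright region inside the preset ROI (x, y, w, h).

    Returns the bounding box of the largest bright connected region, or None.
    second_threshold and min_area are illustrative values only.
    """
    x, y, w, h = roi
    luma = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    patch = luma[y:y + h, x:x + w].copy()
    # Keep only pixels brighter than the second threshold, then group them
    # into connected regions ("continuous area ranges").
    _, bright = cv2.threshold(patch, second_threshold, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bright, connectivity=8)
    best = None
    for i in range(1, n):  # label 0 is the non-bright background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area and (best is None or area > stats[best, cv2.CC_STAT_AREA]):
            best = i
    if best is None:
        return None
    return (x + stats[best, cv2.CC_STAT_LEFT], y + stats[best, cv2.CC_STAT_TOP],
            stats[best, cv2.CC_STAT_WIDTH], stats[best, cv2.CC_STAT_HEIGHT])
```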
In step 203, if it is recognized that the background in the viewfinder image includes a light source area, the shooting time is obtained.
In some cases, such as a daytime backlight scene or a starry sky at night, the whole background is dark and unevenly bright, yet there is no need to reduce the background brightness and raise the portrait brightness. To exclude such scenes, when the background in the framing picture contains a light source area with brightness higher than the second threshold brightness, the mobile terminal acquires the current shooting time, so that daytime backlight scenes and night starry-sky scenes can be excluded by the shooting time.
And step 204, if the shooting time belongs to the sunrise time period or the sunset time period, determining that the framing picture belongs to a sunset scene or a sunward scene.
As a possible implementation manner, the sunrise and sunset time periods differ between seasons in the same region, and also differ, for the same season, between regions in different time zones. Therefore, in the embodiment of the present application, the user can manually set the sunrise time period or the sunset time period according to the region and the season in which the user is located. For example, Beijing lies in the East Eighth time zone while Urumqi lies roughly in the East Sixth time zone, a difference of about two hours, so a user living in Beijing may set the sunrise period to 4:50 to 5:40, and a user living in Urumqi may set the sunrise period to 6:50 to 7:50. After acquiring the photographing time, the mobile terminal compares the photographing time with the sunrise period and the sunset period set by the user to determine whether the photographing time belongs to the sunset period or the sunrise period.
As another possible implementation manner, whether the current shooting time belongs to the sunrise period or the sunset period may be determined by interacting with the server. Specifically, the mobile terminal sends an inquiry request to the server, wherein the inquiry request carries shooting time and shooting place. And after receiving the request, the server inquires the sunrise time and the sunset time of the shooting place on the same day and compares the shooting time sent by the mobile terminal with the sunrise time and the sunset time. If the shooting time belongs to the sunrise or sunset time period, the server sends a query result that the shooting time belongs to the sunrise time period or the sunset time period to the mobile terminal; and if the shooting time does not belong to the sunrise time period or the sunset time period, the server sends a query result that the shooting time does not belong to the sunrise time period or the sunset time period to the mobile terminal. After the mobile terminal receives the returned query result, if the result is that the mobile terminal belongs to the sunrise time period or the sunset time period, the mobile terminal can determine that the framing picture belongs to the sunward scene or the sunset scene.
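A minimal sketch of the time-window check, assuming user-set sunrise and sunset periods; the default sunrise window below reuses the Beijing example above, the sunset window is arbitrary, and in practice both would come from user settings or a server query.

```python
from datetime import time

def in_sunrise_or_sunset_period(shoot_time,
                                sunrise=(time(4, 50), time(5, 40)),
                                sunset=(time(18, 30), time(19, 30))):
    """True if the shooting time falls inside the sunrise or sunset period."""
    def within(t, window):
        start, end = window
        return start <= t <= end
    return within(shoot_time, sunrise) or within(shoot_time, sunset)

# e.g. in_sunrise_or_sunset_period(time(5, 10)) -> True
```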
In this embodiment, by determining whether the shooting time belongs to the sunrise time period or the sunset time period, a night street lamp scene, a daytime backlight scene, a night starry sky scene, and the like can be excluded, thereby improving the accuracy of determining a sunset scene or a sunward scene.
Step 205, when the framing picture belongs to a sunset scene or a sunward scene, identifying whether the framing picture contains a portrait.
For a specific method for identifying whether the framing picture includes the portrait, reference may be made to the related contents described in the above embodiments, and details are not repeated here.
And step 206, if the framing picture contains a portrait, reducing the light sensitivity ISO of the camera.
In this embodiment, when it is determined that a portrait is included in a finder image belonging to a sunset scene or a sunward scene, the sensitivity ISO of the camera is reduced to reduce the background brightness, so that the background becomes dark.
And step 207, turning on a flash lamp to supplement light for the portrait.
Because the brightness of the portrait is lower than that of the background, when the framing picture of the sunset scene or the sunward scene contains a portrait, reducing the sensitivity ISO lowers the background brightness further but leaves the portrait unable to be imaged clearly; the flash lamp can therefore be turned on to supplement light for the portrait. Generally, the light supplementing range of the flash lamp is small, so the fill light applied to the portrait has little influence on the imaging of the background.
Specifically, the camera module in the camera is controlled to stop operations such as automatic photometry and parameter adjustment. After the camera module stops automatic photometry, parameter adjustment and other operations, the mobile terminal sends a starting command to the flash lamp. After the flash lamp receives the opening command, the flash lamp is opened to supplement light to the portrait and improve the brightness of the portrait.
And step 208, taking a framing picture by adopting the camera under the state that the flash lamp is turned on.
In the state that the flash lamp is started, after a user clicks a shooting button, the mobile terminal receives a shooting instruction, and the mobile terminal shoots a framing picture by adopting a camera according to the shooting instruction. Or after the flash lamp is turned on for a preset time, for example, after 0.1s, in the state that the flash lamp is turned on, the mobile terminal shoots a framing picture by using the camera. Therefore, the shooting effect of reducing the background brightness and improving the human image brightness can be realized.
For example, when shooting a person in a sunset scene, the effects of increasing the brightness of the portrait and simultaneously reducing the background brightness of the sunset can be achieved by reducing the light sensitivity ISO and turning on the flash lamp.
According to the imaging method, whether the framing picture is a sunset scene or a sunward scene is determined by judging whether the background of the framing picture contains the light source area meeting the conditions and the time period to which the shooting time belongs, so that a basis is provided for subsequent operations.
In order to further improve the accuracy of judging sunset scenes or sunward scenes, the condition that sunset or sunrise is invisible due to bad weather can be eliminated through weather information.
Specifically, after step 202, the mobile terminal may obtain weather information of the current shooting location from the server, and further determine whether the weather information of the shooting location is clear or cloudy weather. If the weather information of the current shooting place is not clear or cloudy weather, sunrise or sunset can be determined to be invisible, and therefore the view-finding picture can be determined not to belong to a sunny scene or a sunset scene. If the weather information of the current shooting place is sunny weather or cloudy weather, and the current shooting time belongs to the sunrise time period or the sunset time period, the view finding picture can be further confirmed to belong to a sunward scene or a sunset scene.
Further, at sunrise and sunset the sun's rays are affected by atmospheric refraction, so the visible light that reaches the ground is warmer; the human eye typically sees a red to orange sun at these times. Whether the framing picture belongs to a sunset scene or a sunward scene can therefore also be verified according to the color temperature of the light source region in the background.
Specifically, after it is determined in step 202 that the framing picture contains a light source region, the average color temperature of the light source region is acquired and compared with a threshold color temperature. If the average color temperature of the light source region is lower than the threshold color temperature and the shooting time belongs to the sunrise period or the sunset period, it can be determined with greater confidence that the framing picture belongs to a sunward scene or a sunset scene.
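The application does not specify how the average color temperature is computed. One common approximation, sketched below as an assumption rather than the prescribed method, converts the region's mean RGB to CIE xy chromaticity and applies McCamy's formula for correlated color temperature.

```python
import numpy as np

def average_color_temperature(region_rgb):
    """Approximate correlated color temperature (kelvin) of an RGB pixel region."""
    rgb = np.asarray(region_rgb, dtype=np.float64).reshape(-1, 3) / 255.0
    rgb = rgb ** 2.2                     # rough gamma linearisation of sRGB values
    r, g, b = rgb.mean(axis=0)
    # Linear sRGB to CIE XYZ (D65 white point).
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    total = X + Y + Z + 1e-12
    x, y = X / total, Y / total
    n = (x - 0.3320) / (0.1858 - y)      # McCamy (1992) approximation
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33
```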
It is to be understood that, when determining whether the framing picture belongs to a sunset scene or a sunward scene, the determination may be made only when all of the relevant conditions are satisfied, such as the weather information of the shooting location, the average color temperature of the light source region, and the time period to which the shooting time belongs.
When the framing picture contains the portrait, if the shot figure is far away from the camera, the effect of light supplement can not be achieved even if the flash lamp is turned on. This is explained in detail below with reference to fig. 3 by means of a further embodiment.
As shown in fig. 3, the imaging method includes:
step 301, when the framing picture belongs to a target scene, identifying whether the framing picture contains a portrait or not; the background of the target scene is in a low light state.
In this embodiment, when the finder screen belongs to a scene in which the background is in a low-light state, whether a portrait is included in the finder screen is identified. For a specific identification method, reference may be made to the related contents described in the foregoing embodiments, which are not described herein again.
Step 302, if the framing picture contains a portrait, the sensitivity ISO of the camera is reduced.
In this embodiment, when it is determined that a finder image belonging to a target scene includes a portrait, the sensitivity ISO of the camera is reduced to reduce the background brightness, so that the background becomes dark. In a specific implementation, the value of the sensitivity ISO of the camera may be adjusted down by one level or more.
And step 303, when the area proportion of the portrait in the framing picture is larger than the threshold proportion, identifying the distance between the person corresponding to the portrait and the camera.
It can be understood that, when shooting, turning on the flash lamp achieves its light supplementing purpose only when the person is within a certain distance from the camera.
Generally, the smaller the distance between the person and the camera, the larger the proportion of the area occupied by the portrait in the finder image. Therefore, it can first be judged whether the area proportion of the portrait in the framing picture is larger than a preset threshold proportion. When the area proportion of the portrait in the framing picture is larger than the threshold proportion, the distance between the person corresponding to the portrait and the camera is identified.
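A small sketch of the area-proportion check, assuming the portrait is available as a binary mask; the threshold ratio below is an illustrative placeholder, not the value defined by this application.

```python
import numpy as np

def portrait_area_ratio(portrait_mask):
    """Fraction of the framing picture covered by the portrait (mask: nonzero = portrait)."""
    mask = np.asarray(portrait_mask)
    return float(np.count_nonzero(mask)) / mask.size

def should_measure_distance(portrait_mask, threshold_ratio=0.1):
    """threshold_ratio is an illustrative placeholder for the preset threshold proportion."""
    return portrait_area_ratio(portrait_mask) > threshold_ratio
```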
As a possible implementation manner, the distance between the person and the plane where the cameras are located can be calculated by the principle of triangulation. Fig. 4 shows, in real space, the person to be imaged, the positions O_l and O_r of the two cameras, and the focal planes of the two cameras, where the distance between the focal planes and the plane where the two cameras are located is f.
p and p' are the positions of the same person in the two captured images, respectively. The distance from the point p to the left boundary of its captured image is x_l, and the distance from the point p' to the left boundary of its captured image is x_r. O_l and O_r denote the two cameras, which lie on the same plane at a distance Z from each other.
Based on the principle of triangulation, the distance b between the person in fig. 4 and the plane where the two cameras are located satisfies:
    (Z - (x_l - x_r)) / Z = (b - f) / b
From this it can be derived that:
    b = Z · f / d
where d = x_l - x_r is the difference between the positions of the same person in the two captured images. Since Z and f are constants, the distance b between the person and the plane of the cameras can be determined from d.
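Worked as code, the triangulation relation above reduces to the following; the argument names are illustrative, and focal_f and the pixel coordinates must share the same unit.

```python
def distance_from_disparity(x_l, x_r, baseline_z, focal_f):
    """Distance b between the person and the camera plane, from b = Z * f / d."""
    d = x_l - x_r                      # disparity between the two captured images
    if d == 0:
        raise ValueError("zero disparity: the point is effectively at infinity")
    return baseline_z * focal_f / d
```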
As another possible implementation manner, an infrared distance meter is disposed beside the camera, and when the camera takes a picture, the infrared distance meter emits infrared rays to a person and receives infrared rays reflected by the person, and the distance between the person and the camera can be calculated according to the propagation speed of light and the time taken from the emission of the infrared rays to the reception of the reflected infrared rays.
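As a worked example of the time-of-flight calculation just described (the division by two accounts for the round trip of the infrared pulse):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def ir_distance(round_trip_seconds):
    """Distance to the person from the infrared pulse's emit-to-receive time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```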
And step 304, determining that the distance between the person and the camera is not more than the maximum effective distance of the flash lamp, and starting the flash lamp to supplement light for the person.
After the distance between the person and the camera is identified, the distance between the person and the camera is compared with the maximum effective distance of the flash lamp. When the distance between the person and the camera is not greater than the maximum effective distance of the flash lamp, the flash lamp can supplement light to the person, and therefore a turn-on command is sent to the flash lamp. The flash lamp is then turned on according to the turn-on command to supplement light for the portrait.
And 305, taking a framing picture by adopting a camera in the state that the flash lamp is turned on.
In the state that the flash lamp is started, after a user clicks a shooting button, the mobile terminal receives a shooting instruction, and the mobile terminal shoots a framing picture by adopting a camera according to the shooting instruction. Or after the flash lamp is started for a preset time, for example, after 0.1s, the camera is used for shooting a framing picture in the state that the flash lamp is started. Therefore, the shooting effect of reducing the background brightness and improving the human image brightness can be realized.
And step 306, if the distance between the person and the camera is larger than the maximum effective distance of the flash lamp, shooting a framing picture by using the camera under the state that the flash lamp is not started to obtain a target image.
When the distance between the person and the camera is larger than the maximum effective distance of the flash lamp, the flash lamp cannot achieve the light supplementing effect on the person, and therefore the camera is used for shooting a framing picture under the state that the flash lamp is not started to obtain a target image.
Step 307, a portrait area in the target image is identified.
In order to improve the shooting effect, the brightness of the portrait area can be increased. Specifically, the contours of a plurality of objects in the target image may be extracted, and then the extracted contours may be compared with the pre-stored contours of the human body, thereby identifying the portrait area in the target image.
Step 308, the brightness of the portrait area in the target image is increased.
After the portrait area in the target image is identified, the brightness of the portrait area in the target image is increased so as to improve the display effect of the portrait in the target image.
Because the brightness of the portrait area is turned up, the transition between the brightness of the portrait area and that of the surrounding area may become abrupt. The edge of the portrait area can therefore be smoothed using an existing edge smoothing algorithm, such as mean filtering or Gaussian filtering, so that the brightness transition between the portrait area and the surrounding area is smooth and the display effect of the shot image is improved.
Step 309, the brightness of the target image is reduced except for the portrait area.
In order to improve the overall display effect of the target image, the brightness of the target image except for the portrait area may be further reduced to reduce the brightness difference between the portrait area and the target image except for the portrait area, thereby improving the overall display effect of the target image.
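An illustrative sketch combining steps 308 and 309 with the edge smoothing mentioned above: brighten the portrait area, darken the rest, and feather the mask with a Gaussian filter so the transition stays smooth. The gain values and kernel size are assumptions.

```python
import cv2
import numpy as np

def relight_portrait(target_bgr, portrait_mask, gain_up=1.4, gain_down=0.8, ksize=31):
    """Brighten the portrait area, darken the rest, and smooth the transition.

    portrait_mask: uint8 image, 255 inside the portrait area and 0 elsewhere.
    """
    # Feather the mask edge with a Gaussian filter so the brightness transition is smooth.
    soft = cv2.GaussianBlur(portrait_mask, (ksize, ksize), 0).astype(np.float32) / 255.0
    soft = soft[:, :, None]                                  # broadcast over the colour channels
    gain = gain_down + (gain_up - gain_down) * soft          # per-pixel gain map
    out = target_bgr.astype(np.float32) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```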
According to the imaging method, the distance between the person corresponding to the portrait and the camera is compared with the maximum effective distance of the flash lamp, different processing modes are adopted according to the comparison result, and the processing modes are flexible.
In addition, if the user intends to shoot only the scenery and an irrelevant person is captured in the frame unintentionally, the user can tap the screen to focus, and the camera shoots the framing picture without turning on the flash lamp.
According to the imaging method, when the view-finding picture belongs to the target scene, whether the view-finding picture contains a portrait or not is recognized, wherein the background of the target scene is in a low-light state; if the view-finding picture contains the portrait, the light sensitivity ISO of the camera is reduced, the flash lamp is started to supplement light for the portrait, and the camera is adopted to shoot the view-finding picture in the state that the flash lamp is started. In the embodiment, when a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of the camera, and light is supplemented for the person by turning on the flash lamp, so that the effects of simultaneously reducing the background brightness and improving the person brightness are achieved, the shooting effect is improved, and the technical problem that the effects of reducing the background brightness and improving the person brightness cannot be simultaneously achieved in the prior art is solved.
The application also provides an imaging device. Fig. 5 is a schematic structural diagram of an imaging device according to an embodiment of the present application.
As shown in fig. 5, the image forming apparatus includes: an identification module 510, an adjustment module 520, a light supplement module 530, and a control module 540.
The identifying module 510 is configured to identify whether the framing picture includes a portrait or not when the framing picture belongs to a target scene, where a background of the target scene is in a low-light state.
The adjusting module 520 is configured to reduce the sensitivity ISO of the camera if the framing picture includes a portrait.
And a light supplement module 530, configured to turn on the flash lamp to supplement light for the portrait.
And the control module 540 is configured to take a framing picture by using the camera in a state where the flash is turned on.
Specifically, the imaging device provided in the embodiment of the present application may execute the imaging method provided in the embodiment of the present application, and the device may be configured in a mobile terminal. The mobile devices are of various types, and may be mobile phones, tablet computers, notebook computers, and the like. Fig. 5 illustrates a mobile terminal as a mobile phone.
In an embodiment of the present application, the target scene includes a sunset scene and/or a sunward scene, and on the basis of fig. 5, as shown in fig. 6, the imaging apparatus may further include: a first acquisition module 550, a light source region identification module 560, a second acquisition module 570, and a determination module 580.
A first obtaining module 550, configured to obtain an average brightness of a background in the viewfinder frame.
A light source area identification module 560, configured to identify whether the background in the viewfinder frame includes a light source area with brightness higher than a second threshold brightness if the average brightness of the background is lower than the first threshold brightness; the second threshold luminance is greater than the first threshold luminance.
The second obtaining module 570 is configured to obtain the shooting time if it is recognized that the background in the finder frame includes the light source area.
The determining module 580 is configured to determine that the framing picture belongs to a sunset scene or a sunset scene if the shooting time belongs to a sunrise period or a sunset period.
In an embodiment of the present application, the second obtaining module 570 is further configured to obtain weather information of a shooting location;
the determining module 580 is further configured to determine that the weather information of the shooting location is sunny or cloudy weather.
In an embodiment of the present application, the second obtaining module 570 is further configured to identify an average color temperature of the light source region in the viewfinder frame;
the determining module 580 is further configured to determine that the average color temperature is below the threshold color temperature.
Further, in an embodiment of the present application, the imaging apparatus further includes:
the third acquisition module is used for identifying the distance between a person corresponding to the portrait and the camera when the area proportion occupied by the portrait in the framing picture is larger than the threshold proportion;
the determining module 580 is further configured to determine that the distance between the person and the camera is not greater than the maximum effective distance of the flash.
In one embodiment of the present application, the apparatus may further include:
the control module 540 is further configured to, if the distance between the person and the camera is greater than the maximum effective distance of the flash lamp, take a framing picture by using the camera in a state where the flash lamp is not turned on, and obtain a target image.
And the portrait area identification module is used for identifying the portrait area in the target image.
And the brightness adjusting module is used for turning up the brightness of the portrait area in the target image.
In an embodiment of the application, the brightness adjusting module is further configured to turn down the brightness of the target image except for the portrait area.
The division of the modules in the imaging device is only for illustration, and in other embodiments, the imaging device may be divided into different modules as needed to complete all or part of the functions of the imaging device.
It should be noted that the foregoing explanation of the embodiment of the imaging method is also applicable to the imaging device of the embodiment, and therefore, the explanation is not repeated herein.
The imaging device provided by the embodiment of the application identifies whether the framing picture contains a portrait or not when the framing picture belongs to a target scene, wherein the background of the target scene is in a low-light state; if the framing picture contains the portrait, the light sensitivity ISO of the camera is reduced, the flash lamp is started to supplement light for the portrait, and the camera is adopted to shoot the framing picture in a state that the flash lamp is started. In the embodiment, when a person is shot in a target scene with a background in a low-light state, the background brightness is reduced by reducing the light sensitivity ISO of the camera, and light is supplemented for the person by turning on the flash lamp, so that the effects of simultaneously reducing the background brightness and improving the person brightness are achieved, the shooting effect is improved, and the technical problem that the effects of reducing the background brightness and improving the person brightness cannot be simultaneously achieved in the prior art is solved.
The application also provides a mobile terminal. Fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
As shown in fig. 7, the mobile terminal includes: camera 710, flash 720, memory 730, processor 740, and a computer program stored on memory 730 and executable on processor 740.
Wherein, when the processor 740 executes the program, the imaging method according to any of the foregoing embodiments is implemented.
The present application also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the imaging method according to any of the preceding embodiments.
The mobile terminal may further include an Image Processing circuit, which may be implemented by hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes an ISP processor 940 and a control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, and the ISP processor 940 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. Imaging device 910 may specifically include two cameras, each of which may include one or more lenses 912 and an image sensor 914. Image sensor 914 may include an array of color filters (e.g., Bayer filters), and image sensor 914 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 914 and provide a set of raw image data that may be processed by ISP processor 940. The sensor 920 may provide the raw image data to the ISP processor 940 based on the type of interface of the sensor 920. The sensor 920 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 940 may also receive pixel data from image memory 930. For example, raw pixel data is sent from the sensor 920 interface to the image memory 930, and the raw pixel data in the image memory 930 is then provided to the ISP processor 940 for processing. The image Memory 930 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 920 interface or from the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 930 for additional processing before being displayed. ISP processor 940 receives processed data from image memory 930 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 970 for viewing by a user and/or further processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 940 may also be sent to image memory 930 and display 970 may read image data from image memory 930. In one embodiment, image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 960 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on a display 970 device. The encoder/decoder 960 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and, in turn, control parameters based on the received statistical data. For example, the control parameters may include sensor 920 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
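As a non-authoritative illustration of how such functional units might be grouped into a single processing module in software, the sketch below arranges the identification, adjusting, light-supplementing and control functions of the imaging apparatus (recited in the claims that follow) as methods of one Python class. The class name, method names and the assumed camera interface are hypothetical stand-ins chosen only for illustration.

```python
# Illustrative only: a software "integrated module" bundling the functional
# units of the imaging apparatus; all names here are hypothetical.
class ImagingModule:
    def __init__(self, camera):
        self.camera = camera          # platform camera handle (assumed interface)

    def identify_portrait(self, frame) -> bool:
        """Identification unit: does the framing picture contain a portrait?"""
        raise NotImplementedError     # e.g. a face/person detector would run here

    def reduce_iso(self) -> None:
        """Adjusting unit: lower the camera's light sensitivity ISO."""
        self.camera.iso = max(50, self.camera.iso // 2)

    def supplement_light(self) -> None:
        """Light-supplementing unit: turn on the flash lamp for the portrait."""
        self.camera.flash_on = True

    def capture(self):
        """Control unit: shoot the framing picture with the flash turned on."""
        return self.camera.shoot()
```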

Claims (7)

1. An imaging method, characterized in that the method comprises the steps of:
when the framing picture belongs to a target scene, identifying whether the framing picture contains a portrait; the background of the target scene is in a low-light state, and the target scene comprises a sunset scene and/or a sunward scene; before the identifying whether the framing picture contains a portrait, the method further comprises: acquiring the average brightness of the background in the framing picture; if the average brightness of the background is lower than a first threshold brightness, identifying whether the background in the framing picture comprises a light source area with a brightness higher than a second threshold brightness, wherein the second threshold brightness is higher than the first threshold brightness; if it is identified that the background in the framing picture comprises a light source area, acquiring a shooting time; and if the shooting time belongs to a sunrise time period or a sunset time period, determining that the framing picture belongs to a sunset scene or a sunward scene;
the identifying whether the background in the framing picture comprises a light source area with a brightness higher than the second threshold brightness comprises: acquiring a limited range according to a preset position and size of the light source area in the framing picture; determining, according to the pixel value of each pixel point within the limited range, whether a continuous area range exists; if so, determining the continuous area range as the light source area; and judging whether the brightness of the light source area is higher than the second threshold brightness;
if the framing picture contains a portrait, reducing the light sensitivity ISO of the camera; after identifying that the framing picture contains the portrait, the method further comprises: when the area proportion occupied by the portrait in the framing picture is larger than a threshold proportion, identifying the distance between the person corresponding to the portrait and the camera; and determining that the distance between the person and the camera is not greater than the maximum effective distance of a flash lamp;
starting the flash lamp to supplement light for the portrait;
shooting the framing picture with the camera in a state in which the flash lamp is turned on;
after the identifying of the distance between the person corresponding to the portrait and the camera, the method further comprises: if the distance between the person and the camera is greater than the maximum effective distance of the flash lamp, shooting the framing picture with the camera in a state in which the flash lamp is not turned on, to obtain a target image; identifying a portrait area in the target image; and adjusting the brightness of the portrait area in the target image.
2. The imaging method of claim 1, wherein before the determining that the framing picture belongs to a sunset scene or a sunward scene, the method further comprises:
acquiring weather information of a shooting place;
and determining that the weather information of the shooting place indicates sunny or cloudy weather.
3. The imaging method of claim 1, wherein before the determining that the framing picture belongs to a sunset scene or a sunward scene, the method further comprises:
identifying an average color temperature of the light source area in the framing picture;
determining that the average color temperature is below a threshold color temperature.
4. The imaging method according to claim 1, wherein after the adjusting of the brightness of the portrait area in the target image, the method further comprises:
and reducing the brightness of the portion of the target image other than the portrait area.
5. An imaging apparatus, characterized in that the apparatus comprises:
the identification module is used for identifying whether the framing picture contains a portrait when the framing picture belongs to a target scene; the background of the target scene is in a low-light state;
the adjusting module is used for reducing the light sensitivity ISO of the camera if the framing picture contains a portrait;
the light supplementing module is used for starting the flash lamp to supplement light for the portrait;
the control module is used for shooting the framing picture with the camera in a state in which the flash lamp is turned on;
the target scene comprises a sunset scene and/or a sunward scene, and the device further comprises:
the first acquisition module is used for acquiring the average brightness of the background in the framing picture;
a light source area identification module, configured to identify, if the average brightness of the background is lower than a first threshold brightness, whether the background in the framing picture comprises a light source area with a brightness higher than a second threshold brightness, wherein the second threshold brightness is higher than the first threshold brightness; the light source area identification module is specifically configured to: acquire a limited range according to a preset position and size of the light source area in the framing picture; determine, according to the pixel value of each pixel point within the limited range, whether a continuous area range exists; if so, determine the continuous area range as the light source area; and judge whether the brightness of the light source area is higher than the second threshold brightness;
the second acquisition module is used for acquiring shooting time if the background in the framing picture is identified to contain a light source area;
the determining module is used for determining that the framing picture belongs to a sunset scene or a sunward scene if the shooting time belongs to a sunrise time period or a sunset time period;
the third acquisition module is used for identifying the distance between a person corresponding to the portrait and the camera when the area proportion occupied by the portrait in the framing picture is larger than the threshold proportion;
the determining module is further used for determining that the distance between the person and the camera is not greater than the maximum effective distance of the flash lamp;
the control module is further used for, if the distance between the person and the camera is greater than the maximum effective distance of the flash lamp, shooting the framing picture with the camera in a state in which the flash lamp is not turned on, to obtain a target image;
the portrait area identification module is used for identifying a portrait area in the target image;
and the brightness adjusting module is used for adjusting the brightness of the portrait area in the target image.
6. A mobile terminal, comprising: a camera, a flash lamp, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the imaging method of any one of claims 1-4.
7. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the imaging method of any one of claims 1-4.
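The target-scene test recited in claims 1 and 5 above (average background brightness below a first threshold, a continuous light source area brighter than a second threshold inside a preset window, and a shooting time falling in a sunrise or sunset period) can be pictured with a minimal Python sketch. All numeric thresholds, the preset window, the region rule and the helper names below are illustrative assumptions, not values or algorithms taken from the patent.

```python
# Minimal sketch, assuming a single-channel luminance image in [0, 255];
# thresholds, window and the continuity rule are illustrative assumptions.
from datetime import datetime, time

import numpy as np
from scipy import ndimage

FIRST_THRESHOLD = 60      # "first threshold brightness" for the background average
SECOND_THRESHOLD = 180    # "second threshold brightness" for the light source area
PRESET_WINDOW = (slice(0, 120), slice(80, 240))   # preset position/size of the light source area
SUNRISE = (time(5, 30), time(7, 30))              # assumed sunrise time period
SUNSET = (time(17, 0), time(19, 0))               # assumed sunset time period


def light_source_brightness(luma: np.ndarray):
    """Return the mean brightness of a continuous bright region inside the
    preset window, or None if no continuous area range exists."""
    window = luma[PRESET_WINDOW]                    # limited range from the preset position/size
    candidate = window > window.mean() + 40         # pixels that stand out from their surroundings
    labels, count = ndimage.label(candidate)        # look for continuous area ranges
    if count == 0:
        return None
    sizes = ndimage.sum(candidate, labels, index=range(1, count + 1))
    region = labels == (int(np.argmax(sizes)) + 1)  # keep the largest continuous area
    return float(window[region].mean())


def is_target_scene(background_luma: np.ndarray, shot_time: datetime) -> bool:
    """Sunset/sunward scene: dark background + bright light source + sunrise/sunset time."""
    if background_luma.mean() >= FIRST_THRESHOLD:
        return False
    brightness = light_source_brightness(background_luma)
    if brightness is None or brightness <= SECOND_THRESHOLD:
        return False
    t = shot_time.time()
    return (SUNRISE[0] <= t <= SUNRISE[1]) or (SUNSET[0] <= t <= SUNSET[1])
```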
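Similarly, the decision whether to lower the ISO and fire the flash, or to fall back to post-capture portrait brightening when the person is beyond the flash's reach (claims 1, 4 and 5), might look roughly like the sketch below. The camera stub, the portrait descriptor, the area-ratio threshold, the maximum effective flash distance and the gain factors are hypothetical stand-ins; real code would sit behind the platform's camera API.

```python
# Hedged sketch of the capture decision; every class, value and helper here
# is a hypothetical stand-in, not a real camera API.
from dataclasses import dataclass
from typing import Optional

import numpy as np

AREA_RATIO_THRESHOLD = 0.10   # portrait must occupy at least this share of the frame
MAX_FLASH_DISTANCE_M = 2.5    # assumed maximum effective distance of the flash lamp


@dataclass
class Portrait:
    area_ratio: float          # share of the framing picture occupied by the portrait
    distance_m: float          # estimated person-to-camera distance
    mask: np.ndarray           # boolean portrait-area mask over the image


@dataclass
class CameraStub:
    iso: int = 400
    flash_on: bool = False

    def shoot(self) -> np.ndarray:
        # Stand-in for an actual exposure: returns a gray test frame.
        return np.full((480, 640, 3), 128, dtype=np.uint8)


def capture_in_target_scene(camera: CameraStub, portrait: Optional[Portrait]) -> np.ndarray:
    if portrait is None:
        return camera.shoot()                       # no portrait: shoot normally

    camera.iso = max(50, camera.iso // 2)           # reduce ISO to darken the bright background

    if portrait.area_ratio > AREA_RATIO_THRESHOLD and portrait.distance_m <= MAX_FLASH_DISTANCE_M:
        camera.flash_on = True                      # supplement light for the portrait
        return camera.shoot()                       # shoot with the flash lamp turned on

    # Person beyond the flash's reach: shoot without flash, then adjust brightness instead.
    image = camera.shoot().astype(np.float32)
    image[portrait.mask] *= 1.4                     # raise the portrait-area brightness
    image[~portrait.mask] *= 0.8                    # and dim the rest (as in claim 4)
    return np.clip(image, 0, 255).astype(np.uint8)
```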
CN201711124811.2A 2017-11-14 2017-11-14 Imaging method, imaging device, mobile terminal and storage medium Expired - Fee Related CN107948538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711124811.2A CN107948538B (en) 2017-11-14 2017-11-14 Imaging method, imaging device, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711124811.2A CN107948538B (en) 2017-11-14 2017-11-14 Imaging method, imaging device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN107948538A (en) 2018-04-20
CN107948538B (en) 2020-02-21

Family

ID=61932120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711124811.2A Expired - Fee Related CN107948538B (en) 2017-11-14 2017-11-14 Imaging method, imaging device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN107948538B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826372B (en) * 2018-08-10 2024-04-09 浙江宇视科技有限公司 Face feature point detection method and device
CN109462922A (en) * 2018-09-20 2019-03-12 百度在线网络技术(北京)有限公司 Control method, device, equipment and the computer readable storage medium of lighting apparatus
CN109120864B (en) * 2018-10-23 2021-02-02 Oppo广东移动通信有限公司 Light supplement processing method and device, storage medium and mobile terminal
CN109348089B (en) * 2018-11-22 2020-05-22 Oppo广东移动通信有限公司 Night scene image processing method and device, electronic equipment and storage medium
CN110611774B (en) * 2019-09-20 2021-07-16 深圳市梦网视讯有限公司 Illumination compensation method and system for road surface monitoring video
CN110944163A (en) * 2019-11-21 2020-03-31 维沃移动通信有限公司 Image processing method and electronic equipment
CN111385454B (en) * 2020-03-20 2022-03-08 深圳传音控股股份有限公司 Photographing method, mobile terminal and computer-readable storage medium
CN112687108B (en) * 2020-06-28 2022-08-09 浙江大华技术股份有限公司 Detection method and detection device for man-vehicle matching, shooting device and storage medium
CN112383685B (en) * 2020-11-02 2023-05-02 北京如影智能科技有限公司 Holding device, control method and control device
CN114885096B (en) * 2022-03-29 2024-03-15 北京旷视科技有限公司 Shooting mode switching method, electronic equipment and storage medium
CN116977332A (en) * 2023-09-21 2023-10-31 合肥联宝信息技术有限公司 Camera light filling lamp performance test method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101626457A (en) * 2008-07-08 2010-01-13 华晶科技股份有限公司 Image capturing method
CN101800858A (en) * 2009-02-06 2010-08-11 佳能株式会社 Picture pick-up device and control method thereof
CN102542552A (en) * 2010-12-21 2012-07-04 北京汉王智通科技有限公司 Frontlighting and backlighting judgment of video images and detection method of shooting time
CN102685379A (en) * 2011-03-18 2012-09-19 卡西欧计算机株式会社 Image processing apparatus with function for specifying image quality, and method and storage medium
CN105472268A (en) * 2015-12-24 2016-04-06 Tcl集团股份有限公司 Method and device for filling light during photographing
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
CN106713780A (en) * 2017-01-16 2017-05-24 维沃移动通信有限公司 Control method for flash lamp and mobile terminal
CN107135353A (en) * 2017-04-27 2017-09-05 努比亚技术有限公司 A kind of exposure regulating method, mobile terminal and computer-readable recording medium

Also Published As

Publication number Publication date
CN107948538A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107948538B (en) Imaging method, imaging device, mobile terminal and storage medium
CN107948519B (en) Image processing method, device and equipment
CN108419023B (en) Method for generating high dynamic range image and related equipment
CN107977940B (en) Background blurring processing method, device and equipment
CN109788207B (en) Image synthesis method and device, electronic equipment and readable storage medium
CN108683862B (en) Imaging control method, imaging control device, electronic equipment and computer-readable storage medium
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108055452B (en) Image processing method, device and equipment
CN108712608B (en) Terminal equipment shooting method and device
CN107846556B (en) Imaging method, imaging device, mobile terminal and storage medium
CN108024054B (en) Image processing method, device, equipment and storage medium
WO2021109620A1 (en) Exposure parameter adjustment method and apparatus
US8106965B2 (en) Image capturing device which corrects a target luminance, based on which an exposure condition is determined
CN112102386A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111447374B (en) Light supplement adjusting method and device, electronic equipment and storage medium
CN110213494B (en) Photographing method and device, electronic equipment and computer readable storage medium
CN108616689B (en) Portrait-based high dynamic range image acquisition method, device and equipment
US20100328498A1 (en) Shooting parameter adjustment method for face detection and image capturing device for face detection
CN108024057B (en) Background blurring processing method, device and equipment
CN108933899A (en) Panorama shooting method, device, terminal and computer readable storage medium
CN109040607B (en) Imaging control method, imaging control device, electronic device and computer-readable storage medium
CN108156369B (en) Image processing method and device
CN108337446B (en) High dynamic range image acquisition method, device and equipment based on double cameras
CN102300049A (en) Image signal processing system
WO2020034702A1 (en) Control method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200221