CN113411498B - Image shooting method, mobile terminal and storage medium - Google Patents

Image shooting method, mobile terminal and storage medium

Info

Publication number
CN113411498B
CN113411498B (application CN202110674807.3A)
Authority
CN
China
Prior art keywords
image data
preset information
camera
image
brightness
Prior art date
Legal status
Active
Application number
CN202110674807.3A
Other languages
Chinese (zh)
Other versions
CN113411498A (en)
Inventor
王洪伟
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN202110674807.3A
Publication of CN113411498A
Application granted
Publication of CN113411498B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 - Control of the SSIS exposure
    • H04N25/57 - Control of the dynamic range
    • H04N9/00 - Details of colour television systems
    • H04N9/64 - Circuits for processing colour signals
    • H04N9/73 - Colour balance circuits, e.g. white balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image shooting method applied to a mobile terminal, comprising the following steps: when the image data collected by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information; acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera; and generating a target image according to the target object image data and the preset information image data. The application also discloses a mobile terminal and a storage medium. According to the method and the device, a clear picture is shot in the preset scene, which solves the problem that the background area is too bright and the target object area is too dark when a picture containing the preset information is shot.

Description

Image shooting method, mobile terminal and storage medium
Technical Field
The application relates to the technical field of photographing, in particular to an image photographing method, a mobile terminal and a storage medium.
Background
With the rapid development of mobile terminals such as mobile phones and tablet computers, the shooting functions of mobile terminals have become increasingly diversified. When a user uses a mobile terminal to shoot a picture containing preset information (such as sky, lamplight and the like), backlit shooting causes the captured picture to be overexposed, so that the shot object cannot be seen clearly against the over-bright background, and the shooting effect is poor.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image shooting method, a mobile terminal and a storage medium, which solve the problem that the background area is too bright and the target object area is too dark when a picture containing preset information is shot.
In order to solve the above technical problems, the present application provides an image capturing method, which is applied to a mobile terminal, and the steps of the image capturing method include:
when the image data collected by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
and generating a target image according to the target object image data and the preset information image data.
Optionally, the photographing parameters include at least one of aperture, exposure time, and white balance.
Optionally, when it is detected that the image data collected by the first camera and/or the second camera includes preset information, the step of adjusting the shooting parameters of the second camera according to the preset information includes:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of the preset information area, wherein, optionally, the preset information includes the brightness of the preset information area;
determining a target shooting parameter according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
Optionally, the step of obtaining the brightness of the preset information area includes:
dividing the image data of the preset information area into a plurality of areas, and obtaining the brightness of each area;
obtaining a brightness average value according to the brightness of each area;
and determining the brightness of a preset information area by the brightness average value.
Optionally, the image capturing method further includes:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of the preset information area, wherein the preset information optionally comprises the area of the preset information area;
and executing the step of adjusting the shooting parameters of the second camera according to the preset information when the area of the preset information area is larger than a preset threshold value.
Optionally, when it is detected that the image data collected by the first camera and/or the second camera includes preset information, the step of adjusting the shooting parameters of the second camera according to the preset information includes:
when the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting shooting parameters of the second camera according to the preset information.
Optionally, while the step of adjusting the shooting parameters of the second camera according to the preset information is performed, the following steps are also performed:
acquiring a parameter threshold of the target object based on the acquired image data, the parameter threshold including at least one of a brightness value and a sharpness;
and when the parameter threshold is smaller than a preset threshold, adjusting shooting parameters of the first camera.
Optionally, the step of acquiring target object image data in the image data acquired by the first camera and second preset information image data in the image data acquired by the second camera includes:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
Extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
Optionally, if the second camera is a wide-angle camera, after the step of extracting second preset information image data of the second camera according to a preset information model, the image capturing method further includes:
acquiring radial distortion parameters and tangential distortion parameters of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameters and the tangential distortion parameters;
and taking the adjusted second preset information image data as the preset information image data.
Optionally, the step of generating the target image according to the target object image data and the preset information image data includes:
replacing first preset information image data of the first camera with the preset information image data;
and carrying out preset processing on the target object image data and the preset information image data to generate a target image.
Optionally, after the step of performing the preset processing on the target object image data and the preset information image data, the method further includes:
Acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at the preset position;
determining a target pixel at the preset position according to the first pixel and the second pixel;
and adjusting the pixel at the preset position according to the target pixel to generate a target image.
Optionally, after the step of generating the target image according to the target object image data and the preset information image data, the image capturing method further includes:
acquiring brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
Optionally, the step of adjusting the target image according to the brightness includes:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or the preset information area so that the brightness difference value is smaller than or equal to the preset threshold value.
The application also provides a mobile terminal, which comprises: a memory and a processor, wherein the memory stores an image shooting program, and the image shooting program, when executed by the processor, implements the steps of the image shooting method described above.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image capturing method as described in any one of the above.
As described above, the image shooting method is applied to a mobile terminal provided with a first camera and a second camera. When a user uses the cameras to preview a shooting scene and it is detected that the image data collected by the first camera and/or the second camera contains preset information, the shooting parameters of the second camera are adjusted according to the preset information, and/or a parameter threshold of the target object is acquired and the shooting parameters of the first camera are adjusted according to the parameter threshold. Target object image data is then acquired from the image data collected by the first camera, preset information image data is acquired from the image data collected by the second camera, and a target image is generated from the target object image data and the preset information image data. The target image contains both a clear target object and a clear preset information image: the target object area is clear because the target object image data is acquired based on the parameter threshold of the target object, and the preset information area is clear because the preset information image data is acquired based on the ambient brightness of the preset information area. A clear picture is therefore shot in the preset scene, which solves the problem that the background area is too bright and the target object area is too dark when a picture containing the preset information is shot.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a first embodiment of an image capturing method of the present application;
FIG. 3 is a flowchart of a second embodiment of an image capturing method according to the present application;
fig. 4 is a schematic diagram of a refinement flow of step S10 in the third embodiment of the image capturing method of the present application;
fig. 5 is a schematic diagram of a refinement flow of step S11 in the fourth embodiment of the image capturing method of the present application;
fig. 6 is a schematic diagram of a refinement flow of step S20 in a fifth embodiment of the image capturing method of the present application;
fig. 7 is a schematic diagram of a refinement flow of step S23 in a sixth embodiment of the image capturing method of the present application;
Fig. 8 is a schematic diagram of a refinement flow of step S30 in a seventh embodiment of the image capturing method of the present application;
fig. 9 is a schematic flow chart of an eighth embodiment of an image capturing method of the present application;
fig. 10 is a flowchart of a ninth embodiment of an image capturing method of the present application;
fig. 11 is a schematic diagram of a refinement flow of step S70 in the tenth embodiment of the image capturing method of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, elements having the same name in different embodiments of the present application may have the same meaning or different meanings, and the particular meaning is to be determined by its interpretation in the particular embodiment or by further combining the context of the particular embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The term "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or," "and/or," and "including at least one of" as used herein may be construed as inclusive, meaning any one or any combination. For example, "including at least one of A, B, C" means any one of the following: A; B; C; A and B; A and C; B and C; A and B and C. Likewise, "A, B or C" or "A, B and/or C" means any one of the following: A; B; C; A and B; A and C; B and C; A and B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (stated condition or event)" may be interpreted as "when determined" or "in response to determination" or "when detected (stated condition or event)" or "in response to detection (stated condition or event), depending on the context.
It should be noted that step numbers such as S10 and S20 are adopted herein in order to describe the corresponding content more clearly and briefly, and they do not constitute a substantial limitation on the sequence; those skilled in the art may execute S20 first and then S10 when implementing the application, and this all falls within the protection scope of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
The main solutions of the embodiments of the present invention are: when the image data collected by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information; acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera; and generating a target image according to the target object image data and the preset information image data.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, or a mobile terminal device with a display function such as a smart phone, a tablet computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer and the like.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements particularly used for mobile purposes, the configuration according to the embodiment of the present application can also be applied to a fixed terminal.
As shown in fig. 1, the terminal may include: a first camera 1006, a second camera 1007, a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communications bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display (Display), an input unit such as a Keyboard (Keyboard), a microphone array, etc., and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the terminal may also include an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors include, for example, a light sensor, a motion sensor, and other sensors. Optionally, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or backlight when the mobile terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when the mobile terminal is stationary; it can be used for recognizing the gesture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like. Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an image capturing program may be included in the memory 1005 as one type of computer storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call an image capturing program stored in the memory 1005 and perform the following operations:
when the image data collected by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
generating a target image according to the target object image data and the preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of the preset information area, wherein, optionally, the preset information includes the brightness of the preset information area;
determining a target shooting parameter according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
dividing the image data of the preset information area into a plurality of areas, and obtaining the brightness of each area;
obtaining a brightness average value according to the brightness of each area;
and determining the brightness of a preset information area by the brightness average value.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of the preset information area, wherein the preset information optionally comprises the area of the preset information area;
and executing the step of adjusting the shooting parameters of the second camera according to the preset information when the area of the preset information area is larger than a preset threshold value.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
when the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting shooting parameters of the second camera according to the preset information.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
acquiring a parameter threshold of the target object based on the acquired image data, the parameter threshold including at least one of a brightness value and a sharpness;
and when the parameter threshold is smaller than a preset threshold, adjusting shooting parameters of the first camera.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
acquiring radial distortion parameters and tangential distortion parameters of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameters and the tangential distortion parameters;
and taking the adjusted second preset information image data as the preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
replacing first preset information image data of the first camera with the preset information image data;
and carrying out preset processing on the target object image data and the preset information image data to generate a target image.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at the preset position;
determining a target pixel at the preset position according to the first pixel and the second pixel;
And adjusting the pixel at the preset position according to the target pixel to generate a target image.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
acquiring brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and also perform the following operations:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or the preset information area so that the brightness difference value is smaller than or equal to the preset threshold value.
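For illustration only, the last pair of operations above (comparing the brightness of the target object and the preset information area in the generated target image and narrowing the difference) could be sketched in Python as follows; the gain-based correction, the threshold value and the function name are assumptions, not part of the patent.

```python
import numpy as np

def equalize_regions(target_img: np.ndarray, mask_info: np.ndarray,
                     max_diff: float = 40.0) -> np.ndarray:
    """Hypothetical sketch: if the mean brightness of the preset-information
    region differs from the mean brightness of the target-object region by
    more than `max_diff`, scale the preset-information region toward the
    target object so the difference falls within the threshold."""
    img = target_img.astype(np.float32)
    info_mean = float(img[mask_info].mean())     # mask_info: boolean (H, W) mask
    obj_mean = float(img[~mask_info].mean())
    diff = info_mean - obj_mean
    if abs(diff) > max_diff:
        gain = (obj_mean + np.sign(diff) * max_diff) / max(info_mean, 1e-3)
        img[mask_info] *= gain                   # pull the region toward the object brightness
    return np.clip(img, 0, 255).astype(target_img.dtype)
```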
With the popularization of digital cameras, various mobile terminals equipped with cameras are used for photographing. When shooting an image containing preset information, because the preset information area is too bright, the captured image is overexposed in that area, and the shot object cannot be seen clearly against the over-bright background, so the shooting effect is poor.
Based on this, the first embodiment is proposed.
Referring to fig. 2, a first embodiment of the present invention proposes an image photographing method including the steps of:
Step S10, when the image data acquired by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
step S20, acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
and step S30, generating a target image according to the target object image data and the preset information image data.
In this embodiment, at least two cameras are built in the mobile terminal, and image data is synchronously acquired through the two cameras, and it can be understood that more than two cameras can be built in the mobile terminal.
The embodiment of the application is illustrated with two cameras.
Optionally, after the user turns on the first camera and the second camera, the first camera and the second camera collect image data in the shooting scene, and a preview screen generated from the image data collected by the first camera or the second camera allows the user to view the scene to be captured. The image data includes, but is not limited to, image brightness, a target object, a background area, definition, shooting time, shooting place, and the like. Whether the image data includes preset information is detected and identified according to the image data displayed on the preview interface. When the image data includes preset information, in order to obtain a clear picture containing the preset information, the shooting parameters of the second camera are adjusted according to the preset information.
Optionally, the preset information includes, but is not limited to, sky, sun, moon, light, etc.
Optionally, the shooting parameters include at least one of aperture, exposure time, and white balance. The aperture is used for increasing or reducing the amount of incoming light during shooting and adjusting the brightness of the shot; the exposure time is used for increasing or reducing the shutter speed during shooting and thereby adjusting the amount of incoming light; white balance is used for adjusting the color temperature during shooting and maintaining the color balance of the image.
Optionally, the shooting parameters of the second camera are automatically adjusted according to the preset information, and the image data are acquired through the adjusted shooting parameters, so that the preset information image acquired by the second camera is clear.
Optionally, in an embodiment, when the preset information is sky, the brightness of the sky area in the preview image is obtained, and the shooting parameters of the second camera are adjusted according to the brightness. For example, when the brightness of the sky area exceeds a preset brightness threshold, the sky area of the current shot is too bright; if the image were shot based on the default rule, the sky area would be overexposed and no clear sky details would be visible. Therefore, to avoid overexposure of the sky area, the amount of incoming light can be reduced by using a smaller aperture and/or increasing the shutter speed, and at the same time the white balance can be adjusted according to the current shooting scene, so that the color of the sky area is not distorted and the color balance is maintained.
Optionally, in another embodiment, when the preset information is the sun, the current shooting scene contains the sun; the brightness of the sun area in the preview image is obtained, and the shooting parameters of the second camera are adjusted according to the brightness. In general, the brightness of the sun area is too high, and the sun area of an image shot based on the default rule would be overexposed, so the shooting parameters of the second camera can be automatically adjusted according to the difference between the brightness of the sun area and the preset brightness threshold to reduce the amount of incoming light, thereby ensuring that the sun area is not overexposed.
Optionally, in another embodiment, when the preset information is the moon, the current shooting scene is at night and the ambient brightness is low. In order to obtain a clear moon image, the shooting parameters of the second camera may be adjusted to increase the amount of incoming light, so that the brightness of the image corresponding to the moon area is increased and the details of the moon area can be seen clearly.
Optionally, in another embodiment, when the preset information is lamplight, if the brightness value of the light is higher than that of the target object, the light area in an image captured according to the default rule would be washed out. In this case, the shooting parameters of the second camera may be automatically adjusted according to the difference between the brightness of the light area and the preset brightness threshold, for example by reducing the aperture and/or increasing the shutter speed, and at the same time the white balance may be adjusted according to the color temperature of the current light, so that the color of the captured light area remains balanced.
Optionally, the preset information is not limited to the above embodiments. In the actual shooting process, image data of the preset information area is acquired, and the shooting parameters of the second camera are automatically adjusted according to the image data, so that overexposure or underexposure of the preset information area is avoided while the color balance of the preset information area is ensured, and a preset information image with high definition and balanced color is acquired.
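As an illustration of the adjustment logic described in the preceding paragraphs, the following Python sketch maps the detected type of preset information and the measured brightness of its area to an adjustment of the second camera's aperture, exposure time and white balance. The helper names, step sizes and thresholds are assumptions for illustration only, not values prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class ShootingParams:
    aperture: float        # f-number; a larger value means a smaller aperture and less light
    exposure_time: float   # seconds; a shorter time means less light
    white_balance: int     # assumed colour temperature in kelvin

def adjust_second_camera(params: ShootingParams, info_type: str,
                         region_brightness: float,
                         brightness_threshold: float = 200.0) -> ShootingParams:
    """Hypothetical sketch: tune the second camera according to the preset
    information (sky / sun / moon / lamplight) and its region brightness."""
    if info_type in ("sky", "sun", "light") and region_brightness > brightness_threshold:
        # Region too bright: reduce the light intake to avoid overexposure.
        params.aperture *= 1.4          # step to a smaller aperture
        params.exposure_time *= 0.5     # faster shutter
    elif info_type == "moon" or region_brightness < brightness_threshold:
        # Night scene or dim region: increase the light intake.
        params.aperture /= 1.4
        params.exposure_time *= 2.0
    # White balance follows the assumed colour temperature of the scene.
    params.white_balance = 5500 if info_type in ("sky", "sun") else 3200
    return params
```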
Optionally, when the user performs the shooting operation, the method for controlling the camera to be turned on by the mobile terminal further includes:
when the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting shooting parameters of the second camera according to the preset information.
Optionally, when the user performs a shooting operation, the mobile terminal controls the first camera to be opened and the second camera to be closed, controls the first camera to collect image data, and displays the image data on a preview interface. It then judges whether the image data contains preset information. When the image data includes the preset information, the second camera is started, and the shooting parameters of the second camera are then adjusted according to the preset information.
Optionally, in another manner of controlling the starting of the first camera and the second camera according to the embodiment of the present application, when the user performs a shooting operation, the mobile terminal may control the second camera to be started and the first camera to be closed, control the second camera to collect image data, and display the image data on a preview interface. It then judges whether the image data contains preset information. When the image data includes the preset information, the first camera is started, and the shooting parameters of the second camera are then adjusted according to the preset information.
Optionally, the first camera and the second camera may both be opened in advance in the embodiment of the present application, which reduces the reaction time during shooting and improves the user experience.
Optionally, whether the image data includes preset information may be detected and identified by a preset information model, where the preset information model is established based on a neural network algorithm. Specifically, a data set is formed from a large number of sample pictures of preset information, a neural network model is constructed, and the neural network model is trained with the constructed data set; after a certain accuracy is reached, the trained neural-network-based preset information model is obtained. When the mobile terminal is in the preview state, image data is acquired and preprocessed, and the preset information is identified by using the preset information model.
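The patent only states that the preset information model is based on a neural network; the following PyTorch sketch shows one possible (assumed) shape of such a model and how a preview frame could be turned into a preset-information mask. The architecture, class names and the two-class labelling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PresetInfoNet(nn.Module):
    """Minimal CNN sketch of the 'preset information model': it predicts, per
    pixel, whether the pixel belongs to a preset-information region (sky, sun,
    moon, lamplight). The real model is only described as neural-network based;
    this architecture is an assumption."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))      # (N, num_classes, H, W) logits

def detect_preset_info(model: PresetInfoNet, frame: torch.Tensor) -> torch.Tensor:
    """Return a boolean mask marking preset-information pixels in a preview frame."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))             # frame: (3, H, W), values in [0, 1]
        return logits.argmax(dim=1).squeeze(0) == 1    # class 1 = preset information
```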
Optionally, in order to obtain a clear target image while adjusting the second camera according to the preset information, the shooting parameters of the first camera may also be adjusted according to the image data: either according to the target object image data in the image data, or according to both the target object image data and the preset information image data. It is to be understood that the preset information image data is the image data of the area corresponding to the preset information image, and the target object image data is the image data excluding the preset information image data.
Optionally, clear target object image data is obtained by adjusting the shooting parameters of the first camera, and/or clear preset information image data is obtained by adjusting the shooting parameters of the second camera. A target image is then generated from the clear target object image data and the clear preset information image data; in the target image both the target object and the preset information area are clear, and the target image has high definition and balanced color.
After the mobile terminal generates the target image, the target image is displayed on the preview picture to show the user the shooting effect of the adjusted first camera and second camera. The user then clicks the shooting button on the mobile terminal, or clicks a specified position on the screen of the mobile terminal; the mobile terminal receives the shooting instruction input by the user and stores the target image in the mobile terminal.
According to the embodiment of the invention, the shooting parameters of the first camera and the second camera are adjusted, image data is collected separately with the adjusted shooting parameters, and the target image is generated from the target object image in the image data collected by the first camera and the preset information area image in the image data collected by the second camera. The target object area is clear because the target object image data is acquired based on the parameter threshold of the target object, and the preset information area is clear because the preset information image data is acquired based on the ambient brightness of the preset information area; the target image generated from the target object image data and the preset information image data is therefore clear.
Optionally, referring to fig. 3, based on the first embodiment, the second embodiment describes a specific implementation of adjusting the shooting parameters of the first camera according to the image data. While the step of adjusting the shooting parameters of the second camera according to the preset information is performed, the following steps are also performed:
Step S40, acquiring a parameter threshold of a target object based on the acquired image data, wherein the parameter threshold comprises at least one of brightness value and definition;
and S50, when the parameter threshold is smaller than a preset threshold, adjusting shooting parameters of the first camera.
In this embodiment, the terminal collects image data through the first camera and detects the target object according to the image data. It can be understood that in this embodiment the target object is the area other than the preset information area; the target object may include a face, an animal, a landscape, and the like, and is the shooting subject of the user.
After the target object is determined, the parameter threshold of the target object is obtained, where the parameter threshold includes at least one of a brightness value and a definition. The parameter threshold reflects the display state of the target object in the preview picture. When the parameter threshold is smaller than the preset threshold, the target object in the preview picture is too dark or too bright, or its definition is low, which does not meet the requirements of the user. Therefore, the shooting parameters of the first camera, including but not limited to at least one of aperture, exposure time and white balance, are adjusted so that the target object image is neither too bright nor too dark and has high definition and balanced color, and a target object image satisfactory to the user is obtained.
In the embodiment of the application, the shooting parameters of the first camera are adjusted according to the parameter threshold of the target object, so that a clear target object image is obtained.
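A minimal sketch of this second embodiment, assuming OpenCV, assuming the variance of the Laplacian as the definition (sharpness) metric and using illustrative threshold values; the patent only says the parameter threshold includes brightness and/or definition:

```python
import cv2
import numpy as np

def target_object_quality(gray_roi: np.ndarray) -> tuple[float, float]:
    """Return (mean brightness, sharpness) of the target-object region,
    where gray_roi is an 8-bit grayscale crop of that region."""
    brightness = float(gray_roi.mean())
    sharpness = float(cv2.Laplacian(gray_roi, cv2.CV_64F).var())
    return brightness, sharpness

def first_camera_needs_adjustment(gray_roi: np.ndarray,
                                  min_brightness: float = 60.0,
                                  min_sharpness: float = 100.0) -> bool:
    brightness, sharpness = target_object_quality(gray_roi)
    # If either value falls below its preset threshold, the first camera's
    # aperture / exposure time / white balance should be re-tuned.
    return brightness < min_brightness or sharpness < min_sharpness
```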
Alternatively, referring to fig. 4, fig. 4 is a schematic diagram of a refinement flow of step S10 in the third embodiment of the image capturing method of the present application. Based on the first embodiment, the step S10 further includes:
step S11, when the fact that the image data collected by the first camera and/or the second camera contains preset information is detected, the brightness of the preset information area is obtained, and optionally, the preset information comprises the brightness of the preset information area;
step S12, determining a target shooting parameter according to the brightness;
and S13, adjusting the shooting parameters of the second camera according to the target shooting parameters.
In this embodiment, when the user uses the mobile terminal to take a photograph, the brightness of the preset information area is obtained according to the collected preset information, and optionally, the preset information includes, but is not limited to, the brightness of the preset information area, and may also include the area of the preset information area.
The brightness of the preset information area may be obtained by detecting the ambient light brightness in real time using an ambient light sensor (also called a light sensor) configured in the mobile terminal, or another device integrated with an ambient light detection function (such as a color temperature sensor), so as to provide reference data for determining the shooting parameters of the second camera.
Optionally, after the brightness of the preset information area is obtained, it may be compared with a preset brightness threshold. The preset brightness threshold is the condition for deciding whether to adjust the shooting parameters of the second camera: when the brightness of the preset information area is greater than or less than the preset brightness threshold, the shooting parameters of the second camera need to be adjusted according to the brightness of the preset information area. The preset brightness threshold may be a system default; for example, a designer determines through experiments the shooting brightness at which the adjusted second camera produces a good shooting effect, takes this brightness as the preset brightness threshold, and writes it into the mobile terminal, so that it can be invoked directly when the user performs a shooting operation.
Optionally, when the brightness of the preset information area is greater than the preset brightness threshold during the shooting operation, the preset information area of the image may be overexposed, so that the area is washed out and the details of the preset information cannot be seen clearly. Based on this, the shooting parameters of the second camera are adjusted, for example by reducing the aperture and/or increasing the shutter speed to reduce the amount of incoming light, so that a clear preset information image is acquired.
Optionally, when the brightness of the preset information area is smaller than the preset brightness threshold, the preset information area is too dark. Based on this, the shooting parameters of the second camera are adjusted so that the preset information area is not too dark when the user performs the shooting operation; optionally, the aperture of the second camera can be increased and/or the exposure time lengthened to increase the exposure of the second camera, and a clear and bright preset information image is then acquired through the adjusted second camera.
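One possible (assumed) way to turn the measured brightness of the preset information area into a target shooting parameter is an exposure-value correction, sketched below; the logarithmic mapping and the mid-grey target are assumptions, not prescribed by the patent.

```python
import math

def exposure_compensation(region_brightness: float,
                          target_brightness: float = 128.0) -> float:
    """Derive an exposure-value (EV) correction for the second camera from the
    measured brightness of the preset information area (8-bit scale assumed).
    Negative EV darkens an over-bright region; positive EV brightens a dim one."""
    region_brightness = max(region_brightness, 1e-3)
    return -math.log2(region_brightness / target_brightness)

# Example: a sky region metered at 220 yields roughly -0.78 EV, i.e. reduce the
# light intake with a smaller aperture and/or a faster shutter.
```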
Alternatively, referring to fig. 5, fig. 5 is a schematic diagram of a refinement flow of step S11 in the fourth embodiment of the image capturing method of the present application. Based on the third embodiment, the step S11 includes:
step S111, dividing the image data of the preset information area into a plurality of areas, and obtaining the brightness of each area;
step S112, obtaining a brightness average value according to the brightness of each area;
step S113, determining the brightness of the preset information area by the brightness average value.
In this embodiment, the preset information area is divided into a plurality of areas and each area is metered to obtain its brightness; a brightness average value is then determined from the brightness of each area and used as the brightness of the preset information area. Determining the brightness of the preset information area by calculating the brightness average value in this way has high accuracy.
Optionally, the brightness of the preset information area may also be obtained by dividing the image of the preset information area into a plurality of areas, deleting the extremely bright and extremely dark blocks (that is, the areas whose brightness values are too large or too small) to obtain the effective areas, and then calculating a weighted average of the brightness of the effective areas. For example, the weight of an effective area at the center may be set higher and the weight of an effective area at the edge set lower, and the weighted brightness average is determined as the brightness of the preset information area.
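A hedged sketch of this metering strategy, with an assumed grid size, assumed brightness limits for the effective blocks and assumed centre-weighted weights:

```python
import numpy as np

def region_brightness(gray: np.ndarray, grid: int = 8,
                      low: float = 20.0, high: float = 235.0) -> float:
    """Split the preset information area (grayscale) into grid x grid blocks,
    discard extremely dark or bright blocks, and average the rest with central
    blocks weighted more heavily than edge blocks."""
    h, w = gray.shape
    means, weights = [], []
    for i in range(grid):
        for j in range(grid):
            block = gray[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            m = float(block.mean())
            if low <= m <= high:                          # keep only "effective" blocks
                centre_dist = abs(i - grid / 2) + abs(j - grid / 2)
                means.append(m)
                weights.append(1.0 / (1.0 + centre_dist))  # centre weighs more
    if not means:                                          # all blocks extreme: fall back
        return float(gray.mean())
    return float(np.average(means, weights=weights))
```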
It should be noted that the collected image data includes the area of the preset information region. When the preset information area contained in the photographed scene is small, the strong-light area is small, so the preset information image will not be overexposed. Based on this, the image photographing method according to the embodiment of the present invention further includes the following steps:
When the image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of the preset information area, wherein the preset information optionally comprises the area of the preset information area;
and executing the step of adjusting the shooting parameters of the second camera according to the preset information when the area of the preset information area is larger than a preset threshold value.
The preset information includes the area of the preset information region; after the area of the preset information region is obtained from the preset information, whether the step of adjusting the shooting parameters of the second camera according to the preset information needs to be executed is judged according to that area.
When the area of the preset information region is smaller than the preset threshold value, the preset information region occupies only a small proportion of the image data, and its brightness has no or only a small influence on the image; that is, when the user performs the shooting operation, the obtained target image will not be overexposed. Based on this, in order to reduce the consumption of mobile terminal hardware, only the first camera is used to perform the shooting operation.
When the area of the preset information region is larger than the preset threshold value, the preset information region occupies a large proportion of the image data; if the brightness of the preset information region exceeds the preset brightness value, the target image obtained when the user performs the shooting operation may be overexposed. Based on this, the shooting parameters of the second camera are adjusted according to the preset information so that the target image is not overexposed.
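The area check described above could look like the following sketch, where the mask comes from the preset information model and the 20% area ratio is an illustrative assumption:

```python
import numpy as np

def should_use_second_camera(mask: np.ndarray,
                             area_ratio_threshold: float = 0.2) -> bool:
    """`mask` is the boolean preset-information mask of a preview frame.
    Only when the region occupies more than the preset threshold of the frame
    is the dual-camera path (adjusting the second camera) worth its cost."""
    ratio = float(mask.sum()) / mask.size
    return ratio > area_ratio_threshold
```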
Optionally, after the first camera collects image data with its adjusted shooting parameters and the second camera collects image data with its adjusted shooting parameters, how to obtain the target image desired by the user needs to be considered. Referring to fig. 6, fig. 6 is a schematic flow diagram of the refinement of step S20 in a fifth embodiment of the image shooting method of the present application; based on the first embodiment, the step S20 includes:
step S21, acquiring image data acquired by a first camera and image data acquired by a second camera;
step S22, determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
step S23, extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
In this embodiment, the image data acquired by the first camera is acquired by the first camera through the adjusted shooting parameters, and the image data acquired by the second camera is acquired by the second camera through the adjusted shooting parameters.
After the image data collected by the first camera and the image data collected by the second camera are obtained, the first preset information image data of the first camera is determined by the preset information model; the preset information model is obtained based on a neural network algorithm, and the specific steps are the same as those for establishing the preset information model in the first embodiment and are not repeated here. The image data includes preset information image data and non-preset-information image data; after the first preset information image data is identified, the non-preset-information image data is taken as the target object image data. It can be understood that the target object image data is collected by the first camera based on the adjusted shooting parameters, so that the target object image corresponding to the target object image data is clear and color-balanced.
Optionally, the same preset information model is used to extract second preset information image data of the second camera, and it can be understood that the second preset information image data is acquired by the second camera based on the adjusted shooting parameters, and the preset information image corresponding to the second preset information image data is clear and color-balanced, so that the second preset information image data is used as preset information image data, that is, the preset information image corresponding to the preset information image data is clear and color-balanced.
The embodiment of the invention provides a method for acquiring a clear target object image and a clear preset information image: the first preset information image data and the second preset information image data are extracted through the preset information model, so as to obtain the target object image data and the preset information image data.
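As an illustration of this separation step, here is a minimal Python sketch. It assumes, purely for illustration, that the preset information model outputs a binary mask over the frame; the function names and the use of NumPy are assumptions, not part of the patent.

```python
import numpy as np

def split_by_preset_model(image, preset_mask):
    """Split a camera frame into preset-information image data and the remaining
    target object image data, given a mask from the preset information model.

    image:       H x W x 3 array collected by a camera.
    preset_mask: H x W boolean array, True where a pixel belongs to the preset
                 information region.
    """
    preset_data = np.where(preset_mask[..., None], image, 0)   # preset region only
    target_data = np.where(preset_mask[..., None], 0, image)   # everything else
    return preset_data, target_data

# The first camera's frame contributes the target object image data (step S22);
# the second camera's frame contributes the preset information image data (step S23).
```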
Optionally, when the user shoots wide preset information, the wide-angle camera can be used to capture a large range of preset information, but the captured image will be more or less distorted, so the distorted image needs to be corrected to obtain a normal image. The embodiment of the present application therefore provides a method for processing the image data acquired by the second camera. Referring to fig. 7, fig. 7 is a schematic flow diagram of a sixth embodiment of the image shooting method of the present application, and based on the fifth embodiment, step S23 further includes:
step S231, acquiring radial distortion parameters and tangential distortion parameters of the second camera;
step S232, adjusting second preset information image data of the second camera according to the radial distortion parameters and the tangential distortion parameters;
step S233, taking the adjusted second preset information image data as the preset information image data.
In this embodiment, the second camera may be a standard camera or a wide-angle camera, and the second camera may default to the wide-angle camera or may be set by the user during the shooting operation.
When the second camera is a wide-angle camera, the preset information image corresponding to the acquired preset information image data will be distorted to a greater or lesser degree: the picture distortion is small when the distortion degree is small and large when the distortion degree is large, so a distortion-elimination operation needs to be performed on the preset information image corresponding to the preset information image data. It can be understood that the embodiment of the invention can decide whether to perform the correction operation by judging the distortion degree of the preset information image. When the distortion degree is greater than a preset threshold, the radial distortion parameters and tangential distortion parameters of the second camera (the wide-angle camera) are acquired, and the second preset information image data of the second camera is adjusted according to the radial distortion parameters and the tangential distortion parameters, thereby obtaining an undistorted preset information image, which is used as the preset information image in the target image.
In this embodiment, the wide-angle camera is used to obtain a preset information image with a wider range, and the second preset information image data of the second camera is adjusted according to the radial distortion parameter and the tangential distortion parameter, so as to obtain a preset information image with a wider range and no distortion.
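A minimal sketch of the distortion-correction step follows, assuming OpenCV is used (the patent does not name a library) and that the intrinsic parameters fx, fy, cx, cy of the wide-angle module are known; all names and values here are illustrative assumptions.

```python
import numpy as np
import cv2

def undistort_preset_image(preset_image, fx, fy, cx, cy, k1, k2, p1, p2, k3=0.0):
    """Correct the second (wide-angle) camera's preset information image.

    k1, k2, k3 are radial distortion parameters, p1 and p2 tangential parameters,
    matching OpenCV's (k1, k2, p1, p2, k3) distortion-coefficient layout.
    """
    camera_matrix = np.array([[fx, 0.0, cx],
                              [0.0, fy, cy],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    dist_coeffs = np.array([k1, k2, p1, p2, k3], dtype=np.float64)
    return cv2.undistort(preset_image, camera_matrix, dist_coeffs)
```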
Optionally, after acquiring the clear target object image and the preset information image, referring to fig. 8, fig. 8 is a flowchart of a seventh embodiment of the image capturing method of the present application, based on the first embodiment, the step S30 includes:
step S31, replacing the first preset information image data of the first camera with the preset information image data;
and step S32, performing preset processing on the target object image data and the preset information image data to generate a target image.
In this embodiment, the target image includes a clear target object image and a clear preset information image. Specifically, the first camera acquires image data according to the adjusted shooting parameters, and this image data includes first preset information image data and first target object image data; because the exposure is performed with the adjusted parameters, the target object image corresponding to the first target object image data is clear and color-balanced, and the first target object image data includes a target object image. The target object image corresponding to the first target object image data is then taken as the target object image in the target image. Optionally, the second camera acquires image data according to its adjusted shooting parameters, and this image data includes second preset information image data and second target object image data; the second preset information image data is exposed and acquired based on the adjusted shooting parameters, so the preset information image corresponding to the second preset information image data is clear and color-balanced, and the second preset information image data includes a second preset information image. The second preset information image corresponding to the second preset information image data is then taken as the preset information image in the target image. Next, the target object image and the preset information image are subjected to the preset processing through an image stitching algorithm to generate the target image, which is displayed on the preview picture to show the user the shooting effect of the adjusted first and second cameras. When the user clicks the shooting button on the mobile terminal, or clicks a designated position on the screen of the mobile terminal, the mobile terminal receives the shooting instruction input by the user and stores the target image in the mobile terminal.
Optionally, the preset processing includes splicing and/or fusing.
In the embodiment of the invention, the first preset information image corresponding to the first preset information image data acquired by the first camera is replaced with the preset information image corresponding to the preset information image data acquired by the second camera according to the adjusted shooting parameters. The target object image, corresponding to the target object image data acquired by the first camera according to the adjusted shooting parameters, and the preset information image are then combined into the target image through an image stitching algorithm. In this way, neither overexposure of the preset information nor underexposure of the target object occurs in the preset scene, and the user can capture a good-quality preset information picture without manually adjusting the shooting parameters.
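The replacement-and-generation step can be sketched as follows. This is a simplification that assumes the two frames are already registered to the same pixel grid, which the patent handles through its stitching and fusion processing; the function and parameter names are hypothetical.

```python
import numpy as np

def compose_target_image(first_frame, second_preset_image, preset_mask):
    """Replace the preset information region of the first camera's frame with the
    corresponding pixels of the second camera's (corrected) preset information image.

    preset_mask: H x W boolean array marking the preset information region.
    """
    target = first_frame.copy()
    target[preset_mask] = second_preset_image[preset_mask]   # boolean-mask replacement
    return target
```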
Optionally, during the preset processing of the target object image and the preset information image, a stitching trace may appear at the preset position and affect the quality of the target image. Based on this, referring to fig. 9, fig. 9 is a schematic flow diagram of an eighth embodiment of the image shooting method of the present application, and based on the seventh embodiment, step S32 further includes:
step S33, a first pixel of the target object image data at a preset position and a second pixel of the preset information image data at the preset position are obtained;
step S34, determining a target pixel of the preset position according to the first pixel and the second pixel;
step S35, adjusting the pixels at the preset positions according to the target pixels to generate a target image.
Optionally, the target object image data includes a target object image and the pixel values of the target object image, and the preset information image data includes a preset information image and the pixel values of the preset information image. The two images being stitched are the target object image and the preset information image, and the pixel values at a preset position of the two images are replaced. For example, a pixel point is selected at the preset position, a first pixel value of the target object image at the pixel point and a second pixel value of the preset information image at the pixel point are obtained, an average pixel value is calculated from the first pixel value and the second pixel value, and the target pixel of the pixel point is determined according to the average pixel value. Further, by selecting a plurality of pixel points and calculating the final average pixel value of each pixel point, the target pixels of the preset position are determined; the pixels at the preset position are then adjusted according to the target pixels, and the stitching trace is eliminated.
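A minimal sketch of this averaging approach, assuming the seam positions are given as a boolean mask; NumPy and all names are assumptions for illustration.

```python
import numpy as np

def blend_seam_average(target_data, preset_data, seam_mask):
    """Suppress the stitching trace by replacing each seam pixel with the average
    of the target object image and the preset information image at that position.
    """
    blended = target_data.astype(np.float32).copy()
    average = (target_data.astype(np.float32) + preset_data.astype(np.float32)) / 2.0
    blended[seam_mask] = average[seam_mask]
    return blended.astype(target_data.dtype)
```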
Optionally, the preset position includes a stitching position of the target object image data and preset information image data.
Preferably, the replacement of the pixel values at the preset position of the target object image and the preset information image may also be performed as follows: a pixel point is selected at the preset position; a first pixel value of the target object image at the pixel point and a second pixel value of the preset information image at the pixel point are obtained; a first distance from the center point of the target object image to the preset position and a second distance from the center point of the preset information image to the preset position are also obtained; an average pixel value is then calculated from the first pixel value, the second pixel value, the first distance and the second distance, and the target pixel at the preset position is determined according to this average pixel value. This ensures that the closer the pixel point to be replaced is to the target object image, the larger the weight of its pixel value from the target object image, and the closer it is to the preset information image, the larger the weight of its pixel value from the preset information image; taking the distances into account in this way yields a better stitching effect.
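The distance-weighted variant could look like the sketch below, where the weight of each source decreases with the pixel's distance from that source's center; the exact weighting formula is an assumption, since the patent only states that the distances are taken into account.

```python
import numpy as np

def blend_seam_distance_weighted(target_data, preset_data, seam_points,
                                 target_center, preset_center):
    """Blend seam pixels with weights based on distance to each image's center.

    seam_points:   iterable of (row, col) coordinates on the stitching seam.
    target_center: (row, col) center of the target object image.
    preset_center: (row, col) center of the preset information image.
    """
    blended = target_data.astype(np.float32).copy()
    for r, c in seam_points:
        d_target = np.hypot(r - target_center[0], c - target_center[1])
        d_preset = np.hypot(r - preset_center[0], c - preset_center[1])
        total = d_target + d_preset + 1e-6
        w_target = d_preset / total      # closer to the target image -> larger weight
        w_preset = d_target / total      # closer to the preset image -> larger weight
        blended[r, c] = w_target * target_data[r, c] + w_preset * preset_data[r, c]
    return blended.astype(target_data.dtype)
```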
Optionally, after the target image is generated, the target object image and the preset information image in the target image may have inconsistent brightness. Based on this, referring to fig. 10, fig. 10 is a schematic flow diagram of a ninth embodiment of the image shooting method of the present application, and based on all the foregoing embodiments, after step S30 the image shooting method further includes:
step S60, obtaining the brightness of a target object and a preset information area in the target image;
and step S70, adjusting the target image according to the brightness.
Optionally, in order to ensure that the brightness of the target object and the preset information area in the target image is coordinated, the embodiment of the invention determines whether their brightness is coordinated by acquiring the brightness of the target object and the preset information area in the target image, and, when the brightness is not coordinated, adjusts the target image according to the brightness.
Optionally, the brightness of the target object may be the average brightness of the area corresponding to the target object. Specifically, the area corresponding to the target object is divided into a plurality of regions, photometry is performed on each region to obtain its brightness, a brightness average value is determined from the brightness of the regions, and the brightness average value is taken as the brightness of the target object; the brightness of the target object is thus determined by calculating the brightness average value.
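For illustration, here is a minimal sketch of the block-average metering described above; the block counts and names are assumptions.

```python
import numpy as np

def region_average_brightness(gray_region, rows=4, cols=4):
    """Average brightness of a region, metered as the mean of per-block means.

    gray_region: 2-D array of luminance values for the target object area,
    divided into rows x cols blocks.
    """
    h, w = gray_region.shape
    block_means = []
    for i in range(rows):
        for j in range(cols):
            block = gray_region[i * h // rows:(i + 1) * h // rows,
                                j * w // cols:(j + 1) * w // cols]
            if block.size:
                block_means.append(block.mean())
    return float(np.mean(block_means))
```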
Alternatively, the brightness of the target object may be obtained by acquiring the brightness values of a center point and boundary points of the target object. There may be one or more center points and multiple boundary points; preferably, the brightness values of a plurality of boundary points are acquired and their average boundary brightness value is calculated, so that the boundary brightness can be obtained more accurately. A first weight is then assigned to the center point and a second weight to the boundary points, with the first weight higher than the second weight, and the final brightness of the target object is obtained from the product of the center-point brightness and the first weight plus the product of the boundary brightness and the second weight.
Alternatively, the brightness of the target object may be obtained by dividing the area corresponding to the target object into a plurality of regions, deleting the extremely bright and extremely dark blocks (i.e. the regions whose brightness values are too large or too small) to obtain the effective regions, and then calculating a brightness-weighted average of the effective regions; for example, effective regions at the center may be given a high weight and effective regions at the edges a low weight, and the brightness-weighted average is determined as the brightness of the target object.
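A sketch of this third variant, assuming per-block means and per-block weights have already been computed; the percentile cut-offs and names are illustrative assumptions.

```python
import numpy as np

def trimmed_weighted_brightness(block_means, block_weights, low_pct=10, high_pct=90):
    """Brightness estimate that drops extremely bright/dark blocks and then takes a
    weighted average of the remaining (effective) blocks, e.g. center blocks
    weighted higher than edge blocks.
    """
    block_means = np.asarray(block_means, dtype=np.float64)
    block_weights = np.asarray(block_weights, dtype=np.float64)
    low, high = np.percentile(block_means, [low_pct, high_pct])
    effective = (block_means >= low) & (block_means <= high)   # drop extreme blocks
    return float(np.average(block_means[effective], weights=block_weights[effective]))
```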
Optionally, the method for obtaining the brightness of the preset information area is similar to the method for obtaining the brightness of the target object, which is not described herein.
Alternatively, when the difference between the brightness of the target object and the brightness of the preset information area is greater than the preset brightness threshold, the brightness of the target object and the brightness of the preset information area are considered uncoordinated, and the target image is adjusted according to the brightness. Specifically, coordination can be achieved by adjusting the brightness of the target object and/or the brightness of the preset information area.
Alternatively, referring to fig. 11, fig. 11 is a schematic diagram of a refinement flow of step S70 in a tenth embodiment of the image shooting method of the present application, and based on the ninth embodiment, step S70 includes:
step S71, determining a brightness difference value based on the brightness of the target object and the preset information area;
and step S72, when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or the preset information area so that the brightness difference value is smaller than or equal to the preset threshold value.
Optionally, after the brightness of the target object and the preset information area is obtained, the brightness difference is calculated from these brightness values, and whether the brightness of the target object and the preset information area in the target image is coordinated is judged from the brightness difference. When the brightness difference is less than or equal to the preset threshold, the brightness of the target object and the preset information area is coordinated and no adjustment is needed; when the brightness difference is greater than the preset threshold, the target object and the preset information area are uncoordinated; specifically, the brightness of the target object is either greater than or smaller than the brightness of the preset information area. Based on this, the embodiment of the invention adjusts the brightness of the target object and/or the preset information area so that the brightness difference becomes less than or equal to the preset threshold, that is, so that the brightness of the target object and the preset information area is coordinated.
The specific adjustment method may be to reduce the exposure of the target object or increase the exposure of the preset information area when the brightness of the target object is greater than the brightness of the preset information area, so as to coordinate the brightness of the target object and the brightness of the preset information area.
When the brightness of the target object is smaller than the brightness of the preset information area, the brightness of the target object and the brightness of the preset information area are coordinated by increasing the exposure of the target object or reducing the exposure of the preset information area.
Alternatively, the brightness of the target object and/or the preset information area may be adjusted by obtaining an intermediate value between the brightness of the target object and the brightness of the preset information area and adjusting both to this intermediate value, so that the brightness of the target object and the brightness of the preset information area are coordinated.
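The coordination logic of steps S71 and S72 can be summarized in a short sketch; the threshold value and the choice of moving both regions to the intermediate value are assumptions matching the simplest strategy described above.

```python
def coordinate_brightness(target_brightness, preset_brightness, threshold=20.0):
    """Return adjusted brightness targets for the target object and the preset
    information area so that their difference does not exceed the threshold.
    """
    difference = abs(target_brightness - preset_brightness)
    if difference <= threshold:
        return target_brightness, preset_brightness      # already coordinated
    middle = (target_brightness + preset_brightness) / 2.0
    return middle, middle                                 # move both to the middle
```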
It should be noted that adjusting the brightness of the target object and/or the preset information area includes, but is not limited to, adjusting the exposure of the target object and/or the preset information area and adjusting their brightness directly, so that the brightness of the target image is coordinated and no partial area becomes excessively bright or excessively dark.
The embodiment of the application also provides a mobile terminal device, the terminal device comprises a memory and a processor, wherein the memory stores an image shooting program, and the image shooting program realizes the steps of the image shooting method in any embodiment when being executed by the processor.
The embodiment of the present application also provides a computer readable storage medium, on which an image capturing program is stored, which when executed by a processor, implements the steps of the image capturing method in any of the above embodiments.
Embodiments of the mobile terminal and the computer readable storage medium provided in the present application include all technical features of each embodiment of the image capturing method, and the expansion and explanation contents of the description are substantially the same as those of each embodiment of the method, which are not repeated herein.
The present embodiments also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the various possible implementations as above.
The embodiments also provide a chip including a memory for storing a computer program and a processor for calling and running the computer program from the memory, so that a device on which the chip is mounted performs the method in the above possible embodiments.
It can be understood that the above scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiments of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The foregoing embodiment numbers of the present application are for description purposes only and do not represent the advantages or disadvantages of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, the same or similar term concept, technical solution, and/or application scenario description will generally be described in detail only when first appearing, and when repeated later, for brevity, will not generally be repeated, and when understanding the content of the technical solution of the present application, etc., reference may be made to the previous related detailed description thereof for the same or similar term concept, technical solution, and/or application scenario description, etc., which are not described in detail later.
In this application, the descriptions of the embodiments are focused on, and the details or descriptions of one embodiment may be found in the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). Computer readable storage media can be any available media that can be accessed by a computer or data storage devices, such as servers, data centers, etc., that contain an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, storage disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid State Disk (SSD)), among others.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (13)

1. An image shooting method applied to a mobile terminal, characterized in that the image shooting method comprises the following steps:
when the image data collected by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information, acquiring a parameter threshold of a target object based on the collected image data, and adjusting shooting parameters of the first camera according to the target object and the preset information in the image data when the parameter threshold is smaller than the preset threshold, wherein the parameter threshold comprises at least one of a brightness value and a definition, the shooting parameters comprise at least one of an aperture, an exposure time and a white balance, so that the target object image collected by the first camera and the preset information image collected by the second camera do not generate overexposure or overdarkness, and color balance of the preset information image and the target object image is ensured, and a detection mode of the preset information comprises the step of identifying whether the image data contains preset information or not by using a preset information model;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera, wherein the target object image data is the image data except the preset information image data in the image data acquired by the first camera;
generating a target image according to the target object image data and the preset information image data, including: replacing preset information image data in the image data acquired by the first camera with preset information image data in the image data acquired by the second camera, and generating a target image according to the replaced image data acquired by the first camera so as to store the target image in the mobile terminal when a shooting instruction is received.
2. The image capturing method according to claim 1, wherein when it is detected that the image data collected by the first camera and/or the second camera includes preset information, the step of adjusting the capturing parameters of the second camera according to the preset information includes:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of a preset information area;
determining a target shooting parameter according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
3. The image capturing method according to claim 2, wherein the step of acquiring the brightness of the preset information region includes:
dividing the image data of the preset information area into a plurality of areas, and obtaining the brightness of each area;
obtaining a brightness average value according to the brightness of each area;
and determining the average brightness value as the brightness of a preset information area.
4. The image capturing method according to any one of claims 1 to 3, characterized in that the image capturing method further comprises:
when the image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of a preset information area;
and when the area of the preset information area is larger than a preset threshold value, executing the step of adjusting the shooting parameters of the second camera according to the preset information.
5. The image capturing method according to any one of claims 1 to 3, wherein when detecting that the image data collected by the first camera includes preset information, adjusting the capturing parameters of the second camera according to the preset information includes:
when the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting shooting parameters of the second camera according to the preset information.
6. The image capturing method according to any one of claims 1 to 3, wherein the step of acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera includes:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
7. The image capturing method according to claim 6, wherein, if the second camera is a wide-angle camera, after the step of extracting second preset information image data of the second camera according to a preset information model, the image capturing method further comprises:
acquiring radial distortion parameters and tangential distortion parameters of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameters and the tangential distortion parameters;
and taking the adjusted second preset information image data as the preset information image data.
8. The image capturing method according to claim 7, wherein the step of generating a target image from the target object image data and the preset information image data includes:
replacing first preset information image data of the first camera with the preset information image data;
and carrying out preset processing on the target object image data and the preset information image data to generate a target image.
9. The image capturing method according to claim 8, wherein after the step of performing a preset process on the target object image data and the preset information image data, further comprising:
acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at the preset position;
determining a target pixel at the preset position according to the first pixel and the second pixel;
and adjusting the pixel at the preset position according to the target pixel to generate a target image.
10. The image capturing method according to any one of claims 1 to 3, wherein after the step of generating a target image from the target object image data and the preset information image data, the image capturing method further comprises:
acquiring brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
11. The image capturing method according to claim 10, wherein the step of adjusting the target image according to the brightness includes:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or the preset information area so that the brightness difference value is smaller than or equal to the preset threshold value.
12. A mobile terminal, the mobile terminal comprising: a memory, a processor, wherein the memory has stored thereon an image capturing program which, when executed by the processor, implements the steps of the image capturing method according to any one of claims 1 to 11.
13. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image capturing method according to any one of claims 1 to 11.