CN113411498A - Image shooting method, mobile terminal and storage medium - Google Patents

Publication number
CN113411498A
CN113411498A (application number CN202110674807.3A; granted as CN113411498B)
Authority
CN
China
Prior art keywords
preset information, image data, camera, image, brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110674807.3A
Other languages
Chinese (zh)
Other versions
CN113411498B (en)
Inventor
王洪伟
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN202110674807.3A priority Critical patent/CN113411498B/en
Publication of CN113411498A publication Critical patent/CN113411498A/en
Application granted granted Critical
Publication of CN113411498B publication Critical patent/CN113411498B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Abstract

The application discloses an image shooting method applied to a mobile terminal, comprising the following steps: when detecting that image data acquired by a first camera and/or a second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information; acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera; and generating a target image according to the target object image data and the preset information image data. The application also discloses a mobile terminal and a storage medium. The method and the device make it possible to shoot a clear picture in a preset scene, solving the problem that the background area is too bright and the target area is too dark when a picture containing the preset information is shot.

Description

Image shooting method, mobile terminal and storage medium
Technical Field
The present application relates to the field of photographing technologies, and in particular, to an image photographing method, a mobile terminal, and a storage medium.
Background
With the rapid development of mobile terminals such as mobile phones and tablet computers, the shooting functions of mobile terminals have become increasingly diversified. When a user uses a mobile terminal to shoot a picture containing preset information (such as sky, lights, and the like), backlit shooting overexposes the picture: the background is so bright that the photographed object cannot be seen clearly, and the shooting effect is poor.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image capturing method, a mobile terminal, and a storage medium, which solve the problem that the background area is too bright and the target object area is too dark when a picture including preset information is captured.
In order to solve the above technical problem, the present application provides an image capturing method applied to a mobile terminal, where the image capturing method includes:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
and generating a target image according to the target object image data and the preset information image data.
Optionally, the photographing parameters include at least one of an aperture, an exposure time, and a white balance.
Optionally, when it is detected that image data acquired by the first camera and/or the second camera includes preset information, the step of adjusting the shooting parameter of the second camera according to the preset information includes:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of a preset information area; optionally, the preset information includes the brightness of the preset information area;
determining target shooting parameters according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
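As a rough illustration of how a target shooting parameter might be derived from the measured brightness, the sketch below maps area brightness to a target exposure time with a simple reciprocal rule. The mid-gray target of 118, the 33 ms base exposure, and the linear model are all illustrative assumptions; the patent does not specify how the target shooting parameters are computed.

```python
def exposure_from_brightness(brightness, base_exposure_ms=33.0, target=118):
    """Derive a target exposure time from the measured brightness of the
    preset-information area: the brighter the area, the shorter the exposure.
    All constants here are illustrative, not from the patent."""
    brightness = max(brightness, 1)  # avoid division by zero for black frames
    return base_exposure_ms * target / brightness
```

For example, an area measured at twice the mid-gray target would call for half the base exposure time.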
Optionally, the step of obtaining the brightness of the preset information area includes:
dividing the image data of the preset information area into a plurality of areas, and acquiring the brightness of each area;
obtaining a brightness average value according to the brightness of each region;
and determining the brightness of the preset information area according to the brightness average value.
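The three steps above (divide the area into regions, average each region, average the regions) can be sketched as follows. The 2x2 grid and the 0-255 luma scale are illustrative assumptions; the patent fixes neither the number of regions nor the pixel format.

```python
def region_brightness(luma, rows=2, cols=2):
    """Estimate the brightness of a preset-information area by splitting it
    into a rows x cols grid and averaging each region's mean luminance.
    `luma` is a 2-D list of 0-255 luminance values (an assumption)."""
    h, w = len(luma), len(luma[0])
    region_means = []
    for r in range(rows):
        for c in range(cols):
            # Integer bounds of this region within the area.
            r0, r1 = r * h // rows, (r + 1) * h // rows
            c0, c1 = c * w // cols, (c + 1) * w // cols
            pixels = [luma[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            region_means.append(sum(pixels) / len(pixels))
    # The area's brightness is the average over all region averages.
    return sum(region_means) / len(region_means)
```

Averaging per region first (rather than over all pixels at once) lets an implementation also weight or discard outlier regions, though the patent only requires the plain average.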
Optionally, the image capturing method further includes:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of a preset information area; optionally, the preset information includes the area of the preset information area;
and when the area of the preset information area is larger than a preset threshold value, executing the step of adjusting the shooting parameters of the second camera according to the preset information.
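The area gate above could be sketched as below, representing the preset-information region as a boolean mask over the frame. The fractional threshold of 0.25 is an illustrative value; the patent only requires comparison against some preset threshold.

```python
def should_adjust(mask, area_threshold=0.25):
    """Return True when the preset-information region (True cells in `mask`)
    covers a larger fraction of the frame than `area_threshold`, i.e. when
    the second camera's shooting parameters should be adjusted."""
    total = len(mask) * len(mask[0])
    area = sum(row.count(True) for row in mask)
    return area / total > area_threshold
```

Gating on area avoids re-parameterizing the second camera for a sliver of sky or light that would barely affect the final image.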
Optionally, when it is detected that image data acquired by the first camera and/or the second camera includes preset information, the step of adjusting the shooting parameter of the second camera according to the preset information includes:
when detecting that the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting the shooting parameters of the second camera according to the preset information.
Optionally, while the step of adjusting the shooting parameters of the second camera according to the preset information is executed, the following steps are also executed:
acquiring a parameter threshold of a target object based on the acquired image data, wherein the parameter threshold comprises at least one of a brightness value and a definition;
and when the parameter threshold value is smaller than a preset threshold value, adjusting the shooting parameter of the first camera.
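The parallel check on the first camera could look like the sketch below. The patent names brightness and definition as the parameters but not how definition is measured; the mean absolute horizontal gradient used here is a simple stand-in sharpness proxy, and both floor values are illustrative.

```python
def needs_first_camera_adjustment(luma, brightness_floor=60, sharpness_floor=5.0):
    """Decide whether to adjust the first camera's shooting parameters:
    adjust when the target object's brightness or definition falls below
    a preset threshold. `luma` is a 2-D list of 0-255 luminance values."""
    pixels = [p for row in luma for p in row]
    brightness = sum(pixels) / len(pixels)
    # Sharpness proxy: mean absolute difference between horizontal neighbours.
    grads = [abs(row[x + 1] - row[x]) for row in luma for x in range(len(row) - 1)]
    sharpness = sum(grads) / len(grads)
    return brightness < brightness_floor or sharpness < sharpness_floor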
Optionally, the step of acquiring target object image data in the image data collected by the first camera and second preset information image data in the image data collected by the second camera includes:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
and extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
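The split performed in the steps above can be sketched with a segmentation mask standing in for the output of the preset information model (the patent does not specify the model; a boolean grid per pixel is an assumption):

```python
def split_by_mask(image, mask):
    """Separate first-camera image data into target-object pixels and
    preset-information pixels, keyed by (row, col) position, using a
    boolean segmentation mask (True = preset information)."""
    h, w = len(image), len(image[0])
    preset = {(y, x): image[y][x] for y in range(h) for x in range(w) if mask[y][x]}
    target = {(y, x): image[y][x] for y in range(h) for x in range(w) if not mask[y][x]}
    return target, preset
```

Keeping positions with the pixel values makes the later replacement step (swapping the first camera's preset-information pixels for the second camera's) a direct dictionary update.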
Optionally, if the second camera is a wide-angle camera, after the step of extracting second preset information image data of the second camera according to a preset information model, the image capturing method further includes:
acquiring a radial distortion parameter and a tangential distortion parameter of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameter and the tangential distortion parameter;
and taking the adjusted second preset information image data as the preset information image data.
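The patent names radial and tangential distortion parameters but not the distortion model; the Brown-Conrady model sketched below is the common choice for wide-angle lenses and is offered only as an assumption. It maps an undistorted normalized point to its distorted location, which is where a correction step would sample the wide-angle image.

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Brown-Conrady model: map an undistorted normalized image point (x, y)
    to its distorted location, given radial coefficients (k1, k2) and
    tangential coefficients (p1, p2)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all four coefficients zero the mapping is the identity; positive k1 pushes points outward (barrel correction samples accordingly).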
Optionally, the step of generating a target image according to the target object image data and the preset information image data includes:
replacing first preset information image data of the first camera with the preset information image data;
and performing preset processing on the target object image data and the preset information image data to generate a target image.
Optionally, after the step of performing preset processing on the target object image data and the preset information image data, the method further includes:
acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at a preset position;
determining a target pixel of the preset position according to the first pixel and the second pixel;
and adjusting the pixels at the preset positions according to the target pixels to generate a target image.
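The per-position fusion above can be sketched as a weighted average of the two cameras' pixels at the preset position. The patent does not fix the fusion rule; an equal-weight blend is one simple, hypothetical choice, typically applied along the seam between the replaced region and the target-object region.

```python
def fuse_pixel(first_px, second_px, weight=0.5):
    """Determine the target pixel at a preset position from the first
    camera's pixel and the second camera's pixel (RGB tuples), blending
    with `weight` given to the second camera. Illustrative rule only."""
    return tuple(round((1 - weight) * a + weight * b)
                 for a, b in zip(first_px, second_px))
```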
Optionally, after the step of generating the target image according to the target object image data and the preset information image data, the image capturing method further includes:
acquiring the brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
Optionally, the step of adjusting the target image according to the brightness includes:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or a preset information area so as to enable the brightness difference value to be smaller than or equal to the preset threshold value.
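One way to realize the adjustment above is sketched below: when the brightness difference between the two areas exceeds the preset threshold, pull both areas toward their midpoint just enough that the difference equals the threshold. The threshold value and the symmetric split of the adjustment are illustrative; the patent only requires that the final difference not exceed the threshold.

```python
def balance_brightness(target_luma, preset_luma, max_diff=40):
    """Adjust the target-object brightness and/or the preset-information
    area brightness so their difference is at most `max_diff`."""
    diff = abs(target_luma - preset_luma)
    if diff <= max_diff:
        return target_luma, preset_luma  # already within tolerance
    excess = (diff - max_diff) / 2  # shared equally by both areas
    if target_luma < preset_luma:
        return target_luma + excess, preset_luma - excess
    return target_luma - excess, preset_luma + excess
```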
The present application further provides a mobile terminal, the mobile terminal including: a memory and a processor, wherein the memory has an image capturing program stored thereon, and the image capturing program, when executed by the processor, implements the steps of the image capturing method as set forth in any one of the above.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the image capturing method as set forth in any one of the above.
As described above, the image capturing method of the present application is applied to a mobile terminal provided with a first camera and a second camera. When the user previews the captured scene with the camera and it is detected that the image data acquired by the first camera and/or the second camera includes preset information, the capturing parameters of the second camera are adjusted according to the preset information, and/or the capturing parameters of the first camera are adjusted according to an acquired parameter threshold of the target object. Target object image data is then acquired from the image data collected by the first camera, preset information image data is acquired from the image data collected by the second camera, and a target image containing a clear target object and a clear preset information image is generated from the two. Because the target object image data is obtained based on the parameter threshold of the target object, the target object region is clear; because the preset information image data is obtained based on the ambient light brightness of the preset information region, the preset information region is clear; the target image obtained from the two is therefore clear. This realizes shooting clear pictures in preset scenes and solves the problem that the background region is too bright and the target object region is too dark when pictures containing preset information are shot.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of an image capture method according to the present application;
FIG. 3 is a schematic flowchart of a second embodiment of an image capturing method according to the present application;
FIG. 4 is a detailed flowchart of step S10 of the image capturing method according to the third embodiment of the present application;
FIG. 5 is a detailed flowchart of step S11 of the fourth embodiment of the image capturing method of the present application;
FIG. 6 is a detailed flowchart of step S20 in the fifth embodiment of the image capturing method of the present application;
FIG. 7 is a detailed flowchart of step S23 in the sixth embodiment of the image capturing method of the present application;
FIG. 8 is a detailed flowchart of step S30 in the seventh embodiment of the image capturing method of the present application;
FIG. 9 is a schematic flowchart of an eighth embodiment of an image capture method according to the present application;
FIG. 10 is a flowchart illustrating a ninth embodiment of an image capture method according to the present application;
fig. 11 is a detailed flowchart of step S70 in the tenth embodiment of the image capturing method of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, similarly-named elements, features, or items in different embodiments of the disclosure may have the same meaning or different meanings; the particular meaning is determined by their interpretation in the embodiment or by further context of the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination". Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time; they may be performed at different times and in different orders, alternately or in turn with other steps or with at least part of the sub-steps or stages of other steps.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if determined" or "if (a stated condition or event) is detected" may be interpreted as "when determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S10 and S20 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S20 first and then S10 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The main solution of the embodiment of the invention is as follows: when detecting that image data acquired by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information; acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera; and generating a target image according to the target object image data and the preset information image data.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a PC, or a mobile terminal device with a display function, such as a smart phone, a tablet computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
As shown in fig. 1, the terminal may include: a first camera 1006, a second camera 1007, a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard or a microphone array; optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include RF (Radio Frequency) circuits, audio circuits, WiFi modules, and sensors such as light sensors, motion sensors, and other sensors. Optionally, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes) and the magnitude and direction of gravity when the mobile terminal is stationary, and can be used for applications that recognize the attitude of the mobile terminal (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image capturing program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the image capturing program stored in the memory 1005 and perform the following operations:
alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
generating a target image according to the target object image data and the preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of a preset information area; optionally, the preset information includes the brightness of the preset information area;
determining target shooting parameters according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
dividing the image data of the preset information area into a plurality of areas, and acquiring the brightness of each area;
obtaining a brightness average value according to the brightness of each region;
and determining the brightness of the preset information area according to the brightness average value.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of a preset information area; optionally, the preset information includes the area of the preset information area;
and when the area of the preset information area is larger than a preset threshold value, executing the step of adjusting the shooting parameters of the second camera according to the preset information.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
when detecting that the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting the shooting parameters of the second camera according to the preset information.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
acquiring a parameter threshold of a target object based on the acquired image data, wherein the parameter threshold comprises at least one of a brightness value and a definition;
and when the parameter threshold value is smaller than a preset threshold value, adjusting the shooting parameter of the first camera.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
and extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
acquiring a radial distortion parameter and a tangential distortion parameter of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameter and the tangential distortion parameter;
and taking the adjusted second preset information image data as the preset information image data.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
replacing first preset information image data of the first camera with the preset information image data;
and performing preset processing on the target object image data and the preset information image data to generate a target image.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at a preset position;
determining a target pixel of the preset position according to the first pixel and the second pixel;
and adjusting the pixels at the preset positions according to the target pixels to generate a target image.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
acquiring the brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
Alternatively, the processor 1001 may call an image capturing program stored in the memory 1005, and further perform the following operations:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or a preset information area so as to enable the brightness difference value to be smaller than or equal to the preset threshold value.
With the popularization of digital cameras and mobile terminals equipped with cameras, people increasingly shoot with various mobile terminals. When an image containing preset information is shot, the excessive brightness of the preset information causes the shot image to be overexposed: the background is too bright and the shot object cannot be seen clearly, so the shooting effect of the image is poor.
Based on this, the first embodiment is proposed.
Referring to fig. 2, a first embodiment of the present invention proposes an image capturing method, the steps of which include:
step S10, when detecting that the image data collected by the first camera and/or the second camera contains preset information, adjusting the shooting parameters of the second camera according to the preset information;
step S20, acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
step S30, generating a target image according to the target object image data and the preset information image data.
In this embodiment, the mobile terminal is internally provided with at least two cameras, and the two cameras are used for synchronously acquiring image data. It can be understood that the mobile terminal may be internally provided with more than two cameras; the embodiment of the present application is not limited in this respect.
The embodiment of the application is explained by taking two cameras as an example.
Optionally, after the user turns on the first camera and the second camera, the first camera and the second camera collect image data in the shot scene, and a preview picture is displayed on the preview interface according to the image data collected by the first camera or the second camera. The user can view and capture the shot scene based on the preview picture. The image data includes, but is not limited to, image brightness, target object, background area, definition, shooting time, shooting place, and the like. Whether the image data includes preset information is then detected and identified according to the image data displayed on the preview interface. When the image data includes preset information, in order to acquire a clear preset information picture, the shooting parameters of the second camera are adjusted according to the preset information.
Optionally, the preset information includes, but is not limited to, sky, sun, moon, lights, etc.
Optionally, the shooting parameters include at least one of an aperture, an exposure time, and a white balance. The aperture is used to increase or decrease the amount of incoming light and thereby adjust the brightness during shooting; the exposure time is used to increase or decrease the shutter speed and thereby adjust the amount of incoming light during shooting; the white balance is used to adjust the color temperature and maintain the color balance of the image during shooting. The shooting parameters are not limited to the aperture, the exposure time, and the white balance.
Optionally, the shooting parameters of the second camera are automatically adjusted according to preset information, and image data are collected through the adjusted shooting parameters, so that the preset information image acquired by the second camera is clear.
Optionally, in an embodiment, when the preset information is the sky, the brightness of the sky area in the preview image is obtained, and the shooting parameters of the second camera are adjusted according to the brightness. For example, when the brightness of the sky area exceeds a preset brightness threshold, the brightness of the currently photographed sky area is too high; if the sky area is photographed based on a default rule, the sky area of the obtained image is overexposed and the sky details cannot be seen clearly. Therefore, to avoid overexposure of the sky area, the amount of incoming light can be reduced by reducing the aperture and/or increasing the shutter speed; meanwhile, the white balance can be adjusted according to the currently photographed scene, so that the color of the sky area is not distorted and the color balance is maintained.
Optionally, in another embodiment, when the preset information is the sun, the current shooting scene includes the sun; the brightness of the sun area in the preview image is obtained, and the shooting parameters of the second camera are adjusted according to the brightness. In general, the brightness of the sun area is too high, and the sun area of an image shot based on a default rule is overexposed. Therefore, the shooting parameters of the second camera can be automatically adjusted according to the difference between the brightness of the sun area and a preset brightness threshold so as to reduce the amount of incoming light and ensure that the sun area is not overexposed.
Optionally, in another embodiment, when the preset information is the moon, the current shooting scene is a night scene with low ambient brightness. In order to obtain a clear moon image, the shooting parameters of the second camera may be adjusted to increase the amount of incoming light, so that the brightness of the image corresponding to the moon area is increased and the details of the moon area can be seen clearly.
Optionally, in another embodiment, when the preset information is a light, since the brightness value of the light is higher than that of the target object, the light area in an image shot according to a default rule will be washed out. In this case, the shooting parameters of the second camera may be automatically adjusted according to the difference between the brightness of the light area and a preset brightness threshold, for example by reducing the aperture and/or increasing the shutter speed; meanwhile, the white balance may be adjusted according to the color temperature of the current light so as to balance the color of the shot light area.
Optionally, the preset information is not limited to the above embodiments. In the actual shooting process, the image data of the preset information area is obtained, and the shooting parameters of the second camera are automatically adjusted according to the image data, so that the image of the preset information area is neither overexposed nor too dark and its color remains balanced, thereby obtaining a preset information image with high definition and balanced color.
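The brightness-driven adjustment logic described in the embodiments above can be sketched as follows. This is a minimal illustration only: the threshold values, adjustment step sizes, and the `CameraParams` structure are hypothetical and not specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    aperture: float        # f-number: a larger value means a smaller aperture, less light
    exposure_time: float   # seconds: a shorter time means a faster shutter, less light

def adjust_for_region(params: CameraParams, region_brightness: float,
                      threshold: float = 180.0) -> CameraParams:
    """Reduce incoming light when the preset-information region is too bright
    (sky/sun/light scenes); increase it when the region is too dark (moon scenes)."""
    if region_brightness > threshold:
        # Overexposure risk: stop down the aperture and raise the shutter speed.
        return CameraParams(aperture=params.aperture * 1.5,
                            exposure_time=params.exposure_time / 2)
    if region_brightness < threshold / 4:
        # Too dark: open up the aperture and lengthen the exposure.
        return CameraParams(aperture=params.aperture / 1.5,
                            exposure_time=params.exposure_time * 2)
    return params

p = adjust_for_region(CameraParams(aperture=2.8, exposure_time=1/60), 220.0)
```

In practice the white balance would be adjusted at the same time according to the scene's color temperature, as the text notes; that step is omitted here.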
Optionally, when the user performs the shooting operation, the manner in which the mobile terminal controls the cameras to be turned on further includes:
when detecting that the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting the shooting parameters of the second camera according to the preset information.
Optionally, when the user executes a shooting operation, the mobile terminal controls the first camera to be opened and the second camera to remain closed, controls the first camera to acquire image data, and displays a preview picture according to the image data. Whether the image data contains preset information is then judged. When the image data includes preset information, the second camera is started, and the shooting parameters of the second camera are adjusted according to the preset information.
Optionally, in the embodiment of the present application, the first camera and the second camera may also be controlled as follows: when the user executes the shooting operation, the mobile terminal controls the second camera to be opened and the first camera to remain closed, controls the second camera to acquire image data, and displays a preview picture according to the image data. Whether the image data contains preset information is then judged. When the image data includes preset information, the first camera is started, and the shooting parameters of the second camera are adjusted according to the preset information.
Optionally, starting both the first camera and the second camera in the embodiment of the application reduces the reaction time during shooting and improves the user experience.
Optionally, whether the image data includes the preset information may be detected and identified by a preset information model. The preset information model is established based on a neural network algorithm, and the specific steps include: forming a data set from a large number of preset information sample pictures, constructing a neural network model, and training the neural network model with the constructed data set; after a certain accuracy is reached, the training is finished and a neural-network-based preset information model is obtained. When the mobile terminal is in the preview state, image data are collected and preprocessed, and the preset information is identified by using the preset information model.
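The patent does not specify the network architecture or training procedure. As a purely hypothetical, minimal illustration of the train-then-recognize pipeline, a tiny logistic-regression classifier over simple color features (synthetic data, hand-rolled gradient descent) can stand in for the preset information model:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_features(img):
    """Mean blue-channel intensity and overall brightness of an RGB frame."""
    return np.array([img[..., 2].mean(), img.mean()]) / 255.0

# Synthetic training set: bright bluish "sky" patches vs darker neutral patches.
sky = rng.normal([200, 180], 10, size=(100, 2))
other = rng.normal([80, 90], 10, size=(100, 2))
X = np.vstack([sky, other]) / 255.0
y = np.array([1] * 100 + [0] * 100)

# Train a logistic-regression "preset information model" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def contains_preset_info(img) -> bool:
    """Return True when the preview frame is classified as containing preset info."""
    f = color_features(img)
    return 1 / (1 + np.exp(-(f @ w + b))) > 0.5

frame = np.full((64, 64, 3), 200, dtype=np.uint8)   # bright, sky-like frame
dark = np.full((64, 64, 3), 60, dtype=np.uint8)     # dark frame
```

A production model would be a convolutional network trained on labeled sample pictures, as the text describes; the sketch only shows where training ends and per-frame recognition begins.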
Optionally, while the second camera is adjusted according to the preset information, in order to obtain a clear target image, the embodiment of the present application also adjusts the shooting parameters of the first camera according to the image data. The shooting parameters of the first camera may be adjusted according to the target object image data in the image data, or according to both the target object image data and the preset information image data in the image data. It is to be understood that the preset information image data is the image data of the area corresponding to the preset information, and the target object image data is the image data excluding the preset information image data.
Optionally, after the shooting parameters of the first camera are adjusted to obtain clear target object image data, and/or the shooting parameters of the second camera are adjusted to obtain clear preset information image data, a target image is generated from the clear target object image data and the clear preset information image data. Both the target object and the preset information area of the target image are clear, so the target image has high definition and balanced color.
After the mobile terminal generates the target image, the target image is displayed on the preview picture to show the user the adjusted shooting effect of the first camera and the second camera. The user then clicks the shooting button on the mobile terminal, or clicks a designated position on the screen of the mobile terminal; the mobile terminal receives the shooting instruction input by the user and stores the target image.
According to the embodiment of the invention, the shooting parameters of the first camera and the second camera are adjusted, and image data are collected with the adjusted shooting parameters. The target image is generated from the target object image in the image data collected by the first camera and the preset information area image in the image data collected by the second camera. Because the target object image data are obtained based on the parameter threshold of the target object, the target object area is clear; because the preset information image data are obtained based on the ambient light brightness of the preset information area, the preset information area is clear; the target image obtained from the target object image data and the preset information image data is therefore clear.
Optionally, referring to fig. 3, based on the first embodiment, a second embodiment of the present application sets forth a specific implementation of adjusting the shooting parameters of the first camera according to the image data. The following steps are executed while the step of adjusting the shooting parameters of the second camera according to the preset information is executed:
step S40, acquiring a parameter threshold of the target object based on the acquired image data, wherein the parameter threshold comprises at least one of a brightness value and a definition;
and step S50, when the parameter threshold is smaller than a preset threshold, adjusting the shooting parameters of the first camera.
In the embodiment of the application, the camera concerned in this embodiment is the first camera. Optionally, the first camera acquires image data, and target object detection is performed according to the image data. It is understood that in this embodiment the target object is the area other than the preset information area; the target object may include a human face, an animal, a landscape, and the like, and is the subject of the user's shooting.
After the target object is determined, a parameter threshold of the target object is obtained, wherein the parameter threshold comprises at least one of a brightness value and a definition. The parameter threshold reflects the display state of the target object on the preview picture. When the parameter threshold is smaller than the preset threshold, the target object in the preview picture is too dark or too bright, or its definition is low, which does not meet the user's expectation. Therefore, the embodiment of the invention adjusts the shooting parameters of the first camera, where the shooting parameters include but are not limited to at least one of aperture, exposure time, and white balance, so that the target object image is neither too bright nor too dark, has high definition, and is color-balanced, thereby obtaining a target object image that satisfies the user.
According to the embodiment of the application, the shooting parameters of the first camera are adjusted through the parameter threshold of the target object, so that a clear target object image is obtained.
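The patent does not define how the brightness value and definition (sharpness) of the target object are computed. One common choice, shown here purely as a hypothetical illustration, is the mean grayscale intensity for brightness and the variance of the Laplacian for sharpness:

```python
import numpy as np

def target_object_thresholds(gray: np.ndarray) -> tuple[float, float]:
    """Return (brightness, sharpness) for a grayscale crop of the target object.
    Sharpness is the variance of a 4-neighbour Laplacian response, a common
    focus measure: flat/blurry regions score low, textured sharp regions high."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(g.mean()), float(lap.var())

# A flat patch has zero Laplacian variance; a checkerboard patch scores high.
flat = np.full((32, 32), 128, dtype=np.uint8)
textured = np.indices((32, 32)).sum(axis=0) % 2 * 255

b0, s0 = target_object_thresholds(flat)
b1, s1 = target_object_thresholds(textured)
```

Comparing these values against preset thresholds, as in steps S40/S50, then decides whether the first camera's shooting parameters need adjustment.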
Alternatively, referring to fig. 4, fig. 4 is a detailed flowchart of step S10 in the third embodiment of the image capturing method of the present application. Based on the first embodiment, the step S10 includes:
step S11, when it is detected that the image data acquired by the first camera and/or the second camera includes preset information, acquiring the brightness of the preset information area; optionally, the preset information includes the brightness of the preset information area;
step S12, determining target shooting parameters according to the brightness;
and step S13, adjusting the shooting parameters of the second camera according to the target shooting parameters.
In this embodiment, when the user takes a picture with the mobile terminal, the brightness of the preset information area is obtained according to the collected preset information. Optionally, the preset information includes but is not limited to the brightness of the preset information area, and may also include the area of the preset information region.
The brightness of the preset information area can be detected in real time by an ambient light sensor (also called a light sensor) configured in the mobile terminal, or by another device with an integrated ambient-light-brightness detection function (such as a color temperature sensor), thereby providing reference data for determining the shooting parameters of the second camera.
Optionally, after the brightness of the preset information area is obtained, it may be compared with a preset brightness threshold. The preset brightness threshold is the condition for deciding whether to adjust the shooting parameters of the second camera: when the brightness of the preset information area is greater than or less than the preset brightness threshold, the shooting parameters of the second camera need to be adjusted according to the brightness of the preset information area. The preset brightness threshold may be a default value; for example, the designer may determine experimentally the shooting brightness at which the adjusted second camera produces a better shooting effect, determine that shooting brightness as the preset brightness threshold, and write it into the mobile terminal so that it is called directly when the user performs a shooting operation.
Optionally, when the user performs a shooting operation and the brightness of the preset information area is greater than the preset brightness threshold, the preset information area of the image will be overexposed, which makes the preset information area washed out and its details unclear. Based on this, the shooting parameters of the second camera are adjusted so that the preset information is not overexposed when the user performs the shooting operation. Optionally, the aperture of the second camera may be reduced and/or the exposure time shortened, thereby reducing the exposure of the second camera, and a clear preset information image is then acquired by the adjusted second camera. It can be understood that the adjusted shooting parameter can also be the white balance: during the shooting operation, adjusting the white balance parameters keeps the color of the preset information image balanced under the influence of the current environment.
Optionally, when the brightness of the preset information area is smaller than the preset brightness threshold, the preset information area is too dark. Based on this, the shooting parameters of the second camera are adjusted so that the preset information area is not too dark when the user performs the shooting operation. Optionally, the exposure of the second camera can be increased by enlarging the aperture of the second camera and/or increasing the exposure time, and a clear, bright preset information image is then acquired by the adjusted second camera.
Alternatively, referring to fig. 5, fig. 5 is a detailed flowchart of step S11 in the fourth embodiment of the image capturing method of the present application. Based on the third embodiment, the above step S11 includes:
step S111, dividing the image data of the preset information area into a plurality of areas, and acquiring the brightness of each area;
step S112, acquiring a brightness average value according to the brightness of each area;
and step S113, determining the brightness of the preset information area according to the brightness average value.
In this embodiment, the brightness of the preset information area is determined by dividing the preset information area into a plurality of regions, performing photometry on each region to obtain its brightness, determining the average of the region brightness values, and taking that average as the brightness of the preset information area. Calculating the brightness average in this way has high accuracy.
Optionally, the brightness of the preset information area may also be obtained by dividing the image of the preset information area into a plurality of regions, deleting extremely bright and extremely dark blocks (i.e., regions with excessively large or small brightness values) to obtain the effective regions, and then calculating a weighted brightness average of the effective regions; for example, effective regions at the center are given a high weight and effective regions at the edge a low weight, and the weighted brightness average is determined as the brightness of the preset information area.
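A minimal numpy sketch of this metering scheme follows. The block count, the 10th/90th-percentile outlier cut, and the inverse-distance center weighting are hypothetical choices; the patent only states that extreme blocks are dropped and central blocks weighted more heavily.

```python
import numpy as np

def region_brightness(gray: np.ndarray, block: int = 8) -> float:
    """Divide the region into block x block tiles, drop the extreme tiles,
    and return a centre-weighted average of the remaining tile means."""
    h, w = gray.shape
    bh, bw = h // block, w // block
    means, weights = [], []
    for i in range(block):
        for j in range(block):
            tile = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            means.append(tile.mean())
            # Tiles nearer the centre of the region get a higher weight.
            d = abs(i - (block - 1) / 2) + abs(j - (block - 1) / 2)
            weights.append(1.0 / (1.0 + d))
    means, weights = np.array(means), np.array(weights)
    lo, hi = np.percentile(means, [10, 90])  # drop extremely bright/dark blocks
    keep = (means >= lo) & (means <= hi)
    return float(np.average(means[keep], weights=weights[keep]))

sky = np.full((64, 64), 200.0)
sky[:8, :8] = 255.0   # one extremely bright block, excluded from metering
```

Here the single saturated corner block is discarded as an "extremely bright" tile, so the metered brightness stays at the true sky level instead of being pulled upward.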
It should be noted that the acquired image data include the area of the preset information region. When the area of the preset information contained in the shooting scene is small, the area of strong light is small, so the preset information image will not be overexposed. Based on this, the image shooting method of the embodiment of the present invention further includes the following steps:
when it is detected that the image data acquired by the first camera and/or the second camera contain preset information, acquiring the area of the preset information region; optionally, the preset information includes the area of the preset information region;
and when the area of the preset information area is larger than a preset threshold value, executing the step of adjusting the shooting parameters of the second camera according to the preset information.
The preset information comprises the area of the preset information region. After the area of the preset information region is obtained from the preset information, whether the step of adjusting the shooting parameters of the second camera according to the preset information needs to be executed is judged according to that area.
When the area of the preset information region is smaller than the preset threshold, the proportion of the preset information region in the image data is small, and the brightness of the preset information region has no influence, or only a small influence, on the preset information data; that is, the target image obtained when the user performs a shooting operation will not be overexposed. Based on this, in order to reduce the consumption of mobile terminal hardware, only the first camera is selected for the shooting operation.
When the area of the preset information region is larger than the preset threshold, the proportion of the preset information region in the image data is large; when the brightness of the preset information region exceeds the preset brightness value and the user performs a shooting operation, the obtained target image may be overexposed. Based on this, the shooting parameters of the second camera are adjusted according to the preset information so that the target image is not overexposed.
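The area gate just described can be sketched as follows. The 10% coverage ratio is an illustrative assumption; the patent only says the area is compared with "a preset threshold".

```python
import numpy as np

def needs_second_camera(preset_mask: np.ndarray,
                        area_ratio_threshold: float = 0.10) -> bool:
    """preset_mask: boolean array marking preset-information pixels in the frame.
    Engage (and adjust) the second camera only when the preset-information
    region covers a large enough fraction of the frame to risk overexposure."""
    return preset_mask.mean() > area_ratio_threshold

large_mask = np.zeros((100, 100), dtype=bool)
large_mask[:30, :] = True        # preset information covers 30% of the frame
small_mask = np.zeros((100, 100), dtype=bool)
small_mask[:5, :10] = True       # only 0.5% of the frame
```

When the gate returns `False`, only the first camera shoots, saving the hardware cost of running the second camera.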
Optionally, after the first camera acquires image data according to its adjusted shooting parameters and the second camera acquires image data according to its adjusted shooting parameters, how to obtain the target image desired by the user needs to be considered. Referring to fig. 6, fig. 6 is a schematic flowchart of step S20 in a fifth embodiment of the image shooting method of the present application. Based on the first embodiment, the step S20 includes:
step S21, acquiring image data collected by a first camera and image data collected by a second camera;
step S22, determining first preset information image data of the first camera according to a preset information model, and taking image data other than the first preset information image data as target object image data;
step S23, extracting second preset information image data of the second camera according to a preset information model, and using the second preset information image data as preset information image data.
In this embodiment, the image data acquired from the first camera are acquired by the first camera with its adjusted shooting parameters, and the image data acquired from the second camera are acquired by the second camera with its adjusted shooting parameters.
After the image data acquired by the first camera and the image data acquired by the second camera are obtained, the first preset information image data of the first camera are determined by the preset information model. The preset information model is obtained based on a neural network algorithm; the specific steps are the same as those for establishing the preset information model in the first embodiment and are not repeated here. The image data comprise preset information image data and non-preset-information image data; after the first preset information image data are identified, the non-preset-information image data are determined as the target object image data. It can be understood that the target object image data are acquired by the first camera based on the adjusted shooting parameters, so the target object image corresponding to the target object image data is clear and color-balanced.
Optionally, the same preset information model is used to extract the second preset information image data of the second camera. It can be understood that the second preset information image data are acquired by the second camera based on the adjusted shooting parameters, so the preset information image corresponding to the second preset information image data is clear and color-balanced. The second preset information image data are therefore used as the preset information image data; that is, the preset information image corresponding to the preset information image data is clear and color-balanced.
The embodiment of the invention thus provides a method for acquiring a clear target object image and a clear preset information image.
Optionally, when a user shoots preset information covering a wide range, a wide-angle camera may be used; however, the shot picture may have greater or lesser distortion, and the image distortion prevents a normal picture from being obtained. The embodiment of the present application therefore provides a method for processing the image data acquired by the second camera. Referring to fig. 7, fig. 7 is a flowchart of a sixth embodiment of the image shooting method of the present application. Based on the fifth embodiment, the step S23 further includes:
step S231, acquiring a radial distortion parameter and a tangential distortion parameter of the second camera;
step S232, adjusting second preset information image data of the second camera according to the radial distortion parameter and the tangential distortion parameter;
in step S233, the adjusted second preset information image data is used as the preset information image data.
In this embodiment, the second camera can be a standard camera or a wide-angle camera; it may be a wide-angle camera by default, or be set by the user during the shooting operation.
When the second camera is a wide-angle camera, the preset information image corresponding to the collected preset information image data will have greater or lesser distortion: the image is lightly deformed when the distortion degree is small and heavily deformed when the distortion degree is large, so a distortion-elimination operation needs to be performed on the preset information image corresponding to the preset information image data. It can be understood that, in the embodiment of the present invention, whether to perform the distortion-elimination operation may be decided by judging the distortion degree of the preset information image. When the distortion degree is greater than a preset threshold, the radial distortion parameter and the tangential distortion parameter of the second camera (the wide-angle camera) are obtained, and the second preset information image data of the second camera are adjusted according to the radial distortion parameter and the tangential distortion parameter, so as to obtain a distortion-free preset information image, which is used as the preset information image in the target image.
In this embodiment, when obtaining the wider preset information image by using the wide-angle camera, the second preset information image data of the second camera is adjusted according to the radial distortion parameter and the tangential distortion parameter, so as to obtain the wider and undistorted preset information image.
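The patent does not name a specific distortion model. The Brown-Conrady model is a common formulation in which the radial parameters (k1, k2) and tangential parameters (p1, p2) appear explicitly; it is sketched here for illustration. A full correction would resample the image through the inverse of this mapping (e.g. OpenCV's `cv2.undistort` uses the same coefficient layout).

```python
import numpy as np

def distort_points(x, y, k1, k2, p1, p2):
    """Brown-Conrady model: map ideal normalized coordinates to the distorted
    coordinates observed by the lens. k1, k2 are radial coefficients;
    p1, p2 are tangential coefficients."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity; with barrel
# distortion (k1 < 0, typical for wide-angle lenses) points are pulled
# toward the image centre.
x, y = 0.5, 0.5
x0, y0 = distort_points(x, y, 0, 0, 0, 0)
xb, yb = distort_points(x, y, -0.2, 0, 0, 0)
```

The actual coefficient values come from calibrating the specific wide-angle module; the ones above are placeholders.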
Optionally, after acquiring a clear target object image and a preset information image, referring to fig. 8, fig. 8 is a schematic flowchart of a seventh embodiment of the image capturing method of the present application, and based on the first embodiment, the step S30 includes:
step S31, replacing the first preset information image data of the first camera with the preset information image data;
step S32, performing preset processing on the target object image data and the preset information image data to generate a target image.
In this embodiment, the target image includes a clear target object image and a clear preset information image. The specific steps are as follows. The image data acquired by the first camera according to the adjusted shooting parameters are obtained; this image comprises the first preset information image data and the first target object image data. Because the first target object image data are acquired with the adjusted shooting parameters, the target object image corresponding to the first target object image data is clear and color-balanced, and the target object image data include the target object image. The target object image corresponding to the first target object image data is then determined as the target object image in the target image. Optionally, the image data acquired by the second camera according to the adjusted shooting parameters are obtained; this image comprises the second preset information image data and the second target object image data. Because the second preset information image data are exposed with the adjusted shooting parameters, the preset information image corresponding to the second preset information image data is clear and color-balanced, and the second preset information image data include the second preset information image. The second preset information image corresponding to the second preset information image data is then taken as the preset information image in the target image. The target object image and the preset information image are then subjected to preset processing by an image stitching algorithm to generate the target image. The target image is displayed on the preview picture to show the user the adjusted shooting effect of the first camera and the second camera; the user then clicks the shooting button on the mobile terminal or clicks the designated position of the screen of the mobile terminal, the mobile terminal receives the shooting instruction input by the user, and the target image is stored in the mobile terminal.
Optionally, the preset processing comprises splicing and/or fusion.
In the embodiment of the invention, the first preset information image corresponding to the first preset information image data acquired by the first camera is replaced with the preset information image corresponding to the preset information image data acquired by the second camera according to the adjusted shooting parameters; the target object image, corresponding to the target object image data acquired by the first camera according to the adjusted shooting parameters, and the preset information image are then combined into the target image by an image stitching algorithm. In a preset scene, this avoids overexposure of the preset information and underexposure of the target object, and a high-quality preset information image can be shot without the user manually adjusting the shooting parameters.
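A minimal numpy sketch of the replacement step, assuming a boolean mask that marks the preset-information region (how the mask is derived, e.g. from the preset information model, is not shown here):

```python
import numpy as np

def compose_target_image(first_img: np.ndarray, second_img: np.ndarray,
                         preset_mask: np.ndarray) -> np.ndarray:
    """Copy the preset-information region from the second camera's frame into
    the first camera's frame; the target-object pixels come from the first."""
    out = first_img.copy()
    out[preset_mask] = second_img[preset_mask]
    return out

first = np.full((4, 4, 3), 50, dtype=np.uint8)    # well-exposed target object
second = np.full((4, 4, 3), 180, dtype=np.uint8)  # well-exposed preset region
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                                # top half is preset info
target = compose_target_image(first, second, mask)
```

This hard replacement leaves a visible seam at the mask boundary, which is exactly the problem the eighth embodiment's pixel-blending step addresses.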
Optionally, during the preset processing of the target object image and the preset information image, splicing traces may appear at the preset position and affect the quality of the target image. Based on this, referring to fig. 9, fig. 9 is a schematic flow diagram of an eighth embodiment of the image capturing method of the present application. Based on the seventh embodiment, after the step S32, the method further includes:
step S33, acquiring a first pixel of the target object image data at a preset position and a second pixel of the preset information image data at a preset position;
step S34, determining a target pixel of the preset position according to the first pixel and the second pixel;
and step S35, adjusting the pixel at the preset position according to the target pixel to generate a target image.
Optionally, the target object image data include the target object image and its pixel values, and the preset information image data include the preset information image and its pixel values. The two images to be spliced are the target object image and the preset information image, and the pixel values at the preset positions of the two images are replaced. For example: a pixel point is selected at the preset position; a first pixel value of the target object image at that pixel point and a second pixel value of the preset information image at that pixel point are obtained; the average of the first pixel value and the second pixel value is calculated; and the target pixel of that pixel point is determined according to the average pixel value. Further, a plurality of pixel points may be selected, the final average pixel value of each pixel point calculated and determined as the target pixel of the preset position, and the pixels at the preset position adjusted according to the target pixels, thereby eliminating the splicing trace.
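The simple averaging described above can be sketched as below. This is an illustrative sketch with a hypothetical function name; grayscale values stand in for full pixel values, and both images are assumed to cover the same coordinate frame around the seam.

```python
def blend_seam(target_img, info_img, seam_points):
    """Replace each seam pixel with the average of the two source images.

    target_img, info_img: 2-D lists of grayscale pixel values;
    seam_points: (row, col) pairs at the preset (splicing) position.
    """
    out = [row[:] for row in target_img]  # copy; pixels off the seam keep
    for r, c in seam_points:              # the target image's values
        # Target pixel = mean of the two source pixels at the seam point.
        out[r][c] = (target_img[r][c] + info_img[r][c]) / 2.0
    return out
```

For a seam point where the target image reads 20 and the preset information image reads 40, the blended pixel becomes 30, softening the transition across the splice.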
Optionally, the preset position includes a splicing position of the target object image data and preset information image data.
Preferably, replacing the pixel values at the preset positions of the target object image and the preset information image may also proceed as follows: select a pixel point at the preset position; obtain a first pixel value of the target object image at the pixel point and a second pixel value of the preset information image at the pixel point; obtain a first distance from the center point of the target object image to the preset position and a second distance from the center point of the preset information image to the preset position; calculate a weighted average pixel value from the first pixel value, the second pixel value, the first distance and the second distance; and determine the target pixel at the preset position according to that average pixel value. This ensures that the closer a pixel point needing replacement is to the target object image, the greater the weight of the target object image's pixel value, and the closer it is to the first preset information image, the greater the weight of the preset information image's pixel value. Because this preset processing method takes into account the distance between each image and the pixel point needing replacement, the splicing effect is better.
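One way to realize the distance-dependent weighting described above is inverse-distance weighting: each image's weight is proportional to the *other* image's distance, so the nearer image dominates. The function name and weighting formula here are an illustrative assumption consistent with the text, not the patent's exact formula.

```python
def blend_seam_weighted(p_target, p_info, d_target, d_info):
    """Distance-weighted average of two seam pixel values.

    d_target: distance from the seam point to the target image's center;
    d_info:   distance from the seam point to the preset-information
              image's center. A point nearer one image (smaller distance)
              takes a larger weight from that image's pixel value.
    """
    total = d_target + d_info
    w_target = d_info / total    # near the target image => large weight
    w_info = d_target / total    # near the info image   => large weight
    return w_target * p_target + w_info * p_info
```

For example, a seam point three times closer to the target image's center takes 3/4 of its value from the target image; at equal distances the formula reduces to the plain average of the previous paragraph.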
Optionally, after the target image is generated, the brightness of the target object image in the target image may be inconsistent with that of the preset information image. Based on this, referring to fig. 10, fig. 10 is a flowchart illustrating a ninth embodiment of the image capturing method of the present application. Based on all the foregoing embodiments, after step S30, the method further includes:
step S60, acquiring the brightness of the target object and the preset information area in the target image;
and step S70, adjusting the target image according to the brightness.
Optionally, to ensure that the brightness of the target object and that of the preset information area in the target image are coordinated, the embodiment of the present invention obtains the brightness of the target object and of the preset information area in the target image, determines from them whether the two are coordinated, and, when they are not, adjusts the target image according to the brightness.
Optionally, the brightness of the target object may be the average brightness of the region corresponding to the target object. The specific steps include: dividing the region corresponding to the target object into a plurality of sub-regions; metering each sub-region to obtain its brightness; determining a brightness average value from the brightness of each sub-region; and taking the brightness average value as the brightness of the target object. Determining the brightness of the target object by calculating the average in this way has high accuracy.
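The block-metering procedure above can be sketched as follows. This is an illustrative sketch with a hypothetical function name; it assumes a grayscale image given as a 2-D list, and for simplicity drops remainder pixels when the image does not divide evenly into blocks.

```python
def region_average_brightness(image, rows, cols):
    """Split a grayscale image into rows x cols blocks, meter each block,
    and return the mean of the per-block brightness values."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols  # block size (remainder pixels dropped)
    block_means = []
    for i in range(rows):
        for j in range(cols):
            block = [image[r][c]
                     for r in range(i * bh, (i + 1) * bh)
                     for c in range(j * bw, (j + 1) * bw)]
            block_means.append(sum(block) / len(block))
    # Brightness of the region = average of the per-block brightnesses.
    return sum(block_means) / len(block_means)
```

For a 2x2 split of a 2x2 image, each pixel is its own block and the result is the plain pixel mean.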
Alternatively, the brightness of the target object may be obtained from the brightness values of a center point and of boundary points of the target object, where there may be one or more center points and a plurality of boundary points. Preferably, the brightness values of the plurality of boundary points are averaged, so that the boundary brightness value is obtained more accurately. A first weight is then assigned to the center point and a second weight to the boundary points, with the first weight higher than the second weight, and the final brightness of the target object is obtained as the product of the center-point brightness and the first weight plus the product of the boundary brightness and the second weight.
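The center/boundary weighting can be sketched as below. The function name and the example weights 0.7/0.3 are illustrative assumptions; the text only requires that the center weight exceed the boundary weight.

```python
def weighted_object_brightness(center_values, boundary_values,
                               w_center=0.7, w_boundary=0.3):
    """Brightness = w_center * mean(center) + w_boundary * mean(boundary),
    with the center weighted more heavily than the boundary."""
    assert w_center > w_boundary, "first weight must exceed the second"
    center_mean = sum(center_values) / len(center_values)
    boundary_mean = sum(boundary_values) / len(boundary_values)
    return w_center * center_mean + w_boundary * boundary_mean
```

With one center point at brightness 100 and boundary points averaging 50, the object brightness is 0.7 * 100 + 0.3 * 50 = 85.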
Alternatively, the brightness of the target object may be obtained by dividing the region corresponding to the target object into a plurality of sub-regions, deleting the extremely bright and extremely dark blocks (i.e., sub-regions whose brightness values are too large or too small) to obtain the effective regions, and then calculating a brightness-weighted average of the effective regions. For example, effective regions near the center may be given a high weight and effective regions near the edge a low weight, and the weighted average is determined as the brightness of the target object.
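The trim-then-weight variant can be sketched as follows, operating on per-block mean brightnesses (as produced by the block metering above). The function name and the cutoff values 20/235 are hypothetical; the text only specifies discarding extreme blocks before the weighted average.

```python
def trimmed_weighted_brightness(block_means, block_weights,
                                low=20, high=235):
    """Drop extremely dark/bright blocks, then return the weighted mean of
    the remaining 'effective' blocks (center blocks would typically carry
    the larger weights)."""
    pairs = [(m, w) for m, w in zip(block_means, block_weights)
             if low <= m <= high]          # keep only effective blocks
    total_w = sum(w for _, w in pairs)
    return sum(m * w for m, w in pairs) / total_w
```

With block means [5, 100, 200, 250] and weights [1, 2, 1, 1], the blocks at 5 and 250 are discarded and the result is (100*2 + 200*1) / 3.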
Optionally, a method for acquiring the brightness of the preset information area is similar to the method for acquiring the brightness of the target object, and is not described herein again.
Optionally, when the difference between the brightness of the target object and the brightness of the preset information area is greater than a preset brightness threshold, the brightness of the target object is inconsistent with that of the preset information area, and the target image should be adjusted according to the brightness. Specifically, the two may be coordinated by adjusting the brightness of the target object and/or of the preset information region.
Optionally, referring to fig. 11, fig. 11 is a schematic view of the detailed flow of step S70 in the tenth embodiment of the image capturing method of the present application. Based on the ninth embodiment, step S70 includes:
step S71, determining a brightness difference value based on the brightness of the target object and the preset information area;
step S72, when the brightness difference is greater than the preset threshold, adjusting the brightness of the target object and/or the preset information area so that the brightness difference is less than or equal to the preset threshold.
Optionally, after the brightness of the target object and of the preset information area is obtained, a brightness difference value is calculated from them, and whether the brightness of the target object in the target image is coordinated with that of the preset information area is judged according to the brightness difference value. When the brightness difference value is less than or equal to the preset threshold value, the brightness of the target object and of the preset information area is coordinated, and no adjustment is needed. When the brightness difference value is greater than the preset threshold value, the two are uncoordinated: the brightness of the target object may be higher than that of the preset information region, or lower than it. Based on this, the embodiment of the present invention adjusts the brightness of the target object and/or the preset information area so that the brightness difference value becomes less than or equal to the preset threshold, that is, so that the brightness of the target object and of the preset information area is coordinated.
The specific adjusting method may be: when the brightness of the target object is greater than that of the preset information region, reduce the exposure amount of the target object or increase the exposure amount of the preset information region, so that the brightness of the target object and of the preset information region is coordinated.
When the brightness of the target object is less than that of the preset information area, increase the exposure of the target object or reduce the exposure of the preset information area, so that the brightness of the target object and of the preset information area is coordinated.
Alternatively, the method for adjusting the brightness of the target object and/or the preset information region may include obtaining the intermediate value between the two brightness values and adjusting the brightness of the target object and of the preset information region to that intermediate value, so that both regions have the intermediate brightness and are thereby coordinated.
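The threshold check and midpoint adjustment just described can be sketched as below. This is an illustrative sketch with a hypothetical function name; it returns target brightness values, while in practice the adjustment would be realized through exposure changes on the corresponding regions.

```python
def coordinate_brightness(b_object, b_info, threshold):
    """If the brightness gap exceeds the threshold, pull both regions to
    the midpoint of the two values, so the gap drops to zero (and is thus
    at most the threshold); otherwise leave both unchanged."""
    if abs(b_object - b_info) <= threshold:
        return b_object, b_info           # already coordinated
    mid = (b_object + b_info) / 2.0       # intermediate value
    return mid, mid
```

For an object at brightness 180 and an information area at 80 with a threshold of 30, both regions would be adjusted to 130; a 10-unit gap under the same threshold is left alone.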
It should be noted that adjusting the brightness of the target object and/or the preset information area includes, but is not limited to, adjusting the exposure amount of the target object and/or the preset information area. By adjusting the brightness in this way, the brightness of the target image is harmonized and no partial area becomes overly bright or overly dark.
The embodiment of the application further provides a mobile terminal. The terminal includes a memory and a processor, the memory stores an image shooting program, and when the image shooting program is executed by the processor, the steps of the image shooting method in any one of the above embodiments are implemented.
An embodiment of the present application further provides a computer-readable storage medium, where an image capturing program is stored on the computer-readable storage medium, and when the image capturing program is executed by a processor, the steps of the image capturing method in any of the above embodiments are implemented.
The embodiments of the mobile terminal and the computer-readable storage medium provided in the present application include all technical features of the embodiments of the image capturing method; their expanded and explanatory contents are basically the same as those of the method embodiments and are not repeated here.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar terms, technical solutions and/or application scenario descriptions are generally described in detail only at their first occurrence. For brevity, when they appear again later, they are generally not described in detail again; for the same or similar terms, technical solutions and/or application scenario descriptions not detailed later, reference may be made to the earlier related detailed descriptions.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, as long as a combination of technical features contains no contradiction, that combination should be considered within the scope described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (14)

1. An image shooting method applied to a mobile terminal, characterized by comprising the following steps:
when detecting that image data acquired by a first camera and/or a second camera contains preset information, adjusting shooting parameters of the second camera according to the preset information;
acquiring target object image data in the image data acquired by the first camera and preset information image data in the image data acquired by the second camera;
and generating a target image according to the target object image data and the preset information image data.
2. The image capturing method according to claim 1, wherein when it is detected that the image data collected by the first camera and/or the second camera includes preset information, the step of adjusting the capturing parameters of the second camera according to the preset information includes:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the brightness of the preset information area;
determining target shooting parameters according to the brightness;
and adjusting the shooting parameters of the second camera according to the target shooting parameters.
3. The image photographing method of claim 2, wherein the step of acquiring the brightness of the preset information area comprises:
dividing the image data of the preset information area into a plurality of areas, and acquiring the brightness of each area;
obtaining a brightness average value according to the brightness of each region;
and determining the brightness of the preset information area according to the brightness average value.
4. The image capturing method according to any one of claims 1 to 3, characterized in that the image capturing method further includes:
when detecting that image data acquired by the first camera and/or the second camera contains preset information, acquiring the area of the preset information area;
and when the area of the preset information area is larger than a preset threshold value, executing the step of adjusting the shooting parameters of the second camera according to the preset information.
5. The image capturing method according to any one of claims 1 to 3, wherein when it is detected that image data collected by the first camera and/or the second camera includes preset information, the step of adjusting the capturing parameters of the second camera according to the preset information includes:
when detecting that the image data acquired by the first camera contains preset information, starting the second camera;
and adjusting the shooting parameters of the second camera according to the preset information.
6. The image capturing method according to any one of claims 1 to 3, wherein, while the step of adjusting the shooting parameters of the second camera according to the preset information is performed, the following steps are also performed:
acquiring a parameter threshold of a target object based on the acquired image data;
and when the parameter threshold value is smaller than a preset threshold value, adjusting the shooting parameter of the first camera.
7. The image capturing method according to any one of claims 1 to 3, wherein the step of acquiring the target object image data in the image data collected by the first camera and the preset information image data in the image data collected by the second camera includes:
acquiring image data acquired by a first camera and image data acquired by a second camera;
determining first preset information image data of the first camera according to a preset information model, and taking image data except the first preset information image data as target object image data;
and extracting second preset information image data of the second camera according to a preset information model, and taking the second preset information image data as preset information image data.
8. The image capturing method as claimed in claim 7, wherein if the second camera is a wide-angle camera, after the step of extracting the second preset information image data of the second camera according to a preset information model, the image capturing method further comprises:
acquiring a radial distortion parameter and a tangential distortion parameter of the second camera;
adjusting second preset information image data of the second camera according to the radial distortion parameter and the tangential distortion parameter;
and taking the adjusted second preset information image data as the preset information image data.
9. The image capturing method according to claim 7, wherein the step of generating a target image from the target object image data and the preset information image data includes:
replacing first preset information image data of the first camera with the preset information image data;
and performing preset processing on the target object image data and the preset information image data to generate a target image.
10. The image capturing method according to claim 9, wherein after the step of performing the preset processing on the target object image data and the preset information image data, the method further comprises:
acquiring a first pixel of the target object image data at a preset position and a second pixel of the second preset information image data at a preset position;
determining a target pixel of the preset position according to the first pixel and the second pixel;
and adjusting the pixels at the preset positions according to the target pixels to generate a target image.
11. The image capturing method according to any one of claims 1 to 3, wherein after the step of generating a target image from the target object image data and the preset information image data, the image capturing method further comprises:
acquiring the brightness of a target object and a preset information area in the target image;
and adjusting the target image according to the brightness.
12. The image capturing method according to claim 11, wherein the step of adjusting the target image according to the brightness includes:
determining a brightness difference value based on the brightness of the target object and the preset information area;
and when the brightness difference value is larger than a preset threshold value, adjusting the brightness of the target object and/or a preset information area so as to enable the brightness difference value to be smaller than or equal to the preset threshold value.
13. A mobile terminal, characterized in that the mobile terminal comprises: memory, a processor, wherein the memory has stored thereon an image capturing program which when executed by the processor implements the steps of the image capturing method as claimed in any one of claims 1 to 12.
14. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image capturing method according to any one of claims 1 to 12.
CN202110674807.3A 2021-06-17 2021-06-17 Image shooting method, mobile terminal and storage medium Active CN113411498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674807.3A CN113411498B (en) 2021-06-17 2021-06-17 Image shooting method, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110674807.3A CN113411498B (en) 2021-06-17 2021-06-17 Image shooting method, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113411498A true CN113411498A (en) 2021-09-17
CN113411498B CN113411498B (en) 2023-04-28

Family

ID=77684947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674807.3A Active CN113411498B (en) 2021-06-17 2021-06-17 Image shooting method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113411498B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019515A (en) * 2022-04-19 2022-09-06 北京拙河科技有限公司 Imaging control method and system
CN115278066A (en) * 2022-07-18 2022-11-01 Oppo广东移动通信有限公司 Point light source detection method, focusing method and device, storage medium and electronic equipment
CN115500740A (en) * 2022-11-18 2022-12-23 科大讯飞股份有限公司 Cleaning robot and cleaning robot control method
WO2023122906A1 (en) * 2021-12-27 2023-07-06 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110050937A1 (en) * 2009-08-26 2011-03-03 Altek Corporation Backlight photographing method
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
CN106161967A (en) * 2016-09-13 2016-11-23 维沃移动通信有限公司 A kind of backlight scene panorama shooting method and mobile terminal
CN106331510A (en) * 2016-10-31 2017-01-11 维沃移动通信有限公司 Backlight photographing method and mobile terminal
CN109951633A (en) * 2019-02-18 2019-06-28 华为技术有限公司 A kind of method and electronic equipment shooting the moon
CN110177207A (en) * 2019-05-29 2019-08-27 努比亚技术有限公司 Image pickup method, mobile terminal and the computer readable storage medium of backlight image
CN111062881A (en) * 2019-11-20 2020-04-24 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112073645A (en) * 2020-09-04 2020-12-11 深圳创维-Rgb电子有限公司 Exposure control method, device, terminal equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110050937A1 (en) * 2009-08-26 2011-03-03 Altek Corporation Backlight photographing method
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
WO2018018771A1 (en) * 2016-07-29 2018-02-01 宇龙计算机通信科技(深圳)有限公司 Dual camera-based photography method and system
CN106161967A (en) * 2016-09-13 2016-11-23 维沃移动通信有限公司 A kind of backlight scene panorama shooting method and mobile terminal
CN106331510A (en) * 2016-10-31 2017-01-11 维沃移动通信有限公司 Backlight photographing method and mobile terminal
CN109951633A (en) * 2019-02-18 2019-06-28 华为技术有限公司 A kind of method and electronic equipment shooting the moon
CN110177207A (en) * 2019-05-29 2019-08-27 努比亚技术有限公司 Image pickup method, mobile terminal and the computer readable storage medium of backlight image
CN111062881A (en) * 2019-11-20 2020-04-24 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112073645A (en) * 2020-09-04 2020-12-11 深圳创维-Rgb电子有限公司 Exposure control method, device, terminal equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023122906A1 (en) * 2021-12-27 2023-07-06 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium
CN115019515A (en) * 2022-04-19 2022-09-06 北京拙河科技有限公司 Imaging control method and system
CN115019515B (en) * 2022-04-19 2023-03-03 北京拙河科技有限公司 Imaging control method and system
CN115278066A (en) * 2022-07-18 2022-11-01 Oppo广东移动通信有限公司 Point light source detection method, focusing method and device, storage medium and electronic equipment
CN115500740A (en) * 2022-11-18 2022-12-23 科大讯飞股份有限公司 Cleaning robot and cleaning robot control method
CN115500740B (en) * 2022-11-18 2023-04-18 科大讯飞股份有限公司 Cleaning robot and cleaning robot control method

Also Published As

Publication number Publication date
CN113411498B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN113411498B (en) Image shooting method, mobile terminal and storage medium
US9451173B2 (en) Electronic device and control method of the same
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN105874776B (en) Image processing apparatus and method
CN109951627B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN105472246B (en) Camera arrangement and method
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN107818283A (en) Quick Response Code image pickup method, mobile terminal and computer-readable recording medium
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN111835982B (en) Image acquisition method, image acquisition device, electronic device, and storage medium
CN110807769B (en) Image display control method and device
CN103945116A (en) Apparatus and method for processing image in mobile terminal having camera
CN112017137A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US20150237248A1 (en) Image processing method and image processing device
CN111696058A (en) Image processing method, device and storage medium
CN111416936B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111567034A (en) Exposure compensation method, device and computer readable storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN112351197B (en) Shooting parameter adjusting method and device, storage medium and electronic equipment
CN115037883A (en) Exposure parameter adjusting method and device, storage medium and electronic equipment
CN111543047A (en) Video shooting method and device and computer readable storage medium
CN111602390A (en) Terminal white balance processing method, terminal and computer readable storage medium
US20150256718A1 (en) Image sensing apparatus and method of controlling operation of same
CN111026893A (en) Intelligent terminal, image processing method and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant