CN112437237B - Shooting method and device - Google Patents

Shooting method and device

Info

Publication number
CN112437237B
CN112437237B
Authority
CN
China
Prior art keywords
image
image data
working mode
target
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011489181.0A
Other languages
Chinese (zh)
Other versions
CN112437237A (en)
Inventor
肖旭
覃保恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011489181.0A priority Critical patent/CN112437237B/en
Publication of CN112437237A publication Critical patent/CN112437237A/en
Application granted granted Critical
Publication of CN112437237B publication Critical patent/CN112437237B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/94
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Abstract

The application discloses a shooting method and apparatus, belonging to the technical field of photography. The method comprises the following steps: acquiring a first image through a camera; determining a target image area in the first image; determining a target photosensitive area corresponding to the target image area; collecting first image data through the target photosensitive area in a first working mode; and obtaining a target image according to the first image data and second image data, where the second image data is collected in a second working mode different from the first working mode. Compared with the prior art, in which multiple whole-frame images taken at different exposure values are synthesized, the amount of calculation is small, so the efficiency of obtaining an image with an HDR effect can be improved.

Description

Shooting method and device
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a shooting method and device.
Background
High dynamic range (HDR) images provide a wider dynamic range and more image detail than ordinary images; they reflect real-world brightness contrast better than low dynamic range images do and are closer to the image effect seen by the human eye.
In the process of implementing the present application, the inventors found that the prior art has at least the following problem: an image with an HDR effect is typically obtained by shooting an image at a low exposure value, an image at a normal exposure value, and an image at a high exposure value of the same scene, and then synthesizing the multiple whole-frame images obtained at the different exposure values into the final HDR (high dynamic range) image. Synthesizing multiple whole-frame images in this way involves a large amount of data.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method and a shooting apparatus, which can solve the prior-art problem that the amount of data involved in the synthesis calculation required to obtain an image with an HDR effect is large, which reduces the efficiency of obtaining such an image.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
acquiring a first image through a camera;
determining a target image area in the first image and determining a target photosensitive area corresponding to the target image area;
acquiring first image data through the target photosensitive area by adopting a first working mode;
obtaining a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including:
the acquisition module is used for acquiring a first image through a camera;
the determining module is used for determining a target image area in the first image and determining a target photosensitive area corresponding to the target image area;
the acquisition module is used for acquiring first image data through the target photosensitive area by adopting a first working mode;
the acquisition module is used for acquiring a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first image is obtained through a camera, a target image area in the first image is determined, a target photosensitive area corresponding to the target image area is determined, first image data is collected through the target photosensitive area in a first working mode, and a target image is obtained according to the first image data and second image data, where the second image data is collected in a second working mode different from the first. The amount of calculation is thereby reduced, and the efficiency of obtaining an image with an HDR effect is improved.
Drawings
Fig. 1 is a flowchart illustrating steps of a photographing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a pixel merge mode according to an embodiment of the present disclosure;
FIG. 3 is a diagram of a single pixel mode according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a shooting interface provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a first image provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of obtaining a target image in different operation modes according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of obtaining a target image according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another photographing apparatus provided in an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application;
fig. 10 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a shooting method provided in an embodiment of the present application, where the method may include the following steps:
step 101, acquiring a first image through a camera.
In this step, the first image may be obtained directly according to the current working mode of the camera, or according to a default working mode, or the working mode of the camera may first be adjusted according to the brightness of the ambient light. When the ambient light is dark, for example when its brightness is less than or equal to a fourth preset threshold, third image data is collected in a pixel merging (binning) mode, and the first image is obtained from the collected third image data.
Fig. 2 is a schematic diagram of the pixel merging mode provided in an embodiment of the present application. As shown in fig. 2, the four adjacent R pixels to the left of the dashed line are combined into one R pixel to the right of the dashed line, the four adjacent G pixels to the left are combined into one G pixel to the right, and the four adjacent B pixels to the left are combined into one B pixel to the right, which reduces the image resolution while raising the brightness of the pixels. Here R denotes red, G denotes green and B denotes blue, RGB being the three color channels. Besides the four-in-one combination shown in fig. 2, other pixel combination ratios may be used, such as nine-in-one or sixteen-in-one, which is not specifically limited in the embodiments of the present application.
Collecting the third image data in the pixel merging mode when the brightness of the ambient light is less than or equal to the fourth preset threshold raises the overall brightness of the preview image under dark ambient light.
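As an illustration only, not the patent's implementation, the four-in-one combination of fig. 2 can be sketched in NumPy for a single color channel; the function name `bin_2x2` and the choice of summing (rather than averaging) the four samples are assumptions:

```python
import numpy as np

def bin_2x2(channel: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of a single-color channel into one pixel.

    Halving the resolution in each dimension while summing the four
    samples is what gives the pixel merging (binning) mode its
    brightness gain over the single-pixel mode.
    """
    h, w = channel.shape
    assert h % 2 == 0 and w % 2 == 0, "channel dimensions must be even"
    # Group rows and columns into pairs, then reduce over each pair.
    return channel.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 4x4 channel of ones becomes 2x2; each output pixel holds the sum 4.
binned = bin_2x2(np.ones((4, 4), dtype=np.uint16))
```

A nine-in-one or sixteen-in-one combination would reshape with block factors of 3 or 4 instead of 2.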
It should be noted that, when the brightness of the ambient light is greater than or equal to the third preset threshold, the image data is acquired in the single-pixel mode, and the first image is obtained according to the acquired image data.
When the brightness of the ambient light is greater than or equal to the third preset threshold, collecting the image data in the single-pixel mode ensures the sharpness of the whole preview picture under bright ambient light. The single-pixel mode is a full-size mode. Referring to fig. 3, a schematic diagram of the single-pixel mode provided in an embodiment of the present application, the single-pixel mode rearranges the pixel points output by the imaging sensor into normally ordered pixel points, leaving the resolution of the image unchanged, so the sharpness of the image in this mode is high. The rearrangement process is called Bayer image regeneration, or remosaic; the rearranged pixel points are shown to the right of the dashed line in fig. 3.
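A toy illustration of the remosaic idea, rearranging a quad-Bayer raw frame into standard RGGB Bayer order while leaving the resolution unchanged, is sketched below. The fixed permutation table is a simplification of my own; production remosaic pipelines interpolate pixel values rather than merely shuffle them:

```python
import numpy as np

# One fixed source->destination mapping per 4x4 quad-Bayer tile, moving
# each pixel to a position of the same color in a standard RGGB Bayer
# tile. The specific assignment is an illustrative assumption.
_PERM = {
    (0, 0): (0, 0), (0, 1): (0, 2), (1, 0): (2, 0), (1, 1): (2, 2),  # R
    (2, 2): (1, 1), (2, 3): (1, 3), (3, 2): (3, 1), (3, 3): (3, 3),  # B
    (0, 2): (0, 1), (0, 3): (0, 3), (1, 2): (1, 0), (1, 3): (1, 2),  # G
    (2, 0): (2, 1), (2, 1): (2, 3), (3, 0): (3, 0), (3, 1): (3, 2),  # G
}

def remosaic(raw: np.ndarray) -> np.ndarray:
    """Shuffle a quad-Bayer frame into Bayer order; resolution unchanged."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0, "frame must tile into 4x4 blocks"
    out = np.empty_like(raw)
    for (sr, sc), (dr, dc) in _PERM.items():
        out[dr::4, dc::4] = raw[sr::4, sc::4]  # same offset in every tile
    return out

quad = np.arange(16, dtype=np.uint8).reshape(4, 4)
bayer = remosaic(quad)
```

The output has exactly the same pixel count as the input, which is the "resolution of the image is unchanged" property the text describes, in contrast to the binning mode.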
Step 102, determining a target image area in the first image, and determining a target photosensitive area corresponding to the target image area.
In this step, an overexposed area or a relatively dark area in the first image may be determined as the target image area, so that image data can be re-collected specifically for that area to achieve a better image effect.
Wherein, determining the target image area in the first image can be realized by the following steps:
determining a region of which the brightness value is less than or equal to a first preset threshold value in the first image as a target image region; the first working mode is a pixel merging mode;
or determining a region of the first image with the brightness value being greater than or equal to a second preset threshold value as a target image region; the first operation mode is a single pixel mode.
In this way, darker areas can be re-captured in the pixel merging mode and overexposed areas in the single-pixel mode, preventing the resulting image from being too dark or overexposed.
It should be noted that if the first image is obtained from third image data collected in the pixel merging mode, then, because the pixel merging mode raises the brightness of the pixel points, an overexposed region may exist in the first image, and that overexposed region is determined as the target image region. In this embodiment, a region of the first image whose brightness value is greater than or equal to the second preset threshold may be determined as the target image region; that is, such a region is treated as the overexposed region.
If the first image is obtained from third image data collected in the single-pixel mode, the single-pixel mode does not raise the brightness of the pixel points but does ensure the sharpness of the first image, so a relatively dark area may exist in the first image, and that relatively dark area is determined as the target image area. In this embodiment, a region of the first image whose brightness value is less than or equal to the first preset threshold may be determined as the target image region; that is, such a region is treated as a relatively dark region.
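The two threshold tests can be sketched as boolean masks over the luma plane. The helper name and the threshold values (32 and 240 on an 8-bit scale) are illustrative assumptions, since the patent leaves the first and second preset thresholds unspecified:

```python
import numpy as np

def find_target_region(luma: np.ndarray, low=None, high=None) -> np.ndarray:
    """Return a boolean mask of the target image area.

    low  -- first preset threshold: pixels at or below it count as too
            dark (used when the first image came from single-pixel mode).
    high -- second preset threshold: pixels at or above it count as
            overexposed (used when the first image came from binning).
    """
    if low is not None:
        return luma <= low
    if high is not None:
        return luma >= high
    raise ValueError("provide exactly one of low/high")

luma = np.array([[10, 200], [30, 250]], dtype=np.uint8)
dark_mask = find_target_region(luma, low=32)      # relatively dark pixels
bright_mask = find_target_region(luma, high=240)  # overexposed pixels
```

A real implementation would likely also require the mask to form a reasonably large connected region before re-capturing it, but the patent does not specify this.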
Optionally, determining the target image area in the first image may also be implemented by:
receiving a first input of a user to a first image;
in response to the first input, a region in the first image selected by the user is determined as the target image region.
As shown in fig. 4, a schematic view of a shooting interface provided in an embodiment of the present application, the user may circle the region 401 in which the HDR effect is desired in the first image displayed on the screen; that is, the first input may be the operation of circling the region 401, and in response to the first input the electronic device takes the circled region as the target image region.
After the target image area is determined, the target photosensitive area corresponding to it can be determined. The target photosensitive area is the corresponding region on the photosensitive element of the camera.
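How a target image area maps onto the corresponding target photosensitive area is left to the implementation. A minimal sketch, assuming a pure proportional scaling between preview-image coordinates and sensor coordinates (real sensors may additionally need crop or offset corrections):

```python
def region_to_sensor(region, image_size, sensor_size):
    """Map a rectangular image region to sensor coordinates by scaling.

    region      -- (x, y, w, h) in preview-image pixels
    image_size  -- (width, height) of the preview image
    sensor_size -- (width, height) of the photosensitive element
    """
    ix, iy = image_size
    sx, sy = sensor_size
    x, y, w, h = region
    fx, fy = sx / ix, sy / iy  # scale factors per axis
    return (round(x * fx), round(y * fy), round(w * fx), round(h * fy))

# A region found in a binned 2000x1500 preview, mapped back onto the
# full 4000x3000 sensor it was read out from.
sensor_rect = region_to_sensor((100, 50, 400, 300), (2000, 1500), (4000, 3000))
```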
Step 103, collecting first image data through the target photosensitive area by adopting a first working mode.
Optionally, when the first image is obtained according to third image data acquired in the pixel merging mode, and a region in the first image, where a luminance value is greater than or equal to a second preset threshold, is determined as a target image region, the first operating mode is a single-pixel mode, that is, the first image data is acquired through the target photosensitive region in the single-pixel mode.
Optionally, when the first image is obtained according to third image data acquired in a single-pixel mode, and a region in the first image, where a luminance value is smaller than or equal to a first preset threshold, is determined as a target image region, the first operating mode is a pixel merging mode, that is, the first image data is acquired through the target photosensitive region in the pixel merging mode.
Step 104, obtaining a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode.
It should be noted that the second image data may be data collected through other photosensitive areas in the second working mode, where the other photosensitive areas are the areas of the camera other than the target photosensitive area. In the present application, different working modes can be used simultaneously to collect image data from different areas: the first image data can be collected through the target photosensitive area in the first working mode while the second image data is collected through the other photosensitive areas in the second working mode, and the target image is then obtained from the first image data and the second image data.
For example, referring to fig. 5 and fig. 6: fig. 5 is a schematic diagram of a first image provided in an embodiment of the present application, and fig. 6 is a schematic diagram of obtaining a target image using different working modes. The region of the first image whose brightness value is less than or equal to the first preset threshold is, for example, the solid-line frame region 501 shown in fig. 5; in that case the solid-line frame region 501 corresponds to the target photosensitive region, and the other photosensitive regions correspond to, for example, the dashed-line frame region 502. Based on the result identified in fig. 5, as shown in fig. 6, the first working mode is the pixel merging mode and the second working mode is the single-pixel mode; that is, the first image data is collected through the target photosensitive area in the pixel merging mode, which raises the brightness of the image obtained from the data collected by that area.
The second image data and the first image data may also be collected at different times, in which case they are data of different frame images, and the target image is obtained by synthesizing the data of the different frame images.
For example, as shown in fig. 7, a schematic diagram of obtaining a target image provided in an embodiment of the present application, the diagram to the left of the ellipses represents a first image acquired when the brightness of the ambient light is less than or equal to the fourth preset threshold. In that case the first image is obtained from third image data collected in the pixel merging mode, a region of the first image whose brightness value is greater than or equal to the second preset threshold is determined as the target image area, the first working mode is the single-pixel mode, the second working mode is the pixel merging mode, and the first image data is collected through the target photosensitive area in the single-pixel mode.
Specifically, the second image data may be the third image data collected in the pixel merging mode when the first image was obtained, that is, the data of the nth frame image shown in fig. 7. After the second image data is obtained, the first image data can be collected again through the target photosensitive area in the single-pixel mode, so the first image data and the second image data are data of different frame images. The target image is then obtained from the first image data and the second image data.
The exposure value does not need to be changed at any point in this process: the first image data and the second image data are collected at the same exposure value. Consequently, the data in the second image data that do not belong to the same area as the first image data need not participate in the synthesis calculation; only the data in the second image data belonging to the same region as the first image data need to be synthesized with the first image data to obtain the target image. Compared with the prior art, in which multiple whole-frame images at different exposure values are synthesized, the amount of synthesis calculation is therefore small.
When the brightness of the ambient light is less than or equal to the fourth preset threshold, collecting the third image data in the pixel merging mode raises the brightness of the first image obtained from it, so that dark areas are brightened. At the same time, to avoid overexposure, the first image data is collected through the target photosensitive area in the single-pixel mode, which prevents the image collected by the target photosensitive area from being overexposed. The resulting target image is therefore an image with a high dynamic range effect.
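A minimal sketch of the region-limited synthesis, using a simple average as a stand-in for the synthesis operation the patent leaves unspecified; only the pixels inside the target rectangle are touched, which is where the saving over full-frame merging comes from:

```python
import numpy as np

def compose_target(second_frame: np.ndarray, first_patch: np.ndarray, rect):
    """Blend re-captured region data into the full frame.

    second_frame -- full frame built from the second image data
    first_patch  -- region re-captured as the first image data
    rect         -- (x, y, w, h) of the target region in the frame
    """
    x, y, w, h = rect
    out = second_frame.copy()
    # Widen before adding so uint8 values cannot overflow mid-average.
    region = out[y:y + h, x:x + w].astype(np.uint16)
    out[y:y + h, x:x + w] = ((region + first_patch) // 2).astype(second_frame.dtype)
    return out

frame = np.full((4, 4), 100, dtype=np.uint8)
patch = np.full((2, 2), 200, dtype=np.uint8)
result = compose_target(frame, patch, (1, 1, 2, 2))
```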
Therefore, by reducing the amount of synthesis calculation, the efficiency of obtaining an image with an HDR effect is improved.
Optionally, when the brightness of the ambient light is greater than or equal to the third preset threshold, the first image is obtained from third image data collected in the single-pixel mode, and a region of the first image whose brightness value is less than or equal to the first preset threshold is determined as the target image region. In that case the first working mode is the pixel merging mode and the second working mode is the single-pixel mode: the first image data is collected through the target photosensitive region in the pixel merging mode, and the second image data is collected in the single-pixel mode.
Specifically, the second image data may be the third image data collected in the single-pixel mode when the first image was obtained. The target image is then obtained from the first image data and the second image data, again with a reduced amount of synthesis calculation. When the brightness of the ambient light is greater than or equal to the third preset threshold, collecting the third image data in the single-pixel mode ensures the sharpness of the first image obtained from it. At the same time, to avoid dark areas in the first image, the first image data is collected through the target photosensitive area in the pixel merging mode, which prevents the image corresponding to the first image data from being too dark. The resulting target image is an image with a high dynamic range effect, and by reducing the amount of synthesis calculation, the efficiency of obtaining an image with an HDR effect is improved.
Optionally, since the third preset threshold is used to judge whether the scene is bright enough to risk overexposure and the fourth preset threshold is used to judge whether it is dark, the third preset threshold may be set greater than the fourth preset threshold.
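The threshold-driven choice of working modes described above can be summarized as follows. The concrete threshold values and the behaviour in the mid-range between the fourth and third thresholds are assumptions of this sketch; the patent only requires that the third preset threshold exceed the fourth:

```python
def pick_modes(ambient_luma, third_threshold=180, fourth_threshold=60):
    """Choose (preview_mode, first_working_mode) from ambient brightness.

    Bright scenes (>= third threshold) preview in single-pixel mode and
    re-capture dark regions with binning; dark scenes (<= fourth
    threshold) preview with binning and re-capture overexposed regions
    in single-pixel mode.
    """
    assert third_threshold > fourth_threshold
    if ambient_luma >= third_threshold:
        return ("single_pixel", "binning")
    if ambient_luma <= fourth_threshold:
        return ("binning", "single_pixel")
    # Mid-range: assume no region needs re-capturing (not specified).
    return ("single_pixel", None)

modes_bright = pick_modes(220)
modes_dark = pick_modes(30)
```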
In the shooting method provided by this embodiment, a first image is acquired through a camera, a target image area in the first image is determined, a target photosensitive area corresponding to the target image area is determined, first image data is collected through the target photosensitive area in a first working mode, and a target image is obtained according to the first image data and second image data, where the second image data is collected in a second working mode different from the first. The amount of calculation is thereby reduced, and the efficiency of obtaining an image with an HDR effect is improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a shooting apparatus provided in an embodiment of the present application, where the shooting apparatus 800 includes:
an obtaining module 810, configured to obtain a first image through a camera;
a determining module 820, configured to determine a target image area in the first image, and determine a target photosensitive area corresponding to the target image area;
an acquiring module 830, configured to acquire first image data through the target photosensitive area in a first working mode;
an obtaining module 840, configured to obtain a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode.
The shooting apparatus provided by this embodiment acquires a first image through a camera, determines a target image area in the first image, determines a target photosensitive area corresponding to the target image area, collects first image data through the target photosensitive area in a first working mode, and obtains a target image according to the first image data and second image data, where the second image data is collected in a second working mode different from the first. The amount of calculation is thereby reduced, and the efficiency of obtaining an image with an HDR effect is improved.
Optionally, the determining module 820 is specifically configured to determine, as the target image region, a region in the first image whose brightness value is less than or equal to a first preset threshold; the first working mode is a pixel merging mode;
or determining a region of the first image with a brightness value greater than or equal to a second preset threshold as the target image region; the first operating mode is a single pixel mode.
Optionally, the determining module 820 is specifically configured to receive a first input of the first image by the user;
in response to the first input, determining a region in the first image selected by a user as the target image region.
Optionally, the obtaining module 810 is specifically configured to, when the brightness of the ambient light meets a preset condition, adopt the second working mode to collect third image data; the third image data comprises the second image data; and obtaining the first image according to the third image data.
Optionally, if the preset condition is that the brightness is greater than or equal to a third preset threshold, the second working mode is the single-pixel mode;
if the preset condition is that the brightness is less than or equal to a fourth preset threshold, the second working mode is the pixel merging mode; wherein the third preset threshold is greater than the fourth preset threshold.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, an electronic device is further provided in an embodiment of the present application, as shown in fig. 9, and fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application. The electronic device 900 includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and capable of running on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the embodiment of the shooting method, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, which then manages charging, discharging and power consumption. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, which may include more or fewer components than those shown, combine some components, or arrange the components differently; this is not described further here.
The processor 1010 is configured to: acquire a first image through a camera;
determine a target image area in the first image, and determine a target photosensitive area corresponding to the target image area;
acquire first image data through the target photosensitive area by adopting a first working mode; and
obtain a target image according to the first image data and second image data, where the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode.
The processor 1010 is further configured to determine, as the target image region, a region in the first image whose brightness value is less than or equal to a first preset threshold, in which case the first working mode is a pixel merging (binning) mode;
or to determine, as the target image region, a region in the first image whose brightness value is greater than or equal to a second preset threshold, in which case the first working mode is a single-pixel mode.
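As a rough illustration of the threshold-based selection just described, the following Python sketch picks the target image area and the first working mode from per-pixel brightness. All names and threshold values here are invented for illustration; this is not the patented implementation.

```python
# Hypothetical sketch of the threshold-based target-region selection described
# above. Threshold values and function names are illustrative, not from the patent.

FIRST_THRESH = 60    # at or below: region is dark  -> pixel merging (binning) mode
SECOND_THRESH = 200  # at or above: region is bright -> single-pixel mode

def select_target_region(brightness):
    """brightness: 2-D list of per-pixel brightness values (0-255).

    Returns (mask, first_working_mode), where mask marks the target image
    area. Dark pixels are re-captured with pixel binning (more light per
    output pixel); bright pixels with the single-pixel (full resolution) mode.
    """
    dark = [[b <= FIRST_THRESH for b in row] for row in brightness]
    if any(any(row) for row in dark):
        return dark, "pixel_binning"
    bright = [[b >= SECOND_THRESH for b in row] for row in brightness]
    if any(any(row) for row in bright):
        return bright, "single_pixel"
    return None, None  # no region meets either threshold
```

For example, a frame with one very dark corner would yield a mask covering that corner together with the pixel-binning mode.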
The processor 1010 is further configured to receive a first input from a user on the first image;
and, in response to the first input, determine the region in the first image selected by the user as the target image region.
The processor 1010 is further configured to acquire third image data by adopting the second working mode when the brightness of the ambient light meets a preset condition, where the third image data includes the second image data; and to obtain the first image according to the third image data.
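The ambient-light branch, and the composition of the target image from the first and second image data, can be sketched as follows. Again, the thresholds and names are hypothetical illustrations, not values from the patent; the second image data is assumed to cover the photosensitive areas outside the target area.

```python
# Hypothetical sketch: choose the second working mode from ambient-light
# brightness, then compose the target image from first and second image data.
# Threshold values and names are illustrative only.

THIRD_THRESH = 220   # ambient light at or above this: risk of overexposure
FOURTH_THRESH = 50   # ambient light at or below this: dark scene

def pick_second_working_mode(ambient_brightness):
    if ambient_brightness >= THIRD_THRESH:
        return "single_pixel"    # keep full resolution where light is plentiful
    if ambient_brightness <= FOURTH_THRESH:
        return "pixel_binning"   # merge pixels to gather more light
    return None                  # preset condition not met

def compose_target_image(first_data, second_data, target_mask):
    """Take target-area pixels from first_data, the rest from second_data."""
    return [
        [f if m else s for f, s, m in zip(fr, sr, mr)]
        for fr, sr, mr in zip(first_data, second_data, target_mask)
    ]
```

The design point mirrored here is that one sensor readout mode serves the whole frame, while only the target photosensitive area is read out again in the other mode before the two data sets are merged.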
The embodiments of the present application further provide a readable storage medium, on which a program or instruction is stored. When executed by a processor, the program or instruction implements each process of the above shooting method embodiment and achieves the same technical effect; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042; the graphics processing unit 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The embodiments of the present application further provide a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above shooting method embodiment and achieve the same technical effect; to avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the present application is not limited to the specific embodiments described above, which are intended to be illustrative rather than restrictive. Various changes and modifications may be made by those skilled in the art without departing from the scope of the appended claims.

Claims (4)

1. A photographing method, characterized in that the method comprises:
acquiring a first image through a camera;
determining a target image area in the first image and determining a target photosensitive area corresponding to the target image area;
acquiring first image data through the target photosensitive area by adopting a first working mode;
obtaining a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode;
the second image data is data of photosensitive areas other than the target photosensitive area, acquired by adopting the second working mode;
the determining the target image area in the first image specifically includes:
determining a region of the first image with a brightness value smaller than or equal to a first preset threshold as the target image region; the first working mode is a pixel merging mode;
or determining a region of the first image with a brightness value greater than or equal to a second preset threshold as the target image region; the first working mode is a single-pixel mode; in the single-pixel mode, the pixels output by the imaging sensor are rearranged into normal pixels, and the resolution of the image is unchanged;
the acquiring of the first image by the camera specifically includes:
under the condition that the brightness of the ambient light meets a preset condition, acquiring third image data by adopting the second working mode; the third image data comprises the second image data;
obtaining the first image according to the third image data;
if the preset condition is that the brightness of the ambient light is greater than or equal to a third preset threshold, the second working mode is the single-pixel mode;
if the preset condition is that the brightness of the ambient light is less than or equal to a fourth preset threshold, the second working mode is the pixel merging mode; wherein the third preset threshold is greater than the fourth preset threshold, the third preset threshold is used for judging whether overexposure exists, and the fourth preset threshold is used for judging whether the light is dark.
2. The method according to claim 1, wherein the determining the target image region in the first image specifically comprises:
receiving a first input of the first image by a user;
in response to the first input, determining a region in the first image selected by a user as the target image region.
3. A camera, characterized in that the camera comprises:
the acquisition module is used for acquiring a first image through the camera;
the determining module is used for determining a target image area in the first image and determining a target photosensitive area corresponding to the target image area;
the acquisition module is used for acquiring first image data through the target photosensitive area by adopting a first working mode;
the acquisition module is used for acquiring a target image according to the first image data and the second image data; the second image data is acquired by adopting a second working mode, and the first working mode is different from the second working mode; the second image data is data of photosensitive areas other than the target photosensitive area, acquired by adopting the second working mode;
the determining module is specifically configured to determine, as the target image region, a region in the first image whose luminance value is less than or equal to a first preset threshold; the first working mode is a pixel merging mode;
or to determine a region of the first image with a brightness value greater than or equal to a second preset threshold as the target image region; the first working mode is a single-pixel mode; in the single-pixel mode, the pixels output by the imaging sensor are rearranged into normal pixels, and the resolution of the image is unchanged;
the acquisition module is specifically configured to acquire third image data in the second working mode when the brightness of the ambient light meets a preset condition; the third image data comprises the second image data; obtaining the first image according to the third image data;
if the preset condition is that the brightness of the ambient light is greater than or equal to a third preset threshold, the second working mode is the single-pixel mode;
if the preset condition is that the brightness of the ambient light is less than or equal to a fourth preset threshold, the second working mode is the pixel merging mode; wherein the third preset threshold is greater than the fourth preset threshold, the third preset threshold is used for judging whether overexposure exists, and the fourth preset threshold is used for judging whether the light is dark.
4. The apparatus according to claim 3, wherein the determining module is specifically configured to receive a first input of the first image from a user;
in response to the first input, determining a region in the first image selected by a user as the target image region.
CN202011489181.0A 2020-12-16 2020-12-16 Shooting method and device Active CN112437237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011489181.0A CN112437237B (en) 2020-12-16 2020-12-16 Shooting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011489181.0A CN112437237B (en) 2020-12-16 2020-12-16 Shooting method and device

Publications (2)

Publication Number Publication Date
CN112437237A CN112437237A (en) 2021-03-02
CN112437237B true CN112437237B (en) 2023-02-03

Family

ID=74691626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011489181.0A Active CN112437237B (en) 2020-12-16 2020-12-16 Shooting method and device

Country Status (1)

Country Link
CN (1) CN112437237B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113141461A (en) * 2021-04-12 2021-07-20 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN115118889A (en) * 2022-06-24 2022-09-27 维沃移动通信有限公司 Image generation method, image generation device, electronic equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8446481B1 (en) * 2012-09-11 2013-05-21 Google Inc. Interleaved capture for high dynamic range image acquisition and synthesis
JP5901854B2 (en) * 2013-12-05 2016-04-13 オリンパス株式会社 Imaging device
CN107079083B (en) * 2015-11-25 2020-11-06 华为技术有限公司 Photographing method, photographing device and terminal
CN105611185B (en) * 2015-12-18 2017-10-31 广东欧珀移动通信有限公司 image generating method, device and terminal device
CN106504217B (en) * 2016-11-29 2019-03-15 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, imaging device and electronic device
CN106412407B (en) * 2016-11-29 2019-06-07 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN108419022A (en) * 2018-03-06 2018-08-17 广东欧珀移动通信有限公司 Control method, control device, computer readable storage medium and computer equipment
CN108881731B (en) * 2018-08-06 2021-07-02 Oppo广东移动通信有限公司 Panoramic shooting method and device and imaging equipment
CN111246128B (en) * 2018-11-29 2022-09-13 北京图森智途科技有限公司 Pixel combination method, imaging device, image sensor and automobile
CN110475066A (en) * 2019-08-20 2019-11-19 Oppo广东移动通信有限公司 Control method, imaging device and electronic equipment
US10855931B1 (en) * 2019-11-07 2020-12-01 Novatek Microelectronics Corp. High dynamic range image sensing method for image sensing device

Also Published As

Publication number Publication date
CN112437237A (en) 2021-03-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant