WO2020207387A1 - Image processing method, device, storage medium and electronic device - Google Patents

Image processing method, device, storage medium and electronic device

Info

Publication number
WO2020207387A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dynamic range
scene
range value
synthesized
Prior art date
Application number
PCT/CN2020/083572
Other languages
English (en)
French (fr)
Inventor
张弓
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020207387A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N5/265 Mixing

Definitions

  • This application relates to the field of computer technology, in particular to an image processing method, device, storage medium and electronic equipment.
  • current electronic devices can only capture scenes with a relatively small brightness range. When a scene with large brightness differences is shot, the captured image is likely to lose the details of the bright and/or dark areas.
  • the related art proposes a high dynamic range (or wide dynamic range) synthesis technology, which synthesizes a high dynamic range image by taking multiple images.
  • the embodiments of the present application provide an image processing method, device, storage medium, and electronic equipment, which can efficiently realize the synthesis of high dynamic range images.
  • an embodiment of the present application provides an image processing method applied to an electronic device, and the image processing method includes:
  • an embodiment of the present application provides an image processing device, which is applied to electronic equipment, and the image processing device includes:
  • An image acquisition module for acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters
  • An image synthesis module for extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image
  • An area recognition module for recognizing a target area in the composite image whose dynamic range value does not reach a preset dynamic range value
  • the image synthesis module is also used to extract the next unsynthesized image from the image sequence, and synthesize the unsynthesized next image with the synthesized image according to the target area, until a high dynamic range image is synthesized in which the dynamic range values of all areas reach the preset dynamic range value.
  • an embodiment of the present application provides a storage medium on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute:
  • an embodiment of the present application provides an electronic device, including a processor and a memory, the memory stores a computer program, and the processor invokes the computer program to execute:
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a scene image being synthesized frame by frame in an embodiment of the present application.
  • Fig. 3 is a schematic diagram of identifying a target area in a composite image in an embodiment of the present application.
  • Fig. 4 is a schematic diagram of the positions of the first camera and the second camera in an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an image sequence obtained by sorting multiple scene images in an embodiment of the present application.
  • FIG. 6 is another schematic diagram of an image sequence obtained by shooting multiple scene images in an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 10 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the embodiment of the present application first provides an image processing method, which is applied to an electronic device.
  • the execution subject of the image processing method may be the image processing device provided in the embodiment of the present application, or an electronic device integrated with the image processing device.
  • the image processing device may be implemented in hardware or software, and the electronic device may be a smart device such as a mobile phone, tablet computer, handheld computer, notebook computer, or desktop computer that is equipped with a processor and has processing capability.
  • This application provides an image processing method, including:
  • the identifying the target area in the composite image whose dynamic range value does not reach a preset dynamic range value includes:
  • the target area is determined in the composite image according to an area in the down-sampled image whose dynamic range value does not reach the preset dynamic range value.
  • the synthesizing the first two scene images to obtain a synthesized image includes:
  • the first two scene images are synthesized according to the weight value to obtain the synthesized image.
  • the acquiring an image sequence of a shooting scene includes:
  • the image sequence of the shooting scene is acquired.
  • the electronic device includes a first camera and a second camera
  • the acquiring the image sequence of the shooting scene includes:
  • the multiple scene images of the shooting scene are sorted to obtain the image sequence.
  • the acquiring the image sequence of the shooting scene includes:
  • the performing backlight environment recognition on the shooting scene includes:
  • the environmental parameters are input into a pre-trained support vector machine classifier for classification, and a recognition result of whether the shooting scene is in a backlight environment is obtained.
  • the image processing method further includes:
  • the performing quality optimization processing on the high dynamic range image includes:
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of this application.
  • the image processing method is applied to the electronic device provided in the embodiment of the application.
  • the flow of the image processing method provided in the embodiment of the application may be as follows:
  • an image sequence of a shooting scene is acquired, and the image sequence includes multiple scene images with different exposure parameters.
  • when a shooting application (for example, the system application "camera" of the electronic device) is started, the scene at which the camera is aimed is the shooting scene.
  • for example, after the user clicks the icon of the "camera" application on the electronic device to start it, if the user aims the camera of the electronic device at a scene including XX objects, then the scene including XX objects is the shooting scene.
  • the shooting scene does not refer to a specific fixed scene, but to the scene that the camera is aimed at in real time as its direction changes.
  • the electronic device acquires multiple scene images with different exposure parameters corresponding to the shooting scene.
  • the exposure parameters include exposure duration and exposure value (commonly known as EV value).
  • for example, the electronic device can obtain multiple scene images in which long exposure durations and short exposure durations alternate, to form an image sequence of the shooting scene;
  • the electronic device can also obtain multiple scene images in which overexposure values and underexposure values alternate, to form an image sequence of the shooting scene; as another example, the electronic device can obtain multiple scene images of the shooting scene by exposure bracketing, to form an image sequence of the shooting scene.
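the alternating arrangement of exposure durations described above can be sketched as follows. This is a minimal illustration; the function name and the 40 ms / 5 ms durations are assumptions for the example, not values from this application:

```python
# Sketch: assemble a per-frame exposure schedule in which a long and a
# short exposure duration alternate, so adjacent frames always differ.
def build_exposure_schedule(num_frames, long_exposure_ms=40.0, short_exposure_ms=5.0):
    """Return the exposure duration (ms) for each frame, alternating
    long/short; the actual durations are illustrative."""
    return [long_exposure_ms if i % 2 == 0 else short_exposure_ms
            for i in range(num_frames)]

schedule = build_exposure_schedule(6)
# Adjacent frames always use different exposure durations.
assert all(schedule[i] != schedule[i + 1] for i in range(len(schedule) - 1))
```

The same shape of schedule would apply to alternating over/under EV values instead of durations.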
  • the first two scene images in the image sequence are extracted, and the first two scene images are synthesized to obtain a synthesized image.
  • after the electronic device acquires the image sequence of the shooting scene, it first extracts the first two scene images in the image sequence. Suppose the image sequence includes 3 scene images of the shooting scene, which are, in order, scene image A, scene image B, and scene image C; the electronic device then extracts scene image A and scene image B for synthesis.
  • after extracting the first two scene images in the image sequence, the electronic device further synthesizes the first two scene images to obtain a composite image.
  • for example, suppose the first two scene images are scene image A with a short exposure duration and scene image B with a long exposure duration. Since the exposure duration of scene image A is shorter than that of scene image B, scene image A retains more features of the brighter areas in the shooting scene than scene image B, while scene image B retains more features of the darker areas than scene image A. Therefore, the features of the darker areas retained by long-exposure scene image B and the features of the brighter areas retained by short-exposure scene image A can be synthesized to obtain a composite image, which has a higher dynamic range than the original scene image A and scene image B.
  • as another example, suppose the first two scene images are under-exposed scene image A and over-exposed scene image B. Because scene image A is under-exposed, it retains more features of the brighter areas in the shooting scene than scene image B; because scene image B is over-exposed, it retains more features of the darker areas than scene image A. Therefore, the features of the darker areas retained by over-exposed scene image B and the features of the brighter areas retained by under-exposed scene image A can be synthesized to obtain a composite image with a higher dynamic range than either original image.
  • after the composite image is obtained from the first two images of the image sequence, the electronic device further identifies the target area in the composite image whose dynamic range value does not reach the preset dynamic range value.
  • the electronic device can divide the composite image into multiple sub-regions. For any sub-region, its dynamic range value is determined according to the histogram variance corresponding to the image brightness values, or according to the highest and lowest brightness values. After determining the dynamic range value of each sub-region, the sub-regions whose dynamic range values do not reach the preset dynamic range value are determined as the target areas of the composite image.
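the sub-region check just described can be sketched as follows. The grid size, the use of the max-min brightness spread as the dynamic range value, and the preset value of 128 are illustrative assumptions; the application also mentions histogram variance as an alternative measure:

```python
import numpy as np

# Sketch: split a luminance image into a grid of sub-regions and flag
# those whose dynamic range value (here: max - min brightness spread)
# falls below a preset value. Thresholds are assumed, not from the patent.
def find_low_dynamic_range_regions(luma, grid=4, preset_range=128):
    """luma: 2-D uint8 array. Returns (row, col) grid cells whose
    brightness spread is below preset_range."""
    h, w = luma.shape
    rh, rw = h // grid, w // grid
    targets = []
    for r in range(grid):
        for c in range(grid):
            cell = luma[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            if int(cell.max()) - int(cell.min()) < preset_range:
                targets.append((r, c))
    return targets

# A flat image has zero spread everywhere, so every cell is a target area.
flat = np.full((64, 64), 100, dtype=np.uint8)
assert len(find_low_dynamic_range_regions(flat)) == 16
```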
  • after the electronic device identifies the target area in the composite image whose dynamic range value does not reach the preset dynamic range value, it further extracts the next unsynthesized image from the image sequence and synthesizes that image with the composite image according to the target area, until a high dynamic range image is obtained in which the dynamic range values of all areas reach the preset dynamic range value.
  • in layman's terms, after the first two images in the image sequence are synthesized, the synthesis effect of some areas already meets the requirement and no further synthesis is needed there. For the areas where the synthesis effect does not meet the requirement (i.e. the target areas whose dynamic range values do not reach the preset dynamic range value), the next not-yet-synthesized image in the image sequence is selected in turn and synthesized into those areas to obtain a new composite image; the target areas of the new composite image are then determined, and synthesis continues until a high dynamic range image is obtained in which the dynamic range values of all regions reach the preset dynamic range value. As shown in Figure 2, as the number of synthesis passes increases, the area where the effect does not meet the requirement (i.e. the target area) gradually shrinks.
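the frame-by-frame loop described above can be sketched as follows. The helper callables `synthesize()` and `find_target_regions()` are hypothetical placeholders for the operations in the surrounding text, not APIs defined by this application:

```python
# Sketch of the progressive frame-by-frame synthesis loop.
def progressive_hdr(image_sequence, synthesize, find_target_regions):
    """Synthesize one more frame at a time, stopping as soon as no
    region's dynamic range value is below the preset value."""
    # First, combine the first two scene images of the sequence.
    composite = synthesize(image_sequence[0], image_sequence[1], regions=None)
    for next_image in image_sequence[2:]:
        targets = find_target_regions(composite)
        if not targets:  # every region already meets the preset value
            break
        # Only the target regions are re-synthesized with the next frame.
        composite = synthesize(composite, next_image, regions=targets)
    return composite
```

Note the early exit: remaining frames in the sequence are simply never synthesized once all regions pass, which is the source of the claimed efficiency gain.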
  • as can be seen from the above, the electronic device first obtains the image sequence of the shooting scene, which includes multiple scene images with different exposure parameters. After the first two scene images are synthesized into a composite image, the areas whose synthesis effect already meets the requirement need no further synthesis; for the areas where the effect does not meet the requirement (that is, the target areas whose dynamic range values do not reach the preset dynamic range value), the next not-yet-synthesized image in the image sequence is selected in turn for synthesis, until a high dynamic range image is obtained in which the dynamic range values of all areas reach the preset dynamic range value. It is therefore unnecessary to synthesize all regions of all scene images, and the efficiency of synthesizing high dynamic range images can be improved.
  • identifying the target area in the composite image whose dynamic range value does not reach the preset dynamic range value includes:
  • the electronic device first downsamples the composite image to obtain a downsampled image.
  • the image content of the down-sampled image is the same as that of the composite image, but its resolution is lower. The electronic device then divides the down-sampled image into multiple regions.
  • for any region of the down-sampled image, the electronic device determines its dynamic range value according to the histogram variance corresponding to the image brightness values, or according to the highest and lowest brightness values. After determining the dynamic range value of each region in the down-sampled image, the electronic device can compare it with the preset dynamic range value, thereby determining the regions of the down-sampled image whose dynamic range values do not reach the preset dynamic range value.
  • since the down-sampled image and the composite image have the same image content but different resolutions, the electronic device further maps the regions of the down-sampled image whose dynamic range values do not reach the preset dynamic range value into the composite image according to the resolutions of the down-sampled image and the composite image, obtaining the target area in the composite image whose dynamic range value does not reach the preset dynamic range value.
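the downsample-then-map-back idea can be sketched as follows. The 2x factor, the strided downsampling, and the rectangle representation are illustrative assumptions; a real pipeline would typically filter before decimating:

```python
import numpy as np

# Sketch: find regions on a downsampled image, then scale the region
# rectangle back to full-resolution coordinates by the resolution ratio.
def downsample2x(luma):
    """Naive 2x downsample by striding; real pipelines would average/filter."""
    return luma[::2, ::2]

def map_region_to_full_res(region, factor=2):
    """region: (top, left, bottom, right) in downsampled coordinates."""
    top, left, bottom, right = region
    return (top * factor, left * factor, bottom * factor, right * factor)

small_region = (3, 4, 10, 12)
assert map_region_to_full_res(small_region) == (6, 8, 20, 24)
```

The dynamic-range check runs on the small image (cheap), while the synthesis itself still happens at full resolution inside the mapped rectangles.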
  • "synthesizing the first two scene images to obtain a synthesized image” includes:
  • the first two scene images are synthesized according to the weight value to obtain a synthesized image.
  • the first two scene images have the same size but different exposure parameters. For example, the first two scene images are scene image A and scene image B, where scene image A is a long-exposure image and scene image B is a short-exposure image, or scene image A is an over-exposed image and scene image B is an under-exposed image. Therefore, the pixel data (such as brightness values) at the same position in the first two scene images reflect the difference of the shooting scene under different exposure parameters, and the weight values of the first two scene images can be determined from this difference.
  • after obtaining the weight values, the first two scene images can be synthesized according to the weight values to obtain a composite image, which has a higher dynamic range than either of the first two scene images.
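a per-pixel weighted fusion of this kind might look like the sketch below. The specific weight curve (favoring the short exposure in bright areas and the long exposure in dark areas) is an illustrative assumption, not the weighting formula of this application:

```python
import numpy as np

# Sketch (assumed weighting scheme): fuse a short- and a long-exposure
# luminance image with per-pixel weights derived from brightness.
def fuse_exposures(short_luma, long_luma):
    short = short_luma.astype(np.float32)
    long_ = long_luma.astype(np.float32)
    # Brighter pixels (as seen by the long exposure) lean on the short
    # exposure, which retains highlight detail; dark pixels lean on the
    # long exposure, which retains shadow detail.
    w_short = long_ / 255.0
    fused = w_short * short + (1.0 - w_short) * long_
    return np.clip(fused, 0, 255).astype(np.uint8)

# Saturated in the long exposure -> take the short exposure's value.
short = np.array([[200]], dtype=np.uint8)
long_img = np.array([[255]], dtype=np.uint8)
assert fuse_exposures(short, long_img)[0, 0] == 200
```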
  • "acquiring an image sequence of a shooting scene” includes:
  • when receiving an image shooting request, the electronic device can perform backlight environment recognition on the shooting scene, so that when the shooting scene is recognized to be in a backlight environment, the image sequence of the shooting scene is acquired and a high dynamic range image of the shooting scene is synthesized from the image sequence.
  • the backlight environment recognition of the shooting scene can be implemented in multiple ways.
  • the environmental parameters of the shooting scene can be acquired, and the backlight environment recognition of the shooting scene can be performed according to the acquired environmental parameters.
  • the environmental parameters of the electronic device can be acquired, and the environmental parameters of the electronic device are used as the environmental parameters of the shooting scene.
  • the environmental parameters include, but are not limited to, the time information, time zone information, location information, and weather information of the location where the electronic device is situated.
  • the obtained environmental parameters can be input into a pre-trained support vector machine classifier, which classifies according to the input environmental parameters to obtain a recognition result of whether the shooting scene is in a backlight environment.
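at inference time, a pre-trained linear SVM reduces to the sign of a dot product, which can be sketched without any ML library. The feature layout, weights, and bias below are illustrative stand-ins for a classifier actually trained on environmental parameters; they are not real trained values:

```python
# Sketch: linear SVM decision function, w . x + b > 0 -> backlit.
# Weights/bias are hypothetical, not a genuinely trained model.
def svm_is_backlit(features, weights, bias):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0.0

# Hypothetical features: [hour_of_day_norm, outdoor_flag, sun_elevation_norm]
weights = [0.8, 1.2, 1.5]   # illustrative "pre-trained" weights
bias = -1.6
assert svm_is_backlit([0.5, 1.0, 0.7], weights, bias) is True
```

A kernel SVM would replace the dot product with a kernel sum over support vectors, but the decision structure is the same.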
  • the histogram information of the shooting scene in the preset channel may be acquired, and the backlight environment recognition of the shooting scene may be performed according to the acquired histogram information.
  • the preset channels include R, G, and B three channels.
  • statistics are performed on the histogram information of the shooting scene to obtain statistical results.
  • the foregoing preset condition may be set as: the numbers of pixels in the first brightness interval and the second brightness interval both reach a preset number threshold, and the minimum brightness is less than a first preset brightness threshold and/or the maximum brightness is greater than a second preset brightness threshold, where the preset number threshold, the first preset brightness threshold, and the second preset brightness threshold are empirical parameters that can be selected by those of ordinary skill in the art according to actual needs.
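the preset condition above can be sketched as a histogram test. All thresholds here (pixel-count threshold, the dark/bright interval boundaries, the min/max brightness thresholds) are the illustrative "empirical parameters" the text mentions, chosen arbitrarily for the example:

```python
import numpy as np

# Sketch: treat a scene as backlit when both a dark and a bright
# brightness interval hold enough pixels AND the extremes are extreme.
def looks_backlit(luma, min_pixels=100, dark_max=50, bright_min=200,
                  low_thresh=10, high_thresh=245):
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    dark_count = hist[:dark_max].sum()        # first brightness interval
    bright_count = hist[bright_min:].sum()    # second brightness interval
    return (dark_count >= min_pixels and bright_count >= min_pixels
            and (int(luma.min()) < low_thresh or int(luma.max()) > high_thresh))
```

In practice this would run per channel (R, G, B) on the preview frame, as the text describes.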
  • "acquiring an image sequence of a shooting scene” includes:
  • a first camera and a second camera are provided on the same side of the electronic device.
  • the electronic device uses the first camera and the second camera to shoot the shooting scene according to different exposure parameters to obtain multiple scene images of the shooting scene.
  • for example, the electronic device exposes the shooting scene through the first camera according to a short exposure duration while exposing the shooting scene through the second camera according to a long exposure duration, so that two scene images of the shooting scene, a long-exposure image and a short-exposure image, are obtained with "one exposure operation"; as another example, the electronic device over-exposes the shooting scene through the first camera while under-exposing it through the second camera, so that "one exposure operation" obtains two scene images, one over-exposed and one under-exposed. As a result, the efficiency of acquiring scene images can be improved.
  • after obtaining the multiple scene images of the shooting scene, the electronic device sorts them to obtain the image sequence. For example, if the first camera and the second camera shoot according to the long exposure duration and the short exposure duration respectively, the electronic device can sort the captured scene images so that long-exposure and short-exposure images alternate, as shown in Figure 5; as another example, if the first camera and the second camera respectively over-expose and under-expose the shooting scene, the electronic device can sort the scene images so that over-exposed and under-exposed images alternate, as shown in Figure 6.
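the sorting into an alternating sequence can be sketched as a simple interleave of the two cameras' frame lists (function name and frame labels are illustrative):

```python
# Sketch: interleave frames captured simultaneously by two cameras
# (one long-exposure, one short-exposure) so the resulting image
# sequence alternates long/short, as in Figures 5 and 6.
def interleave_frames(long_frames, short_frames):
    sequence = []
    for long_f, short_f in zip(long_frames, short_frames):
        sequence.extend([long_f, short_f])
    return sequence

assert interleave_frames(["L0", "L1"], ["S0", "S1"]) == ["L0", "S0", "L1", "S1"]
```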
  • "acquiring an image sequence of a shooting scene” includes:
  • an image cache queue is also preset in the electronic device, and the image cache queue may be a fixed-length queue or a variable-length queue.
  • the image cache queue is a fixed-length queue and can cache 8 images.
  • the electronic device caches the scene images of the shooting scene collected by the camera in real time (with the camera alternating between different exposure parameters) into the image cache queue. Therefore, when acquiring the image sequence of the shooting scene, the electronic device can obtain it from the preset image cache queue.
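a fixed-length image cache queue of the kind described (8 frames here, per the example above) maps naturally onto a bounded deque, which silently drops the oldest frame when full:

```python
from collections import deque

# Sketch: a rolling cache of the 8 most recent captured frames; with
# maxlen set, appending to a full deque evicts the oldest entry.
image_cache = deque(maxlen=8)
for frame_id in range(12):      # simulate 12 captured frames
    image_cache.append(frame_id)

assert len(image_cache) == 8
assert list(image_cache) == [4, 5, 6, 7, 8, 9, 10, 11]
```

A variable-length queue, the other option the text mentions, would simply omit `maxlen` and evict under an explicit policy.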
  • the image processing method provided in this application further includes:
  • the quality of the high dynamic range image is optimized.
  • the quality optimization processing performed in the embodiment of the present application includes, but is not limited to, sharpening and noise reduction; a person of ordinary skill in the art can select an appropriate quality optimization processing method according to actual needs.
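as one hedged illustration of such post-processing (the kernels and the unsharp-mask formulation are generic textbook choices, not the method of this application):

```python
import numpy as np

# Sketch: simple convolution-based noise reduction (3x3 box blur) and
# unsharp-mask style sharpening for a synthesized luminance image.
def box_blur3(luma):
    """3x3 mean filter; edges handled by zero padding."""
    padded = np.pad(luma.astype(np.float32), 1)
    h, w = luma.shape
    return sum(padded[r:r + h, c:c + w]
               for r in range(3) for c in range(3)) / 9.0

def unsharp_mask(luma, amount=1.0):
    """Sharpen by adding back the high-frequency residual luma - blur."""
    blurred = box_blur3(luma)
    sharp = luma.astype(np.float32) + amount * (luma - blurred)
    return np.clip(sharp, 0, 255).astype(np.uint8)
```

Real devices would typically use edge-preserving denoising (e.g. bilateral filtering) rather than a plain box blur; this is only a minimal stand-in.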
  • the quality of the synthesized high dynamic range image is optimized to further improve its image quality.
  • FIG. 7 is a schematic diagram of another flow of an image processing method provided by an embodiment of this application.
  • the flow of the image processing method may include:
  • the electronic device acquires an image sequence of a shooting scene, and the image sequence includes multiple scene images with different exposure parameters.
  • when a shooting application (for example, the system application "camera" of the electronic device) is started, the scene at which the camera is aimed is the shooting scene.
  • for example, after the user clicks the icon of the "camera" application on the electronic device to start it, if the user aims the camera of the electronic device at a scene including XX objects, then the scene including XX objects is the shooting scene.
  • the shooting scene does not refer to a specific fixed scene, but to the scene that the camera is aimed at in real time as its direction changes.
  • the electronic device acquires multiple scene images with different exposure parameters corresponding to the shooting scene.
  • the exposure parameters include exposure duration and exposure value (commonly known as EV value).
  • for example, the electronic device can obtain multiple scene images in which long exposure durations and short exposure durations alternate, to form an image sequence of the shooting scene;
  • the electronic device can also obtain multiple scene images in which overexposure values and underexposure values alternate, to form an image sequence of the shooting scene; as another example, the electronic device can obtain multiple scene images of the shooting scene by exposure bracketing, to form an image sequence of the shooting scene.
  • the electronic device extracts the first two scene images in the image sequence, and synthesizes the first two scene images to obtain a composite image.
  • after the electronic device acquires the image sequence of the shooting scene, it first extracts the first two scene images in the image sequence. Suppose the image sequence includes 3 scene images of the shooting scene, which are, in order, scene image A, scene image B, and scene image C; the electronic device then extracts scene image A and scene image B for synthesis.
  • after extracting the first two scene images in the image sequence, the electronic device further synthesizes the first two scene images to obtain a composite image.
  • for example, suppose the first two scene images are scene image A with a short exposure duration and scene image B with a long exposure duration. Since the exposure duration of scene image A is shorter than that of scene image B, scene image A retains more features of the brighter areas in the shooting scene than scene image B, while scene image B retains more features of the darker areas than scene image A. Therefore, the features of the darker areas retained by long-exposure scene image B and the features of the brighter areas retained by short-exposure scene image A can be synthesized to obtain a composite image, which has a higher dynamic range than the original scene image A and scene image B.
  • as another example, suppose the first two scene images are under-exposed scene image A and over-exposed scene image B. Because scene image A is under-exposed, it retains more features of the brighter areas in the shooting scene than scene image B; because scene image B is over-exposed, it retains more features of the darker areas than scene image A. Therefore, the features of the darker areas retained by over-exposed scene image B and the features of the brighter areas retained by under-exposed scene image A can be synthesized to obtain a composite image with a higher dynamic range than either original image.
  • the electronic device down-samples the synthesized image to obtain the down-sampled image.
  • the electronic device obtains the dynamic range value of each area in the down-sampled image, and determines the area in the down-sampled image where the dynamic range value does not reach the preset dynamic range value.
  • the electronic device determines the target area in the composite image according to the area in the down-sampled image whose dynamic range value does not reach the preset dynamic range value.
  • after the composite image is obtained from the first two images of the image sequence, the electronic device further identifies the target area in the composite image whose dynamic range value does not reach the preset dynamic range value.
  • in order to identify the target area more efficiently, the electronic device first down-samples the composite image to obtain a down-sampled image.
  • the image content of the down-sampled image is the same as that of the composite image, but its resolution is lower.
  • the electronic device divides the down-sampled image into multiple regions. For any region of the down-sampled image, the electronic device determines its dynamic range value according to the histogram variance corresponding to the image brightness values, or according to the highest and lowest brightness values.
  • after determining the dynamic range value of each region in the down-sampled image, the electronic device can compare it with the preset dynamic range value, thereby determining the regions of the down-sampled image whose dynamic range values do not reach the preset dynamic range value. Since the down-sampled image and the composite image have the same image content but different resolutions, the electronic device further maps those regions into the composite image according to the resolutions of the down-sampled image and the composite image, obtaining the target area in the composite image whose dynamic range value does not reach the preset dynamic range value.
  • the electronic device extracts the next unsynthesized image from the image sequence, and synthesizes the unsynthesized next image with the composite image according to the target area, until a high dynamic range image is synthesized in which the dynamic range values of all areas reach the preset dynamic range value.
  • after the electronic device identifies the target area in the composite image whose dynamic range value does not reach the preset dynamic range value, it further extracts the next unsynthesized image from the image sequence and synthesizes that image with the composite image according to the target area, until a high dynamic range image is obtained in which the dynamic range values of all areas reach the preset dynamic range value.
  • in layman's terms, as shown in Figure 2, after the first two scene images in the image sequence have been synthesized, the synthesis effect of some areas already meets the requirement and no further synthesis is needed. For the areas where the synthesis effect does not meet the requirement (i.e. the target areas whose dynamic range values do not reach the preset dynamic range value), the next unsynthesized image in the image sequence is selected in order and synthesized for those areas, obtaining a new synthesized image; the areas of the new synthesized image whose synthesis effect still does not meet the requirement are then determined and synthesis continues, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained (as shown in Figure 2, as the number of synthesis passes increases, the areas that do not meet the requirement, i.e. the target areas, gradually shrink).
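  • the iterative procedure described above can be sketched as follows; the fixed block size, the max-based blend, and the max-minus-min range check are placeholder assumptions standing in for the application's actual synthesis and measurement steps:

```python
import numpy as np

def progressive_hdr(images, preset_dr=100, block=2):
    """Progressively synthesize a high dynamic range image.

    images: list of equally sized 2-D brightness arrays, exposure-ordered.
    Each pass re-blends only the target areas whose dynamic range value
    (max minus min per block) is still below preset_dr, and stops early
    once every block meets the target or the sequence is exhausted.
    """
    composite = np.maximum(images[0], images[1]).astype(np.float64)
    for nxt in images[2:]:
        weak = []
        h, w = composite.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                r = composite[y:y + block, x:x + block]
                if r.max() - r.min() < preset_dr:   # dynamic range check
                    weak.append((y, x))
        if not weak:
            break                                   # all areas meet the target
        for y, x in weak:                           # re-blend only target areas
            composite[y:y + block, x:x + block] = np.maximum(
                composite[y:y + block, x:x + block],
                nxt[y:y + block, x:x + block])
    return composite
```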
  • the embodiment of the present application also provides an image processing device.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
  • the image processing device is applied to an electronic device.
  • the electronic device includes an image sensor.
  • the image sensor has a first working mode and a second working mode.
  • the image processing device includes an image acquisition module 501, an image synthesis module 502, an area recognition module 503, and an image synthesis module 504, as follows:
  • the image acquisition module 501 is configured to acquire an image sequence of a shooting scene, and the image sequence includes multiple scene images with different exposure parameters;
  • the image synthesis module 502 is used to extract the first two scene images in the image sequence, and synthesize the first two scene images to obtain a synthesized image;
  • the area recognition module 503 is used to recognize the target area in the composite image whose dynamic range value does not reach the preset dynamic range value;
  • the image synthesis module 504 is used to extract the next unsynthesized image from the image sequence, and synthesize the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  • when recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value, the area recognition module 503 may be used to: down-sample the synthesized image to obtain a down-sampled image; obtain the dynamic range value of each region in the down-sampled image, and determine the regions in the down-sampled image whose dynamic range values do not reach the preset dynamic range value; and determine the target area in the synthesized image according to those regions.
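  • the down-sampling mentioned here can be as simple as block averaging, which keeps the image content while lowering the resolution; the default factor of 4 below is an illustrative choice, not a value specified by the application:

```python
import numpy as np

def downsample(img, factor=4):
    """Block-average down-sampling: same image content, lower resolution.

    img: 2-D brightness array. The image is cropped to a multiple of the
    factor, then each factor x factor block is replaced by its mean.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))                # average each block
```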
  • when synthesizing the first two scene images to obtain a synthesized image, the image synthesis module 502 may be used to: obtain a weight value for image synthesis according to the pixel data at the same positions in the first two scene images; and synthesize the first two scene images according to the weight value to obtain the synthesized image.
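  • a minimal sketch of such weight-based synthesis is shown below; the application only states that the weight value comes from pixel data at the same positions, so the well-exposedness weighting (favoring pixels near mid-gray) is an assumption chosen for illustration:

```python
import numpy as np

def blend_by_brightness(img_a, img_b):
    """Blend two differently exposed images with per-pixel weights derived
    from the pixel data at the same positions.

    Pixels near mid-gray (128) get high weight; crushed shadows and blown
    highlights get low weight, so each position is dominated by whichever
    exposure rendered it better.
    """
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    # Gaussian well-exposedness weights centered on mid-gray.
    wa = np.exp(-((a - 128.0) ** 2) / (2 * 64.0 ** 2))
    wb = np.exp(-((b - 128.0) ** 2) / (2 * 64.0 ** 2))
    total = wa + wb + 1e-12                 # avoid division by zero
    return (wa * a + wb * b) / total        # per-pixel weighted average
```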
  • when acquiring an image sequence of a shooting scene, the image acquisition module 501 may be used to: perform backlight environment recognition on the shooting scene if an image shooting request is received; and acquire the image sequence of the shooting scene if the shooting scene is recognized as being in a backlight environment.
  • the electronic device includes a first camera and a second camera. When acquiring the image sequence of the shooting scene, the image synthesis module 502 may be used to: shoot the shooting scene through the first camera and the second camera respectively with different exposure parameters to obtain multiple scene images of the shooting scene; and sort the multiple scene images of the shooting scene to obtain the image sequence.
  • when acquiring an image sequence of a shooting scene, the image synthesis module 502 may be used to: acquire the image sequence of the shooting scene from a preset image cache queue.
  • when performing backlight environment recognition on the shooting scene, the image acquisition module 501 can be used to: acquire environment parameters of the shooting scene; and input the environment parameters into a pre-trained support vector machine classifier for classification to obtain a recognition result of whether the shooting scene is in a backlight environment.
  • the image processing device further includes an image optimization module, used to: perform quality optimization processing on the high dynamic range image after the high dynamic range image is obtained by synthesis.
  • when performing quality optimization processing on the high dynamic range image, the image optimization module is used to: perform noise reduction processing on the high dynamic range image.
  • the image processing device provided in the embodiments of the application is based on the same concept as the image processing method in the above embodiments; any method provided in the image processing method embodiments can be run on the image processing device. For details of the specific implementation process, refer to the image processing method embodiments, which will not be repeated here.
  • the embodiments of the present application provide a computer-readable storage medium with a computer program stored thereon; when the stored computer program is executed on a computer, the computer is caused to execute the steps of the image processing method provided in the embodiments of the present application.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM), etc.
  • the electronic device includes a processor 701 and a memory 702.
  • the processor 701 is electrically connected to the memory 702.
  • the processor 701 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or loading the computer program stored in the memory 702 and calling the data stored in the memory 702.
  • the memory 702 may be used to store software programs and modules.
  • the processor 701 executes various functional applications and data processing by running the computer programs and modules stored in the memory 702.
  • the memory 702 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, a computer program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the electronic device, and the like.
  • the memory 702 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 702 may further include a memory controller to provide the processor 701 with access to the memory 702.
  • the processor 701 in the electronic device loads the instructions corresponding to the processes of one or more computer programs into the memory 702 according to the following steps, and runs the computer programs stored in the memory 702 to implement various functions, as follows:
  • acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters; extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image; recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value; and extracting the next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  • FIG. 10 is another schematic structural diagram of the electronic device provided by an embodiment of the application. The difference from the electronic device shown in FIG. 9 is that the electronic device further includes components such as an input unit 703 and an output unit 704.
  • the input unit 703 can be used to receive input numbers, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • the output unit 704 may be used to display information input by the user or information provided to the user, for example on a display screen.
  • the processor 701 in the electronic device loads the instructions corresponding to the processes of one or more computer programs into the memory 702 according to the following steps, and runs the computer programs stored in the memory 702 to implement various functions, as follows:
  • acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters; extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image; recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value; and extracting the next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  • when recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value, the processor 701 may execute: down-sampling the synthesized image to obtain a down-sampled image; obtaining the dynamic range value of each region in the down-sampled image, and determining the regions in the down-sampled image whose dynamic range values do not reach the preset dynamic range value; and determining the target area in the synthesized image according to those regions.
  • when synthesizing the first two scene images to obtain a synthesized image, the processor 701 may execute: obtaining a weight value for image synthesis according to the pixel data at the same positions in the first two scene images; and synthesizing the first two scene images according to the weight value to obtain the synthesized image.
  • when acquiring an image sequence of a shooting scene, the processor 701 may further execute: performing backlight environment recognition on the shooting scene if an image shooting request is received; and acquiring the image sequence of the shooting scene if the shooting scene is recognized as being in a backlight environment.
  • when acquiring the image sequence of the shooting scene, the processor 701 may execute: shooting the shooting scene through the first camera and the second camera respectively with different exposure parameters to obtain multiple scene images of the shooting scene; and sorting the multiple scene images of the shooting scene to obtain the image sequence.
  • when acquiring the image sequence of the shooting scene, the processor 701 may execute: acquiring the image sequence of the shooting scene from a preset image cache queue.
  • when performing backlight environment recognition on the shooting scene, the processor 701 may execute: acquiring environment parameters of the shooting scene; and inputting the environment parameters into a pre-trained support vector machine classifier for classification to obtain a recognition result of whether the shooting scene is in a backlight environment.
  • the processor 701 may also execute: performing quality optimization processing on the high dynamic range image after the high dynamic range image is obtained by synthesis.
  • when performing quality optimization processing on the high dynamic range image, the processor 701 may execute: performing noise reduction processing on the high dynamic range image.
  • the electronic device provided in the embodiments of the application is based on the same concept as the image processing method in the above embodiments; any method provided in the image processing method embodiments can be run on the electronic device. For details of the specific implementation process, see the image processing method embodiments, which will not be repeated here.
  • ordinary testers in the field can understand that all or part of the process of implementing the image processing method of the embodiments of the present application can be completed by controlling the relevant hardware through a computer program.
  • the computer program may be stored in a computer-readable storage medium, for example in the memory of an electronic device, and executed by at least one processor in the electronic device.
  • the execution process may include the flow of the embodiments of the image processing method.
  • the storage medium can be a magnetic disk, an optical disc, a read-only memory, a random access memory, and the like.
  • as for the image processing device of the embodiments of the present application, its functional modules may be integrated into one processing chip, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. If an integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

Abstract

The embodiments of the present application disclose an image processing method, apparatus, storage medium, and electronic device. First, multiple scene images with different exposure parameters are acquired. After the first two scene images are synthesized to obtain a synthesized image, for the areas whose synthesis effect does not meet the requirement, the next unsynthesized image in the image sequence is selected in order for synthesis, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.

Description

Image processing method and apparatus, storage medium, and electronic device
This application claims priority to the Chinese patent application No. 201910280090.7, filed with the Chinese Patent Office on April 9, 2019 and entitled "Image processing method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an image processing method, apparatus, storage medium, and electronic device.
Background
Due to the hardware limitations of electronic devices themselves, current electronic devices can only shoot scenes with a relatively small brightness range. If shooting is performed when the difference between the bright and dark parts of a scene is too large, the captured image is liable to lose details in the bright and/or dark areas. For this reason, the related art proposes high dynamic range (also called wide dynamic range) synthesis technology, in which multiple captured images are synthesized into one high dynamic range image.
Summary
The embodiments of the present application provide an image processing method, apparatus, storage medium, and electronic device, which can efficiently realize the synthesis of high dynamic range images.
In a first aspect, an embodiment of the present application provides an image processing method, applied to an electronic device, the image processing method including:
acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters;
extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
In a second aspect, an embodiment of the present application provides an image processing device, applied to an electronic device, the image processing device including:
an image acquisition module, used to acquire an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters;
an image synthesis module, used to extract the first two scene images in the image sequence and synthesize the first two scene images to obtain a synthesized image; and
an area recognition module, used to recognize a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value;
the image synthesis module being further used to extract a next unsynthesized image from the image sequence and synthesize the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon; when the computer program is run on a computer, the computer is caused to execute:
acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters;
extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
In a fourth aspect, an embodiment of the present application provides an electronic device including a processor and a memory, the memory storing a computer program, and the processor being used, by calling the computer program, to execute:
acquiring an image sequence of a shooting scene, the image sequence including multiple scene images with different exposure parameters;
extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the image processing method provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of synthesizing scene images frame by frame in an embodiment of the present application.
FIG. 3 is a schematic diagram of recognizing the target area in the synthesized image in an embodiment of the present application.
FIG. 4 is a schematic diagram of the installation positions of the first camera and the second camera in an embodiment of the present application.
FIG. 5 is a schematic diagram of sorting multiple scene images to obtain an image sequence in an embodiment of the present application.
FIG. 6 is another schematic diagram of shooting multiple scene images to obtain an image sequence in an embodiment of the present application.
FIG. 7 is another schematic flowchart of the image processing method provided by an embodiment of the present application.
FIG. 8 is a schematic structural diagram of the image processing apparatus provided by an embodiment of the present application.
FIG. 9 is a schematic structural diagram of the electronic device provided by an embodiment of the present application.
FIG. 10 is another schematic structural diagram of the electronic device provided by an embodiment of the present application.
具体实施方式
请参照图式,其中相同的组件符号代表相同的组件,本申请的原理是以实施在一适当 的运算环境中来举例说明。以下的说明是基于所例示的本申请具体实施例,其不应被视为限制本申请未在此详述的其它具体实施例。
本申请实施例首先提供一种图像处理方法,该图像处理方法应用于电子设备。其中,该图像处理方法的执行主体可以是本申请实施例提供的图像处理装置,或者集成了该图像处理装置的电子设备,该图像处理装置可以采用硬件或者软件的方式实现,电子设备可以是智能手机、平板电脑、掌上电脑、笔记本电脑、或者台式电脑等配置有处理器而具有处理能力的设备。
本申请提供一种图像处理方法,包括:
获取拍摄场景的图像序列,所述图像序列包括曝光参数不同的多个场景图像;
提取所述图像序列中的前两个场景图像,并合成所述前两个场景图像得到合成图像;
识别出所述合成图像中动态范围值未达到预设动态范围值的目标区域;
从所述图像序列中提取尚未合成的下一个图像,并根据所述目标区域合成所述尚未合成的下一个图像以及所述合成图像,直至合成得到所有区域的动态范围值均达到所述预设动态范围值的高动态范围图像。
在一实施例中,所述识别出所述合成图像中动态范围值未达到预设动态范围值的目标区域,包括:
降采样所述合成图像得到降采样图像;
获取所述降采样图像中各区域的动态范围值,并确定出所述降采样图像中动态范围值未达到所述预设动态范围值的区域;
根据所述降采样图像中动态范围值未达到所述预设动态范围值的区域在所述合成图像中确定出所述目标区域。
在一实施例中,所述合成所述前两个场景图像得到合成图像,包括:
根据所述前两个场景图像中相同位置的像素点数据获取图像合成的权重值;
根据所述权重值合成所述前两个场景图像得到所述合成图像。
在一实施例中,所述获取拍摄场景的图像序列,包括:
若接收到图像拍摄请求,则对所述拍摄场景进行逆光环境识别;
若识别到所述拍摄场景处于逆光环境,则获取所述拍摄场景的所述图像序列。
在一实施例中,所述电子设备包括第一摄像头和第二摄像头,所述获取所述拍摄场景的所述图像序列,包括:
分别通过所述第一摄像头和所述第二摄像头按照不同曝光参数对所述拍摄场景进行拍摄,得到所述拍摄场景的多个场景图像;
排序所述所述拍摄场景的多个场景图像得到所述图像序列。
在一实施例中,所述获取所述拍摄场景的所述图像序列,包括:
从预设的图像缓存队列中获取所述拍摄场景的所述图像序列。
在一实施例中,所述对所述拍摄场景进行逆光环境识别,包括:
获取所述拍摄场景的环境参数;
将所述环境参数输入预训练的支持向量机分类器进行分类,得到所述拍摄场景是否处于逆光环境的识别结果。
在一实施例中,所述图像处理方法还包括:
在合成得到所述高动态范围图像后,对所述高动态范围图像进行质量优化处理。
在一实施例中,所述对所述高动态范围图像进行质量优化处理,包括:
对所述高动态范围图像进行降噪处理。
请参照图1,图1为本申请实施例提供的图像处理方法的流程示意图。该图像处理方法应用于本申请实施例提供的电子设备,如图1所示,本申请实施例提供的图像处理方法的流程可以如下:
在101中,获取拍摄场景的图像序列,图像序列包括曝光参数不同的多个场景图像。
其中,电子设备在根据用户操作启动拍摄类应用程序(比如电子设备的系统应用“相机”)后,其摄像头所对准的场景即为拍摄场景。比如,用户通过手指点击电子设备上“相机”应用的图标启动“相机应用”后,若用户使用电子设备的摄像头对准一包括XX物体的场景,则该包括XX物体的场景即为拍摄场景。根据以上描述,本领域技术人员应当理解的是,拍摄场景并非特指某一特定场景,而是跟随摄像头的指向所实时对准的场景。
本申请实施例中,电子设备获取到对应拍摄场景的多个曝光参数不同的场景图像。其中,曝光参数包括曝光时长和曝光值(即俗称的EV值),比如,电子设备可以获取到长曝光时长和短曝光时长依次交叠的多个场景图像,以构成拍摄场景的图像序列;又比如,电子设备还可以获取到过曝光值和欠曝光值依次交叠的多个场景图像,以构成拍摄场景的图像序列;又比如,电子设备还可以以包围曝光的方式获取到拍摄场景的多个场景图像,以构成拍摄场景的图像序列。
在102中,提取图像序列中的前两个场景图像,并合成前两个场景图像得到合成图像。
本申请实施例中,电子设备在获取到拍摄场景的图像序列之后,电子设备首先提取出图像序列中的前两个场景图像,假设图像序列中共包括拍摄场景的3个场景图像,依序为场景图像A、场景图像B以及场景图像C,则电子设备从中提取出场景图像A和场景图像B以待合成。
在提取到图像序列中的前两个场景图像之后,电子设备进一步合成该前两个场景图像得到合成图像。
比如,假设前两个场景图像为短曝光时长的场景图像A和长曝光时长的场景图像B,由于场景图像A的曝光时长较场景图像B的曝光时长短,则场景图像A相较于场景图像B更多的保留拍摄场景中较亮区域的特征,而场景图像B则相较于场景图像A更多的保留拍 摄场景中较暗区域的特征,由此,可以利用长曝光时长的场景图像A保留的拍摄场景中较暗区域的特征以及短曝光时长的场景图像B保留的拍摄场景中较亮区域的特征合成得到合成图像,该合成图像相较于原始的场景图像A和场景图像B具有更高的动态范围。
又比如,假设前两个场景图像为欠曝光的场景图像A和过曝光的场景图像B,由于场景图像A欠曝,则场景图像A相较于场景图像B更多的保留拍摄场景中较亮区域的特征,由于场景图像B过曝,则场景图像B则相较于场景图像A更多的保留拍摄场景中较暗区域的特征,由此,可以利用过曝光的场景图像A保留的拍摄场景中较暗区域的特征以及欠曝光的场景图像B保留的拍摄场景中较亮区域的特征合成得到合成图像,该合成图像相较于原始的场景图像A和场景图像B具有更高的动态范围。
在103中,识别出合成图像中动态范围值未达到预设动态范围值的目标区域。
本申请实施例中,电子设备在合成图像序列的前两个图像得到合成图像之后,进一步识别出合成图像中动态范围值未达到预设动态范围值的目标区域。
比如,电子设备可以将合成图像换分为多个子区域,对于合成图像的任一子区域,根据其图像亮度值对应的直方图方差来确定其动态范围值,或者根据其中亮度最高值和亮度最低值来确定其动态范围值;在确定出各子区域的动态范围值之后,确定出其中动态范围值未达到预设动态范围值的子区域,作为合成图像的目标区域。
应当说明的是,本申请实施例中对于预设动态范围值的取值不做具体限制,可由本领域普通技术人员根据实际需要取合适值。
在104中,从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。
本申请实施例中,电子设备在识别出合成图像中动态范围值未达到预设动态范围值的目标区域之后,进一步从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。请参照图2,通俗的说,即在完成图像序列中前两个合成图像的合成之后,其中某些区域的合成效果已经达到要求,不需要再进行合成,而对于合成效果未达到要求的区域(即动态范围值未达到预设动态范围值的目标区域),依序选取图像序列中尚未合成的下一个图像针对合成效果未达到要求的区域进行合成,得到新的合成图像,再确定出新的合成图像中合成效果未达到要求的区域继续合成,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像(如图2所示,随着合成次数的增加,合成效果未达到要求的区域(即目标区域)逐渐减小)。
由上可知,本申请实施例中,电子设备可以首先获取到拍摄场景的图像序列,其中包括曝光参数不同的多个场景图像,在合成其中前两个场景图像得到合成图像之后,该合成图像中某些区域的合成效果已经达到要求,不需要再进行合成,而对于合成效果未达到要 求的区域(即动态范围值未达到预设动态范围值的目标区域),依序选取图像序列中尚未合成的下一个图像进行合成,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。由此,并不需要对所有场景图像的全部区域都进行合成,能够提高合成得到高动态范围图像的效率。
在一实施例中,“识别出合成图像中动态范围值未达到预设动态范围值的目标区域”,包括:
(1)降采样合成图像得到降采样图像;
(2)获取降采样图像中各区域的动态范围值,并确定出降采样图像中动态范围值未达到预设动态范围值的区域;
(3)根据降采样图像中动态范围值未达到预设动态范围值的区域在合成图像中确定出目标区域。
应当说明的是,在本申请实施例中,考虑到合成图像的分辨率较高,若直接对其进行识别,可能会花费较长的识别时间。因此,请参照图3,为了能够更高效的识别出合成图像中动态范围值未达到预设动态范围值的目标区域,电子设备首先降采样合成图像得到降采样图像,该降采样图像的图像内容与合成图像的图像内容相同,但是分辨率更低。然后,电子设备将降采样图像划分为多个区域,对于降采样图像的任一区域,电子设备根据其图像亮度值对应的直方图方差来确定其动态范围值,或者根据其中亮度最高值和亮度最低值来确定其动态范围值。在确定出降采样图像中各区域的动态范围值之后,电子设备即可将降采样图像各区域的动态范围值与预设动态范围值进行比较,从而确定出降采样图像中动态范围值未达到预设动态范围值的区域。由于降采样图像与合成图像的图像内容相同,但分辨率不同,因此,电子设备进一步根据降采样图像和合成图像的分辨率将降采样图像中动态范围值未达到预设动态范围值的区域映射到合成图像中,得到合成图像中动态范围值未达到预设动态范围值的目标区域。
在一实施例中,“合成前两个场景图像得到合成图像”,包括:
(1)根据前两个场景图像中相同位置的像素点数据获取图像合成的权重值;
(2)根据权重值合成前两个场景图像得到合成图像。
本申请实施例中,考虑到前两个场景图像的大小相同,但是曝光参数不同,比如前两个场景图像为场景图像A和场景图像B,其中场景图像A为长曝光图像、场景图像B为短曝光图像,或者场景图像A为过曝光图像、场景图像B为欠曝光图像。由此,前两个场景图像中同一位置的像素点数据(比如亮度值)能够反映拍摄场景在不同曝光参数下的差异,根据该差异即可分析得到对前两个场景图像进行图像合成的权重值。
在确定出对前两个场景图像进行图像合成的权重值之后,即可根据该权重值合成前两个场景图像得到合成图像,且该合成图像相较于前两个场景图像具有更高的动态范围。
在一实施例中,“获取拍摄场景的图像序列”,包括:
(1)若接收到图像拍摄请求,则对拍摄场景进行逆光环境识别;
(2)若识别到拍摄场景处于逆光环境,则获取拍摄场景的图像序列。
本申请实施例中,考虑到如逆光环境等明暗相差太大的拍摄场景进行拍摄时,拍摄出来的图像更容易容易丢失明处和/或暗处的细节。因此,电子设备可以在接收到图像拍摄请求时,对拍摄场景进行逆光环境识别,从而在识别到拍摄场景处于逆光环境时,获取到拍摄场景的图像序列,以根据该图像序列来合成得到拍摄场景的高动态范围图像。
应当说明的是,对拍摄场景进行逆光环境识别可以通过多种方式实现,作为一种可选的实施方式,可以获取拍摄场景的环境参数,根据获取到的环境参数对拍摄场景进行逆光环境识别。
其中,由于电子设备与拍摄场景处于同一环境下,可以获取电子设备的环境参数,将电子设备的环境参数作为拍摄场景的环境参数。其中,环境参数包括但不限于时间信息,电子设备所处位置的时区信息、位置信息、天气信息,以及电子设备的方位信息等。在获取到拍摄场景的环境参数之后,可以将获取到的这些环境参数输入到预先训练的支持向量机分类器中,由该支持向量机分类器根据输入的环境参数进行分类,以判定拍摄场景是否处于逆光环境中。
作为另一种可选的实施方式,可以获取拍摄场景在预设通道的直方图信息,根据获取到的直方图信息对拍摄场景进行逆光环境识别。
其中,预设通道包括R、G、B三个通道,在获取拍摄场景的直方图信息时,可以获取到拍摄场景的预览图像,再获取预览图像在R、G、B三个通道的直方图信息,将获取到的R、G、B三个通道的直方图信息作为拍摄场景在预设通道的直方图信息。之后,对拍摄场景的直方图信息进行统计,得到统计结果。其中,具体统计不同亮度下的像素数量。在得到统计结果之后,判断统计结果是否满足预设条件,是则判定拍摄场景处于逆光环境,否则判定拍摄场景不处于逆光环境。比如,前述预设条件可以设置为:第一亮度区间和第二亮度区间的像素数量均达到预设数量阈值,且最低亮度小于第一预设亮度阈值和/或最高亮度大于第二预设亮度阈值,其中,预设数量阈值、第一预设亮度阈值和第二预设亮度阈值为经验参数,可由本领域普通技术人员根据实际需要取合适值。
在一实施例中,“获取拍摄场景的图像序列”,包括:
(1)分别通过第一摄像头和第二摄像头按照不同曝光参数对拍摄场景进行拍摄,得到拍摄场景的多个场景图像;
(2)排序拍摄场景的多个场景图像得到图像序列。
请参照图4,在本申请实施例中,电子设备的同一侧设置有第一摄像头和第二摄像头。
电子设备在获取拍摄场景的图像序列时,分别通过第一摄像头和第二摄像头按照不同曝光参数对拍摄场景进行拍摄,得到拍摄场景的多个场景图像。比如,电子设备通过第一摄像头按照短曝光时长对拍摄场景进行曝光,同时通过第二摄像头按照长曝光时长对拍摄 场景进行曝光,从而利用“一次曝光操作”获取到拍摄场景的两个场景图像,分别为长曝光图像和短曝光图像;又比如,电子设备通过第一摄像头对拍摄场景进行过曝光,同时通过第二摄像头对拍摄场景进行欠曝光,从而利用“一次曝光操作”获取到拍摄场景的两个场景图像,分别为过曝光图像和欠曝光图像。由此,可以提升场景图像的获取效率。
在拍摄得到拍摄场景的多个场景图像后,电子设备排序该多个场景图像得到拍摄场景的图像序列。比如,第一摄像头和第二摄像头分别按照长曝光时长和短曝光时长进行拍摄,则电子设备可以按照长曝光时长和短曝光时长交叠的方式对拍摄得到的多个场景图像进行排序,如图5所示;又比如,第一摄像头和第二摄像头分别对拍摄场景进行过曝光和欠曝光,则电子设备可以按照过曝光和欠曝光交叠的方式对拍摄得到的多个场景图像进行排序,如图6所示。
在一实施例中,“获取拍摄场景的图像序列”,包括:
从预设的图像缓存队列中获取拍摄场景的图像序列。
应当说明的是,电子设备中还预先设置有图像缓存队列,该图像缓存队列可以为定长队列,也可以为变长队列,比如,该图像缓存队列为定长队列,能够缓存8个图像。电子设备在使能摄像头后,会将摄像头实时采集到的拍摄场景(摄像头按照不同曝光参数交替)的场景图像缓存到图像缓存队列中。由此,电子设备在获取拍摄场景的图像序列时,即可从预设的图像缓存队列中获取拍摄场景的图像序列。
在一实施例中,本申请提供的图像处理方法还包括:
在合成得到高动态范围图像后,对高动态范围图像进行质量优化处理。
应当说明的是,本申请实施例中所进行的质量优化处理包括但不限于锐化和降噪等,具体可由本领域普通技术人员根据实际需要选取合适的质量优化处理方式。通过对合成得到的高动态范围图像进行质量优化处理,以进一步提高其图像质量。
请结合参照图7,图7为本申请实施例提供的图像处理方法的另一种流程示意图,该图像处理方法的流程可以包括:
在201中,电子设备获取拍摄场景的图像序列,图像序列包括曝光参数不同的多个场景图像。
其中,电子设备在根据用户操作启动拍摄类应用程序(比如电子设备的系统应用“相机”)后,其摄像头所对准的场景即为拍摄场景。比如,用户通过手指点击电子设备上“相机”应用的图标启动“相机应用”后,若用户使用电子设备的摄像头对准一包括XX物体的场景,则该包括XX物体的场景即为拍摄场景。根据以上描述,本领域技术人员应当理解的是,拍摄场景并非特指某一特定场景,而是跟随摄像头的指向所实时对准的场景。
本申请实施例中,电子设备获取到对应拍摄场景的多个曝光参数不同的场景图像。其中,曝光参数包括曝光时长和曝光值(即俗称的EV值),比如,电子设备可以获取到长 曝光时长和短曝光时长依次交叠的多个场景图像,以构成拍摄场景的图像序列;又比如,电子设备还可以获取到过曝光值和欠曝光值依次交叠的多个场景图像,以构成拍摄场景的图像序列;又比如,电子设备还可以以包围曝光的方式获取到拍摄场景的多个场景图像,以构成拍摄场景的图像序列。
在202中,电子设备提取图像序列中的前两个场景图像,并合成前两个场景图像得到合成图像。
本申请实施例中,电子设备在获取到拍摄场景的图像序列之后,电子设备首先提取出图像序列中的前两个场景图像,假设图像序列中共包括拍摄场景的3个场景图像,依序为场景图像A、场景图像B以及场景图像C,则电子设备从中提取出场景图像A和场景图像B以待合成。
在提取到图像序列中的前两个场景图像之后,电子设备进一步合成该前两个场景图像得到合成图像。
比如,假设前两个场景图像为短曝光时长的场景图像A和长曝光时长的场景图像B,由于场景图像A的曝光时长较场景图像B的曝光时长短,则场景图像A相较于场景图像B更多的保留拍摄场景中较亮区域的特征,而场景图像B则相较于场景图像A更多的保留拍摄场景中较暗区域的特征,由此,可以利用长曝光时长的场景图像A保留的拍摄场景中较暗区域的特征以及短曝光时长的场景图像B保留的拍摄场景中较亮区域的特征合成得到合成图像,该合成图像相较于原始的场景图像A和场景图像B具有更高的动态范围。
又比如,假设前两个场景图像为欠曝光的场景图像A和过曝光的场景图像B,由于场景图像A欠曝,则场景图像A相较于场景图像B更多的保留拍摄场景中较亮区域的特征,由于场景图像B过曝,则场景图像B则相较于场景图像A更多的保留拍摄场景中较暗区域的特征,由此,可以利用过曝光的场景图像A保留的拍摄场景中较暗区域的特征以及欠曝光的场景图像B保留的拍摄场景中较亮区域的特征合成得到合成图像,该合成图像相较于原始的场景图像A和场景图像B具有更高的动态范围。
在203中,电子设备降采样合成图像得到降采样图像。
在204中,电子设备获取降采样图像中各区域的动态范围值,并确定出降采样图像中动态范围值未达到预设动态范围值的区域。
在205中,电子设备根据降采样图像中动态范围值未达到预设动态范围值的区域在合成图像中确定出目标区域。
本申请实施例中,电子设备在合成图像序列的前两个图像得到合成图像之后,进一步识别出合成图像中动态范围值未达到预设动态范围值的目标区域。
考虑到合成图像的分辨率较高,若直接对其进行识别,可能会花费较长的识别时间。因此,请参照图3,为了能够更高效的识别出合成图像中动态范围值未达到预设动态范围值的目标区域,电子设备首先降采样合成图像得到降采样图像,该降采样图像的图像内容 与合成图像的图像内容相同,但是分辨率更低。然后,电子设备将降采样图像划分为多个区域,对于降采样图像的任一区域,电子设备根据其图像亮度值对应的直方图方差来确定其动态范围值,或者根据其中亮度最高值和亮度最低值来确定其动态范围值。在确定出降采样图像中各区域的动态范围值之后,电子设备即可将降采样图像各区域的动态范围值与预设动态范围值进行比较,从而确定出降采样图像中动态范围值未达到预设动态范围值的区域。由于降采样图像与合成图像的图像内容相同,但分辨率不同,因此,电子设备进一步根据降采样图像和合成图像的分辨率将降采样图像中动态范围值未达到预设动态范围值的区域映射到合成图像中,得到合成图像中动态范围值未达到预设动态范围值的目标区域。
在206中,电子设备从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。
本申请实施例中,电子设备在识别出合成图像中动态范围值未达到预设动态范围值的目标区域之后,进一步从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。请参照图2,通俗的说,即在完成图像序列中前两个合成图像的合成之后,其中某些区域的合成效果已经达到要求,不需要再进行合成,而对于合成效果未达到要求的区域(即动态范围值未达到预设动态范围值的目标区域),依序选取图像序列中尚未合成的下一个图像针对合成效果未达到要求的区域进行合成,得到新的合成图像,再确定出新的合成图像中合成效果未达到要求的区域继续合成,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像(如图2所示,随着合成次数的增加,合成效果未达到要求的区域(即目标区域)逐渐减小)。
本申请实施例还提供一种图像处理装置。请参照图8,图8为本申请实施例提供的图像处理装置的结构示意图。其中该图像处理装置应用于电子设备,该电子设备包括图像传感器,该图像传感器具有第一工作模式和第二工作模式,该图像处理装置包括图像获取模块501、图像合成模块502、区域识别模块503以及图像合成模块504,如下:
图像获取模块501,用于获取拍摄场景的图像序列,图像序列包括曝光参数不同的多个场景图像;
图像合成模块502,用于提取图像序列中的前两个场景图像,并合成前两个场景图像得到合成图像;
区域识别模块503,用于识别出合成图像中动态范围值未达到预设动态范围值的目标区域;
图像合成模块504,用于从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预 设动态范围值的高动态范围图像。
在一实施例中,在识别出合成图像中动态范围值未达到预设动态范围值的目标区域时,区域识别模块503可以用于:
降采样合成图像得到降采样图像;
获取降采样图像中各区域的动态范围值,并确定出降采样图像中动态范围值未达到预设动态范围值的区域;
根据降采样图像中动态范围值未达到预设动态范围值的区域在合成图像中确定出目标区域。
在一实施例中,在合成前两个场景图像得到合成图像时,图像合成模块502可以用于:
根据前两个场景图像中相同位置的像素点数据获取图像合成的权重值;
根据权重值合成前两个场景图像得到合成图像。
在一实施例中,在获取拍摄场景的图像序列时,图像获取模块501可以用于:
若接收到图像拍摄请求,则对拍摄场景进行逆光环境识别;
若识别到拍摄场景处于逆光环境,则获取拍摄场景的图像序列。
在一实施例中,电子设备包括第一摄像头和第二摄像头,在获取拍摄场景的图像序列时,图像合成模块502可以用于:
分别通过第一摄像头和第二摄像头按照不同曝光参数对拍摄场景进行拍摄,得到拍摄场景的多个场景图像;
排序拍摄场景的多个场景图像得到图像序列。
在一实施例中,在获取拍摄场景的图像序列时,图像合成模块502可以用于:
从预设的图像缓存队列中获取拍摄场景的图像序列。
在一实施例中,在对拍摄场景进行逆光环境识别时,图像获取模块501可以用于:
获取拍摄场景的环境参数;
将环境参数输入预训练的支持向量机分类器进行分类,得到拍摄场景是否处于逆光环境的识别结果。
在一实施例中,图像处理装置还包括图像优化模块,用于:
在合成得到高动态范围图像后,对高动态范围图像进行质量优化处理。
在一实施例中,在对高动态范围图像进行质量优化处理时,图像优化模块用于:
对高动态范围图像进行降噪处理。
应当说明的是,本申请实施例提供的图像处理装置与上文实施例中的图像处理方法属于同一构思,在图像处理装置上可以运行图像处理方法实施例中提供的任一方法,其具体实现过程详见图像处理方法实施例,此处不再赘述。
本申请实施例提供一种计算机可读的存储介质,其上存储有计算机程序,当其存储的计算机程序在计算机上执行时,使得计算机执行如本申请实施例提供的图像处理方法中的 步骤。其中,存储介质可以是磁碟、光盘、只读存储器(Read Only Memory,ROM,)或者随机存取器(Random Access Memory,RAM)等。
本申请实施例还提供一种电子设备,请参照图9,电子设备包括处理器701和存储器702。其中,处理器701与存储器702电性连接。
处理器701是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或加载存储在存储器702内的计算机程序,以及调用存储在存储器702内的数据,执行电子设备的各种功能并处理数据。
存储器702可用于存储软件程序以及模块,处理器701通过运行存储在存储器702的计算机程序以及模块,从而执行各种功能应用以及数据处理。存储器702可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的计算机程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器702可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器702还可以包括存储器控制器,以提供处理器701对存储器702的访问。
在本申请实施例中,电子设备中的处理器701会按照如下的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器702中,并由处理器701运行存储在存储器702中的计算机程序,从而实现各种功能,如下:
获取拍摄场景的图像序列,图像序列包括曝光参数不同的多个场景图像;
提取图像序列中的前两个场景图像,并合成前两个场景图像得到合成图像;
识别出合成图像中动态范围值未达到预设动态范围值的目标区域;
从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。
请参照图10,图10为本申请实施例提供的电子设备的另一结构示意图,与图9所示电子设备的区别在于,电子设备还包括输入单元703和输出单元704等组件。
其中,输入单元703可用于接收输入的数字、字符信息或用户特征信息(比如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入等。
输出单元704可用于显示由用户输入的信息或提供给用户的信息,如屏幕。
在本申请实施例中,电子设备中的处理器701会按照如下的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器702中,并由处理器701运行存储在存储器702中的计算机程序,从而实现各种功能,如下:
获取拍摄场景的图像序列,图像序列包括曝光参数不同的多个场景图像;
提取图像序列中的前两个场景图像,并合成前两个场景图像得到合成图像;
识别出合成图像中动态范围值未达到预设动态范围值的目标区域;
从图像序列中提取尚未合成的下一个图像,并根据目标区域合成尚未合成的下一个图像以及合成图像,直至合成得到所有区域的动态范围值均达到预设动态范围值的高动态范围图像。
在一实施例中,在识别出合成图像中动态范围值未达到预设动态范围值的目标区域时,处理器701可以执行:
降采样合成图像得到降采样图像;
获取降采样图像中各区域的动态范围值,并确定出降采样图像中动态范围值未达到预设动态范围值的区域;
根据降采样图像中动态范围值未达到预设动态范围值的区域在合成图像中确定出目标区域。
在一实施例中,在合成前两个场景图像得到合成图像时,处理器701可以执行:
根据前两个场景图像中相同位置的像素点数据获取图像合成的权重值;
根据权重值合成前两个场景图像得到合成图像。
在一实施例中,在获取拍摄场景的图像序列时,处理器701还可以执行:
若接收到图像拍摄请求,则对拍摄场景进行逆光环境识别;
若识别到拍摄场景处于逆光环境,则获取拍摄场景的图像序列。
在一实施例中,在获取拍摄场景的图像序列时,处理器701可以执行:
分别通过第一摄像头和第二摄像头按照不同曝光参数对拍摄场景进行拍摄,得到拍摄场景的多个场景图像;
排序拍摄场景的多个场景图像得到图像序列。
在一实施例中,在获取拍摄场景的图像序列时,处理器701可以执行:
从预设的图像缓存队列中获取拍摄场景的图像序列。
在一实施例中,在对拍摄场景进行逆光环境识别时,处理器701可以执行:
获取拍摄场景的环境参数;
将环境参数输入预训练的支持向量机分类器进行分类,得到拍摄场景是否处于逆光环境的识别结果。
在一实施例中,处理器701还可以执行:
在合成得到高动态范围图像后,对高动态范围图像进行质量优化处理。
在一实施例中,在对高动态范围图像进行质量优化处理时,处理器701可以执行:
对高动态范围图像进行降噪处理。
应当说明的是,本申请实施例提供的电子设备与上文实施例中的图像处理方法属于同 一构思,在电子设备上可以运行图像处理方法实施例中提供的任一方法,其具体实现过程详见特征提取方法实施例,此处不再赘述。
需要说明的是,对本申请实施例的图像处理方法而言,本领域普通测试人员可以理解实现本申请实施例的图像处理方法的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,所述计算机程序可存储于一计算机可读取存储介质中,如存储在电子设备的存储器中,并被该电子设备内的至少一个处理器执行,在执行过程中可包括如图像处理方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储器、随机存取记忆体等。
对本申请实施例的图像处理装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,所述存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种图像处理方法、装置、存储介质及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. An image processing method, applied to an electronic device, wherein the image processing method comprises:
    acquiring an image sequence of a shooting scene, the image sequence comprising multiple scene images with different exposure parameters;
    extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
    recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
    extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  2. The image processing method according to claim 1, wherein recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value comprises:
    down-sampling the synthesized image to obtain a down-sampled image;
    obtaining the dynamic range value of each area in the down-sampled image, and determining areas in the down-sampled image whose dynamic range values do not reach the preset dynamic range value; and
    determining the target area in the synthesized image according to the areas in the down-sampled image whose dynamic range values do not reach the preset dynamic range value.
  3. The image processing method according to claim 1, wherein synthesizing the first two scene images to obtain a synthesized image comprises:
    obtaining a weight value for image synthesis according to pixel data at the same positions in the first two scene images; and
    synthesizing the first two scene images according to the weight value to obtain the synthesized image.
  4. The image processing method according to claim 1, wherein acquiring an image sequence of a shooting scene comprises:
    performing backlight environment recognition on the shooting scene if an image shooting request is received; and
    acquiring the image sequence of the shooting scene if the shooting scene is recognized as being in a backlight environment.
  5. The image processing method according to claim 4, wherein the electronic device comprises a first camera and a second camera, and acquiring the image sequence of the shooting scene comprises:
    shooting the shooting scene through the first camera and the second camera respectively with different exposure parameters to obtain multiple scene images of the shooting scene; and
    sorting the multiple scene images of the shooting scene to obtain the image sequence.
  6. The image processing method according to claim 4, wherein acquiring the image sequence of the shooting scene comprises:
    acquiring the image sequence of the shooting scene from a preset image cache queue.
  7. The image processing method according to claim 4, wherein performing backlight environment recognition on the shooting scene comprises:
    acquiring environment parameters of the shooting scene; and
    inputting the environment parameters into a pre-trained support vector machine classifier for classification to obtain a recognition result of whether the shooting scene is in a backlight environment.
  8. The image processing method according to claim 1, wherein the image processing method further comprises:
    performing quality optimization processing on the high dynamic range image after the high dynamic range image is obtained by synthesis.
  9. The image processing method according to claim 8, wherein performing quality optimization processing on the high dynamic range image comprises:
    performing noise reduction processing on the high dynamic range image.
  10. An image processing device, applied to an electronic device, wherein the image processing device comprises:
    an image acquisition module, used to acquire an image sequence of a shooting scene, the image sequence comprising multiple scene images with different exposure parameters;
    an image synthesis module, used to extract the first two scene images in the image sequence and synthesize the first two scene images to obtain a synthesized image; and
    an area recognition module, used to recognize a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value;
    the image synthesis module being further used to extract a next unsynthesized image from the image sequence and synthesize the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  11. A storage medium having a computer program stored thereon, wherein, when the computer program is run on a computer, the computer is caused to execute:
    acquiring an image sequence of a shooting scene, the image sequence comprising multiple scene images with different exposure parameters;
    extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
    recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
    extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  12. An electronic device, comprising a processor and a memory, the memory storing a computer program, wherein the processor is used, by calling the computer program, to execute:
    acquiring an image sequence of a shooting scene, the image sequence comprising multiple scene images with different exposure parameters;
    extracting the first two scene images in the image sequence, and synthesizing the first two scene images to obtain a synthesized image;
    recognizing a target area in the synthesized image whose dynamic range value does not reach a preset dynamic range value; and
    extracting a next unsynthesized image from the image sequence, and synthesizing the next unsynthesized image and the synthesized image according to the target area, until a high dynamic range image in which the dynamic range values of all areas reach the preset dynamic range value is obtained.
  13. The electronic device according to claim 12, wherein, when recognizing the target area in the synthesized image whose dynamic range value does not reach the preset dynamic range value, the processor is used to execute:
    down-sampling the synthesized image to obtain a down-sampled image;
    obtaining the dynamic range value of each area in the down-sampled image, and determining areas in the down-sampled image whose dynamic range values do not reach the preset dynamic range value; and
    determining the target area in the synthesized image according to the areas in the down-sampled image whose dynamic range values do not reach the preset dynamic range value.
  14. The electronic device according to claim 12, wherein, when synthesizing the first two scene images to obtain a synthesized image, the processor is used to execute:
    obtaining a weight value for image synthesis according to pixel data at the same positions in the first two scene images; and
    synthesizing the first two scene images according to the weight value to obtain the synthesized image.
  15. The electronic device according to claim 12, wherein, when acquiring an image sequence of a shooting scene, the processor is used to execute:
    performing backlight environment recognition on the shooting scene if an image shooting request is received; and
    acquiring the image sequence of the shooting scene if the shooting scene is recognized as being in a backlight environment.
  16. The electronic device according to claim 15, wherein the electronic device comprises a first camera and a second camera, and, when acquiring the image sequence of the shooting scene, the processor is used to execute:
    shooting the shooting scene through the first camera and the second camera respectively with different exposure parameters to obtain multiple scene images of the shooting scene; and
    sorting the multiple scene images of the shooting scene to obtain the image sequence.
  17. The electronic device according to claim 15, wherein, when acquiring the image sequence of the shooting scene, the processor is used to execute:
    acquiring the image sequence of the shooting scene from a preset image cache queue.
  18. The electronic device according to claim 15, wherein, when performing backlight environment recognition on the shooting scene, the processor is used to execute:
    acquiring environment parameters of the shooting scene; and
    inputting the environment parameters into a pre-trained support vector machine classifier for classification to obtain a recognition result of whether the shooting scene is in a backlight environment.
  19. The electronic device according to claim 12, wherein the processor further executes:
    performing quality optimization processing on the high dynamic range image after the high dynamic range image is obtained by synthesis.
  20. The electronic device according to claim 19, wherein, when performing quality optimization processing on the high dynamic range image, the processor is used to execute:
    performing noise reduction processing on the high dynamic range image.
PCT/CN2020/083572 2019-04-09 2020-04-07 图像处理方法、装置、存储介质及电子设备 WO2020207387A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910280090.7 2019-04-09
CN201910280090.7A CN110035237B (zh) 2019-04-09 2019-04-09 图像处理方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2020207387A1 true WO2020207387A1 (zh) 2020-10-15

Family

ID=67237668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083572 WO2020207387A1 (zh) 2019-04-09 2020-04-07 图像处理方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN110035237B (zh)
WO (1) WO2020207387A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035237B (zh) * 2019-04-09 2021-08-31 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111083389B (zh) * 2019-12-27 2021-11-16 维沃移动通信有限公司 一种拍摄图像的方法和装置
CN113891012A (zh) * 2021-09-17 2022-01-04 北京极豪科技有限公司 一种图像处理方法、装置、设备以及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323493A (zh) * 2014-06-25 2016-02-10 恒景科技股份有限公司 局部增强装置、多重曝光影像系统以及局部增强方法
CN105453134A (zh) * 2013-08-12 2016-03-30 三星电子株式会社 用于图像的动态范围增强的方法和设备
CN105959591A (zh) * 2016-05-30 2016-09-21 广东欧珀移动通信有限公司 局部hdr的实现方法及系统
CN107566739A (zh) * 2017-10-18 2018-01-09 维沃移动通信有限公司 一种拍照方法及移动终端
US10165194B1 (en) * 2016-12-16 2018-12-25 Amazon Technologies, Inc. Multi-sensor camera system
CN109218613A (zh) * 2018-09-18 2019-01-15 Oppo广东移动通信有限公司 高动态范围图像合成方法、装置、终端设备和存储介质
CN110035237A (zh) * 2019-04-09 2019-07-19 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8947555B2 (en) * 2011-04-18 2015-02-03 Qualcomm Incorporated White balance optimization with high dynamic range images
CN103002225B (zh) * 2011-04-20 2017-04-12 高通科技公司 多曝光高动态范围图像捕捉
CN106108941A (zh) * 2016-06-13 2016-11-16 杭州融超科技有限公司 一种超声图像区域质量增强装置及方法
CN106060418A (zh) * 2016-06-29 2016-10-26 深圳市优象计算技术有限公司 基于imu信息的宽动态图像融合方法
CN108184075B (zh) * 2018-01-17 2019-05-10 百度在线网络技术(北京)有限公司 用于生成图像的方法和装置
ES2946587T3 (es) * 2018-03-27 2023-07-21 Huawei Tech Co Ltd Método fotográfico, aparato fotográfico y terminal móvil


Also Published As

Publication number Publication date
CN110035237B (zh) 2021-08-31
CN110035237A (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
WO2020207387A1 (zh) 图像处理方法、装置、存储介质及电子设备
CN108322646B (zh) 图像处理方法、装置、存储介质及电子设备
JP7266672B2 (ja) 画像処理方法および画像処理装置、ならびにデバイス
CN109996009B (zh) 图像处理方法、装置、存储介质及电子设备
US8175385B2 (en) Foreground/background segmentation in digital images with differential exposure calculations
CN110248098B (zh) 图像处理方法、装置、存储介质及电子设备
US9883119B1 (en) Method and system for hardware-based motion sensitive HDR image processing
JP2022505115A (ja) 画像処理の方法および装置並びにデバイス
US20060285754A1 (en) Indoor/Outdoor Classification in Digital Images
US20130208139A1 (en) Exposure Value Adjustment Apparatus, Method, and Non-Transitory Tangible Machine-Readable Medium Thereof
CN110620873B (zh) 设备成像方法、装置、存储介质及电子设备
US9058655B2 (en) Region of interest based image registration
CN110971841A (zh) 图像处理方法、装置、存储介质及电子设备
CN110493515B (zh) 高动态范围拍摄模式开启方法、装置、存储介质及电子设备
WO2023137956A1 (zh) 图像处理方法、装置、电子设备及存储介质
CN107147851B (zh) 照片处理方法、装置、计算机可读存储介质及电子设备
US10602075B2 (en) Automatically determining a set of exposure values for a high dynamic range image capture device
Castro et al. Towards mobile hdr video
US9846924B2 (en) Systems and methods for detection and removal of shadows in an image
CN117061861A (zh) 一种拍摄方法、芯片系统和电子设备
US20220108427A1 (en) Method and an electronic device for detecting and removing artifacts/degradations in media
CN115516495A (zh) 基于选择区域优化高动态范围(hdr)图像处理
Wang Active entropy camera
Huang et al. High dynamic range imaging technology for micro camera array
Son A high dynamic range imaging algorithm: implementation and evaluation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20788300

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20788300

Country of ref document: EP

Kind code of ref document: A1