CN115719316A - Image processing method and device, electronic equipment and computer readable storage medium
- Publication number
- CN115719316A (application CN202211485144.1A)
- Authority
- CN
- China
- Prior art keywords
- exposure
- image
- information
- determining
- exposure area
- Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Abstract
The disclosure provides an image processing method and device, electronic equipment and a computer readable storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring a first image of a current scene and determining a plurality of exposure areas corresponding to different depth information in the first image, wherein each exposure area corresponds to a different exposure time and exposure step; for each exposure area, generating a plurality of second images within the exposure time according to the exposure step; determining a third image from the second images corresponding to each exposure area and fusing the third images corresponding to the exposure areas to obtain a fourth image; and performing contrast enhancement processing on each exposure area in the fourth image according to its corresponding exposure information to obtain a target image. The scheme improves the exposure processing effect of the image.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of imaging technology, imaging devices have become increasingly powerful, and users' requirements for image quality have risen accordingly. However, images frequently suffer from overexposure or underexposure for various reasons, which degrades image quality.
At present, exposure processing methods in the related art still produce unsatisfactory exposure effects, which degrades the user's viewing experience to a certain degree.
Disclosure of Invention
An object of the present disclosure is to provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can improve an exposure processing effect of an image at least to some extent.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring a first image of a current scene, and determining a plurality of exposure areas corresponding to different depth information in the first image, wherein each exposure area corresponds to different exposure time and exposure step length; for each exposure area, generating a plurality of second images according to the exposure step length in the exposure time; respectively determining third images from the second images corresponding to the exposure areas, and fusing the third images corresponding to the exposure areas to obtain fourth images; and respectively carrying out contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: the information determining module is used for acquiring a first image of a current scene, determining a plurality of exposure areas corresponding to different depth information in the first image, wherein each exposure area corresponds to different exposure time and exposure step length; the first processing module is used for generating a plurality of second images for each exposure area according to the exposure step length in the exposure time; the second processing module is used for respectively determining third images from the second images corresponding to the exposure areas and fusing the third images corresponding to the exposure areas to obtain a fourth image; and the third processing module is used for respectively carrying out contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image.
According to a third aspect of the present disclosure, there is provided an electronic apparatus comprising: a processor; and a memory storing one or more programs that, when executed by the processor, cause the processor to implement the above-described method.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the above-mentioned method.
According to the image processing scheme provided by the embodiment of the disclosure, first, the depth information of the first image is used to determine the exposure areas corresponding to different depth information, and the exposure parameters of each exposure area, namely the exposure time and the exposure step, are determined respectively; second, a third image is determined from the plurality of second images corresponding to each exposure area, and the third images corresponding to the exposure areas are fused to obtain a fourth image, so that each exposure area of the fourth image is exposed with high quality and the exposure effect is improved; third, local contrast enhancement processing is performed on each exposure area based on its exposure information, restoring the contrast information of each exposure area while preserving image details, so that the target image better matches how human eyes perceive objects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 is a schematic diagram illustrating an exemplary application environment to which an image processing method and apparatus according to an embodiment of the present disclosure may be applied;
FIG. 2 schematically illustrates a comparison of a scene seen by a human eye with a captured image in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of image processing in an exemplary embodiment of the disclosure;
fig. 4 schematically illustrates a schematic diagram of a partitioned exposure area in an exemplary embodiment of the present disclosure;
fig. 5 schematically illustrates a schematic view of a subdivided exposure area in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of one implementation of determining multiple exposure regions in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of one implementation of an exemplary image processing method of the present disclosure to acquire a fourth image;
FIG. 8 schematically illustrates a flow chart of an implementation of an image enhancement process in an exemplary embodiment of the present disclosure;
FIG. 9 schematically illustrates a stage involved in obtaining a target image in an exemplary embodiment of the disclosure;
FIG. 10 schematically illustrates a flow diagram of another image processing in an exemplary embodiment of the disclosure;
fig. 11 schematically shows a composition diagram of an image processing apparatus in an exemplary embodiment of the present disclosure;
fig. 12 shows a schematic diagram of an electronic device to which an embodiment of the disclosure may be applied.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating an exemplary application environment to which an image processing method and apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 1, the terminal device may be a smart device having an image processing function, for example, a smart phone, a computer, a tablet computer, a smart watch, an in-vehicle device, a wearable device, or a monitoring device. The terminal device may also be referred to as a mobile terminal, a mobile device, and the like; the disclosure does not limit the type of the terminal device.
In the embodiment of the disclosure, based on the image processing method in the embodiment of the disclosure, the image processing apparatus of the terminal device may determine a plurality of exposure areas from the depth information of the first image, each exposure area corresponding to a different exposure time and exposure step; for each exposure area, a plurality of second images are generated according to the exposure step within the exposure time; third images are determined from the second images and fused into a fourth image; and finally, contrast enhancement processing is performed on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by a terminal device. It may also be executed by a server, in which case the image processing apparatus may be disposed in the server. In that arrangement, the terminal device sends the first image to the server; the server determines a plurality of exposure areas with corresponding exposure times and exposure steps based on the first image and returns them to the terminal device; the terminal device generates a plurality of second images based on the exposure times and exposure steps and sends them to the server; the server determines third images from the second images corresponding to the exposure areas, fuses them to obtain a fourth image, performs contrast enhancement processing on the exposure areas in the fourth image according to the exposure information, and sends the resulting target image to the terminal device. The server may be a background system providing the image-processing-related services in the embodiments of the present disclosure, and may comprise one electronic device with a computing function, such as a portable computer, a desktop computer, or a smart phone, or a cluster of such devices. In the embodiments of the present disclosure, the image processing method is described as being executed by the terminal device.
In the related art, HDR (High Dynamic Range Imaging) and similar technologies still perform poorly in extreme scenes such as backlight, where it is difficult to capture an ideal image: the background light source may be severely overexposed, or the subject in the near field may be underexposed. For example, a white-detail area of the scene may be captured as glaring, washed-out white, while a black-detail area may be captured as solid black, which greatly degrades image quality. Fig. 2 compares the scene (a) viewed by human eyes with the captured image (b); in the captured image, the bright regions are washed out and the dark regions are crushed.
Another related-art approach collects a plurality of original images, selects properly exposed regions from each original image, and finally fuses the regions corresponding to the plurality of original images to generate the final target image. However, this multi-frame exposure processing method suffers from an overly long exposure time and still cannot eliminate slight overexposure or underexposure; the quality of the resulting target image falls short of requirements, and merely leveling the brightness differences between areas still deviates from the actual shooting scene as perceived by human eyes.
Based on one or more of the problems described above, exemplary embodiments of the present disclosure provide an image processing method. Referring to fig. 3, the image processing method may include the following steps S310 to S340:
in step S310, a first image of a current scene is obtained, and a plurality of exposure areas corresponding to different depth information in the first image are determined, where each exposure area corresponds to a different exposure time and an exposure step.
In an exemplary embodiment of the present disclosure, the current scene is the scene to be photographed. The depth information refers to the detected distance, in the depth direction, between the terminal and the photographed object, i.e., the distance between a sensor (such as an image sensor or a depth sensor) and the photographed object. Different objects have their own corresponding depth information, which reflects how far each object in the image is from the device. The first image may be a frame of raw image randomly acquired from the preview images; in the embodiment of the present disclosure, a frame of raw image may be periodically selected during image preview, and this raw image is used to determine the depth information so that subsequent image processing operations can be performed based on it.
After the first image is obtained, a plurality of exposure areas are determined in the first image according to the depth information. An exposure area is an area that meets a preset exposure condition at a particular depth of field; for example, a distant-view area and a close-view area whose exposure meets the preset exposure condition are respectively selected according to the depth information, yielding a plurality of exposure areas.
Each exposure area corresponds to a different exposure time and exposure step. Fig. 4 is a schematic diagram of divided exposure areas according to an exemplary embodiment of the present disclosure. As shown in fig. 4, each exposure area corresponds to different depth information, and the exposure times of the exposure areas may be the same or different; the embodiment of the disclosure does not particularly limit this. Further, fig. 5 shows a schematic diagram of subdivided exposure areas: as shown in fig. 5, the exposure levels within each exposure area are subdivided according to the exposure step. It should be noted that the exposure step of each exposure area may likewise be the same or different, and may be set according to actual exposure needs.
According to the embodiment of the disclosure, a plurality of exposure areas are determined based on depth information, and each exposure area has the exposure time and the exposure step length, so that the exposure time and the exposure step length of different exposure areas can be limited under the condition of not influencing the quality of a shot image, the shooting time is shortened, and the shooting speed is improved.
In step S320, a plurality of second images are generated for each exposure area in an exposure time according to an exposure step size.
In an exemplary embodiment of the present disclosure, after the exposure areas and their corresponding exposure times and exposure steps are determined, a plurality of second images may be obtained for each exposure area according to the exposure step within the corresponding exposure time; that is, a plurality of raw images are generated for each exposure area. With continued reference to fig. 5, within the exposure time corresponding to each exposure area, each exposure step generates one corresponding raw frame.
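As a minimal sketch of this step (assuming a hypothetical capture_raw_frame camera callback and a simple dictionary structure for the exposure areas, neither of which is specified in the patent), one raw frame is captured per exposure step within each area's exposure-time range:

```python
def capture_second_images(exposure_areas, capture_raw_frame):
    """exposure_areas: list of {'start': float, 'end': float, 'step': float}
    (assumed structure); capture_raw_frame: callback taking an exposure
    time and returning one raw frame (hypothetical camera interface)."""
    second_images = []
    for area in exposure_areas:
        frames = []
        t = area["start"]
        while t <= area["end"] + 1e-12:           # one frame per exposure step
            frames.append(capture_raw_frame(exposure_time=t))
            t += area["step"]
        second_images.append(frames)              # the second images of this area
    return second_images
```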
In step S330, third images are respectively determined from the second images corresponding to the exposure areas, and the third images corresponding to the exposure areas are fused to obtain a fourth image.
In the exemplary embodiment of the disclosure, each exposure area yields a plurality of second images. To fuse the second images generated by the exposure areas corresponding to different depth information, a third image is first determined for each exposure area, and the third images corresponding to the exposure areas are then fused to obtain the fourth image.
Optionally, the third image may be randomly selected from the plurality of second images corresponding to each exposure area; alternatively, the third image may be determined from the second images according to image sharpness, texture information, and the like. The method for determining the third image may be selected according to actual needs and is not particularly limited.
The fourth image is obtained by fusing the third images corresponding to the different exposure areas, so that each area of the fourth image corresponding to different depth information meets the preset exposure condition, i.e., the preset exposure requirement, thereby improving the exposure effect of the fourth image.
In step S340, contrast enhancement processing is performed on each exposure area in the fourth image according to the corresponding exposure information, so as to obtain a target image.
In the exemplary embodiment of the present disclosure, the fourth image is obtained by fusing third images corresponding to different exposure parameters (exposure time and exposure step). To further improve the contrast of each region in the fourth image, contrast enhancement processing is performed on it. The adopted contrast enhancement algorithm may be the ACE (Adaptive Contrast Enhancement) algorithm; of course, other contrast enhancement algorithms that consider local information may also be adopted in the embodiment of the present disclosure, and no special limitation is placed on this.
The contrast enhancement processing may be performed for each exposure area according to the exposure information of that area. For example, the contrast enhancement degree is increased in areas with lower exposure and reduced in areas with higher exposure, so that the whole picture gains a stronger sense of depth while image details are preserved, restoring how human eyes perceive the objects in each exposure area.
According to the image processing method of the exemplary embodiment of the disclosure, the depth information of the first image is used to determine the exposure areas corresponding to different depth information, and the exposure parameters of each exposure area, namely the exposure time and the exposure step, are determined respectively; a third image is determined from the plurality of second images corresponding to each exposure area, and the third images corresponding to the exposure areas are fused to obtain a fourth image, so that each exposure area of the fourth image is exposed with high quality and the exposure effect is improved; in addition, local contrast enhancement processing is performed on each exposure area based on its exposure information, restoring the contrast information of each exposure area while preserving image details, so that the target image better matches how human eyes perceive objects.
In an exemplary embodiment, an implementation is provided that determines a plurality of exposure regions. As shown in fig. 6, determining a plurality of exposure regions corresponding to different depth information in the first image includes steps S610 to S630:
step S610: and determining a plurality of areas with different depths of field which accord with a preset exposure condition in the first image according to the depth information to serve as a plurality of exposure areas.
According to the depth information, a plurality of regions with different depths of field meeting the preset exposure condition are selected from the first image, wherein the regions with different depths of field at least comprise a plurality of close-range regions and far-range regions with different depths of field of the first image, and the close-range regions and the far-range regions meeting the preset exposure condition are respectively selected from the first image. The preset exposure condition includes a relevant condition indicating an exposure effect, for example, the image definition meets a set condition, the image texture richness meets a set condition, and the like, and can be set according to an actual situation, which is not particularly limited.
Step S620: and determining the exposure time of each exposure area by using the depth information of each exposure area based on the corresponding relation between the preset depth information and the exposure time.
After the exposure areas corresponding to the different depth information are determined, the exposure time corresponding to the exposure areas is determined according to the depth information of the exposure areas, wherein the corresponding relation between the preset depth information and the exposure time is determined according to the historical depth information and the exposure time.
Step S630: and determining an exposure step corresponding to each exposure time.
After obtaining the exposure time of each exposure area, the embodiment of the disclosure subdivides the exposure level for each exposure time range, i.e., determines the exposure step corresponding to each exposure time, so as to generate a plurality of second images within the exposure time range based on the exposure step.
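As an illustrative sketch of steps S610 to S630, the following fragment segments a per-pixel depth map into depth-of-field layers and looks up an exposure-time range for each layer. The bin edges, the depth_to_time table, and the function names are assumptions for illustration; the patent only specifies that the correspondence is built from historical depth information and exposure times.

```python
import numpy as np

def determine_exposure_areas(depth_map, depth_bins, depth_to_time):
    """depth_map: HxW array of scene depths; depth_bins: edges separating
    depth-of-field layers; depth_to_time: {layer index: (start, end)} --
    the preset depth-information-to-exposure-time correspondence."""
    labels = np.digitize(depth_map, depth_bins)   # one label per depth layer
    areas = []
    for lbl in np.unique(labels):
        mask = labels == lbl                      # pixels of this candidate area
        # A full implementation would keep only regions meeting the preset
        # exposure condition (step S610); that check is omitted here.
        start, end = depth_to_time[int(lbl)]      # step S620
        areas.append({"mask": mask, "start": start, "end": end})
    return areas
```

The exposure step for each area (step S630) can then be derived by one of the strategies described next.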
In some possible embodiments, the exposure step corresponding to an exposure time may be determined according to a preset relationship between exposure time and exposure step: the shorter the exposure time, the denser the exposure step; conversely, the longer the exposure time, the sparser the exposure step. On this basis, the overall image capturing time can be bounded.
In some possible embodiments, the number of pixel points in the exposure area corresponding to the exposure time may be obtained, and the exposure step corresponding to the exposure time determined according to the ratio of that number to the total pixels of the first image. When the ratio is high, the exposure step is dense; when the ratio is low, the exposure step is sparse. That is, a larger exposure area adopts a denser exposure step so as to retain more image details.
In some possible embodiments, shake information corresponding to the first image may also be obtained, and the exposure step corresponding to the shake information determined based on a preset correspondence between shake information and exposure step. In practical implementation, information related to device shake or image shake may be acquired, such as device shake detected by a sensor or motion detection results for the image content obtained from optical flow information; the present disclosure includes, but is not limited to, these ways of acquiring shake information. For example, if the image content shakes more, a sparser exposure step may be selected to ensure that the obtained second images remain usable. A sketch of these three step-size strategies is given below.
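In the sketch, the division count, the scaling factor, and the shape of the jitter lookup table are illustrative assumptions, not values from the patent.

```python
def step_from_time(start, end, divisions=16):
    # Preset time-to-step relation: a shorter exposure-time range
    # yields a denser (smaller) exposure step.
    return (end - start) / divisions

def step_from_area_ratio(mask, base_step):
    # A larger area relative to the whole first image -> denser step,
    # so larger regions retain more image detail.
    ratio = float(mask.sum()) / mask.size
    return base_step * (1.0 - 0.5 * ratio)

def step_from_jitter(jitter, jitter_to_step):
    # Preset jitter-to-step correspondence: more jitter -> sparser step.
    nearest = min(jitter_to_step, key=lambda k: abs(k - jitter))
    return jitter_to_step[nearest]
```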
In an exemplary embodiment, an implementation of acquiring a fourth image is provided. As shown in fig. 7, determining the third images from the second images corresponding to the exposure areas, and fusing the third images corresponding to the exposure areas to obtain the fourth image may include steps S710 and S720:
step S710: and acquiring image texture information of the second image corresponding to each exposure area, and determining a third image from the second image according to the image texture information.
Each exposure area yields a plurality of second images, from which the third image retaining the most texture detail can be selected.
In some possible embodiments, noise reduction processing may be performed on each second image, and the third image then determined from the second images according to the contrast information of the noise-reduced images, retaining the image with the highest sharpness and the richest detail, so as to ensure that the final fusion result preserves as much image detail as possible. A selection sketch is given below.
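One way to realize this selection, sketched with OpenCV, is to denoise each candidate and score it by Laplacian variance as a stand-in sharpness/texture measure; the patent does not prescribe a particular metric or denoiser.

```python
import cv2

def select_third_image(second_images):
    """second_images: list of single-channel uint8 frames of one exposure area."""
    best_frame, best_score = None, -1.0
    for frame in second_images:
        denoised = cv2.fastNlMeansDenoising(frame)         # noise reduction first
        score = cv2.Laplacian(denoised, cv2.CV_64F).var()  # sharpness/texture proxy
        if score > best_score:
            best_frame, best_score = denoised, score
    return best_frame
```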
Step S720: and fusing the third images corresponding to the exposure areas to obtain a fourth image.
After the third images corresponding to the exposure areas are obtained, the third images are fused based on the exposure areas to obtain the fourth image, as sketched below.
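A minimal fusion sketch, assuming the binary mask of each exposure area from the depth segmentation is available:

```python
import numpy as np

def fuse_third_images(third_images, masks):
    """third_images: list of HxW arrays; masks: matching list of boolean masks."""
    fourth = np.zeros_like(third_images[0], dtype=np.float32)
    for img, mask in zip(third_images, masks):
        fourth[mask] = img[mask]     # each area comes from its own best frame
    return fourth
```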
In an exemplary embodiment, an implementation of image enhancement processing is provided. As shown in fig. 8, performing contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain the target image may include steps S810 to S830:
step S810: and aiming at each exposure area in the fourth image, carrying out contrast enhancement on pixel points in the exposure area to obtain enhanced pixel points.
The fourth image is obtained by fusing images with different exposure parameters (exposure time and exposure step). To further restore the contrast information of the different areas of the image and make the final image accord with the real perception of the current scene by human eyes, contrast enhancement processing is performed on the pixel points in each exposure area of the fourth image to obtain enhanced pixel points.
The fourth image may be contrast-enhanced using the adaptive contrast enhancement (ACE) algorithm. Of course, the embodiment of the present disclosure may also select other enhancement algorithms according to actual needs, and no particular limitation is placed on this.
The contrast enhancement processing of the fourth image using the ACE algorithm will be described as an example.
First, let x(i, j) be a pixel point in the fourth image. Taking x(i, j) as the center of a local window of size (2n+1) × (2n+1), the local mean and local variance of the pixel point are obtained as:

$$m_x(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}x(k,l) \tag{1}$$

$$\sigma_x^2(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}\left[x(k,l)-m_x(i,j)\right]^2 \tag{2}$$

where $m_x(i,j)$ is the local mean of x(i, j) and $\sigma_x^2(i,j)$ is the local variance. The enhanced pixel value f(i, j) corresponding to x(i, j) is:

$$f(i,j)=m_x(i,j)+G(i,j)\left[x(i,j)-m_x(i,j)\right] \tag{3}$$

where $G(i,j)=D/\sigma_x(i,j)$ is the gain coefficient and D is a constant, usually set to the average pixel value of the global image.
In the exemplary embodiment of the present disclosure, contrast enhancement is performed on each pixel point in the fourth image through formula (3) above, so as to obtain the corresponding enhanced pixel points. Of course, the embodiment of the present disclosure may also set the gain coefficient to a constant and apply formula (3) with that constant value to the pixel points in each exposure region of the fourth image. In this way, the ACE algorithm mitigates the loss of overall contrast caused by the differing exposure times of the exposure areas of the image.
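Formulas (1) to (3) can be implemented with box filters for the local statistics, as sketched below. The gain cap is a practical safeguard of this sketch (an uncapped gain amplifies noise in flat regions) and is not specified in the text.

```python
import cv2
import numpy as np

def ace_enhance(image, n=3, gain_cap=5.0, eps=1e-6):
    x = image.astype(np.float32)
    ksize = (2 * n + 1, 2 * n + 1)
    m = cv2.blur(x, ksize)                      # local mean, formula (1)
    var = cv2.blur(x * x, ksize) - m * m        # local variance, formula (2)
    sigma = np.sqrt(np.maximum(var, eps))
    d = float(x.mean())                         # D: global average pixel value
    gain = np.minimum(d / sigma, gain_cap)      # G(i, j) = D / sigma, capped
    return m + gain * (x - m)                   # enhanced value, formula (3)
```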
Step S820: and based on the exposure information of the exposure area, replacing the pixel points in the exposure area by using the enhanced pixel points.
Fig. 9 is a schematic diagram illustrating a stage involved in obtaining a target image according to an exemplary embodiment of the present disclosure, and as shown in fig. 9, after contrast enhancement is performed on each pixel point in a fourth image, an enhanced image is formed according to the enhanced pixel points.
In this step, in order to respectively perform contrast enhancement of different exposure areas in the fourth image to different degrees, the pixels in the exposure area may be replaced by the enhanced pixels according to the exposure information of the exposure area, so as to obtain the replaced exposure area.
In some possible embodiments, a target replacement proportion corresponding to the exposure information of the exposure area may be determined according to a preset corresponding relationship between the exposure information and the pixel replacement proportion, where the target replacement proportion is used to indicate a proportion of replacing the pixels in the exposure area with the enhanced pixels, and then the pixels in the exposure area are replaced according to the target replacement proportion. The exposure information may be exposure time, exposure level, and other relevant information for reflecting the exposure condition of the exposure area.
For example, as shown in fig. 9, if the exposure time of exposure area A is shorter, area A itself is brighter and its image details need to be enhanced, so the contrast enhancement degree is increased, that is, the target replacement proportion is higher; whereas the exposure time of exposure area B is longer and area B is darker, so the contrast enhancement degree is reduced and the target replacement proportion is lower. By setting different contrast enhancement degrees (pixel replacement degrees) for exposure areas with different exposure times, the contrast information of the image can be restored.
In some possible embodiments, the preset correspondence between the exposure information and the pixel replacement proportion may be a linear mapping between exposure time and pixel replacement proportion, and the target replacement proportion for the exposure time of each exposure area is obtained from this linear mapping. For example, with a linear mapping from exposure time (a–b) to pixel replacement proportion (0–1): if the exposure time of exposure area A is a, area A performs no pixel replacement and the original area A of the fourth image is retained; if the exposure time of exposure area B is b, area B replaces its pixels entirely with the enhanced pixels, i.e., area B of the enhanced image completely replaces area B in the fourth image. The operations for other replacement proportions are similar: pixels are replaced according to the corresponding target replacement proportion, which is not enumerated here. A blending sketch is given below.
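In the sketch, the endpoints a and b and the direction of the mapping follow the example above, and blending (rather than hard pixel swapping) is one illustrative reading of a fractional replacement proportion.

```python
import numpy as np

def replace_by_exposure(fourth, enhanced, masks, exposure_times, a, b):
    """Blend enhanced pixels into each exposure area with a replacement
    proportion mapped linearly from exposure time: 0 at t = a, 1 at t = b."""
    out = fourth.astype(np.float32).copy()
    for mask, t in zip(masks, exposure_times):
        ratio = float(np.clip((t - a) / (b - a), 0.0, 1.0))  # target proportion
        out[mask] = (1.0 - ratio) * fourth[mask] + ratio * enhanced[mask]
    return out
```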
Step S830: and determining a target image according to each exposure area after the replacement processing.
After the pixel replacement processing is performed on each exposure area, the target image is determined from all of the replaced exposure areas. Because each exposure area is given a different contrast enhancement degree according to its exposure time, the target image keeps distinct bright and dark regions: the bright areas retain more image detail, while the details in the dark areas are slightly subdued, so the overall target image has a richer sense of depth and accords with how human eyes perceive the current scene.
In an exemplary embodiment, although the fourth image obtained by fusing the third images is sufficiently sharp, the tonal gradation of the image is masked to a certain degree. To make the gradation of the fourth image more distinct, before performing contrast enhancement, the brightness of each exposure area in the fourth image can be adjusted based on the exposure information corresponding to the first image so as to change the light-dark contrast of the fourth image; the fourth image is then subjected to contrast enhancement processing to retain image details, so that the target image matches human visual perception in both brightness contrast and image gradation.
The brightness of each exposure area can be adjusted according to a correspondence between exposure time and brightness adjustment value. For example, based on the exposure information corresponding to the first image, the brightness of the different exposure areas in the fourth image is adjusted so that areas with low exposure time become brighter and areas with high exposure time become darker, improving the light-dark contrast of the fourth image, as sketched below.
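A sketch of this adjustment (the gain range is an illustrative assumption):

```python
import numpy as np

def adjust_brightness(fourth, masks, exposure_times):
    out = fourth.astype(np.float32).copy()
    t_lo, t_hi = min(exposure_times), max(exposure_times)
    for mask, t in zip(masks, exposure_times):
        w = (t - t_lo) / (t_hi - t_lo + 1e-9)   # 0 for lowest time, 1 for highest
        out[mask] *= 1.2 - 0.4 * w              # brighten low-time areas, darken high-time
    return out
```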
In an exemplary embodiment, to ensure real-time accuracy of the exposure parameters (exposure time, exposure step size, etc.), an implementation of updating the depth information is also provided. Before acquiring a first image of a current scene and determining a plurality of exposure areas corresponding to different depth information in the first image, the first image and the depth information of the first image can be determined from a plurality of preview images, and the depth information of the first image is updated based on optical flow change information when the current scene is shot.
In practice, during image preview, the first image is determined from the plurality of preview images and the depth information of the first image is generated. The first image may be selected randomly; of course, the time for selecting the first image may also be set according to actual needs, for example by selecting it periodically during image preview, and this is not particularly limited.
If the optical flow information at the time of shooting has changed relative to that of the previously determined first image and the change exceeds a set threshold, the depth information of the first image may no longer be a reliable reference. Therefore, to further improve the accuracy of the depth information, the depth information is re-determined when the optical flow change at shooting time is larger than the set threshold; conversely, when the optical flow change is smaller than the set threshold, the depth information of the preview image closest to the current time is used as the depth information of the first image. Updating the depth information may consist of re-acquiring, from the preview images, an image closer to the current time together with its depth information, and updating the first image and its corresponding depth information accordingly. A sketch of the optical-flow check follows.
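The sketch uses OpenCV's Farnebäck dense flow; taking the mean flow magnitude as the change measure, and the threshold value itself, are illustrative assumptions.

```python
import cv2
import numpy as np

def depth_needs_update(prev_gray, curr_gray, threshold=2.0):
    """prev_gray: grayscale first image; curr_gray: grayscale frame at
    shooting time. Returns True when depth should be re-determined."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    change = float(np.linalg.norm(flow, axis=2).mean())  # mean motion in pixels
    return change > threshold
```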
Fig. 10 shows a flow diagram of image processing according to an exemplary embodiment of the present disclosure. The image processing of the embodiment of the present disclosure is explained below with reference to fig. 10.
Step S1010: the device is turned on and image preview is initiated.
Step S1020: depth information and exposure regions corresponding to the different depth information are determined.
The first image and its depth information are determined from the multiple preview images, and the depth information of the first image is updated based on the optical flow change information when the current scene is captured. After the depth information of the first image is determined, a plurality of exposure areas meeting the preset exposure condition at different depths are selected according to the depth information.
Step S1030: and determining the exposure time of the exposure area corresponding to the different depth information.
For example, the exposure time range for each exposure area is [p1 exposure start time, p1 exposure end time], [p2 exposure start time, p2 exposure end time], …, [pn exposure start time, pn exposure end time].
Step S1040: and setting exposure step lengths corresponding to different exposure times.
Determining an exposure step length corresponding to the exposure time according to a relation between the preset exposure time and the exposure step length; or determining an exposure step length corresponding to the exposure time according to the ratio of the number of the pixel points of the exposure area to the total pixel points of the first image; or determining the exposure step length corresponding to the shaking information corresponding to the first image based on the preset corresponding relation between the shaking information and the exposure step length.
Step S1050: and obtaining and fusing third images corresponding to the exposure areas.
The method comprises the steps of generating a plurality of second images for each exposure area according to an exposure step length in exposure time, then acquiring a third image from the plurality of second images for each exposure area, and finally fusing the third images of the exposure areas to obtain a fourth image.
Step S1060: and respectively carrying out contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information.
Contrast enhancement processing is performed on the exposure areas corresponding to different depth information according to their respective exposure information, and the processed fourth image is output as the target image.
It should be noted that the details of the steps S1010 to S1060 are already described in the above method embodiments, and are not repeated herein.
Therefore, the image processing method adjusts the order of generating the depth information and setting the exposure parameters: it determines the exposure areas corresponding to different depth information using the depth information of the first image, and determines the exposure parameters of each exposure area, namely the exposure time and the exposure step, respectively; it determines a third image from the plurality of second images corresponding to each exposure area and fuses the third images corresponding to the exposure areas to obtain a fourth image, so that each exposure area of the fourth image is exposed with high quality and the exposure effect is improved; and it performs local contrast enhancement processing on each exposure area based on the exposure information, restoring the contrast information of each exposure area while retaining image details, so that the target image better matches how human eyes perceive objects.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 11, an image processing apparatus 1100 provided in an exemplary embodiment of the present disclosure includes an information determination module 1110, a first processing module 1120, a second processing module 1130, and a third processing module 1140. Wherein:
the information determining module 1110 is configured to obtain a first image of a current scene, determine multiple exposure areas corresponding to different depth information in the first image, where each exposure area corresponds to different exposure time and exposure step length;
a first processing module 1120, configured to generate, for each exposure area, a plurality of second images according to an exposure step size within an exposure time;
the second processing module 1130 is configured to determine third images from the second images corresponding to the exposure areas, and fuse the third images corresponding to the exposure areas to obtain fourth images;
and a third processing module 1140, configured to perform contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information, respectively, to obtain a target image.
In an exemplary embodiment, the information determination module 1110 is configured to:
determining a plurality of areas with different depths of field which accord with a preset exposure condition in the first image according to the depth information, and taking the areas as a plurality of exposure areas; determining the exposure time of each exposure area by using the depth information of each exposure area based on the corresponding relation between the preset depth information and the exposure time; and determining an exposure step corresponding to each exposure time.
In an exemplary embodiment, the information determination module 1110 is configured to:
and determining an exposure step length corresponding to the exposure time according to the relation between the preset exposure time and the exposure step length.
In an exemplary embodiment, the information determination module 1110 is configured to:
acquiring the number of pixel points of an exposure area corresponding to the exposure time; and determining an exposure step length corresponding to the exposure time according to the ratio of the number of the pixel points to the total pixel points of the first image.
In an exemplary embodiment, the information determination module 1110 is configured to:
and acquiring jitter information corresponding to the first image, and determining an exposure step length corresponding to the jitter information based on the corresponding relation between the preset jitter information and the exposure step length.
In an exemplary embodiment, the second processing module 1130 is configured to:
for each exposure area, acquiring image texture information of a second image corresponding to the exposure area, and determining a third image from the second image according to the image texture information; and fusing the third images corresponding to the exposure areas to obtain a fourth image.
In an exemplary embodiment, the second processing module 1130 is configured to:
carrying out noise reduction processing on each second image; and determining a third image from the second image according to the contrast information of the second image after the noise reduction processing.
In an exemplary embodiment, the third processing module 1140 is configured to:
aiming at each exposure area in the fourth image, contrast enhancement is carried out on pixel points in the exposure area to obtain enhanced pixel points; based on the exposure information of the exposure area, replacing the pixel points in the exposure area by the enhanced pixel points; and determining a target image according to each exposure area after the replacement processing.
In an exemplary embodiment, the third processing module 1140 is configured to:
determining a target replacement proportion corresponding to the exposure information of the exposure area according to a corresponding relation between preset exposure information and pixel replacement proportion, wherein the target replacement proportion is used for indicating the proportion of replacing the pixels in the exposure area by the enhanced pixels; and replacing the pixel points in the exposure area according to the target replacement proportion.
In an exemplary embodiment, the image processing apparatus 1100 further includes:
and the fourth processing module is configured to adjust the brightness of each exposure area in the fourth image based on the exposure information corresponding to the first image so as to change the light and shade contrast degree of the fourth image.
In an exemplary embodiment, the image processing apparatus 1100 further includes:
an information update module configured to determine the first image and the depth information of the first image from a plurality of preview images, and to update the depth information of the first image based on the optical flow change information when the current scene is photographed.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device for the method is also provided in an exemplary embodiment of the present disclosure; the electronic device may be the above-mentioned terminal device or the server. Generally, the electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the above-mentioned method via execution of the executable instructions.
The following takes the mobile terminal 1200 in fig. 12 as an example, and exemplifies the configuration of the electronic device in the embodiment of the present disclosure. It will be appreciated by those skilled in the art that the configuration of figure 12 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes. In other embodiments, mobile terminal 1200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the various components is shown schematically and does not constitute a structural limitation for mobile terminal 1200. In other embodiments, the mobile terminal may also interface differently than shown in fig. 12, or a combination of multiple interfaces.
As shown in fig. 12, the mobile terminal 1200 may specifically include: the mobile communication device comprises a processor 1201, a memory 1202, a bus 1203, a mobile communication module 1204, an antenna 1, a wireless communication module 1205, an antenna 2, a display 1206, a camera module 1207, an audio module 1208, a power module 1209, and a sensor module 1210.
The processor 1201 may include one or more processing units, such as: the Processor 1201 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc.
An encoder may encode (i.e., compress) an image or video to reduce the data size for storage or transmission. The decoder may decode (i.e., decompress) the encoded data of the image or video to recover the image or video data. The mobile terminal 1200 may support one or more encoders and decoders, for example for image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats such as MPEG-1 (Moving Picture Experts Group), MPEG-2, MPEG-4, H.263, H.264, and HEVC (High Efficiency Video Coding).
The processor 1201 may be connected to the memory 1202 or other components through the bus 1203.
The communication function of the mobile terminal 1200 may be implemented by the mobile communication module 1204, the antenna 1, the wireless communication module 1205, the antenna 2, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 1204 may provide a mobile communication solution of 3G, 4G, 5G, etc. applied to the mobile terminal 1200. The wireless communication module 1205 can provide wireless communication solutions for wireless local area network, bluetooth, near field communication, etc. applied to the mobile terminal 1200.
The display screen 1206 is used for performing display functions, such as displaying a user interface, images, videos, and the like, and displaying exception prompt information. The camera module 1207 is used to implement a shooting function, such as shooting an image, a video, etc., to capture a scene image. The audio module 1208 is used to implement audio functions, such as playing audio, collecting voice, etc. The power module 1209 is used to implement power management functions, such as charging a battery, powering a device, monitoring a battery status, and so on. The sensor module 1210 may include one or more sensors for implementing corresponding sensing functions.
Furthermore, the exemplary embodiments of the present disclosure also provide a computer-readable storage medium on which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing devices may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to external computing devices (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (14)
1. An image processing method, comprising:
acquiring a first image of a current scene, and determining a plurality of exposure areas corresponding to different depth information in the first image, wherein each exposure area corresponds to a different exposure time and exposure step length;
for each exposure area, generating a plurality of second images according to the exposure step length within the exposure time;
respectively determining third images from the second images corresponding to the exposure areas, and fusing the third images corresponding to the exposure areas to obtain a fourth image;
and respectively carrying out contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image.
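Read as a pipeline, claim 1 brackets each depth band separately and then composites the per-area winners. The sketch below illustrates only the compositing step, assuming each exposure area is available as a boolean mask over the frame and that the fusion is a simple mask-paste; the claim leaves the fusion operator open, so `fuse_third_images` and the mask layout are hypothetical.

```python
import numpy as np

def fuse_third_images(thirds, masks):
    """Compose the fourth image by pasting each exposure area's
    selected third image into place via that area's boolean mask.
    A straight mask-paste is only one plausible fusion operator."""
    fourth = np.zeros_like(thirds[0])
    for img, mask in zip(thirds, masks):
        fourth[mask] = img[mask]
    return fourth

# Example: two exposure areas splitting a 4x4 frame into top and bottom.
near = np.full((4, 4), 200, dtype=np.uint8)  # well-exposed near band
far = np.full((4, 4), 40, dtype=np.uint8)    # well-exposed far band
top = np.zeros((4, 4), dtype=bool)
top[:2] = True
fourth = fuse_third_images([near, far], [top, ~top])
```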
2. The method of claim 1, wherein the determining a plurality of exposure areas corresponding to different depth information in the first image comprises:
determining, according to the depth information, a plurality of areas of different depth of field in the first image that meet a preset exposure condition, and taking the areas as the plurality of exposure areas;
determining the exposure time of each exposure area by using the depth information of the exposure area, based on a preset correspondence between depth information and exposure time;
and determining an exposure step length corresponding to each exposure time.
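A minimal sketch of the partition-and-lookup that claim 2 describes, assuming depth bands serve as the preset exposure condition; the band edges and exposure times below are illustrative placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical preset: depth band edges (metres) and the exposure
# time (seconds) assigned to each band; illustrative values only.
DEPTH_BANDS = (1.0, 3.0)
BAND_EXPOSURE = (1 / 120, 1 / 60, 1 / 30)  # near, mid, far

def exposure_areas(depth_map):
    """Partition the frame into exposure-area masks by depth band and
    look up each band's exposure time from the preset correspondence."""
    bands = np.digitize(depth_map, DEPTH_BANDS)  # 0, 1 or 2 per pixel
    return [(bands == b, BAND_EXPOSURE[b]) for b in np.unique(bands)]
```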
3. The method of claim 2, wherein the determining an exposure step length corresponding to each exposure time comprises:
and determining the exposure step length corresponding to the exposure time according to a preset correspondence between exposure time and exposure step length.
4. The method of claim 2, wherein the determining an exposure step length corresponding to each exposure time comprises:
acquiring the number of pixel points of the exposure area corresponding to the exposure time;
and determining the exposure step length corresponding to the exposure time according to the ratio of the number of pixel points to the total number of pixel points of the first image.
5. The method of claim 2, wherein the determining an exposure step length corresponding to each exposure time comprises:
acquiring jitter information corresponding to the first image, and determining the exposure step length corresponding to the jitter information based on a preset correspondence between jitter information and exposure step length.
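Claims 3 to 5 give three alternative ways of fixing the exposure step length. A combined sketch of all three follows; every concrete preset value is an illustrative assumption, since the patent does not publish its tables.

```python
# Hypothetical preset tables; all values are placeholders.
TIME_TO_STEP = {1 / 30: 1 / 240, 1 / 60: 1 / 480}      # claim 3
JITTER_TO_STEP = {0: 1 / 240, 1: 1 / 480, 2: 1 / 960}  # claim 5

def step_by_lookup(exposure_time):
    """Claim 3: read the step length from a preset correspondence
    between exposure time and exposure step length."""
    return TIME_TO_STEP.get(exposure_time, 1 / 240)

def step_by_area_ratio(area_px, total_px, base_step=1 / 240):
    """Claim 4: derive the step length from the exposure area's share
    of the frame; scaling the step up with area size is an assumption,
    as the claim does not state the direction of the relation."""
    return base_step * max(area_px / total_px, 1e-3)

def step_by_jitter(jitter_level):
    """Claim 5: map quantized jitter information (e.g. a binned gyro
    reading) to a step length via a preset correspondence."""
    return JITTER_TO_STEP.get(jitter_level, 1 / 960)
```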
6. The method of claim 1, wherein the respectively determining third images from the second images corresponding to the exposure areas, and fusing the third images corresponding to the exposure areas to obtain a fourth image, comprises:
for each exposure area, acquiring image texture information of the second images corresponding to the exposure area, and determining a third image from the second images according to the image texture information;
and fusing the third images corresponding to the exposure areas to obtain a fourth image.
7. The method according to claim 6, wherein the acquiring, for each exposure area, image texture information of the second images corresponding to the exposure area and determining the third image from the second images according to the image texture information comprises:
performing noise reduction processing on each second image;
and determining the third image from the second images according to contrast information of the second images after the noise reduction processing.
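A minimal sketch of the selection in claims 6 and 7, assuming a 3x3 box blur as the noise reduction and RMS contrast (standard deviation) as the texture measure; both operators are assumptions, as the claims leave them open.

```python
import numpy as np

def pick_third_image(second_images):
    """Denoise each candidate second image of one exposure area, then
    keep the candidate whose post-denoise contrast is strongest."""
    def denoise(img):
        # 3x3 box blur via shifted sums (grayscale input assumed).
        padded = np.pad(img.astype(np.float32), 1, mode="edge")
        h, w = img.shape
        return sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0

    # RMS contrast of the denoised image serves as the texture score.
    return max(second_images, key=lambda im: float(np.std(denoise(im))))
```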
8. The method according to claim 1, wherein the respectively performing contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image comprises:
for each exposure area in the fourth image, performing contrast enhancement on the pixel points in the exposure area to obtain enhanced pixel points;
based on the exposure information of the exposure area, replacing the pixel points in the exposure area with the enhanced pixel points;
and determining the target image according to each exposure area after the replacement processing.
9. The method according to claim 8, wherein the replacing the pixel points in the exposure area with the enhanced pixel points based on the exposure information of the exposure area comprises:
determining a target replacement proportion corresponding to the exposure information of the exposure area according to a preset correspondence between exposure information and pixel replacement proportion, wherein the target replacement proportion indicates the proportion of the pixel points in the exposure area to be replaced with the enhanced pixel points;
and replacing the pixel points in the exposure area according to the target replacement proportion.
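A minimal sketch of claims 8 and 9 together, assuming a linear contrast gain about the area mean and reading the target replacement proportion as a per-pixel blend weight; the claims could equally mean replacing that fraction of the pixel points outright, and the preset table is a placeholder.

```python
import numpy as np

# Hypothetical preset mapping quantized exposure information
# (e.g. an EV bin) to a pixel replacement proportion, per claim 9.
EXPOSURE_TO_RATIO = {-1: 0.3, 0: 0.6, 1: 0.9}

def enhance_area(fourth, mask, exposure_bin, gain=1.5):
    """Contrast-enhance the pixel points of one exposure area, then
    replace them in the proportion looked up from the area's
    exposure information."""
    out = fourth.astype(np.float32).copy()
    region = out[mask]
    mean = region.mean()
    enhanced = np.clip((region - mean) * gain + mean, 0, 255)
    ratio = EXPOSURE_TO_RATIO.get(exposure_bin, 0.6)
    out[mask] = ratio * enhanced + (1.0 - ratio) * region
    return out.astype(fourth.dtype)
```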
10. The method according to claim 8, wherein before the performing contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain the target image, the method further comprises:
adjusting the brightness of each exposure area in the fourth image based on the exposure information corresponding to the first image, so as to change the light-dark contrast of the fourth image.
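A minimal sketch of the pre-adjustment in claim 10, assuming the exposure information reduces to one EV-style offset per exposure area; `ev_offsets` is a hypothetical stand-in for that information.

```python
import numpy as np

def pre_adjust_brightness(fourth, masks, ev_offsets):
    """Shift each exposure area's brightness before contrast
    enhancement, changing the light-dark contrast across the
    fourth image. One EV step doubles or halves the intensity."""
    out = fourth.astype(np.float32)
    for mask, ev in zip(masks, ev_offsets):
        out[mask] = np.clip(out[mask] * (2.0 ** ev), 0, 255)
    return out.astype(fourth.dtype)
```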
11. The method of any one of claims 1-10, wherein before the acquiring a first image of a current scene and determining a plurality of exposure areas corresponding to different depth information in the first image, the method further comprises:
determining the first image and depth information of the first image from a plurality of preview images;
and updating the depth information of the first image based on optical flow change information obtained when the current scene is shot.
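A minimal sketch of the update in claim 11, assuming the optical flow change information arrives as a dense per-pixel displacement field; nearest-neighbour backward warping is one simple realisation, since the claim does not fix the update rule.

```python
import numpy as np

def update_depth(depth_map, flow):
    """Warp the preview-derived depth map by the optical flow measured
    while shooting, so the exposure areas track scene motion.
    `flow[..., 0]` and `flow[..., 1]` are assumed to be x and y pixel
    displacements of shape (H, W)."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    return depth_map[src_y, src_x]
```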
12. An image processing apparatus, comprising:
an information determining module, configured to acquire a first image of a current scene and determine a plurality of exposure areas corresponding to different depth information in the first image, wherein each exposure area corresponds to a different exposure time and exposure step length;
a first processing module, configured to generate, for each exposure area, a plurality of second images according to the exposure step length within the exposure time;
a second processing module, configured to respectively determine third images from the second images corresponding to the exposure areas, and fuse the third images corresponding to the exposure areas to obtain a fourth image; and
a third processing module, configured to respectively perform contrast enhancement processing on each exposure area in the fourth image according to the corresponding exposure information to obtain a target image.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 11 via execution of the executable instructions.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211485144.1A CN115719316A (en) | 2022-11-24 | 2022-11-24 | Image processing method and device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115719316A true CN115719316A (en) | 2023-02-28 |
Family
ID=85256352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211485144.1A Pending CN115719316A (en) | 2022-11-24 | 2022-11-24 | Image processing method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115719316A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116309191A (en) * | 2023-05-18 | 2023-06-23 | 山东恒昇源智能科技有限公司 | Intelligent gas inspection display method based on image enhancement |
CN116309191B (en) * | 2023-05-18 | 2023-07-28 | 山东恒昇源智能科技有限公司 | Intelligent gas inspection display method based on image enhancement |
Similar Documents
Publication | Title
---|---
WO2021179820A1 (en) | Image processing method and apparatus, storage medium and electronic device
CN110619593B (en) | Double-exposure video imaging system based on dynamic scene
JP6081726B2 (en) | HDR video generation apparatus and method with ghost blur removed on multiple exposure fusion base
CN110033418B (en) | Image processing method, image processing device, storage medium and electronic equipment
CN108156369B (en) | Image processing method and device
CN113810596B (en) | Time-delay shooting method and device
CN110889809B (en) | Image processing method and device, electronic equipment and storage medium
CN110443766B (en) | Image processing method and device, electronic equipment and readable storage medium
WO2020029679A1 (en) | Control method and apparatus, imaging device, electronic device and readable storage medium
CN113298735A (en) | Image processing method, image processing device, electronic equipment and storage medium
CN108513062B (en) | Terminal control method and device, readable storage medium and computer equipment
CN115719316A (en) | Image processing method and device, electronic equipment and computer readable storage medium
CN107295261B (en) | Image defogging method and device, storage medium and mobile terminal
CN116055895B (en) | Image processing method and device, chip system and storage medium
CN115767262B (en) | Photographing method and electronic equipment
CN115861121A (en) | Model training method, image processing method, device, electronic device and medium
CN115379128A (en) | Exposure control method and device, computer readable medium and electronic equipment
CN115278189A (en) | Image tone mapping method and apparatus, computer readable medium and electronic device
CN115529411A (en) | Video blurring method and device
CN114205650A (en) | Three-dimensional panoramic video picture synchronization method and device
CN113658070A (en) | Image processing method, image processing apparatus, storage medium, and electronic device
CN116033274B (en) | 3D-noise-reduction-compatible image width dynamic method
CN115409737A (en) | Image processing method and device, electronic equipment and computer readable storage medium
CN115546042B (en) | Video processing method and related equipment thereof
WO2024164736A1 (en) | Video processing method and apparatus, and computer-readable medium and electronic device
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |