CN116416521A - Region identification method, device, computer readable storage medium and electronic equipment - Google Patents

Region identification method, device, computer readable storage medium and electronic equipment

Info

Publication number: CN116416521A
Authority: CN (China)
Application number: CN202111655400.2A
Other languages: Chinese (zh)
Inventors: 邢连萍, 凌健, 俞大海
Current and original assignee: TCL Technology Group Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: scene, image, determining, light source, dynamic
Classification (landscape): Image Processing (AREA)
Abstract

Embodiments of the present application disclose a region identification method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: acquiring a plurality of scene images of a scene to be identified with different exposure values; determining a dynamic light source region and a dynamic non-light source region of the scene to be identified from the plurality of scene images; and determining a dynamic region of the scene to be identified from the dynamic light source region and the dynamic non-light source region. With this method, dynamic regions in a scene can be accurately identified.

Description

Region identification method, device, computer readable storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a region identification method and apparatus, a computer-readable storage medium, and an electronic device.
Background
A scene may contain both static objects (e.g., buildings) and dynamic objects (e.g., pedestrians), and the presence of dynamic objects can adversely affect image processing of the scene. For example, when multi-frame image synthesis is performed on a scene containing a dynamic object, a blurred or semi-transparent artifact (commonly called a ghost) appears in the dynamic region where the object is located, seriously degrading the quality of the synthesized image. To eliminate the adverse effects of dynamic objects, the dynamic regions in the scene must first be identified.
Disclosure of Invention
The embodiment of the application provides a region identification method, a device, a computer readable storage medium and electronic equipment, which can accurately identify a dynamic region in a scene.
In a first aspect, an embodiment of the present application provides a method for identifying a region, including:
acquiring a plurality of scene images with different exposure values of a scene to be identified;
respectively determining a dynamic light source region and a dynamic non-light source region of a scene to be identified according to a plurality of scene images;
and determining a dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region.
In a second aspect, an embodiment of the present application further provides an area identifying apparatus, including:
the acquisition module is used for acquiring a plurality of scene images with different exposure values of the scene to be identified;
the first determining module is used for respectively determining a dynamic light source area and a dynamic non-light source area of a scene to be identified according to the plurality of scene images;
and the second determining module is used for determining the dynamic area of the scene to be identified according to the dynamic light source area and the dynamic non-light source area.
In a third aspect, embodiments of the present application further provide a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements steps in a region identification method as provided in any embodiment of the present application.
In a fourth aspect, embodiments of the present application further provide an electronic device, where the electronic device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps in the region identification method as provided in any embodiment of the present application are implemented.
From the above, the present application divides the dynamic regions that may exist in a scene into two types, dynamic light source regions and dynamic non-light source regions. It acquires a plurality of scene images of the scene to be identified with different exposure values and uses them to identify the dynamic light source region and the dynamic non-light source region separately, so that interference between the two region types is avoided and the accuracy of each identified region is ensured. Finally, the dynamic region of the scene to be identified is determined from the dynamic light source region and the dynamic non-light source region, which ensures the accuracy of the determined dynamic region.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an area identifying system according to an embodiment of the present application.
Fig. 2 is a flow chart of a region identification method according to an embodiment of the present application.
Fig. 3 is an exemplary diagram of three scene images acquired in an embodiment of the present application.
Fig. 4 is an exemplary diagram of acquiring and optimizing a first mask image in an embodiment of the present application.
Fig. 5 is an exemplary diagram of acquiring and optimizing a second mask image in the embodiment of the present application.
Fig. 6 is a block diagram of a region identification apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be noted that the principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on illustrated embodiments of the present application and should not be taken as limiting other embodiments not described in detail herein. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Relational terms such as first and second, and the like may be used solely to distinguish one object or operation from another object or operation without necessarily limiting the actual sequential relationship between the objects or operations. In the description of the embodiments of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In order to accurately identify a dynamic region in a scene and eliminate adverse effects of a dynamic object on image processing, the application correspondingly provides a region identification method, a region identification device, a computer-readable storage medium and electronic equipment. Wherein the region identification method can be performed by the electronic device.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, the present application further provides a region identification system, as shown in fig. 1, where the region identification system includes an electronic device 100. For example, the electronic device 100 may capture a scene to be identified according to a plurality of different exposure values by using a capturing component, so as to obtain a plurality of scene images with different exposure values of the scene to be identified, then determine a dynamic light source area and a dynamic non-light source area of the scene to be identified according to the plurality of obtained scene images, and finally further determine a dynamic area of the scene to be identified according to the determined dynamic light source area and dynamic non-light source area. Therefore, during image processing of the image of the scene to be identified, special optimization processing can be performed on the dynamic region in a targeted manner, so that adverse effects possibly caused by the dynamic object are eliminated. The electronic device 100 may be any device with a shooting component and having shooting capability, such as a mobile electronic device with a shooting component, such as a smart phone, a tablet computer, a palm computer, a notebook computer, or a fixed electronic device with a shooting component, such as a desktop computer, a television, and an advertisement player.
In addition, as shown in fig. 1, the area identifying system may further include a storage device 200, where the storage device 200 is configured to store data, for example, the electronic device 100 may store process data and result data for identifying an area of a scene to be identified in the storage device 200, for example, a plurality of acquired scene images, description information of a dynamic light source area, a dynamic non-light source area, and a dynamic area, and so on in the storage device 200.
It should be noted that the schematic diagram of the region identification system shown in fig. 1 is only an example. The region identification system and scenario described in the embodiments of the present application are intended to explain the technical solutions of the embodiments more clearly and do not limit those solutions; as region identification systems evolve and new service scenarios appear, those skilled in the art will understand that the technical solutions provided herein remain equally applicable to similar technical problems.
Referring to fig. 2, fig. 2 is a flow chart of a region identification method provided in an embodiment of the present application, where the region identification method is applied to an electronic device, and the flow chart may be as follows:
s210, acquiring a plurality of scene images with different exposure values of the scene to be identified.
It should be noted that the electronic device is equipped with a photographing component for capturing images. The component includes at least a lens and an image sensor: the lens projects external optical signals onto the image sensor, and the image sensor photoelectrically converts those optical signals into usable electrical signals, yielding a digitized image. Once enabled, the photographing component can capture the scene in real time. The scene to be identified is the real-world area at which the enabled photographing component is aimed, i.e., the area whose optical signals the component can convert into a corresponding image. For example, after the photographing component is enabled by a user operation, if the user aims it at an area containing some object, that area is the scene to be identified.
In this embodiment, the electronic device may control the photographing component to shoot the scene to be identified at a plurality of different exposure values, obtaining a plurality of scene images of the scene with different exposure values. The number of scene images acquired is not particularly limited here and may be configured by those skilled in the art as needed. It should be noted that although the exposure values of the plurality of scene images differ, the images are all the same size.
Wherein an Exposure Value (EV) represents all camera aperture shutter combinations that can give the same Exposure, and can be expressed as:
EV = log2(N^2 / t)
where N represents the aperture (f-number) and t represents the exposure time (i.e., shutter).
For example, EV0 corresponds to the combination of an exposure time of 1 second and an aperture of f/1.0, or any equivalent combination.
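The EV relation above can be checked numerically. A minimal Python sketch (the function name and the example aperture/shutter values are illustrative, not from the patent):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t), with aperture f-number N and exposure time t in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(1.0, 1.0))  # EV0 reference: f/1.0 at 1 s -> 0.0
print(exposure_value(2.0, 4.0))  # an equivalent combination: f/2.0 at 4 s -> 0.0
```

Both combinations yield EV0, illustrating that one exposure value covers a family of equivalent aperture/shutter pairs.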
For example, the electronic device may control the photographing component to photograph the scene to be identified according to EV0, EV-2, and EV-4, respectively, to obtain three images with different exposure values, as shown in fig. 3, which are an EV0 image, an EV-2 image, and an EV-4 image, respectively.
S220, respectively determining a dynamic light source area and a dynamic non-light source area of the scene to be identified according to the plurality of scene images.
In this embodiment, in order to accurately identify the dynamic region of the scene to be identified, the dynamic objects that may exist in the scene are divided into dynamic light source objects and dynamic non-light source objects, and the regions where each type is located are identified in a targeted manner. Dynamic light source objects include, for example, scrolling billboards and moving car headlights; dynamic non-light source objects include, for example, pedestrians and swaying leaves.
The electronic device can identify the area where a dynamic light source object is located according to the differences in pixel values of the plurality of scene images at corresponding pixels, and this area is recorded as the dynamic light source region.
In an alternative embodiment, determining a dynamic light source region of a scene to be identified from a plurality of scene images includes:
determining a first reference scene image from a plurality of scene images;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first reference scene image and the non-first reference scene image in the plurality of scene images in the corresponding pixels.
In this embodiment, the electronic device may first select a scene image from the plurality of scene images as a reference, and record the selected scene image as a first reference scene image. The selection strategy of the first reference scene image is not particularly limited herein, and may be configured by those skilled in the art according to actual needs.
For example, the electronic device may select a scene image with the best exposure quality from the plurality of scene images as the first reference scene image. For example, the electronic device scores the exposure quality of each scene image according to a configured exposure quality scoring strategy to obtain an exposure quality score of each scene image, and the scene image with the highest exposure quality score is selected as the first reference scene image.
As above, after determining the first reference scene image, the electronic device further determines a dynamic light source region of the scene to be identified according to a difference in pixel values of the first reference scene image and a non-first reference scene image of the plurality of scene images at corresponding pixels. Wherein the dynamic light source region may comprise a plurality of independent regions, and may comprise only one independent region.
In order to improve the efficiency of determining the dynamic light source region, in an alternative embodiment, determining the dynamic light source region of the scene to be identified according to the difference of the pixel values of the first reference scene image and the non-first reference scene image in the plurality of scene images in the corresponding pixels includes:
respectively selecting a first target channel image of a first reference scene image and a second target channel image of a non-first reference scene image, wherein the first target channel image and the second target channel image correspond to the same color channel;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first target channel image and the second target channel image in the corresponding pixels.
In this embodiment, the dynamic light source area is determined by using the first reference scene image and the single-channel image other than the first reference scene image, so that the calculation amount can be reduced, and the purpose of improving the determination efficiency is achieved.
The electronic device selects a color channel image of the first reference scene image as a first target channel image, and selects a color channel image of the non-first reference scene image as a second target channel image. It should be noted that, taking the selected first target channel image and the second target channel image corresponding to the same color channel as constraint, a selection manner of the first target channel image and the second target channel image may be configured by a person skilled in the art according to actual needs.
In this embodiment, the plurality of scene images acquired by the electronic device are RAW format images, that is, the original images obtained by the image sensor converting captured light signals into electrical signals. Each scene image therefore includes one red channel image, two green channel images (a first green channel image and a second green channel image), and one blue channel image. When selecting the first target channel image and the second target channel image, the electronic device may select a green channel image, to which the human eye is more sensitive; either of the two green channel images may be used. For example, the electronic device may select the first green channel image of the first reference scene image as the first target channel image and the first green channel image of the non-first reference scene image as the second target channel image.
As described above, after the electronic device selects the first target channel image and the second target channel image, the dynamic light source region of the scene to be identified is determined according to the pixel value difference of the first target channel image and the second target channel image in the corresponding pixels.
In order to further reduce the amount of computation required for determining the dynamic light source region, so as to improve the determination efficiency of the dynamic light source region, in an alternative embodiment, determining the dynamic light source region of the scene to be identified according to the difference between the pixel values of the first target channel image and the second target channel image in the corresponding pixels includes:
respectively acquiring a first downsampled image of the first target channel image and a second downsampled image of the second target channel image;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first downsampled image and the second downsampled image in the corresponding pixels.
In this embodiment, the original first target channel image and the second target channel image are not directly used to determine the dynamic light source region of the scene to be identified, but the first target channel image and the second target channel image are downsampled and then used for determining the dynamic light source region.
The electronic device firstly downsamples the first target channel image and the second target channel image according to the same sampling multiplying power, and correspondingly acquires a first downsampled image of the first target channel image and a second downsampled image of the second target channel image. The configuration of the sampling magnification is not particularly limited here, and may be configured by those skilled in the art according to actual needs.
For example, in this embodiment, the electronic device downsamples the first target channel image and the second target channel image to 1/8 of the original size, that is, the first downsampled image and the second downsampled image have the same size and are each 1/8 of the size of the respective original images.
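As an illustration only (the patent does not specify the downsampling algorithm), shrinking a single-channel image to 1/8 of its size can be done by averaging 8×8 blocks; the function name and toy input below are assumptions:

```python
import numpy as np

def downsample_eighth(channel: np.ndarray) -> np.ndarray:
    """Shrink a single-channel image to 1/8 size by averaging each 8x8 block."""
    h, w = channel.shape
    h8, w8 = h - h % 8, w - w % 8            # crop so dimensions divide evenly by 8
    blocks = channel[:h8, :w8].reshape(h8 // 8, 8, w8 // 8, 8)
    return blocks.mean(axis=(1, 3))          # average within each 8x8 block

g = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)  # toy 64x64 channel image
small = downsample_eighth(g)
print(small.shape)  # (8, 8)
```

In practice a library resize (e.g. area interpolation) would serve the same purpose; the point is only that both downsampled images end up the same size so they can be compared pixel by pixel.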
As described above, after the first downsampled image and the second downsampled image are acquired, the electronic device may determine the dynamic light source region of the scene to be identified according to the pixel value difference of the first downsampled image and the second downsampled image in the corresponding pixels.
In this embodiment, the dynamic light source region is determined pixel by pixel. Determining the dynamic light source region of the scene to be identified according to the difference in pixel values of the first downsampled image and the second downsampled image at corresponding pixels includes:
determining a first target pixel of a first downsampled image;
if the first target pixel meets the preset condition, determining a dynamic light source area of the scene to be identified according to the first target pixel;
the preset condition may be:

when the exposure value corresponding to the first downsampled image is greater than that of the second downsampled image, the pixel value of the first target pixel is smaller than the pixel value of its corresponding pixel in the second downsampled image; or

when the exposure value corresponding to the first downsampled image is smaller than that of the second downsampled image, the pixel value of the first target pixel is greater than the pixel value of its corresponding pixel in the second downsampled image.
For example, the electronic device may directly determine the area formed by all the first target pixels as the dynamic light source area of the scene to be identified, or may perform optimization processing on the area formed by the first target pixels and then determine the area as the dynamic light source area of the scene to be identified.
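A minimal sketch of the preset condition above, assuming the frames are already aligned single-channel downsampled arrays; all names and toy values are illustrative, not from the patent:

```python
import numpy as np

def first_target_mask(ref_ds: np.ndarray, other_ds: np.ndarray,
                      ref_ev: float, other_ev: float) -> np.ndarray:
    """Flag pixels that violate the expected exposure ordering.

    For a static scene, the frame with the higher exposure value should be
    at least as bright at every pixel; pixels that break this ordering are
    candidate dynamic light source pixels (the "first target pixels")."""
    if ref_ev > other_ev:
        return ref_ds < other_ds
    return ref_ds > other_ds

ref = np.array([[100, 50], [200, 30]])    # toy EV0 downsampled channel
other = np.array([[80, 60], [150, 30]])   # toy EV-2 downsampled channel
mask = first_target_mask(ref, other, ref_ev=0, other_ev=-2)
print(mask)  # only the pixel that got *brighter* at lower exposure is flagged
```

Here only the pixel whose value rises as exposure falls is flagged, matching the intuition that a changing light source moves against the global exposure trend.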
To improve accuracy of the determined dynamic light source region, in an alternative embodiment, determining the dynamic light source region of the scene to be identified according to the first target pixel includes:
generating a first mask image of the scene to be identified according to the first target pixel;
optimizing the first mask image to obtain an optimized first mask image, wherein the optimizing process comprises at least one of corrosion process, expansion process and connected region analysis process;
and determining a dynamic light source area of the scene to be identified according to the optimized first mask image.
The erosion process (Erosion) can be understood as repeatedly removing pixels from the boundary of a pattern, gradually shrinking the pattern to eliminate small dot-like patterns and thereby remove burrs from the image.

The dilation process (Dilation) can be understood as continuously expanding the boundary of a pattern to remove pinholes, thereby repairing small defects in the image.
The connected region (Connected Components) refers to an image region formed by foreground pixel points which have the same pixel value and are adjacent in position in the image, and the connected region analysis processing refers to finding and marking each connected region in the image.
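For illustration, minimal 3×3 binary erosion and dilation can be written in plain NumPy; production code would typically call a library such as OpenCV, and everything below (names, structuring element, toy mask) is an assumed sketch rather than the patent's implementation:

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighbourhood is foreground (removes burr-like specks)."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]].astype(bool)
    return out

def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: a pixel turns on if any neighbour is on
    (fills pinholes)."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]].astype(bool)
    return out

m = np.zeros((5, 5), dtype=bool)
m[1:4, 1:4] = True      # a 3x3 blob
m[0, 4] = True          # an isolated speck (a "burr")
opened = dilate(erode(m))   # erosion then dilation removes the speck, keeps the blob
print(opened.astype(int))
```

Erosion followed by dilation (morphological opening) deletes the isolated speck while restoring the 3×3 blob, which is exactly the burr-removal behaviour described above; connected-region labeling would then enumerate the surviving blobs.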
It should be noted that when there is one non-first reference scene image, the electronic device acquires one second downsampled image and determines one group of first target pixels from it; when there are multiple non-first reference scene images, the electronic device determines the first target pixels corresponding to each second downsampled image separately, obtaining multiple groups of first target pixels.
In this embodiment, if only one group of first target pixels is determined, the electronic device directly generates the first mask image of the scene to be identified from that group. If multiple groups are determined, the electronic device generates a candidate mask image of the scene to be identified from each group, and takes the sum image of the generated candidate mask images as the first mask image of the scene to be identified.
The electronic device generates the first mask image and the candidate mask images in the same manner. For example, it may generate a blank image with the same size as the first downsampled image, set the pixels corresponding to the group of first target pixels to 1, and set all other pixels to 0, yielding an image whose pixel values are only 0 and 1, i.e., a mask image.
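A sketch of the mask-construction step just described; the function name and the clip-based union of candidate masks are assumptions (the patent only says a "sum image" of the candidate masks is taken):

```python
import numpy as np

def mask_from_targets(shape, target_coords):
    """Build a 0/1 mask image: 1 at each first-target pixel, 0 elsewhere."""
    mask = np.zeros(shape, dtype=np.uint8)
    if target_coords:
        rows, cols = zip(*target_coords)
        mask[list(rows), list(cols)] = 1
    return mask

# Candidate masks from each (reference, non-reference) pair, then their union.
m1 = mask_from_targets((4, 4), [(0, 1)])            # toy targets from EV0 vs EV-2
m2 = mask_from_targets((4, 4), [(0, 1), (2, 3)])    # toy targets from EV0 vs EV-4
first_mask = np.clip(m1 + m2, 0, 1)   # "sum image" of candidate masks, kept binary
print(first_mask.sum())  # 2
```

Clipping the sum back to 0/1 keeps the combined mask binary, so a pixel flagged by either comparison ends up in the first mask image.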
For example, referring to fig. 4, assume the electronic device acquires three scene images: an EV0 image, an EV-2 image, and an EV-4 image. The EV0 image is selected as the first reference scene image, and the EV-2 and EV-4 images are non-first reference scene images. Following the mask image generation strategy described above, the electronic device generates a first candidate mask image from the EV0 and EV-2 images and a second candidate mask image from the EV0 and EV-4 images. It then computes the sum image of the two candidate mask images to obtain the first mask image, and sequentially performs erosion, dilation, and connected-region analysis on it to obtain the optimized first mask image. From this first mask image the electronic device can determine the dynamic light source region of the scene to be identified; as shown in fig. 4, the scrolling billboard and the mobile phone in a pedestrian's hand are identified as dynamic light source regions.
In this embodiment, the electronic device may further identify, according to the pixel values of each of the plurality of scene images, an area where the dynamic non-light source object is located, and record the area as a dynamic non-light source area.
In an alternative embodiment, determining a dynamic non-light source region of a scene to be identified from a plurality of scene images includes:
determining a second reference scene image from the plurality of scene images;
determining a target binarization threshold according to the pixel values of the second reference scene image;
according to the target binarization threshold value, obtaining binarization images of a plurality of scene images;
and obtaining a sum value image of the binarized images of the plurality of scene images, and determining a dynamic non-light source area of the scene to be identified according to the pixel values of the sum value image.
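The binarize-and-sum steps above can be sketched as follows. How the sum image maps to the dynamic non-light source region is not spelled out at this point in the text, so flagging pixels whose binarized value is inconsistent across frames is an assumed interpretation, and all names and toy values are illustrative:

```python
import numpy as np

def binarize(img: np.ndarray, thresh: int) -> np.ndarray:
    """0/1 binarization against the target binarization threshold."""
    return (img > thresh).astype(np.uint8)

# Toy single-channel frames of the same scene at different exposures.
frames = [np.array([[10, 200], [90, 40]]),
          np.array([[10, 210], [30, 40]]),
          np.array([[10, 190], [95, 40]])]
thresh = 80                                   # assumed target binarization threshold
sum_img = sum(binarize(f, thresh) for f in frames)

# A pixel whose sum is neither 0 nor len(frames) flips across exposures:
# such inconsistent pixels are candidates for the dynamic non-light-source region.
candidates = (sum_img > 0) & (sum_img < len(frames))
print(candidates)
```

In this toy example only the pixel that crosses the threshold in some frames but not others is flagged; pixels that are consistently dark or consistently bright are treated as static.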
In this embodiment, the electronic device may first select a scene image from the plurality of scene images as the reference, and record the selected scene image as the second reference scene image. The selection strategy of the second reference scene image is not particularly limited herein, and may be configured by those skilled in the art according to actual needs.
For example, the electronic device may select a scene image with the best exposure quality from the plurality of scene images as the second reference scene image. For example, the electronic device scores the exposure quality of each scene image according to the configured exposure quality scoring strategy to obtain the exposure quality score of each scene image, and the scene image with the highest exposure quality score is selected as the second reference scene image correspondingly.
As above, after determining the second reference scene image, the electronic device further determines a target binarization threshold for binarizing each scene image based on pixel values of the second reference scene image. To reduce the amount of computation and increase the efficiency of determining the target binarization threshold, in an alternative embodiment, determining the target binarization threshold from the pixel values of the second reference scene image comprises:
selecting a third target channel image of the second reference scene image;
and determining a target binarization threshold according to the pixel value of the third target channel image.
In this embodiment, the target binarization threshold is determined by using the single-channel image of the second reference scene image, so that the calculated amount can be reduced, and the purpose of improving the determination efficiency is achieved.
The electronic device first selects a color channel image of the second reference scene image to be recorded as a third target channel image. It should be noted that, the selection manner of the third target channel image is not particularly limited herein, and any color channel image of the second reference scene image may be selected as the third target channel image.
In this embodiment, the plurality of scene images acquired by the electronic device are RAW format images, that is, the original images obtained by the image sensor converting captured light signals into electrical signals. Each scene image therefore includes one red channel image, two green channel images (a first green channel image and a second green channel image), and one blue channel image. When selecting the third target channel image, the electronic device may select a green channel image, to which the human eye is more sensitive; either of the two green channel images may be used. For example, the electronic device may select the first green channel image of the second reference scene image as the third target channel image.
As described above, the electronic device determines the target binarization threshold after selecting the third target channel image, that is, according to the pixel value of the third target channel image. To further reduce the amount of computation required to determine the target binarization threshold to increase the efficiency of determining the target binarization threshold, in an alternative embodiment, determining the target binarization threshold based on the pixel values of the third target channel image includes:
acquiring a third downsampled image of a third target channel image;
determining the target pixel number of pixels with pixel values in a preset pixel value interval in the third downsampled image;
when the target pixel number reaches the preset pixel number, determining a first preset binarization threshold value as a target binarization threshold value, otherwise, determining a second preset binarization threshold value as a target binarization threshold value, wherein the first preset binarization threshold value is larger than the second preset binarization threshold value.
In this embodiment, the original third target channel image is not directly used to determine the target binarization threshold, but the third target channel image is downsampled and then used to determine the target binarization threshold.
The electronic device first downsamples the third target channel image according to a configured sampling ratio, thereby obtaining a third downsampled image of the third target channel image. The configuration of the sampling ratio is not particularly limited here and may be set by those skilled in the art according to actual needs.
For example, in this embodiment, the electronic device downsamples the third target channel image to 1/8 of the original size to obtain a third downsampled image.
As described above, after obtaining the third downsampled image, the electronic device performs statistics on its pixel values: it determines the number of pixels in the third downsampled image whose pixel values fall within the preset pixel value interval, and records this number as the target pixel number. The electronic device then compares the target pixel number with the preset pixel number. When the target pixel number reaches the preset pixel number, it determines that the second reference scene image has an overexposed region and takes the first preset binarization threshold as the target binarization threshold; otherwise, it determines that the second reference scene image has no overexposed region and takes the second preset binarization threshold as the target binarization threshold, where the first preset binarization threshold is greater than the second preset binarization threshold. The configurations of the first preset binarization threshold, the second preset binarization threshold, and the preset pixel number are not particularly limited here and may be configured by those skilled in the art according to actual needs.
For example, assume that the first preset binarization threshold is configured as 0.7, the second preset binarization threshold as 0.5, and the preset pixel number as two percent of the total number of pixels in the third downsampled image. If the acquired scene images are 10-bit, the pixel value range of the third downsampled image is 0 to 1023, 1024 values in total. The electronic device divides this pixel value range into eight equal pixel value intervals in ascending order of pixel value and takes the last pixel value interval as the preset pixel value interval. Then, the electronic device determines the target pixel number of pixels in the third downsampled image whose pixel values fall within the preset pixel value interval and determines whether the target pixel number is greater than two percent of the total number of pixels in the third downsampled image. If so, it determines that the second reference scene image has an overexposed region and sets the target binarization threshold to 0.7; if not, it determines that the second reference scene image has no overexposed region and sets the target binarization threshold to 0.5.
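The threshold-selection logic of the example above can be sketched as follows. The parameter values (1/8 downsampling, eight equal pixel-value intervals, 2% pixel ratio, thresholds 0.7 and 0.5) follow the example in the text, while the function and parameter names are illustrative assumptions.

```python
import numpy as np

def determine_target_threshold(third_target_channel, bit_depth=10,
                               first_preset=0.7, second_preset=0.5,
                               ratio=0.02):
    """Select the target binarization threshold by detecting overexposure.

    Downsamples the third target channel image to 1/8 size, counts the
    pixels falling in the last of eight equal pixel-value intervals,
    and picks the larger threshold when that count reaches the preset
    pixel number (a fixed ratio of the downsampled pixel count).
    """
    # Downsample to 1/8 of the original size by simple striding.
    third_downsampled = third_target_channel[::8, ::8]

    # The last of eight equal intervals over [0, 2**bit_depth - 1]
    # serves as the preset pixel-value interval.
    interval_start = (2 ** bit_depth) * 7 // 8

    target_pixel_number = np.count_nonzero(third_downsampled >= interval_start)
    preset_pixel_number = ratio * third_downsampled.size

    if target_pixel_number >= preset_pixel_number:
        return first_preset   # overexposed region present
    return second_preset      # no overexposed region
```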
As described above, after determining the target binarization threshold, the electronic device may perform binarization processing on each scene image according to the target binarization threshold, to obtain binarized images of multiple scene images.
The binarization process is described below by taking a single scene image as an example.
The electronic device sets pixels in the scene image whose pixel values are greater than the target binarization threshold to a first preset pixel value, and sets pixels whose pixel values are less than or equal to the target binarization threshold to a second preset pixel value, subject to the constraint that the first preset pixel value is greater than the second preset pixel value; the specific values of the two preset pixel values may be configured by those skilled in the art according to actual needs.
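The binarization step can be sketched as below. Pixel values are assumed to be normalized to [0, 1] so that thresholds such as 0.7 and 0.5 apply directly, and the preset pixel values 1 and 0 are one possible choice satisfying the stated constraint.

```python
import numpy as np

def binarize(scene_image, target_threshold,
             first_preset_value=1, second_preset_value=0):
    """Binarize a scene image against the target binarization threshold.

    Pixels above the threshold become the first preset pixel value and
    the rest become the second; the first must exceed the second.
    """
    assert first_preset_value > second_preset_value
    return np.where(scene_image > target_threshold,
                    first_preset_value, second_preset_value)
```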
After the binarized images of the plurality of scene images are acquired, the electronic device further acquires a sum image of the binarized images of the plurality of scene images, and determines a dynamic non-light source region of the scene to be identified according to pixel values of the sum image. In an alternative embodiment, determining a dynamic non-light source region of the scene to be identified from the pixel values of the sum image comprises:
determining a third target pixel of the sum image, wherein the pixel value of the third target pixel is a preset pixel threshold value;
And determining a dynamic non-light source area of the scene to be identified according to the third target pixel.
For example, the electronic device may directly determine the area formed by all the third target pixels as the dynamic non-light source area of the scene to be identified, or may perform optimization processing on the area formed by the third target pixels and then determine the area as the dynamic non-light source area of the scene to be identified.
To improve the accuracy of the determined dynamic non-light source region, in an alternative embodiment, determining the dynamic non-light source region of the scene to be identified from the third target pixel includes:
generating a second mask image of the scene to be identified according to the third target pixel;
optimizing the second mask image to obtain an optimized second mask image, wherein the optimization processing includes at least one of erosion processing, dilation processing, and connected region analysis processing;
and determining a dynamic non-light source area of the scene to be identified according to the optimized second mask image.
For example, the electronic device may generate a blank image having the same size as the third downsampled image, then set the pixel value of the pixel corresponding to the third target pixel in the blank image to 1, and set the pixel values of the other pixels in the blank image to 0, so as to obtain an image having only 0 and 1 as pixel values, that is, the second mask image.
For example, referring to FIG. 5, assume that the electronic device acquires three scene images, EV0, EV-2, and EV-4; the EV0 image is selected as the second reference scene image, and the EV-2 and EV-4 images are non-second-reference scene images. Following the target binarization threshold determination strategy described in the above embodiment, the electronic device determines the target binarization threshold and uses it to binarize the EV0, EV-2, and EV-4 images, obtaining the binarized image corresponding to each. The electronic device then acquires the sum image of the binarized images and generates the second mask image according to the third target pixels in the sum image. Next, the electronic device sequentially performs erosion processing, dilation processing, and connected region analysis processing on the second mask image to obtain the optimized second mask image. From this optimized second mask image, the electronic device can determine the dynamic non-light source region of the scene to be identified; as shown in FIG. 5, the region where the pedestrian is located in the scene to be identified will be identified as a dynamic non-light source region.
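Putting the sum image, the second mask image, and the optimization together, a minimal NumPy sketch might look as follows. The 3x3 structuring element is an assumed choice, the erosion-then-dilation pair forms a morphological opening, and the connected-region analysis step is omitted for brevity; all names are illustrative.

```python
import numpy as np

def dynamic_non_light_source_mask(binarized_images, preset_pixel_threshold):
    """Build the optimized second mask image from the binarized images.

    A third target pixel is one whose value in the sum image equals the
    preset pixel threshold (bright in some exposures but not all).
    """
    sum_image = np.sum(binarized_images, axis=0)
    mask = (sum_image == preset_pixel_threshold).astype(np.uint8)

    # 3x3 erosion followed by 3x3 dilation (an opening) removes
    # isolated noise pixels while preserving larger regions.
    def erode(m):
        p = np.pad(m, 1, constant_values=1)
        out = np.ones_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= p[1 + dy : 1 + dy + m.shape[0],
                         1 + dx : 1 + dx + m.shape[1]]
        return out

    def dilate(m):
        p = np.pad(m, 1, constant_values=0)
        out = np.zeros_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= p[1 + dy : 1 + dy + m.shape[0],
                         1 + dx : 1 + dx + m.shape[1]]
        return out

    return dilate(erode(mask))
```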
It should be noted that, in the above embodiment, a set of optimization parameters may be shared when the optimization processing is performed on the first mask image and the second mask image. Illustratively, the erosion process may employ the same convolution kernel size, the dilation process may employ the same convolution kernel size, and the connected region analysis process may employ the same connectivity (e.g., 8 connectivity).
S230, determining a dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region.
As described above, after determining the dynamic light source region and the dynamic non-light source region of the scene to be identified, the electronic device further determines the dynamic region of the scene to be identified according to the two regions. For example, the electronic device may perform a union operation on the dynamic light source region and the dynamic non-light source region to obtain their combined region, and correspondingly determine the combined region as the dynamic region of the scene to be identified.
In an alternative embodiment, in addition to determining a dynamic light source region of a scene to be identified by using the first downsampled image and the second downsampled image, an overexposed light source region of the scene to be identified may be determined according to the first downsampled image and the second downsampled image, and after determining the dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region, the region identification method provided in the present application further includes:
determining a second target pixel of the first downsampled image, the pixel value of which is equal to the pixel value of the corresponding pixel in the second downsampled image;
Determining an overexposure source region of the scene to be identified according to the second target pixel;
and carrying out high dynamic range synthesis processing on the plurality of scene images according to the dynamic region and the overexposure source region to obtain a high dynamic range image of the scene to be identified.
The method for determining the overexposure light source region of the scene to be identified by the electronic device according to the second target pixel may be correspondingly implemented by referring to the method for determining the dynamic light source region of the scene to be identified according to the first target pixel in the above embodiment, which is not described herein.
In this embodiment, the electronic device may further multiplex the acquired multiple scene images with different exposure values, and perform high dynamic range synthesis processing on the multiple scene images according to the determined dynamic area and the overexposure source area, to obtain a high dynamic range image of the scene to be identified.
For example, when performing high dynamic range synthesis on the acquired scene images, for the corresponding pixels of each scene image outside the synthesis optimization region (the union of the dynamic region and the overexposed light source region), that is, the pixels at the same position in each scene image, the electronic device may determine weights for high dynamic range synthesis according to the luminance information of the corresponding pixels and perform weighted synthesis on those pixels according to the weights to obtain their synthesized values. For the corresponding pixels of each scene image inside the dynamic region, if the corresponding pixel is not overexposed, the electronic device takes the pixel value of the corresponding pixel in the first reference scene image as the synthesized value; if the corresponding pixel is overexposed, it takes the pixel value of the corresponding pixel in the non-first-reference scene image with the highest exposure quality score as the synthesized value. For the corresponding pixels of each scene image inside the overexposed light source region, the electronic device takes the pixel value of the corresponding pixel in the non-first-reference scene image with the highest exposure quality score as the synthesized value. Finally, the electronic device generates the high dynamic range image of the scene to be identified from the synthesized value at each corresponding pixel.
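The per-region synthesis rule above can be sketched with NumPy as follows. The hat-shaped luminance weighting, the 0.95 overexposure level, and all function and parameter names are illustrative assumptions; the text does not specify the exact weighting function or how the exposure quality score is computed.

```python
import numpy as np

def hdr_synthesize(images, exposure_scores, ref_index,
                   dynamic_mask, overexposed_source_mask,
                   overexposure_level=0.95):
    """Region-guided HDR synthesis over aligned, normalized images."""
    images = np.asarray(images, dtype=np.float64)
    result = np.empty(images.shape[1:])

    # Outside the synthesis optimization region: weighted synthesis,
    # weighting mid-tone (well-exposed) pixels most heavily.
    weights = np.clip(1.0 - 2.0 * np.abs(images - 0.5), 1e-6, None)
    blended = np.sum(weights * images, axis=0) / np.sum(weights, axis=0)
    outside = ~(dynamic_mask | overexposed_source_mask)
    result[outside] = blended[outside]

    # Non-reference image with the highest exposure quality score.
    order = [i for i in np.argsort(exposure_scores)[::-1] if i != ref_index]
    best_non_ref = images[order[0]]

    # Dynamic region: reference pixel unless it is overexposed.
    ref = images[ref_index]
    use_ref = dynamic_mask & (ref < overexposure_level)
    result[use_ref] = ref[use_ref]
    use_alt = dynamic_mask & ~use_ref
    result[use_alt] = best_non_ref[use_alt]

    # Overexposed light source region: always the best non-reference pixel.
    result[overexposed_source_mask] = best_non_ref[overexposed_source_mask]
    return result
```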
In an optional embodiment, after determining the dynamic area of the scene to be identified according to the dynamic light source area and the dynamic non-light source area, the area identification method provided in the present application further includes:
acquiring a plurality of scene images to be synthesized of a scene to be identified;
and carrying out image synthesis processing on the plurality of scene images to be synthesized according to the dynamic region to obtain a synthesized image of the scene to be identified, wherein the image synthesis processing comprises at least one of high dynamic range synthesis processing, noise reduction synthesis processing and super-resolution synthesis processing.
In this embodiment, after determining the dynamic region of the scene to be identified, the electronic device may use the determined dynamic region to guide multi-frame image synthesis processing. First, a plurality of scene images to be synthesized of the scene to be identified are acquired. The number of scene images to be synthesized is not limited here and depends on the actual needs of the subsequent image synthesis processing. In addition, the embodiment of the present application does not limit the type of image synthesis processing performed, which may be configured according to actual synthesis requirements. For example, if a high dynamic range image of the scene to be identified is needed, high dynamic range synthesis processing may be performed; if a clear image of the scene to be identified is needed, noise reduction synthesis processing may be performed; if an image with a resolution greater than the image acquisition resolution of the electronic device is needed, super-resolution synthesis processing may be performed; and so on.
For example, assuming that the configured image synthesis processing is high dynamic range synthesis processing, the acquired scene images with different exposure values can be used directly as the scene images to be synthesized. When performing high dynamic range synthesis on these images, for the corresponding pixels of each scene image to be synthesized outside the dynamic region (that is, the pixels at the same position in each scene image to be synthesized), the electronic device may determine weights for high dynamic range synthesis according to the luminance information of the corresponding pixels and perform weighted synthesis on those pixels according to the weights to obtain their synthesized values. For the corresponding pixels of each scene image to be synthesized inside the dynamic region, the electronic device takes the pixel value of the corresponding pixel in the scene image to be synthesized with the optimal exposure quality as the synthesized value. Finally, the electronic device generates the high dynamic range composite image of the scene to be identified from the synthesized value at each corresponding pixel.
Assuming that the configured image synthesis processing is noise reduction synthesis processing, the electronic device may control the shooting assembly to shoot the scene to be identified with the same shooting parameters to obtain a plurality of scene images to be synthesized. When performing noise reduction synthesis on these images, for the corresponding pixels of each scene image to be synthesized outside the dynamic region (that is, the pixels at the same position in each scene image to be synthesized), the electronic device may calculate the average pixel value of the scene images to be synthesized at the corresponding pixel as the synthesized value. For the corresponding pixels of each scene image to be synthesized inside the dynamic region, the electronic device takes the pixel value of the corresponding pixel in the scene image to be synthesized with the highest definition as the synthesized value. Finally, the electronic device generates the noise reduction composite image of the scene to be identified from the synthesized value at each corresponding pixel.
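The noise-reduction variant can be sketched similarly: frames are averaged outside the dynamic region (temporal noise reduction), while inside it the pixel comes from a single sharp frame to avoid ghosting. `sharpness_scores` and the other names are illustrative assumptions, since the text does not specify how definition (sharpness) is measured.

```python
import numpy as np

def denoise_synthesize(images, sharpness_scores, dynamic_mask):
    """Region-guided noise-reduction synthesis over aligned frames."""
    images = np.asarray(images, dtype=np.float64)
    result = np.mean(images, axis=0)      # average outside the dynamic region
    sharpest = images[int(np.argmax(sharpness_scores))]
    result[dynamic_mask] = sharpest[dynamic_mask]  # sharpest frame inside it
    return result
```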
As can be seen from the above, the present application divides the dynamic regions that may exist in a scene into two types, a dynamic light source region and a dynamic non-light source region, acquires a plurality of scene images of the scene to be identified with different exposure values, and uses the acquired scene images to identify the dynamic light source region and the dynamic non-light source region in a targeted manner. Interference between the two types of region can thus be avoided, ensuring the accuracy of the identified dynamic light source region and dynamic non-light source region. Finally, the dynamic region of the scene to be identified is determined from the dynamic light source region and the dynamic non-light source region, so that the accuracy of the determined dynamic region is ensured.
In order to better implement the region identification method in the embodiment of the present application, on the basis of the region identification method, the present application further provides a region identification apparatus, as shown in fig. 6, where the region identification apparatus 300 includes:
an acquiring module 310, configured to acquire a plurality of scene images with different exposure values of a scene to be identified;
a first determining module 320, configured to determine a dynamic light source area and a dynamic non-light source area of a scene to be identified according to the plurality of scene images, respectively;
The second determining module 330 is configured to determine a dynamic area of the scene to be identified according to the dynamic light source area and the dynamic non-light source area.
In an alternative embodiment, the first determining module 320 is configured to:
determining a first reference scene image from a plurality of scene images;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first reference scene image and the non-first reference scene image in the plurality of scene images in the corresponding pixels.
In an alternative embodiment, the first determining module 320 is configured to:
respectively selecting a first target channel image of a first reference scene image and a second target channel image of a non-first reference scene image, wherein the first target channel image and the second target channel image correspond to the same color channel;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first target channel image and the second target channel image in the corresponding pixels.
In an alternative embodiment, the first determining module 320 is configured to:
respectively acquiring a first downsampled image of a first target channel image and a second downsampled image of a second target channel image;
and determining a dynamic light source area of the scene to be identified according to the pixel value difference of the first downsampled image and the second downsampled image in the corresponding pixels.
In an alternative embodiment, the first determining module 320 is configured to:
determining a first target pixel of a first downsampled image;
if the first target pixel meets the preset condition, determining a dynamic light source area of the scene to be identified according to the first target pixel;
wherein, the preset conditions are:
when the exposure value corresponding to the first downsampled image is larger than the exposure value corresponding to the second downsampled image, the pixel value is smaller than the pixel value of the corresponding pixel of the first target pixel in the second downsampled image; or alternatively, the process may be performed,
when the exposure value corresponding to the first downsampled image is smaller than the exposure value corresponding to the second downsampled image, the pixel value is greater than the pixel value of the corresponding pixel of the first target pixel in the second downsampled image.
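The preset condition above can be sketched as a vectorized comparison: a higher exposure value should make scene content at least as bright, so pixels that violate this ordering suggest a light source that changed state between the shots. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def first_target_pixel_mask(first_downsampled, second_downsampled,
                            ev_first, ev_second):
    """Flag pixels of the first downsampled image that violate the
    normal exposure ordering relative to the second downsampled image."""
    if ev_first > ev_second:
        # Higher exposure yet darker than in the lower-exposure image.
        return first_downsampled < second_downsampled
    # Lower exposure yet brighter than in the higher-exposure image.
    return first_downsampled > second_downsampled
```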
In an alternative embodiment, the first determining module 320 is configured to:
generating a first mask image of the scene to be identified according to the first target pixel;
optimizing the first mask image to obtain an optimized first mask image, wherein the optimization processing includes at least one of erosion processing, dilation processing, and connected region analysis processing;
and determining a dynamic light source area of the scene to be identified according to the optimized first mask image.
In an optional embodiment, the area identifying apparatus provided in the present application further includes a first image synthesis module, where the first image synthesis module is configured to:
determining a second target pixel of the first downsampled image, the pixel value of which is equal to the pixel value of the corresponding pixel in the second downsampled image;
determining an overexposure source region of the scene to be identified according to the second target pixel;
and carrying out high dynamic range synthesis processing on the plurality of scene images according to the dynamic region and the overexposure source region to obtain a high dynamic range image of the scene to be identified.
In an alternative embodiment, the first determining module 320 is configured to:
determining a second reference scene image from the plurality of scene images;
determining a target binarization threshold according to the pixel values of the second reference scene image;
according to the target binarization threshold value, obtaining binarization images of a plurality of scene images;
and obtaining a sum value image of the binarized images of the plurality of scene images, and determining a dynamic non-light source area of the scene to be identified according to the pixel values of the sum value image.
In an alternative embodiment, the first determining module 320 is configured to:
selecting a third target channel image of the second reference scene image;
and determining a target binarization threshold according to the pixel value of the third target channel image.
In an alternative embodiment, the first determining module 320 is configured to:
Acquiring a third downsampled image of a third target channel image;
determining the target pixel number of pixels with pixel values in a preset pixel value interval in the third downsampled image;
when the target pixel number reaches the preset pixel number, determining a first preset binarization threshold value as a target binarization threshold value, otherwise, determining a second preset binarization threshold value as a target binarization threshold value, wherein the first preset binarization threshold value is larger than the second preset binarization threshold value.
In an alternative embodiment, the first determining module 320 is configured to:
determining a third target pixel of the sum image, wherein the pixel value of the third target pixel is a preset pixel threshold value;
and determining a dynamic non-light source area of the scene to be identified according to the third target pixel.
In an alternative embodiment, the first determining module 320 is configured to:
generating a second mask image of the scene to be identified according to the third target pixel;
optimizing the second mask image to obtain an optimized second mask image, wherein the optimization processing includes at least one of erosion processing, dilation processing, and connected region analysis processing;
and determining a dynamic non-light source area of the scene to be identified according to the optimized second mask image.
In an optional embodiment, the area identifying apparatus provided in the present application further includes a second image synthesis module, where the second image synthesis module is configured to:
acquiring a plurality of scene images to be synthesized of a scene to be identified;
and carrying out image synthesis processing on the plurality of scene images to be synthesized according to the dynamic region to obtain a synthesized image of the scene to be identified, wherein the image synthesis processing comprises at least one of high dynamic range synthesis processing, noise reduction synthesis processing and super-resolution synthesis processing.
It should be noted that, the region identifying device provided in the embodiment of the present application and the region identifying method in the foregoing embodiments belong to the same concept, and detailed implementation processes of the region identifying device are shown in the region identifying method embodiment, which is not described herein.
The embodiment of the present application further provides an electronic device, which may be a mobile electronic device provided with a shooting assembly, such as a smart phone, tablet computer, palmtop computer, or notebook computer, or a fixed electronic device provided with a shooting assembly, such as a desktop computer, television, or advertising player. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 includes a processor 110 having one or more processing cores, a memory 120 having one or more computer-readable storage media, and a computer program stored on the memory 120 and executable on the processor. The processor 110 is electrically connected to the memory 120. Those skilled in the art will appreciate that the electronic device 100 may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
The processor 110 is a control center of the electronic device 100, connects various parts of the entire electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or loading software programs and/or modules stored in the memory 120 and invoking data stored in the memory 120, thereby performing overall monitoring of the electronic device 100.
In the embodiment of the present application, the processor 110 in the electronic device 100 loads the instructions corresponding to the processes of one or more application programs into the memory 120 according to the following steps, and the processor 110 executes the application programs stored in the memory 120, so as to implement the region identification method provided in the present application, for example:
acquiring a plurality of scene images with different exposure values of a scene to be identified;
respectively determining a dynamic light source region and a dynamic non-light source region of a scene to be identified according to a plurality of scene images;
and determining a dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be repeated here.
Optionally, as shown in fig. 7, the electronic device 100 may further include: the touch display 130, the radio frequency circuit 140, the photographing assembly 150, the input unit 160 and the power supply 170. The processor 110 is electrically connected to the touch display 130, the radio frequency circuit 140, the photographing assembly 150, the input unit 160, and the power supply 170, respectively.
The touch display 130 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 130 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel using a finger, stylus, or any other suitable object or accessory) and generate corresponding operation instructions, according to which the corresponding programs are executed. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 110, and can also receive and execute commands sent by the processor 110. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 130 to implement the input and output functions.
In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display 130 may also implement an input function as part of the input unit 160.
The radio frequency circuit 140 may be configured to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another electronic device and exchange signals with it.
The shooting assembly 150 is configured to collect an image, and at least includes a lens and an image sensor, wherein the lens is used for projecting an external optical signal to the image sensor, and the image sensor is used for performing photoelectric conversion on the optical signal projected by the lens, and converting the optical signal into a usable electrical signal, so as to obtain a digitized image.
The input unit 160 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 170 is used to power the various components of the electronic device 100. Alternatively, the power supply 170 may be logically connected to the processor 110 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system. The power supply 170 may also include one or more direct-current or alternating-current power supplies, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and the like.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements steps in any of the region identification methods provided in the embodiments of the present application.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not described herein again.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Because the computer program stored in the storage medium can execute the steps in any of the region identification methods provided in the embodiments of the present application, it can achieve the beneficial effects of any of those methods; these effects are detailed in the previous embodiments and are not repeated herein.
The foregoing has described in detail a region identification method, an apparatus, a computer-readable storage medium, and an electronic device according to embodiments of the present application. Specific examples have been applied herein to illustrate the principles and implementations of the present application, and the description of the foregoing embodiments is only intended to aid in understanding the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application. In view of the above, the contents of this description should not be construed as limiting the present application.

Claims (16)

1. A method of region identification, comprising:
acquiring a plurality of scene images with different exposure values of a scene to be identified;
respectively determining a dynamic light source region and a dynamic non-light source region of the scene to be identified according to the plurality of scene images;
and determining the dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region.
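The patent provides no reference implementation; assuming the two sub-steps each yield a boolean mask, the final step of claim 1 reduces to a per-pixel union. A minimal NumPy sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def combine_dynamic_regions(light_mask: np.ndarray, nonlight_mask: np.ndarray) -> np.ndarray:
    """The dynamic region of the scene is the union of the dynamic
    light source region and the dynamic non-light source region."""
    return light_mask | nonlight_mask

# Toy 3x3 masks: one dynamic light source pixel, one dynamic non-light source pixel.
light = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=bool)
nonlight = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
dynamic = combine_dynamic_regions(light, nonlight)   # two dynamic pixels
```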
2. The method of claim 1, wherein the determining the dynamic light source region of the scene to be identified from the plurality of scene images comprises:
determining a first reference scene image from the plurality of scene images;
and determining a dynamic light source region of the scene to be identified according to the difference in pixel values, at corresponding pixels, between the first reference scene image and a non-first-reference scene image among the plurality of scene images.
3. The method of claim 2, wherein the determining the dynamic light source region of the scene to be identified based on the difference in pixel values at corresponding pixels of the first reference scene image and a non-first reference scene image of the plurality of scene images comprises:
respectively selecting a first target channel image of the first reference scene image and a second target channel image of the non-first reference scene image, wherein the first target channel image and the second target channel image correspond to the same color channel;
and determining a dynamic light source region of the scene to be identified according to the difference in pixel values of the first target channel image and the second target channel image at corresponding pixels.
4. The method of claim 3, wherein the determining a dynamic light source region of the scene to be identified according to the difference in pixel values of the first target channel image and the second target channel image at corresponding pixels comprises:
respectively acquiring a first downsampled image of the first target channel image and a second downsampled image of the second target channel image;
and determining a dynamic light source region of the scene to be identified according to the difference in pixel values of the first downsampled image and the second downsampled image at corresponding pixels.
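Claim 4 does not fix a particular downsampling method; block averaging is one plausible choice. A NumPy sketch (the factor and edge-trimming policy are illustrative assumptions):

```python
import numpy as np

def downsample(channel: np.ndarray, factor: int = 2) -> np.ndarray:
    """Block-average downsampling of a single channel image.
    Edge rows/columns that do not fill a full block are trimmed."""
    h, w = channel.shape
    h, w = h - h % factor, w - w % factor
    blocks = channel[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = downsample(img)   # 2x2 image of block means
```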
5. The method of claim 4, wherein the determining the dynamic light source region of the scene to be identified based on the difference in pixel values of the corresponding pixels of the first downsampled image and the second downsampled image comprises:
determining a first target pixel of the first downsampled image;
if the first target pixel meets a preset condition, determining a dynamic light source region of the scene to be identified according to the first target pixel;
wherein the preset condition is:
when the exposure value corresponding to the first downsampled image is larger than the exposure value corresponding to the second downsampled image, the pixel value of the first target pixel is smaller than the pixel value of its corresponding pixel in the second downsampled image; or
when the exposure value corresponding to the first downsampled image is smaller than the exposure value corresponding to the second downsampled image, the pixel value of the first target pixel is larger than the pixel value of its corresponding pixel in the second downsampled image.
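The preset condition of claim 5 flags pixels whose brightness moves against the exposure ordering: a pixel that is darker in the longer exposure (or brighter in the shorter one) suggests a light source that changed between captures. A NumPy sketch, assuming 8-bit grayscale channel images (the function name and sample values are illustrative):

```python
import numpy as np

def light_source_candidates(ref: np.ndarray, ref_ev: float,
                            other: np.ndarray, other_ev: float) -> np.ndarray:
    """Pixels whose brightness contradicts the exposure ordering are
    candidate dynamic light source pixels (e.g. a lamp that switched
    off between the two captures)."""
    if ref_ev > other_ev:
        return ref < other   # longer exposure, yet darker
    return ref > other       # shorter exposure, yet brighter

long_exp  = np.array([[200,  10], [180, 120]], dtype=np.uint8)
short_exp = np.array([[ 90, 100], [ 60,  40]], dtype=np.uint8)
# Pixel (0, 1) is darker in the longer exposure: a light went out there.
mask = light_source_candidates(long_exp, ref_ev=2.0, other=short_exp, other_ev=0.0)
```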
6. The method of claim 5, wherein the determining the dynamic light source region of the scene to be identified from the first target pixel comprises:
generating a first mask image of the scene to be identified according to the first target pixel;
optimizing the first mask image to obtain an optimized first mask image, wherein the optimizing process comprises at least one of erosion processing, dilation processing, and connected-region analysis processing;
and determining the dynamic light source region of the scene to be identified according to the optimized first mask image.
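The three optimization operations of claim 6 can be sketched without an image library. Below is a minimal pure-NumPy version with a 3x3 structuring element and 4-connectivity; the structuring element, `min_size` criterion, and the choice of opening (erosion then dilation) are illustrative assumptions, not requirements of the patent:

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: keep a pixel only if its whole neighborhood is set."""
    h, w = mask.shape
    p = np.pad(mask.astype(bool), 1, constant_values=False)
    out = np.ones((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: set a pixel if any neighbor is set."""
    h, w = mask.shape
    p = np.pad(mask.astype(bool), 1, constant_values=False)
    out = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def drop_small_regions(mask: np.ndarray, min_size: int) -> np.ndarray:
    """Connected-region analysis: keep only 4-connected components with
    at least min_size pixels (min_size is an illustrative criterion)."""
    mask = mask.astype(bool)
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                seen[sy, sx] = True
                stack, comp = [(sy, sx)], [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                            comp.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y, x] = True
    return out

noisy = np.zeros((5, 5), dtype=bool)
noisy[1:4, 1:4] = True          # a solid 3x3 dynamic light source
noisy[0, 4] = True              # an isolated false-positive pixel
opened = dilate(erode(noisy))   # morphological opening removes the speck
cleaned = drop_small_regions(noisy, min_size=2)
```

Either route removes the one-pixel speck while preserving the 3x3 block.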
7. The method of claim 6, wherein after determining the dynamic region of the scene to be identified based on the dynamic light source region and the dynamic non-light source region, the method further comprises:
determining a second target pixel of the first downsampled image, a pixel value of the second target pixel being equal to the pixel value of its corresponding pixel in the second downsampled image;
determining an overexposure source region of the scene to be identified according to the second target pixel;
and carrying out high dynamic range synthesis processing on the plurality of scene images according to the dynamic region and the overexposure source region to obtain a high dynamic range image of the scene to be identified.
8. The method of any of claims 1-7, wherein the determining a dynamic non-light source region of the scene to be identified from the plurality of scene images comprises:
determining a second reference scene image from the plurality of scene images;
determining a target binarization threshold according to the pixel value of the second reference scene image;
obtaining binarized images of the plurality of scene images according to the target binarization threshold;
and obtaining a sum image of the binarized images of the plurality of scene images, and determining a dynamic non-light source region of the scene to be identified according to the pixel values of the sum image.
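The sum image of claim 8 counts in how many captures each pixel exceeds the target binarization threshold. One plausible reading (hedged; the patent only says candidates are pixels whose summed value matches a preset threshold, per claim 11) is that an intermediate count singles out pixels that flipped across captures. A NumPy sketch with illustrative values:

```python
import numpy as np

def sum_of_binarized(images, threshold):
    """Binarize each scene image with the target threshold and sum the
    binary maps; the per-pixel sum counts how many captures exceeded
    the threshold at that pixel."""
    return sum((img > threshold).astype(np.uint8) for img in images)

imgs = [np.array([[10, 200], [200, 90]], dtype=np.uint8),
        np.array([[10, 200], [ 20, 90]], dtype=np.uint8),
        np.array([[10, 200], [ 20, 90]], dtype=np.uint8)]
s = sum_of_binarized(imgs, threshold=128)
# Candidate dynamic pixels: summed response equals a preset value
# (here 1, i.e. bright in exactly one capture -- an illustrative choice).
candidates = (s == 1)
```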
9. The method of claim 8, wherein the determining a target binarization threshold from pixel values of the second reference scene image comprises:
selecting a third target channel image of the second reference scene image;
and determining a target binarization threshold according to the pixel value of the third target channel image.
10. The method of claim 9, wherein the determining a target binarization threshold from pixel values of the third target channel image comprises:
acquiring a third downsampled image of the third target channel image;
determining a target pixel number of pixels whose pixel values fall within a preset pixel value interval in the third downsampled image;
when the target pixel number reaches a preset pixel number, determining a first preset binarization threshold as the target binarization threshold; otherwise, determining a second preset binarization threshold as the target binarization threshold, wherein the first preset binarization threshold is larger than the second preset binarization threshold.
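The threshold selection of claim 10 is a simple count-and-compare: if enough pixels fall within the preset (presumably bright) value interval, the larger threshold is chosen. A NumPy sketch; every numeric value below is an illustrative assumption, as the patent does not specify them:

```python
import numpy as np

def pick_threshold(downsampled, interval=(200, 255), min_count=4,
                   high_thresh=180, low_thresh=100):
    """Choose the target binarization threshold: the larger one when the
    scene contains enough pixels in the preset value interval, the
    smaller one otherwise."""
    lo, hi = interval
    count = int(np.count_nonzero((downsampled >= lo) & (downsampled <= hi)))
    return high_thresh if count >= min_count else low_thresh

bright = np.full((3, 3), 230, dtype=np.uint8)   # 9 pixels in [200, 255]
dim = np.full((3, 3), 50, dtype=np.uint8)       # none in the interval
```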
11. The method of claim 10, wherein the determining the dynamic non-light source region of the scene to be identified from the pixel values of the sum image comprises:
determining a third target pixel of the sum image, wherein a pixel value of the third target pixel equals a preset pixel threshold;
and determining a dynamic non-light source region of the scene to be identified according to the third target pixel.
12. The method of claim 11, wherein the determining the dynamic non-light source region of the scene to be identified from the third target pixel comprises:
generating a second mask image of the scene to be identified according to the third target pixel;
optimizing the second mask image to obtain an optimized second mask image, wherein the optimizing process comprises at least one of erosion processing, dilation processing, and connected-region analysis processing;
and determining the dynamic non-light source region of the scene to be identified according to the optimized second mask image.
13. The method of claim 1, wherein after the determining the dynamic region of the scene to be identified based on the dynamic light source region and the dynamic non-light source region, the method further comprises:
acquiring a plurality of scene images to be synthesized of the scene to be identified;
and carrying out image synthesis processing on the plurality of scene images to be synthesized according to the dynamic region to obtain a synthesized image of the scene to be identified, wherein the image synthesis processing comprises at least one of high dynamic range synthesis processing, noise reduction synthesis processing and super-resolution synthesis processing.
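A common reason to feed the dynamic region into synthesis (claim 13) is ghost suppression: inside the dynamic region, pixels are taken from a single reference capture so moving content is not blended into ghosts. A NumPy sketch; the simple averaging of static regions is an illustrative stand-in for the HDR, noise-reduction, or super-resolution synthesis the claim actually covers:

```python
import numpy as np

def fuse_with_dynamic_region(images, dynamic_mask, ref_index=0):
    """Inside the dynamic region, copy pixels from one reference capture
    (avoiding ghosting); elsewhere, average all captures."""
    stack = np.stack([img.astype(np.float64) for img in images])
    fused = stack.mean(axis=0)
    fused[dynamic_mask] = stack[ref_index][dynamic_mask]
    return fused

a = np.array([[100.0, 100.0]])
b = np.array([[200.0,  50.0]])
mask = np.array([[False, True]])   # the second pixel is dynamic
out = fuse_with_dynamic_region([a, b], mask, ref_index=0)
```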
14. A region identification apparatus, comprising:
an acquisition module, configured to acquire a plurality of scene images with different exposure values of a scene to be identified;
a first determining module, configured to respectively determine a dynamic light source region and a dynamic non-light source region of the scene to be identified according to the plurality of scene images;
and a second determining module, configured to determine a dynamic region of the scene to be identified according to the dynamic light source region and the dynamic non-light source region.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the region identification method according to any of claims 1 to 13.
16. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps in the region identification method according to any one of claims 1 to 13 when executing the computer program.
CN202111655400.2A 2021-12-30 2021-12-30 Region identification method, device, computer readable storage medium and electronic equipment Pending CN116416521A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111655400.2A CN116416521A (en) 2021-12-30 2021-12-30 Region identification method, device, computer readable storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116416521A true CN116416521A (en) 2023-07-11

Family

ID=87053294




Legal Events

Date Code Title Description
PB01 Publication