CN114418881A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents
- Publication number: CN114418881A
- Application number: CN202210050578.2A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/90: Image enhancement or restoration; Dynamic range modification of images or parts thereof
- G06T7/10: Image analysis; Segmentation; Edge detection
- G06T7/90: Image analysis; Determination of colour characteristics
- G06T2207/10004: Image acquisition modality; Still image; Photographic image
- G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/30252: Subject of image; Vehicle exterior; Vicinity of vehicle
Abstract
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, and relates to the field of computer technologies, in particular to the fields of image processing and map technologies. The specific implementation scheme is as follows: determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image; performing illumination enhancement on the image to be processed to obtain an enhanced image; and obtaining a road image base map corresponding to the image to be processed based on the enhanced image. By adopting this technical scheme, the visualization degree of the road image base map can be improved.
Description
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to the field of image processing technology.
Background
The road image base map can be applied in fields such as electronic maps and automatic driving, providing necessary information for positioning, navigation, trajectory prediction, and the like. The images used for constructing the road image base map are generally acquired by vehicles, and because images captured under good illumination conditions have better visibility and clear, obvious road traffic markings, they are generally acquired in the daytime.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, and storage medium.
According to a first aspect of the present disclosure, there is provided an image processing method including:
determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image;
carrying out illumination enhancement on an image to be processed to obtain an enhanced image;
and obtaining a road image base map corresponding to the image to be processed based on the enhanced image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising:
the determining module is used for determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image;
the illumination enhancement module is used for carrying out illumination enhancement on the image to be processed to obtain an enhanced image;
and the first image generation module is used for obtaining a road image base map corresponding to the image to be processed based on the enhanced image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method in any of the embodiments of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
According to the technical scheme of the present disclosure, an image to be processed with lower brightness is determined from at least one road image based on the brightness information of the road images, and illumination enhancement is performed on the image to be processed to obtain an enhanced image with higher brightness, better visibility, and clear and obvious road traffic markings, which is favorable for restoring detail information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow diagram of an image processing method according to a first embodiment of the present disclosure;
fig. 2 is a schematic flow chart of step S120 according to the first embodiment of the present disclosure;
fig. 3 is a schematic flow chart of step S110 according to the first embodiment of the present disclosure;
fig. 4 is a schematic flow chart of an image processing method according to a second embodiment of the present disclosure;
FIG. 5 is a block flow diagram of an example application of an image processing method according to the present disclosure;
fig. 6 is a block diagram of an image processing apparatus according to a third embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing an image processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present disclosure. As shown in fig. 1, the image processing method may include:
step S110, determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image;
s120, performing illumination enhancement on an image to be processed to obtain an enhanced image;
and step S130, obtaining a road image base map corresponding to the image to be processed based on the enhanced image.
In the embodiment of the present disclosure, at least one road image may be acquired by an image acquisition device, where the image acquisition device includes, but is not limited to, a camera, a vehicle data recorder, a mobile phone, a tablet computer, a navigation device, and the like. Alternatively, the image capturing device may be mounted on a vehicle, and the image capturing device captures an image of a road on which the vehicle is traveling during traveling of the vehicle. Illustratively, the at least one road image may be any one of a front view, a rear view, and a side view of the vehicle.
Alternatively, the brightness information of the road image may be used to characterize the illumination intensity of the environment in which the road is located. For example, in an environment with sufficient illumination intensity, such as in the daytime, the brightness of the road image is high; in dark environments such as tunnels, under viaducts, and at night, the brightness of the road image is low. Based on the brightness information of each of the at least one road image, an image to be processed with lower brightness can be determined from the at least one road image.
Optionally, in this embodiment of the present disclosure, illumination enhancement may be performed on the entire image to be processed or only on a local portion of it. By performing illumination enhancement on the image to be processed, the obtained enhanced image has higher brightness, better visibility, and clear and obvious road traffic markings. In addition, because a local area of the image to be processed is smaller than the whole image, performing illumination enhancement only on the local portion reduces the calculation amount of the enhancement processing and accelerates the calculation.
In the related art, in the process of constructing the road image base map, because the road image acquired in the low-illumination environment has low brightness, serious loss of detail information and poor visibility, the accuracy and recall rate of subsequent image identification are reduced, and the construction of the road image base map is not facilitated.
By adopting the method of the embodiment of the present disclosure, an image to be processed with lower brightness is determined from at least one road image based on the brightness information of the road images; performing illumination enhancement on it yields an enhanced image with higher brightness, better visibility, and clear and obvious road traffic markings, restoring detail information; and the road image base map corresponding to the image to be processed is determined using the enhanced image. Road images with lower brightness thus become suitable for constructing the road image base map, and the visualization degree of the base map is improved. Moreover, when the road image base map is used for subsequent image recognition, the accuracy and recall rate of the recognition are improved.
It should be noted that, because the traffic flow in the daytime is large, road traffic markings are often blocked by vehicles, so invalid images easily appear among the acquired road images. With the method of the embodiment of the present disclosure, road images acquired at night are suitable for constructing a road image base map, so acquisition can be performed at night when traffic flow is small, reducing occlusion of road traffic markings by vehicles and improving the effectiveness of road image acquisition.
In an embodiment, as shown in fig. 2, the performing, in step S120, illumination enhancement on the image to be processed to obtain an enhanced image may include:
step S210, determining a target enhancement area in the image to be processed based on vanishing points in the image to be processed;
and step S220, enhancing the target enhancement area based on the illumination information of the target enhancement area to obtain an enhanced image.
Illustratively, determining a target enhancement area in the image to be processed based on the vanishing point in the image to be processed comprises: taking the area below the vanishing point in the image to be processed as the target enhancement area. The vanishing point in the image to be processed can be the intersection point generated by the extension lines of the road traffic markings; taking the area below it as the target enhancement area fully retains the image of the road while removing redundant imagery of sky and roadside scenery.
According to the embodiment, the target enhancement area is determined by using the vanishing point of the image to be processed, and the calculation amount of enhancement processing can be reduced, so that the enhancement processing can be quickly performed on the target enhancement area based on the illumination information of the target enhancement area, and the speed of obtaining the enhanced image is increased.
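As an illustrative sketch of this step (the function name and the toy image are hypothetical, and detecting the vanishing point itself, e.g. from extended lane-marking lines, is outside this snippet), cropping the region below the vanishing point can be as simple as:

```python
def target_enhancement_region(image, vanishing_row):
    """Return the sub-image below the vanishing point.

    `image` is a list of pixel rows; `vanishing_row` is the row index of
    the vanishing point (e.g. the intersection of extended lane-marking
    lines). Everything above it (sky, distant scenery) is discarded.
    """
    return image[vanishing_row:]

# A 6-row toy "image": rows 0-2 are sky, rows 3-5 are road.
img = [["sky"] * 4 for _ in range(3)] + [["road"] * 4 for _ in range(3)]
region = target_enhancement_region(img, vanishing_row=3)
```

Only the cropped `region` is then passed to the illumination-enhancement step, which is what reduces the calculation amount.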
In an embodiment, the step S220 of enhancing the target enhancement region based on the illumination information of the target enhancement region to obtain an enhanced image may include:
carrying out illumination estimation on the target enhancement area to obtain illumination information corresponding to the target enhancement area;
processing each pixel of the target enhancement area based on the illumination information and a preset factor to obtain an enhanced image; the preset factor is determined based on atmospheric scattering information of a plurality of images in a road scene corresponding to the image to be processed.
Optionally, the target enhancement region is enhanced using the LIME (Low-light Image Enhancement) algorithm. Exemplarily, the maximum pixel value of each pixel of the target enhancement region over the R (red), G (green) and B (blue) channels is used as the illumination information of that pixel, yielding the illumination information T corresponding to the target enhancement region, where T can be represented by formula (1):

T(x) = max_{c ∈ {R,G,B}} I^c(x) (1)

where x is a pixel, I is the target enhancement region, and c ranges over the color channels of the target enhancement region.
The pixels of the target enhancement region in the R channel, G channel and B channel are respectively normalized to obtain the normalized image N^c of the target enhancement region in each color channel, where N^c can be expressed by formula (2) (assuming 8-bit pixel values):

N^c = I^c / 255, c ∈ {R,G,B} (2)
Using a Retinex model, based on the illumination information T corresponding to the target enhancement region, the preset factor k, and the normalized image N^c of the target enhancement region in each color channel, the reflection information E^c of the target enhancement region in the R channel, G channel and B channel is determined respectively, where E^c can be expressed by formula (3):

E^c = I_one - [(I_one - N^c) - k*(I_one - T)] ./ T, c ∈ {R,G,B} (3)

where I_one represents an all-ones matrix and ./T represents pixel-by-pixel (element-wise) division by T. The preset factor may be obtained from statistics of atmospheric scattering information of a plurality of images in the road scene corresponding to the image to be processed; such a road scene may be, for example, a low-illumination scene such as a tunnel, a viaduct, or night. Preferably, k is 0.905.
The reflection information E^R of the target enhancement region in the R channel, E^G in the G channel, and E^B in the B channel are superimposed as the three color channels to obtain the enhanced image of the target enhancement region.
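The chain of formulas (1)-(3) can be sketched per pixel as follows (a minimal illustration with hypothetical names; the real LIME algorithm additionally refines the illumination map over the whole region, and the clamping to [0, 1] here is an added safeguard, not part of the formulas):

```python
EPS = 1e-3   # guards against division by zero for fully black pixels
K = 0.905    # preset factor from the text, statistically derived from
             # atmospheric scattering information of low-light road scenes

def enhance_pixel(rgb, k=K):
    """Retinex-style enhancement of one pixel, following formulas (1)-(3).

    rgb: (R, G, B) values in 0..255. Returns enhanced channels in [0, 1].
    """
    n = [v / 255.0 for v in rgb]        # formula (2): channel normalization
    t = max(max(n), EPS)                # formula (1): illumination estimate
    # formula (3): E^c = 1 - [(1 - N^c) - k*(1 - T)] / T, clamped to [0, 1]
    return [min(1.0, max(0.0, 1.0 - ((1.0 - nc) - k * (1.0 - t)) / t))
            for nc in n]

dark = (64, 32, 16)                     # a dim road pixel
enhanced = enhance_pixel(dark)          # dominant channel gets much brighter
```

Running the three channel results over a whole region and stacking them corresponds to the superposition step described above.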
In the embodiment, the enhancement processing is performed on the target enhancement area by using the illumination information and the preset factor, so that the enhancement processing is more suitable for a road scene under a low illumination condition, and an enhanced image is more natural.
In one embodiment, as shown in fig. 3, the step S110 of determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image includes:
step S310, determining the brightness characteristic distribution information of the ith road image based on the brightness information of the ith road image in at least one road image; wherein i is an integer greater than or equal to 1;
step S320, determining the low-brightness area ratio corresponding to the ith road image based on the brightness feature distribution information;
and step S330, determining the ith road image as the image to be processed under the condition that the low-brightness area ratio meets the preset condition.
Alternatively, step S310 may include: converting the ith road image from an RGB color space image into an HSV color space image, where the HSV color space describes the ith road image in three dimensions: hue (Hue), saturation (Saturation), and brightness (Value); acquiring the V-channel image of the ith road image; dividing the pixels of the V-channel image of the ith road image into N luminance regions; and counting the ratio of each luminance region. In this way, the luminance characteristic distribution information of the ith road image can be accurately extracted. To increase the calculation speed, the V-channel image of the ith road image may first be sampled at a preset step length, and the pixels of the sampled V-channel image then divided into luminance regions. The step length can be selected and adjusted in advance according to actual needs; for example, it may include, but is not limited to, 4 pixels.
Exemplarily, step S320 may include: counting the combined ratio of the 1st luminance region to the nth luminance region, where n and N are integers greater than or equal to 1 and n is less than N, and taking this ratio as the low-luminance area ratio p, which can be expressed by formula (4):

p = (Σ_{j=1}^{n} hist_j) / (Σ_{j=1}^{N} hist_j) (4)

where j is an integer greater than or equal to 1 and less than or equal to N, and hist_j indicates the number of pixels in the jth luminance region. For example, N = 32 and n = 6.
For example, the low-luminance area ratio meeting the preset condition may mean that it exceeds a first threshold. For instance, when the combined ratio of the 1st to 6th luminance regions is greater than 0.65, it may be determined that low-luminance areas occupy a large share of the V-channel image of the ith road image, that the ith road image is dark as a whole and is a low-luminance image, and the ith road image may be determined as an image to be processed.
In the above embodiment, the low-luminance area ratio of the ith road image is determined according to the luminance distribution information of the ith road image, and the ith road image is used as the image to be processed when the low-luminance area ratio meets the preset condition, so that the accuracy of determining the low-luminance image can be improved.
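Steps S310-S330 can be sketched as follows (a minimal illustration with hypothetical names, using V = max(R, G, B) for the HSV value channel and the example values N = 32, n = 6, threshold 0.65 from the text):

```python
def is_low_luminance(pixels, num_bins=32, low_bins=6, threshold=0.65):
    """Classify an image as low-luminance per steps S310-S330.

    pixels: flat list of (R, G, B) tuples (already sampled, e.g. every
    4th pixel). V = max(R, G, B) approximates the HSV value channel;
    the V values are binned into num_bins luminance regions and
    p = share of the first low_bins bins, as in formula (4).
    """
    hist = [0] * num_bins
    for r, g, b in pixels:
        v = max(r, g, b)                              # HSV V channel
        hist[min(v * num_bins // 256, num_bins - 1)] += 1
    p = sum(hist[:low_bins]) / max(sum(hist), 1)
    return p > threshold, p

# 80% very dark pixels, 20% bright ones -> p = 0.8 > 0.65
dark_pixels = [(10, 12, 8)] * 80 + [(200, 210, 190)] * 20
low, p = is_low_luminance(dark_pixels)
```

An image flagged `low` would then also be checked against the darkness coefficient before being taken as an image to be processed.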
In one embodiment, the step S330, in the case that the low-luminance area ratio meets the preset condition, determining the ith road image as the image to be processed includes:
determining a brightness standard deviation based on the brightness characteristic distribution information of the ith road image;
determining a darkness coefficient of the ith road image based on the brightness standard deviation and the low-brightness area ratio;
and determining the ith road image as the image to be processed under the condition that the low-brightness area ratio is greater than the first threshold value and the darkness coefficient is greater than the second threshold value.
Illustratively, the luminance standard deviation may be represented by δ, the first threshold may be 0.65, the second threshold may be 6.0, and the darkness coefficient r of the i-th road image may be determined by the following formula (5):
in the case where p is greater than or equal to 0.65 and r is greater than or equal to 6.0, it can be determined that the luminance of the ith road image is low and the ith road image is excessively dark as a whole.
In this embodiment, when the low-luminance area ratio of the ith road image is greater than the first threshold and the darkness coefficient is greater than the second threshold, the ith road image can be accurately determined to be an over-dark road image, which further improves the accuracy of determining low-luminance images.
In one embodiment, the step S130 of obtaining the road image base map corresponding to the image to be processed based on the enhanced image includes:
and performing projection processing on the enhanced image based on the camera parameters of the image acquisition device corresponding to the image to be processed and the position information and the posture information corresponding to the image to be processed to obtain a road image base map corresponding to the image to be processed.
According to the embodiment, the enhanced image is subjected to projection processing by combining the position information and the posture information of the image acquisition device, so that the accuracy of the projection position and the angle of the enhanced image can be ensured, and the authenticity of the road image base map is favorably improved.
Optionally, the projection processing of the enhanced image may include: converting the enhanced image into an image A1 in the camera coordinate system based on the intrinsic parameter matrix of the image acquisition device, and removing distortion from the image A1 using the distortion coefficients to obtain an image A2; converting the image A2 in the camera coordinate system into an image A3 in the pixel coordinate system, where A3 is the enhanced image after distortion removal; and performing projection processing on the distortion-free enhanced image based on the position information and attitude information corresponding to the image to be processed to obtain the road image base map corresponding to the image to be processed. In this way, distortion of the projected road image base map can be avoided, making the base map more realistic.
Illustratively, the camera parameters of the image acquisition device may include camera intrinsic parameters and camera extrinsic parameters, where the intrinsic parameters comprise an intrinsic matrix M_1 and distortion coefficients, and the extrinsic parameters may include a rotation matrix R and a translation matrix t. Both can be calibrated in advance by conventional methods. For example, the intrinsic matrix M_1 may be obtained with Zhang Zhengyou's camera calibration method, and the extrinsic matrix may be determined by establishing spatial reference observation points and using the conversion relation between the world coordinate system and the pixel coordinate system. For example, this conversion relation can be expressed by formula (6):

Z_c * [u, v, 1]^T = M_1 * M_2 * [X_w, Y_w, Z_w, 1]^T (6)

where u and v are the pixel coordinates of a pixel point of the two-dimensional image in the X-axis and Y-axis directions of the pixel coordinate system, Z_c is the coordinate in the Z-axis direction, in the camera coordinate system, of the point corresponding to that pixel, M_2 is the camera extrinsic matrix, and X_w, Y_w and Z_w are the coordinates of the corresponding three-dimensional point in the world coordinate system in the X-axis, Y-axis and Z-axis directions, respectively.

Since M_1 has been determined and Z_c can be derived, spatial reference observation points can be established, their coordinates in the world coordinate system and their pixel coordinates in the pixel coordinate system measured, and formula (6) solved using M_1 and these correspondences to obtain M_2, thereby calibrating the extrinsic matrix M_2.
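A minimal sketch of applying formula (6) to one point (the intrinsic values below are illustrative, and distortion is assumed to have been removed already):

```python
def project_point(K, R, t, pw):
    """Pixel coordinates of a world point via the pinhole model of
    formula (6): Zc * [u, v, 1]^T = M1 * (R * Pw + t).
    K plays the role of the intrinsic matrix M1; (R, t) of the
    extrinsic matrix M2. Lens distortion is assumed already removed."""
    # camera coordinates: Pc = R * Pw + t
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    zc = pc[2]                           # depth along the optical axis
    u = (K[0][0] * pc[0] + K[0][2] * zc) / zc
    v = (K[1][1] * pc[1] + K[1][2] * zc) / zc
    return u, v

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
M1 = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t0 = [0.0, 0.0, 0.0]
uv_axis = project_point(M1, R_id, t0, [0.0, 0.0, 5.0])  # on the optical axis
uv_off = project_point(M1, R_id, t0, [1.0, 0.0, 5.0])   # 1 m to the side
```

A point on the optical axis lands exactly on the principal point, which is a quick sanity check when calibrating M_2 from reference observation points.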
For example, the position information of the image to be processed may be acquired by a Global Positioning System (GPS), and the attitude information by an Inertial Measurement Unit (IMU). For example, the GPS receiver and the IMU are mounted on the vehicle together with the image acquisition device; while the vehicle is traveling, the GPS receiver acquires the position information of the image acquisition device and the IMU acquires its attitude information.
Optionally, when the sampling times of the position information and attitude information are not consistent with the acquisition time of the image to be processed, linear interpolation may be performed on the position information and the attitude information respectively, so that their timestamps match the acquisition time of the image to be processed. This prevents gaps between the base maps of adjacent road images and improves the accuracy of the projection position and angle.
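The linear interpolation described above can be sketched as follows (hypothetical names; the (x, y, heading) pose layout is an assumption, and real attitude interpolation would use quaternion slerp rather than lerping an angle):

```python
def lerp_pose(t0, pose0, t1, pose1, t_img):
    """Linearly interpolate two GPS/IMU samples to the image timestamp.

    pose0 and pose1 are (x, y, heading) samples taken at times t0 < t1,
    with t_img lying between them. Plain lerp on a heading angle is only
    a sketch, valid for small angular differences.
    """
    a = (t_img - t0) / (t1 - t0)
    return tuple(p0 + a * (p1 - p0) for p0, p1 in zip(pose0, pose1))

# GPS/IMU sampled at t=10 s and t=12 s, image captured at t=11 s
pose_at_image = lerp_pose(10.0, (0.0, 0.0, 0.0), 12.0, (4.0, 2.0, 10.0), 11.0)
```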
In one embodiment, as shown in fig. 4, the method may further include:
step S410, obtaining a bottom layer tile map corresponding to at least one road image based on a road image bottom map corresponding to each road image in at least one road image and position information corresponding to each road image;
s420, zooming the bottom layer tile map for multiple times to obtain a multilayer tile map corresponding to the bottom layer tile map;
and S430, obtaining a tile pyramid based on the multilayer tile map, wherein the tile pyramid is used for reconstructing a scene corresponding to at least one road image.
For example, the number of road image base maps corresponding to each road image in the at least one road image may be one or more. Taking a plurality of road image base maps as an example, the base maps are spliced according to their corresponding position information to obtain the bottom-layer tile map.
Illustratively, scaling the bottom-layer tile map multiple times may include: taking the bottom-layer tile map as the layer-0 tile map; scaling the layer-0 tile map by synthesizing each 2 x 2 block of pixels into one pixel to generate the layer-1 tile map; and scaling the layer-1 tile map in the same way to generate the layer-2 tile map. By analogy, after the bottom-layer tile map is scaled multiple times, the multilayer tile map corresponding to the bottom-layer tile map is obtained.
Illustratively, based on the multi-layer tile map, deriving the tile pyramid may comprise: dividing each layer of the multi-layer tile map into square tiles with the same size (for example, 256 pixels by 256 pixels) respectively to form a tile matrix corresponding to each layer of the tile map; and taking the tile matrix corresponding to the multilayer tile map as a tile pyramid.
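The 2 x 2 synthesis and pyramid construction can be sketched as follows (hypothetical names, averaging as the synthesis rule; a real implementation would also cut each layer into 256 x 256 tiles to form the per-layer tile matrix):

```python
def downscale_2x2(tile):
    """Merge every 2x2 block of pixels into one pixel by averaging,
    producing the next (coarser) pyramid layer."""
    h, w = len(tile), len(tile[0])
    return [[(tile[y][x] + tile[y][x + 1]
              + tile[y + 1][x] + tile[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def tile_pyramid(base, levels):
    """Layer 0 is the bottom-layer tile map; each further layer
    halves each side of the previous one."""
    layers = [base]
    for _ in range(levels):
        layers.append(downscale_2x2(layers[-1]))
    return layers

base = [[float(x + y) for x in range(4)] for y in range(4)]  # 4x4 base layer
pyr = tile_pyramid(base, levels=2)   # 4x4 -> 2x2 -> 1x1
```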
Optionally, the scene corresponding to the at least one road image includes, but is not limited to, high-precision maps, positioning, navigation, trajectory prediction, and the like.
According to this embodiment, the bottom-layer tile map corresponding to the road image base map is obtained using the base map and its corresponding position information, and a tile pyramid is obtained from the bottom-layer tile map. This facilitates the storage and transmission of the road image base map, allows the base map to be provided to applications or other devices, and improves the convenience of using it.
In one embodiment, obtaining an underlying tile map corresponding to at least one road image based on a road image underlying map corresponding to each road image in the at least one road image and location information corresponding to each road image includes:
obtaining an initial tile map based on a road image base map corresponding to each road image in at least one road image and position information corresponding to each road image;
and performing neighborhood enhancement on the initial tile map based on a preset algorithm to obtain a bottom layer tile map corresponding to at least one road image.
Illustratively, the initial tile map may be neighborhood enhanced using the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm, wherein the initial tile map may be divided into square blocks of a preset size (for example, 5 × 5 pixels) for enhancement.
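The histogram clipping at the core of CLAHE can be sketched as follows. This is a simplified per-block version: full CLAHE additionally interpolates the mapping functions between neighboring blocks to avoid block artifacts, and the block size and clip limit used here are illustrative, not values disclosed by the patent.

```python
import numpy as np

def clipped_hist_equalize(gray: np.ndarray, clip_limit: int = 40,
                          tile: int = 64) -> np.ndarray:
    """Simplified contrast-limited equalization of a grayscale image.

    Within each square block: compute the histogram, clip it at
    clip_limit, redistribute the clipped excess uniformly over all
    bins, and equalize with the resulting cumulative distribution.
    The clipping limits contrast amplification in near-uniform areas.
    """
    out = np.empty_like(gray)
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = gray[y:y + tile, x:x + tile]
            hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
            # Clip the histogram and redistribute the excess uniformly.
            excess = np.maximum(hist - clip_limit, 0).sum()
            hist = np.minimum(hist, clip_limit) + excess / 256.0
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-9)
            lut = np.round(cdf * 255).astype(np.uint8)
            out[y:y + tile, x:x + tile] = lut[block]
    return out
```

In practice a library implementation (for example, an existing CLAHE routine) would be preferred over this sketch.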
According to this scheme, applying CLAHE neighborhood enhancement to the initial tile map makes road traffic markings such as lane lines, lane edge lines, and turning arrows in the initial tile map clearer and more prominent. Because the CLAHE algorithm preserves the continuity of the image while improving the contrast of the initial tile map, the resulting bottom layer tile map has good visual continuity and looks more realistic.
Fig. 5 is a flowchart of an application example of the image processing method according to the present disclosure. As shown in fig. 5, taking a front view of a vehicle collected by an image acquisition device as the image to be processed, the image processing method may include:
step S510, obtaining a front view of the vehicle;
step S520, performing illumination enhancement on the front view of the vehicle to obtain an enhanced image;
step S530, acquiring position information and posture information of the image acquisition device corresponding to the front view;
step S540, aligning the sampling times of the position information and the attitude information with the sampling time of the front view, so that the front view, the position information, and the attitude information share consistent sampling times;
step S550, removing distortion from the enhanced image by using preset camera parameters, and projecting the de-distorted enhanced image based on the position information and the attitude information to obtain a road image base map;
step S560, obtaining an initial tile map based on the road image base map corresponding to each front view in the at least one front view and the position information corresponding to each road image base map;
step S570, performing neighborhood enhancement on the initial tile map by using a preset algorithm to obtain a bottom layer tile map corresponding to the at least one front view;
step S580, scaling the bottom layer tile map multiple times to obtain a multi-layer tile map, dividing each layer of the tile map to obtain the tile matrix corresponding to that layer, and taking the tile matrices corresponding to the multi-layer tile map as the tile pyramid.
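The time alignment of step S540 can be illustrated, for instance, by interpolating pose samples at the image timestamps. This is a sketch under the assumption of linear interpolation; attitude angles would in practice need spherical interpolation, which the patent does not specify, and the function name is illustrative.

```python
import numpy as np

def align_pose_to_frames(frame_ts, pose_ts, poses):
    """Align pose samples to image timestamps (step S540 sketch).

    frame_ts: (N,) image capture times; pose_ts: (M,) sorted pose
    sample times; poses: (M, K) position/attitude samples. Returns
    (N, K) poses linearly interpolated at the image timestamps.
    """
    poses = np.asarray(poses, dtype=np.float64)
    return np.column_stack([
        np.interp(frame_ts, pose_ts, poses[:, k]) for k in range(poses.shape[1])
    ])
```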
According to this method, performing illumination enhancement on the vehicle front view gives front views collected in low-illumination environments, such as tunnels, under viaducts, and at night, a good visual effect, so that a high-definition, high-precision road image base map can be generated from the enhanced image. In addition, performing neighborhood enhancement on the initial tile map generated from the road image base map makes road traffic markings such as lane lines, edge lines, and turning arrows in the bottom layer tile map clearer and more prominent. Moreover, generating the tile pyramid based on the bottom layer tile map facilitates the storage, transmission, and use of the road image base map.
Fig. 6 is a block diagram of an image processing apparatus according to a third embodiment of the present disclosure. As shown in fig. 6, the image processing apparatus 600 may include:
a determining module 610, configured to determine an image to be processed in at least one road image based on brightness information of each road image in the at least one road image;
the illumination enhancement module 620 is configured to perform illumination enhancement on the image to be processed to obtain an enhanced image;
the first image generating module 630 is configured to obtain a road image base map corresponding to the image to be processed based on the enhanced image.
In one embodiment, the illumination enhancement module 620 includes:
the first determining submodule is used for determining a target enhancement area in the image to be processed based on vanishing points in the image to be processed;
and the enhancement processing submodule is used for enhancing the target enhancement area based on the illumination information of the target enhancement area to obtain an enhanced image.
In one embodiment, the enhancement processing sub-module includes:
the illumination estimation unit is used for carrying out illumination estimation on the target enhancement area to obtain illumination information corresponding to the target enhancement area;
the enhancement processing unit is used for processing each pixel of the target enhancement area based on the illumination information and a preset factor to obtain an enhanced image; the preset factor is determined based on atmospheric scattering information of a plurality of images in a road scene corresponding to the image to be processed.
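The per-pixel enhancement performed by the enhancement processing unit can be illustrated as follows. The patent does not disclose its exact formula or how the preset factor is derived from atmospheric scattering information, so this sketch substitutes a Retinex-style gamma correction driven by a crude mean-illumination estimate; the function name, parameter values, and formula are illustrative assumptions, not the claimed method.

```python
import numpy as np

def enhance_region(region: np.ndarray, preset_factor: float = 0.6) -> np.ndarray:
    """Illustrative illumination enhancement of a target region.

    The region's mean intensity serves as a crude illumination
    estimate; darker regions get a smaller gamma exponent and are
    therefore brightened more strongly, while already-bright regions
    are left nearly unchanged.
    """
    img = region.astype(np.float64) / 255.0
    illumination = img.mean()                       # crude illumination estimate
    gamma = max(illumination / preset_factor, 0.2)  # darker -> smaller gamma
    enhanced = np.clip(img ** gamma, 0.0, 1.0)
    return (enhanced * 255).astype(np.uint8)
```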
In one embodiment, the determining module 610 includes:
the second determining submodule is used for determining the brightness characteristic distribution information of the ith road image based on the brightness information of the ith road image in the at least one road image; wherein i is an integer greater than or equal to 1;
the third determining submodule is used for determining the low-brightness area ratio corresponding to the ith road image based on the brightness feature distribution information;
and the fourth determining submodule is used for determining the ith road image as the image to be processed under the condition that the low-brightness area ratio meets the preset condition.
In one embodiment, the fourth determination submodule includes:
a first determination unit configured to determine a luminance standard deviation based on luminance feature distribution information of an ith road image;
a second determination unit for determining a darkness coefficient of the ith road image based on the luminance standard deviation and the low luminance area ratio;
and a third determining unit, configured to determine the ith road image as the image to be processed, if the low-luminance area ratio is greater than the first threshold and the darkness coefficient is greater than the second threshold.
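The selection logic implemented by these three units can be sketched as follows. The two-threshold test (low-brightness area ratio and darkness coefficient) follows the claimed structure, but the threshold values and the exact formula combining the luminance standard deviation with the area ratio are illustrative assumptions, since the patent does not disclose them.

```python
import numpy as np

def is_low_light(gray: np.ndarray, low_thresh: int = 60,
                 ratio_thresh: float = 0.5, dark_thresh: float = 1.0) -> bool:
    """Decide whether a grayscale road image should be enhanced.

    Computes the low-brightness area ratio and a darkness coefficient
    from the luminance distribution, and selects the image only when
    both exceed their thresholds.
    """
    ratio = float((gray < low_thresh).mean())   # low-brightness area ratio
    std = float(gray.std())                     # luminance standard deviation
    darkness = ratio * (255.0 / (std + 1e-6))   # assumed combination formula
    return ratio > ratio_thresh and darkness > dark_thresh
```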
In one embodiment, the first image generation module 630 includes:
and the projection processing submodule is used for carrying out projection processing on the enhanced image based on the camera parameters of the image acquisition device corresponding to the image to be processed and the position information and the posture information corresponding to the image to be processed so as to obtain a road image base map corresponding to the image to be processed.
In one embodiment, the apparatus further comprises:
the second image generation module is used for obtaining a bottom layer tile map corresponding to at least one road image based on a road image bottom map corresponding to each road image in at least one road image and position information corresponding to each road image;
the third image generation module is used for zooming the bottom layer tile map for multiple times to obtain a multilayer tile map corresponding to the bottom layer tile map;
and the fourth image generation module is used for obtaining a tile pyramid based on the multilayer tile map, wherein the tile pyramid is used for reconstructing a scene corresponding to at least one road image.
In one embodiment, the second image generation module comprises:
the first image generation submodule is used for obtaining an initial tile map based on a road image base map corresponding to each road image in at least one road image and position information corresponding to each road image;
and the second image generation submodule is used for performing neighborhood enhancement on the initial tile map based on a preset algorithm to obtain a bottom layer tile map corresponding to at least one road image.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (19)
1. An image processing method comprising:
determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image;
performing illumination enhancement on the image to be processed to obtain an enhanced image;
and obtaining a road image base map corresponding to the image to be processed based on the enhanced image.
2. The method of claim 1, wherein performing illumination enhancement on the image to be processed to obtain an enhanced image comprises:
determining a target enhancement region in the image to be processed based on vanishing points in the image to be processed;
and enhancing the target enhancement area based on the illumination information of the target enhancement area to obtain an enhanced image.
3. The method of claim 2, wherein the enhancing the target enhanced region based on the illumination information of the target enhanced region to obtain an enhanced image comprises:
carrying out illumination estimation on the target enhancement area to obtain illumination information corresponding to the target enhancement area;
processing each pixel of the target enhancement area based on the illumination information and a preset factor to obtain an enhanced image; the preset factor is determined based on atmospheric scattering information of a plurality of images in a road scene corresponding to the image to be processed.
4. The method according to any one of claims 1-3, wherein the determining of the image to be processed in the at least one road image based on the luminance information of each of the at least one road image comprises:
determining brightness feature distribution information of an ith road image based on brightness information of the ith road image in the at least one road image; wherein i is an integer greater than or equal to 1;
determining a low-brightness area ratio corresponding to the ith road image based on the brightness feature distribution information;
and under the condition that the low-brightness area ratio meets a preset condition, determining the ith road image as an image to be processed.
5. The method according to claim 4, wherein the determining the ith road image as the image to be processed in the case that the low-luminance area ratio meets a preset condition comprises:
determining a brightness standard deviation based on the brightness feature distribution information of the ith road image;
determining a darkness coefficient of the ith road image based on the brightness standard deviation and the low-brightness area ratio;
and determining the ith road image as an image to be processed under the condition that the low-brightness area ratio is greater than a first threshold value and the darkness coefficient is greater than a second threshold value.
6. The method according to any one of claims 1 to 5, wherein the obtaining of the road image base map corresponding to the image to be processed based on the enhanced image comprises:
and performing projection processing on the enhanced image based on the camera parameters of the image acquisition device corresponding to the image to be processed and the position information and the posture information corresponding to the image to be processed to obtain a road image base map corresponding to the image to be processed.
7. The method of claim 6, further comprising:
obtaining a bottom layer tile map corresponding to at least one road image based on a road image bottom map corresponding to each road image in the at least one road image and position information corresponding to each road image;
zooming the bottom layer tile map for multiple times to obtain a multilayer tile map corresponding to the bottom layer tile map;
and obtaining a tile pyramid based on the multilayer tile map, wherein the tile pyramid is used for reconstructing a scene corresponding to the at least one road image.
8. The method of claim 7, wherein the obtaining the bottom layer tile map corresponding to the at least one road image based on the road image base map corresponding to each road image in the at least one road image and the position information corresponding to each road image comprises:
obtaining an initial tile map based on a road image base map corresponding to each road image in the at least one road image and position information corresponding to each road image;
and performing neighborhood enhancement on the initial tile map based on a preset algorithm to obtain a bottom layer tile map corresponding to the at least one road image.
9. An image processing apparatus comprising:
the device comprises a determining module, a processing module and a processing module, wherein the determining module is used for determining an image to be processed in at least one road image based on the brightness information of each road image in the at least one road image;
the illumination enhancement module is used for carrying out illumination enhancement on the image to be processed to obtain an enhanced image;
and the first image generation module is used for obtaining a road image base map corresponding to the image to be processed based on the enhanced image.
10. The apparatus of claim 9, wherein the illumination enhancement module comprises:
the first determining submodule is used for determining a target enhancement area in the image to be processed based on vanishing points in the image to be processed;
and the enhancement processing submodule is used for enhancing the target enhancement area based on the illumination information of the target enhancement area to obtain an enhanced image.
11. The apparatus of claim 10, wherein the enhancement processing sub-module comprises:
the illumination estimation unit is used for carrying out illumination estimation on the target enhancement area to obtain illumination information corresponding to the target enhancement area;
the enhancement processing unit is used for processing each pixel of the target enhancement area based on the illumination information and a preset factor to obtain an enhanced image; the preset factor is determined based on atmospheric scattering information of a plurality of images in a road scene corresponding to the image to be processed.
12. The apparatus of any of claims 9-11, wherein the means for determining comprises:
a second determining sub-module, configured to determine luminance feature distribution information of an ith road image in the at least one road image based on luminance information of the ith road image; wherein i is an integer greater than or equal to 1;
a third determining submodule, configured to determine a low-luminance area proportion corresponding to the ith road image based on the luminance feature distribution information;
and the fourth determining submodule is used for determining the ith road image as the image to be processed under the condition that the low-brightness area ratio meets the preset condition.
13. The apparatus of claim 12, wherein the fourth determination submodule comprises:
a first determination unit configured to determine a luminance standard deviation based on luminance feature distribution information of the ith road image;
a second determination unit configured to determine a darkness coefficient of the i-th road image based on the luminance standard deviation and the low-luminance area proportion;
a third determination unit configured to determine the ith road image as an image to be processed if the low-luminance area ratio is greater than a first threshold and the darkness coefficient is greater than a second threshold.
14. The apparatus of any of claims 9-13, wherein the first image generation module comprises:
and the projection processing submodule is used for carrying out projection processing on the enhanced image based on the camera parameters of the image acquisition device corresponding to the image to be processed, and the position information and the posture information corresponding to the image to be processed to obtain a road image base map corresponding to the image to be processed.
15. The apparatus of claim 14, further comprising:
the second image generation module is used for obtaining a bottom layer tile map corresponding to at least one road image based on a road image base map corresponding to each road image in the at least one road image and position information corresponding to each road image;
the third image generation module is used for zooming the bottom layer tile map for multiple times to obtain a multilayer tile map corresponding to the bottom layer tile map;
and the fourth image generation module is used for obtaining a tile pyramid based on the multilayer tile map, wherein the tile pyramid is used for reconstructing a scene corresponding to the at least one road image.
16. The apparatus of claim 15, wherein the second image generation module comprises:
the first image generation submodule is used for obtaining an initial tile map based on a road image base map corresponding to each road image in the at least one road image and position information corresponding to each road image;
and the second image generation submodule is used for performing neighborhood enhancement on the initial tile map based on a preset algorithm to obtain a bottom layer tile map corresponding to the at least one road image.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210050578.2A CN114418881A (en) | 2022-01-17 | 2022-01-17 | Image processing method, image processing device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210050578.2A CN114418881A (en) | 2022-01-17 | 2022-01-17 | Image processing method, image processing device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114418881A true CN114418881A (en) | 2022-04-29 |
Family
ID=81273542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210050578.2A Pending CN114418881A (en) | 2022-01-17 | 2022-01-17 | Image processing method, image processing device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114418881A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||