CN113487502A - Shadow removal method for pothole images - Google Patents

Shadow removal method for pothole images

Info

Publication number
CN113487502A
CN113487502A (application CN202110748344.0A)
Authority
CN
China
Prior art keywords
shadow
image
hollow
gray
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110748344.0A
Other languages
Chinese (zh)
Other versions
CN113487502B (en)
Inventor
胡均平
黄强
罗春雷
罗睿
袁确坚
段吉安
夏毅敏
赵海鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202110748344.0A
Publication of CN113487502A
Application granted
Publication of CN113487502B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/20 Image enhancement or restoration using local operators
              • G06T5/30 Erosion or dilatation, e.g. thinning
            • G06T5/70 Denoising; Smoothing
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/11 Region-based segmentation
              • G06T7/136 Segmentation; Edge detection involving thresholding
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30248 Vehicle exterior or interior
                • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a shadow removal method for pothole images, comprising the following steps: S1, capturing road-surface pictures with a vision acquisition system, obtaining images containing potholes, and constructing the raw data of a pothole data set; S2, graying the raw data of the pothole data set; S3, performing shadow detection on the grayed images and segmenting each image into a shadow region and a non-shadow region; S4, removing the shadow region of the segmented image with an improved shadow removal model. The method addresses the problems that shadows and potholes are easily confused with each other during grayscale processing and that the shadow removal effect needs improvement, both of which affect the safe and stable driving of intelligent vehicles.

Description

Shadow removal method for pothole images
Technical Field
The invention relates to the technical field of image information processing for intelligent vehicles, and in particular to a shadow removal method for pothole images.
Background
With rapid economic growth, intelligent vehicles are becoming increasingly widespread. To ensure safe and stable automated driving, machine vision and image processing techniques are commonly used to identify obstacles ahead of the vehicle, measure distances, and so on. During image processing the image is usually converted to a grayscale image before preprocessing, but the grayscale characteristics of shadows are very similar to those of potholes, so shadows may be falsely detected as potholes, or potholes as shadows. This degrades the shadow removal effect, causes the intelligent vehicle to take unnecessary actions, or compromises its safe and stable driving. Severe changes in illumination intensity, cracks, and complex road-surface textures also impair shadow removal in the image.
Disclosure of Invention
Technical problem to be solved
In view of these problems, the invention provides a shadow removal method for pothole images, which solves the problems that shadows and potholes are easily confused with each other in grayscale processing and that the shadow removal effect needs improvement, both of which affect the safe and stable driving of intelligent vehicles.
(II) technical scheme
In view of the above technical problem, the present invention provides a shadow removal method for pothole images, comprising the following steps:
S1, capturing road-surface pictures with a vision acquisition system, obtaining images containing potholes, and constructing the raw data of a pothole data set;
S2, graying the raw data of the pothole data set;
S3, performing shadow detection on the grayed image and segmenting it into a shadow region and a non-shadow region;
S4, removing the shadow region of the segmented image with an improved shadow removal model, where the improved shadow removal model is:
[The two model formulas are published as images in the original document; the first gives the shadow-removed gray value S(i, j) for shadow-region pixels by gray compensation, and the second defines A_N, the weighted mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow pixel (i, j); see formulas (7) and (8) in the detailed description.]
P = W × H,
where P is the total number of pixels in the image, W and H are the width and height of the image, NP is the number of pixels in the non-shadow region, M is the shadow region, N is the non-shadow region, S is the image after shadow removal, I(i, j) is the gray value of the pixel at position (i, j) in the image, i and j are image pixel coordinates, I(i_N, j_N) is the gray value of the non-shadow pixel (i_N, j_N), (i'_N, j'_N) are the coordinates of the non-shadow pixels in the nearest four neighborhoods of a shadow-region pixel, and A_N denotes the pixel mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow-region pixel (i, j);
S5, obtaining a shadow-removed pothole image, i.e., the processed pothole data set.
Further, step S3 includes:
S3.3, segmenting the image into a shadow region and a non-shadow region using the Otsu maximum between-class variance method and determining the optimal threshold K:
Let [0, 1, 2, …, L−1] denote the gray levels of an image of size W × H, let N_i be the number of pixels with gray level i, so that the total number of pixels is P = N_0 + N_1 + … + N_{L−1}, and let the proportion of pixels with gray level i be P_i = N_i / P; assume the segmentation threshold is k, 0 < k < L−1; this threshold divides the image into a shadow class C1, consisting of the pixels with gray values in [0, k], and a non-shadow class C2, consisting of the pixels with gray values in [k+1, L−1];
ω1(k) = P_0 + P_1 + … + P_k (probability that a pixel is assigned to C1),
m(k) = 0·P_0 + 1·P_1 + … + k·P_k (cumulative mean of the first k gray levels),
m_G = 0·P_0 + 1·P_1 + … + (L−1)·P_{L−1} (mean gray level of the whole image),
σ²(k) = (m_G·ω1(k) − m(k))² / (ω1(k)·(1 − ω1(k))) (between-class variance);
the value of k that maximizes σ²(k) is the optimal threshold K for segmenting the shadow and non-shadow regions;
S3.4, the shadow region consists of the pixels with gray values in [0, K], and the non-shadow region consists of the pixels with gray values in [K+1, L−1].
Further, step S3.3 is preceded by:
S3.1, performing a morphological dilation operation on the grayed image.
Further, after step S3.1 and before step S3.3, the method further comprises:
S3.2, performing Gaussian smoothing on the image after the morphological dilation operation.
Further, the morphological dilation operation uses a window size of (5, 5).
Further, the graying processing method of step S2 includes:
the three RGB components of the raw data of the pothole data set are weighted and averaged:
F(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j),
wherein F(i, j) is the gray value of the grayscale image at (i, j) after the graying conversion, and R(i, j), G(i, j), and B(i, j) respectively represent the red, green, and blue components of the image at (i, j) before the graying conversion.
Further, in step S1, the vision acquisition system includes a monocular camera or a binocular camera, and the monocular camera or the binocular camera is fixed on the bonnet of the automobile.
Further, in step S1, video frames containing potholes, i.e., images containing potholes, are obtained from the captured video using an OpenCV toolbox function.
The invention also discloses a shadow removal processing system for pothole images, comprising:
at least one processor; and at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the shadow removal method for pothole images.
A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the shadow removal method for pothole images is also disclosed.
(III) advantageous effects
The technical scheme of the invention has the following advantages:
(1) The invention removes shadows from the segmented image with an improved shadow removal model, replacing the difference between the mean gray values of the non-shadow region and the shadow region in the traditional gray-compensation model with the redefined A_N. This not only solves the problem of unsatisfactory shadow removal when the illumination intensity changes sharply, but also makes the shadow-removed image transition more naturally between the original shadow region and the non-shadow region, effectively eliminating the obvious boundary left by existing algorithms, so that the shadow-removed image is as close as possible to a shadow-free image of the same, unoccluded area under the same illumination, which benefits the safe and stable driving of intelligent vehicles;
(2) The method performs a morphological dilation operation on the gray images, which effectively removes cracks and prevents them from being falsely detected as shadow boundaries; Gaussian smoothing of the dilated image eliminates the complex road-surface texture and prevents it from being falsely detected as shadow boundaries; and the Otsu maximum between-class variance method segments the shadow and non-shadow regions accurately and prevents small potholes from being classified into the shadow region. Together, these measures make shadow detection more accurate, with fewer false detections, and facilitate the effective removal of the shadow region in subsequent processing.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
FIG. 1 is a flow chart of the shadow removal method for pothole images according to an embodiment of the present invention;
FIG. 2 is a schematic view of a vision acquisition system according to an embodiment of the present invention;
FIG. 3 is a grayed pothole image according to an embodiment of the present invention;
FIG. 4 is a pothole image after the morphological dilation operation according to an embodiment of the present invention;
FIG. 5 is a pothole image after Gaussian smoothing according to an embodiment of the present invention;
FIG. 6 is a pothole image after Otsu segmentation according to an embodiment of the present invention;
FIG. 7 is a shadow-removed pothole image according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
The shadow removal method for pothole images according to the invention comprises the following steps, as shown in FIG. 1:
S1, capturing road-surface pictures with a vision acquisition system, obtaining images containing potholes, and constructing the raw data of a pothole data set: the vision acquisition system is fixed on the hood of the automobile, as shown in FIG. 2, and video frames containing potholes are extracted from the captured video using an OpenCV toolbox function;
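A minimal Python/OpenCV sketch of this frame-extraction step is given below; the video path, output directory, frame stride, and function name are illustrative assumptions rather than values taken from the disclosure.

    import os
    import cv2

    def extract_frames(video_path, out_dir, stride=10):
        """Save every `stride`-th frame of the recorded road video as a still image."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:                      # end of video
                break
            if idx % stride == 0:
                cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
                saved += 1
            idx += 1
        cap.release()
        return saved

    # extract_frames("road_video.mp4", "pothole_raw_frames")
    # Frames that actually contain potholes are then selected to form the raw data set.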
the visual acquisition system comprises a monocular camera or a binocular camera, the monocular camera performs image identification in an image matching mode, the distance is estimated according to the size of a target in an image, the parallax of the two images is calculated by the binocular camera, the distance of a front obstacle is directly measured, the type of the obstacle does not need to be judged, the monocular camera or the binocular camera is selected according to actual conditions, and the shadow removing method can be applied.
In the collection task of this embodiment, 1800 pothole images were acquired in total: 1000 potholes on normal road surfaces, 400 potholes covered by shadows, and 400 potholes filled with water. Shadow-covered potholes therefore account for a considerable proportion of the data set. Because the gray features of shadows are similar to those of potholes and are easily misdetected as potholes, the shadow-covered pictures must undergo shadow removal to obtain a high-quality pothole data set.
S2, graying the raw data of the pothole data set:
the original image is an RGB three-channel image, and the complexity of processing can be greatly improved by directly operating the original image, so that the original image needs to be grayed first, and the three-channel image is changed into a single-channel image; the hole data set was grayed out using a weighted average method. The weighted average method carries out weighted average on the RGB components with different weights according to importance and other indexes. Since human eyes have the highest sensitivity to green and the lowest sensitivity to blue, weighted averaging of the RGB three components according to equation (1) can result in a more reasonable grayscale image:
F(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j) (1)
where F(i, j) is the gray value of the grayscale image at (i, j) after conversion, and R(i, j), G(i, j), and B(i, j) are the red, green, and blue components of the image at (i, j) before conversion; the resulting grayscale image is shown in FIG. 3;
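A minimal Python sketch of the weighted-average graying of equation (1) follows; note that OpenCV loads images in B, G, R channel order and that its built-in cv2.cvtColor conversion uses the slightly different ITU-R weights 0.299/0.587/0.114, so the 0.30/0.59/0.11 weights of the disclosure are applied manually here.

    import cv2
    import numpy as np

    def gray_weighted(img_bgr):
        """Graying by equation (1): F = 0.30*R + 0.59*G + 0.11*B."""
        b, g, r = cv2.split(img_bgr.astype(np.float32))   # OpenCV channel order is B, G, R
        gray = 0.30 * r + 0.59 * g + 0.11 * b
        return np.clip(gray, 0, 255).astype(np.uint8)

    # gray = gray_weighted(cv2.imread("frame_000010.png"))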
S3, performing shadow detection on the grayed image and segmenting it into a shadow region and a non-shadow region;
S3.1, performing a morphological dilation operation on the grayed image;
the long and narrow road defects such as the road cracks are easily detected as the boundaries of the shadows by mistake on the gray scale image, and the influence caused by the cracks can be effectively removed through morphological expansion operation. The window size used for the morphological dilation operation is (5,5), and the processed image is shown in fig. 4;
S3.2, performing Gaussian smoothing on the image after the morphological dilation operation;
The road-surface texture of the image after the morphological dilation operation is still rather complex, so Gaussian smoothing is applied to eliminate the influence of the complex texture on the subsequent shadow-region segmentation; the processed image is shown in FIG. 5;
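A short Python/OpenCV sketch of steps S3.1 and S3.2 follows; the (5, 5) dilation window comes from the disclosure, while the rectangular kernel shape and the Gaussian kernel size and sigma are assumptions.

    import cv2
    import numpy as np

    def preprocess(gray):
        """S3.1: dilate with a (5, 5) window to suppress narrow cracks;
        S3.2: Gaussian-smooth the result to suppress fine road texture."""
        kernel = np.ones((5, 5), np.uint8)                 # rectangular structuring element (assumed shape)
        dilated = cv2.dilate(gray, kernel, iterations=1)
        smoothed = cv2.GaussianBlur(dilated, (5, 5), 0)    # sigma derived from the kernel size (assumed)
        return smoothed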
S3.3, segmenting the Gaussian-smoothed image into a shadow region and a non-shadow region using the Otsu maximum between-class variance method;
the traditional segmentation method adopts a global threshold value which needs to be manually set and is not flexible enough, the segmented shadow area and the non-shadow area are rough and not accurate enough, and in the process of detecting the potholes, smaller potholes are likely to be mistakenly segmented into shadow areas; therefore, in the embodiment, the image is divided by using an Otsu maximum inter-class variance method, and the threshold is determined adaptively, so that the shadow region and the non-shadow region are effectively divided, which specifically includes the following steps:
Let [0, 1, 2, …, L−1] denote the gray levels of an image of size W × H, let N_i be the number of pixels with gray level i, so that the total number of pixels is P = N_0 + N_1 + … + N_{L−1}, and let the proportion of pixels with gray level i be P_i = N_i / P. Assume the segmentation threshold is k, 0 < k < L−1; this threshold divides the image into a shadow class C1, consisting of the pixels with gray values in [0, k], and a non-shadow class C2, consisting of the pixels with gray values in [k+1, L−1]. The probability that a pixel is assigned to C1 is given by equation (2), the cumulative mean of the first k gray levels by equation (3), the mean gray level of the whole image by equation (4), and the between-class variance by equation (5):
ω1(k) = P_0 + P_1 + … + P_k (2)
m(k) = 0·P_0 + 1·P_1 + … + k·P_k (3)
m_G = 0·P_0 + 1·P_1 + … + (L−1)·P_{L−1} (4)
σ²(k) = (m_G·ω1(k) − m(k))² / (ω1(k)·(1 − ω1(k))) (5)
The value of k that maximizes σ²(k) is the optimal threshold K for segmenting the shadow and non-shadow regions.
S3.4, according to the optimal threshold K, the image is segmented into a shadow region consisting of the pixels with gray values in [0, K] and a non-shadow region consisting of the pixels with gray values in [K+1, L−1]; the image after Otsu segmentation is shown in FIG. 6;
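A Python sketch of the exhaustive threshold search of equations (2)-(5) and the region split of step S3.4 follows; the vectorised histogram computation and the cv2.threshold comparison call are implementation choices, not taken from the disclosure.

    import numpy as np

    def otsu_threshold(gray, L=256):
        """Return the K that maximizes the between-class variance sigma^2(k), equations (2)-(5)."""
        hist = np.bincount(gray.ravel(), minlength=L).astype(np.float64)
        P = hist / hist.sum()                       # P_i = N_i / (W*H)
        w1 = np.cumsum(P)                           # (2) omega1(k), probability of class C1
        m = np.cumsum(np.arange(L) * P)             # (3) cumulative mean up to gray level k
        mG = m[-1]                                  # (4) global mean gray level
        denom = w1 * (1.0 - w1)
        denom[denom == 0] = np.nan                  # avoid division by zero at the extremes
        sigma2 = (mG * w1 - m) ** 2 / denom         # (5) between-class variance
        return int(np.nanargmax(sigma2))

    def split_regions(gray, K):
        """S3.4: shadow region = gray values in [0, K]; non-shadow region = [K+1, L-1]."""
        shadow_mask = gray <= K
        return shadow_mask, ~shadow_mask

    # For comparison, OpenCV's built-in Otsu threshold:
    # K, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)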
S4, removing the shadow region of the segmented image with an improved shadow removal model;
The traditional shadow removal algorithm relies mainly on gray-level compensation, but when the illumination intensity changes sharply the removal effect is not ideal: some regions cannot be removed, and an obvious boundary remains between the original shadow region and the non-shadow region after removal. The traditional shadow removal model is therefore improved as follows:
P = W × H (6)
[Formulas (7) and (8) are published as images in the original document: formula (7) gives the shadow-removed gray value S(i, j) for the shadow region M by gray compensation, and formula (8) defines A_N, the Euclidean-distance-weighted mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow pixel (i, j).]
where P is the total number of pixels in the image, W and H are the width and height of the image, NP is the number of pixels in the non-shadow region, M is the shadow region, N is the non-shadow region, S is the image after shadow removal, I(i, j) is the gray value of the pixel at position (i, j) in the image, i and j are image pixel coordinates, I(i_N, j_N) is the gray value of the non-shadow pixel (i_N, j_N), (i'_N, j'_N) are the coordinates of the non-shadow pixels in the nearest four neighborhoods of a shadow-region pixel, and A_N denotes the pixel mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow-region pixel (i, j). Here,
√((i − i'_N)² + (j − j'_N)²)
is the Euclidean distance between a shadow-region pixel (i, j) and a non-shadow-region pixel (i'_N, j'_N) in its nearest four neighborhoods, and it is used as a weight for eliminating the shadow boundary. Since the gray values of adjacent pixels are continuous and similar, after shadow removal the gray values in the neighborhood of the original shadow-boundary pixels should be consistent with the gray values of the adjacent non-shadow pixels; therefore, non-shadow pixels closer to the shadow region should receive smaller weights. Compared with the conventional model, the gray-level difference between the non-shadow region and the shadow region in formula (7) is replaced by A_N as defined in formula (8), so the shadow-removed image transitions more naturally between the original shadow region and the non-shadow region, effectively solving the obvious-boundary problem of existing algorithms; the processed image is shown in FIG. 7.
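Because formulas (7) and (8) are published only as images, the Python sketch below is one plausible reading of the description rather than the patented model: for each shadow pixel the nearest non-shadow pixel is searched along each of the four axial directions, A_N is taken as their Euclidean-distance-weighted mean (nearer pixels receiving smaller weights, as stated above), and the pixel is lifted by the difference between A_N and the mean gray level of the shadow region; the exact compensation term of formula (7) may differ.

    import numpy as np

    def remove_shadow(gray, shadow_mask):
        """Illustrative gray compensation: shadow pixels only; non-shadow pixels are left unchanged."""
        gray = gray.astype(np.float64)
        h, w = gray.shape
        out = gray.copy()
        mean_shadow = gray[shadow_mask].mean()
        for i, j in zip(*np.nonzero(shadow_mask)):
            vals, dists = [], []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # the four axial directions
                ii, jj = i + di, j + dj
                while 0 <= ii < h and 0 <= jj < w and shadow_mask[ii, jj]:
                    ii += di
                    jj += dj
                if 0 <= ii < h and 0 <= jj < w:                 # nearest non-shadow pixel in this direction
                    vals.append(gray[ii, jj])
                    dists.append(np.hypot(ii - i, jj - j))
            if vals:
                A_N = np.average(vals, weights=dists)           # distance itself is the weight
                out[i, j] = gray[i, j] + (A_N - mean_shadow)    # assumed compensation term
        return np.clip(out, 0, 255).astype(np.uint8)

    # Usage with the masks from the Otsu step above:
    # shadow_mask, _ = split_regions(gray, otsu_threshold(gray))
    # result = remove_shadow(gray, shadow_mask)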
S5, a shadow-removed pothole image is obtained, i.e., a processed pothole data set.
In this embodiment, a brand-new pothole data set, PotholeB, is obtained after removing shadows from the 1800 collected pothole pictures. In a subsequent improved network, the effectiveness of shadow removal will be verified using the original data set PotholeA and the shadow-removed data set PotholeB; the final division of the pothole data sets is shown in Table 1.
TABLE 1 pothole dataset partitioning
[Table 1 is published as an image in the original document; it gives the partitioning of the PotholeA (original) and PotholeB (shadow-removed) data sets.]
Finally, it should be noted that the methods described above may be converted into software program instructions and implemented either by a processing system comprising a processor and a memory, or by computer instructions stored in a non-transitory computer-readable storage medium. An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium; the software functional unit includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
As can be seen from the above, the shadow removal method for pothole images has the following advantages:
(1) The invention removes shadows from the segmented image with an improved shadow removal model, replacing the difference between the mean gray values of the non-shadow region and the shadow region in the traditional gray-compensation model with the redefined A_N. This not only solves the problem of unsatisfactory shadow removal when the illumination intensity changes sharply, but also makes the shadow-removed image transition more naturally between the original shadow region and the non-shadow region, effectively eliminating the obvious boundary left by existing algorithms, so that the shadow-removed image is as close as possible to a shadow-free image of the same, unoccluded area under the same illumination, which benefits the safe and stable driving of intelligent vehicles;
(2) The method performs a morphological dilation operation on the gray images, which effectively removes cracks and prevents them from being falsely detected as shadow boundaries; Gaussian smoothing of the dilated image eliminates the complex road-surface texture and prevents it from being falsely detected as shadow boundaries; and the Otsu maximum between-class variance method segments the shadow and non-shadow regions accurately and prevents small potholes from being classified into the shadow region. Together, these measures make shadow detection more accurate, with fewer false detections, and facilitate the effective removal of the shadow region in subsequent processing.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it; although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A shadow removal method for pothole images, comprising the following steps:
S1, capturing road-surface pictures with a vision acquisition system, obtaining images containing potholes, and constructing the raw data of a pothole data set;
S2, graying the raw data of the pothole data set;
S3, performing shadow detection on the grayed image and segmenting it into a shadow region and a non-shadow region;
S4, removing the shadow region of the segmented image with an improved shadow removal model, where the improved shadow removal model is:
[The two model formulas are published as images in the original claims; the first gives the shadow-removed gray value S(i, j) for shadow-region pixels by gray compensation, and the second defines A_N, the weighted mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow pixel (i, j).]
P = W × H,
where P is the total number of pixels in the image, W and H are the width and height of the image, NP is the number of pixels in the non-shadow region, M is the shadow region, N is the non-shadow region, S is the image after shadow removal, I(i, j) is the gray value of the pixel at position (i, j) in the image, i and j are image pixel coordinates, I(i_N, j_N) is the gray value of the non-shadow pixel (i_N, j_N), (i'_N, j'_N) are the coordinates of the non-shadow pixels in the nearest four neighborhoods of a shadow-region pixel, and A_N denotes the pixel mean of the non-shadow pixels (i'_N, j'_N) in the nearest four neighborhoods of the shadow-region pixel (i, j);
S5, obtaining a shadow-removed pothole image, i.e., the processed pothole data set.
2. The shadow removal method for pothole images according to claim 1, wherein step S3 comprises:
S3.3, segmenting the image into a shadow region and a non-shadow region using the Otsu maximum between-class variance method and determining the optimal threshold K:
let [0, 1, 2, …, L−1] denote the gray levels of an image of size W × H, let N_i be the number of pixels with gray level i, so that the total number of pixels is P = N_0 + N_1 + … + N_{L−1}, and let the proportion of pixels with gray level i be P_i = N_i / P; assume the segmentation threshold is k, 0 < k < L−1; this threshold divides the image into a shadow class C1, consisting of the pixels with gray values in [0, k], and a non-shadow class C2, consisting of the pixels with gray values in [k+1, L−1];
ω1(k) = P_0 + P_1 + … + P_k (probability that a pixel is assigned to C1),
m(k) = 0·P_0 + 1·P_1 + … + k·P_k (cumulative mean of the first k gray levels),
m_G = 0·P_0 + 1·P_1 + … + (L−1)·P_{L−1} (mean gray level of the whole image),
σ²(k) = (m_G·ω1(k) − m(k))² / (ω1(k)·(1 − ω1(k))) (between-class variance);
the value of k that maximizes σ²(k) is the optimal threshold K for segmenting the shadow and non-shadow regions;
S3.4, the shadow region consists of the pixels with gray values in [0, K], and the non-shadow region consists of the pixels with gray values in [K+1, L−1].
3. The shadow removal method for pothole images according to claim 2, further comprising, before step S3.3:
S3.1, performing a morphological dilation operation on the grayed image.
4. The shadow removal method for pothole images according to claim 3, further comprising, after step S3.1 and before step S3.3:
S3.2, performing Gaussian smoothing on the image after the morphological dilation operation.
5. The shadow removal method for pothole images according to claim 3, wherein the morphological dilation operation uses a window size of (5, 5).
6. The shadow removal method for pothole images according to claim 1, wherein the graying processing of step S2 comprises:
weighting and averaging the three RGB components of the raw data of the pothole data set:
F(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j),
wherein F(i, j) is the gray value of the grayscale image at (i, j) after the graying conversion, and R(i, j), G(i, j), and B(i, j) respectively represent the red, green, and blue components of the image at (i, j) before the graying conversion.
7. The shadow removal method for pothole images according to claim 1, wherein in step S1 the vision acquisition system comprises a monocular camera or a binocular camera fixed on the hood of an automobile.
8. The shadow removal method for pothole images according to claim 1, wherein in step S1 video frames containing potholes, i.e., images containing potholes, are obtained from the captured video using an OpenCV toolbox function.
9. A shadow removal processing system for pothole images, comprising:
at least one processor; and at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the shadow removal method for pothole images according to any one of claims 1 to 8.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the shadow removal method for pothole images according to any one of claims 1 to 8.
CN202110748344.0A 2021-06-30 2021-06-30 Shadow removal method for pothole images Active CN113487502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110748344.0A CN113487502B (en) Shadow removal method for pothole images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110748344.0A CN113487502B (en) Shadow removal method for pothole images

Publications (2)

Publication Number Publication Date
CN113487502A true CN113487502A (en) 2021-10-08
CN113487502B CN113487502B (en) 2022-05-03

Family

ID=77939506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110748344.0A Active CN113487502B (en) Shadow removal method for pothole images

Country Status (1)

Country Link
CN (1) CN113487502B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915524A (en) * 2012-09-14 2013-02-06 武汉大学 Method for eliminating shadow based on match of inside and outside check lines of shadow area
CN105261021A (en) * 2015-10-19 2016-01-20 浙江宇视科技有限公司 Method and apparatus of removing foreground detection result shadows
CN107154026A (en) * 2017-03-22 2017-09-12 陕西师范大学 A kind of method of the elimination road surface shade based on adaption brightness elevation model
CN107292898A (en) * 2017-05-04 2017-10-24 浙江工业大学 A kind of car plate shadow Detection and minimizing technology based on HSV
CN107808366A (en) * 2017-10-21 2018-03-16 天津大学 A kind of adaptive optical transfer single width shadow removal method based on Block- matching
CN107862667A (en) * 2017-11-23 2018-03-30 武汉大学 A kind of city shadow Detection and minimizing technology based on high-resolution remote sensing image
CN111738931A (en) * 2020-05-12 2020-10-02 河北大学 Shadow removal algorithm for aerial image of photovoltaic array unmanned aerial vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王济国 et al., "Research and implementation of a shadow removal algorithm for temperature-indicating paint images", Computer Engineering and Design *
胡均平 et al., "Research on road pothole recognition overcoming the effects of shadows and lane lines", Manufacturing Automation *

Also Published As

Publication number Publication date
CN113487502B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN110490914B (en) Image fusion method based on brightness self-adaption and significance detection
CN107330376B (en) Lane line identification method and system
WO2018023916A1 (en) Shadow removing method for color image and application
CN110287884B (en) Voltage line detection method in auxiliary driving
CN106778551B (en) Method for identifying highway section and urban road lane line
WO2021109697A1 (en) Character segmentation method and apparatus, and computer-readable storage medium
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm
CN109255326A (en) A kind of traffic scene smog intelligent detecting method based on multidimensional information Fusion Features
CN113191979B (en) Non-local mean denoising method for partitioned SAR (synthetic aperture radar) image
CN116152115B (en) Garbage image denoising processing method based on computer vision
CN107563301A (en) Red signal detection method based on image processing techniques
CN113239733B (en) Multi-lane line detection method
CN111753749A (en) Lane line detection method based on feature matching
CN111369570B (en) Multi-target detection tracking method for video image
CN115327572A (en) Method for detecting obstacle in front of vehicle
CN115511907A (en) Scratch detection method for LED screen
CN112288780B (en) Multi-feature dynamically weighted target tracking algorithm
CN111191482A (en) Brake lamp identification method and device and electronic equipment
CN111192280B (en) Method for detecting optic disc edge based on local feature
CN113487502B (en) Shadow removing method for hollow image
CN109948570B (en) Real-time detection method for unmanned aerial vehicle in dynamic environment
CN109978916B (en) Vibe moving target detection method based on gray level image feature matching
CN111709885A (en) Infrared weak and small target enhancement method based on region of interest and image mark
CN106384103A (en) Vehicle face recognition method and device
CN115953456A (en) Binocular vision-based vehicle overall dimension dynamic measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant