CN112488940A - Method for enhancing image edge of railway locomotive component

Method for enhancing image edge of railway locomotive component

Info

Publication number
CN112488940A
Authority
CN
China
Prior art keywords
image
matrix
ahe
image matrix
locomotive component
Prior art date
Legal status
Pending
Application number
CN202011371951.1A
Other languages
Chinese (zh)
Inventor
石玮
Current Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202011371951.1A
Publication of CN112488940A
Pending legal status (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection

Abstract

A railway locomotive component image edge enhancement method relates to the technical field of image processing and addresses the problem that existing edge enhancement methods enhance only the edge information and discard the main body information. The method comprises the following steps. Step one: acquiring a linear array image. Step two: intercepting a locomotive component subgraph from the linear array image. Step three: preprocessing the intercepted locomotive component subgraph and copying it into three identical image matrixes. Step four: converting one of the image matrixes into a spectrum image through Fourier transform, filtering the spectrum image through a Gaussian high-pass filter, and converting the result back to the spatial domain through inverse Fourier transform to obtain a first filtered image matrix. Step five: filtering the remaining two image matrixes through two different filters to obtain two further image matrixes. Step six: normalizing the three filtered image matrixes, multiplying each by its weight, and adding them to obtain a single-channel image matrix, namely the edge-enhanced image.

Description

Method for enhancing image edge of railway locomotive component
Technical Field
The invention relates to the technical field of image processing, in particular to a method for enhancing an image edge of a railway locomotive component.
Background
Existing edge enhancement methods include Sobel operator edge detection, Canny operator edge detection, high-pass filtering for contour extraction, dual-threshold edge detection, and second-order differential edge detection. These algorithms belong to classical image processing and are all designed to emphasize edges; that is, in the processing result, edge information is presented as high gray-scale values and non-edge information as low gray-scale values. Although the above algorithms are robust in extracting edge information, they have the following problems:
1. although the edge information of the object in the image is highlighted, the main body information of the object is lost at the same time, so that the application field is limited;
2. for an object with a simple background, the edge detection method has certain robustness, but for an image with a complex background, the robustness of the edge detection method is not high;
3. the result of the edge detection is only suitable for traditional image processing and cannot be used as training data for deep learning;
4. the above-described edge detection algorithm is not robust against noise.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems of low robustness and poor noise immunity of existing edge enhancement methods in the prior art, a method for enhancing the image edge of a railway locomotive component is provided.
The technical scheme adopted by the invention to solve the technical problems is as follows:
a method of edge enhancement of a railroad locomotive component image comprising the steps of:
Step one: acquiring a linear array image of a railway locomotive;
Step two: intercepting a locomotive component subgraph according to the linear array image of the railway locomotive;
Step three: preprocessing the intercepted locomotive component subgraph and copying it into three identical image matrixes AHE1(x, y), AHE2(x, y) and AHE3(x, y);
Step four: converting the image matrix AHE1(x, y) into a spectrum image through Fourier transform, filtering the spectrum image through a Gaussian high-pass filter to obtain an image matrix H(u, v), and then converting H(u, v) back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y);
Step five: filtering the image matrixes AHE2(x, y) and AHE3(x, y) through two different filters to obtain an image matrix S(x, y) and an image matrix G(x, y), respectively;
Step six: normalizing the image matrixes I(x, y), S(x, y) and G(x, y) to obtain I′(x, y), S′(x, y) and G′(x, y), then multiplying I′(x, y), S′(x, y) and G′(x, y) by their corresponding weights, respectively, and adding them to obtain a single-channel image matrix, namely the edge-enhanced image.
Further, the preprocessing in the third step comprises the following specific steps: first, median filtering is performed on the intercepted locomotive component subgraph, and then adaptive histogram equalization is performed on the median-filtered locomotive component subgraph.
Further, the median filtering process is expressed as:
Median(x, y) = medianA{f(x, y)}
where A is a 3 × 3 window and f(x, y) is the locomotive component sub-image matrix.
Further, the specific steps of the fourth step are as follows:
First, the preprocessed image matrix AHE1(x, y) is converted into a spectrum image through a two-dimensional discrete fast Fourier transform, and the spectrum is centered to obtain a spectrum image F(u, v); then Gaussian high-pass filtering is applied to F(u, v) to obtain an image H(u, v); finally, the obtained H(u, v) is converted back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y).
Further, the image H(u, v) is represented as:
H(u, v) = F(u, v) × (1 − exp(−D²(u, v) / (2F0²)))
where D(u, v) is the distance from the point (u, v) to the center of the spectrum, F0 represents the filter radius of the Gaussian high-pass filter, and F0 is taken as 1/5 of the short-side length of the image matrix AHE1(x, y).
Further, the specific steps of the fifth step are as follows:
first, the preprocessed image matrix AHE2(x, y) is filtered in the X direction and the Y direction respectively through the Scharr operator; the filtered images are then smoothed with a Gaussian filter, and the smoothed images are fused to obtain an image S(x, y);
the preprocessed image matrix AHE3(x, y) is smoothed by a Gaussian filter and a median filter to obtain an image G(x, y).
Further, in the weighted fusion, the weights in the X direction and the Y direction are each 0.5.
Further, the single-channel image matrix is represented as:
N(x,y)={I′(x,y)×α+S′(x,y)×β+G′(x,y)×γ}×255
where α, β, and γ are the weighting coefficients corresponding to I′(x, y), S′(x, y), and G′(x, y), respectively.
Further, α, β, and γ satisfy the following conditions:
α + β + γ = 1
further, α, β and γ are: α = 0.3, β = 0.1, and γ = 0.6.
The invention has the beneficial effects that:
1. the characteristic information of the object is kept while the edge is enhanced;
2. the method also has stronger robustness for the edge enhancement of the complex background image;
3. the image after edge enhancement can be used as training data for deep learning;
4. has good anti-interference and anti-noise capability.
Drawings
FIG. 1 is comparison diagram 1 of an original image and an edge-enhanced image;
FIG. 2 is comparison diagram 2 of an original image and an edge-enhanced image;
FIG. 3 is comparison diagram 3 of an original image and an edge-enhanced image;
FIG. 4 is comparison diagram 4 of an original image and an edge-enhanced image;
FIG. 5 is comparison diagram 5 of an original image and an edge-enhanced image;
FIG. 6 is comparison diagram 6 of an original image and an edge-enhanced image;
FIG. 7 is a partial enlarged view of an original image and the edge-enhanced image;
FIG. 8 is a schematic diagram of data enhancement results 1;
FIG. 9 is a schematic diagram of data enhancement results 2;
FIG. 10 is an expanded flow chart of the method of the present application;
fig. 11 is a flow chart of the present application.
Detailed Description
It should be noted that, in the case of conflict, the features included in the embodiments or the embodiments disclosed in the present application may be combined with each other.
The first embodiment is as follows: referring to fig. 11, the method for enhancing the image edge of the railway locomotive component according to the embodiment includes the following steps:
Step one: acquiring a linear array image of a railway locomotive;
Step two: intercepting a locomotive component subgraph according to the linear array image of the railway locomotive;
Step three: preprocessing the intercepted locomotive component subgraph and copying it into three identical image matrixes AHE1(x, y), AHE2(x, y) and AHE3(x, y);
Step four: converting the image matrix AHE1(x, y) into a spectrum image through Fourier transform, filtering the spectrum image through a Gaussian high-pass filter to obtain an image matrix H(u, v), and then converting H(u, v) back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y);
Step five: filtering the image matrixes AHE2(x, y) and AHE3(x, y) through two different filters to obtain an image matrix S(x, y) and an image matrix G(x, y), respectively;
Step six: normalizing the image matrixes I(x, y), S(x, y) and G(x, y) to obtain I′(x, y), S′(x, y) and G′(x, y), then multiplying I′(x, y), S′(x, y) and G′(x, y) by their corresponding weights, respectively, and adding them to obtain a single-channel image matrix, namely the edge-enhanced image.
Linear array image acquisition
High-definition linear array imaging equipment is arranged on both sides and at the bottom of the rail; a passing train triggers a piezoelectric sensor, which starts the imaging equipment to scan the moving locomotive. A high-definition linear array image is obtained after line-by-line scanning.
Acquisition of sub-images to be detected
Different modules or parts of the locomotive are cropped according to the axle distance of the train, the train type and prior knowledge to obtain sub-images; acquiring sub-images effectively reduces the time required for fault recognition and improves recognition accuracy.
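A minimal sketch of this cropping step is shown below; it assumes OpenCV, and the file name and region coordinates are hypothetical placeholders rather than values from the patent.

```python
import cv2

def crop_subimages(linescan_path, rois):
    """Crop component sub-images from a line-scan image.
    rois: list of (x, y, w, h) boxes derived from axle spacing, train type and prior knowledge."""
    image = cv2.imread(linescan_path, cv2.IMREAD_GRAYSCALE)  # single-channel line-scan image
    return [image[y:y + h, x:x + w] for (x, y, w, h) in rois]

# Assumed example: two regions of interest for two locomotive components.
subimages = crop_subimages("linescan.png", [(1200, 300, 512, 512), (4800, 250, 640, 480)])
```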
Subgraph analysis and edge enhancement
Because the image acquired by the linear array camera is a single-channel gray-scale image without color information, edge information can be distinguished only through changes in gray value. This leads to the following two conditions, which directly affect the accuracy of subsequent fault detection:
a. when the light is not uniformly distributed, the acquired linear array image shows regions of different brightness;
b. when the background area contains dirt and oil stains, it is difficult to distinguish the information characteristics of the vehicle body parts in the obtained sub-image.
Cases a and b, referred to herein as noise interference, are easily identified as edge regions by conventional edge detection and enhancement methods, and these regions are enhanced while the subject information of the object is lost. The edge enhancement method herein improves and fuses traditional edge detection operators and solves the above problems well. Its basic idea is as follows: first, after image preprocessing, the cropped subgraph is copied into three identical matrixes; one of the matrixes is converted into a spectrogram through Fourier transform, enhanced by a Gaussian high-pass filter, and converted back to the spatial domain, while the remaining two matrixes are filtered through two different filters; finally, the three processed matrixes are multiplied by their corresponding weights and fused into a new single-channel image matrix.
Image matrix edge enhancement
Image matrix edge enhancement is divided into three parts: frequency-domain edge enhancement, spatial-domain edge filtering, and image filtering for denoising.
The second embodiment: this embodiment further describes the first embodiment, and differs from it in that the preprocessing in the third step comprises the following specific steps: first, median filtering is performed on the intercepted locomotive component subgraph, and then adaptive histogram equalization is performed on the median-filtered locomotive component subgraph.
The third embodiment: this embodiment further describes the second embodiment, and differs from it in that the median filtering is expressed as:
Median(x, y) = medianA{f(x, y)}
where A is a 3 × 3 window and f(x, y) is the image matrix.
First, median filtering is performed on the obtained subgraphs to eliminate isolated noise points generated during acquisition by the linear array camera. The median filter formula is:
Median(x, y) = medianA{f(x, y)}
where A is a 3 × 3 window and f(x, y) is the image matrix.
Adaptive histogram equalization (AHE) is then applied to the image. The image acquired by the line-scan camera shows different brightness in different regions as the illumination varies, but the pixel-value conversion rules of edge regions and non-edge regions are the same, so the acquired image has consistent gray-value characteristics; adaptive histogram equalization is therefore adopted to improve the local contrast of the image and obtain more image details. The image AHE(x, y) is obtained after adaptive histogram equalization (AHE) processing, and is then copied into three identical image matrixes, denoted AHE1(x, y), AHE2(x, y) and AHE3(x, y).
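A minimal sketch of this preprocessing, assuming OpenCV: cv2.medianBlur provides the 3 × 3 median filter, and OpenCV's CLAHE is used here as a stand-in for adaptive histogram equalization (the clipLimit, tileGridSize and file name are assumptions, not values from the patent).

```python
import cv2

def preprocess(subimage):
    """3x3 median filtering followed by adaptive histogram equalization, copied into three matrices."""
    median = cv2.medianBlur(subimage, 3)                          # Median(x, y): remove isolated noise points
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # CLAHE as a stand-in for AHE (assumed parameters)
    ahe = clahe.apply(median)                                     # improve local contrast
    return ahe.copy(), ahe.copy(), ahe.copy()                     # AHE1, AHE2, AHE3

sub = cv2.imread("component_subimage.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
ahe1, ahe2, ahe3 = preprocess(sub)
```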
The fourth embodiment: this embodiment further describes the third embodiment, and differs from it in that the specific steps of the fourth step are:
first, the preprocessed image matrix AHE1(x, y) is converted into a spectrum image through a two-dimensional discrete fast Fourier transform, and the spectrum is centered to obtain a spectrum image F(u, v); then Gaussian high-pass filtering is applied to F(u, v) to obtain an image H(u, v); finally, the obtained H(u, v) is converted back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y).
Frequency-domain edge enhancement aims to obtain the main body contour information of the image. First, the preprocessed image AHE1(x, y) is converted into a spectrum image through a two-dimensional discrete fast Fourier transform, and the spectrum is centered to obtain the spectrum image F(u, v). Obtaining the contour information of the image requires high-pass filtering; in order to make the high-pass filtered image transition smoothly and avoid ringing, Gaussian high-pass filtering is used to filter the spectrogram F(u, v) to obtain an image H(u, v), whose expression is as follows:
H(u, v) = F(u, v) × (1 − exp(−D²(u, v) / (2F0²)))
where D(u, v) is the distance from (u, v) to the center of the spectrum and F0 represents the filter radius of the Gaussian high-pass filter; experiments show that when F0 is taken as 1/5 of the short-side length of the AHE1(x, y) image matrix, the subject contour features of the image are the most prominent.
Finally, the resulting H(u, v) is converted back to the spatial domain by an inverse Fourier transform, resulting in the image matrix I(x, y).
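A sketch of this frequency-domain branch using NumPy; the Gaussian high-pass transfer function 1 − exp(−D²/(2F0²)) is the standard form and is assumed here to be the filter the patent refers to.

```python
import numpy as np

def frequency_edge(ahe1):
    """FFT -> centered spectrum F(u, v) -> Gaussian high-pass -> inverse FFT -> I(x, y)."""
    img = ahe1.astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(img))              # centered spectrum F(u, v)
    h, w = img.shape
    f0 = min(h, w) / 5.0                                      # filter radius: 1/5 of the short side
    u = np.arange(h)[:, None] - h / 2.0
    v = np.arange(w)[None, :] - w / 2.0
    dist2 = u ** 2 + v ** 2                                   # squared distance to the spectrum center
    hp = 1.0 - np.exp(-dist2 / (2.0 * f0 ** 2))               # Gaussian high-pass transfer function (assumed form)
    filtered = spectrum * hp                                  # H(u, v)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(filtered)))   # back to the spatial domain: I(x, y)
```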
The fifth embodiment: this embodiment further describes the fourth embodiment, and differs from it in that the image H(u, v) is represented as:
H(u, v) = F(u, v) × (1 − exp(−D²(u, v) / (2F0²)))
where D(u, v) is the distance from (u, v) to the center of the spectrum, F0 represents the filter radius of the Gaussian high-pass filter, and F0 is taken as 1/5 of the short-side length of the image.
The sixth embodiment: this embodiment further describes the fifth embodiment, and differs from it in that the specific steps of the fifth step are:
first, the preprocessed image matrix AHE2(x, y) is filtered in the X direction and the Y direction respectively through the Scharr operator; the filtered images are then smoothed with a Gaussian filter, and the smoothed images are fused to obtain an image S(x, y). The image fusion is completed using a weighted average image fusion algorithm.
The preprocessed image matrix AHE3(x, y) is smoothed by a Gaussian filter and a median filter to obtain an image G(x, y).
Spatial-domain edge filtering aims to obtain the detail contour information of the image. The Scharr operator is a discrete differential operator for edge detection; it has a smoothing effect on noise, provides more accurate edge direction information, and has higher accuracy than the Sobel operator. Therefore, the preprocessed image AHE2(x, y) is first filtered in the X direction and the Y direction through the Scharr operator. Because the collected contour information of the locomotive component has certain continuity, Gaussian smoothing is performed on the images after the Scharr filtering in the X and Y directions, which eliminates the interference of non-contour noise information. The processed images are then fused by weights; since the contour information in the X and Y directions is equally important, the fusion weights are each 0.5, and the filtered images in the X and Y directions are uniformly fused to obtain an image S(x, y).
Image filtering aims to eliminate noise interference while retaining the subject information of the image. In order to eliminate the noise interference caused by image acquisition or by the vehicle body while keeping the main body information of the image, the preprocessed image AHE3(x, y) is smoothed by a Gaussian filter and a median filter to obtain an image G(x, y).
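A sketch of the two spatial-domain branches, assuming OpenCV; the Gaussian and median kernel sizes are assumptions, since the patent does not state them.

```python
import cv2
import numpy as np

def spatial_edge(ahe2):
    """Scharr filtering in X and Y, Gaussian smoothing, then 0.5/0.5 fusion -> S(x, y)."""
    gx = cv2.Scharr(ahe2, cv2.CV_32F, 1, 0)               # X-direction Scharr gradient
    gy = cv2.Scharr(ahe2, cv2.CV_32F, 0, 1)               # Y-direction Scharr gradient
    gx = cv2.GaussianBlur(np.abs(gx), (5, 5), 0)          # suppress non-contour noise (assumed kernel)
    gy = cv2.GaussianBlur(np.abs(gy), (5, 5), 0)
    return cv2.addWeighted(gx, 0.5, gy, 0.5, 0)           # uniform fusion of the two directions

def filter_body(ahe3):
    """Gaussian then median smoothing -> G(x, y) (assumed kernel sizes)."""
    return cv2.medianBlur(cv2.GaussianBlur(ahe3, (5, 5), 0), 3)
```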
The seventh embodiment: this embodiment further describes the sixth embodiment, and differs from it in that the fusion weights are each 0.5 and the filtered images in the X and Y directions are uniformly fused. The filtering in the X direction and the filtering in the Y direction are performed by the Scharr operator respectively, giving two filtered images in the X and Y directions.
The fusion weight means that when the two images are combined into one image, the proportion of each image is 0.5, that is, the two images are uniformly fused into one image.
The eighth embodiment: this embodiment further describes the seventh embodiment, and differs from it in that the normalization in step six is expressed as:
image′ = (image − min(image)) / (max(image) − min(image))
where image is the matrix being normalized and image′ represents the normalized image.
Image fusion
In order to ensure that the pixel ranges of the edge-enhanced images I(x, y), S(x, y) and G(x, y) are consistent, the images are normalized. The normalization expression is as follows:
image′ = (image − min(image)) / (max(image) − min(image))
where image is the matrix being normalized and image′ represents the normalized image.
The ninth embodiment: this embodiment further describes the eighth embodiment, and differs from it in that the single-channel image matrix is expressed as:
N(x,y)={I′(x,y)×α+S′(x,y)×β+G′(x,y)×γ}×255
where α, β, and γ are the weighting coefficients corresponding to I′(x, y), S′(x, y), and G′(x, y), respectively.
The tenth embodiment: this embodiment further describes the ninth embodiment, and differs from it in that α, β, and γ satisfy the following condition:
α + β + γ = 1
after normalization, images I ' (x, y), S ' (x, y), and G ' (x, y) were obtained. Multiplying the images by corresponding weights and adding the images to obtain an image N (x, y), wherein the expression is as follows:
N(x,y)={I′(x,y)×α+S′(x,y)×β+G′(x,y)×γ}×255
in the formula, α, β, and γ are weighting coefficients corresponding to I ' (x, y), S ' (x, y), and G ' (x, y), and α, β, and γ are required to satisfy the following conditions in order to enhance contour information while ensuring the original features of an image:
Figure BDA0002807013850000073
as can be seen from comparative experiments, for a locomotive component image, when α is 0.3, β is 0.1, and γ is 0.6, the edge contour enhancement effect of the component is the best, as shown in fig. 1 and 2, fig. 1 is an original image, and fig. 2 is an edge-enhanced image.
It is apparent from fig. 1 to 6 that the image contrast and the edge information are significantly improved. As shown in fig. 7, the image indicated by the upper dotted line is a partial enlarged view of the original image, and the image indicated by the lower dotted line is a partial enlarged view after edge enhancement.
When the method of the present invention is applied to the existing VOC data set, the data enhancement results are shown in fig. 8 and fig. 9, where fig. 8 is an original gray-scale image and fig. 9 is an edge-enhanced image. As can be seen from the figures, the contour information of different objects is significantly enhanced. After the data set is made, it is put into Mask R-CNN for training; under the same configuration conditions, the segmentation accuracy is improved by 0.8%.
Fig. 10 is an expanded flow chart of the complete technical solution of the present application.
It should be noted that the detailed description is only for illustrating and explaining the technical solution of the present invention, and the scope of protection of the claims is not limited thereby. It is intended that all such modifications and variations be included within the scope of the invention as defined in the claims and the description.

Claims (10)

1. A method of edge enhancement of a railroad locomotive component image, comprising the steps of:
Step one: acquiring a linear array image of a railway locomotive;
Step two: intercepting a locomotive component subgraph according to the linear array image of the railway locomotive;
Step three: preprocessing the intercepted locomotive component subgraph and copying it into three identical image matrixes AHE1(x, y), AHE2(x, y) and AHE3(x, y);
Step four: converting the image matrix AHE1(x, y) into a spectrum image through Fourier transform, filtering the spectrum image through a Gaussian high-pass filter to obtain an image matrix H(u, v), and then converting H(u, v) back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y);
Step five: filtering the image matrixes AHE2(x, y) and AHE3(x, y) through two different filters to obtain an image matrix S(x, y) and an image matrix G(x, y), respectively;
Step six: normalizing the image matrixes I(x, y), S(x, y) and G(x, y) to obtain I′(x, y), S′(x, y) and G′(x, y), then multiplying I′(x, y), S′(x, y) and G′(x, y) by their corresponding weights, respectively, and adding them to obtain a single-channel image matrix, namely the edge-enhanced image.
2. The method for enhancing the image edge of the railway locomotive component according to claim 1, wherein the preprocessing in the third step comprises the following specific steps: first, median filtering is performed on the intercepted locomotive component subgraph, and then adaptive histogram equalization is performed on the median-filtered locomotive component subgraph.
3. The method of claim 2, wherein said median filtering process is represented as:
Median(x, y) = medianA{f(x, y)}
wherein A is a 3 × 3 window and f(x, y) is the locomotive component sub-image matrix.
4. The method of claim 3, wherein the fourth step comprises the following steps:
first, the preprocessed image matrix AHE1(x, y) is converted into a spectrum image through a two-dimensional discrete fast Fourier transform, and the spectrum is centered to obtain a spectrum image F(u, v); then Gaussian high-pass filtering is applied to F(u, v) to obtain an image H(u, v); finally, the obtained H(u, v) is converted back to the spatial domain through inverse Fourier transform to obtain an image matrix I(x, y).
5. The railroad locomotive component image edge enhancement method of claim 4, wherein the image H(u, v) is represented as:
H(u, v) = F(u, v) × (1 − exp(−D²(u, v) / (2F0²)))
wherein D(u, v) is the distance from the point (u, v) to the center of the spectrum, F0 represents the filter radius of the Gaussian high-pass filter, and F0 is taken as 1/5 of the short-side length of the image matrix AHE1(x, y).
6. The method of claim 5, wherein step five comprises the following steps:
first, the preprocessed image matrix AHE2(x, y) is filtered in the X direction and the Y direction respectively through the Scharr operator; the filtered images are then smoothed with a Gaussian filter, and the smoothed images are fused to obtain an image S(x, y);
the preprocessed image matrix AHE3(x, y) is smoothed by a Gaussian filter and a median filter to obtain an image G(x, y).
7. The method of claim 6, wherein the weight fusion has a weight of 0.5 in the X direction and a weight of 0.5 in the Y direction.
8. The railroad locomotive component image edge enhancement method of claim 7, wherein the single channel image matrix is represented as:
N(x,y)={I′(x,y)×α+S′(x,y)×β+G′(x,y)×γ}×255
wherein α, β, and γ are the weighting coefficients corresponding to I′(x, y), S′(x, y), and G′(x, y), respectively.
9. The method of claim 8, wherein α, β and γ satisfy the following condition:
α + β + γ = 1
10. The railroad locomotive component image edge enhancement method of claim 9, wherein α, β and γ are: α = 0.3, β = 0.1, and γ = 0.6.
Application CN202011371951.1A, priority date 2020-11-30, filing date 2020-11-30: Method for enhancing image edge of railway locomotive component, published as CN112488940A (pending).

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210312