CN108694349B - Pantograph image extraction method and device based on linear array camera - Google Patents


Info

Publication number
CN108694349B
CN108694349B (application CN201710223798.XA)
Authority
CN
China
Prior art keywords
pantograph
image
gradient
detected
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710223798.XA
Other languages
Chinese (zh)
Other versions
CN108694349A (en)
Inventor
宋平 (Song Ping)
张楠 (Zhang Nan)
王瑞锋 (Wang Ruifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Tangyuan Electric Co., Ltd.
Original Assignee
Chengdu Tangyuan Electric Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Tangyuan Electric Co., Ltd.
Priority to CN201710223798.XA
Publication of CN108694349A
Application granted
Publication of CN108694349B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/2163: Partitioning the feature space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by analysis of parts of the pattern, by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of monitoring the operating state of a pantograph, and provides a pantograph image extraction method and device based on a linear array (line-scan) camera, aiming at the problems in the prior art. Whether the working state of the pantograph is abnormal is monitored from the extracted pantograph image, so as to guide maintenance. The method performs feature detection and initial positioning of the pantograph on an image to be detected according to an established pantograph classification model. If a pantograph exists in the image to be detected, the pantograph is positioned a second time: the positions of the pantograph slide plate and the contact line in the image are determined, and whether a real pantograph exists in the image is judged from the relative positions of the slide plate and the contact line; otherwise, the initial positioning step is executed on the next image to be detected.

Description

Pantograph image extraction method and device based on linear array camera
Technical Field
The invention relates to the field of monitoring the running state of a pantograph, and in particular to a pantograph image extraction method and device based on a linear array camera.
Background
The pantograph slide-plate monitoring device mainly detects the running state of a pantograph. A camera is installed at a position the train passes, pantograph images are captured at a fixed point, and abnormal states of the carbon slide plate can be monitored, thereby guiding maintenance.
Previously, pantograph images were mainly captured with area-array cameras, which has the following disadvantages: 1. The area-array camera is triggered at a fixed point and takes only one or two pictures, and the hardware trigger device is prone to false triggering, so the captured images may contain no pantograph or only part of one. 2. Because pantograph monitoring is real-time online monitoring and high-speed trains run very fast, the frame rate of an area-array camera cannot keep up with the train speed, so the captured pantograph pictures easily show smear.
With the camera fixed at a position the train passes, a line-scan camera photographing the train roof can capture an image of the entire roof, from which a complete pantograph image is extracted.
Pantograph images captured by a line-scan camera have the following characteristics: 1. Varying illumination conditions. The pantograph monitoring device is an all-weather detection device; in daytime, strong reflection from the pantograph under sunlight makes the captured pantograph too bright, while some conventional-speed trains are coal freight trains whose roofs are black, so the pantograph also appears black. 2. Mismatch between the line rate of the line-scan camera and the train speed. Generally, before the monitoring device operates, the camera line rate is set to match the expected train speed, but sudden acceleration and deceleration are unavoidable while the train runs, and a mismatch distorts the captured pantograph image, enlarging or shrinking it. 3. Differing site conditions at installation points. Because different monitoring devices are installed by different personnel at different positions, the pantograph can appear at different positions in the image. Previously, pantographs were generally extracted from line-scan images with traditional image processing methods. Given the above characteristics, traditional methods have the defects that too many algorithm parameters must be set and they are sensitive to the environment; different parameters may be needed at different installation points, the workload is extremely high, and parameter setting is empirical, so people other than image-processing professionals cannot set the parameters well.
Disclosure of Invention
The technical problem to be solved by the invention is: aiming at the problems in the prior art, to provide a pantograph image extraction method and device based on a linear array camera, and to monitor whether the working state of the pantograph is abnormal according to the extracted pantograph image, thereby facilitating pantograph maintenance.
To solve the above technical problems, the invention adopts the following technical scheme: a pantograph image extraction method based on a linear array camera comprises the following steps:
a pantograph classification model creating step: preprocessing the collected positive and negative sample images of the pantograph; extracting characteristic values of all positive and negative sample images of the pantograph to obtain directional gradient histogram characteristics of the positive and negative sample images of the pantograph; training and learning are carried out by utilizing the extracted positive and negative sample image direction gradient histogram characteristics, and an optimal classification hyperplane, namely a pantograph classification model, for segmenting two training samples is determined;
primary positioning of the pantograph: extracting directional gradient histogram features of an image to be detected, carrying out classification detection on the image according to a pantograph classification model, and primarily judging whether a pantograph exists in the image to be detected;
accurate positioning of the pantograph: if the pantograph exists in the image to be detected, the pantograph is positioned again, the positions of a pantograph slide plate and a contact line are judged, and whether the pantograph is a real pantograph or not is judged according to the relative positions of the pantograph slide plate and the contact line; otherwise, executing the pantograph initial positioning step on the next image to be detected.
Further, the pantograph positive and negative sample image preprocessing comprises:
traversing each pixel (x, y) in the positive and negative pantograph sample images and taking the median of all pixels in the pixel's 8-neighborhood as its new value, thereby removing random salt-and-pepper noise from the images (filtering and denoising);
carrying out histogram equalization processing on the filtered and denoised positive and negative pantograph images;
the filtering and denoising can also be performed with mean filtering, Gaussian filtering, bilateral filtering, or guided filtering.
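The 3×3 median filtering step above can be sketched in a few lines. This is a minimal illustrative NumPy version, not the patent's implementation; the function name and the toy test image are invented for demonstration.

```python
import numpy as np

def median_filter_3x3(img):
    # Replace each interior pixel with the median of itself and its
    # 8-neighbourhood, removing salt-and-pepper noise; borders are kept.
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255                  # one isolated salt-noise speck
clean = median_filter_3x3(img)
print(clean[2, 2])               # 0 -- the speck is removed
```

In practice the same result comes from a library call such as OpenCV's `medianBlur`, which the patent's alternatives (mean, Gaussian, bilateral, guided filtering) also have counterparts for.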
Further, the feature extraction method comprises the following steps:
normalizing all positive and negative sample images of the pantograph according to a formula (1);
I(x,y) = I(x,y)^γ  (1)
carrying out gradient calculation in the x direction and the y direction on positive and negative sample images of the pantograph, and respectively and correspondingly obtaining the gradient of a pixel point (x, y) in each sample image in the x direction and the gradient in the y direction as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y) (2)
Gy(x,y)=H(x,y+1)-H(x,y-1) (3)
obtaining the gradient amplitude and the gradient direction at the pixel point (x, y) according to the formulas (2) and (3);
G(x,y) = √( Gx(x,y)² + Gy(x,y)² )  (4)
α(x,y) = arctan( Gy(x,y) / Gx(x,y) )  (5)
traversing each pixel point in each sample image, and calculating to obtain the gradient amplitude and the gradient direction of all the pixel points; and combining all gradient amplitudes and gradient directions, and performing histogram statistical calculation to obtain the directional gradient histogram characteristics of the whole image, thereby obtaining the directional gradient histogram characteristics of all pantograph positive and negative sample images.
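The gradient and histogram computation of formulas (1)-(5) can be sketched as follows. This is a minimal NumPy illustration under assumptions the patent does not specify (a single whole-image histogram with 9 unsigned orientation bins, L2 normalization); the test image is invented.

```python
import numpy as np

def gradients(img):
    # Central differences per eqs. (2)-(3); magnitude and direction
    # per eqs. (4)-(5). Border rows/columns are left at zero.
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # H(x+1,y) - H(x-1,y)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # H(x,y+1) - H(x,y-1)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned direction
    return mag, ang

def hog_histogram(mag, ang, bins=9):
    # Orientation histogram over the whole image, weighted by magnitude,
    # then L2-normalised -- a simplified stand-in for cell/block HOG.
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)

img = np.tile(np.arange(8, dtype=np.float64), (8, 1))  # horizontal ramp
mag, ang = gradients(img)
feat = hog_histogram(mag, ang)
print(feat.shape)   # (9,)
```

For the horizontal ramp, all gradient energy falls into the 0° bin, so the first histogram entry dominates; a real HOG descriptor would additionally split the image into cells and blocks.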
Further, the specific implementation process of the pantograph classification model comprises the following steps:
a step of setting a classification linear equation: let {(x_i, y_i)}, i = 1, …, N, be the N sample points, where x_i ∈ R^N is a sample image (its feature vector) and y_i ∈ {+1, −1} is the expected output denoting the class to which x_i belongs, i.e. whether x_i is a positive sample (expected output +1) or a negative sample (expected output −1); X denotes all the x_i and serves as the training data. X and Y are known quantities, since all sample images were labelled as positive or negative before model creation; w denotes the weight and b the offset, both unknown quantities. The classification linear equation of the linear space is:
Y=w·X+b (6)
an optimal classification line is sought that separates the two classes while maximizing the classification margin, so that both classes of samples satisfy y_i(w·x_i + b) ≥ 1. The classification margin equals 2/||w||: since w·x + b = 0 is the separating line of positive and negative samples, the distance from a point on the line w·x + b = 1 (or −1) to the line w·x + b = 0 is 1/||w||, so the total margin is 2/||w||. Maximizing the margin is equivalent to the following optimization problem:
min G(w) = ||w||²  (7)

subject to y_i[(w·x_i) + b] − 1 ≥ 0, i = 1, …, N; training samples with y_i[(w·x_i) + b] − 1 = 0 are called support vectors;
a Lagrange function construction step: the following Lagrangian function is constructed, converting the problem into a quadratic programming problem:

L(w, b, a) = (1/2)||w||² − Σ_{i=1..N} a_i { y_i[(w·x_i) + b] − 1 }  (8)

in the formula, a = (a_1, a_2, …, a_N) is the vector of Lagrange multipliers; the minimum of formula (7) is a saddle point of formula (8), and by taking partial derivatives of L with respect to w and b the problem can be converted into the dual of formula (7), i.e. solving for the maximum of the function Φ(a):

Φ(a) = Σ_{i=1..N} a_i − (1/2) Σ_{i=1..N} Σ_{j=1..N} a_i a_j y_i y_j (x_i · x_j)  (9)
the constraint condition is
Figure BDA0001264566160000043
If it is
Figure BDA0001264566160000044
For the optimal solution, the partial derivative function calculation is performed on the formula (8) to obtain:
Figure BDA0001264566160000045
then the
Figure BDA0001264566160000046
Constructing the linear-classification optimal classification function: according to formula (6), the optimal classification function f(x) is constructed as

f(x) = sgn( (w* · x) + b* ) = sgn( Σ_{i=1..N} a_i* y_i (x_i · x) + b* )  (10)

where w* = Σ_{i=1..N} a_i* y_i x_i and b* = y_j − Σ_{i=1..N} a_i* y_i (x_i · x_j) for any support vector x_j;
Constructing the non-linear-classification optimal classification hyperplane: for sample feature points that are not linearly separable, a non-linear mapping φ maps the sample feature points into a high-dimensional feature space, where linear classification is performed; if such a mapping φ can be found, the inner-product operation (x_i · x) can be replaced by (φ(x_i) · φ(x)); the inner product is usually represented by a kernel function K(x_i, x) = (φ(x_i) · φ(x));
the optimal classification hyperplane function for linear classification and non-linear classification can therefore be expressed as:
Figure BDA0001264566160000051
wherein the content of the first and second substances,
Figure BDA0001264566160000052
then the
Figure BDA0001264566160000053
A pantograph classification model construction step: and finally obtaining a pantograph classification model according to the formula (10) and the formula (11).
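The training procedure above (finding the optimal classification hyperplane of formulas (10)/(11)) corresponds to a standard support-vector machine. A minimal sketch using scikit-learn's `SVC` on invented stand-in features; in the patent, the inputs would be the HOG descriptors of the positive/negative sample images, and all names and numbers here are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in features: two well-separated Gaussian clusters
# taking the place of HOG descriptors of pantograph / background samples.
rng = np.random.default_rng(0)
pos = rng.normal(loc=+2.0, size=(50, 16))   # "pantograph" samples, label +1
neg = rng.normal(loc=-2.0, size=(50, 16))   # "background" samples, label -1
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [-1] * 50)

# kernel="linear" realises eq. (10); kernel="rbf" would realise the
# kernelised hyperplane of eq. (11) for non-linearly separable features.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(rng.normal(loc=+2.0, size=(1, 16))))   # [1]
```

The fitted `clf` plays the role of the pantograph classification model applied later to each ROI of the image to be detected.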
Further, the specific process of primarily judging whether the pantograph exists in the image to be detected is as follows:
inputting a pantograph sequence image shot by a linear array camera; connecting two adjacent images in the pantograph sequence image end to form an image to be detected;
setting a ROI (region of interest) in an image to be detected, continuously moving the ROI with the moving step length of n, and extracting the directional gradient histogram characteristics of each ROI image according to a characteristic extraction method;
after the directional gradient histogram features of the ROI images are extracted, classifying and detecting the pantograph of each ROI image according to a pantograph classification model, and judging whether the pantograph exists in the image to be detected;
and when a pantograph is judged to exist in a certain ROI, initial positioning is not performed on the remaining ROI images of that image to be detected, and initial pantograph positioning continues with the next image to be detected.
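The sliding-ROI initial positioning above, with moving step length n and an early stop on the first detection, can be sketched as follows; the window width, step, and toy classifier are invented for illustration.

```python
def sliding_rois(width, roi_w, step):
    # Yield the left edge of each ROI window across an image of the
    # given width, moving with step length n.
    x = 0
    while x + roi_w <= width:
        yield x
        x += step

def first_hit(width, roi_w, step, classify):
    # Stop at the first ROI the classifier flags, mirroring the rule
    # that later ROIs of the same image are skipped once a pantograph
    # is found.
    for x in sliding_rois(width, roi_w, step):
        if classify(x):
            return x
    return None

# toy classifier: pretend a pantograph is detected for ROIs starting
# anywhere in pixels 300-320
hit = first_hit(width=1000, roi_w=128, step=16, classify=lambda x: 300 <= x <= 320)
print(hit)   # 304
```

In the real device, `classify` would extract the ROI's HOG feature and evaluate it with the trained pantograph classification model.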
Further, the specific process of judging whether the pantograph is a real pantograph or not according to the relative positions of the pantograph slide plate and the contact line is as follows:
inputting a pantograph image with a pantograph at the initial positioning, performing X-direction gradient calculation or Y-direction gradient calculation on the pantograph image, and correspondingly obtaining X-direction and Y-direction gradient amplitudes respectively;
respectively carrying out binarization processing on the gradient amplitude diagrams in the X direction and the Y direction to respectively and correspondingly obtain gradient binary diagrams in the X direction and the Y direction;
extracting all connected areas in the gradient binary image in the X direction and the Y direction, respectively and correspondingly calculating the length, the width and the size of each connected area in the X direction and the Y direction, filtering the connected areas with the length, the width and the size not meeting the conditions, and obtaining a contact line suspected area parallel to the Y axis and a pantograph carbon slide plate suspected area parallel to the X axis;
obtaining a precisely positioned contact line according to whether the gray value of the suspected area of the contact line changes;
judging whether the pantograph carbon slide plate is a real pantograph carbon slide plate or not according to the corresponding position relation between the contact line and the pantograph carbon slide plate in practice based on the accurately positioned contact line gradient binary image and the suspected area gradient binary image of the pantograph carbon slide plate; accurate positioning and extraction of pantograph images are achieved.
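The connected-region filtering used above for the suspected contact-line and slide-plate regions can be sketched with SciPy's component labelling. The thresholds, orientation convention, and toy binary image are invented for illustration and are not the patent's values.

```python
import numpy as np
from scipy import ndimage

def filter_components(binary, min_len, max_thickness, axis):
    # Keep connected regions whose bounding box is long along `axis`
    # and thin across it: a crude stand-in for the length/width/size
    # filtering of suspected contact-line (vertical) and carbon
    # slide-plate (horizontal) regions in the gradient binary image.
    labels, _ = ndimage.label(binary)
    keep = np.zeros_like(binary)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        length, thickness = (h, w) if axis == "y" else (w, h)
        if length >= min_len and thickness <= max_thickness:
            keep[labels == i] = True
    return keep

grad_bin = np.zeros((40, 40), dtype=bool)
grad_bin[5:35, 20] = True        # long vertical stroke: contact-line candidate
grad_bin[10:13, 5:8] = True      # small blob: rejected by the size filter
kept = filter_components(grad_bin, min_len=20, max_thickness=3, axis="y")
print(int(kept.sum()))           # 30 -- only the vertical stroke survives
```

The surviving vertical region would then be checked against the gray-value criterion and against the horizontal slide-plate candidates, as the description requires.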
In order to solve the above technical problems, a technical solution of the present invention is to provide a pantograph image extracting device based on a line-scan camera, including:
a pantograph model creation module: the device is used for preprocessing the collected positive and negative sample images of the pantograph; extracting the characteristics of all positive and negative sample images of the pantograph to obtain the directional gradient histogram characteristics of the positive and negative sample images of the pantograph; training and learning are carried out by utilizing the extracted positive and negative sample image direction gradient histogram characteristics, and an optimal classification hyperplane, namely a pantograph classification model, for segmenting two training samples is determined;
pantograph primary positioning module: the method is used for extracting the directional gradient histogram characteristics of an image to be detected, classifying and detecting the image according to a pantograph classification model, and primarily judging whether a pantograph exists in the image to be detected;
accurate positioning module of pantograph: the pantograph control system is used for re-positioning the pantograph if the pantograph exists in the image to be detected, judging the positions of the pantograph slide plate and the contact line, and judging whether the pantograph is a real pantograph or not according to the relative positions of the pantograph slide plate and the contact line; otherwise, executing the pantograph initial positioning module on the next image to be detected.
Further, the extraction method comprises the following steps:
normalizing all positive and negative sample images of the pantograph according to a formula (1);
I(x,y) = I(x,y)^γ  (1)
carrying out gradient calculation in the x direction and the y direction on positive and negative sample images of the pantograph, and respectively and correspondingly obtaining the gradient of a pixel point (x, y) in each sample image in the x direction and the gradient in the y direction as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y) (2)
Gy(x,y)=H(x,y+1)-H(x,y-1) (3)
obtaining the gradient amplitude and the gradient direction at the pixel point (x, y) according to the formulas (2) and (3);
G(x,y) = √( Gx(x,y)² + Gy(x,y)² )  (4)
α(x,y) = arctan( Gy(x,y) / Gx(x,y) )  (5)
traversing each pixel point in each sample image, and calculating to obtain the gradient amplitude and the gradient direction of all the pixel points; and combining all gradient amplitudes and gradient directions, and performing histogram statistical calculation to obtain the directional gradient histogram characteristics of the whole image, thereby obtaining the directional gradient histogram characteristics of all pantograph positive and negative sample images.
Further, the specific process of primarily judging whether the pantograph exists in the image to be detected is as follows:
inputting a pantograph sequence image shot by a linear array camera; connecting two adjacent images in the pantograph sequence image end to form an image to be detected;
setting a ROI (region of interest) in an image to be detected, continuously moving the ROI with the moving step length of n, and extracting the directional gradient histogram characteristics of each ROI image according to a characteristic extraction method;
after the directional gradient histogram features of the ROI images are extracted, classifying and detecting the pantograph of each ROI image according to a pantograph classification model, and judging whether the pantograph exists in the image to be detected;
and when a pantograph is judged to exist in a certain ROI, pantograph positioning is no longer performed on the remaining ROI images of that image to be detected, and initial pantograph positioning continues with the next image to be detected.
Further, the specific process of judging whether the pantograph is a real pantograph or not according to the relative positions of the pantograph slide plate and the contact line is as follows:
inputting a pantograph image with a pantograph at the initial positioning, performing X-direction gradient calculation or Y-direction gradient calculation on the pantograph image, and correspondingly obtaining X-direction and Y-direction gradient amplitudes respectively;
respectively carrying out binarization processing on the gradient amplitudes in the X direction and the Y direction, and respectively and correspondingly obtaining gradient binary images in the X direction and the Y direction;
extracting all connected areas in the gradient binary image in the X direction and the Y direction, respectively and correspondingly calculating the length, the width and the size of each connected area in the X direction and the Y direction, filtering the connected areas with the length, the width and the size not meeting the conditions, and obtaining a contact line suspected area parallel to the Y axis and a pantograph carbon slide plate suspected area parallel to the X axis;
obtaining a precisely positioned contact line according to whether the gray value of the suspected area of the contact line changes;
judging whether the pantograph carbon slide plate is a real pantograph carbon slide plate or not according to the corresponding position relation between the contact line and the pantograph carbon slide plate in practice based on the accurately positioned contact line gradient binary image and the suspected area gradient binary image of the pantograph carbon slide plate; accurate positioning and extraction of pantograph images are achieved.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. Before pantograph image extraction, the image is first filtered as preprocessing. Filtering removes noise introduced during imaging, reduces image pollution, improves feature-extraction accuracy, and thereby improves the correct positioning rate of the pantograph.
2. The parameters in the method or the device are convenient to modify; in the pantograph model establishing process, no professional is needed, and other non-professionals can train and modify related parameters to obtain a pantograph classification model suitable for the self needs, so that subsequent image extraction is facilitated. Namely, the method or the device can be suitable for extracting different types of pantographs under different field conditions.
3. Because the method or the device performs histogram equalization processing on the preprocessed image, the image contrast is improved, and normalization processing is performed before feature extraction, the method or the device is not sensitive to light source change in the pantograph image extraction after the histogram equalization processing is performed on the image with uneven illumination and low image contrast; and the subsequent pantograph initial positioning and pantograph fine positioning processes ensure that the pantograph extraction accuracy is high, and the false extraction rate and the missing extraction rate are reduced. The method or the device can be suitable for pantograph image extraction under different illumination conditions, and can be used in the daytime and at night.
4. The method or the device carries out primary pantograph positioning on the image to be detected through steps of a pantograph classification model and the like, further measures parameters such as the position of a pantograph slide plate in the image to be detected by combining actual positions of the pantograph slide plate and a contact line in order to improve the accuracy of pantograph extraction, and finally judges whether the pantograph is really arranged in the image to be detected. The pantograph initial inspection and accurate positioning steps are combined, so that the pantograph in the image to be detected is accurately positioned, and the detection efficiency and accuracy are greatly improved.
5. With high pantograph image extraction accuracy and reduced false-extraction and missed-extraction rates, whether the working state of the pantograph is abnormal can be monitored simply, accelerating pantograph maintenance work.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
Description of the invention:
1. The pantograph slide plate is parallel to the x direction; the contact line is parallel to the y direction; the x and y directions are the x-axis direction of the two-dimensional pantograph image pixels and the vertical y-axis direction perpendicular to it.
2. Generally, the sequence images cover the roof of the entire train.
the specific working principle of the invention is as follows:
the method comprises the following steps: collecting positive and negative sample images related to the pantograph picture;
the method comprises the steps of collecting photos shot by a pantograph monitoring device (a linear array camera), extracting pantograph pictures under various types, various shooting angles and various illumination conditions to be positive samples, and extracting non-pantograph area pictures to be negative sample images.
Step two: training positive and negative samples of the pantograph, and establishing a pantograph algorithm model by using a machine learning algorithm; firstly, image preprocessing (image denoising and image enhancement) is carried out on positive and negative sample images, then feature extraction is carried out, the purposes of carrying out dimension reduction processing on the images and reducing redundant information are achieved, and a machine learning algorithm is utilized to train and learn image feature information to obtain a pantograph algorithm model.
1. Image preprocessing: the positive and negative pantograph sample images are captured all-weather, 24 hours a day and night, so uneven illumination and low image contrast occur; in addition, the line-scan camera is subject to electronic interference, and to improve image transmission efficiency the original pantograph image is compressed in JPEG format during acquisition, so noise easily appears in the captured image. If the original image captured by the line-scan camera is not preprocessed, the accuracy of subsequent feature extraction, and hence of pantograph positioning, is affected.
The image preprocessing process adopted by the method is filtering denoising and histogram equalization.
(1) Filtering and denoising: pantograph images collected by the on-site image acquisition equipment (a linear array camera, mounted on a rigid crossbeam or pillar over the track to monitor the pantograph's working state in real time and thereby judge whether the pantograph works normally) contain a large amount of random salt-and-pepper noise. To remove it, the following method is taken: traverse each pixel (x, y) in the image and take its new value as the median of all pixels in the pixel's 8-neighborhood, as follows:
g'(x,y) = median{ g(x−1,y−1), g(x,y−1), g(x+1,y−1), g(x−1,y), g(x,y), g(x+1,y), g(x−1,y+1), g(x,y+1), g(x+1,y+1) }
of course, the median filtering described above may be replaced by mean filtering or gaussian filtering;
(2) Histogram equalization: because the images captured all-weather by the on-site acquisition equipment suffer from uneven illumination and low contrast, the images need to be enhanced to improve their contrast. Transforming the histogram of the original image into a uniform distribution increases the dynamic range of pixel gray values and enhances the overall contrast of the image. Assuming the gray level of the original image at (x, y) is f and the gray level of the transformed image is g, the enhancement method can be expressed as mapping the gray level f at (x, y) to g. The mapping function of gray-level histogram equalization can be defined as g = EQ(f), and this mapping function EQ(f) must satisfy two conditions (where L is the number of gray levels of the image): (1) EQ(f) is a monotonically increasing function for 0 ≤ f ≤ L−1 (L−1 is the maximum gray level in the image), so that the enhancement does not disturb the gray-level ordering of the original image: the gray levels are still arranged from black to white (or from white to black) after the transformation; (2) 0 ≤ g ≤ L−1 for 0 ≤ f ≤ L−1, which guarantees that the dynamic range of gray values is consistent before and after the transformation. The specific mapping function is
g_k = EQ(f_k) = (L-1)/n · Σ_{j=0}^{k} n_j,   k = 0, 1, 2, …, L-1
where k = 0, 1, 2, …, L-1, n is the total number of pixels in the image, and n_j is the number of pixels with gray level j. According to this equation, the equalized gray value of each pixel can be obtained directly from the gray value of the corresponding pixel in the original image.
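The cumulative mapping above can be sketched directly in NumPy (an illustrative implementation, not part of the patent; L = 256 is assumed for 8-bit images):

```python
import numpy as np

def hist_equalize(img, L=256):
    """Histogram equalization via the cumulative mapping
    g_k = (L-1)/n * sum_{j<=k} n_j described above."""
    n = img.size
    hist = np.bincount(img.ravel(), minlength=L)   # n_j for each gray level j
    cdf = np.cumsum(hist)                          # running sum of n_j
    mapping = np.round((L - 1) * cdf / n).astype(np.uint8)
    return mapping[img]                            # apply g = EQ(f) per pixel

# A low-contrast image (values 100..103) is stretched across the full range.
img = np.tile(np.array([100, 101, 102, 103], dtype=np.uint8), (4, 1))
out = hist_equalize(img)
print(out.min(), out.max())          # -> 64 255
```

Because the mapping is built from the cumulative histogram, it is monotonically increasing and stays within [0, L-1], satisfying both conditions stated above.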
2. Feature extraction: before the positive and negative sample images are modeled, the features of the pantograph positive and negative sample images are extracted with the feature extraction method to obtain the directional gradient histogram features of all positive and negative pantograph sample images. This process reduces the dimensionality of the images, reduces redundant image information, and improves the processing efficiency of the algorithm. The specific process of the feature extraction method is as follows:
(1) First, because the captured pantograph images are taken under different illumination conditions, each image must be normalized to reduce the influence of illumination and local shadows:
I(x, y) = I(x, y)^gamma   (1)
(2) To further weaken the influence of illumination, gradients in the x and y directions are computed for the pantograph image. The x-direction and y-direction gradients of pixel (x, y) in each sample image are obtained, respectively, as:
Gx(x,y)=H(x+1,y)-H(x-1,y) (2)
Gy(x,y)=H(x,y+1)-H(x,y-1) (3)
From the formulas above, the gradient magnitude and gradient direction at pixel (x, y) can be obtained:
G(x, y) = sqrt( Gx(x, y)^2 + Gy(x, y)^2 )   (4)
α(x, y) = arctan( Gy(x, y) / Gx(x, y) )   (5)
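Formulas (2) through (5) can be sketched in NumPy as follows (an illustrative implementation; leaving border pixels at zero is our own choice, not specified by the patent):

```python
import numpy as np

def gradients(H):
    """Central-difference gradients of formulas (2)-(3), plus the
    magnitude and direction of formulas (4)-(5). Borders stay 0."""
    H = H.astype(np.float64)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # H(x+1, y) - H(x-1, y)
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # H(x, y+1) - H(x, y-1)
    mag = np.hypot(Gx, Gy)               # gradient magnitude, formula (4)
    ang = np.degrees(np.arctan2(Gy, Gx)) # gradient direction, formula (5)
    return Gx, Gy, mag, ang

# A vertical step edge produces a purely x-direction gradient.
H = np.zeros((4, 4)); H[:, 2:] = 10
Gx, Gy, mag, ang = gradients(H)
print(Gx[1, 1], Gy[1, 1])            # -> 10.0 0.0
```

Using arctan2 instead of a plain arctan keeps the direction well defined when Gx(x, y) = 0, which a plain division would not.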
(3) Feature synthesis: traverse each pixel in each sample image and compute the gradient magnitude and gradient direction of all pixels; combine all gradient magnitudes and gradient directions and perform histogram statistics to obtain the directional gradient histogram feature of the whole image, thereby obtaining the directional gradient histogram features of all positive and negative pantograph sample images.
The gradient magnitudes and gradient directions are combined as follows: the whole image is first divided into a number of cells, for example 4 × 4 pixels each. The 360 degrees of gradient direction are divided evenly into 9 directions, and a histogram of 9 bins is used to count the gradient information of the 16 pixels in each cell. Each pixel in the cell is projected into the histogram according to its gradient direction (mapped to a fixed angle range), with its gradient magnitude used as the projection weight. For example, if the gradient direction of a pixel falls within the angle range of the 2nd bin and its gradient magnitude is 2, the count of the 2nd bin of the histogram is increased by 2. The resulting gradient direction histogram of the cell is the 9-dimensional feature vector corresponding to that cell (one dimension per bin).
The gradient intensity varies over a very large range due to changes in local illumination and in foreground-background contrast, so the gradient strengths must be normalized. Normalization further suppresses the effects of lighting, shadows, and edges. Several cells are grouped into a block, and the feature vectors of all the cells in a block are concatenated to obtain the directional gradient histogram feature of the block. Finally, the feature vectors of all blocks in the image are concatenated to obtain the directional gradient histogram feature of the image.
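The cell/block assembly just described can be sketched as a minimal NumPy implementation (the cell size, 2x2-cell blocks, and the epsilon in the L2 normalization are illustrative choices, not the patent's exact pipeline):

```python
import numpy as np

def hog_features(mag, ang, cell=4, bins=9):
    """Magnitude-weighted 9-bin orientation histogram per 4x4-pixel cell
    over 360 degrees, then L2-normalized 2x2-cell blocks, concatenated."""
    h, w = mag.shape
    ang = ang % 360.0
    cy, cx = h // cell, w // cell
    bin_idx = (ang // (360.0 / bins)).astype(int).clip(0, bins - 1)
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist[i, j] = np.bincount(b, weights=m, minlength=bins)
    # Group 2x2 cells into blocks, L2-normalize each, concatenate all blocks.
    feats = []
    for i in range(cy - 1):
        for j in range(cx - 1):
            block = hist[i:i+2, j:j+2].ravel()
            feats.append(block / (np.linalg.norm(block) + 1e-6))
    return np.concatenate(feats)

mag = np.ones((8, 8)); ang = np.full((8, 8), 30.0)  # all gradients in bin 0
f = hog_features(mag, ang)
print(f.shape)                       # -> (36,)
```

An 8x8 image yields 2x2 cells and a single 2x2-cell block, hence 4 cells × 9 bins = 36 features; block-level normalization is what gives the descriptor its tolerance to local illumination change.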
The pantograph image here may be a positive or negative pantograph sample or a pantograph image to be detected.
3. Machine learning: after the features of all the images are obtained, they can be trained with a machine learning method to obtain the pantograph algorithm model; a support vector machine (SVM) learning algorithm is adopted. The support vector machine is mainly used to classify data: a separating plane (or line) is found in the feature space to divide the sample points, the optimal separating plane (line) being the one that maximizes the classification margin, where the margin is computed from the distance of points to the plane (line).
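As an illustrative aside (not part of the patent), the margin-maximizing training just described can be sketched with scikit-learn's LinearSVC on stand-in feature vectors; the data, dimensions, and parameters below are invented for demonstration:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy stand-ins for 36-dimensional HOG feature vectors: positive samples
# clustered well apart from negative samples.
rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, size=(50, 36))     # label +1 ("pantograph")
neg = rng.normal(loc=-2.0, size=(50, 36))    # label -1 ("background")
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [-1] * 50)

clf = LinearSVC(C=1.0)                       # maximizes the margin 2/||w||
clf.fit(X, y)
print(clf.predict(rng.normal(loc=2.0, size=(1, 36))))  # -> [1]
```

The learned `clf.coef_` and `clf.intercept_` play the roles of w and b in the classification linear equation derived below.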
(1) Description of nonlinear and linear classification during pantograph model building:
With the development of science and technology, various types of mass data are encountered in practical applications, such as stock-market trading data, multimedia graphic/image/video data, aerospace telemetry data, and biometric data; such data is generally called high-dimensional data in statistical processing.
As to whether data is linearly separable: for low-dimensional data, a plot can be drawn and the answer seen directly. For high-dimensional data, it is generally determined by computing whether the convex hulls intersect. A convex hull is a convex closed curve (or surface) that just encloses all the data points of a class. If the convex hulls drawn for the two classes do not overlap, the classes are linearly separable; otherwise they are not.
(2) The specific implementation process of the pantograph model is as follows:
A step of setting a classification linear equation: let {(x_i, y_i), i = 1, …, N} be N sample points, where x_i is a sample image and y_i denotes the class to which x_i belongs, with x_i ∈ R^n and y_i ∈ {+1, -1}. The expected output y_i of the training sample set indicates whether x_i belongs to the positive-sample class or the negative-sample class (for example, the expected output of the positive class is +1 and that of the negative class is -1). X denotes all the x_i, used as training data; the expected outputs +1 and -1 are the class labels of the positive and negative sample images respectively. X and Y are known quantities, all sample images having been labeled as positive or negative samples beforehand. w denotes the weight vector and b the offset; w and b are the unknown quantities. The classification linear equation of the linear space is:
Y=w·X+b (6)
Search for an optimal classification line that separates the classes while maximizing the classification margin; both classes of samples then satisfy y_i(w·x_i + b) ≥ 1, and the classification margin equals 2/||w||: since w·x + b = 0 is the classification line between the positive and negative samples, the distance from a point on the line w·x + b = 1 (or w·x + b = -1) to the line w·x + b = 0 is 1/||w||, giving a total margin of 2/||w||. Maximizing the classification margin is therefore equivalent to the following optimization problem:
min G(w) = ||w||^2   (7)
subject to y_i[(w·x_i) + b] - 1 ≥ 0, i = 1, …, N; the training samples for which y_i[(w·x_i) + b] - 1 = 0 are called support vectors;
A Lagrange function construction step: the following Lagrangian function is constructed, converting the problem into a quadratic programming problem:
L(w, b, a) = (1/2)||w||^2 - Σ_{i=1}^{N} a_i { y_i [(w·x_i) + b] - 1 }   (8)
where a = (a_1, a_2, …, a_N) are the Lagrange multipliers. The minimum of formula (7) is the saddle point of formula (8); by taking the partial derivatives of L with respect to w and b, the problem can be converted into the dual of formula (7), i.e., maximizing the function Φ(a):
Φ(a) = Σ_{i=1}^{N} a_i - (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j y_i y_j (x_i·x_j)   (9)
the constraint condition is
Σ_{i=1}^{N} a_i y_i = 0,   a_i ≥ 0,   i = 1, …, N
If
a* = (a_1*, a_2*, …, a_N*)
is the optimal solution, then taking the partial derivatives of formula (8) gives:
w* = Σ_{i=1}^{N} a_i* y_i x_i
then
b* = y_j - Σ_{i=1}^{N} a_i* y_i (x_i·x_j)   (for any support vector x_j)
Constructing a linear classification optimal classification function: constructing an optimal classification function f (x) according to formula (6) as follows:
f(x) = sgn{ (w*·x) + b* }
wherein
w* = Σ_{i=1}^{N} a_i* y_i x_i
then
f(x) = sgn{ Σ_{i=1}^{N} a_i* y_i (x_i·x) + b* }   (10)
Constructing a nonlinear classification optimal classification hyperplane: for sample feature points that are not linearly separable, a nonlinear function φ is used to map the sample feature points into a high-dimensional feature space, where linear classification is performed. If such a mapping φ can be found, the inner product operation (x_i·x) can be replaced by (φ(x_i)·φ(x)). The inner product operation is usually represented by a kernel function K(x_i, x) = (φ(x_i)·φ(x));
the nonlinear classification optimal classification hyperplane function can therefore be expressed as:
f(x) = sgn{ (w*·φ(x)) + b* }
wherein
w* = Σ_{i=1}^{N} a_i* y_i φ(x_i)
then
f(x) = sgn{ Σ_{i=1}^{N} a_i* y_i K(x_i, x) + b* }   (11)
A pantograph classification model construction step: the pantograph classification model is finally obtained from formulas (10) and (11): for nonlinearly separable data, the nonlinear classification optimal classification hyperplane function (formula 11) is used as the pantograph classification model; for linearly separable data, the linear classification optimal classification function (formula 10) is used.
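The decision function of formula (11) can be checked numerically against a trained kernel SVM. The sketch below (illustrative only, using scikit-learn's SVC with toy XOR-style data that is not linearly separable) reconstructs f(x) = sgn{ Σ a_i* y_i K(x_i, x) + b* } from the stored support vectors and dual coefficients:

```python
import numpy as np
from sklearn.svm import SVC

# XOR-like labels are not linearly separable, so an RBF kernel is used,
# matching the nonlinear branch of the model above.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([1, 1, -1, -1])
clf = SVC(kernel="rbf", gamma=1.0, C=100.0).fit(X, y)

def decision(x):
    """f(x) = sgn( sum_i a_i* y_i K(x_i, x) + b* ), with K the RBF kernel;
    sklearn's dual_coef_ already stores the products a_i* y_i."""
    k = np.exp(-1.0 * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    return np.sign(clf.dual_coef_[0] @ k + clf.intercept_[0])

print(decision(np.array([0.1, 0.1])))   # same sign as clf.predict here
```

Agreement between `decision` and `clf.predict` confirms that the library's prediction is exactly the support-vector expansion of formula (11).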
Step three: primary positioning of the pantograph: the pantograph sequence images shot by the linear array camera are input (generally, the sequence images cover the roof of the whole train), and every two adjacent images in the sequence are joined end to end to form an image to be detected. Because a single image may contain only part of the pantograph (for example, one part of the pantograph at the bottom of the previous image and the other part at the top of the next image), two adjacent images must be stitched into one image to be detected before the pantograph is extracted;
An ROI of a specific size is set in the image to be detected and moved continuously with a step length of n, and the features of each ROI image are extracted according to the feature extraction method. After feature extraction, the extracted image features are classified according to the pantograph classification model to judge whether a pantograph exists in the image to be detected. Since an image to be detected synthesized from two adjacent images can contain at most one pantograph, for efficiency, once a pantograph is found in some ROI of an image to be detected, no further positioning is performed on the remaining ROIs of that image, and primary positioning of the pantograph continues with the next image to be detected.
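A minimal sketch of this sliding-ROI search with early exit (the window size, step length, and stand-in classifier below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def detect(image, model, roi=(64, 64), step=8):
    """Slide a fixed-size ROI across the stitched image with step n,
    classify each window, and stop at the first positive hit (at most
    one pantograph per stitched pair, as noted above)."""
    H, W = image.shape
    rh, rw = roi
    for y in range(0, H - rh + 1, step):
        for x in range(0, W - rw + 1, step):
            window = image[y:y + rh, x:x + rw]
            if model(window):        # stand-in for HOG + SVM classification
                return (x, y)        # early exit: skip remaining ROIs
    return None

# Toy stand-in: "pantograph" = bright patch; a brightness test replaces
# the real HOG + SVM classifier.
img = np.zeros((128, 128)); img[64:128, 64:128] = 255
hit = detect(img, lambda w: w.mean() > 200)
print(hit)
```

The early `return` is the efficiency measure described above: once one ROI tests positive, the rest of the stitched image is skipped.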
Step four: accurate positioning of the pantograph: in train-roof images, some regions (such as carriage joints) have features highly similar to those of the pantograph region, so false detections occur among the preliminarily positioned images. To reduce false positioning of the pantograph, the preliminarily positioned pantograph images must be filtered by a secondary positioning step. The idea of secondary positioning is to judge the positions of the pantograph carbon slide plate and the contact line: while the train is running, the pantograph must draw power from the contact line along the railway, so a prior positional relationship exists between the carbon slide plate and the contact line in the image (for a given set of pantograph image-shooting equipment: when the pantograph is raised and drawing power, it crosses the contact line; when the pantograph is lowered and not working, it does not cross the contact line, but its distance to the contact line is fixed).
Preliminary positioning of the pantograph slide plate: the preliminarily positioned pantograph image is input, and a gradient calculation in the y direction is first performed on it to overcome the influence of illumination. The carbon slide plate can be regarded as the main structural feature of the pantograph; in the image, it is almost perpendicular to the vehicle body and the y axis, so the y-direction gradient of the preliminarily positioned image is computed as in formula (3), filtering out gradient information in other directions. Since the gradient magnitude represents the intensity of pixel change and the edges of the slide plate are regions of strong pixel change, the gradient magnitude is binarized: a certain threshold is set, the pixels where Gy(x, y) is larger than the threshold are set to 255, and the remaining pixels are set to 0. The binarized gradient image still contains many noise points, so morphological dilation and erosion are applied to the binary image to filter out the noise. Finally, all connected regions in the image are extracted and the length, width, and size of each connected region are calculated; a certain threshold is set to filter out the connected regions whose length, width, or size do not satisfy the condition, yielding the final suspected region of the pantograph carbon slide plate, parallel to the x axis.
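The slide-plate pipeline above (y-gradient, binarization, dilation/erosion, connected-region filtering) can be sketched with NumPy and SciPy; all thresholds and the toy image here are illustrative choices, not the patent's values:

```python
import numpy as np
from scipy import ndimage

def locate_slide_plate(img, grad_thresh=30, min_w=20, max_h=10):
    """Find long, thin, near-horizontal regions via the y-direction
    gradient, binarization, morphology, and size filtering."""
    g = img.astype(float)
    gy = np.zeros_like(g)
    gy[1:-1, :] = g[2:, :] - g[:-2, :]           # formula (3)
    binary = np.abs(gy) > grad_thresh            # binarize gradient magnitude
    # Dilation then erosion (morphological closing) suppresses noise points.
    mask = ndimage.binary_erosion(ndimage.binary_dilation(binary))
    labels, _ = ndimage.label(mask)
    regions = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if w >= min_w and h <= max_h:            # long, thin, near-horizontal
            regions.append(sl)
    return regions

# A horizontal bright bar (the carbon strip) on a dark roof image.
img = np.zeros((60, 100)); img[30:34, 10:90] = 200
print(len(locate_slide_plate(img)))
```

The y-direction gradient responds only to horizontal edges, so vertical structures such as the contact line drop out before the connected-region filtering even runs.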
Positioning the contact line: a preliminarily positioned image containing a pantograph is input, and a gradient calculation in the x direction is first performed on it to overcome the influence of illumination. In the image, the contact line is perpendicular to the x axis, so the x-direction gradient of the preliminarily positioned image is computed as in formula (2), filtering out gradient information in other directions. Since the gradient magnitude represents the intensity of pixel change, the gradient magnitude is binarized: a threshold is set, the pixels where Gx(x, y) is larger than the threshold are set to 255, and the remaining pixels are set to 0. Morphological dilation and erosion are then applied to the binary image to filter out noise. Finally, all connected regions in the image are extracted, and the length, width, and size of each connected region are calculated; a certain threshold is set to filter out the connected regions whose length, width, or size do not satisfy the condition, yielding the final suspected contact-line region, parallel to the y axis. Because the image is taken by a linear array camera, interfering regions similar to contact lines appear in it. However, the contact line is in service for a long time and suffers abrasion, dirt, and the like, so the gray values of the contact-line pixels in the image vary (the gray values of other interfering regions do not); the other interfering lines can therefore be filtered out according to whether the gray value of the final suspected contact-line region varies, yielding the accurately positioned contact line.
After the gradient binary image of the suspected carbon-slide-plate region and the gradient binary image of the accurately positioned contact line are obtained, whether the suspected region is a real pantograph slide plate can be judged from the actual positional relationship between the contact line and the carbon slide plate, so that wrongly positioned pantograph images are filtered out and accurate positioning and extraction of the pantograph image are achieved.
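The final positional check can be sketched as a simple geometric test on the two located regions; the bounding-box representation, `expected_gap`, and tolerance are our own illustrative assumptions, not parameters from the patent:

```python
def is_real_pantograph(plate_box, wire_box, expected_gap=None, tol=5):
    """Position-prior check described above: a raised pantograph's slide
    plate crosses the contact line; a lowered one sits at a fixed vertical
    distance from it. Boxes are (x0, y0, x1, y1) in image coordinates."""
    px0, py0, px1, py1 = plate_box
    wx0, wy0, wx1, wy1 = wire_box
    overlap_x = px0 < wx1 and wx0 < px1      # wire passes over the plate
    if expected_gap is None:                  # raised: boxes must cross
        return overlap_x and py0 < wy1 and wy0 < py1
    gap = py0 - wy1                           # lowered: fixed vertical gap
    return overlap_x and abs(gap - expected_gap) <= tol

# Raised pantograph: the vertical wire region crosses the horizontal plate.
print(is_real_pantograph((10, 40, 90, 46), (45, 0, 50, 60)))   # -> True
# Wire ends well above the plate and no fixed gap is expected: rejected.
print(is_real_pantograph((10, 40, 90, 46), (45, 0, 50, 30)))   # -> False
```

A carriage-joint false positive fails this test because no vertical contact-line region crosses it or sits at the expected fixed distance.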
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification and any novel method or process steps or any novel combination of features disclosed.

Claims (8)

1. A pantograph image extraction method based on a linear array camera is characterized by comprising the following steps:
a pantograph classification model creating step: preprocessing the collected positive and negative sample images of the pantograph; extracting characteristic values of all positive and negative sample images of the pantograph to obtain directional gradient histogram characteristics of the positive and negative sample images of the pantograph; training and learning are carried out by utilizing the extracted positive and negative sample image direction gradient histogram characteristics, and an optimal classification hyperplane, namely a pantograph classification model, for segmenting two training samples is determined;
primary positioning of the pantograph: extracting directional gradient histogram features of an image to be detected, carrying out classification detection on the image according to a pantograph classification model, and primarily judging whether a pantograph exists in the image to be detected;
accurate positioning of the pantograph: if the pantograph exists in the image to be detected, the pantograph is positioned again, the positions of a pantograph slide plate and a contact line are judged, and whether the pantograph is a real pantograph or not is judged according to the relative positions of the pantograph slide plate and the contact line; otherwise, performing the initial positioning step of the pantograph on the next image to be detected;
wherein the specific process of judging whether the pantograph is a real pantograph according to the relative positions of the pantograph slide plate and the contact line is:
inputting a pantograph image with a pantograph at the initial positioning, performing X-direction gradient calculation or Y-direction gradient calculation on the pantograph image, and correspondingly obtaining X-direction and Y-direction gradient amplitudes respectively;
respectively carrying out binarization processing on the gradient amplitude diagrams in the X direction and the Y direction to respectively and correspondingly obtain gradient binary diagrams in the X direction and the Y direction;
extracting all connected areas in the gradient binary image in the X direction and the Y direction, respectively and correspondingly calculating the length, the width and the size of each connected area in the X direction and the Y direction, filtering the connected areas with the length, the width and the size not meeting the conditions, and obtaining a contact line suspected area parallel to the Y axis and a pantograph carbon slide plate suspected area parallel to the X axis;
obtaining a precisely positioned contact line according to whether the gray value of the suspected area of the contact line changes;
judging whether the pantograph carbon slide plate is a real pantograph carbon slide plate or not according to the corresponding position relation between the contact line and the pantograph carbon slide plate in practice based on the accurately positioned contact line gradient binary image and the suspected area gradient binary image of the pantograph carbon slide plate; accurate positioning and extraction of pantograph images are achieved.
2. The pantograph image extraction method based on the line camera as claimed in claim 1, wherein the pantograph positive and negative sample image preprocessing comprises:
traversing each pixel (x, y) in the positive and negative sample images of the pantograph, taking the new value of each pixel as the median of all pixels in the 8-neighborhood of that pixel, thereby removing random salt-and-pepper noise from the positive and negative pantograph sample images, i.e., filtering and denoising;
carrying out histogram equalization processing on the filtered and denoised positive and negative images of the pantograph;
the filtering and denoising can be performed through mean filtering, gaussian filtering, bilateral filtering or guided filtering.
3. The pantograph image extraction method based on the line camera as claimed in claim 1, wherein the feature extraction method comprises:
normalizing all positive and negative sample images of the pantograph according to a formula (1);
I(x, y) = I(x, y)^gamma   (1)
carrying out gradient calculation in the x direction and the y direction on positive and negative sample images of the pantograph, and respectively and correspondingly obtaining the gradient of a pixel point (x, y) in each sample image in the x direction and the gradient in the y direction as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y) (2)
Gy(x,y)=H(x,y+1)-H(x,y-1) (3)
obtaining the gradient amplitude and the gradient direction at the pixel point (x, y) according to the formulas (2) and (3);
G(x, y) = sqrt( Gx(x, y)^2 + Gy(x, y)^2 )   (4)
α(x, y) = arctan( Gy(x, y) / Gx(x, y) )   (5)
traversing each pixel point in each sample image, and calculating to obtain the gradient amplitude and the gradient direction of all the pixel points; and combining all gradient amplitudes and gradient directions, and performing histogram statistical calculation to obtain the directional gradient histogram characteristics of the whole image, thereby obtaining the directional gradient histogram characteristics of all pantograph positive and negative sample images.
4. The pantograph image extraction method based on the line camera as claimed in claim 3, wherein the pantograph classification model is implemented by the following steps:
a step of setting a classification linear equation: let {(x_i, y_i), i = 1, …, N} be N sample points, where x_i is a sample image and y_i denotes the class to which x_i belongs, with x_i ∈ R^n and y_i ∈ {+1, -1} being the expected output indicating the class of x_i; X denotes all the x_i, used as training data; the expected outputs +1 and -1 represent the class labels of the positive and negative sample images respectively; X and Y are known quantities, all sample images having been labeled as positive or negative samples beforehand; w represents the weight vector and b the offset, w and b being unknown quantities; and the classification linear equation of the linear space is:
Y=w·X+b (6)
searching for an optimal classification line that separates the classes while maximizing the classification margin; both classes of samples then satisfy y_i(w·x_i + b) ≥ 1, and the classification margin equals 2/||w||: since w·x + b = 0 is the classification line between the positive and negative samples, the distance from a point on the line w·x + b = 1 (or w·x + b = -1) to the line w·x + b = 0 is 1/||w||, giving a total margin of 2/||w||; the classification margin maximization is equivalent to the following optimization problem:
min G(w) = ||w||^2   (7)
subject to y_i[(w·x_i) + b] - 1 ≥ 0, i = 1, …, N; the training samples with y_i[(w·x_i) + b] - 1 = 0 are called support vectors;
a Lagrange function construction step: the following lagrangian function is constructed, which translates into a quadratic programming problem:
L(w, b, a) = (1/2)||w||^2 - Σ_{i=1}^{N} a_i { y_i [(w·x_i) + b] - 1 }   (8)
where a = (a_1, a_2, …, a_N) are the Lagrange multipliers; the minimum of formula (7) is the saddle point of formula (8); by taking the partial derivatives of L with respect to w and b, the problem can be converted into the dual of formula (7), i.e., maximizing the function Φ(a):
Φ(a) = Σ_{i=1}^{N} a_i - (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j y_i y_j (x_i·x_j)   (9)
the constraint condition is
Σ_{i=1}^{N} a_i y_i = 0,   a_i ≥ 0,   i = 1, …, N
If
a* = (a_1*, a_2*, …, a_N*)
is the optimal solution, then taking the partial derivatives of formula (8) gives:
w* = Σ_{i=1}^{N} a_i* y_i x_i
then
b* = y_j - Σ_{i=1}^{N} a_i* y_i (x_i·x_j)   (for any support vector x_j)
Constructing a linear classification optimal classification function: constructing an optimal classification function f (x) according to formula (6) as follows:
f(x) = sgn{ (w*·x) + b* }
wherein
w* = Σ_{i=1}^{N} a_i* y_i x_i
then
f(x) = sgn{ Σ_{i=1}^{N} a_i* y_i (x_i·x) + b* }   (10)
constructing a nonlinear classification optimal classification hyperplane: for sample feature points that are not linearly separable, a nonlinear function φ is used to map the sample feature points into a high-dimensional feature space, where linear classification is performed; if such a mapping φ can be found, the inner product operation (x_i·x) can be replaced by (φ(x_i)·φ(x)); the inner product operation is usually represented by a kernel function K(x_i, x) = (φ(x_i)·φ(x));
the optimal classification hyperplane function for linear classification and non-linear classification can therefore be expressed as:
f(x) = sgn{ (w*·φ(x)) + b* }
wherein
w* = Σ_{i=1}^{N} a_i* y_i φ(x_i)
then
f(x) = sgn{ Σ_{i=1}^{N} a_i* y_i K(x_i, x) + b* }   (11)
A pantograph classification model construction step: and finally obtaining a pantograph classification model according to the formula (10) and the formula (11).
5. The pantograph image extraction method based on the line camera as claimed in claim 3, wherein the specific process of primarily judging whether the pantograph exists in the image to be detected is as follows:
inputting a pantograph sequence image shot by a linear array camera; connecting two adjacent images in the pantograph sequence image end to form an image to be detected;
setting a ROI (region of interest) in an image to be detected, continuously moving the ROI with the moving step length of n, and extracting the directional gradient histogram characteristics of each ROI image according to a characteristic extraction method;
after the directional gradient histogram features of the ROI images are extracted, classifying and detecting the pantograph of each ROI image according to a pantograph classification model, and judging whether the pantograph exists in the image to be detected;
and when a pantograph is judged to exist in a certain ROI to be detected, initial positioning is no longer performed on the subsequent ROI images of that image to be detected, and initial positioning of the pantograph continues with the next image to be detected.
6. A pantograph image extraction device based on a linear array camera, characterized by comprising:
a pantograph classification model creation module: the device is used for preprocessing the collected positive and negative sample images of the pantograph; extracting the characteristics of all positive and negative sample images of the pantograph to obtain the directional gradient histogram characteristics of the positive and negative sample images of the pantograph; training and learning are carried out by utilizing the extracted positive and negative sample image direction gradient histogram characteristics, and an optimal classification hyperplane, namely a pantograph classification model, for segmenting two training samples is determined;
pantograph primary positioning module: the method is used for extracting the directional gradient histogram characteristics of an image to be detected, classifying and detecting the image according to a pantograph classification model, and primarily judging whether a pantograph exists in the image to be detected;
accurate positioning module of pantograph: the pantograph control system is used for re-positioning the pantograph if the pantograph exists in the image to be detected, judging the positions of the pantograph slide plate and the contact line, and judging whether the pantograph is a real pantograph or not according to the relative positions of the pantograph slide plate and the contact line; otherwise, executing a pantograph initial positioning module on the next image to be detected;
wherein the specific process of judging whether the pantograph is a real pantograph according to the relative positions of the pantograph slide plate and the contact line is:
inputting a pantograph image with a pantograph at the initial positioning, performing X-direction gradient calculation or Y-direction gradient calculation on the pantograph image, and correspondingly obtaining X-direction and Y-direction gradient amplitudes respectively;
respectively carrying out binarization processing on the gradient amplitude diagrams in the X direction and the Y direction to respectively and correspondingly obtain gradient binary diagrams in the X direction and the Y direction;
extracting all connected areas in the gradient binary image in the X direction and the Y direction, respectively and correspondingly calculating the length, the width and the size of each connected area in the X direction and the Y direction, filtering the connected areas with the length, the width and the size not meeting the conditions, and obtaining a contact line suspected area parallel to the Y axis and a pantograph carbon slide plate suspected area parallel to the X axis;
obtaining a precisely positioned contact line according to whether the gray value of the suspected area of the contact line changes;
judging whether the pantograph carbon slide plate is a real pantograph carbon slide plate or not according to the corresponding position relation between the contact line and the pantograph carbon slide plate in practice based on the accurately positioned contact line gradient binary image and the suspected area gradient binary image of the pantograph carbon slide plate; accurate positioning and extraction of pantograph images are achieved.
7. The pantograph image extraction device of claim 6, wherein the feature extraction method comprises:
normalizing all positive and negative sample images of the pantograph according to a formula (1);
I(x, y) = I(x, y)^gamma   (1)
carrying out gradient calculation in the x direction and the y direction on positive and negative sample images of the pantograph, and respectively and correspondingly obtaining the gradient of a pixel point (x, y) in each sample image in the x direction and the gradient in the y direction as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y) (2)
Gy(x,y)=H(x,y+1)-H(x,y-1) (3)
obtaining the gradient amplitude and the gradient direction at the pixel point (x, y) according to the formulas (2) and (3);
G(x, y) = sqrt( Gx(x, y)^2 + Gy(x, y)^2 )   (4)
α(x, y) = arctan( Gy(x, y) / Gx(x, y) )   (5)
traversing each pixel point in each sample image, and calculating to obtain the gradient amplitude and the gradient direction of all the pixel points; and combining all gradient amplitudes and gradient directions, and performing histogram statistical calculation to obtain the directional gradient histogram characteristics of the whole image, thereby obtaining the directional gradient histogram characteristics of all pantograph positive and negative sample images.
8. The pantograph image extraction device based on the line camera as claimed in claim 7, wherein the specific process of preliminarily judging whether the pantograph exists in the image to be detected is as follows:
inputting the pantograph sequence images shot by the linear array camera; connecting two adjacent images in the pantograph sequence end to end to form an image to be detected;
setting an ROI (region of interest) in the image to be detected, continuously moving the ROI with a moving step length of n, and extracting the directional gradient histogram features of each ROI image according to the feature extraction method;
after the directional gradient histogram features of the ROI images are extracted, performing pantograph classification detection on each ROI image with the pantograph classification model, and judging whether the pantograph exists in the image to be detected;
and when the pantograph is judged to exist in a certain ROI, pantograph positioning is no longer performed on the subsequent ROI images of the current image to be detected, and initial positioning of the pantograph continues with the next image to be detected.
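The sliding-ROI detection loop of claim 8 can be sketched as follows. The `extract_features` and `classify` callables stand in for the HOG extractor and the trained pantograph classification model; the ROI size, step, and toy classifier are assumptions for illustration.

```python
import numpy as np

def detect_pantograph(image_to_detect, roi_size, step, extract_features, classify):
    """Slide a fixed-size ROI across the stitched image with step n and
    return the x offset of the first ROI the classifier accepts;
    later ROIs are skipped, as the claim specifies."""
    h, w = image_to_detect.shape
    roi_h, roi_w = roi_size
    for x in range(0, w - roi_w + 1, step):
        roi = image_to_detect[0:roi_h, x:x + roi_w]
        if classify(extract_features(roi)):
            return x  # pantograph found: stop scanning this image
    return None  # no detection: move on to the next image to be detected

# toy usage: a bright band at columns 200..263 plays the pantograph,
# and a mean-intensity threshold plays the classification model
img = np.zeros((100, 400))
img[:, 200:264] = 1.0
hit = detect_pantograph(img, (100, 64), step=8,
                        extract_features=lambda r: r.mean(),
                        classify=lambda f: f > 0.9)
print(hit)  # 200
```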
CN201710223798.XA 2017-04-07 2017-04-07 Pantograph image extraction method and device based on linear array camera Active CN108694349B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710223798.XA CN108694349B (en) 2017-04-07 2017-04-07 Pantograph image extraction method and device based on linear array camera

Publications (2)

Publication Number Publication Date
CN108694349A CN108694349A (en) 2018-10-23
CN108694349B true CN108694349B (en) 2020-03-06

Family

ID=63843046

Country Status (1)

Country Link
CN (1) CN108694349B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110763681B (en) * 2019-08-23 2022-06-10 南京理工大学 Pantograph abrasion area positioning detection method
CN110910443B (en) * 2019-12-04 2023-03-21 成都唐源电气股份有限公司 Contact net geometric parameter real-time measuring method and device based on single monitoring camera
CN111260631B (en) * 2020-01-16 2023-05-05 成都地铁运营有限公司 Efficient rigid contact line structure light bar extraction method
CN111260629A (en) * 2020-01-16 2020-06-09 成都地铁运营有限公司 Pantograph structure abnormity detection algorithm based on image processing
CN111666947B (en) * 2020-05-26 2023-08-04 成都唐源电气股份有限公司 Pantograph head offset measuring method and system based on 3D imaging
CN113487561B (en) * 2021-07-02 2023-06-30 成都唐源电气股份有限公司 Pantograph foreign matter detection method and device based on gray gradient abnormal voting

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745238A (en) * 2013-11-15 2014-04-23 中国科学院遥感与数字地球研究所 Pantograph identification method based on AdaBoost and active shape model
CN104374373A (en) * 2014-10-15 2015-02-25 中铁电气化局集团有限公司 Catenary status monitoring system based on pantograph image analysis
CN104463240A (en) * 2013-09-23 2015-03-25 深圳市朗驰欣创科技有限公司 Method and device for controlling list interface
CN105652154A (en) * 2016-01-25 2016-06-08 成都国铁电气设备有限公司 Safety monitoring analysis system for contact net running state
CN106052575A (en) * 2016-08-02 2016-10-26 易讯科技股份有限公司 Pantograph carbon slide plate wear online detection method based on train high speed running

Non-Patent Citations (1)

Title
Arc detection in pantograph-catenary systems by the use of support vector machines-based classification; Sami Barmada et al.; IET Electrical Systems in Transportation; 30 June 2014; Vol. 4, No. 2; pp. 45-52 *

Similar Documents

Publication Publication Date Title
CN108694349B (en) Pantograph image extraction method and device based on linear array camera
CN109101924B (en) Machine learning-based road traffic sign identification method
CN109753929B (en) High-speed rail insulator inspection image recognition method
CN107169953B (en) Bridge concrete surface crack detection method based on HOG characteristics
CN102902974B (en) Image based method for identifying railway overhead-contact system bolt support identifying information
CN111260629A (en) Pantograph structure abnormity detection algorithm based on image processing
CN109489724B (en) Tunnel train safe operation environment comprehensive detection device and detection method
CN111814686A (en) Vision-based power transmission line identification and foreign matter invasion online detection method
CN108711149B (en) Mineral rock granularity detection method based on image processing
CN106485694B Cascade-classifier-based detection method for the fallen-off hexagonal nut defect of high-speed railway catenary double-sleeve connectors
CN105203552A (en) 360-degree tread image detecting system and method
CN110659649A (en) Image processing and character recognition algorithm based on near infrared light imaging
WO2024037408A1 (en) Underground coal mine pedestrian detection method based on image fusion and feature enhancement
CN108009574B (en) Track fastener detection method
Li et al. Coal and coal gangue separation based on computer vision
WO2021109011A1 (en) Intelligent capacitor internal defect detection method based on ultrasound image
CN110321855A Foggy-weather detection and early-warning device
CN110276747B (en) Insulator fault detection and fault rating method based on image analysis
CN113486712B (en) Multi-face recognition method, system and medium based on deep learning
CN108629776B (en) Mineral rock granularity detecting system
CN117197700B (en) Intelligent unmanned inspection contact net defect identification system
Dong et al. An end-to-end abnormal fastener detection method based on data synthesis
CN116309407A (en) Method for detecting abnormal state of railway contact net bolt
CN116977266A (en) Canopy defect detection method and system based on small sample image feature extraction
CN110866435A (en) Far infrared pedestrian training method with self-similarity gradient oriented histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant