CN108389205B - Rail foreign matter monitoring method and device based on air-based platform image - Google Patents


Info

Publication number
CN108389205B
CN108389205B (granted from application CN201810225219.XA)
Authority
CN
China
Prior art keywords
picture
processed
feature
rail
characteristic
Prior art date
Legal status
Active
Application number
CN201810225219.XA
Other languages
Chinese (zh)
Other versions
CN108389205A (en)
Inventor
曹先彬
甄先通
李岩
郑洁宛
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810225219.XA
Publication of CN108389205A
Application granted
Publication of CN108389205B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a rail foreign matter monitoring method and device based on an air-based platform image, wherein the method comprises the following steps: acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle; obtaining effective gradient information of the picture to be processed; coding the effective gradient information according to the type of the effective gradient information to obtain a first feature; obtaining a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed; and judging whether foreign matter exists on the rail in the picture to be processed according to the first feature and the second feature. According to the rail foreign matter monitoring method and device based on the air-based platform image, whether foreign matter exists on the rail is judged by combining the effective gradient information and the color information of the image, improving the monitoring efficiency for rail foreign matter.

Description

Rail foreign matter monitoring method and device based on air-based platform image
Technical Field
The invention relates to aviation monitoring technology, and in particular to a rail foreign matter monitoring method and device based on an air-based platform image.
Background
Rail transit is a main artery of national passenger and freight transportation and has very important strategic significance. Among its modes, railway transportation plays a particularly important role. Because of China's large population, high-volume railway traffic requires rigorous safety inspection to ensure the safety of passengers during travel.
In the prior art, with the wide application of low-altitude unmanned aerial vehicles in the field of patrol and monitoring, more and more low-altitude air-based platforms are used for railway line patrol to save manpower and material costs. Rail maintenance personnel monitor the condition of the railway through rail pictures sent back by the low-altitude unmanned aerial vehicle of the air-based platform, and promptly perform field maintenance when foreign matter is found on the rail.
By adopting the prior art, in order to ensure safe operation of the railway, low-altitude unmanned inspection by the air-based platform needs to feed back information along the railway more accurately and timely; a monitoring mode in which rail maintenance personnel merely observe the low-altitude unmanned aerial vehicle images results in low monitoring efficiency for rail foreign matter and consumes a large amount of manpower and material resources.
Disclosure of Invention
The invention provides a rail foreign matter monitoring method and device based on an air-based platform image, which improve the rail foreign matter monitoring efficiency.
The invention provides a rail foreign matter monitoring method based on an air-based platform image, which comprises the following steps:
acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle;
obtaining effective gradient information of the picture to be processed;
coding the effective gradient information according to the type of the effective gradient information to obtain a first feature;
obtaining a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed;
and judging whether foreign matter exists on the rail in the picture to be processed according to a third feature obtained by fusing the first feature and the second feature.
In an embodiment of the present invention, in the method for monitoring rail foreign matter based on an air-based platform image, the obtaining of effective gradient information of the picture to be processed includes:
establishing an integral image of the picture to be processed;
establishing a scale space by the integral image through a box filter;
locating feature points of the scale space;
and obtaining effective gradient information of the picture to be processed by constructing a feature point descriptor.
In an embodiment of the present invention, in the method for monitoring rail foreign matter based on an air-based platform image, the coding of the effective gradient information according to its type to obtain the first feature includes:
clustering the effective gradient information through a clustering algorithm, and taking the clustering centers as basic code words;
and coding the effective gradient information by adopting a bag-of-words model according to the basic code words to obtain the first feature in a fixed coding format.
In an embodiment of the present invention, in the method for monitoring rail foreign matter based on an air-based platform image, the obtaining of the second feature according to the hue, saturation and value (HSV) color model of the picture to be processed includes:
acquiring HSV color model data of the picture to be processed;
and counting the HSV color model data in a color histogram to obtain the HSV color model feature as the second feature.
In an embodiment of the present invention, in the method for monitoring rail foreign matter based on an air-based platform image, the judging whether foreign matter exists on the rail in the picture to be processed according to the first feature and the second feature includes:
performing feature fusion on the first feature and the second feature to obtain a third feature;
and judging, by a classifier, whether foreign matter exists on the rail in the picture to be processed through the third feature, wherein the classifier stores: third features of rail pictures with foreign matter present and third features of rail pictures without foreign matter present.
In an embodiment of the present invention, before acquiring the picture to be processed, the method for monitoring rail foreign matter based on an air-based platform image further includes:
acquiring the third features of N rail pictures with foreign matter and the third features of M rail pictures without foreign matter, wherein N and M are positive integers;
and storing the third features of the N rail pictures with foreign matter and the third features of the M rail pictures without foreign matter into the classifier.
In an embodiment of the invention, in the method for monitoring rail foreign matter based on an air-based platform image as described above, the classifier is a support vector machine (SVM).
In an embodiment of the present invention, after judging whether foreign matter exists on the rail in the picture to be processed according to the first feature and the second feature, the method for monitoring rail foreign matter based on an air-based platform image further includes:
if it is judged that foreign matter exists on the rail in the picture to be processed, acquiring the attributes of the foreign matter through the third feature.
In an embodiment of the present invention, in the method for monitoring rail foreign matter based on an air-based platform image, the attributes of the foreign matter include one or more of the following: the type, size, shape and color of the foreign matter.
The invention provides a rail foreign matter monitoring device based on an air-based platform image, which comprises: an acquisition module, used for acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle;
a feature extraction module, used for acquiring effective gradient information of the picture to be processed;
the feature extraction module is further used for coding the effective gradient information according to its type to obtain a first feature;
the feature extraction module is further used for obtaining a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed;
and a classification module, used for judging whether foreign matter exists on the rail in the picture to be processed according to a third feature obtained by fusing the first feature and the second feature.
The invention provides a rail foreign matter monitoring method and device based on an air-based platform image, wherein the method comprises the following steps: acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle; obtaining effective gradient information of the picture to be processed; coding the effective gradient information according to the type of the effective gradient information to obtain a first feature; obtaining a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed; and judging whether foreign matter exists on the rail in the picture to be processed through the first feature and the second feature. According to the rail foreign matter monitoring method and device based on the air-based platform image, whether foreign matter exists on the rail is judged by combining the effective gradient information and the color information of the image, improving the monitoring efficiency for rail foreign matter.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a rail foreign matter monitoring method based on an air-based platform image according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a rail foreign matter monitoring device based on an air-based platform image according to a first embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic flow chart of a rail foreign matter monitoring method based on an air-based platform image according to a first embodiment of the present invention. As shown in fig. 1, the rail foreign matter monitoring method based on an air-based platform image provided by this embodiment includes:
s101: and acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned machine.
Specifically, the execution subject of this embodiment may be a low-altitude unmanned aerial vehicle (UAV) of the air-based platform, the air-based platform monitoring the rail foreign matter through at least one UAV; the execution subject may also be any one, or a combination, of terminal equipment, user equipment, server equipment and the like capable of acquiring the rail pictures taken by the air-based platform, for example over the internet. The terminal device may be a desktop computer, a notebook computer, a tablet computer, or the like. The user device may be a smart phone, a smart watch, smart glasses, or the like. It is to be understood that the above examples are illustrative only and are not to be construed as limiting.
In this step, when the low-altitude UAV patrols the rail section to be monitored, it continuously photographs the monitored rail, obtaining a sequence of rail pictures. After the low-altitude UAV shoots a rail picture, it can directly take the rail picture as the picture to be processed; or the UAV uploads the rail picture to a server for storage over the internet, and picture processing equipment such as a terminal or user equipment then acquires the stored rail picture as the picture to be processed.
S102: obtain effective gradient information of the picture to be processed.
Specifically, the effective gradient information of the picture to be processed acquired in S101 is obtained; optionally, the effective gradient information in the picture to be processed may be extracted by a gradient feature extractor.
Optionally, step S102 of extracting the effective gradient information of the picture to be processed may include the following steps:
s1021: and establishing an integral image of the picture to be processed.
Specifically, the integral image created in this step is an image obtained by performing integral calculation on the picture to be processed: each point of the integral image stores the sum of the pixels in the rectangular region from the origin of the original image to that point. Creating the integral image speeds up computation, because once the integral image has been built in a single pass over the whole image, the sum of pixels in any rectangular region of the original image can be obtained with a few additions and subtractions, independent of the area of the rectangle; the larger the rectangle, the more computation time is saved.
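As a sketch of this idea (hypothetical helper names, not the patent's implementation), the integral image and the constant-time rectangle sum can be written as:

```python
import numpy as np

def integral_image(img):
    # Each entry holds the sum of all pixels in the rectangle
    # from the origin (0, 0) to that point, inclusive.
    return np.asarray(img, dtype=np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of pixels in the inclusive rectangle [r0..r1, c0..c1]
    # using at most four lookups, independent of the rectangle's area.
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

For example, `rect_sum(integral_image(img), 1, 1, 2, 2)` equals `img[1:3, 1:3].sum()`, and the cost is the same whatever the rectangle size.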
S1022: pass the integral image through a box filter to create a scale space.
Specifically, in this step a scale space is established using a box filter: the Gaussian kernel function is approximately replaced by a box filter, so that the convolution templates are composed entirely of simple rectangles. The introduction of the integral image solves the problem of fast computation over rectangular areas, and the box-filter approximation greatly improves the computation speed. To ensure that image matching is scale invariant, the image needs to be layered to establish a scale space, and feature points are then searched at different scales. In this embodiment, the scale space is established while keeping the size of the picture to be processed unchanged: the integral image computed from the picture to be processed is filtered with box filters of varying size, forming the scale space of the image.
S1023: feature points of the scale space are located.
Specifically, with the scale space established in S1022, the extreme points of the image are detected using the fast Hessian matrix on each layer of the image in the scale space. For any point $\mathbf{x} = (x, y)$ in the image with corresponding scale $\sigma$ in the scale space, the Hessian matrix is defined as:

$$H(\mathbf{x}, \sigma) = \begin{bmatrix} L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\ L_{xy}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma) \end{bmatrix}$$

where $L_{xx}(\mathbf{x}, \sigma)$, $L_{xy}(\mathbf{x}, \sigma)$ and $L_{yy}(\mathbf{x}, \sigma)$ are the convolutions of the image at point $\mathbf{x}$ with the second-order partial derivatives of the Gaussian, e.g. $\frac{\partial^2}{\partial x^2} g(\sigma)$, where $g$ is the Gaussian function.
Meanwhile, in order to obtain a stable position and scale for each feature point, interpolation can be carried out in the scale space to obtain the position value and the scale value of the feature point.
S1024: obtain the effective gradient information of the picture to be processed by constructing feature point descriptors.
Specifically, in this step the principal direction of the feature point is first obtained, which ensures the rotational invariance of the algorithm; the neighborhood of the feature point is then rotated to the principal direction before the feature point is described. The concept of a principal direction is introduced to make image matching rotation invariant. To compute it, a circular area of radius 6s (s being the scale value of the feature point) is taken around the feature point, and the Haar wavelet responses of the pixels in this neighborhood are computed in the x and y directions. The computed responses are weighted by a coefficient depending on distance, and histogram statistics are then taken over the weighted responses. Starting from the x axis, the Haar wavelet responses within a 60-degree sector of the circular region are summed to obtain a new vector. Sliding the sector by 5 degrees at a time and traversing the whole circular area yields 72 such vectors; the direction of the longest vector is selected as the principal direction of the feature point.
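The sliding-sector search described above can be sketched as follows (hypothetical function name; angles are in degrees and the responses are assumed already distance-weighted):

```python
import numpy as np

def dominant_direction(angles, magnitudes, window_deg=60, step_deg=5):
    # Slide a 60-degree sector around the circle in 5-degree steps,
    # sum the Haar response vectors inside each sector, and return the
    # direction of the longest summed vector as the principal direction.
    angles = np.asarray(angles, dtype=float) % 360
    magnitudes = np.asarray(magnitudes, dtype=float)
    best_len, best_dir = -1.0, 0.0
    for start in range(0, 360, step_deg):
        in_sector = ((angles - start) % 360) < window_deg
        vx = np.sum(magnitudes[in_sector] * np.cos(np.radians(angles[in_sector])))
        vy = np.sum(magnitudes[in_sector] * np.sin(np.radians(angles[in_sector])))
        length = np.hypot(vx, vy)
        if length > best_len:
            best_len = length
            best_dir = np.degrees(np.arctan2(vy, vx)) % 360
    return best_dir
```

With responses clustered tightly around one angle, the returned direction is that cluster's mean angle, as expected.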
For each detected feature point, a region of size 20s × 20s centered on the feature point is selected, and the region is then rotated to the principal direction of the feature point. To better exploit the spatial information of the image, the 20s × 20s region is divided into 16 sub-regions in a 4 × 4 grid, so that each sub-region covers 5s × 5s pixels. Finally, the feature point is described by collecting statistics of the Haar wavelet responses of the pixels, yielding the effective gradient information of the picture to be processed. For example, assuming a picture yields P feature points through detection, and each feature point yields a 128-dimensional feature descriptor through the above description, a one-dimensional vector of P × 128 dimensions can represent the effective gradient information of the picture to be processed.
The effective gradient information calculation method provided in this example is merely an example; details not listed here follow methods known in the art for calculating effective gradient information. It should be noted that in this step the effective gradient information of the picture to be processed may also be obtained by other calculation methods customary to those skilled in the art, which this embodiment does not specifically limit.
S103: code the effective gradient information according to the type of the effective gradient information to obtain a first feature.
Specifically, since the dimensions and lengths of the effective gradient information computed from different pictures differ, the information must be processed before it can be used as a feature for classifying pictures: the effective gradient information of all pictures must be brought to the same dimension and length so that it can be compared and then classified. Optionally, the effective gradient information may be processed by classifying it by type and then applying the encoding corresponding to that type, the encoded effective gradient information serving as the first feature. The classification determines the encoding method of the effective gradient information.
Optionally, one possible implementation of step S103 is:
S1031: cluster the effective gradient information through a clustering algorithm, taking the clustering centers as basic code words;
S1032: encode the effective gradient information according to the basic code words using a bag-of-words model, obtaining the first feature in a fixed coding format.
Specifically, the total number of cluster categories can be set to 1000 in advance for a K-means clustering method, and the feature descriptors are clustered by measuring distances between them, the resulting clustering centers serving as basic code words. The effective gradient information is then encoded from the basic code words using a bag-of-words model. Although the dimension of the effective gradient information differs from picture to picture, each feature point is assigned to its corresponding clustering center, and a histogram over the clustering centers is then computed, producing a 1000-dimensional feature vector, which is the first feature to be extracted. A specific processing example follows.
Assume that the sample contains 1w of pictures shot by the low-altitude unmanned aerial vehicle, wherein 7000 pictures contain foreign matters and 3000 pictures contain no foreign matters. Gradient information in each picture can be extracted by a gradient feature extractor, and is represented by a plurality of 128-dimensional feature descriptors. Thus, we can obtain
Figure BDA0001601132560000081
Feature points, where P is the number of feature points in each picture. And (3) clustering the characteristic points by using a K-means clustering method, and assigning the total number of the categories to be 1000 so as to obtain 1000 categories, wherein each category comprises a plurality of characteristic points. The clustering center of each category is obtained by calculating the mean of all 128-dimensional feature descriptors in each category, and can be represented by a 128-dimensional one-dimensional vector. By this we can get 1000 cluster centers. Then, the clustering centers are used as a basic dictionary, and each picture is coded by adopting the concept of a bag-of-words model. Specifically, assume that the picture contains P feature points,and counting the frequencies of the P characteristic points belonging to the 1000 categories, and finally obtaining a 1000-dimensional one-dimensional vector which is used as a code word after final coding for subsequent processing.
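The codebook-and-histogram pipeline just described can be sketched as follows (scaled down to 16 words instead of 1000, with random stand-in descriptors; `scikit-learn`'s KMeans is used for the clustering step):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in data: each "picture" yields a variable number P of
# 128-dimensional feature descriptors, as in the text.
pictures = [rng.normal(size=(int(rng.integers(20, 40)), 128)) for _ in range(5)]

# Cluster all descriptors into a small codebook (the patent uses 1000 centers).
n_words = 16
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
kmeans.fit(np.vstack(pictures))

def bow_encode(descriptors):
    # Assign each descriptor to its nearest clustering center and count
    # the frequencies: every picture maps to the same fixed-length vector.
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

codes = [bow_encode(p) for p in pictures]
```

Regardless of how many descriptors a picture produced, every entry of `codes` has length 16 and sums to 1, which is what makes the pictures directly comparable and classifiable.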
The clustering algorithm and bag-of-words model provided in this example are examples only; details not listed here follow methods known in the art. It should be noted that in this step the effective gradient information may also be brought to a unified dimension by other calculation methods familiar to those skilled in the art, which this embodiment does not specifically limit.
S104: obtain a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed.
Specifically, in this step the color features of the picture to be processed are handled; optionally, this step specifically includes:
S1041: acquire the HSV color model data of the picture to be processed;
S1042: count the HSV color model data in a color histogram to obtain an HSV color model feature as the second feature.
The HSV color model is a color model oriented to visual perception. It includes three components, H, S and V, corresponding respectively to the hue, saturation and value (brightness) of the color signal, and can be represented by an inverted cone: the distance from the central axis represents saturation, the position along the axis represents brightness, and the angle around the axis represents hue. Since the perceived color difference is approximately proportional to the Euclidean distance in this space, HSV is well suited to human perception. Color histograms are a common way of representing color features, but the raw histogram is too large, so it generally needs to be quantized to simplify the color feature. To this end, the color components are first quantized in the following way:
$$H = \lfloor 8h/360 \rfloor, \qquad S = \lfloor Q_S \cdot s \rfloor, \qquad V = \lfloor Q_V \cdot v \rfloor$$

(one quantization consistent with the text below: hue into 8 levels, saturation and value into $Q_S = Q_V = 3$ levels each). A one-dimensional feature vector is then constructed according to $G = H Q_S Q_V + S Q_V + V$; with $Q_S = Q_V = 3$, this gives $G = 9H + 3S + V$. The color feature is thus quantized to an integer in 0–71, which allows the histogram statistics to be collected into a one-dimensional feature vector, recorded as the second feature.
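A sketch of this quantization and histogram follows; the bin boundaries are an assumption consistent with G ranging over 0–71 (8 hue levels, 3 saturation levels, 3 value levels):

```python
import numpy as np

def quantize_hsv(h, s, v):
    # h in [0, 360), s and v in [0, 1]; bin boundaries are an assumption.
    H = min(int(h / 360.0 * 8), 7)   # 8 hue bins
    S = min(int(s * 3), 2)           # Q_S = 3 saturation bins
    V = min(int(v * 3), 2)           # Q_V = 3 value bins
    return 9 * H + 3 * S + V         # G = H*Q_S*Q_V + S*Q_V + V

def hsv_histogram(pixels):
    # 72-bin color histogram over (h, s, v) pixels: the second feature.
    hist = np.zeros(72)
    for h, s, v in pixels:
        hist[quantize_hsv(h, s, v)] += 1
    return hist / max(hist.sum(), 1.0)
```

Every picture, whatever its size, is thereby summarized as one 72-dimensional normalized vector.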
The HSV color model computation provided in this example is only an example; details not listed here follow known methods of computing the HSV color model in the art. It should be noted that in this step the HSV color model of the picture to be processed may also be obtained by other calculation methods commonly used by those skilled in the art, which this embodiment does not specifically limit.
S105: judge whether foreign matter exists on the rail in the picture to be processed according to the first feature and the second feature.
Specifically, in one possible implementation of this embodiment, the first feature of the picture to be processed obtained in S103 and the second feature obtained in S104 are fused to obtain the third feature. Optionally, since the dimensions of the first feature and of the second feature are each the same across different pictures, the array of the third feature can be obtained by concatenating the array of the first feature and the array of the second feature, so that it contains both. Alternatively, the first feature and the second feature may be fused by addition, subtraction, multiplication or the like, as long as a third feature is obtained that simultaneously represents the first and second features and whose dimension and form are the same for all pictures; this is not limited here.
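The concatenation variant can be sketched in a few lines (the dimensions follow the running example: a 1000-dimensional first feature and a 72-dimensional second feature, filled here with random stand-in values):

```python
import numpy as np

rng = np.random.default_rng(0)
first = rng.random(1000)   # bag-of-words histogram (first feature)
second = rng.random(72)    # HSV color histogram (second feature)

# Concatenation keeps both features intact and yields the same
# 1072-dimensional third feature for every picture.
third = np.concatenate([first, second])
```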
Specifically, in this step a classifier may be used to judge, through the third feature, whether foreign matter exists on the rail in the picture to be processed, wherein the classifier stores: third features of rail pictures with foreign matter present and third features of rail pictures without foreign matter present.
For example, set A of the classifier stores N third features of normal rail pictures without foreign matter, computed according to S101 to S105 of the embodiment above: N rail pictures shot by the low-altitude UAV with no foreign matter on the rail are selected, and the third features obtained from these pictures are put into set A; since none contains foreign matter, the N third features in set A lie in a similar range. Meanwhile, set B of the classifier stores M third features of rail pictures with foreign matter present, where M is a positive integer and M and N may be equal or not; the M third features in set B are computed, again according to S101 to S105, from M rail pictures shot by the low-altitude UAV with foreign matter on the rail. Because foreign matter is present, the M third features in set B lie in a range different from that of set A; if the types of foreign matter are the same, the third features in set B lie in a similar range.
In S105, after the third feature of the to-be-processed picture has been obtained in the above steps, the classifier compares it, according to a machine learning algorithm, with the third features in set A and set B to determine which set it belongs to: if the third feature of the to-be-processed picture is more similar to the third features in set A, it is determined that no foreign matter exists on the rail in the picture; correspondingly, if it is more similar to the third features in set B, it is determined that foreign matter exists on the rail in the picture.
Optionally, in the above embodiment, the classifier adopted in step S105 is a Support Vector Machine (SVM) linear classifier.
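The set A / set B scheme above is exactly a two-class training setup. As a minimal sketch, scikit-learn's `LinearSVC` stands in for the SVM linear classifier the embodiment names; the cluster means, dimensions, and sample counts are invented for illustration:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical training data: set A (no foreign matter, label 0) clusters
# in one region of feature space, set B (foreign matter, label 1) in
# another; the offsets and the 176-dim feature size are illustrative only.
set_a = rng.normal(loc=0.0, scale=0.1, size=(50, 176))
set_b = rng.normal(loc=1.0, scale=0.1, size=(50, 176))
features = np.vstack([set_a, set_b])
labels = np.array([0] * 50 + [1] * 50)

clf = LinearSVC()  # linear SVM, as the embodiment suggests
clf.fit(features, labels)

# A new third feature lying near set B should be classified as "foreign matter".
query = np.full((1, 176), 0.95)
print(clf.predict(query))  # expected: [1]
```

In practice the two sets would come from real annotated drone pictures run through S101 to S105, not from synthetic Gaussians; the SVM then learns the separating boundary between the "same range" of set A and the distinct range of set B.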
In addition, the present invention also provides a multi-feature fusion dynamic scene classification device for implementing the method embodiments described above; it has the same technical features and technical effects, and the details are not repeated here.
Optionally, in the above embodiment, the method further includes, after S105: if it is determined that foreign matter exists on the rail in the picture to be processed, acquiring an attribute of the foreign matter through the third feature.
Specifically, when it is determined in S105 that foreign matter exists on the rail in the picture to be processed, the type of the foreign matter may be further identified by the classifier. The attribute of the foreign matter may be parameter information such as its type, size, shape, color, and moving speed. For example, rail pictures acquired by the low-altitude unmanned aerial vehicle in which livestock (e.g., cattle, sheep) has entered the rail area are processed through the above steps to obtain third features, which are stored in set C of the classifier; similarly, pictures in which a vehicle (e.g., a car or motorcycle) has entered are processed and the resulting third features are stored in set D of the classifier. After receiving the third feature of the picture to be processed, the classifier classifies it against the third features in set C and set D to obtain the type of the rail foreign matter in the picture, namely livestock or vehicle. Optionally, the determination of the attribute of the foreign matter in this embodiment may be implemented by a classifier different from, or the same as, the one used in the above embodiment to determine whether foreign matter exists; if the same classifier is used, sets A to D are all stored in it. The above takes the type attribute of the foreign matter as an example; the other parameters are implemented in the same manner and are not described again. It should be noted that the moving speed of the foreign matter can be obtained by combining pictures taken at different times.
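One simple stand-in for this second, attribute-typing classifier is a nearest-centroid rule over sets C and D; the stored features, their 2-dimensional size, and the class names below are hypothetical, chosen only to show the mechanism:

```python
import numpy as np

def classify_attribute(third_feature, class_sets):
    """Assign the picture's third feature to whichever stored set
    (e.g. C = livestock, D = vehicle) has the nearest mean feature."""
    feature = np.asarray(third_feature, dtype=float)
    best_name, best_dist = None, np.inf
    for name, stored in class_sets.items():
        centroid = np.mean(np.asarray(stored, dtype=float), axis=0)
        dist = np.linalg.norm(feature - centroid)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Hypothetical stored third features for each foreign-matter type.
class_sets = {
    "livestock": [[0.1, 0.2], [0.2, 0.1]],   # set C
    "vehicle":   [[0.9, 0.8], [0.8, 0.9]],   # set D
}
print(classify_attribute([0.85, 0.9], class_sets))  # vehicle
```

An SVM trained on sets C and D, as in the two-class case, would work the same way; the nearest-centroid version is just the shortest way to illustrate multi-class attribute typing.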
In summary, according to the rail foreign matter monitoring method based on the air-based platform image provided by this embodiment, a picture to be processed is acquired, the picture being a rail picture shot by a low-altitude unmanned aerial vehicle; effective gradient information of the picture is obtained; the effective gradient information is encoded according to its type to obtain a first feature; a second feature is obtained according to the hue, saturation and value (HSV) color model of the picture; and whether foreign matter exists on the rail in the picture is determined according to the first feature and the second feature. Whether foreign matter exists on the rail is thus determined by combining the effective gradient information and the color information of the picture, which improves the efficiency of rail foreign matter monitoring. Further, in this embodiment of the rail foreign matter monitoring method based on the air-based platform image, a gradient feature extractor extracts the gradient features of the picture to be classified, and a high-level gradient feature is then obtained from them by K-means clustering and a bag-of-words model; meanwhile, a color feature extractor extracts the color features of the picture, and the two features are fused before the rail foreign matter is monitored and classified. When the picture to be classified is identified as containing foreign matter, an early warning is issued. Because both the gradient information and the color information of the picture are considered, the classification is more accurate.
Fig. 2 is a schematic structural diagram of a first rail foreign matter monitoring device based on an air-based platform image according to an embodiment of the present invention. As shown in fig. 2, the rail foreign matter monitoring device provided by this embodiment includes: an acquisition module 201, a feature extraction module 202 and a classification module 203. The acquisition module 201 is configured to acquire a picture to be processed, the picture being a rail picture shot by a low-altitude unmanned aerial vehicle; the feature extraction module 202 is configured to obtain effective gradient information of the picture to be processed; the feature extraction module 202 is further configured to encode the effective gradient information according to its type to obtain a first feature; the feature extraction module 202 is further configured to obtain a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed; and the classification module 203 is configured to determine, according to the first feature and the second feature, whether foreign matter exists on the rail in the picture to be processed.
The apparatus provided in this embodiment is used to execute the method provided in the embodiment shown in fig. 1, and the implementation manner and principle thereof are the same, and are not described again.
Optionally, in the above embodiment, the feature extraction module is specifically configured to establish an integral image of the picture to be processed; establish a scale space from the integral image through a box filter; locate feature points of the scale space; and obtain effective gradient information of the picture to be processed by constructing feature point descriptors.
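The integral image is what makes box filtering cheap at every scale: any box sum costs four lookups regardless of box size. A minimal numpy sketch of these first two steps (the image values are arbitrary test data):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry (y, x) holds the sum of all pixels
    above and to the left of (y, x), inclusive."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def box_sum(ii, top, left, height, width):
    """Sum of any box region in O(1) using four integral-image lookups;
    a box filter at any scale is built from a few such sums."""
    padded = np.pad(ii, ((1, 0), (1, 0)))  # zero row/column simplifies borders
    bottom, right = top + height, left + width
    return (padded[bottom, right] - padded[top, right]
            - padded[bottom, left] + padded[top, left])

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))  # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

Feature point localization and descriptor construction on top of this pipeline follow the usual box-filter approach (as in SURF-style detectors) and are omitted here for brevity.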
Optionally, in the above embodiment, the feature extraction module is specifically configured to cluster the effective gradient information through a clustering algorithm, and use a clustering center as a basic codeword; and coding the effective gradient information by adopting a bag-of-words model according to the basic code words to obtain a first characteristic of a fixed coding format.
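Once the cluster centers (basic codewords) exist, the bag-of-words encoding step reduces a variable number of local descriptors to one fixed-length vector. In the sketch below the codebook is hand-picked to stand in for K-means cluster centers, and the descriptors are toy 2-dimensional values:

```python
import numpy as np

def bag_of_words(descriptors, codebook):
    """Encode a variable number of local gradient descriptors as a
    fixed-length histogram: each descriptor votes for its nearest
    codeword (cluster center), so every picture yields a first feature
    of the same dimension regardless of how many descriptors it has."""
    descriptors = np.asarray(descriptors, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    # Distance of every descriptor to every codeword.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalize so descriptor count does not matter

# A hand-picked 3-word codebook stands in for K-means cluster centers.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
descriptors = [[0.1, 0.0], [0.9, 0.1], [1.1, -0.1], [0.0, 0.1]]
print(bag_of_words(descriptors, codebook))  # [0.5 0.5 0. ]
```

The fixed coding format the embodiment mentions is exactly this histogram length: it equals the number of codewords, not the number of descriptors found in the picture.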
Optionally, in the above embodiment, the feature extraction module is specifically configured to obtain HSV color model data of the to-be-processed picture; and counting the HSV color model data according to the color histogram to obtain an HSV color model characteristic as a second characteristic.
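The color-histogram step can be sketched with the standard library's `colorsys` conversion (note that the V in HSV is value/brightness); the pixel list and bin count are illustrative assumptions:

```python
import colorsys
import numpy as np

def hsv_histogram(rgb_pixels, bins=8):
    """Second feature: convert RGB pixels (components in 0..1) to HSV
    and build a per-channel histogram, concatenated into one vector."""
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb_pixels])
    feature = []
    for channel in range(3):  # H, S, V
        hist, _ = np.histogram(hsv[:, channel], bins=bins, range=(0.0, 1.0))
        feature.append(hist / len(rgb_pixels))
    return np.concatenate(feature)

# Four illustrative pixels: pure red, pure green, mid grey, white.
pixels = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 0.5), (1.0, 1.0, 1.0)]
feature = hsv_histogram(pixels, bins=4)
print(feature.shape)  # (12,)
```

A real implementation would convert the whole image in one vectorized call (e.g. with an image library) rather than pixel by pixel, but the resulting second feature is the same concatenated histogram.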
Optionally, in the above embodiment, the classification module is specifically configured to perform feature fusion on the first feature and the second feature to obtain a third feature; and determine, from the third feature by a classifier, whether foreign matter exists on the rail in the picture to be processed, where the classifier stores: third features of rail pictures in which foreign matter exists and third features of rail pictures in which no foreign matter exists.
Optionally, in the above embodiment, the obtaining module is further configured to obtain third features of N rail pictures with foreign matters and third features of M rail pictures without foreign matters, where N and M are positive integers; and storing the third characteristics of the N rail pictures with the foreign matters and the third characteristics of the M rail pictures without the foreign matters into a classifier.
Optionally, in the above embodiment, the classifier is a support vector machine SVM.
Optionally, in the above embodiment, the classification module is further configured to, if it is determined that foreign objects exist in the rail in the to-be-processed picture, obtain an attribute of the foreign objects through the third feature.
Optionally, in the above embodiment, the classification module is further configured to, if it is determined that foreign matter exists in a rail in the picture to be processed, obtain location information of the picture to be processed when the low-altitude unmanned aerial vehicle takes the picture to be processed, and send an alarm signal carrying the location information to the alarm server.
The apparatus provided in this embodiment is used to execute the method provided in the above embodiment, and the implementation manner and principle thereof are the same, and are not described again.
An embodiment of the present invention further provides a computer storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the rail foreign matter monitoring method based on the air-based platform image in any one of the above embodiments. The storage medium in this embodiment is a computer-readable storage medium.
An embodiment of the present invention further provides an electronic device, including: a processor; and a memory for storing executable instructions for the processor; wherein the processor is configured to execute a rail foreign object monitoring method based on the air-based platform image in any one of the above embodiments via execution of executable instructions.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A rail foreign matter monitoring method based on an air-based platform image, characterized by comprising the following steps:
acquiring a picture to be processed, wherein the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle;
obtaining effective gradient information of the picture to be processed;
coding the effective gradient information according to the type of the effective gradient information to obtain a first characteristic;
obtaining a second characteristic according to the hue, saturation and value (HSV) color model of the picture to be processed;
judging whether foreign matters exist in the rail in the picture to be processed or not according to the first characteristic and the second characteristic;
the obtaining effective gradient information of the picture to be processed comprises:
establishing an integral image of the picture to be processed;
establishing a scale space by the integral image through a box filter;
locating feature points of the scale space;
obtaining effective gradient information of the picture to be processed by constructing a feature point descriptor;
the judging whether foreign matters exist in the rail in the picture to be processed according to the first characteristic and the second characteristic comprises the following steps:
performing feature fusion on the first feature and the second feature to obtain a third feature, wherein the feature fusion is performed in a mode of combining, adding, subtracting or multiplying the first feature and the second feature;
judging whether foreign matters exist in the rail in the picture to be processed through the third features by a classifier, wherein the classifier comprises: the third feature of a rail picture with a foreign object present and the third feature of a rail picture without a foreign object present;
the encoding the effective gradient information according to the type of the effective gradient information to obtain a first characteristic includes:
clustering the effective gradient information through a clustering algorithm, and taking a clustering center as a basic code word;
and coding the effective gradient information by adopting a bag-of-words model according to the basic code words to obtain the first characteristic of a fixed coding format.
2. The method as claimed in claim 1, wherein the obtaining a second characteristic according to the hue, saturation and value (HSV) color model of the to-be-processed picture comprises:
acquiring HSV color model data of the picture to be processed;
and counting the HSV color model data according to a color histogram to obtain the HSV color model characteristic as the second characteristic.
3. The method according to claim 1, wherein before the obtaining the picture to be processed, further comprising:
acquiring the third characteristics of N rail pictures with foreign matters and the third characteristics of M rail pictures without foreign matters, wherein N and M are positive integers;
and storing the third characteristics of the N rail pictures with the foreign matters and the third characteristics of the M rail pictures without the foreign matters into the classifier.
4. The method of claim 3, wherein the classifier is a Support Vector Machine (SVM).
5. The method according to any one of claims 1 to 4, wherein after determining whether foreign objects exist on the rail in the to-be-processed picture according to the first feature and the second feature, the method further comprises:
and if the rail in the picture to be processed is judged to have the foreign matter, acquiring the attribute of the foreign matter through the third characteristic.
6. The method of claim 5, wherein the attributes of the foreign object include one or more of: the type, size, shape and color of the foreign matter.
7. A rail foreign matter monitoring device based on air-based platform images, comprising:
the acquisition module is used for acquiring a picture to be processed, and the picture to be processed is a rail picture shot by a low-altitude unmanned aerial vehicle;
the processing module is used for acquiring effective gradient information of the picture to be processed;
the characteristic extraction module is used for coding the effective gradient information according to the type of the effective gradient information to obtain a first characteristic;
the feature extraction module is further used for obtaining a second feature according to the hue, saturation and value (HSV) color model of the picture to be processed;
the classification module is used for judging whether foreign matters exist in the rail in the picture to be processed according to the first characteristic and the second characteristic;
the processing module is specifically used for establishing an integral image of the picture to be processed; establishing a scale space for the integral image through a box filter; locating feature points of the scale space; obtaining effective gradient information of the picture to be processed by constructing a feature point descriptor;
the classification module is specifically configured to perform feature fusion on the first feature and the second feature to obtain a third feature, where the feature fusion is performed in a manner of combining, adding, subtracting, or multiplying the first feature and the second feature;
judging whether foreign matters exist in the rail in the picture to be processed through the third features by a classifier, wherein the classifier comprises: the third feature of a rail picture with a foreign object present and the third feature of a rail picture without a foreign object present;
the characteristic extraction module is specifically used for clustering the effective gradient information through a clustering algorithm, and taking a clustering center as a basic code word;
and coding the effective gradient information by adopting a bag-of-words model according to the basic code words to obtain the first characteristic of a fixed coding format.
CN201810225219.XA 2018-03-19 2018-03-19 Rail foreign matter monitoring method and device based on air-based platform image Active CN108389205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810225219.XA CN108389205B (en) 2018-03-19 2018-03-19 Rail foreign matter monitoring method and device based on air-based platform image

Publications (2)

Publication Number Publication Date
CN108389205A (en) 2018-08-10
CN108389205B (en) 2022-10-25



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant