CN113487620A - Railway insulation section detection method and device - Google Patents

Railway insulation section detection method and device

Info

Publication number
CN113487620A
Authority
CN
China
Prior art keywords: difference, current, image, target, segmented image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110567484.8A
Other languages
Chinese (zh)
Inventor
孙洪茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yixin Intelligent Vision Technology Co ltd
Original Assignee
Shenzhen Yixin Intelligent Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yixin Intelligent Vision Technology Co ltd filed Critical Shenzhen Yixin Intelligent Vision Technology Co ltd
Priority to CN202110567484.8A
Publication of CN113487620A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a method and a device for detecting a railway insulation joint. An image segmentation path corresponding to a target insulation joint is determined from position information, and a current segmented image set is generated from the image segmentation path and the current image of the target insulation joint. A first difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented image in the historical segmented image set closest to the current acquisition time point, and the current deformation condition of the target insulation joint is confirmed according to the first difference condition and/or the second difference condition and/or the third difference condition. Because the deformation condition of the insulation joint is detected by a monitoring device mounted on a train, the labor intensity of inspection workers is low, the inspection cycle is short, damage to section equipment is easily found in time, and the probability of track-side personnel interfering with train operation and of casualty accidents is reduced.

Description

Railway insulation section detection method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for detecting a railway insulation joint.
Background
In a non-insulated (jointless) track circuit system, the insulation joint provides electrical isolation between adjacent track circuits, ensures effective transmission of the section's signals and balances the power-frequency traction return current, making it a key component of the system. At present, the non-insulated track circuits used on China's main railway lines are generally tuned non-insulated track circuits.
Railway insulation joints are currently detected mainly by manual inspection: inspection personnel must walk sections of dozens of kilometers at night to check the signal equipment, with an inspection cycle of roughly half a month.
However, manual inspection is labor-intensive and the inspection cycle is long; once section equipment is damaged it is not easily found in time, and the probability of track-side personnel interfering with train operation and of casualty accidents increases.
Disclosure of Invention
In view of the above, the present application provides a railway insulation joint detection method and apparatus that overcome, or at least partially solve, the above problems, including:
a railway insulation joint detection method for determining the deformation condition of a target insulation joint from real-time images acquired by a monitoring device mounted on a train, the deformation condition being either to be repaired or normal, the method comprising:
acquiring current state data of the target insulating joint, wherein the current state data comprises a current image of the target insulating joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulating joint;
determining an image segmentation path corresponding to the target insulation joint according to the position information, and generating a current segmentation image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
determining a first difference condition between each segmented image in the current segmented image set and the corresponding segmented image in the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set of the previous acquisition time point; and/or determining a second difference condition between each segmented image in the current segmented image set and the corresponding segmented images in all historical segmented image sets; and/or determining a third difference condition between each segmented image in the current segmented image set and the corresponding segmented images in each historical segmented image set within a preset number of times;
determining a standard audio signal corresponding to the target insulation section according to the position information, and determining an audio difference degree according to the current audio signal and the standard audio signal;
determining a total difference value according to the first difference condition and/or the second difference condition and/or the third difference condition together with the audio difference degree, and setting the current deformation condition of the target insulation joint to be repaired when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the sum of the weights of the first and/or second and/or third difference conditions is set to 80%; when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the sum of the weights of the first and/or second and/or third difference conditions is set to 20%; in either case the total difference value is calculated as the weighted sum of the audio difference degree and the first and/or second and/or third difference conditions.
Preferably, the step of acquiring the current state data of the target insulation section includes:
acquiring a real-time image of the target insulation joint, and generating a current image of the target insulation joint according to the real-time image, a foreground model corresponding to the target insulation joint and a background model corresponding to the target insulation joint;
and acquiring the position information of the target insulation joint, and generating the current state data of the target insulation joint according to the position information of the target insulation joint and the current image of the target insulation joint.
Preferably, the method further comprises the following steps:
acquiring a standard image of the target insulation joint, and determining a target area corresponding to the target insulation joint in the standard image of the target insulation joint;
acquiring pixel points in the target area in the standard image, and generating the foreground model according to the pixel points in the target area;
and acquiring pixel points outside the target area in the standard image, and generating the background model according to the pixel points outside the target area.
Preferably, the step of generating a current image of the target insulation section according to the real-time image and a foreground model corresponding to the target insulation section and a background model of the target insulation section includes:
determining a first probability that each pixel point in the real-time image belongs to the target area according to the foreground model and the real-time image;
determining a second probability that each pixel point in the real-time image belongs to the outside of the target area according to the background model and the real-time image;
and generating a current image of the target insulation joint according to the first probability, the second probability and the real-time image.
Preferably, the current audio signal is obtained as follows: a first audio signal and a second audio signal are generated by knocking the target insulation joint, the time interval between the first audio signal and the second audio signal being a preset time; the first audio signal and the second audio signal are averaged to obtain an average audio signal; and the average audio signal is amplified and denoised to obtain the current audio signal.
Preferably, the step of generating a current segmentation image set according to the image segmentation path of the target insulation joint and the current image of the target insulation joint includes:
dividing the current image of the target insulation joint into a preset number of current divided images according to the image dividing path, and numbering the current divided images;
and generating the current segmentation image set according to the numbered current segmentation images.
Preferably, the step of determining a first difference between each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point includes:
respectively determining image difference values between the historical segmentation images corresponding to the numbers in the historical segmentation image set and the current segmentation image set which are closest to the current acquisition time point and the current segmentation image;
and superposing the image difference values to generate the first difference condition.
Preferably, the step of determining the second difference between each segmented image in the current segmented image set and each segmented image corresponding to all the historical segmented image sets includes:
respectively determining image difference values between the historical segmentation images and the current segmentation images which are corresponding to the numbers in the single historical segmentation image set and the current segmentation image set;
superposing the image difference values of each historical segmentation image in the single historical image set and each current segmentation image in the current segmentation image set, and respectively generating second comparison difference values;
and comparing second comparison difference values in all the historical segmentation image sets, and taking the largest value in the second comparison difference values as a second difference condition.
Preferably, the step of determining a third difference between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times includes:
respectively determining image difference values between the historical segmentation images and the current segmentation images which are corresponding to the numbers in the single historical segmentation image set and the current segmentation image set;
superposing the image difference values of each historical segmentation image in the single historical image set and each current segmentation image in the current segmentation image set, and respectively generating the third comparison difference values;
and superposing third comparison difference values in each historical segmentation image set within preset times to serve as a third difference condition.
The invention also provides a railway insulation joint detection device matching the above method, for determining the deformation condition of the target insulation joint from real-time images acquired by the monitoring device mounted on the train, the deformation condition being either to be repaired or normal, the device comprising:
the current state data acquisition module is used for acquiring current state data of the target insulating joint, wherein the current state data comprises a current image of the target insulating joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulating joint;
a current segmentation image set generation module, configured to determine an image segmentation path corresponding to the target insulation joint according to the position information, and generate a current segmentation image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
the difference condition determining module is used for determining a first difference condition between each segmented image in the current segmented image set and the corresponding segmented image in the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set of the previous acquisition time point; and/or determining a second difference condition between each segmented image in the current segmented image set and the corresponding segmented images in all historical segmented image sets; and/or determining a third difference condition between each segmented image in the current segmented image set and the corresponding segmented images in each historical segmented image set within a preset number of times;
the audio frequency difference degree determining module is used for determining a standard audio frequency signal corresponding to the target insulation joint according to the position information and determining the audio frequency difference degree according to the current audio frequency signal and the standard audio frequency signal;
the current deformation condition confirming module is used for determining a total difference value according to the first difference condition and/or the second difference condition and/or the third difference condition together with the audio difference degree, and setting the current deformation condition of the target insulation joint to be repaired when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the sum of the weights of the first and/or second and/or third difference conditions is set to 80%; when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the sum of the weights of the first and/or second and/or third difference conditions is set to 20%; in either case the total difference value is calculated as the weighted sum of the audio difference degree and the first and/or second and/or third difference conditions.
The application has the following advantages:
in an embodiment of the application, current state data of the target insulation joint is acquired, the current state data including a current image of the target insulation joint, the acquisition time point of the current state data, the position information of the current state data, and a current audio signal generated by knocking the target insulation joint; an image segmentation path corresponding to the target insulation joint is determined according to the position information, and a current segmented image set is generated according to the image segmentation path and the current image of the target insulation joint; a first difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented image in the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set of the previous acquisition time point; and/or a second difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented images in all historical segmented image sets; and/or a third difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented images in each historical segmented image set within a preset number of times; a standard audio signal corresponding to the target insulation joint is determined according to the position information, and an audio difference degree is determined from the current audio signal and the standard audio signal; a total difference value is determined from the first and/or second and/or third difference conditions together with the audio difference degree, and the current deformation condition of the target insulation joint is set to be repaired when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the sum of the weights of the first and/or second and/or third difference conditions is set to 80%; when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the sum of the weights of the first and/or second and/or third difference conditions is set to 20%; in either case the total difference value is calculated as the weighted sum of the audio difference degree and the difference conditions. Because the deformation condition of the insulation joint is detected by the monitoring device mounted on the train, the labor intensity of inspection workers is low, the inspection cycle is short, damage to section equipment is easily found in time, and the probability of track-side personnel interfering with train operation and of casualty accidents is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed to be used in the description of the present application will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a flow chart illustrating steps of a method for detecting a railway insulation joint according to an embodiment of the present application;
fig. 2 is a block diagram illustrating a railway insulation section detection apparatus according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an image segmentation path of a railway insulation joint detection method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the method for detecting a railway insulation joint in any embodiment of the present invention, the deformation condition of the target insulation joint is determined by a real-time image obtained by a monitoring device mounted on a train; wherein the deformation condition comprises to-be-repaired or normal.
Referring to fig. 1, a flowchart illustrating steps of a method for detecting a railway insulation joint according to an embodiment of the present application is shown, and specifically includes the following steps:
s110, acquiring current state data of the target insulation joint, wherein the current state data comprises a current image of the target insulation joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulation joint;
s120, determining an image segmentation path corresponding to the target insulation joint according to the position information, and generating a current segmentation image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
s130, determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point; wherein the historical segmented image set with the closest current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times;
s140, determining a standard audio signal corresponding to the target insulation node according to the position information, and determining an audio difference degree according to the current audio signal and the standard audio signal;
s150, determining a total difference value according to the first difference condition, the second difference condition and/or the third difference condition and the audio frequency difference degree, and setting the current deformation condition of the target insulation joint to be maintained when the total difference value exceeds a preset threshold value; specifically, when the audio difference degree is less than or equal to an audio threshold, setting the weight of the audio difference degree to be 20% and the sum of the weights of the first difference case and/or the second difference case and/or the third difference case to be 80%, and calculating a difference total value according to the sum of the weight of the audio difference degree and the weights of the first difference case and/or the second difference case and/or the third difference case; when the audio difference degree is larger than the audio threshold, setting the weight of the audio difference degree as 80% and setting the sum of the weights of the first difference case and/or the second difference case and/or the third difference case as 20%, and calculating a difference total value according to the sum of the weight of the audio difference degree and the weights of the first difference case and/or the second difference case and/or the third difference case.
In an embodiment of the application, current state data of the target insulation joint is acquired, the current state data including a current image of the target insulation joint, the acquisition time point of the current state data, the position information of the current state data, and a current audio signal generated by knocking the target insulation joint; an image segmentation path corresponding to the target insulation joint is determined according to the position information, and a current segmented image set is generated according to the image segmentation path and the current image of the target insulation joint; a first difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented image in the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set of the previous acquisition time point; and/or a second difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented images in all historical segmented image sets; and/or a third difference condition is determined between each segmented image in the current segmented image set and the corresponding segmented images in each historical segmented image set within a preset number of times; a standard audio signal corresponding to the target insulation joint is determined according to the position information, and an audio difference degree is determined from the current audio signal and the standard audio signal; a total difference value is determined from the first and/or second and/or third difference conditions together with the audio difference degree, and the current deformation condition of the target insulation joint is set to be repaired when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the sum of the weights of the first and/or second and/or third difference conditions is set to 80%; when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the sum of the weights of the first and/or second and/or third difference conditions is set to 20%; in either case the total difference value is calculated as the weighted sum of the audio difference degree and the difference conditions. Because the deformation condition of the insulation joint is detected by the monitoring device mounted on the train, the labor intensity of inspection workers is low, the inspection cycle is short, damage to section equipment is easily found in time, and the probability of track-side personnel interfering with train operation and of casualty accidents is reduced.
Next, the railway insulation joint detection method in the present exemplary embodiment will be further described.
As described in step S110, the current state data of the target insulation segment is obtained, where the current state data includes a current image of the target insulation segment, a time point of obtaining the current state data, position information of the current state data, and a current audio signal generated by tapping the target insulation segment.
In an embodiment of the present invention, a specific process of "acquiring the current state data of the target insulation node" in step S110 may be further described with reference to the following description, where the current state data includes a current image of the target insulation node, an acquisition time point of the current state data, position information of the current state data, and a current audio signal generated by tapping the target insulation node.
And acquiring a real-time image of the target insulating joint, and generating a current image of the target insulating joint according to the real-time image, a foreground model corresponding to the target insulating joint and a background model corresponding to the target insulating joint.
In an embodiment of the present invention, a specific process of "generating a current image of the target insulation section according to the real-time image and the foreground model corresponding to the target insulation section and the background model corresponding to the target insulation section" may be further described in conjunction with the following description.
And determining a first probability that each pixel point in the real-time image belongs to the target area according to the foreground model and the real-time image.
In an embodiment of the present invention, the method further includes acquiring a standard image of the target insulation joint, and determining a target area corresponding to the target insulation joint in the standard image of the target insulation joint.
As an example, the target area may be obtained by acquiring the outer contour identification points of the target area corresponding to the target insulation joint in the standard image and constructing the target area from these outer contour identification points, where the outer contour identification points are identified using an artificial intelligence recognition technique.
The artificial intelligence recognition technique builds an outer contour identification point model using an artificial neural network and trains it on standard images. A contour identification point is an inflection point, and its range can be set by the user.
As an example, constructing the target area from the outer contour identification points may be performed as follows: establishing a pixel coordinate system for the standard image with the top-left corner of the image as the origin; assigning a value to each outer contour identification point using a first function to obtain a first parameter; taking the maximum and minimum of the first parameters as the contour starting point and contour end point of the target image, respectively; sorting the contour identification points by their pixel coordinates in the clockwise direction, starting from the contour starting point and ending at the contour end point; and connecting adjacent sorted contour identification points to form the target area.
As an example, the first function may be:
(point.y*imgW)+(point.x+1)
point.y represents the coordinate of the pixel point along the Y axis in the pixel coordinate system, point.x represents the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW represents the width of the original image.
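A minimal sketch of this construction is given below. The clockwise ordering around the point centroid and the helper names are assumptions; the document specifies only the pixel coordinate system, the first function, and the selection of the start and end points from the maximum and minimum first parameters.

```python
import numpy as np

def first_function(x, y, img_w):
    # The document's first function: (point.y * imgW) + (point.x + 1)
    return y * img_w + (x + 1)

def build_target_area(contour_points, img_w):
    """contour_points: list of (x, y) outer contour identification points, in a
    pixel coordinate system whose origin is the top-left corner of the image."""
    values = [first_function(x, y, img_w) for x, y in contour_points]
    start = contour_points[int(np.argmax(values))]  # contour starting point (maximum)
    end = contour_points[int(np.argmin(values))]    # contour end point (minimum)

    # Clockwise ordering around the centroid is one possible reading of
    # "sorting in the clockwise direction"; the document does not fix the method.
    cx = np.mean([p[0] for p in contour_points])
    cy = np.mean([p[1] for p in contour_points])
    ordered = sorted(contour_points, key=lambda p: -np.arctan2(p[1] - cy, p[0] - cx))
    k = ordered.index(start)
    ordered = ordered[k:] + ordered[:k]             # begin at the contour starting point

    # Adjacent ordered points would then be connected (e.g. as a polygon) to
    # enclose the target area; here only the ordered vertices are returned.
    return ordered, start, end
```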
In an embodiment of the present invention, the method further includes obtaining pixel points in the target region in the standard image, and generating the foreground model according to the pixel points in the target region.
In an embodiment of the present invention, a specific process of "obtaining pixel points in the target region in the standard image and generating the foreground model according to the pixel points in the target region" in the step may be further described with reference to the following description.
As an example, the foreground model may be constructed by taking the pixel points inside the target region of the standard image as a first pixel point set and performing cluster analysis on the color values of these pixel points to obtain at least one first category. For each first category, a corresponding first weight is acquired and a first Gaussian distribution model is determined from the color values of the pixel points in that category; the foreground model is then obtained as the weighted sum of the determined first Gaussian distribution models using the obtained first weights.
As an example, the first categories may be obtained by grouping together pixels whose mutual color difference is smaller than a preset color difference, or by dividing the first pixel point set into a preset number of first categories. Any cluster analysis method from the prior art may be used; the invention is not limited in this respect.
As one alternative, the first weight may be determined as follows: the ratio of the number of pixel points included in a first category to the number of pixel points in the first pixel point set is used as the first weight corresponding to that first category.
In a specific implementation, the first category a includes 10 pixel points, and the first pixel point set includes 100 pixel points, then the first weight corresponding to the first category a is 0.1 (i.e., 10/100).
As another alternative, the first weight may be determined as follows: the average color difference of the pixel points in each first category is calculated, along with the sum of the average color differences of all first categories; for each first category, its average color difference is subtracted from that sum to obtain a difference value, and the ratio of this difference value to the sum is taken as the first weight corresponding to that category.
In a specific implementation, after performing cluster analysis on the first pixel point set, a total of 3 first categories are obtained, and the first categories are respectively numbered as category 1, category 2, and category 3. The average color difference of the pixel points included in the category 1 is a; the average color difference of the pixel points included in the category 2 is b; the average color difference of the pixel points included in the category 3 is c; the sum is (a + b + c) (i.e., H), and for class 1, the corresponding first weight is (H-a)/H. Similarly, for the category 2, the corresponding first weight is (H-b)/H; for category 3, the corresponding first weight is (H-c)/H.
As an example, each Gaussian distribution model may be fitted by maximum likelihood estimation, i.e., the mean and variance of the Gaussian distribution model are estimated from the pixel color values by maximum likelihood; any other method from the prior art may also be used, and the invention is not limited in this respect.
As an example, the foreground model may be determined according to the following formula:

f = Σᵢ λᵢ · Nᵢ

where f denotes the foreground model, i the i-th first category, λᵢ the first weight corresponding to the i-th first category, and Nᵢ the Gaussian distribution model corresponding to the i-th first category.
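The construction just described is essentially a weighted mixture of Gaussians over pixel colors. The sketch below assumes K-means clustering, three categories, the pixel-share weighting option, and scikit-learn/SciPy helpers, none of which are mandated by the document.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

def build_foreground_model(fg_pixels, n_categories=3):
    """Sketch of the foreground model f = sum_i lambda_i * N_i described above.

    fg_pixels : (N, 3) array of color values of the pixel points inside the
    target area."""
    labels = KMeans(n_clusters=n_categories, n_init=10).fit_predict(fg_pixels)
    components = []
    for i in range(n_categories):
        cluster = fg_pixels[labels == i]
        # First weight (ratio option): share of pixels falling into this category.
        weight = len(cluster) / len(fg_pixels)
        # Gaussian parameters by maximum likelihood: sample mean and covariance.
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster, rowvar=False) + 1e-6 * np.eye(3)  # regularized
        components.append((weight, mean, cov))

    def foreground_prob(color):
        # f(x) = sum_i lambda_i * N_i(x)
        return sum(w * multivariate_normal.pdf(color, mean=m, cov=c)
                   for w, m, c in components)

    return foreground_prob
```

The background model would be built in the same way from the pixel points outside the target area, as noted below.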
In an embodiment of the present invention, a specific process of "generating a current image of the target insulation section according to the real-time image and the foreground model corresponding to the target insulation section and the background model corresponding to the target insulation section" may be further described in conjunction with the following description.
And determining a second probability that each pixel point in the real-time image belongs to the outside of the target area according to the background model and the real-time image.
In an embodiment of the present invention, the method further includes obtaining pixel points outside the target area in the standard image, and generating the background model according to the pixel points outside the target area.
In an embodiment of the present invention, a specific process of "obtaining pixel points outside the target area in the standard image and generating the background model according to the pixel points outside the target area" may be further described with reference to the following description.
As an example, the background model may be implemented by obtaining pixel points outside the target area in the standard image, as a second pixel point set, and performing cluster analysis according to color values of pixel points in the second pixel point set to obtain at least one second category. And aiming at each second category, obtaining a second weight corresponding to the second category, and determining a second Gaussian distribution model according to the color value of each pixel point in the second category. And determining a background model in a weighted summation mode according to the determined second Gaussian distribution models and the obtained second weights. Because the construction methods of the foreground model and the background model are the same, the description is omitted here.
In an embodiment of the present invention, a specific process of "generating a current image of the target insulation section according to the real-time image and the foreground model corresponding to the target insulation section and the background model corresponding to the target insulation section" may be further described in conjunction with the following description.
And generating a current image of the target insulation joint according to the first probability, the second probability and the real-time image as follows.
As an example, for any pixel point in the real-time image: if the first probability that the pixel point belongs to the target area is greater than the second probability that it belongs outside the target area, the pixel point is regarded as belonging to the target area; or, if the first probability that the pixel point belongs to the target area is greater than a preset probability, the pixel point is regarded as belonging to the target area; or, if the ratio of the first probability to the second probability is greater than a preset ratio, the pixel point is regarded as belonging to the target area; or, if the difference obtained by subtracting the second probability from the first probability is greater than a preset difference, the pixel point is regarded as belonging to the target area. The pixel points regarded as belonging to the target area constitute the current image of the target insulation joint.
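A compact sketch of the first of these decision rules is shown below; evaluating the foreground and background models pixel by pixel is an illustrative choice, and in practice the probability evaluation would be vectorized.

```python
import numpy as np

def extract_current_image(real_time_image, foreground_prob, background_prob):
    """Sketch of the first decision rule above: a pixel is kept as part of the
    target area when its foreground probability exceeds its background probability.

    real_time_image : (H, W, 3) array; foreground_prob / background_prob are the
    per-color models built from the standard image (see the earlier sketch)."""
    height, width, _ = real_time_image.shape
    mask = np.zeros((height, width), dtype=bool)
    for y in range(height):                          # naive loops, for clarity only
        for x in range(width):
            color = real_time_image[y, x]
            mask[y, x] = foreground_prob(color) > background_prob(color)
    # Pixels judged to belong to the target area constitute the current image.
    current_image = np.where(mask[..., None], real_time_image, 0)
    return current_image, mask
```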
In an embodiment of the present invention, a specific process of "acquiring the current state data of the target insulation node" in step S110 may be further described with reference to the following description, where the current state data includes a current image of the target insulation node, an acquisition time point of the current state data, position information of the current state data, and a current audio signal generated by tapping the target insulation node.
And acquiring the position information of the target insulating joint, and generating the current state data of the target insulating joint according to the position information of the target insulating joint and the current image of the target insulating joint.
It should be noted that the timestamp of the current image of the target insulation joint is an acquisition time point, that is, the acquisition time point of the current state data, and the position information is the longitude and latitude coordinates of the target insulation joint.
In an embodiment of the present invention, a specific process of "acquiring the current state data of the target insulation node" in step S110 may be further described with reference to the following description, where the current state data includes a current image of the target insulation node, an acquisition time point of the current state data, position information of the current state data, and a current audio signal generated by tapping the target insulation node.
As described in the following steps, a first audio signal and a second audio signal are generated by knocking the target insulation joint, wherein the time interval between the first audio signal and the second audio signal is a preset time.
As an example, a first audio signal generated by knocking the target insulation joint is acquired by an audio receiving device, and after a preset time interval of 2 s a second audio signal generated by knocking the target insulation joint is acquired.
It should be noted that an audio signal is an information carrier whose regular sound waves vary in frequency and amplitude. Audio is a continuously varying analog signal that can be represented by a continuous curve called a sound wave.
And averaging the first audio signal and the second audio signal to obtain an average audio signal, as described in the following steps.
As an example, the first audio signal and the second audio signal are each plotted as waveforms, the two waveforms are overlaid, the point-by-point average of the two is taken to generate a new waveform, and the new waveform is then converted into the average audio signal.
And as described in the following steps, the average audio signal is amplified and subjected to noise reduction processing to obtain the current audio signal.
As an example, the average audio signal is amplified and then denoised as follows: the amplified average audio signal, which contains transient noise, is obtained; estimated amplitude data of the current frame of the amplified average audio signal are obtained, representing the amplitude estimated for the current frame after the transient noise is removed; when the estimated amplitude data are smaller than minimum amplitude data, the estimated amplitude data are adjusted to obtain target amplitude data whose amplitude exceeds the minimum amplitude data, the target amplitude data representing the amplitude of the target audio signal expected after the transient noise of the noisy audio signal is removed, and the minimum amplitude data representing the minimum amplitude of the noisy audio signal in each frequency band over the preset time length; and the current audio signal is obtained from the target amplitude data.
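The sketch below assumes simple framing with an FFT, a fixed gain, and a per-band minimum-magnitude floor as a stand-in for the estimated, minimum and target amplitude data described above; the document does not specify these parameters or the exact noise estimator.

```python
import numpy as np

def current_audio_signal(first, second, gain=4.0, frame=1024):
    """Sketch of deriving the current audio signal from the two knock recordings.

    first, second : equal-length sample arrays; gain, frame size and the simple
    per-band floor below are illustrative choices only."""
    average = (np.asarray(first, dtype=float) + np.asarray(second, dtype=float)) / 2.0
    amplified = gain * average

    # Frame the amplified signal and work on per-band magnitudes.
    usable = (len(amplified) // frame) * frame
    frames = amplified[:usable].reshape(-1, frame)
    spectra = np.fft.rfft(frames, axis=1)
    magnitudes = np.abs(spectra)

    # "Minimum amplitude data": minimum magnitude per frequency band over the
    # whole (preset) duration, used as a floor for the denoised estimate.
    min_amplitude = magnitudes.min(axis=0)
    # Crude estimate of the signal without transient noise: cap each frame's
    # magnitudes at the per-band median, then clamp anything below the floor
    # back up to it ("target amplitude data").
    estimated = np.minimum(magnitudes, np.median(magnitudes, axis=0))
    target = np.maximum(estimated, min_amplitude)

    denoised = np.fft.irfft(target * np.exp(1j * np.angle(spectra)), n=frame, axis=1)
    return denoised.reshape(-1)
```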
As described in step S120, an image segmentation path corresponding to the target insulation joint is determined according to the position information, and a current segmentation image set is generated according to the image segmentation path of the target insulation joint and the current image of the target insulation joint.
In an embodiment of the present invention, the specific process of "determining the image segmentation path corresponding to the target insulation segment according to the position information" in step S120 may be further described with reference to the following description.
And determining the identification of the target insulation node according to the position information as described in the following steps.
It should be noted that the position information of the current state data is the longitude and latitude coordinates of the target insulation joint; since the specific type of insulation joint at each position is entered into the system in advance, the identifier of the target insulation joint can be obtained from this position information.
In an embodiment of the present invention, the specific process of "determining the image segmentation path corresponding to the target insulation segment according to the position information" in step S120 may be further described with reference to the following description.
And determining an image segmentation path of the target insulating joint according to the identification of the target insulating joint as described in the following steps.
It should be noted that the image segmentation path of the target insulation joint, i.e., how the image of the target insulation joint is to be divided, is determined from the identifier of the target insulation joint; specifically, each identifier corresponds to a different segmentation path, and the segmentation path is defined according to the shapes of the parts of the target insulation joint.
In an embodiment of the present invention, the specific process of "generating the current segmented image set according to the image segmentation path of the target insulation segment and the current image of the target insulation segment" in step S120 may be further described with reference to the following description.
Dividing the current image of the target insulation joint into a preset number of current divided images according to the image dividing path, and numbering the current divided images;
and generating the current segmentation image set according to the numbered current segmentation images as follows.
Referring to fig. 3, in a specific implementation, the current image of the target insulation joint is divided into five segmented images, numbered 1, 2, 3, 4 and 5 sequentially from top to bottom in the figure; the numbered segmented images are stored as an image set, and this stored image set is the current segmented image set.
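A small sketch of this step, assuming equal horizontal cuts as in fig. 3 (actual segmentation paths depend on the joint type's part shapes), is:

```python
import numpy as np

def split_current_image(current_image, n_segments=5):
    """Divide the current image into numbered segments (sketch assuming equal
    horizontal cuts, as with the five top-to-bottom segments of fig. 3)."""
    height = current_image.shape[0]
    bounds = np.linspace(0, height, n_segments + 1, dtype=int)
    # The current segmented image set maps segment number -> segmented image.
    return {number + 1: current_image[bounds[number]:bounds[number + 1]]
            for number in range(n_segments)}
```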
As described in step S130, determining a first difference between each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set with the closest current acquisition time point; wherein the historical segmented image set with the closest current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; and determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times.
In an embodiment of the present invention, a specific process of "determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set with the closest current acquiring time point" in step S130 may be further described with reference to the following description.
And respectively determining image difference values between the historical segmented images corresponding to the numbers in the historical segmented image set and the current segmented image set which are closest to the current acquisition time point and the current segmented image.
It should be noted that the image difference value is the difference of image parameters, where the image parameters include contrast, exposure compensation, color and appearance characteristics. A parameter difference is obtained for each image parameter, namely a contrast difference, an exposure compensation difference, a color difference and an appearance characteristic difference, and these are then superposed to give the image difference value. The user may choose the image parameters for different application scenarios according to their own requirements, which is not limited here.
In an embodiment of the present invention, a specific process of "determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set with the closest current acquiring time point" in step S130 may be further described with reference to the following description.
Superimposing the image disparity values generates the first disparity case, as described in the following steps.
In a specific implementation, the current segmented image set of the previous acquisition time point is used as the historical segmented image set closest to the current acquisition time point, i.e., the segmented image set from the last time a train acquired the target insulation joint; each segmented image in this historical segmented image set is compared one by one with the correspondingly numbered segmented image in the current segmented image set to generate five image difference values, and these five image difference values are weighted and superposed to serve as the first difference condition, where the weights may differ and can be adjusted for the calculation.
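The two steps above can be sketched as follows; the concrete proxies used for the contrast, exposure, color and appearance differences, as well as the default equal per-segment weights, are assumptions, since the patent leaves the exact measures to the user.

```python
import numpy as np

def image_difference(segment_a, segment_b):
    """Illustrative image difference value: simple proxies for the contrast,
    exposure compensation, color and appearance differences are superposed."""
    a = segment_a.astype(float)
    b = segment_b.astype(float)                      # segments assumed same size
    contrast_diff = abs(a.std() - b.std())
    exposure_diff = abs(a.mean() - b.mean())
    color_diff = float(np.abs(a.mean(axis=(0, 1)) - b.mean(axis=(0, 1))).sum())
    appearance_diff = float(np.abs(a - b).mean())    # crude appearance proxy
    return contrast_diff + exposure_diff + color_diff + appearance_diff


def first_difference(current_set, history_set, weights=None):
    """First difference condition: weighted superposition of the per-number image
    difference values between the current set and the historical set closest to
    the current acquisition time point."""
    numbers = sorted(current_set)
    weights = weights or {n: 1.0 for n in numbers}   # equal weights assumed by default
    return sum(weights[n] * image_difference(current_set[n], history_set[n])
               for n in numbers)
```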
As described in step S130 above, the second difference between each segmented image in the current segmented image set and each segmented image corresponding to all the historical segmented image sets is determined.
In an embodiment of the present invention, a specific process of "determining the second difference between each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets" in step S130 may be further described with reference to the following description.
And respectively determining image difference values between the historical segmentation images corresponding to the numbers in the single historical segmentation image set and the current segmentation image as follows.
It should be noted that the image difference value is the difference of image parameters, where the image parameters include contrast, exposure compensation, color and appearance characteristics. A parameter difference is obtained for each image parameter, namely a contrast difference, an exposure compensation difference, a color difference and an appearance characteristic difference, and these are then superposed to give the image difference value. The user may choose the image parameters for different application scenarios according to their own requirements, which is not limited here.
In an embodiment of the present invention, a specific process of "determining the second difference between each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets" in step S130 may be further described with reference to the following description.
And as described in the following steps, the image difference values of each historical segmented image in the single historical image set and each current segmented image in the current segmented image set are all superposed to respectively generate the second comparison difference values.
In a specific implementation, the current segmented image sets of all previous acquisition time points are used as the historical segmented image sets, i.e., every segmented image set of the target insulation joint previously acquired by trains; in this embodiment there are twenty previous acquisition time points, and hence twenty historical segmented image sets. Each segmented image in each of the twenty historical segmented image sets is compared one by one with the correspondingly numbered segmented image in the current segmented image set, generating twenty groups of five image difference values; within each group, the five image difference values are weighted and superposed to serve as that set's second comparison difference value.
In an embodiment of the present invention, a specific process of "determining the second difference between each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets" in step S130 may be further described with reference to the following description.
As described in the following step, the second comparison difference values of all the historical segmented image sets are compared, and the largest of these second comparison difference values is taken as the second difference condition.
In a specific implementation, the twenty second comparison difference values obtained from the twenty historical segmented image sets are compared, and the second comparison difference value with the largest value is selected as the second difference condition.
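A minimal sketch of the two steps above follows, reusing the illustrative image_difference() placeholder from the earlier sketch; the grouping of segmented images by number into parallel lists is an assumption for illustration.

def second_difference(all_historical_sets, current_set):
    """Max over historical sets of the summed per-segment image differences."""
    second_comparisons = []
    for historical_set in all_historical_sets:  # e.g. twenty sets in this embodiment
        per_segment = [image_difference(cur, hist)
                       for cur, hist in zip(current_set, historical_set)]
        second_comparisons.append(sum(per_segment))  # second comparison difference value
    return max(second_comparisons)  # second difference condition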
In an embodiment of the present invention, a specific process of "determining a third difference between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times" in step S130 may be further described with reference to the following description.
As described in the following step, image difference values are respectively determined between the historical segmented images and the current segmented images corresponding to the same numbers in a single historical segmented image set and the current segmented image set.
It should be noted that the image difference value is a difference value of image parameters, where the image parameters include contrast, exposure compensation, color, and appearance characteristics. To obtain the image difference value, a parameter difference value is first computed for each image parameter, namely a contrast difference value, an exposure compensation difference value, a color difference value, and an appearance characteristic difference value, and these parameter difference values are then superposed to serve as the image difference value. The user can also select different image parameters for different application scenarios according to actual needs, which is not limited here.
In an embodiment of the present invention, a specific process of "determining a third difference between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times" in step S130 may be further described with reference to the following description.
As described in the following step, all the image difference values between each historical segmented image in a single historical segmented image set and the corresponding current segmented image in the current segmented image set are superposed to generate the third comparison difference value for that historical set.
In a specific implementation, the current segmented image sets of the acquisition time points within the preset number of times, here the previous three acquisition time points, are used as the historical segmented image sets within the preset number of times; that is, the current segmented image sets generated the last three times the train acquired the target insulation joint serve as three historical segmented image sets. Each segmented image in a single historical segmented image set is first compared one by one with the corresponding segmented image in the current segmented image set to generate five image difference values, and the five image difference values are superposed to serve as that set's third comparison difference value.
In an embodiment of the present invention, a specific process of "determining a third difference between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times" in step S130 may be further described with reference to the following description.
As described in the following step, the third comparison difference values of the historical segmented image sets within the preset number of times are superposed to serve as the third difference condition.
In a specific implementation, the preset number of times is set to three; the segmented images of the two acquisition time points preceding the current acquisition time point, together with the current segmented images of the current acquisition time point, are treated as the historical segmented image sets, so that three third comparison difference values are obtained, and these three third comparison difference values are superposed to serve as the third difference condition.
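A sketch of the third difference condition under the same illustrative assumptions as before (the image_difference() placeholder, segmented images grouped by number); the preset number of times is taken as three, as in the text.

def third_difference(recent_historical_sets, current_set):
    """Superpose the per-set comparison differences of the most recent historical sets."""
    third_comparisons = []
    for historical_set in recent_historical_sets:  # e.g. the three most recent sets
        per_segment = [image_difference(cur, hist)
                       for cur, hist in zip(current_set, historical_set)]
        third_comparisons.append(sum(per_segment))  # third comparison difference value
    return sum(third_comparisons)  # third difference condition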
As described in step S140, a standard audio signal corresponding to the target insulation joint is determined according to the position information, and an audio difference degree is determined according to the current audio signal and the standard audio signal.
As described in step S150, a total difference value is determined according to the first difference condition, the second difference condition, the third difference condition and the audio difference degree, and when the total difference value exceeds a preset threshold, the current deformation condition of the target insulation joint is set to to-be-maintained. Specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition is set to 80%, and the total difference value is calculated from the weighted audio difference degree and the weighted difference condition(s); when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition is set to 20%, and the total difference value is calculated in the same way.
In an embodiment of the present invention, the specific process of step S150, namely determining the total difference value according to the first difference condition, the second difference condition, the third difference condition and the audio difference degree and setting the current deformation condition of the target insulation joint to to-be-maintained when the total difference value exceeds the preset threshold, may be further described with reference to the following description.
In a first specific implementation, the audio difference degree is smaller than or equal to the audio threshold: if the audio threshold is set to 50 and the audio difference degree is 40, the weight of the audio difference degree is set to 20% and the weight of the first difference condition is set to 80%, and the total difference value is calculated from the weighted audio difference degree and the weighted first difference condition; when the total difference value exceeds the preset threshold, the current deformation condition of the target insulation joint is set to to-be-maintained.
In a second specific implementation, the audio difference degree is greater than the audio threshold: if the audio threshold is set to 50 and the audio difference degree is 60, the weight of the audio difference degree is set to 80% and the weight of the first difference condition is set to 20%, and the total difference value is calculated from the weighted audio difference degree and the weighted first difference condition; when the total difference value exceeds the preset threshold, the current deformation condition of the target insulation joint is set to to-be-maintained.
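A sketch of this weighting rule follows; the threshold of 50, the example difference values and the use of a single image difference condition are the illustrative figures from the two implementations above, not values fixed by the method.

def total_difference(audio_diff, image_diff, audio_threshold=50):
    """20%/80% weighting at or below the audio threshold, 80%/20% above it."""
    if audio_diff <= audio_threshold:
        w_audio, w_image = 0.2, 0.8
    else:
        w_audio, w_image = 0.8, 0.2
    return w_audio * audio_diff + w_image * image_diff

def needs_maintenance(audio_diff, image_diff, preset_threshold, audio_threshold=50):
    """True when the total difference value exceeds the preset threshold."""
    return total_difference(audio_diff, image_diff, audio_threshold) > preset_threshold

print(needs_maintenance(audio_diff=40, image_diff=70, preset_threshold=60))  # 0.2*40 + 0.8*70 = 64 > 60 -> True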
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Referring to fig. 2, a railway insulation section detection device provided by an embodiment of the present application is shown, which specifically includes the following modules:
the current state data acquisition module 210: used for acquiring current state data of the target insulating joint, wherein the current state data comprises a current image of the target insulating joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulating joint;
the current segmented image set generation module 220: used for determining an image segmentation path corresponding to the target insulation joint according to the position information and generating a current segmented image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
the difference situation determination module 230: used for determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point; wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times;
an audio difference degree determining module 240, configured to determine a standard audio signal corresponding to the target insulation joint according to the position information, and determine an audio difference degree according to the current audio signal and the standard audio signal;
the current deformation condition confirmation module 250: used for determining a total difference value according to the first difference condition, the second difference condition and/or the third difference condition and the audio difference degree, and setting the current deformation condition of the target insulating joint to to-be-maintained when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, the weight of the audio difference degree is set to 20% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition is set to 80%, and the total difference value is calculated from the weighted audio difference degree and the weighted difference condition(s); when the audio difference degree is greater than the audio threshold, the weight of the audio difference degree is set to 80% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition is set to 20%, and the total difference value is calculated in the same way.
In an embodiment of the present invention, the current state data obtaining module 210 includes:
the current image generation submodule is used for acquiring a real-time image of the target insulating joint and generating a current image of the target insulating joint according to the real-time image, a foreground model corresponding to the target insulating joint and a background model corresponding to the target insulating joint;
and the current state data generation submodule is used for acquiring the position information of the target insulation joint and generating the current state data of the target insulation joint according to the position information of the target insulation joint and the current image of the target insulation joint.
In a specific implementation, the device further comprises:
the target area determining submodule is used for acquiring a standard image of the target insulation joint and determining a target area corresponding to the target insulation joint in the standard image of the target insulation joint;
a foreground model generation submodule, configured to obtain pixel points in the target region in the standard image, and generate the foreground model according to the pixel points in the target region;
and the background model generation submodule is used for acquiring pixel points outside the target area in the standard image and generating the background model according to the pixel points outside the target area.
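One plausible realisation of these two sub-modules is sketched below, assuming the foreground and background models are simple grayscale histograms; the patent does not fix the model form, so the histogram choice and the bin count are illustrative assumptions only.

import numpy as np

def build_models(standard_image, target_mask, bins=32):
    """standard_image: HxW grayscale array; target_mask: HxW boolean array, True inside the target region."""
    fg_pixels = standard_image[target_mask]    # pixel points inside the target area
    bg_pixels = standard_image[~target_mask]   # pixel points outside the target area
    fg_hist, _ = np.histogram(fg_pixels, bins=bins, range=(0, 256), density=True)
    bg_hist, _ = np.histogram(bg_pixels, bins=bins, range=(0, 256), density=True)
    return fg_hist, bg_hist                    # foreground model, background model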
In one specific implementation, the current image generation sub-module includes:
a first probability determination submodule, configured to determine, according to the foreground model and the real-time image, a first probability that each pixel in the real-time image belongs to the target region;
the second probability determination submodule is used for determining a second probability that each pixel point in the real-time image belongs to the outside of the target area according to the background model and the real-time image;
and the current image generation submodule of the target insulation joint is used for generating a current image of the target insulation joint according to the first probability, the second probability and the real-time image.
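Continuing the histogram assumption from the previous sketch, the first and second probabilities can be read from the foreground and background models per pixel, and a pixel is kept in the current image only when the first probability dominates; this is one illustrative realisation, not the method prescribed above.

import numpy as np

def generate_current_image(real_time_image, fg_hist, bg_hist, bins=32):
    """Keep only pixels whose foreground (first) probability exceeds the background (second) probability."""
    bin_idx = np.clip((real_time_image.astype(int) * bins) // 256, 0, bins - 1)
    p_fg = fg_hist[bin_idx]                            # first probability per pixel
    p_bg = bg_hist[bin_idx]                            # second probability per pixel
    return np.where(p_fg > p_bg, real_time_image, 0)   # current image of the target insulation joint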
The audio signal acquisition sub-module is used for acquiring a first audio signal and a second audio signal generated by knocking the target insulation joint; wherein the time interval between the first audio signal and the second audio signal is a preset time;
the average audio signal processing submodule is used for carrying out average processing on the first audio signal and the second audio signal to obtain an average audio signal;
and the current audio signal determining submodule is used for obtaining the current audio signal after the average audio signal is amplified and subjected to noise reduction processing.
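A minimal sketch of the audio pre-processing performed by these three sub-modules follows; the gain value and the moving-average denoiser are illustrative assumptions, since no specific amplification factor or noise-reduction filter is prescribed above.

import numpy as np

def current_audio_signal(first_signal, second_signal, gain=2.0, window=5):
    """Average the two knock responses, amplify, then apply simple moving-average noise reduction."""
    avg = (np.asarray(first_signal, dtype=float) + np.asarray(second_signal, dtype=float)) / 2.0
    amplified = gain * avg
    kernel = np.ones(window) / window
    return np.convolve(amplified, kernel, mode="same")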
In an embodiment of the present invention, the current segmented image set generating module 220 includes:
the identification determining submodule of the target insulating joint is used for determining the identification of the target insulating joint according to the position information;
and the image segmentation path determining submodule is used for determining the image segmentation path of the target insulation joint according to the identification of the target insulation joint.
The image numbering sub-module is used for segmenting the current image of the target insulating joint into a preset number of current segmented images according to the image segmentation path and numbering the current segmented images;
and the current segmentation image set generation submodule is used for generating the current segmentation image set according to the numbered current segmentation images.
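A sketch of these numbering sub-modules is given below, under the assumption that the stored image segmentation path can be represented as horizontal cut positions; the path representation and the strip-wise cutting are illustrative only.

import numpy as np

def split_and_number(current_image, segmentation_rows):
    """Cut the current image at the given sorted row indices and number the resulting segments from 1."""
    bounds = [0] + list(segmentation_rows) + [current_image.shape[0]]
    segments = {}
    for number, (top, bottom) in enumerate(zip(bounds[:-1], bounds[1:]), start=1):
        segments[number] = current_image[top:bottom, :]  # numbered current segmented image
    return segments

# Example: a 100-row image cut into five numbered strips (the current segmented image set).
image = np.zeros((100, 200))
print(sorted(split_and_number(image, [20, 40, 60, 80]).keys()))  # [1, 2, 3, 4, 5]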
In an embodiment of the present invention, the difference situation determining module 230 includes:
a first image difference value determining submodule, configured to respectively determine image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in the historical segmented image set closest to the current acquisition time point and the current segmented image set;
and the first difference condition superposition submodule is used for superposing the image difference values to generate the first difference condition.
The second image difference value determining submodule is used for respectively determining image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in a single historical segmented image set and the current segmented image set;
the second comparison difference generation submodule is used for superposing the image difference values between each historical segmented image in the single historical segmented image set and the corresponding current segmented image in the current segmented image set to generate the second comparison difference value for that historical set;
and the second difference condition generation submodule is used for comparing the second comparison difference values of all the historical segmented image sets and taking the largest of them as the second difference condition.
A third image difference value determining submodule, configured to respectively determine image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in a single historical segmented image set and the current segmented image set;
a third comparison difference generation submodule, configured to superpose all the image difference values between each historical segmented image in the single historical segmented image set and the corresponding current segmented image in the current segmented image set to generate the third comparison difference value for that historical set;
and the third difference condition generation submodule is used for superposing the third comparison difference values of the historical segmented image sets within the preset number of times to serve as the third difference condition.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
The present embodiment and the above embodiments have repeated operation steps, and the present embodiment is only described briefly, and the rest of the schemes may be described with reference to the above embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Referring to fig. 4, a computer device for implementing a railway insulation section detection method according to the present application is shown, which may specifically include the following:
the computer device 12 described above is embodied in the form of a general purpose computing device, and the components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a memory 28, and a bus 18 that couples various system components including the memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as random access memory 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (commonly referred to as "hard drives"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 42, with the program modules 42 configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules 42, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, a camera, etc.), with one or more devices that enable an operator to interact with computer device 12, and/or with any devices (e.g., a network card, a modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may occur through the I/O interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown in FIG. 4, the network adapter 20 communicates with the other modules of computer device 12 via the bus 18. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, etc.
The processing unit 16 executes various functional applications and data processing by running programs stored in the memory 28, for example, implementing a railway insulation section detection method provided by the embodiment of the present application.
That is, when executing the program, the processing unit 16 implements: acquiring current state data of the target insulation joint, wherein the current state data comprises a current image of the target insulation joint, an acquisition time point of the current state data and position information of the current state data; determining an image segmentation path corresponding to the target insulation joint according to the position information, and generating a current segmented image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint; determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times; and confirming the current deformation condition of the target insulation joint according to the first difference condition and/or the second difference condition and/or the third difference condition; specifically, when the first difference condition and/or the second difference condition and/or the third difference condition exceeds a preset threshold, the current deformation condition of the target insulation joint is set to to-be-maintained.
In an embodiment of the present application, the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a railway insulation section detection method as provided in all embodiments of the present application.
That is, when executed by the processor, the program implements: acquiring current state data of the target insulation joint, wherein the current state data comprises a current image of the target insulation joint, an acquisition time point of the current state data and position information of the current state data; determining an image segmentation path corresponding to the target insulation joint according to the position information, and generating a current segmented image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint; determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point, wherein the historical segmented image set closest to the current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times; and confirming the current deformation condition of the target insulation joint according to the first difference condition and/or the second difference condition and/or the third difference condition; specifically, when the first difference condition and/or the second difference condition and/or the third difference condition exceeds a preset threshold, the current deformation condition of the target insulation joint is set to to-be-maintained.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the operator's computer, partly on the operator's computer, as a stand-alone software package, partly on the operator's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the operator's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to one another.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and device for detecting a railway insulation section provided by the present application are described above in detail, and specific examples are applied herein to explain the principle and implementation of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (10)

1. A railway insulation joint detection method, used for determining the deformation condition of a target insulation joint through a real-time image acquired by a monitoring device loaded on a train, wherein the deformation condition includes to-be-maintained or normal, characterized by comprising:
acquiring current state data of the target insulating joint, wherein the current state data comprises a current image of the target insulating joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulating joint;
determining an image segmentation path corresponding to the target insulation joint according to the position information, and generating a current segmentation image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set with the closest current acquisition time point; wherein the historical segmented image set with the closest current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; determining a second difference condition of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; determining a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times;
determining a standard audio signal corresponding to the target insulation section according to the position information, and determining an audio difference degree according to the current audio signal and the standard audio signal;
determining a total difference value according to the first difference condition, the second difference condition and/or the third difference condition and the audio difference degree, and setting the current deformation condition of the target insulation joint to to-be-maintained when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, setting the weight of the audio difference degree to 20% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition to 80%, and calculating the total difference value from the weighted audio difference degree and the weighted difference condition(s); when the audio difference degree is greater than the audio threshold, setting the weight of the audio difference degree to 80% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition to 20%, and calculating the total difference value in the same way.
2. The method of claim 1, wherein the step of acquiring current state data of the target insulation joint comprises:
acquiring a real-time image of the target insulation joint, and generating a current image of the target insulation joint according to the real-time image, a foreground model corresponding to the target insulation joint and a background model corresponding to the target insulation joint;
and acquiring the position information of the target insulation joint, and generating the current state data of the target insulation joint according to the position information of the target insulation joint and the current image of the target insulation joint.
3. The method of claim 2, further comprising:
acquiring a standard image of the target insulation joint, and determining a target area corresponding to the target insulation joint in the standard image of the target insulation joint;
acquiring pixel points in the target area in the standard image, and generating the foreground model according to the pixel points in the target area;
and acquiring pixel points outside the target area in the standard image, and generating the background model according to the pixel points outside the target area.
4. The method of claim 3, wherein the step of generating a current image of the target insulation joint from the real-time image, a foreground model corresponding to the target insulation joint and a background model corresponding to the target insulation joint comprises:
determining a first probability that each pixel point in the real-time image belongs to the target area according to the foreground model and the real-time image;
determining a second probability that each pixel point in the real-time image belongs to the outside of the target area according to the background model and the real-time image;
and generating a current image of the target insulation joint according to the first probability, the second probability and the real-time image.
5. The method of claim 1, wherein the current audio signal is obtained by:
acquiring a first audio signal and a second audio signal generated by knocking the target insulation joint; wherein the time interval between the first audio signal and the second audio signal is a preset time;
averaging the first audio signal and the second audio signal to obtain an average audio signal;
and amplifying and denoising the average audio signal to obtain the current audio signal.
6. The method of claim 1, wherein the step of generating a current segmented image set from the image segmentation path of the target insulation joint and the current image of the target insulation joint comprises:
dividing the current image of the target insulation joint into a preset number of current divided images according to the image dividing path, and numbering the current divided images;
and generating the current segmentation image set according to the numbered current segmentation images.
7. The method according to claim 6, wherein the step of determining the first difference condition between each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set closest to the current acquisition time point comprises:
respectively determining image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in the historical segmented image set closest to the current acquisition time point and the current segmented image set;
and superposing the image difference values to generate the first difference condition.
8. The method according to claim 6, wherein the step of determining the second difference condition between each segmented image in the current segmented image set and each segmented image corresponding to all the historical segmented image sets comprises: respectively determining image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in a single historical segmented image set and the current segmented image set;
superposing the image difference values between each historical segmented image in the single historical segmented image set and the corresponding current segmented image in the current segmented image set to respectively generate second comparison difference values;
and comparing the second comparison difference values of all the historical segmented image sets, and taking the largest of the second comparison difference values as the second difference condition.
9. The method according to claim 6, wherein the step of determining the third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times comprises:
respectively determining image difference values between the historical segmented images and the current segmented images corresponding to the same numbers in a single historical segmented image set and the current segmented image set;
superposing the image difference values between each historical segmented image in the single historical segmented image set and the corresponding current segmented image in the current segmented image set to respectively generate third comparison difference values;
and superposing the third comparison difference values of the historical segmented image sets within the preset number of times to serve as the third difference condition.
10. A railway insulation joint detection device, used for determining the deformation condition of a target insulation joint through a real-time image acquired by a monitoring device loaded on a train, wherein the deformation condition includes to-be-maintained or normal, characterized by comprising:
the current state data acquisition module is used for acquiring current state data of the target insulating joint, wherein the current state data comprises a current image of the target insulating joint, an acquisition time point of the current state data, position information of the current state data and a current audio signal generated by knocking the target insulating joint;
a current segmentation image set generation module, configured to determine an image segmentation path corresponding to the target insulation joint according to the position information, and generate a current segmentation image set according to the image segmentation path of the target insulation joint and a current image of the target insulation joint;
the difference condition determining module is used for determining a first difference condition of each segmented image in the current segmented image set and each segmented image corresponding to the historical segmented image set with the closest current acquisition time point; wherein the historical segmented image set with the closest current acquisition time point is the current segmented image set corresponding to the previous acquisition time point; and/or; the second difference situation determining module is used for determining the second difference situation of each segmented image in the current segmented image set and each segmented image corresponding to all historical segmented image sets; and/or; a third difference condition determining module, configured to determine a third difference condition between each segmented image in the current segmented image set and each segmented image corresponding to each historical segmented image set within a preset number of times;
the audio frequency difference degree determining module is used for determining a standard audio frequency signal corresponding to the target insulation joint according to the position information and determining the audio frequency difference degree according to the current audio frequency signal and the standard audio frequency signal;
the current deformation condition confirming module is used for determining a total difference value according to the first difference condition, the second difference condition and/or the third difference condition and the audio difference degree, and setting the current deformation condition of the target insulating joint to to-be-maintained when the total difference value exceeds a preset threshold; specifically, when the audio difference degree is less than or equal to an audio threshold, setting the weight of the audio difference degree to 20% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition to 80%, and calculating the total difference value from the weighted audio difference degree and the weighted difference condition(s); when the audio difference degree is greater than the audio threshold, setting the weight of the audio difference degree to 80% and the combined weight of the first difference condition and/or the second difference condition and/or the third difference condition to 20%, and calculating the total difference value in the same way.
CN202110567484.8A 2021-05-24 2021-05-24 Railway insulation section detection method and device Pending CN113487620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110567484.8A CN113487620A (en) 2021-05-24 2021-05-24 Railway insulation section detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110567484.8A CN113487620A (en) 2021-05-24 2021-05-24 Railway insulation section detection method and device

Publications (1)

Publication Number Publication Date
CN113487620A true CN113487620A (en) 2021-10-08

Family

ID=77933122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110567484.8A Pending CN113487620A (en) 2021-05-24 2021-05-24 Railway insulation section detection method and device

Country Status (1)

Country Link
CN (1) CN113487620A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070979A (en) * 2021-10-31 2022-02-18 武汉市菲利纸业有限责任公司 Method for processing captured data of cutting point image in corrugated case production
CN114070979B (en) * 2021-10-31 2024-01-19 武汉鑫华达彩印包装有限公司 Method for processing captured data of cutting point images in corrugated case production

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination