CN112801975B - Binocular vision-based railway ballast inspection system and working method thereof


Info

Publication number
CN112801975B
CN112801975B
Authority
CN
China
Prior art keywords
module
image
depth map
fpga
railway ballast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110114343.0A
Other languages
Chinese (zh)
Other versions
CN112801975A (en)
Inventor
李思丰
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Science Hunan Advanced Rail Transit Research Institute Co ltd
Original Assignee
China Science Hunan Advanced Rail Transit Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Science Hunan Advanced Rail Transit Research Institute Co ltd filed Critical China Science Hunan Advanced Rail Transit Research Institute Co ltd
Priority to CN202110114343.0A priority Critical patent/CN112801975B/en
Publication of CN112801975A publication Critical patent/CN112801975A/en
Application granted granted Critical
Publication of CN112801975B publication Critical patent/CN112801975B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20216: Image averaging
    • G06T 2207/20228: Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a binocular vision-based railway ballast inspection system which is arranged on a rail and comprises a wheel driving module, a plurality of wheels, an MCU module, a storage module, a communication module, an FPGA module, a first CCD sensing module, a second CCD sensing module and a man-machine interaction module. The wheel driving module is electrically connected with the wheels and the MCU module, the MCU module is electrically connected with the communication module and the FPGA module, the FPGA module is electrically connected with the storage module, the first CCD sensing module, the second CCD sensing module and the man-machine interaction module, and the optical axes of the first and second CCD sensing modules are perpendicular to the rail. The system can solve the technical problems of the existing manual railway ballast inspection: large detection error, manual and visual fatigue caused by the heavy inspection workload, and the considerable time and cost that the inspection consumes.

Description

Binocular vision-based railway ballast inspection system and working method thereof
Technical Field
The invention belongs to the technical field of railway operation safety, and particularly relates to a binocular vision-based railway ballast inspection system and a working method thereof.
Background
During railway maintenance, cables and sensors often need to be installed. To avoid man-made damage and to preserve the appearance of the railway, the sensors and cables are generally buried deep below the railway ballast and backfilled after installation. Over a long period, because people trample on the ballast and because the ballast is excavated for backfilling, many ballast stones are lost and the height of the ballast decreases.
At present, the railway ballast of a railway system is generally inspected manually; however, manual inspection has several shortcomings. First, because the ballast height can only be judged by visual inspection, the detection error is large. Second, the workload of ballast inspection is heavy, which easily causes physical and visual fatigue and consumes considerable time and cost. Third, for inspection targets with long routes, wide distribution and varying ballast thickness, it is difficult to unify the detection standard, so the detection results are neither accurate nor objective.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides a binocular vision-based railway ballast inspection system and a working method thereof, aiming to solve the technical problems of the existing manual railway ballast inspection: the large detection error; the manual fatigue, visual fatigue and considerable time and cost caused by the heavy inspection workload; and the inaccurate, non-objective detection results caused by non-unified detection standards.
In order to achieve the above object, according to one aspect of the present invention, there is provided a binocular vision-based railway ballast inspection system, which is disposed on a rail and includes a wheel driving module, a plurality of wheels, an MCU module, a storage module, a communication module, an FPGA module, a first CCD sensing module, a second CCD sensing module and a man-machine interaction module. The wheel driving module is electrically connected with the plurality of wheels and the MCU module, the MCU module is electrically connected with the communication module and the FPGA module, the FPGA module is electrically connected with the storage module, the first CCD sensing module, the second CCD sensing module and the man-machine interaction module, and the optical axes of the first CCD sensing module and the second CCD sensing module are perpendicular to the rail.
Preferably, the wheel drive module is a servo motor;
the number of wheels is 4, at least one of which is a powered wheel;
the communication module is a 4G, 5G or GPRS communication module;
the memory module uses DDR2 or DDR3 chips;
the first CCD sensing module and the second CCD sensing module are identical and comprise a CCD sensor, an analog-to-digital converter and a CPLD;
the man-machine interaction module comprises a display screen and a keyboard.
According to another aspect of the invention, there is provided a working method of a binocular vision-based railway ballast inspection system, comprising the steps of:
(1) The FPGA module controls the man-machine interaction module to receive a ballast inspection instruction input by a user, the instruction indicating a ballast inspection direction and a ballast inspection distance; the FPGA module sends the instruction to the MCU module and sets a counter i = 1;
(2) According to the ballast inspection instruction received from the FPGA module, the MCU module controls the wheel driving module to drive the wheels forward by one single-step value along the inspection direction indicated by the instruction;
(3) The FPGA module controls the first CCD sensing module to acquire an image at the i-th step and stores this i-th-step image in the storage module;
(4) The FPGA module extracts the i-th-step image from the storage module, converts it into a gray image, extracts edges in the gray image using a Sobel operator, and binarizes the edge-extracted gray image to obtain the i-th-step binarized image;
(5) The FPGA module extracts all closed contours from the i-th-step binarized image obtained in step (4), sorts them by the total number of pixels each contour occupies, and keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count;
(6) The FPGA module obtains a rectangular frame A that just frames the closed contour with the largest pixel count obtained in step (5), obtains the length and width of the rectangular frame A, and judges whether the ratio of the length to the width is larger than a and smaller than b; if so, proceed to step (7); otherwise, proceed to step (9) and then return to step (2); wherein a is a fraction between 3 and 4 and b is a fraction between 1 and 2;
(7) The FPGA module obtains a rectangular frame B that just frames the closed contour with the second-largest pixel count obtained in step (5), obtains the length and width of the rectangular frame B, and judges whether the ratio of the length to the width is larger than c and smaller than d; if so, proceed to step (8); otherwise, proceed to step (9); wherein c is a fraction between 7 and 8 and d is a fraction between 4 and 5;
(8) The FPGA module obtains the four distances between the two long sides of the rectangular frame A and the upper and lower boundaries of the i-th-step image obtained in step (3) and takes the minimum distance d1_min among them, obtains the four distances between the two long sides of the rectangular frame B and the upper and lower boundaries of the same image and takes the minimum distance d2_min among them, calculates the two ratios a1 and a2 between these two minimum distances, and judges whether e > a1 > f or e > a2 > f holds; if so, proceed to step (10); otherwise, proceed to step (9); wherein e is a fraction between 1.8 and 2 and f is a fraction between 1 and 1.2;
(9) The FPGA module sets i=i+1, and then returns to the step (2);
(10) The MCU module sets the initial value of the distance L equal to 0;
(11) The FPGA module controls the first CCD sensing module and the second CCD sensing module to acquire images simultaneously, notifies the MCU module to control the wheel driving module to drive the wheels forward at a speed of 0.6 m/s, and stores the images M and N acquired at the same moment by the first and second CCD sensing modules into the storage module;
(12) The FPGA module extracts from the storage module the images M and N acquired at the same moment by the first and second CCD sensing modules, and uses the Bouguet algorithm to perform epipolar rectification on the two images M and N, thereby obtaining a first image rotation matrix R1 and a second image rotation matrix R2;
(13) The FPGA module judges whether the quotient of the distance L and the spacing between adjacent sleepers is an integer; if so, proceed to step (14); otherwise, return to step (11);
(14) The FPGA module preprocesses the images M and N respectively to obtain preprocessed images, and converts the preprocessed images into gray images M' and N' respectively;
(15) The FPGA module calculates the average brightness hsl1 of the gray image M' obtained in step (14) and the average brightness hsl2 of the gray image N', calculates their ratio hsl = hsl1/hsl2, and multiplies each pixel value in the gray image N' by the ratio hsl to obtain a new gray image N';
(16) The FPGA module multiplies each pixel value in the gray image M' by the first image rotation matrix R1 obtained in step (12) to obtain an updated gray image M', and multiplies each pixel value in the gray image N' by the second image rotation matrix R2 obtained in step (12) to obtain an updated gray image N';
(17) The FPGA module extracts all closed contours from the updated gray image M' obtained in step (16), sorts them by the total number of pixels each contour occupies, keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count, obtains a rectangular frame C that just frames the closed contour with the largest pixel count, and obtains a rectangular frame D that just frames the closed contour with the second-largest pixel count;
(18) The FPGA module cuts out from the gray image M' an image block E whose four-corner coordinates coincide with the rectangular frame C and an image block F whose four-corner coordinates coincide with the rectangular frame D, searches the gray image N' for image blocks X and Y matching the image blocks E and F respectively using the MAD algorithm, obtains the disparity map between image blocks E and X and the disparity map between image blocks F and Y using the SGBM algorithm, converts the disparity map between E and X into a depth map D1, and converts the disparity map between F and Y into a depth map D2;
(19) The FPGA module judges whether the length-width ratio of the image block E is larger than that of the image block F, if so, the step (20) is carried out, and if not, the step (23) is carried out;
(20) The FPGA module calculates the average value d1 of the depth map D1 and updates the depth map D2 according to d1 to obtain an updated depth map D2' = (d10/d1) × D2, where d10 denotes the actual statistical distance from the first CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D2 is multiplied by the ratio d10/d1;
(21) The FPGA module obtains a normalized depth map D3 from the depth map D2' updated in step (20): D3 = D2' - (d10 + h0), and binarizes the normalized depth map D3 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
(22) The FPGA module searches the binarized image of step (21) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than a preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13); wherein p is an integer between 10 and 50 and the threshold q is an integer between 5 and 20, preferably 10;
(23) The FPGA module calculates the average value d2 of the depth map D2 and updates the depth map D1 according to d2 to obtain an updated depth map D1' = (d20/d2) × D1, where d20 denotes the actual statistical distance from the second CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D1 is multiplied by the ratio d20/d2;
(24) The FPGA module obtains a normalized depth map D4 from the depth map D1' updated in step (23): D4 = D1' - (d20 + h0), and binarizes the normalized depth map D4 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
(25) The FPGA module searches the binarized image of step (24) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than the preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13);
(26) The FPGA module sends a ballast-missing notification and the current travel distance L of the wheels to the MCU module for temporary storage;
(27) The MCU module sends the ballast-missing notification and the current travel distance L of the wheels to the background management system through the communication module at fixed time intervals;
(28) The FPGA module judges whether the current travel distance L of the wheels is larger than the ballast inspection distance indicated by the ballast inspection instruction; if so, proceed to step (29); otherwise, return to step (13);
(29) The FPGA module notifies the first CCD sensing module and the second CCD sensing module to stop working, and notifies the MCU module to control the wheel driving module to drive the wheels back, opposite to the ballast inspection direction indicated by the ballast inspection instruction, to the position where L = 0.
Preferably, in the step (2), the single step value of the wheel is 10cm, and the running speed of the wheel is 0.1m/s.
Preferably, the preprocessing of step (14) includes color correction and Gamma correction in sequence.
Preferably, in step (18),
D1 = (f × bl)/Dv1, where f is the normalized focal length, obtained by calibration in advance, bl is the distance between the CCD sensor lens center of the first CCD sensing module and the CCD sensor lens center of the second CCD sensing module, and Dv1 is the disparity map between the image blocks E and X;
D2 = (f × bl)/Dv2, where Dv2 is the disparity map between the image blocks F and Y.
Preferably, in the binarization processing in step (21) and step (24), if the pixel value in the normalized depth map is greater than 0, the pixel value in the binarized image is 1, and if the pixel value in the normalized depth map is less than or equal to 0, the pixel value in the binarized image is 0.
Preferably, in step (26), the current distance L of the wheel is a distance traveled at a speed of 0.6m/s from the execution time of step (11) until the current time.
Preferably, in step (29), the running speed of the wheels is 1.2m/s.
In general, the above technical solutions conceived by the present invention, compared with the prior art, enable the following beneficial effects to be obtained:
1. Because the invention adopts steps (20) and (23), the known actual statistical distance between the CCD sensing module and the detected sleeper is used to correct the measured distances, which improves the accuracy of ballast detection and solves the technical problem of large detection error in the existing manual ballast inspection;
2. Because the invention adopts steps (10), (13) and (28), automatic inspection and automatic return of the device are realized by monitoring its travel distance L, and no operator needs to follow the whole route, which solves the technical problems of manual fatigue, visual fatigue, heavy workload and time cost in the existing manual ballast inspection;
3. Because the invention sets a unified ballast height standard in steps (23) and (24), inspects the ballast against that standard and outputs the inspection result to the background management system, railway workers can replenish the ballast precisely according to the result, so that the ballast is stressed more uniformly and train operation is safer;
4. Because the invention acquires both the images and the depth information through binocular vision, no additional height-measuring sensor is needed, which simplifies the equipment, reduces its weight, makes it easy to carry, and lowers the cost of the whole device.
5. Because the invention adopts step (2) and sets a small but appropriate single-step value of 10 cm, the error of the device in locating the first complete sleeper and the complete ballast beside it is minimized while balancing efficiency against error, the number of searches is reduced, and the search efficiency is improved.
6. Because the invention adopts steps (1) to (7), before the device starts the inspection run proper it acquires an i-th-step image through the first CCD sensing module at a small single-step value and a low speed, and searches that image for a complete sleeper and the adjacent complete ballast; this removes the need to search the image data of steps (8) to (25) for them, reduces the amount of image computation, and increases the inspection speed and efficiency of the device.
7. Because the invention adopts step (25) and matches image blocks against a moderately sized p × p all-ones matrix, interference from ballast gaps and from individual small missing stones is excluded, so that only areas with substantial ballast loss are flagged; this improves the inspection accuracy of the system and reduces the waste of manpower, material and financial resources caused by misjudgment.
8. Because the invention adopts steps (6), (7) and (19) and distinguishes the sleeper image from the ballast image by comparing length-to-width ratios, the amount of data computation is greatly reduced compared with a conventional image matching algorithm, the computation delay is lowered, and the inspection efficiency is improved.
9. Because the invention adopts step (8) and obtains the minimum distance d1_min between the two long sides of the rectangular frame A and the upper and lower boundaries of the i-th-step image obtained in step (3), and the minimum distance d2_min between the two long sides of the rectangular frame B and the same boundaries, it can judge whether the pair formed by the rectangular frames A and B lies in the middle of the i-th-step image, which reduces the amount of image computation and increases the inspection speed and efficiency of the device.
Drawings
FIG. 1 is a block diagram of a binocular vision-based track ballast inspection system of the present invention;
FIG. 2 is a flow chart of a working method of the binocular vision-based railway ballast inspection system;
fig. 3 is an application schematic diagram of the binocular vision-based railway ballast inspection system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
As shown in fig. 1 and 3, according to a first aspect of the present invention, there is provided a binocular vision-based railway ballast inspection system which is provided on a rail (see also fig. 3) and includes a wheel driving module 1, a plurality of wheels 2, an MCU module 3, a storage module 4, a communication module 5, an FPGA module 6, a first CCD sensing module 7, a second CCD sensing module 8 and a man-machine interaction module 9.
The wheel driving module 1 is electrically connected with the plurality of wheels 2 and with the MCU module 3. The wheel driving module 1 is specifically a servo motor, and the number of wheels is 4, at least one of which is a powered wheel. In the invention, the model of the MCU module 3 is STM32F103RCT6.
The MCU module 3 is also electrically connected with the communication module 5 and the FPGA module 6. In the present invention, the communication module 5 is a 4G, 5G or GPRS communication module, and the model of the FPGA module 6 is 5CEFA5U19I7.
The FPGA module 6 is also electrically connected with the storage module 4, the first CCD sensing module 7, the second CCD sensing module 8 and the man-machine interaction module 9. The storage module 4 is a DDR2 or DDR3 chip. The first CCD sensing module 7 and the second CCD sensing module 8 are identical; each comprises a CCD sensor, an analog-to-digital converter and a complex programmable logic device (Complex Programmable Logic Device, abbreviated as CPLD), where the model of the CCD sensor is TSL1401CL, the model of the analog-to-digital converter is AD9462BCPZ-125, and the model of the CPLD is MAX7000. The man-machine interaction module 9 comprises a display screen and a keyboard.
The optical axes of the first and second CCD sensor modules 7 and 8 are perpendicular to the rail.
As shown in fig. 2, according to a second aspect of the present invention, there is provided a working method of the above-mentioned binocular vision-based railway ballast inspection system, comprising the steps of:
(1) The FPGA module controls the man-machine interaction module to receive a ballast inspection instruction input by a user, the instruction indicating a ballast inspection direction and a ballast inspection distance; the FPGA module sends the instruction to the MCU module and sets a counter i = 1;
(2) According to the ballast inspection instruction received from the FPGA module, the MCU module controls the wheel driving module to drive the wheels forward by one single-step value along the inspection direction indicated by the instruction, wherein the single-step value of the wheels is 10 cm and the running speed of the wheels is 0.1 m/s;
the method has the advantages that by setting a small and proper single stepping value of 10cm, on the basis of balancing efficiency and errors, errors of the device in searching the complete sleeper for the first time and the complete railway ballast on the adjacent side are minimized, searching times are reduced, and searching efficiency is improved.
(3) The FPGA module controls the first CCD sensing module to acquire an image at the i-th step and stores this i-th-step image in the storage module;
(4) The FPGA module extracts the i-th-step image from the storage module, converts it into a gray image, extracts edges in the gray image using a Sobel operator, and binarizes the edge-extracted gray image to obtain the i-th-step binarized image;
(5) The FPGA module extracts all closed contours from the i-th-step binarized image obtained in step (4), sorts them by the total number of pixels each contour occupies, and keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count;
specifically, in this step, a closed contour is extracted by using a hollowed-out interior point method.
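As a concrete illustration of steps (4) and (5), the following Python/OpenCV sketch converts an i-th-step image to grayscale, extracts edges with a Sobel operator, binarizes the result, and keeps the two closed contours covering the most pixels. It is a minimal sketch only: the patent extracts contours with the hollowed-out interior point method on an FPGA, whereas here OpenCV's findContours (OpenCV 4 API) and contourArea stand in for the contour extraction and the pixel count, and the edge threshold is an assumed value.
```python
import cv2

def two_largest_contours(image_bgr, edge_threshold=60):
    """Steps (4)-(5): Sobel edges, binarization, and the two largest closed contours."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Sobel gradients in x and y, combined into a single edge-magnitude image.
    gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
    edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                            cv2.convertScaleAbs(gy), 0.5, 0)

    # Binarize the edge image (the threshold value is an assumption).
    _, binary = cv2.threshold(edges, edge_threshold, 255, cv2.THRESH_BINARY)

    # Closed contours, sorted by enclosed area as a proxy for pixel count; keep the top two.
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    return binary, contours[:2]
```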
(6) The FPGA module obtains, using the FindContours algorithm, a rectangular frame A that just frames the closed contour with the largest pixel count obtained in step (5), obtains the length and width of the rectangular frame A, and judges whether the ratio of the length to the width is larger than a and smaller than b; if so, proceed to step (7); otherwise, proceed to step (9) and then return to step (2);
specifically, the value of a is a fraction between 3 and 4, and the value of b is a fraction between 1 and 2.
(7) The FPGA module obtains, using the FindContours algorithm, a rectangular frame B that just frames the closed contour with the second-largest pixel count obtained in step (5), obtains the length and width of the rectangular frame B, and judges whether the ratio of the length to the width is larger than c and smaller than d; if so, proceed to step (8); otherwise, proceed to step (9) and then return to step (2);
specifically, the value of c is a fraction between 7 and 8, and the value of d is a fraction between 4 and 5.
The advantage of steps (1) to (7) is that, before the device starts the inspection run proper, it acquires an i-th-step image through the first CCD sensing module at a small single-step value and a low speed, and searches that image for a complete sleeper together with the adjacent complete ballast. This removes the need to search the image data of steps (8) to (25) for the complete sleeper and adjacent complete ballast, reduces the amount of image computation, and increases the inspection speed and efficiency of the device.
(8) The FPGA module obtains the four distances between the two long sides of the rectangular frame A and the upper and lower boundaries of the i-th-step image obtained in step (3) and takes the minimum distance d1_min among them, obtains the four distances between the two long sides of the rectangular frame B and the upper and lower boundaries of the same image and takes the minimum distance d2_min among them, calculates the two ratios a1 and a2 between these two minimum distances, and judges whether e > a1 > f or e > a2 > f holds; if so, proceed to step (10); otherwise, proceed to step (9);
specifically, a1 = d1_min/d2_min and a2 = d2_min/d1_min.
In this step e is a fraction between 1.8 and 2 and f is a fraction between 1 and 1.2.
The advantage of this step is that, by obtaining the minimum distance d1_min between the two long sides of the rectangular frame A and the upper and lower boundaries of the i-th-step image obtained in step (3), and the minimum distance d2_min between the two long sides of the rectangular frame B and the same boundaries, it is judged whether the pair formed by the rectangular frames A and B lies in the middle of the i-th-step image; this reduces the amount of image computation and increases the inspection speed and efficiency of the device.
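A minimal sketch of the gating logic of steps (6) to (8), assuming the two contours come from the previous sketch and that the long sides of both rectangles run horizontally across the image (as a sleeper would). cv2.boundingRect plays the role of the rectangle that "just frames" each contour, the length-to-width ratio is taken as longer side over shorter side, and the bounds a, b, c, d, e, f are passed in as parameters because the patent only constrains their ranges.
```python
import cv2

def frame_gate(contour_max, contour_second, img_height, a, b, c, d, e, f):
    """Steps (6)-(8): aspect-ratio checks on rectangles A and B, plus the centring check."""
    # Rectangle A around the largest contour, rectangle B around the second largest.
    xA, yA, wA, hA = cv2.boundingRect(contour_max)
    xB, yB, wB, hB = cv2.boundingRect(contour_second)

    ratio_A = max(wA, hA) / max(min(wA, hA), 1)   # length-to-width ratio of A
    ratio_B = max(wB, hB) / max(min(wB, hB), 1)   # length-to-width ratio of B
    passes_A = a < ratio_A < b                    # condition as stated in step (6)
    passes_B = c < ratio_B < d                    # condition as stated in step (7)

    # Step (8): minimum of the four distances between each rectangle's long sides
    # and the image's upper/lower boundaries, then the two ratios a1 and a2.
    d1_min = min(yA, img_height - (yA + hA))
    d2_min = min(yB, img_height - (yB + hB))
    a1 = d1_min / max(d2_min, 1)
    a2 = d2_min / max(d1_min, 1)
    centred = (e > a1 > f) or (e > a2 > f)

    return passes_A, passes_B, centred
```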
(9) The FPGA module sets i=i+1, and then returns to the step (2);
(10) The MCU module sets the initial value of the distance L equal to 0;
(11) The FPGA module controls the first CCD sensing module and the second CCD sensing module to acquire images simultaneously, notifies the MCU module to control the wheel driving module to drive the wheels forward at a speed of 0.6 m/s, and stores the images M and N acquired at the same moment by the first and second CCD sensing modules into the storage module;
(12) The FPGA module extracts from the storage module the images M and N acquired at the same moment by the first and second CCD sensing modules, and uses the Bouguet algorithm to perform epipolar rectification on the two images M and N, thereby obtaining a first image rotation matrix R1 and a second image rotation matrix R2;
(13) The FPGA module judges whether the quotient of the distance L and the spacing between adjacent sleepers is an integer; if so, proceed to step (14); otherwise, return to step (11);
specifically, the spacing between adjacent sleepers is typically 0.6 meters.
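The trigger in step (13) amounts to checking whether the travelled distance L is an integer multiple of the sleeper spacing. A minimal sketch, assuming L is tracked in metres and allowing a small tolerance (an added assumption, since on a moving vehicle L will rarely hit the multiple exactly):
```python
def at_sleeper(L, spacing=0.6, tol=0.02):
    """Step (13): true when L is (approximately) an integer multiple of the sleeper spacing."""
    remainder = L % spacing
    return min(remainder, spacing - remainder) <= tol
```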
(14) The FPGA module preprocesses the images M and N respectively to obtain preprocessed images, and converts the preprocessed images into gray images M' and N' respectively;
specifically, the preprocessing consists of color correction followed by Gamma correction.
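A sketch of the preprocessing of step (14). The patent only names colour correction and Gamma correction; the grey-world colour correction and the gamma value used here are illustrative assumptions.
```python
import cv2
import numpy as np

def preprocess_to_gray(image_bgr, gamma=0.8):
    """Step (14): colour correction, Gamma correction, then conversion to grayscale."""
    # Grey-world colour correction: scale each channel so its mean matches the global mean.
    img = image_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img = np.clip(img * (channel_means.mean() / channel_means), 0, 255).astype(np.uint8)

    # Gamma correction via a lookup table.
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    img = cv2.LUT(img, lut)

    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```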
(15) The FPGA module calculates the average value hsl1 of the brightness of the gray image M 'obtained in the step (14) and the average value hsl2 of the brightness of the gray image N', calculates the ratio hsl=hsl1/hsl2 of the average value hsl1 and the average value hsl2, and multiplies each pixel value in the gray image N 'by the ratio hsl to obtain a new gray image N';
the purpose of this step is to achieve a unification of the brightness of the grayscale images M 'and N'.
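Step (15) in code form, a minimal sketch: the mean brightness of N' is scaled to match that of M'.
```python
import numpy as np

def unify_brightness(gray_m, gray_n):
    """Step (15): multiply every pixel of N' by hsl = hsl1/hsl2 so its mean brightness matches M'."""
    hsl1 = float(gray_m.mean())
    hsl2 = float(gray_n.mean())
    hsl = hsl1 / hsl2 if hsl2 > 0 else 1.0
    return np.clip(gray_n.astype(np.float32) * hsl, 0, 255).astype(np.uint8)
```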
(16) The FPGA module multiplies each pixel value in the gray image M' by the first image rotation matrix R1 obtained in step (12) to obtain an updated gray image M', and multiplies each pixel value in the gray image N' by the second image rotation matrix R2 obtained in step (12) to obtain an updated gray image N';
the purpose of this step is to obtain two grayscale images with optical axes parallel to each other.
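Steps (12) and (16) together correspond to Bouguet stereo rectification. The patent applies the rotation matrices R1 and R2 to the pixels on the FPGA; the OpenCV routine below is a stand-in sketch that computes the rectification transforms from a prior stereo calibration and remaps both grayscale views, and the calibration parameters (K1, dist1, K2, dist2, R, T) are assumed to be known in advance.
```python
import cv2

def bouguet_rectify(gray_m, gray_n, K1, dist1, K2, dist2, R, T):
    """Steps (12)/(16): epipolar rectification of the two views (Bouguet's method)."""
    h, w = gray_m.shape
    # R1 and R2 are the per-camera rectification rotations referred to in step (12).
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, dist1, K2, dist2, (w, h), R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_m = cv2.remap(gray_m, map1x, map1y, cv2.INTER_LINEAR)
    rect_n = cv2.remap(gray_n, map2x, map2y, cv2.INTER_LINEAR)
    return rect_m, rect_n
```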
(17) The FPGA module extracts all closed contours from the updated gray image M' obtained in step (16), sorts them by the total number of pixels each contour occupies, keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count, obtains, using the FindContours algorithm, a rectangular frame C that just frames the closed contour with the largest pixel count, and obtains, using the FindContours algorithm, a rectangular frame D that just frames the closed contour with the second-largest pixel count;
(18) The FPGA module cuts out from the gray image M' an image block E whose four-corner coordinates coincide with the rectangular frame C and an image block F whose four-corner coordinates coincide with the rectangular frame D, searches the gray image N' for image blocks X and Y matching the image blocks E and F respectively using the Mean Absolute Differences (MAD) algorithm, obtains the disparity map between image blocks E and X and the disparity map between image blocks F and Y using the Semi-Global Block Matching (SGBM) algorithm, converts the disparity map between E and X into a depth map D1, and converts the disparity map between F and Y into a depth map D2;
specifically, the calculation formula for converting the disparity map into the depth map is as follows:
Dp = (f × bl)/Dv, where Dp is the depth map, f is the normalized focal length (this parameter is obtained by calibration in advance), bl is the distance between the CCD sensor lens center of the first CCD sensing module 7 and the CCD sensor lens center of the second CCD sensing module 8, and Dv is the disparity map.
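A sketch of the disparity and depth part of step (18) using OpenCV's semi-global block matching. The SGBM parameters are illustrative assumptions; OpenCV returns disparity in fixed-point units of 1/16 pixel, which is divided out before applying the formula Dp = (f × bl)/Dv, so the depth comes out in the same units as the baseline bl.
```python
import cv2
import numpy as np

def disparity_to_depth(rect_left, rect_right, f_px, baseline):
    """Step (18): SGBM disparity between the rectified views, converted to a depth map."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7,
                                 P1=8 * 7 * 7, P2=32 * 7 * 7,
                                 uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
    disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0  # 1/16-pixel units
    disp[disp <= 0] = np.nan                 # mark invalid matches
    return (f_px * baseline) / disp          # Dp = (f * bl) / Dv
```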
(19) The FPGA module judges whether the length-width ratio of the image block E is larger than that of the image block F, if so, the step (20) is carried out, and if not, the step (23) is carried out;
the method has the advantages that the sleeper image and the railway ballast image are distinguished from each other by judging the length-width ratio of the image block E and the length-width ratio of the image block F, compared with a conventional image matching algorithm, the method greatly reduces the data calculation amount, reduces the calculation delay and improves the inspection efficiency.
(20) The FPGA module calculates the average value d1 of the depth map D1 and updates the depth map D2 according to d1 to obtain an updated depth map D2' = (d10/d1) × D2, where d10 denotes the actual statistical distance from the first CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D2 is multiplied by the ratio d10/d1;
specifically, the average value d1 is equal to the sum of all pixel values in the depth map D1 divided by the number of pixels.
(21) The FPGA module obtains a normalized depth map D3 from the depth map D2' updated in step (20): D3 = D2' - (d10 + h0), and binarizes the normalized depth map D3 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
specifically, in the binarization process, if the pixel value in the normalized depth map D3 is greater than 0, the pixel value in the binarized image is 1, and if the pixel value in the normalized depth map D3 is less than or equal to 0, the pixel value in the binarized image is 0.
(22) The FPGA module searches the binarized image of step (21) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than a preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13);
wherein p is an integer between 10 and 50, preferably 20;
the threshold q is an integer ranging from 5 to 20, preferably 10.
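The missing-ballast test of steps (20) to (22), sketched with NumPy/OpenCV. Instead of a MAD template search against a p × p all-ones matrix, a box filter sums each p × p window of the binary map and counts the windows whose sum equals p × p, which accepts the same fully-set windows; d10, h0, p and q play the roles defined in the text, the depth maps are assumed to be in centimetres, and the default values are placeholders within the stated ranges. Steps (23) to (25) mirror the same computation with the roles of the two depth maps swapped.
```python
import cv2
import numpy as np

def ballast_missing(depth_sleeper, depth_ballast, d10, h0=30.0, p=20, q=10):
    """Steps (20)-(22): rescale the ballast depth map, normalise, binarise,
    and count p x p windows that are entirely 1 (entirely deeper than d10 + h0)."""
    d1 = float(np.nanmean(depth_sleeper))            # average of depth map D1 (sleeper block)
    depth2_updated = (d10 / d1) * depth_ballast      # D2' = (d10 / d1) * D2

    normalised = depth2_updated - (d10 + h0)         # D3 = D2' - (d10 + h0)
    binary = (normalised > 0).astype(np.float32)     # 1 where the ballast surface is too low

    # Sum of every p x p window; a window whose sum reaches p*p is an all-ones block.
    window_sums = cv2.boxFilter(binary, -1, (p, p), normalize=False,
                                borderType=cv2.BORDER_CONSTANT)
    n_blocks = int(np.count_nonzero(window_sums >= p * p))
    return n_blocks > q                              # ballast-missing condition
```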
(23) The FPGA module calculates the average value d2 of the depth map D2 and updates the depth map D1 according to d2 to obtain an updated depth map D1' = (d20/d2) × D1, where d20 denotes the actual statistical distance from the second CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D1 is multiplied by the ratio d20/d2;
specifically, the average value d2 is equal to the sum of all pixel values in the depth map D2 divided by the number of pixels.
(24) The FPGA module obtains a normalized depth map D4 from the depth map D1' updated in step (23): D4 = D1' - (d20 + h0), and binarizes the normalized depth map D4 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
specifically, in the binarization process, if the pixel value in the normalized depth map D4 is greater than 0, the pixel value in the binarized image is 1, and if the pixel value in the normalized depth map D4 is less than or equal to 0, the pixel value in the binarized image is 0.
(25) The FPGA module searches the binarized image of step (24) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than the preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13);
wherein p is an integer between 10 and 50, preferably 20;
the threshold q is an integer ranging from 5 to 20, preferably 10.
The advantage of this step is that, by matching image blocks against a moderately sized p × p all-ones matrix, interference from ballast gaps and from individual small missing stones is excluded, so that only areas with substantial ballast loss are flagged; this improves the inspection accuracy of the system and reduces the waste of manpower, material and financial resources caused by misjudgment.
(26) The FPGA module sends a ballast-missing notification and the current travel distance L of the wheels (the distance traveled at 0.6 m/s, timed from the execution of step (11) up to the current moment) to the MCU module for temporary storage;
(27) The MCU module sends the ballast-missing notification and the current travel distance L of the wheels to the background management system through the communication module at fixed time intervals;
specifically, the time interval in this step is 10 seconds to 1 minute, preferably 30 seconds.
(28) The FPGA module judges whether the current travel distance L of the wheels is larger than the ballast inspection distance indicated by the ballast inspection instruction; if so, proceed to step (29); otherwise, return to step (13);
(29) The FPGA module notifies the first CCD sensing module and the second CCD sensing module to stop working, and notifies the MCU module to control the wheel driving module to drive the wheels back, opposite to the ballast inspection direction indicated by the ballast inspection instruction, to the position where L = 0; the running speed of the wheels is 1.2 m/s.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (8)

1. The working method of the binocular vision-based railway ballast inspection system comprises that the binocular vision-based railway ballast inspection system is arranged on a rail and comprises a wheel driving module, a plurality of wheels, an MCU module, a storage module, a communication module, an FPGA module, a first CCD sensing module, a second CCD sensing module and a man-machine interaction module, wherein the wheel driving module is electrically connected with the wheels and the MCU module, the MCU module is electrically connected with the communication module and the FPGA module, the FPGA module is electrically connected with the storage module, the first CCD sensing module, the second CCD sensing module and the man-machine interaction module, and the optical axes of the first CCD sensing module and the second CCD sensing module are perpendicular to the rail; the method is characterized by comprising the following steps of:
(1) The FPGA module controls the man-machine interaction module to receive a ballast inspection instruction input by a user, the instruction indicating a ballast inspection direction and a ballast inspection distance; the FPGA module sends the instruction to the MCU module and sets a counter i = 1;
(2) According to the ballast inspection instruction received from the FPGA module, the MCU module controls the wheel driving module to drive the wheels forward by one single-step value along the inspection direction indicated by the instruction;
(3) The FPGA module controls the first CCD sensing module to acquire an image at the i-th step and stores this i-th-step image in the storage module;
(4) The FPGA module extracts the i-th-step image from the storage module, converts it into a gray image, extracts edges in the gray image using a Sobel operator, and binarizes the edge-extracted gray image to obtain the i-th-step binarized image;
(5) The FPGA module extracts all closed contours from the i-th-step binarized image obtained in step (4), sorts them by the total number of pixels each contour occupies, and keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count;
(6) The FPGA module obtains a rectangular frame A that just frames the closed contour with the largest pixel count obtained in step (5), obtains the length and width of the rectangular frame A, and judges whether the ratio of the length to the width is larger than a and smaller than b; if so, proceed to step (7); otherwise, proceed to step (9) and then return to step (2); wherein a is a fraction between 3 and 4 and b is a fraction between 1 and 2;
(7) The FPGA module obtains a rectangular frame B that just frames the closed contour with the second-largest pixel count obtained in step (5), obtains the length and width of the rectangular frame B, and judges whether the ratio of the length to the width is larger than c and smaller than d; if so, proceed to step (8); otherwise, proceed to step (9); wherein c is a fraction between 7 and 8 and d is a fraction between 4 and 5;
(8) The FPGA module obtains the four distances between the two long sides of the rectangular frame A and the upper and lower boundaries of the i-th-step image obtained in step (3) and takes the minimum distance d1_min among them, obtains the four distances between the two long sides of the rectangular frame B and the upper and lower boundaries of the same image and takes the minimum distance d2_min among them, calculates the two ratios a1 and a2 between these two minimum distances, and judges whether e > a1 > f or e > a2 > f holds; if so, proceed to step (10); otherwise, proceed to step (9); wherein e is a fraction between 1.8 and 2 and f is a fraction between 1 and 1.2;
(9) The FPGA module sets i=i+1, and then returns to the step (2);
(10) The MCU module sets the initial value of the distance L equal to 0;
(11) The FPGA module controls the first CCD sensing module and the second CCD sensing module to acquire images simultaneously, notifies the MCU module to control the wheel driving module to drive the wheels forward at a speed of 0.6 m/s, and stores the images M and N acquired at the same moment by the first and second CCD sensing modules into the storage module;
(12) The FPGA module extracts from the storage module the images M and N acquired at the same moment by the first and second CCD sensing modules, and uses the Bouguet algorithm to perform epipolar rectification on the two images M and N, thereby obtaining a first image rotation matrix R1 and a second image rotation matrix R2;
(13) The FPGA module judges whether the quotient of the distance L and the spacing between adjacent sleepers is an integer; if so, proceed to step (14); otherwise, return to step (11);
(14) The FPGA module preprocesses the images M and N respectively to obtain preprocessed images, and converts the preprocessed images into gray images M' and N' respectively;
(15) The FPGA module calculates the average value hsl1 of the brightness of the gray image M 'obtained in the step (14) and the average value hsl2 of the brightness of the gray image N', calculates the ratio hsl=hsl1/hsl2 of the average value hsl1 and the average value hsl2, and multiplies each pixel value in the gray image N 'by the ratio hsl to obtain a new gray image N';
(16) The FPGA module multiplies each pixel value in the gray image M' by the first image rotation matrix R1 obtained in step (12) to obtain an updated gray image M', and multiplies each pixel value in the gray image N' by the second image rotation matrix R2 obtained in step (12) to obtain an updated gray image N';
(17) The FPGA module extracts all closed contours from the updated gray image M' obtained in step (16), sorts them by the total number of pixels each contour occupies, keeps the closed contour with the largest pixel count and the closed contour with the second-largest pixel count, obtains a rectangular frame C that just frames the closed contour with the largest pixel count, and obtains a rectangular frame D that just frames the closed contour with the second-largest pixel count;
(18) The FPGA module cuts out from the gray image M' an image block E whose four-corner coordinates coincide with the rectangular frame C and an image block F whose four-corner coordinates coincide with the rectangular frame D, searches the gray image N' for image blocks X and Y matching the image blocks E and F respectively using the MAD algorithm, obtains the disparity map between image blocks E and X and the disparity map between image blocks F and Y using the SGBM algorithm, converts the disparity map between E and X into a depth map D1, and converts the disparity map between F and Y into a depth map D2;
(19) The FPGA module judges whether the length-width ratio of the image block E is larger than that of the image block F, if so, the step (20) is carried out, and if not, the step (23) is carried out;
(20) The FPGA module calculates the average value d1 of the depth map D1 and updates the depth map D2 according to d1 to obtain an updated depth map D2' = (d10/d1) × D2, where d10 denotes the actual statistical distance from the first CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D2 is multiplied by the ratio d10/d1;
(21) The FPGA module obtains a normalized depth map D3 from the depth map D2' updated in step (20): D3 = D2' - (d10 + h0), and binarizes the normalized depth map D3 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
(22) The FPGA module searches the binarized image of step (21) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than a preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13); wherein p is an integer between 10 and 50 and the threshold q is an integer between 5 and 20;
(23) The FPGA module calculates the average value d2 of the depth map D2 and updates the depth map D1 according to d2 to obtain an updated depth map D1' = (d20/d2) × D1, where d20 denotes the actual statistical distance from the second CCD sensing module to the detected sleeper; that is, essentially every pixel value in the depth map D1 is multiplied by the ratio d20/d2;
(24) The FPGA module obtains a normalized depth map D4 from the depth map D1' updated in step (23): D4 = D1' - (d20 + h0), and binarizes the normalized depth map D4 to obtain a binarized image, where the constant h0 has a value ranging from 20 to 40 cm;
(25) The FPGA module searches the binarized image of step (24) for image blocks that match a p × p all-ones matrix using the MAD algorithm, and judges whether the total number of such image blocks is larger than the preset threshold q; if so, a ballast-missing condition exists and the method proceeds to step (26); otherwise, return to step (13);
(26) The FPGA module sends a ballast-missing notification and the current travel distance L of the wheels to the MCU module for temporary storage;
(27) The MCU module sends the ballast-missing notification and the current travel distance L of the wheels to the background management system through the communication module at fixed time intervals;
(28) The FPGA module judges whether the current travel distance L of the wheels is larger than the ballast inspection distance indicated by the ballast inspection instruction; if so, proceed to step (29); otherwise, return to step (13);
(29) The FPGA module notifies the first CCD sensing module and the second CCD sensing module to stop working, and notifies the MCU module to control the wheel driving module to drive the wheels back, opposite to the ballast inspection direction indicated by the ballast inspection instruction, to the position where L = 0.
2. The method of claim 1, wherein in the step (2), the single step value of the wheel is 10cm, and the running speed of the wheel is 0.1m/s.
3. The method of claim 1, wherein the preprocessing of step (14) includes color correction and Gamma correction.
4. The method of operating a binocular vision-based railway ballast inspection system according to claim 1, wherein in the step (18),
D1 = (f × bl)/Dv1, where f is the normalized focal length, obtained by calibration in advance, bl is the distance between the CCD sensor lens center of the first CCD sensing module and the CCD sensor lens center of the second CCD sensing module, and Dv1 is the disparity map between the image blocks E and X;
D2 = (f × bl)/Dv2, where Dv2 is the disparity map between the image blocks F and Y.
5. The method according to claim 1, wherein in the binarization processing of step (21) and step (24), if the pixel value in the normalized depth map is greater than 0, the pixel value in the binarized image is 1, and if the pixel value in the normalized depth map is less than or equal to 0, the pixel value in the binarized image is 0.
6. The method of claim 1, wherein in step (26), the current distance L of the wheel is a distance from the execution time of step (11) to the current time, and the vehicle travels at a speed of 0.6 m/s.
7. The method of operating a binocular vision based railway ballast inspection system according to claim 1, wherein in step (29), the running speed of the wheels is 1.2m/s.
8. The method for operating a binocular vision-based railway ballast inspection system according to claim 1, wherein,
the wheel driving module is a servo motor;
the number of wheels is 4, at least one of which is a powered wheel;
the communication module is a 4G, 5G or GPRS communication module;
the memory module uses DDR2 or DDR3 chips;
the first CCD sensing module and the second CCD sensing module are identical and comprise a CCD sensor, an analog-to-digital converter and a CPLD;
the man-machine interaction module comprises a display screen and a keyboard.
CN202110114343.0A 2021-01-28 2021-01-28 Binocular vision-based railway ballast inspection system and working method thereof Active CN112801975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110114343.0A CN112801975B (en) 2021-01-28 2021-01-28 Binocular vision-based railway ballast inspection system and working method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110114343.0A CN112801975B (en) 2021-01-28 2021-01-28 Binocular vision-based railway ballast inspection system and working method thereof

Publications (2)

Publication Number Publication Date
CN112801975A CN112801975A (en) 2021-05-14
CN112801975B true CN112801975B (en) 2023-12-22

Family

ID=75812269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110114343.0A Active CN112801975B (en) 2021-01-28 2021-01-28 Binocular vision-based railway ballast inspection system and working method thereof

Country Status (1)

Country Link
CN (1) CN112801975B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294187B (en) * 2022-10-08 2023-01-31 合肥的卢深视科技有限公司 Image processing method of depth camera, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005213779A (en) * 2004-01-27 2005-08-11 Hashizume Kiko Kk Track structure identifying device
CN102285361A (en) * 2011-07-15 2011-12-21 上海工程技术大学 Rail space measuring vehicle
CN105225482A (en) * 2015-09-02 2016-01-06 上海大学 Based on vehicle detecting system and the method for binocular stereo vision
WO2018086348A1 (en) * 2016-11-09 2018-05-17 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measurement method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mechanical design and implementation of a deformable and separable railway track cleaning robot; 宋子诏; 电子制作 (Issue Z2); full text *
A binocular-vision-based anti-collision system for vehicle barrier gates; 王永; 熊显名; 李小勇; 计算机系统应用 (Issue 05); full text *

Also Published As

Publication number Publication date
CN112801975A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
WO2023045299A1 (en) Road surface technical condition detection method and device based on three-dimensional contour
CN104599249B (en) Cableway platform bridge floor car load is distributed real-time detection method
CN104567684B (en) A kind of contact net geometric parameter detection method and device
González et al. Automatic traffic signs and panels inspection system using computer vision
CN101957309B (en) All-weather video measurement method for visibility
CN103630088B (en) High accuracy tunnel cross-section detection method based on bidifly light belt and device
CN111784657A (en) Digital image-based system and method for automatically identifying cement pavement diseases
CN110766979A (en) Parking space detection method for automatic driving vehicle
KR102017870B1 (en) Real-time line defect detection system
CN113306991B (en) Coal conveyor monitoring management system based on stereoscopic vision
CN103955923A (en) Fast pavement disease detecting method based on image
CN108154498A (en) A kind of rift defect detecting system and its implementation
CN117058600B (en) Regional bridge group traffic load identification method and system
CN107967681A (en) Defect inspection method is hindered in a kind of elevator compensation chain punching based on machine vision
CN112801975B (en) Binocular vision-based railway ballast inspection system and working method thereof
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN115931874A (en) Carrying type magnetic suspension intelligent dynamic inspection system and gap width detection method
CN115503788B (en) Ballast track bed section scanning detection system
CN114612731B (en) Intelligent identification method and system for road flatness detection
Yao et al. Automated measurements of road cracks using line-scan imaging
CN111102959A (en) Online rail settlement monitoring device and method based on linear detection
CN115289991A (en) Subway track deformation monitoring method and device and electronic equipment
CN207751450U (en) Road detection apparatus and system
CN112766685A (en) Road engineering quality supervision method based on big data analysis and machine vision and cloud computing supervision platform
CN214882942U (en) Bridge modeling and video image acquisition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant