CN111292294A - Method and system for detecting anomalies in in-depot vehicle underbody components - Google Patents

Method and system for detecting anomalies in in-depot vehicle underbody components

Info

Publication number
CN111292294A
CN111292294A (application CN202010062832.1A)
Authority
CN
China
Prior art keywords
image
dimensional
structured light
target component
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010062832.1A
Other languages
Chinese (zh)
Inventor
张渝
彭建平
赵波
章祥
胡继东
马莉
黄炜
王祯
牟科瀚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lead Time Science & Technology Co ltd
Original Assignee
Beijing Lead Time Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lead Time Science & Technology Co ltd filed Critical Beijing Lead Time Science & Technology Co ltd
Priority to CN202010062832.1A
Publication of CN111292294A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Abstract

The application discloses a method and a system for detecting anomalies in in-depot vehicle underbody components, applicable to the inspection of rail vehicles such as light rail vehicles, metros, trains, high-speed trains and locomotives. After candidate abnormal regions of a target component are identified by comparing two-dimensional grayscale images, the three-dimensional data of those regions are compared to determine whether parts are missing, loose or otherwise faulty. This suppresses the false detections that rain and sludge coverage cause in pure two-dimensional image comparison and improves accuracy.

Description

Method and system for detecting anomalies in in-depot vehicle underbody components
Technical Field
The invention relates to the field of locomotive inspection equipment, and in particular to a method and a system for detecting anomalies in in-depot vehicle underbody components.
Background
As transportation in China continues to develop, the speed of railway transport, and of railway passenger transport in particular, keeps increasing, which places higher demands on locomotive safety.
Most existing railway passenger vehicles are parked over an inspection trench after returning to the depot at night, and workers go down into the trench to visually inspect every part of the running gear, looking for looseness, missing parts, deformation, foreign objects and similar problems in all of the important running-gear components. Such manual inspection is highly subjective, limited to a single viewing angle, prone to blind spots and inefficient. To address this, the industry currently also performs automatic detection based on two-dimensional image acquisition.
In that approach, cameras are fixed on both sides of the track and capture panoramic images of the train underbody as the train passes; the current images are then compared with historical images to locate abnormal regions and complete fault detection. The approach has strong limitations: because the railway operating environment is complex, rain, snow, mud and oil can adhere to the train underbody during actual operation, interfering with the detection result, causing misjudgements during two-dimensional image comparison and limiting accuracy.
Disclosure of Invention
In view of this, the present application provides an apparatus and a method for detecting anomalies in in-depot vehicle underbody components, which combine two-dimensional image detection with three-dimensional size data to determine the state of underbody components and thereby improve the accuracy of image-based detection.
To solve the above technical problems, the invention provides a method for detecting anomalies in vehicle underbody components, comprising the following steps:
comparing the similarity between a two-dimensional grayscale image of a target component and the corresponding standard image, and marking the image region whose similarity is not greater than a first preset value as an abnormal region;
comparing the similarity between the three-dimensional data of the abnormal region and the corresponding standard data, and judging the region whose similarity is not greater than a second preset value to be abnormal.
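The two-stage logic above can be illustrated with a minimal Python sketch. The helper functions similarity_2d and similarity_3d and the default threshold values are hypothetical placeholders standing in for the grayscale-image comparison, the point-cloud comparison and the preset values; the sketch only shows the flow of the decision, not the patented implementation.

def detect_component_anomaly(gray_img, std_img, region_cloud, std_cloud,
                             first_preset=0.8, second_preset=0.8):
    """Flag a region as abnormal only if both the 2D and the 3D check agree."""
    s2d = similarity_2d(gray_img, std_img)        # hypothetical 2D grayscale comparison
    if s2d > first_preset:
        return False                              # no candidate abnormal region found
    # Candidate abnormal region: confirm with 3D data to rule out rain or mud.
    s3d = similarity_3d(region_cloud, std_cloud)  # hypothetical point-cloud comparison
    return s3d <= second_preset                   # abnormal only if 3D also deviates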
Preferably, the step of comparing the similarity between the two-dimensional grayscale image of the target component and the corresponding standard image includes:
matching the two-dimensional grayscale image of the target component against template-region features of the standard two-dimensional image to obtain matching points;
computing a homography matrix from the matching points;
rectifying the two-dimensional grayscale image with the homography matrix and then registering it with the standard two-dimensional image;
scanning the pixels of the rectified two-dimensional grayscale image, comparing its similarity with the corresponding region of the standard image, and, if the similarity is not greater than the first preset value, judging the corresponding region to be an abnormal region.
Preferably, the two-dimensional grayscale image is obtained by:
triggering an image acquisition device, the image acquisition device capturing planar images of the target component from different directions to obtain the two-dimensional grayscale image.
Preferably, the step of comparing the three-dimensional data of the abnormal region with the standard data includes:
registering the point cloud in the three-dimensional data corresponding to the abnormal region with the point cloud of the corresponding standard data;
after registration, comparing the similarity between the point cloud in the three-dimensional data corresponding to the abnormal region and the point cloud of the corresponding standard three-dimensional data, and, if the similarity is not greater than the second preset value, judging the region corresponding to the abnormal region to be abnormal.
Preferably, the three-dimensional data is obtained by:
acquiring a two-dimensional structured light image of a target component;
and extracting three-dimensional data through a three-dimensional imaging algorithm.
Preferably, acquiring the two-dimensional structured light image of the target component specifically includes:
triggering a projection device and an image acquisition device at the same time, the projection device projecting structured light onto the surface of the target component while the image acquisition device simultaneously captures the structured light on the surface of the target component from different directions to obtain a plurality of two-dimensional structured light images.
An in-depot vehicle underbody component anomaly detection system, comprising:
an image acquisition device for acquiring a two-dimensional grayscale image and a two-dimensional structured light image of the target component;
an image processing module for extracting three-dimensional data of the target component from the two-dimensional structured light image;
a component anomaly detection module for comparing the two-dimensional grayscale image with the standard image to obtain the abnormal region of the target component;
and a component size detection module for comparing the three-dimensional data of the abnormal region with the standard data and confirming the judgment of the abnormal region according to the comparison result.
Preferably, the image acquisition device comprises a first image acquisition unit and a second image acquisition unit whose optical axes intersect, which simultaneously capture planar images of the target component from different directions to obtain the two-dimensional grayscale images;
the system further comprises a projection device; the projection device projects structured light onto the surface of the target component, and the image acquisition device captures the structured light on the surface of the target component to obtain the two-dimensional structured light image;
the image processing module processes the two-dimensional structured light image with a three-dimensional imaging algorithm to obtain the three-dimensional data.
Compared with the prior art, the beneficial effects of the application include the following. In the vehicle underbody anomaly detection method, after an abnormal region has been identified from the two-dimensional grayscale image, the three-dimensional data of that region is compared to judge whether its dimensions are abnormal. If they are, the region is considered to contain a loose or missing part, for example a fastening bolt that has loosened or fallen off or a loose oil filler port; if the three-dimensional comparison shows no dimensional abnormality, the region can be judged to have been misidentified because of attachments such as oil stains or muddy water. This improves the accuracy of the whole underbody inspection system.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of the system of the present invention;
FIG. 3 shows a preferred embodiment of the image acquisition device in the system.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
The invention discloses a method for detecting anomalies in in-depot vehicle underbody components, used to detect looseness, missing parts, deformation, foreign objects and similar problems in the underbody components of rail vehicles such as trains, metros and locomotives. As shown in FIG. 1, the method comprises the following steps:
S1: comparing the similarity between a two-dimensional grayscale image of a target component and the corresponding standard image, and marking the image region whose similarity is not greater than a first preset value as an abnormal region;
S2: comparing the similarity between the three-dimensional data of the abnormal region and the corresponding standard data, and judging the region whose similarity is not greater than a second preset value to be abnormal.
With this method, abnormal regions are first identified from the two-dimensional grayscale image and then specifically re-checked by comparing three-dimensional data, which effectively avoids the interference and misjudgements that stains, rainwater and similar factors cause in two-dimensional image detection and improves accuracy.
Specifically, the three-dimensional data in the above scheme is obtained by capturing two-dimensional structured light images of the target component under area-array structured light and then processing those images with a three-dimensional imaging algorithm.
Based on this, the method specifically includes:
Step S1 includes:
S11: matching the two-dimensional grayscale image of the target component against template-region features of the standard two-dimensional image to obtain matching points;
S12: computing a homography matrix from the matching points;
S13: rectifying the two-dimensional grayscale image with the homography matrix so that it coincides with the pre-stored standard two-dimensional image;
S14: scanning the pixels of the image region, comparing the similarity of the two-dimensional grayscale image of the target component with the corresponding region of the standard image, and, if the similarity is not greater than the first preset value, marking the corresponding region as an abnormal region.
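A sketch of one possible realization of steps S11 to S14 follows. It uses OpenCV: ORB feature matching, RANSAC homography estimation, perspective rectification and a block-wise normalized cross-correlation as the similarity measure; the function name, the block size and the threshold value are illustrative assumptions rather than values prescribed by the patent.

import cv2
import numpy as np

def find_abnormal_regions(gray, std, block=64, first_preset=0.7):
    """Register the captured grayscale image to the standard image via a
    homography (S11-S13), then flag blocks with low similarity (S14)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray, None)   # features of the captured image
    kp2, des2 = orb.detectAndCompute(std, None)    # features of the standard image
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # S12
    h, w = std.shape
    rectified = cv2.warpPerspective(gray, H, (w, h))            # S13
    flagged = []
    for y in range(0, h - block + 1, block):                    # S14: block-wise scan
        for x in range(0, w - block + 1, block):
            a = rectified[y:y + block, x:x + block].astype(np.float32)
            b = std[y:y + block, x:x + block].astype(np.float32)
            a -= a.mean()
            b -= b.mean()
            ncc = float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-6))
            if ncc <= first_preset:
                flagged.append((x, y, block, block))            # candidate abnormal region
    return flagged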
Step S2 includes:
S21: processing the two-dimensional structured light images with a three-dimensional imaging algorithm to obtain the three-dimensional data;
S22: registering the point cloud in the three-dimensional data corresponding to the abnormal region with the point cloud of the corresponding standard data;
S23: after registration, comparing the similarity between the point cloud in the three-dimensional data corresponding to the abnormal region and the point cloud of the corresponding standard three-dimensional data, and, if the similarity is not greater than the second preset value, judging the region corresponding to the abnormal region to be abnormal.
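Steps S22 and S23 can be realized, for example, with ICP registration followed by a nearest-neighbour distance check, as in the sketch below. It assumes a recent version of the Open3D library; the ICP correspondence distance, the tolerance and the way a similarity score is derived from the inlier fraction are illustrative assumptions.

import numpy as np
import open3d as o3d

def point_cloud_similarity(region_xyz, standard_xyz, icp_dist=2.0, tol=1.0):
    """Register the abnormal-region cloud to the standard cloud (S22) and return
    the fraction of its points lying within `tol` of the standard cloud (S23)."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(region_xyz))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(standard_xyz))
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(reg.transformation)                      # registered region cloud
    dists = np.asarray(src.compute_point_cloud_distance(tgt))
    return float((dists <= tol).mean())                    # crude similarity score in [0, 1]

# A region would then be confirmed abnormal when this score is not greater than
# the second preset value, for example point_cloud_similarity(region, standard) <= 0.8.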
In this embodiment, the two-dimensional grayscale images and the three-dimensional data are obtained before the image-based defect check, and the specific steps include:
S01: the control system detects the distance between the target component and the image acquisition devices, or receives an external control signal, and triggers the plurality of image acquisition devices simultaneously; the image acquisition devices simultaneously capture planar images of the target component from different directions to obtain a plurality of two-dimensional grayscale images;
S02: the control system then triggers the projection device and the image acquisition devices simultaneously; the projection device projects structured light onto the surface of the target component, and the image acquisition devices capture structured light images of the surface of the target component to obtain at least two two-dimensional structured light images;
S03: the two-dimensional structured light images are processed by a three-dimensional imaging algorithm to obtain the three-dimensional data.
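The acquisition sequence S01 to S03 can be pictured with the control-flow sketch below. The sensor, cameras and projector objects and their method names are hypothetical device wrappers, since the patent does not prescribe any software interface, and reconstruct_3d stands for the three-dimensional imaging algorithm of step S03.

def acquire_component_data(sensor, cameras, projector, trigger_mm=300.0):
    """Sketch of the S01-S03 acquisition sequence with hypothetical device wrappers."""
    # S01: trigger all cameras when the component is in range (or on an external signal)
    if sensor.distance_mm() > trigger_mm:
        return None
    gray_images = [cam.capture() for cam in cameras]      # planar grayscale images
    # S02: project structured light and capture again with every camera
    projector.project_fringes()
    fringe_images = [cam.capture() for cam in cameras]    # at least two structured light images
    projector.off()
    # S03: hand the structured light images to the three-dimensional imaging algorithm
    point_cloud = reconstruct_3d(fringe_images)           # hypothetical reconstruction helper
    return gray_images, point_cloud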
The structured light described in this embodiment is area-array structured light, preferably grating fringe patterns with a fixed phase shift between them. The projection device projects the grating fringe patterns onto the target component, and the plural image acquisition devices simultaneously photograph the target component covered with the fringe patterns from different angles to obtain at least a first and a second image of the measured object.
The grating fringe pattern in this application may be sinusoidally varying structured light, i.e. a sine fringe pattern, or cosinusoidally varying structured light, i.e. a cosine fringe pattern. The two-dimensional structured light images are processed by a three-dimensional imaging algorithm to obtain the three-dimensional data.
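For the sinusoidal case, a standard N-step phase-shifting scheme projects fringes of the form I_k = A + B*cos(phi + 2*pi*k/N) and recovers the wrapped phase from the captured images. The NumPy sketch below shows that calculation under the assumption of evenly spaced phase shifts with four steps, which is one common choice rather than a value fixed by the patent; phase unwrapping and projector-camera calibration are omitted.

import numpy as np

def fringe_patterns(width, height, period_px=32, steps=4):
    """Generate `steps` sinusoidal fringe images with evenly spaced phase shifts."""
    phase = 2 * np.pi * np.arange(width) / period_px
    return [np.tile(0.5 + 0.5 * np.cos(phase + 2 * np.pi * k / steps), (height, 1))
            for k in range(steps)]

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted captures.
    For I_k = A + B*cos(phi + d_k) with d_k = 2*pi*k/N:
    phi = atan2(-sum_k I_k*sin(d_k), sum_k I_k*cos(d_k))."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = -sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    return np.arctan2(num, den)        # wrapped to (-pi, pi]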
Three-dimensional imaging algorithms are well known to those skilled in the art; although specific implementations differ slightly, they obtain a point cloud containing the three-dimensional data of the target component through binocular imaging computation. For the purposes of this application, any three-dimensional imaging algorithm is feasible.
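As a minimal illustration of the binocular computation, the sketch below triangulates matched pixel coordinates from the two calibrated cameras into a point cloud with OpenCV. It assumes the intrinsic matrices and the relative rotation and translation are already known from a stereo calibration, which the patent does not detail, and that pixel correspondences have been established, for example from the fringe phase.

import cv2
import numpy as np

def triangulate(K1, K2, R, t, pts1, pts2):
    """Triangulate Nx2 pixel correspondences from two calibrated cameras into
    an Nx3 point cloud expressed in the first camera's coordinate frame."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])     # camera 1: K1 [I | 0]
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])              # camera 2: K2 [R | t]
    Xh = cv2.triangulatePoints(P1, P2,
                               pts1.T.astype(np.float64),
                               pts2.T.astype(np.float64))  # 4xN homogeneous points
    return (Xh[:3] / Xh[3]).T                              # dehomogenize -> Nx3 points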
Image acquisition can be completed in a single pass: when the acquisition device moves to a given position under the vehicle, it first captures the two-dimensional grayscale images of the target component and then the two-dimensional structured light images; the two-dimensional structured light images are converted into three-dimensional data by the three-dimensional imaging algorithm before the three-dimensional defect check.
In other embodiments of the present application, the two-dimensional grayscale image and the two-dimensional structured light image may also be acquired by different devices at different times, and the timing and manner of their acquisition should not limit the detection method described in this application.
The template region in step S1 is a region whose dimensions do not change as a result of missing parts or occlusion by foreign objects; it may be a fixed template region preset according to the structural characteristics of the target region.
In the template region of the standard image, four points A1(x_A1, y_A1), A2(x_A2, y_A2), A3(x_A3, y_A3) and A4(x_A4, y_A4) are taken.
In the two-dimensional grayscale image, the points B1(x_B1, y_B1), B2(x_B2, y_B2), B3(x_B3, y_B3) and B4(x_B4, y_B4) that correspond to A1, A2, A3 and A4 under the projective mapping are acquired.
A world coordinate system is constructed on the object plane so that every point of the object plane has z = 0. With the homography matrix
H = [h11 h12 h13; h21 h22 h23; h31 h32 h33],
the points An and Bn satisfy a projective relation of the form
s · [x_An, y_An, 1]^T = H · [x_Bn, y_Bn, 1]^T,
where n = 1, 2, 3, 4 and s is a non-zero scale factor.
Because of deviations in the shooting angle, the two-dimensional grayscale image is deformed by the perspective relation, and the homography matrix describes the projective deformation between the two-dimensional grayscale image and the standard two-dimensional image. The two-dimensional grayscale image can therefore be rectified with the homography matrix H, and the rectified image should theoretically coincide with the standard two-dimensional image.
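With four template-region correspondences the homography can be solved exactly and then used to rectify the captured image, for instance with OpenCV's four-point solver as in the sketch below. The coordinates and file names are illustrative placeholders only, not values from the patent.

import cv2
import numpy as np

# Illustrative coordinates only: A1..A4 are template-region points in the standard
# image, B1..B4 their projectively mapped counterparts in the captured image.
A = np.float32([[100, 100], [400, 100], [400, 300], [100, 300]])
B = np.float32([[112, 95], [405, 110], [398, 312], [96, 298]])

H = cv2.getPerspectiveTransform(B, A)   # exact 3x3 homography mapping each Bn onto An
gray = cv2.imread("captured_gray.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
std = cv2.imread("standard_gray.png", cv2.IMREAD_GRAYSCALE)
rectified = cv2.warpPerspective(gray, H, (std.shape[1], std.shape[0]))
# `rectified` should now coincide with `std` wherever the component is intact.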
In practice, because underbody components may be missing, loose and so on, the correspondence between image regions of the two-dimensional grayscale image and the standard image may change; the change shows up as mismatched pixels, and the similarity comparison is performed on that basis.
This step identifies, from the two-dimensional grayscale image, whether each target component shows an abnormality. However, rain, mud, snow and the like may push the similarity between the two-dimensional grayscale image and the standard image below the first preset value even though no part is actually damaged or missing, so false alarms are easily generated. Therefore, after the two-dimensional defect check, the application performs a further check using the three-dimensional data:
Step S2 specifically includes:
S21: registering the point cloud in the three-dimensional data corresponding to the abnormal region with the point cloud of the corresponding standard data;
S22: after registration, comparing the similarity between the point cloud in the three-dimensional data corresponding to the abnormal region and the point cloud of the corresponding standard three-dimensional data, and, if the similarity is not greater than the second preset value, judging the region corresponding to the abnormal region to be abnormal.
In other embodiments of the present application, the structured light pattern projected by the projection device toward the target component may also be a speckle pattern or a coded pattern.
In the above embodiments of this application, when the ambient lighting is insufficient, a supplementary lighting device can also be triggered together with the image acquisition device when the two-dimensional grayscale image is acquired, so that the image acquisition device can obtain a clear two-dimensional grayscale image.
The two-dimensional grayscale image is overlaid on the standard image, and the parts of the image that cannot be effectively aligned are identified as abnormal regions. Such a region may fail to align because of a structural abnormality of a part, for example a fastening bolt that has loosened or fallen off or a loose oil filler port, or because rainwater or sludge adhering to the surface has changed the local grayscale of the image.
When regions fail to align, they are identified as abnormal and their three-dimensional data is compared with the standard data. If the three-dimensional data differs significantly from the standard data, the abnormality is attributed to structural looseness or a missing part; if it does not, the region is judged to have been a false alarm caused by adhering rainwater or sludge. This composite judgment of the target component improves detection accuracy.
As shown in fig. 2 and 3, the present application further discloses a detection system comprising:
an image acquisition device 1 for acquiring a two-dimensional grayscale image and a two-dimensional structured light image of the target component;
an image processing module 3 for extracting three-dimensional data of the target component from the two-dimensional structured light image;
a component anomaly detection module 2 for comparing the two-dimensional grayscale image with the standard image to obtain the abnormal region of the target component;
and a component size detection module 4 for comparing the three-dimensional data of the abnormal region with the standard data and confirming the judgment of the abnormal region according to the comparison result.
The image acquisition device 1 comprises at least a first image acquisition unit 11 and a second image acquisition unit 13 whose optical axes intersect and which simultaneously capture planar images of the target component from different directions. Because the optical axes of the first image acquisition unit 11 and the second image acquisition unit 13 intersect, several images of the same target component can be taken from different angles at the same time, and the comparison can be run several times on the resulting multiple sets of comparison information, which reduces error and improves detection accuracy.
The first image acquisition unit 11 and the second image acquisition unit 13 can preferably be linked with the supplementary lighting device 12 or the projection device 15. When a two-dimensional grayscale image is acquired, the supplementary lighting device 12 can, depending on the ambient light intensity, be triggered at the same time to illuminate the target component;
when a two-dimensional structured light image is acquired, the projection device 15, the first image acquisition unit 11 and the second image acquisition unit 13 are activated simultaneously.
The projection device 15 projects structured light onto the surface of the target component; the first image acquisition unit 11 and the second image acquisition unit 13 capture the structured light on the surface of the target component to obtain the two-dimensional structured light images;
the image processing module 3 is used for extracting the two-dimensional structured light image through a three-dimensional imaging algorithm to obtain the three-dimensional data.
In the preferred embodiment of the above detection system, the projection device 15 projects an area-array pattern and the image acquisition device 1 directly captures the two-dimensional structured light on the surface of the target component, so no scanning motor of the kind required for line-structured-light capture is needed. This raises the efficiency of structured light capture: images are acquired quickly first and then processed at the back end, which improves the efficiency of underbody inspection.
Moreover, because the two-dimensional grayscale images and the two-dimensional structured light images can share the same group of image acquisition devices 1, the installed acquisition equipment is smaller. It can be mounted not only on both sides of the track but also on a robotic arm; since the arm can move freely, the acquisition range during image capture is wider and can reach deep into the underbody, for example above the axle box or the gearbox.
The image acquisition device 1 may be triggered by an external signal received by the control system, or by the distance sensor 14 detecting the distance between the target component and the image acquisition device 1; the latter also prevents the acquisition assembly from colliding with train components when the apparatus is mounted on a robotic arm, and allows the image acquisition device to be positioned so that the shooting distance is accurate.
In the preferred embodiment of the above system, a pair of image acquisition devices 1 is used to capture images from different viewing angles, and the multi-view images improve the accuracy of component difference detection, identification and positioning. Imaging may of course also be performed with more than two cameras arranged as a multi-view camera; here a binocular camera is preferably used to capture images of the target component. When the detection device is operating, the camera lenses face upward to facilitate shooting, and the optical axes of the first image acquisition unit 11 and the second image acquisition unit 13 intersect, i.e. they form a certain included angle, so that images of the component are captured from multiple viewing angles, and the resulting multi-view images improve the accuracy of component difference detection, identification and positioning.
The data are obtained through the two views of the binocular camera, and the three-dimensional image of the target component can be read and reconstructed from the binocular camera data.
The above is only a preferred embodiment of the present invention, and it should be noted that the above preferred embodiment should not be considered as limiting the present invention, and the protection scope of the present invention should be subject to the scope defined by the claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and these modifications and adaptations should be considered within the scope of the invention.

Claims (8)

1. A method for detecting anomalies in in-depot vehicle underbody components, characterized by comprising:
comparing the similarity between a two-dimensional grayscale image of a target component and the corresponding standard image, and marking the image region whose similarity is not greater than a first preset value as an abnormal region;
and comparing the similarity between the three-dimensional data of the abnormal region and the corresponding standard data, and judging the region whose similarity is not greater than a second preset value to be abnormal.
2. The method for detecting anomalies in in-depot vehicle underbody components according to claim 1, wherein the step of comparing the similarity between the two-dimensional grayscale image of the target component and the corresponding standard image comprises:
matching the two-dimensional grayscale image of the target component against template-region features of the standard two-dimensional image to obtain matching points;
computing a homography matrix from the matching points;
rectifying the two-dimensional grayscale image with the homography matrix and then registering it with the standard two-dimensional image;
and scanning the pixels of the rectified two-dimensional grayscale image, comparing its similarity with the corresponding region of the standard image, and, if the similarity is not greater than the first preset value, judging the corresponding region to be an abnormal region.
3. The method for detecting anomalies in in-depot vehicle underbody components according to claim 2, wherein the two-dimensional grayscale image is obtained by:
triggering an image acquisition device, the image acquisition device capturing planar images of the target component from different directions to obtain the two-dimensional grayscale image.
4. The method for detecting anomalies in in-depot vehicle underbody components according to claim 1, wherein the step of comparing the three-dimensional data of the abnormal region with the standard data comprises:
registering the point cloud in the three-dimensional data corresponding to the abnormal region with the point cloud of the corresponding standard data;
and after registration, comparing the similarity between the point cloud in the three-dimensional data corresponding to the abnormal region and the point cloud of the corresponding standard three-dimensional data, and, if the similarity is not greater than the second preset value, judging the region corresponding to the abnormal region to be abnormal.
5. The method for detecting anomalies in in-depot vehicle underbody components according to claim 4, wherein the three-dimensional data is obtained by:
acquiring a two-dimensional structured light image of the target component;
and extracting the three-dimensional data through a three-dimensional imaging algorithm.
6. The method for detecting anomalies in in-depot vehicle underbody components according to claim 5, wherein acquiring the two-dimensional structured light image of the target component specifically comprises:
triggering a projection device and an image acquisition device at the same time, the projection device projecting structured light onto the surface of the target component while the image acquisition device simultaneously captures the structured light on the surface of the target component from different directions to obtain a plurality of two-dimensional structured light images.
7. An in-depot vehicle underbody component anomaly detection system, characterized by comprising:
an image acquisition device for acquiring a two-dimensional grayscale image and a two-dimensional structured light image of the target component;
an image processing module for extracting three-dimensional data of the target component from the two-dimensional structured light image;
a component anomaly detection module for comparing the two-dimensional grayscale image with the standard image to obtain the abnormal region of the target component;
and a component size detection module for comparing the three-dimensional data of the abnormal region with the standard data and confirming the judgment of the abnormal region according to the comparison result.
8. The in-depot vehicle underbody component anomaly detection system, characterized in that the image acquisition device comprises a first image acquisition unit and a second image acquisition unit whose optical axes intersect and which simultaneously capture planar images of the target component from different directions to obtain the two-dimensional grayscale images;
the system further comprises a projection device; the projection device projects structured light onto the surface of the target component, and the image acquisition device captures the structured light on the surface of the target component to obtain the two-dimensional structured light image;
and the image processing module processes the two-dimensional structured light image with a three-dimensional imaging algorithm to obtain the three-dimensional data.
CN202010062832.1A 2020-01-20 2020-01-20 Method and system for detecting abnormality of in-warehouse bottom piece Pending CN111292294A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010062832.1A CN111292294A (en) 2020-01-20 2020-01-20 Method and system for detecting abnormality of in-warehouse bottom piece

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010062832.1A CN111292294A (en) 2020-01-20 2020-01-20 Method and system for detecting abnormality of in-warehouse bottom piece

Publications (1)

Publication Number Publication Date
CN111292294A true CN111292294A (en) 2020-06-16

Family

ID=71024277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010062832.1A Pending CN111292294A (en) 2020-01-20 2020-01-20 Method and system for detecting abnormality of in-warehouse bottom piece

Country Status (1)

Country Link
CN (1) CN111292294A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104567726A (en) * 2014-12-17 2015-04-29 苏州华兴致远电子科技有限公司 Vehicle operation fault detection system and method
CN104567725A (en) * 2014-12-17 2015-04-29 苏州华兴致远电子科技有限公司 Vehicle operation fault detection system and method
WO2016095490A1 (en) * 2014-12-17 2016-06-23 苏州华兴致远电子科技有限公司 Vehicle operation fault detection system and method
CN106940884A (en) * 2015-12-15 2017-07-11 北京康拓红外技术股份有限公司 A kind of EMUs operation troubles image detecting system and method comprising depth information
CN109242035A (en) * 2018-09-25 2019-01-18 北京华开领航科技有限责任公司 Vehicle bottom fault detection means and method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985457A (en) * 2020-09-11 2020-11-24 北京百度网讯科技有限公司 Traffic facility damage identification method, device, equipment and storage medium
CN112330647A (en) * 2020-11-12 2021-02-05 南京优视智能科技有限公司 Method for detecting abnormality of bottom of bullet train
CN112381791A (en) * 2020-11-13 2021-02-19 北京图知天下科技有限责任公司 Bolt looseness detection method based on 3D point cloud
CN112488995A (en) * 2020-11-18 2021-03-12 成都主导软件技术有限公司 Intelligent injury judging method and system for automatic train maintenance
CN112488995B (en) * 2020-11-18 2023-12-12 成都主导软件技术有限公司 Intelligent damage judging method and system for automatic maintenance of train
CN112733709A (en) * 2021-01-08 2021-04-30 北京主导时代科技有限公司 Track inspection detection system and method
CN113808096A (en) * 2021-09-14 2021-12-17 成都主导软件技术有限公司 Non-contact bolt looseness detection method and system
CN113808097A (en) * 2021-09-14 2021-12-17 北京主导时代科技有限公司 Method and system for detecting loss of key components of train
CN113808096B (en) * 2021-09-14 2024-01-30 成都主导软件技术有限公司 Non-contact bolt loosening detection method and system
CN113808097B (en) * 2021-09-14 2024-04-12 北京主导时代科技有限公司 Method and system for detecting loss of key parts of train

Similar Documents

Publication Publication Date Title
CN111292294A (en) Method and system for detecting abnormality of in-warehouse bottom piece
CN111289261B (en) Detection method for in-warehouse bottom piece
CN110979321B (en) Obstacle avoidance method for unmanned vehicle
CN107738612B (en) Automatic parking space detection and identification system based on panoramic vision auxiliary system
Broggi et al. Self-calibration of a stereo vision system for automotive applications
US9378553B2 (en) Stereo image processing device for vehicle
US8041079B2 (en) Apparatus and method for detecting obstacle through stereovision
US11427193B2 (en) Methods and systems for providing depth maps with confidence estimates
JP3895238B2 (en) Obstacle detection apparatus and method
US8422737B2 (en) Device and method for measuring a parking space
US20090010482A1 (en) Diagrammatizing Apparatus
JP2014215039A (en) Construction machine
CN104567725A (en) Vehicle operation fault detection system and method
CN107229906A (en) A kind of automobile overtaking's method for early warning based on units of variance model algorithm
CN112132896A (en) Trackside equipment state detection method and system
CN112488995B (en) Intelligent damage judging method and system for automatic maintenance of train
JP4296287B2 (en) Vehicle recognition device
CN103171560A (en) Lane recognition device
CN111855667A (en) Novel intelligent train inspection system and detection method suitable for metro vehicle
JP4967758B2 (en) Object movement detection method and detection apparatus
CN107621229B (en) Real-time railway track width measurement system and method based on area array black-and-white camera
JP4906628B2 (en) Surveillance camera correction device
CN110696016A (en) Intelligent robot suitable for subway vehicle train inspection work
JP2010107348A (en) Calibration target and in-vehicle calibration system using it
CN217932084U (en) Comprehensive train detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200616