CN113177962A - Inspection robot, switch state identification method and device and storage medium - Google Patents

Inspection robot, switch state identification method and device and storage medium

Info

Publication number
CN113177962A
Authority
CN
China
Prior art keywords
state
switch
image
recognition result
action part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110556070.5A
Other languages
Chinese (zh)
Inventor
温鑫
龚慧钦
于航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110556070.5A
Publication of CN113177962A
Legal status: Pending (current)

Classifications

    • G06T 7/13 Edge detection
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/0004 Industrial image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/187 Segmentation involving region growing, region merging or connected component labelling
    • G06T 7/90 Determination of colour characteristics
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30164 Workpiece; Machine component
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of this specification provide an inspection robot, a switch state identification method and device, and a storage medium. The method includes: acquiring an entity mark on a switch action part, the entity mark indicating the current state of the switch action part; querying a mark-state correspondence table according to the entity mark; and determining the state of the switch action part based on the query result. The embodiments of this specification can improve the switch state identification accuracy of the inspection robot.

Description

Inspection robot, switch state identification method and device and storage medium
Technical Field
The present disclosure relates to inspection robots, and in particular to an inspection robot, a switch state identification method and apparatus for the inspection robot, and a storage medium.
Background
With the rapid development of data centers, inspection of data center machine rooms has become an important part of guaranteeing their operation and maintenance. As the number of power distribution cabinets keeps growing, manual inspection can no longer meet the operation and maintenance requirements of large-scale data centers, so inspection robots are gradually being deployed in machine room operation and maintenance.
The detection module of current inspection robots mainly inspects devices such as switches, instrument panels, and status lights. In practical applications, however, it has been found that when the inspection robot inspects switch devices, switch state misidentification easily occurs.
Disclosure of Invention
An object of the embodiments of the present specification is to provide an inspection robot, a switch state identification method and apparatus for the inspection robot, and a storage medium, so as to improve the accuracy of switch state identification by the inspection robot.
In order to achieve the above object, in one aspect, an embodiment of the present specification provides a switch state identification method, including:
acquiring an entity mark on a switch action part; the entity mark is used for indicating the current state of the switch action part;
inquiring a corresponding relation table between the mark and the state according to the entity mark;
determining the state of the switch action part based on the query result.
In a preferred embodiment, the entity mark is a color mark.
In a preferred embodiment, the method further comprises:
synchronously carrying out image recognition on the switch action part, and determining the state of the switch action part based on the image recognition result;
confirming whether the state determined based on the image recognition result is consistent with the state determined based on the query result;
and if the state determined based on the image recognition result is consistent with the state determined based on the inquiry result, taking the state as a switch state recognition result.
In a preferred embodiment, the method further comprises:
and if the state determined based on the image recognition result is inconsistent with the state determined based on the inquiry result, re-acquiring the entity mark on the switch action part and re-recognizing according to the entity mark.
In a preferred embodiment, the synchronizing image-recognizing the switch actuating portion and determining the state of the switch actuating portion based on the image-recognizing result includes:
acquiring an image of the switch action part;
recognizing the state of the switch action part based on the image contour to obtain a first recognition result; identifying the state of the switch action part based on the image color to obtain a second identification result;
confirming whether the first recognition result is consistent with the second recognition result;
and if the first recognition result is consistent with the second recognition result, taking the first recognition result as a state determined based on the image recognition result.
In a preferred embodiment, the synchronizing performs image recognition on the switch actuating portion and determines the state of the switch actuating portion based on the image recognition result, further comprising:
and if the first recognition result is inconsistent with the second recognition result, acquiring the image of the switch action part again and re-recognizing the image according to the image.
In a preferred embodiment, the identifying the state of the switch action part based on the image contour includes:
converting the image into a binary image;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch outline and the switch action part outline into a pre-trained deep learning model to obtain a first recognition result.
In a preferred embodiment, the identifying the state of the switch actuation portion based on the image color includes:
extracting the color of a switch area in the image;
determining a color belonging to the switching operation unit among the colors of the switching area;
inquiring a corresponding relation table of colors and states according to the colors belonging to the switch action part;
and obtaining a second identification result according to the query result.
On the other hand, an embodiment of the present specification further provides a switch state identification device, including:
the mark collector is used for collecting the entity mark on the switch action part; the entity mark is used for indicating the current state of the switch action part;
and the processor is used for acquiring the entity mark, inquiring the corresponding relation table between the mark and the state according to the entity mark and determining the state of the switch action part based on the inquiry result.
In a preferred embodiment, the entity mark is a color mark and the mark collector is a color sensor.
In a preferred embodiment, the apparatus further comprises:
the imaging part is used for synchronously acquiring the image of the switch action part;
the processor is further configured to: performing image recognition on the image, and determining the state of the switch action part based on the image recognition result; confirming whether the state determined based on the image recognition result is consistent with the state determined based on the query result; and if the state determined based on the image recognition result is consistent with the state determined based on the inquiry result, taking the state as a switch state recognition result.
In a preferred embodiment, the processor is further configured to:
and if the state determined based on the image recognition result is inconsistent with the state determined based on the inquiry result, re-acquiring the entity mark on the switch action part and re-recognizing according to the entity mark.
In a preferred embodiment, the synchronizing image-recognizing the switch actuating portion and determining the state of the switch actuating portion based on the image-recognizing result includes:
acquiring an image of the switch action part;
recognizing the state of the switch action part based on the image contour to obtain a first recognition result; identifying the state of the switch action part based on the image color to obtain a second identification result;
confirming whether the first recognition result is consistent with the second recognition result;
and if the first recognition result is consistent with the second recognition result, taking the first recognition result as a state determined based on the image recognition result.
In a preferred embodiment, the synchronizing performs image recognition on the switch actuating portion and determines the state of the switch actuating portion based on the image recognition result, further comprising:
and if the first recognition result is inconsistent with the second recognition result, acquiring the image of the switch action part again and re-recognizing the image according to the image.
In a preferred embodiment, the identifying the state of the switch action part based on the image contour includes:
converting the image into a binary image;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch outline and the switch action part outline into a pre-trained deep learning model to obtain a first recognition result.
In a preferred embodiment, the identifying the state of the switch actuation portion based on the image color includes:
extracting the color of a switch area in the image;
determining a color belonging to the switching operation unit among the colors of the switching area;
inquiring a corresponding relation table of colors and states according to the colors belonging to the switch action part;
and obtaining a second identification result according to the query result.
On the other hand, the embodiment of the specification also provides an inspection robot, and the inspection robot is provided with the switch state recognition device.
In another aspect, the present specification further provides a computer storage medium, on which a computer program is stored, the computer program being executed by a processor of the inspection robot to execute the instructions of the method.
According to the technical solutions provided by the embodiments of this specification, the entity mark is easier to recognize than the switch image, so the embodiments can improve the accuracy with which the inspection robot identifies the switch state. In addition, multiple rounds of recognition further improve that accuracy.
Drawings
To illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of this specification, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 shows a schematic view of a partial structure of an inspection robot in some embodiments of the present description;
FIG. 2 illustrates a flow chart of a switch state identification method in some embodiments of the present description;
fig. 3 is a schematic structural view illustrating a switch operating portion in a closed state in an embodiment of the present disclosure;
FIG. 4a is a schematic structural diagram of the switch actuating portion in an off state according to an embodiment of the present disclosure;
FIG. 4b is a schematic structural diagram of the switch operating part in an off state in another embodiment of the present disclosure;
FIG. 5 is a flow chart illustrating a method for identifying a switch state in other embodiments of the present description;
FIG. 6 illustrates a flow chart for determining switch states based on image recognition in some embodiments of the present description;
FIG. 7 is a schematic diagram illustrating the use of the Faster R-CNN network to predict switch states in one embodiment of the present description;
fig. 8 shows a block diagram of the inspection robot in some embodiments of the present disclosure.
[Description of reference numerals]
1. a lifting rod;
2. a steering platform;
3. marking a collector;
4. an imaging section;
5. a switch body;
6. a switch operation part;
802. an inspection robot;
804. a processor;
806. a memory;
808. a drive mechanism;
810. an input/output interface;
812. an input device;
814. an output device;
816. a presentation device;
818. a graphical user interface;
820. a network interface;
822. a communication link;
824. a communication bus.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification. For example, in the following description, forming the second component over the first component may include embodiments in which the first and second components are formed in direct contact, embodiments in which the first and second components are formed in non-direct contact (i.e., additional components may be included between the first and second components), and so on.
Also, for ease of description, some embodiments of the present description may use spatially relative terms such as "above," "below," "top," and "bottom" to describe the relationship of one element or component to another as illustrated in the figures of the embodiments. It will be understood that these spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements or components described as "below" or "beneath" other elements or components would then be oriented "above" or "over" those other elements or components.
The embodiment of the specification relates to a technology for automatically inspecting the state of equipment (such as a switch and the like) of a data center by using an inspection robot. For convenience of description, a switching device (hereinafter, referred to as a switch) is described as an example in the following, but it should be understood by those skilled in the art that the embodiments of the present disclosure may be applied to any other suitable field of detecting the device status of a data center, and this is not limited in this disclosure, and may be specifically selected according to needs.
When an inspection robot inspects a switch, the whole switch is usually a single color, so the boundary of the switch action part is difficult to determine accurately by image recognition. As a result, the state recognition rate for the switch action part is low and cannot meet the preset requirements.
In view of this, the embodiments of the present disclosure improve the inspection robot and make an adaptive change to the switch. The adaptive change to the switch is as follows: the switch action part is provided with a physical mark indicating its current state (that is, whether the switch action part is in the closed state or the open state can be recognized through the physical mark). Referring to fig. 1, in some embodiments of the present description, the inspection robot may include a lifting rod 1, a steering platform 2 connected to the free end of the lifting rod 1, and a mark collector 3 mounted on the steering platform 2. In use, the processor of the inspection robot can control the lifting rod 1 and the steering platform 2 so that the mark collector 3 directly faces the switch action part of the target switch, collects the entity mark on the switch action part, and provides the entity mark to the processor. The processor can then query the mark-state correspondence table according to the entity mark and determine the state of the switch action part based on the query result, thereby recognizing the state of the switch action part (i.e., recognizing the switch state). Because the entity mark is easier to recognize than the switch image, the embodiments of the specification can improve the accuracy with which the inspection robot identifies the switch state.
In the embodiments of the present specification, the physical mark may be a physical entity (i.e., an actual object) provided on the switch action part, such as a pattern, a color, or a combination thereof. In one embodiment, as shown in fig. 3, the switch body 5 is provided with the switch action part 6, which is in the closed state; a color mark whose color clearly differs from the other portions of the switch body 5 and the switch action part 6 may be coated on the outer vertical surface of the switch action part 6 that faces outward in the closed state. Correspondingly, in one embodiment, as shown in fig. 4a, the outer vertical surface facing outward in the open state may be left unmarked, so that the state of the switch action part can be recognized by detecting the presence or absence of the color mark. Of course, in another embodiment, as shown in fig. 4b, a mark of another color that clearly differs from the switch body 5 and the other portions of the switch action part 6 may be coated on the outer vertical surface facing outward in the open state; obviously, this color mark should differ significantly from the color mark applied for the closed state. The state of the switch action part can then be recognized by distinguishing the two color marks.
It should be noted that the outer vertical surface in the closed state and the outer vertical surface in the open state mentioned above are only examples. This specification does not limit where the physical mark is placed on the switch action part; the physical mark may be placed at any position on the switch action part where the state can be conveniently recognized, as selected according to needs.
It should also be noted that this specification does not limit the coated area of the physical mark, which can be selected according to needs; in general, the larger the coated area, the easier the recognition, but the higher the coating cost. For example, in one embodiment, the coated area of the physical mark may completely cover the vertical face, as in figs. 3, 4a and 4b. For another example, in another embodiment, the physical mark may cover only a portion of it (e.g., the middle of the face).
In some embodiments of the present disclosure, the mark collector 3 of the inspection robot may be configured to match the physical mark. For example, when the entity mark is a color mark, the mark collector 3 may be a color sensor (also called a color recognition sensor). A color sensor detects a color by comparing the color of an object with a reference color taught to it in advance, and outputs a detection result (i.e., the color detection result for the target object) when the two colors match within a certain error range.
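As an illustration of this teach-and-compare behaviour, the following Python sketch matches a sensor reading against previously taught reference colors within a per-channel error range. The reference values, the tolerance, and the returned color names are assumptions made for illustration only and do not correspond to any particular sensor driver.

```python
# Minimal sketch of the teach-and-compare logic of a color sensor.
# The taught reference values and the tolerance are illustrative assumptions.
TAUGHT_REFERENCES = {
    "yellow": (230, 200, 40),   # reference RGB values taught in advance
    "blue": (30, 60, 200),
}

def match_taught_color(rgb, tolerance=40):
    """Return the taught color that the reading matches within the allowed
    per-channel error, or None if no reference color matches."""
    for name, reference in TAUGHT_REFERENCES.items():
        if all(abs(c - r) <= tolerance for c, r in zip(rgb, reference)):
            return name
    return None

# A reading close to the taught yellow is reported as "yellow".
print(match_taught_color((225, 195, 55)))
```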
In some embodiments of the present disclosure, the switch action part is an externally visible movable member of the switch, and the state of the switch is controlled by operating this movable member: the switch action part switches to the closed state when a closing operation is performed, and to the open state when an opening operation is performed. The operation of the switch action part may be automated, manual, or partly automated (for example, opening) and partly manual (for example, closing). In one embodiment, a typical switch action part is, for example, the blade of a knife switch.
In some embodiments of the present description, a memory of the inspection robot may store a mark-state correspondence table, which records the one-to-one correspondence between the entity mark of each switch and the corresponding state. Therefore, by querying this table, the switch state corresponding to the currently acquired entity mark can be determined. In an exemplary embodiment, taking color marks as an example, the mark-state correspondence table may be as shown in table 1 below.
TABLE 1
Switch      Color mark of switch action part      Switch state
Switch 1    Yellow                                Closed
Switch 1    Blue                                  Open
Switch 2    Grey                                  Closed
Switch 2    Black                                 Open
Switch 3    Yellow                                Closed
Switch 3    Blue                                  Open
For the inspection robot, different switches can be distinguished by their spatial positions (or other information), and the state of each switch is then identified on that basis. The position of each switch to be inspected in the data center can be stored in the memory of the inspection robot in advance; after the inspection robot reaches a switch, it can determine the switch's position through its on-board positioning device and match it against the pre-stored switch positions, thereby identifying which switch it is.
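A minimal sketch of this matching step is given below. The coordinates, switch names and distance threshold are hypothetical, since the specification only states that switch positions are pre-stored and matched against the robot's positioning result; the mark/state table mirrors Table 1.

```python
import math

# Pre-stored switch positions and a Table-1-style mark/state table.
# All coordinates and entries are illustrative assumptions.
SWITCH_POSITIONS = {
    "switch 1": (1.0, 4.2),
    "switch 2": (1.0, 5.7),
    "switch 3": (1.0, 7.1),
}

MARK_STATE_TABLE = {
    ("switch 1", "yellow"): "closed", ("switch 1", "blue"): "open",
    ("switch 2", "grey"): "closed",   ("switch 2", "black"): "open",
    ("switch 3", "yellow"): "closed", ("switch 3", "blue"): "open",
}

def identify_switch(robot_xy, max_dist=0.5):
    """Match the robot's measured position against the pre-stored switch
    positions and return the nearest switch within max_dist (metres)."""
    name, pos = min(SWITCH_POSITIONS.items(),
                    key=lambda item: math.dist(robot_xy, item[1]))
    return name if math.dist(robot_xy, pos) <= max_dist else None

switch = identify_switch((1.1, 5.6))              # -> "switch 2"
print(MARK_STATE_TABLE.get((switch, "grey")))     # -> "closed"
```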
Referring to fig. 1, in some embodiments of the present disclosure, an imaging part 4 may also be mounted on the steering platform 2 of the inspection robot. The imaging part 4 may be used to synchronously capture images of the switch action part. Correspondingly, the processor of the inspection robot can also perform image recognition on the image collected by the imaging part 4 and determine the state of the switch action part based on the image recognition result; it then confirms whether the state determined from the image recognition result is consistent with the state determined from the query result. If the two states are consistent, that state is taken as the switch state recognition result; if they are not, the entity mark on the switch action part is re-acquired and recognition is repeated accordingly. In this way, double recognition further improves the accuracy of switch state identification.
In some embodiments of the present description, the imaging part 4 may be a device that captures an image of a target and converts it into an image signal (or image data) suitable for processing by the processor of the inspection robot; a typical imaging part 4 is, for example, a camera. In use, the processor of the inspection robot can control the lifting rod 1 and the steering platform 2 so that the imaging part 4 directly faces the switch action part of the target switch, which facilitates image collection. In this specification, the imaging part 4 capturing images "synchronously" is relative to the mark collector 3: the mark collector 3 and the imaging part 4 can work at the same time, so that the state determined from the image recognition result and the state determined from the query result describe the same switch action part at the same moment, which helps improve the accuracy of switch state identification. Of course, since the state of the switch action part does not change frequently in many cases, "synchronous" in the embodiments of this specification allows a certain time deviation.
The present specification also provides a switch state identification method, which may be applied to an inspection robot. As shown in fig. 2, in some embodiments of the present specification, the switch state identification method may include the following steps:
s201, acquiring an entity mark on a switch action part; the entity mark is used for indicating the current state of the switch action part.
And S202, inquiring a corresponding relation table between the mark and the state according to the entity mark.
S203, determining the state of the switch action part based on the inquiry result.
In the embodiments of the specification, since the entity mark is easier to recognize than the switch image, the switch state identification method of these embodiments can improve the accuracy with which the inspection robot identifies the switch state.
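As a sketch only, the three steps can be written as one small function; the `collector` object with a `read_mark()` method is a hypothetical interface standing in for the mark collector, and the correspondence table is of the Table-1 form shown earlier.

```python
def recognize_switch_state(collector, switch_id, mark_state_table):
    """Sketch of steps S201-S203 under the assumptions stated above."""
    mark = collector.read_mark()                     # S201: acquire the entity mark
    state = mark_state_table.get((switch_id, mark))  # S202: query the mark/state table
    return state                                     # S203: state from the query result
```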
In the embodiments of the present specification, acquiring the entity mark on the switch action part may mean that the processor of the inspection robot passively receives, or actively reads, the entity mark collected from the switch action part by the mark collector. In some embodiments of the present description, the entity mark may be a color mark.
Taking color marks as an example, with reference to fig. 5, in some embodiments of the present disclosure, the switch state identification method may include the following steps:
and S51, acquiring the color marks of the switch action parts and synchronously acquiring the images of the switch action parts.
In the embodiments of the present description, the color mark may be acquired by a color sensor of the inspection robot, and the image may be acquired by an imaging unit such as a camera of the inspection robot.
S52, inquiring a corresponding relation table of colors and states according to the color marks, and determining the state of the switch action part based on the inquiry result; at the same time, the image recognition of the switch operation part can be performed, and the state of the switch operation part can be determined based on the image recognition result.
S53, confirming whether the state determined based on the image recognition result is consistent with the state determined based on the query result. If so, step S54 is performed; otherwise, the process returns to the step of acquiring the color mark of the switch action part in S51, i.e., the entity mark on the switch action part is re-acquired and recognition is repeated according to it.
In the embodiments of the present specification, "consistent" means that the two determined states are the same. For example, if the state determined from the image recognition result is the closed state and the state determined from the query result is also the closed state, the two are considered consistent, i.e., both are the closed state.
S54, if the state determined based on the image recognition result matches the state determined based on the inquiry result, the state is regarded as the switch state recognition result.
Therefore, the accuracy of switch state identification is further improved through double identification detection.
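A compact sketch of this double-check loop is shown below. The two callables stand for the query-based path and the image-based path of steps S51-S52; the bounded retry count and the failure return value are assumptions added for illustration, since the specification only states that the mark is re-acquired and recognition repeated on a mismatch.

```python
def recognize_with_double_check(state_from_mark, state_from_image, max_retries=3):
    """state_from_mark / state_from_image: callables returning 'open' or 'closed'.
    Returns the agreed state, or None if the two paths never agree within the
    assumed retry limit."""
    for _ in range(max_retries):
        by_mark = state_from_mark()      # S51/S52: color mark -> table lookup
        by_image = state_from_image()    # S51/S52: synchronous image recognition
        if by_mark == by_image:          # S53: consistent?
            return by_mark               # S54: take the state as the final result
    return None
```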
As shown in fig. 6, the image recognition of the switch actuating portion and the determination of the state of the switch actuating portion based on the image recognition result may include the steps of:
and S61, acquiring the image of the switch action part.
S62, recognizing the state of the switch action part based on the image contour to obtain a first recognition result; and recognizing the state of the switch action part based on the image color to obtain a second recognition result.
And S63, confirming whether the first recognition result is consistent with the second recognition result. If they are consistent, step S64 is performed; otherwise, the process returns to step S61, i.e., when the first recognition result is inconsistent with the second recognition result, the image of the switch action part is re-acquired and recognition is repeated according to the new image.
And S64, taking the recognition result as the state determined based on the image recognition result. In the embodiments of the present specification, the recognition results being consistent means that the first recognition result and the second recognition result are the same; therefore, when they are consistent, either the first or the second recognition result may be taken as the state determined based on the image recognition result.
Therefore, the accuracy of image identification can be improved through dual image identification, and the accuracy of switch state identification can be further improved.
With reference to the dashed-line portion on the left side of step S62 in fig. 6, identifying the state of the switch action part based on the image contour may include:
converting the image into a grayscale image, for example by a weighted-average method;
converting the grayscale image into a binary image: by selecting a proper threshold, the grayscale image is turned into an image with only two gray values, 0 and 255; this can be done, for example, with a region-adaptive thresholding method;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch outline and the switch action part outline into a pre-trained deep learning model to obtain a first recognition result.
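An OpenCV-based sketch of the graying, binarization and contour-extraction steps might look as follows. The block size and constant of the adaptive threshold, and the way the switch and action-part outlines would be selected from the candidate contours, are assumptions; the pre-trained detection model is only indicated by a comment.

```python
import cv2

def extract_outline_candidates(image_path):
    """Gray the image (weighted average), binarize it with a region-adaptive
    threshold, and return candidate contours for the switch and its action part."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)           # weighted-average graying
    binary = cv2.adaptiveThreshold(gray, 255,                 # 0/255 binary image
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # The switch outline and the action-part outline would be selected from
    # `contours` (e.g. by area and shape) and fed to the pre-trained model
    # to obtain the first recognition result.
    return contours
```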
In an embodiment of the specification, the pre-trained deep learning model may include, but is not limited to, a Faster R-CNN network. In an R-CNN network, the candidate boxes in an image overlap heavily, so feature computation is highly redundant. Faster R-CNN instead normalizes the whole image and feeds it directly into the deep network, then maps the candidate regions extracted from the image onto the resulting feature map, so the early-layer features of the candidate regions do not need to be recomputed, which speeds up training and testing. Moreover, unlike R-CNN, which trains independent classifiers and regressors on large numbers of features, Faster R-CNN performs the classification judgment and the position fine-tuning within one deep network and needs no additional storage. The Faster R-CNN network first frames the switch outline and the switch action part outline; when the action part outline is recognized to be in the upper position of the switch outline, the switch is in the closed state, and when it is in the lower position, the switch is in the open state.
As shown in FIG. 7, the principle of using Faster R-CNN to predict switch states is roughly as follows:
First, a switch image M × N of arbitrary size is input into the CNN backbone and propagated forward to the last shared convolutional layer. On one hand this yields the feature map input to the Region Proposal Network (RPN); on the other hand, the feature map continues forward through further convolutional layers to generate a higher-dimensional feature map. From its input feature map, the RPN produces region proposals and region scores; non-maximum suppression (with a threshold of, for example, 0.7) is applied to the region scores, and the Top-N scoring region proposals are output to the Region of Interest (RoI) pooling layer. The region proposals are the prediction boxes identified by the network, refined according to the predicted offsets; boxes that cross the image border are removed and overlapping boxes are suppressed. For each prediction box, non-maximum suppression keeps only local maxima whose overlap does not exceed 0.7, and the boxes with the first N local-maximum scores are passed to the next layer of the network.
The obtained high-dimensional feature map and the output region proposals are then input into the RoI pooling layer together to extract the features corresponding to each region proposal.
After passing through the fully connected layers, the region-proposal features yield the classification score of the region and a regressed bounding box (bbox). The bbox gives the position offset of the prediction box of the switch action part; the final prediction box position is determined from this offset, the current position of the switch action part is judged from the box position, and whether the switch is closed (i.e., switched on) is obtained.
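The following sketch shows how such a prediction could be run with torchvision's generic Faster R-CNN implementation. The specification does not name a framework, so the library choice, the score threshold, and the label convention (1 = switch outline, 2 = switch action part) are assumptions; a model fine-tuned on switch images would replace the pretrained weights shown here, and error handling for missing detections is omitted.

```python
import torch
import torchvision

# Generic Faster R-CNN detector; in practice a model fine-tuned on switch
# images (two assumed classes: 1 = switch outline, 2 = action part) is loaded.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def predict_switch_state(image_tensor, score_thresh=0.7):
    """image_tensor: float tensor of shape (3, H, W), values in [0, 1]."""
    with torch.no_grad():
        det = model([image_tensor])[0]            # dict with boxes, labels, scores
    keep = det["scores"] > score_thresh
    boxes, labels = det["boxes"][keep], det["labels"][keep]
    switch_box = boxes[labels == 1][0]            # assumed label ids (see above)
    part_box = boxes[labels == 2][0]
    # Decision rule from the description: action part in the upper part of the
    # switch outline -> closed; in the lower part -> open (y grows downward).
    switch_mid_y = (switch_box[1] + switch_box[3]) / 2
    part_mid_y = (part_box[1] + part_box[3]) / 2
    return "closed" if part_mid_y < switch_mid_y else "open"
```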
With reference to the dashed-line portion on the right side of step S62 in fig. 6, identifying the state of the switch action part based on the image color may include:
extracting the color of the switch area in the image; for example, the color distribution of the entire switch area, including the switch color, the switch action part color, the background color, and the like, may be extracted from the image with a computer vision library such as OpenCV.
Determining a color belonging to the switching operation unit among the colors of the switching area; according to the color distribution of the whole switch area, the color of the switch action part can be identified;
inquiring a corresponding relation table of colors and states according to the colors belonging to the switch action part;
obtaining a second identification result according to the query result; for example, when the color of the switch action part is identified to be the closed color, the switch action part is judged to be in the closed state at the moment; when the color of the switch action part is recognized to be off color, the switch action part is judged to be off state at the moment.
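A sketch of this color branch using OpenCV is shown below. The HSV hue bands (yellow for closed, blue for open) mirror the Table 1 entries for switch 1 and are assumptions about the actual paint colors, and the input is assumed to be the already-cropped action-part region.

```python
import cv2
import numpy as np

# Hue bands on OpenCV's 0-179 hue scale; the values are illustrative
# assumptions mirroring the yellow = closed / blue = open convention of Table 1.
HUE_STATE_TABLE = [
    (20, 35, "closed"),     # yellow mark
    (100, 130, "open"),     # blue mark
]

def state_from_region_color(region_bgr):
    """Estimate the dominant hue of the cropped action-part region and look it
    up in the color/state table; return None if the color is not in the table."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    dominant_hue = int(np.median(hsv[:, :, 0]))
    for low, high, state in HUE_STATE_TABLE:
        if low <= dominant_hue <= high:
            return state
    return None
```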
While the process flows described above include operations that occur in a particular order, it should be appreciated that the processes may include more or fewer operations, which may be executed sequentially or in parallel (e.g., using parallel processors or a multi-threaded environment).
The embodiments of the present description provide a switch state recognition device, which can be configured on an inspection robot, thereby forming the inspection robot of the embodiments of the present description. In some embodiments of the present disclosure, the switch state identification device may include a tag collector and a processor. The mark collector can be used for collecting the entity mark on the switch action part; the entity mark is used for indicating the current state of the switch action part. The processor may be configured to obtain the entity flag, query a flag-state correspondence table according to the entity flag, and determine the state of the switch actuation portion based on a query result.
In some embodiments of the present disclosure, the entity mark is a color mark and the mark collector is a color sensor.
In some embodiments of the present disclosure, the switch state identification device may further include an imaging portion. The imaging part can be used for synchronously acquiring images of the switch action part. Correspondingly, the processor may be further configured to: performing image recognition on the image, and determining the state of the switch action part based on the image recognition result; confirming whether the state determined based on the image recognition result is consistent with the state determined based on the query result; if the state determined based on the image recognition result is consistent with the state determined based on the query result, taking the state as a switch state recognition result; and if the state determined based on the image recognition result is inconsistent with the state determined based on the inquiry result, re-acquiring the entity mark on the switch action part and re-recognizing according to the entity mark.
In some embodiments of the present description, the synchronizing image-recognizing the switch actuating portion and determining the state of the switch actuating portion based on the image-recognizing result may include:
acquiring an image of the switch action part;
recognizing the state of the switch action part based on the image contour to obtain a first recognition result; identifying the state of the switch action part based on the image color to obtain a second identification result;
confirming whether the first recognition result is consistent with the second recognition result;
and if the first recognition result is consistent with the second recognition result, taking the first recognition result as a state determined based on the image recognition result.
In some embodiments of the present specification, the synchronizing performs image recognition on the switch actuating portion, and determines the state of the switch actuating portion based on the image recognition result, further including:
and if the first recognition result is inconsistent with the second recognition result, acquiring the image of the switch action part again and re-recognizing the image according to the image.
In some embodiments of the present description, the identifying the state of the switch action part based on the image contour includes:
converting the image into a binary image;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch outline and the switch action part outline into a pre-trained deep learning model to obtain a first recognition result.
In some embodiments of the present description, the identifying the state of the switch actuation portion based on the image color includes:
extracting the color of a switch area in the image;
determining a color belonging to the switching operation unit among the colors of the switching area;
inquiring a corresponding relation table of colors and states according to the colors belonging to the switch action part;
and obtaining a second identification result according to the query result.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As shown in fig. 8, in some embodiments of the present description, the inspection robot 802 may include one or more processors 804, such as one or more Central Processing Units (CPUs) or Graphics Processors (GPUs), each of which may implement one or more hardware threads. The inspection robot 802 may also include any memory 806 for storing any kind of information such as code, settings, data, etc., and in one embodiment, a computer program stored on the memory 806 and executable on the processor 804 may perform the instructions of the switch state identification method of any of the embodiments described above when executed by the processor 804. For example, and without limitation, memory 806 may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of the inspection robot 802. In one case, when the processor 804 executes the associated instructions stored in any memory or combination of memories, the inspection robot 802 may perform any of the operations of the associated instructions. The inspection robot 802 also includes one or more drive mechanisms 808, such as a hard disk drive mechanism, an optical disk drive mechanism, etc., for interacting with any memory.
The inspection robot 802 may also include an input/output interface 810(I/O) for receiving various inputs (via input device 812) and for providing various outputs (via output device 814). One particular output mechanism may include a presentation device 816 and an associated graphical user interface 818 (GUI). In other embodiments, the input/output interface 810(I/O), the input device 812, and the output device 814 may not be included, and only serve as one inspection robot in the network. The inspection robot 802 may also include one or more network interfaces 820 for exchanging data with other devices via one or more communication links 822. One or more communication buses 824 couple the above-described components together. Input devices 812 may include, but are not limited to, cameras, color sensors, and the like, among others.
Communication link 822 may be implemented in any manner, such as over a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. The communication link 822 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products of some embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processor to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processor, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processor to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processor to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, the inspection robot includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by the inspection robot. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described embodiments may also be practiced in distributed computing environments where tasks are performed by remote processors that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (18)

1. A switch state identification method is characterized by comprising the following steps:
acquiring an entity mark on a switch action part; the entity mark is used for indicating the current state of the switch action part;
inquiring a corresponding relation table between the mark and the state according to the entity mark;
determining the state of the switch action part based on the query result.
2. The switch state identification method of claim 1, wherein the entity mark is a color mark.
3. The switch state identification method of claim 2, further comprising:
synchronously carrying out image recognition on the switch action part, and determining the state of the switch action part based on the image recognition result;
confirming whether the state determined based on the image recognition result is consistent with the state determined based on the query result;
and if the state determined based on the image recognition result is consistent with the state determined based on the inquiry result, taking the state as a switch state recognition result.
4. The switch state identification method of claim 3, further comprising:
and if the state determined based on the image recognition result is inconsistent with the state determined based on the inquiry result, re-acquiring the entity mark on the switch action part and re-recognizing according to the entity mark.
5. The switch state identification method of claim 3, wherein the synchronously carrying out image recognition on the switch action part and determining the state of the switch action part based on the image recognition result comprises:
acquiring an image of the switch action part;
recognizing the state of the switch action part based on the image contour to obtain a first recognition result, and recognizing the state of the switch action part based on the image color to obtain a second recognition result;
confirming whether the first recognition result is consistent with the second recognition result;
and if the first recognition result is consistent with the second recognition result, taking the first recognition result as the state determined based on the image recognition result.
6. The switch state identification method of claim 5, wherein the synchronously carrying out image recognition on the switch action part and determining the state of the switch action part based on the image recognition result further comprises:
and if the first recognition result is inconsistent with the second recognition result, re-acquiring the image of the switch action part and repeating the recognition based on the re-acquired image.
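The dual image recognition and retry of claims 5 and 6 could be sketched as follows, again with assumed interfaces for the camera and the two recognizers:

```python
def recognize_image_state(capture_image, contour_recognizer, color_recognizer, max_retries=3):
    """Contour/color dual recognition with a consistency check (claims 5 and 6).

    capture_image, contour_recognizer, color_recognizer, and the retry limit
    are assumed interfaces, not part of the claims.
    """
    for _ in range(max_retries):
        image = capture_image()            # acquire an image of the switch action part
        first = contour_recognizer(image)  # first recognition result (image contour)
        second = color_recognizer(image)   # second recognition result (image color)
        if first == second:
            return first                   # consistent: the image-based state
        # inconsistent: re-acquire the image and recognize again
    raise RuntimeError("Contour-based and color-based results did not agree")
```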
7. The switch state identification method of claim 5, wherein the recognizing the state of the switch action part based on the image contour comprises:
converting the image into a binary image;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch contour and the switch action part contour into a pre-trained deep learning model to obtain a first recognition result.
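One possible reading of the contour branch in claim 7, sketched with OpenCV; the thresholding method, the contour-selection heuristic, the Hu-moment features, and the model interface are all assumptions, since the claim does not specify them:

```python
import cv2
import numpy as np

def recognize_by_contour(image_bgr, model):
    """Contour-based recognition sketch for claim 7 (assumed details throughout)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Convert the image into a binary image (Otsu thresholding assumed).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Identify candidate contours; assume the two largest correspond to the
    # switch contour and the switch action part contour (a simplification).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    if len(contours) < 2:
        raise ValueError("Could not find both the switch and the action part contours")
    switch_contour, action_contour = contours
    # Feed both contours to a pre-trained model; here the model is assumed to
    # accept a flattened, fixed-length feature vector (Hu moments of each contour).
    features = np.concatenate([
        cv2.HuMoments(cv2.moments(switch_contour)).flatten(),
        cv2.HuMoments(cv2.moments(action_contour)).flatten(),
    ]).reshape(1, -1)
    return model.predict(features)  # first recognition result
```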
8. The switch state identification method of claim 5, wherein the recognizing the state of the switch action part based on the image color comprises:
extracting the colors of a switch area in the image;
determining the color belonging to the switch action part among the colors of the switch area;
querying a correspondence table between colors and states according to the color belonging to the switch action part;
and obtaining a second recognition result according to the query result.
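The color branch in claim 8 might be sketched as below; the HSV ranges, the color-to-state table, and the region format are assumptions for illustration only:

```python
import cv2
import numpy as np

# Assumed correspondence table between mark colors and switch states.
COLOR_TO_STATE = {"red": "closed", "green": "open"}
# Assumed HSV ranges for the mark colors; real thresholds depend on the marks used.
HSV_RANGES = {
    "red": ((0, 120, 70), (10, 255, 255)),
    "green": ((40, 80, 70), (80, 255, 255)),
}

def recognize_by_color(image_bgr, switch_region):
    """Color-based recognition sketch for claim 8.

    switch_region is an assumed (x, y, w, h) box locating the switch area.
    """
    x, y, w, h = switch_region
    roi = image_bgr[y:y + h, x:x + w]                # extract the switch area
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    best_color, best_count = None, 0
    for color, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        count = int(cv2.countNonZero(mask))
        if count > best_count:                       # dominant mark color wins
            best_color, best_count = color, count
    if best_color is None:
        raise ValueError("No known mark color found in the switch area")
    return COLOR_TO_STATE[best_color]                # second recognition result
```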
9. A switch state identifying device, comprising:
the mark collector is used for collecting the entity mark on the switch action part; the entity mark is used for indicating the current state of the switch action part;
and the processor is used for acquiring the entity mark, querying the correspondence table between marks and states according to the entity mark, and determining the state of the switch action part based on the query result.
10. The switch state identification device of claim 9, wherein the entity mark is a color mark and the mark collector is a color sensor.
11. The switch state identification device of claim 10, further comprising:
the imaging part is used for synchronously acquiring the image of the switch action part;
the processor is further configured to: perform image recognition on the image and determine the state of the switch action part based on the image recognition result; confirm whether the state determined based on the image recognition result is consistent with the state determined based on the query result; and if the two states are consistent, take that state as the switch state recognition result.
12. The switch state identification device of claim 11, wherein the processor is further configured to:
and if the state determined based on the image recognition result is inconsistent with the state determined based on the query result, re-acquire the entity mark on the switch action part and repeat the recognition according to the re-acquired entity mark.
13. The switch state identification device of claim 11, wherein the performing image recognition on the image and determining the state of the switch action part based on the image recognition result comprises:
acquiring an image of the switch action part;
recognizing the state of the switch action part based on the image contour to obtain a first recognition result, and recognizing the state of the switch action part based on the image color to obtain a second recognition result;
confirming whether the first recognition result is consistent with the second recognition result;
and if the first recognition result is consistent with the second recognition result, taking the first recognition result as the state determined based on the image recognition result.
14. The switch state identification device of claim 13, wherein the performing image recognition on the image and determining the state of the switch action part based on the image recognition result further comprises:
and if the first recognition result is inconsistent with the second recognition result, re-acquiring the image of the switch action part and repeating the recognition based on the re-acquired image.
15. The switch state identification device of claim 13, wherein the recognizing the state of the switch action part based on the image contour comprises:
converting the image into a binary image;
identifying a switch contour and a switch action part contour from the binary image;
and inputting the switch contour and the switch action part contour into a pre-trained deep learning model to obtain a first recognition result.
16. The switch state identification device of claim 13, wherein the recognizing the state of the switch action part based on the image color comprises:
extracting the colors of a switch area in the image;
determining the color belonging to the switch action part among the colors of the switch area;
querying a correspondence table between colors and states according to the color belonging to the switch action part;
and obtaining a second recognition result according to the query result.
17. An inspection robot, characterized in that the inspection robot is provided with the switch state identification device according to any one of claims 9 to 16.
18. A computer storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor of an inspection robot, implements the method according to any one of claims 1 to 8.
CN202110556070.5A 2021-05-21 2021-05-21 Inspection robot, switch state identification method and device and storage medium Pending CN113177962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110556070.5A CN113177962A (en) 2021-05-21 2021-05-21 Inspection robot, switch state identification method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110556070.5A CN113177962A (en) 2021-05-21 2021-05-21 Inspection robot, switch state identification method and device and storage medium

Publications (1)

Publication Number Publication Date
CN113177962A true CN113177962A (en) 2021-07-27

Family

ID=76929768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110556070.5A Pending CN113177962A (en) 2021-05-21 2021-05-21 Inspection robot, switch state identification method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113177962A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002092592A (en) * 2000-09-12 2002-03-29 Toko Electric Corp System and method for automatic patrol
CN1845606A (en) * 2006-05-15 2006-10-11 华北电力大学(北京) Automatic image recognizing and monitoring method for power high voltage circuit breaker switch state
CN105137343A (en) * 2015-08-27 2015-12-09 国家电网公司 Relay protection pressure plate state detection system and detection method
CN106570865A (en) * 2016-11-08 2017-04-19 国家电网公司 Digital-image-processing-based switch state detecting system of power equipment
CN108334815A (en) * 2018-01-11 2018-07-27 深圳供电局有限公司 Inspection method of power secondary equipment, and switch state identification method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116338449A (en) * 2023-05-26 2023-06-27 中铁电气化局集团有限公司 Online testing method and system for switching characteristics of circuit breaker
CN116338449B (en) * 2023-05-26 2023-08-11 中铁电气化局集团有限公司 Online testing method and system for switching characteristics of circuit breaker

Similar Documents

Publication Publication Date Title
CN111695622B (en) Identification model training method, identification method and identification device for substation operation scene
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
US11893774B2 (en) Systems and methods for training machine models with augmented data
KR102478335B1 (en) Image Analysis Method and Server Apparatus for Per-channel Optimization of Object Detection
CN114693661A (en) Rapid sorting method based on deep learning
CN114898319B (en) Vehicle type recognition method and system based on multi-sensor decision level information fusion
CA3110975A1 (en) Tyre sidewall imaging method
CN105809718A (en) Object tracking method with minimum trajectory entropy
CN113177962A (en) Inspection robot, switch state identification method and device and storage medium
KR102285625B1 (en) Non-contact type recognition apparatus and method of object's attibutes
CN109079777B (en) Manipulator hand-eye coordination operation system
CN114792417A (en) Model training method, image recognition method, device, equipment and storage medium
US20210390419A1 (en) Device and Method for Training and Testing a Classifier
US9984294B2 (en) Image classification method and apparatus for preset tour camera
RU2731052C1 (en) Robot automatic system for sorting solid municipal waste based on neural networks
CN115909351B (en) Container number identification method and device based on deep learning
CN108074264A (en) A kind of classification multi-vision visual localization method, system and device
CN111402185B (en) Image detection method and device
US10735660B2 (en) Method and device for object identification
CN114782827B (en) Object capture point acquisition method and device based on image
CN114067106B (en) Inter-frame contrast-based pantograph deformation detection method and equipment and storage medium
CN115527207A (en) Method for detecting fault of control rod nut of train brake adjuster based on deep neural network
US20230343105A1 (en) Method for detecting road users
CN113917453A (en) Multi-sensor fusion method based on radar and video
CN112989997A (en) 3D target detection method and system based on multi-information fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination