CN111951161A - Target identification method and system and inspection robot - Google Patents

Target identification method and system and inspection robot

Info

Publication number
CN111951161A
Authority
CN
China
Prior art keywords
information
camera
image information
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010706175.XA
Other languages
Chinese (zh)
Inventor
李超
敖奇
张奎刚
王福闯
孙红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRSC Research and Design Institute Group Co Ltd
Original Assignee
CRSC Research and Design Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRSC Research and Design Institute Group Co Ltd filed Critical CRSC Research and Design Institute Group Co Ltd
Priority to CN202010706175.XA
Publication of CN111951161A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 18/00 - Arms
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 - Sensing devices
    • B25J 19/021 - Optical sensing devices
    • B25J 19/023 - Optical sensing devices including video camera means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1661 - Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 - Vision controlled systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of rail transit and discloses a target identification method, a target identification system, and an inspection robot. The target identification method comprises the following steps: step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan; step S2: acquiring first image information of a target shielded by a cabinet grid through the first camera, and obtaining control information according to the first image information; step S3: controlling a second camera to acquire a plurality of pieces of second image information of the target according to the control information; step S4: synthesizing the pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding. With this method, the current state of the target can be identified during inspection without opening the cabinet door, and the applicability of the inspection robot is improved.

Description

Target identification method and system and inspection robot
Technical Field
The invention belongs to the technical field of rail transit, and particularly relates to a target identification method and system and an inspection robot.
Background
With the rapid development of Chinese railway construction, the high-speed, high-density operation of railway trains imposes ever stricter requirements on the safety and the operation and maintenance management of railway signaling equipment and systems. Signal relay stations along high-speed railway sections are mostly unattended. Inconvenient transport, together with the inconvenience and traffic-safety risks that night patrols bring to signal equipment maintenance and emergency response, means that signaling inspection personnel cannot fully grasp the operating condition of unattended relay-station equipment in real time, and blind spots may appear in the monitored application state of fixed-point signal equipment.
The intelligent inspection system for unattended signal relay stations of the national railway mainly performs automatic inspection and monitoring of railway equipment rooms, monitoring and alarming in real time on the technical indicators of signal equipment, key devices, and instruments. It greatly improves the monitoring and operation-and-maintenance level of high-speed railway signal equipment, strengthens the security control of key high-speed railway sites, shortens equipment fault-handling delays, and safeguards the safe operation of high-speed railways.
In practice, however, existing inspection robots lack the grid-virtualization (blurring) capability: they can only inspect cabinets without grid shielding and can only display the pictures captured by the camera. The measure most often taken at present is therefore to remove the cabinet door outright, but this carries the risk of misoperation by unauthorized persons.
In addition, existing inspection robots inspect known targets: the types and sizes of the boards in a cabinet are known before inspection, so image acquisition and identification can be performed in a targeted manner. In many practical situations, however, information about the targets cannot be obtained before inspection, making targeted inspection impossible. Existing inspection robots therefore have limited applicability and cannot adapt to cases where target information is unavailable before inspection.
There is therefore an urgent need for a target identification method, a target identification system, and an inspection robot that overcome these defects.
Disclosure of Invention
In view of the above problems, the present invention provides a target identification method, comprising:
step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan;
step S2: acquiring first image information of a target shielded by a cabinet grid through the first camera, and obtaining control information according to the first image information;
step S3: controlling a second camera to acquire a plurality of pieces of second image information of the target according to the control information;
step S4: synthesizing the plurality of pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
In the above-mentioned target identification method, step S2 includes:
step S21: acquiring the first image information of the target through the first camera;
step S22: identifying the first image information through a deep learning algorithm to obtain parameter information of the target;
step S23: and obtaining the control information according to the parameter information.
In the above target identification method, the parameter information includes: position information of the target and size information of the target.
In the above target identification method, in step S3, the mechanical arm is controlled according to the control information to drive the second camera to move to a plurality of second positions, and each time the second camera moves to one of the second positions, the second camera acquires one piece of the second image information of the target.
The target identification method of any one of the above, wherein in step S4, the plurality of pieces of second image information are synthesized into the complete final image information free of cabinet-grid shielding through feature point extraction and matching, image registration, and image fusion.
The present invention also provides a target recognition system, including:
the first camera and the second camera are arranged on the mechanical arm;
the control unit is electrically connected with the mechanical arm and the first camera; after the control unit controls the mechanical arm according to a work plan to drive the first camera to move to a first position, the control unit controls the first camera to acquire first image information of a target shielded by the cabinet grid;
the image information processing unit obtains control information according to the first image information and outputs the control information to the control unit, the control unit controlling a second camera to acquire a plurality of pieces of second image information of the target according to the control information;
and the image synthesis unit synthesizes the pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
In the above target identification system, the image information processing unit identifies the first image information through a deep learning algorithm to obtain parameter information of the target, and then obtains control information according to the parameter information.
The above target recognition system, wherein the parameter information includes: position information of the target and size information of the target.
In the above target recognition system, the control unit controls the mechanical arm according to the control information to drive the second camera to move to a plurality of second positions, and each time the second camera moves to one of the second positions, the control unit controls the second camera to acquire one piece of the second image information of the target.
The target recognition system of any one of the above, wherein the image synthesis unit synthesizes the plurality of pieces of second image information into the complete final image information free of cabinet-grid shielding through feature point extraction and matching, image registration, and image fusion.
The invention also provides a patrol robot, which comprises:
a mechanical arm;
the target recognition system of any one of the above, connected to the mechanical arm; the inspection robot obtains, through the recognition system, final image information of the complete target free of cabinet-grid shielding.
Compared with the prior art, the invention has the following effects:
A GPU unit is arranged in the inspection robot, so the inspection robot can run larger and more accurate deep learning networks and process high-resolution, high-frame-rate video streams in real time; meanwhile, with the invention, the current state of the target can be identified during inspection without opening the cabinet door; in addition, the applicability of the inspection robot is greatly improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of object recognition in accordance with the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S2 in FIG. 1;
fig. 3 is a schematic structural diagram of the object recognition system of the present invention.
Wherein the reference numerals are:
a target recognition system: 1
A first camera: 11
A second camera: 12
A control unit: 13
Image information processing unit 14
Image synthesizing unit 15
Mechanical arm: 2
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
The exemplary embodiments of the present invention and the description thereof are provided to explain the present invention and not to limit the present invention. Additionally, the same or similar numbered elements/components used in the drawings and the embodiments are used to represent the same or similar parts.
In the present invention, the mechanical arm cooperates with the cameras: any board can be recognized in a wide-view picture, the partial views are stitched into a complete picture through the grid-virtualization (blurring) capability, and the specific type of the target is recognized from the complete, de-gridded picture.
Referring to fig. 1, fig. 1 is a flow chart of the target identification method according to the present invention. As shown in fig. 1, the target identification method of the present invention includes:
step S1: and controlling the mechanical arm to drive the first camera to move to the first position according to the work plan.
Specifically, after a work plan, such as a patrol work plan, is received, the work plan is analyzed to obtain position information of the cabinet to be detected and position information of a first position acquired by the first camera, and the mechanical arm is controlled to drive the first camera to move to the first position according to the position information of the first position.
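By way of illustration only, a minimal Python sketch of step S1 follows; the patent does not specify a work-plan format, so the WorkPlan fields and the arm's move_to interface are assumptions:

```python
from dataclasses import dataclass

@dataclass
class WorkPlan:
    """Hypothetical inspection work plan; fields are assumed, not from the patent."""
    cabinet_id: str
    cabinet_position: tuple  # (x, y, z) of the cabinet to be inspected
    first_position: tuple    # (x, y, z) viewpoint for the first camera

def execute_step_s1(plan: WorkPlan, arm) -> None:
    """Step S1: drive the first camera to the first position given by the plan."""
    # 'arm' is assumed to expose a move_to(position) call on the mechanical arm.
    arm.move_to(plan.first_position)
```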
Step S2: the first image information of the target shielded by the cabinet grid is acquired and obtained through the first camera, and control information is obtained according to the first image information.
Referring to fig. 2, fig. 2 is a flowchart illustrating a sub-step of step S2 in fig. 1. As shown in fig. 2, the step S2 includes:
step S21: acquiring the first image information of the target through the first camera;
step S22: identifying the first image information through a deep learning algorithm to obtain parameter information of the target;
step S23: and obtaining the control information according to the parameter information.
Wherein the parameter information includes: position information of the target and size information of the target.
Specifically, the first camera acquires first image information of the target according to a first acquisition instruction. The first camera is connected to a GPU unit: after obtaining the first image information, the first camera sends it to the GPU unit, which identifies the position information and the size information of the target in the first image through a trained deep learning algorithm and outputs control information according to that position and size information.
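To make this step concrete, a minimal sketch follows; the detector interface and the ControlInfo fields are illustrative assumptions, not the patent's actual model or data structures:

```python
from dataclasses import dataclass

@dataclass
class ControlInfo:
    """Control information derived from the first image (assumed fields)."""
    position: tuple  # (x, y) of the target's bounding box in the first image
    size: tuple      # (w, h) of the target's bounding box
    label: str       # recognized type, e.g. a board or switch name

def derive_control_info(first_image, detector) -> ControlInfo:
    """Run the trained detector on the first image and keep the best detection.

    'detector' is a hypothetical callable returning a list of dicts with
    'box' (x, y, w, h), 'label' and 'score' keys; real models differ.
    """
    detections = detector(first_image)
    best = max(detections, key=lambda d: d["score"])
    x, y, w, h = best["box"]
    return ControlInfo(position=(x, y), size=(w, h), label=best["label"])
```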
The deep learning algorithm of the invention marks identified targets of different types with frames of different colors, so that one board or several boards can be recognized at the same time; when several boards are recognized, they may be of the same type or of different types.
The deep learning algorithm is trained on a large amount of data; pictures of boards, and of the grids of different cabinets, from many manufacturers and devices were therefore collected, amounting to more than ten thousand samples. The raw data are labeled, training is performed on a laboratory GPU server, and the trained deep learning model is finally deployed to the GPU unit of the inspection robot. The training process can be roughly divided into four stages: data acquisition, data labeling, algorithm design, and training and verification. The labels are the names of the different boards and switches. The data set is divided into a training set, a validation set, and a test set. The training set is fed into the deep learning network, whose parameters are updated continuously until the designed loss function converges while overfitting is avoided; the validation set monitors the training state during training; the trained network is then tested on the test set, and retrained if the results are poor.
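As a concrete illustration of the split just described, a minimal sketch follows; the 80/10/10 proportions are an assumption, since the patent only states that the data set is divided into the three subsets:

```python
import random

def split_dataset(samples, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle and split labeled samples into train / validation / test sets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],                 # updates network parameters
            shuffled[n_train:n_train + n_val],  # monitors the training state
            shuffled[n_train + n_val:])         # final test; retrain if poor
```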
Step S3: a second camera is controlled according to the control information to acquire a plurality of pieces of second image information of the target. In step S3, the mechanical arm is controlled according to the control information to drive the second camera to a plurality of second positions, and each time the second camera reaches one of the second positions, it acquires one piece of second image information of the target.
Specifically, the first of the second positions for the second camera is obtained by parsing the control information, together with the number of pieces of second image information to be acquired and the distance the second camera moves between shots. The mechanical arm is controlled to drive the second camera to the first second position according to that position information, and the second camera acquires the first piece of second image information of the target; the arm then drives the second camera to the next second position according to the per-move distance, and acquisition continues; and so on until all acquisitions are finished.
For example, suppose a board 20 cm high and 4 cm wide needs to be de-gridded. The board is scanned from top to bottom along its height, moving 4 cm downward each time, so that shooting 5 times covers the whole 20 cm board. Although a board is used as the example in this embodiment, the invention does not limit the type of the target; in other embodiments the target may also be a switch.
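A worked sketch of this capture plan, assuming (as in the example) that each shot covers exactly one step of the target's height; the helper name is illustrative:

```python
import math

def plan_second_positions(target_height_cm, step_cm):
    """Top-to-bottom vertical offsets (cm) at which the second camera shoots."""
    n_shots = math.ceil(target_height_cm / step_cm)
    return [i * step_cm for i in range(n_shots)]

# A 20 cm board with a 4 cm step needs 5 shots, at offsets 0, 4, 8, 12 and 16 cm.
print(plan_second_positions(20, 4))  # -> [0, 4, 8, 12, 16]
```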
Step S4: the plurality of pieces of second image information are synthesized, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
Specifically, image stitching can be divided into several parts: feature point extraction and matching, image registration, and image fusion. Feature extraction can use descriptors such as SIFT, SURF, Harris corners, or ORB, all of which can be used for image stitching and each of which has its own advantages; this method uses SURF, and stitching with the other descriptors is similar. Feature points of the two pictures are extracted and matched to obtain enough matching points, which makes the image registration more accurate. Image registration then transforms the two pictures into the same coordinate system, using an algorithm such as RANSAC. Image fusion mainly handles the seam between the two pictures; methods such as weighted averaging make the edge transition more natural. The pieces of second image information are thus synthesized into complete final image information free of cabinet-grid shielding, and the final image reproduces all the information of the target behind the grid so that the type of the target can be judged.
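A minimal OpenCV sketch of this pipeline for one vertically adjacent pair of images; ORB stands in for SURF here because SURF requires a non-free opencv-contrib build (the patent itself notes that other descriptors stitch similarly), and the canvas size and 50/50 blend weights are assumptions:

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch BGR image img2 under img1: matching, RANSAC registration, fusion."""
    # Feature point extraction and matching.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    # Image registration: a RANSAC homography maps img2 into img1's frame.
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Image fusion: warp img2, paste img1, and weighted-average the overlap
    # so the seam transition looks natural.
    h, w = img1.shape[:2]
    result = cv2.warpPerspective(img2, H, (w, 2 * h))  # room below for the strip
    top = result[:h, :w]
    overlap = top.sum(axis=2) > 0
    top[~overlap] = img1[~overlap]
    top[overlap] = (0.5 * img1[overlap] + 0.5 * top[overlap]).astype(np.uint8)
    return result
```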
In this embodiment, the first camera is preferably a depth camera and the second camera an industrial camera, although the invention is not limited thereto. Specifically, the depth camera and the industrial camera are fixed at the front end of the mechanical arm, their positions relative to the arm being fixed. After the mechanical arm shoots with the depth camera at the first position (about 30 cm from the cabinet), the parameter information of the target is identified to obtain the control information; the control information then drives the industrial camera to a second position (about 10 cm from the cabinet door). After each shot the industrial camera moves to the next second position and shoots again; once all shooting tasks are finished, all the pictures are stitched together to reproduce the final image information of the target in full, which reproduces all the information of the target behind the grid so that its type can be judged.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a target recognition system according to the present invention. As shown in fig. 3, the object recognition system 1 of the present invention includes:
a first camera 11 and a second camera 12 mounted on the robot arm 2;
the control unit 13 is electrically connected to the mechanical arm 2 and the first camera 11; after the control unit 13 controls the mechanical arm 2 according to the work plan to drive the first camera 11 to move to the first position, the control unit 13 controls the first camera 11 to acquire first image information of a target shielded by a cabinet grid;
an image information processing unit 14, which obtains control information according to the first image information and outputs the control information to the control unit 13, the control unit 13 controlling the second camera 12 according to the control information to acquire a plurality of pieces of second image information of the target;
and an image synthesis unit 15, which synthesizes the pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
In the embodiment, the image information processing unit 14 is a GPU unit, but the invention is not limited thereto.
Further, the image information processing unit 14 identifies the first image information through a deep learning algorithm to obtain parameter information of the target, and then obtains control information according to the parameter information.
Wherein the parameter information includes: position information of the target and size information of the target.
Still further, the control unit 13 controls the mechanical arm 2 according to the control information to drive the second camera 12 to move to a plurality of second positions, and each time the second camera 12 moves to one of the second positions, the control unit 13 controls the second camera 12 to acquire one piece of the second image information of the target.
Further, the image synthesis unit 15 synthesizes the plurality of pieces of second image information into the complete final image information free of cabinet-grid shielding through feature point extraction and matching, image registration, and image fusion.
The present invention also provides an inspection robot, comprising the above recognition system 1 and the mechanical arm 2; the recognition system 1 is connected with the mechanical arm 2, and the inspection robot obtains, through the recognition system 1, final image information of the complete target free of cabinet-grid shielding.
In the present embodiment, the mechanical arm is a multi-axis robot arm, but the invention is not limited thereto; in other embodiments, an XYZ stage or the like may be used instead.
In summary, a GPU unit is arranged in the inspection robot, so the inspection robot can run larger and more accurate deep learning networks and process high-resolution, high-frame-rate video streams in real time; meanwhile, with the invention, the current state of the target can be identified during inspection without opening the cabinet door; in addition, the applicability of the inspection robot is greatly improved.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A target identification method, comprising:
step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan;
step S2: acquiring first image information of a target shielded by a cabinet grid through the first camera, and obtaining control information according to the first image information;
step S3: controlling a second camera to acquire a plurality of pieces of second image information of the target according to the control information;
step S4: synthesizing the plurality of pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
2. The target identification method according to claim 1, wherein step S2 includes:
step S21: acquiring the first image information of the target through the first camera;
step S22: identifying the first image information through a deep learning algorithm to obtain parameter information of the target;
step S23: and obtaining the control information according to the parameter information.
3. The target identification method of claim 2, wherein the parameter information comprises: position information of the target and size information of the target.
4. The target identification method according to claim 2, wherein in step S3, the mechanical arm is controlled according to the control information to drive the second camera to move to a plurality of second positions, and each time the second camera moves to one of the second positions, the second camera acquires one piece of second image information of the target.
5. The target identification method according to any one of claims 1-4, wherein in step S4, the plurality of pieces of second image information are synthesized into the complete final image information free of cabinet-grid shielding through feature point extraction and matching, image registration, and image fusion.
6. A target recognition system, comprising:
the first camera and the second camera are arranged on the mechanical arm;
the control unit is electrically connected with the mechanical arm and the first camera; after the control unit controls the mechanical arm according to a work plan to drive the first camera to move to a first position, the control unit controls the first camera to acquire first image information of a target shielded by the cabinet grid;
the image information processing unit obtains control information according to the first image information and outputs the control information to the control unit, the control unit controlling a second camera to acquire a plurality of pieces of second image information of the target according to the control information;
and the image synthesis unit synthesizes the pieces of second image information, by image stitching, into final image information of the complete target free of cabinet-grid shielding.
7. The target recognition system according to claim 6, wherein the image information processing unit identifies the first image information through a deep learning algorithm to obtain parameter information of the target, and then obtains the control information according to the parameter information.
8. The target recognition system of claim 7, wherein the parameter information comprises: position information of the target and size information of the target.
9. The target recognition system of claim 7, wherein the control unit controls the mechanical arm according to the control information to drive the second camera to move to a plurality of second positions, and each time the second camera moves to one of the second positions, the control unit controls the second camera to acquire one piece of second image information of the target.
10. The target recognition system according to any one of claims 6-9, wherein the image synthesis unit synthesizes the plurality of pieces of second image information into the complete final image information free of cabinet-grid shielding through feature point extraction and matching, image registration, and image fusion.
11. An inspection robot, comprising:
a mechanical arm;
the target recognition system of any one of claims 6-10 being connected to the mechanical arm, the inspection robot obtaining, through the recognition system, final image information of the complete target free of cabinet-grid shielding.
CN202010706175.XA 2020-07-21 2020-07-21 Target identification method and system and inspection robot Pending CN111951161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010706175.XA CN111951161A (en) 2020-07-21 2020-07-21 Target identification method and system and inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010706175.XA CN111951161A (en) 2020-07-21 2020-07-21 Target identification method and system and inspection robot

Publications (1)

Publication Number Publication Date
CN111951161A true CN111951161A (en) 2020-11-17

Family

ID=73340194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010706175.XA Pending CN111951161A (en) 2020-07-21 2020-07-21 Target identification method and system and inspection robot

Country Status (1)

Country Link
CN (1) CN111951161A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106514654A (en) * 2016-11-11 2017-03-22 国网浙江宁海县供电公司 Patrol method of robot and patrol robot
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN110490854A (en) * 2019-08-15 2019-11-22 中国工商银行股份有限公司 Obj State detection method, Obj State detection device and electronic equipment
CN111402565A (en) * 2020-04-22 2020-07-10 云南电网有限责任公司电力科学研究院 Wireless meter reading system for inspection robot

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113091667A (en) * 2021-03-30 2021-07-09 中国工商银行股份有限公司 Inspection robot and inspection method
CN114161410A (en) * 2021-11-16 2022-03-11 中国电信集团系统集成有限责任公司 Operation and maintenance method and device, electronic equipment and storage medium
CN114161410B (en) * 2021-11-16 2024-01-09 中电信数智科技有限公司 Operation and maintenance method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109271872B (en) Device and method for judging on-off state and diagnosing fault of high-voltage isolating switch
CN110826538A (en) Abnormal off-duty identification system for electric power business hall
CN112396658B (en) Indoor personnel positioning method and system based on video
CN109190473A (en) The application of a kind of " machine vision understanding " in remote monitoriong of electric power
CN109298785A (en) A kind of man-machine joint control system and method for monitoring device
CN110807353A (en) Transformer substation foreign matter identification method, device and system based on deep learning
CN110633612B (en) Monitoring method and system for inspection robot
CN113225387B (en) Visual monitoring method and system for machine room
CN111951161A (en) Target identification method and system and inspection robot
CN107133592B (en) Human body target feature detection algorithm for power substation by fusing infrared thermal imaging and visible light imaging technologies
CN109299723A (en) A kind of railway freight-car operation monitoring system
CN110458794B (en) Quality detection method and device for accessories of rail train
CN113947731A (en) Foreign matter identification method and system based on contact net safety inspection
CN109308448A (en) A method of it prevents from becoming distribution maloperation using image processing techniques
CN113788051A (en) Train on-station running state monitoring and analyzing system
CN112437255A (en) Intelligent video monitoring system and method for nuclear power plant
CN111923042B (en) Virtualization processing method and system for cabinet grid and inspection robot
CN113095160A (en) Power system personnel safety behavior identification method and system based on artificial intelligence and 5G
CN110247328A (en) Position judging method based on image recognition in switchgear
CN111428987A (en) Artificial intelligence-based image identification method and system for relay protection device
CN116152945A (en) Under-mine inspection system and method based on AR technology
CN116311034A (en) Robot inspection system based on contrast detection
CN116055521A (en) Inspection system and image recognition method for electric inspection robot
CN112347889B (en) Substation operation behavior identification method and device
CN112926488A (en) Operating personnel violation identification method based on electric power tower structure information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination