CN117523575A - Intelligent instrument reading method and system based on inspection robot - Google Patents

Intelligent instrument reading method and system based on inspection robot

Info

Publication number
CN117523575A
Authority
CN
China
Prior art keywords
instrument
inspection
image
area
inspection robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311421845.3A
Other languages
Chinese (zh)
Inventor
辛菁
雷升
刘伟
焦尚彬
尚治龙
陈凯
任春彧
贾元成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202311421845.3A priority Critical patent/CN117523575A/en
Publication of CN117523575A publication Critical patent/CN117523575A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/20 Administration of product repair or maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/02 Recognising information on displays, dials, clocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses an intelligent instrument reading method and system based on an inspection robot. The inspection path of the inspection robot is planned according to an environment image of the inspection area and the positions of the instruments, so that the inspection robot can complete the inspection of every instrument in the shortest time. When executing an instrument inspection task, the inspection robot acquires an image of the area where each instrument is located, extracts the instrument image from it through a target detection network model, segments the numerical value region of the instrument image, and determines the value of the instrument from the segmented image. Compared with the traditional fixed-track inspection mode, trackless navigation provides greater flexibility and adaptability and can effectively cope with the diverse distribution of instruments in a complex factory environment.

Description

Intelligent instrument reading method and system based on inspection robot
Technical Field
The invention relates to the field of inspection robots, and in particular to an intelligent instrument reading method and system based on an inspection robot.
Background
With the rapid development of industrial automation and intelligence, inspection robots play an important role in industrial production and facility maintenance. An inspection robot can execute inspection tasks in environments such as factories, warehouses and power plants, collecting key instrument data to ensure the normal operation of equipment and to discover potential faults in time.
The traditional inspection method relies mainly on manual work: observation, listening and tests with related equipment. This not only leads to unstable detection results because of subjective human factors, but also exposes staff to the dangers of complex factory equipment and high-temperature, high-pressure environments, threatening their safety to a certain extent. It is foreseeable that the traditional manual inspection mode cannot keep up with the high-speed development of the current power system, and intelligent unmanned power plants will become the future development direction. Replacing manual work with such a system realizes all-weather inspection, fully automatic positioning, real-time communication and automatic acquisition of pressure gauges, temperature gauges, noise and the like, improving the efficiency of inspection work while reducing labour cost. Therefore, the development of inspection robot technology capable of replacing manual inspection has important practical significance.
The inspection robot is an important product in the industrial field and integrates multiple technologies to accomplish the whole inspection task, including multi-sensor information fusion, autonomous navigation and positioning, intelligent control, machine vision and network communication. To ensure the normal operation of industrial production equipment, various meters are used to measure the indicators that monitor the state of the equipment. Machine vision plays an important role in meter recognition: computer vision methods process the meter images acquired by the robot and finally calculate the meter readings. The combination of these technologies continuously drives the iteration and development of inspection robots, so research on intelligent meter reading technology for inspection robots has important practical value and significance.
The traditional inspection method struggles to cope with the challenges of numerous instrument types and wide distribution, and a fixed track must be laid that limits the robot's range of movement. Meanwhile, manual image acquisition suffers from low accuracy and efficiency, and manual meter reading introduces human error and delay. Through trackless navigation, intelligent image acquisition and transmission, and real-time calculation and monitoring, the flexibility, acquisition accuracy and real-time performance of inspection are improved. The robot can move freely, adapt to various instruments, automatically collect high-quality images and calculate meter readings in real time, improving inspection efficiency and accuracy. Therefore, automatic inspection has a more efficient and higher-quality application prospect in industrial production.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an intelligent instrument reading method and system based on an inspection robot.
The invention is realized by the following technical scheme:
An intelligent instrument reading method based on an inspection robot comprises the following steps:
step 1, planning an optimal inspection path according to an image of an inspection area and the positions of various instruments in the inspection area by combining an autonomous navigation method;
step 2, navigating the inspection robot to each instrument position according to the optimal inspection path, and adjusting the posture of the robot according to a template matching algorithm to obtain a clear image of the area where the instrument is located;
step 3, extracting an instrument image in an area image where the instrument is located based on the target detection network model;
and 4, dividing a numerical value region in the instrument image, and determining the numerical value of the instrument according to the divided image.
Preferably, the method for planning the optimal inspection path in step 1 is as follows:
according to the images of the inspection area and their corresponding positioning, adopting the vision-based RTAB-Map simultaneous localization and mapping algorithm to construct an environment map of the inspection area;
marking instrument position points in the environment map, taking the current position of the inspection robot as the initial node, and planning an initial optimal inspection path with the A* global planning algorithm;
and smoothing and optimizing the path with the local DWA algorithm to obtain the optimal inspection path.
Preferably, in step 2, a navigation instruction is generated according to the optimal inspection path, the inspection robot moves to each instrument position along the optimal planned path according to the navigation instruction, a matching result between the template image and the subgraph is calculated through a template matching algorithm, and the posture of the inspection robot is controlled according to the matching result so as to obtain an image of the area where the instrument is located with optimal definition.
Preferably, the method for controlling the posture of the inspection robot is as follows:
determining the center of the instrument according to the template matching result, calculating the error between the instrument center and the camera center of the inspection robot, and adjusting the posture of the inspection robot according to the error, so that the error between the instrument center and the camera center and the ratio of the instrument face area to the field of view both fall within the set ranges.
Preferably, the method for extracting the meter image in the step 3 is as follows:
and constructing a target detection network YOLOv4-tiny, training by adopting an instrument image training set, and acquiring an instrument image in an area image of the instrument according to the trained target detection network YOLOv4-tiny.
Preferably, in step 4, the type of the meter is determined according to the image of the meter, and the types of the meter include digital display type meters and pointer type meters.
Preferably, the method for determining the numerical value of the pointer instrument is as follows:
The method comprises the steps of: segmenting the instrument image with a U-net segmentation network to obtain a pointer image and a dial image; determining the circle center of the dial; unrolling the dial in polar coordinates about the circle center; projecting the pointer vertically; determining the position of the pointer from the peak of the vertical projection result; and determining the value of the pointer instrument from the positional relationship between the pointer center line and the dial after the polar expansion.
Preferably, the numerical value determining method of the digital display instrument is as follows:
The digital display area is segmented with a projection method, and the character string in the digital display area is then recognized with a CRNN network to obtain the value of the digital display instrument.
A system for the intelligent instrument reading method based on an inspection robot comprises:
the path planning module, which is used for planning the optimal inspection path according to the images of the inspection area and the positions of the instruments in the inspection area, in combination with an autonomous navigation method;
the area image acquisition module, which is used for navigating the inspection robot to each instrument position according to the optimal inspection path and adjusting the posture of the robot according to a template matching algorithm to acquire a clear image of the area where the instrument is located;
the instrument image extraction module, which is used for extracting the instrument image from the image of the area where the instrument is located based on the target detection network model;
and the identification module, which is used for segmenting the numerical value region in the instrument image and determining the value of the instrument from the segmented image.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention discloses an intelligent instrument reading method based on an inspection robot, which is characterized in that an inspection path of the inspection robot is planned according to an environment image of an inspection area and the position of an instrument, so that the inspection robot can finish inspection of each instrument in the shortest time, when an instrument inspection task is executed, the inspection robot acquires images of the area where each instrument is located, then acquires instrument images in the images through a target detection network model, and divides the numerical value area of the instrument images to determine the numerical value of the instrument according to the divided images. Compared with the traditional fixed track inspection mode, the trackless navigation provides greater flexibility and adaptability, and can effectively cope with the diversity of instrument distribution in a complex factory environment.
Drawings
FIG. 1 is a workflow diagram of the mobile end of the inspection-robot-based reading system.
Fig. 2 is a view of the inspection robot according to the present invention.
Fig. 3 is a base station side workflow diagram of the inspection robot based reading system of the present invention.
Fig. 4 is a schematic view of the meter deviating from the center of the field of view according to the present invention.
FIG. 5 is a schematic diagram of a template image of the template matching algorithm of the present invention.
In Fig. 5, diagram (a) is the template image and diagram (b) is the image to be matched.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings, which illustrate but do not limit the invention.
Referring to fig. 1-5, the intelligent meter reading method based on the inspection robot comprises the following steps:
and step 1, planning an optimal inspection path according to the images of the inspection area and the positions of the instruments in the inspection area by combining an autonomous navigation method.
In the planning of the optimal inspection path, map construction and trackless autonomous movement within the inspection area are realized with sensors such as a depth camera, combining simultaneous localization and mapping technology with an autonomous navigation method. The robot can plan a path, avoid obstacles and accurately reach the specified inspection position.
The simultaneous localization and mapping technology adopts the vision-based RTAB-Map algorithm: sensors such as a depth camera capture image data of the environment, the image data are used as the input of the localization and mapping algorithm, localization and mapping are performed in real time from the input images, feature points and descriptors in the image data are used to determine the position and posture of the inspection robot, and the environment map is constructed.
After the environment map is constructed, the target position points of the instruments to be inspected are marked in the grid map, and the inspection robot then navigates autonomously to each inspection point. The autonomous navigation scheme adopts the global A* algorithm and the local DWA algorithm to realize optimal path planning and navigation.
After the environment map is constructed, the optimal inspection path is planned with the A* global planning algorithm. Its basic principle is to take the current position of the inspection robot as the initial node, evaluate the expansion nodes of that node, determine the best child node in the expansion through an evaluation function, then evaluate the expandable nodes of that child node, and iterate in this way until the target node appears among the expandable nodes of some node. g(n) is the cost function and represents the cost between the initial node and the current node, equivalent to the path length from the initial position to the current position; when the robot moves to some node n, it represents the cost of the optimal route from the initial node to the current position. h(n) is the heuristic function, an estimate of the optimal path from the current node to the target node, and it directly affects the effectiveness of the A* path planning algorithm. f(n) represents the cost from the start node, through the intermediate node, to the target node. The evaluation function of the A* algorithm is:
f(n)=g(n)+h(n)
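To illustrate how this evaluation function drives the search, the following is a minimal Python sketch of grid-based A*; the 4-connected occupancy grid, unit step cost and Euclidean heuristic are assumptions made for the example, not details taken from the patent.

import heapq
import itertools
import math

def a_star(grid, start, goal):
    # grid: 2-D list, 0 = free cell, 1 = obstacle; start/goal: (row, col) tuples.
    def h(n):                                   # heuristic h(n): straight-line distance to goal
        return math.hypot(goal[0] - n[0], goal[1] - n[1])

    tie = itertools.count()                     # tie-breaker so heap entries always compare
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)   # node with smallest f = g + h
        if node in came_from:
            continue                            # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:                        # reconstruct the start -> goal path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1.0                    # g(n): accumulated path cost
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
    return None                                 # no path between start and goal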
Global path planning is carried out in the environment map: an initial optimal inspection path is obtained with the A* global planning algorithm, and the path is then optimized and smoothed with the local DWA algorithm so as to take account of the dynamics constraints of the robot and the obstacles in the environment. Navigation instructions are generated according to the optimized path to control the robot and realize optimal path navigation. α, β and λ respectively denote the weights of the robot's heading, obstacle distance and speed; v and w denote the linear and angular velocities of the robot; head denotes the robot heading term; dist denotes the distance between the robot and the obstacle; velocity denotes the robot speed term. The evaluation function of the DWA algorithm is:
G(v,w)=αhead(v,w)+βdist(v,w)+λvelocity(v,w)
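For illustration, a minimal Python sketch of the velocity-selection step using this evaluation function is given below; the candidate sampling, the simplified scoring terms and the default weight values are assumptions made for the example rather than the patent's implementation.

import math

def dwa_select(candidates, robot_pose, goal, obstacles,
               alpha=0.8, beta=0.2, lam=0.1, dt=1.0):
    # candidates: iterable of (v, w) velocity pairs sampled from the dynamic window.
    x, y, theta = robot_pose
    best, best_score = None, -math.inf
    for v, w in candidates:
        # Forward-simulate one step of a unicycle model under (v, w).
        nth = theta + w * dt
        nx = x + v * math.cos(nth) * dt
        ny = y + v * math.sin(nth) * dt
        # head(v, w): how well the predicted heading points at the goal (larger is better).
        goal_dir = math.atan2(goal[1] - ny, goal[0] - nx)
        head = math.pi - abs(math.atan2(math.sin(goal_dir - nth), math.cos(goal_dir - nth)))
        # dist(v, w): clearance to the nearest obstacle from the predicted position.
        dist = min((math.hypot(ox - nx, oy - ny) for ox, oy in obstacles), default=10.0)
        # G(v, w) = alpha*head + beta*dist + lambda*velocity, with dist capped for stability.
        score = alpha * head + beta * min(dist, 3.0) + lam * v
        if score > best_score:
            best, best_score = (v, w), score
    return best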
and 2, navigating the inspection robot to each instrument position according to the optimal inspection path, adjusting the posture of the robot according to a template matching algorithm, obtaining a clear instrument image, and transmitting the instrument image to a base station host end.
The inspection robot navigates to each instrument position and realizes intelligent acquisition of instrument images with robot visual servoing: the camera view is compared against a preset template library with a template matching algorithm, and the pose of the robot is continuously adjusted to ensure that the camera obtains clear, high-quality instrument images, which are then transmitted to the base station host end.
Referring to fig. 5, the template matching algorithm adopts grayscale-based template matching. Its principle is as follows: let T(m, n) be the template of size M×N, S be the image to be matched of size W×H, and S(i, j) be the subgraph of S of the same size as the template whose top-left corner lies at (i, j).
The template matching process continuously compares the similarity between the template image and each subgraph; the maximum similarity is retained, and the corresponding subgraph is the matching result. In grayscale-based template matching, the template image T(m, n) and the subgraph S(i, j) obtained during the sliding process are converted to grayscale, and the matching degree R(i, j) between the template image and the subgraph is finally calculated. The calculated matching degree reflects the similarity of the template image to that location in the image to be matched. R(i, j) is calculated as follows:
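The expression for R(i, j) is not reproduced in this text. A commonly used measure for grayscale template matching, assumed here rather than taken from the original, is the normalized cross-correlation:

R(i,j) = Σ_m Σ_n S^(i,j)(m,n)·T(m,n) / ( sqrt(Σ_m Σ_n [S^(i,j)(m,n)]²) · sqrt(Σ_m Σ_n [T(m,n)]²) )

where S^(i,j) denotes the subgraph of the same size as the template whose top-left corner lies at (i, j), and the sums run over all template pixels (m, n).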
To solve the problem of the instrument position deviating from the center of the field of view, the position of the robot needs to be adjusted according to the template matching result so that the center of the instrument dial is brought to the correct position. The specific method is as follows: first, the abscissa x_m of the instrument center is obtained from the template matching result, and the deviation error between x_m and the abscissa of the camera center is calculated; since the camera resolution is 640 x 480, the camera center abscissa is fixed at 320. If error is greater than 10 pixels, the robot is adjusted to rotate to the right; if error is smaller than -10 pixels, the robot is adjusted to rotate to the left; if error lies between -10 and 10, the ratio area_pro of the instrument face area to the field of view is calculated, and if area_pro is smaller than 10%, the robot is adjusted to move forward, until a clear image with a suitable area ratio and position is obtained.
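The decision rule described above can be summarized in a short Python sketch; the function name, the returned command strings and the default thresholds are illustrative placeholders, not the patent's implementation.

def adjust_pose(x_m, area_pro, image_width=640, pixel_tol=10, min_area_ratio=0.10):
    # x_m: abscissa of the meter center from template matching (pixels).
    # area_pro: ratio of the meter face area to the whole field of view.
    error = x_m - image_width // 2          # camera center abscissa is 320 for 640 x 480
    if error > pixel_tol:
        return "rotate_right"               # meter lies to the right of the view center
    if error < -pixel_tol:
        return "rotate_left"                # meter lies to the left of the view center
    if area_pro < min_area_ratio:
        return "move_forward"               # meter occupies too little of the field of view
    return "capture"                        # centered and large enough: take the image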
step 3, the host base station receives the transmitted instrument image, and extracts the instrument image in the area image of the instrument based on the target detection network model;
the instrument image detection adopts a single-stage target detection network Yolov4-tiny with higher real-time performance, a large number of instrument samples are trained, the model is deployed into an instrument reading system, the instrument image transmitted by a mobile terminal is detected, and the finally detected instrument area is extracted.
Step 4, the type of the instrument is determined according to the instrument image, the numerical value region in the instrument image is segmented, and the value of the instrument is determined from the segmented image.
For the pointer type instrument, the extracted instrument region is first segmented into a pointer region and a dial region with a U-net segmentation network, the segmentation result is unrolled in polar coordinates, and the reading is calculated in combination with prior information. For the digital display instrument, the digital display region is first extracted with a projection method, the segmented character string is read with a CRNN character recognition network, and the reading is obtained.
Specifically, for pointer meter reading, the scale of the pointer is generally circular, and the circle center must be determined before the polar expansion can be completed. First, a segmentation result containing only the dial is output, contours are found with OpenCV, the extreme values of the abscissa in the contour query result are traversed, the circle center is determined according to the instrument type, and the dial is unrolled in polar coordinates about the circle center. The pointer is projected vertically, its position is determined from the peak of the vertical projection result, and the projection range corresponds to the scale range. The reading is then calculated with the distance method: according to the positional relationship between the pointer center line and the dial after the polar expansion, the accurate reading of the instrument is obtained through a simple mathematical relationship.
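A possible OpenCV/NumPy sketch of this unwrap-and-project procedure is shown below, assuming binary pointer and dial masks from the segmentation network and known scale-range priors; the handling of the dead angle between scale start and end is simplified, and the function and parameter names are illustrative.

import cv2
import numpy as np

def read_pointer_meter(pointer_mask, dial_mask, min_scale, max_scale,
                       start_frac=0.0, end_frac=1.0):
    # pointer_mask / dial_mask: binary (uint8) masks output by the segmentation network.
    # start_frac / end_frac: fractions of the full circle covered by the scale (prior knowledge).
    contours, _ = cv2.findContours(dial_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    # Polar expansion of the pointer mask about the dial center: rows = angle, columns = radius.
    unwrapped = cv2.warpPolar(pointer_mask, (int(radius), 360), (cx, cy),
                              radius, cv2.WARP_POLAR_LINEAR)
    unwrapped = cv2.transpose(unwrapped)          # angle now runs along the columns
    projection = unwrapped.sum(axis=0)            # vertical projection for each angle bin
    pointer_col = int(np.argmax(projection))      # peak column = pointer center line
    # Distance method: map the pointer's angular position within the scale range to a value.
    frac = (pointer_col / unwrapped.shape[1] - start_frac) / (end_frac - start_frac)
    return min_scale + float(np.clip(frac, 0.0, 1.0)) * (max_scale - min_scale)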
For digital display meter reading, the digital display region is segmented with a projection method and the character string is then recognized with a CRNN network. The CRNN model combines the advantages of CNN and RNN: it can process the spatial information of the digital display image and the sequence information of the text at the same time, converting digits, symbols and characters into a corresponding character sequence for output, thereby achieving digital display meter recognition.
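A minimal sketch of the projection-based localisation step is shown below; the CRNN recogniser itself is represented by a placeholder callable, since its architecture and weights are not specified here, and the binarization threshold is an assumption.

import cv2
import numpy as np

def read_digital_meter(meter_image, crnn_recognize, binarize_thr=128):
    # crnn_recognize: placeholder callable wrapping a trained CRNN text recogniser.
    gray = cv2.cvtColor(meter_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, binarize_thr, 255, cv2.THRESH_BINARY_INV)
    rows = np.where(binary.sum(axis=1) > 0)[0]    # horizontal projection: rows containing characters
    cols = np.where(binary.sum(axis=0) > 0)[0]    # vertical projection: columns containing characters
    if rows.size == 0 or cols.size == 0:
        return None                               # nothing segmented from the display
    crop = meter_image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    return crnn_recognize(crop)                   # character string -> numerical value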
Step 5, the reading results and the inspection information of the inspection process, such as the inspection progress, the real-time camera view and the inspection time, are displayed visually to facilitate management by the staff.
The invention also provides an intelligent instrument reading system based on the inspection robot, which comprises a path planning module, a region image acquisition module, an instrument image extraction module and an identification module.
The path planning module is used for planning the optimal inspection path according to the images of the inspection area and the positions of the instruments in the inspection area, in combination with an autonomous navigation method;
the area image acquisition module is used for navigating the inspection robot to each instrument position according to the optimal inspection path and adjusting the posture of the robot according to a template matching algorithm to acquire a clear image of the area where the instrument is located;
the instrument image extraction module is used for extracting the instrument image from the image of the area where the instrument is located based on the target detection network model;
and the identification module is used for segmenting the numerical value region in the instrument image and determining the value of the instrument from the segmented image.
The invention also provides an intelligent instrument reading device based on the inspection robot, which comprises a mobile end of the inspection robot and a base station end of a host.
The inspection robot performs autonomous navigation with its own sensors according to the preset task and reaches the position of the industrial instrument. By adjusting the pose of the inspection robot, the robot shoots a high-quality instrument image with its built-in camera and transmits the image to the base station end of the host, realizing monitoring of the instrument state. The inspection robot workflow is shown in fig. 1 and the inspection robot is shown in fig. 2. This part comprises the following modules:
and the map construction module is used for constructing an environment map, acquiring an environment image through a sensor carried by the mobile robot and constructing the environment map according to the environment image.
The autonomous navigation module is used for planning an optimal routing path of the robot by combining an autonomous navigation method on the basis of an environment map, and the robot can safely navigate in a complex environment through a path planning algorithm and an obstacle avoidance algorithm, avoid obstacles and reach a designated position.
And the image acquisition module is used for enabling the inspection robot to move to the instrument position according to the optimal inspection path, shooting an instrument area by the inspection robot through the carried camera, and controlling the pose of the robot to shoot high-quality instrument area images.
And the image transmission module is used for transmitting the acquired image data of the instrument area to the base station end for analysis and reading. The image transmission module is responsible for transmitting the acquired image data to the host computer through a network.
The host base station in the system receives the image transmitted by the mobile terminal, calculates the reading of the instrument in real time by means of the powerful computing capability of the background computer, and displays the inspection information and the reading through the monitoring interface.
The workflow is shown in fig. 3 and includes the following parts:
an image receiving module: the module is responsible for receiving the instrument image data transmitted by the inspection robot. And through establishing communication connection with the inspection robot end, receiving and acquiring the transmitted image data. The received image data may be used for subsequent meter reading processing and analysis.
Instrument extraction and identification module: on the basis of image reception, the instrument extraction and identification module processes the received image, extracts the instrument region, and reads pointer type and digital display type instruments with reading schemes designed for the different instrument types.
And a visual display module: the final result of the meter intelligence reading module needs to be presented to the operator in a visual manner. The visual display module integrates the recognized meter reading and other inspection information and displays the meter reading and other inspection information through a monitoring interface, a display screen or other output equipment. An operator can view the inspection information and the reading information of the instrument in real time.
The invention relates to an intelligent instrument reading method based on an inspection robot for solving the problem of instrument inspection in factory environments, where the traditional inspection mode struggles to adapt to the many instrument types and the widely distributed instrument layout. The invention provides a trackless-navigation intelligent inspection method: the inspection robot navigates automatically to the industrial instrument, shoots a high-quality instrument image with its camera and transmits it to the base station end. The host base station calculates the meter reading in real time and displays the inspection information and the reading on the monitoring interface. Compared with the traditional fixed-track inspection method, trackless navigation improves the flexibility and adaptability of inspection; intelligent image acquisition ensures the accuracy of the images; real-time calculation and monitoring improve the efficiency and accuracy of inspection. The invention can improve the efficiency and quality of industrial production and provides an efficient and accurate solution for industrial production.
The inspection robot can automatically collect high-quality instrument images through the sensor and the camera of the inspection robot, and the definition and accuracy of the images are ensured through an intelligent adjustment technology. Compared with the traditional manual acquisition mode, the invention realizes an automatic image acquisition process and improves the acquisition efficiency and accuracy. And secondly, the host base station can calculate the meter reading in real time by utilizing the strong calculation power of the background computer, and the patrol information and the reading are displayed through a monitoring interface. The real-time calculation and monitoring function greatly improves the inspection efficiency and accuracy, and can acquire the information of the instrument state in time. The invention provides a new way for improving efficiency and quality for industrial production.
The above is only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited by this, and any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (9)

1. An intelligent instrument reading method based on an inspection robot, characterized by comprising the following steps:
step 1, planning an optimal inspection path according to an image of an inspection area and the positions of various instruments in the inspection area by combining an autonomous navigation method;
step 2, navigating the inspection robot to each instrument position according to the optimal inspection path, and adjusting the posture of the robot according to a template matching algorithm to obtain a clear image of the area where the instrument is located;
step 3, extracting an instrument image in an area image where the instrument is located based on the target detection network model;
and 4, dividing a numerical value region in the instrument image, and determining the numerical value of the instrument according to the divided image.
2. The intelligent meter reading method based on the inspection robot according to claim 1, wherein the method for planning the optimal inspection path in step 1 is as follows:
according to the images of the inspection area and their corresponding positioning, adopting the vision-based RTAB-Map simultaneous localization and mapping algorithm to construct an environment map of the inspection area;
marking instrument position points in the environment map, taking the current position of the inspection robot as the initial node, and planning an initial optimal inspection path with the A* global planning algorithm;
and smoothing and optimizing the path with the local DWA algorithm to obtain the optimal inspection path.
3. The intelligent instrument reading method based on the inspection robot according to claim 1, characterized in that in step 2, a navigation instruction is generated according to the optimal inspection path, the inspection robot moves to each instrument position along the optimal planned path according to the navigation instruction, a matching result between the template image and the subgraph is calculated through a template matching algorithm, and the posture of the inspection robot is controlled according to the matching result so as to obtain an image of the area where the instrument is located with optimal definition.
4. The intelligent meter reading method based on the inspection robot according to claim 3, wherein the method for controlling the posture of the inspection robot is as follows:
determining the center of the instrument according to the template matching result, calculating the error between the instrument center and the camera center of the inspection robot, and adjusting the posture of the inspection robot according to the error, so that the error between the instrument center and the camera center and the ratio of the instrument face area to the field of view both fall within the set ranges.
5. The intelligent meter reading method based on the inspection robot according to claim 1, wherein the method for extracting the meter image in the step 3 is as follows:
and constructing a target detection network YOLOv4-tiny, training by adopting an instrument image training set, and acquiring an instrument image in an area image of the instrument according to the trained target detection network YOLOv4-tiny.
6. The intelligent meter reading method based on the inspection robot according to claim 1, wherein in the step 4, the type of the meter is determined according to the image of the meter, and the type of the meter comprises a digital display type meter and a pointer type meter.
7. The intelligent meter reading method based on the inspection robot according to claim 6, wherein the numerical value determining method of the pointer meter is as follows:
The method comprises the steps of: segmenting the instrument image with a U-net segmentation network to obtain a pointer image and a dial image; determining the circle center of the dial; unrolling the dial in polar coordinates about the circle center; projecting the pointer vertically; determining the position of the pointer from the peak of the vertical projection result; and determining the value of the pointer instrument from the positional relationship between the pointer center line and the dial after the polar expansion.
8. The intelligent meter reading method based on the inspection robot according to claim 6, wherein the numerical value determining method of the digital display meter is as follows:
The digital display area is segmented with a projection method, and the character string in the digital display area is then recognized with a CRNN network to obtain the value of the digital display instrument.
9. A system for implementing the intelligent meter reading method based on the inspection robot according to any one of claims 1 to 8, comprising,
the path planning module, which is used for planning the optimal inspection path according to the images of the inspection area and the positions of the instruments in the inspection area, in combination with an autonomous navigation method;
the area image acquisition module, which is used for navigating the inspection robot to each instrument position according to the optimal inspection path and adjusting the posture of the robot according to a template matching algorithm to acquire a clear image of the area where the instrument is located;
the instrument image extraction module, which is used for extracting the instrument image from the image of the area where the instrument is located based on the target detection network model;
and the identification module, which is used for segmenting the numerical value region in the instrument image and determining the value of the instrument from the segmented image.
CN202311421845.3A 2023-10-30 2023-10-30 Intelligent instrument reading method and system based on inspection robot Pending CN117523575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311421845.3A CN117523575A (en) 2023-10-30 2023-10-30 Intelligent instrument reading method and system based on inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311421845.3A CN117523575A (en) 2023-10-30 2023-10-30 Intelligent instrument reading method and system based on inspection robot

Publications (1)

Publication Number Publication Date
CN117523575A true CN117523575A (en) 2024-02-06

Family

ID=89765382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311421845.3A Pending CN117523575A (en) 2023-10-30 2023-10-30 Intelligent instrument reading method and system based on inspection robot

Country Status (1)

Country Link
CN (1) CN117523575A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination