CN110991387B - Distributed processing method and system for robot cluster image recognition - Google Patents
- Publication number
- CN110991387B (application CN201911288462.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- robot
- instrument
- robot cluster
- image recognition
- Prior art date
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06V20/10 — Image or video recognition or understanding; scenes; scene-specific elements; terrestrial scenes
- G06F9/5027 — Electric digital data processing; arrangements for program control; multiprogramming arrangements; allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06V10/95 — Arrangements for image or video recognition or understanding; hardware or software architectures specially adapted for image or video understanding, structured as a network, e.g. client-server architectures
Abstract
The invention belongs to the technical field of intelligent image processing and relates to a distributed processing method and system for robot cluster image recognition. The method comprises the following steps: acquiring a current image S of the instrument; reading the image S and preprocessing it to obtain an image G; calling an instrument position detection algorithm and performing positioning detection on the image G to obtain an instrument area image D; adjusting the target position of the instrument area image D; judging whether the target position is located at the middle of the image S; magnifying the adjusted area image according to the magnification factor to obtain an instrument area image D1; reading the current indication value of the instrument or recognizing the switching state of the current equipment; issuing a corresponding indication according to the reading result or the state recognition result; and returning the instrument recognition result and ending the recognition. The method can be applied to intelligent explosion-proof inspection robots to replace manual inspection; by adopting a distributed deployment mode, system resources are used efficiently and the working efficiency of the inspection system is improved.
Description
Technical Field
The invention belongs to the technical field of intelligent image processing, and relates to a distributed processing method and system for robot cluster image recognition.
Background
With the continuous development of the economy and the steady improvement of automation, energy sources such as petroleum and natural gas are used by more and more people, and the quantity of energy transported keeps growing. Intelligent explosion-proof inspection robots are therefore increasingly needed to replace manual inspection of key equipment along transportation routes and at key stations.
The traditional inspection method relies on workers to read and record instruments by hand. On the one hand, manual inspection is inefficient, error-prone, and labor-intensive, and the personal safety of inspection workers cannot be guaranteed if flammable or explosive gases and liquids leak. On the other hand, the complex industrial field environment subjects instruments to many interfering factors during detection, so recognition efficiency and accuracy are low, and flexibility is poor when many types of instruments must be detected.
Today, driven by the wave of artificial intelligence, intelligent explosion-proof inspection robots are gaining recognition among more and more enterprises. For a robot cluster in an inspection area, the inspection task is to judge whether the readings of various instruments, the on-off states of valves, and the running states of other devices are normal. A processing method and system for image recognition by intelligent explosion-proof inspection robots under complex industrial backgrounds therefore needs to be developed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a distributed processing method and system for robot cluster image recognition, which solve the problems of manual inspection: high risk, low working efficiency, and the difficulty of human intervention in special environments.
In order to achieve the above purpose, the present invention provides the following technical solutions:
in one aspect, the invention provides a distributed processing method for robot cluster image recognition, which specifically comprises the following steps:
step 1, selecting a robot A in a robot cluster system to execute an image acquisition task and acquiring a current image S of an instrument;
step 2, calling an image recognition module in a robot cluster system to read an image S, and preprocessing the image S to obtain an image G;
step 3, calling an instrument position detection algorithm and performing positioning detection on the image G of step 2 to obtain an instrument area image D. The instrument position detection algorithm calls an instrument model M in the image recognition module, searches the image G of step 2 for the area where an instrument is located, and obtains the instrument area image D in the image S before preprocessing. The instrument model M is trained on a picture set of instruments of the same type; the instrument corresponding to model M is of the same type as the target instrument in the image G to be detected. When the target position of the instrument area image D contains a plurality of instruments, an optimal instrument is defined as the final detection result according to actual working requirements;
step 4, adjusting the target position of the instrument area image D of step 3, so as to bring the area image D of the target instrument to the center of the camera's field of view;
step 5, judging whether the target position in step 4 is located at the middle position of the image S: if yes, executing the step 6; if not, returning to the step 4 to readjust;
step 6, magnifying the area image obtained after the adjustment in step 5 according to the magnification factor preset by the robot cluster system to obtain an instrument area image D1 for reading and recognition;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the switching state of the current equipment;
step 8, issuing a corresponding indication according to the reading result or the state recognition result of step 7;
step 9, returning the instrument recognition result and ending the recognition.
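The patent leaves the internals of the positioning search in step 3 abstract; the instrument model M could equally be a trained detector or a matched template (production pipelines often use `cv2.matchTemplate`). As a minimal illustration of the idea only, here is a brute-force sum-of-squared-differences template search over a grayscale image; `find_meter` is a hypothetical helper, not the patent's algorithm:

```python
import numpy as np

def find_meter(image, template):
    """Slide the template over the image and return the (x, y) top-left
    corner of the best (lowest sum-of-squared-differences) match."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            ssd = float(np.sum((patch - template) ** 2))
            if ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

# A synthetic image G with a 3x3 "meter" pasted at (x=5, y=7):
G = np.zeros((20, 20))
M = np.arange(1, 10, dtype=float).reshape(3, 3)
G[7:10, 5:8] = M
print(find_meter(G, M))  # (5, 7)
```

The returned coordinate plays the role of "the position of the instrument area D in the image S" that later steps use for pan-tilt adjustment.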
Further, the robot cluster system of step 1 comprises a plurality of intelligent explosion-proof inspection robots that can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can allocate tasks to each node.
Further, the robot cluster system can acquire the data information of any node; it uses a resource task scheduling algorithm to calculate the current resource utilization of the servers of the several intelligent explosion-proof inspection robots, selects the optimal robot A according to the resource utilization, and executes the image acquisition task through the image acquisition module of robot A. The optimal robot A is the robot in the whole cluster that is closest to the inspection task point and has the lowest CPU and memory utilization.
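The patent states the selection criteria (proximity to the task point, low CPU and memory utilization) but not the scheduling formula. One plausible sketch is a weighted cost over the three criteria; the `Node` fields and the weights below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_util: float    # CPU utilization in [0, 1]
    mem_util: float    # memory utilization in [0, 1]
    distance_m: float  # distance to the inspection task point, metres

def select_robot(nodes, w_cpu=0.4, w_mem=0.3, w_dist=0.3):
    """Pick the node with the lowest weighted cost of CPU load,
    memory load, and (normalized) distance to the task point."""
    max_d = max(n.distance_m for n in nodes) or 1.0
    def cost(n):
        return (w_cpu * n.cpu_util + w_mem * n.mem_util
                + w_dist * n.distance_m / max_d)
    return min(nodes, key=cost)

fleet = [Node("R1", 0.80, 0.70, 30.0),
         Node("R2", 0.20, 0.30, 40.0),
         Node("R3", 0.25, 0.25, 10.0)]
print(select_robot(fleet).name)  # R3: close to the task and lightly loaded
```

Any monotone combination of the same criteria would satisfy the method as described; the linear weighting is simply the most common choice.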
Further, the image acquisition module is a visible light camera mounted on the intelligent explosion-proof inspection robot through a pan-tilt unit.
Further, in step 2 the image S is preprocessed to obtain an image G, which removes the influence of environmental noise and illumination. Specifically: the image S is converted to a grayscale image via a color-space conversion, which makes it easier to extract the features of the target object; the noise in the image is then filtered out by Gaussian filtering to obtain the image G.
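In an OpenCV-based pipeline this step is typically `cv2.cvtColor` followed by `cv2.GaussianBlur`; the dependency-free NumPy sketch below shows the same two operations (BT.601 luma weights and a separable Gaussian filter) explicitly:

```python
import numpy as np

def preprocess(S, sigma=1.0):
    """Image S (H x W x 3, RGB) -> grayscale, Gaussian-filtered image G."""
    # ITU-R BT.601 luma weights for the RGB -> gray conversion
    gray = S @ np.array([0.299, 0.587, 0.114])
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2 * sigma * sigma))
    k /= k.sum()                               # normalized 1-D Gaussian kernel
    padded = np.pad(gray, radius, mode="edge")
    # Separable Gaussian: filter the rows, then the columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    G = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return G

S = np.ones((8, 8, 3))   # a flat dummy frame
G = preprocess(S)
print(G.shape)           # (8, 8): same spatial size, single channel
```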
Further, when the target position of the instrument area image D in step 3 contains a plurality of instruments, an instrument B is defined as the final detection result according to the actual working requirements, where instrument B is the optimal instrument defined according to actual needs.
Further, the adjustment method of step 4 is an automatic pan-tilt adjustment method, which specifically comprises the following steps:
step 4.1, defining the pixel size of the current image S as PW×PH, the camera focal length as f, and the camera sensor size as L_W×L_H;
step 4.2, defining the horizontal distance IW and vertical distance IH of the instrument area image D from the center of the image S;
step 4.3, calculating the target-surface width L_W using formula (1);
step 4.4, calculating the target-surface height L_H using formula (2);
step 4.5, calculating the horizontal viewing angle θ_H of the target surface using formula (3);
step 4.6, calculating the vertical viewing angle θ_V of the target surface using formula (4);
step 4.7, calculating the horizontal pan-tilt adjustment angle A_H using formula (5);
step 4.8, calculating the vertical pan-tilt adjustment angle A_V using formula (6);
where pixel quantities, including the distances IW and IH, are in px, and the camera focal length and sensor dimensions are in mm.
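Formulas (1)-(6) appear only as figures in the patent, so the versions below are an assumed pinhole-camera reconstruction: the target-surface (sensor) dimensions from the diagonal, the field-of-view angles from the focal length, and a proportional pan/tilt correction for a target IW×IH pixels off-centre:

```python
import math

def pan_tilt_adjust(PW, PH, IW, IH, f_mm, diag_mm, aspect=(4, 3)):
    """Sketch of steps 4.1-4.8 under a pinhole-camera assumption;
    the patent's exact formulas (1)-(6) are not reproduced in the text."""
    aw, ah = aspect
    d = math.hypot(aw, ah)
    L_W = diag_mm * aw / d                     # (1) target-surface width, mm
    L_H = diag_mm * ah / d                     # (2) target-surface height, mm
    theta_H = 2 * math.atan(L_W / (2 * f_mm))  # (3) horizontal view angle, rad
    theta_V = 2 * math.atan(L_H / (2 * f_mm))  # (4) vertical view angle, rad
    A_H = theta_H * IW / PW                    # (5) horizontal adjustment
    A_V = theta_V * IH / PH                    # (6) vertical adjustment
    return math.degrees(A_H), math.degrees(A_V)

# A target already at the centre (IW = IH = 0) needs no adjustment:
print(pan_tilt_adjust(1920, 1080, 0, 0, f_mm=4.0, diag_mm=5.71))  # (0.0, 0.0)
```

After rotating by (A_H, A_V) the pan-tilt unit re-acquires the image, and step 5's centre check decides whether another iteration is needed.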
Further, calling the image recognition module in the robot cluster system in step 7 specifically comprises: the robot cluster system selects the server of a currently idle node according to the resource utilization state of each node server, and then calls the visible light image recognition module through the corresponding image processing interface to perform recognition.
In another aspect, the invention also provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system. The robot cluster system comprises a plurality of intelligent explosion-proof inspection robots, a server, and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted on a pan-tilt unit, and the robots can communicate with each other. Each intelligent explosion-proof inspection robot is defined as a node; the robot cluster system can allocate tasks to each node and acquire the data information of any node. The processing system completes the reading and state recognition of various on-site instruments and equipment according to the distributed processing method for robot cluster image recognition.
Compared with the prior art, the technical solution provided by the invention has the following beneficial effects: according to the actual requirements of on-site inspection work, the processing system deploys several intelligent explosion-proof inspection robots on corresponding nodes and locates and recognizes the instruments to be identified through coordination among the robots. This improves the utilization of the robot cluster's resources while raising the efficiency, reliability, and flexibility of the inspection work, and effectively solves the problems of manual inspection: high risk, low working efficiency, and the difficulty of human intervention in special environments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain its principles.
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a distributed processing method for robot cluster image recognition provided by the invention;
fig. 2 is a schematic flow chart of another distributed processing method for robot cluster image recognition provided by the present invention;
fig. 3 is a schematic diagram of a resource scheduling process provided in the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of methods and systems consistent with aspects of the invention as recited in the appended claims.
The present invention is described in further detail below with reference to the drawings and examples so that those skilled in the art can better understand its technical solutions.
Example 1:
referring to fig. 1, the invention provides a distributed processing method for robot cluster image recognition, which specifically comprises the following steps:
step 1, selecting a robot A in a robot cluster system to execute an image acquisition task and acquiring a current image S of an instrument;
step 2, calling an image recognition module in a robot cluster system to read the image S, and preprocessing the image S to obtain an image G;
step 3, positioning and detecting the image G in the step 2 by using an instrument position detection algorithm to obtain an instrument area image D;
step 4, adjusting the target position of the instrument area image D in the step 3;
step 5, judging whether the target position in step 4 is located at the middle position of the image S: if yes, executing the step 6; if not, returning to the step 4 to readjust;
step 6, amplifying the area image obtained after the adjustment in the step 5 according to the preset amplification factor of the robot cluster system to obtain an instrument area image D1 for reading and identifying;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the switching state of the current equipment;
step 8, issuing a corresponding indication according to the reading result or the state recognition result of step 7;
and 9, returning an instrument identification result, and ending the identification.
Further, the robot cluster system of step 1 comprises a plurality of intelligent explosion-proof inspection robots that can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can allocate tasks to each node.
Further, the robot cluster system can acquire the data information of any node; it uses a resource task scheduling algorithm to calculate the current resource utilization of the servers of the several intelligent explosion-proof inspection robots, selects the optimal robot A according to the resource utilization, and executes the image acquisition task through the image acquisition module of robot A. The optimal robot A is the robot in the whole cluster that is closest to the inspection task point and has the lowest CPU and memory utilization.
Further, the image acquisition module is a visible light camera mounted on the intelligent explosion-proof inspection robot through a pan-tilt unit.
Further, in step 2 the image S is preprocessed to obtain an image G, which removes the influence of environmental noise and illumination. Specifically: the image S is converted to a grayscale image via a color-space conversion, which makes it easier to extract the features of the target object; the internal noise is then removed by Gaussian filtering to obtain the image G.
Further, when the target position of the instrument area image D in step 3 contains a plurality of instruments, an instrument B is defined as the final detection result according to the actual working requirements, where instrument B is the optimal instrument defined according to actual needs.
Further, the adjustment method of step 4 is an automatic pan-tilt adjustment method, which specifically comprises the following steps:
step 4.1, defining the pixel size of the current image S as PW×PH, the camera focal length as f, and the camera sensor size as L_W×L_H;
step 4.2, defining the horizontal distance IW and vertical distance IH of the instrument area image D from the center of the image S;
step 4.3, calculating the target-surface width L_W using formula (1);
step 4.4, calculating the target-surface height L_H using formula (2);
step 4.5, calculating the horizontal viewing angle θ_H of the target surface using formula (3);
step 4.6, calculating the vertical viewing angle θ_V of the target surface using formula (4);
step 4.7, calculating the horizontal pan-tilt adjustment angle A_H using formula (5);
step 4.8, calculating the vertical pan-tilt adjustment angle A_V using formula (6);
where pixel quantities, including the distances IW and IH, are in px, and the camera focal length and sensor dimensions are in mm.
Further, calling the image recognition module in the robot cluster system in step 7 specifically comprises: the robot cluster system selects the server of a currently idle node according to the resource utilization state of each node server, and then calls the visible light image recognition module through the corresponding image processing interface to perform recognition.
In addition, the invention also provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system. The robot cluster system comprises a plurality of intelligent explosion-proof inspection robots, a server, and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted on a pan-tilt unit, and the robots can communicate with each other. Each intelligent explosion-proof inspection robot is defined as a node; the robot cluster system can allocate tasks to each node and acquire the data information of any node. The processing system completes the reading and state recognition of various on-site instruments and equipment according to the distributed processing method for robot cluster image recognition.
Example 2:
referring to fig. 2, the invention further provides a distributed processing method for robot cluster image recognition, which specifically comprises the following steps:
step 1, selecting a robot A in a robot cluster system to execute an image acquisition task and acquiring a current RGB image S of an instrument;
the robot cluster system can deploy several robots in one inspection area, and the robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node. The system can acquire the image shot by the visible light camera carried on any robot; the visible light camera acquires the current image S, returns the storage path of the image, and can also transmit the image S wirelessly to a computer. Meanwhile, the system uses a resource task scheduling algorithm (the resource scheduling process is shown in fig. 3) to select the optimal robot A for the image acquisition task;
step 2, calling an image recognition module in the robot cluster system to read the image S and preprocessing it to obtain an image G, removing noise and the influence of environmental illumination. The specific processing is: the image S is converted into a grayscale image via a color-space conversion, which makes it easier to extract the features of the target object; the internal noise of the image is then removed by Gaussian filtering to obtain the image G;
step 3, calling an instrument position detection algorithm and performing positioning detection on the image G of step 2 to obtain an instrument area image D. Specifically: the visible light image recognition module of the system is called; this module searches the image G of step 2 using an instrument model M, finds the area where an instrument is located in the image G, obtains the instrument area image D in the image S before preprocessing, and returns the position of the instrument area D in the image S. The instrument model M is trained on a picture set of instruments of the same type; optional instrument types include pointer instruments, digital instruments, liquid level meters, and so on. The instrument corresponding to model M is of the same type as the target instrument in the image S to be detected;
step 4, analyzing the target position of the instrument area image D of step 3. If the target is a single instrument, the target information is returned directly, namely the position information of the instrument area D in the image S (a coordinate in the image, in pixels; if a single instrument was detected in step 3 there is one coordinate, and if several instruments were detected there are several coordinates). The position information is returned to the image recognition module, and the following judgment and adjustment steps are performed according to it. If the target position contains several instruments, an optimal instrument is selected as the detection result, the selection of the optimal instrument being defined according to actual needs;
step 5, adjusting the target position of the instrument area image D of step 4, bringing the area image D of the target instrument to the center of the camera's field of view by automatically adjusting the rotation angle of the pan-tilt unit;
step 6, judging whether the target position in the step 5 is located at the middle position of the image S: if yes, executing the step 7; if not, returning to the step 5 to readjust;
step 7, magnifying the area image obtained after the adjustment in step 6 according to the magnification factor preset by the robot cluster system to obtain the area image D1 most suitable for reading or state recognition;
step 8, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 7, and reading the current indication value of the instrument or recognizing the switching state of the current equipment;
the system selects the robot server of a currently idle node according to the resource utilization state of each node server, and calls the visible light image recognition module through the image processing interface of that robot server to obtain the recognition result;
step 9, issuing a corresponding indication according to the reading result or the state recognition result of step 8;
when the system finds no target instrument in the image S or the reading fails, it prompts that the instrument search or the recognition has failed; when the system finds the instrument reading outside the normal range, the equipment is in an abnormal state, and the system issues an alarm.
step 10, returning the instrument recognition result and ending the recognition.
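The indication logic of step 9 can be sketched as a three-way decision; the normal range is site-specific and assumed here, since the patent leaves thresholds to the deployment:

```python
def indication(reading, low, high):
    """Map a recognition result to step 9's indication: a failure prompt,
    a normal status, or an alarm when the reading is out of range."""
    if reading is None:                  # no target found or reading failed
        return "instrument search or recognition failed"
    if low <= reading <= high:
        return "normal"
    return "alarm: reading out of normal range"

print(indication(0.62, 0.2, 0.8))   # normal
print(indication(1.35, 0.2, 0.8))   # alarm: reading out of normal range
print(indication(None, 0.2, 0.8))   # instrument search or recognition failed
```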
Further, the robot cluster system can acquire the data information of any node; it uses a resource task scheduling algorithm to calculate the current resource utilization of the servers of the several intelligent explosion-proof inspection robots, selects the optimal robot A according to the resource utilization, and executes the image acquisition task through the image acquisition module of robot A. The optimal robot A is the robot in the whole cluster that is closest to the inspection task point and has the lowest CPU and memory utilization.
Further, the image acquisition module is a visible light camera mounted on the intelligent explosion-proof inspection robot through a pan-tilt unit.
Further, in step 2 the image S is preprocessed to obtain the image G, specifically: the image S is converted into a grayscale image via a color-space conversion, and Gaussian filtering is then applied to obtain the image G.
Further, the adjustment method of step 4 is an automatic pan-tilt adjustment method: the rotation angle of the pan-tilt unit is calculated from the distance of the target to the center of the image S, and the target image is then re-acquired. The method specifically comprises the following steps:
step 4.1, defining the pixel size of the current image S as PW×PH, the camera focal length as f, and the camera sensor size as L_W×L_H;
for example, the camera may be a Hikvision 2007c or 3007c with a 1/2.8-inch sensor (diagonal 16 mm/2.8 ≈ 5.71 mm, target surface approximately 4.59 mm×3.42 mm), so α = 16/2.8 ≈ 5.71;
step 4.2, defining the horizontal distance IW and vertical distance IH of the instrument area image D from the center of the image S;
step 4.3, calculating the target-surface width L_W using formula (1);
step 4.4, calculating the target-surface height L_H using formula (2);
step 4.5, calculating the horizontal viewing angle θ_H of the target surface using formula (3);
step 4.6, calculating the vertical viewing angle θ_V of the target surface using formula (4);
step 4.7, calculating the horizontal pan-tilt adjustment angle A_H using formula (5);
step 4.8, calculating the vertical pan-tilt adjustment angle A_V using formula (6);
where pixel quantities, including the distances IW and IH, are in px, and the camera focal length and sensor dimensions are in mm.
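The sensor figures quoted in step 4.1 can be checked with two lines of arithmetic: a "1-inch" tube designation corresponds to a 16 mm diagonal, and a 4:3 target surface splits the diagonal in the ratio 4:3:5. The result is within a few hundredths of a millimetre of the dimensions quoted in the text (datasheet figures include rounding):

```python
diag = 16 / 2.8        # 1/2.8-inch sensor: diagonal in mm
width = diag * 4 / 5   # 4:3 aspect ratio -> width is 4/5 of the diagonal
height = diag * 3 / 5  # and height is 3/5 of the diagonal
print(round(diag, 2), round(width, 2), round(height, 2))  # 5.71 4.57 3.43
```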
Further, calling the image recognition module in the robot cluster system specifically comprises: the robot cluster system selects the server of a currently idle node according to the resource utilization state of each node server, and then calls the visible light image recognition module through the corresponding image processing interface to perform recognition.
In addition, the invention provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system. The robot cluster system comprises a plurality of intelligent explosion-proof inspection robots, a server, and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted on a pan-tilt unit, and the robots can communicate with each other. Each intelligent explosion-proof inspection robot is defined as a node; the robot cluster system can allocate tasks to each node and acquire the data information of any node. The processing system completes the reading and state recognition of various on-site instruments and equipment according to the distributed processing method for robot cluster image recognition.
In summary, the distributed processing method for robot cluster image recognition provided by the invention can be applied to intelligent explosion-proof inspection robots to replace manual inspection. Several inspection robots can be deployed in an inspection area, and the distributed deployment mode makes efficient use of system resources and improves the working efficiency of the inspection system. Corresponding recognition function modules are deployed according to the different inspection requirements of different nodes, giving the method high speed, good flexibility, and high accuracy.
The foregoing describes only specific embodiments of the invention, provided to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention.
It will be understood that the invention is not limited to what has been described above and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (8)
1. The distributed processing method for robot cluster image recognition is characterized by comprising the following steps of:
step 1, selecting a robot A in a robot cluster system to execute an image acquisition task to acquire a current image S of an instrument, wherein the robot cluster system comprises a plurality of intelligent explosion-proof inspection robots which can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node;
step 2, calling an image recognition module in a robot cluster system to read an image S, and preprocessing the image S to obtain an image G;
step 3, calling an instrument position detection algorithm, and carrying out positioning detection on the image G to obtain an instrument area image D;
step 4, adjusting the target position of the instrument area image D, wherein the adjusting method in step 4 is an automatic pan-tilt adjustment method, and specifically comprises the following steps:
step 4.1, defining the pixel size of the current image S as PW×PH, the camera focal length as f, and the camera sensor size as L_W×L_H;
step 4.2, defining the width and height of the instrument area image D from the center of the image S as IW and IH respectively;
step 4.3, calculating the width L_W of the target surface by using formula (1);
step 4.4, calculating the height L_H of the target surface by using formula (2);
step 4.5, calculating the horizontal visual angle θ_H of the target surface by using formula (3);
step 4.6, calculating the vertical visual angle θ_V of the target surface by using formula (4);
step 4.7, calculating the horizontal adjustment angle A_H of the pan-tilt head by using formula (5);
step 4.8, calculating the vertical adjustment angle A_V of the pan-tilt head by using formula (6);
wherein the pixel unit is px, the camera focal length and camera sensor size are in mm, and the width IW and height IH are in px;
step 5, judging whether the target position in the step 4 is located at the middle position of the image S: if yes, executing the step 6; if not, returning to the step 4 to readjust;
step 6, magnifying the region image obtained after the adjustment in step 5 according to the magnification factor preset by the robot cluster system, to obtain an instrument area image D1 for reading and identification;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the switching state of the current equipment;
step 8, giving a corresponding instruction according to the reading result or the state identification result in the step 7;
and 9, returning an instrument identification result, and ending the identification.
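The reading stage of step 7 can be illustrated for a pointer instrument by mapping a detected needle angle onto the scale by linear interpolation. This is a hypothetical sketch: the sweep range of -45° to 225° and the 0-1.6 scale are example values, not figures from the patent.

```python
# Illustrative sketch of step 7's reading stage for a pointer meter:
# map a detected needle angle (degrees) to a scale value. The angle
# sweep and scale limits below are assumed example values.

def angle_to_reading(angle_deg, angle_min=-45.0, angle_max=225.0,
                     value_min=0.0, value_max=1.6):
    """Linearly interpolate the needle angle onto the meter scale."""
    frac = (angle_deg - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

# A needle at 90 degrees sits exactly halfway through the assumed
# -45..225 degree sweep, giving the mid-scale value 0.8.
reading = angle_to_reading(90.0)
```

The needle angle itself would come from the instrument position detection of step 3 (for example, via line fitting within the region D1); that part is omitted here.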
2. The distributed processing method for robot cluster image recognition according to claim 1, wherein the robot cluster system can acquire data information of any node, uses a resource task scheduling algorithm to calculate the current resource utilization rates of the plurality of intelligent explosion-proof inspection robot servers, selects the robot A according to the resource utilization rates, and executes the image acquisition task through the image acquisition module of the robot A.
3. The distributed processing method for robot cluster image recognition according to claim 2, wherein the resource utilization rate includes usage conditions of a CPU and a memory.
4. The distributed processing method for robot cluster image recognition according to claim 2, wherein the image acquisition module is a visible-light camera, and the visible-light camera is mounted on the intelligent explosion-proof inspection robot through a pan-tilt head.
5. The distributed processing method for robot cluster image recognition according to claim 1, wherein in the step 2, preprocessing the image S to obtain an image G specifically includes: converting the image S into a grayscale image through a color-space conversion, and then applying Gaussian filtering to obtain the image G.
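The preprocessing of claim 5 (grayscale conversion followed by Gaussian filtering) can be sketched in pure Python. This is a minimal illustration, not the patent's implementation; the BT.601 luminance weights and the 3×3 kernel are standard choices assumed here.

```python
# Minimal sketch of the claimed preprocessing: RGB image S -> grayscale
# -> Gaussian filtering -> image G. Weights and kernel size assumed.

def to_gray(rgb):
    """ITU-R BT.601 luma: 0.299 R + 0.587 G + 0.114 B, per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def gaussian_blur_3x3(gray):
    """3x3 Gaussian kernel (1 2 1; 2 4 2; 1 2 1) / 16, edges clamped."""
    h, w = len(gray), len(gray[0])
    kernel = [(dy, dx, k)
              for dy, krow in zip((-1, 0, 1),
                                  ((1, 2, 1), (2, 4, 2), (1, 2, 1)))
              for dx, k in zip((-1, 0, 1), krow)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy, dx, k in kernel:
                yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                xx = min(max(x + dx, 0), w - 1)
                acc += k * gray[yy][xx]
            out[y][x] = acc / 16.0
    return out

# Uniform mid-gray input is unchanged by the blur.
s = [[(128, 128, 128)] * 4 for _ in range(4)]
g = gaussian_blur_3x3(to_gray(s))
```

In practice an image library (for example OpenCV's color conversion and Gaussian blur routines) would replace these loops; the pure-Python form is used only to show the operations themselves.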
6. The distributed processing method for robot cluster image recognition according to claim 1, wherein when a plurality of instruments are present at the target position of the instrument area image D in the step 3, one instrument B is designated as the final detection result according to actual working requirements.
7. The distributed processing method for robot cluster image recognition according to claim 1, wherein invoking the image recognition module in the robot cluster system in step 7 specifically includes: the robot cluster system selects the server of a currently idle node according to the resource utilization state of each node server, and then invokes the visible-light image recognition module through the corresponding image processing interface to perform recognition.
8. The distributed processing system for robot cluster image recognition is characterized by comprising a robot cluster system, wherein the robot cluster system comprises a plurality of intelligent explosion-proof inspection robots, a server and a computer; each intelligent explosion-proof inspection robot carries a visible-light camera mounted on a pan-tilt head, and the plurality of intelligent explosion-proof inspection robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can allocate tasks to each node and can acquire data information of any node; the processing system performs reading and state identification of a plurality of types of on-site instruments and equipment according to the distributed processing method for robot cluster image recognition according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911288462.7A CN110991387B (en) | 2019-12-11 | 2019-12-11 | Distributed processing method and system for robot cluster image recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911288462.7A CN110991387B (en) | 2019-12-11 | 2019-12-11 | Distributed processing method and system for robot cluster image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110991387A CN110991387A (en) | 2020-04-10 |
CN110991387B true CN110991387B (en) | 2024-02-02 |
Family
ID=70093678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911288462.7A Active CN110991387B (en) | 2019-12-11 | 2019-12-11 | Distributed processing method and system for robot cluster image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110991387B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114005026A (en) * | 2021-09-29 | 2022-02-01 | 达闼科技(北京)有限公司 | Image recognition method and device for robot, electronic device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | 国家电网公司 | Binocular vision navigation system and method based on power robot |
CN105930837A (en) * | 2016-05-17 | 2016-09-07 | 杭州申昊科技股份有限公司 | Transformer station instrument equipment image recognition method based on autonomous routing inspection robot |
WO2018107916A1 (en) * | 2016-12-14 | 2018-06-21 | 南京阿凡达机器人科技有限公司 | Robot and ambient map-based security patrolling method employing same |
CN109299758A (en) * | 2018-07-27 | 2019-02-01 | 深圳市中兴系统集成技术有限公司 | A kind of intelligent polling method, electronic equipment, intelligent inspection system and storage medium |
CN109739239A (en) * | 2019-01-21 | 2019-05-10 | 天津迦自机器人科技有限公司 | A kind of planing method of the uninterrupted Meter recognition for crusing robot |
CN109977813A (en) * | 2019-03-13 | 2019-07-05 | 山东沐点智能科技有限公司 | A kind of crusing robot object localization method based on deep learning frame |
CN110110869A (en) * | 2019-05-21 | 2019-08-09 | 国电大渡河瀑布沟发电有限公司 | A kind of power station intelligent inspection system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9463574B2 (en) * | 2012-03-01 | 2016-10-11 | Irobot Corporation | Mobile inspection robot |
CN104778726A (en) * | 2015-04-29 | 2015-07-15 | 深圳市保千里电子有限公司 | Motion trail tracing method and system based on human body characteristics |
- 2019-12-11: CN application CN201911288462.7A filed; patent CN110991387B, status Active
Non-Patent Citations (2)
Title |
---|
Fang Hua; Jiang Tao; Li Hongyu; Luo Hao; Li Jian; Yang Guoqing. A recognition algorithm for dual-pointer meter readings suitable for intelligent substation inspection robots. Shandong Electric Power Technology, 2013, (03), full text. *
Xu Xiangming; Song Hui. Research on the visual servo system of substation robots. Journal of Southwest University of Science and Technology, 2011, (04), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN110991387A (en) | 2020-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111931565B (en) | Autonomous inspection and hot spot identification method and system based on photovoltaic power station UAV | |
CN102063718B (en) | Field calibration and precision measurement method for spot laser measuring system | |
CN102135236B (en) | Automatic non-destructive testing method for internal wall of binocular vision pipeline | |
CN107808133B (en) | Unmanned aerial vehicle line patrol-based oil and gas pipeline safety monitoring method and system and software memory | |
CN103196370B (en) | Measuring method and measuring device of conduit connector space pose parameters | |
CN108734143A (en) | A kind of transmission line of electricity online test method based on binocular vision of crusing robot | |
CN101751572A (en) | Pattern detection method, device, equipment and system | |
CN111255636A (en) | Method and device for determining tower clearance of wind generating set | |
CN110400315A (en) | A kind of defect inspection method, apparatus and system | |
CN103413141B (en) | Ring illuminator and fusion recognition method utilizing ring illuminator illumination based on shape, grain and weight of tool | |
CN114265418A (en) | Unmanned aerial vehicle inspection and defect positioning system and method for photovoltaic power station | |
CN110837839B (en) | High-precision unmanned aerial vehicle orthographic image manufacturing and data acquisition method | |
CN113379712A (en) | Steel bridge bolt disease detection method and system based on computer vision | |
CN107092905B (en) | Method for positioning instrument to be identified of power inspection robot | |
CN114754934B (en) | Gas leakage detection method | |
CN102930279A (en) | Image identification method for detecting product quantity | |
CN104103069A (en) | Image processing apparatus, image processing method and program | |
CN112528979A (en) | Transformer substation inspection robot obstacle distinguishing method and system | |
CN110991387B (en) | Distributed processing method and system for robot cluster image recognition | |
CN113657423A (en) | Target detection method suitable for small-volume parts and stacked parts and application thereof | |
CN114941807A (en) | Unmanned aerial vehicle-based rapid monitoring and positioning method for leakage of thermal pipeline | |
CN115619738A (en) | Detection method for module side seam welding after welding | |
CN112102395A (en) | Autonomous inspection method based on machine vision | |
CN113705564B (en) | Pointer type instrument identification reading method | |
CN107767366B (en) | A kind of transmission line of electricity approximating method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||