CN110991387A - Distributed processing method and system for robot cluster image recognition - Google Patents


Info

Publication number
CN110991387A
CN110991387A
Authority
CN
China
Prior art keywords
image
robot
instrument
robot cluster
distributed processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911288462.7A
Other languages
Chinese (zh)
Other versions
CN110991387B (en)
Inventor
王士兴
邱垚
常超
Current Assignee
Xi'an Ancn Smart Instrument Inc
Original Assignee
Xi'an Ancn Smart Instrument Inc
Priority date
Filing date
Publication date
Application filed by Xi'an Ancn Smart Instrument Inc filed Critical Xi'an Ancn Smart Instrument Inc
Priority to CN201911288462.7A
Publication of CN110991387A
Application granted
Publication of CN110991387B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals


Abstract

The invention belongs to the technical field of intelligent image processing, and relates to a distributed processing method and system for robot cluster image recognition. The method comprises the following steps: acquiring a current instrument image S; reading and preprocessing the image S to obtain an image G; calling an instrument position detection algorithm to locate the instrument in image G and obtain an instrument area image D; adjusting the target position of the instrument area image D; judging whether the target position lies at the center of image S; magnifying the adjusted area image by a preset magnification factor to obtain an instrument area image D1; reading the current instrument indication or recognizing the current on-off state of the equipment; giving a corresponding indication according to the reading or state recognition result; and returning the instrument recognition result, ending the recognition. The method can be applied to intelligent explosion-proof inspection robots to replace manual inspection, and its distributed deployment makes reasonable use of system resources and improves the working efficiency of the inspection system.

Description

Distributed processing method and system for robot cluster image recognition
Technical Field
The invention belongs to the technical field of intelligent image processing, and relates to a distributed processing method and a distributed processing system for robot cluster image recognition.
Background
With the continuous development of the economy and the steady improvement of automation, oil, gas and other energy sources are used by ever more people and transported in ever greater quantities. Along transport routes and at key stations, intelligent explosion-proof inspection robots are therefore gradually needed to replace manual inspection of key equipment.
The traditional inspection method is manual meter reading and manual recording. On the one hand, manual reading and recording is inefficient, error-prone and labor-intensive, and if flammable or explosive gas or liquid leaks, the personal safety of the inspector cannot be guaranteed. On the other hand, the complex industrial field environment subjects the instruments to many interfering factors during detection, so recognition efficiency and accuracy are low, and flexibility is poor when many types of instruments must be detected.
Today, driven by the wave of artificial intelligence, intelligent explosion-proof inspection robots are becoming known to more and more enterprises. For a robot cluster in an inspection area, the inspection task is to judge whether the readings of various instruments, the on-off states of valves and the running states of other equipment are normal. A processing method and system for image recognition by intelligent explosion-proof inspection robots against a complex industrial background therefore need to be developed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a distributed processing method and a distributed processing system for robot cluster image recognition, so as to solve the problems of manual inspection such as high danger, low working efficiency, and the difficulty of human intervention in special environments.
In order to achieve the purpose, the invention provides the following technical scheme:
in one aspect, the invention provides a distributed processing method for robot cluster image recognition, which specifically comprises the following steps:
step 1, selecting one robot A in a robot cluster system to execute an image acquisition task and acquiring a current instrument image S;
step 2, calling an image recognition module in the robot cluster system to read an image S, and preprocessing the image S to obtain an image G;
step 3, calling an instrument position detection algorithm, and carrying out positioning detection on the image G of step 2 to obtain an instrument area image D; the instrument position detection algorithm calls an instrument model M in the image recognition module, searches the image G of step 2, finds the area where the instrument is located in image G, and obtains the instrument area image D corresponding to the image S before preprocessing; the instrument model M is obtained by training on an image set of instruments of the same type, and the instrument corresponding to model M is of the same type as the target instrument in the image G to be detected; when the instrument area image D contains a plurality of instruments, an optimal instrument is defined as the final detection result according to actual working requirements;
step 4, adjusting the target position of the instrument area image D in the step 3, wherein the purpose is to adjust the position area image D of the target instrument to the center of the visual field of the camera;
step 5, judging whether the target position in the step 4 is positioned at the middle position of the image S: if yes, executing step 6; if not, returning to the step 4 for readjustment;
step 6, amplifying the area image obtained after adjustment in the step 5 according to a preset amplification factor of the robot cluster system to obtain an instrument area image D1 for reading identification;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the on-off state of current equipment;
step 8, giving corresponding indication according to the reading result or the state recognition result in the step 7;
and 9, returning the instrument identification result, and ending the identification.
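The patent leaves the concrete detection algorithm of step 3 open beyond requiring an instrument model M trained on images of same-type instruments. As one hedged illustration (not necessarily the patented realization), locating region D can be sketched as normalized cross-correlation template matching, with a template image standing in for model M; the function name and scoring rule are assumptions:

```python
import numpy as np

def detect_meter_region(image_g: np.ndarray, template: np.ndarray):
    """Slide the template over the preprocessed image G and return the
    best-matching window as region D, scored by normalized cross-correlation.
    The template stands in for the patent's 'instrument model M'; a trained
    detector could be substituted without changing the calling code."""
    th, tw = template.shape
    ih, iw = image_g.shape
    t = template - template.mean()
    t_energy = (t ** 2).sum()
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image_g[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum() * t_energy)
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_xy = score, (x, y)
    x, y = best_xy
    # (x, y, width, height) of instrument area D in image coordinates (px).
    return (x, y, tw, th), best_score
```

The returned box then corresponds back to the unpreprocessed image S, as step 3 requires.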
Further, the robot cluster system in the step 1 comprises a plurality of intelligent explosion-proof inspection robots, and the plurality of intelligent explosion-proof inspection robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node.
Further, the robot cluster system can acquire data information of any node, calculate the resource utilization rate of a plurality of current intelligent explosion-proof inspection robot servers by using a resource task scheduling algorithm, select an optimal robot A according to the resource utilization rate, and execute an image acquisition task through an image acquisition module of the robot A; the optimal robot A is the robot which is closest to the inspection task point and has low utilization rate of the CPU and the memory in the whole robot cluster system.
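The resource task scheduling algorithm itself is not given in the patent; as a minimal sketch under assumed weighting, robot A could be chosen by a combined cost over CPU utilization, memory utilization, and distance to the inspection task point. The `RobotNode` fields and the weights are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RobotNode:
    name: str
    cpu_util: float      # 0.0-1.0 CPU utilization reported by the node server
    mem_util: float      # 0.0-1.0 memory utilization
    distance_m: float    # distance from the node to the inspection task point

def select_optimal_robot(nodes, w_cpu=0.3, w_mem=0.3, w_dist=0.4):
    """Pick robot A as the node with the lowest combined cost.
    Distance is normalized by the farthest node so all terms are 0-1.
    The weights are assumptions, not taken from the patent."""
    max_d = max(n.distance_m for n in nodes) or 1.0
    def cost(n):
        return (w_cpu * n.cpu_util
                + w_mem * n.mem_util
                + w_dist * (n.distance_m / max_d))
    return min(nodes, key=cost)
```

Any monotone combination of the same three signals would satisfy the patent's description equally well; the weighted sum is only the simplest choice.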
Furthermore, the image acquisition module is a visible light camera, and the visible light camera is installed on the intelligent explosion-proof inspection robot through a holder.
Further, in step 2 the image S is preprocessed to obtain an image G, which removes the influence of environmental noise and illumination; specifically: the image S is converted into a grayscale image through a color-space conversion, which makes the features of the target object easier to extract, and Gaussian filtering then removes internal image noise, yielding image G.
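A minimal sketch of this preprocessing, assuming BT.601 luminance weights and a 3×3 Gaussian kernel (the patent fixes neither the color-conversion coefficients nor the filter size):

```python
import numpy as np

def preprocess(image_s: np.ndarray) -> np.ndarray:
    """Step 2 sketch: convert the RGB image S to grayscale, then smooth it
    with a 3x3 Gaussian kernel to suppress sensor noise. Kernel size and
    weights are illustrative choices, not taken from the patent."""
    # Luminance conversion (ITU-R BT.601 weights).
    gray = (image_s[..., 0] * 0.299
            + image_s[..., 1] * 0.587
            + image_s[..., 2] * 0.114)
    # 3x3 Gaussian kernel, normalized to sum to 1.
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + gray.shape[0],
                                      dx:dx + gray.shape[1]]
    return out
```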
Further, when the instrument area image D in step 3 contains a plurality of instruments, one instrument B is defined as the final detection result according to actual working requirements; instrument B is the optimal instrument defined according to actual needs.
Further, the adjusting method in step 4 is an automatic holder adjusting method, which specifically includes the following steps:
step 4.1, defining the pixel size of the current image S as PW × PH, the camera focal length as f, and the camera sensor (target-surface) diagonal as α;
Step 4.2, defining the horizontal and vertical pixel distances of the instrument area image D from the center of image S as IW and IH respectively;
step 4.3, calculating the width L_W of the target surface by using formula (1):
L_W = α · PW / √(PW² + PH²)    (1)
Step 4.4, calculating the height L_H of the target surface by using formula (2):
L_H = α · PH / √(PW² + PH²)    (2)
Step 4.5, calculating the horizontal visual angle theta of the target surface by using the formula (3)H
Figure BDA0002313496690000043
Step 4.6, calculating the vertical view angle θ_V of the target surface by using formula (4):
θ_V = 2 · arctan(L_H / (2f))    (4)
Step 4.7, calculating the horizontal adjustment angle A_H of the holder by using formula (5):
A_H = arctan((2 · IW / PW) · tan(θ_H / 2))    (5)
Step 4.8, calculating the vertical adjustment angle A_V of the holder by using formula (6):
A_V = arctan((2 · IH / PH) · tan(θ_V / 2))    (6)
The pixel unit is px, the camera focal length unit and the camera sensor size unit are mm, and the units of width IW and height IH are px.
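Steps 4.1 to 4.8 can be sketched as follows. Because the patent's equation images did not survive extraction, the closed-form expressions used here are reconstructions from the surrounding definitions and standard pinhole-camera geometry, and should be treated as assumptions:

```python
import math

def pan_tilt_adjustment(PW, PH, f_mm, alpha_mm, IW, IH):
    """From the sensor diagonal alpha (mm) and focal length f (mm), derive
    the target-surface size and view angles, then the pan/tilt angles (deg)
    that center a target offset (IW, IH) px from the image center.
    The closed forms are reconstructed, not quoted from the patent."""
    diag_px = math.hypot(PW, PH)
    LW = alpha_mm * PW / diag_px                 # (1) target-surface width, mm
    LH = alpha_mm * PH / diag_px                 # (2) target-surface height, mm
    theta_H = 2 * math.atan(LW / (2 * f_mm))     # (3) horizontal view angle, rad
    theta_V = 2 * math.atan(LH / (2 * f_mm))     # (4) vertical view angle, rad
    A_H = math.atan(2 * IW / PW * math.tan(theta_H / 2))  # (5) pan angle, rad
    A_V = math.atan(2 * IH / PH * math.tan(theta_V / 2))  # (6) tilt angle, rad
    return math.degrees(A_H), math.degrees(A_V)
```

The reconstruction of (1) and (2) matches the document's own numeric example: with α = 16/2.8 ≈ 5.71 and a 4:3 pixel grid, they give a target surface of about 4.59 mm × 3.42 mm.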
Further, the step 7 of calling an image recognition module in the robot cluster system specifically includes: the robot cluster system selects a server of a current idle node according to the resource utilization state of each node server, and calls a visible light image recognition module to recognize by using a corresponding image processing interface.
On the other hand, the invention also provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system that includes a plurality of intelligent explosion-proof inspection robots, a server and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted through a holder, and the plurality of intelligent explosion-proof inspection robots can communicate with one another; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node and acquire data information of any node; the processing system completes reading and state identification of various types of instruments and equipment on site according to the distributed processing method for robot cluster image recognition.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects: according to the actual demands of on-site inspection work, the processing system deploys a plurality of intelligent explosion-proof inspection robots at the corresponding nodes; through mutual coordination among the robots, the instruments to be identified are located and recognized, which improves the resource utilization efficiency of the robot cluster while also improving the efficiency, reliability and flexibility of the inspection work of the intelligent explosion-proof inspection robots, effectively solving the problems of manual inspection such as high danger, low working efficiency, and the difficulty of intervention in special environments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a distributed processing method for robot cluster image recognition according to the present invention;
fig. 2 is a schematic flow chart of still another distributed processing method for robot cluster image recognition provided by the present invention;
fig. 3 is a schematic diagram of a resource scheduling process provided by the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of methods, systems and so forth consistent with certain aspects of the invention, as detailed in the following claims.
In order to make those skilled in the art better understand the technical solution of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings and examples.
Example 1:
referring to fig. 1, the present invention provides a distributed processing method for robot cluster image recognition, which specifically includes the following steps:
step 1, selecting one robot A in a robot cluster system to execute an image acquisition task and acquiring a current instrument image S;
step 2, calling an image recognition module in the robot cluster system to read the image S, and preprocessing the image S to obtain an image G;
step 3, positioning and detecting the image G in the step 2 by using an instrument position detection algorithm to obtain an instrument area image D;
step 4, adjusting the target position of the instrument area image D in the step 3;
step 5, judging whether the target position in the step 4 is positioned at the middle position of the image S: if yes, executing step 6; if not, returning to the step 4 for readjustment;
step 6, amplifying the area image obtained after adjustment in the step 5 according to a preset amplification factor of the robot cluster system to obtain an instrument area image D1 for reading identification;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the on-off state of current equipment;
step 8, giving corresponding indication according to the reading result or the state recognition result in the step 7;
and 9, returning the instrument identification result, and ending the identification.
Further, the robot cluster system in the step 1 comprises a plurality of intelligent explosion-proof inspection robots, and the plurality of intelligent explosion-proof inspection robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node.
Further, the robot cluster system can acquire data information of any node, calculate the resource utilization rate of the current intelligent explosion-proof inspection robot servers by using a resource task scheduling algorithm, select an optimal robot A according to the resource utilization rate, and execute an image acquisition task through the image acquisition module of robot A; the optimal robot A is the robot in the whole robot cluster system which is closest to the inspection task point and has low CPU and memory utilization.
Furthermore, the image acquisition module is a visible light camera, and the visible light camera is installed on the intelligent explosion-proof inspection robot through a holder.
Further, in the step 2, the image S is preprocessed to obtain an image G, so that the influence of noise and illumination in the environment can be removed; the method specifically comprises the following steps: converting the image S into a gray image through an image color space, and then extracting the characteristics of a target object; and filtering internal noise through Gaussian filtering to obtain an image G.
Further, when the instrument area image D in step 3 contains a plurality of instruments, one instrument B is defined as the final detection result according to actual working requirements; instrument B is the optimal instrument defined according to actual needs.
Further, the adjusting method in step 4 is an automatic holder adjusting method, which specifically includes the following steps:
step 4.1, defining the pixel size of the current image S as PW × PH, the camera focal length as f, and the camera sensor (target-surface) diagonal as α;
Step 4.2, defining the horizontal and vertical pixel distances of the instrument area image D from the center of image S as IW and IH respectively;
step 4.3, calculating the width L_W of the target surface by using formula (1):
L_W = α · PW / √(PW² + PH²)    (1)
Step 4.4, calculating the height L_H of the target surface by using formula (2):
L_H = α · PH / √(PW² + PH²)    (2)
Step 4.5, calculating the horizontal visual angle theta of the target surface by using the formula (3)H
Figure BDA0002313496690000092
Step 4.6, calculating the vertical view angle θ_V of the target surface by using formula (4):
θ_V = 2 · arctan(L_H / (2f))    (4)
Step 4.7, calculating the horizontal adjustment angle A_H of the holder by using formula (5):
A_H = arctan((2 · IW / PW) · tan(θ_H / 2))    (5)
Step 4.8, calculating the vertical adjustment angle A_V of the holder by using formula (6):
A_V = arctan((2 · IH / PH) · tan(θ_V / 2))    (6)
The pixel unit is px, the camera focal length unit and the camera sensor size unit are mm, and the units of width IW and height IH are px.
Further, the step 7 of calling an image recognition module in the robot cluster system specifically includes: the robot cluster system selects a server of a current idle node according to the resource utilization state of each node server, and calls a visible light image recognition module to recognize by using a corresponding image processing interface.
In addition, the invention also provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system that includes a plurality of intelligent explosion-proof inspection robots, a server and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted through a holder, and the plurality of intelligent explosion-proof inspection robots can communicate with one another; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node and acquire data information of any node; the processing system completes reading and state identification of various types of instruments and equipment on site according to the distributed processing method for robot cluster image recognition.
Example 2:
referring to fig. 2, the present invention further provides a distributed processing method for robot cluster image recognition, which specifically includes the following steps:
step 1, selecting one robot A in a robot cluster system to execute an image acquisition task, and acquiring a current RGB image S of an instrument;
the robot cluster system can deploy a plurality of robots in an inspection area, and the plurality of robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node; the system can acquire an image shot by a visible light camera carried on any one robot, the visible light camera can acquire a current image S and return to a storage path of the image, and meanwhile, the image S can also be wirelessly transmitted to a computer; meanwhile, the system can select the optimal robot A to perform an image acquisition task by using a resource task scheduling algorithm (the resource scheduling process of the system is shown in figure 3);
step 2, calling an image recognition module in the robot cluster system to read the image S, preprocessing the image S to obtain an image G, and removing noise and illumination influences in the environment; the specific treatment process comprises the following steps: after the image S is converted into a gray image through an image color space, the characteristics of a target object are conveniently extracted, and then internal noise of the image is processed through Gaussian filtering to obtain an image G;
step 3, calling an instrument position detection algorithm, and carrying out positioning detection on the image G in the step 2 to obtain an instrument area image D; the method specifically comprises the following steps: calling a visible light image identification module of the system, wherein the visible light image identification module can search the image G in the step 2 by using the instrument model M, find the area where the instrument is located in the image G, correspond to the image S before preprocessing to obtain an instrument area image D, and return to the position where the instrument area D is located in the image S; wherein, instrument model M uses the instrument picture set training of the same type to obtain, and optional instrument type has: pointer instruments, digital instruments or level gauges, etc.; the instrument corresponding to the instrument model M is the same as the target instrument type in the image S to be detected;
step 4, analyzing the target position of the instrument area image D obtained in step 3. If the target is a single instrument, the target information is returned directly; specifically, the position information of instrument area D in image S (coordinates in the image, in pixels; a single instrument detected in step 3 yields one coordinate record, a plurality of instruments yields several) is returned to the image recognition module, and the judgment and adjustment operations of the following steps are performed according to this position information. If the target position contains a plurality of instruments, an optimal instrument is selected as the detection result, the selection of the optimal instrument being defined according to actual requirements;
step 5, adjusting the target position of the instrument area image D of step 4, moving the area image D of the target instrument to the center of the camera's field of view by automatically adjusting the rotation angle of the pan-tilt holder;
step 6, judging whether the target position in the step 5 is positioned at the middle position of the image S: if yes, executing step 7; if not, returning to the step 5 for readjustment;
step 7, amplifying the area image obtained after adjustment in the step 6 according to a preset amplification factor of the robot cluster system to obtain an area image D1 most suitable for reading or state identification;
step 8, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 7, reading the current indication value of the instrument, or recognizing the on-off state of current equipment;
the system selects a robot server with a current idle node according to the resource utilization state of each node server, and calls a visible light image recognition module by using an image processing interface of the robot server to obtain a recognition result;
step 9, giving corresponding indication according to the reading result or the state recognition result in the step 8;
when the system finds that no target instrument exists in image S, or when reading fails, it prompts that the instrument was not found or that recognition failed; when the instrument reading obtained by the system falls in an abnormal range, the equipment is in an abnormal state, and the system gives an alarm prompt.
And step 10, returning the instrument identification result, and ending the identification.
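The indication logic of steps 9 and 10 can be sketched as a simple mapping from the recognition outcome to an operator message; the message strings and the (low, high) normal range are illustrative assumptions, since the patent only describes the behavior qualitatively:

```python
def indicate(detection_found: bool, reading, normal_range):
    """Example 2, step 9 sketch: map the recognition outcome to an
    operator indication. Message strings and the (lo, hi) normal range
    are assumptions, not taken from the patent."""
    if not detection_found or reading is None:
        # No target instrument in image S, or reading failed.
        return "instrument-not-found-or-recognition-failed"
    lo, hi = normal_range
    if not (lo <= reading <= hi):
        # Reading in an abnormal range: equipment state abnormal.
        return "alarm: reading out of normal range"
    return "ok"
```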
Further, the robot cluster system can acquire data information of any node, calculate the resource utilization rate of a plurality of current intelligent explosion-proof inspection robot servers by using a resource task scheduling algorithm, select an optimal robot A according to the resource utilization rate, and execute an image acquisition task through an image acquisition module of the robot A; the optimal robot A is the robot which is closest to the inspection task point and has low utilization rate of the CPU and the memory in the whole robot cluster system.
Furthermore, the image acquisition module is a visible light camera, and the visible light camera is installed on the intelligent explosion-proof inspection robot through a holder.
Further, in step 2, preprocessing the image S to obtain an image G specifically includes: and converting the image S into a gray image through an image color space, and then performing Gaussian filtering processing to obtain an image G.
Further, the adjusting method in step 4 is an automatic holder adjusting method, which is a process of calculating a rotation angle corresponding to the holder according to a distance from the target to the center position of the image S, and performing reacquisition on the target image, and specifically includes the following steps:
step 4.1, defining the pixel size of the current image S as PW × PH, the camera focal length as f, and the camera sensor (target-surface) diagonal as α;
Among them, the camera can, for example, be a 1/2.8-inch camera of model 2007c or 3007c (sensor diagonal 16 mm/2.8 ≈ 5.71 mm, target surface approximately 4.59 mm × 3.42 mm), so that α = 16/2.8 ≈ 5.71;
step 4.2, defining the horizontal and vertical pixel distances of the instrument area image D from the center of image S as IW and IH respectively;
step 4.3, calculating the width L_W of the target surface by using formula (1):
L_W = α · PW / √(PW² + PH²)    (1)
Step 4.4, calculating the height L_H of the target surface by using formula (2):
L_H = α · PH / √(PW² + PH²)    (2)
Step 4.5, calculating the horizontal visual angle theta of the target surface by using the formula (3)H
Figure BDA0002313496690000133
Step 4.6, calculating the vertical view angle θ_V of the target surface by using formula (4):
θ_V = 2 · arctan(L_H / (2f))    (4)
Step 4.7, calculating the horizontal adjustment angle A_H of the holder by using formula (5):
A_H = arctan((2 · IW / PW) · tan(θ_H / 2))    (5)
Step 4.8, calculating the vertical adjustment angle A_V of the holder by using formula (6):
A_V = arctan((2 · IH / PH) · tan(θ_V / 2))    (6)
The pixel unit is px, the camera focal length unit and the camera sensor size unit are mm, and the units of width IW and height IH are px.
Further, the step 7 of calling an image recognition module in the robot cluster system specifically includes: the robot cluster system selects a server of a current idle node according to the resource utilization state of each node server, and calls a visible light image recognition module to recognize by using a corresponding image processing interface.
In addition, the invention provides a distributed processing system for robot cluster image recognition, comprising a robot cluster system that includes a plurality of intelligent explosion-proof inspection robots, a server and a computer; each intelligent explosion-proof inspection robot carries a visible light camera mounted through a holder, and the plurality of intelligent explosion-proof inspection robots can communicate with one another; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node and acquire data information of any node; the processing system completes reading and state identification of various types of instruments and equipment on site according to the distributed processing method for robot cluster image recognition.
In conclusion, the distributed processing method for robot cluster image recognition provided by the invention can be applied to intelligent explosion-proof inspection robots to replace manual inspection; a plurality of inspection robots can be deployed in an inspection area, and the distributed deployment mode makes reasonable use of system resources and improves the working efficiency of the inspection system. Corresponding recognition function modules are deployed according to the different inspection requirements of different nodes, giving the method the characteristics of high speed, good flexibility and high accuracy.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention.
It is to be understood that the present invention is not limited to what has been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A distributed processing method for robot cluster image recognition is characterized by comprising the following steps:
step 1, selecting one robot A in a robot cluster system to execute an image acquisition task and acquiring a current instrument image S;
step 2, calling an image recognition module in the robot cluster system to read an image S, and preprocessing the image S to obtain an image G;
step 3, calling an instrument position detection algorithm, and carrying out positioning detection on the image G to obtain an instrument area image D;
step 4, adjusting the target position of the instrument area image D;
step 5, judging whether the target position adjusted in step 4 is located at the center of the image S: if yes, executing step 6; if not, returning to step 4 for readjustment;
step 6, amplifying the area image obtained after adjustment in the step 5 according to a preset amplification factor of the robot cluster system to obtain an instrument area image D1 for reading identification;
step 7, calling an image recognition module in the robot cluster system to process the instrument area image D1 in the step 6, and reading the current indication value of the instrument or recognizing the on-off state of current equipment;
step 8, giving corresponding indication according to the reading result or the state recognition result in the step 7;
step 9, returning the instrument recognition result and ending the recognition.
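The control flow of steps 1 to 9 above can be sketched as a loop around the pan-tilt adjustment. The function names and stubbed vision callbacks below are hypothetical stand-ins; only the step ordering follows the claim.

```python
# Minimal control-flow sketch of claim 1; every callback is a stand-in
# for a real module (camera, preprocessing, detector, pan-tilt, reader).
def recognize_meter(acquire, preprocess, detect, is_centered, adjust, zoom, read):
    s = acquire()                 # step 1: capture current instrument image S
    g = preprocess(s)             # step 2: preprocess S -> G
    d = detect(g)                 # step 3: locate instrument region D in G
    while not is_centered(d, s):  # step 5: is the target at the image center?
        d = adjust(d)             # step 4: re-aim the pan-tilt head
    d1 = zoom(d)                  # step 6: magnify the region -> D1
    return read(d1)               # steps 7-9: read the value / device state

# Toy drive-through: the target becomes "centered" after two adjustments.
calls = {"adjust": 0}
def adjust(d):
    calls["adjust"] += 1
    return d + 1

value = recognize_meter(
    acquire=lambda: "S",
    preprocess=lambda s: "G",
    detect=lambda g: 0,
    is_centered=lambda d, s: d >= 2,
    adjust=adjust,
    zoom=lambda d: "D1",
    read=lambda d1: 4.2,
)
print(value, calls["adjust"])  # 4.2 2
```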
2. The distributed processing method for robot cluster image recognition according to claim 1, wherein the robot cluster system in the step 1 comprises a plurality of intelligent explosion-proof inspection robots, and the plurality of intelligent explosion-proof inspection robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node.
3. The distributed processing method for robot cluster image recognition according to claim 2, wherein the robot cluster system can acquire data information of any one node, calculate resource utilization rates of a plurality of current intelligent explosion-proof inspection robot servers by using a resource task scheduling algorithm, select a robot A according to the resource utilization rates, and execute an image acquisition task through an image acquisition module of the robot A.
4. The distributed processing method for robot cluster image recognition according to claim 3, wherein the resource utilization includes usage of CPU and memory.
5. The distributed processing method for robot cluster image recognition according to claim 3, wherein the image acquisition module is a visible-light camera mounted on the intelligent explosion-proof inspection robot via a pan-tilt head.
6. The distributed processing method for robot cluster image recognition according to claim 1, wherein in the step 2, preprocessing the image S to obtain an image G specifically includes: converting the image S into a grayscale image via color-space conversion, and then applying Gaussian filtering to obtain the image G.
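The claimed preprocessing (grayscale conversion followed by Gaussian filtering) would typically be one `cv2.cvtColor` call plus one `cv2.GaussianBlur` call; the hand-rolled NumPy version below keeps the sketch self-contained. The 5×5 kernel and σ = 1.0 are illustrative choices, not values from the patent.

```python
import numpy as np

def to_gray(bgr):
    """BGR -> grayscale with the standard ITU-R BT.601 weights."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def preprocess(bgr, size=5, sigma=1.0):
    """Step 2: grayscale conversion followed by Gaussian filtering."""
    gray = to_gray(bgr.astype(np.float64))
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.empty_like(gray)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

img = np.full((8, 8, 3), 100.0)   # uniform BGR test image
print(preprocess(img).mean())     # a uniform image stays uniform, ~100.0
```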
7. The distributed processing method for robot cluster image recognition according to claim 1, wherein when the instrument area image D obtained in step 3 contains a plurality of instruments, one instrument B is selected as the final detection result according to the actual work requirement.
8. The distributed processing method for robot cluster image recognition according to claim 1, wherein the adjusting method in step 4 is an automatic pan-tilt adjusting method, and specifically comprises the following steps:
step 4.1, defining the pixel size of the current image S as PW×PH, the camera focal length as f, and the camera sensor size as LW×LH;
step 4.2, defining the width and height of the instrument area image D from the center of the image S as IW and IH respectively;
step 4.3, calculating the target surface width LW using formula (1) (the formula appears as an image in the original document and is not reproduced here);
step 4.4, calculating the target surface height LH using formula (2) (formula image not reproduced);
step 4.5, calculating the target surface horizontal view angle θH using formula (3) (formula image not reproduced);
step 4.6, calculating the target surface vertical view angle θV using formula (4) (formula image not reproduced);
step 4.7, calculating the pan-tilt horizontal adjustment angle AH using formula (5) (formula image not reproduced);
step 4.8, calculating the pan-tilt vertical adjustment angle AV using formula (6) (formula image not reproduced);
Pixel dimensions are given in px, the camera focal length and camera sensor size in mm, and the width IW and height IH in px.
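Formulas (1) to (6) are rendered as images in the original document and are not reproduced above. Under a standard pinhole-camera model, angles consistent with the quantities defined in steps 4.1 and 4.2 can be computed as follows; this is a plausible reconstruction, not the literal claimed formulas, and the 1920×1080 image, 4 mm focal length and 6.4×4.8 mm sensor are illustrative values only.

```python
import math

def view_angles(f, LW, LH):
    """Horizontal/vertical field of view (degrees) for focal length f (mm)
    and sensor size LW x LH (mm)."""
    thetaH = 2 * math.degrees(math.atan(LW / (2 * f)))
    thetaV = 2 * math.degrees(math.atan(LH / (2 * f)))
    return thetaH, thetaV

def pan_tilt_angles(PW, PH, f, LW, LH, IW, IH):
    """Convert the pixel offset (IW, IH) of the instrument region from the
    image centre into pan/tilt adjustment angles (degrees)."""
    dx = IW * LW / PW  # pixel offset -> physical offset on the sensor (mm)
    dy = IH * LH / PH
    AH = math.degrees(math.atan(dx / f))
    AV = math.degrees(math.atan(dy / f))
    return AH, AV

# Sanity check: a target at the horizontal image edge (IW = PW/2) needs a
# pan of exactly half the horizontal field of view.
AH, AV = pan_tilt_angles(1920, 1080, 4.0, 6.4, 4.8, 960, 0)
thetaH, thetaV = view_angles(4.0, 6.4, 4.8)
print(round(AH, 3), round(thetaH / 2, 3))
```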
9. The distributed processing method for robot cluster image recognition according to claim 1, wherein the step 7 of calling an image recognition module in the robot cluster system specifically includes: the robot cluster system selects a server of a current idle node according to the resource utilization state of each node server, and calls a visible light image recognition module to recognize by using a corresponding image processing interface.
10. A distributed processing system for robot cluster image recognition, characterized by comprising a robot cluster system, wherein the robot cluster system comprises a plurality of intelligent explosion-proof inspection robots, a server and a computer; each intelligent explosion-proof inspection robot is equipped with a visible-light camera mounted on a pan-tilt head, and the plurality of intelligent explosion-proof inspection robots can communicate with each other; each intelligent explosion-proof inspection robot is defined as a node, and the robot cluster system can distribute tasks to each node and acquire data information from any node; the processing system completes reading and state recognition of various types of on-site instruments and equipment according to the distributed processing method for robot cluster image recognition of any one of claims 1 to 9.
CN201911288462.7A 2019-12-11 2019-12-11 Distributed processing method and system for robot cluster image recognition Active CN110991387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911288462.7A CN110991387B (en) 2019-12-11 2019-12-11 Distributed processing method and system for robot cluster image recognition


Publications (2)

Publication Number Publication Date
CN110991387A true CN110991387A (en) 2020-04-10
CN110991387B CN110991387B (en) 2024-02-02

Family

ID=70093678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911288462.7A Active CN110991387B (en) 2019-12-11 2019-12-11 Distributed processing method and system for robot cluster image recognition

Country Status (1)

Country Link
CN (1) CN110991387B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005026A (en) * 2021-09-29 2022-02-01 达闼科技(北京)有限公司 Image recognition method and device for robot, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130231779A1 (en) * 2012-03-01 2013-09-05 Irobot Corporation Mobile Inspection Robot
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN105930837A (en) * 2016-05-17 2016-09-07 杭州申昊科技股份有限公司 Transformer station instrument equipment image recognition method based on autonomous routing inspection robot
US20160323517A1 (en) * 2015-04-29 2016-11-03 Protruly Vision Technology Group CO.,LTD Method and system for tracking moving trajectory based on human features
WO2018107916A1 (en) * 2016-12-14 2018-06-21 南京阿凡达机器人科技有限公司 Robot and ambient map-based security patrolling method employing same
CN109299758A (en) * 2018-07-27 2019-02-01 深圳市中兴系统集成技术有限公司 A kind of intelligent polling method, electronic equipment, intelligent inspection system and storage medium
CN109739239A (en) * 2019-01-21 2019-05-10 天津迦自机器人科技有限公司 A kind of planing method of the uninterrupted Meter recognition for crusing robot
CN109977813A (en) * 2019-03-13 2019-07-05 山东沐点智能科技有限公司 A kind of crusing robot object localization method based on deep learning frame
CN110110869A (en) * 2019-05-21 2019-08-09 国电大渡河瀑布沟发电有限公司 A kind of power station intelligent inspection system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fang Hua; Jiang Tao; Li Hongyu; Luo Hao; Li Jian; Yang Guoqing: "A recognition algorithm for dual-pointer instrument readings suitable for intelligent substation inspection robots" *
Xu Xiangming; Song Hui: "Research on the visual servo system of substation robots" *


Also Published As

Publication number Publication date
CN110991387B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN107229930B (en) Intelligent identification method for numerical value of pointer instrument
CN111255636B (en) Method and device for determining tower clearance of wind generating set
CN109969736B (en) Intelligent detection method for deviation fault of large carrying belt
CN102063718B (en) Field calibration and precision measurement method for spot laser measuring system
CN111507976B (en) Defect detection method and system based on multi-angle imaging
CN101751572A (en) Pattern detection method, device, equipment and system
CN102135236A Binocular vision-based automatic non-destructive testing method for pipeline inner wall
CN103196370A (en) Measuring method and measuring device of conduit connector space pose parameters
CN106546263A Machine vision-based detection method for the laser line emitted by a laser level
CN107092905B (en) Method for positioning instrument to be identified of power inspection robot
CN113379712A (en) Steel bridge bolt disease detection method and system based on computer vision
CN111914767A (en) Scattered-pollution enterprise detection method and system based on multi-source remote sensing data
CN112528979A (en) Transformer substation inspection robot obstacle distinguishing method and system
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
CN113884011A (en) Non-contact concrete surface crack measuring equipment and method
CN114119535A (en) Laser cleaning effect on-line monitoring method based on visual detection
CN115619738A (en) Detection method for module side seam welding after welding
CN115376000A (en) Underwater measurement method, device and computer readable storage medium
CN112102395A (en) Autonomous inspection method based on machine vision
JPH0765152A (en) Device and method for monitoring
CN107767366B Power transmission line fitting method and device
CN113705564B (en) Pointer type instrument identification reading method
CN117237925B (en) Intelligent road disease inspection method and system based on computer vision
CN110991387A (en) Distributed processing method and system for robot cluster image recognition
CN115993366B (en) Workpiece surface detection method and system based on sensing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant