CN117359636A - Python-based machine vision system of inspection robot - Google Patents

Python-based machine vision system of inspection robot

Info

Publication number
CN117359636A
Authority
CN
China
Prior art keywords
image
module
robot
features
python
Prior art date
Legal status
Pending
Application number
CN202311497735.5A
Other languages
Chinese (zh)
Inventor
薛玉明
冯少君
薛伊丹
李鹏海
潘洪刚
王昕�
代红丽
Current Assignee
Lianyungang Zhongpu Dingde Technology Co ltd
Original Assignee
Lianyungang Zhongpu Dingde Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Lianyungang Zhongpu Dingde Technology Co ltd filed Critical Lianyungang Zhongpu Dingde Technology Co ltd
Priority to CN202311497735.5A
Publication of CN117359636A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of robot vision systems, and in particular to a machine vision system of a Python-based inspection robot. The system comprises a vision sensor module, an image processing module, a machine learning algorithm module and a control module. The vision sensor module is used for collecting environmental images; the image processing module is used for preprocessing the acquired images; the machine learning algorithm module is used for analyzing and understanding the preprocessed images; and the control module is used for adjusting the motion path and behavior mode of the robot according to the analysis results of the machine learning algorithm module. The invention greatly improves the working efficiency of the inspection robot and also improves inspection accuracy, thereby reducing problems caused by false detections or missed detections.

Description

Python-based machine vision system of inspection robot
Technical Field
The invention relates to the technical field of robot vision systems, and in particular to a machine vision system of a Python-based inspection robot.
Background
Robotics is being applied ever more widely in modern society, especially in fields such as industrial manufacturing, facility management and warehouse logistics, where robots are widely used for automated production and services. An inspection robot can automatically inspect and monitor in various environments to discover and resolve problems, improving efficiency and safety. The key to accomplishing these tasks, however, is the inspection robot's ability to accurately perceive and understand environmental information.
Currently, most inspection robots rely on various sensors (e.g., cameras, lidar, ultrasonic sensors, etc.) to obtain environmental information. However, the information acquired by these sensors is usually raw, unprocessed, such as images, distance data, etc., and requires complex processing and analysis to be converted into useful information for the robot, such as the position, shape, color, movement state, etc., of the object.
Machine vision and machine learning techniques play a key role in this processing and analysis. Machine vision techniques can process an image and extract its key features, and machine learning techniques can learn from and classify these features to identify objects in the image. However, existing machine vision and machine learning techniques typically require significant computational resources and time, which is often difficult to provide for inspection tasks with high real-time requirements.
In addition, the inspection robot needs to be able to generate appropriate decisions, such as path planning, obstacle avoidance, etc., according to environmental information and task requirements. Existing decision algorithms typically require a significant amount of computational resources and often rely on accurate environmental information, which is often difficult to obtain in a complex practical environment.
In general, the main problem faced by current inspection robotics is how to accurately perceive and understand environmental information, and generate appropriate decisions based on such information, under limited computational resources. There is a need to develop more efficient, accurate, and robust machine vision systems.
Disclosure of Invention
In view of the above, the invention provides a machine vision system of a Python-based inspection robot.
The machine vision system of the Python-based inspection robot comprises a real-time image processing and analysis platform based on the Python programming language. The platform acquires environmental images through a vision sensor, analyzes and processes the images with a preset machine learning model to identify obstacles and other important features in the environment, and adjusts the motion path and behavior mode of the robot in real time, improving the task execution efficiency of the inspection robot. The system comprises a vision sensor module, an image processing module, a machine learning algorithm module and a control module, wherein
the vision sensor module is used for collecting environmental images;
the image processing module is used for preprocessing the acquired image;
the machine learning algorithm module is used for analyzing and understanding the preprocessed image;
the control module is used for adjusting the motion path and the behavior mode of the robot according to the analysis result of the machine learning algorithm module.
Further, the vision sensor module comprises one or more types of vision sensors, including a camera, a depth sensor and a laser radar, so as to acquire multi-angle and multi-level image information of the environment.
Further, the image processing module comprises an image preprocessing unit and a feature extraction unit,
the image preprocessing unit is used for performing preliminary processing on the original image acquired from the vision sensor module, including noise filtering, brightness adjustment and image standardization, so as to improve the quality and the analyzability of the image,
a1, random noise in the image is removed using median filtering and Gaussian filtering;
a2, the brightness of the image is adjusted dynamically according to changes in the ambient illumination, so that the image remains clear under any lighting conditions;
a3, the image is standardized: its gray levels are stretched so that they are distributed over the whole gray range, improving the contrast of the image and making image features more distinct;
the feature extraction unit is used for extracting important visual features from the preprocessed image, including edge features, corner features and texture features, which help the machine learning model to more accurately identify obstacles and landmark targets in the image,
b1, edge features: edge information in the image is extracted using the Sobel operator and the Canny operator;
b2, corner features: corner information in the image is extracted using Harris corner detection and FAST corner detection;
b3, texture features: texture information in the image is extracted using the GLCM and LBP methods; the texture information reflects surface characteristics of the image, including roughness and directionality.
Further, the machine learning model is based on a support vector machine model,
the support vector machine model is a supervised learning model used for classification and regression analysis; it separates samples of different classes as far as possible by finding a hyperplane. For linearly separable samples, the support vector machine finds a separating hyperplane directly; for linearly inseparable samples, it introduces a kernel function so that the samples become linearly separable in a high-dimensional space. The support vector machine model is used to classify visual features, including identifying obstacles and landmarks;
when training the support vector machine model, training samples are prepared, each comprising a visual feature vector and a label; the visual feature vector consists of the visual features extracted from the image, including edge features, corner features and texture features, and the label is the class corresponding to the sample, such as obstacle or landmark; the SMO algorithm is then used to learn from the training samples and find the optimal hyperplane that separates samples of different classes;
when the support vector machine model is used for image analysis, visual features are first extracted from the image and used as the input of the model; the model outputs the class corresponding to these features, such as obstacle or landmark, thereby analyzing and understanding the image.
Further, the Sobel operator is used for detecting edges in a horizontal direction and a vertical direction in the image, and the operators in the horizontal direction and the vertical direction are respectively expressed as:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
G_y = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]]
wherein, G_x and G_y are gradients of the image in x direction and y direction respectively, and then the gradient amplitude and direction of the image at a certain point are obtained by the following formula:
gradient magnitude: G = sqrt(G_x^2 + G_y^2)
gradient direction: θ = atan2(G_y, G_x)
The Canny edge detection operator comprises the following steps: noise reduction, gradient calculation, non-maximum suppression, double-threshold processing and hysteresis thresholding, wherein the gradient is computed using the Sobel operator;
the Harris corner detection algorithm comprises the steps of calculating gradients of images, constructing a gradient covariance matrix, and calculating a Harris response value by using the following formula:
R = det(M) - k * (trace(M))^2
wherein M is the gradient covariance matrix, det(M) denotes the determinant of M, trace(M) denotes the trace of M (i.e., the sum of the elements on the main diagonal), and k is an empirical constant, usually between 0.04 and 0.06; if the R value is large, the point can be considered a corner point;
The objective function of the SVM (Support Vector Machine) is:
min 1/2 ||w||^2 + C * Σξ_i
s.t. y_i * (w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0
wherein C > 0 is a penalty parameter controlling the trade-off between model complexity and error; ξ_i is a slack variable handling samples that are not perfectly linearly separable; y_i is the label of the sample; x_i is the feature vector of the sample; and w and b are parameters of the model. The objective function is a convex quadratic programming problem and is solved with the SMO algorithm;
the GLCM is used to describe local texture characteristics of an image; each element P(i, j) of the GLCM represents the frequency or probability with which a pixel pair with gray values i and j occurs under a given spatial relationship, and various texture feature parameters, including energy, contrast and correlation, are calculated from the GLCM;
the LBP algorithm compares the gray value of each neighborhood pixel with that of the center pixel; if the gray value of the neighborhood pixel is greater than or equal to that of the center pixel, its position is marked as 1, otherwise as 0, and the resulting binary bits are combined into a binary number used as the LBP value of the center pixel. The LBP value is calculated as:
LBP = Σ s(g_i - g_c) * 2^i
where s(x) is a sign function with s(x) = 1 when x ≥ 0 and s(x) = 0 otherwise; g_c is the gray value of the center pixel; g_i is the gray value of the i-th neighborhood pixel; and i is the index of the neighborhood pixel.
Further, the control module also comprises a decision-making module for generating a targeted decision to control the movement of the robot according to the image analysis result obtained from the machine learning model module and combining the task requirement and the environmental condition,
the decision module comprises a path planning unit and a motion control unit;
the path planning unit is responsible for calculating an optimal path from the starting point to the target point according to task requirements and environmental conditions;
the path planning unit adopts the Dijkstra path planning algorithm;
the motion control unit adopts a PID control algorithm.
Further, the system also comprises a cloud platform, wherein the cloud platform is used for storing and updating the machine learning model and receiving and processing image information from the robot.
Furthermore, the cloud platform is connected with the robot through the Internet and is used for transmitting and processing image information in real time.
Further, the system also comprises a communication module for the information exchange between the robot and the external equipment,
the communication module supports wired communication and wireless communication;
the communication module supports communication between the robot and other robots to perform cooperative work of the robots;
the communication module supports remote control and monitoring.
The invention has the beneficial effects that:
by using the optimized image processing and machine learning algorithm, the machine vision system can process a large amount of image data in a short time and accurately extract key environment information such as the position, shape, color and motion state of an object. The inspection robot not only can greatly improve the working efficiency of the inspection robot, but also can improve the inspection accuracy, so that the problems caused by false inspection or missing inspection are reduced, and the machine vision system can automatically adjust parameters and strategies according to actual environments by using machine learning algorithms such as a Support Vector Machine (SVM) and a decision tree, so as to adapt to various complex environments and task requirements. This allows for better adaptation and robustness of the inspection robot, enabling efficient operation in a variety of complex environments.
By combining environmental information with task requirements, the decision module of the machine vision system can generate accurate path planning and motion control instructions. The inspection robot can therefore navigate and be controlled precisely, avoid collisions with objects in the environment, and complete various inspection tasks more effectively. By enabling information exchange between the robot, external devices and other robots, working efficiency is further improved and cooperative work is realized; through real-time monitoring and remote control, problems encountered during operation can be handled promptly, improving operational safety.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below merely illustrate the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a system module according to an embodiment of the present invention;
FIG. 2 is a logic diagram of the system operation according to an embodiment of the present invention.
Detailed Description
The present invention will be further described in detail with reference to specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent.
It is to be noted that unless otherwise defined, technical or scientific terms used herein should be taken in a general sense as understood by one of ordinary skill in the art to which the present invention belongs. The terms "first," "second," and the like, as used herein, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
As shown in FIGS. 1-2, a machine vision system of a Python-based inspection robot comprises a real-time image processing and analysis platform based on the Python programming language. The platform acquires environmental images through a vision sensor, analyzes and processes the images with a preset machine learning model to identify obstacles and other important features in the environment, and adjusts the motion path and behavior mode of the robot in real time, improving the task execution efficiency of the inspection robot. The system comprises a vision sensor module, an image processing module, a machine learning algorithm module and a control module, wherein
the vision sensor module is used for collecting an environment image;
the image processing module is used for preprocessing the acquired image;
the machine learning algorithm module is used for analyzing and understanding the preprocessed image;
the control module is used for adjusting the motion path and the behavior mode of the robot according to the analysis result of the machine learning algorithm module;
therefore, when executing a task, the robot can adjust itself according to changes in the environment, achieving more efficient task execution.
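As an illustration of this four-module pipeline, a minimal Python sketch is given below. The class names (VisionSensorModule, ImageProcessingModule, MachineLearningModule, ControlModule), the use of OpenCV, and the placeholder analysis rule are assumptions for illustration only and are not specified by the patent.

```python
import cv2

class VisionSensorModule:
    """Collects environmental images (here: frames from a camera)."""
    def __init__(self, device=0):
        self.capture = cv2.VideoCapture(device)

    def read(self):
        ok, frame = self.capture.read()
        return frame if ok else None

class ImageProcessingModule:
    """Preprocesses the acquired image (grayscale conversion and denoising)."""
    def preprocess(self, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.GaussianBlur(gray, (5, 5), 0)

class MachineLearningModule:
    """Analyzes the preprocessed image; a trained classifier would be plugged in here."""
    def analyze(self, image):
        # Placeholder decision rule standing in for the SVM described later.
        return "obstacle" if image.mean() < 100 else "clear"

class ControlModule:
    """Adjusts the robot's motion path according to the analysis result."""
    def act(self, result):
        print("Adjusting motion, analysis result:", result)

def run_once(sensor, processor, learner, controller):
    """One pass through the pipeline: sense -> preprocess -> analyze -> control."""
    frame = sensor.read()
    if frame is not None:
        controller.act(learner.analyze(processor.preprocess(frame)))
```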
The vision sensor module comprises one or more types of vision sensors, including cameras, depth sensors and laser radars, so as to acquire multi-angle and multi-level image information of the environment.
The image processing module comprises an image preprocessing unit and a feature extraction unit,
The image preprocessing unit A is used for performing preliminary processing on the original image acquired from the vision sensor module, including noise filtering, brightness adjustment and image standardization, so as to improve the quality and analyzability of the image (a Python sketch of these steps follows the list below),
a1, random noise in the image is removed using median filtering and Gaussian filtering;
a2, the brightness of the image is adjusted dynamically according to changes in the ambient illumination, so that the image remains clear under any lighting conditions;
a3, the image is standardized: its gray levels are stretched so that they are distributed over the whole gray range, improving the contrast of the image and making image features more distinct;
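A minimal sketch of the preprocessing steps a1-a3, assuming OpenCV (cv2) and NumPy are used (the patent does not name specific libraries, and the target brightness value is an illustrative choice):

```python
import cv2
import numpy as np

def preprocess(image_bgr, target_mean=128.0):
    """a1-a3: denoise, adjust brightness, and stretch the gray levels."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # a1: remove random noise with median and Gaussian filtering.
    denoised = cv2.medianBlur(gray, 3)
    denoised = cv2.GaussianBlur(denoised, (5, 5), 0)

    # a2: dynamically adjust brightness toward a target mean gray level.
    gain = target_mean / max(float(denoised.mean()), 1.0)
    brightened = np.clip(denoised.astype(np.float32) * gain, 0, 255)

    # a3: standardize by stretching gray levels to the full 0-255 range.
    stretched = cv2.normalize(brightened, None, 0, 255, cv2.NORM_MINMAX)
    return stretched.astype(np.uint8)
```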
The feature extraction unit B is used for extracting important visual features from the preprocessed image, including edge features, corner features and texture features; these features help the machine learning model identify obstacles and landmark targets in the image more accurately,
b1, edge features: edge information in the image is extracted using the Sobel operator and the Canny operator;
b2, corner features: corner information in the image is extracted using Harris corner detection and FAST corner detection; corners are usually salient features in an image and provide rich spatial information;
b3, texture features: texture information in the image is extracted using the GLCM (gray-level co-occurrence matrix) and LBP (local binary pattern) methods; the texture information reflects surface characteristics of the image, including roughness and directionality, and is valuable for identifying certain specific targets, such as grassland or a water surface.
After image preprocessing and feature extraction, the quality of the image is improved, and the analyzability is enhanced, so that the subsequent machine learning model can analyze and understand the image more accurately and effectively.
The machine learning model is based on a support vector machine model,
the support vector machine model is a supervised learning model used for classification and regression analysis; it separates samples of different classes as far as possible by finding a hyperplane. For linearly separable samples, the support vector machine finds a separating hyperplane directly; for linearly inseparable samples, it introduces a kernel function so that the samples become linearly separable in a high-dimensional space. The support vector machine model is used to classify visual features, including identifying obstacles and landmarks;
when training the support vector machine model, training samples are prepared, each comprising a visual feature vector and a label; the visual feature vector consists of the visual features extracted from the image, including edge features, corner features and texture features, and the label is the class corresponding to the sample, such as obstacle or landmark; the SMO algorithm is then used to learn from the training samples and find the optimal hyperplane that separates samples of different classes;
when the support vector machine model is used for image analysis, visual features are first extracted from the image and used as the input of the model; the model outputs the class corresponding to these features, such as obstacle or landmark, thereby analyzing and understanding the image;
the advantages of the support vector machine model are that it can handle high-dimensional visual features, is not easily affected by noise, and achieves high classification accuracy. In addition, by selecting a suitable kernel function, the support vector machine model can handle linearly inseparable cases, enhancing the adaptability of the model. In the invention, using the support vector machine model improves the environment perception capability of the robot, enabling it to accurately identify obstacles, landmarks and the like in the environment, thereby improving the efficiency and accuracy of task execution.
The Sobel operator is used for detecting edges in the horizontal direction and the vertical direction in the image, and the operators in the horizontal direction and the vertical direction are respectively expressed as:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
G_y = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]]
wherein, G_x and G_y are gradients of the image in x direction and y direction respectively, and then the gradient amplitude and direction of the image at a certain point are obtained by the following formula:
gradient magnitude: G = sqrt(G_x^2 + G_y^2)
gradient direction: θ = atan2(G_y, G_x)
The Canny edge detection operator comprises the following steps: noise reduction, gradient calculation, non-maximum suppression, double-threshold processing and hysteresis thresholding, wherein the gradient is computed using the Sobel operator.
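The Sobel gradient computation and Canny edge extraction above can be sketched in Python as follows, assuming OpenCV (cv2) and NumPy (the patent does not specify the libraries, and the Canny thresholds are illustrative):

```python
import cv2
import numpy as np

def edge_features(gray):
    """Sobel gradients (magnitude, direction) and Canny edges for a grayscale image."""
    # Horizontal and vertical Sobel gradients G_x and G_y.
    g_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    g_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

    magnitude = np.sqrt(g_x ** 2 + g_y ** 2)   # G = sqrt(G_x^2 + G_y^2)
    direction = np.arctan2(g_y, g_x)           # θ = atan2(G_y, G_x)

    # Canny internally performs noise reduction, gradient calculation,
    # non-maximum suppression, double thresholding and hysteresis.
    edges = cv2.Canny(gray, 50, 150)
    return magnitude, direction, edges
```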
The Harris corner detection algorithm comprises the steps of computing the image gradients, constructing the gradient covariance matrix, and calculating the Harris response value using the following formula:
R = det(M) - k * (trace(M))^2
wherein M is the gradient covariance matrix, det(M) denotes the determinant of M, trace(M) denotes the trace of M (i.e., the sum of the elements on the main diagonal), and k is an empirical constant, usually between 0.04 and 0.06; if the R value is large, the point can be considered a corner point.
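A compact sketch of the Harris response computation, using OpenCV's cornerHarris as one possible implementation (the block size, aperture and threshold values are illustrative assumptions):

```python
import cv2
import numpy as np

def harris_corners(gray, block_size=2, ksize=3, k=0.04, rel_thresh=0.01):
    """Compute the Harris response R = det(M) - k * trace(M)^2 and threshold it."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    # Points whose response is large relative to the maximum are treated as corners.
    corners = response > rel_thresh * response.max()
    return response, corners
```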
The objective function of the SVM (Support Vector Machine) is:
min 1/2 ||w||^2 + C * Σξ_i
s.t. y_i * (w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0
wherein C > 0 is a penalty parameter controlling the trade-off between model complexity and error; ξ_i is a slack variable handling samples that are not perfectly linearly separable; y_i is the label of the sample; x_i is the feature vector of the sample; and w and b are parameters of the model. The objective function is a convex quadratic programming problem and is solved with the SMO algorithm.
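Training and using such a classifier can be sketched with scikit-learn's SVC, whose underlying libsvm solver is SMO-based; the library choice, feature vectors and labels below are illustrative assumptions, not part of the patent:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative training data: each row is a visual feature vector
# (e.g., edge, corner and texture statistics); labels: 0 = obstacle, 1 = landmark.
X_train = np.array([[0.8, 0.1, 0.3],
                    [0.7, 0.2, 0.4],
                    [0.1, 0.9, 0.6],
                    [0.2, 0.8, 0.7]])
y_train = np.array([0, 0, 1, 1])

# An RBF kernel handles linearly inseparable samples; C is the penalty parameter.
model = SVC(kernel="rbf", C=1.0)
model.fit(X_train, y_train)

# Classify the feature vector extracted from a new image.
print(model.predict([[0.75, 0.15, 0.35]]))  # expected: [0], i.e. "obstacle"
```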
The GLCM is used to describe local texture characteristics of an image; each element P(i, j) of the GLCM represents the frequency or probability with which a pixel pair with gray values i and j occurs under a given spatial relationship, and various texture feature parameters, including energy, contrast and correlation, are calculated from the GLCM.
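A minimal NumPy sketch of a GLCM for a single pixel offset, with energy and contrast derived from it (the number of gray levels and the offset are illustrative choices; correlation can be computed from the same matrix in the same way):

```python
import numpy as np

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Build a normalized GLCM for offset (dx, dy) and return energy and contrast."""
    # Quantize gray values (0-255) into `levels` bins.
    q = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    h, w = q.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / max(glcm.sum(), 1.0)               # normalized P(i, j)

    i, j = np.indices((levels, levels))
    energy = float(np.sum(p ** 2))                # energy (angular second moment)
    contrast = float(np.sum(((i - j) ** 2) * p))  # contrast
    return energy, contrast
```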
The LBP algorithm compares the gray value of each neighborhood pixel with that of the center pixel; if the gray value of the neighborhood pixel is greater than or equal to that of the center pixel, its position is marked as 1, otherwise as 0, and the resulting binary bits are combined into a binary number used as the LBP value of the center pixel. The LBP value is calculated as:
LBP = Σ s(g_i - g_c) * 2^i
where s(x) is a sign function with s(x) = 1 when x ≥ 0 and s(x) = 0 otherwise; g_c is the gray value of the center pixel; g_i is the gray value of the i-th neighborhood pixel; and i is the index of the neighborhood pixel.
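An illustrative NumPy implementation of the basic 3x3 LBP described above (the neighbour ordering is an arbitrary but fixed convention; this is a sketch, not the patent's own code):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: bit i is 1 where neighbour i >= the center pixel."""
    g = gray.astype(np.int64)
    center = g[1:-1, 1:-1]
    # Neighbour offsets (dy, dx) in a fixed order; bit i corresponds to neighbour i.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros_like(center)
    for i, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        lbp += (neighbour >= center).astype(np.int64) << i  # s(g_i - g_c) * 2^i
    return lbp
```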
The control module further comprises a decision sub-module, which is used for generating targeted decisions to control the movement of the robot according to the image analysis results obtained from the machine learning algorithm module, combined with task requirements and environmental conditions,
the decision module comprises a path planning unit and a motion control unit;
The path planning unit is responsible for calculating an optimal path from the starting point to the target point according to task requirements and environmental conditions; this path needs to take various factors into account, such as its length, difficulty and safety. The motion control unit is responsible for generating corresponding motion control instructions, such as speed control, direction control and steering control, according to the path planning result, so as to control the robot to move along the planned path;
the path planning unit adopts the Dijkstra path planning algorithm, which can calculate an optimal path from the starting point to the target point based on information such as the environment map, the distribution of obstacles, and the positions of the starting point and the target point. Meanwhile, the path planning unit also needs to take the motion capabilities of the robot into account, such as its maximum speed and minimum turning radius, to ensure that the planned path can actually be executed by the robot.
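A small sketch of Dijkstra's algorithm on an occupancy-grid map is given below; the grid representation, 4-connectivity and unit step cost are illustrative assumptions rather than details taken from the patent:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 2D grid; grid[y][x] == 1 marks an obstacle cell."""
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, (y, x) = heapq.heappop(queue)
        if (y, x) == goal:
            break
        if d > dist.get((y, x), float("inf")):
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0:
                nd = d + 1
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(queue, (nd, (ny, nx)))
    # Walk back from the goal to reconstruct the path.
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            return list(reversed(path))
        node = prev[node]
    return []  # goal unreachable
```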
The motion control unit may employ a PID control algorithm, which generates corresponding motion control instructions according to the path planning result and the current state of the robot. For example, if the robot deviates from the planned path, the motion control unit may generate a steering control command to steer the robot back onto the planned path; if there is an obstacle in front of the robot, the motion control unit may generate a speed control command to slow down or stop the robot, preventing a collision with the obstacle.
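The PID control law mentioned here can be sketched as follows; the gains and the heading-error example are illustrative values, not parameters from the patent:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*d(e)/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: steer back toward the planned path using the heading error (radians).
steering = PID(kp=1.2, ki=0.05, kd=0.3)
command = steering.update(error=0.15, dt=0.1)
```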
Through the decision module, the robot can dynamically plan an optimal path according to the actual conditions in the environment and generate corresponding motion control instructions, realizing accurate control of the robot. The robot can therefore navigate autonomously in complex environments and effectively complete various inspection tasks.
The system also comprises a cloud platform, wherein the cloud platform is used for storing and updating the machine learning model and receiving and processing image information from the robot.
The cloud platform is connected with the robot through the Internet and used for transmitting and processing image information in real time.
The system further comprises a communication module for information exchange of the robot with an external device, such as an operator's console or other robot,
The communication module supports wired communication and wireless communication (such as Wi-Fi, Bluetooth, ZigBee and 5G); it receives control instructions sent by external devices, such as starting/stopping inspection or changing the inspection path, and sends status information of the robot to the external devices, such as its current position, battery state and task progress;
the communication module supports communication between the robot and other robots to perform cooperative work of the robots; for example, in a large-scale factory, a plurality of robots may need to perform inspection tasks simultaneously, and at this time, the robots may exchange information, such as a current position, a detected problem, etc., with each other through a communication module, so as to avoid repetitive work and improve working efficiency;
the communication module supports remote control and monitoring, and an operator can send control instructions to the robot through a remote control console, such as changing a routing path, adjusting the movement speed of the robot, and the like. Meanwhile, an operator can know the working condition of the robot, such as the current position, the battery state, the detected problems and the like, in real time by monitoring the state information sent by the robot. If the robot encounters a problem, the operator may remotely operate the robot to solve the problem, or schedule personnel to go to the field for treatment.
Experimental test
In order to verify the effect of the machine vision system of the Python-based inspection robot, the following experimental tests were performed:
experiment 1: inspection efficiency and accuracy test
A simulated factory environment is provided in which objects of various shapes, colors and materials are stationary and movable. The inspection robot is enabled to conduct inspection tasks in the environment, and the time for completing the tasks and the detected problems are recorded. The result shows that in the process of executing 100 inspection tasks, the average completion time of the inspection robot using the machine vision system is 120 minutes, and is reduced by 20 percent compared with the average completion time (150 minutes) of the inspection robot using the existing vision system; in addition, the problem detection accuracy rate reaches 92%, and the accuracy rate (82%) of the inspection robot is improved by about 12.2% compared with that of the inspection robot using the existing vision system.
Experiment 2: adaptive capability test
The inspection robot is placed in different environments (environments with large changes in conditions such as light, temperature and humidity), and its working performance in these environments is observed. The results show that in environments with large changes in lighting, the working performance of the inspection robot using this vision system degrades by about 5%, compared with a degradation of about 20% for the inspection robot using the existing vision system; in environments with large changes in temperature and humidity, the working performance of the inspection robot using this vision system degrades by about 10%, compared with about 30% for the inspection robot using the existing vision system.
Experiment 3: navigation and control accuracy test
A series of complex paths including narrow passages, complex intersections, etc. are designed and the inspection robot is allowed to navigate and control based on these paths. The results show that the navigation error rates of the inspection robots of the vision system used in the narrow channels and the complex intersections are 2% and 4%, respectively, which are reduced by about 67% and 60% compared with the navigation error rates of the inspection robots of the existing vision system (6% and 10%, respectively).
Experiment 4: communication and co-operation testing
An environment containing a plurality of inspection robots is set, and the robots can exchange information with each other and work cooperatively through a communication module. The results showed that the average completion time of the inspection robot using the vision system was 180 minutes in performing 100 collaborative tasks, which was reduced by about 14.3% compared to the average completion time of the inspection robot using the existing vision system (210 minutes).
In general, the experimental test results show that the machine vision system of the Python-based inspection robot can effectively improve inspection efficiency and accuracy, enhance self-adaptive capacity, improve navigation and control accuracy, and realize effective communication and cooperative work.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the invention is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the invention, the steps may be implemented in any order and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
The present invention is intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the present invention should be included in the scope of the present invention.

Claims (9)

1. The machine vision system of the inspection robot based on Python comprises a real-time image processing and analyzing platform, wherein the image processing and analyzing platform is based on Python programming language, the image processing and analyzing platform acquires an environment image through a vision sensor, performs image analysis and processing through a preset machine learning model to identify obstacles and other important features in the environment, and adjusts the movement path and the behavior mode of the robot in real time, and is characterized by comprising a vision sensor module, an image processing module, a machine learning algorithm module and a control module;
the vision sensor module is used for acquiring environmental images;
the image processing module is used for preprocessing the acquired image;
the machine learning algorithm module is used for analyzing and understanding the preprocessed image;
the control module is used for adjusting the motion path and the behavior mode of the robot according to the analysis result of the machine learning algorithm module.
2. The machine vision system of Python-based inspection robot of claim 1, wherein the vision sensor module includes one or more types of vision sensors, including cameras, depth sensors, and lidar, to obtain multi-angle, multi-level image information of the environment.
3. The machine vision system of a Python-based inspection robot of claim 1, wherein the image processing module includes an image preprocessing unit and a feature extraction unit,
the image preprocessing unit is used for performing preliminary processing on the original image acquired from the vision sensor module, including noise filtering, brightness adjustment and image standardization, so as to improve the quality and the analyzability of the image,
a1, random noise in the image is removed using median filtering and Gaussian filtering;
a2, the brightness of the image is adjusted dynamically according to changes in the ambient illumination, so that the image remains clear under any lighting conditions;
a3, the image is standardized: its gray levels are stretched so that they are distributed over the whole gray range, improving the contrast of the image and making image features more distinct;
the feature extraction unit is used for extracting important visual features from the preprocessed image, including edge features, corner features and texture features, which help the machine learning model to more accurately identify obstacles and landmark targets in the image,
b1, edge features: edge information in the image is extracted using the Sobel operator and the Canny operator;
b2, corner features: corner information in the image is extracted using Harris corner detection and FAST corner detection;
b3, texture features: texture information in the image is extracted using the GLCM and LBP methods; the texture information reflects surface characteristics of the image, including roughness and directionality.
4. A machine vision system for a Python-based inspection robot as set forth in claim 3, wherein the machine learning model is based on a support vector machine model,
the support vector machine model is a supervised learning model used for classification and regression analysis; it separates samples of different classes as far as possible by finding a hyperplane. For linearly separable samples, the support vector machine finds a separating hyperplane directly; for linearly inseparable samples, it introduces a kernel function so that the samples become linearly separable in a high-dimensional space. The support vector machine model is used to classify visual features, including identifying obstacles and landmarks;
when training the support vector machine model, training samples are prepared, each comprising a visual feature vector and a label; the visual feature vector consists of the visual features extracted from the image, including edge features, corner features and texture features, and the label is the class corresponding to the sample, such as obstacle or landmark; the SMO algorithm is then used to learn from the training samples and find the optimal hyperplane that separates samples of different classes;
when the support vector machine model is used for image analysis, visual features are first extracted from the image and used as the input of the model; the model outputs the class corresponding to these features, such as obstacle or landmark, thereby analyzing and understanding the image.
5. The machine vision system of a Python-based inspection robot according to claim 4, wherein the Sobel operator is used to detect edges in a horizontal direction and a vertical direction in an image, and the operators in the horizontal direction and the vertical direction are expressed as:
G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
G_y = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]]
wherein, G_x and G_y are gradients of the image in x direction and y direction respectively, and then the gradient amplitude and direction of the image at a certain point are obtained by the following formula:
gradient magnitude: G = sqrt(G_x^2 + G_y^2)
gradient direction: θ = atan2(G_y, G_x)
The Canny edge detection operator comprises the following steps: noise reduction, gradient calculation, non-maximum suppression, double-threshold processing and hysteresis thresholding, wherein the gradient is computed using the Sobel operator;
the Harris corner detection algorithm comprises the steps of calculating gradients of images, constructing a gradient covariance matrix, and calculating a Harris response value by using the following formula:
R = det(M) - k * (trace(M))^2
wherein M is the gradient covariance matrix, det(M) denotes the determinant of M, trace(M) denotes the trace of M (i.e., the sum of the elements on the main diagonal), and k is an empirical constant, usually between 0.04 and 0.06; if the R value is large, the point can be considered a corner point;
The objective function of the SVM (Support Vector Machine) is:
min 1/2 ||w||^2 + C * Σξ_i
s.t. y_i * (w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0
wherein C > 0 is a penalty parameter controlling the trade-off between model complexity and error; ξ_i is a slack variable handling samples that are not perfectly linearly separable; y_i is the label of the sample; x_i is the feature vector of the sample; and w and b are parameters of the model. The objective function is a convex quadratic programming problem and is solved with the SMO algorithm;
the GLCM is used to describe local texture characteristics of an image; each element P(i, j) of the GLCM represents the frequency or probability with which a pixel pair with gray values i and j occurs under a given spatial relationship, and various texture feature parameters, including energy, contrast and correlation, are calculated from the GLCM;
the LBP algorithm compares the gray value of each neighborhood pixel with that of the center pixel; if the gray value of the neighborhood pixel is greater than or equal to that of the center pixel, its position is marked as 1, otherwise as 0, and the resulting binary bits are combined into a binary number used as the LBP value of the center pixel. The LBP value is calculated as:
LBP = Σ s(g_i - g_c) * 2^i
where s(x) is a sign function with s(x) = 1 when x ≥ 0 and s(x) = 0 otherwise; g_c is the gray value of the center pixel; g_i is the gray value of the i-th neighborhood pixel; and i is the index of the neighborhood pixel.
6. The machine vision system of a Python-based inspection robot of claim 1, wherein the control module further comprises a decision-making sub-module for generating a targeted decision to control the movement of the robot based on the image analysis result obtained from the machine learning model module in combination with task requirements and environmental conditions,
the decision module comprises a path planning unit and a motion control unit;
the path planning unit is responsible for calculating an optimal path from the starting point to the target point according to task requirements and environmental conditions;
the path planning unit adopts the Dijkstra path planning algorithm;
the motion control unit adopts a PID control algorithm.
7. The machine vision system of a Python-based inspection robot of claim 1, further comprising a cloud platform for storing and updating machine learning models and receiving and processing image information from the robot.
8. The machine vision system of a Python-based inspection robot of claim 7, wherein the cloud platform is connected to the robot via the internet for real-time transmission and processing of image information.
9. The machine vision system of a Python-based inspection robot of claim 1, further comprising a communication module for information exchange of the robot with an external device,
the communication module supports wired communication and wireless communication;
the communication module supports communication between the robot and other robots to perform cooperative work of the robots;
the communication module supports remote control and monitoring.
CN202311497735.5A 2023-11-12 2023-11-12 Python-based machine vision system of inspection robot Pending CN117359636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311497735.5A CN117359636A (en) 2023-11-12 2023-11-12 Python-based machine vision system of inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311497735.5A CN117359636A (en) 2023-11-12 2023-11-12 Python-based machine vision system of inspection robot

Publications (1)

Publication Number Publication Date
CN117359636A true CN117359636A (en) 2024-01-09

Family

ID=89394599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311497735.5A Pending CN117359636A (en) 2023-11-12 2023-11-12 Python-based machine vision system of inspection robot

Country Status (1)

Country Link
CN (1) CN117359636A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117908544A (en) * 2024-01-18 2024-04-19 中建材(宜兴)新能源有限公司 Control system and method of AGV device for glass transportation based on machine vision

Similar Documents

Publication Publication Date Title
CN117359636A (en) Python-based machine vision system of inspection robot
CN111198496A (en) Target following robot and following method
CN115330734A (en) Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion
Chen et al. Intelligent robot arm: Vision-based dynamic measurement system for industrial applications
CN114913346A (en) Intelligent sorting system and method based on product color and shape recognition
KR20230061612A (en) Object picking automation system using machine learning and method for controlling the same
Hosseini et al. Improving the successful robotic grasp detection using convolutional neural networks
CN117798934A (en) Multi-step autonomous assembly operation decision-making method of cooperative robot
CN117474950A (en) Cross-modal target tracking method based on visual semantics
Gao et al. An automatic assembling system for sealing rings based on machine vision
Ferguson et al. Worksite object characterization for automatically updating building information models
CN106558070A (en) A kind of method and system of the visual tracking under the robot based on Delta
Useche et al. Algorithm of detection, classification and gripping of occluded objects by CNN techniques and Haar classifiers
Lin et al. Inference of 6-DOF robot grasps using point cloud data
Zhang et al. Grasping novel objects with real-time obstacle avoidance
CN115464651A (en) Six groups of robot object grasping system
Kang et al. Safety monitoring for human robot collaborative workspaces
Ya Research on the application of automation software control system in tea garden mechanical picking
Sileo et al. Real-time object detection and grasping using background subtraction in an industrial scenario
Raileanu et al. Open source platform for vision guided robotic systems integrated in manufacturing
Simeth et al. Using Artificial Intelligence to Facilitate Assembly Automation in High-Mix Low-Volume Production Scenario
Utintu et al. 6D Valves Pose Estimation based on YOLACT and DenseFusion for the Offshore Robot Application
Phan et al. Development of an Autonomous Component Testing System with Reliability Improvement Using Computer Vision and Machine Learning
Park et al. Hand-monitoring System Using CutMix-based Synthetic Augmentation for Safety in Factories
CN114792417B (en) Model training method, image recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination