CN112847321B - Industrial robot visual image recognition system based on artificial intelligence - Google Patents
- Publication number
- CN112847321B (application CN202110002755.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- coordinate
- acquiring
- sending
- Prior art date
- Legal status: Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/08—Programme-controlled manipulators characterised by modular constructions
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The invention discloses an industrial robot visual image recognition system based on artificial intelligence, relating to the technical field of image recognition. It solves the technical problem that the industrial robot visual recognition system in the existing scheme has a single function and cannot make full use of data and hardware. The invention is provided with an internet of things terminal module that screens out a first image by an edge computing method, improving the quality of the first image and the working efficiency of the invention; it is provided with a defect detection module that detects part defects using image analysis technology, so that part quality can be ensured and product yield improved; and it is provided with a coordinate analysis module that, through conversion between polar coordinate systems, checks the precision of the robot while establishing a foundation for its accurate and rapid operation.
Description
Technical Field
The invention belongs to the field of image recognition, relates to an artificial intelligence technology, and particularly relates to an industrial robot visual image recognition system based on artificial intelligence.
Background
Robots and automation equipment have a wide application market. Robot technology is a typical representative of advanced manufacturing technology: it is important modern manufacturing automation equipment integrating advanced technologies from multiple disciplines such as machinery, electronics, control, computers, sensors and artificial intelligence. With high flexibility and adaptability, it meets the characteristics of the modern production mode — small batches, many varieties, short product life cycles and rapid updates — and can change the traditional production mode, improve product quality and production efficiency, and realize civilized and flexible production. Industrial robots have been widely used in fields such as automobiles and parts, motorcycles, engineering machinery, machine tool molds, low-voltage electrical appliances, tobacco, the chemical industry and the military industry.
The invention patent with publication number CN109664333A provides a vision recognition system suitable for an industrial robot, comprising a vision sensor, an image acquisition module and an image processing module connected in sequence; the image processing module is provided with an illumination module. The system further comprises a cloud storage server, an image integration module and a data comparison module, the image integration module being connected to the cloud storage server and the data comparison module respectively.
In that scheme, the image integration module is connected to the cloud storage server and the data comparison module to integrate and store received images, and the image scanning module scans an object and identifies it through the database processing system. However, the visual recognition system of the industrial robot in that scheme has a single function and cannot make full use of data and hardware; therefore, the above scheme still needs further improvement.
Disclosure of Invention
In order to solve the problems of the above scheme, the invention provides an industrial robot visual image recognition system based on artificial intelligence.
The purpose of the invention can be realized by the following technical scheme: the industrial robot visual image recognition system based on artificial intelligence comprises a processor, a coordinate analysis module, a maintenance module, a data storage module, an internet of things terminal module, a production line connection module and a defect detection module;
the internet of things terminal module is in communication connection with the image acquisition unit; the image acquisition unit comprises two image vision sensors and an illumination unit; the lighting unit comprises at least one LED lighting lamp and at least one light intensity sensor;
the coordinate analysis module is used for establishing a coordinate relation between the part position and the industrial robot, which comprises:
after the coordinate analysis module receives a coordinate analysis signal, a polar coordinate system is established with a base point of the industrial robot's base as the pole and marked as the first coordinate system; the polar coordinates of the image vision sensors in the first coordinate system are acquired and marked as the first polar coordinate group;
acquiring a part image through an image vision sensor; the part images are at least two images;
establishing a second coordinate system with the central position of the part in the part image as the pole; the polar coordinates of the image vision sensors in the second coordinate system are acquired and marked as the second polar coordinate group;
the second polar coordinate group is combined with the second coordinate system to determine the polar coordinates of the part's central position in the coordinate system taking each image vision sensor as the pole, marked as the third polar coordinate group;
the third polar coordinate group is combined with the first coordinate system to determine the polar coordinates of the part's central position in the first coordinate system, marked as the fourth polar coordinate group; the first, second, third and fourth polar coordinate groups each comprise at least two polar coordinates;
the distance between the polar coordinates in the fourth polar coordinate group is acquired and marked as d;
when the distance d lies within the range set by the distance thresholds L1 and L2, the fourth polar coordinate group is judged to meet the requirements; otherwise, the fourth polar coordinate group is judged not to meet the requirements, and a robot maintenance signal is sent to the maintenance module; where L1 and L2 are distance thresholds, both real numbers greater than 0;
when the fourth polar coordinate group meets the requirements, the mean value of the polar coordinates in the fourth polar coordinate group is acquired and marked as the target coordinate;
and the target coordinate and the sending record of the robot maintenance signal are sent to the data storage module for storage through the processor.
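As an illustrative sketch of the coordinate chain described above (the function names, the translation-only relation between sensor and base frames, and the single distance threshold are assumptions, since the patent gives its formulas only as figures):

```python
import math

def polar_to_cart(r, theta):
    """Convert polar coordinates (r, theta) to Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

def part_in_base_frame(sensor_pose, part_polar):
    """Chain a part's polar coordinates (as seen from one sensor) into the
    robot-base frame, assuming the sensor frame is only translated,
    not rotated, relative to the base frame (an illustrative simplification)."""
    sx, sy = polar_to_cart(*sensor_pose)   # sensor position in base frame
    px, py = polar_to_cart(*part_polar)    # part position in sensor frame
    x, y = sx + px, sy + py
    return math.hypot(x, y), math.atan2(y, x)  # back to polar form

def target_coordinate(estimates, d_max):
    """Average the per-sensor estimates (the 'fourth polar coordinate
    group') if they agree within d_max; return None to signal that a
    robot maintenance signal should be raised instead."""
    pts = [polar_to_cart(r, t) for r, t in estimates]
    d = math.dist(pts[0], pts[1])
    if d > d_max:
        return None  # fourth polar coordinate group fails the distance check
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return math.hypot(cx, cy), math.atan2(cy, cx)
```

For instance, two sensors that both place the part at polar (1.0, 0.0) pass the distance check and yield that point as the target coordinate, while estimates 1.0 apart fail a 0.5 threshold.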
Preferably, the target coordinates provide an operation position for the robot operation part; the operating parts include a robot hand and a nozzle.
Preferably, the maintenance module is configured to dispatch a person on duty to perform maintenance on the robot, and includes:
when the maintenance module receives a maintenance signal, acquiring an on-duty personnel statistical library through the data storage module; the maintenance signals comprise robot maintenance signals and part defect signals;
acquiring a position sent by a maintenance signal and marking the position as a target position; acquiring the state of an attendant through an attendant statistical library; the states include busy and idle;
screening out idle on-duty personnel to form a candidate personnel library; acquiring the position of a candidate personnel library and marking the position as an initial position;
planning a route from the initial position to the target position through a third-party platform and marking it as a standard route; the third-party platforms include Baidu Maps and Gaode Maps;
selecting the on-duty person corresponding to the route with the shortest length in the standard routes and marking the on-duty person as a target person;
the target position and the maintenance signal are sent to a target person, and the target person receives the target position and the maintenance signal and then arrives at the target position to process;
sending the dispatching record of the target personnel to a data storage module for storage through a processor; the dispatch record includes a dispatch time and a target location.
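The dispatch logic above — filter idle on-duty personnel, then pick the one with the shortest planned route — can be sketched as follows; `route_length` is a hypothetical stand-in for a mapping-platform routing API such as Baidu or Gaode:

```python
def dispatch(staff, target, route_length):
    """Select the target person for a maintenance signal.

    staff: list of dicts {'name', 'state', 'pos'} (the on-duty
    personnel statistical library); target: the target position;
    route_length: callable (pos, target) -> planned route length,
    standing in for a third-party routing platform.
    Returns the chosen person, or None if nobody is idle."""
    idle = [s for s in staff if s['state'] == 'idle']  # candidate library
    if not idle:
        return None
    # shortest standard route wins
    return min(idle, key=lambda s: route_length(s['pos'], target))
```

With Manhattan distance as a toy route planner, the idle worker nearest the fault location is chosen.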
Preferably, the internet of things terminal module is configured to perform preliminary screening on an original image to obtain a first image, and includes:
the method comprises the steps that an original image is collected through an image vision sensor and then is transmitted to an internet of things terminal module;
acquiring the temperature mean value of the CPU surface of the internet of things terminal module in real time and marking it as T;
acquiring the working frequency of the CPU of the internet of things terminal module in real time and marking it as f;
obtaining a CPU overload factor μ through a preset formula combining T, f and the master frequency F of the CPU of the internet of things terminal module; the proportionality coefficients in the formula are all real numbers greater than 0;
when the CPU overload factor μ exceeds its preset threshold, the CPU of the internet of things terminal module is judged to be overloaded, and a GPU is configured for the CPU of the internet of things terminal module through the processor; otherwise, no GPU is configured for the CPU of the internet of things terminal module;
after the Internet of things terminal module receives the original image, carrying out image preprocessing on the original image to obtain a verification image; the image preprocessing comprises image segmentation, image denoising and gray level transformation;
obtaining the gray-level mean value, maximum value and minimum value of the pixel points in the verification image, marked G, Gmax and Gmin respectively;
obtaining an image evaluation coefficient P through a preset formula combining G, Gmax and Gmin; the proportionality coefficients in the formula are all real numbers greater than 0;
when the image evaluation coefficient P satisfies P ≥ P0, judging that the quality of the verification image meets the requirement and marking the verification image as the first image; when P < P0, judging that the quality of the verification image does not meet the requirement and acquiring the original image again through the image vision sensor; where P0 is the image evaluation coefficient threshold, a real number greater than 0;
and respectively sending the first image to a production line connecting module and a data storage module.
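A minimal sketch of the screening step above, assuming the evaluation coefficient is a weighted combination of the gray-level mean and the dynamic range (the patent's actual formula and coefficients appear only as figures, so the weights and threshold below are placeholders):

```python
def image_eval_coefficient(pixels, b1=0.5, b2=0.5):
    """Hypothetical image evaluation coefficient for an 8-bit grayscale
    verification image: combines the normalized gray mean with the
    normalized dynamic range (max - min)."""
    g_avg = sum(pixels) / len(pixels)
    g_max, g_min = max(pixels), min(pixels)
    return b1 * g_avg / 255 + b2 * (g_max - g_min) / 255

def screen(pixels, threshold=0.6):
    """Mark the verification image as a 'first image' when the
    coefficient reaches the threshold; otherwise the original image
    should be re-captured."""
    return image_eval_coefficient(pixels) >= threshold
```

A high-contrast image passes the screen, while a nearly flat, low-contrast one is rejected and re-captured.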
Preferably, the production line connection module is used for supervising the production process, and includes:
extracting the part region from the first image and marking it as the part image;
acquiring a standard image in a data storage module; the standard image is a part reference image of a corresponding position of the image acquisition unit;
matching the part image with the part sizes in the standard image; when the part sizes match, judging that the production process is normal; when the part sizes do not match, judging that the production process is abnormal and sending a process abnormality signal to the maintenance module through the processor; the part sizes comprise the length, width and height of the part, and matching means that the difference in part sizes is within the allowable error;
sending the sending record of the process abnormal signal to a data storage module through a processor; and simultaneously sending the first image to a defect detection module.
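The size-matching rule above — each dimension within an allowable error — might be sketched as follows (the tolerance value and units are placeholders, not from the patent):

```python
def sizes_match(part, standard, tol=0.5):
    """Compare measured part sizes (length, width, height) against the
    standard image's reference sizes. 'Matching' means every dimension
    differs by no more than the allowable error tol (an assumed value)."""
    return all(abs(p - s) <= tol for p, s in zip(part, standard))
```

A part measuring (10.1, 5.0, 2.2) against a (10.0, 5.0, 2.0) standard matches under a 0.5 tolerance; a 1.0 deviation in any dimension would trigger a process abnormality signal.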
Preferably, the defect detecting module is configured to detect a defect of the part in the first image, and includes:
acquiring feature data of a part image; the characteristic data comprises gray value characteristics, gray difference characteristics, histogram characteristics, transformation coefficient characteristics, gray edge characteristics and texture characteristics;
performing dimension reduction processing on the feature data through a dimension reduction method to obtain main features; the dimensionality reduction method comprises a principal component analysis method, a random mapping method and a non-negative matrix factorization method;
detecting the part defects in the part image by combining the main features with an image morphology method to obtain a defect detection result; when the defect detection result is empty, judging that the part has no defects, and sending a coordinate analysis signal to a coordinate analysis module through a processor; when the defect detection result is non-empty, judging that the part has a defect, and sending a part defect signal to a maintenance module through a processor;
and sending the sending record of the part defect signal to a data storage module for storage through a processor.
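One way to realize the dimension-reduction step named above is principal component analysis; the outlier-style defect flagging below is an illustrative assumption standing in for the patent's image morphology method, and the thresholds are placeholders:

```python
import numpy as np

def principal_features(feature_matrix, k=2):
    """Reduce per-region feature vectors (gray value, gray difference,
    histogram, texture measures, ...) to k principal components via
    SVD-based PCA; rows of Vt are the principal axes."""
    X = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

def detect_defects(feature_matrix, k=2, z_thresh=3.0):
    """Flag regions whose principal-feature projection lies far from
    the bulk of the data; an empty result means 'no defect' (so a
    coordinate analysis signal would follow), a non-empty one means
    'defect' (so a part defect signal would follow)."""
    proj = principal_features(feature_matrix, k)
    dist = np.linalg.norm(proj, axis=1)
    cutoff = dist.mean() + z_thresh * dist.std()
    return np.nonzero(dist > cutoff)[0]
```

With twenty near-identical feature rows and one extreme row, only the extreme row is flagged as defective.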
Preferably, the illumination unit is used for ensuring that the image acquisition unit acquires a clear original image, and includes:
acquiring light intensity through a light intensity sensor and marking the light intensity as current light intensity;
acquiring an LED current change curve in a data storage module; the LED current change curve is generated through historical data, the historical data is a corresponding relation between light intensity and optimal current of an LED illuminating lamp, and the optimal current is power supply current when an image acquisition unit acquires a clear original image;
the current light intensity is substituted into the LED current change curve to obtain the power supply current of the LED illuminating lamp; and the brightness of the LED illuminating lamp is adjusted according to the power supply current.
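The curve lookup described above can be read as linear interpolation over historical (light intensity, optimal current) samples; the sample points in the example are illustrative, not taken from the patent:

```python
def led_current(light_intensity, curve):
    """Interpolate the LED supply current for the measured light
    intensity from a historical (intensity, optimal current) curve,
    clamping to the curve's endpoints outside its range."""
    pts = sorted(curve)
    if light_intensity <= pts[0][0]:
        return pts[0][1]
    if light_intensity >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= light_intensity <= x1:
            # linear interpolation between neighbouring samples
            return y0 + (y1 - y0) * (light_intensity - x0) / (x1 - x0)
```

With a toy curve `[(0, 300), (500, 150), (1000, 0)]` (intensity in lux, current in mA, both assumed), a reading of 250 lux yields 225 mA, and intensities beyond 1000 lux clamp to 0 mA.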
Compared with the prior art, the invention has the beneficial effects that:
1. The invention is provided with an internet of things terminal module for primary screening of the original image to obtain the first image. The original image is collected through the image vision sensor and transmitted to the internet of things terminal module; the temperature mean value T of the CPU surface and the working frequency f of the CPU of the internet of things terminal module are acquired in real time, and the CPU overload factor μ is obtained; when μ exceeds its threshold, the CPU of the internet of things terminal module is judged to be overloaded and a GPU is configured for it through the processor, otherwise no GPU is configured. After receiving the original image, the internet of things terminal module preprocesses it to obtain a verification image; the gray-level mean, maximum and minimum values of the pixel points in the verification image are obtained, and the image evaluation coefficient P is computed; when P meets the threshold, the quality of the verification image is judged to meet the requirement and the image is marked as the first image; otherwise the original image is acquired again through the image vision sensor. The internet of things terminal module screens out the first image by an edge computing method, improving the quality of the first image while improving the working efficiency of the invention;
2. The invention is provided with a defect detection module for detecting defects of the part in the first image. Feature data of the part image is acquired and reduced to main features by a dimension-reduction method; the main features are combined with an image morphology method to detect part defects in the part image and obtain a defect detection result. When the result is empty, the part is judged to have no defects and a coordinate analysis signal is sent to the coordinate analysis module through the processor; when the result is non-empty, the part is judged to have a defect and a part defect signal is sent to the maintenance module through the processor. By detecting part defects with image analysis technology, the defect detection module ensures part quality and improves product yield;
3. The invention is provided with a coordinate analysis module for establishing the coordinate relation between the part position and the industrial robot. After receiving the coordinate analysis signal, the module establishes a polar coordinate system with a base point of the industrial robot's base as the pole (the first coordinate system) and acquires the polar coordinates of the image vision sensors as the first polar coordinate group; part images are acquired through the image vision sensors; a second coordinate system is established with the central position of the part in the part image as the pole, and the polar coordinates of the image vision sensors in it are marked as the second polar coordinate group; combining the second polar coordinate group with the second coordinate system determines the polar coordinates of the part's central position in the coordinate system taking each image vision sensor as the pole (the third polar coordinate group); combining the third polar coordinate group with the first coordinate system determines the polar coordinates of the part's central position in the first coordinate system (the fourth polar coordinate group). The distance d between the polar coordinates in the fourth polar coordinate group is obtained; when d lies within the distance thresholds, the fourth polar coordinate group is judged to meet the requirements, otherwise a robot maintenance signal is sent to the maintenance module; when the fourth polar coordinate group meets the requirements, the mean of its polar coordinates is acquired and marked as the target coordinate. Through conversion between polar coordinate systems, the coordinate analysis module checks the precision of the robot while establishing a foundation for its accurate and rapid operation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the system for recognizing visual images of an industrial robot based on artificial intelligence includes a processor, a coordinate analysis module, a maintenance module, a data storage module, an internet of things terminal module, a production line connection module, and a defect detection module;
the Internet of things terminal module is in communication connection with the image acquisition unit; the image acquisition unit comprises two image vision sensors and an illumination unit; the lighting unit comprises at least one LED lighting lamp and at least one light intensity sensor;
the coordinate analysis module is used for establishing a coordinate relation between the part position and the industrial robot, which comprises:
after the coordinate analysis module receives a coordinate analysis signal, a polar coordinate system is established with a base point of the industrial robot's base as the pole and marked as the first coordinate system; the polar coordinates of the image vision sensors in the first coordinate system are acquired and marked as the first polar coordinate group;
acquiring a part image through an image vision sensor; the part images are at least two images;
establishing a second coordinate system with the central position of the part in the part image as the pole; the polar coordinates of the image vision sensors in the second coordinate system are acquired and marked as the second polar coordinate group;
the second polar coordinate group is combined with the second coordinate system to determine the polar coordinates of the part's central position in the coordinate system taking each image vision sensor as the pole, marked as the third polar coordinate group;
the third polar coordinate group is combined with the first coordinate system to determine the polar coordinates of the part's central position in the first coordinate system, marked as the fourth polar coordinate group; the first, second, third and fourth polar coordinate groups each comprise at least two polar coordinates;
the distance between the polar coordinates in the fourth polar coordinate group is acquired and marked as d;
when the distance d lies within the range set by the distance thresholds L1 and L2, the fourth polar coordinate group is judged to meet the requirements; otherwise, the fourth polar coordinate group is judged not to meet the requirements, and a robot maintenance signal is sent to the maintenance module;
when the fourth polar coordinate group meets the requirements, the mean value of the polar coordinates in the fourth polar coordinate group is acquired and marked as the target coordinate;
and the target coordinate and the sending record of the robot maintenance signal are sent to the data storage module for storage through the processor.
Further, the target coordinates provide an operation position for the robot operation part; the operation parts include a robot hand and a nozzle.
Further, the maintenance module is used for dispatching a person on duty to maintain the robot, and comprises:
when the maintenance module receives a maintenance signal, acquiring an on-duty personnel statistical library through the data storage module; the maintenance signals comprise robot maintenance signals and part defect signals;
acquiring a position sent by a maintenance signal and marking the position as a target position; acquiring the state of an attendant through an attendant statistical library; the states include busy and idle;
screening out idle on-duty personnel to form a candidate personnel library; acquiring the position of a candidate personnel library and marking the position as an initial position;
planning a route from the initial position to the target position through a third-party platform and marking it as a standard route; the third-party platforms include Baidu Maps and Gaode Maps;
selecting the on-duty person corresponding to the route with the shortest length in the standard routes and marking the on-duty person as a target person;
the target position and the maintenance signal are sent to a target person, and the target person receives the target position and the maintenance signal and then arrives at the target position to process;
sending the dispatching record of the target personnel to a data storage module for storage through a processor; the dispatch record includes a dispatch time and a target location.
Further, the internet of things terminal module is used for performing preliminary screening on the original image to obtain the first image, comprising:
the method comprises the steps that an original image is collected through an image vision sensor and then is transmitted to an internet of things terminal module;
acquiring the temperature mean value of the CPU surface of the internet of things terminal module in real time and marking it as T;
acquiring the working frequency of the CPU of the internet of things terminal module in real time and marking it as f;
obtaining a CPU overload factor μ through a preset formula combining T, f and the master frequency F of the CPU of the internet of things terminal module; the proportionality coefficients in the formula are all real numbers greater than 0;
when the CPU overload factor μ exceeds its preset threshold, the CPU of the internet of things terminal module is judged to be overloaded, and a GPU is configured for the CPU of the internet of things terminal module through the processor; otherwise, no GPU is configured for the CPU of the internet of things terminal module;
after the Internet of things terminal module receives the original image, carrying out image preprocessing on the original image to obtain a verification image; the image preprocessing comprises image segmentation, image denoising and gray level transformation;
acquiring the gray-level mean value, maximum value and minimum value of the pixel points in the verification image and marking them as G, Gmax and Gmin respectively;
obtaining an image evaluation coefficient P through a preset formula combining G, Gmax and Gmin; the proportionality coefficients in the formula are all real numbers greater than 0;
when the image evaluation coefficient P satisfies P ≥ P0, judging that the quality of the verification image meets the requirement and marking the verification image as the first image; when P < P0, judging that the quality of the verification image does not meet the requirement and acquiring the original image again through the image vision sensor; where P0 is the image evaluation coefficient threshold, a real number greater than 0;
and respectively sending the first image to a production line connecting module and a data storage module.
Further, the production line connection module is used for supervising the production process, and comprises:
extracting the part region from the first image and marking it as the part image;
acquiring a standard image in a data storage module; the standard image is a part reference image of a corresponding position of the image acquisition unit;
matching the part image with the part sizes in the standard image; when the part sizes match, judging that the production process is normal; when the part sizes do not match, judging that the production process is abnormal and sending a process abnormality signal to the maintenance module through the processor; the part sizes comprise the length, width and height of the part, and matching means that the difference in part sizes is within the allowable error;
sending the sending record of the process abnormal signal to a data storage module through a processor; and simultaneously sending the first image to a defect detection module.
Further, the defect detecting module is used for detecting the defect of the part in the first image, and comprises:
acquiring feature data of a part image; the characteristic data comprises gray value characteristics, gray difference characteristics, histogram characteristics, transformation coefficient characteristics, gray edge characteristics and texture characteristics;
performing dimension reduction processing on the feature data through a dimension reduction method to obtain main features; the dimension reduction method comprises a principal component analysis method, a random mapping method and a non-negative matrix decomposition method;
detecting the part defects in the part image by combining the main features with an image morphology method to obtain a defect detection result; when the defect detection result is empty, judging that the part has no defects, and sending a coordinate analysis signal to a coordinate analysis module through a processor; when the defect detection result is non-empty, judging that the part has a defect, and sending a part defect signal to a maintenance module through a processor;
and sending the sending record of the part defect signal to a data storage module for storage through a processor.
Further, the lighting unit is used for ensuring that the image acquisition unit acquires a clear original image, and comprises:
acquiring light intensity through a light intensity sensor and marking the light intensity as current light intensity;
acquiring an LED current change curve in a data storage module; the LED current change curve is generated through historical data, the historical data is the corresponding relation between light intensity and the optimal current of an LED illuminating lamp, and the optimal current is the power supply current when an image acquisition unit acquires a clear original image;
the current light intensity is substituted into the LED current change curve to obtain the power supply current of the LED illuminating lamp; and the brightness of the LED illuminating lamp is adjusted according to the power supply current.
Further, the processor is respectively in communication connection with the coordinate analysis module, the maintenance module, the data storage module, the internet of things terminal module, the production line connection module and the defect detection module; the data storage module is in communication connection with the maintenance module, and the maintenance module is in communication connection with the coordinate analysis module; the production line connecting module is respectively in communication connection with the Internet of things terminal module and the defect detection module.
In the above formulas, dimensions are removed and only the numerical values are used; each formula is the one closest to the real situation, obtained by collecting a large amount of data and performing software simulation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The working principle of the invention is as follows:
the method comprises the steps that an original image is collected through an image vision sensor and then is transmitted to the internet of things terminal module; the temperature mean value of the CPU surface of the internet of things terminal module is acquired in real time; the working frequency of the CPU of the internet of things terminal module is acquired in real time; a CPU overload factor is obtained from them; when the CPU overload factor satisfies the overload condition, the CPU of the internet of things terminal module is judged to be overloaded, and a GPU is configured for the CPU of the internet of things terminal module through the processor; otherwise, no GPU is configured for the CPU of the internet of things terminal module; after the internet of things terminal module receives the original image, image preprocessing is carried out on the original image to obtain a verification image; the gray average value, the gray maximum value and the gray minimum value of the pixel points in the verification image are obtained, and an image evaluation coefficient is obtained from them; when the image evaluation coefficient satisfies the evaluation threshold, the quality of the verification image is judged to meet the requirement, and the verification image is marked as a first image; when the image evaluation coefficient does not satisfy the evaluation threshold, the quality of the verification image is judged not to meet the requirement, and the original image is acquired again through the image vision sensor;
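The exact overload and image-evaluation formulas are not reproduced in this text, so the sketch below shows one plausible form only: it assumes the overload factor weighs surface temperature against how close the working frequency sits to the main frequency, and that the evaluation coefficient rewards dynamic range and mid-tone balance. All function names, coefficients and the `threshold` are hypothetical.

```python
def cpu_overload_factor(temp_mean, work_freq, main_freq, a1=0.6, a2=0.4):
    """Hypothetical overload factor: weighted surface temperature plus
    weighted utilisation of the CPU's main (maximum) frequency."""
    return a1 * temp_mean / 100.0 + a2 * work_freq / main_freq

def image_evaluation_coefficient(gray_mean, gray_max, gray_min, b1=0.5, b2=0.5):
    """Hypothetical evaluation coefficient built from the three gray
    statistics the patent names: spread of the usable dynamic range
    combined with closeness of the mean to mid gray."""
    spread = (gray_max - gray_min) / 255.0
    balance = 1.0 - abs(gray_mean - 127.5) / 127.5
    return b1 * spread + b2 * balance

def screen_image(gray_mean, gray_max, gray_min, threshold=0.6):
    """Accept the verification image as a 'first image' only when its
    evaluation coefficient reaches the preset threshold."""
    return image_evaluation_coefficient(gray_mean, gray_max, gray_min) >= threshold
```

A low-contrast, washed-out verification image scores near zero and is rejected, triggering re-acquisition; a well-exposed one passes and becomes the first image.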
extracting the first image, extracting the part region from it and marking it as a part image; acquiring the standard image in the data storage module; matching the part image with the part size in the standard image, and judging that the production process is normal when the part sizes match; when the part sizes do not match, judging that the production process is abnormal, and sending a process abnormal signal to the maintenance module through the processor;
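The size-matching step defines "matching consistency" as each dimension differing from the standard by no more than an allowable error. A minimal sketch, assuming measured and standard sizes are (length, width, height) tuples in the same unit and that `tol` stands in for the unspecified allowable error:

```python
def sizes_match(part_size, standard_size, tol=0.5):
    """Return True when every dimension of the measured part differs
    from the standard image's reference size by at most tol."""
    return all(abs(p - s) <= tol for p, s in zip(part_size, standard_size))
```

A False result here would correspond to sending the process abnormal signal to the maintenance module.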
acquiring feature data of a part image; performing dimension reduction processing on the feature data through a dimension reduction method to obtain main features; detecting the part defects in the part image by combining the main features with an image morphology method to obtain a defect detection result; when the defect detection result is empty, judging that the part has no defects, and sending a coordinate analysis signal to a coordinate analysis module through a processor; when the defect detection result is non-empty, judging that the part has a defect, and sending a part defect signal to a maintenance module through a processor;
after the coordinate analysis module receives the coordinate analysis signal, a polar coordinate system is established by taking a base point of the base of the industrial robot as the pole and is marked as a first coordinate system; the polar coordinates of the image vision sensors are acquired and marked as a first polar coordinate group; a part image is acquired through the image vision sensors; a second coordinate system is established by taking the central position of the part in the part image as the pole; the polar coordinates of the image vision sensors in the second coordinate system are acquired and marked as a second polar coordinate group; the second polar coordinate group is combined with the second coordinate system to determine the polar coordinates of the central position of the part in the coordinate system taking the image vision sensor as the pole, marked as a third polar coordinate group; the third polar coordinate group is combined with the first coordinate system to determine the polar coordinates of the central position of the part in the first coordinate system, marked as a fourth polar coordinate group; the distances between the polar coordinates in the fourth polar coordinate group are obtained; when the distances satisfy the preset distance threshold, the fourth coordinate group is judged to meet the requirements; otherwise, the fourth coordinate group is judged not to meet the requirements, and a robot maintenance signal is sent to the maintenance module; when the fourth coordinate group meets the requirements, the mean value of the polar coordinates in the fourth coordinate group is acquired and marked as the target coordinate.
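The chain from the camera's polar pose to the target coordinate can be sketched as follows. This is an illustration only, not the patent's own computation: it assumes the camera and robot-base polar frames have parallel axes, and `part_in_base_frame`, `target_coordinate` and `max_dist` are hypothetical names standing in for the unspecified transformation and distance threshold.

```python
import math

def polar_to_cart(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def cart_to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)

def part_in_base_frame(camera_polar, part_polar_in_camera):
    """Compose the camera's polar pose in the robot-base system with the
    part center's polar coordinate in the camera system, yielding one
    'fourth' polar coordinate of the part in the base system."""
    cx, cy = polar_to_cart(*camera_polar)
    px, py = polar_to_cart(*part_polar_in_camera)
    return cart_to_polar(cx + px, cy + py)

def target_coordinate(fourth_set, max_dist=1.0):
    """Precision check: accept the fourth coordinate group only if every
    pairwise distance stays below max_dist, then return the mean position
    (in polar form) as the target coordinate; None means maintenance."""
    pts = [polar_to_cart(r, t) for r, t in fourth_set]
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) > max_dist:
                return None  # fourth group inconsistent -> robot maintenance signal
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return cart_to_polar(mx, my)
```

Averaging in Cartesian form before converting back avoids the pitfalls of averaging angles directly; two sensors that disagree by more than the threshold trigger the maintenance path instead of producing a bogus mean.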
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.
Claims (7)
1. The industrial robot visual image recognition system based on artificial intelligence is characterized by comprising a processor, a coordinate analysis module, a maintenance module, a data storage module, an internet of things terminal module, a production line connection module and a defect detection module;
the internet of things terminal module is in communication connection with the image acquisition unit; the image acquisition unit comprises two image vision sensors and an illumination unit; the lighting unit comprises at least one LED lighting lamp and at least one light intensity sensor;
the coordinate analysis module is used for establishing a coordinate relation between a part position and an industrial robot, and comprises the following components:
after the coordinate analysis module receives the coordinate analysis signal, a polar coordinate system is established by taking a base point of the base of the industrial robot as the pole and is marked as a first coordinate system; the polar coordinates of the image vision sensors are acquired and marked as a first polar coordinate group;
acquiring a part image through an image vision sensor; the part images are at least two images;
establishing a second coordinate system by taking the central position of the part in the part image as the pole; acquiring the polar coordinates of the image vision sensors in the second coordinate system and marking them as a second polar coordinate group;
the second polar coordinate set is combined with the second coordinate system to determine the polar coordinates of the center position of the part in the coordinate system taking the image vision sensor as the pole and mark the polar coordinates as a third polar coordinate set;
the third polar coordinate set is combined with the first coordinate system to determine the polar coordinates of the central position of the part in the first coordinate system and is marked as a fourth polar coordinate set; the first polar coordinate set, the second polar coordinate set, the third polar coordinate set and the fourth polar coordinate set at least comprise two polar coordinates;
acquiring the distance between the polar coordinates in the fourth polar coordinate group;
when the distance satisfies the preset distance threshold, judging that the fourth coordinate group meets the requirements; otherwise, judging that the fourth coordinate group does not meet the requirements, and sending a robot maintenance signal to the maintenance module;
when the fourth coordinate group meets the requirements, acquiring the mean value of the polar coordinates in the fourth coordinate group and marking the mean value as a target coordinate;
and sending the target coordinates and the sending record of the robot maintenance signal to a data storage module for storage through a processor.
2. An artificial intelligence based industrial robot visual image recognition system according to claim 1 wherein the target coordinates provide an operating position for a robot operating part; the operating parts include a robot hand and a nozzle.
3. An industrial robot visual image recognition system based on artificial intelligence according to claim 1, wherein the repair and maintenance module is adapted to dispatch a person on duty for repair and maintenance of the robot, comprising:
when the maintenance module receives a maintenance signal, acquiring an on-duty personnel statistical library through the data storage module; the maintenance signals comprise robot maintenance signals and part defect signals;
acquiring a position sent by a maintenance signal and marking the position as a target position; acquiring the state of an attendant through an attendant statistical library; the states include busy and idle;
screening out idle on-duty personnel to form a candidate personnel library; acquiring the position of a candidate personnel library and marking the position as an initial position;
planning a route from the initial position to the target position through a third-party platform and marking the route as a standard route; the third-party platform comprises Baidu Maps and Gaode Maps;
selecting the on-duty person corresponding to the route with the shortest length in the standard routes and marking the on-duty person as a target person;
the target position and the maintenance signal are sent to a target person, and the target person receives the target position and the maintenance signal and then arrives at the target position to process;
sending the dispatching record of the target personnel to a data storage module for storage through a processor; the dispatch record includes a dispatch time and a target location.
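The dispatch logic of claim 3 (filter idle attendants, query a route per candidate, pick the shortest) can be sketched minimally. `dispatch` and `route_length` are hypothetical names; `route_length` stands in for the third-party map-platform query, which in practice returns a planned route length rather than a straight-line distance.

```python
def dispatch(candidates, route_length):
    """Pick the idle on-duty person whose planned route to the fault
    location is shortest; candidates are dicts with 'state' and
    'position' keys, route_length maps a position to a route length."""
    idle = [p for p in candidates if p["state"] == "idle"]
    if not idle:
        return None  # nobody free: the signal would have to queue
    return min(idle, key=lambda p: route_length(p["position"]))
```

The selected record would then receive the target position and maintenance signal, and the dispatch time and target location would be logged to the data storage module.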
4. The system according to claim 1, wherein the terminal module of internet of things is configured to perform a preliminary screening on the original image to obtain the first image, and comprises:
the method comprises the steps that an original image is collected through an image vision sensor and then is transmitted to an internet of things terminal module;
acquiring the temperature mean value of the CPU surface of the internet of things terminal module in real time and marking it;
acquiring the working frequency of the CPU of the internet of things terminal module in real time and marking it;
obtaining a CPU overload factor through a formula combining the temperature mean value and the working frequency with proportionality coefficients; the proportionality coefficients are all real numbers greater than 0, and the formula also uses the main frequency of the CPU of the internet of things terminal module;
when the CPU overload factor satisfies the overload condition, judging that the CPU of the internet of things terminal module is overloaded, and configuring a GPU for the CPU of the internet of things terminal module through the processor; otherwise, not configuring the GPU for the CPU of the internet of things terminal module;
after the Internet of things terminal module receives the original image, carrying out image preprocessing on the original image to obtain a verification image; the image preprocessing comprises image segmentation, image denoising and gray level transformation;
acquiring the gray average value, the gray maximum value and the gray minimum value of the pixel points in the verification image and marking them respectively;
obtaining an image evaluation coefficient through a formula combining the gray average value, the gray maximum value and the gray minimum value with proportionality coefficients; the proportionality coefficients are all real numbers greater than 0;
when the image evaluation coefficient satisfies the image evaluation coefficient threshold, judging that the quality of the verification image meets the requirement, and marking the verification image as a first image; when the image evaluation coefficient does not satisfy the threshold, judging that the quality of the verification image does not meet the requirement, and acquiring the original image again through the image vision sensor; the image evaluation coefficient threshold is a real number greater than 0;
and respectively sending the first image to a production line connecting module and a data storage module.
5. The system of claim 1, wherein the line connecting module is configured to supervise a production process, and comprises:
extracting a first image, extracting a part image and marking as a part image;
acquiring a standard image in a data storage module; the standard image is a part reference image of a corresponding position of the image acquisition unit;
matching the part image with the part size in the standard image, and judging that the production process is normal when the part sizes match consistently; when the part sizes do not match consistently, judging that the production process is abnormal, and sending a process abnormal signal to the maintenance module through the processor; the part sizes comprise the length, width and height of the part, and matching consistency means that the difference in part size is within the allowable error;
sending the sending record of the process abnormal signal to a data storage module through a processor; and simultaneously sending the first image to a defect detection module.
6. An artificial intelligence based industrial robot visual image recognition system as claimed in claim 1 wherein the defect detection module is for detecting defects in the part in the first image and comprises:
acquiring feature data of a part image; the characteristic data comprises gray value characteristics, gray difference characteristics, histogram characteristics, transformation coefficient characteristics, gray edge characteristics and texture characteristics;
performing dimension reduction processing on the feature data through a dimension reduction method to obtain main features; the dimensionality reduction method comprises a principal component analysis method, a random mapping method and a non-negative matrix factorization method;
detecting the part defects in the part image by combining the main features with an image morphology method to obtain a defect detection result; when the defect detection result is empty, judging that the part has no defects, and sending a coordinate analysis signal to a coordinate analysis module through a processor; when the defect detection result is non-empty, judging that the part has a defect, and sending a part defect signal to a maintenance module through a processor;
and sending the sending record of the part defect signal to a data storage module for storage through a processor.
7. The artificial intelligence based industrial robot visual image recognition system of claim 1, wherein the illumination unit is used for ensuring that the image acquisition unit acquires a clear original image, and comprises:
acquiring light intensity through a light intensity sensor and marking the light intensity as current light intensity;
acquiring an LED current change curve in a data storage module; the LED current change curve is generated through historical data, the historical data is a corresponding relation between light intensity and optimal current of an LED illuminating lamp, and the optimal current is power supply current when an image acquisition unit acquires a clear original image;
substituting the current light intensity into the LED current change curve to obtain the power supply current of the LED illuminating lamp; and adjusting the brightness of the LED illuminating lamp according to the power supply current.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110002755.5A CN112847321B (en) | 2021-01-04 | 2021-01-04 | Industrial robot visual image recognition system based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110002755.5A CN112847321B (en) | 2021-01-04 | 2021-01-04 | Industrial robot visual image recognition system based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112847321A CN112847321A (en) | 2021-05-28 |
CN112847321B true CN112847321B (en) | 2021-12-28 |
Family
ID=76001484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110002755.5A Active CN112847321B (en) | 2021-01-04 | 2021-01-04 | Industrial robot visual image recognition system based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112847321B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469560B (en) * | 2021-07-19 | 2022-05-10 | 北京东华博泰科技有限公司 | Cloud platform data management system based on industrial big data |
CN117197133B (en) * | 2023-11-06 | 2024-01-30 | 湖南睿图智能科技有限公司 | Control system and method for vision robot in complex industrial environment |
CN117549314B (en) * | 2024-01-09 | 2024-03-19 | 承德石油高等专科学校 | Industrial robot intelligent control system based on image recognition |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5744587B2 (en) * | 2011-03-24 | 2015-07-08 | キヤノン株式会社 | Robot control apparatus, robot control method, program, and recording medium |
CN105234943B (en) * | 2015-09-09 | 2018-08-14 | 大族激光科技产业集团股份有限公司 | A kind of industrial robot teaching device and method of view-based access control model identification |
CN105965519A (en) * | 2016-06-22 | 2016-09-28 | 江南大学 | Vision-guided discharging positioning method of clutch |
CN106607907B (en) * | 2016-12-23 | 2017-09-26 | 西安交通大学 | A kind of moving-vision robot and its investigating method |
CN106938463A (en) * | 2017-05-02 | 2017-07-11 | 上海贝特威自动化科技有限公司 | A kind of method of large plate positioning crawl |
CN110125926B (en) * | 2018-02-08 | 2021-03-26 | 比亚迪股份有限公司 | Automatic workpiece picking and placing method and system |
CN111452034A (en) * | 2019-01-21 | 2020-07-28 | 广东若铂智能机器人有限公司 | Double-camera machine vision intelligent industrial robot control system and control method |
-
2021
- 2021-01-04 CN CN202110002755.5A patent/CN112847321B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20230523 Address after: Room 303, Building 5, No. 93 Yunhe South Road (Zhonggang Metal Trading City), Guangling District, Yangzhou City, Jiangsu Province, 225000 Patentee after: Jiangsu Juqun Construction Engineering Co.,Ltd. Address before: 225000 Wenchang West Road, Yangzhou City, Jiangsu Province Patentee before: Yangzhou Vocational University (Yangzhou Radio and TV University) |