CN110163232B - Intelligent vision recognition vehicle board transformer coordinate system - Google Patents
- Publication number: CN110163232B (application CN201810977200.0A)
- Authority
- CN
- China
- Prior art keywords
- transformer
- image
- coordinate
- coordinate system
- library
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/752—Contour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the field of artificial intelligence, in particular to artificial-intelligence image processing, and more specifically to a coordinate system for intelligent visual recognition of a transformer on a vehicle plate. With the technical scheme disclosed by the invention, the site coordinate position of the transformer on the vehicle plate can be obtained automatically with high accuracy and high precision, providing basic data for subsequent automated operations.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to artificial-intelligence image processing, and more specifically to a coordinate system for intelligent visual recognition of a transformer on a vehicle plate.
Background
Intelligent vision is an important branch of artificial intelligence. "Artificial intelligence" is a broad concept and a research focus in many fields, expressed differently in each of them; intelligent vision is its application branch in the field of image processing.
Intelligent vision is the science of making machines "see": a camera or vision sensor takes the place of the human eye to identify, track, and measure targets, and a computer processes the resulting data into images better suited to human observation.
With the continuous development of artificial-intelligence image processing and its combination with big data and other information, current techniques can simulate the natural selection of evolutionary theory and the computational model of the genetic-evolution process. Randomly searching for an optimal solution by simulating natural evolution embodies the principle of survival of the fittest. Such methods operate directly on structured objects, are not constrained by requirements of differentiability or function continuity, and offer parallelism and strong global optimization capability.
Industrial automation in large warehousing scenarios, in particular automation of the loading and unloading of warehoused materials, necessarily depends on accurate identification of the transport vehicle plate and the materials loaded on it. Only by accurately identifying the position and size of the materials can accurate loading and unloading signals be given, ensuring the reliability of automated handling operations.
Transformers are common warehoused materials in the power industry, yet the industry currently has no intelligent visual identification system or method for transformers, and existing identification systems and methods are unsuitable for transformers in large warehousing scenarios. Accurate identification and coordinate positioning of transformers is therefore one of the main problems limiting industrial automation of transformer handling in such scenarios.
In conclusion, finding a reliable coordinate system for intelligent visual recognition of vehicle-plate transformers is an urgent problem to be solved in the transformer warehousing process in large warehousing scenarios.
Disclosure of Invention
The invention aims to solve the technical problem of providing an intelligent visual recognition vehicle-plate transformer coordinate system with good universality, high stability, and high accuracy.
To solve this technical problem, the invention discloses an intelligent visual recognition vehicle-plate transformer coordinate system, which achieves intelligent visual recognition of the vehicle-plate transformer coordinates through the following steps:
step 1: establishing an image feature library and a contour shape recognition library of a recognition object, wherein the image feature library and the contour shape recognition library comprise a vehicle head image feature library and a contour shape recognition library, a vehicle plate image feature library and a contour shape recognition library, a ground image feature library and a contour shape recognition library, and various types of transformer image feature libraries and contour shape recognition libraries;
step 2: comparing the visual point cloud picture to be identified with an HSV color model in an image feature library, extracting an object outline, comparing the object outline with an outline shape identification library, removing identified car heads, car boards, the ground and other invalid areas, and only retaining image information of the transformer to be identified;
step 3: removing boundary points through image erosion according to the image characteristic value of the target transformer, shrinking the boundary inward and eliminating small, meaningless objects; performing image segmentation through edge detection; and extracting the contour of each transformer to be unloaded on the vehicle plate;
step 4: according to the contour of each transformer image and the HSV color template in the image feature library, calculating the pixel center-point coordinate of the transformer image as the intersection point of the two diagonals of its bounding frame;
step 5: converting the coordinate value of each transformer's pixel center point into loading and unloading site coordinates in millimeters, according to the site coordinate value of the point-cloud pixel origin and the size-scale and deflection-angle relationship.
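As a rough illustration of steps 2 through 5, the sketch below runs the same sequence on a synthetic grayscale image: mask out everything except pixels matching a target feature value, take the bounding-frame center of what remains, and convert pixels to site millimeters. All names, thresholds, and the per-pixel scale are hypothetical; the patent does not disclose concrete calibration values.

```python
import numpy as np

def locate_target_mm(img, target_gray, tol, origin_mm, mm_per_px, angle_rad):
    """Toy version of steps 2-5: isolate pixels near the target gray value,
    take the bounding-frame centre, and map it to site coordinates in mm."""
    # Step 2: keep only pixels whose gray value matches the target feature
    mask = np.abs(img.astype(int) - target_gray) <= tol
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Step 4: centre of the bounding frame = intersection of its diagonals
    cx = (xs.min() + xs.max()) / 2.0
    cy = (ys.min() + ys.max()) / 2.0
    # Step 5: scale to mm, then rotate by the deflection angle of the site frame
    x_mm, y_mm = cx * mm_per_px, cy * mm_per_px
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return (origin_mm[0] + c * x_mm - s * y_mm,
            origin_mm[1] + s * x_mm + c * y_mm)

# Synthetic 10x10 image: background gray 40, a 4x4 "transformer" patch of gray 200
img = np.full((10, 10), 40, dtype=np.uint8)
img[3:7, 2:6] = 200
print(locate_target_mm(img, 200, 10, (1000.0, 2000.0), 5.0, 0.0))
```

With a zero deflection angle the result is simply the origin plus the scaled pixel center.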
As a preferred technical scheme, the object-identification method using the HSV color model in step 2 divides the range from white to black into several levels according to a logarithmic relationship; these levels are called gray levels. The scanned image is then compared respectively against the ground gray-image characteristic value, the vehicle-plate gray-image characteristic value, and the vehicle-head gray-image characteristic value, the image is binarized, and the invalid identification areas are removed.
This binarization greatly reduces the data volume of the image, highlighting the target contour, making invalid identification areas easier to remove, and reducing the amount of computation.
The HSV color model is a color model formed by taking hue H, saturation S and lightness V as parameters.
Preferably, the gray scale ranges from 0 to 255, with 255 for white and 0 for black.
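The comparison-and-binarization step could look like the sketch below: pixels whose gray value matches any stored background characteristic value (ground, vehicle plate, vehicle head) are set to black, and everything else is kept as white candidate transformer pixels. The characteristic values and tolerance here are hypothetical placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical gray characteristic values for the regions to be removed;
# the patent does not disclose the actual calibrated values.
REGION_GRAY = {"ground": 60, "plate": 120, "head": 180}
TOL = 15  # allowed deviation around each characteristic value (assumed)

def binarize_valid(img):
    """Set pixels matching any known background region to 0 (black) and
    everything else -- candidate transformer pixels -- to 255 (white)."""
    invalid = np.zeros(img.shape, dtype=bool)
    for gray in REGION_GRAY.values():
        invalid |= np.abs(img.astype(int) - gray) <= TOL
    return np.where(invalid, 0, 255).astype(np.uint8)

img = np.array([[60, 120, 180, 230]], dtype=np.uint8)
print(binarize_valid(img))  # only the 230 pixel survives as white
```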
It should be explained here that edge detection means detecting places where the gray level or structure changes abruptly; such a discontinuity marks the end of one region and the beginning of another, and is called an edge. Because different regions of an image have different gray levels and a boundary generally shows a distinct edge, this property can be used for image segmentation.
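The erosion-then-edge sequence of step 3 can be sketched in a few lines of numpy: a 3x3 binary erosion shrinks boundaries inward and deletes one-pixel specks, and the edge is then the set of object pixels that the erosion removed. This is a minimal stand-in for the method described, not the patent's actual implementation.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if it and all 8 neighbours
    are set, shrinking the boundary inward and deleting tiny specks."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def edges(mask):
    """Edge pixels: object pixels whose 3x3 neighbourhood is not entirely
    inside the object (the boundary is the mask minus its erosion)."""
    return mask & ~erode(mask)

mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True   # a 5x5 object
mask[0, 0] = True       # a one-pixel speck of noise
er = erode(mask)        # the speck vanishes; the object shrinks to 3x3
print(er.sum())         # 9
print(edges(er).sum())  # 8: the ring of the 3x3 block around its centre
```

In practice a library routine (for example OpenCV's erosion and contour extraction) would replace these hand-rolled loops.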
As a preferable technical solution, the field coordinate value of the point-cloud pixel origin in step 5 is determined from the field coordinate point at which the line-scanning traveling mechanism triggers the visual sensor to perform detection.
Further preferably, the coordinate conversion in step 5 converts the image coordinates of each transformer's center point into millimeter-scale field coordinates, using the field coordinate value of the point-cloud pixel origin, the actual field size (in millimeters) corresponding to each pixel, and the angular deflection relationship. Specifically, the conversion proceeds as follows:
1) From the field coordinate value (X_0, Y_0) of the point-cloud pixel origin (0, 0) and the millimeter-scale actual sizes along the X-axis and Y-axis directions corresponding to each pixel, compute the undeflected field coordinates (X_a, Y_a) from the image coordinates (x_a, y_a) of the transformer center point by a linear relation;
2) From the actual deflection angle between the point-cloud coordinate system and the loading/unloading site coordinate system, compute the actual site coordinates (X_a', Y_a') of the transformer by trigonometric conversion, with the coordinate accuracy error within ±30 mm.
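The two-stage conversion above — a linear pixel-to-millimeter scaling followed by a trigonometric rotation through the deflection angle — can be sketched directly. The function name and parameter layout are illustrative; the patent specifies only the two stages, not an interface.

```python
import math

def pixel_to_site(xa, ya, origin_mm, mm_per_px_x, mm_per_px_y, theta_rad):
    """Convert an image centre point (xa, ya) in pixels to site coordinates
    in mm: first the linear per-axis scaling, then a rotation by the
    deflection angle between the point-cloud frame and the site frame."""
    # Stage 1: linear relation -- undeflected offsets relative to the origin
    x_lin = xa * mm_per_px_x
    y_lin = ya * mm_per_px_y
    # Stage 2: trigonometric conversion for the frame deflection angle
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    x_act = origin_mm[0] + c * x_lin - s * y_lin
    y_act = origin_mm[1] + s * x_lin + c * y_lin
    return x_act, y_act

# A 90-degree deflection moves a pure-x pixel offset onto the site y axis
print(pixel_to_site(100, 0, (5000.0, 8000.0), 2.0, 2.0, math.pi / 2))
```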
In the present invention, it is further preferable that the visual point-cloud image in step 2 is formed by arranging multiple lines of scan data in time sequence, each line of data being a row of pixel information containing distance and angle data.
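Assembling the point-cloud image from successive line scans amounts to stacking one row per trigger, in time order, with each pixel carrying its (distance, angle) pair. The row count, points per line, and dummy values below are purely illustrative.

```python
import numpy as np

# Each trigger of the line-scan mechanism yields one row of pixels; stacking
# the rows in time order forms the 2-D point-cloud image described above.
# Each "pixel" carries (distance, angle), stored along a third axis.
rows = []
for t in range(4):                      # 4 scan triggers (hypothetical)
    distance = np.full(6, 1000.0 + t)   # 6 points per line, dummy distances
    angle = np.linspace(-0.3, 0.3, 6)   # dummy beam angles in radians
    rows.append(np.stack([distance, angle], axis=-1))
cloud = np.stack(rows, axis=0)          # shape: (lines, points_per_line, 2)
print(cloud.shape)                      # (4, 6, 2)
```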
By adopting the technical scheme disclosed by the invention, the site coordinate position of the transformer on the vehicle plate can be automatically obtained with high accuracy and high precision, so that basic data is provided for subsequent automatic operation.
Detailed Description
In order that the invention may be better understood, we now provide further explanation of the invention with reference to specific examples.
This embodiment explains specifically how the intelligent visual recognition vehicle-plate transformer coordinate system achieves intelligent visual recognition of the vehicle-plate transformer coordinates.
Firstly, an image feature library and a contour shape recognition library of a recognition object are established in the system, wherein the image feature library and the contour shape recognition library comprise a vehicle head image feature library and a contour shape recognition library, a vehicle plate image feature library and a contour shape recognition library, a ground image feature library and a contour shape recognition library, and various types of transformer image feature libraries and contour shape recognition libraries.
In the visual point-cloud image generated by the visual detection mechanism, objects are identified using the HSV color model (hue H, saturation S, lightness V) in the image feature library, which clearly distinguishes the different features of the ground, the vehicle plate, the vehicle head, and so on. Specifically, the range from white to black is divided logarithmically into several levels, called gray levels. In this embodiment the levels range from 0 to 255, with white at 255 and black at 0. The point-cloud image is then compared respectively against the ground, vehicle-plate, and vehicle-head gray-image characteristic values, the image is binarized, the respective contours of the ground, vehicle plate, and vehicle head are distinguished, and these are compared and confirmed against the contour shape recognition library. Finally, the identified ground, vehicle-plate, and vehicle-head areas and other invalid identification areas are removed, keeping only the image information of the target transformer.
Then, according to the image characteristic values of the target transformer, boundary points are removed by image erosion, shrinking the boundary inward and eliminating small, meaningless objects; image segmentation is performed by edge detection, and the contour of each transformer to be unloaded on the vehicle plate is extracted. As explained above, edge detection detects places where the gray level or structure changes abruptly; such a discontinuity marks the end of one region and the beginning of another, and is called an edge. Since different regions have different gray levels and a boundary generally shows a distinct edge, this property can be used for image segmentation.
Next, the pixel center-point coordinates of each transformer image are calculated as the intersection of the two diagonals of its bounding frame, according to its image contour and in combination with the HSV color template.
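For an axis-aligned bounding frame, the intersection of the two diagonals reduces to the midpoint of opposite corners, so the center computation is a one-liner. The function name is illustrative.

```python
def frame_centre(x_min, y_min, x_max, y_max):
    """The intersection of a rectangle's two diagonals is the midpoint of
    its opposite corners -- the pixel centre of the bounding frame."""
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0

print(frame_centre(40, 10, 120, 70))  # (80.0, 40.0)
```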
Finally, the pixel center-point coordinates of each transformer are converted into field coordinates according to the field coordinate value of the point-cloud pixel origin and the scale and angle relationship. The field coordinate value of the pixel origin is determined from the field coordinate point at which the line-scanning traveling mechanism triggers the visual sensor to detect. Using this origin value, the actual field size in millimeters corresponding to each pixel, and the angular deflection relationship, the coordinates are converted into millimeter-scale field coordinates, with the coordinate accuracy error guaranteed to be within ±30 mm.
What has been described above is a specific embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.
Claims (7)
1. An intelligent visual recognition vehicle-plate transformer coordinate system, characterized in that the system achieves intelligent visual recognition of the loading and unloading site coordinates of a transformer on a vehicle plate through the following steps:
step 1: establishing an image feature library and a contour shape recognition library of a recognition object, wherein the image feature library and the contour shape recognition library comprise a vehicle head image feature library and a contour shape recognition library, a vehicle plate image feature library and a contour shape recognition library, a ground image feature library and a contour shape recognition library, and various types of transformer image feature libraries and contour shape recognition libraries;
step 2: comparing the visual point cloud picture to be identified with an HSV color model in an image feature library, extracting an object outline, comparing the object outline with an outline shape identification library, removing identified car heads, car boards, the ground and other invalid areas, and only keeping image information of the transformer to be identified;
step 3: removing boundary points through image erosion according to the image characteristic value of the target transformer, shrinking the boundary inward and eliminating small, meaningless objects; performing image segmentation through edge detection; and extracting the contour of each transformer to be unloaded on the vehicle plate;
step 4: according to the contour of each transformer image and the HSV color template in the image feature library, calculating the pixel center-point coordinate of the transformer image as the intersection point of the two diagonals of its bounding frame;
step 5: converting the coordinate value of each transformer's pixel center point into loading and unloading site coordinates in millimeters, according to the site coordinate value of the point-cloud pixel origin and the size-scale and deflection-angle relationship, with the coordinate accuracy error within ±30 mm.
2. The intelligent visual recognition vehicle-plate transformer coordinate system of claim 1, wherein the object-identification method using the HSV color model in step 2 divides the range from white to black into several levels according to a logarithmic relationship, these levels being called gray levels; the scanned image is then compared respectively against the ground gray-image characteristic value, the vehicle-plate gray-image characteristic value, and the vehicle-head gray-image characteristic value, the image is binarized, and the invalid identification areas are removed.
3. The intelligent visual identification vehicle board transformer coordinate system of claim 2, wherein: the "gray scale" ranges from 0 to 255, 255 for white and 0 for black.
4. The intelligent visual recognition vehicle-plate transformer coordinate system of claim 1, wherein the field coordinate value of the point-cloud pixel origin in step 5 is determined from the field coordinate point at which the line-scanning automatic traveling mechanism triggers the visual sensor to detect.
5. The intelligent visual recognition vehicle-plate transformer coordinate system of claim 1, wherein the coordinate conversion in step 5 converts the coordinates into millimeter-scale loading and unloading site coordinates according to the site coordinate value of the point-cloud pixel origin and the millimeter-scale actual field size and angular deflection relationship corresponding to each pixel point.
6. The intelligent visual identification vehicle board transformer coordinate system of claim 1, wherein: in the step 2, the visual point cloud picture is formed by arranging multiple linear data according to a time sequence, wherein each linear data is a line of pixel information containing distance and angle data.
7. The intelligent visual identification vehicle board transformer coordinate system of claim 5, wherein: the coordinate transformation in said step 5 is performed in such a way that,
1) From the field coordinate value (X_0, Y_0) of the point-cloud pixel origin (0, 0) and the millimeter-scale actual sizes along the X-axis and Y-axis directions corresponding to each pixel point, compute the undeflected field coordinates (X_a, Y_a) from the image coordinate values (x_a, y_a) of the transformer center point by a linear relation;
2) From the actual deflection angle difference between the point-cloud coordinate system and the loading/unloading site coordinate system, compute the actual site coordinates (X_a', Y_a') of the transformer by trigonometric conversion; the coordinate accuracy error is within ±30 mm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810977200.0A CN110163232B (en) | 2018-08-26 | 2018-08-26 | Intelligent vision recognition vehicle board transformer coordinate system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110163232A CN110163232A (en) | 2019-08-23 |
CN110163232B true CN110163232B (en) | 2020-06-23 |
Family
ID=67645087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810977200.0A Active CN110163232B (en) | 2018-08-26 | 2018-08-26 | Intelligent vision recognition vehicle board transformer coordinate system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163232B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559703A (en) * | 2013-10-08 | 2014-02-05 | 中南大学 | Crane barrier monitoring and prewarning method and system based on binocular vision |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8432448B2 (en) * | 2006-08-10 | 2013-04-30 | Northrop Grumman Systems Corporation | Stereo camera intrusion detection system |
- 2018-08-26: CN application CN201810977200.0A filed, later granted as CN110163232B (active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559703A (en) * | 2013-10-08 | 2014-02-05 | 中南大学 | Crane barrier monitoring and prewarning method and system based on binocular vision |
Non-Patent Citations (2)
Title |
---|
Transformer fault diagnosis method based on an intelligent inspection system; Zhao Yongjun et al.; Hebei Electric Power Technology; Feb. 2013; Vol. 32, No. 01; full text *
Contour-based shape feature extraction and recognition method; Zhou Zhengjie et al.; Computer Engineering and Applications; Dec. 2006; No. 14; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110163232A (en) | 2019-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105678689B (en) | High-precision map data registration relation determining method and device | |
CN107507167B (en) | Cargo tray detection method and system based on point cloud plane contour matching | |
Shen et al. | A positioning lockholes of container corner castings method based on image recognition | |
CN111721259B (en) | Underwater robot recovery positioning method based on binocular vision | |
KR100823549B1 (en) | Recognition method of welding line position in shipbuilding subassembly stage | |
CN111627072A (en) | Method and device for calibrating multiple sensors and storage medium | |
CN107527368B (en) | Three-dimensional space attitude positioning method and device based on two-dimensional code | |
CN111767780B (en) | AI and vision combined intelligent integrated card positioning method and system | |
CN111784655B (en) | Underwater robot recycling and positioning method | |
CN115609591B (en) | Visual positioning method and system based on 2D Marker and compound robot | |
Mozos et al. | Interest point detectors for visual slam | |
CN110378957B (en) | Torpedo tank car visual identification and positioning method and system for metallurgical operation | |
CN112734844B (en) | Monocular 6D pose estimation method based on octahedron | |
CN112833784B (en) | Steel rail positioning method combining monocular camera with laser scanning | |
Chen et al. | Pallet recognition and localization method for vision guided forklift | |
CN107729906B (en) | Intelligent robot-based inspection point ammeter numerical value identification method | |
KR20180098945A (en) | Method and apparatus for measuring speed of vehicle by using fixed single camera | |
CN110163232B (en) | Intelligent vision recognition vehicle board transformer coordinate system | |
JPH07103715A (en) | Method and apparatus for recognizing three-dimensional position and attitude based on visual sense | |
JPS63311485A (en) | Automatic calibration device | |
CN116309882A (en) | Tray detection and positioning method and system for unmanned forklift application | |
CN116160458A (en) | Multi-sensor fusion rapid positioning method, equipment and system for mobile robot | |
Kita et al. | Localization of pallets on shelves in a warehouse using a wide-angle camera | |
CN114359314B (en) | Real-time visual key detection and positioning method for humanoid piano playing robot | |
Varga et al. | Improved autonomous load handling with stereo cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||