CN116594351A - Numerical control machining unit system based on machine vision - Google Patents

Publication number: CN116594351A
Authority: CN (China)
Prior art keywords: image, robot, processing, machine vision, machine tool
Legal status: Pending
Application number: CN202310200650.XA
Other languages: Chinese (zh)
Inventors: 徐晓光, 汪千松, 孙晓云, 汪龙, 王奇凯, 王淼, 丁家乐, 叶炯, 张久超, 王远远
Current Assignee: Anhui Zobiao Intelligent Technology Co ltd; Anhui Polytechnic University
Original Assignee: Anhui Zobiao Intelligent Technology Co ltd; Anhui Polytechnic University
Application filed by Anhui Zobiao Intelligent Technology Co ltd and Anhui Polytechnic University
Priority to CN202310200650.XA
Publication of CN116594351A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/18 Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B 19/408 Numerical control [NC] characterised by data handling or data format, e.g. reading, buffering or conversion of data
    • G05B 19/4083 Adapting programme, configuration
    • G05B 2219/00 Program-control systems
    • G05B 2219/30 Nc systems
    • G05B 2219/35 Nc in input of data, input till input file format
    • G05B 2219/35356 Data handling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a numerical control machining unit system based on machine vision, which comprises a motion control system and an industrial camera. A numerical control system is provided in the motion control system and is used for controlling the machining of parts; the motion control system is also provided with a robot system for controlling a robot in the robot system to grasp parts, and with a machine vision system for processing images; the industrial camera is connected with the machine vision system and is used for acquiring images, which are processed by a vision processor in the machine vision system. The invention relates to the technical field of numerical control machining and solves the quality and efficiency problems caused by the machining deviation that arises when the machining process is still executed according to a preset program even though the part is irregular or its fixed position has shifted after entering the machine tool.

Description

Numerical control machining unit system based on machine vision
Technical Field
The invention relates to the technical field of numerical control machining, in particular to a numerical control machining unit system based on machine vision.
Background
In the numerical control machining industry, part machining must be strictly standardized and the quality requirements are extremely strict, so the technical demands on operators are severe. As market demand for personalized and diversified products has grown, factory production technology and quality efficiency face great challenges. Relying heavily on manual production and processing entails huge labor costs and long equipment cycle times; moreover, workers tire, which further reduces efficiency, and the working environment can threaten their health, greatly reducing factory profit. Traditional production based mainly on manual equipment therefore can no longer meet current social development requirements.
Although numerical control machining technology has developed over several decades, progressing from simple, low-quality and low-efficiency part machining toward higher machining quality and faster machining efficiency, it now faces increasingly diversified demands: the variety of machined products is richer, while the update and iteration of traditional numerical control machine tools is slow and cannot satisfy production needs. On an automated machine tool production line, the market always demands higher quality and efficiency from the machined product, yet the quality of machine tool output is easily affected; for example, if a part is irregular or its fixed position shifts after entering the machine tool, machining deviation occurs when the machining process is still executed according to the preset program, which in turn causes quality and efficiency problems.
Disclosure of Invention
In order to solve the problem that machining deviation occurs, and quality and efficiency consequently suffer, when the machining process is still executed according to a preset program although the part is irregular or its fixed position has shifted after entering the machine tool, the invention aims to provide a numerical control machining unit system based on machine vision.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a numerical control machining unit system based on machine vision comprises a motion control system and an industrial camera, wherein a numerical control system is provided in the motion control system and is used for controlling the machining of parts; the motion control system is also provided with a robot system for controlling a robot in the robot system to grasp parts; the motion control system is further provided with a machine vision system for processing images; the industrial camera is connected with the machine vision system and is used for acquiring images, and a vision processor in the machine vision system processes the images.
Preferably, the machining unit in the numerical control system is a machine tool, and the numerical control system operates as follows: machine tool feeding is converted from manual transport to automatic conveyor-belt feeding. Parts to be machined are placed directly on the machine tool feeding conveyor belt; when a part reaches the specified position, a sensor signal is triggered and the conveyor belt stops. A camera fixed above the specified stopping position captures an image of the part, the machine vision system processes the captured image and outputs the result information, and the robot system acquires this information; the robot then grabs the part and feeds it into the machine tool for machining, and after machining the finished product is taken out by the robot and placed on the machine tool feeding conveyor belt to be transported away.
Preferably, the machine vision system identifies and locates parts as follows: a standard template is created, the acquired image is matched with the standard template, and the part pose information is output and transmitted to the robot. If the match is consistent, the angle offset is 0 and the robot directly grabs the part and sends it into the machine tool for machining; if the match is inconsistent, the angle offset is not 0 and the robot corrects the received angle offset before placing the part into the machine tool for machining.
Preferably, the robot system receives the coordinates X, Y and the deflection angle R obtained by the machine vision system when locating the machined part. After the robot receives this information, coordinate conversion is performed first: the image coordinates of the photographed part are converted into world coordinates of the robot's motion. The robot then grabs the part at the converted X, Y coordinates and corrects the deflection angle R while carrying the part to the machine tool. The numerical control system then machines the part; when machining is finished, a signal is sent to the robot, which takes the machined part out of the machine tool and places it on the machine tool feeding conveyor belt, completing one cycle of action.
Preferably, the robot system comprises an I/O interface for receiving and sending information and a servo drive for controlling the movement of the mechanical arm.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the invention, the numerical control machining unit is integrated on the basis of machine vision, a motion control system and the like, so that labor cost is reduced and machining quality and efficiency are improved;
2. According to the invention, after acquisition and preprocessing by the machine vision system, the information of the photographed part, namely the part coordinates and the deflection angle, is obtained; when the robot goes to grab the part, the coordinate information is available, and when the part is deflected, the deflection angle information is also available;
3. According to the invention, the robot system receives the data transmitted by the vision system, performs the grabbing action and exchanges information with the numerical control system, jointly completing the machine tool part machining task;
4. According to the invention, the motion control system realizes information communication among all modules, ensures that the automated production line runs normally, and achieves high efficiency and high quality of machine tool machining under the machine vision system;
5. According to the invention, a complete system is constructed, combining the machine vision system with the numerical control machine tool on an automated production line, so that labor cost is greatly reduced and the quality and efficiency of machine tool machining are improved; the running stability and efficiency of the system can meet the industrial production requirements of related factories.
Drawings
The invention is described in further detail below with reference to the attached drawings and detailed description:
FIG. 1 is a schematic diagram of a machine vision based numerical control machining unit system;
FIG. 2 is a schematic diagram of an image coordinate system;
FIG. 3 is a schematic diagram of the total product of the matrices Rx(ψ), Ry(φ), Rz(τ);
FIG. 4 is a schematic diagram of a camera imaging profile;
FIG. 5 is a schematic view of a 7×7 calibration plate;
FIG. 6 is a schematic diagram of a simplified operation flow of the numerical control processing unit;
FIG. 7 is a schematic diagram of a machine vision system composition;
FIG. 8 is a schematic diagram of a vision system identification positioning process;
FIG. 9 is a schematic diagram of image graying;
FIG. 10 is a schematic diagram of a 5×5 mean filtering template;
FIG. 11 is a schematic diagram of a Gaussian filter template;
FIG. 12 is a schematic diagram of the principle of median filtering an image;
FIG. 13 is a schematic diagram of the processing effects of different filtering algorithms;
FIG. 14 is a schematic diagram of image segmentation;
FIG. 15 is a schematic diagram of templates for the Sobel operator in the vertical and horizontal directions;
FIG. 16 is a schematic diagram of a template of a Laplacian operator;
FIG. 17 is a schematic diagram of the results of each operator processing;
FIG. 18 is a schematic diagram of a robotic system assembly;
fig. 19 is a schematic diagram of a specific control flow of the motion control system.
Detailed Description
Further advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure of the present invention, which is described by the following specific examples.
Please refer to fig. 1 to 19. It should be understood that the structures, proportions, sizes, etc. shown in the drawings are for illustration only and are not intended to limit the conditions under which the invention can be practiced; modifications of structure, changes of proportion, or adjustments of size that do not affect the effects achievable and the objectives attainable by the invention shall still fall within the scope covered by the technical content disclosed herein. Likewise, terms such as "upper", "lower", "left", "right", "middle" and "a" used in this specification are merely for descriptive convenience and are not intended to limit the practicable scope of the invention; changes or adjustments of their relative relationships, without substantive alteration of the technical content, shall also be regarded as within that scope.
Example 1, machine vision system hardware selection:
The working principle of an industrial camera, simply described, is to convert the light from the photographed object (an optical signal) into an image signal, i.e., an electrical signal, which is stored after internal processing.
Based on the sensors currently used inside industrial cameras, they can be divided into CCD industrial cameras and CMOS industrial cameras. Both perform photoelectric conversion with photosensitive diodes, realizing the conversion from an image signal to a digital signal; the main difference lies in how the digital data are transferred. In short, the charge data in a CCD camera are transferred sequentially, whereas each pixel in a CMOS camera has its own signal amplifier performing charge-voltage conversion, so transfer need not follow a fixed order. Consequently, a CCD camera has a slower output speed and higher chip power consumption, while a CMOS camera is fast with low chip power consumption; however, the signal output consistency of a CMOS camera is poorer, since the individual amplifiers introduce larger noise through their inconsistency and interfere more with image quality, whereas a CCD camera has wider bandwidth, better image output quality and high resolution. After integrating this analysis, the system selects a Hikvision gigabit Ethernet industrial area-array CCD camera.
The camera is connected to a computer through a gigabit Ethernet interface, can display high-definition images in real time, and requires a separate power supply. It supports multiple operating systems and is compatible with much of the machine vision software on the market, such as HALCON, OpenCV and LabVIEW. Gain, exposure time and white balance can be adjusted automatically or manually, and LUT and Gamma correction can be adjusted manually; the color camera embeds an excellent image interpolation algorithm and has excellent color reproduction characteristics. With the gigabit Ethernet interface, the maximum transmission distance can reach 100 m without a relay. The specific parameters are shown in the following table:
Hikvision industrial camera related parameters
Camera calibration method:
After the component models are determined, the vision system can theoretically be used. In actual use, however, we find that the vision system does not process well and the results differ from expectations. This is because directly mounting the purchased camera introduces large errors when its specific parameters are unclear, which makes the experimental data less than ideal. Therefore, camera calibration must be performed before using the vision system, so as to reduce errors as much as possible.
What camera calibration is, and how it should be carried out, are described next.
In the image measurement process, in order to determine the accurate relationship between the geometric position of a point on the surface of a spatial object and the corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. Under most conditions these parameters must be obtained through experiments and calculations, and this parameter-solving process is called camera calibration. In image measurement and machine vision applications, calibration of the camera parameters is a critical link: the accuracy of the calibration result and the stability of the algorithm directly influence the accuracy of the results the camera produces. Performing camera calibration is therefore a precondition for all subsequent work. The camera calibration methods commonly used at present are the traditional camera calibration method, the active vision camera calibration method and the camera self-calibration method.
(1) Traditional camera calibration
The calibration object used in the traditional camera calibration method must be of known size; points with known coordinates are established on the calibration object and put into correspondence with points on the captured image, so that a geometric relationship model can be conveniently established, and finally the internal and external parameters of the camera are obtained through a certain algorithm. The calibration object can differ according to need and is generally divided into three-dimensional and planar calibration objects. A three-dimensional calibration object suits scenes with high calibration accuracy requirements and can complete calibration from a single image, but it is difficult to manufacture and inconvenient to maintain; a planar calibration object, by comparison, is simpler to manufacture, its precision is guaranteed to a certain extent, and accuracy can be improved by using two or more images during calibration. Both are suitable for any camera imaging model, but a calibration object of known size must be prepared before calibration, and its manufacturing precision directly influences the final calibration result, so this method is unsuitable for some occasions.
(2) Active vision camera calibration
This calibration method requires no calibration object: using certain known motion information of the camera, the camera is controlled to perform special movements, and the internal parameters are calculated from the particularity of these movements. The method has a simple algorithm, yields a linear solution and is robust; however, it is costly, the experimental equipment is expensive, and the application environment requirements are high. It is especially unsuitable when the motion parameters or motion state are unknown or cannot be controlled.
(3) Camera self-calibration
This calibration method also requires no calibration object; calibration is achieved mainly through the motion constraints of the camera itself. In operation, parallel lines or orthogonality information in the scene must be analyzed to complete the calibration. The method is flexible in operation and can calibrate the camera online; however, because it relies on strong camera constraints, operating on the basis of the absolute quadric or absolute conic, the stability of the algorithm is poor.
After briefly analyzing the three calibration methods: the work herein is completed with a monocular camera, and the captured part images are processed in a plane to obtain coordinates and angle values, so the distortion error of the camera can be neglected. Therefore, a calibration plate of known size is generated and printed with HALCON software for calibration; since the software makes and prints the calibration plate with high precision, the traditional camera calibration method is more convenient here and its cost is low.
Camera coordinate transformation relationship:
To understand the principle of object recognition and localization, and for convenience of direct application herein, the transformation relationships between the coordinate systems must be known. The coordinate systems involved in calibrating the position of a scene object herein are: the image coordinate system (fig. 2), the camera coordinate system and the world coordinate system.
Conversion relation between pixel coordinate system and image plane coordinate system:
First, image coordinate systems can be classified into two kinds: one is the pixel coordinate system $(u, O, v)$, an image coordinate system in units of pixels with its origin at the upper left of the image; the other is the image plane coordinate system $(x, O_0, y)$, which expresses the image position in physical units (mm), with its origin at the intersection of the camera optical axis and the image plane. The relationship between the two is shown in fig. 2. Assume the physical dimensions of each pixel along the rows and columns are $dx$ and $dy$ ($dx$, $dy$, $u_0$, $v_0$ are all parameters to be determined); $dx$ and $dy$ represent the minimum physical size of the camera photosensitive chip. The following relations (2.1), (2.2) are obtained:

$u = \frac{x}{dx} + u_0 \quad (2.1)$

$v = \frac{y}{dy} + v_0 \quad (2.2)$

where $(u_0, v_0)$ is the position of the image plane center and is the internal parameter to be solved finally.

Converting formulas (2.1), (2.2) into matrix form:

$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2.3)$

The inverse relationship can be expressed as:

$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} dx & 0 & -u_0\,dx \\ 0 & dy & -v_0\,dy \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \quad (2.4)$
Conversion relation between camera coordinate system and world coordinate system: the position of the camera is not fixed in a real scene, and in order to determine this position a world coordinate system must be constructed to describe the camera's location. The conversion between the two involves rotation and translation; their positional relationship can be computed from the rotation matrix R and the translation vector T. For a point P in the real scene, with homogeneous coordinates $(X_w, Y_w, Z_w, 1)^T$ in the world coordinate system and $(X_c, Y_c, Z_c, 1)^T$ in the camera coordinate system, the relation between the camera coordinate system and the world coordinate system is:

$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (2.5)$

where R is a 3×3 rotation matrix, T is a 3×1 translation vector, and $0^T = (0, 0, 0)$. R and T are determined by the position and orientation of the camera relative to the spatial scene. The rotation matrix R is the product of the three matrices $R_x(\psi)$, $R_y(\varphi)$, $R_z(\tau)$ formed by rotating the coordinate axes around the x, y and z axes by angles ψ, φ and τ, as shown in fig. 3.
Conversion relation between camera coordinate system and image plane coordinate system:
a brief model of an ideal camera imaging is shown in fig. 4.
According to the camera model and the triangle similarity principle, with f the focal length of the camera (i.e. the distance $OO_1$), p the intersection of the line OP with the image plane, and $(X_c, Y_c, Z_c)$ the coordinates of the spatial point P in the camera coordinate system, formulas (2.6) and (2.7) are obtained:

$x = \frac{f X_c}{Z_c} \quad (2.6)$

$y = \frac{f Y_c}{Z_c} \quad (2.7)$
Conversion relation between world coordinate system and pixel coordinate system:
Substituting (2.5) and (2.6), (2.7) into the pixel conversion (2.3), the world coordinates of point P and the image coordinates of the projection point p are related as in formula (2.8):

$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 & 0 \\ 0 & a_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (2.8)$

The above formula completes the conversion from the world coordinate system to the image coordinate system, passing through the camera coordinate system in between; the subscript w in $X_w$ denotes world coordinates in millimeters, while u, v are in pixels, i.e. the conversion from millimeters to pixels is completed.

Here $a_x = f/dx$ and $a_y = f/dy$; $M = M_1 M_2$ is the 3×4 projection matrix. $M_1$ is determined entirely by the internal parameters of the camera $a_x, a_y, u_0, v_0$, where $(u_0, v_0)$ are the principal point coordinates and $a_x, a_y$ are the scale factors of the image u-axis and v-axis; $M_2$ is determined entirely by the external parameters of the camera.
To obtain the internal and external parameters of the camera, the camera must be calibrated; the object coordinates and angles in the world coordinate system can then be calculated and converted. Through CCD camera calibration, optimized internal and external parameters are obtained: the internal parameters of the model are the principal point $(u_0, v_0)$ and the lens focal length f, and the external parameters are the rotation matrix R and the translation vector T. Camera calibration herein is accomplished with HALCON software in conjunction with a calibration plate.
The code gen_caltab(7, 7, 0.0125, 0.5, 'caltab.descr', 'caltab.ps') is entered in the software to obtain a 7×7 calibration plate, as shown in fig. 5.
The calibration plate is printed out and set up in the HALCON software; by continuously changing the position and orientation of the calibration plate and acquiring dozens of groups of pictures, the internal and external parameters of the camera are obtained, after which the subsequent operations can proceed.
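For illustration only, the same idea (recovering $a_x, a_y, u_0, v_0$ and R, T from many views of a plate of known size) can be sketched with OpenCV's chessboard workflow; the patent itself performs this in HALCON with a gen_caltab dot plate, so the board type, file names and sizes below are assumptions, not the patent's procedure.

```python
# Minimal calibration sketch, assuming an OpenCV chessboard instead of the
# HALCON gen_caltab plate used in the patent.
import glob
import cv2
import numpy as np

CORNERS = (7, 7)      # inner-corner grid of the assumed chessboard
SQUARE_MM = 12.5      # square size, echoing the 0.0125 m of gen_caltab

# World coordinates of the plate corners (Z = 0 plane, in mm)
obj = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):  # assumed capture file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(obj)
        img_pts.append(corners)

# K holds a_x, a_y, u0, v0; rvecs/tvecs hold the external parameters R, T
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsic matrix K:\n", K)
```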
Embodiment 2, a numerical control processing unit system based on machine vision, comprising a motion control system and an industrial camera, wherein the motion control system is provided with a numerical control system for controlling the processing of parts; the motion control system is also provided with a robot system for controlling a robot in the robot system to grasp parts; the motion control system is also provided with a machine vision system for processing the image; the industrial camera is connected with the machine vision system and is used for acquiring images, and a vision processor in the machine vision system is used for processing the images.
Numerical control system:
Compared with a traditional machine tool, the numerical control machine tool here is integrated on the basis of machine vision, a motion control system and the like, mainly solving the problems of machine tool production quality and efficiency. A simplified flow chart of the operation of the numerical control machining unit is shown in fig. 6.
The operation of the numerical control machining unit on the automated production line is described as follows: machine tool feeding is converted from manual transport to automatic conveyor-belt feeding. Parts to be machined are placed directly on the machine tool feeding conveyor belt; after a part reaches the specified position (the machine tool feeding position), a sensor signal is triggered and the conveyor belt stops. A camera fixed above the specified stopping position photographs the part, the machine vision system processes the captured image and outputs the result information, and the robot acquires this information, grabs the part and feeds it into the machine tool for machining; the finished product is then taken out by the robot and placed on the machine tool feeding conveyor belt to be transported away. Under this complete automated production line process, labor cost is reduced and machining quality and efficiency are improved.
Machine vision system:
The study herein is based on machine vision technology, so a machine vision system is indispensable. Considering the object of study and its position, the designed machine vision system mainly comprises three aspects: image acquisition, image processing and result output, i.e. vision hardware and vision software.
The composition and selection of the vision hardware have been discussed above, and the hardware components, mainly the industrial camera and the light source suitable for this system, have been determined by comparison. The assembly can thus be completed initially and the result checked in operation. After the related parameters are determined, formal image acquisition of the parts can be carried out; the vision software performs various preprocessing steps on the acquired images, outputs the recognition result after interference information is eliminated, and transmits the result to the robot system through the communication module in the control system to facilitate the subsequent process flow. The machine vision system composition diagram is shown in fig. 7.
Image acquisition:
After the machine vision system is built, image acquisition is needed first to support subsequent research. The acquired images not only help complete camera calibration and obtain the required parameters, but also reflect influences of the environment, such as the types of noise interference, commonly Gaussian noise, salt-and-pepper noise, white noise and the like. This gives direction to the subsequent image preprocessing.
Several aspects are generally considered when using a machine vision system for image acquisition and processing. Because the quality of the image processing results directly affects the subsequent experiments and has a great influence on the research, the components of the vision system need to be strictly screened: the camera, the auxiliary light source, the vision processor and so on. Acquiring high-quality images is the essential starting point of vision system processing; after comparing the acquisition modules on the market, an LED lamp is selected as the auxiliary light source, with a high performance-price ratio and a good detection effect on the products. After the camera acquires an image, the vision processor in the vision system performs the related image processing.
Image processing:
The machine vision system designed herein is mainly used to identify and locate the different poses of the machined parts. The part is imaged by the camera; a standard template is then created in the system (the machined product corresponding to the standard template is qualified), the acquired image is matched with the standard template, and the part pose information is output and transmitted to the robot. If the match is consistent, the angle offset is 0 and the robot directly grabs the part and sends it into the machine tool for machining; if the match is inconsistent, the angle offset is not 0 and the robot corrects the received angle offset before placing the part into the machine tool. The vision system identification and positioning flowchart is shown in fig. 8.
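As an illustration of this identify-and-locate step, the following sketch estimates the (X, Y) position and angle offset R of a part from its binary silhouette. The patent only states that the image is matched against a standard template, so the contour/minAreaRect approach here is an assumed stand-in for the actual matcher, and the file name is hypothetical.

```python
# Minimal pose-estimation sketch, assuming one part visible as the largest
# bright blob; not the patent's actual template-matching algorithm.
import cv2

def locate_part(gray, template_angle=0.0):
    """Return (x, y, angle_offset) of the largest bright blob."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)      # assume one part in view
    (x, y), (w, h), angle = cv2.minAreaRect(part)  # center and rotation
    offset = angle - template_angle                # R sent to the robot
    return x, y, offset

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
x, y, r = locate_part(img)
print(f"pixel pose: X={x:.1f}, Y={y:.1f}, R={r:.1f} deg")
# If r == 0 the robot loads the part directly; otherwise it corrects by r
# on the way to the machine tool, as described above.
```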
In order to realize the visual recognition positioning process, various preprocessing needs to be performed first, so that the accuracy of recognition positioning is ensured.
Image preprocessing is the core part of a machine vision system. Its task is mainly this: after the camera collects pictures, software performs filtering of the RGB image, gray-scale conversion and binarization, opening and closing operations, segmentation and filling, affine transformation, edge feature extraction, and recognition and calculation of the pose coordinate state. For the specific research content here, the following processing steps are chosen: 1) gray the color image to increase the speed of later image processing; with the gray histogram, a suitable gray region can be selected faster; 2) filter to eliminate the noise signals interfering with the image (mainly salt-and-pepper noise), in conjunction with erosion and dilation operations, so as to eliminate interference to a greater extent and obtain higher quality; 3) perform threshold segmentation of the image to separate the region of interest containing the target object; 4) extract edge information: through edge extraction, obtain the important information easily overlooked at the margins, and then calculate the angle and coordinate information of the image. Finally, a calibration algorithm converts the result into the world coordinate system for output and display. The preprocessing of the vision system is described below in these terms:
Graying the image:
Generally, graying must be performed first, because the images currently acquired are basically color images; a color image carries so much color information that later processing becomes difficult once the background changes slightly. Therefore the image is gray-scaled into a single-channel grayscale image, which is subsequently binarized. The image graying is shown in fig. 9.
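A short sketch of this graying step; the patent does not name a library at this point, so OpenCV and the file names are illustrative assumptions.

```python
# Graying sketch: cv2 applies the standard luminance weights
# 0.299 R + 0.587 G + 0.114 B when converting BGR to gray.
import cv2

bgr = cv2.imread("part.png")                 # assumed input image
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
# A threshold then yields the binary image used by later steps
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("part_gray.png", gray)
```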
Image filtering:
The machine tool processing site is generally noisy, and the images acquired by the acquisition module often carry large noise interference, which must be removed by filtering; this greatly helps the subsequent image processing. Filtering is a method of removing noise, and there are many variants.
(1) Mean value filtering
The core idea of mean filtering is to place over the pixel to be processed a standard template that covers the surrounding neighboring pixels, and to replace the original pixel value with the average of all pixels covered by the template; viewed this way, mean filtering is a linear filter.
Select the current pixel (x, y) to be processed and establish a standard template containing the nearby pixels; compute the average of all pixels in the template and assign it to the current pixel (x, y) as the gray value g(x, y) of the processed image at that point. The mean filtering formula can thus be written as:

$g(x, y) = \frac{1}{m} \sum_{(i, j) \in S} f(i, j)$

where S is the set of pixels covered by the template and m is the total number of pixels contained in the template.
The 5×5 mean filtering template is shown in fig. 10.
The denoising effect of mean filtering is mainly influenced by the size of the selected template: the larger the size, the closer the computed average is to the true value of each pixel, but the image becomes blurred, part of the useful information is lost, and features disappear after processing, which hinders subsequent processing.
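A sketch of mean filtering at two template sizes, illustrating the blur trade-off just described; cv2.blur is an assumed stand-in for whatever implementation the vision software actually uses.

```python
# Mean filtering sketch: larger templates smooth more but blur edges.
import cv2

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
mean_3 = cv2.blur(gray, (3, 3))  # small template: mild smoothing
mean_5 = cv2.blur(gray, (5, 5))  # the 5x5 template of fig. 10: stronger blur
```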
(2) Gaussian filtering
Gaussian filtering is widely used because most images carry Gaussian noise. It is also a linear filter, but the denoising principle differs: a template traverses the image and all covered pixel values are weighted and then averaged, i.e. each pixel is obtained as the weighted average of itself and the adjacent pixel values.
Gaussian filtering is very effective in suppressing noise that follows a normal distribution. The Gaussian function strongly shapes the filter: first, the two-dimensional Gaussian function is symmetric and equally smooth in all directions, so the edge trend of the original image is well preserved; second, the Gaussian function is single-valued, the anchor point of the Gaussian convolution kernel is an extremum, and the function decreases monotonically in all directions, so information at the image edges is well retained; finally, the frequency-domain region occupied by Gaussian filtering is not interfered with or polluted by high-frequency signals. The one-dimensional and two-dimensional Gaussian functions are:

$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}$

$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$

where σ is the coefficient of the Gaussian filter, reflecting the smoothness; different values of σ give different filtering effects.
The Gaussian filter template is shown in fig. 11.
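A sketch of Gaussian filtering; the kernel size and sigma values are assumed, chosen only to illustrate the σ/smoothness relation stated above.

```python
# Gaussian filtering sketch: larger sigma means stronger smoothing.
import cv2

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
gauss_weak = cv2.GaussianBlur(gray, (5, 5), sigmaX=0.8)    # light smoothing
gauss_strong = cv2.GaussianBlur(gray, (5, 5), sigmaX=2.0)  # heavier smoothing
```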
(3) Median filtering
Unlike the first two filtering methods, median filtering is a nonlinear filtering technique. As its name suggests, the gray value of each pixel of the processed image is set to the middle value of the gray values of all pixels in the neighborhood of that point: all surrounding gray values are sorted and the middle one is taken as the gray value of that pixel. The computation is small and the processing is fast. The method can therefore eliminate isolated noise points, is especially effective against salt-and-pepper noise, and also protects edge information.
Simply put, for one-dimensional filtering the values are sorted and the middle value taken; for two-dimensional filtering the neighborhood becomes two-dimensional, with the sequence denoted $\{X_{i,j}\}$ and windows of various shapes such as circular, trapezoidal and cross-shaped. The two-dimensional median filtering can then be expressed as:

$g(x, y) = \underset{(k, l) \in A}{\mathrm{med}}\,\{ f(x - k, y - l) \}$

where A represents the filter window.
The neighborhood template used for median filtering is typically 3×3, 5×5, etc.; if the current template does not achieve the desired filtering effect, the template size can be changed. The principle of median filtering an image is shown in fig. 12.
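A sketch of median filtering, the method this system finally selects; the 3×3 and 5×5 apertures below are the typical sizes named in the text.

```python
# Median filtering sketch: good against isolated salt-and-pepper noise
# while preserving edge information.
import cv2

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
median_3 = cv2.medianBlur(gray, 3)
median_5 = cv2.medianBlur(gray, 5)  # enlarge the window if noise remains
```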
On this basis, the three filtering algorithms are applied to an image of the same object; the processing effects of the different filtering algorithms are shown in fig. 13: (a) the original noisy image; (b) the mean filtering result; (c) the Gaussian filtering result; (d) the median filtering result.
The image after mean filtering is the most blurred and its edge information is hard to obtain; after Gaussian filtering the blur at the edges is smaller than with mean filtering but some still remains; the image after median filtering is the clearest, and information at the edges is also easy to obtain. Therefore, after comparing the specific image processing results, and considering the different types and shapes of the machined parts together with the existing experimental environment, median filtering is finally selected: it suppresses isolated noise points, retains edge information completely and filters well.
Image segmentation:
In an acquired image, only part of the information is needed for the task; the rest is mostly useless and easily forms interference, so the useless interference information must be removed as far as possible. An image segmentation method can separate the object image from the background region, facilitating subsequent processing and research. Threshold segmentation is a classical algorithm that achieves this goal well. It suits situations where the gray value of the target region differs greatly from that of the background, and the parts to be identified here against their background meet exactly this condition. A suitable threshold is then required to separate the two parts of the content for subsequent image processing. Taking HALCON software as a simple example: first perform gray binarization on the read image, then form connected domains, and then select the features of the separated image as required through the feature histogram to achieve segmentation (this embodiment selects the area feature for segmentation). The image segmentation is shown in fig. 14.
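As an illustration, the same threshold / connected-domain / area-feature chain can be sketched in OpenCV; cv2 is an assumed substitute for the HALCON operators, and the area bounds are made-up values.

```python
# Segmentation sketch: Otsu threshold, connected domains, then selection
# by the area feature, mirroring the HALCON chain described above.
import cv2
import numpy as np

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected domains and keep only blobs with a plausible part area
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
MIN_AREA, MAX_AREA = 2_000, 200_000      # assumed area-feature bounds
part_mask = np.zeros_like(binary)
for i in range(1, n):                    # label 0 is the background
    if MIN_AREA <= stats[i, cv2.CC_STAT_AREA] <= MAX_AREA:
        part_mask[labels == i] = 255
```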
Extracting image edge features:
Edge detection is, simply put, a method of processing the edge information of an image. After the preceding preprocessing, it can be seen that image pixels often change sharply at the edges, so the next step is to find a suitable algorithm to extract the information at the edges clearly and accurately. Current edge extraction methods can generally be divided into first-derivative and second-derivative methods. A first-derivative edge is the position where the first derivative of the image in some direction is largest, whereas a second-derivative edge lies at a zero crossing of the second derivative, with opposite signs on the two sides of the zero point (from positive to negative or from negative to positive). Corresponding to these are the first-order and second-order edge operators. Commonly used first-order operators include the Roberts operator, Sobel operator and Prewitt operator; second-order operators include the LOG operator and Laplacian operator. In addition, the Canny operator is a first-derivative edge operator applied after image smoothing, and it generally requires further improvement to adapt to different study objects and obtain better image processing results.
From the objects studied herein, a simple comparison of the above operators yields the conclusions shown in the following table.
Edge operator features
(1) Roberts operator
The Roberts operator is the simplest operator; it detects image edge information using the gray difference at a local position. A 2×2 template computes the difference between two adjacent pixels in the diagonal direction. The Roberts detection operator and gradient are calculated as follows:

$\Delta_x f = f(x, y) - f(x+1, y+1), \qquad \Delta_y f = f(x+1, y) - f(x, y+1) \quad (3\text{-}4)$

with the 2×2 operator templates

$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$

After $\Delta_x f$ and $\Delta_y f$ are obtained from formula (3-4), the gradient value of the Roberts operator

$R(x, y) = \sqrt{(\Delta_x f)^2 + (\Delta_y f)^2}$

can be calculated, so that whether the point (x, y) is a step edge point can be judged against a preset limiting value; f(x, y) is the input image and R(x, y) the target image after edge detection.
(2) Sobel operator
The Sobel operator is a discrete difference operator used to approximate the gradient of the image brightness function. It applies weighted differences to the gray values in the neighborhood of every pixel in the image, reaching a maximum at the edge positions. The Sobel operator and gradient are calculated as follows:

$G_x = S_x * f, \qquad G_y = S_y * f, \qquad R(x, y) = \sqrt{G_x^2 + G_y^2} \quad (3\text{-}6)$

The gradient direction formula then follows:

$\theta = \arctan\!\left(\frac{G_y}{G_x}\right)$
The Sobel operator gives higher weight to points close to the central pixel: the pixels in the 4-neighborhood of the central pixel carry weight 2 or -2, while the rest carry 1 or -1. The templates of the Sobel operator in the vertical and horizontal directions are shown in fig. 15.
In general, template (a) detects all horizontal edges in the image and template (b) detects all vertical edges. When processing an actual image, formula (3-6) is applied as a convolution at every pixel, and the maximum of the two template convolutions is selected as the output value of that pixel, yielding an image with smooth, continuous edges. As before, whether the point (x, y) is a step edge point is determined by a predetermined appropriate limit value; f(x, y) is the input image and R(x, y) the target image after edge detection.
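A sketch of Sobel edge extraction; the OpenCV parameters are an illustrative choice, and combining |Gx| and |Gy| by per-pixel maximum follows the rule just described.

```python
# Sobel sketch: two directional derivatives, combined by per-pixel maximum.
import cv2
import numpy as np

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative
edges = np.maximum(np.abs(gx), np.abs(gy))       # per-template maximum
theta = np.arctan2(gy, gx)                       # gradient direction
edges = np.uint8(np.clip(edges, 0, 255))
```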
(3) Prewitt operator
The Prewitt operator is similar to the Sobel operator: it also applies weighted differences to the gray values in the up, down, left and right neighborhoods of each pixel and also reaches an extremum at the edge, so it removes pseudo edges well and smooths noise. In principle it convolves the image with templates in two directions of the image space, detecting horizontally and vertically. However, the influence of the neighboring pixels on the current pixel is treated as identical, the effect of distance being neglected, so every entry has the same weight of 1 or -1. The Prewitt operator and gradient are calculated as follows:

$G_x = P_x * f, \qquad G_y = P_y * f, \qquad R(x, y) = \sqrt{G_x^2 + G_y^2} \quad (3\text{-}8)$

The gradient direction formula then follows:

$\theta = \arctan\!\left(\frac{G_y}{G_x}\right)$

The two neighborhood templates of the Prewitt operator are:

$P_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \qquad P_y = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$
Formula (3-8) is applied as a convolution at every pixel in the image, and the maximum of the two template convolutions is selected as the output value of that pixel, yielding an image with smooth, continuous edges. As before, whether the point (x, y) is a step edge point is determined by a predetermined appropriate limit value; f(x, y) is the input image and R(x, y) the target image after edge detection.
(4) Laplacian operator
The Laplacian operator is the simplest second-order isotropic differential operator and is rotation invariant. The Laplace transform of any two-dimensional image function can be expressed as an isotropic second derivative, defined as:

$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$

To facilitate processing of digital images, taking for each pixel of f(x, y) the sum of the second-order differences in the x-axis and y-axis directions, the formula can be further expressed in discrete form:

$\nabla^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)$
therefore, when four or eight neighborhoods are taken around the pixel point, templates of the Laplacian operator are respectively shown in FIG. 16.
Since the Laplacian operator is rather sensitive to noise, the image must be smoothed before processing; but since smoothing is also performed with templates, the two templates can be combined into a single new one.
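A sketch of Laplacian edge detection with the pre-smoothing the text calls for; the Gaussian parameters are assumed, and OpenCV's ksize=1 aperture corresponds to the 4-neighborhood template of fig. 16.

```python
# Laplacian sketch: smooth first, then apply the 4-neighborhood kernel
# [[0,1,0],[1,-4,1],[0,1,0]] (cv2.Laplacian with ksize=1).
import cv2
import numpy as np

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
smooth = cv2.GaussianBlur(gray, (3, 3), sigmaX=1.0)  # suppress noise first
lap = cv2.Laplacian(smooth, cv2.CV_32F, ksize=1)
edges = np.uint8(np.clip(np.abs(lap), 0, 255))
```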
(5) LOG operator
LOG edge detection is an edge detection method that combines Gaussian filtering with the Laplacian detection operator: the original image is first smoothed and denoised, suppressing noise to the greatest extent, and edge extraction is then performed on the smoothed image to obtain the target image.
Currently, LOG operator application is generally completed in three steps:
(1) The two-dimensional Gaussian filter function smooths the image, giving a good filtering effect;
(2) The two-dimensional Laplacian operator then enhances the bright spots in the image;
(3) The zero-crossing positions of the second derivative obtained after the convolution are taken as edge points for edge detection.
As mentioned above, a two-dimensional Gaussian filter function G(x, y) is convolved with the original image f(x, y) to obtain the smoothed image H(x, y):

$H(x, y) = G(x, y) * f(x, y) \quad (3.13)$

After this convolution, the second-order directional derivative image Q(x, y) of the smoothed image H(x, y) is obtained with the Laplacian operator. By the interchangeability of convolution and differentiation:

$Q(x, y) = \nabla^2 H(x, y) = \nabla^2 [G(x, y) * f(x, y)] = [\nabla^2 G(x, y)] * f(x, y) \quad (3.14)$

so the Gaussian smoothing filter and the Laplacian differential operation are integrated into one convolution operator:

$\nabla^2 G(x, y) = \frac{1}{\pi \sigma^4} \left( \frac{x^2 + y^2}{2\sigma^2} - 1 \right) e^{-\frac{x^2 + y^2}{2\sigma^2}} \quad (3.15)$

This formula is the LOG operator. The zero-crossing trace of Q(x, y) then gives the edges of the image f(x, y): the original gray image is convolved with the LOG operator and the zero crossings are taken as edge points. However, because the LOG operator is highly sensitive to noise, good denoising with smooth images conflicts with accurate localization at the edge positions; the two cannot be unified, which limits the LOG operator.
(6) Traditional Canny operator
The Canny operator is more widely used than the preceding operators. This multi-stage edge detection algorithm has a good signal-to-noise ratio and performs well in edge detection. Canny was originally designed in search of an optimal edge detection algorithm, and it established the criteria for judging whether an edge detection operator is good:
1) a good detection rate; 2) accurate localization; 3) a minimal response.
Thus, the basic steps for implementing the Canny edge detection algorithm are as follows:
1) Noise reduction. The image is smoothed with a suitable Gaussian filter, i.e. a two-dimensional Gaussian function G(x, y) is convolved with the original image f(x, y):
$S(x, y) = G(x, y) * f(x, y) \quad (3.16)$
where σ is the Gaussian filter coefficient: the larger the value of σ, the better the smoothing effect and the noise suppression capability; S(x, y) denotes the convolved image.
2) Gradient values and the angular direction image are calculated from the first derivative. With A(x, y) the derivative in the horizontal direction x and B(x, y) the derivative in the vertical direction y, the gradient amplitude C(x, y) of the image and the direction Θ(x, y) are:

$C(x, y) = \sqrt{A(x, y)^2 + B(x, y)^2}$

$\Theta(x, y) = \arctan\!\left(\frac{B(x, y)}{A(x, y)}\right)$
3) Non-maximum suppression is applied to the gradient magnitudes. In the Canny algorithm an edge point is generally taken to be a point of maximum gradient magnitude, but not every point of large gradient magnitude is an edge point. The gradient amplitude of each pixel is therefore compared with the amplitudes along the gradient direction at that point: if it is not the maximum, the gray value is set to 0 as a non-edge point; if it is the maximum, it is kept as an edge point, yielding a thinned edge.
4) Edges are detected and connected with a double-threshold method. A high threshold and a low threshold are set first, then every pixel in the image is judged, giving two threshold edge images. The high threshold yields a rough contour edge; weak edge information is then found in the low-threshold image to fill the gaps in the high-threshold image edges.
The method is little affected by noise and edge continuity after image processing is good, but it easily produces false edge information and also ignores part of the useful edge information.
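A sketch of the traditional Canny pipeline; cv2.Canny internally performs the gradient, non-maximum-suppression and double-threshold steps listed above, so only the smoothing is written out, and the threshold values are assumptions.

```python
# Traditional Canny sketch: smooth, then let cv2.Canny run steps 2-4.
import cv2

gray = cv2.imread("part_gray.png", cv2.IMREAD_GRAYSCALE)
smooth = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)       # step 1
edges = cv2.Canny(smooth, threshold1=50, threshold2=150)  # steps 2-4
```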
(7) Improved Canny operator
The vision system designed here adopts an improved vision processing algorithm that detects the edge condition of an object well: weak edge information is detected and useful edge information is not easily ignored. Compared with the traditional Canny operator, the improved algorithm replaces the Gaussian filter with a bilateral filter and adds a Gaussian variance term to the original formula, which improves the filtering effect, achieves good denoising, and solves the image edge blurring caused by Gaussian filtering. Meanwhile, to adapt to the randomness of workpiece placement and to wide-range angle changes, calculations in more directions are added to the original gradient calculation rule. Parameters are continuously corrected through these several improvements, so that false edge information is finally identified and handled and the integrity of the original image is well preserved. In the image edge detection required by this system, the improved vision algorithm effectively suppresses noise and achieves good edge localization.
Specifically, the directional gradient improvement formula is as follows:
From the above formula, the magnitude formula of the gradient can be obtained:
and hence the gradient direction:
the result of processing according to each operator is shown in fig. 17.
As can be seen from the figure, after processing with the Roberts, Sobel and Prewitt operators, considerable noise interference remains; the Laplacian operator leaves less noise but more blur than the former ones; the traditional Canny operator removes noise better, but the image information at the edges still has defects; the improved Canny operator has the best detection effect, removing image noise thoroughly and smoothly, retaining edge information more completely and improving edge detection accuracy.
Result output:
After acquisition and preprocessing by the vision system, the information of the photographed part, namely the part coordinates and the deflection angle, has been obtained. The vision system is connected with the robot system through TCP communication, realizing data interchange. For the present study this means that when the robot goes to grab a part, the coordinate information is available, and when the part is deflected, the deflection angle information is available as well.
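A minimal sketch of the TCP link between the vision and robot systems; the patent only states that TCP communication is used, so the address, port and the "X,Y,R" plain-text message format below are assumptions for illustration.

```python
# TCP pose transfer sketch; message format and endpoint are hypothetical.
import socket

HOST, PORT = "192.168.1.50", 9000   # assumed robot controller address

def send_pose(x: float, y: float, r: float) -> None:
    """Send one located pose to the robot over a short-lived TCP connection."""
    with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
        sock.sendall(f"{x:.2f},{y:.2f},{r:.2f}\n".encode("ascii"))

send_pose(412.5, 87.3, -12.4)       # example values, not measured data
```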
A robot system:
After the machined part is located and identified by the machine vision system, the coordinates X, Y and the deflection angle R are transmitted to the robot system. After the robot receives the information, coordinate conversion is performed first: the image coordinates of the photographed part are converted into world coordinates of the robot's motion. The robot then grabs the part at the converted X, Y coordinates and corrects the deflection angle R while carrying the part to the machine tool. The numerical control system then machines the part; when machining is finished, a signal is sent to the robot, which takes the machined part out of the machine tool and places it on the machine tool feeding conveyor belt, completing one cycle of action. The robot system composition diagram is shown in fig. 18.
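Since the parts lie in one plane, the image-to-world conversion just described can be sketched as a 2-D affine map obtained from three or more known point pairs; the point pairs below are hypothetical placeholders, not calibrated values from the patent.

```python
# Image-to-world conversion sketch under a planar assumption.
import cv2
import numpy as np

# Pixel coordinates of reference marks and the matching robot world
# coordinates in mm (hypothetical calibration data).
pix = np.float32([[100, 120], [820, 135], [115, 600]])
world = np.float32([[250.0, -80.0], [250.0, 320.0], [510.0, -75.0]])
A = cv2.getAffineTransform(pix, world)   # 2x3 affine matrix

def pixel_to_world(u: float, v: float) -> tuple[float, float]:
    x, y = A @ np.array([u, v, 1.0])
    return float(x), float(y)

print(pixel_to_world(412.5, 87.3))       # the (X, Y) handed to the robot
```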
The robot used in the system mainly receives the data transmitted by the vision system, performs the gripping actions, and exchanges information with the numerical control system, jointly completing the part-machining task with the machine tool.
The robot system mainly comprises two parts: an input/output (I/O) interface for receiving and sending information, and a servo drive for controlling the movement of the mechanical arm. In this study, the robot system is controlled mainly through software programming of the robot controller: gripping actions are executed through motion instructions, while the sending and receiving of information between the systems is handled through the robot's internal Modbus communication protocol and the related instructions.
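As an illustrative sketch only, such an exchange over Modbus TCP might be written as follows, assuming the pymodbus library; the controller address and the register assignment are hypothetical:

```python
from pymodbus.client import ModbusTcpClient

# Hypothetical register map: one holding register used as a
# "machining finished" flag written by the CNC side.
DONE_REGISTER = 100

client = ModbusTcpClient("192.168.1.30")  # illustrative controller address
client.connect()

# Poll the flag; when the CNC signals completion, acknowledge by resetting it.
result = client.read_holding_registers(address=DONE_REGISTER, count=1)
if not result.isError() and result.registers[0] == 1:
    client.write_register(address=DONE_REGISTER, value=0)

client.close()
```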
Motion control system:
Machine vision is used to identify and locate the part. When the part reaches the designated position and stops, the vision camera operates and the part coordinates are displayed in software; after the robot obtains the part pose information via the motion control system, it grips the part and places it at the designated position in the machine tool. However, because of issues such as the conveying speed of the belt, the part may arrive at the machine-tool loading position tilted; if it were gripped and sent directly into the machine tool it would remain tilted, and since the G code for machining is written in advance, a tilted part would necessarily be machined into a reject. The process therefore also has to correct the pose of a deflected part and ensure that the part is square when it enters the designated position in the machine tool. Machine vision must accordingly also identify the tilt angle of the part and send the angle value to the robot, which corrects the angle after gripping the tilted part, so that the part is machined squarely and the machining quality is guaranteed. Based on this flow, an overall motion control design is needed to ensure that the automated production line runs normally and that the information exchange between all components is smooth.
A specific control flow diagram of the motion control system is shown in fig. 19.
The main control flow is briefly described as follows: the part to be machined is transported by the conveyor belt to the sensor position and stops; the camera takes a photograph and performs the vision processing; the robot acquires the coordinate information from the vision processing and grips the part, correcting the angle according to the deflection angle while moving to the machine tool, so that the part arrives square at the designated machining position; the machine tool then starts machining; when machining is complete, the robot grips the machined part again and places it on the machine-tool feeding conveyor belt to be transported away, ending one machining cycle of the automated numerical control machining unit; the feeding conveyor belt then starts again, and the transport and machining of parts repeats. The motion control system realizes the information exchange between all modules, ensures that the automated production line runs normally, and achieves efficient, high-quality machine-tool machining under the vision system.
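By way of illustration, this flow can be summarized as a simple control loop; every device-interface method in the sketch below is a hypothetical placeholder for the corresponding real component:

```python
def run_machining_cell(cell) -> None:
    """Schematic control loop of the machining unit, one iteration per part.
    All methods on `cell` are hypothetical placeholders for real devices."""
    while True:
        cell.wait_for_part_at_sensor()      # belt stops when the sensor triggers
        x, y, r = cell.capture_and_locate() # camera shot + vision processing
        cell.robot_pick(x, y)               # grip using converted world coordinates
        cell.robot_correct_angle(r)         # undo the deflection en route
        cell.robot_place_in_machine()       # part arrives square in the machine tool
        cell.start_machining()              # CNC runs its prepared G code
        cell.wait_for_machining_done()      # completion signalled over Modbus / I-O
        cell.robot_return_part_to_belt()    # finished part placed back to be carried away
        cell.restart_feed_belt()            # next blank moves up to the sensor
```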
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations that a person of ordinary skill in the art can accomplish without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (6)

1. A numerically controlled machining unit system based on machine vision, comprising a motion control system and an industrial camera, characterized in that: the motion control system is provided with a numerical control system for controlling the machining of parts; the motion control system is also provided with a robot system for controlling a robot in the robot system to grip parts; the motion control system is also provided with a machine vision system for processing images; the industrial camera is connected with the machine vision system and is used for collecting images, and a vision processor in the machine vision system processes the collected images.
2. A machine vision based numerically controlled machining unit system as in claim 1, wherein: the machining unit in the numerical control system is a machine tool, and the numerical control system operates as follows: machine-tool loading is converted from manual transport to automatic feeding by a conveyor belt; the parts to be machined are placed directly on the machine-tool feeding conveyor belt; when a part reaches the designated position, a sensor signal is triggered and the conveyor belt stops; a camera fixed above the designated stopping position on the feeding conveyor belt photographs the part, while the machine vision system processes the captured image and outputs the result information; the robot system is controlled to acquire this information, and the robot grips the part and loads it into the machine tool for machining; the finished product is then taken out by the robot and placed on the machine-tool feeding conveyor belt for transport.
3. A machine vision based numerically controlled machining unit system as in claim 1, wherein the identification and positioning steps of the machine vision system are as follows: a standard template is created, the acquired image is matched against the standard template, and the part pose information is output and transmitted to the robot, the standard template corresponding to a qualified machined product; if the match is consistent, the angle offset is 0 and the robot directly grips the part and sends it into the machine tool for machining; if the match is inconsistent, the angle offset is not 0, and the robot corrects the received angle offset before placing the part into the machine tool for machining.
4. A machine vision based numerically controlled machining unit system as in claim 1, wherein: the robot system receives the coordinates X, Y and the deflection angle R obtained by the machine vision system from positioning and identifying the part to be machined; on receiving this information, the robot first performs a coordinate conversion from the image coordinates of the photographed part to world coordinates in the robot's motion frame, then grips the part using the converted X, Y coordinates and corrects the deflection angle R while carrying the part to the machine tool; the numerical control system then machines the part and, when machining is finished, signals the robot, which takes the machined part out of the machine tool and places it on the machine-tool feeding conveyor belt, completing one cycle of action.
5. A machine vision based numerically controlled machining unit system as in claim 1, wherein: the robot system comprises an I/O interface for receiving and sending information and a servo drive for controlling the movement of the mechanical arm.
6. A machine vision based numerically controlled machining unit system as in claim 1, wherein the image processing steps of the machine vision system are as follows: step one, the color image is converted to grayscale, which increases the speed of later image processing and allows a suitable gray-value region to be selected more quickly from the gray histogram; step two, filtering is performed to eliminate the noise signals interfering with the image, combined with erosion and dilation operations to remove interference to a large extent and give a high-quality image; step three, threshold segmentation of the image is performed to separate out the target region of interest; step four, edge information is extracted: edge extraction recovers important information that is easily overlooked, the angle and coordinate information of the image is computed, and finally a calibration algorithm converts it into the world coordinate system for output and display.
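For illustration only, the four steps of this claim can be sketched in Python/OpenCV as follows; all parameter values are illustrative assumptions:

```python
import cv2

def preprocess_and_extract(image_bgr):
    """Sketch of the claimed image-processing steps; parameters are illustrative."""
    # Step 1: grayscale conversion speeds up all later processing.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Step 2: filtering plus erosion and dilation to suppress noise.
    filtered = cv2.medianBlur(gray, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    cleaned = cv2.dilate(cv2.erode(filtered, kernel), kernel)

    # Step 3: threshold segmentation isolates the target region of interest.
    _, binary = cv2.threshold(cleaned, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 4: edge extraction; angle and coordinate information would then be
    # computed from the contours and mapped to world coordinates via calibration.
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return edges, contours
```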
CN202310200650.XA 2023-06-25 2023-06-25 Numerical control machining unit system based on machine vision Pending CN116594351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310200650.XA CN116594351A (en) 2023-06-25 2023-06-25 Numerical control machining unit system based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310200650.XA CN116594351A (en) 2023-06-25 2023-06-25 Numerical control machining unit system based on machine vision

Publications (1)

Publication Number Publication Date
CN116594351A true CN116594351A (en) 2023-08-15

Family

ID=87588658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310200650.XA Pending CN116594351A (en) 2023-06-25 2023-06-25 Numerical control machining unit system based on machine vision

Country Status (1)

Country Link
CN (1) CN116594351A (en)

Similar Documents

Publication Publication Date Title
CN109544456B (en) Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN109785317B (en) Automatic pile up neatly truss robot's vision system
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN111784655B (en) Underwater robot recycling and positioning method
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN114494045A (en) Large-scale straight gear geometric parameter measuring system and method based on machine vision
CN111738320B (en) Shielded workpiece identification method based on template matching
CN110926330A (en) Image processing apparatus, image processing method, and program
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN115096206A (en) Part size high-precision measurement method based on machine vision
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN108399617B (en) Method and device for detecting animal health condition
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN219153837U (en) Mount paper location laminating device
CN116883498A (en) Visual cooperation target feature point positioning method based on gray centroid extraction algorithm
CN116594351A (en) Numerical control machining unit system based on machine vision
CN115661110A (en) Method for identifying and positioning transparent workpiece
CN114022342A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
CN114022341A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
RU2383925C2 (en) Method of detecting contours of image objects and device for realising said method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination