WO2021088247A1 - 合金分析视觉定位方法、装置及合金分析系统 - Google Patents

合金分析视觉定位方法、装置及合金分析系统

Info

Publication number
WO2021088247A1
WO2021088247A1 (PCT/CN2020/070988; CN2020070988W)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
structured light
tested
image
acquisition device
Prior art date
Application number
PCT/CN2020/070988
Other languages
English (en)
French (fr)
Inventor
孙茂杰
徐海宁
张楠
杨文
苏循亮
周鼎
Original Assignee
江苏金恒信息科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏金恒信息科技股份有限公司
Publication of WO2021088247A1 publication Critical patent/WO2021088247A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications

Definitions

  • the invention relates to the technical field of visual inspection, in particular to a visual positioning method, device and alloy analysis system for alloy analysis.
  • with the development of optics, computers and image processing, optical non-contact measurement offers fast measurement speed and high measurement accuracy, and is widely used in many fields.
  • for example, in the steel industry, product diversification has pushed production toward automation and refinement.
  • to prevent different steel grades from being mixed up, the alloy composition of finished wire rod needs to be analyzed.
  • the structured light measurement system is mainly composed of structured light projection devices, cameras, and image acquisition and processing systems.
  • the measurement principle is to project light with a certain structure, such as a point source, a line source or a grating, onto the measured object.
  • the structured light is deformed by modulation from the surface information of the measured object, and the camera captures the deformed structured light fringe image, from which the three-dimensional position information of the optimal detection point is obtained.
  • the phase measurement method is generally used.
  • the principle is to calculate the phase value of each pixel from multiple grating fringe images with a certain phase difference between them, and then to compute the three-dimensional information of the object from those phase values.
  • the diameters of the finished wire rods and coils vary widely, ranging from 5mm to 34mm and 1.2m to 1.5m, respectively.
  • when finished wire rods and coils of different specifications are combined, at least three fringe grating images must therefore be captured to compute the phase values, which leads to a large amount of computation and low positioning efficiency.
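  • For background on the prior-art phase measurement approach: the relation below is the standard three-step phase-shifting formula from the structured light literature, not a formula recited in this patent. With three fringe images modeled as $I_k = A + B\cos(\phi + \delta_k)$ and captured at phase shifts $\delta_k$ of $-2\pi/3$, $0$ and $+2\pi/3$, the wrapped phase at each pixel is

$$
\phi(x,y)=\arctan\!\left(\frac{\sqrt{3}\,\bigl(I_1(x,y)-I_3(x,y)\bigr)}{2I_2(x,y)-I_1(x,y)-I_3(x,y)}\right),
$$

  which is why at least three fringe images are needed per measurement, in contrast to the single-image approach of the present invention.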
  • the present invention provides a visual positioning method, device and alloy analysis system for alloy analysis.
  • the present invention provides a visual positioning method for alloy analysis, the method comprising:
  • when the robot controls the image acquisition device to move toward the sample to be tested, acquiring the distance between the image acquisition device and the sample to be tested; wherein the image acquisition device is equipped with a structured light source;
  • if the distance between the image acquisition device and the sample to be tested equals a preset distance, controlling the image acquisition device to photograph the surface of the sample to be tested to obtain a sample image;
  • the structured light generated by the structured light source is reflected from the surface of the sample to be tested and received by the image acquisition device, so that the sample image includes structured light stripes that carry the surface deformation characteristics of the sample to be tested;
  • extracting a plurality of structured light stripes from the sample image;
  • determining the best detection point according to the set of center points of each structured light stripe, and calculating the three-dimensional position coordinates of the best detection point; wherein the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides.
  • optionally, extracting the plurality of structured light stripes from the sample image includes:
  • acquiring the brightness value of each pixel in the sample image and judging whether it is greater than a threshold; if the brightness value is greater than the threshold, the pixel is a target point;
  • extracting all target points in the sample image to obtain the plurality of structured light stripes.
  • optionally, determining the best detection point according to the set of center points of each structured light stripe includes:
  • screening out the most convex point of the sample surface within the shooting area of the image acquisition device, according to the center point set of each structured light stripe and the deformation characteristic of the structured light modulated by the surface of the sample to be tested; and taking the most convex point as the best detection point.
  • optionally, screening out the most convex point of the sample surface within the shooting area of the image acquisition device includes:
  • sorting the y coordinates of the pixels in each center point set and obtaining the pixel coordinates (xi, yi) corresponding to the maximum y value, where i is the index of the structured light stripe, 1 ≤ i ≤ N, and N is the number of stripes extracted from the sample image; calculating the depth coordinate Zi corresponding to (xi, yi) by triangulation, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested; and selecting the minimum depth coordinate, the pixel corresponding to which is taken as the most convex point.
  • optionally, obtaining the three-dimensional position coordinates of the best detection point includes:
  • obtaining the conversion relationship between the image coordinate system and the world coordinate system; obtaining, according to that relationship, the coordinates (X, Y) of the best detection point in the world coordinate system; and calculating the depth coordinate Z of the best detection point by triangulation, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested, to obtain the three-dimensional position coordinates (X, Y, Z) of the best detection point.
  • the method further includes:
  • setting a region of interest for each structured light stripe, the region of interest being the stripe area excluding the edge regions on both sides;
  • forming the center point set from the pixels included in the region of interest.
  • the method further includes: marking the best detection point in the sample image.
  • the present invention also provides an alloy analysis visual positioning device for realizing the alloy analysis visual positioning method as described in the first aspect, including an image acquisition device, a structured light source, a laser ranging sensor, and a controller.
  • the alloy analysis visual positioning device is connected to a robot, and the structured light source, the robot, the image acquisition device and the laser ranging sensor are each electrically connected to the controller; the laser ranging sensor is used to detect the distance between the image acquisition device and the sample to be tested;
  • the controller is configured to execute the following program steps:
  • controlling the image acquisition device to move toward the sample to be tested, and acquiring the distance between the image acquisition device and the sample to be tested;
  • if the distance equals the preset distance, controlling the image acquisition device to photograph the surface of the sample to be tested to obtain a sample image; the structured light generated by the structured light source is reflected from the surface of the sample and received by the image acquisition device, so that the sample image includes structured light stripes that carry the surface deformation characteristics of the sample;
  • extracting a plurality of structured light stripes from the sample image, determining the best detection point according to the center point set of each stripe, and calculating its three-dimensional position coordinates; wherein the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides.
  • optionally, the device further includes a bottom plate and an outer shield; the front panel of the outer shield is transparent and the rear end of the outer shield is fixed on the bottom plate; the image acquisition device and the structured light source are fixed on the bottom plate and located inside the outer shield; the laser ranging sensor is arranged on the top of the outer shield.
  • optionally, the axes of the image acquisition device and the structured light source lie in the same vertical plane.
  • in a third aspect, the present invention also provides an alloy analysis system, including a robot, a bracket, an alloy analyzer, and the alloy analysis visual positioning device described in the second aspect; the robot and the alloy analyzer are connected through the bracket, the alloy analysis visual positioning device is arranged on the bracket, the alloy analyzer and the alloy analysis visual positioning device are arranged adjacent to each other and both face the sample to be tested, and the controller is also electrically connected to the alloy analyzer;
  • the controller is configured to execute the following program steps:
  • controlling the motion of the robot so that the alloy analyzer moves to the position corresponding to the three-dimensional position coordinates of the best detection point; and controlling the alloy analyzer to start, so as to perform alloy analysis at the best detection point.
  • the present invention has the following beneficial effects: when the robot controls the image acquisition device to move toward the sample to be tested, the distance between the image acquisition device and the sample is obtained; if the distance equals the preset distance, the image acquisition device can be moved to the best shooting position, ensuring the shooting quality of the structured light stripes.
  • the sample image includes background and structured light stripes.
  • the present invention extracts multiple structured light stripes from the sample image, thereby separating the background from the structured light stripes, so as to improve the accuracy and efficiency of optimal detection point positioning during subsequent image processing. After extracting multiple structured light stripes, obtain the coordinates of other pixel points in each structured light stripe except for the pixels in the edge areas on both sides, and obtain the center point set of each structured light stripe.
  • the center point set, combined with the deformation characteristic of the structured light modulated by the surface of the sample to be tested, makes it possible to determine the unevenness of the sample surface, screen out the best detection point, and finally obtain the three-dimensional position coordinates of the best detection point.
  • a robot can then be used to move the probe of the alloy analyzer to the position corresponding to the three-dimensional position coordinates of the best detection point, completing the visual positioning and alloy analysis process.
  • the invention needs to capture only one sample image of the detection area to calculate the three-dimensional position coordinates of the best detection point; the amount of computation is reduced, no complex image processing is required, and calculation and positioning are more efficient.
  • FIG. 1 is a flowchart of a visual positioning method for alloy analysis according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a sample image with structured light fringes according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a sample image after marking the best detection point according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the detection principle of the depth coordinate Z of the best detection point according to an embodiment of the present invention.
  • FIG. 5 is a control flow chart of a visual positioning device for alloy analysis according to another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of the front structure of an alloy analysis visual positioning device according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of the back structure of an alloy analysis visual positioning device according to another embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of an alloy analysis system shown in another embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the connection structure between the bracket and the alloy analysis visual positioning device and the alloy analyzer according to another embodiment of the present invention.
  • as shown in FIG. 1, an embodiment of the present invention provides a visual positioning method for alloy analysis, and the method includes:
  • Step S10: when the robot controls the image acquisition device to move toward the sample to be tested, the distance between the image acquisition device and the sample to be tested is acquired; the image acquisition device is equipped with a structured light source.
  • since the invention uses visual positioning, the image acquisition device must photograph the surface of the sample to be tested in order to determine the best detection point. A robot can therefore be used to move the image acquisition device toward the sample, adjusting the relative position and distance between them and thereby locating the shooting position of the image acquisition device. A distance measuring device, such as a laser rangefinder or an optical fiber rangefinder, can be chosen to detect the distance between the image acquisition device and the sample; this embodiment does not limit the ranging method.
  • the image acquisition device in the present invention can be an industrial camera, and the sample to be tested can be a wire rod or a coil, or other samples that require alloy analysis, which is not limited in the present invention.
  • the light source provided is a structured light source, which generates structured light. Based on the principle that structured light is deformed by modulation from the surface of the sample to be tested, the structured light reflected from the sample surface is received by the image acquisition device, so that the captured sample image contains structured light stripes carrying the true deformation characteristics of the sample surface.
  • Step S20: if the distance between the image acquisition device and the sample to be tested equals the preset distance, the image acquisition device is controlled to photograph the surface of the sample to be tested to obtain a sample image.
  • the preset distance can be preset according to the characteristics of the sample to be tested, so as to ensure that the image acquisition device can collect sample images at a better shooting distance and ensure the image shooting effect.
  • while the robot moves the image acquisition device toward the sample to be tested, the distance between them can be acquired in real time and compared with the preset distance. If they are not equal, the robot continues to adjust the position of the image acquisition device until they are equal; the shooting position of the image acquisition device is then fixed, and the device can be started to photograph the surface of the sample to be tested.
  • the sample image is thus collected, and the sample image with structured light fringes is shown in Figure 2.
  • the sample image can be saved in a fixed path after being collected, so that the sample image stored in the path can be directly read during subsequent image processing.
  • the preset distance is 200 mm to 400 mm.
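  • A minimal sketch of steps S10/S20, assuming hypothetical `robot`, `rangefinder` and `camera` objects; their interfaces are illustrative stand-ins, not APIs defined in this patent:

```python
import time

PRESET_DISTANCE_MM = 300.0   # any value in the 200 mm - 400 mm range given above
TOLERANCE_MM = 1.0           # assumed stopping tolerance; the patent does not specify one

def position_and_capture(robot, rangefinder, camera):
    """Move the image acquisition device toward the sample until the measured
    distance matches the preset shooting distance, then capture the image."""
    while True:
        distance = rangefinder.read_distance_mm()          # hypothetical sensor API
        if abs(distance - PRESET_DISTANCE_MM) <= TOLERANCE_MM:
            robot.stop()                                    # hypothetical robot API
            break
        # jog toward (positive error) or away from (negative error) the sample
        robot.jog_along_tool_axis(distance - PRESET_DISTANCE_MM)
        time.sleep(0.05)
    return camera.capture()                                 # the sample image
```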
  • Step S30: a plurality of structured light stripes are extracted from the sample image.
  • because imaging systems, transmission media and recording devices are imperfect, digital images are often contaminated by various kinds of noise during formation, transmission and recording. To remove such noise before extracting image features, optionally, a filtering operation is performed in which each pixel value is replaced by the median gray level of its neighborhood; the noise reduction method is not limited to the one described in this embodiment.
  • those skilled in the art can also perform other processing on the sample image according to actual processing requirements, such as image enhancement, etc., for details, refer to existing image processing methods, which will not be repeated in this embodiment.
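  • As an illustration of the optional median filtering mentioned above, a short sketch using OpenCV (the file name and kernel size are assumptions):

```python
import cv2

# Median filtering: each pixel is replaced by the median gray level of its
# neighborhood, which suppresses impulse noise while preserving stripe edges.
gray = cv2.imread("sample_image.png", cv2.IMREAD_GRAYSCALE)
assert gray is not None, "adjust the path to the saved sample image"
denoised = cv2.medianBlur(gray, 5)  # 5x5 neighborhood; the kernel size is an assumed choice
```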
  • FIG. 2 shows an image taken with a wire rod sample as an example.
  • the sample image consists mainly of two parts: the background of dark steel (the black part of the figure) and the structured light stripes (the deformed white stripes in the figure). Because the structured light stripes and the black background have distinct characteristics and different brightness, a threshold T can be preset to segment the background from the structured light stripes.
  • the brightness value f(x, y) at each pixel (x, y) of the image is therefore collected and compared with the threshold T: if f(x, y) is greater than T, the pixel (x, y) is a target point, i.e. one of the pixels forming the structured light stripes; otherwise, the pixel (x, y) is a background point.
  • a series of target points can be extracted, and all target points can form multiple structured light stripes. For example, in the sample image shown in Figure 2, 7 structured light stripes are extracted.
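  • A minimal sketch of this step S30 segmentation, assuming OpenCV and treating each connected group of target points as one stripe; the threshold value and the use of connected components are illustrative choices, not mandated by the patent:

```python
import cv2
import numpy as np

def extract_stripes(gray, threshold_t=128):
    """Return a list of stripes; each stripe is a list of (x, y) target points
    whose brightness f(x, y) exceeds the threshold T."""
    _, binary = cv2.threshold(gray, threshold_t, 255, cv2.THRESH_BINARY)
    num_labels, labels = cv2.connectedComponents(binary)
    stripes = []
    for label in range(1, num_labels):            # label 0 is the dark background
        ys, xs = np.nonzero(labels == label)
        stripes.append(list(zip(xs.tolist(), ys.tolist())))
    return stripes
```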
  • Step S40: determine the best detection point according to the center point set of each structured light stripe, and calculate the three-dimensional position coordinates of the best detection point; the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides.
  • referring to FIG. 2, each structured light stripe in the present invention can include two types of stripe regions: the edge regions on the left and right sides, and the middle stripe region excluding those edge regions. The best detection point is generally selected in the middle stripe region.
  • after the structured light stripes are extracted, the middle region of each stripe is delineated as its ROI (region of interest) according to the preset ROI setting, and all the pixels included in the ROIs constitute the center point set.
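  • A small sketch of building the center point set by dropping a fixed margin of columns at each lateral edge of a stripe; interpreting the edge regions as a fixed-width margin is an assumption made only for illustration:

```python
def center_point_set(stripe_points, edge_margin=3):
    """Keep the pixels of one stripe that lie outside the left/right edge
    regions (the stripe's ROI). `edge_margin` is an assumed width in pixels."""
    xs = [x for x, _ in stripe_points]
    x_min, x_max = min(xs), max(xs)
    return [(x, y) for x, y in stripe_points
            if x_min + edge_margin <= x <= x_max - edge_margin]
```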
  • the applicant found in practice that the deformation characteristic of structured light modulated by the surface of the sample to be tested is as follows: because the surface of the sample is uneven, the structured light striking it is phase-modulated, so that the more a part of the sample protrudes, the lower the corresponding stripe pixels lie in the image; conversely, the more a part is recessed, the higher the corresponding stripe pixels lie. The structured light stripe information in the sample image can therefore be used to analyze the unevenness of the sample surface and determine the best detection point.
  • the present invention screens out the most convex point of the sample surface within the shooting area of the image acquisition device, according to the center point set of each structured light stripe and the above deformation characteristic, and uses the most convex point as the best detection point.
  • the filtering out the most convex points on the surface of the sample to be tested in the shooting area of the image acquisition device includes:
  • in a specific implementation, after the structured light stripes are extracted, the center point set of each stripe is obtained and stored in a point set PointVector; the point set PointVector is then divided according to the number of stripes, so that each structured light stripe corresponds to one center point set rowVectori. For example, in FIG. 2 and FIG. 3, 7 stripes are extracted (N = 7), giving 7 center point sets rowVector1 to rowVector7.
  • as shown in FIG. 3, the upper-left corner of the image is generally taken as the origin of the image coordinate system. For any center point set, the y coordinates of its pixels are sorted (ascending or descending), so the maximum y value is obtained automatically, and the pixel coordinates (xi, yi) corresponding to that maximum y value are then retrieved.
  • (xi, yi) is the pixel lying lowest along the y axis in each structured light stripe; 7 pixel coordinates can thus be selected, namely (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6) and (x7, y7).
  • then, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested, the depth coordinates Zi corresponding to (xi, yi) are calculated by triangulation, giving Z1, Z2, Z3, Z4, Z5, Z6 and Z7.
  • the depth coordinate calculation is described below and illustrated in FIG. 4. The minimum depth coordinate is selected from Z1 to Z7; if, for example, Z3 is the smallest (corresponding to the third stripe), its pixel is (x3, y3), which means that the sample surface position corresponding to (x3, y3) is closest to the image acquisition device, i.e. (x3, y3) is the most convex point of the sample surface within the shooting area.
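  • A minimal sketch of steps (A)-(C) above; `depth_fn(x, y)` stands in for the calibrated triangulation computation, which the patent leaves to the system's geometry:

```python
def most_convex_point(center_point_sets, depth_fn):
    """For each stripe's center point set, take the pixel with the largest y
    value (lowest in the image), compute its depth Zi with `depth_fn`, and
    return the (x, y, Z) triple with the smallest depth, i.e. the most convex
    point of the sample surface within the shooting area."""
    candidates = []
    for points in center_point_sets:              # one rowVector_i per stripe
        x_i, y_i = max(points, key=lambda p: p[1])
        candidates.append((x_i, y_i, depth_fn(x_i, y_i)))
    return min(candidates, key=lambda c: c[2])
```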
  • the most convex point of the sample surface is taken as the best detection point, and the true three-dimensional position corresponding to the best detection point is the position that the probe of the alloy analyzer needs to detect.
  • the best detection point can be marked in the sample image according to the pixel point coordinates (x3, y3) of the best detection point in the image, so as to provide a reference for the user.
  • the best detection point determined from the sample image is expressed in the image coordinate system; it must be converted into the real-world coordinate system to obtain its three-dimensional position coordinates, so that the robot can move the alloy analyzer to the best detection point for alloy analysis.
  • specifically, the conversion relationship between the image coordinate system and the world coordinate system can be obtained in advance from the imaging characteristics and shooting position of the image acquisition device. After the pixel coordinates (x, y) of the best detection point in the sample image are obtained, the point is converted, according to this conversion relationship, into world coordinates suitable for robot motion, giving the coordinates (X, Y) of the best detection point in the world coordinate system, i.e. the X-axis and Y-axis values of its three-dimensional position coordinates.
  • as shown in FIG. 4, the depth coordinate Z of the best detection point, i.e. the Z-axis value of the three-dimensional position coordinates, is calculated by triangulation from the reference position and the geometric relationships (relative position parameters) among the image acquisition device, the structured light source, the ranging device and the sample to be tested, giving the three-dimensional position coordinates (X, Y, Z) of the best detection point.
  • after the three-dimensional position coordinates of the best detection point have been located, the robot is controlled to move the alloy analyzer so that the probe of the alloy analyzer reaches the position corresponding to those coordinates, and alloy analysis can then be performed to obtain the analysis result.
  • the triangular distance measurement method is a conventional distance measurement method. For details, reference may be made to related descriptions in the prior art, which will not be repeated in this embodiment.
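  • A sketch of the coordinate conversion and of one common triangulation form. Both the 2x3 affine transform and the rectified (parallel-axis) camera/light-source depth model are assumptions chosen for illustration; the patent only states that a pre-calibrated conversion relationship and a conventional triangulation method are used:

```python
import numpy as np

def pixel_to_world_xy(pixel_xy, affine_2x3):
    """Map the best detection point from image coordinates (x, y) to world
    coordinates (X, Y) using a calibration-derived 2x3 affine transform."""
    u, v = pixel_xy
    return affine_2x3 @ np.array([u, v, 1.0])

def triangulation_depth(pixel_y, baseline_mm, focal_px, reference_row):
    """Depth Z under a rectified camera/light-source model: Z is proportional
    to baseline times focal length divided by the stripe's pixel displacement
    from a calibrated reference row (chosen so the displacement is nonzero
    over the working range)."""
    disparity = pixel_y - reference_row
    return baseline_mm * focal_px / disparity
```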
  • it can be seen from the above technical solution that, when the robot controls the image acquisition device to move toward the sample to be tested, the distance between the image acquisition device and the sample is obtained; if the distance equals the preset distance, the image acquisition device can be moved to the best shooting position, ensuring the shooting quality of the structured light stripes.
  • the sample image includes background and structured light stripes.
  • the present invention extracts multiple structured light stripes from the sample image, thereby separating the background from the structured light stripes, so as to improve the accuracy and efficiency of optimal detection point positioning during subsequent image processing. After extracting multiple structured light stripes, obtain the coordinates of other pixel points in each structured light stripe except for the pixels in the edge areas on both sides, and obtain the center point set of each structured light stripe.
  • the center point set, combined with the deformation characteristic of the structured light modulated by the surface of the sample to be tested, makes it possible to determine the unevenness of the sample surface, screen out the best detection point, and finally obtain the three-dimensional position coordinates of the best detection point.
  • a robot can then be used to move the probe of the alloy analyzer to the position corresponding to the three-dimensional position coordinates of the best detection point, completing the visual positioning and alloy analysis process.
  • the invention needs to capture only one sample image of the detection area to calculate the three-dimensional position coordinates of the best detection point; the amount of computation is reduced, no complex image processing is required, and calculation and positioning are more efficient.
  • as shown in FIG. 5 and FIG. 6, another embodiment of the present invention provides an alloy analysis visual positioning device, which is used to implement the alloy analysis visual positioning method described in the previous embodiment and includes a structured light source 41, an image acquisition device 42, a laser ranging sensor 43 and a controller 5; the alloy analysis visual positioning device is connected to the robot 1, and the robot 1 is electrically connected to the controller 5.
  • the controller 5 is used to control the movement and the starting and stopping of the robot 1; when the controller 5 moves the robot 1, the robot 1 carries the structured light source 41, the image acquisition device 42 and the laser ranging sensor 43 along with it. The controller 5 is electrically connected to the structured light source 41 to switch the structured light source 41 on and off.
  • the controller 5 is also electrically connected to the image acquisition device 42 and can switch it on and off; the image acquisition device 42 sends the captured sample image to the controller 5, so that the controller 5 can process the sample image and determine the best detection point.
  • the controller 5 is electrically connected to the laser ranging sensor 43 and can switch it on and off; the laser ranging sensor 43 is used to detect the distance between the image acquisition device 42 and the sample to be tested, and sends its measurement signal to the controller 5, so that the controller 5 obtains the distance between the image acquisition device 42 and the sample and can locate the shooting position of the image acquisition device 42.
  • the axes of the image acquisition device 42 and the structured light source 41 lie in the same vertical plane, which improves the shooting quality of the sample image.
  • the distance between the image acquisition device 42 and the structured light source 41 is 70 mm-100 mm.
  • the controller 5 is configured to execute the following program steps:
  • controlling the image acquisition device to move toward the sample to be tested, and acquiring the distance between the image acquisition device and the sample to be tested;
  • if the distance equals the preset distance, controlling the image acquisition device to photograph the surface of the sample to be tested to obtain a sample image;
  • extracting a plurality of structured light stripes from the sample image;
  • determining the best detection point according to the center point set of each structured light stripe and calculating its three-dimensional position coordinates; wherein the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides.
  • the controller 5 is electrically connected to the robot 1, and the robot 1 is fixedly connected to the alloy analyzer 3.
  • the controller 5 can generate corresponding control instructions according to the three-dimensional position coordinates of the best detection point, and send the control instructions to the robot 1;
  • the robot 1 moves according to the control instruction and carries the alloy analyzer 3 with it, thereby moving the alloy analyzer 3 to the three-dimensional position coordinates of the best detection point, where the alloy analyzer 3 performs alloy analysis on the sample to be tested.
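  • A compact sketch of this controller flow end to end; every method on the hypothetical `controller` object below is an illustrative stand-in, not an API defined by the patent:

```python
def run_inspection(controller):
    """Position the camera, capture and process the image, then drive the
    alloy analyzer probe to the best detection point and start the analysis."""
    controller.move_camera_to_preset_distance()        # steps S10-S20
    image = controller.capture_sample_image()
    stripes = controller.extract_stripes(image)        # step S30
    x, y, z = controller.locate_best_point(stripes)    # step S40
    controller.move_analyzer_to(x, y, z)               # robot carries the analyzer
    return controller.start_alloy_analysis()           # analysis result
```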
  • controller 5 can be further configured to execute the following program steps:
  • acquiring the brightness value of each pixel in the sample image and judging whether it is greater than the threshold; if the brightness value is greater than the threshold, the pixel is a target point;
  • All target points in the sample image are extracted to obtain multiple structured light stripes.
  • controller 5 can be further configured to execute the following program steps:
  • screening out the most convex point of the sample surface within the shooting area of the image acquisition device, according to the center point set of each structured light stripe and the deformation characteristic of the structured light modulated by the sample surface, and taking the most convex point as the best detection point.
  • controller 5 can be further configured to execute the following program steps:
  • sorting the y coordinates of the pixels in each center point set and obtaining the pixel coordinates (xi, yi) corresponding to the maximum y value; calculating the depth coordinate Zi corresponding to (xi, yi) by triangulation, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested; and selecting the minimum depth coordinate, the pixel corresponding to which is taken as the most convex point.
  • controller 5 can be further configured to execute the following program steps:
  • obtaining the conversion relationship between the image coordinate system and the world coordinate system; obtaining, according to that relationship, the coordinates (X, Y) of the best detection point in the world coordinate system; and calculating the depth coordinate Z of the best detection point by triangulation to obtain its three-dimensional position coordinates (X, Y, Z).
  • controller 5 can be further configured to execute the following program steps:
  • setting a region of interest for each structured light stripe, the region of interest being the stripe area excluding the edge regions on both sides;
  • the pixel points included in the region of interest are formed into the central point set.
  • controller 5 may be further configured to perform the following program steps: mark the best detection point in the sample image.
  • the alloy analysis visual positioning device further includes a bottom plate 44 and an outer shield 45.
  • the front panel of the outer shield 45 is transparent; the transparent front panel ensures that the structured light emitted by the structured light source 41 can reach the surface of the sample to be tested and that the image acquisition device 42 can capture images, while also providing protection and sealing.
  • the rear end (back) of the outer shield 45 is fixed on the bottom plate 44; the image acquisition device 42 and the structured light source 41 are fixed on the bottom plate 44 and located inside the outer shield 45. The bottom plate 44 both mounts the image acquisition device 42 and the structured light source 41 and seals and protects the rear of the device; the laser ranging sensor 43 is arranged on the top of the outer shield 45.
  • as shown in FIG. 7 to FIG. 9, another embodiment of the present invention provides an alloy analysis system, including a robot 1, a bracket 2, an alloy analyzer 3 and the alloy analysis visual positioning device 4 described in the previous embodiment. The robot 1 is connected to the alloy analyzer 3 through the bracket 2, the alloy analysis visual positioning device 4 is arranged on the bracket 2, and the alloy analyzer 3 and the alloy analysis visual positioning device 4 are arranged adjacent to each other, both facing the sample to be tested 100.
  • the controller is also electrically connected to the alloy analyzer 3.
  • the robot 1 can be a six-axis robot. Optionally, the bracket 2 is an L-shaped bracket with two arms; a flange 21 is provided at the end of one arm and is used to connect the bracket 2 to the robot 1.
  • the end of the other arm of the bracket 2 is connected to the bottom plate 44 through a mounting plate 22, thereby connecting the bracket 2 to the alloy analysis visual positioning device 4; the two arms of the bracket 2 are joined by a support rod 23, which strengthens the supporting structure of the bracket 2.
  • the controller in this embodiment is also configured to execute the following program steps:
  • controlling the motion of the robot so that the alloy analyzer moves to the position corresponding to the three-dimensional position coordinates of the best detection point; and controlling the alloy analyzer to start, so as to perform alloy analysis at the best detection point.
  • in the embodiments of the present invention, optionally, a voice device may be provided on the robot 1 and electrically connected to the controller 5; the voice device is used to broadcast the result of the alloy analyzer 3's test of the sample to be tested, so that on-site personnel know whether the sample is qualified. The controller 5 may then also be configured to control the voice device to broadcast the corresponding prompt message according to the detection result fed back by the alloy analyzer, and to control the robot to return to its starting position.
  • the prompt information can be preset in the voice device. For example, the prompt information can be set to pass or fail the test of a certain sample to be tested, and the specific content of the prompt information is not limited.
  • in the embodiments of the present invention, the alloy analyzer uses X-ray fluorescence (XRF) analysis, which can analyze a variety of materials quickly, accurately and non-destructively; it has an extensive, customizable grade library, allowing users to modify the existing grade library, add new grades or create grade libraries, and it can strictly control the analysis of light elements (magnesium, aluminum, silicon, phosphorus, sulfur); it also has powerful back-end data management functions, and its software can be customized as required.
  • test results and reports can be downloaded directly to a USB flash drive, or the data can be transferred via WiFi, USB or network cable.
  • the controller may be a PLC (Programmable Logic Controller, Programmable Logic Controller), and the PLC may be configured with functions such as a control program and an image processing system.
  • optionally, the robot 1 can be an ABB IRB4600 robot, the laser ranging sensor 43 a Panasonic HG-C1050 laser sensor, the structured light source 41 an OPT-SL10B structured light source, the alloy analyzer 3 a Niton XL2980 alloy analyzer, and the image acquisition device 42 an AVT Mako G-192B industrial camera.

Abstract

A visual positioning method and device for alloy analysis, and an alloy analysis system. When a robot (1) controls an image acquisition device (42) to move toward a sample to be tested (100), the distance between the image acquisition device (42) and the sample to be tested (100) is acquired (S10); the image acquisition device (42) is equipped with a structured light source (41). If the distance between the image acquisition device (42) and the sample to be tested (100) equals a preset distance, the image acquisition device (42) is controlled to photograph the surface of the sample to be tested (100) to obtain a sample image (S20). A plurality of structured light stripes are extracted from the sample image (S30). A best detection point is determined according to the set of center points of each structured light stripe, and the three-dimensional position coordinates of the best detection point are calculated (S40); the center point set includes the pixels in a structured light stripe other than those in the edge regions on both sides. The method needs to capture only one sample image of the detection area to calculate the three-dimensional position coordinates of the best detection point; the amount of computation is reduced, no complex image processing is required, and calculation and positioning are more efficient.

Description

合金分析视觉定位方法、装置及合金分析系统
本申请要求在2019年11月6日提交中国专利局、申请号为201911073340.6、发明名称为“合金分析视觉定位方法、装置及合金分析系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及视觉检测技术领域,尤其涉及一种合金分析视觉定位方法、装置及合金分析系统。
背景技术
随着光学、计算机及图像处理等技术的发展,光学非接触测量集测量速度快、测量精度高等优点,广泛应用于各个领域。比如,在钢铁行业,由于产品的多样化,其生产水平亦趋于自动化和精细化,为防止不同钢种混号,需要对成品盘条进行合金成份分析。
在进行合金分析时,目前多采用结构光测量系统来定位样品表面最佳检测点的位置,结构光测量系统主要由结构光投影装置、摄像机、图像采集处理系统组成。测量原理是向被测物体投射一定结构的光,如点光源、线光源或光栅,结构光受被测物体表面信息的调制而发生形变,利用摄像机获取变形的结构光条纹图像,从而获得最佳检测点的三维位置信息。
在利用结构光测量系统进行最佳检测点定位计算时,目前一般采用相位测量方法,其原理是通过有一定相位差的多幅光栅条纹图像计算图像中每个像素的相位值,然后根据相位值计算物体的三维信息。然而,在实际生产过程中,成品盘条和盘卷的直径变化范围较大,分别为5mm~34mm和1.2m~1.5m,当不同规格的成品盘条和盘卷组合时,就需要拍摄至少三幅光条栅格图像来计算相位值,导致计算量大、定位效率低。
发明内容
为解决上述背景技术中所述的问题,本发明提供一种合金分析视觉定位方法、装置及合金分析系统。
第一方面,本发明提供一种合金分析视觉定位方法,所述方法包括:
当机器人控制图像采集设备向待测样品移动时,获取图像采集设备与待测样品之间的距离;其中,所述图像采集设备配备有结构光源;
如果图像采集设备与待测样品之间的距离等于预设距离,控制所述图像采集设备对待测样品的表面进行拍摄,得到样品图像;所述结构光源产生的结构光经所述待测样品的表面反射后,被所述图像采集设备接收,使所述样品图像中包括携带有待测样品表面形变特征的结构光条纹;
从所述样品图像中提取出多条结构光条纹;
根据每条结构光条纹的中心点集合,确定最佳检测点,并计算所述最佳检测点的三维位置坐标;其中,所述中心点集合包括结构光条纹中除两侧边缘区域像素点之外的其他像 素点。
可选地,所述从所述样品图像中提取出多条结构光条纹,包括:
获取所述样品图像中像素点的亮度值;
判断所述亮度值是否大于阈值;
如果所述亮度值大于阈值,则所述像素点为目标点;
对所述样品图像中所有的目标点进行提取,得到多条所述结构光条纹。
可选地,所述根据每条结构光条纹的中心点集合,确定最佳检测点,包括:
根据每条结构光条纹的中心点集合,以及结构光受所述待测样品表面的调制而发生形变特性,筛选出所述图像采集设备的拍摄区域内待测样品表面的最凸点;
将所述最凸点作为所述最佳检测点。
可选地,所述筛选出所述图像采集设备的拍摄区域内待测样品表面的最凸点,包括:
对所述中心点集合中各像素点的y坐标进行排序,获取最大y坐标值对应的像素点坐标(xi,yi);其中,i表示结构光条纹的序号,1≦i≦N,N为样品图像中提取出的结构光条纹的数量;
根据所述图像采集设备、所述结构光源与所述待测样品之间的相对位置关系,利用三角测距方法,计算(xi,yi)对应的深度坐标Zi;
从深度坐标Zi中筛选出最小深度坐标,并将所述最小深度坐标对应的像素点作为所述最凸点。
可选地,所述获取所述最佳检测点的三维位置坐标,包括:
获取图像坐标系与世界坐标系的转换关系;
根据所述转换关系,获取所述样品图像中最佳检测点在世界坐标系中对应的坐标(X,Y);
根据所述图像采集设备、所述结构光源与所述待测样品之间的相对位置关系,利用三角测距方法,计算所述最佳检测点的深度坐标Z,得到所述最佳检测点的三维位置坐标(X,Y,Z)。
可选地,所述方法还包括:
设定所述结构光条纹的感兴趣区域,所述感兴趣区域为结构光条纹中除两侧边缘区域之外的条纹区域;
将所述感兴趣区域中包括的像素点组成所述中心点集合。
可选地,所述方法还包括:在所述样品图像中对所述最佳检测点进行标记。
第二方面,本发明还提供一种合金分析视觉定位装置,用于实现如第一方面所述的合金分析视觉定位方法,包括图像采集设备、结构光源、激光测距传感器和控制器,所述合金分析视觉定位装置与机器人连接,所述结构光源、所述机器人、所述图像采集设备和所述激光测距传感器分别与所述控制器电连接;所述激光测距传感器用于检测图像采集设备与待测样品之间的距离;
其中,所述控制器被配置为执行如下程序步骤:
控制所述图像采集设备向待测样品移动;
获取图像采集设备与待测样品之间的距离;
如果图像采集设备与待测样品之间的距离等于预设距离,控制所述图像采集设备对待 测样品的表面进行拍摄,得到样品图像;所述结构光源产生的结构光经所述待测样品的表面反射后,被所述图像采集设备接收,使所述样品图像中包括携带有待测样品表面形变特征的结构光条纹;
从所述样品图像中提取出多条结构光条纹;
根据每条结构光条纹的中心点集合,确定最佳检测点,并计算所述最佳检测点的三维位置坐标;其中,所述中心点集合包括结构光条纹中除两侧边缘区域像素点之外的其他像素点。
可选地,所述装置还包括底板和外护罩,所述外护罩的前端面板透明,所述外护罩的后端固定在所述底板上;图像采集设备和结构光源固定在所述底板上,并且图像采集设备和结构光源位于所述外护罩的内部;所述激光测距传感器设置于所述外护罩的顶部。
可选地,所述图像采集设备和结构光源的轴心处于同一竖直平面上。
第三方面,本发明还提供一种合金分析系统,包括机器人、支架、合金分析仪以及如第二方面所述的合金分析视觉定位装置,所述机器人与所述合金分析仪通过所述支架连接,所述合金分析视觉定位装置设置在所述支架上,所述合金分析仪和所述合金分析视觉定位装置相邻设置且均朝向待测样品,所述控制器还与所述合金分析仪电连接;
其中,所述控制器被配置为执行如下程序步骤:
控制机器人运动,使所述合金分析仪移动至最佳检测点的三维位置坐标所对应的位置;
控制所述合金分析仪启动,以对所述最佳检测点处进行合金分析。
本发明具备的有益效果如下:当机器人控制图像采集设备向待测样品移动时,获取图像采集设备与待测样品之间的距离,如果该距离等于预设距离时,则可将图像采集器移动到最佳的拍摄位置,以保证结构光条纹的拍摄效果。样品图像中包括背景和结构光条纹,本发明从样品图像中提取出多条结构光条纹,从而将背景与结构光条纹分离,以便后续图像处理时提高最佳检测点定位的准确性和效率。在多条结构光条纹提取后,获取每条结构光条纹中除两侧边缘区域像素点之外的其他像素点的坐标,得到每条结构光条纹的中心点集合,根据每条结构光条纹的中心点集合,结合结构光受所述待测样品表面的调制而发生形变特性,可以确定出待测样品表面的凹凸性,从而筛选出最佳检测点,并最终得到最佳检测点的三维位置坐标,可以利用机器人将合金分析仪的探头移动至该最佳检测点的三维位置坐标对应的位置处,从而完成视觉定位和合金分析过程。本发明在检测区域只需拍摄一幅样品图像,即可计算出最佳检测点的三维位置坐标,计算量减少,且无需复杂的图像处理过程,计算和定位效率更高。
附图说明
图1为本发明一实施例示出的一种合金分析视觉定位方法的流程图;
图2为本发明一实施例示出的具有结构光条纹的样品图像示意图;
图3为本发明一实施例示出的标记最佳检测点后的样品图像示意图;
图4为本发明一实施例示出的最佳检测点的深度坐标Z的检测原理示意图;
图5为本发明另一实施例示出的合金分析视觉定位装置的控制流程图;
图6为本发明另一实施例示出的合金分析视觉定位装置的正面结构示意图;
图7为本发明另一实施例示出的合金分析视觉定位装置的背面结构示意图;
图8为本发明又一实施例示出的合金分析系统的结构示意图;
图9为本发明又一实施例示出的支架与合金分析视觉定位装置、合金分析仪的连接结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
如图1所示,本发明一实施例提供一种合金分析视觉定位方法,所述方法包括:
步骤S10,当机器人控制图像采集设备向待测样品移动时,获取图像采集设备与待测样品之间的距离;其中,所述图像采集设备配备有结构光源。
由于本发明采用视觉定位,需要利用图像采集设备拍摄待测样品表面的图像,以便确定最佳检测点,因此可利用机器人控制图像采集设备向待测样品移动,以调节图像采集设备与待测样品之间的相对位置和距离,从而定位图像采集设备的拍摄位置,可以选择测距装置来检测图像采集设备与待测样品之间的距离,比如激光测距仪、光纤测距仪等,本实施例对测距方式不作限定。
本发明中的图像采集设备可选为工业相机,待测样品可以为盘条或盘卷,或者其他需要进行合金分析的样品,本发明对此不作限定。本发明在进行样品表面图像采集时,配备的光源为结构光源,可以产生结构光,基于结构光受待测样品表面的调制而发生形变的特性原理,结构光经待测样品的表面反射后被图像采集设备接收,从而使图像采集设备拍摄到样品图像具有携带样品表面真实形变特征的结构光条纹。
步骤S20,如果图像采集设备与待测样品之间的距离等于预设距离,控制所述图像采集设备对待测样品的表面进行拍摄,得到样品图像。
在进行合金分析之前,可以根据待测样品的特点,预先设定所述预设距离,以保证图像采集设备能够在较佳的拍摄距离处采集样品图像,保证图像拍摄效果。在当机器人控制图像采集设备向待测样品移动的过程中,可以实时获取图像采集设备与待测样品之间的距离,并判断图像采集设备与待测样品之间的距离是否等于预设距离,如果判断结果是不等于,则需要继续控制机器人调节图像采集设备的位置,直至判断结果为等于为止,则图像采集设备的拍摄位置定位完成,可以控制图像采集设备启动并对待测样品表面进行拍摄,从而采集到样品图像,具有结构光条纹的样品图像如图2所示。可选地,样品图像采集后可以保存在一个固定的路径下,这样在后续进行图像处理时,可以直接读取该路径中存储的样品图像。可选地,所述预设距离为200mm~400mm。
步骤S30,从所述样品图像中提取出多条结构光条纹。
由于成像系统、传输介质和记录设备等的不完善,数字图像在其形成、传输记录过程中往往会受到多种噪声的污染,为了消除图像中混入的噪声并识别提取图像特征,可选地,采用由一个像素邻域中的灰度级的中值来代替该像素的值的方式进行滤波操作,图像降噪的方式不限于本实施例所述。另外,本领域技术人员还可根据实际处理需求,对样品图像进行其他处理,比如图像增强等,具体可参照现有图像处理方法,本实施例不再赘述。
如图2所示,是以盘条样品为例拍摄的图像,样品图像主要包括两部分,一部分是暗色钢筋的背景(即图中的黑色部分),另一部分是结构光条纹(即图中具有形变的多条白 色条纹),由于结构光条纹和黑色背景具有其各自明显的特征,亮度不同,所以可以预设一个阈值T,阈值T用于区分对背景和结构光条纹进行分割,故本发明中需要采集图像中像素点(x,y)处的亮度值f(x,y),并判断亮度值f(x,y)是否大于阈值T,如果f(x,y)大于T,则像素点(x,y)为目标点,所述目标点是组成多条结构光条纹的像素点,否则,像素点(x,y)则为背景点。通过这种方式,可以提取出一系列的目标点,所有目标点可以组成多个结构光条纹,比如图2所示的样品图像中,提取出7个结构光条纹。
步骤S40,根据每条结构光条纹的中心点集合,确定最佳检测点,并计算所述最佳检测点的三维位置坐标;其中,所述中心点集合包括结构光条纹中除两侧边缘区域像素点之外的其他像素点。
参照图2,本发明中每条结构光条纹可以包括两种条纹区域,一种是左右两侧的边缘区域,另一种除左右两侧边缘区域之外的偏中间的条纹区域,最佳检测点一般在偏中间的条纹区域中进行选取。在提取出多条结构光条纹后,根据设定的结构光条纹对应的ROI(region of interest,感兴趣区域),将每条结构光条纹中偏中间的条纹区域圈定为ROI,则ROI中所包括的全部像素点组成所述中心点集合。
申请人在实践中发现,结构光受所述待测样品表面的调制而发生形变特性为:由于待测样品的表面存在凹凸性,会使照射到待测样品表面的结构光发生相位调制,造成待测样品越凸出的部分对应的光条纹像素点越偏下,反之,待测样品越凹进去的部分对应的光条纹像素点越偏上。因此,可以利用样品图像中的结构光条纹信息来解析样品表面的凹凸性,从而确定最佳检测点。本发明是根据每条结构光条纹的中心点集合,并基于上述结构光受所述待测样品表面的调制而发生形变特性,筛选出所述图像采集设备的拍摄区域内待测样品表面的最凸点,并将所述最凸点作为所述最佳检测点。
进一步地,所述筛选出所述图像采集设备的拍摄区域内待测样品表面的最凸点,包括:
步骤(A):对所述中心点集合中各像素点的y坐标进行排序,获取最大y坐标值对应的像素点坐标(xi,yi);其中,i表示结构光条纹的序号,1≦i≦N,N为样品图像中提取出的结构光条纹的数量;
步骤(B):根据所述图像采集设备、所述结构光源与所述待测样品之间的相对位置关系,利用三角测距方法,计算(xi,yi)对应的深度坐标Zi;
步骤(C):从深度坐标Zi中筛选出最小深度坐标,并将所述最小深度坐标对应的像素点作为所述最凸点,该最凸点即为最佳检测点。
在具体实现中,在提取出多条结构光条纹后,获取每条结构光条纹的中心点集合,集中保存在点集PointVector中;针对点集PointVector,按照结构光条纹的数量进行分类,使每条结构光条纹对应一个中心点集合rowVectori,比如图2和图3中,提取出7个结构光条纹,即N=7,则具有7个中心点集合,分别为rowVector1、rowVector2、rowVector3、rowVector4、rowVector5、rowVector6和rowVector7;如图3所示,一般将图像左上角作为原点,建立图像坐标系,对于任一个中心点集合,对集合中各像素点的y坐标进行排序(升序或降序),则可自动筛选出最大y坐标值,然后获取最大y坐标值对应的像素点坐标(xi,yi),(xi,yi)是每条结构光条纹中在y轴方向上位置最偏下的像素点,则可以筛选出到7个像素点坐标,分别为(x1,y1)(x2,y2)、(x3,y3)、(x4,y4)、(x5,y5)、(x6,y6)和(x7,y7)。
然后根据图像采集设备、结构光源与待测样品之间的相对位置关系,利用三角测距方法,计算(xi,yi)对应的深度坐标Zi,分别为Z1、Z2、Z3、Z4、Z5、Z6和Z7,深度坐标计算方法可参照下文以及图4所示;从Z1、Z2、Z3、Z4、Z5、Z6和Z7中筛选出最小深度坐标,比如Z3最小(对应在第三条结构光条纹中),Z3对应的像素点为(x3,y3),则说明(x3,y3)对应的样品表面位置点距离图像采集设备最近,即(x3,y3)为拍摄区域内样品表面的最凸点,将样品表面的最凸点作为最佳检测点,该最佳检测点对应的真实三维位置信息即为合金分析仪探头需要探测的位置点。可选地,如图3所示,可以根据最佳检测点在图像中的像素点坐标(x3,y3),在样品图像中对最佳检测点进行标记,从而为使用者提供参照。
通过样品图像确定的最佳检测点是在图像坐标系中,还需要将最佳检测点对应转换到真实的世界坐标系中,以获取最佳检测点的三维位置坐标,便于通过机器人控制合金分析仪移动到最佳检测点处进行合金分析。具体地,可以根据图像采集设备的成像特性和拍摄位置等相关信息,预先获取图像坐标系与世界坐标系之间的转换关系,当得到最佳检测点在样品图像中的像素点坐标(x,y)后,根据所述转换关系,将最佳检测点转换为适于机器人运动的世界坐标中,从而得到最佳检测点在世界坐标系中对应的坐标(X,Y),即得到三维位置坐标中的X轴坐标和Y轴坐标。如图4所示,根据基准位、以及图像采集设备、结构光源、测距装置与待测样品之间的相对位置参数等几何关系,利用三角测距方法,计算所述最佳检测点的深度坐标Z,即三维位置坐标中的Z轴坐标,从而得到最佳检测点的三维位置坐标(X,Y,Z)。当定位出最佳检测点的三维位置坐标后,控制机器人移动合金分析仪,使合金分析仪的探头抵达最佳检测点的三维位置坐标对应的位置处,然后即可进行合金分析,得到分析结果。本实施例中,所述的三角测距方法为常规的测距方法,具体可参照现有技术的相关描述,本实施例不再赘述。
由本实施例以上技术方案可知,当机器人控制图像采集设备向待测样品移动时,获取图像采集设备与待测样品之间的距离,如果该距离等于预设距离时,则可将图像采集器移动到最佳的拍摄位置,以保证结构光条纹的拍摄效果。样品图像中包括背景和结构光条纹,本发明从样品图像中提取出多条结构光条纹,从而将背景与结构光条纹分离,以便后续图像处理时提高最佳检测点定位的准确性和效率。在多条结构光条纹提取后,获取每条结构光条纹中除两侧边缘区域像素点之外的其他像素点的坐标,得到每条结构光条纹的中心点集合,根据每条结构光条纹的中心点集合,结合结构光受所述待测样品表面的调制而发生形变特性,可以确定出待测样品表面的凹凸性,从而筛选出最佳检测点,并最终得到最佳检测点的三维位置坐标,可以利用机器人将合金分析仪的探头移动至该最佳检测点的三维位置坐标对应的位置处,从而完成视觉定位和合金分析过程。本发明在检测区域只需拍摄一幅样品图像,即可计算出最佳检测点的三维位置坐标,计算量减少,且无需复杂的图像处理过程,计算和定位效率更高。
如图5和图6所示,本发明另一实施例提供一种合金分析视觉定位装置,用于实现前一实施例所述的合金分析视觉定位方法,包括结构光源41、图像采集设备42、激光测距传感器43和控制器5;所述合金分析视觉定位装置与机器人1连接,机器人1与控制器5电连接,控制器5用于对机器人1的运动和启闭进行控制;当控制器5控制机器人1移动时,机器人1会带动结构光源41、图像采集设备42、激光测距传感器43联动;控制器5 与结构光源41电连接,可以控制结构光源41的启闭;控制器5与图像采集设备42电连接,控制器5可以图像采集设备42的启闭,图像采集设备42将拍摄的样品图像发送至控制器5,以便由控制器5对样品图像进行处理和计算,确定最佳检测点;控制器5与激光测距传感器43电连接,控制器5可以控制激光测距传感器43的启闭,激光测距传感器43用于检测图像采集设备42与待测样品之间的距离,激光测距传感器43可以将测量信号发送给控制器5,使控制器5获取图像采集设备42与待测样品之间的距离,以定位图像采集设备42的拍摄位置;图像采集设备42和结构光源41的轴心处于同一竖直平面上,可以提高样品图像的拍摄质量,可选地,图像采集设备42和结构光源41之间的距离为70mm~100mm。
其中,控制器5被配置为执行如下程序步骤:
控制所述图像采集设备向待测样品移动;
获取图像采集设备与待测样品之间的距离;
如果图像采集设备与待测样品之间的距离等于预设距离,控制所述图像采集设备对待测样品的表面进行拍摄,得到样品图像;
从所述样品图像中提取出多条结构光条纹,并计算每条结构光条纹的中心点坐标;
根据每条结构光条纹的中心点集合,确定最佳检测点,并计算所述最佳检测点的三维位置坐标;其中,所述中心点集合包括结构光条纹中除两侧边缘区域像素点之外的其他像素点。
控制器5与机器人1电连接,机器人1与合金分析仪3固定连接,控制器5可根据最佳检测点的三维位置坐标,生成对应的控制指令,并将所述控制指令发送给机器人1;机器人1根据控制指令移动,会带动合金分析仪3联动,从而将合金分析仪3移动至最佳检测点所处的三维位置坐标处,并由合金分析仪3对待测样品进行合金分析。
可选地,控制器5还可被进一步配置为执行如下程序步骤:
获取所述样品图像中像素点的亮度值;
判断所述亮度值是否大于阈值;
如果所述亮度值大于阈值,则所述像素点为目标点;
对所述样品图像中所有的目标点进行提取,得到多条所述结构光条纹。
可选地,控制器5还可被进一步配置为执行如下程序步骤:
根据每条结构光条纹的中心点集合,以及结构光受所述待测样品表面的调制而发生形变特性,筛选出所述图像采集设备的拍摄区域内待测样品表面的最凸点;
将所述最凸点作为所述最佳检测点。
可选地,控制器5还可被进一步配置为执行如下程序步骤:
对所述中心点集合中各像素点的y坐标进行排序,获取最大y坐标值对应的像素点坐标(xi,yi);其中,i表示结构光条纹的序号,1≦i≦N,N为样品图像中提取出的结构光条纹的数量;
根据所述图像采集设备、所述结构光源与所述待测样品之间的相对位置关系,利用三角测距方法,计算(xi,yi)对应的深度坐标Zi;
从深度坐标Zi中筛选出最小深度坐标,并将所述最小深度坐标对应的像素点作为所述最凸点。
可选地,控制器5还可被进一步配置为执行如下程序步骤:
获取图像坐标系与世界坐标系的转换关系;
根据所述转换关系,获取所述样品图像中最佳检测点在世界坐标系中对应的坐标(X,Y);
根据所述图像采集设备、所述结构光源与所述待测样品之间的相对位置关系,利用三角测距方法,计算所述最佳检测点的深度坐标Z,得到所述最佳检测点的三维位置坐标(X,Y,Z)。
可选地,控制器5还可被进一步配置为执行如下程序步骤:
设定所述结构光条纹的感兴趣区域,所述感兴趣区域为结构光条纹中除两侧边缘区域之外的条纹区域;
将所述感兴趣区域中包括的像素点组成所述中心点集合。
可选地,控制器5还可被进一步配置为执行如下程序步骤:在所述样品图像中对所述最佳检测点进行标记。
可选地,如图6和图7所示,所述合金分析视觉定位装置还包括底板44和外护罩45,外护罩45的前端(即正面)面板透明,透明的前端面板可以保证结构光源41发出的结构光能够入射到待测样品的表面,以及保证图像采集设备42能够采集到图像,同时透明的前端面板还能起到防护密封的作用;外护罩45的后端(即背面)固定在底板44上,图像采集设备42和结构光源41固定在底板44上,图像采集设备42和结构光源41位于外护罩45的内部,底板44既用于安装图像采集设备42和结构光源41,还能对装置的后端进行密封防护;激光测距传感器43设置于外护罩45的顶部。
如图7~图9所示,本发明又一实施例提供一种合金分析系统,包括机器人1、支架2、合金分析仪3以及前一实施例所述的合金分析视觉定位装置4,机器人1与合金分析仪3通过支架2连接,合金分析视觉定位装置4设置在支架2上,合金分析仪3和合金分析视觉定位装置4相邻设置且均朝向待测样品100,所述控制器还与合金分析仪3电连接;机器人1可选为六轴机器人;可选地,支架2为具有两条边部的L形支架,支架2一边的端部设有法兰21,法兰21用于将支架2与机器人1连接起来,支架2另一边的端部通过安装板22与底板44连接,从而将支架2与合金分析视觉定位装置4连接起来,支架2的两条边部之间通过支撑杆23连接,以强化支架2的支撑结构。
在前述实施例中所述控制器被配置执行的程序的基础上,本实施例中所述控制器还被配置为执行如下程序步骤:
控制机器人运动,使所述合金分析仪移动至最佳检测点的三维位置坐标所对应的位置;
控制所述合金分析仪启动,以对所述最佳检测点处进行合金分析。
本发明各实施例中,可选地,在机器人1上可以设置语音装置,所述语音装置与控制器5电连接,所述语音装置用于播报合金分析仪3对待测样品的检测结果,以使现场人员获知待测样品是否合格。则所述控制器5还可被配置为:根据所述合金分析仪反馈的检测结果,控制所述语音装置播报对应的提示信息,并控制机器人回到起始位。所述提示信息可以预设在语音装置中,提示信息比如可设置为某待测样品检测合格或不合格,提示信息的具体内容不做限定。
本发明各实施例中,合金分析仪是采用X荧光分析技术,能快速、精确无损的分析多 种材质;拥有广泛、可自定义牌号库,用户可对现有牌号库进行修改,添加新牌号或创建牌号库,可严格控制轻元素(镁铝硅磷硫)的分析;拥有强大的后台数据管理功能,可按要求定制软件。检测结果和报告可直接下载到U盘,或通过WiFi、USB或网线实现数据的传输。
本发明各实施例中,控制器可选为PLC(Programmable Logic Controller,可编程逻辑控制器),PLC中可配置有控制程序以及图像处理系统等功能。可选地,机器人1可选用ABB IRB4600型机器人,激光测距传感器43可选用Panasonic HG-C1050激光传感器,结构光源41选用OPT-SL10B型结构光源,合金分析仪3选用尼通XL2980型合金分析仪,图像采集设备42选用AVT Mako G-192B型工业相机。
本发明各实施例之间相同或相似的内容相互参照即可,相关实施例中不再赘述。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本申请旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本发明未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本发明的真正范围和精神由所附的权利要求指出。
应当理解的是,本发明并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本发明的范围仅由所附的权利要求来限制。

Claims (10)

  1. A visual positioning method for alloy analysis, characterized in that the method comprises:
    when a robot controls an image acquisition device to move toward a sample to be tested, acquiring the distance between the image acquisition device and the sample to be tested; wherein the image acquisition device is equipped with a structured light source;
    if the distance between the image acquisition device and the sample to be tested equals a preset distance, controlling the image acquisition device to photograph the surface of the sample to be tested to obtain a sample image; the structured light generated by the structured light source being reflected from the surface of the sample to be tested and received by the image acquisition device, so that the sample image includes structured light stripes carrying the surface deformation characteristics of the sample to be tested;
    extracting a plurality of structured light stripes from the sample image;
    determining a best detection point according to the set of center points of each structured light stripe, and calculating the three-dimensional position coordinates of the best detection point; wherein the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides; the best detection point being the most convex point of the surface of the sample to be tested.
  2. The method according to claim 1, characterized in that extracting a plurality of structured light stripes from the sample image comprises:
    acquiring the brightness value of a pixel in the sample image;
    judging whether the brightness value is greater than a threshold;
    if the brightness value is greater than the threshold, the pixel is a target point;
    extracting all target points in the sample image to obtain the plurality of structured light stripes.
  3. The method according to claim 1, characterized in that determining the best detection point according to the set of center points of each structured light stripe comprises:
    screening out the most convex point of the surface of the sample to be tested within the shooting area of the image acquisition device, according to the center point set of each structured light stripe and the deformation characteristic of the structured light modulated by the surface of the sample to be tested;
    taking the most convex point as the best detection point.
  4. The method according to claim 3, characterized in that screening out the most convex point of the surface of the sample to be tested within the shooting area of the image acquisition device comprises:
    sorting the y coordinates of the pixels in each center point set, and obtaining the pixel coordinates (xi, yi) corresponding to the maximum y value; wherein i is the index of the structured light stripe, 1 ≤ i ≤ N, and N is the number of structured light stripes extracted from the sample image;
    calculating the depth coordinate Zi corresponding to (xi, yi) by triangulation, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested;
    selecting the minimum depth coordinate from the depth coordinates Zi, and taking the pixel corresponding to the minimum depth coordinate as the most convex point.
  5. The method according to claim 1, characterized in that obtaining the three-dimensional position coordinates of the best detection point comprises:
    obtaining the conversion relationship between the image coordinate system and the world coordinate system;
    obtaining, according to the conversion relationship, the coordinates (X, Y) corresponding to the best detection point of the sample image in the world coordinate system;
    calculating the depth coordinate Z of the best detection point by triangulation, according to the relative positions of the image acquisition device, the structured light source and the sample to be tested, to obtain the three-dimensional position coordinates (X, Y, Z) of the best detection point.
  6. The method according to claim 1, characterized in that the method further comprises:
    setting a region of interest for each structured light stripe, the region of interest being the stripe area in the structured light stripe other than the edge regions on both sides;
    forming the center point set from the pixels included in the region of interest.
  7. The method according to claim 1, characterized in that the method further comprises: marking the best detection point in the sample image.
  8. A visual positioning device for alloy analysis, used to implement the visual positioning method for alloy analysis according to any one of claims 1-7 and comprising an image acquisition device and a structured light source, characterized in that it further comprises a laser ranging sensor and a controller; the visual positioning device for alloy analysis is connected to a robot, and the structured light source, the robot, the image acquisition device and the laser ranging sensor are each electrically connected to the controller; the laser ranging sensor is used to detect the distance between the image acquisition device and the sample to be tested;
    wherein the controller is configured to execute the following program steps:
    controlling the image acquisition device to move toward the sample to be tested;
    acquiring the distance between the image acquisition device and the sample to be tested;
    if the distance between the image acquisition device and the sample to be tested equals a preset distance, controlling the image acquisition device to photograph the surface of the sample to be tested to obtain a sample image; the structured light generated by the structured light source being reflected from the surface of the sample to be tested and received by the image acquisition device, so that the sample image includes structured light stripes carrying the surface deformation characteristics of the sample to be tested;
    extracting a plurality of structured light stripes from the sample image;
    determining a best detection point according to the set of center points of each structured light stripe, and calculating the three-dimensional position coordinates of the best detection point; wherein the center point set includes the pixels in the structured light stripe other than those in the edge regions on both sides.
  9. The device according to claim 8, characterized in that the device further comprises a bottom plate and an outer shield; the front panel of the outer shield is transparent, and the rear end of the outer shield is fixed on the bottom plate; the image acquisition device and the structured light source are fixed on the bottom plate and located inside the outer shield; the laser ranging sensor is arranged on the top of the outer shield; the axes of the image acquisition device and the structured light source lie in the same vertical plane.
  10. An alloy analysis system, characterized in that it comprises a robot, a bracket, an alloy analyzer and the visual positioning device for alloy analysis according to claim 8 or 9; the robot and the alloy analyzer are connected through the bracket, the visual positioning device for alloy analysis is arranged on the bracket, the alloy analyzer and the visual positioning device for alloy analysis are arranged adjacent to each other and both face the sample to be tested, and the controller is also electrically connected to the alloy analyzer;
    wherein the controller is configured to execute the following program steps:
    controlling the motion of the robot so that the alloy analyzer moves to the position corresponding to the three-dimensional position coordinates of the best detection point;
    controlling the alloy analyzer to start, so as to perform alloy analysis at the best detection point.
PCT/CN2020/070988 2019-11-06 2020-01-08 合金分析视觉定位方法、装置及合金分析系统 WO2021088247A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911073340.6 2019-11-06
CN201911073340.6A CN110567963B (zh) 2019-11-06 2019-11-06 合金分析视觉定位方法、装置及合金分析系统

Publications (1)

Publication Number Publication Date
WO2021088247A1 true WO2021088247A1 (zh) 2021-05-14

Family

ID=68786087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/070988 WO2021088247A1 (zh) 2019-11-06 2020-01-08 合金分析视觉定位方法、装置及合金分析系统

Country Status (2)

Country Link
CN (1) CN110567963B (zh)
WO (1) WO2021088247A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110567963B (zh) * 2019-11-06 2020-02-04 江苏金恒信息科技股份有限公司 合金分析视觉定位方法、装置及合金分析系统
CN111272756B (zh) * 2020-03-09 2022-08-26 江苏金恒信息科技股份有限公司 一种合金分析系统
CN111562262B (zh) * 2020-05-27 2020-10-13 江苏金恒信息科技股份有限公司 一种合金分析系统及其复检方法
CN111618855B (zh) * 2020-05-27 2021-10-08 江苏金恒信息科技股份有限公司 一种自动挂牌系统及方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204789315U (zh) * 2015-06-25 2015-11-18 成都富江机械制造有限公司 一种便携式合金分析仪
CN105548196A (zh) * 2015-12-07 2016-05-04 郑州轻工业学院 硬质合金顶锤在线无损检测的方法和装置
CN107291081A (zh) * 2017-07-18 2017-10-24 广州松兴电气股份有限公司 轨道车辆检修小车
WO2018024841A1 (de) * 2016-08-04 2018-02-08 Hydro Aluminium Rolled Products Gmbh Vorrichtung und verfahren zur legierungsanalyse von schrottfragmenten aus metall
CN108469234A (zh) * 2018-03-02 2018-08-31 北京科技大学 一种在轨航天器表面异常状况智能检测方法及其系统
CN110103196A (zh) * 2019-06-19 2019-08-09 广东电网有限责任公司 一种gis的检修机器人和gis的检修系统
CN110567963A (zh) * 2019-11-06 2019-12-13 江苏金恒信息科技股份有限公司 合金分析视觉定位方法、装置及合金分析系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102589476B (zh) * 2012-02-13 2014-04-02 天津大学 高速扫描整体成像三维测量方法
CN105931232B (zh) * 2016-04-18 2019-02-19 南京航空航天大学 结构光光条中心高精度亚像素提取方法
CN106338521B (zh) * 2016-09-22 2019-04-12 华中科技大学 增材制造表面及内部缺陷与形貌复合检测方法及装置
CN106312397B (zh) * 2016-10-12 2018-04-13 华南理工大学 一种激光视觉引导的焊接轨迹自动跟踪系统及方法
CN107052086A (zh) * 2017-06-01 2017-08-18 扬州苏星机器人科技有限公司 基于三维视觉的冲压件表面缺陷检测装置及检测方法
CN107798698B (zh) * 2017-09-25 2020-10-27 西安交通大学 基于灰度修正与自适应阈值的结构光条纹中心提取方法
CN110243293B (zh) * 2019-06-18 2021-01-08 上海同岩土木工程科技股份有限公司 基于结构光和机器视觉的管片错台快速检测装置与方法
CN110160770B (zh) * 2019-06-25 2021-12-21 沈阳工业大学 高速旋转主轴实时检测装置及其检测方法
CN110216662A (zh) * 2019-07-18 2019-09-10 江苏金恒信息科技股份有限公司 一种工业机器人本体

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204789315U (zh) * 2015-06-25 2015-11-18 成都富江机械制造有限公司 一种便携式合金分析仪
CN105548196A (zh) * 2015-12-07 2016-05-04 郑州轻工业学院 硬质合金顶锤在线无损检测的方法和装置
WO2018024841A1 (de) * 2016-08-04 2018-02-08 Hydro Aluminium Rolled Products Gmbh Vorrichtung und verfahren zur legierungsanalyse von schrottfragmenten aus metall
CN107291081A (zh) * 2017-07-18 2017-10-24 广州松兴电气股份有限公司 轨道车辆检修小车
CN108469234A (zh) * 2018-03-02 2018-08-31 北京科技大学 一种在轨航天器表面异常状况智能检测方法及其系统
CN110103196A (zh) * 2019-06-19 2019-08-09 广东电网有限责任公司 一种gis的检修机器人和gis的检修系统
CN110567963A (zh) * 2019-11-06 2019-12-13 江苏金恒信息科技股份有限公司 合金分析视觉定位方法、装置及合金分析系统

Also Published As

Publication number Publication date
CN110567963B (zh) 2020-02-04
CN110567963A (zh) 2019-12-13

Similar Documents

Publication Publication Date Title
WO2021088247A1 (zh) 合金分析视觉定位方法、装置及合金分析系统
CN102175700B (zh) 数字x射线图像焊缝分割和缺陷检测方法
CN102455171B (zh) 一种激光拼焊焊缝背面几何形貌检测方法
JP5997989B2 (ja) 画像測定装置、その制御方法及び画像測定装置用のプログラム
CN101532926A (zh) 冲击试样自动加工装置在线检测视觉系统及其图像处理方法
CN108007388A (zh) 一种基于机器视觉的转盘角度高精度在线测量方法
TWI729186B (zh) 晶圓中開口尺寸之光學量測
CN105157603A (zh) 一种线激光传感器及其三维坐标数据的计算方法
CN104316530A (zh) 一种零部件检测方法及应用
JP6115639B2 (ja) 情報処理装置、検査範囲の計算方法、及びプログラム
WO2021238095A1 (zh) 一种合金分析系统及其复检方法
CN109791038B (zh) 台阶大小及镀金属厚度的光学测量
JP2024508331A (ja) マシンビジョンによる検出方法、その検出装置及びその検出システム
US10168524B2 (en) Optical measurement of bump hieght
CN111272756B (zh) 一种合金分析系统
CN114088157A (zh) 钢水液面检测方法、系统、设备及介质
Onishi et al. Theoretical and experimental guideline of optimum design of defect-inspection apparatus for transparent material using phase-shift illumination approach
CN109791039B (zh) 使用光学显微镜产生样本的三维信息的方法
US20230128214A1 (en) User interface device for autonomous machine vision inspection
TWI450572B (zh) 影像邊界掃描的電腦系統及方法
CN117522830A (zh) 用于检测锅炉腐蚀的点云扫描系统
JPH03100424A (ja) 指示計自動読み取り装置
Eitzinger et al. Robotic inspection systems
TW202209261A (zh) 用於光學檢測的路徑建立方法及其裝置
CN117872699A (zh) 一种手表密封胶圈缺陷检测系统、方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20884559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20884559

Country of ref document: EP

Kind code of ref document: A1