CN117110319B - Sphere surface defect detection method and detection system based on 3D imaging - Google Patents

Sphere surface defect detection method and detection system based on 3D imaging

Info

Publication number
CN117110319B
Authority
CN
China
Prior art keywords
point cloud
sphere
detection
point
detected
Prior art date
Legal status
Active
Application number
CN202311369143.5A
Other languages
Chinese (zh)
Other versions
CN117110319A (en)
Inventor
徐健
刘大双
陆振
王迎春
史文杰
周美兰
Current Assignee
Huiding Zhilian Equipment Technology Jiangsu Co ltd
Original Assignee
Huiding Zhilian Equipment Technology Jiangsu Co ltd
Priority date
Filing date
Publication date
Application filed by Huiding Zhilian Equipment Technology Jiangsu Co ltd
Priority to CN202311369143.5A
Publication of CN117110319A
Application granted
Publication of CN117110319B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/951 Balls
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E30/00 Energy generation of nuclear origin
    • Y02E30/30 Nuclear fission reactors

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention provides a sphere surface defect detection method and a detection system based on 3D imaging, wherein the detection method comprises the following steps: (a) Scanning the upper hemispherical surface and the lower hemispherical surface of a sphere to be detected to obtain an upper hemispherical 3D depth image and a lower hemispherical 3D depth image of the sphere to be detected; (b) Based on the obtained upper hemispherical 3D depth image and the lower hemispherical 3D depth image, splicing to obtain a 3D point cloud model corresponding to the sphere to be detected; and (c) judging whether the surface of the sphere to be detected has defects or not by combining the 3D point cloud image before splicing and the point cloud model after splicing.

Description

Sphere surface defect detection method and detection system based on 3D imaging
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a sphere surface defect detection method and system based on 3D imaging.
Background
An automobile is assembled from tens of thousands of parts; these parts are the basic and essential building blocks of automobile development, and their quality directly affects the performance of the whole vehicle. Surface defect detection helps enterprises better understand how product quality is distributed, identify weak links in quality, reduce quality fluctuation, and form closed-loop control between production and quality improvement, so in recent years enterprises' requirements on the appearance quality of parts have become increasingly strict.
Currently, the mainstream surface defect detection methods fall into two categories: methods based on conventional mechanisms and methods based on machine vision. The first category can be divided into eddy current testing, alternating current field testing, magnetic flux leakage testing and the like; these methods mainly use high-sensitivity electromechanical or optical techniques and realize non-destructive detection of surface defects by analysing and processing the acquired electrical or magnetic signal information. Eddy current testing is mainly suitable for the surface and near-surface of conductive materials, but it places high demands on the condition of the surface to be inspected, performs poorly when the roughness is large, and has a low recognition rate for dirt and slight scratches; alternating current field testing is only suitable for ferromagnetic materials with high magnetic permeability, the equipment is costly, and the types of detectable defects are limited; magnetic flux leakage testing requires demagnetization after the inspection is completed, the operation is complex, and it is difficult to accurately distinguish defect types.
The second category, defect detection based on machine vision, is an effective means of achieving automated, intelligent and precise inspection, and has the advantages of high reliability, high detection speed, low cost and a wide range of applications. Existing machine vision mainly relies on 2D image processing, but as manufacturing processes become increasingly complex, the requirements on the accuracy and stability of surface defect detection also keep rising, and sometimes RGB information alone cannot meet the detection requirements. 3D defect detection, which analyses and processes the depth information of the product, can detect and analyse surface defects more accurately by combining computer vision with 3D imaging technology. For a graphite sphere as the inspection object, surface dirt appears black in 2D images and has low contrast against concave-convex defect features, which easily leads to misjudgment. With a 3D camera, a feature is imaged as a defect only if it has a certain depth, so interference from dirt does not cause the detection to be misjudged.
Disclosure of Invention
A main advantage of the present application is to provide a method and a system for detecting sphere surface defects based on 3D imaging, wherein the method is suitable for detecting defects such as surface scratches and dirt on the sphere, can effectively control product quality, improve the production process, reduce labor cost and avoid economic losses.
Another advantage of the present application is to provide a method and a system for detecting a defect on a surface of a sphere based on 3D imaging, wherein the method detects a concave-convex defect on a surface of a graphite sphere based on 3D imaging principle, and the method can obtain a large amount of effective data information in a short time, and has high measurement accuracy, high speed and strong adaptability.
Another advantage of the present application is to provide a method and a system for detecting a surface defect of a sphere based on 3D imaging, wherein the method is a method for detecting a surface defect of a graphite sphere based on 3D imaging principle, so as to realize efficient and accurate detection of the surface defect of the sphere.
Another advantage of the present application is to provide a method and a system for detecting a defect on a surface of a sphere based on 3D imaging, wherein the detection method is a non-contact detection scheme, avoiding damage to the surface of the workpiece to be detected and secondary damage to the product caused by contact probing.
The method and the system for detecting the surface defects of the sphere based on 3D imaging have the further advantage that the detection system adopts a rotary image acquisition mode combined with dual stations, so that the acquisition time for each workpiece is less than 2 s and the overall detection time is shortened.
Another advantage of the present application is to provide a method and a system for detecting a defect on a surface of a sphere based on 3D imaging, where the method uses a detection scheme based on 3D imaging technology, so that the adaptability to a surface to be measured is stronger on the premise of ensuring measurement accuracy, and stability and anti-interference capability of the system are enhanced.
Thus, according to a first aspect of the present invention, there is provided a method for detecting a surface defect of a sphere based on 3D imaging, the method comprising the steps of:
(a) Scanning the upper hemispherical surface and the lower hemispherical surface of a sphere to be detected to obtain an upper hemispherical 3D depth image and a lower hemispherical 3D depth image of the sphere to be detected;
(b) Based on the obtained upper hemispherical 3D depth image and the lower hemispherical 3D depth image, splicing to obtain a 3D point cloud model corresponding to the sphere to be detected; and
(c) And judging whether defects exist on the surface of the sphere to be detected or not by combining the 3D point cloud image before splicing and the point cloud model after splicing.
According to one embodiment of the present application, in step (a) of the detection method, an upper hemispherical surface of the sphere to be detected is photographed by a 3D camera, wherein a scanning field of view of the 3D camera covers an entire hemisphere of the sphere to be detected, and in a process of acquiring three-dimensional information, the sphere to be detected needs to perform autorotation along a rotation axis passing through a center of the sphere to acquire the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image.
According to one embodiment of the present application, in step (a) of the detection method, after three-dimensional information of the sphere to be detected is collected at the first detection station, the sphere to be detected is turned upside down and is transferred to the second detection station, so as to complete the acquisition of the three-dimensional information of the lower half.
According to one embodiment of the present application, step (b) of the detection method further comprises the steps of:
(b.1) converting the acquired upper hemisphere 3D depth image and the lower hemisphere 3D depth image into corresponding multi-slice 3D point cloud images; and
(b.2) moving the other 3D point cloud images based on the position of the first 3D point cloud image so as to realize the stitching of the 3D point cloud images.
According to one embodiment of the application, in step (b.2) of the detection method, start-stop point coordinates of the first 3D point cloud image are sorted and translated to theoretical positions of the whole image.
According to one embodiment of the application, in the step (b.2) of the detection method, translation matching and angle difference calculation are performed on the first 3D point cloud image and other point cloud images, and the remaining other 3D point cloud images are rotated by a certain angle according to the calculation result to complete the stitching of the 3D point cloud.
According to one embodiment of the present application, step (c) of the detection method further comprises the steps of:
the method comprises the steps of (c.1) grabbing suspected points different from surrounding depths in a 3D point cloud image before splicing, and positioning the suspected points into the spliced 3D point cloud image after coordinate conversion;
selecting a pixel area in a specific range around the suspected point, constructing a space plane for the area, and projecting the suspected point to the space plane to obtain a projection point, wherein the projection point and the original suspected point form a detection vector; and
and (c.3) calculating the included angle between the normal vector and the detection vector of the suspected point, and judging the suspected point to be a concave surface when the included angle is smaller than 90 degrees, and otherwise judging the suspected point to be a convex surface.
According to one embodiment of the present application, a 100x100 pixel area is selected around the suspected point.
According to one embodiment of the present application, in step (c.1) of the detection method, a suspected point is roughly acquired, wherein the step further includes:
(c.1.1) fitting a theoretical sphere based on the 3D point cloud data, and obtaining a sphere center and a radius R;
(c.1.2) calculating the distance d from each point in the 3D point cloud to the fitted sphere center; and
(c.1.3) comparing each distance d with the radius R: all points satisfying |d-R| > thresh are suspected points.
According to one embodiment of the present application, in step (c) of the detection method, the suspected points are filtered and judged: the suspected points are connected into regions, the area of each region is calculated, and regions with a small area are filtered out; for regions with a large area, if |d-R| varies gently across the region, the region reflects poor sphericity rather than a scratch and is also filtered out.
According to one embodiment of the present application, in step (c) of the detection method, this filtering computes statistics of the normal-vector gradient: if the gradient is small, the region is considered gently varying; otherwise it is identified as a collision or scratch defect.
According to another aspect of the present application, the present application further provides a sphere surface defect detection system based on 3D imaging, including:
the device comprises a first detection station, a second detection station, a first scanning mechanism, a second scanning mechanism, a turnover conveying mechanism and a processor, wherein a sphere to be detected can be placed at the first detection station and the second detection station, the turnover conveying mechanism is used for turnover and conveying the sphere to be detected from the first detection station to the second detection station, the first scanning mechanism is used for acquiring an upper hemisphere depth image of the sphere to be detected, the second scanning mechanism is used for acquiring a lower hemisphere depth image of the sphere to be detected, and the processor is used for splicing the depth images acquired by the first scanning mechanism and the second scanning mechanism into a sphere 3D point cloud model corresponding to the sphere to be detected; and judging whether the concave-convex defects exist or not by combining the spherical point cloud model.
According to one embodiment of the application, the first scanning mechanism and the second scanning mechanism have a central scanning axis, wherein the first scanning mechanism and the second scanning mechanism are mounted inclined 45 ° to one side of the first inspection station and the second inspection station.
According to one embodiment of the application, the first detection station and the second detection station can rotate along a central axis, and the first detection station and the second detection station bear the ball to be detected to do autorotation along the rotation axis passing through the center of the ball.
According to one embodiment of the application, the processor comprises an information conversion unit, a point cloud splicing unit and a defect detection unit, wherein the information conversion unit converts hemispherical images obtained by scanning by the first scanning mechanism and the second scanning mechanism into corresponding upper hemispherical 3D point cloud images and lower hemispherical 3D point cloud images; the point cloud splicing unit is used for sorting and splicing the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image to obtain a point cloud model corresponding to the sphere to be detected; and the defect detection unit is used for judging whether the concave-convex defects exist or not by combining the point cloud images before and after splicing.
Further objects and advantages of the present invention will become fully apparent from the following description and the accompanying drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description and accompanying drawings.
Drawings
The technical scheme of the present invention will be described in further detail with reference to the accompanying drawings and examples. In the drawings, like reference numerals are used to refer to like parts unless otherwise specified. Wherein:
fig. 1 is a schematic diagram of steps of a method for detecting defects on a surface of a sphere based on 3D imaging according to a first preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of a sphere surface defect detection system based on 3D imaging according to the first preferred embodiment of the present invention.
FIG. 3 is a schematic diagram of a sphere to be measured placed on the 3D imaging-based sphere surface defect detection system.
Fig. 4 is a schematic view of a detection scenario of the 3D imaging-based sphere surface defect detection system according to the first preferred embodiment of the present invention.
Fig. 5 is a schematic view of the 3D imaging-based sphere surface defect detection system according to the first preferred embodiment of the present invention, scanning the sphere surface.
Fig. 6A and 6B are schematic diagrams of point clouds before stitching obtained by the method for detecting a surface defect of a sphere based on 3D imaging according to the first preferred embodiment of the present invention.
Fig. 7 is a schematic diagram of the stitched point cloud obtained by the method for detecting a surface defect of a sphere based on 3D imaging according to the first preferred embodiment of the present invention.
Fig. 8 is a schematic diagram of a stitching principle of the method for detecting a surface defect of a sphere based on 3D imaging according to the first preferred embodiment of the present invention.
Detailed Description
It is pointed out that the embodiments shown in the drawings are only intended to illustrate and explain the inventive concept; they are not necessarily drawn to scale in size or structure, nor are they to be construed as limiting the inventive concept.
Terms of orientation such as up, down, left, right, front, rear, front, back, top, bottom, etc. mentioned or possible to be mentioned in the present specification are defined with respect to the configurations shown in the respective drawings, which are relative concepts, and thus may be changed according to different positions and different use states thereof. These and other directional terms should not be construed as limiting terms.
It will be appreciated by those skilled in the art that in the present disclosure, the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," etc. refer to an orientation or positional relationship based on that shown in the drawings, which is merely for convenience of description and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore the above terms should not be construed as limiting the present invention.
It will be understood that the terms "a" and "an" should be interpreted as referring to "at least one" or "one or more," i.e., in one embodiment, the number of elements may be one, while in another embodiment, the number of elements may be plural, and the term "a" should not be interpreted as limiting the number.
Referring to Fig. 1 to Fig. 8 of the drawings of the present specification, a method and a system for detecting a surface defect of a sphere based on 3D imaging according to a first preferred embodiment of the present application are illustrated in the following description. For ease of describing the 3D imaging-based sphere surface defect detection system, the 3D imaging-based sphere surface defect detection method is hereinafter referred to simply as the detection method.
The inspection system comprises a first inspection station 10, a second inspection station 20, a first scanning mechanism 30, a second scanning mechanism 40, a turnover conveying mechanism 50 and a processor 60, wherein a sphere to be inspected can be placed at the first inspection station 10 and the second inspection station 20, and the turnover conveying mechanism 50 is used for turning over the sphere to be inspected and conveying it from the first inspection station 10 to the second inspection station 20. The first scanning mechanism 30 faces the first inspection station 10 and is used for scanning the sphere to be inspected while it is located at the first inspection station 10, so as to acquire the upper hemisphere scanning information corresponding to the sphere; the second scanning mechanism 40 faces the second inspection station 20, and after the sphere to be inspected is turned over by the turnover conveying mechanism 50 and conveyed to the second inspection station 20, the second scanning mechanism 40 scans the turned-over sphere and acquires the corresponding lower hemisphere scanning information.
Preferably, in this preferred embodiment of the present application, the first scanning mechanism 30 and the second scanning mechanism 40 are implemented as 3D cameras, wherein hemispherical scanning information scanned by the first scanning mechanism 30 and the second scanning mechanism 40 is a 3D depth image or a 3D point cloud image.
Briefly, in this preferred embodiment of the present application, the first scanning mechanism 30 scans the upper half of the sphere under test to acquire a corresponding upper hemisphere 3D depth image while the sphere under test is at the first inspection station 10; after the first scanning mechanism 30 finishes scanning, the overturning and conveying mechanism 50 overturns the sphere to be detected up and down by 180 degrees and conveys the sphere to the second detection station 20, and the second scanning mechanism 40 scans the lower half part of the sphere to be detected to obtain a 3D depth image corresponding to the lower half sphere.
The first scanning mechanism 30 and the second scanning mechanism 40 are communicatively connected with the processor 60, the upper hemisphere 3D depth image acquired by the first scanning mechanism 30 and the lower hemisphere 3D depth image acquired by the second scanning mechanism 40 are transmitted to the processor 60, and the processor 60 performs a stitching process on the acquired upper hemisphere 3D depth image information and lower hemisphere 3D depth image, that is, the acquired upper hemisphere 3D depth image and lower hemisphere 3D depth image are stitched into a sphere 3D point cloud model corresponding to a sphere to be measured; the processor 60 determines whether there is a concave-convex defect in combination with the spherical point cloud model.
Illustratively, in this preferred embodiment of the present application, the sphere to be measured is a graphite sphere, the surface of which is a graphite material.
The detection system further comprises a feeding mechanism 70 and a discharging mechanism 80, wherein the feeding mechanism 70 is used for conveying the ball to be detected to the first detection station 10, and after detection, the ball is conveyed outwards through the discharging mechanism 80.
It should be noted that, in the preferred embodiment of the present application, the portions of the feeding mechanism 70, the discharging mechanism 80, the first detection station 10, the second detection station 20 and the turnover conveying mechanism 50 that come into contact with the ball to be detected are all provided with soft materials, so as to prevent the surface of the ball from being damaged by squeezing or collision.
Because simultaneous acquisition by two cameras at a single station easily causes mutual laser interference and degrades imaging quality, the detection system of this preferred embodiment adopts a dual-station imaging mode, with each station responsible for acquiring the three-dimensional information of one half of the sphere surface. The first scanning mechanism 30 and the second scanning mechanism 40 each have a central scanning axis, and they are installed at a 45-degree inclination to one side of the first inspection station 10 and the second inspection station 20 respectively, so that each scanning field of view can cover an entire hemisphere.
The scanning field of view of the first scanning mechanism 30 may cover an upper hemispherical portion of the sphere to be measured, and the scanning field of view of the second scanning mechanism 40 may cover a lower hemispherical portion of the sphere to be measured. The central scanning axes of the first scanning mechanism 30 and the second scanning mechanism 40 are opposite to the sphere center of the sphere to be measured, and the included angle between the central scanning axes of the first scanning mechanism 30 and the second scanning mechanism 40 and the horizontal direction is 45 degrees.
The first detection station 10 and the second detection station 20 can rotate along a central axis, the first detection station 10 and the second detection station 20 bear the ball to be detected to do autorotation along the rotation axis passing through the center of the sphere, and if the rotation axis deviates from the center of the sphere, the images after subsequent splicing are easy to deform.
As shown in Fig. 4 and Fig. 5, at the first inspection station 10 or the second inspection station 20, the first scanning mechanism 30 and the second scanning mechanism 40 acquire one 3D image of the ball to be inspected per scan (shot), wherein the 3D image corresponds to 3D image information of the surface of the ball to be inspected. The sphere to be tested rotates along with the first detection station 10 or the second detection station 20, and during this rotation the first scanning mechanism 30 and the second scanning mechanism 40 continuously scan (shoot) the sphere and acquire a plurality of 3D images corresponding to it.
It should be noted that the scanning line width of the first scanning mechanism 30 and/or the second scanning mechanism 40 during a single scanning or shooting is greater than or equal to the radius of the sphere to be measured, that is, the scanning field of the first scanning mechanism 30 or the second scanning mechanism 40 can cover half a sphere of the sphere to be measured during scanning.
In short, the ball to be tested is grabbed from the feeding mechanism by grippers and placed on the first detection station 10; after the three-dimensional information of the upper half is acquired, the ball is turned over by 180 degrees and transported by the turnover conveying mechanism 50 to the second detection station 20, where the three-dimensional information of the lower half is acquired; at this point, the acquisition of 3D image information of both the upper and lower surfaces of the ball is complete.
It is worth mentioning that the detection system of the preferred embodiment of the present application can complete 360 ° detection of the sphere within 5s and maintain the original surface state during transportation of the sphere.
The processor 60 converts the acquired 3D image information of the upper and lower surfaces of the sphere into a plurality of 3D point cloud images, and splices the 3D point cloud images to obtain a 3D point cloud model corresponding to the sphere to be tested, and then judges whether the surface of the sphere has defects based on the obtained 3D point cloud model.
The processor 60 includes an information conversion unit 61, a point cloud stitching unit 62, and a defect detection unit 63, where the information conversion unit 61 converts hemispherical images scanned by the first scanning mechanism 30 and the second scanning mechanism 40 into corresponding upper hemispherical 3D point cloud images and lower hemispherical 3D point cloud images; the point cloud stitching unit 62 collates and stitches the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image to obtain a point cloud model corresponding to the sphere to be detected; the defect detection unit 63 combines the point cloud images before and after the splicing to determine whether the concave-convex defect exists.
In detail, the upper hemisphere 3D image information scanned by the first scanning mechanism 30 and the lower hemisphere 3D image information scanned by the second scanning mechanism 40 consist of a plurality of pictures with depth information. The information conversion unit 61 converts the pictures with depth information scanned by the first scanning mechanism 30 and the second scanning mechanism 40 into 3D point cloud images; these 3D point cloud images are fragmented and each corresponds to only part of the surface of the sphere to be measured. Therefore, the point cloud stitching unit 62 collates the coordinates of the start and end points of the first point cloud, translates the coordinates to the theoretical position in the whole image, performs translation matching and angle difference calculation on the obtained point cloud images, and rotates the second point cloud image by the corresponding angle according to the calculation result to finish the stitching of the two point cloud images. The remaining point cloud images are transformed according to the same rotation-and-stitch principle to obtain a fitted graphite sphere point cloud model.
The defect detection unit 63 combines the point cloud images before and after the splicing to determine whether the concave-convex defect exists. Firstly, the defect detection unit 63 captures suspected points different from surrounding depths in the point cloud images before stitching, and positions the suspected points into the stitched point cloud images after coordinate conversion; then, the defect detecting unit 63 selects a pixel region in a specific range (for example, 100×100) around the suspected point, constructs a spatial plane for the region, projects the suspected point onto the plane, and the projected point and the original suspected point form a detection vector; finally, the defect detecting unit 63 calculates the included angle between the normal vector and the detection vector of the suspected point, and determines that the surface is concave when the included angle is smaller than 90 °, and otherwise, the surface is convex.
As shown in fig. 1, a method for detecting a surface defect of a sphere based on 3D imaging according to another aspect of the present application is set forth in the following description. The sphere surface defect detection method based on 3D imaging comprises the following steps:
(a) Scanning the upper hemispherical surface and the lower hemispherical surface of a sphere to be detected to obtain an upper hemispherical 3D depth image and a lower hemispherical 3D depth image of the sphere to be detected;
(b) Based on the obtained upper hemispherical 3D depth image and the lower hemispherical 3D depth image, splicing to obtain a 3D point cloud model corresponding to the sphere to be detected; and
(c) And judging whether defects exist on the surface of the sphere to be detected or not by combining the 3D point cloud image before splicing and the point cloud model after splicing.
In step (a) of the detection method of this preferred embodiment of the present application, an upper hemispherical surface of a sphere to be detected is photographed by a 3D camera, wherein a scanning field of view of the 3D camera covers an entire hemisphere of the sphere to be detected, and in a process of acquiring three-dimensional information, the sphere to be detected needs to perform autorotation motion along a rotation axis passing through a center of the sphere to acquire the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image.
In step (a) of the detection method in the preferred embodiment of the present application, after three-dimensional information of the sphere to be detected is collected at the first detection station, the sphere to be detected is turned upside down and is conveyed to the second detection station, so as to complete the acquisition of the three-dimensional information of the lower half.
Step (b) of the detection method of this preferred embodiment of the present application further comprises the steps of:
(b.1) converting the acquired upper hemisphere 3D depth image and the lower hemisphere 3D depth image into corresponding multi-slice 3D point cloud images; and
(b.2) moving the other 3D point cloud images based on the position of the first 3D point cloud image so as to realize the stitching of the 3D point cloud images.
Specifically, in step (b.2) of the detection method of this preferred embodiment, the start and end point coordinates of the first 3D point cloud image are collated and translated to the theoretical position in the whole image. It should be noted that this theoretical position is expressed in the world coordinate system, whereas the 3D camera captures images in the camera coordinate system; because the camera is mounted obliquely, the two coordinate systems differ by a rotation-translation transformation. Therefore, the angle between the camera's imaging normal vector and the plane perpendicular to the rotation axis must be calibrated to determine the rotation matrix between the two coordinate systems.
Specifically, in step (b.2) of the detection method of this preferred embodiment, translation matching and angle difference calculation are performed between the first 3D point cloud image and the remaining point cloud images, and the remaining 3D point cloud images are rotated by the corresponding angles according to the calculation result to complete the stitching of the 3D point cloud. In this application, moving a point cloud is a transformation from the camera coordinate system to the world coordinate system; for example, translating the point P0 = (x0, y0, z0) by the translation vector T = (x', y', z') yields the point P1 = (x1, y1, z1) = (x0 + x', y0 + y', z0 + z').
Because the camera scans and forms the point cloud/depth map along a default linear direction while the ball to be detected is imaged rotating during the scan, each row of the 3D point cloud must be processed independently and in sequence as follows: the n-th row of points is translated back to the position of the first row; a circle is fitted to each row of points to obtain its center; the row is rotated clockwise about the circle center by the angle beta; and it is rotated by (n-1) times the angle alpha about the z-axis passing through the circle center.
It should be noted that the rotation operation on the remaining 3D point cloud images may be understood as Pts1 = R × Pts0, where Pts0 is the original point cloud and Pts1 is the rotated point cloud; Pts0 and Pts1 are matrices each column of which is the coordinate (x, y, z)^T of one point (^T denotes the matrix transpose), and R is the rotation matrix determined by the rotation axis and the rotation angle. An illustrative construction of this rotation is sketched below.
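The following is a minimal sketch of this rotation step, not the patent's own implementation: it assumes the rotation matrix is built with the standard axis-angle (Rodrigues) form, and the function names, the use of NumPy and the example axis are illustrative assumptions.

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    # Standard axis-angle (Rodrigues) rotation matrix about a unit axis.
    # Assumed form; the patent's own formula is not reproduced here.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

def rotate_point_cloud(pts0, axis, angle_rad, center=(0.0, 0.0, 0.0)):
    # Pts1 = R x Pts0 about an axis through `center`; pts0 is 3xN, one point per column.
    c = np.asarray(center, dtype=float).reshape(3, 1)
    R = rotation_matrix(axis, angle_rad)
    return R @ (pts0 - c) + c

# Example: rotate the n-th scan row back by (n - 1) * alpha about the z-axis
# through the fitted circle center (n, alpha and circle_center are placeholders).
# row_n_aligned = rotate_point_cloud(row_n, axis=(0, 0, 1),
#                                    angle_rad=(n - 1) * alpha, center=circle_center)
```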
step (c) of the detection method of this preferred embodiment of the present application further comprises the steps of:
the method comprises the steps of (c.1) grabbing suspected points different from surrounding depths in a 3D point cloud image before splicing, and positioning the suspected points into the spliced 3D point cloud image after coordinate conversion;
selecting a pixel area in a specific range around the suspected point, constructing a space plane for the area, and projecting the suspected point to the space plane to obtain a projection point, wherein the projection point and the original suspected point form a detection vector; and
and (c.3) calculating the included angle between the normal vector and the detection vector of the suspected point, and judging the suspected point to be a concave surface when the included angle is smaller than 90 degrees, and otherwise judging the suspected point to be a convex surface.
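A minimal sketch of steps (c.2) and (c.3), assuming a least-squares plane fit via SVD, a detection vector pointing from the suspected point to its projection, and illustrative function and parameter names; none of these choices are specified by the patent.

```python
import numpy as np

def classify_suspect(point, neighborhood, normal):
    # point: (3,) suspected point; neighborhood: (N, 3) nearby points; normal: (3,) outward surface normal.
    # Fit a spatial plane to the neighborhood: centroid plus smallest singular vector as plane normal.
    centroid = neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(neighborhood - centroid)
    plane_normal = vt[-1]

    # Project the suspected point onto the fitted plane.
    offset = np.dot(point - centroid, plane_normal)
    projection = point - offset * plane_normal

    # Detection vector from the suspected point to its projection point.
    detect_vec = projection - point

    # Angle between the surface normal and the detection vector decides concave vs convex.
    cos_angle = np.dot(normal, detect_vec) / (
        np.linalg.norm(normal) * np.linalg.norm(detect_vec) + 1e-12)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return "concave" if angle_deg < 90.0 else "convex"
```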
It should be noted that, in step (c.1) of the detection method of this preferred embodiment, the 3D camera provides a depth map (a two-dimensional image in which each pixel value corresponds to a z value and the point spacings in the x and y directions are fixed). If the map has m rows and n columns (rows 1, 2, …, m; columns 1, 2, …, n), the sequence number of the point-cloud point corresponding to row i and column j is (i-1)×n+j; conversely, for a point with sequence number k in the point cloud, the quotient of k divided by n gives the row number and the remainder gives the column number. A sketch of this index conversion is given below.
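A minimal sketch of this depth-map/point-cloud correspondence, using 1-based indices as in the text; the explicit handling of the zero-remainder case and the helper for converting a depth map into points (with assumed fixed spacings dx, dy) are illustrative additions, not taken from the patent.

```python
import numpy as np

def rowcol_to_index(i, j, n_cols):
    # 1-based (row i, column j) of an m x n depth map -> 1-based point-cloud sequence number.
    return (i - 1) * n_cols + j

def index_to_rowcol(k, n_cols):
    # Inverse mapping for a 1-based sequence number k (zero remainder handled explicitly).
    i = (k - 1) // n_cols + 1
    j = (k - 1) % n_cols + 1
    return i, j

def depth_to_points(depth, dx, dy):
    # Depth map (m x n array of z values) with fixed x/y spacings -> (m*n, 3) point cloud,
    # ordered row by row, consistent with the sequence numbering above.
    m, n = depth.shape
    jj, ii = np.meshgrid(np.arange(n), np.arange(m))
    return np.column_stack([(jj * dx).ravel(), (ii * dy).ravel(), depth.ravel()])

# Example: on a 4 x 5 depth map, row 2, column 3 corresponds to point number (2-1)*5 + 3 = 8.
assert rowcol_to_index(2, 3, 5) == 8
assert index_to_rowcol(8, 5) == (2, 3)
```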
By way of example, in this preferred embodiment of the present application, a 100x100 pixel area is selected around the suspected point.
In step (c.1) of the detection method according to the preferred embodiment of the present application, a suspected point is roughly acquired, wherein the step further includes:
(c.1.1) fitting a theoretical sphere based on the 3D point cloud data, and obtaining a sphere center and a radius R;
(c.1.2) calculating the distance d from each point in the 3D point cloud to the fitted sphere center; and
(c.1.3) comparing each distance d with the radius R: all points satisfying |d-R| > thresh are suspected points; a sketch of this coarse detection is given below.
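A minimal sketch of steps (c.1.1) to (c.1.3), assuming a linear least-squares sphere fit; the fitting method, function names and NumPy usage are illustrative assumptions rather than the patent's specific implementation.

```python
import numpy as np

def fit_sphere(pts):
    # Least-squares sphere fit to an (N, 3) point cloud; returns (center, radius).
    # Uses the linear form 2*p.c + (R^2 - c.c) = p.p, one common formulation (assumed here).
    A = np.hstack([2.0 * pts, np.ones((pts.shape[0], 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + np.dot(center, center))
    return center, radius

def coarse_suspected_points(pts, thresh):
    # Flag points whose distance d to the fitted center deviates from R by more than thresh.
    center, R = fit_sphere(pts)
    d = np.linalg.norm(pts - center, axis=1)
    return np.abs(d - R) > thresh  # boolean mask of suspected points
```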
In step (c) of the detection method of this preferred embodiment, the suspected points are filtered and judged: the suspected points are connected into regions, the area of each region is calculated, and regions with a small area are filtered out; for regions with a large area, if |d-R| varies gently across the region, the region reflects poor sphericity rather than a scratch and is also filtered out.
It should be noted that, in step (c) of the detection method of this preferred embodiment, this filtering computes statistics of the normal-vector gradient: if the gradient is small, the region is considered gently varying; otherwise it is identified as a collision or scratch defect. A sketch of this filtering is given below.
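A minimal sketch of this region filtering, assuming the suspect mask is arranged like the depth image and that the gradient statistic is the mean per region; the use of scipy.ndimage for connected regions and all threshold names are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def filter_suspect_mask(suspect_mask, normal_grad, min_area, grad_thresh):
    # suspect_mask: boolean m x n map of suspected pixels; normal_grad: per-pixel
    # gradient magnitude of the surface normals; returns the mask of retained defects.
    labels, num = ndimage.label(suspect_mask)
    keep = np.zeros_like(suspect_mask, dtype=bool)
    for lab in range(1, num + 1):
        region = labels == lab
        if region.sum() < min_area:
            continue  # small region: filtered out
        if normal_grad[region].mean() < grad_thresh:
            continue  # gentle variation: poor sphericity, not a scratch, filtered out
        keep |= region  # steep variation: kept as a collision/scratch defect
    return keep
```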
The technical scope of the present invention is not limited to the above description, and those skilled in the art may make various changes and modifications to the above-described embodiments without departing from the technical spirit of the present invention, and these changes and modifications are all within the scope of the present invention.

Claims (6)

1. A method for detecting a defect on the surface of a sphere based on 3D imaging, wherein the detection method comprises the following steps:
(a) Scanning the upper hemispherical surface and the lower hemispherical surface of a sphere to be detected to obtain an upper hemispherical 3D depth image and a lower hemispherical 3D depth image of the sphere to be detected;
(b) Based on the obtained upper hemispherical 3D depth image and the lower hemispherical 3D depth image, splicing to obtain a 3D point cloud model corresponding to the sphere to be detected; and
(c) Judging whether defects exist on the surface of the sphere to be detected or not by combining the 3D point cloud image before splicing and the point cloud model after splicing; in the step (a), the upper hemispherical surface of the sphere to be measured is photographed by a 3D camera, wherein the scanning field of view of the 3D camera covers the whole hemisphere of the sphere to be measured, and in the process of collecting three-dimensional information, the sphere to be measured needs to perform autorotation along the rotation axis passing through the center of the sphere so as to obtain the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image; after three-dimensional information of the ball to be detected is acquired on the first detection station, the ball to be detected is turned upside down and is conveyed to the second detection station, so that the acquisition of the three-dimensional information of the lower half part is completed; wherein said step (b) further comprises the steps of:
(b.1) converting the acquired upper hemisphere 3D depth image and the lower hemisphere 3D depth image into corresponding multi-slice 3D point cloud images; and
(b.2) moving other 3D point cloud images based on the first sheet of 3D point cloud image positions to achieve stitching of the 3D point cloud images; wherein said step (c) further comprises the steps of:
the method comprises the steps of (c.1) grabbing suspected points different from surrounding depths in a 3D point cloud image before splicing, and positioning the suspected points into the spliced 3D point cloud image after coordinate conversion;
selecting a pixel area around the suspected point, constructing a space plane for the area, and projecting the suspected point to the space plane to obtain a projection point, wherein the projection point and the original suspected point form a detection vector; and
and (c.3) calculating the included angle between the normal vector and the detection vector of the suspected point, and judging the suspected point to be a concave surface when the included angle is smaller than 90 degrees, and otherwise judging the suspected point to be a convex surface.
2. The detection method according to claim 1, wherein in step (b.2) of the detection method, start-stop point coordinates of the first sheet of 3D point cloud image are collated and translated to theoretical positions of the whole image.
3. The method according to claim 2, wherein in the step (b.2) of the method, the first 3D point cloud image and the other point cloud images are subjected to translational matching and angle difference calculation, and the remaining other 3D point cloud images are rotated by a certain angle according to the calculation result to complete the stitching of the 3D point cloud.
4. A method according to claim 3, wherein a 100x100 pixel area is selected around the suspected point.
5. A detection method according to claim 3, wherein in step (c.1) of the detection method, a suspected point is roughly acquired, wherein the step further comprises:
(c.1.1) fitting a theoretical sphere based on the 3D point cloud data, and obtaining a sphere center and a radius R;
(c.1.2) calculating the distance d from each point on the 3D point cloud to the fitted sphere center; and
(c.1.3) comparing each distance d with the radius R: all points satisfying |d-R| > thresh are suspected points.
6. A detection system for the detection method according to any one of claims 1 to 5, wherein the detection system comprises:
the device comprises a first detection station, a second detection station, a first scanning mechanism, a second scanning mechanism, a turnover conveying mechanism and a processor, wherein a sphere to be detected can be placed at the first detection station and the second detection station, the turnover conveying mechanism is used for turnover and conveying the sphere to be detected from the first detection station to the second detection station, the first scanning mechanism is used for acquiring an upper hemisphere depth image of the sphere to be detected, the second scanning mechanism is used for acquiring a lower hemisphere depth image of the sphere to be detected, and the processor is used for splicing the depth images acquired by the first scanning mechanism and the second scanning mechanism into a sphere 3D point cloud model corresponding to the sphere to be detected; judging whether the concave-convex defects exist or not by combining the spherical point cloud model; wherein the first and second scanning mechanisms have a central scanning axis, wherein the first and second scanning mechanisms are mounted 45 ° inclined to one side of the first and second inspection stations; the first detection station and the second detection station can rotate along a central axis, and the first detection station and the second detection station bear the ball to be detected to do autorotation along a rotation axis passing through the center of the ball; the processor comprises an information conversion unit, a point cloud splicing unit and a defect detection unit, wherein the information conversion unit converts hemispherical images obtained by scanning by the first scanning mechanism and the second scanning mechanism into corresponding upper hemispherical 3D point cloud images and lower hemispherical 3D point cloud images; the point cloud splicing unit is used for sorting and splicing the upper hemispherical 3D point cloud image and the lower hemispherical 3D point cloud image to obtain a point cloud model corresponding to the sphere to be detected; and the defect detection unit is used for judging whether the concave-convex defects exist or not by combining the point cloud images before and after splicing.
CN202311369143.5A 2023-10-23 2023-10-23 Sphere surface defect detection method and detection system based on 3D imaging Active CN117110319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311369143.5A CN117110319B (en) 2023-10-23 2023-10-23 Sphere surface defect detection method and detection system based on 3D imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311369143.5A CN117110319B (en) 2023-10-23 2023-10-23 Sphere surface defect detection method and detection system based on 3D imaging

Publications (2)

Publication Number Publication Date
CN117110319A CN117110319A (en) 2023-11-24
CN117110319B (en) 2024-01-26

Family

ID=88805907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311369143.5A Active CN117110319B (en) 2023-10-23 2023-10-23 Sphere surface defect detection method and detection system based on 3D imaging

Country Status (1)

Country Link
CN (1) CN117110319B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651752A (en) * 2016-09-27 2017-05-10 深圳市速腾聚创科技有限公司 Three-dimensional point cloud data registration method and stitching method
CN112581457A (en) * 2020-12-23 2021-03-30 武汉理工大学 Pipeline inner surface detection method and device based on three-dimensional point cloud
CN114279361A (en) * 2021-12-27 2022-04-05 哈尔滨工业大学芜湖机器人产业技术研究院 Three-dimensional measurement system and method for defect size of inner wall of cylindrical part
CN115326835A (en) * 2022-10-13 2022-11-11 汇鼎智联装备科技(江苏)有限公司 Cylinder inner surface detection method, visualization method and detection system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10444160B2 (en) * 2014-09-18 2019-10-15 Zhejiang University Surface defects evaluation system and method for spherical optical components

Also Published As

Publication number Publication date
CN117110319A (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
CN108844459B (en) Calibration method and device of blade digital sample plate detection system
US20240177382A1 (en) Systems and Methods for Stitching Sequential Images of an Object
CN112161619B (en) Pose detection method, three-dimensional scanning path planning method and detection system
CN110018178A (en) A kind of mobile phone bend glass typical defect on-line measuring device and method
US11948344B2 (en) Method, system, medium, equipment and terminal for inland vessel identification and depth estimation for smart maritime
WO2020144784A1 (en) Image processing device, work robot, substrate inspection device, and specimen inspection device
CN113884510B (en) Method for acquiring appearance image of 3D glass cover plate
CN113702384A (en) Surface defect detection device, detection method and calibration method for rotary component
Ozeki et al. Real-time range measurement device for three-dimensional object recognition
CN117110319B (en) Sphere surface defect detection method and detection system based on 3D imaging
CN112257536B (en) Space and object three-dimensional information acquisition and matching equipment and method
CN108709892A (en) Detecting system and its method
CN115326835B (en) Cylinder inner surface detection method, visualization method and detection system
CN104296657B (en) The detection of a kind of cliff blast hole based on binocular vision and positioner and localization method
CN100582653C (en) System and method for determining position posture adopting multi- bundle light
Petković et al. A note on geometric calibration of multiple cameras and projectors
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
CN107024184A (en) Systems for optical inspection is with applying its optical detecting method
CN103900488A (en) 3D scanning technique
Chen et al. Automated robot-based large-scale 3D surface imaging
CN214767039U (en) Plate detection device
CN117825398A (en) Mirror surface and mirror-like object surface defect detection method based on gradient information
Anchini et al. Subpixel location of discrete target images in close-range camera calibration: A novel approach

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant