CN117522784B - Gear part image detection method and system based on tooth distance segmentation compensation - Google Patents

Gear part image detection method and system based on tooth distance segmentation compensation

Info

Publication number: CN117522784B (granted publication of CN117522784A)
Application number: CN202311373451.5A
Authority: CN (China)
Legal status: Active
Inventors: 李文奇, 王建峰, 徐海霞, 王亚松
Assignee (original and current): Beijing Singukeller Automotive Cold Forming Parts Inc
Original language: Chinese (zh)
Abstract

The invention relates to a gear piece image detection method and system based on tooth distance segmentation compensation. A preset segmentation neural network segments and recognizes the region to be detected, dividing the projection image into a far region, a middle region and a near region; the thickness values obtained by image matching are then corrected with the calibration coefficient of each region to yield the measured values used to inspect the gear piece. By introducing neural networks to distinguish far teeth from near teeth and to obtain the corresponding calibration coefficients, tooth surfaces at different distances can be inspected from a single image acquisition of the gear piece, which effectively improves the detection precision of gear piece images. The method is particularly suited to the inaccurate measurement that arises when the size of the inspected gear piece exceeds the effective range of the lens and the depth of field is insufficient.

Description

Gear part image detection method and system based on tooth distance segmentation compensation
Technical Field
The invention relates to the technical fields of mechanical part measurement, visual image measurement and mapping image correction, in particular to a gear part image detection method and system based on tooth distance segmentation compensation.
Background
Machine vision measurement for inspecting the dimensions of mechanical parts offers low cost, high precision, high efficiency and convenient operation. Compared with traditional contact methods such as micrometers, vernier calipers and feeler gauges, it markedly improves detection speed and is better suited to the high-frequency inspection demands of large-scale automated production.
Visual dimension measurement of mechanical parts generally comprises image acquisition, image processing, feature extraction, dimension calculation and result output. Current visual dimension measurement is mainly based on the projection method and the edge detection principle: the object is back-lit in front of the camera to form a sharp black-and-white silhouette, the vision system detects the edges between the bright and dark parts of the image, and a series of processing steps then yields a high-precision measurement of the part's dimensions. Visual dimension measurement therefore places high demands on the size of the measured part, the parallelism of the backlight, the depth of field over the detected area, and so on.
Because the measurement of mechanical parts often involves multiple measurement points on different object planes in three dimensions, a telecentric lens is generally used to acquire the detection images, so that the magnification stays constant, parallax is overcome and image distortion is eliminated.
In the specific scenario of measuring the thickness of a gear piece, the detection process must determine the thickness at the highest point of every tooth surface, which is a typical multi-point detection task across different object planes. In practice, when the diameter of the inspected gear piece is large (for example 50 mm), the depth range spanned by the teeth exceeds the effective depth-of-field adjustment range of the telecentric lens: front tooth surfaces closer to the lens appear distinctly thicker in the acquired image, while rear tooth surfaces farther from the lens appear thinner. A single static photograph of the gear piece therefore cannot accurately measure the thickness of all tooth surfaces at once.
The existing solution abandons single-image detection: the acquired image is focused only on the front tooth surfaces near the lens, and a rotating station brings each tooth surface of the gear piece into position for detection in turn. This keeps the distance between each inspected tooth surface and the lens consistent and so avoids depth-of-field error, but it greatly reduces detection efficiency.
On the other hand, even a high-performance telecentric lens retains some distortion (typically within 0.1%). Although this is very accurate compared with conventional industrial lenses, it can still produce pixel-level errors in the acquired images, so auxiliary corrections such as calibration patterns are still needed for part inspection that demands very high accuracy.
In view of these problems, the prior art offers algorithmic improvements. For example, patent application CN101477685A, entitled "sub-pixel level image detection method with depth of field for part processing quality", discloses part-edge positioning and layered calibration to improve detection accuracy when multiple detection points have different depths of field. However, that approach only compensates parts with fixed detection angles and regular shapes. For parts whose detection points are not fixed (for example, the large differences between projection images caused by gear rotation when measuring tooth surface thickness) or whose edges have complicated structures (for example, the bends and oblique lines formed by chamfered gear edges), accurate edge positions cannot be determined and layered-calibration visual compensation cannot be applied.
Disclosure of Invention
To overcome these defects of the prior art, the invention provides a gear piece image detection method and system based on tooth distance segmentation compensation. By introducing neural networks to distinguish far teeth from near teeth and obtain the corresponding calibration coefficients, tooth surfaces at different distances can be inspected from a single image acquisition of the gear piece, effectively improving the detection precision of gear piece images. The method and system are particularly suited to the inaccurate measurement that arises when the size of the inspected gear piece exceeds the effective range of the lens and the depth of field is insufficient.
In order to achieve the above object, the present invention adopts the following technical scheme:
A gear piece image detection method based on tooth distance segmentation compensation, characterized by comprising the following steps:
S1, acquiring a projection image of the gear piece to be detected, and extracting a first reference region, a second reference region and the region to be detected from the projection image according to preset determination regions;
S2, performing a segmentation recognition operation on the region to be detected using a preset segmentation neural network to obtain a far region, a middle region and a near region, ordered from far to near relative to the camera lens;
S3, performing a tooth surface recognition operation on the far, middle and near regions using a preset recognition neural network, generating a far tooth set, a middle tooth set and a near tooth set, each comprising a number of tooth surfaces to be detected;
S4, establishing a datum line using the first and second reference regions;
S5, matching the tooth surfaces to be detected in the far, middle and near tooth sets against the datum line to obtain a far image thickness set, a middle image thickness set and a near image thickness set;
S6, obtaining the far calibration coefficient k_f, middle calibration coefficient k_m and near calibration coefficient k_n corresponding to the far, middle and near regions respectively;
S7, correcting the far calibration coefficient k_f and the near calibration coefficient k_n according to the abscissa of each tooth surface to be detected in the far and near tooth sets relative to the datum line, generating a set of far correction coefficients k_ft and a set of near correction coefficients k_nt, one per tooth surface to be detected;
S8, calculating the far tooth surface thickness measurement set from the far image thickness set and the matching far correction coefficients k_ft, the middle tooth surface thickness measurement set from the middle image thickness set and the middle calibration coefficient k_m, and the near tooth surface thickness measurement set from the near image thickness set and the matching near correction coefficients k_nt;
S9, matching the far, middle and near tooth surface thickness measurement sets against the standard value of the gear piece to be detected, and judging whether the gear piece is qualified.
Further, step S3 further comprises:
judging whether the total number of tooth surfaces to be detected contained in the far, middle and near tooth sets equals the designed number of tooth surfaces of the gear piece to be detected;
when they are equal, accepting the generated far, middle and near tooth sets;
when they are not equal, rejecting the generated far, middle and near tooth sets and repeating step S2.
Further, step S7 comprises the following sub-steps:
S71, calculating the difference between the far calibration coefficient k_f and the near calibration coefficient k_n to generate a coefficient difference k_d;
S72, taking the two tooth surfaces to be detected whose center points are farthest apart, averaging the abscissas of their center points relative to the datum line to generate a reference midpoint coordinate x_middle, and taking the distance between those abscissas to generate a reference distance dis_x;
S73, calculating the correction value a for each tooth surface to be detected in the far and near tooth sets from the abscissa x_tooth of its center point relative to the datum line, using Formula (1);
S74, calculating the far correction coefficient k_ft and near correction coefficient k_nt of each tooth surface to be detected from the correction value a, using Formula (2).
further, the projection image of the gear piece to be detected is focused into a middle shaft area of the size of the gear piece to be detected.
Further, extracting the first reference region, the second reference region and the region to be detected from the projection image according to the preset determination regions comprises:
selecting regions 100 pixels long and 30 pixels high at the preset reference surface positions on the two sides of the gear piece to be detected, to generate the first and second reference regions;
selecting a region 50 pixels high at the tooth surface position of the gear piece to be detected, long enough to cover all tooth surfaces to be detected, to generate the region to be detected.
Further, step S1 further comprises:
performing a filtering operation on each of the first reference region, the second reference region and the region to be detected.
Further, performing the tooth surface recognition operation comprises:
judging whether the edge length of a tooth surface position is at least 50 pixels, and discarding the tooth surface position when it is shorter;
when the edge length is at least 50 pixels, further judging whether the slope of the tooth surface line is less than 0.02, and discarding the tooth surface position when it is not;
when the slope of the tooth surface line is less than 0.02, retaining the tooth surface position as a tooth surface to be detected.
Further, the step S4 includes:
S41, calculating gray histograms of the plane edges of the first and second reference regions, and sampling the edge contours at a preset sampling precision to obtain a first sampling point set and a second sampling point set;
S42, taking the first-derivative extremum positions of the first and second sampling point sets as intersection points to obtain a first intersection point set and a second intersection point set;
S43, fitting a first plane edge line and a second plane edge line from the first and second intersection point sets respectively;
S44, calculating the datum line from the first and second plane edge lines using Formula (3);
where (x_1, y_1) and (x_2, y_2) are the first and second reference points of the datum line, (x_L1, y_L1) and (x_L2, y_L2) are the endpoints of the first plane edge line, and (x_R1, y_R1) and (x_R2, y_R2) are the endpoints of the second plane edge line.
The invention also relates to a gear piece image detection system based on tooth distance segmentation compensation, characterized by comprising:
an image acquisition module for extracting a first reference region, a second reference region and a region to be detected from the projection image according to preset determination regions;
an image segmentation module for performing a segmentation recognition operation on the region to be detected using a preset segmentation neural network, obtaining a far region, a middle region and a near region ordered from far to near relative to the camera lens;
a tooth surface recognition module for performing a tooth surface recognition operation on the far, middle and near regions using a preset recognition neural network, generating a far tooth set, a middle tooth set and a near tooth set, each comprising a number of tooth surfaces to be detected;
a datum line module for establishing a datum line using the first and second reference regions;
a thickness recognition module for matching the tooth surfaces to be detected in the far, middle and near tooth sets against the datum line to obtain a far image thickness set, a middle image thickness set and a near image thickness set;
a coefficient correction module for correcting the far calibration coefficient k_f and near calibration coefficient k_n according to the abscissa of each tooth surface to be detected in the far and near tooth sets relative to the datum line, generating a set of far correction coefficients k_ft and a set of near correction coefficients k_nt, one per tooth surface to be detected;
a thickness calculation module for calculating the far tooth surface thickness measurement set from the far image thickness set and the far correction coefficients k_ft, the middle tooth surface thickness measurement set from the middle image thickness set and the middle calibration coefficient k_m, and the near tooth surface thickness measurement set from the near image thickness set and the near correction coefficients k_nt;
an image detection module for matching the far, middle and near tooth surface thickness measurement sets against the standard value of the gear piece to be detected and judging whether the gear piece is qualified.
The beneficial effects of the invention are as follows:
With the gear piece image detection method and system based on tooth distance segmentation compensation, neural networks distinguish far teeth from near teeth and supply the corresponding calibration coefficients, so that tooth surfaces at different distances can be inspected from a single image acquisition of the gear piece. Compared with existing approaches, this effectively improves detection repeatability in single-image measurement: all tooth heights of the gear are photographed at once, while both the precision and the efficiency of tooth thickness detection are greatly improved. The method is particularly suited to the inaccurate measurement caused by insufficient depth of field when the size of the measured gear piece exceeds the effective range of the lens.
Drawings
Fig. 1 is a schematic flow chart of the gear piece image detection method based on tooth distance segmentation compensation according to the present invention.
Fig. 2 is a schematic diagram of the gear piece image detection system based on tooth distance segmentation compensation according to the present invention.
Fig. 3 is a schematic view of a projection image of a gear piece to be detected according to the present invention.
Detailed Description
For a clearer understanding of the present invention, reference will be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
The first aspect of the present invention relates to a gear piece image detection method based on tooth distance segmentation compensation, which is shown in fig. 1 and comprises:
S1, acquiring a projection image of the gear piece to be detected, and extracting a first reference region, a second reference region and the region to be detected from the projection image according to preset determination regions.
Preferably, the projection image of the gear piece to be detected is focused on the central axis region of the gear piece, so that the teeth at the left and right ends of the image (the middle region in terms of distance from the camera lens) lie near the focal position. In this focus state, because of lens distortion, teeth on the near side of the lens (the near region) appear larger in the image, with a calibration coefficient that grows the closer the tooth is to the image center; teeth on the far side of the lens (the far region) appear smaller in the image, with a calibration coefficient that shrinks the closer the tooth is to the image center. Meanwhile, to capture the tooth surface features of the far region, the lens is set at a small angle relative to the gear piece, i.e. a small perspective angle exists between them. With this arrangement, the resulting projection image is shown in fig. 3: the first and second reference regions correspond to the left and right end regions of the projection image (corresponding to the detection base and kept roughly horizontal), and the region to be detected is the top region of the projection image, covering the whole tooth surface area of the gear piece to be detected.
Preferably, extracting the first reference region, the second reference region and the region to be detected from the projection image according to the preset determination regions comprises:
selecting regions 100 pixels long and 30 pixels high at the preset reference surface positions on the two sides of the gear piece to be detected, to generate the first and second reference regions; and selecting a region 50 pixels high at the tooth surface position, long enough to cover all tooth surfaces to be detected, to generate the region to be detected.
Preferably, the method further comprises filtering each of the first reference region, the second reference region and the region to be detected. For example, with a filter size of 7×3, the filtering operation is applied twice to each of the first and second reference regions and once to the region to be detected.
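As a rough illustration of this preprocessing, the sketch below applies a 7×3 smoothing filter the stated number of times. The patent gives only the filter size, not its type, so the mean filter (and the edge-replicating border handling) are assumptions:

```python
import numpy as np

def mean_filter(img, kh=3, kw=7, passes=1):
    """Apply a kh x kw mean filter `passes` times.

    Borders are handled by edge replication so the output keeps the
    input shape. The patent states a 7*3 filter size; the mean filter
    itself is an assumption, not taken from the patent.
    """
    out = np.asarray(img, dtype=float)
    for _ in range(passes):
        padded = np.pad(out, ((kh // 2,), (kw // 2,)), mode="edge")
        acc = np.zeros_like(out)
        # Sum the kh*kw shifted views, then divide: a direct sliding-window mean.
        for dy in range(kh):
            for dx in range(kw):
                acc += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
        out = acc / (kh * kw)
    return out
```

Applied to the reference regions with `passes=2` and to the region to be detected with `passes=1`, as the example in the text describes.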
S2, performing a segmentation recognition operation on the region to be detected using a preset segmentation neural network to obtain a far region, a middle region and a near region, ordered from far to near relative to the camera lens.
Preferably, the segmentation neural network is obtained by labeling a data set of regions to be detected from gear pieces into three classes (far, middle, near) and training on it.
S3, performing a tooth surface recognition operation on the far, middle and near regions using a preset recognition neural network, generating a far tooth set, a middle tooth set and a near tooth set, each comprising a number of tooth surfaces to be detected.
Preferably, performing the tooth surface recognition operation comprises: judging whether the edge length of a tooth surface position is at least 50 pixels, and discarding it when it is shorter; when the edge length is at least 50 pixels, further judging whether the slope of the tooth surface line is less than 0.02, and discarding the tooth surface position when it is not; when the slope is less than 0.02, retaining the tooth surface position as a tooth surface to be detected.
Preferably, the line is detected by sampling every 3 pixels, computing the sub-pixel edge position at each sample, and fitting a straight line.
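The screening criteria above (edge at least 50 pixels long, fitted slope below 0.02) can be sketched as follows. The function assumes the edge has already been sampled every 3 pixels into sub-pixel (x, y) points; the function name, the least-squares fit, and reading "edge length" as horizontal extent are illustrative assumptions:

```python
import numpy as np

def accept_tooth_surface(edge_points, min_len_px=50, max_slope=0.02):
    """Screen a candidate tooth-surface edge.

    `edge_points` is an (N, 2) array of (x, y) sub-pixel edge samples
    (sampling every 3 pixels is assumed to happen upstream). Returns
    (accepted, (slope, intercept)) or (False, None).
    """
    pts = np.asarray(edge_points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Edge length criterion: here taken as the horizontal extent in pixels.
    if x.max() - x.min() < min_len_px:
        return False, None
    # Least-squares straight-line fit; polyfit returns [slope, intercept].
    slope, intercept = np.polyfit(x, y, 1)
    # Flatness criterion: slope magnitude must be below 0.02.
    if abs(slope) >= max_slope:
        return False, None
    return True, (slope, intercept)
```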
Preferably, after the tooth surface recognition operation, it is judged whether the total number of tooth surfaces to be detected in the far, middle and near tooth sets equals the designed number of tooth surfaces of the gear piece to be detected; when they are equal, the generated far, middle and near tooth sets are accepted; when they are not, the generated tooth sets are rejected and step S2 is repeated.
S4, establishing a datum line by using the first datum region and the second datum region.
Specifically, the method comprises the following sub-steps: S41, calculating gray histograms of the plane edges of the first and second reference regions, and sampling the edge contours at a preset sampling precision to obtain a first sampling point set and a second sampling point set; S42, taking the first-derivative extremum positions of the two sampling point sets as intersection points to obtain a first intersection point set and a second intersection point set; S43, fitting a first plane edge line and a second plane edge line from the two intersection point sets respectively; S44, calculating the datum line from the first and second plane edge lines using Formula (3);
where (x_1, y_1) and (x_2, y_2) are the first and second reference points of the datum line, (x_L1, y_L1) and (x_L2, y_L2) are the endpoints of the first plane edge line, and (x_R1, y_R1) and (x_R2, y_R2) are the endpoints of the second plane edge line.
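Formula (3) appears only as an image in the original patent, so the sketch below assumes one natural reading of the variable list above: the datum line's two reference points (x_1, y_1) and (x_2, y_2) are taken as the midpoints of the left and right fitted edge segments. This is an assumption, not the patent's exact formula:

```python
def datum_line(left_line, right_line):
    """Construct the datum line from the two fitted plane-edge lines.

    Each argument is a pair of endpoints ((x1, y1), (x2, y2)).
    Assumed reading of Formula (3): the datum line runs through the
    midpoint of the left edge segment and the midpoint of the right one.
    """
    (xL1, yL1), (xL2, yL2) = left_line
    (xR1, yR1), (xR2, yR2) = right_line
    x1, y1 = (xL1 + xL2) / 2, (yL1 + yL2) / 2  # first reference point
    x2, y2 = (xR1 + xR2) / 2, (yR1 + yR2) / 2  # second reference point
    return (x1, y1), (x2, y2)
```

Averaging the two fitted edges keeps the datum line insensitive to small tilt differences between the left and right reference surfaces.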
S5, matching the tooth surfaces to be detected in the far, middle and near tooth sets against the datum line to obtain a far image thickness set, a middle image thickness set and a near image thickness set.
S6, obtaining the far calibration coefficient k_f, middle calibration coefficient k_m and near calibration coefficient k_n corresponding to the far, middle and near regions respectively.
S7, correcting the far calibration coefficient k_f and the near calibration coefficient k_n according to the abscissa of each tooth surface to be detected in the far and near tooth sets relative to the datum line, generating a set of far correction coefficients k_ft and a set of near correction coefficients k_nt, one per tooth surface to be detected.
Corresponding to a projection image focused on the central axis region of the gear piece, the far calibration coefficient k_f is larger than the near calibration coefficient k_n, and the middle calibration coefficient k_m is approximately the arithmetic mean of k_f and k_n.
Specifically, the method comprises the following sub-steps: S71, calculating the difference between the far calibration coefficient k_f and the near calibration coefficient k_n to generate a coefficient difference k_d; S72, taking the two tooth surfaces to be detected whose center points are farthest apart, averaging the abscissas of their center points relative to the datum line to generate a reference midpoint coordinate x_middle, and taking the distance between those abscissas to generate a reference distance dis_x; S73, calculating the correction value a for each tooth surface to be detected in the far and near tooth sets from the abscissa x_tooth of its center point relative to the datum line, using Formula (1); S74, calculating the far correction coefficient k_ft and near correction coefficient k_nt of each tooth surface to be detected from the correction value a, using Formula (2).
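Formulas (1) and (2) are likewise given only as images in the original. The sketch below implements one plausible reading consistent with the quantities defined in S71 and S72 (coefficient difference k_d, reference midpoint x_middle, reference distance dis_x) and with the stated geometry that coefficients vary with the tooth-center abscissa; the linear form and the signs are assumptions, not the patent's exact formulas:

```python
def correction_coefficients(k_f, k_n, x_teeth_far, x_teeth_near):
    """Sketch of sub-steps S71-S74 under an assumed linear Formula (1)/(2).

    `x_teeth_far` / `x_teeth_near` are the tooth-center abscissas
    (relative to the datum line) of the far and near tooth sets.
    Returns the per-tooth correction coefficient lists (k_ft, k_nt).
    """
    k_d = k_f - k_n                          # S71: coefficient difference
    xs = list(x_teeth_far) + list(x_teeth_near)
    x_min, x_max = min(xs), max(xs)          # two tooth centers farthest apart
    x_middle = (x_min + x_max) / 2           # S72: reference midpoint
    dis_x = x_max - x_min                    # S72: reference distance

    def a(x_tooth):                          # S73: assumed linear correction value
        return k_d * (x_tooth - x_middle) / dis_x

    # S74 (assumed form): shift each zone coefficient by the correction value,
    # so a tooth exactly at x_middle keeps its zone coefficient unchanged.
    k_ft = [k_f - a(x) for x in x_teeth_far]
    k_nt = [k_n - a(x) for x in x_teeth_near]
    return k_ft, k_nt
```

Under this reading, the correction smoothly interpolates between the far and near zone coefficients across the image, matching the observation that the coefficient grows or shrinks toward the image center.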
Meanwhile, for the tooth surfaces to be detected in the middle tooth set, the middle calibration coefficient k_m is used directly and needs no additional correction.
S8, calculating the far tooth surface thickness measurement set from the far image thickness set and the matching far correction coefficients k_ft, the middle tooth surface thickness measurement set from the middle image thickness set and the middle calibration coefficient k_m, and the near tooth surface thickness measurement set from the near image thickness set and the matching near correction coefficients k_nt.
S9, matching the far, middle and near tooth surface thickness measurement sets against the standard value of the gear piece to be detected, and judging whether the gear piece is qualified.
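Steps S8 and S9 reduce to a per-tooth multiply and a comparison against the standard value. The sketch below assumes a symmetric tolerance band around the standard value, which the patent does not specify:

```python
def inspect_gear(image_thicknesses, coefficients, standard_mm, tol_mm):
    """Sketch of steps S8-S9.

    Each measured tooth thickness is the image-space thickness scaled by
    its zone or per-tooth coefficient (k_ft, k_m, or k_nt); the part is
    judged qualified if every measurement lies within tol_mm of the
    standard value. The tolerance form is an assumption.
    """
    measured = [t * k for t, k in zip(image_thicknesses, coefficients)]
    qualified = all(abs(m - standard_mm) <= tol_mm for m in measured)
    return measured, qualified
```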
With this method, all detection is completed with a single photograph. For the current gear product, with a camera trigger and photographing time of about 2 ms and an algorithm running time of about 30 ms, the entire detection process for one gear piece completes in about 32 ms. By contrast, imaging each tooth individually adds a gear rotation step that brings each tooth to a fixed position. For example, for a gear product with 18 teeth in total, the detection time is about (gear rotation 1000 ms + photographing 2 ms + detection 10 ms) × 18 = 18216 ms, so the method above is faster by orders of magnitude than the prior-art approach of focusing and imaging each tooth individually. Meanwhile, compared with plain single-shot imaging, each tooth is calibrated individually within the single detection: the position of each tooth relative to the lens is calibrated and corrected separately, which is more accurate. For the specific gear pieces to which this was applied, detection repeatability improved from 0.1 mm to 0.04 mm. Compared with prior-art calibration of parts across different depths of field, the method handles parts whose calibration positions are not fixed, and is particularly suitable for parts such as gear pieces that have many non-fixed detection positions.
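The timing comparison in the paragraph above can be checked directly:

```python
# Single-shot pipeline: camera trigger/photographing ~2 ms + algorithm ~30 ms.
single_shot_ms = 2 + 30

# Rotate-and-shoot per tooth for an 18-tooth gear:
# rotation ~1000 ms + photographing ~2 ms + detection ~10 ms per tooth.
rotating_ms = (1000 + 2 + 10) * 18

print(single_shot_ms, rotating_ms)  # 32 18216
```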
In another aspect, the present invention further relates to a gear piece image detection system based on tooth distance segmentation compensation, the structure of which is shown in fig. 2, comprising:
The image acquisition module is used for respectively extracting a first reference area, a second reference area and an area to be detected from the projection image according to a preset judging area;
the image segmentation module is used for carrying out segmentation recognition operation on the region to be detected by using a preset segmentation neural network to obtain a far region, a middle region and a near region which are far to near relative to the shooting lens;
The tooth surface recognition module is used for respectively executing tooth surface recognition operation on the far-end region, the middle-end region and the near-end region by using a preset recognition neural network to generate a far-end tooth set, a middle tooth set and a near-end tooth set which respectively comprise a plurality of tooth surfaces to be detected;
the datum line module is used for establishing a datum line by using the first datum region and the second datum region;
the thickness recognition module is used for respectively matching tooth surfaces to be detected in the far-end tooth set, the middle tooth set and the near-end tooth set according to the datum lines to obtain a far-end image thickness set, a middle image thickness set and a near-end image thickness set;
the coefficient correction module is used for correcting the far-end calibration coefficient k f and the near-end calibration coefficient k n according to the abscissa value of the tooth surface to be detected in the far-end tooth set and the near-end tooth set relative to the datum line respectively, and generating a far-end correction coefficient k ft set and a near-end correction coefficient k nt set which correspond to each tooth surface to be detected respectively;
the thickness calculation module is used for calculating a far tooth surface thickness measurement value set by matching the far image thickness set with the far correction coefficient set k ft, calculating a middle tooth surface thickness measurement value set by matching the middle image thickness set with the middle calibration coefficient k m, and calculating a near tooth surface thickness measurement value set by matching the near image thickness set with the near correction coefficient set k nt;
And the image detection module is used for judging whether the gear piece to be detected is qualified or not by using the far tooth surface thickness measurement value set, the middle tooth surface thickness measurement value set and the near tooth surface thickness measurement value set to match the standard value of the gear piece to be detected.
The system can execute the operation processing method described above and achieve the corresponding technical effects.
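As an illustration only, the eight modules can be wired as a linear pipeline in the order the system description lists them; the function names, dictionary keys and stub implementations below are assumptions for the sketch, not the patent's API.

```python
def inspect_gear(projection_image, m):
    """Chain the eight modules in the order the system description lists them."""
    ref1, ref2, roi = m["acquire"](projection_image)      # image acquisition module
    far, mid, near = m["segment"](roi)                    # image segmentation module
    tooth_sets = m["recognize"](far, mid, near)           # tooth surface recognition module
    datum = m["datum"](ref1, ref2)                        # datum line module
    thickness_sets = m["thickness"](tooth_sets, datum)    # thickness recognition module
    coeff_sets = m["correct"](tooth_sets, datum)          # coefficient correction module
    measured = m["measure"](thickness_sets, coeff_sets)   # thickness calculation module
    return m["detect"](measured)                          # image detection module: pass/fail

# Trivial stand-in modules just to show the data flow end to end.
stub = {
    "acquire":   lambda img: ("ref1", "ref2", "roi"),
    "segment":   lambda roi: ("far", "mid", "near"),
    "recognize": lambda f, c, n: [f, c, n],
    "datum":     lambda r1, r2: 0.0,
    "thickness": lambda teeth, d: [1.0, 1.0, 1.0],
    "correct":   lambda teeth, d: [6.0, 6.0, 6.0],
    "measure":   lambda t, k: [ti * ki for ti, ki in zip(t, k)],
    "detect":    lambda meas: all(abs(x - 6.0) <= 0.04 for x in meas),
}
print(inspect_gear(None, stub))  # True
```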
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (6)

1. The gear piece image detection method based on tooth distance segmentation compensation is characterized by comprising the following steps of:
S1, acquiring a projection image of a gear piece to be detected, and respectively extracting a first reference area, a second reference area and an area to be detected from the projection image according to preset judging areas; the extracting of the first reference area, the second reference area and the area to be detected from the projection image according to the preset judging areas comprises: selecting areas 100 pixels long and 30 pixels high at preset reference surface positions on both sides of the gear piece to be detected to generate the first reference area and the second reference area; and selecting an area 50 pixels high at the tooth surface position of the gear piece to be detected, whose length covers all tooth surface areas to be detected, to generate the area to be detected;
s2, performing segmentation recognition operation on the region to be detected by using a preset segmentation neural network to obtain a far region, a middle region and a near region which are far to near relative to the shooting lens;
S3, respectively executing tooth surface recognition operation on the far-end region, the middle-end region and the near-end region by using a preset recognition neural network, and generating a far-end tooth set, a middle-end tooth set and a near-end tooth set which respectively comprise a plurality of tooth surfaces to be detected;
S4, establishing a datum line by using the first datum region and the second datum region;
S5, respectively matching tooth surfaces to be detected in the far-end tooth set, the middle tooth set and the near-end tooth set according to the datum lines to obtain a far-end image thickness set, a middle image thickness set and a near-end image thickness set;
s6, obtaining a far calibration coefficient k f, a middle calibration coefficient k m and a near calibration coefficient k n which correspond to the far region, the middle region and the near region respectively;
S7, correcting a far-end calibration coefficient k f and a near-end calibration coefficient k n according to the abscissa value of the tooth surface to be detected in the far-end tooth set and the near-end tooth set relative to the datum line respectively, and generating a far-end correction coefficient k ft set and a near-end correction coefficient k nt set corresponding to each tooth surface to be detected respectively;
S8, calculating a far tooth surface thickness measurement value set by matching the far image thickness set with the far correction coefficient set k ft, calculating a middle tooth surface thickness measurement value set by matching the middle image thickness set with the middle calibration coefficient k m, and calculating a near tooth surface thickness measurement value set by matching the near image thickness set with the near correction coefficient set k nt;
S9, matching the far tooth surface thickness measurement value set, the middle tooth surface thickness measurement value set and the near tooth surface thickness measurement value set against the standard value of the gear piece to be detected, and judging whether the gear piece to be detected is qualified;
the step S4 includes:
s41, respectively calculating gray histograms of plane edges of a first reference area and a second reference area, and sampling the edge contour according to preset sampling precision to obtain a first sampling point set and a second sampling point set;
S42, obtaining a first intersection point set and a second intersection point set according to the first derivative extremum positions of the first sampling point set and the second sampling point set as intersection points;
S43, fitting and generating a first plane edge straight line and a second plane edge straight line by using the first intersection point set and the second intersection point set respectively;
S44, calculating the datum line from the first plane edge straight line and the second plane edge straight line using Equation 3;
Wherein x 1、y1 is a reference line first reference point, x 2、y2 is a reference line second reference point, x L1、yL1 is a first plane edge straight line first end point, x L2、yL2 is a first plane edge straight line second end point, x R1、yR1 is a second plane edge straight line first end point, and x R2、yR2 is a second plane edge straight line second end point;
Said step S7 comprises performing the following sub-steps:
S71, calculating the difference between the far calibration coefficient k f and the near calibration coefficient k n to generate a coefficient difference k d;
S72, calculating the average of the abscissa values, relative to the datum line, of the center points of the two mutually farthest tooth surfaces to be detected to generate a reference midpoint coordinate value x middle, and calculating the distance between those two abscissa values to generate a reference distance dis x;
S73, calculating, using Equation 1, the correction value a corresponding to each tooth surface to be detected according to the abscissa value x tooth, relative to the datum line, of the center point of the tooth surface to be detected in the far tooth set and the near tooth set;
S74, correspondingly calculating the far correction coefficient k ft and the near correction coefficient k nt of each tooth surface to be detected using the correction value a and Equation 2;
2. The method of claim 1, wherein the step S3 further comprises:
judging whether the total number of tooth surfaces to be detected contained in the far tooth set, the middle tooth set and the near tooth set is equal to the number of tooth surfaces in the design of the gear piece to be detected;
when judged equal, accepting the generated far tooth set, middle tooth set and near tooth set;
and when judged unequal, rejecting the generated far tooth set, middle tooth set and near tooth set, and repeating step S2.
3. The method of claim 1, wherein the projection image of the gear piece to be detected is focused on the central axis area of the gear piece to be detected.
4. The method of claim 1, wherein the step S1 further comprises:
and respectively executing filtering operation on the first reference area, the second reference area and the area to be detected.
5. The method of claim 1, wherein the performing a tooth surface recognition operation comprises:
judging whether the edge length of the tooth surface position is not less than 50 pixels, and discarding the tooth surface position when it is less than 50 pixels;
when the edge length of the tooth surface position is judged to be not less than 50 pixels, further judging whether the slope of the tooth surface position straight line is less than 0.02, and discarding the tooth surface position when the slope is judged to be not less than 0.02;
when the slope of the tooth surface position straight line is judged to be less than 0.02, retaining the tooth surface position as a tooth surface to be detected.
6. A gear piece image detection system based on tooth distance segmentation compensation, comprising:
The image acquisition module is used for respectively extracting a first reference area, a second reference area and an area to be detected from the projection image according to preset judging areas; the extracting of the first reference area, the second reference area and the area to be detected from the projection image according to the preset judging areas comprises: selecting areas 100 pixels long and 30 pixels high at preset reference surface positions on both sides of the gear piece to be detected to generate the first reference area and the second reference area; and selecting an area 50 pixels high at the tooth surface position of the gear piece to be detected, whose length covers all tooth surface areas to be detected, to generate the area to be detected;
the image segmentation module is used for carrying out segmentation recognition operation on the region to be detected by using a preset segmentation neural network to obtain a far region, a middle region and a near region which are far to near relative to the shooting lens;
The tooth surface recognition module is used for respectively executing tooth surface recognition operation on the far-end region, the middle-end region and the near-end region by using a preset recognition neural network to generate a far-end tooth set, a middle tooth set and a near-end tooth set which respectively comprise a plurality of tooth surfaces to be detected;
The datum line module is used for establishing a datum line by using the first datum region and the second datum region; comprising the following steps:
s41, respectively calculating gray histograms of plane edges of a first reference area and a second reference area, and sampling the edge contour according to preset sampling precision to obtain a first sampling point set and a second sampling point set;
S42, obtaining a first intersection point set and a second intersection point set according to the first derivative extremum positions of the first sampling point set and the second sampling point set as intersection points;
S43, fitting and generating a first plane edge straight line and a second plane edge straight line by using the first intersection point set and the second intersection point set respectively;
S44, calculating the datum line from the first plane edge straight line and the second plane edge straight line using Equation 3;
Wherein x 1、y1 is a reference line first reference point, x 2、y2 is a reference line second reference point, x L1、yL1 is a first plane edge straight line first end point, x L2、yL2 is a first plane edge straight line second end point, x R1、yR1 is a second plane edge straight line first end point, and x R2、yR2 is a second plane edge straight line second end point;
the thickness recognition module is used for respectively matching tooth surfaces to be detected in the far-end tooth set, the middle tooth set and the near-end tooth set according to the datum lines to obtain a far-end image thickness set, a middle image thickness set and a near-end image thickness set;
the coefficient correction module is used for correcting the far-end calibration coefficient k f and the near-end calibration coefficient k n according to the abscissa value of the tooth surface to be detected in the far-end tooth set and the near-end tooth set relative to the datum line respectively, and generating a far-end correction coefficient k ft set and a near-end correction coefficient k nt set which correspond to each tooth surface to be detected respectively; comprising the following sub-steps:
S71, calculating the difference between the far calibration coefficient k f and the near calibration coefficient k n to generate a coefficient difference k d;
S72, calculating the average of the abscissa values, relative to the datum line, of the center points of the two mutually farthest tooth surfaces to be detected to generate a reference midpoint coordinate value x middle, and calculating the distance between those two abscissa values to generate a reference distance dis x;
S73, calculating, using Equation 1, the correction value a corresponding to each tooth surface to be detected according to the abscissa value x tooth, relative to the datum line, of the center point of the tooth surface to be detected in the far tooth set and the near tooth set;
S74, correspondingly calculating the far correction coefficient k ft and the near correction coefficient k nt of each tooth surface to be detected using the correction value a and Equation 2;
the thickness calculation module is used for calculating a far tooth surface thickness measurement value set by matching the far image thickness set with the far correction coefficient set k ft, calculating a middle tooth surface thickness measurement value set by matching the middle image thickness set with the middle calibration coefficient k m, and calculating a near tooth surface thickness measurement value set by matching the near image thickness set with the near correction coefficient set k nt;
And the image detection module is used for judging whether the gear piece to be detected is qualified or not by using the far tooth surface thickness measurement value set, the middle tooth surface thickness measurement value set and the near tooth surface thickness measurement value set to match the standard value of the gear piece to be detected.
CN202311373451.5A 2023-10-23 Gear part image detection method and system based on tooth distance segmentation compensation Active CN117522784B (en)

Publications (2)

Publication Number Publication Date
CN117522784A CN117522784A (en) 2024-02-06
CN117522784B true CN117522784B (en) 2024-07-16

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021135300A (en) * 2020-02-27 2021-09-13 由田新技股▲ふん▼有限公司 Substrate measuring system and substrate measuring method
CN114383505A (en) * 2022-01-06 2022-04-22 江苏大学 Automatic detection device for dimension of short shaft type part
