CN112304512A - Multi-workpiece scene air tightness detection method and system based on artificial intelligence - Google Patents


Info

Publication number
CN112304512A
CN112304512A (application number CN202011342212.XA)
Authority
CN
China
Prior art keywords
bubble
image
central point
workpiece
acquiring
Prior art date
Legal status
Withdrawn
Application number
CN202011342212.XA
Other languages
Chinese (zh)
Inventor
陈洪
丁群芬
王富才
Current Assignee
Henan Yaolan Intelligent Technology Co ltd
Original Assignee
Henan Yaolan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Henan Yaolan Intelligent Technology Co ltd filed Critical Henan Yaolan Intelligent Technology Co ltd
Priority to CN202011342212.XA
Publication of CN112304512A

Classifications

    • G01M 3/06 Investigating fluid-tightness of structures by using fluid or vacuum, by detecting the presence of fluid at the leakage point by observing bubbles in a liquid pool
    • G06F 18/23 Pattern recognition; clustering techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/0004 Image analysis; industrial image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/10024 Image acquisition modality: color image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; machine component
    • G06T 2207/30204 Marker

Abstract

The invention relates to the technical field of artificial intelligence, in particular to a multi-workpiece scene airtightness detection method and system based on artificial intelligence. The method comprises the following steps: before airtightness detection starts, acquiring an instance mask region of each workpiece in a first water body top view image; after detection starts, acquiring the first center point position of each bubble at the water surface in the initial frame image; obtaining the water surface sloshing degree of each detection image by comparing the water surface change between the detection image and the initial frame image; taking the region obtained from the first center point positions and the sloshing degree as the target region of each bubble; and sequentially acquiring the intersection-over-union of each bubble's target region with each instance mask region, matching the bubble corresponding to the maximum intersection-over-union with its workpiece, determining the leaking workpiece, and accurately judging its position.

Description

Multi-workpiece scene air tightness detection method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multi-workpiece scene air tightness detection method and system based on artificial intelligence.
Background
In conventional airtightness detection, a commonly used approach is to pressurize a sealed workpiece, place it in a liquid, and judge its tightness from the bubbles generated.
At present, airtightness detection is mostly performed one workpiece at a time. Detecting several workpieces simultaneously is clearly an important means of improving efficiency, and distinguishing the qualified parts from the unqualified ones during simultaneous detection is an important research direction.
The bubble detection method for airtightness experiments disclosed in document CN105389814B determines the position of an air leakage point when the number of circle-center coordinates in a bubble detection area exceeds a preset number.
In practice, the inventors found that this prior art has the following disadvantage:
it does not solve the problem of locating leakage points when several bubble detection areas coincide, i.e. when a detection area contains multiple leakage points, so its detection accuracy is low.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a multi-workpiece scene air tightness detection method and system based on artificial intelligence, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an artificial intelligence-based method for detecting airtightness of a multi-workpiece scene, where the method includes: before the air tightness detection is started, acquiring an example mask area of each workpiece in a first water body top view image; after the air tightness detection is started, carrying out air tightness detection acquisition once, presetting a plurality of frames of second water body overlooking images, wherein each frame of image in the second water body overlooking images contains bubbles, and taking a first frame of image of a detection image as an initial frame of image; acquiring a first central point position of each bubble at the water surface of the initial frame image; obtaining the water surface shaking degree of the detection image by comparing the water surface change in the detection image and the initial frame image; taking a target area obtained according to the position of the first central point and the water surface shaking degree as a target area of each bubble; and sequentially acquiring the intersection ratio of the target area of each bubble and the mask area of each example, matching the bubble corresponding to the maximum intersection ratio with the workpiece, and determining the air-leaking workpiece.
In a second aspect, another embodiment of the present invention provides an artificial-intelligence-based multi-workpiece scene airtightness detection system, which includes an image acquisition module, an image analysis module, a bubble region acquisition module, and a bubble matching module.
The image acquisition module is used for acquiring a first water body top view image before airtightness detection starts and, each time detection is performed after it starts, acquiring a preset number of frames of second water body top view images; the second water body top view images contain bubbles; each frame of the second water body top view images is taken as a detection image, and the first frame of the detection images is taken as the initial frame image.
The image analysis module is used for acquiring the instance mask region of each workpiece in the first water body top view image, acquiring the first center point position of each bubble at the water surface in the initial frame image, and obtaining the water surface sloshing degree of each detection image by comparing the water surface change between the detection image and the initial frame image.
The bubble region acquisition module is used for taking the region obtained from the first center point positions and the water surface sloshing degree as the target region of each bubble.
The bubble matching module is used for sequentially acquiring the intersection-over-union of each bubble's target region with each instance mask region, matching the bubble corresponding to the maximum intersection-over-union with its workpiece, and determining the leaking workpiece.
The invention has at least the following beneficial effects:
By combining the degree of water body sloshing and matching bubbles to workpieces through the intersection-over-union of each bubble's target region with each workpiece's instance mask region, the detection method and system of the invention can handle overlapping bubble target regions and further improve matching accuracy.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a multi-workpiece scene airtightness detection method based on artificial intelligence according to an embodiment of the present invention;
FIG. 2 is a flowchart of water surface sloshing degree acquisition in the multi-workpiece scene airtightness detection method based on artificial intelligence according to an embodiment of the present invention;
FIG. 3 is a flowchart of bubble target region acquisition in the multi-workpiece scene airtightness detection method based on artificial intelligence according to an embodiment of the present invention;
FIG. 4 is a block diagram of a multi-workpiece scene airtightness detection system based on artificial intelligence according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the image analysis module of the multi-workpiece scene airtightness detection system based on artificial intelligence according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the bubble region acquisition module of the multi-workpiece scene airtightness detection system based on artificial intelligence according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the bubble matching module of the multi-workpiece scene airtightness detection system based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its intended objects, the multi-workpiece scene airtightness detection method and system based on artificial intelligence according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments, including their specific implementation, structure, features and effects. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes specific schemes of the multi-workpiece scene airtightness detection method and system based on artificial intelligence in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a multi-workpiece scene airtightness detection method based on artificial intelligence according to an embodiment of the present invention is shown. The method comprises the following steps:
Step S1, before airtightness detection starts, an instance mask region of each workpiece in the first water body top view image is acquired.
After the workpieces are lowered to the bottom of the tank they no longer move. Therefore, before the workpieces are pressurized, i.e. before their airtightness detection starts, the camera acquires a first water body top view image of the water tank. This image is sent to an instance segmentation network, whose input is the acquired first water body top view image: the network first obtains the bounding box of each workpiece and then performs semantic segmentation within each bounding box to obtain each workpiece instance; the output is the Mask of each workpiece instance.
Specifically, the instance segmentation network is trained as follows:
A large number of top view images of the water tank are used as the training data set and labeled manually: each mask region in each picture is drawn, and each workpiece is given a distinct category label together with the size information of its bounding box. The resulting label data has T channels, each channel containing the label information of one workpiece.
The images in the data set need to be preprocessed. In this embodiment normalization is used as the preprocessing, so that the model converges better; the label data is normalized as well.
The instance segmentation network is trained end to end on the collected images and label data. The normalized image data is input to a first encoder (Encoder1), which extracts features from the image and outputs a feature map (FeatureMap). The FeatureMap is passed through a fully connected layer (FC) to obtain the bounding box of each workpiece; the crop of each bounding box is then fed through a second encoder (Encoder2) and a decoder (Decoder) to obtain each workpiece instance. The final output is an instance segmentation Mask map with T channels, each channel representing one workpiece instance.
The loss function of the instance segmentation network is the cross-entropy loss.
In other embodiments, the instance segmentation network can also adopt a Mask R-CNN or PANet structure, and Encoder1 and Encoder2 can use lightweight models such as MobileNet or ShuffleNet to ease training.
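The training pipeline above assumes a neural instance-segmentation network; as a minimal stand-in (not the patent's network), the sketch below produces one mask "channel" per workpiece by thresholding and connected-component labeling in pure NumPy, mimicking the T-channel instance Mask output:

```python
import numpy as np
from collections import deque

def instance_masks(img, thresh=0.5):
    """Simplified stand-in for an instance-segmentation network: threshold
    the image, then label 4-connected components so that each workpiece
    region becomes one binary mask (one 'channel' per instance)."""
    fg = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    h, w = img.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if fg[sy, sx] and labels[sy, sx] == 0:
                count += 1               # new instance found, flood-fill it
                labels[sy, sx] = count
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    # T-channel output: one boolean mask per detected workpiece instance
    return [labels == t for t in range(1, count + 1)]
```

A real deployment would substitute the trained Encoder1/FC/Encoder2-Decoder network here; only the output format (one mask per workpiece) matters for the later matching steps.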
Step S2, after airtightness detection starts, a preset number of frames of second water body top view images is acquired each time detection is performed. The second water body top view images contain bubbles; each frame of them is taken as a detection image, and the first frame of the detection images is taken as the initial frame image.
Specifically, when detecting the airtightness of the workpieces, the T workpieces are each connected to a pressurizing pipe and placed into the liquid, and their airtightness is judged by observing bubbles. Each workpiece corresponds to one pressurizing pipe; once the connections are made, the workpieces are placed into the water tank and airtightness detection begins.
A camera with a fixed pose is then deployed directly above the water tank of the airtightness detection device, looking straight down, to acquire images of the tank. Each time airtightness detection is performed, a preset number n of frames of second water body top view images is acquired. Each of the n frames is a detection image, and the first frame serves as the initial frame image. So that the bubbles present distinct bright spots (enhancing their features in the image) and so that the sloshing of the water surface can be sensed, an area-array laser is deployed on one side of the tank just below the water surface, and a floating marker with distinctive features is placed on the water surface.
In step S3, a first center point position of each bubble at the water surface of the initial frame image is acquired.
Specifically, the position of each bubble at the water surface is acquired; under laser illumination a bubble appears as a bright spot. In this embodiment, because the manufacturing requirements on the workpieces are high, no large bubbles appear even when a workpiece leaks; and given the camera's resolution, the camera resolution, the laser arrangement, or the workpiece arrangement can be adjusted so that each bubble presents exactly one bright spot rather than several from multiple laser lines crossing one bubble. To detect the bright spots on the bubbles, the second water body top view image must be converted to HSV space. With MAX and MIN denoting the maximum and minimum of the R, G and B components of the RGB image, the conversion from RGB to HSV is:

$$
H=\begin{cases}
0^{\circ}, & MAX=MIN\\
60^{\circ}\times\dfrac{G-B}{MAX-MIN}, & MAX=R,\ G\ge B\\
60^{\circ}\times\dfrac{G-B}{MAX-MIN}+360^{\circ}, & MAX=R,\ G<B\\
60^{\circ}\times\dfrac{B-R}{MAX-MIN}+120^{\circ}, & MAX=G\\
60^{\circ}\times\dfrac{R-G}{MAX-MIN}+240^{\circ}, & MAX=B
\end{cases}
$$

$$
S=\begin{cases}
0, & MAX=0\\
\dfrac{MAX-MIN}{MAX}, & \text{otherwise}
\end{cases}
\qquad V=MAX
$$

In this calculation H ranges from 0 to 360 degrees, and S and V take values in [0, 1].
Thresholds are set on the hue (H) and value (V) channels. The hue threshold depends on the colour of the laser used: with a red laser, the threshold is set to extract red bright spots. By thresholding H and V, the bright spot of each bubble is separated, giving the position of each bubble in every frame of the second water body top view images. Let the bright-spot position of a bubble in the g-th frame, in the image pixel coordinate system, be P_g(i, j), where i is the abscissa and j the ordinate of the center point; the bright-spot position of a bubble represents the position of its center point. The bubble bright spot P_1(i, j) in the initial frame image is recorded as the first center point position, and the bright-spot positions of the bubbles in the remaining frames as second center point positions.
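The conversion and thresholding above can be sketched per pixel; the hue window and brightness cut-off below are illustrative values, not taken from the patent:

```python
def rgb_to_hsv(r, g, b):
    """Per-pixel RGB (components in [0, 1]) to HSV using the formulas above:
    H in [0, 360), S and V in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0  # wraps the G<B branch to +360
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0
    else:
        h = 60.0 * (r - g) / (mx - mn) + 240.0
    s = 0.0 if mx == 0 else (mx - mn) / mx
    return h, s, mx

def is_bright_spot(r, g, b, h_lo=330.0, h_hi=30.0, v_min=0.8):
    """Threshold the hue and value channels to isolate a red laser bright
    spot. h_lo/h_hi/v_min are assumed example thresholds."""
    h, _, v = rgb_to_hsv(r, g, b)
    in_hue = h >= h_lo or h <= h_hi   # red hue wraps around 0 degrees
    return in_hue and v >= v_min
```

Collecting the centroid of each thresholded spot then yields the P_g(i, j) positions used below.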
Step S4, the water surface sloshing degree of each detection image is obtained by comparing the water surface change between the detection image and the initial frame image.
Sloshing of the water affects the position of a bubble at the water surface and interferes with the matching of bubbles to workpieces; to solve this problem, the degree of water surface sloshing must be perceived.
As shown in fig. 2, step S401, a marker is placed on the water surface, and the position of the marker in the initial frame image and the detection image is obtained.
Specifically, in this embodiment a marker with distinctive features is placed on the water surface where it does not interfere with acquiring the bubble bright spots. Such a marker amplifies the weak visual signature of water sloshing, and the positional offset of the marker reflects the degree of sloshing at the surface. Corner detection is used to obtain the positions of the corner points on the marker. The specific corner detection method is: a sliding window of fixed size is moved over the image and the gradient information inside it is computed; if edges in more than one direction appear within the window, the point at its center is judged to be a corner, and its position in the image pixel coordinate system is recorded as (x, y). The corner positions detected in the n consecutive detection images are {(x_1, y_1), (x_2, y_2), ..., (x_g, y_g), ..., (x_n, y_n)}, where (x_1, y_1) is the position of the marker corner in the initial frame image.
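The window-based corner test described above (gradients along more than one direction within the window) is essentially the Harris corner criterion; the sketch below implements it in NumPy with an assumed 3x3 window, response constant k and threshold:

```python
import numpy as np

def harris_corners(img, k=0.05, thresh=0.1):
    """Window-based corner test: inside each 3x3 window, if both eigenvalues
    of the gradient structure tensor are large (edges in more than one
    direction), the center pixel is a corner. The standard Harris response
    det - k*trace^2 encodes that test. Returns (x, y) pixel positions."""
    iy, ix = np.gradient(img.astype(float))   # gradients along rows, cols
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    h, w = img.shape
    corners = []
    r = 1  # 3x3 sliding window
    for y in range(r, h - r):
        for x in range(r, w - r):
            sxx = ixx[y - r:y + r + 1, x - r:x + r + 1].sum()
            syy = iyy[y - r:y + r + 1, x - r:x + r + 1].sum()
            sxy = ixy[y - r:y + r + 1, x - r:x + r + 1].sum()
            resp = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
            if resp > thresh:
                corners.append((x, y))
    return corners
```

An isolated edge gives a large response in only one direction and is rejected; only true two-direction structure passes the threshold.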
Step S402, obtaining the change degree of the marker position in the detection image and the initial frame image.
The offset of a corner is obtained by subtracting its position in the initial frame image from its position in the current detection image, giving two change sequences: the abscissa changes {Δx_2, Δx_3, ..., Δx_g, ..., Δx_n} and the ordinate changes {Δy_2, Δy_3, ..., Δy_g, ..., Δy_n}, where Δx_g = x_g - x_1 and Δy_g = y_g - y_1. From these two sequences the mean abscissa change $\overline{\Delta x}$ and mean ordinate change $\overline{\Delta y}$ of the marker corner are obtained:

$$
\overline{\Delta x}=\frac{1}{n-1}\sum_{g=2}^{n}\Delta x_g,\qquad
\overline{\Delta y}=\frac{1}{n-1}\sum_{g=2}^{n}\Delta y_g
$$
and S403, acquiring the shaking degree of the water surface in the detection image according to the change degree of the position of the marker.
Mean change of abscissa
Figure BDA0002798865390000062
And mean value of variation of ordinate
Figure BDA0002798865390000063
As the degree of sloshing of the water surface in the detected image.
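Steps S402 and S403 reduce to a few lines; the function below assumes the corner position list starts with the initial-frame corner (x_1, y_1):

```python
def sloshing_degree(corner_positions):
    """Mean offset of the marker corner over n frames, per the formulas
    above: subtract the initial-frame position from each later frame's
    position and average. Returns (mean dx, mean dy)."""
    x1, y1 = corner_positions[0]           # corner in the initial frame
    dxs = [x - x1 for x, _ in corner_positions[1:]]
    dys = [y - y1 for _, y in corner_positions[1:]]
    n1 = len(dxs)                          # n - 1 change terms
    return sum(dxs) / n1, sum(dys) / n1
```

The returned pair is the per-detection sloshing degree used to size each bubble's elliptical target region later.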
Step S5, the region obtained from the first center point positions and the water surface sloshing degree is taken as the target region of each bubble.
Because sloshing of the water changes the positions of the bubbles, the bubble positions must be detected in n consecutive frames to obtain each bubble's region: the center point position P_g(i, j) of every bubble in every frame is superimposed to obtain a scatter diagram Img of the distribution of all bubbles. This superposition and the marker-based acquisition of the water sloshing degree proceed synchronously.
As shown in fig. 3, the specific steps of acquiring each bubble target area are as follows:
in step S501, each first central point location is marked with a first label different from each other.
The first center point positions P_1(i, j) of the bubbles in the initial frame of the n consecutive detection images are acquired, and each is marked with a distinct first label 1, 2, ..., k, ..., K, where k is the first label of the k-th first center point position detected in the initial frame image and K is the total number of first labels of first center point positions detected in the initial frame image.
Step S502, the second center point positions in a detection image are obtained, together with the center point distance between each second center point position and each first center point position.
After the g-th detection image is acquired, the second center point positions of the bubbles in it are obtained and indexed 1, 2, ..., a, ..., A, where a denotes the a-th second center point position in the current detection image; at this point no second center point position in the current detection image has been marked with a first label k. The center point distance L between each unlabeled second center point position and each first center point position already marked with a first label k is then obtained: {L_1, L_2, ..., L_a, ..., L_A}. The corner changes Δx_g and Δy_g between the current detection image and the initial frame image, in the horizontal and vertical directions, are also acquired.
Step S503, within a preset center point distance, the minimum center point distance is found, and the second center point position is marked with the first label of the corresponding first center point position.
Each center point distance, compensated by the corner changes Δx_g and Δy_g, is assigned a weight in the range [0.8, 1]; the weighting avoids mismatches caused by differing bubble rise speeds. A weight of 1 is assigned when the compensated distance is 0, and the larger the compensated distance, the smaller the assigned weight within [0.8, 1]. For the a-th second center point position in the current detection image, the first label whose first center point position yields the maximum weight is selected. All second center point positions in the n consecutive detection images are traversed in the same way, matching the second center point positions of every detection image acquired after the initial frame with the first labels of the initial frame. This yields a bubble scatter diagram sharing those labels: the scatter diagram Img_o of the distribution around all first center point positions of the current initial frame image.
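The weighted matching of step S503 can be sketched as follows. The patent gives its weighting formula only as figures, so a linear map from the compensated distance to the stated weight range [0.8, 1], under an assumed cut-off d_max, is used here:

```python
import math

def match_labels(first_centers, second_centers, dx_g, dy_g, d_max=10.0):
    """Match second center points (frame g) to first-label centers (initial
    frame). The center distance, compensated by the corner offsets
    (dx_g, dy_g), is mapped to a weight in [0.8, 1]: weight 1 at zero
    distance, smaller weights at larger distances. The linear map and the
    cut-off d_max are assumptions; unmatched points get label None."""
    assigned = {}
    for a, (i, j) in enumerate(second_centers):
        best_label, best_w = None, 0.0
        for k, (i1, j1) in enumerate(first_centers):
            d = math.hypot(i - dx_g - i1, j - dy_g - j1)  # compensated distance
            if d <= d_max:
                w = 1.0 - 0.2 * d / d_max  # d=0 -> 1.0, d=d_max -> 0.8
                if w > best_w:
                    best_label, best_w = k + 1, w
        assigned[a] = best_label  # None -> third center point, clustered later
    return assigned
```

Points that fall outside d_max of every labeled center remain unlabeled and are handled by the clustering of step S504.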
Step S504, when a second center point position carries no first label, it is taken as a third center point position, and each class of third center point positions is marked with a distinct second label based on a clustering algorithm.
Not every bubble in the initial frame image can be counted from its bright spot: the missed bubbles are those without a first label. Because bubbles rise at different speeds, the initial frame may fail to capture a bubble's bright-spot feature even though later detection images do, i.e. some second center point positions carry no first label. These are taken as third center point positions. As the bubbles without a first label are few, the third center point positions are classified with the clustering algorithm DBSCAN, and each resulting class is marked with a distinct second label: K+1, K+2, ..., K+b, ..., K+B. Combining these with the first labels detected earlier gives the category labels of all bubble bright spots and the scatter distributions of all center point positions {I_1, I_2, ..., I_k, ..., I_K, I_{K+1}, ..., I_{K+b}, ..., I_{K+B}}, where K + B is the total number of labels, i.e. the total number of leaking workpieces.
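Step S504 names DBSCAN; a minimal density-based clustering in the same spirit (pure Python, with illustrative eps and min_pts parameters) might look like:

```python
from collections import deque

def cluster_unlabeled(points, eps=3.0, min_pts=2):
    """Minimal DBSCAN-style clustering for the unlabeled third center
    points: points within eps of a dense point join its cluster, and each
    cluster receives the next label 1, 2, ... (to be offset by K in the
    patent's numbering). eps and min_pts are illustrative parameters."""
    n = len(points)
    labels = [0] * n   # 0 = unvisited / noise

    def neighbors(p):
        px, py = points[p]
        return [q for q in range(n)
                if (px - points[q][0]) ** 2 + (py - points[q][1]) ** 2 <= eps * eps]

    cur = 0
    for p in range(n):
        if labels[p] != 0:
            continue
        nb = neighbors(p)
        if len(nb) < min_pts:
            continue           # too sparse to seed a cluster
        cur += 1
        labels[p] = cur
        q = deque(nb)
        while q:
            r = q.popleft()
            if labels[r] == 0:
                labels[r] = cur
                nb_r = neighbors(r)
                if len(nb_r) >= min_pts:
                    q.extend(nb_r)   # r is a core point, expand from it
    return labels
```

In the patent's numbering, cluster c of this function corresponds to second label K + c.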
Step S505, the mean of the center point positions sharing one label is acquired as the actual center point position of each bubble, and the target region corresponding to each actual center point position, sized according to the water surface sloshing degree, is taken as the target region of that bubble.
Owing to the sloshing of the water, the scatter of each first center point position is distributed approximately symmetrically around it. The center point positions (i_z, j_z) sharing one label are collected, and the means of their abscissas and ordinates are taken as the actual center point position $(\bar{i}_k, \bar{j}_k)$ of that bubble:

$$
\bar{i}_k=\frac{1}{Q}\sum_{z=1}^{Q} i_z,\qquad
\bar{j}_k=\frac{1}{Q}\sum_{z=1}^{Q} j_z
$$

where Q is the number of scatter points in the distribution I_k of the k-th bubble.
According to the variation characteristics of the marker's corner points, an elliptical area is drawn for each bubble, centered at its actual central point position, with the mean corner offset Δx̄ as the minor axis and Δȳ as the major axis. The resulting area at each central point position is recorded as the ROI target area of the bubble at the water surface, i.e. the target area of each bubble.
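The elliptical target area can be rasterized as a boolean mask directly from the ellipse equation (a sketch; the assignment of the minor axis to the row direction and the major axis to the column direction is an illustrative assumption):

```python
import numpy as np

def ellipse_roi(shape, center, semi_minor, semi_major):
    """Boolean mask of an axis-aligned ellipse centered at the bubble's
    actual central point position: minor axis along rows (i), major
    axis along columns (j)."""
    ci, cj = center
    ii, jj = np.mgrid[0:shape[0], 0:shape[1]]
    return ((ii - ci) / semi_minor) ** 2 + ((jj - cj) / semi_major) ** 2 <= 1.0
```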
And step S6, sequentially acquiring the intersection ratio of the target area of each bubble and the mask area of each example, matching the bubble corresponding to the maximum intersection ratio with the workpiece, and determining the workpiece with air leakage.
Specifically, after the ROI target region of each bubble and the instance Mask of each workpiece are obtained, the intersection-over-union (IOU) between each bubble's ROI target region on the water surface and each workpiece instance Mask is computed. Each IOU is the probability that the bubble's ROI target region belongs to the corresponding workpiece instance Mask; the maximum of these IOU values identifies the workpiece instance Mask the bubble belongs to, completing the matching between the bubble and the workpiece.
For example, taking the matching of a certain bubble ROI_1, the specific steps are as follows:

Calculate the IOU between ROI_1 and each workpiece instance Mask:

IOU_t = |ROI_1 ∩ Mask_t| / |ROI_1 ∪ Mask_t|

where T represents the number of workpiece instance Masks, Mask_t denotes the t-th workpiece instance Mask, and IOU_t denotes the intersection ratio between the target area of bubble ROI_1 and the t-th workpiece instance Mask_t. The matching probabilities of bubble ROI_1 with each workpiece instance Mask are thus obtained as {IOU_1, IOU_2, ..., IOU_t, ..., IOU_T}. Taking the maximum of this sequence, argmax({IOU_1, IOU_2, ..., IOU_t, ..., IOU_T}), the ROI target area of the bubble is matched with the workpiece instance Mask corresponding to the maximum IOU, completing the matching of this bubble with a workpiece.
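The IOU computation and the argmax matching over instance masks can be sketched as (binary masks assumed; function names are illustrative):

```python
import numpy as np

def iou(roi, mask):
    """Intersection-over-union of two boolean regions."""
    inter = np.logical_and(roi, mask).sum()
    union = np.logical_or(roi, mask).sum()
    return inter / union if union else 0.0

def match_bubble(roi, instance_masks):
    """Return the index t of the workpiece instance Mask with the
    largest IOU against the bubble's ROI, plus all the scores."""
    scores = [iou(roi, m) for m in instance_masks]
    return int(np.argmax(scores)), scores
```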
Similarly, the same operation is performed on the remaining bubbles to complete the matching of all K+B bubbles with the T workpieces.
Step S7, when judging that the bubbles are matched with a plurality of workpieces through the maximum intersection ratio, acquiring the workpieces as suspected abnormal workpieces; and (4) sequentially cutting off the gas of each suspected abnormal workpiece, and judging the suspected abnormal workpiece as an abnormal workpiece if the bubbles disappear within a preset time.
When a bubble is judged to match a plurality of workpieces through the maximum intersection ratio, those workpieces are suspected abnormal workpieces. Each workpiece is connected to a pressurizing pipe, and the pipes are controlled to cut off the gas to the suspected abnormal workpieces one by one. If the bubble disappears within the preset time, it belongs to the workpiece whose gas was just cut off, and that suspected abnormal workpiece is judged to be an abnormal workpiece; if it does not disappear, the pressurizing pipe of the next workpiece is cut off and the check is repeated. Cycling through this step completes the matching of the bubble to a workpiece and determines all air-leaking workpieces.
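The sequential gas cut-off diagnosis can be sketched as a loop over the suspects (the `cut_gas` and `bubbles_gone` callables are illustrative stand-ins for the pipe control and the bubble-disappearance check within the preset time):

```python
def locate_leak(suspects, cut_gas, bubbles_gone):
    """Cut gas to each suspected workpiece in turn; the suspect whose
    cut-off makes the bubble disappear within the preset time is
    judged the abnormal (leaking) workpiece."""
    for workpiece in suspects:
        cut_gas(workpiece)        # close this workpiece's pressurizing pipe
        if bubbles_gone():        # bubble vanished within the preset time?
            return workpiece      # -> confirmed abnormal workpiece
        # otherwise move on and cut off the next workpiece's pipe
    return None                   # no suspect confirmed
```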
In summary, the embodiment of the invention acquires the instance and position information of each workpiece with an instance segmentation network, acquires the position of each bubble from the bright spots generated when the laser passes through the bubbles, obtains each bubble's ROI by accounting for the influence of water sloshing on the bubble positions, matches the bubble ROIs with the workpiece instance regions, and determines the exact position of the air-leaking workpiece. Even when multi-bubble detection areas overlap, the position of the air-leaking workpiece can be judged accurately.
Based on the same inventive concept as the method embodiment above, another embodiment of the invention further provides a system for detecting air tightness in a multi-workpiece scene based on artificial intelligence.
Referring to fig. 4, a block diagram of a system for detecting air tightness in a multi-part scene based on artificial intelligence according to another embodiment of the present invention is shown.
The system includes an image acquisition module 100, an image analysis module 200, a bubble region acquisition module 300, and a bubble matching module 400.
The image acquisition module is used for acquiring a first water body top view image before the start of air tightness detection, and acquiring a second water body top view image with a preset number of frames every time the air tightness detection is carried out after the start of the air tightness detection; the second water body overlook image comprises bubbles; taking each frame of image in the second water body overlooking image as a detection image; and taking a first frame image of the detection image as an initial frame image.
Specifically, an area array laser is deployed on one side of the water tank, just below the water surface, and a floating marker with distinct features is placed on the water surface.
Since the workpiece no longer moves after being lowered to the bottom of the tank, the camera acquires the first water body top view image of the water tank before the airtightness detection of the workpiece begins.
After the airtightness detection of the workpiece begins, a camera with a fixed pose, deployed directly above the water tank of the airtightness detection device with a top-down viewing angle, collects images of the water tank. For every detection, the camera collects a preset number of frames of second water body top view images, i.e. n frames. Each of the n frames is a detection image, and the first frame of the detection images is taken as the initial frame image.
The image analysis module is used for acquiring an example mask area of each workpiece in the first water body overlook image, acquiring the central point position of each bubble at the water surface of the initial frame image, and acquiring the water surface shaking degree of the detection image by comparing the detection image with the change of the water surface in the initial frame image.
Specifically, the collected first water body top view image of the water tank is fed into an instance segmentation network. Taking this image as input, the network performs instance segmentation of the workpieces in the camera image: it first obtains a bounding box for each workpiece, then applies semantic segmentation within each bounding box to obtain each workpiece instance, and outputs the Mask of each workpiece instance.
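The patent relies on a learned instance segmentation network for this step. Purely to illustrate the shape of its output — one binary Mask per workpiece — a connected-component labeling over a thresholded foreground gives the same per-instance mask format (a lightweight stand-in, not the network itself):

```python
import numpy as np
from collections import deque

def instance_masks(foreground):
    """Split a boolean foreground image into one boolean mask per
    4-connected component (stand-in for per-workpiece instance Masks)."""
    fg = np.asarray(foreground, dtype=bool)
    seen = np.zeros_like(fg)
    masks = []
    for si, sj in zip(*np.nonzero(fg)):
        if seen[si, sj]:
            continue
        comp = np.zeros_like(fg)
        queue = deque([(si, sj)])
        seen[si, sj] = True
        while queue:           # breadth-first flood fill of one component
            i, j = queue.popleft()
            comp[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < fg.shape[0] and 0 <= nj < fg.shape[1]
                        and fg[ni, nj] and not seen[ni, nj]):
                    seen[ni, nj] = True
                    queue.append((ni, nj))
        masks.append(comp)
    return masks
```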
To detect the bright spots on the bubbles, the second water body top view images are converted to HSV space, and thresholds are set on the hue (H) and value (V) channels. The hue threshold depends on the color of the laser used; if a red laser is selected, the threshold is set to extract red bright spots. Thresholding H and V separates the bubble bright spots, and at every detection the positions of the bubble bright spots in each frame of the second water body top view images are obtained as the central point positions P_g(i, j) of the bubbles. The bright-spot position of each bubble at the water surface in the initial frame image is recorded as P_1(i, j) and taken as the first central point position of the bubble; the bright-spot positions of the bubbles at the water surface in the remaining detection images are the second central point positions.
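The H/V thresholding and centroid step can be sketched on an HSV array (the hue range and brightness threshold are illustrative assumptions for a red laser; a per-blob grouping would be applied when several spots are present, here a single-blob centroid is shown for brevity):

```python
import numpy as np

def bubble_center(hsv, h_range=(0, 10), v_min=200):
    """Threshold hue and value channels to isolate a laser bright spot,
    then return the centroid (i, j) of the spot pixels, or None."""
    h, v = hsv[..., 0], hsv[..., 2]
    spots = (h >= h_range[0]) & (h <= h_range[1]) & (v >= v_min)
    ii, jj = np.nonzero(spots)
    if ii.size == 0:
        return None
    return float(ii.mean()), float(jj.mean())
```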
As shown in fig. 5, the image analysis module 200 includes a marker analysis unit 210 and a shake detection unit 220.
The marker analysis unit is used for placing a marker on the water surface and acquiring the positions of the marker in the initial frame image and in the detection images.
The position information of the corner points on the marker is acquired using corner detection. The specific method is as follows: a sliding window of fixed size is slid over the picture and the gradient information inside the window is computed; if more than one edge direction is present in the window, the point at the window's center is judged to be a corner, and its position in the image pixel coordinate system is recorded as (x, y). The positions of the corners detected in the n consecutive detection images are collected as {(x_1, y_1), (x_2, y_2), ..., (x_g, y_g), ..., (x_n, y_n)}, where (x_1, y_1) is the position of the marker's corner in the initial frame image.
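The sliding-window gradient test described above can be sketched with a Harris-style corner response, which is large exactly where the window's gradients span more than one edge direction (the window size and the constant k are conventional values, not taken from the patent):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Per-pixel corner response: large where the local window contains
    gradients in more than one direction (the patent's corner test)."""
    img = img.astype(float)
    iy, ix = np.gradient(img)          # gradients along rows and columns
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    r = win // 2
    resp = np.zeros_like(img)
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            sxx = ixx[i - r:i + r + 1, j - r:j + r + 1].sum()
            syy = iyy[i - r:i + r + 1, j - r:j + r + 1].sum()
            sxy = ixy[i - r:i + r + 1, j - r:j + r + 1].sum()
            resp[i, j] = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    return resp
```

In practice a thresholded local maximum of this response would give the corner position (x, y) used above.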
The shaking detection unit is used for acquiring the shaking degree of the water surface in the detection image according to the change of the position of the marker.
Specifically, the offset of the corner position is obtained by subtracting the corner position in the initial frame image from the corner position in the current frame image, giving change sequences in two directions: the abscissa change sequence {Δx_2, Δx_3, ..., Δx_g, ..., Δx_n} and the ordinate change sequence {Δy_2, Δy_3, ..., Δy_g, ..., Δy_n}. The means of the two change sequences, the mean abscissa change Δx̄ and the mean ordinate change Δȳ, give the water surface sloshing degree.
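The sloshing-degree computation reduces to differencing each frame's corner position against the initial frame and averaging (a sketch; the signed mean follows the text, magnitudes could be substituted if only the extent of sloshing is wanted):

```python
def sloshing_degree(corners):
    """corners: [(x_1, y_1), ..., (x_n, y_n)] marker corner positions,
    with frame 1 the initial frame. Returns (mean dx, mean dy), the
    water-surface sloshing degree."""
    x0, y0 = corners[0]
    dx = [x - x0 for x, _ in corners[1:]]   # abscissa change sequence
    dy = [y - y0 for _, y in corners[1:]]   # ordinate change sequence
    return sum(dx) / len(dx), sum(dy) / len(dy)
```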
The bubble area acquisition module is used for taking all distribution areas obtained according to the position of the central point and the water surface shaking degree as target areas of each bubble.
As shown in fig. 6, the bubble area acquiring module 300 includes a first label unit 310, a second label unit 320, and an area acquiring unit 330.
The first label unit is used for marking each first central point position with a first label which is different from each other, acquiring a second central point position of the detection image, acquiring a central point distance between the second central point position and the first central point position, acquiring a minimum value of the acquired central point distance within a preset central point distance, and marking the second central point position with the first label which is corresponding to the minimum value and has the same first central point position.
Specifically, the first central point position P_1(i, j) of each bubble in the initial frame image is obtained, and each first central point position is marked with a different first label, numbered 1, 2, ..., k, ..., K, where k denotes the first label of the k-th first central point position detected in the initial frame image, and K is the total number of first labels of first central point positions detected in the initial frame image.
After the g-th frame detection image is acquired, the second central point positions in the current frame detection image are obtained and numbered 1, 2, ..., a, ..., A, where a denotes the a-th second central point position in the current frame detection image and A is the number of second central point positions acquired in the current frame; the second central point positions in the current frame detection image carry no first label k yet.
The central point distances L between every second central point position not yet marked with a label and the first central point position of the bubble marked with first label k are acquired: {L_1, L_2, ..., L_a, ..., L_A}. Using the marker, the corner changes Δx_g and Δy_g of the current frame detection image relative to the initial frame image in the horizontal and vertical directions are obtained, and according to the offset magnitude

sqrt(Δx_g² + Δy_g²)

a weight in the range [0.8, 1] is assigned; the weighting is used to avoid mismatches caused by differing bubble rise speeds. When sqrt(Δx_g² + Δy_g²) is 0, a weight of 1 is assigned; the larger its value, the smaller the assigned weight, within the weight range [0.8, 1] for the a-th second central point position in the current frame detection image. The second central point position with the maximum weight is selected as corresponding to the first label. Traversing all second central point positions in the n consecutive detection images by the same method completes the matching between the second central point positions in all detection images collected after the initial frame image and the first labels in the initial frame image, producing scattergrams of bubbles sharing the same label, i.e. the scattergram Img_o of the distribution of the first central point positions detectable in the initial frame image.
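The label propagation from the initial frame to a later frame can be sketched as a sloshing-weighted nearest-center assignment (a simplified single-point sketch; the linear mapping from corner offset to the [0.8, 1] weight and the distance gate are illustrative assumptions, since the patent does not give the exact formula):

```python
import numpy as np

def propagate_label(first_centers, second_center, offset_mag, max_dist=20.0):
    """Assign a current-frame bubble center the first label of the
    nearest initial-frame center, down-weighted by the marker's corner
    offset magnitude. Returns the label, or None if every center is
    farther than max_dist."""
    weight = max(0.8, 1.0 - 0.01 * offset_mag)   # 1 when the offset is 0
    best_label, best_score = None, 0.0
    for label, (ci, cj) in first_centers.items():
        d = np.hypot(second_center[0] - ci, second_center[1] - cj)
        if d <= max_dist:
            score = weight / (1.0 + d)           # closer -> higher score
            if score > best_score:
                best_label, best_score = label, score
    return best_label
```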
The second label unit is used for acquiring the position of the second central point as the position of a third central point when the position of the second central point is not marked with the first label, and marking each third central point with a different second label based on a clustering algorithm.
Specifically, considering that some bubbles are missed in the detected initial frame image because bubbles rise at different speeds, the initial frame image failed to capture their bright-spot features, so no first label could be assigned to their second central point positions, and a small number of bubbles carry no first label. These second central point positions are acquired as third central point positions and classified with the clustering algorithm DBSCAN, and each unlabeled class of third central point positions is marked with a different second label: K+1, K+2, ..., K+B. Combining these with the previously detected first labels of the bubbles gives all bubble category labels and the scatter distributions of all central point positions {I_1, I_2, ..., I_k, ..., I_K, I_{K+1}, ..., I_{K+b}, ..., I_{K+B}}, where K+B represents the total number of bubbles.
The area acquisition unit is used for acquiring the mean value of the positions of each central point with the same label as the actual central point position of each bubble, and acquiring a target area corresponding to each actual central point position as the target area of each bubble according to the shaking degree of the water surface.
Specifically, because of the sloshing of the water body, the scatter distribution of each first central point position is approximately symmetric about that position, so the mean of the abscissas i_z and ordinates j_z of the central point positions sharing the same label is taken as the actual central point position of each bubble (ī_k, j̄_k), with abscissa

ī_k = (1/Q) Σ_{z=1}^{Q} i_z

and ordinate

j̄_k = (1/Q) Σ_{z=1}^{Q} j_z,

where Q represents the number of scatter points in the scatter distribution I_k of the k-th bubble.
According to the variation characteristics of the marker's corner points, an elliptical area is drawn for each bubble, centered at its actual central point position, with the mean corner offset Δx̄ as the minor axis and Δȳ as the major axis. The area at each central point position is recorded as the ROI (region of interest) of the bubble at the water surface, i.e. the target area of each bubble.
The bubble matching module is used for sequentially acquiring the intersection ratio of the target area of each bubble and the mask area of each example, matching the bubble corresponding to the maximum intersection ratio with the workpiece and determining the air-leaking workpiece.
Specifically, the intersection-over-union (IOU) of each bubble's ROI target region with each workpiece instance Mask is obtained from the bubble's ROI target region on the water surface and the workpiece instance Masks, giving the probability that the bubble's ROI target region belongs to each workpiece instance Mask. The workpiece instance Mask corresponding to the maximum IOU of the bubble's ROI region is then obtained, the bubble is matched with that workpiece instance Mask, and the workpiece is determined to be an air-leaking workpiece.
As shown in fig. 7, the bubble matching module further includes an additional judgment unit 410.
And the extra judgment unit is used for acquiring a plurality of workpieces as suspected abnormal workpieces when the bubbles are judged to be matched with the plurality of workpieces through the maximum intersection ratio, sequentially cutting off the gas of each suspected abnormal workpiece, and judging the suspected abnormal workpieces as abnormal workpieces if the bubbles disappear within a preset time.
Specifically, when a bubble is judged to match a plurality of workpieces through the maximum intersection ratio, those workpieces are suspected abnormal workpieces. Each workpiece is connected to a pressurizing pipe, and the pipes are controlled to cut off the gas supply of the suspected abnormal workpieces one by one. If the bubble disappears within the preset time, it belongs to the workpiece whose gas was just cut off, and that suspected abnormal workpiece is judged to be an abnormal workpiece; if it does not disappear, the pressurizing pipe of the next workpiece is cut off and the check is repeated. Cycling through this step completes the matching of the bubble to a workpiece and determines all air-leaking workpieces.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A multi-workpiece scene air tightness detection method based on artificial intelligence is characterized by comprising the following steps:
before the air tightness detection is started, acquiring an example mask area of each workpiece in a first water body top view image;
after the air tightness detection is started, acquiring a second water body top view image with a preset number of frames every time the air tightness detection is carried out; the second water body top view image comprises bubbles; taking each frame of image in the second water body top view image as a detection image; taking a first frame image of the detection image as an initial frame image;
acquiring a first central point position of each bubble at the water surface of the initial frame image;
obtaining the water surface shaking degree of the detection image by comparing the water surface change in the detection image and the initial frame image;
taking a target area obtained according to the position of the first central point and the water surface shaking degree as a target area of each bubble;
and sequentially acquiring the intersection ratio of the target area of each bubble and the example mask area, matching the bubble corresponding to the maximum intersection ratio with the workpiece, and determining the air-leaking workpiece.
2. The artificial intelligence-based multi-workpiece scene airtightness detection method according to claim 1, wherein the step of matching the bubble corresponding to the maximum intersection ratio with the workpiece and determining the workpiece with an air leak comprises:
when the bubble is judged to be matched with a plurality of workpieces through the maximum intersection ratio, acquiring the workpieces as suspected abnormal workpieces; and sequentially cutting off the gas of each suspected abnormal workpiece, and judging the suspected abnormal workpiece as an abnormal workpiece if the bubbles disappear within a preset time.
3. The artificial intelligence based multi-workpiece scene airtightness detection method according to claim 1, wherein the first central point positions are:
the position of a bright spot generated when the laser passes through the bubble; the laser is an area array laser arranged on the water surface; the bubble is a bubble in the initial frame image.
4. The artificial intelligence based multi-workpiece scene airtightness detection method according to claim 1, wherein the step of obtaining the water surface sloshing degree in the detection image by comparing the water surface change in the detection image with the initial frame image comprises:
placing a marker on the water surface, and acquiring the positions of the markers in the initial frame image and the detection image;
acquiring the change degree of the position of the marker in the detection image and the initial frame image;
and acquiring the shaking degree of the water surface in the detection image according to the change degree of the position of the marker.
5. The artificial intelligence based multi-workpiece scene airtightness detection method according to claims 1 and 3, wherein the step of using a target region obtained from the first center point position and the water surface sloshing degree as a target region of each bubble comprises:
marking a first label different from each other at each first central point position;
acquiring a second central point position of the detection image, and acquiring a central point distance between the second central point position and the first central point position;
within a preset central point distance, acquiring a minimum value of the central point distance, and marking the second central point position with the first label with the same position as the first central point corresponding to the minimum value;
when the first label is not marked at the second central point position, the second central point position is obtained as a third central point position, and each third central point position is marked with a different second label based on a clustering algorithm; the second tag is different from the first tag;
and acquiring the mean value of the positions of each central point with the same label as the actual central point position of each bubble, and acquiring a target area corresponding to each actual central point position as the target area of each bubble according to the shaking degree of the water surface.
6. A multi-workpiece scene air tightness detection system based on artificial intelligence is characterized by comprising an image acquisition module, an image analysis module, a bubble area acquisition module and a bubble matching module;
the image acquisition module is used for acquiring a first water body overlook image before the start of air tightness detection, and acquiring a second water body overlook image with a preset number of frames every time the air tightness detection is carried out after the start of the air tightness detection; the second water body top view image comprises bubbles; taking each frame of image in the second water body top view image as a detection image; taking a first frame image of the detection image as an initial frame image;
the image analysis module is used for acquiring an example mask region of each workpiece in the first water body overlook image, acquiring the central point position of each bubble at the water surface of the initial frame image, and acquiring the water surface shaking degree of the detection image by comparing the water surface change in the detection image and the initial frame image;
the bubble area acquisition module is used for taking a target area obtained according to the central point position and the water surface shaking degree as a target area of each bubble;
the bubble matching module is used for sequentially acquiring the intersection ratio of the target area of each bubble and each example mask area, matching the bubble corresponding to the maximum intersection ratio with the workpiece, and determining the air-leaking workpiece.
7. The artificial intelligence based multi-workpiece scene airtightness detection system according to claim 6, wherein the image analysis module further comprises a marker analysis unit and a shake detection unit;
the marker analyzing unit is used for placing a marker on the water surface, acquiring the positions of the marker in the initial frame image and the detection image, and acquiring the positions of the marker in the detection image and the initial frame image;
and the shake detection unit is used for acquiring the shake degree of the water surface in the detection image according to the change of the position of the marker.
8. The artificial intelligence based multi-workpiece scene airtightness detection system according to claim 6, wherein the bubble region acquisition module includes a first label unit, a second label unit and a region acquisition unit;
the first label unit is configured to mark each first center point location with a different first label, obtain a second center point location of the detection image, obtain a center point distance between the second center point location and the first center point location, obtain a minimum value of the obtained center point distances within a preset center point distance, and mark the second center point location with the first label having the same first center point location corresponding to the minimum value;
the second label unit acquires the second central point position as a third central point position when the first label is not marked at the second central point position, and marks different second labels for each third central point position based on a clustering algorithm; the second tag is different from the first tag;
the area acquisition unit is used for acquiring the mean value of the positions of each central point with the same label as the actual central point position of each bubble, and acquiring a target area corresponding to each actual central point position as the target area of each bubble according to the shaking degree of the water surface.
9. The artificial intelligence based multi-workpiece scene airtightness detection system according to claim 6, wherein the bubble matching module further comprises an additional judgment unit;
the extra judgment unit is used for acquiring a plurality of workpieces as suspected abnormal workpieces when the bubble is judged to be matched with the plurality of workpieces through the maximum intersection ratio, sequentially cutting off the gas of each suspected abnormal workpiece, and judging the suspected abnormal workpieces as abnormal workpieces if the bubble disappears in a preset time.
10. The artificial intelligence based multi-workpiece scene airtightness detection system according to claims 6 and 8, wherein the first central point positions are:
the position of a bright spot generated when the laser passes through the bubble; the laser is an area array laser arranged on the water surface; the bubble is a bubble in the initial frame image.
CN202011342212.XA 2020-11-26 2020-11-26 Multi-workpiece scene air tightness detection method and system based on artificial intelligence Withdrawn CN112304512A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011342212.XA CN112304512A (en) 2020-11-26 2020-11-26 Multi-workpiece scene air tightness detection method and system based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011342212.XA CN112304512A (en) 2020-11-26 2020-11-26 Multi-workpiece scene air tightness detection method and system based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN112304512A true CN112304512A (en) 2021-02-02

Family

ID=74486911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011342212.XA Withdrawn CN112304512A (en) 2020-11-26 2020-11-26 Multi-workpiece scene air tightness detection method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112304512A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114925660A (en) * 2022-05-23 2022-08-19 马上消费金融股份有限公司 Text processing model training method and device and text processing method and device
CN114925660B (en) * 2022-05-23 2023-07-28 马上消费金融股份有限公司 Text processing model training method and device, text processing method and device
CN115147598A (en) * 2022-06-02 2022-10-04 粤港澳大湾区数字经济研究院(福田) Target detection segmentation method and device, intelligent terminal and storage medium

Similar Documents

Publication Publication Date Title
US11669971B2 (en) Colony contrast gathering
CN105913093B (en) A kind of template matching method for Text region processing
CN106529537B (en) A kind of digital instrument reading image-recognizing method
CN110580723B (en) Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN108961235A (en) A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm
CN109900711A (en) Workpiece, defect detection method based on machine vision
CN108197604A (en) Fast face positioning and tracing method based on embedded device
CN111507976B (en) Defect detection method and system based on multi-angle imaging
CN105844621A (en) Method for detecting quality of printed matter
CN109253722A (en) Merge monocular range-measurement system, method, equipment and the storage medium of semantic segmentation
CN110264457A (en) Weld seam autonomous classification method based on rotary area candidate network
CN112304512A (en) Multi-workpiece scene air tightness detection method and system based on artificial intelligence
CN113724231A (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
CN111914761A (en) Thermal infrared face recognition method and system
CN111461036A (en) Real-time pedestrian detection method using background modeling enhanced data
CN110288612B (en) Nameplate positioning and correcting method and device
CN114781514A (en) Floater target detection method and system integrating attention mechanism
CN112381043A (en) Flag detection method
CN112729691A (en) Batch workpiece airtightness detection method based on artificial intelligence
CN112308828A (en) Artificial intelligence detection method and detection system for air tightness of sealing equipment
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
JPH05215547A (en) Method for determining corresponding points between stereo images
CN113853607A (en) System and method for monitoring bacterial growth and predicting colony biomass of colonies

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210202