CN113610740A - Bubble feature enhancement method and system in air tightness detection based on artificial intelligence - Google Patents


Info

Publication number
CN113610740A
CN113610740A
Authority
CN
China
Prior art keywords
image
fusion
pixel
obtaining
fusion weight
Prior art date
Legal status
Withdrawn
Application number
CN202110932646.3A
Other languages
Chinese (zh)
Inventor
张来娣
Current Assignee
Jiangsu Fuen Daily Chemical Technology Co ltd
Original Assignee
Jiangsu Fuen Daily Chemical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Fuen Daily Chemical Technology Co ltd filed Critical Jiangsu Fuen Daily Chemical Technology Co ltd
Priority to CN202110932646.3A priority Critical patent/CN113610740A/en
Publication of CN113610740A publication Critical patent/CN113610740A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Abstract

The invention relates to the technical field of artificial intelligence and air tightness detection, in particular to a bubble feature enhancement method and system in air tightness detection based on artificial intelligence. The method comprises the following steps: obtaining a first fusion weight through the texture similarity of the initial images, and fusing the initial image sequence according to the first fusion weight to obtain a background template; extracting a foreground image sequence from the initial image sequence through the background template, fusing the foreground image sequence through the first fusion weight, and controlling the fusion process through a pixel value threshold to obtain a first fused image; continuing to fuse the foreground images according to the characteristics of the three pixel categories in the first fused image (fixed noise, bubble features and random noise) to obtain a second fused image; and controlling the fusion process through the pixel value threshold, and removing the fixed noise and the random noise through two image negations to obtain a bubble feature image. By controlling the superposition of images and then denoising according to the formation characteristics of the noise pixels, the method obtains a stable bubble feature image with obvious features.

Description

Bubble feature enhancement method and system in air tightness detection based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence and air tightness detection, in particular to a bubble feature enhancement method and system in air tightness detection based on artificial intelligence.
Background
The immersion method is one of the traditional air tightness detection methods. A workpiece to be detected, such as an automobile engine device, is placed in a glass container filled with liquid, gas is filled into the workpiece to be detected, and the air tightness of the workpiece to be detected is judged according to the bubble generation condition in the container. In the prior art, the air tightness can be judged by obtaining bubble characteristics through images by a machine vision technology.
As workpieces to be detected are repeatedly placed into and taken out of the container, impurities in the liquid and impurities attached to the workpieces gradually increase the turbidity of the liquid. The increased turbidity makes the bubble features in the image less distinct and the noise large, which strongly affects the air tightness detection result.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a bubble feature enhancement method and system in air tightness detection based on artificial intelligence, and the adopted technical scheme is as follows:
the invention provides a bubble feature enhancement method in air tightness detection based on artificial intelligence, which comprises the following steps:
acquiring an initial image sequence; obtaining texture similarity of the initial images of adjacent frames, and obtaining a first fusion weight according to the texture similarity; fusing the initial image sequence according to the first fusion weight to obtain a background template;
extracting foreground images in the initial image sequence according to the background template to obtain a foreground image sequence; fusing the foreground image sequence according to the first fusion weight to obtain a first fused image, and stopping fusing when a pixel point in the first fused image reaches a pixel value threshold;
acquiring the pixel category in the first fusion image; the pixel classes include fixed noise, bubble characteristics, and random noise; setting the pixel point in each pixel category as the average pixel value corresponding to the pixel category; taking the occurrence frequency of each pixel category as a second fusion weight, and fusing each pixel category of the first fusion image according to the first fusion weight and the second fusion weight to obtain a second fusion image; when the pixel point in the second fusion image reaches the pixel value threshold, negating the second fusion image to obtain a third fusion image; fusing the foreground image sequence with the third fused image; and when the pixel point in the third fused image reaches the pixel value threshold value, negating the third fused image to obtain a bubble feature image.
Further, the acquiring the initial image sequence further comprises:
and graying the initial image and then carrying out top hat operation.
Further, the acquiring the initial image sequence further comprises:
keeping the initial image with the minimum average pixel value between adjacent frames of the initial image sequence;
after obtaining the foreground image sequence, the method further comprises:
and reserving the foreground image with the maximum average pixel value between the adjacent frames of the foreground image sequence.
Further, the obtaining the texture similarity of each of the initial images comprises:
obtaining gray level co-occurrence matrix characteristics of each initial image; the gray level co-occurrence matrix characteristics comprise energy, contrast, correlation and entropy; and taking the similarity vector of the gray level co-occurrence matrix characteristic of the initial image of the adjacent frame as the texture similarity.
Further, the obtaining a first fusion weight according to the texture similarity comprises:
obtaining an average texture similarity of the initial image sequence; establishing a difference matrix according to the difference between the texture similarity and the average texture similarity to obtain a plurality of difference characteristic vectors and difference characteristic values;
obtaining the number ratio of the initial images corresponding to the texture similarity in the initial image sequence;
and obtaining the first fusion weight according to the texture similarity, the quantity ratio and the difference characteristic value.
Further, the obtaining the first fusion weight according to the texture similarity, the quantity-to-number ratio and the difference feature value comprises: obtaining the first fusion weight by a first fusion weight formula, the first fusion weight formula comprising:
α = Σ_{j=1}^{k} (τ_j / τ) · c_j · s_j

where α is the first fusion weight, k is the number of difference feature vectors, c_j is the number ratio corresponding to the jth difference feature vector, s_j is the texture similarity corresponding to the jth difference feature vector, τ_j is the jth difference eigenvalue, and τ is the sum of the difference eigenvalues.
Further, the fusing the initial image sequence according to the first fusion weight to obtain a background template includes: fusing the initial image sequence by a first fusion formula, the first fusion formula comprising:
f_{i,i+1} = α·f_i + (1−α)·f_{i+1}

where f_{i,i+1} is the fused image used as the background template, f_i is the ith-frame initial image, f_{i+1} is the (i+1)th-frame initial image, and α is the first fusion weight.
Further, the fusing each pixel category of the first fused image according to the first fusion weight and the second fusion weight to obtain a second fused image includes: obtaining the second fusion image through a second fusion formula, wherein the second fusion formula is as follows:
G_{i+1,j} = (1−α)·G_{i,j} + α·(1+k_j)·G′_{i,j}

where G_{i+1,j} is the pixel value of the jth pixel category in the (i+1)th-frame second fused image, G_{i,j} is the pixel value of the jth pixel category in the ith-frame second fused image, α is the first fusion weight, k_j is the second fusion weight of the jth pixel category, and G′_{i,j} is the pixel value of the jth pixel category in the ith-frame foreground image sequence.
Further, the acquiring the pixel classes in the first fused image comprises:
and acquiring three pixel categories of the first fusion image by using a mean value clustering algorithm.
The invention also provides a bubble characteristic enhancement system in the air tightness detection based on the artificial intelligence, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, and is characterized in that the processor realizes any step of the bubble characteristic enhancement method in the air tightness detection based on the artificial intelligence when executing the computer program.
The invention has the following beneficial effects:
1. By performing multiple fusion processing on the foreground images, the embodiment of the invention inverts the pixel values of the images to screen out the fixed noise and the random noise, turning them into background, so that bubble features with obvious characteristics are obtained after multi-frame superposition.
2. According to the embodiment of the invention, the initial image sequence is fused through the first fusion weight, so that a stable and effective background template is obtained. And fusing the pixel classes through the first fusion weight and the second fusion weight to obtain a second fusion image which is stable and has obvious pixel class difference.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for enhancing bubble characteristics in an artificial intelligence-based airtightness detection according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its predetermined objects, and their effects, a method and a system for enhancing bubble features in artificial-intelligence-based air tightness detection according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of a bubble feature enhancement method and system in air tightness detection based on artificial intelligence in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for enhancing bubble characteristics in artificial intelligence-based air tightness detection according to an embodiment of the present invention is shown, where the method includes:
step S1: acquiring an initial image sequence; obtaining texture similarity of initial images of adjacent frames, and obtaining a first fusion weight according to the texture similarity; and fusing the initial image sequence according to the first fusion weight to obtain a background template.
An industrial RGB camera was attached to the wall of the air tightness detection vessel to capture the initial image. In the embodiment of the present invention, 100 consecutive initial images are obtained.
Preferably, after the initial image is obtained, it is grayed and then processed with a top hat operation. The top hat operation is the difference between the original image and the image after an opening operation; when the image has a large background and its small features are regular, the top hat operation can be used for coarse background extraction. Processing the initial image with the top hat operation extracts most background pixels according to the pixel values of the original image and the regular characteristics of bubble motion, realizing coarse extraction of the background. Note that a small amount of bubble features still remains in the result after the top hat operation.
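The graying and top-hat preprocessing described above can be sketched as follows. This is an illustrative implementation, not the patent's own code; the structuring-element size and the use of a channel mean for graying are assumptions.

```python
import numpy as np
from scipy import ndimage

def tophat_preprocess(rgb, struct_size=15):
    """Grayscale an RGB frame, then apply a white top-hat
    (image minus its grey opening) to suppress the large smooth
    background and keep small bright structures such as bubbles."""
    gray = rgb.mean(axis=2)  # simple luminance proxy for graying
    return ndimage.white_tophat(gray, size=struct_size)

# a flat background with one small bright blob standing in for a bubble
frame = np.full((40, 40, 3), 60.0)
frame[18:22, 18:22] += 120.0
out = tophat_preprocess(frame, struct_size=9)
# background is removed (≈0); the small blob survives
```

Because the 4×4 blob is smaller than the 9×9 structuring element, the opening erases it, so the top-hat keeps the blob and zeroes the flat background.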
Preferably, in order to ensure that the background image is pure black, which makes the subsequent foreground image more visible, the initial image with the minimum average pixel value between adjacent frames of the initial image sequence is kept.
Because the background changes only slightly while the bubble features change constantly, the texture features corresponding to the background are basically similar overall. A first fusion weight can therefore be obtained according to the texture similarity of the initial images, and the initial image sequence fused according to this weight; this retains the pixel features of identical regions to the maximum degree and retains differing regions to a corresponding degree, yielding a stable and accurate background template.
The gray level co-occurrence matrix can reflect the comprehensive information of the gray level of the image, such as the direction, the adjacent interval, the change amplitude and the like, and can reflect the grain change information in the image. Therefore, the texture similarity is obtained through the gray level co-occurrence matrix algorithm, which specifically includes:
and obtaining the gray level co-occurrence matrix characteristic of each initial image. The gray level co-occurrence matrix features include:
1) energy: the energy is the sum of the squares of the elements of the gray level co-occurrence matrix, also known as the angular second moment. The energy is the measurement of the uniform change of the texture gray level of the image, and reflects the uniform degree of the gray level distribution of the image and the thickness degree of the texture.
2) Contrast ratio: the contrast is the moment of inertia near the main diagonal of the gray level co-occurrence matrix, and reflects how the values of the matrix are distributed, and the definition of the image and the depth of texture grooves are reflected.
3) Correlation degree: the correlation represents the similarity of the gray level co-occurrence matrix elements in the row or column direction in space, and reflects the correlation of local gray levels of the image.
4) Entropy: the entropy reflects the randomness of image texture, and if all values in the gray level co-occurrence matrix are equal, the maximum value is obtained; if the values in the co-occurrence matrix are not uniform, the entropy becomes small.
The similarity vector of the gray level co-occurrence matrix features of adjacent-frame initial images is taken as the texture similarity, denoted Z_i(s_{γi}, s_{εi}, s_{∈i}, s_{σi}), where Z_i is the ith texture similarity, s_{γi} is the similarity of the energy eigenvalues of the gray level co-occurrence matrix, s_{εi} the similarity of the contrast eigenvalues, s_{∈i} the similarity of the correlation eigenvalues, and s_{σi} the similarity of the entropy eigenvalues.
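The four GLCM features listed above (energy, contrast, correlation, entropy) can be computed with a small hand-rolled co-occurrence matrix. This sketch uses a single horizontal neighbour offset and 8 gray levels, both assumptions not specified by the patent.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray level co-occurrence matrix for the horizontal offset (0, 1),
    plus the four texture features used above."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1                     # count horizontal co-occurrences
    P /= P.sum()                         # normalise to probabilities
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    nz = P[P > 0]
    return {
        "energy":      (P ** 2).sum(),            # angular second moment
        "contrast":    ((i - j) ** 2 * P).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * P).sum()
                        / (sd_i * sd_j + 1e-12)),
        "entropy":     -(nz * np.log(nz)).sum(),
    }

flat = np.ones((16, 16)) * 200   # nearly uniform texture
flat[0, 0] = 0                   # one outlier so quantisation is non-trivial
feats = glcm_features(flat)
# a near-uniform image has energy close to 1 and small contrast/entropy
```

Comparing these four features between adjacent frames (e.g. as per-feature similarity ratios) would give the similarity vector Z_i used above.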
Obtaining the first fusion weight according to the texture similarity comprises:
obtaining the average texture similarity of the initial image sequence, noted as Z0(sγ0,sε0,s∈0,sσ0). And establishing a difference matrix according to the difference between the texture similarity and the average texture similarity to obtain a plurality of difference eigenvectors and difference eigenvalues. In the embodiment of the invention, the difference value between the texture similarity and the average texture similarity is used as the difference to establish the difference matrix.
Obtain the number ratio of the initial images corresponding to each texture similarity in the initial image sequence. The larger a difference eigenvalue is, the more important the corresponding difference eigenvector is, i.e. the larger the share that the background-related pixels under that eigenvector occupy in the background fusion process. Thus, the first fusion weight is obtained from the texture similarity, the number ratio and the difference eigenvalue through the first fusion weight formula:

α = Σ_{j=1}^{k} (τ_j / τ) · c_j · s_j

where α is the first fusion weight, k is the number of difference feature vectors, c_j is the number ratio corresponding to the jth difference feature vector, s_j is the texture similarity corresponding to the jth difference feature vector, τ_j is the jth difference eigenvalue, and τ is the sum of the difference eigenvalues. It should be noted that c_j·s_j expands to (c_{j1}·s_{γj} + c_{j2}·s_{εj} + c_{j3}·s_{∈j} + c_{j4}·s_{σj}).
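The first fusion weight formula reduces to an eigenvalue-weighted sum, which can be sketched directly. The numeric values below are hypothetical examples, not data from the patent.

```python
import numpy as np

def first_fusion_weight(c, s, tau):
    """alpha = sum_j (tau_j / tau) * c_j * s_j, where c[j] is the
    image-count ratio, s[j] the texture similarity and tau[j] the
    difference eigenvalue of the jth difference feature vector."""
    c, s, tau = map(np.asarray, (c, s, tau))
    return float(np.sum(tau / tau.sum() * c * s))

# hypothetical values for k = 3 difference feature vectors
alpha = first_fusion_weight(c=[0.5, 0.3, 0.2],
                            s=[0.9, 0.8, 0.7],
                            tau=[4.0, 1.0, 1.0])
```

Since each c_j, s_j and τ_j/τ lies in [0, 1], α also stays in [0, 1], as a fusion weight should.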
Fusing the initial image sequence according to the first fusion weight to obtain a background template, wherein the background template comprises: fusing the initial image sequence by a first fusion formula, the first fusion formula comprising:
f_{i,i+1} = α·f_i + (1−α)·f_{i+1}

where f_{i,i+1} is the background template, f_i is the ith-frame initial image, f_{i+1} is the (i+1)th-frame initial image, and α is the first fusion weight. The initial image sequence is fused into the background template through this first fusion formula.
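One plausible reading of the first fusion formula is a pairwise recursive fusion across the whole sequence, carrying the running result forward as f_i. The patent does not spell out the iteration order, so this is a sketch under that assumption.

```python
import numpy as np

def fuse_background(frames, alpha):
    """Recursive pairwise fusion f_{i,i+1} = alpha*f_i + (1-alpha)*f_{i+1}
    over the initial image sequence, yielding the background template."""
    fused = frames[0].astype(float)
    for nxt in frames[1:]:
        fused = alpha * fused + (1.0 - alpha) * nxt
    return fused

# three toy constant frames; alpha = 0.6 keeps more of the running result
seq = [np.full((4, 4), v, dtype=float) for v in (10, 20, 30)]
bg = fuse_background(seq, alpha=0.6)
```

With constant frames the result is easy to trace by hand: 10 → 0.6·10 + 0.4·20 = 14 → 0.6·14 + 0.4·30 = 20.4.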
Step S2: extracting foreground images in the initial image sequence according to the background template to obtain a foreground image sequence; and fusing the foreground image sequence according to the first fusion weight to obtain a first fusion image, and stopping fusing when the pixel point in the first fusion image reaches the pixel value threshold.
Each initial image is subtracted from the background template to obtain the foreground image sequence. Because bubble motion is a dynamic process, the foreground images contain the bubble features together with a large amount of noise information.
Preferably, in order to obtain clear bubble characteristics, the foreground image with the largest average pixel value between adjacent frames of the foreground image sequence is reserved, and obvious contrast is generated between the foreground image and the background template.
And fusing the foreground image sequence according to the first fusion weight to obtain a first fusion image. To highlight the bubble features, the fusion formula of the first fused image is:
l_{i,i+1} = (1−α)·l_i + α·l_{i+1}

where l_{i,i+1} is the first fused image, l_i is the ith-frame foreground image, l_{i+1} is the (i+1)th-frame foreground image, and α is the first fusion weight. Random noise is generated in the process of image superposition, so the larger the first fusion weight is, the faster the random noise disappears during accumulation.
In the embodiment of the invention, the mean value filtering is carried out on the first fusion image, so that the pixel gray value in the image tends to be uniform.
And recording the pixel value of a pixel point in a pixel connected domain in each frame of first fusion image, and stopping fusion when detecting that the pixel point in the first fusion image reaches a pixel value threshold value. In the present embodiment, the pixel value threshold is set to 255.
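The threshold-controlled accumulation can be sketched as below. The patent only describes the stopping rule informally, so the additive α-weighted superposition used here is one possible reading, not the patent's exact update rule.

```python
import numpy as np

def accumulate_until_threshold(fg_frames, alpha, threshold=255):
    """Superpose alpha-weighted foreground frames, clipping at the
    pixel value threshold, and stop fusing as soon as any pixel
    reaches that threshold (here 255, as in the embodiment)."""
    acc = np.zeros_like(fg_frames[0], dtype=float)
    used = 0
    for frame in fg_frames:
        acc = np.clip(acc + alpha * frame, 0, threshold)
        used += 1
        if acc.max() >= threshold:
            break  # a pixel hit the threshold: stop fusing
    return acc, used

# ten identical toy foreground frames of value 100
frames = [np.full((3, 3), 100.0) for _ in range(10)]
fused, n = accumulate_until_threshold(frames, alpha=0.8)
```

Each step adds 0.8·100 = 80, so the threshold 255 is reached on the fourth frame and the remaining six frames are never fused.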
Step S3: acquiring the pixel category in the first fusion image; setting the pixel point in each pixel category as the average pixel value of the corresponding pixel category; taking the frequency of occurrence of each pixel category as a second fusion weight, and continuously fusing each pixel category of the first fusion image according to the first fusion weight and the second fusion weight to obtain a second fusion image; when the pixel point in the second fusion image reaches the pixel value threshold, negating the second fusion image, removing fixed noise, and obtaining a third fusion image; fusing the foreground image sequence with the third fused image; and when the pixel point in the third fused image reaches the pixel value threshold, negating the third fused image, removing random noise and obtaining a bubble characteristic image.
In the process of foreground image accumulation and fusion, three pixel categories exist: fixed noise, bubble features and random noise. The fixed noise exists in the image all the time, so its accumulated pixel value reaches the pixel value threshold first during superposition and fusion. The random noise appears randomly during superposition and fusion, so its accumulated pixel value reaches the threshold last. The bubble features correspond to a motion process during the air tightness test, so their accumulation frequency lies between those of the fixed noise and the random noise.
And acquiring three pixel categories in the first fusion image by using a mean value clustering algorithm. And obtaining the average pixel value of each pixel category, and setting the pixel point in each pixel category as the average pixel value. And obtaining a second fusion weight according to the frequency of each pixel category appearing in the first fusion image, and continuously fusing each pixel category on the basis of the first fusion image according to the first fusion weight and the second fusion weight to obtain a second fusion image. The method specifically comprises the following steps:
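The mean value clustering step can be illustrated with a tiny 1-D k-means (k = 3) over accumulated pixel values; the percentile initialisation is an assumption for the sketch, and the sample values are hypothetical.

```python
import numpy as np

def three_pixel_classes(values, iters=20):
    """1-D k-means with k = 3, separating fixed noise, bubble-feature
    and random-noise pixels by accumulated pixel value; every pixel
    can then be replaced by its class mean, as in step S3."""
    v = np.asarray(values, dtype=float)
    centers = np.percentile(v, [10, 50, 90])   # spread-out initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centers[k] = v[labels == k].mean()
    return labels, centers

vals = np.array([250., 252., 255.,   # fixed noise: highest accumulation
                 120., 130., 125.,   # bubble features: middle
                 10., 5., 8.])       # random noise: lowest accumulation
labels, centers = three_pixel_classes(vals)
```

The three groups land in three distinct clusters, and `centers` holds the average pixel value that each category's pixels are set to.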
and continuously fusing each pixel category through a second fusion formula, wherein the second fusion formula is as follows:
G_{i+1,j} = (1−α)·G_{i,j} + α·(1+k_j)·G′_{i,j}

where G_{i+1,j} is the pixel value of the jth pixel category in the (i+1)th-frame second fused image, G_{i,j} is the pixel value of the jth pixel category in the ith-frame second fused image, α is the first fusion weight, k_j is the second fusion weight of the jth pixel category, and G′_{i,j} is the pixel value of the jth pixel category in the ith-frame foreground image sequence. When i = 0, G_{0,j} is taken from the first fused image.
The second fusion weight, which is a weight in the second fusion formula, can expand the difference in pixel values of the three classes, thereby making it easier to extract the bubble feature.
In the process of accumulating the second fused image, the fixed noise reaches the pixel value threshold first. Therefore, when the pixel points in the second fused image are detected to reach the pixel value threshold, the second fused image is negated, that is, the pixel value of each pixel point in the second fused image is subtracted from 255, yielding the third fused image. At this point, the pixels corresponding to the fixed noise all have pixel value 0 and become background. Since the accumulated pixel value of the random noise is smallest in the second fused image, it is largest in the third fused image after negation.
And continuously fusing the third fused image with the foreground image sequence, and when the pixel point in the third fused image reaches the pixel value threshold, negating the third fused image, and removing random noise, so that the random noise becomes a background image, and a bubble characteristic image is obtained. The bubble characteristic image is overlapped by a plurality of images without the influence of fixed noise and random noise, the bubble characteristic is enhanced, and the bubble characteristic image with obvious characteristics under turbid liquid is obtained.
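The two negation steps are simple pixel-value inversions; a minimal sketch, with hypothetical pixel values:

```python
import numpy as np

def invert(img, threshold=255):
    """Image negation used twice in step S3: once a noise class has
    saturated at the threshold, threshold - img sends it to pixel
    value 0, i.e. it becomes background."""
    return threshold - img

# fixed noise has saturated at 255; bubble features sit mid-range
second_fused = np.array([[255., 255.],
                         [120., 130.]])
third_fused = invert(second_fused)
# saturated fixed-noise pixels drop to 0; bubble pixels stay mid-range
```

Applying the same inversion again, after the random noise has in turn saturated in the third fused image, removes the random noise and leaves the bubble feature image.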
It should be noted that the fusion of the third fused image with the foreground image sequence still uses the second fusion formula, with the frequencies of the bubble features and the random noise in the formula exchanged, so as to ensure that the random noise in the third fused image reaches the pixel value threshold first.
In summary, in the embodiments of the present invention, the first fusion weight is obtained through the texture similarity in the initial image, and the background template is obtained by fusing the initial image sequence according to the first fusion weight. And extracting a foreground image sequence in the initial image sequence through a background template, fusing the foreground image sequence through a first fusion weight, and controlling fusion stop through a pixel value threshold value to obtain a first fusion image. And continuously fusing the foreground images through the characteristics of three pixel categories of fixed noise, bubble characteristics and random noise in the first fused image to obtain a second fused image. And controlling the fusion process through a pixel value threshold, and removing fixed noise and random noise through image negation twice to obtain a bubble characteristic image.
The invention also provides a bubble characteristic enhancement system in the airtightness detection based on the artificial intelligence, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, and is characterized in that the processor realizes any step of the bubble characteristic enhancement method in the airtightness detection based on the artificial intelligence when executing the computer program.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A bubble characteristic enhancement method in air tightness detection based on artificial intelligence is characterized by comprising the following steps:
acquiring an initial image sequence; obtaining texture similarity of the initial images of adjacent frames, and obtaining a first fusion weight according to the texture similarity; fusing the initial image sequence according to the first fusion weight to obtain a background template;
extracting foreground images in the initial image sequence according to the background template to obtain a foreground image sequence; fusing the foreground image sequence according to the first fusion weight to obtain a first fused image, and stopping fusing when a pixel point in the first fused image reaches a pixel value threshold;
acquiring the pixel category in the first fusion image; the pixel classes include fixed noise, bubble characteristics, and random noise; setting the pixel point in each pixel category as the average pixel value corresponding to the pixel category; taking the occurrence frequency of each pixel category as a second fusion weight, and fusing each pixel category of the first fusion image according to the first fusion weight and the second fusion weight to obtain a second fusion image; when the pixel point in the second fusion image reaches the pixel value threshold, negating the second fusion image to obtain a third fusion image; fusing the foreground image sequence with the third fused image; and when the pixel point in the third fused image reaches the pixel value threshold value, negating the third fused image to obtain a bubble feature image.
2. The method of claim 1, wherein the acquiring the initial image sequence further comprises:
graying each initial image and then performing a top-hat operation.
3. The method of claim 1, wherein, after the acquiring of the initial image sequence, the method further comprises:
retaining, from each pair of adjacent frames of the initial image sequence, the initial image with the smaller average pixel value;
and after the obtaining of the foreground image sequence, the method further comprises:
retaining, from each pair of adjacent frames of the foreground image sequence, the foreground image with the larger average pixel value.
4. The method of claim 1, wherein the obtaining the texture similarity of each of the initial images comprises:
obtaining gray-level co-occurrence matrix features of each initial image, the gray-level co-occurrence matrix features comprising energy, contrast, correlation and entropy; and taking the similarity between the gray-level co-occurrence matrix feature vectors of the initial images of adjacent frames as the texture similarity.
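The gray-level co-occurrence matrix features named in claim 4 can be sketched without external dependencies; the horizontal offset of 1, the 8-level quantization, and the cosine similarity used for `texture_similarity` are all implementation assumptions, since the claim does not fix them:

```python
import numpy as np

def glcm(img, levels=8):
    """Horizontal-offset-1, symmetric, normalized co-occurrence matrix."""
    q = (img.astype(np.float64) / (img.max() + 1e-12) * (levels - 1)).round().astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1  # symmetric counting
    return m / m.sum()

def glcm_features(img, levels=8):
    """Energy, contrast, correlation and entropy of the GLCM."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    contrast = np.sum(p * (i - j) ** 2)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j + 1e-12)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([energy, contrast, correlation, entropy])

def texture_similarity(img1, img2):
    """Cosine similarity of adjacent-frame GLCM feature vectors."""
    a, b = glcm_features(img1), glcm_features(img2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A library implementation such as scikit-image's `graycomatrix`/`graycoprops` could replace the hand-rolled matrix, though entropy would still need to be computed separately.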
5. The method of claim 4, wherein the obtaining the first fusion weight according to the texture similarity comprises:
obtaining the average texture similarity of the initial image sequence; establishing a difference matrix from the differences between the texture similarities and the average texture similarity, and obtaining a plurality of eigenvectors and eigenvalues of the difference matrix;
obtaining the number ratio, in the initial image sequence, of the initial images corresponding to each texture similarity;
and obtaining the first fusion weight according to the texture similarities, the number ratios and the eigenvalues.
6. The method of claim 5, wherein the obtaining the first fusion weight according to the texture similarity, the number ratio and the eigenvalue comprises: obtaining the first fusion weight by a first fusion weight formula, the first fusion weight formula comprising:
α = Σ_{j=1}^{k} c_j · s_j · τ_j / τ
where α is the first fusion weight, k is the number of eigenvectors, c_j is the number ratio corresponding to the j-th eigenvector, s_j is the texture similarity corresponding to the j-th eigenvector, τ_j is the j-th eigenvalue, and τ is the sum of the k eigenvalues.
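Assuming the first fusion weight combines the number ratios c_j, texture similarities s_j and eigenvalues τ_j as α = Σ_j c_j·s_j·τ_j / Σ_j τ_j (an assumed reading; the granted text renders the equation only as an image), a one-line sketch:

```python
import numpy as np

def first_fusion_weight(c, s, tau):
    """Hypothetical first fusion weight: alpha = sum(c*s*tau) / sum(tau).

    c: number ratios, s: texture similarities, tau: eigenvalues of the
    difference matrix. The exact combination is an assumption.
    """
    c, s, tau = (np.asarray(x, dtype=np.float64) for x in (c, s, tau))
    return float(np.sum(c * s * tau) / np.sum(tau))
```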
7. The method as claimed in claim 1 or 6, wherein the fusing the initial image sequence according to the first fusion weight to obtain a background template comprises: fusing the initial image sequence by a first fusion formula, the first fusion formula comprising:
f_{i,i+1} = α · f_i + (1 - α) · f_{i+1}
where f_{i,i+1} is the fused image of frames i and i+1, f_i is the initial image of the i-th frame, f_{i+1} is the initial image of the (i+1)-th frame, and α is the first fusion weight.
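The claim-7 rule is a standard alpha blend of adjacent frames; in plain NumPy (equivalent to OpenCV's `cv2.addWeighted` with zero offset) it might look like:

```python
import numpy as np

def fuse_pair(f_i, f_next, alpha):
    """Claim-7 pairwise fusion: f_{i,i+1} = alpha*f_i + (1-alpha)*f_{i+1}."""
    return alpha * np.asarray(f_i, dtype=np.float64) \
        + (1.0 - alpha) * np.asarray(f_next, dtype=np.float64)
```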
8. The method of claim 1, wherein the fusing each pixel class of the first fused image according to the first fusion weight and the second fusion weight to obtain a second fused image comprises: obtaining the second fusion image through a second fusion formula, wherein the second fusion formula is as follows:
G_{i+1,j} = (1 - α) · G_{i,j} + α · (1 + k_j) · G'_{i,j}
where G_{i+1,j} is the pixel value of the j-th pixel class in the (i+1)-th frame of the second fused image, G_{i,j} is the pixel value of the j-th pixel class in the i-th frame of the second fused image, α is the first fusion weight, k_j is the second fusion weight of the j-th pixel class, and G'_{i,j} is the pixel value of the j-th pixel class in the i-th frame of the foreground image sequence.
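One update step of the claim-8 formula, applied per pixel class; representing each class by a single pixel value (as the averaging step of claim 1 implies) is an assumption of this sketch:

```python
import numpy as np

def second_fusion_step(G_prev, G_fg, alpha, k):
    """Claim-8 update: G_{i+1,j} = (1-alpha)*G_{i,j} + alpha*(1+k_j)*G'_{i,j}.

    G_prev, G_fg: one pixel value per class j in the second fused image
    and in the foreground frame; k: second fusion weights (the occurrence
    frequency of each class); alpha: first fusion weight.
    """
    G_prev, G_fg, k = (np.asarray(x, dtype=np.float64) for x in (G_prev, G_fg, k))
    return (1.0 - alpha) * G_prev + alpha * (1.0 + k) * G_fg
```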
9. The method of claim 1, wherein the obtaining of the pixel classes in the first fused image comprises:
obtaining the three pixel classes of the first fused image by using a K-means clustering algorithm.
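Reading the claim's mean-value clustering as K-means, a minimal deterministic 1-D sketch over pixel values (quantile initialization and the iteration count are implementation choices, not taken from the patent):

```python
import numpy as np

def cluster_pixels(values, k=3, iters=20):
    """Plain 1-D K-means: split pixel values into k classes.

    Returns (labels, centers); centers are initialized from quantiles of
    the unique values so the result is deterministic.
    """
    v = np.asarray(values, dtype=np.float64).ravel()
    centers = np.quantile(np.unique(v), np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters at old center
                centers[j] = v[labels == j].mean()
    return labels, centers
```

The three resulting clusters would then be mapped to the fixed-noise, bubble-feature and random-noise classes of claim 1, with each pixel replaced by its class mean.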
10. An artificial-intelligence-based bubble feature enhancement system for air tightness detection, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
CN202110932646.3A 2021-08-13 2021-08-13 Bubble feature enhancement method and system in air tightness detection based on artificial intelligence Withdrawn CN113610740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932646.3A CN113610740A (en) 2021-08-13 2021-08-13 Bubble feature enhancement method and system in air tightness detection based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN113610740A true CN113610740A (en) 2021-11-05

Family

ID=78340735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932646.3A Withdrawn CN113610740A (en) 2021-08-13 2021-08-13 Bubble feature enhancement method and system in air tightness detection based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113610740A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842027A (en) * 2022-04-24 2022-08-02 南通真馨家纺有限公司 Fabric defect segmentation method and system based on gray level co-occurrence matrix

Citations (5)

Publication number Priority date Publication date Assignee Title
US4293930A (en) * 1979-10-24 1981-10-06 Sperry Corporation Bubble detection system
WO2008129650A1 (en) * 2007-04-13 2008-10-30 Toyo Glass Co., Ltd. Container mouth portion defect inspection method and device
CN104867117A (en) * 2015-05-13 2015-08-26 华中科技大学 Flow field image preprocessing method and system thereof
CN112184644A (en) * 2020-09-21 2021-01-05 河南颂达信息技术有限公司 Air tightness bubble detection method and device based on multiple illumination intensities
CN112348757A (en) * 2020-11-11 2021-02-09 赵华 Noise reduction method and device in air tightness detection based on artificial intelligence


Non-Patent Citations (2)

Title
QIANWEN WANG ET AL.: "Bubble recognizing and tracking in a plate heat exchanger by using image processing and convolutional neural network", International Journal of Multiphase Flow, 13 February 2021 (2021-02-13) *
LI Pengfan: "Design and Research of a Machine-Vision-Based Underwater Exhaust Gas Bubble Measurement System", China Master's Theses Full-text Database (Engineering Science and Technology II), vol. 2013, no. 2, 15 December 2013 (2013-12-15) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211105