CN116612126B - Container disease vector biological detection early warning method based on artificial intelligence - Google Patents


Info

Publication number
CN116612126B
Authority
CN
China
Prior art keywords
image
value
gray
pixel point
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310898052.4A
Other languages
Chinese (zh)
Other versions
CN116612126A
Inventor
滕新栋
杨宇
贺骥
王渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao International Travel Health Care Center Qingdao Customs Port Outpatient Department
Original Assignee
Qingdao International Travel Health Care Center Qingdao Customs Port Outpatient Department
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao International Travel Health Care Center Qingdao Customs Port Outpatient Department
Priority to CN202310898052.4A
Publication of CN116612126A
Application granted
Publication of CN116612126B
Legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06N 3/0464: Neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/54: Extraction of image or video features relating to texture
    • G06T 2207/10116: Image acquisition modality: X-ray image
    • G06T 2207/20081: Special algorithmic details: training; learning
    • G06T 2207/20084: Special algorithmic details: artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image data processing, and in particular to an artificial-intelligence-based container disease vector detection and early warning method. Through an adaptive image segmentation result, the invention detects disease vector organisms in containers more accurately.

Description

Container disease vector biological detection early warning method based on artificial intelligence
Technical Field
The invention relates to the technical field of image data processing, and in particular to an artificial-intelligence-based container disease vector detection and early warning method.
Background
As global trade has grown, containers have become an important means of transporting goods. Insects, rodents or other organisms may remain inside a container, may carry pathogens such as malaria parasites, dengue viruses and plague bacteria, and pose a risk of transmission to humans or animals. To reduce this risk of disease transmission, containers must be inspected for disease vector organisms. At present, X-ray scanning is generally used for container vector detection: the interior of the container is imaged using radiation technologies such as X-rays or gamma rays, the density and shape of the objects inside are detected, and hidden organisms can be found. Image segmentation is then performed on the container's X-ray scan image, and features are extracted from the segmentation result to detect possible vector organisms.
The GrabCut algorithm is widely used in image segmentation because it is fast, effective and convenient to use. Its segmentation quality, however, is highly sensitive to the smoothing parameter, especially when relatively slender or narrow edges are involved. A larger smoothing parameter makes the segmentation result smoother, but may over-smooth it and lose image detail; a smaller smoothing parameter preserves more detail, but may leave the segmentation boundary insufficiently continuous. In the prior art the smoothing parameter is generally supplied to the GrabCut algorithm as fixed prior information, so robustness is poor when different X-ray scan images are segmented; moreover, object stacking and noise inside the container produce irrelevant tiny edges that degrade the subsequent segmentation, so the accuracy of container disease vector detection suffers. Prior-art methods that segment the container's X-ray scan image with the GrabCut algorithm therefore detect container vector organisms with poor accuracy.
Disclosure of Invention
In order to solve the technical problem that prior-art methods which segment the container's X-ray scan image with the GrabCut algorithm detect container disease vector organisms with poor accuracy, the invention aims to provide an artificial-intelligence-based container disease vector detection and early warning method. The adopted technical scheme is as follows:
the invention provides a container vector biological detection and early warning method based on artificial intelligence, which comprises the following steps:
acquiring an X-ray scanning image of the container; acquiring an up-sampling image and a down-sampling image corresponding to the X-ray scanning image;
obtaining a corresponding foreground region of the X-ray scanning image according to the gray gradient distribution condition and the gray value distribution condition of pixel points in the downsampled image; according to the local gray level distribution difference of the pixel points in the foreground region, each continuous texture region in the foreground region is obtained; according to the difference of each pixel point in the continuous texture region in the spatial position and gray gradient between the up-sampling image and the down-sampling image, obtaining the corresponding scale feature difference of each pixel point in the continuous texture region; obtaining the edge confidence corresponding to each continuous texture region according to the scale feature difference and the number of pixel points in the continuous texture region;
Obtaining smoothing parameters of the X-ray scanning image according to the integral gray scale difference and the edge confidence coefficient of each continuous texture region; GrabCut image segmentation is carried out on the X-ray scanning image according to the smoothing parameters, and a self-adaptive image segmentation result of the X-ray scanning image is obtained;
and carrying out container vector biological detection and early warning according to the self-adaptive image segmentation result.
Further, the method for acquiring the foreground region of the X-ray scanning image comprises the following steps:
calculating the gray value average value of all pixel points in the downsampled image, and taking the difference value between the gray value of each pixel point and the gray value average value as a first reference judgment value of each pixel point;
acquiring gray gradient values of all pixel points in the downsampled image; calculating the gray gradient value average value of all the pixel points in the preset first neighborhood range of each pixel point, and taking the difference value between the gray gradient value of each pixel point and the corresponding gray gradient value average value as a second reference judgment value of each pixel point;
in the downsampled image, taking a pixel point which meets the condition that a first reference judgment value is smaller than or equal to 0 and a second reference judgment value is larger than 0 as a foreground pixel point; taking the areas corresponding to all the foreground pixel points in the downsampled image as the foreground areas corresponding to the downsampled image; and mapping the foreground region corresponding to the downsampled image into the X-ray scanning image to obtain the foreground region of the X-ray scanning image.
Further, the method for acquiring the continuous texture region comprises the following steps:
selecting a pixel point in the foreground area as a target texture pixel point; in an X-ray scanning image, taking the target texture pixel point as a growth point to perform regional growth, taking the pixel point with the gray value meeting the preset growth condition in the preset second neighborhood range of the growth point as a new growth point to perform regional growth, and stopping regional growth until the gray values of all the pixel points in the preset second neighborhood range of all the new growth points do not meet the preset growth condition, so as to obtain a corresponding continuous texture region; the preset growth conditions include: the negative correlation map value of the difference between the gray values of the corresponding growing points is greater than the preset growing threshold.
Further, the method for acquiring the scale feature differences comprises the following steps:
taking any pixel point in any continuous texture area as a target pixel point;
placing the up-sampling image and the down-sampling image in the same coordinate system, wherein in the coordinate system, a coordinate point after the target pixel point is mapped to the down-sampling image is used as a down-sampling coordinate point, and a coordinate point after the target pixel point is mapped to the up-sampling image is used as an up-sampling coordinate point; taking the distance between the up-sampling coordinate point and the down-sampling coordinate point as a spatial position difference characteristic value corresponding to the target pixel point;
Acquiring gray gradient values of all pixel points in the up-sampling image and the down-sampling image, and mapping a target pixel point to the corresponding gray gradient value after the up-sampling image to serve as the up-sampling gray gradient value; mapping the target pixel point to a corresponding gray gradient value after downsampling the image to serve as a downsampling gray gradient value; taking the difference between the up-sampling gray gradient value and the down-sampling gray gradient value as a gray gradient difference characteristic value corresponding to a target pixel point;
and obtaining the scale characteristic difference corresponding to the target pixel point according to the spatial position difference characteristic value and the gray gradient difference characteristic value, wherein the spatial position difference characteristic value and the scale characteristic difference are in negative correlation, and the gray gradient difference characteristic value and the scale characteristic difference are in negative correlation.
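A minimal sketch of the scale feature difference for one pixel, assuming the up- and down-sampled images differ from the original by a factor of 2 and using exp(-x) as the negative-correlation mapping (both are illustrative assumptions; the patent does not fix the mapping):

```python
import numpy as np

def scale_feature_difference(pt, grad_up, grad_down, up_factor=2, down_factor=2):
    """Scale feature difference of one foreground pixel `pt` (row, col).
    Both the spatial gap between its up- and down-sampled coordinates and
    the gap between the two gradient magnitudes pass through exp(-x), so
    the result is negatively correlated with each, as the claim requires."""
    y, x = pt
    up_pt = (y * up_factor, x * up_factor)        # coordinate mapped into the up-sampled image
    dn_pt = (y // down_factor, x // down_factor)  # coordinate mapped into the down-sampled image
    spatial = np.hypot(up_pt[0] - dn_pt[0], up_pt[1] - dn_pt[1])
    grad_diff = abs(float(grad_up[up_pt]) - float(grad_down[dn_pt]))
    return np.exp(-spatial) * np.exp(-grad_diff)
```

Pixels whose position and gradient barely change across scales score near 1 (stable, credible edges); fine-texture pixels that shift or lose gradient score near 0.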
Further, the method for obtaining the edge confidence comprises the following steps:
for any one continuous texture region:
taking the product of the number of the pixel points in the continuous texture area and the scale characteristic difference of each pixel point as an edge scale characteristic value corresponding to each pixel point in the continuous texture area; and accumulating the edge scale characteristic values of all the pixel points in the continuous texture region to obtain the edge confidence corresponding to the continuous texture region.
Further, the method for acquiring the smoothing parameters comprises the following steps:
taking the gray value average value of the pixel points in each continuous texture area as the gray characteristic value of each continuous texture area; taking the variance of gray characteristic values of all continuous texture areas as the contrast of an X-ray scanning image;
taking a continuous texture region with the corresponding edge confidence coefficient larger than a preset confidence coefficient threshold value as a reference texture region; taking the accumulated value of the normalized value of the edge confidence corresponding to all the reference texture areas as the texture richness of the X-ray scanning image;
and taking the product of the contrast and the texture richness as a smoothing parameter of an X-ray scanning image.
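A sketch of the smoothing-parameter computation, assuming max-normalization for the edge confidences (the patent says "normalized value" without specifying the scheme):

```python
import numpy as np

def smoothing_parameter(regions, confidences, conf_thresh):
    """Smoothing parameter of the X-ray image: contrast (variance of the
    per-region mean gray values) times texture richness (sum of normalized
    edge confidences over regions whose confidence exceeds the threshold).
    `regions` is a list of 1-D arrays of gray values, one per region."""
    feats = np.array([np.mean(r) for r in regions])  # gray feature value per region
    contrast = np.var(feats)                         # contrast of the image
    conf = np.asarray(confidences, dtype=float)
    norm = conf / conf.max() if conf.max() > 0 else conf
    richness = norm[conf > conf_thresh].sum()        # texture richness
    return contrast * richness
```

Images with strong gray contrast and many credible texture regions thus get a larger smoothing parameter, favoring smoother GrabCut boundaries; flat or edge-poor images get a smaller one.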
Further, the method for acquiring the adaptive image segmentation result comprises the following steps:
and performing image segmentation on the X-ray scanning image by taking the smoothing parameter as a lambda parameter in a GrabCut algorithm to obtain an adaptive image segmentation result of the X-ray scanning image.
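OpenCV's built-in cv2.grabCut does not expose the lambda (smoothness) weight, so applying the adaptive parameter presumes a GrabCut implementation whose graph edge weights are accessible (an assumption). The sketch below shows where lambda enters: it scales the pairwise smoothness term of the GrabCut energy E = U + lambda * V, with beta chosen as in the original GrabCut formulation and only horizontal neighbors for brevity:

```python
import numpy as np

def pairwise_energy(gray, lam, beta=None):
    """Smoothness (pairwise) edge weights lam * exp(-beta * (g_p - g_q)^2)
    for horizontally adjacent pixels of a grayscale image. `lam` is the
    adaptive smoothing parameter; beta defaults to 1 / (2 * <diff^2>)."""
    g = gray.astype(np.float64)
    diff2 = (g[:, 1:] - g[:, :-1]) ** 2
    if beta is None:
        beta = 1.0 / (2.0 * max(diff2.mean(), 1e-12))
    return lam * np.exp(-beta * diff2)
```

A larger lambda raises the cost of cutting between similar pixels, yielding smoother segmentation boundaries, which is the behavior the adaptive parameter is meant to tune per image.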
Further, the performing container disease vector biological detection and early warning according to the adaptive image segmentation result comprises:
acquiring an image segmentation area in the self-adaptive image segmentation result, inputting the image segmentation area into a trained convolutional neural network, and outputting a container vector biological detection result; when the container vector organism detection result contains vector organisms, sending out early warning; and when the container vector organism detection result does not contain vector organisms, no early warning is sent out.
Further, the acquiring the up-sampling image and the down-sampling image corresponding to the X-ray scanning image includes:
taking an image with the lowest resolution in a plurality of images with preset sampling layers, which are obtained by downsampling the X-ray scanning image through a pyramid, as a downsampled image;
and taking the image with highest resolution in a plurality of images with preset sampling layers, which are obtained by upsampling the X-ray scanning image through a pyramid, as an upsampled image.
The invention has the following beneficial effects:
in order to reduce the influence of the irrelevant tiny edges produced in the X-ray scan image by object stacking inside the container on the subsequent extraction of foreground objects, the invention obtains a downsampled image of the X-ray scan image and combines its gray gradient information and gray information to obtain the corresponding foreground region, thereby reducing the influence of irrelevant tiny edges on subsequent image segmentation and improving segmentation accuracy. Considering that fine textures change morphologically across sampled images of different scales, the invention combines, for each pixel in a continuous texture region, its spatial-position and gray-gradient differences between the up-sampled and down-sampled images with the number of pixels in the region (a characteristic of fine texture regions) to obtain an edge confidence characterizing each continuous texture region. The smoothing parameter for image segmentation is then obtained adaptively from the edge confidence and the overall gray difference characterizing the gray richness of the continuous texture regions, so that the adaptive segmentation result obtained by performing GrabCut image segmentation on the X-ray scan image with this smoothing parameter is better and container vector detection is more accurate. In conclusion, by obtaining adaptive smoothing parameters for GrabCut segmentation of the container's X-ray scan image, the invention detects container vector organisms more accurately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a container disease vector biological detection and early warning method based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve its intended aim, a detailed description of the specific implementation, structure, features and effects of the artificial-intelligence-based container disease vector detection and early warning method provided by the invention is given below with reference to the accompanying drawings and the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a specific scheme of a container disease vector biological detection early warning method based on artificial intelligence, which is specifically described below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a container disease vector biological detection and early warning method based on artificial intelligence according to an embodiment of the invention is shown, and the method includes:
step S1: acquiring an X-ray scanning image of the container; and obtaining an up-sampling image and a down-sampling image corresponding to the X-ray scanning image.
The embodiment of the invention aims to provide an artificial-intelligence-based container disease vector detection and early warning method: according to the texture details in the container's X-ray scan image, the GrabCut image segmentation algorithm is improved by image processing, so that the improved algorithm segments the container's X-ray scan image more accurately and the accuracy of container vector detection is improved. The first requirement is therefore the image processing object of the embodiment of the invention: an X-ray scan image that characterizes the internal texture of the container.
The embodiment of the invention first acquires an X-ray scan image of the container. In the embodiment of the invention, an X-ray container scanning imager scans and images the container, yielding an initial scan image of the container. Since gray information in the X-ray scan image needs to be analyzed subsequently, the embodiment of the invention grays the initial scan image to obtain an X-ray scan gray image. In addition, due to the external environment and the imager itself, noise inevitably appears in the X-ray scan gray image; to reduce its influence on the subsequent analysis, the X-ray scan gray image is denoised, in the embodiment of the invention with a median filtering algorithm. It should be noted that the preprocessed X-ray scan gray image is the X-ray scan image referred to in the embodiment of the invention; for convenience of description, all subsequent references to the X-ray scan image mean this preprocessed gray image, which will not be repeated.
It should be further noted that, according to the specific implementation environment, the implementer may select other imaging instruments besides the X-ray container scanning imaging instrument to scan and image the container, which will not be further described herein. In addition, it should be noted that the median filtering algorithm is a prior art well known to those skilled in the art, and belongs to a conventional technology among numerous filtering denoising algorithms in the prior art, and an operator may select other filtering denoising algorithms to perform denoising pretreatment on the X-ray scanning gray image according to a specific implementation environment, which is not further limited and described herein.
The embodiment of the invention needs to segment the X-ray scan image, separating the foreground objects from the background region. However, the interior of a container is typically a stack of objects, so irrelevant tiny edges appear in the X-ray scan image; these interfere with the subsequent segmentation process and must be removed. Since the main characteristic of such edges is that they are tiny, the corresponding fine texture regions change morphologically, and their edges become discontinuous, after sampling at different scales in pyramid sampling. Irrelevant tiny edges can therefore be identified and screened out according to the differences between the sampled images of fine texture regions at different pyramid scales, reducing their influence on subsequent segmentation. The embodiment of the invention accordingly acquires an up-sampled image and a down-sampled image corresponding to the X-ray scan image.
Preferably, acquiring the up-sampled image and the down-sampled image corresponding to the X-ray scanned image includes:
and taking an image with the lowest resolution in a plurality of images with preset sampling layers, which are obtained by downsampling the X-ray scanning image through a pyramid, as a downsampled image. And taking the image with highest resolution in a plurality of images with preset sampling layers, which are obtained by upsampling the X-ray scanning image through the pyramid, as an upsampled image. In the embodiment of the present invention, the number of preset sampling layers is set to 2, and it should be noted that, according to a specific implementation environment, an operator may select the size of the preset number of preset sampling layers by himself, which will not be further described herein. It should be further noted that, the pyramidal downsampling and the pyramidal upsampling are well known in the art, and are not further limited and described herein.
Step S2: obtaining a corresponding foreground region of the X-ray scanning image according to the gray gradient distribution condition and the gray value distribution condition of pixel points in the downsampled image; according to the local gray level distribution difference of the pixel points in the foreground region, each continuous texture region in the foreground region is obtained; according to the difference of each pixel point in the continuous texture region in the spatial position and gray gradient between the up-sampling image and the down-sampling image, obtaining the corresponding scale feature difference of each pixel point in the continuous texture region; and obtaining the edge confidence corresponding to each continuous texture region according to the scale feature difference and the pixel point number in the continuous texture region.
The principle of obtaining the container's X-ray scan image is as follows: X-rays are emitted by one or more X-ray sources, penetrate the container and the objects inside it, and interact with the internal tissues or structures of those objects. Because different materials absorb X-rays to different degrees, different transmittances result; a detector receives the X-rays of varying intensity transmitted through the objects and the container, converts the intensity information into electrical signals, and processes and converts those signals into the corresponding scan image. Since the materials and densities of the objects the X-rays pass through differ, different gray levels or density variations appear in the scan image; that is, the boundary, shape and internal structure of the objects affect the gray features of the scan image. Each object region in the container therefore has corresponding gray features, and the foreground region required by the subsequent GrabCut image segmentation algorithm is separated according to the edge texture details in the X-ray scan image.
However, considering that the X-ray scanned image can generate a plurality of irrelevant tiny texture edges due to the stacking of objects in the container, if the original X-ray scanned image is directly analyzed to extract the foreground region, the corresponding foreground region can be enlarged, and the accuracy of the subsequent image segmentation is affected. And pyramid downsampling is to continuously shrink the original image by reducing the pixel points of the image, so that irrelevant fine texture edges in the original image are lost, and therefore, the embodiment of the invention obtains a corresponding foreground region of the X-ray scanning image according to the gray gradient distribution condition and gray value distribution condition of the pixel points in the downsampled image.
Preferably, the method for acquiring the foreground region of the X-ray scanned image includes:
and calculating the gray value average value of all the pixel points in the downsampled image, and taking the difference value between the gray value of each pixel point and the gray value average value as a first reference judgment value of each pixel point. In order to separate the foreground region from the background region in the X-ray scanned image, first it is necessary to determine the distinguishing features corresponding to the background region and the foreground region. Since the background area in the X-ray scanned image is usually blank, that is, the gray value of the pixel point corresponding to the background area in the downsampled image corresponding to the X-ray scanned image is usually greater than the gray value average value of all the pixel points, and the gray value of the pixel point corresponding to the foreground area with obvious texture details is usually less than or equal to the gray value average value, the first reference determination value of each pixel point is represented by the difference between the gray value of each pixel point and the gray value average value, and the area to which the corresponding pixel point belongs is determined by the first reference determination value. It should be noted that, in addition to obtaining the corresponding first reference determination value through the difference between the gray value and the gray value average value of each pixel point, the first reference determination value may also be obtained through other manners according to a specific implementation environment, for example, the difference between the normalized value of the gray value of each pixel point and the preset gray threshold value is used as the first reference determination value of each pixel point, which is not further described herein.
Acquiring gray gradient values of all pixel points in the downsampled image; calculating the gray gradient value average value of all the pixel points in the preset first neighborhood range of each pixel point, and taking the difference value between the gray gradient value of each pixel point and the corresponding gray gradient value average value as a second reference judgment value of each pixel point. In the X-ray scanning image, only the gray gradient value corresponding to the edge pixel point is generally larger than the gray gradient mean value in the preset first neighborhood range, so that the corresponding edge pixel point can be screened out through the size of the second reference judgment value, and the corresponding foreground area is further obtained. In the embodiment of the invention, the preset first neighborhood range is set to be an eight-neighborhood range, and a sobel operator is adopted when the gray gradient value of each pixel point is calculated in the embodiment of the invention. It should be noted that, the implementer may select the size of the preset first neighborhood according to the specific implementation environment, which will not be further described herein. Furthermore, it should be noted that the sobel operator is a commonly used edge detection operator, and an operator may use other edge detection operators besides the sobel operator to obtain a gray gradient value of each pixel according to a specific implementation environment, and a method for calculating the gray gradient value of each pixel in the image according to the sobel operator is well known to those skilled in the art, and a process for calculating the gray gradient value according to the sobel operator in this and subsequent processes is not further limited and described herein.
In the downsampled image, taking a pixel point which meets the first reference judgment value smaller than or equal to 0 and the second reference judgment value larger than 0 as a foreground pixel point; taking the areas corresponding to all the foreground pixel points in the downsampled image as the foreground areas corresponding to the downsampled image; and mapping the foreground region corresponding to the downsampled image into the X-ray scanning image to obtain the foreground region of the X-ray scanning image. Because the essence of image downsampling is to combine the pixel points in each window with the same size in the original image to obtain each corresponding pixel point, each pixel point in the downsampled image in the embodiment of the invention corresponds to each window of the X-ray scanning image, so that the foreground region in the X-ray scanning image is the region corresponding to all the windows in the downsampled image, and all the pixel points in the foreground region in the downsampled image are mapped to all the windows in the X-ray scanning image. The corresponding foreground region is obtained according to the characteristics of smaller gray value and larger gray gradient value corresponding to the edge pixel points in the downsampled image. And the foreground region obtained by downsampling the image is mapped into the X-ray scanned image to obtain the foreground region corresponding to the X-ray scanned image in consideration of the need for subsequent analysis in the X-ray scanned image.
In the embodiment of the present invention, the method for obtaining the first reference judgment value of the $i$-th pixel point in the downsampled image is expressed as the following formula:

$$D_i^1 = g_i - \bar{g}$$

wherein $D_i^1$ is the first reference judgment value of the $i$-th pixel point in the downsampled image, $g_i$ is the gray value of the $i$-th pixel point, and $\bar{g}$ is the gray value average of all pixel points in the downsampled image. The first reference judgment values of the remaining pixel points are obtained according to the same acquisition method.
In addition, the practitioner can also obtain the first reference judgment value of the $i$-th pixel point in the downsampled image through other forms of formula, for example:

$$D_i^1 = \mathrm{Norm}(g_i) - T$$

wherein $T$ is the preset gray threshold, $\mathrm{Norm}(\cdot)$ is the normalization function, $\mathrm{Norm}(g_i)$ is the normalized gray value of the $i$-th pixel point, and the meanings of the other parameters are the same as in the preceding formula, which is not further described herein. In the embodiment of the invention, linear normalization is adopted, and the corresponding preset gray threshold is set to 0.5. It should be noted that the practitioner may select the normalization method and the magnitude of the preset gray threshold according to the specific implementation environment; linear normalization is a prior art well known to those skilled in the art, and is not further limited or described herein.
In the embodiment of the present invention, the method for obtaining the second reference judgment value is expressed as:

$$D_i^2 = G_i - \bar{G}_i$$

wherein $D_i^2$ is the second reference judgment value of the $i$-th pixel point in the downsampled image, $G_i$ is the gray gradient value of the $i$-th pixel point, and $\bar{G}_i$ is the gray gradient value average of all pixel points within the preset first neighborhood range of the $i$-th pixel point. The second reference judgment values of the remaining pixel points are obtained according to the same acquisition method.
Thus, a foreground area corresponding to the X-ray scanning image is obtained. However, considering that the gray values corresponding to the pixels of the same texture edge in the foreground region are similar, there is usually a certain difference in the gray values corresponding to the pixels of different texture edges, that is, adjacent edge pixels in the foreground region may belong to different texture edges, so in order to divide the different texture edges more accurately, the embodiment of the invention obtains each continuous texture region in the foreground region according to the local gray distribution difference of the pixels in the foreground region.
Preferably, the method for acquiring the continuous texture region includes:
selecting a pixel point in the foreground region as a target texture pixel point; in the X-ray scanning image, taking the target texture pixel point as a growth point to perform region growing, taking each pixel point whose gray value meets a preset growth condition within the preset second neighborhood range of the growth point as a new growth point to continue the region growing, and stopping the region growing when the gray values of all pixel points in the preset second neighborhood ranges of all new growth points no longer meet the preset growth condition, so as to obtain a corresponding continuous texture region. The preset growth condition is: the negative correlation mapping value of the difference between the gray value of the candidate pixel point and the gray value of the corresponding growth point is greater than the preset growth threshold. In the embodiment of the invention, the preset second neighborhood range is set to an eight-neighborhood range, and the preset growth threshold is set to 0.85. It should be noted that the practitioner may select the preset second neighborhood range and the preset growth threshold according to the specific implementation environment, and region growing is a technology well known to those skilled in the art, which is not further described herein.
For the same texture region, its corresponding edges should be continuous and the gray values of the corresponding edge pixel points should be the same or close, so the region can be extracted with a region-growing algorithm. If pixel points whose gray values are similar to that of the target texture pixel point, i.e. meeting the preset growth condition, exist within the preset second neighborhood range of the target texture pixel point, these pixel points are merged with the target texture pixel point and the growth continues with them as new growth points. The condition for stopping region growing is that the gray values of all pixel points in the preset second neighborhood ranges of all new growth points do not meet the preset growth condition, namely, all pixel points in the preset second neighborhood range of each pixel point in a continuous texture region, except those already in the region, do not meet the preset growth condition. Since the edges of the same texture region are continuous and the gray values of its edge pixel points are similar, the pixel points of the same texture region are divided into the same growth region.
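A minimal sketch of this region-growing step, assuming an eight-neighborhood, the growth threshold 0.85 from the embodiment, and exp(-|Δgray|/255) as the negative correlation mapping (the /255 scaling is an assumption of this sketch; the patent does not fix the exact mapping):

```python
import math
from collections import deque

def grow_region(img, seed, threshold=0.85):
    """Return the set of (y, x) pixels in the continuous texture region grown from `seed`."""
    h, w = len(img), len(img[0])
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if (ny, nx) in region:
                    continue
                # preset growth condition: negative-correlation mapping of the
                # gray difference with the current growth point exceeds threshold
                if math.exp(-abs(img[ny][nx] - img[y][x]) / 255.0) > threshold:
                    region.add((ny, nx))
                    frontier.append((ny, nx))
    return region
```

With threshold 0.85 this mapping admits gray differences up to roughly 41 levels, so two flat areas separated by a large gray step stay in separate regions.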
After all the continuous texture regions in the foreground region are obtained, it should be considered that many regions corresponding to fine textures still exist among them. In order to reduce the influence of the fine textures on the subsequent image segmentation process, all the continuous texture regions need to be further screened, i.e. the continuous texture regions conforming to the fine texture features are screened out. Considering that, compared with other continuous texture regions, the region corresponding to a fine texture changes differently in morphology across images of different sampling scales, the embodiment of the invention obtains the scale feature difference corresponding to each pixel point in the continuous texture region according to the difference of each pixel point in the continuous texture region in spatial position and gray gradient between the up-sampled image and the down-sampled image. That is, the fine texture regions among all the continuous texture regions are further screened out through the scale feature differences. The scale feature difference of each pixel point inversely characterizes the saliency of the fine texture features of the corresponding continuous texture region: the smaller the overall scale feature difference of the pixel points of a continuous texture region, the more likely that region corresponds to a fine texture.
Preferably, the method for acquiring the scale feature differences comprises the following steps:
taking any pixel point in any continuous texture area as a target pixel point;
placing the up-sampling image and the down-sampling image in the same coordinate system, wherein in the coordinate system, a coordinate point after the target pixel point is mapped to the down-sampling image is used as a down-sampling coordinate point, and a coordinate point after the target pixel point is mapped to the up-sampling image is used as an up-sampling coordinate point; and taking the distance between the up-sampling coordinate point and the down-sampling coordinate point as a spatial position difference characteristic value corresponding to the target pixel point. Since objects in the container are generally stacked, textures corresponding to corresponding continuous texture regions overlap, in sample images of different scales, the continuous texture regions corresponding to the fine textures overlap with larger textures, so that morphology changes, and further, corresponding spatial difference characteristic values are increased. Therefore, the larger the spatial difference characteristic value of the pixel point in the corresponding continuous texture region, the more likely the corresponding continuous texture region is a region of fine texture. In the embodiment of the invention, euclidean distance between the up-sampling coordinate point and the down-sampling coordinate point is used as a spatial position difference characteristic value corresponding to the target pixel point. 
In the embodiment of the invention, because the up-sampling image and the down-sampling image have the same shape and size, the two images are superposed, and a rectangular coordinate system is constructed with the top left corner point of the superposed images as the origin, the horizontal direction as the x-axis, and the vertical direction as the y-axis. It should be noted that the practitioner may construct the coordinate system by other methods according to the specific implementation environment, but it is necessary to ensure that the up-sampled image and the down-sampled image are placed in superposition.
Acquiring gray gradient values of all pixel points in the up-sampling image and the down-sampling image; mapping the target pixel point to the up-sampling image and taking the corresponding gray gradient value as the up-sampling gray gradient value; mapping the target pixel point to the down-sampling image and taking the corresponding gray gradient value as the down-sampling gray gradient value; and taking the difference between the up-sampling gray gradient value and the down-sampling gray gradient value as the gray gradient difference characteristic value corresponding to the target pixel point. Since downsampling reduces detail information in the image, fine textures may be reduced to a few pixels or even one pixel, causing edge breaks. That is, in a downsampled image, since the distance between pixels increases and the width of a fine texture edge is relatively small, there may not be enough pixels to maintain edge integrity, resulting in edge discontinuities. Therefore, the gray gradient difference characteristic value corresponding to a pixel point of a fine texture is generally larger, and the larger the gray gradient difference characteristic values of the pixel points in a continuous texture region are, the more likely the corresponding continuous texture region is a region of fine texture.
And obtaining the scale feature difference corresponding to the target pixel point according to the spatial position difference feature value and the gray gradient difference feature value, wherein the spatial position difference feature value and the scale feature difference are in negative correlation, and the gray gradient difference feature value and the scale feature difference are in negative correlation. The spatial position difference feature value and the gray gradient difference feature value are thus combined to obtain the scale feature difference corresponding to each pixel point. Since the larger the spatial position difference feature value is, the more likely the corresponding continuous texture region is a region of fine texture, while the scale feature difference characterizes the irrelevance of the fine texture features, the spatial position difference feature value and the scale feature difference are in negative correlation, and likewise the gray gradient difference feature value and the scale feature difference are in negative correlation. In the embodiment of the invention, the product of the negative correlation mapping value of the gray gradient difference feature value and the negative correlation mapping value of the spatial position difference feature value of each pixel point in the continuous texture region is used as the scale feature difference of each pixel point in the continuous texture region.
In the embodiment of the invention, the method for acquiring the scale feature difference corresponding to the target pixel point is expressed as the following formula:

$$S = e^{-\left|G^{d}-G^{u}\right|} \cdot e^{-\sqrt{\left(x^{d}-x^{u}\right)^{2}+\left(y^{d}-y^{u}\right)^{2}}}$$

wherein $S$ is the scale feature difference corresponding to the target pixel point, $G^{d}$ is the down-sampling gray gradient value, i.e. the gray gradient value after the target pixel point is mapped to the down-sampled image, $G^{u}$ is the up-sampling gray gradient value, i.e. the gray gradient value after the target pixel point is mapped to the up-sampled image, $\left(x^{d}, y^{d}\right)$ are the coordinates of the down-sampling coordinate point corresponding to the target pixel point, and $\left(x^{u}, y^{u}\right)$ are the coordinates of the up-sampling coordinate point; $\left|G^{d}-G^{u}\right|$ is the gray gradient difference characteristic value corresponding to the target pixel point, $\sqrt{\left(x^{d}-x^{u}\right)^{2}+\left(y^{d}-y^{u}\right)^{2}}$ is the spatial position difference characteristic value, and $e^{(\cdot)}$ is the exponential function based on the natural constant $e$. The scale feature differences corresponding to the other pixel points are obtained according to the same acquisition method. It should be noted that, depending on the specific implementation environment, the implementer may use other negative correlation mapping methods besides the exponential function based on the natural constant $e$, for example:

$$S = \frac{1}{1+\left|G^{d}-G^{u}\right|} \cdot \frac{1}{1+\sqrt{\left(x^{d}-x^{u}\right)^{2}+\left(y^{d}-y^{u}\right)^{2}}}$$

The meaning of each parameter in this formula is the same as in the method for acquiring the scale feature difference corresponding to the target pixel point in the embodiment of the present invention, and is not further described herein.
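The combination of the two negative correlation mappings above can be sketched as follows; the function and parameter names are mine, not the patent's.

```python
import math

def scale_feature_difference(up_grad, down_grad, up_xy, down_xy):
    """Scale feature difference of one target pixel: product of the negative
    correlation mappings of the gradient difference and the displacement."""
    grad_diff = abs(down_grad - up_grad)  # gray gradient difference feature value
    dist = math.hypot(down_xy[0] - up_xy[0],
                      down_xy[1] - up_xy[1])  # spatial position difference feature value
    return math.exp(-grad_diff) * math.exp(-dist)
```

A pixel whose mapped positions and gradients coincide across scales yields the maximum value 1; any displacement or gradient change reduces the value, consistent with fine textures producing small scale feature differences.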
Thus, the scale feature difference corresponding to each pixel point in each continuous texture region in the embodiment of the invention is obtained. The scale feature difference of each pixel point inversely characterizes the saliency of the fine texture features: the smaller the scale feature differences of the pixel points corresponding to a continuous texture region are, the more likely the corresponding continuous texture region is a fine texture region. Further, considering that the number of corresponding pixel points is smaller for a fine texture than for other texture regions, the fine texture feature saliency corresponding to each continuous texture region can be further evaluated through the number of pixel points in each continuous texture region. According to the embodiment of the invention, the edge confidence corresponding to each continuous texture region is obtained according to the scale feature differences and the number of pixel points in the continuous texture region. That is, the embodiment of the invention characterizes the irrelevance of the fine texture features of each continuous texture region through the edge confidence.
Preferably, the method for acquiring the edge confidence comprises the following steps:
for any one continuous texture region:
taking the product of the number of the pixel points in the continuous texture area and the scale characteristic difference of each pixel point as an edge scale characteristic value corresponding to each pixel point in the continuous texture area; and accumulating the edge scale characteristic values of all the pixel points in the continuous texture region to obtain the edge confidence corresponding to the continuous texture region. Because the smaller the number of the pixel points of each continuous texture area is, the smaller the scale feature difference of each pixel point is, the more obvious the fine texture features of the corresponding continuous texture area are, and the edge confidence represents the fine texture feature saliency of each continuous texture area, the embodiment of the invention multiplies the number of the pixel points of each continuous texture area and the scale feature difference corresponding to each pixel point to obtain the edge scale feature value corresponding to each pixel point. And further combining the edge scale characteristic values corresponding to all the pixel points in each continuous texture region, namely accumulating the edge scale characteristic values corresponding to all the pixel points in each continuous texture region to obtain the edge confidence corresponding to each continuous texture region.
In an embodiment of the invention, the method for obtaining the edge confidence of the continuous texture region $j$ is expressed as the following formula:

$$C_j = \sum_{i=1}^{N_j} N_j \cdot S_{j,i}$$

wherein $C_j$ is the edge confidence of the continuous texture region $j$, $N_j$ is the number of pixel points in the continuous texture region $j$, $S_{j,i}$ is the scale feature difference corresponding to the $i$-th pixel point in the continuous texture region $j$, and $N_j \cdot S_{j,i}$ is the edge scale characteristic value corresponding to the $i$-th pixel point in the continuous texture region $j$. The edge confidences corresponding to the other continuous texture regions are obtained according to the same acquisition method.
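The accumulation above reduces to a one-line sketch: each pixel's edge scale characteristic value is the region size times its scale feature difference, and these are summed over the region.

```python
def edge_confidence(scale_diffs):
    """Edge confidence of one continuous texture region, given the list of
    per-pixel scale feature differences. Each term is N * S_i."""
    n = len(scale_diffs)
    return sum(n * s for s in scale_diffs)
```

Small regions with small scale feature differences (i.e. fine textures) get a small confidence, so they can later be filtered by a threshold.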
Step S3: obtaining smoothing parameters of the X-ray scanning image according to the integral gray scale difference and the edge confidence coefficient of each continuous texture region; and carrying out GrabCut image segmentation on the X-ray scanning image according to the smoothing parameters to obtain an adaptive image segmentation result of the X-ray scanning image.
Thus, the edge confidence of each continuous texture region is obtained, and the smaller the edge confidence is, the more remarkable the feature of the fine texture of the corresponding continuous texture region is, so that in order to reduce the influence of the fine texture on the subsequent image segmentation process, the region corresponding to the fine texture can be further screened out according to the edge confidence. According to the embodiment of the invention, the smoothing parameters of the X-ray scanning image are obtained according to the integral gray level difference and the edge confidence of each continuous texture region.
Preferably, the method for acquiring the smoothing parameter includes:
taking the gray value average of the pixel points in each continuous texture region as the gray feature value of each continuous texture region; and taking the variance of the gray feature values of all continuous texture regions as the contrast of the X-ray scanning image. The richer the texture information of the continuous texture regions in the foreground region, the larger the gray variation of the image, that is, the more abundant the information in the corresponding X-ray scanned image, and the more accurate the image segmentation needs to be; in other words, the larger the corresponding contrast, the larger the smoothing parameter for image segmentation should be.
Taking the continuous texture regions whose corresponding edge confidence is larger than a preset confidence threshold as reference texture regions; and taking the accumulated value of the normalized edge confidences corresponding to all reference texture regions as the texture richness of the X-ray scanning image. In the embodiment of the invention, maximum normalization (dividing by the maximum value) is adopted and the preset confidence threshold is set to 0.6; the implementer can adjust the preset confidence threshold according to the specific implementation environment, which is not further described herein. Since the smaller the edge confidence of a continuous texture region is, the more likely the corresponding continuous texture region is a region of fine texture, in order to reduce the influence of fine textures on the image segmentation process, the embodiment of the invention obtains the smoothing parameter for image segmentation by analyzing only the reference texture regions, making the subsequent segmentation process more accurate. The purpose of the maximum normalization is to adjust the value of the smoothing parameter so that it is more reasonable; an implementer may also adopt other normalization methods according to the specific implementation environment, which is not further described herein.
The product of contrast and texture richness is used as a smoothing parameter of the X-ray scanning image. Because the contrast is positively correlated with the size of the smoothing parameter, and the texture richness is positively correlated with the size of the smoothing parameter, the embodiment of the invention obtains the corresponding smoothing parameter by multiplying the contrast and the texture richness.
In the embodiment of the invention, the method for acquiring the smoothing parameter is expressed as the following formula:

$$\lambda = \left[\frac{1}{K}\sum_{j=1}^{K}\left(\mu_j-\bar{\mu}\right)^{2}\right] \cdot \sum_{r=1}^{R}\frac{C_r}{\max\left(C_1,\ldots,C_R\right)}$$

wherein $\lambda$ is the smoothing parameter of the X-ray scanned image, $C_r$ is the edge confidence of the $r$-th reference texture region in the X-ray scanned image, $R$ is the number of reference texture regions in the X-ray scanned image, $\mu_j$ is the gray feature value of the $j$-th continuous texture region, $\bar{\mu}$ is the mean of the gray feature values of all continuous texture regions in the X-ray scanned image, $K$ is the number of continuous texture regions in the X-ray scanned image, and $\max(\cdot)$ is the maximum value selection function, i.e. $\max\left(C_1,\ldots,C_R\right)$ is the maximum value among the edge confidences of all reference texture regions. The first factor, the variance of the gray feature values of all continuous texture regions, is the contrast of the X-ray scanned image; the second factor is the texture richness corresponding to the X-ray scanned image. It should be noted that, since the edge confidence in the embodiment of the present invention is obtained according to the scale feature differences and the number of pixel points of the continuous texture region, the number of pixel points in a continuous texture region must be an integer greater than 0, and according to the formula corresponding to the method for obtaining the scale feature difference in the embodiment of the present invention, the scale feature difference must be greater than 0; therefore all edge confidences in the embodiment of the present invention are greater than 0, that is, the maximum value among the edge confidences of all reference texture regions cannot be 0.
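The contrast and texture-richness computation can be sketched as follows. This is illustrative: the 0.6 confidence threshold follows the embodiment, while the function and parameter names are mine.

```python
def smoothing_parameter(gray_features, edge_confidences, conf_threshold=0.6):
    """Smoothing parameter = contrast (variance of the regions' gray feature
    values) * texture richness (sum of max-normalized edge confidences of the
    reference texture regions)."""
    k = len(gray_features)
    mean = sum(gray_features) / k
    contrast = sum((g - mean) ** 2 for g in gray_features) / k  # variance
    refs = [c for c in edge_confidences if c > conf_threshold]  # reference texture regions
    richness = sum(c / max(refs) for c in refs) if refs else 0.0
    return contrast * richness
```

Regions with confidence at or below the threshold (candidate fine textures) are excluded before the richness sum, so they do not inflate the smoothing parameter.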
The smoothing parameter is the lambda parameter corresponding to the GrabCut image segmentation method in the embodiment of the invention, so the embodiment of the invention performs GrabCut image segmentation on the X-ray scanning image according to the smoothing parameter to obtain the adaptive image segmentation result of the X-ray scanning image. Since the smoothing parameter is obtained by analyzing the X-ray scanned image itself, that is, it is an adaptively obtained value, the image segmentation result obtained by performing image segmentation with this smoothing parameter is an adaptive image segmentation result.
Preferably, the method for acquiring the adaptive image segmentation result includes:
and performing image segmentation on the X-ray scanning image by taking the smoothing parameter as the lambda parameter in the GrabCut algorithm to obtain the adaptive image segmentation result of the X-ray scanning image. It should be noted that the lambda parameter in the GrabCut algorithm is the parameter required for the smoothing process, and thus corresponds to the smoothing parameter obtained in the embodiment of the present invention. The GrabCut algorithm is a commonly used conventional image segmentation technique and is not further limited or described herein.
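For orientation, the following sketch shows where a lambda-like weight enters a GrabCut-style energy: it scales the pairwise smoothness term that penalizes label changes between similar neighboring pixels. This is a simplified illustration (4-neighborhood, scalar gray values, an assumed beta), not the patent's implementation; note that OpenCV's `cv2.grabCut` does not expose lambda directly, so using an adaptive lambda requires a custom implementation.

```python
import math

def smoothness_energy(img, labels, lam, beta=0.005):
    """Pairwise smoothness term of a GrabCut-style energy, weighted by lam.
    Counts each 4-connected pair once; a pair contributes only when its
    labels differ, with weight exp(-beta * (gray difference)^2)."""
    h, w = len(img), len(img[0])
    v = 0.0
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    diff = img[y][x] - img[ny][nx]
                    v += math.exp(-beta * diff * diff)
    return lam * v
```

A larger lambda makes label changes between similar pixels more expensive, i.e. the segmentation boundary is smoother, which is why the adaptively computed smoothing parameter controls segmentation granularity.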
Step S4: and carrying out container vector biological detection and early warning according to the self-adaptive image segmentation result.
Considering that the purpose of the embodiment of the invention is to detect container vector organisms, after image segmentation, each segmented region obtained by image segmentation needs to be detected, and whether early warning is sent out or not is judged according to the detection result.
Preferably, the container disease vector biological detection and early warning according to the self-adaptive image segmentation result comprises the following steps:
acquiring the image segmentation regions in the adaptive image segmentation result, inputting the image segmentation regions into a trained convolutional neural network, and outputting a container vector organism detection result; when the container vector organism detection result contains vector organisms, an early warning is sent out; when the container vector organism detection result does not contain vector organisms, no early warning is sent out. In the embodiment of the invention, when the output container vector organism detection result contains vector organisms, the image segmentation regions corresponding to the vector organisms are marked, the corresponding image segmentation regions are transmitted to a display for manual observation through a data unit, and the early warning is sent out. It should be noted that the implementer may select other neural networks according to the specific implementation environment; convolutional neural networks are well known in the art and are not further limited or described herein.
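The detection-and-warning logic can be sketched independently of the particular network; `classify_region` below is a hypothetical placeholder standing in for the trained convolutional neural network, and the region dictionaries are an assumed data shape.

```python
def detect_and_warn(segmented_regions, classify_region):
    """Run the classifier on each segmented region and collect alerts.
    `classify_region` returns a vector-organism label (e.g. a species name)
    or None when no vector organism is detected."""
    alerts = []
    for region in segmented_regions:
        label = classify_region(region)
        if label is not None:  # vector organism detected -> mark region
            alerts.append((region["id"], label))
    return alerts  # a non-empty list triggers the early warning
```

The caller sends the early warning (and forwards the marked regions for manual observation) exactly when the returned list is non-empty.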
In summary, according to the gray distribution condition of the X-ray scanned image in the corresponding downsampled image, the foreground region required by the subsequent image segmentation is obtained, the distinguishing features of the continuous texture region in the foreground region in the upsampled image and the downsampled image are analyzed to obtain the corresponding scale feature difference, the region corresponding to the fine texture is screened out according to the scale feature difference and the size of the continuous texture region through the obtained edge confidence, the smoothing parameter required by the image segmentation is further obtained according to the edge confidence, the adaptive image segmentation result is obtained through the GrabCut image segmentation method according to the smoothing parameter, and the container vector biological detection early warning is carried out according to the adaptive image segmentation result. The invention can detect the container vector biology more accurately through the self-adaptive image segmentation result.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (6)

1. The container vector biological detection and early warning method based on artificial intelligence is characterized by comprising the following steps:
acquiring an X-ray scanning image of the container; acquiring an up-sampling image and a down-sampling image corresponding to the X-ray scanning image;
obtaining a corresponding foreground region of the X-ray scanning image according to the gray gradient distribution condition and the gray value distribution condition of pixel points in the downsampled image; according to the local gray level distribution difference of the pixel points in the foreground region, each continuous texture region in the foreground region is obtained; according to the difference of each pixel point in the continuous texture region in the spatial position and gray gradient between the up-sampling image and the down-sampling image, obtaining the corresponding scale feature difference of each pixel point in the continuous texture region; obtaining the edge confidence corresponding to each continuous texture region according to the scale feature difference and the number of pixel points in the continuous texture region;
obtaining smoothing parameters of the X-ray scanning image according to the integral gray scale difference and the edge confidence coefficient of each continuous texture region; GrabCut image segmentation is carried out on the X-ray scanning image according to the smoothing parameters, and a self-adaptive image segmentation result of the X-ray scanning image is obtained;
Performing container vector biological detection and early warning according to the self-adaptive image segmentation result;
the method for acquiring the foreground region of the X-ray scanning image comprises the following steps:
calculating the gray value average value of all pixel points in the downsampled image, and taking the difference value between the gray value of each pixel point and the gray value average value as a first reference judgment value of each pixel point;
acquiring gray gradient values of all pixel points in the downsampled image; calculating the gray gradient value average value of all the pixel points in the preset first neighborhood range of each pixel point, and taking the difference value between the gray gradient value of each pixel point and the corresponding gray gradient value average value as a second reference judgment value of each pixel point;
in the downsampled image, taking a pixel point which meets the condition that a first reference judgment value is smaller than or equal to 0 and a second reference judgment value is larger than 0 as a foreground pixel point; taking the areas corresponding to all the foreground pixel points in the downsampled image as the foreground areas corresponding to the downsampled image; mapping a foreground region corresponding to the downsampled image into an X-ray scanning image to obtain a foreground region of the X-ray scanning image;
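The two-test foreground rule above can be sketched in a few lines. This is a minimal sketch, not the patent's implementation: the `foreground_mask` name, the central-difference gradient, and the 3x3 local window are illustrative assumptions (the claim only fixes a "preset first neighborhood range").

```python
import numpy as np

def foreground_mask(down_img, neighborhood=3):
    """Foreground test from claim 1 on a downsampled gray image (sketch).

    First reference judgment value: pixel gray minus the global gray mean
    (must be <= 0, i.e. the pixel is no brighter than average).
    Second reference judgment value: pixel gradient minus the mean gradient
    over its local window (must be > 0, i.e. locally texture-rich).
    """
    img = down_img.astype(np.float64)
    first_ref = img - img.mean()

    # central-difference gradient magnitude (an assumed gradient operator)
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)

    # mean gradient over a neighborhood x neighborhood window, edge-padded
    pad = neighborhood // 2
    padded = np.pad(grad, pad, mode='edge')
    local_mean = np.zeros_like(grad)
    for dy in range(neighborhood):
        for dx in range(neighborhood):
            local_mean += padded[dy:dy + grad.shape[0], dx:dx + grad.shape[1]]
    local_mean /= neighborhood ** 2
    second_ref = grad - local_mean

    # foreground: darker than average AND gradient above local mean
    return (first_ref <= 0) & (second_ref > 0)
```

The resulting Boolean mask would then be mapped back to the full-resolution X-ray scanning image, as the claim describes.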
the method for acquiring the scale feature difference comprises the following steps:
taking any pixel point in any continuous texture region as a target pixel point;
placing the up-sampling image and the down-sampling image in the same coordinate system, wherein in the coordinate system, a coordinate point after the target pixel point is mapped to the down-sampling image is used as a down-sampling coordinate point, and a coordinate point after the target pixel point is mapped to the up-sampling image is used as an up-sampling coordinate point; taking the distance between the up-sampling coordinate point and the down-sampling coordinate point as a spatial position difference characteristic value corresponding to the target pixel point;
acquiring gray gradient values of all pixel points in the up-sampling image and the down-sampling image, and mapping a target pixel point to the corresponding gray gradient value after the up-sampling image to serve as the up-sampling gray gradient value; mapping the target pixel point to a corresponding gray gradient value after downsampling the image to serve as a downsampling gray gradient value; taking the difference between the up-sampling gray gradient value and the down-sampling gray gradient value as a gray gradient difference characteristic value corresponding to a target pixel point;
obtaining a scale feature difference corresponding to a target pixel point according to the spatial position difference feature value and the gray gradient difference feature value, wherein the spatial position difference feature value and the scale feature difference are in negative correlation, and the gray gradient difference feature value and the scale feature difference are in negative correlation;
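The claim fixes only the monotonicity of the scale feature difference, not its functional form. A plausible negative-correlation mapping (an assumption, not the patent's formula) is a decaying exponential of the two difference features:

```python
import math

def scale_feature_difference(dist, grad_diff):
    """Hypothetical mapping for claim 1's scale feature difference:
    the larger the spatial-position distance `dist` or the absolute
    gray-gradient difference `grad_diff` between a pixel's up-sampled
    and down-sampled mappings, the smaller the returned value.
    The exp() form is assumed; only the negative correlation is claimed.
    """
    return math.exp(-(dist + abs(grad_diff)))
```

Pixels whose features barely change across scales thus score near 1, while scale-unstable pixels score near 0.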
the method for acquiring the smoothing parameter comprises the following steps:
taking the gray value average value of the pixel points in each continuous texture area as the gray characteristic value of each continuous texture area; taking the variance of gray characteristic values of all continuous texture areas as the contrast of an X-ray scanning image;
taking a continuous texture region with the corresponding edge confidence coefficient larger than a preset confidence coefficient threshold value as a reference texture region; taking the accumulated value of the normalized value of the edge confidence corresponding to all the reference texture areas as the texture richness of the X-ray scanning image;
and taking the product of the contrast and the texture richness as a smoothing parameter of an X-ray scanning image.
2. The container disease vector detection and early warning method based on artificial intelligence according to claim 1, wherein the method for obtaining the continuous texture regions comprises the following steps:
selecting a pixel point in the foreground region as a target texture pixel point; in the X-ray scanning image, performing region growing with the target texture pixel point as a growth point, taking each pixel point within a preset second neighborhood range of the growth point whose gray value meets a preset growth condition as a new growth point, and continuing the region growing until the gray values of all pixel points within the preset second neighborhood ranges of all new growth points no longer meet the preset growth condition, so as to obtain a corresponding continuous texture region; the preset growth condition comprises: a negatively correlated mapping value of the difference between the gray value of the pixel point and the gray value of the corresponding growth point is greater than a preset growth threshold.
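The region growing of claim 2 is a standard breadth-first flood fill. A minimal sketch, assuming an 8-connected neighborhood and `exp(-|diff|)` as the negatively correlated mapping (the claim fixes neither):

```python
import math
from collections import deque

def grow_region(img, seed, grow_threshold, neighborhood=1):
    """Region growing per claim 2 (sketch). A neighbor joins the region
    when a negatively correlated mapping of its gray-value difference to
    the current growth point exceeds `grow_threshold`. `img` is a 2D
    list/array of gray values; `seed` is a (row, col) tuple."""
    h, w = len(img), len(img[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy in range(-neighborhood, neighborhood + 1):
            for dx in range(-neighborhood, neighborhood + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                    # assumed mapping: exp(-|gray difference|)
                    if math.exp(-abs(img[ny][nx] - img[y][x])) > grow_threshold:
                        region.add((ny, nx))
                        queue.append((ny, nx))
    return region
```

Growth stops automatically once no neighbor of any growth point satisfies the condition, matching the claim's termination rule.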
3. The container disease vector detection and early warning method based on artificial intelligence according to claim 1, wherein the method for obtaining the edge confidence comprises the following steps:
for any one continuous texture region:
taking the product of the number of the pixel points in the continuous texture area and the scale characteristic difference of each pixel point as an edge scale characteristic value corresponding to each pixel point in the continuous texture area; and accumulating the edge scale characteristic values of all the pixel points in the continuous texture region to obtain the edge confidence corresponding to the continuous texture region.
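Since every pixel's edge scale feature value is the region size times its own scale feature difference, the accumulation in claim 3 reduces to n times the sum of the differences:

```python
def edge_confidence(scale_feature_diffs):
    """Claim 3's edge confidence (sketch): with n pixels in the region,
    each pixel contributes n * (its scale feature difference); summing
    all contributions gives n * sum(diffs)."""
    n = len(scale_feature_diffs)
    return n * sum(scale_feature_diffs)
```

Larger regions with scale-stable pixels therefore receive higher confidence than small or scale-unstable ones.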
4. The container disease vector detection and early warning method based on artificial intelligence according to claim 1, wherein the method for obtaining the adaptive image segmentation result comprises the following steps:
performing image segmentation on the X-ray scanning image with the smoothing parameter as the lambda parameter in the GrabCut algorithm, so as to obtain the adaptive image segmentation result of the X-ray scanning image.
5. The container disease vector detection and early warning method based on artificial intelligence according to claim 1, wherein performing container disease vector detection and early warning according to the adaptive image segmentation result comprises:
acquiring the image segmentation regions in the adaptive image segmentation result, inputting the image segmentation regions into a trained convolutional neural network, and outputting a container disease vector detection result; issuing an early warning when the detection result contains disease vectors, and issuing no early warning when it does not.
6. The container disease vector detection and early warning method based on artificial intelligence according to claim 1, wherein acquiring the up-sampling image and the down-sampling image corresponding to the X-ray scanning image comprises the following steps:
taking the image with the lowest resolution among a preset number of sampling layers obtained by pyramid downsampling of the X-ray scanning image as the down-sampling image;
and taking the image with the highest resolution among a preset number of sampling layers obtained by pyramid upsampling of the X-ray scanning image as the up-sampling image.
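The pyramid sampling of claim 6 can be sketched as follows. In practice `cv2.pyrDown`/`cv2.pyrUp` with a Gaussian kernel would be used; here a 2x2 box average stands in for the Gaussian blur and nearest-neighbour doubling for the interpolated upsampling, so this is an approximation, not the patent's exact pyramid:

```python
import numpy as np

def pyr_down(img):
    """One downsampling step (sketch): 2x2 box blur, then keep every
    other row/column (stand-in for a Gaussian pyramid level)."""
    img = img.astype(float)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pyr_up(img):
    """One upsampling step (sketch): nearest-neighbour doubling, a
    stand-in for smoothed pyramid interpolation."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def sample_images(x_ray, levels=2):
    """Claim 6 (sketch): the down-sampling image is the lowest-resolution
    level of the down pyramid, the up-sampling image the highest-resolution
    level of the up pyramid. `levels` plays the role of the preset number
    of sampling layers."""
    down = x_ray
    for _ in range(levels):
        down = pyr_down(down)
    up = x_ray
    for _ in range(levels):
        up = pyr_up(up)
    return up, down
```

Each downsampling level halves both image dimensions and each upsampling level doubles them, giving the two scale-separated views the earlier claims compare pixel by pixel.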
CN202310898052.4A 2023-07-21 2023-07-21 Container disease vector biological detection early warning method based on artificial intelligence Active CN116612126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310898052.4A CN116612126B (en) 2023-07-21 2023-07-21 Container disease vector biological detection early warning method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN116612126A CN116612126A (en) 2023-08-18
CN116612126B true CN116612126B (en) 2023-09-19

Family

ID=87678688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310898052.4A Active CN116612126B (en) 2023-07-21 2023-07-21 Container disease vector biological detection early warning method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116612126B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876360A (en) * 2024-03-08 2024-04-12 卡松科技股份有限公司 Intelligent detection method for lubricating oil quality based on image processing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895393A (en) * 2017-10-24 2018-04-10 天津大学 A kind of story image sequence generation method of comprehensive word and shape
WO2021000524A1 (en) * 2019-07-03 2021-01-07 研祥智能科技股份有限公司 Hole protection cap detection method and apparatus, computer device and storage medium
CN115049664A (en) * 2022-08-16 2022-09-13 金乡县强力机械有限公司 Vision-based ship engine fitting defect detection method
CN115809981A (en) * 2022-07-20 2023-03-17 河南职业技术学院 Robot sorting method based on target detection
CN116110053A (en) * 2023-04-13 2023-05-12 济宁能源发展集团有限公司 Container surface information detection method based on image recognition
CN116168026A (en) * 2023-04-24 2023-05-26 山东拜尔检测股份有限公司 Water quality detection method and system based on computer vision
CN116258716A (en) * 2023-05-15 2023-06-13 青岛宇通管业有限公司 Plastic pipe quality detection method based on image processing
WO2023134792A2 (en) * 2022-12-15 2023-07-20 苏州迈创信息技术有限公司 Led lamp wick defect detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A fast and robust infrared image segmentation method; Yang Rulin; Chou Xiujian; Li Qing; Liang Yanju; Video Engineering (No. 03); full text *
Underwater sea cucumber image segmentation based on fused saliency map and GrabCut algorithm; Guo Chuanxin; Li Zhenbo; Qiao Xi; Li Chen; Yue Jun; Transactions of the Chinese Society for Agricultural Machinery (No. S1); full text *
Synthetic aperture radar image target segmentation integrating boundary and texture information; Chen Hua; Guo Wei; Yan Jingwen; Journal of Image and Graphics (No. 06); full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant