CN114897875B - Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel - Google Patents


Info

Publication number: CN114897875B
Authority: CN (China)
Prior art keywords: microspheres, escherichia coli, image, micro, frame
Legal status: Active
Application number: CN202210625685.3A
Other languages: Chinese (zh)
Other versions: CN114897875A
Inventors: 徐莹, 孙乐圣, 陈俊涛, 刘哲
Current assignee: Hangzhou Dianzi University
Original assignee: Hangzhou Dianzi University
Events:
    • Application filed by Hangzhou Dianzi University
    • Priority to CN202210625685.3A
    • Publication of application CN114897875A
    • Application granted; publication of CN114897875B

Classifications

    • G06T 7/0012 — Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G01N 21/84 — Investigating or analysing materials by the use of optical means; systems specially adapted for particular applications
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/00 — Image enhancement or restoration
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/35 — Determination of transform parameters for the alignment of images (image registration) using statistical methods
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide detection or recognition
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10056 — Image acquisition modality: microscopic image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30004 — Subject of image: biomedical image processing
    • G06T 2207/30242 — Subject of image: counting objects in image
    • Y02A 50/30 — Technologies for adaptation to climate change in human health protection: against vector-borne or waterborne diseases

Abstract

The invention discloses a deep-learning-based method for three-dimensional positioning of Escherichia coli and microspheres in a microchannel. The positioning method comprises the following steps: 1. Sample preparation. 2. Sampling and frame-by-frame splitting of the sample video. 3. Labeling the data set: in the data-set images, E. coli and microspheres are labeled separately. 4. Model training: the model is trained with the labeled data set to obtain recognition models that can identify E. coli and microspheres respectively. 5. Target positioning of the E. coli and the microspheres respectively. The invention constructs an effective training set for frame-by-frame detection of E. coli and microspheres in a three-dimensional scene, formulates effective labeling rules for different degrees of focus, detects E. coli and microspheres effectively, and reduces the missed-detection and false-detection rates of microspheres and bacteria as well as the influence of impurities. In addition, the invention can count bacteria and microspheres simultaneously; with the microspheres as a reference or aid, samples can be screened effectively and the concentration of the bacterial liquid estimated.

Description

Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel
Technical Field
The invention belongs to the field of bacterial counting, and particularly relates to a deep-learning-based method for three-dimensional positioning of Escherichia coli and microspheres in a microchannel.
Background
Food is often contaminated by bacteria, causing various diseases and hazards to human health, so determining the degree of contamination of food is of great significance to food safety. The bacterial count in food is an important index of the degree of contamination. Common bacterial counting methods include plate counting, sensor-based detection, fluorescence detection, flow cytometry, and microscopic-image-based detection. Plate counting requires a long incubation period, and sensor-based detection is constrained by the limitations of the sensor. Fluorescence detection requires staining the sample, which can be destructive. Flow cytometry requires a flow cytometer, which is expensive and therefore suitable only for laboratory use. Microfluidic chips, owing to their advantages of high throughput and small sample consumption, are often used as the detection platform and combined with microscopic-image detection for quantitative bacterial detection; this greatly shortens detection time, simplifies the operating steps, and is suitable for field detection. Because the sample in a microfluidic chip is very small, a microscope is generally required to obtain a microscopic image for detection; this approach is more intuitive and yields more information. For observation in the microchannel, agarose can be used to fix the sample liquid to be tested before observation, so the bacteria do not drift in the microchannel and can be observed statically. Alternatively, the sample liquid flowing in the chip can be detected in real time by staining the objects to be detected.
The static observation mode allows time-lapse imaging of the bacteria in the channel, observing changes in bacterial area under operations such as drug addition. The dynamic mode can detect in real time the number of bacteria passing through the field of view; the total volume is computed from the flow rate and the field-of-view volume, and the bacterial count is finally obtained by processing. However, most of these imaging methods focus on only one layer of the microchannel, and because of the limited depth of field, information from the other layers is lost; bacteria in the focused layer may also be interfered with as the bacteria grow. When the size of the target differs significantly from the depth of the microchannel, the number of detectable targets is even more limited.
Therefore, a method is proposed for counting bacteria and microspheres in a microchannel over the full three-dimensional space. An E. coli concentration detection system is built around a microchannel: the bacterial-liquid sample to be tested, a microsphere solution of known concentration, and low-melting-point agarose are mixed and fixed in the microchannel, so that the bacteria and microspheres neither suspend nor settle with liquid flow, and bacterial activity is not affected. An ordinary optical microscope with a CCD camera photographs the channel while focusing from its uppermost to its lowermost plane, producing a sample video. The sample videos are split frame by frame, bacteria and microspheres are tracked and detected in three-dimensional space with the YOLO algorithm, and the numbers of bacteria and microspheres are counted. The method can effectively distinguish the different targets (bacteria and microspheres) and count them accurately. At the same time, the positions of the microspheres and bacteria are reconstructed in three-dimensional space, and the bacteria or microspheres to be observed can be located by selecting different focal layers.
Disclosure of Invention
In view of the above problems, the invention provides a deep-learning-based method for three-dimensional positioning of E. coli and microspheres in a microchannel, which can effectively distinguish the different targets (bacteria and microspheres) and count them accurately. At the same time, the detected targets are reconstructed in three-dimensional space, effectively showing whether they are uniformly distributed and providing evaluation indexes for sample selection and sample validity. Different focal layers can be selected to position the bacteria or microspheres to be observed.
A three-dimensional positioning method of escherichia coli and microspheres under a micro-channel based on deep learning comprises the following steps:
step 1, sample preparation
Several E. coli bacterial liquids of different concentrations are each mixed with the microspheres and agarose, and each mixture is injected into a microchannel, yielding several microchannel samples.
Step 2, sampling and splitting of sample video
2-1, taking a plurality of fields of view for each microchannel sample; and acquiring images with different focusing depths aiming at each visual field to obtain a video sample.
2-2. Extract several images from the video samples to form a data set.
Step 3: label the data set. In the data-set images, E. coli and microspheres are labeled separately.
Step 4: train the model. The model is trained with the labeled data set to obtain recognition models that can identify E. coli and microspheres respectively.
Step 5: perform target positioning for the E. coli and the microspheres respectively.
5-1. Inject into the microchannel a tested bacterial solution mixed with microspheres and agarose solution; focus and photograph at different depths of the microchannel to obtain tested images at different depths; detect each tested image with the recognition model, obtaining for every bacterium and microsphere in each tested image a candidate box [x_l, y_l, x_r, y_r] and its confidence.
5-2. Sort the tested images by focal position from top to bottom to obtain a tested image set F; the following steps are applied to each image frame F_i in F.
5-3. Perform the identification-and-deduplication operation on all image frames in order, as follows: take in turn an unvisited candidate box k of image frame F_i as the feature box and mark it as visited. Starting from the current image frame F_i, perform approximate-box identification frame by frame downward until the last image frame has been processed or an image frame contains no approximate box of the feature box; mark all approximate boxes of the feature box as visited. Take the candidate box with the highest confidence among the feature box and its approximate boxes as the target-point position, and put its coordinates into the result set. The identification-and-deduplication operation is complete when every candidate box k is marked as visited.
Step 6: three-dimensional reconstruction
And (5) constructing a three-dimensional image containing all the target points according to the coordinates of all the target points in the result set obtained in the step (5).
Preferably, the process of extracting the data set in step 2-2 is as follows:
coefficient of uniformity gamma of each microchannel i The following were used:
Figure BDA0003677263720000031
in the formula, phi k Indicating the number of microspheres under the kth visual field in the current microchannel, and indicating the average number of microspheres under all the visual fields in the current microchannel by mu; v k Represents the volume corresponding to the kth field of view in the current microchannel, and n represents the number of field of view regions of the current microchannel. Coefficient of uniformity gamma i Removing all video samples corresponding to the micro-channels smaller than or equal to 0.8; and then, removing the video samples of which the number of the microspheres in the residual video samples is not within the preset interval.
Taking out the reserved video samples frame by frame to obtain a picture set; and randomly selecting a plurality of sample images in the picture set as a data set.
Preferably, the specific process of labeling the data set in step 3 is as follows:
3-1. Select several local images in the data set that contain bacteria and microspheres at different degrees of focus; each local image contains only one E. coli cell or one microsphere. For a local image containing a microsphere, intercept a straight line through the center of the microsphere and compute the SNR of each point on the line with formula (2); compute the range (maximum minus minimum) of the SNR along the transect, and select the microsphere images whose SNR range is greater than 1 as microspheres in the focused state.

For a local image containing E. coli, likewise intercept a straight line through the center of the bacterium and compute the SNR of each point on the line with formula (2); compute the SNR range along the transect, and select the E. coli images whose SNR range is greater than 0.15 as E. coli in the focused state.

SNR = g_i / StdDev    (2)

where SNR denotes the signal-to-noise ratio of a point on the transect, g_i denotes the gray value of the point, and StdDev denotes the standard deviation of the background.
And 3-2, manually labeling all images in the data set by workers according to the shape characteristics of the bacteria and the microspheres in the focusing state.
Preferably, the microchannel has a length of 40mm, a width of 0.5mm and a height of 0.1mm.
Preferably, the microspheres are polystyrene microspheres with the diameter of 2 μm.
Preferably, in step 4, the identification model is a YOLO model. The model loss function adopts a CIoU function.
Preferably, in step 4, in the model training process, data enhancement is performed on the images in the data set by using an image scaling, color space adjustment and Mosaic-8 enhancement method, so as to increase the number of training samples.
Preferably, in step 5-3, the approximate box represents a candidate box having an overlapping area with the feature box of 90% or more.
Preferably, after step 6 is executed, the bacterial change tracking of the target area is performed, and the specific process is as follows: and recording the region of interest and the position and the form of the bacteria in the region of interest according to the three-dimensional image obtained in the step 6.
After the preset time, carrying out image acquisition on the micro-channel again, and detecting the positions of bacteria and microspheres to obtain a new three-dimensional image; and (4) taking the positions of the microspheres in the two three-dimensional images as a positioning benchmark to obtain the growth condition of bacteria in the region of interest.
Preferably, in the microchannel sample, the volume ratio of the E. coli bacterial solution, the microspheres and the agarose is 1:1:2.
The beneficial effects of the invention are:
1. The invention constructs an effective training set for frame-by-frame detection of E. coli and microspheres in a three-dimensional scene and formulates effective labeling rules for different degrees of focus, detecting E. coli and microspheres quickly and accurately while reducing the missed-detection and false-detection rates of microspheres and bacteria and the influence of impurities.
2. Based on the microchannel, the invention detects and counts bacteria and microspheres using the three-dimensional structure, which effectively reduces the bacterial stacking that occurs in two-dimensional-plane detection and aids the positioning and identification of bacteria and microspheres under long-term culture.
3. The invention can count bacteria and microspheres simultaneously; with the microspheres as a reference or aid, samples can be screened effectively and the concentration of the bacterial liquid estimated.
4. The method can reconstruct the detected targets in three-dimensional space, effectively show whether the targets are uniformly distributed in the space, and provide evaluation indexes for sample selection and sample validity. Different focal layers can be selected to position the bacteria or microspheres to be observed.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic representation of microspheres in different states of focus;
FIG. 3 is a schematic view of bacteria in different states of focus;
FIG. 4 is a schematic view of a microchannel;
FIG. 5 is a flow chart of object location in the present invention;
FIG. 6 is a schematic diagram of a three-dimensional image obtained by performing three-dimensional reconstruction according to the present invention;
FIG. 7 is a diagram of the effect of template matching detection;
FIG. 8 is a graph showing the effect of the YOLO assay in the present invention.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
A deep-learning-based method for three-dimensional positioning of E. coli and microspheres in a microchannel is used to measure the number and positions of E. coli in an E. coli liquid within the microchannel. As shown in FIG. 4, the microchannel containing the E. coli liquid is 40 mm long, 0.5 mm wide and 0.1 mm high; polystyrene microspheres with a diameter of 2 μm are added to the microchannel.
As shown in FIG. 1, the three-dimensional localization method of Escherichia coli and microspheres under deep learning-based microchannel comprises the following steps:
step 1, sample preparation
1-1, fully and uniformly mixing 1mL of bacterial liquid, 1mL of microspheres and 2mL of 1% low-melting-point agarose at 37 ℃ by using a constant-temperature magnetic stirrer, and injecting the obtained mixed liquid into a micro-channel by using a micro-pump.
And 1-2, standing the microchannel injected with the mixed solution for 1min at room temperature to solidify the mixed solution in the microchannel.
1-3. Six E. coli liquids of different concentrations were used; sample preparation was repeated five times for each concentration, yielding 30 microchannel samples. In this example, the six concentrations were 1.8×10^8, 4.2×10^8, 4.7×10^8, 6.4×10^8, 7.3×10^8 and 9.9×10^8 cfu/mL.
Step 2, sampling video and splitting video
2-1. The microchannel was videotaped with a biological microscope equipped with a 40× objective lens and a 20× electronic magnification system connected to a 10-megapixel CCD camera. Ten different fields of view were selected at random for each microchannel. During recording, the focus of the microscope was adjusted gradually from the top of the microchannel to its bottom, giving one video sample per field of view; each image frame in the video sample corresponds to a different depth in the microchannel. With 10 fields of view for each of the 30 microchannel samples, 300 video samples were obtained.
2-2. Compute the uniformity coefficient γ_i, i = 1, 2, ..., 30, of each of the 30 microchannels with formula (1).

[Formula (1) — reproduced as an image in the original (BDA0003677263720000051); it defines γ_i in terms of the quantities below.]

In the formula, φ_k denotes the number of microspheres in the k-th field of view of the current microchannel; μ denotes the average number of microspheres over all fields of view of the current microchannel; V_k denotes the volume corresponding to the k-th field of view; and n denotes the number of field-of-view regions of the current microchannel. All video samples corresponding to microchannels with uniformity coefficient γ_i ≤ 0.8 were removed; then, video samples in which the number of microspheres fell outside the interval [150, 300] were removed. E. coli is dispersed more uniformly in the retained video samples.

The retained video samples were split frame by frame to obtain a picture set, and 400 sample images were selected at random from the picture set as the data set. The data set was split into a training set and a validation set at a ratio of 0.8: the training set contains 320 images with a total of 1952 bacterial samples and 2701 microsphere samples; the validation set contains 80 images with a total of 506 bacterial samples and 729 microsphere samples.
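Because formula (1) appears only as an image in the original, its exact form cannot be reproduced here. The sketch below therefore assumes one common definition of a uniformity coefficient — one minus the coefficient of variation of the per-field microsphere density φ_k/V_k — combined with the γ_i > 0.8 and [150, 300] count filters described above; the function names and the assumed formula are illustrative, not the patent's.

```python
import statistics

def uniformity_coefficient(phi, volumes):
    """Assumed form of formula (1): 1 minus the coefficient of variation
    of the per-field microsphere density phi_k / V_k.
    phi: microsphere count per field of view; volumes: field-of-view volumes."""
    density = [p / v for p, v in zip(phi, volumes)]
    mean_d = statistics.mean(density)
    return 1.0 - statistics.pstdev(density) / mean_d

def keep_sample(phi, volumes, count_range=(150, 300)):
    """Apply the two screening rules from step 2-2."""
    if uniformity_coefficient(phi, volumes) <= 0.8:
        return False  # microchannel not uniform enough
    # every field's microsphere count must lie in the preset interval
    return all(count_range[0] <= p <= count_range[1] for p in phi)
```

With perfectly uniform fields the coefficient is 1.0 and the sample is kept; strongly uneven fields push the coefficient below the 0.8 cutoff.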
Step three, labeling the data set
3-1. As shown in FIGS. 2 and 3, select several local images in the data set that contain bacteria and microspheres at different degrees of focus; each local image contains only one E. coli cell or one microsphere. For a local image containing a microsphere, intercept a straight line through the center of the microsphere, compute the SNR of each point on the line with formula (2), and compute the range of the SNR along the transect (Table 1); the states with an SNR range greater than 1 are selected as the focused state of the microsphere.
Similarly, for a local image containing E. coli, intercept a straight line through the center of the bacterium, compute the SNR of each point on the line with formula (2), and compute the SNR range along the transect (Table 2); the states with an SNR range greater than 0.15 are selected as the focused state of the bacterium. The focused states of the bacteria and microspheres are labeled.
SNR = g_i / StdDev    (2)

where SNR denotes the signal-to-noise ratio of a point on the transect, g_i denotes the gray value of the point, and StdDev denotes the standard deviation of the background.
TABLE 1: SNR range of microspheres in different focus states
[Table 1 — reproduced as an image in the original (BDA0003677263720000062); not rendered here.]
TABLE 2: SNR range of E. coli in different focus states
[Table 2 — reproduced as an image in the original (BDA0003677263720000063); not rendered here.]
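The labeling rule of step 3-1 — compute the per-point SNR along a transect through the target center and threshold the SNR range — can be sketched as follows. The inputs (a gray-value profile along the transect and a patch of background pixels) and the function names are illustrative; the SNR is taken as gray value divided by background standard deviation, per the symbol definitions accompanying formula (2).

```python
import statistics

def snr_range(profile, background):
    """SNR range along a transect; in-focus targets show a larger range.
    profile: gray values sampled along a line through the target center.
    background: gray values from a target-free region of the image."""
    std_bg = statistics.pstdev(background)
    snr = [g / std_bg for g in profile]  # formula (2): SNR = g_i / StdDev
    return max(snr) - min(snr)

def is_focused(profile, background, threshold):
    """threshold: 1.0 for microspheres, 0.15 for E. coli (per the labeling rule)."""
    return snr_range(profile, background) > threshold
```

A sharp, high-contrast profile through a focused target yields a large SNR range, while a defocused target's flat profile falls below the threshold.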
And 3-2, manually labeling all images in the data set by workers according to the shape characteristics of local images of the bacteria and the microspheres in the focusing state.
And 4, step 4: model training
All data-set images are adaptively scaled to 1280×1280 to match the standard input size set for the YOLO model, and the model is trained for 300 rounds to obtain a recognition model that can identify the bacteria and microspheres in an image. During training, data enhancement by image scaling, color-space adjustment and the Mosaic-8 method is applied to the images to increase the number of training samples. The model loss function is the CIoU function.
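The CIoU loss named above is the standard Complete-IoU formulation: the IoU penalized by the normalized distance between box centers and by an aspect-ratio consistency term. A minimal sketch for two boxes in the [x_l, y_l, x_r, y_r] form used in step 5:

```python
import math

def ciou(box1, box2, eps=1e-7):
    """Complete IoU between two boxes given as (x_l, y_l, x_r, y_r)."""
    # intersection area
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # squared distance between box centers
    rho2 = ((box1[0] + box1[2] - box2[0] - box2[2]) ** 2 +
            (box1[1] + box1[3] - box2[1] - box2[3]) ** 2) / 4.0
    # squared diagonal of the smallest enclosing box
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(w2 / (h2 + eps)) -
                              math.atan(w1 / (h1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v

def ciou_loss(box1, box2):
    return 1.0 - ciou(box1, box2)
```

Identical boxes give a CIoU near 1 (loss near 0); disjoint boxes give a negative CIoU because the center-distance penalty dominates.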
Step 5, target positioning
5-1. Inject into the microchannel a tested bacterial solution mixed with a microsphere solution of concentration 2×10^8 and a 1% low-melting-point agarose solution; then focus and photograph at different depths of the microchannel to obtain tested images of the microchannel at different depths. Detect each tested image with the recognition model trained in step 4, obtaining a candidate box [x_l, y_l, x_r, y_r] for the target detection area of every bacterium and microsphere in the tested image, together with its confidence.
5-2. Sort the tested images by focal position from top to bottom to obtain a tested image set F, and create a candidate-box visit-state set T_i for each image frame F_i in F.
5-3. As shown in FIG. 5, perform the identification-and-deduplication operation on all unvisited candidate boxes k of all image frames in order, as follows:
Select an image frame F_i. If F_i has no unvisited candidate box k and F_i is the last frame, the identification-and-deduplication operation is complete; proceed directly to step 6.
If F_i has no unvisited candidate box k and F_i is not the last frame, continue the identification-and-deduplication operation with the unvisited candidate boxes of the next image frame.
If F_i has an unvisited candidate box k, take that candidate box as the feature box, create a confidence list, and proceed as follows:
Mark the feature box as visited in the visit-state set of its frame and add its confidence to the confidence list. Starting from the current image frame F_i, perform approximate-box identification frame by frame downward until the last image frame has been processed or an image frame contains no approximate box of the feature box; an approximate box is a candidate box whose overlapping area with the feature box is 90% or more. Mark all approximate boxes of the feature box as visited and add their confidences to the confidence list. Take the feature box or approximate box with the highest confidence in the list as the unique coordinate of the target point, and put that coordinate into the result set so that no target is counted twice.
When all candidate boxes are marked as visited, the identification-and-deduplication operation over all image frames is complete; the result set then contains all E. coli and microspheres in the tested microchannel, with the E. coli distinguished from the microspheres.
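The walk described in step 5-3 — take each unvisited candidate box as a feature box, chain approximate boxes downward through the frames, and keep the highest-confidence box per chain — might be sketched as below. The data layout (per-frame lists of dicts) is an assumption, and "overlapping area of 90% or more" is interpreted here as the overlapped fraction of the feature box's area.

```python
def overlap_fraction(a, b):
    """Fraction of box a's area overlapped by box b; boxes are (x_l, y_l, x_r, y_r)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area = (a[2] - a[0]) * (a[3] - a[1])
    return ix * iy / area if area > 0 else 0.0

def deduplicate(frames, thresh=0.9):
    """frames: per-depth lists of detections {'box': (...), 'conf': float},
    ordered top to bottom. Returns one (frame_index, box, conf) per target."""
    visited = [[False] * len(f) for f in frames]
    results = []
    for i, frame in enumerate(frames):
        for k, det in enumerate(frame):
            if visited[i][k]:
                continue
            visited[i][k] = True
            best = (i, det['box'], det['conf'])
            ref = det['box']
            # walk downward, chaining approximate boxes until a frame has none
            for j in range(i + 1, len(frames)):
                found = False
                for m, cand in enumerate(frames[j]):
                    if not visited[j][m] and overlap_fraction(ref, cand['box']) >= thresh:
                        visited[j][m] = True
                        found = True
                        if cand['conf'] > best[2]:
                            best = (j, cand['box'], cand['conf'])
                        ref = cand['box']
                        break
                if not found:
                    break
            results.append(best)
    return results
```

A target seen in several consecutive focal frames thus contributes a single result (its most confident detection), while spatially distinct boxes remain separate.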
Step 6: three-dimensional reconstruction
And (5) constructing a three-dimensional image containing all the target points according to the coordinates of all the target points in the result set obtained in the step (5), as shown in fig. 6.
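Given the deduplicated result set, a three-dimensional position can be assembled for each target point by taking the box center as (x, y) and the frame index times the focal step as z. The sketch below assumes the axial step Δz between consecutive frames is known; the function name is illustrative.

```python
def reconstruct_3d(detections, dz_um):
    """detections: (frame_index, (x_l, y_l, x_r, y_r)) tuples from the result set.
    dz_um: assumed axial step between consecutive focal frames, in micrometres.
    Returns (x_c, y_c, z) coordinates for each target point."""
    points = []
    for frame_idx, (xl, yl, xr, yr) in detections:
        points.append(((xl + xr) / 2.0, (yl + yr) / 2.0, frame_idx * dz_um))
    return points
```

The resulting point cloud can then be rendered as a 3D scatter plot like the one shown in FIG. 6.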
And 7: target area bacterial change tracking
And 7-1, recording the region of interest and the position and the form of the bacteria in the region of interest according to the three-dimensional image obtained in the step 6.
7-2, after the bacteria grow for a period of time, carrying out image acquisition and detection on the shape of the bacteria and the position of the microspheres again according to the methods in the steps 5 and 6, wherein accurate z-axis position information cannot be obtained through manual layer-by-layer scanning detection, namely when carrying out image acquisition on the same sample again, each frame cannot correspond to the last image acquisition one by one, so that the growth conditions of the bacteria in the z-axis direction at two different moments cannot be distinguished. The microspheres are used as positioning references, and the positions of the bacteria obtained by two image acquisitions are connected, so that the growth and movement of each bacteria in the z-axis direction between the two image acquisitions are obtained. Since the microspheres are in the shape of a solid sphere, the length in the z-axis is the diameter of the microsphere, regardless of how it rotates in three dimensions. The image frame from the appearance of each microsphere to the disappearance of the microsphere was taken as a 2 μm observation layer (microsphere diameter), and all observation layers were obtained from the relative positions between the microspheres. When image acquisition and detection of the position of the bacteria and the microspheres are carried out again, the bacteria position in the region of interest is firstly positioned through the microspheres, and then the growth condition of the bacteria on the z axis is obtained through the relative position of the observation layer in the region of interest.
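As a minimal sketch of the microsphere-based z referencing described above, suppose each microsphere's detection span (first to last frame in the focal stack) has been recorded; a detection at a given focal frame can then be attributed to the 2 μm observation layers that cover that frame. The function and variable names (`sphere_spans`, `locate_layer`) are our own, not the patent's.

```python
def locate_layer(sphere_spans, frame_idx):
    """sphere_spans: dict mapping microsphere id -> (first_frame, last_frame)
    in the top-to-bottom focal stack; each span is one 2-um observation
    layer. Returns the ids of the layers that cover the given focal frame,
    so a bacterium detected there can be positioned relative to them."""
    return [sid for sid, (f0, f1) in sphere_spans.items()
            if f0 <= frame_idx <= f1]
```

Because the layer assignment depends only on frame spans relative to the microspheres, it stays valid even when a second acquisition of the same sample starts at a different focal depth.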
To verify that the method outperforms traditional machine-vision approaches represented by template matching, template matching was compared against the proposed method. One frame was drawn at random from the experimental data, and target detection was performed with both methods. Fig. 7 shows detection with conventional template matching: because the microspheres have a fairly regular structure, most of them are detected, but there are clear cases of missed and false detections; for the bacteria, only targets whose morphology matches the template are detected, so accurate detection of all bacteria is not achieved. Fig. 8 shows the detection results of the YOLO model: in the enlarged view of a local region, bacteria and microspheres are marked with rectangular boxes of different colors together with their confidences; no false detections occur for the defocused microspheres in the field of view, while the bacteria appearing in the field of view are detected accurately. Comparison of the detection results (Table 3) shows that the YOLO model is far better than conventional template matching in detection quality, with greatly improved efficiency. Although the YOLO method also misses some detections, those targets are detected in the preceding or following frame owing to their different focus states, and are effectively marked in step 5.
Table 3. Recognition accuracy of the YOLO detection method and the template-matching method (the table itself appears only as an image in the original document).

Claims (10)

1. A deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel, characterized in that the method comprises the following steps:
Step 1. Sample preparation
Mixing Escherichia coli bacterial suspensions of several different concentrations with microspheres and agarose respectively, and injecting the mixtures into microchannels respectively to obtain a plurality of microchannel samples;
Step 2. Sampling and splitting of the sample video
2-1. Taking a plurality of fields of view for each microchannel sample, and acquiring images at different focal depths for each field of view to obtain video samples;
2-2. Extracting a plurality of images from the video samples to form a data set;
Step 3. Labeling the data set: labeling the Escherichia coli and the microspheres in the images of the data set respectively;
Step 4. Training a model: training the model with the labeled data set to obtain a recognition model capable of recognizing the Escherichia coli and the microspheres respectively;
Step 5. Performing target positioning on the Escherichia coli and the microspheres respectively;
5-1. Injecting a bacterial suspension under test, mixed with microspheres and agarose solution, into the microchannel; focusing on and imaging different depths of the microchannel to obtain images under test of the microchannel at different depths; detecting each image under test with the recognition model to obtain the candidate boxes [x_l, y_l, x_r, y_r] corresponding to the bacteria and microspheres in the images, together with their confidences;
5-2. Sorting the images under test by focal position from top to bottom to obtain an image set F under test, and obtaining each image frame F_i in the image set F;
5-3. Performing the identification and deduplication operation on all image frames in order, specifically: taking in turn a candidate box k of image frame F_i that has not been visited as the feature box; marking the feature box as visited; starting from the current image frame F_i, performing approximate-box identification downward frame by frame until the approximate-box identification of the last image frame is complete or an image frame contains no approximate box of the feature box; marking all approximate boxes of the feature box as visited; taking the candidate box with the highest confidence among the feature box and its approximate boxes as the target-point position, and putting the coordinate of the target point into the result set; when all candidate boxes k are marked as visited, the identification and deduplication operation is complete;
step 6: three-dimensional reconstruction
Constructing a three-dimensional image containing all target points from the coordinates of all target points in the result set obtained in step 5.
2. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that the data set in step 2-2 is extracted as follows:
the uniformity coefficient γ_i of each microchannel is
[Formula (1), reproduced only as an image in the original document]
where φ_k denotes the number of microspheres in the k-th field of view of the current microchannel, μ denotes the average number of microspheres over all fields of view of the current microchannel, V_k denotes the volume corresponding to the k-th field of view of the current microchannel, and n denotes the number of field-of-view regions of the current microchannel; all video samples corresponding to microchannels with uniformity coefficient γ_i less than or equal to 0.8 are removed; then, from the remaining video samples, those whose microsphere count is not within a preset interval are removed;
the retained video samples are taken out frame by frame to obtain a picture set, and a number of sample images are selected at random from the picture set as the data set.
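The patent's formula (1) for γ_i is reproduced only as an image. A common uniformity measure consistent with the quantities defined above (per-field microsphere counts φ_k, their mean μ) is one minus the coefficient of variation; the sketch below uses that form purely as an illustrative assumption, not as the patent's actual formula.

```python
from statistics import mean, pstdev

def uniformity(phi):
    """phi: microsphere counts per field of view. Returns 1 - CV, where
    CV = population standard deviation / mean. ASSUMED form: the patent's
    formula (1) is available only as an image. Values near 1 indicate an
    even distribution; per claim 2, samples with gamma <= 0.8 would be
    discarded."""
    mu = mean(phi)
    return 1.0 - pstdev(phi) / mu
```

Under this assumed form, perfectly even counts give γ = 1, and increasingly uneven fields push γ toward (or below) zero, which is consistent with discarding channels at a 0.8 cutoff.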
3. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that the data set in step 3 is labeled as follows:
3-1. Selecting, in the data set, a number of local images containing bacteria and microspheres at different degrees of focus, each local image containing only one Escherichia coli cell or one microsphere; for a local image containing a microsphere, intercepting a straight line through the center of the microsphere and computing the signal-to-noise ratio SNR of each point on the line with formula (2); computing the range of the signal-to-noise ratio along the section line, and selecting the microsphere images whose section-line SNR range is greater than 1 as microspheres in the focused state;
for a local image containing Escherichia coli, intercepting a straight line through the center of the bacterium and computing the signal-to-noise ratio SNR of each point on the line with formula (2); computing the range of the signal-to-noise ratio along the intercepted line, and selecting the Escherichia coli images whose section-line SNR range is greater than 0.15 as Escherichia coli in the focused state;
[Formula (2), reproduced only as an image in the original document]
where SNR denotes the signal-to-noise ratio of a point on the section line, G_i denotes the gray value, and StdDev denotes the standard deviation of the background;
3-2. All images in the data set are labeled manually according to the morphological features of the bacteria and microspheres in the focused state.
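Formula (2) is likewise reproduced only as an image. A conventional point-wise SNR consistent with the symbols listed in claim 3 (gray value G_i, background standard deviation StdDev) divides each gray value's deviation from the mean background by the background standard deviation; the sketch below uses that ASSUMED form and then takes the range (max minus min) along the section line, as used for the focus thresholds of 1 (microspheres) and 0.15 (E. coli).

```python
from statistics import mean, pstdev

def snr_profile(line_gray, background):
    """line_gray: gray values along a line through the particle center.
    background: gray values from a background patch. Point-wise SNR is
    (G_i - mean(background)) / stdev(background) -- an assumed form,
    since the patent's formula (2) appears only as an image."""
    bg_mean, bg_std = mean(background), pstdev(background)
    return [(g - bg_mean) / bg_std for g in line_gray]

def snr_range(line_gray, background):
    """Range of the SNR along the section line; under claim 3 a
    microsphere counts as in focus when this exceeds 1, an E. coli
    cell when it exceeds 0.15."""
    snr = snr_profile(line_gray, background)
    return max(snr) - min(snr)
```

A sharply focused particle produces a strong peak (or dip) against the background, so its SNR range along the section line is large; a defocused one is smeared toward the background level and its range falls below the threshold.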
4. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that the microchannel is 40 mm long, 0.5 mm wide and 0.1 mm high.
5. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that the microspheres are polystyrene microspheres with a diameter of 2 μm.
6. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that in step 4 the recognition model is a YOLO model and the model loss function is the CIoU function.
7. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that during the model training in step 4 the images in the data set are augmented by image scaling, color-space adjustment and Mosaic-8 enhancement to increase the number of training samples.
8. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that in step 5-3 an approximate box is a candidate box whose overlap area with the feature box is 90% or more.
9. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that after step 6 bacterial changes in the target region are tracked as follows: according to the three-dimensional image obtained in step 6, the region of interest and the positions and morphologies of the bacteria within it are recorded;
after a preset time, images of the microchannel are acquired again and the positions of the bacteria and microspheres are detected to obtain a new three-dimensional image; the positions of the microspheres in the two three-dimensional images are used as the positioning reference to obtain the growth of the bacteria in the region of interest.
10. The deep learning-based three-dimensional positioning method for Escherichia coli and microspheres in a microchannel according to claim 1, characterized in that in the microchannel sample the volume ratio of the Escherichia coli bacterial suspension to the microspheres to the agarose is 1.
CN202210625685.3A 2022-06-02 2022-06-02 Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel Active CN114897875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210625685.3A CN114897875B (en) 2022-06-02 2022-06-02 Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel

Publications (2)

Publication Number Publication Date
CN114897875A CN114897875A (en) 2022-08-12
CN114897875B true CN114897875B (en) 2022-11-11

Family

ID=82727054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210625685.3A Active CN114897875B (en) 2022-06-02 2022-06-02 Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel

Country Status (1)

Country Link
CN (1) CN114897875B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598160A (en) * 2020-05-14 2020-08-28 腾讯科技(深圳)有限公司 Training method and device of image classification model, computer equipment and storage medium
WO2022095514A1 (en) * 2020-11-06 2022-05-12 北京迈格威科技有限公司 Image detection method and apparatus, electronic device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Study on method of bacteria image recognition; Hong-wei Shi et al.; 2011 4th International Congress on Image and Signal Processing; 2011-12-12; pp. 273-277 *
Holographic fluorescent particle image three-dimensional velocimetry system applied to micro flow fields; Luo Rui et al.; Journal of Engineering Thermophysics; 2006-05-10 (No. 03); pp. 135-137 *


Similar Documents

Publication Publication Date Title
CN102782561B (en) Carry out the system and method for the microexamination of time correlation to biologic artifact
JP4266813B2 (en) A robust method for detecting and quantifying stains in histological specimens based on a physical model of stain absorption
JP6845221B2 (en) Methods and systems for automated microbial colony counting from samples streaked on plate medium
CN105143850B (en) Autofocus system and method for the particle analysis in blood sample
US7796815B2 (en) Image analysis of biological objects
DK2973397T3 (en) Tissue-object-based machine learning system for automated assessment of digital whole-slide glass
US8682050B2 (en) Feature-based registration of sectional images
JP6949875B2 (en) Devices and methods for obtaining particles present in a sample
US20160069786A1 (en) An optical system and a method for real-time analysis of a liquid sample
CN110446803A (en) Automatically the cell specified number is collected
US20140139625A1 (en) Method and system for detecting and/or classifying cancerous cells in a cell sample
WO2011055791A1 (en) Device for harvesting bacterial colony and method therefor
CN107460119B (en) Equipment and method for monitoring bacterial growth
CN110234749A (en) Analyze and use the motility kinematics of microorganism
JP2011055734A (en) Device for classifying bacterium and pretreatment device for inspection of bacterium
US20230194848A1 (en) Method And System For Identifying Objects In A Blood Sample
Piccinini et al. Improving reliability of live/dead cell counting through automated image mosaicing
EP2856165A1 (en) Automated detection, tracking and analysis of cell migration in a 3-d matrix system
CN105044108B (en) Micro-array chip arraying quality automatization judgement system and judgment method
CN114897875B (en) Deep learning-based three-dimensional positioning method for escherichia coli and microspheres in micro-channel
JP4160117B2 (en) Methods for testing cell samples
CN110177883A (en) It is tested using the antimicrobial neurological susceptibility of digital micro-analysis art
US8744827B2 (en) Method for preparing a processed virtual analysis plate
Sieracki et al. Enumeration and sizing of micro-organisms using digital image analysis
JP2012502266A (en) Method and apparatus for classification, visualization and search of biological data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant