CN114332058A - Serum quality identification method, device, equipment and medium based on neural network - Google Patents
- Publication number
- CN114332058A (application CN202111680320.2A / CN202111680320A)
- Authority
- CN
- China
- Prior art keywords
- serum
- test tube
- neural network
- image
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Investigating Or Analysing Biological Materials (AREA)
Abstract
The invention discloses a neural network-based serum quality identification method, device, terminal equipment and computer-readable storage medium. A test tube image is obtained by performing image acquisition on a test tube containing a serum sample; the test tube image is input into a preset neural network model so that the model outputs a serum quality identification result for the serum sample, the neural network model having been obtained by convolutional neural network training on test tube images. The invention improves the efficiency of serum quality identification and effectively reduces the influence of the label on the test tube, thereby ensuring the accuracy of the model's serum quality identification.
Description
Technical Field
The invention relates to the technical field of serum quality analysis, in particular to a serum quality identification method and device based on a neural network, terminal equipment and a computer readable storage medium.
Background
With the progress of society, the inspection level and degree of automation of in vitro diagnostic equipment have gradually improved. Before biochemical or immunological detection is performed on serum, in vitro diagnostic equipment such as biochemical analyzers and immunoassay analyzers must first check the sample quality of the serum; only if the result is qualified can the sample be used for further detection. Besides normal blood samples collected in test tubes, blood samples may also include abnormal samples in a hemolytic, lipemic (chylous) or jaundiced state, which must be screened out during the preceding sample quality detection.
Traditionally, abnormal samples were screened during serum quality detection and identification by experienced technicians observing and judging with the naked eye, so judgment standards differed between technicians and the overall efficiency of screening abnormal samples was low. In the prior art, serum quality can also be judged by pre-defining color ranges for serum samples in the normal, hemolytic, lipemic, jaundiced and other states and then extracting the liquid-surface color value of the serum to be detected. However, this method is very susceptible to the label attached to the test tube containing the serum, resulting in color-determination errors.
The above is provided only to assist understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a serum quality identification method, a serum quality identification device, a terminal device and a computer readable storage medium based on a deep neural network, and aims to solve the technical problems that the existing quality identification and judgment method for serum samples contained in test tubes is low in efficiency or is easily influenced by labels on the test tubes to cause errors.
The embodiment of the invention provides a serum quality identification method based on a neural network, which comprises the following steps:
acquiring an image of a test tube containing a serum sample to obtain a test tube image;
inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
Optionally, the method performs convolutional neural network model training through multi-exposure tube images;
the serum quality identification method based on the neural network further comprises the following steps:
acquiring multi-exposure test tube images, and extracting a matrix area belonging to the non-label area of the test tube from the multi-exposure test tube images;
inputting the matrix area into a preset first convolution module for first convolutional neural network model training, and acquiring the feature map output by the first convolution module after the training on the matrix area;
and stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and performing second convolutional neural network model training to obtain a neural network model for performing serum quality identification on the serum sample.
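The two-stage data flow described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the placeholder functions, array sizes and number of exposures are all assumptions, and the real first and second modules would be learned convolutional networks.

```python
# High-level sketch of the training-time data flow: one matrix region per
# exposure goes through a shared first module, the resulting feature maps
# are stacked along the channel axis, and a second module produces class
# scores. Placeholder operations stand in for the learned CNN modules.
import numpy as np

def first_module(region):
    """Placeholder for the first convolution module: returns one 'feature
    map' per matrix region (here, a 2x-downsampled mean over 2x2 blocks)."""
    h, w, _ = region.shape
    return region[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))

def second_module(stacked):
    """Placeholder for the second convolution module plus classifier:
    reduce the stacked maps to four class scores."""
    pooled = stacked.mean(axis=(0, 1))  # global average pooling per channel
    return pooled[:4]                   # four quality classes (assumed)

# One matrix region per exposure, cropped from the tube's non-label area.
regions = [np.random.rand(64, 64, 3) for _ in range(3)]
maps = [first_module(r) for r in regions]   # shared first module
stacked = np.concatenate(maps, axis=2)      # stack along channels
scores = second_module(stacked)

print(stacked.shape, scores.shape)  # (32, 32, 9) (4,)
```

The key design point the sketch preserves is that the first module is applied independently to each exposure's region before stacking, so per-exposure color information is kept separate until the second module fuses it.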
Optionally, the first convolution module and the second convolution module each comprise a plurality of convolution layers and pooling layers, with one pooling layer connected after every two convolution layers except the last two;
the convolution layers in the first convolution module include a plurality of first convolution layers with a stride of 1 and a plurality of second convolution layers with a stride of 2, and within every pair of first convolution layers, the first convolution layer whose output is not connected to a pooling layer is connected to one second convolution layer;
the number of convolution kernels in the two first convolution layers at the end of the first convolution module is smaller than the number of convolution kernels in the other first convolution layers and the second convolution layers.
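One plausible layer schedule consistent with this description is sketched below. The kernel counts, the exact ordering and the input size are illustrative assumptions (the patent does not give concrete numbers), but the schedule keeps the stated constraints: stride-1 and stride-2 convolutions, a pooling layer after pairs of convolutions except the final two, and fewer kernels in the last two stride-1 layers.

```python
# Each entry is (kind, kernel_count, stride); 'same' padding is assumed,
# so only stride-2 convolutions and pooling reduce the spatial size.
layers = [
    ("conv", 64, 1), ("conv", 64, 2),                  # stride-2 conv halves resolution
    ("conv", 64, 1), ("conv", 64, 1), ("pool", None, 2),
    ("conv", 32, 1), ("conv", 32, 1),                  # fewer kernels at the end, no pool
]

def output_size(size, layers):
    """Propagate a square input size through the schedule."""
    for kind, _, stride in layers:
        size = -(-size // stride)  # ceiling division for strided layers
    return size

print(output_size(64, layers))  # 16
```

Tracking the spatial size this way makes it easy to check that the stacked feature maps fed into the second module have a consistent resolution regardless of which downsampling layers are used.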
Optionally, a full connection layer and a logistic regression layer are connected to the end of the second convolution module, and the step of inputting the stacked feature maps into a preset second convolution module to perform second convolution neural network model training includes:
stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and acquiring a new feature map output by the second convolution module after the feature maps are processed on the basis of the plurality of convolution layers and the pooling layer;
inputting the new feature map into the full-connection layer for feature classification to obtain a quality category of serum, wherein the quality category comprises: normal, hemolysis, lipemia and jaundice;
inputting the new feature map into the logistic regression layer to calculate a probability value for each quality category, the probability value being used to determine the corresponding quality grade when the quality category is hemolysis, lipemia or jaundice.
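A minimal sketch of this classification head is given below: a fully connected layer maps the flattened final feature map to four class scores, and a softmax (the usual realization of a logistic regression output layer) turns them into probabilities. The weights, feature size and random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["normal", "hemolysis", "lipemia", "jaundice"]

features = rng.normal(size=128)        # flattened final feature map (assumed size)
W = rng.normal(size=(4, 128)) * 0.01   # fully connected layer weights
b = np.zeros(4)

scores = W @ features + b                      # one score per quality class
probs = np.exp(scores) / np.exp(scores).sum()  # softmax probabilities

predicted = CLASSES[int(np.argmax(probs))]
print(round(float(probs.sum()), 6))  # probabilities sum to 1.0
```

The per-class probability is what allows grading: for an abnormal class such as hemolysis, a higher probability can be mapped to a more severe quality grade, as the description suggests.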
Optionally, the neural network-based serum quality identification method further includes:
and assigning a weight to each feature map and multiplying each feature map by its assigned weight before executing the step of inputting the stacked feature maps into a preset second convolution module.
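The weighting step can be sketched as below. The scalar-per-feature-map form and the fixed weight values are assumptions made for illustration; in practice the weights could be learned parameters.

```python
import numpy as np

# One feature map per exposure (channels, height, width); constant values
# here so the effect of the weights is easy to see.
feature_maps = [np.ones((16, 8, 8)) * v for v in (1.0, 2.0, 3.0)]
weights = [0.5, 0.3, 0.2]  # one scalar weight per feature map (assumed form)

weighted = [w * f for w, f in zip(weights, feature_maps)]
stacked = np.concatenate(weighted, axis=0)  # stack along the channel axis

print(stacked.shape)  # (48, 8, 8)
```

Weighting before stacking lets the model emphasize the exposures whose color information is most discriminative, rather than treating all exposures equally.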
Optionally, the step of extracting a matrix area belonging to the tube non-label area from the multi-exposed tube image includes:
performing serum sample analysis on the multi-exposure test tube image to calculate the highest position and the lowest position of the serum liquid level of the serum sample contained in the test tube in the multi-exposure test tube image;
and extracting a matrix area with a preset size from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level.
Optionally, the method performs image acquisition on the test tube through a preset industrial camera;
the step of extracting a matrix area with a preset size from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level comprises the following steps:
determining a serum image area from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level;
and intercepting the matrix region from the serum image region according to the preset size, wherein the preset size is smaller than the size of the serum image region.
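The cropping step can be sketched as follows. The image size, liquid-level rows, left edge of the non-label area and the preset crop size are all illustrative assumptions; the point is that the crop lies strictly inside the serum image region bounded by the highest and lowest serum levels.

```python
import numpy as np

image = np.zeros((480, 200, 3), dtype=np.uint8)  # one test tube image (assumed size)
top_row, bottom_row = 120, 300   # highest / lowest serum liquid level (assumed)
left_col = 60                    # left edge of the non-label area (assumed)
crop_h, crop_w = 96, 64          # preset size, smaller than the serum region

# Centre the crop vertically inside the serum image region.
row0 = top_row + (bottom_row - top_row - crop_h) // 2
region = image[row0:row0 + crop_h, left_col:left_col + crop_w]

print(region.shape)  # (96, 64, 3)
```

Keeping the preset size smaller than the serum image region guarantees the matrix region contains only serum pixels, excluding the gel, blood clot and label areas.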
In addition, in order to achieve the above object, the present invention also provides a neural network-based serum quality identification apparatus, including:
the test tube image acquisition module is used for performing image acquisition on a test tube containing a serum sample to obtain a test tube image;
and the serum quality identification module is used for inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
When the serum quality recognition device based on the neural network operates each functional module, the steps of the serum quality recognition method based on the neural network are realized.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor and a neural network-based serum quality identification program stored on the memory and executable on the processor, the neural network-based serum quality identification program, when executed by the processor, implementing the steps of the neural network-based serum quality identification method of the present invention as described above.
Further, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a neural network-based serum quality identification program, which when executed by a processor, implements the steps of the neural network-based serum quality identification method of the present invention as described above.
The serum quality identification method, the serum quality identification device, the terminal equipment and the computer readable storage medium based on the neural network are provided by the embodiment of the invention, and the test tube image is obtained by carrying out image acquisition on the test tube containing the serum sample; inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
Compared with existing ways of identifying and judging the quality of serum in a test tube, the present invention collects test tube images of tubes containing serum in advance and uses them to train a convolutional neural network model. In the actual serum quality detection process, only an image of the test tube containing the serum sample needs to be acquired and input into the neural network model, which then outputs a serum quality identification result for the serum sample. The quality of the serum sample can thus be identified from a suitably collected test tube image. Compared with naked-eye judgment of serum quality, identification efficiency is greatly improved; furthermore, because suitable multi-exposure test tube images are used to train the neural network model, the influence of the label on the test tube is effectively reduced and the accuracy of the model's serum quality identification is ensured.
Drawings
Fig. 1 is a schematic structural diagram of a hardware operating environment of a terminal device according to an embodiment of the present invention;
FIG. 2 is a schematic view of a process involved in tube image acquisition according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 3 is a schematic diagram of a system architecture according to an embodiment of the present invention;
FIG. 4 is a schematic view of an application scenario involved in an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 5 is a schematic view of another application scenario involved in an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 6 is a schematic view of another application scenario involved in an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 7 is a schematic view of an application flow involved in an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 8 is a schematic diagram of a scene involved in calculating the height of a tube in a real-time image according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 9 is a schematic view of another scenario of the neural network-based serum quality identification method according to an embodiment of the present invention, which involves calculating the height of a test tube in a real-time image;
FIG. 10 is a schematic view of another scenario of the neural network-based serum quality identification method according to an embodiment of the present invention, which involves calculating the height of a test tube in a real-time image;
FIG. 11 is a binarized image according to an embodiment of the serum quality identification method based on neural network of the present invention;
FIG. 12 is a diagram illustrating a non-label region area calculation formula according to an embodiment of the present invention;
fig. 13 is a schematic view of an application scenario of an embodiment of the neural network-based serum quality identification method according to the present invention, which involves rotating a test tube by a preset angle;
FIG. 14 is a schematic diagram illustrating an embodiment of a neural network-based serum quality identification method according to the present invention, which relates to a requirement of test tube labeling;
FIG. 15 is a schematic view of a scenario involving calculation of a preset angle of rotation of a test tube according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 16 is a schematic flow chart of the neural network-based serum quality identification method according to an embodiment of the present invention, which relates to the analysis of serum samples;
FIG. 17 is a schematic diagram of an application scenario involving capturing an ROI area of an image according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 18 is a schematic diagram of an application scenario involving determining a liquid level of a serum sample according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 19 is a diagram of a test tube with or without gel according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 20 is a diagram illustrating the mean HSV three-channel values according to an embodiment of the neural network-based serum quality identification method of the present invention;
fig. 21 is a schematic view of an application scenario of determining a serum liquid level by a single pixel line segment according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 22 is a schematic flow chart illustrating an embodiment of a neural network-based serum quality identification method according to the present invention;
FIG. 23 is a schematic structural diagram of a convolutional neural network model according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 24 is a schematic structural diagram of a first convolution module in a convolution neural network model according to an embodiment of the neural network-based serum quality identification method of the present invention;
FIG. 25 is a schematic structural diagram of a second convolution module in a convolution neural network model according to an embodiment of the neural network-based serum quality identification method of the present invention;
fig. 26 is a block diagram of a neural network-based serum quality recognition apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring an image of a test tube containing a serum sample to obtain a test tube image; inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
Traditionally, abnormal samples were screened during serum quality detection and identification by experienced technicians observing and judging with the naked eye, so judgment standards differed between technicians and the overall efficiency of screening abnormal samples was low. In the prior art, serum quality can also be judged by pre-defining color ranges for serum samples in the normal, hemolytic, lipemic, jaundiced and other states and then extracting the liquid-surface color value of the serum to be detected. However, this method is very susceptible to the label attached to the test tube containing the serum, resulting in color-determination errors.
The present invention provides a solution: test tube images of tubes containing serum are collected in advance and used to train a convolutional neural network model. In the actual serum quality detection process, only an image of the test tube containing the serum sample needs to be acquired and input into the neural network model, which then outputs a serum quality identification result for the serum sample. The quality of the serum sample can thus be identified from a suitably collected test tube image. Compared with naked-eye judgment of serum quality, identification efficiency is greatly improved; furthermore, because suitable multi-exposure test tube images, with color information under different exposures, are used to train the neural network model, the influence of the label background on the test tube is effectively reduced and the accuracy of the model's serum quality identification is ensured.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal device in the embodiment of the present invention may be any of various terminal devices configured to perform test tube image acquisition, such as a terminal server or a PC, and may even be a mobile terminal device such as a smart phone or tablet computer, or a fixed terminal device.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a neural network-based serum quality identification program.
In the terminal device shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke the neural network-based serum quality identification program stored in the memory 1005 and perform the following operations:
acquiring an image of a test tube containing a serum sample to obtain a test tube image;
inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
Further, the processor 1001 may be configured to invoke a neural network-based serum quality identification program stored in the memory 1005 and perform the following operations:
acquiring multi-exposure test tube images, and extracting a matrix area belonging to the non-label area of the test tube from the multi-exposure test tube images;
inputting the matrix area into a preset first convolution module to perform first convolution neural network model training, and acquiring a characteristic diagram output by the first convolution module after performing first convolution neural network model training on the matrix area;
and stacking the characteristic graphs, inputting the stacked characteristic graphs into a preset second convolution module, and performing second convolution neural network model training to obtain a neural network model for performing serum quality identification on the serum sample.
Further, the first convolution module and the second convolution module each comprise a plurality of convolution layers and pooling layers, with one pooling layer connected after every two convolution layers except the last two;
the convolution layers in the first convolution module include a plurality of first convolution layers with a stride of 1 and a plurality of second convolution layers with a stride of 2, and within every pair of first convolution layers, the first convolution layer whose output is not connected to a pooling layer is connected to one second convolution layer;
the number of convolution kernels in the two first convolution layers at the end of the first convolution module is smaller than the number of convolution kernels in the other first convolution layers and the second convolution layers.
Further, the end of the second convolution module is connected to the fully-connected layer and the logistic regression layer, and the processor 1001 may be configured to call the neural network-based serum quality identification program stored in the memory 1005, and perform the following operations:
stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and acquiring a new feature map output by the second convolution module after the feature maps are processed on the basis of the plurality of convolution layers and the pooling layer;
inputting the new feature map into the full-connection layer for feature classification to obtain a quality category of serum, wherein the quality category comprises: normal, hemolysis, lipemia and jaundice;
inputting the new feature map into the logistic regression layer to calculate a probability value for each quality category, the probability value being used to determine the corresponding quality grade when the quality category is hemolysis, lipemia or jaundice.
Further, the processor 1001 may be configured to invoke a neural network-based serum quality identification program stored in the memory 1005 and perform the following operations:
and assigning a weight to each feature map and multiplying each feature map by its assigned weight before executing the step of inputting the stacked feature maps into a preset second convolution module.
Further, the processor 1001 may be configured to invoke a neural network-based serum quality identification program stored in the memory 1005 and perform the following operations:
performing serum sample analysis on the multi-exposure test tube image to calculate the highest position and the lowest position of the serum liquid level of the serum sample contained in the test tube in the multi-exposure test tube image;
and extracting a matrix area with a preset size from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level.
Further, the processor 1001 may be configured to invoke a neural network-based serum quality identification program stored in the memory 1005 and perform the following operations:
determining a serum image area from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level;
and intercepting the matrix region from the serum image region according to the preset size, wherein the preset size is smaller than the size of the serum image region.
Based on the hardware structure, the invention provides various embodiments of the serum quality identification method based on the neural network.
It should be noted that the neural network-based serum quality identification method of the present invention can be applied to a system architecture for test tube image acquisition as shown in fig. 3. The system architecture includes a background plate, a rotating device, a shooting system, a control unit and a data processing unit, wherein the rotating device can rotate test tubes by 0-360°, the shooting system performs image acquisition on the test tubes, the control unit controls the rotating device and the shooting system in real time, and the data processing unit processes and stores the captured test tube images. Specifically, when the control unit controls the rotating member to perform the first or second rotating operation on a test tube and a test tube is detected on the tube rack, the gripping member of the rotating device grips the cap end of the test tube. An image is then captured by the shooting system and transmitted to the data processing unit for a series of processing steps, such as test tube location, non-label area calculation, threshold judgment and rotation angle calculation. The control unit then performs corresponding operations according to the output of the data processing unit, for example controlling the rotating member to rotate the test tube to a predetermined angle so that the finally collected image shows the largest label gap on the tube, or issuing an alarm for unsatisfactory test tube images or test tubes.
Referring to fig. 22, a specific embodiment of the neural network-based serum quality identification method of the present invention is described below, taking as an example a terminal device in place of the system architecture for test tube image acquisition shown in fig. 3. In this embodiment, the neural network-based serum quality identification method of the present invention includes:
step S10, acquiring images of test tubes containing serum samples to obtain test tube images;
in this embodiment, when the terminal device actually performs real-time serum quality identification on a serum sample contained in a test tube, it first performs image acquisition on the test tube to obtain a suitable test tube image.
It should be noted that, in this embodiment, since different exposure times of the industrial camera greatly affect the apparent color of the serum, when the terminal device acquires the test tube image with the industrial camera, it may acquire test tube images at multiple exposure times by adjusting the exposure time.
Specifically, for example, after rotating a test tube containing a serum sample whose quality is to be identified so that the midpoint of the non-label area of the test tube faces the industrial camera, the terminal device sets an exposure time T1 (at which the color of the serum/plasma captured by the industrial camera is closest to its actual color) and acquires the test tube image P1 in which the area of the non-label area is the largest.
In addition, during convolutional neural network model training, the terminal device can also perform image acquisition on the test tube containing the serum sample according to a preset exposure time rule to obtain multiple test tube images.
Specifically, after the terminal device rotates the test tube so that the midpoint of the non-label area faces the industrial camera, in addition to capturing the test tube image P1 at the above exposure time T1, it further sets an exposure time T2 shorter than T1 and captures an image P2, and sets an exposure time T3 longer than T1 and captures an image P3.
Step S20, inputting the test tube image into a preset neural network model for the neural network model to output a serum quality recognition result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training through the test tube image.
In this embodiment, after acquiring a suitable test tube image, the terminal device inputs the test tube image into a neural network model obtained by performing convolutional neural network model training on multi-exposure test tube images, so that the neural network model performs its computation on the test tube image and then outputs a serum quality recognition result for the serum sample contained in the corresponding test tube.
Further, in a possible embodiment, the serum quality identification method based on the neural network performs convolutional neural network model training through multi-exposure tube images, that is, obtains the multi-exposure tube images, and performs convolutional neural network model training according to the multi-exposure tube images to obtain a neural network model for performing serum quality identification on a serum sample.
In this embodiment, before actually performing real-time serum quality identification on a serum sample to be identified in the test tube, the terminal device further performs convolutional neural network model training by acquiring a multi-exposure test tube image, so as to obtain the above neural network model for performing serum quality identification on the serum sample in the test tube in real time.
The serum quality identification method based on the neural network can further comprise the following steps: step S30, performing serum sample analysis on the multi-exposure test tube images to determine corresponding serum indexes, and extracting, based on the serum indexes, matrix regions of a preset size from the non-label area of the test tube in the multi-exposure test tube images.
in this embodiment, in the process of performing convolutional neural network model training using the collected multi-exposure tube image to obtain the neural network model, the terminal device first determines a corresponding serum index by performing serum sample analysis on the multi-exposure tube image, and then extracts a matrix region belonging to a non-label region from the tube image based on the serum index.
The terminal device analyzes the serum sample in the multi-exposure test tube image in advance to calculate the highest and lowest serum level positions of the serum sample contained in the test tube; then, a matrix area of a preset size is extracted from the non-label area of the test tube in the multi-exposure test tube image according to these level positions.
Further, in the present embodiment, specifically, the serum index includes: the highest position of the serum liquid level and the lowest position of the serum liquid level of the serum sample. The step of extracting a matrix area of a preset size from the non-label area of the test tube may include:
determining a serum image area from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level;
and intercepting the matrix region from the serum image region according to the preset size, wherein the preset size is smaller than the size of the serum image region.
It should be noted that, in this embodiment, the preset size is smaller than the size of the serum image region; for example, the preset size may be 32 × 32, in pixels. It should be understood that, based on different design requirements of practical applications, the preset size may be set differently in different feasible embodiments; the neural network-based serum quality identification method of the present invention does not limit the specific value of the preset size, as long as it is smaller than the size of the serum image area in the test tube image.
That is, based on the highest and lowest serum level positions of the serum sample contained in the test tube, obtained by performing the serum sample analysis, the terminal device extracts matrix regions F1, F2 and F3 of a preset size of 32 × 32 at any position within the non-label region of the test tube in the collected multi-exposure test tube images P1, P2 and P3, respectively.
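The matrix-region extraction described above amounts to a simple array crop bounded by the serum level positions and the non-label columns. The function name, argument names and toy image below are illustrative assumptions, not part of the patent:

```python
import numpy as np

def extract_matrix_region(tube_image, level_top, level_bottom, x_left, x_right, size=32):
    """Crop a size x size matrix region from the serum image area.

    The serum image area is bounded vertically by the highest and lowest
    serum level positions and horizontally by the non-label area columns.
    """
    region = tube_image[level_top:level_bottom, x_left:x_right]
    if region.shape[0] < size or region.shape[1] < size:
        raise ValueError("serum image area is smaller than the preset size")
    # take the top-left corner here; any position inside the region is valid
    return region[:size, :size]

# toy 100 x 60 single-channel tube image
img = np.arange(100 * 60).reshape(100, 60)
patch = extract_matrix_region(img, level_top=20, level_bottom=80, x_left=10, x_right=50)
```

In practice the same crop would be applied to each of P1, P2 and P3 to obtain F1, F2 and F3.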
Step S40, inputting the matrix area into a preset first convolution module for first convolution neural network model training, and acquiring a characteristic diagram output by the first convolution module after the first convolution module carries out the first convolution neural network model training aiming at the matrix area;
and step S50, stacking the characteristic graphs, inputting the stacked characteristic graphs into a preset second convolution module for second convolution neural network model training, so as to obtain a neural network model for performing serum quality identification on the serum samples.
It should be noted that, in this embodiment, each of the first convolution module and the second convolution module includes a plurality of convolution layers, with one pooling layer connected after every two convolution layers except the last two convolution layers;
the convolution layers in the first convolution module comprise a plurality of first convolution layers with the step length of 1 and a plurality of second convolution layers with the step length of 2, and in every two first convolution layers, the first convolution layer of which the output end is not connected with the pooling layer is connected with one second convolution layer;
the number of convolution kernels of two first convolution layers at the end of the first convolution module is smaller than the number of convolution kernels of other first convolution layers and the second convolution layer.
Specifically, please refer to the convolutional neural network model shown in fig. 23, the first convolution module of the model shown in fig. 24, and the second convolution module of the model shown in fig. 25. The terminal device first inputs the three extracted matrix region images F1, F2 and F3 into the first convolution module, which specifically comprises 8 convolution layers and 2 pooling layers. The step size of convolution layers conv1 to conv6 is 1, so their output feature maps are the same size as their input feature maps, while the step size of convolution layers conv7 to conv8 is 2, so their output feature maps are half the size of their input feature maps. In the first convolution module, the number of convolution kernels of convolution layers conv5 and conv6 is 20, and the number of convolution kernels of the other convolution layers is 30 (it should be understood that the size of the convolution kernels can be 5 × 5 or 3 × 3, and the specific number of convolution kernels can be adjusted according to the actual situation of the network). In the first convolution module, every two convolution layers are followed by a 2 × 2 maximum pooling layer, and after passing through the pooling layer the feature map is half the size of the input feature map. In addition, convolution layers conv1 and conv3 are connected to convolution layers conv7 and conv8, respectively, and the resulting feature maps are combined with the feature maps after the pooling operation and input into the next convolution layer. The second convolution module is composed of 4 convolution layers and 1 maximum pooling layer, and the number of convolution kernels of each convolution layer is 60. The feature map output by the second convolution module has a size of 4 × 4 × 60.
In the second convolution module, each convolution layer is followed by a BN layer and a ReLU layer, the ReLU layer being used to increase the nonlinear expression capability of the network.
Further, in the step S50, the "stacking the feature maps and inputting the stacked feature maps into a preset second convolution module for second convolutional neural network model training" may include:
stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and acquiring a new feature map output by the second convolution module after the feature maps are processed on the basis of the plurality of convolution layers and the pooling layer;
inputting the new feature map into the full-connection layer for feature classification to obtain a quality category of the serum, wherein the quality categories include: normal, hemolysis, lipemia and jaundice;
inputting the new feature map into the logistic regression layer to calculate a probability value for each quality category, for determining the corresponding quality grade when the quality category is hemolysis, lipemia or jaundice.
Specifically, please refer to the convolutional neural network model shown in fig. 23. The three matrix regions F1, F2 and F3 are processed by the first convolution module, which outputs a feature map of size 8 × 8 × 60 for each; these feature maps are then stacked along the channel direction, and the stacked feature map of size 8 × 8 × 180 is input into the second convolution module for the second convolutional neural network model training to obtain a new feature map.
In addition, in this embodiment, the last part of the convolutional neural network model is the fully-connected layer and the logistic regression (softmax) layer: the new feature map output by the second convolution module is classified through the fully-connected layer to obtain the quality category of the serum sample, such as normal, hemolysis, lipemia or jaundice, and the respective probability values of the quality categories are calculated via the softmax layer to determine the corresponding grade when the quality category is hemolysis, lipemia or jaundice.
It should be noted that, in this embodiment, the fully-connected layer is used to classify features, and in order to prevent the over-fitting phenomenon of the image, a dropout operation is introduced here to randomly delete part of neurons in the convolutional neural network, and in addition, local normalization and data enhancement may be performed to increase robustness. The softmax layer is the final processing step of the entire network model, suitable for solving the multi-class problem, and is output as a probability value for each class. In the network, the serum can be divided into four categories of normal, hemolysis, lipemia and jaundice, and can be further subdivided, such as hemolysis grade 1, hemolysis grade 2 and hemolysis grade N; jaundice grade 1, jaundice grade 2, and jaundice grade N; level 1 lipemia, level 2 lipemia, level 3 lipemia, etc.
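The softmax step described above turns the fully-connected layer's outputs into per-class probabilities. A minimal sketch follows; the logit values and the mapping to class names are hypothetical examples, not values from the patent:

```python
import numpy as np

# the four top-level quality categories described above
CLASSES = ["normal", "hemolysis", "lipemia", "jaundice"]

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical fully-connected-layer output for one tube image
logits = [2.0, 0.1, -1.0, 0.3]
probs = softmax(logits)
predicted = CLASSES[int(np.argmax(probs))]
```

The same mechanism extends to the subdivided grades (hemolysis grade 1..N, etc.) simply by enlarging the class list.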
And the terminal equipment trains the convolutional neural network model by using thousands or tens of thousands of the multi-exposure test tube images, so that the trained neural network model for performing serum quality identification on the serum sample in real time can be obtained.
Further, in a possible embodiment, the serum quality identification method based on a neural network of the present invention may further include:
and distributing weights to the feature maps, multiplying the feature maps by the distributed weights, and then executing the step of inputting the stacked feature maps into a preset second convolution module.
It should be noted that, in this embodiment, since the image P1 is closer to the color seen by the naked eye than the images P2 and P3, weights may be assigned before stacking; for example, the weight of P1 is 2 and the weights of P2 and P3 are both 1. Each feature map is multiplied by its weight before stacking, and the stacked feature maps are then input into the second convolution module for the second convolutional neural network model training process.
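The weighted stacking step can be sketched as a weighted channel concatenation. The feature-map contents and the weight values below follow the example in the text but are otherwise placeholders:

```python
import numpy as np

# feature maps from the first convolution module for images P1, P2, P3;
# 8 x 8 x 60 each, matching the sizes described above
f1 = np.ones((8, 8, 60))
f2 = np.ones((8, 8, 60))
f3 = np.ones((8, 8, 60))

# example weights: P1 is closest to the true color, so it weighs more
weights = {"P1": 2.0, "P2": 1.0, "P3": 1.0}

stacked = np.concatenate(
    [f1 * weights["P1"], f2 * weights["P2"], f3 * weights["P3"]],
    axis=-1,  # stack along the channel direction
)
```

Concatenating three 60-channel maps yields a 180-channel input for the second convolution module.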
In this embodiment, compared with the existing way of identifying and judging the quality of serum in a test tube, the method provided by the invention acquires multi-exposure test tube images of test tubes containing serum in advance and trains the convolutional neural network model with those test tube images to obtain the neural network model. In the actual serum quality detection process, only the test tube containing the serum sample needs to be imaged to obtain a test tube image, which is then input into the neural network model so that the serum quality identification result for the serum sample can be output. Therefore, the quality of the serum sample can be identified and judged based on the collection of a suitable test tube image. Compared with judging serum quality by the naked eye, the serum quality identification efficiency is greatly improved; furthermore, training the neural network model with suitable multi-exposure test tube images effectively avoids the influence of the label on the test tube and ensures the accuracy of the model in identifying serum quality.
Further, referring to fig. 16, in an embodiment of the method for analyzing a serum sample based on a tube image according to the present invention, the step of performing the serum sample analysis on the serum sample contained in the tube by the terminal device, that is, "performing the serum sample analysis on the multi-exposure tube image" may include:
step a, separating the multi-exposure test tube images in an RGB color space, and respectively determining interested regions ROI from each separated channel image and the multi-exposure test tube images;
in this embodiment, after acquiring the tube image meeting the requirement by performing image acquisition on a tube containing a serum sample, the terminal device further separates the tube image into corresponding three-channel images in an RGB color space, and further determines respective corresponding ROI from the three-channel images and the original tube image.
Further, in a possible embodiment, each of the separated channel images includes: b channel image and R channel image, the step a may include:
a1, intercepting a first matrix region within each coordinate from the B channel image according to each coordinate of a preset image region, and determining the first matrix region as a region of interest ROI-B, wherein the preset image region belongs to a non-label region in the test tube image;
a2, cutting a second matrix area within each coordinate from the R channel image according to each coordinate, and determining the second matrix area as a region of interest ROI-R;
step a3, a third matrix area within each coordinate is cut out from the test tube image according to each coordinate, and the third matrix area is determined as a region of interest ROI-P.
Specifically, for example, referring to the application scenario shown in fig. 17, the terminal device separates the acquired test tube image into the three channel images P1_R, P1_G and P1_B in the RGB color space, and cuts out a partial region of each image, i.e., the matrix region within the four points with coordinates (0, H-10), (0, H+10), (M-10, H-10) and (M-10, H+10), setting each cut matrix region as the corresponding region of interest ROI. For example, the partial regions within the four coordinate points are cut out from the B-channel image, the R-channel image and the original test tube image P1, respectively, and named ROI_B, ROI_R and ROI_P1.
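Reading the four coordinates as (x, y) pairs, the crop spans rows H-10..H+10 and columns 0..M-10. A minimal sketch, with a toy image and hypothetical H and M values:

```python
import numpy as np

def crop_roi(channel, H, M):
    """Crop the matrix bounded by (0, H-10), (0, H+10), (M-10, H-10), (M-10, H+10).

    Coordinates are interpreted as (x, y): rows H-10..H+10, columns 0..M-10.
    """
    return channel[H - 10:H + 10, 0:M - 10]

img = np.zeros((200, 120, 3), dtype=np.uint8)   # toy RGB tube image
r, g, b = img[..., 0], img[..., 1], img[..., 2]  # split the RGB channels
roi_b = crop_roi(b, H=150, M=60)   # used later for the serum level
roi_r = crop_roi(r, H=150, M=60)   # used later for the clot level
```

ROI_P1 would be the same crop applied to the original three-channel image.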
B, analyzing sample indexes corresponding to the serum samples according to the ROI, wherein the sample indexes corresponding to the serum samples comprise: the highest position of the serum liquid level and the lowest position of the serum liquid level.
In this embodiment, after determining the respective ROIs from the three channel images and the original test tube image, the terminal device further analyzes each ROI to obtain the corresponding sample indexes, so as to calculate the serum level positions of the serum sample contained in the test tube, and further calculate the serum amount based on the serum level positions.
It should be noted that, in this embodiment, after analyzing and calculating the highest position and the lowest position of the serum liquid level in the test tube, the terminal device may directly calculate the serum amount based on a common volume calculation manner by combining with the known circumference of the test tube.
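One common volume calculation consistent with the above is to treat the serum column as a cylinder whose radius follows from the known tube circumference. The function name and the numeric values are illustrative assumptions:

```python
import math

def serum_volume(circumference_mm, level_top_mm, level_bottom_mm):
    """Approximate serum volume as a cylinder between the highest and
    lowest serum level positions, given the tube circumference C.
    radius = C / (2*pi), volume = pi * radius^2 * height."""
    radius = circumference_mm / (2 * math.pi)
    height = level_bottom_mm - level_top_mm
    return math.pi * radius ** 2 * height  # cubic millimetres

vol = serum_volume(circumference_mm=40.0, level_top_mm=10.0, level_bottom_mm=30.0)
```

This simplification ignores the curved tube bottom, which only matters when the serum column reaches it.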
Further, in one possible embodiment, the sample indicators corresponding to the serum samples include: the highest position of the serum liquid level, the highest position of the blood clot liquid level and the lowest position of the serum liquid level; the step b may further include:
step B1, calculating the highest position of the serum liquid level according to the ROI-B;
step b2, calculating the highest position of the blood level of the blood clot according to the ROI-R;
in the present embodiment, the terminal device analyzes and calculates the highest position of the serum level of the serum sample in the test tube by using the region of interest ROI _ B corresponding to the B-channel image determined in the above process, and analyzes and calculates the highest position of the blood clot level of the serum sample in the test tube by using the region of interest ROI _ R corresponding to the R-channel image determined in the above process.
Specifically, for example, referring to the application scenario shown in fig. 18, the terminal device performs an operation such as averaging or taking the median along the row direction of ROI_B to reduce the ROI_B matrix block to a single pixel line, and then finds the turning point of the line segment, which is the highest serum level position. Similarly, the terminal device averages or takes the median of ROI_R along the row direction to reduce the ROI_R matrix block to a single pixel line, and the turning point found on that line is the highest blood clot level position.
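The collapse-and-find-turning-point step can be sketched as follows; treating the turning point as the largest jump between adjacent rows is one plausible reading, and the toy ROI values are made up:

```python
import numpy as np

def level_from_roi(roi):
    """Collapse the ROI to one value per row (row-direction mean), then take
    the largest jump between adjacent rows as the liquid-level turning point."""
    line = roi.mean(axis=1)            # single pixel value per image row
    jumps = np.abs(np.diff(line))      # change between adjacent rows
    return int(np.argmax(jumps)) + 1   # row index just below the jump

# toy ROI: bright serum on top (200), darker liquid below (50), jump at row 12
roi = np.vstack([np.full((12, 8), 200.0), np.full((8, 8), 50.0)])
level_row = level_from_roi(roi)
```

The same routine applied to ROI_R would yield the clot-level row.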
Step b3, detecting whether gel exists in the test tube according to the ROI-P to obtain a detection result, and determining the lowest position of the serum liquid level according to the detection result.
In this embodiment, the terminal device determines whether a gel exists in the tube containing the serum sample through the region of interest ROI-P corresponding to the original tube image determined in the above process, so as to determine the lowest position of the serum level of the serum sample in the tube based on the detection result of the presence or absence of the gel.
Further, in a possible embodiment, the step b3 may further include:
and after the ROI-P is converted from the RGB color space to a preset color space, averaging the ROI-P along the horizontal direction, and determining whether gel exists in the test tube according to the average value to obtain a corresponding detection result.
In this embodiment, after the terminal device has determined the highest positions of the serum level and the blood clot level of the serum sample contained in the test tube, the terminal device further determines the lowest position of the serum level of the serum sample by determining the presence or absence of the gel in the test tube.
Specifically, refer to the tube with gel shown in the left image and the tube without gel shown in the right image of fig. 19. The terminal device transforms the original region of interest ROI_P image corresponding to the test tube image P1 from the RGB color space to the HSV color space (it may instead be converted into other color spaces such as YUV or HSI based on different design requirements of practical applications), and then obtains the mean value of the three HSV channels of the matrix shown in fig. 20 by taking the mean or median of the region of interest ROI_P matrix along the horizontal direction, wherein whether gel exists in the blood sample can be observed from the second channel.
That is, in the application scenario shown in fig. 21, L1 is the highest liquid level position determined from the ROI_B single-pixel line segment and L2 is the highest blood clot level position determined from the ROI_R single-pixel line segment. If the original test tube image P1 is of a tube without gel, as shown on the left side of fig. 21, the middle line segment between L1 and L2 shows no distinct layering; whereas if the test tube image P1 is of a tube with gel, as shown on the left side of fig. 19, two distinct parts of the middle line segment are clearly visible, wherein the lower-pixel part is the separating agent region and the higher-pixel part is the serum region.
Finally, the terminal device finds the turning point of the middle line segment by traversing the points of the middle line segment between L1 and L2, the turning point being the point at which the difference between the total pixel values on its two sides is largest, and determines the lowest serum level position by checking whether that difference is greater than the threshold value X. That is, if the difference is greater than the threshold value X, the tube contains gel and the lowest serum level position L3 is the turning point; conversely, if the difference is smaller than the threshold value X, the tube does not contain gel and the lowest serum level position L3 is the highest blood clot level position L2.
It should be noted that, in this embodiment, the selecting process of the threshold X is as follows: and taking a plurality of serum samples, and acquiring the average value A1 of the pixels in the serum area and the average value A2 of the pixels in the gel area of each sample, wherein the threshold value X is | A1-A2 |.
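The traversal and threshold test can be sketched directly; the sample means A1 and A2 and the toy line values are hypothetical, while X = |A1 - A2| follows the selection rule stated above:

```python
import numpy as np

def detect_gel(line, threshold_x):
    """Traverse the single-pixel line between L1 and L2, find the point where
    the difference between the mean pixel values of the two sides is largest,
    and compare that difference with the threshold X."""
    best_diff, best_idx = 0.0, None
    for i in range(1, len(line) - 1):
        diff = abs(line[:i].mean() - line[i:].mean())
        if diff > best_diff:
            best_diff, best_idx = diff, i
    return best_diff > threshold_x, best_idx

# threshold X from sampled serum (A1) and gel (A2) pixel means, X = |A1 - A2|
A1, A2 = 180.0, 90.0
X = abs(A1 - A2)

# toy line: 10 serum pixels followed by 10 separating-gel pixels
line = np.array([200.0] * 10 + [80.0] * 10)
has_gel, turning = detect_gel(line, X)
```

When `has_gel` is false, the caller would fall back to L3 = L2 as described above.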
In this embodiment, when the terminal device analyzes a serum sample contained in a test tube for which the serum level and serum amount need to be calculated, a preset industrial camera or other image capture device first performs image acquisition on the test tube containing the serum sample to obtain an optimal test tube image meeting the requirements; the test tube image is then separated into the corresponding three channel images in the RGB color space, and the corresponding ROIs are determined from the three channel images and the original test tube image; finally, the sample indexes corresponding to each ROI are analyzed to calculate the serum level positions of the serum sample contained in the test tube, and further to calculate the serum amount based on those positions.
Compared with the existing method for analyzing a serum sample contained in a test tube, the method provided by the invention first separates the collected test tube image and then extracts the corresponding ROIs to perform the corresponding analysis on the serum sample. This avoids the influence of the test tube label on the image color, ensures the accuracy of analyzing and calculating the liquid level height and serum volume of the blood sample, further avoids the waste of human resources caused by observation with the naked eye, and effectively improves the efficiency of analyzing serum samples based on the collected test tube images.
Further, referring to fig. 2, in an embodiment of the serum quality identification method based on a neural network of the present invention, the acquiring an image of a test tube by taking an image of the test tube containing a serum sample may include:
step i, counting the number of pixel values of a non-label area of a test tube in a real-time test tube image aiming at the test tube of an image to be acquired so as to determine the area of the non-label area;
in this embodiment, in the process of performing image acquisition on a test tube, a terminal device, for a test tube that needs to perform image acquisition, first counts a non-label area that is not covered by a label or a barcode on the test tube, and determines the number of pixel values in a real-time test tube image acquired by an industrial camera for the test tube, so as to determine the area of the non-label area in the real-time test tube image.
Specifically, for example, referring to the application flow shown in fig. 7, when image acquisition is performed on any one of the test tubes, the terminal device first uses a preset industrial camera to capture a real-time test tube image of the test tube in its initial state, and then performs data analysis on the captured real-time test tube image through the data processing unit in the system architecture. That is, in the scenario shown in fig. 8, the terminal device establishes a coordinate system in the image with the top left corner of the real-time test tube image as the origin, the direction perpendicular to the test tube in the image as the x-axis, and the height direction of the test tube as the y-axis, and then processes the image to obtain the test tube height. Finally, using the calculation process shown in fig. 12, the image is converted from the RGB space to the HSV or another color space, and 50 rows of pixel values in the middle area of the tube in the S channel are intercepted as a judgment matrix based on the determined tube height, that is, a matrix block of 50 pixel rows between H/2-25 and H/2+25 is taken, where H is the tube height; each pixel value in the matrix is then classified, and the number of pixels belonging to the non-label area in the matrix block is counted as the area of the non-label area on the tube in the current real-time test tube image.
It should be noted that, in this embodiment, the 50 pixel rows intercepted by the terminal device as the judgment matrix are not a unique value; based on different design requirements of practical applications, in different feasible embodiments the terminal device may of course adjust the number of intercepted pixel rows according to the size of the actually captured real-time test tube image.
In addition, referring to fig. 4, since the background and the label area in the real-time tube image are relatively dark, and the non-label area (or also referred to as a label gap area) is relatively bright, the terminal device can specifically determine the area larger than the pixel threshold a in the image as the non-label area by setting the pixel threshold a, and determine the area smaller than the pixel threshold a as the background and the label area.
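Combining the band extraction with the pixel threshold A gives a short routine; the function name, threshold value and toy S channel below are illustrative assumptions:

```python
import numpy as np

def non_label_area(s_channel, tube_height, pixel_threshold_a, band=25):
    """Count the pixels brighter than the threshold A inside the 50-row band
    around the middle of the tube (rows H/2 - 25 .. H/2 + 25)."""
    mid = tube_height // 2
    block = s_channel[mid - band:mid + band, :]
    return int((block > pixel_threshold_a).sum())

# toy S channel: dark label/background (30) with a bright 50 x 20 label gap (200)
s = np.full((100, 60), 30)
s[25:75, 40:60] = 200
area = non_label_area(s, tube_height=100, pixel_threshold_a=100)
```

Pixels above A count as label gap; pixels below it count as label or background, per fig. 4.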
In this embodiment, since the terminal device establishes the coordinate system in the real-time tube image and then fixes the uppermost position of the tube, the height of the tube can be determined by locating the lowest position of the tube in the image. It should be understood that, depending on the design requirements of the actual application, in different possible embodiments, the terminal device may specifically locate the bottom coordinate of the test tube by a plurality of methods, such as: firstly, directly binarizing the B channel image or detecting the edge of the image (specifically, detecting the edge of a test tube by an edge detection operator such as Sobel, Roberts, Prewitt, Canny), then performing morphology and other processing to separate a test tube area and a background area, and finally finding the coordinates (H, M) of the bottom end of the test tube by counting the pixel values of the image along the height direction of the test tube.
In particular, the terminal device may use the B-channel image to locate the tube bottom coordinates. The terminal device separates the real-time test tube image into its R, G and B channels, and performs binarization and morphological processing on the B-channel image; at this point, the non-label area of the test tube, which is not shielded by the label, is black in the image, and the label and background areas are white. Then, as shown in the scenario of fig. 9, the terminal device superimposes or averages the processed image along the row direction so that it collapses from a matrix to a single-pixel line; looking up from the bottom of the image, the first position that is not 0 is found, which gives the height H of the test tube. In addition, as shown in the scenario of fig. 10, the terminal device may intercept a small region of the image above the point H (e.g., 30 pixel rows), average or superimpose that region along the column direction, then take the first point different from 0 from left to right as x1 and the first point different from 0 from right to left as x2; their average is the center M of the test tube in the image.
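The H/M localization can be sketched as below. For simplicity this sketch uses the inverse polarity of the patent's binarization (tube pixels non-zero, background zero), and the toy image geometry is made up:

```python
import numpy as np

def locate_tube_bottom(binary_b):
    """Locate the tube bottom row H and center column M from a binarized
    B-channel image where tube pixels are non-zero (inverse of the patent's
    polarity, chosen so non-zero rows mark the tube)."""
    line = binary_b.sum(axis=1)             # collapse each row to one value
    rows = np.nonzero(line)[0]
    H = int(rows[-1])                       # first non-zero row seen from the bottom
    cols = np.nonzero(binary_b[H - 30:H].sum(axis=0))[0]
    x1, x2 = int(cols[0]), int(cols[-1])    # leftmost / rightmost tube columns
    M = (x1 + x2) // 2                      # tube center column
    return H, M

# toy binary image: tube occupies rows 10..79, columns 20..39
img = np.zeros((100, 60))
img[10:80, 20:40] = 1
H, M = locate_tube_bottom(img)
```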
Further, in another possible embodiment, the terminal device may specifically also use an edge detection algorithm to locate the bottom coordinates of the test tubes in the image.
Specifically, the terminal device selects a filter of suitable size in advance according to the size of the real-time test tube image, and then performs edge detection on the image with the Canny operator to obtain a contour image of the test tube; then, using morphological processing (such as a closing operation) and the two-pass scanning method, it processes the edge image and binarizes it to obtain a binary image as shown in fig. 11, where the edges are white and the rest is black; finally, the bottom coordinates (H, M) of the test tube are found by counting the pixel values of the image along the height direction of the test tube.
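As a minimal stand-in for the edge-detection step (full Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding, which are omitted here), the Sobel gradient magnitude alone already highlights the tube contour:

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude with the 3x3 Sobel kernels; a simplified stand-in
    for the Canny edge-detection step described above."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = gray[y - 1:y + 2, x - 1:x + 2]
            gx = (win * kx).sum()   # horizontal gradient
            gy = (win * ky).sum()   # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag

# toy image with a vertical edge at column 5
img = np.zeros((10, 10))
img[:, 5:] = 100.0
edges = sobel_magnitude(img)
```

In production one would use an optimized library routine rather than the explicit loops shown here.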
Step ii, detecting whether the area of the non-label area meets a preset rotation condition, and when it does, acquiring an image after a first rotation operation is performed on the test tube;
It should be noted that, in the present embodiment, the preset rotation condition is that the area of the non-label region of the test tube is 0, and the first rotation operation is a rotation of the test tube by 180°.
In this embodiment, after counting the area of the non-label region of the test tube in the current real-time test tube image, the terminal device detects whether the area is 0, and when detecting that the area is 0, immediately rotates the test tube by 180 °, and then performs image acquisition on the test tube.
Specifically, for example, referring to the application flow shown in fig. 7, when the area of the non-label region of the test tube in the current real-time test tube image is counted as 0, the terminal device first performs a rotation operation to rotate the test tube by 180°, and then repeats the process of step S10 to count the new area of the non-label region; once the new area is greater than or equal to the preset area threshold, the terminal device performs image acquisition on the test tube to obtain a test tube image that meets the requirement for sample detection.
Further, in a possible embodiment, the method for analyzing a serum sample based on a tube image of the present invention may further include:
step A, outputting a preset first alarm prompt for the test tube after executing a first rotation operation for the test tube;
it should be noted that, in this embodiment, the preset first alarm prompt is a prompt for prompting the user that each angle of the test tube is covered by the label.
In this embodiment, after the terminal device performs the first rotation operation to rotate the test tube by 180 °, if the area of the new non-label area of the test tube in the new real-time test tube image, which is further counted by the terminal device, is still 0, the terminal device outputs a preset first alarm prompt to the test tube to prompt the user that each angle of the test tube is covered by the label.
Further, in a possible embodiment, the step a may include:
step A1, determining the area of the new non-label area of the test tube after the first rotation operation is executed on the test tube;
step A2, detecting whether the area of the new non-label area meets the preset rotation condition, and outputting a preset first alarm prompt for the test tube when detecting that the area meets the preset rotation condition.
Referring to the application flow shown in fig. 7, in this embodiment, after the terminal device performs the rotation operation to rotate the test tube by 180°, the process of step S10 is repeated to count the area of the new non-label region; then, if the terminal device detects that the new non-label area is still 0, it immediately outputs the preset first alarm prompt for the test tube to prompt the user that every angle of the test tube is covered by the label.
Step iii, when the result is no, detecting the size relationship between the area of the non-label area and a preset area threshold to obtain a detection result;
In this embodiment, when the terminal device detects whether the area of the non-label region of the test tube in the current real-time test tube image is 0 and finds that it is not, it further detects the size relationship between that area and a preset area threshold, thereby obtaining a detection result that the area of the non-label region is greater than the area threshold, or a detection result that it is smaller than the area threshold.
Specifically, for example, referring to the application flow shown in fig. 7, when it is counted that the non-label area of the test tube in the current real-time test tube image is not 0, the terminal device further compares the non-label area with a preset area threshold to determine whether the non-label area is greater than the area threshold or smaller than the area threshold.
It should be noted that, in this example, the terminal device sets the area threshold in advance based on an image captured by the industrial camera while the non-label area of the test tube faces the camera. Further, in a possible embodiment, the method for analyzing a serum sample based on a tube image according to the present invention may further include:
Step 3, counting the number of pixel values of the non-label area in the matrix block to determine the area threshold of the non-label area of the test tube.
In this embodiment, the terminal device obtains, based on the industrial camera, a test tube image in which a non-label area of the test tube faces the industrial camera in advance, converts the test tube image from an RGB space to an HSV color space, so as to extract a matrix block of pixel values in a middle area of an S channel, and finally, the terminal device counts the number of pixel values of the non-label area in the image in the matrix block to determine an area threshold value at which the area of the non-label area of the test tube is the minimum when performing sample detection on the test tube.
It should be noted that, in this embodiment, before determining whether the non-label area in the collected tube image meets the detection requirement, a minimum area threshold of the non-label area needs first to be given for the test tube. Accordingly, if the area of the non-label region counted by the terminal device in the real-time test tube image is greater than or equal to the area threshold, it can be judged that the current test tube image obtained by image acquisition shows a non-label area that meets the requirement for sample detection by the device.
Referring to the application scenario shown in fig. 4, after the terminal device converts the real-time test tube image captured by the industrial camera from RGB space to HSV color space, the difference between the non-label area and the label area of the test tube can be clearly observed in the S channel: the background and the white label area appear dark black in the S channel, while the non-label area is brighter. The terminal device therefore determines which positions of the test tube are covered by the label according to the gray levels of the pixel values in the image.
In addition, referring to the application scenario shown in fig. 5, when a blood clot is present in the serum sample contained in the test tube (e.g., after centrifugation), the pixel values of the blood clot in the S channel are dark, so the terminal device may mistakenly assign the image area where the blood clot is located to the label area of the test tube when calculating the non-label area, causing an identification error. However, in a serum sample containing separating gel, the blood clot generally does not rise above 1/2 of the overall liquid level; based on this, in order to obtain an accurate minimum area threshold for the non-label area, the terminal device detects the height of the test tube and crops a fixed middle region of the test tube in which to detect the non-label area.
Specifically, referring to the application scenario shown in fig. 6, the terminal device captures a tube image with the non-label area of the test tube facing the camera (the image can be acquired by a user manually adjusting the tube angle), takes the matrix block between heights H/2-25 and H/2+25 of the S-channel middle area, counts the number C of pixel values of the non-label area of the test tube within that matrix block, and uses it to set the area threshold V = C - α, where α is an allowable deviation.
Step iiii, according to the detection result, acquiring an image directly for the test tube or after performing a second rotation operation for the test tube.
In this embodiment, the terminal device detects the size relationship between the area of the non-label region and the preset area threshold, thereby obtaining a detection result that the area of the non-label region is greater than the area threshold or that it is smaller than the area threshold; based on the detection result, the terminal device then performs image acquisition on the test tube either directly or after first performing a second rotation operation to rotate the test tube by a preset angle.
Further, in a possible embodiment, the above step iiii may include:
step i1, when the detection result is that the area of the non-label area is greater than or equal to the area threshold value, directly acquiring an image for the test tube by the industrial camera;
In this embodiment, referring to the application flow shown in fig. 7, assume that the real-time tube image currently acquired by the terminal device is the rightmost image in fig. 13. The terminal device detects whether the area of the non-label region of the tube in the current real-time tube image is 0; when the area is not 0, it further detects the size relationship between that area and the preset area threshold. On obtaining the detection result that the area of the non-label region is greater than the area threshold, the terminal device determines that the currently acquired image meets the requirement of the device for sample detection, so image acquisition can be performed directly without rotating the tube (specifically, the current real-time tube image is stored directly as the optimal tube image for subsequent sample detection by the device).
Step i2, when the detection result is that the area of the non-label area is smaller than the area threshold, performing the second rotation operation on the test tube to rotate the midpoint of the non-label area to face the industrial camera directly, and then acquiring an image of the test tube with the industrial camera.
In this embodiment, referring to the application flow shown in fig. 7, when the terminal device detects that the area of the non-label region of the test tube in the current real-time test tube image is not 0, it further detects the size relationship between that area and the preset area threshold. On obtaining the detection result that the area of the non-label region is smaller than the area threshold, the terminal device performs a second rotation operation to rotate the test tube by a preset angle; once the new area of the non-label region meets the requirement of the device for sample detection, it performs image acquisition on the test tube to obtain the optimal test tube image for subsequent sample detection.
Further, in a possible embodiment, in the step i2, the step of performing the second rotation operation on the test tube to rotate the midpoint of the non-label area to face the industrial camera may include:
step i21, dividing a first area in the non-label area collected by the industrial camera into a left area and a right area of equal halves;
step i22, counting the number of pixel values of the left region and the right region respectively to determine the area of the left region and the area of the right region, and detecting the size relationship between the two;
step i23, if it is detected that the area of the left region is larger than the area of the right region, rotating the test tube to the right by a preset angle to rotate the midpoint of the non-label area to face the industrial camera; or,
step i24, if it is detected that the area of the right region is larger than the area of the left region, rotating the test tube to the left by a preset angle to rotate the midpoint to face the industrial camera.
In this embodiment, referring to the application flow shown in fig. 7, if the terminal device detects that the area of the non-label region of the test tube is not 0 but is smaller than the preset area threshold, it determines that a non-label area has been found on the tube, and therefore rotates the tube by a preset angle so that the midpoint of the non-label area faces the industrial camera. That is, the terminal device divides the matrix block in which the non-label area of the test tube is counted into a left area and a right area at the central position M, then counts the number of pixel values of the non-label region in the left area to obtain the area of the left region, and counts the number of pixel values of the non-label region in the right area to obtain the area of the right region.
Then, the terminal device compares the area of the left region with that of the right region. If the left area is larger (as in the middle image of fig. 13), the terminal device determines that the midpoint of the non-label area is on the left side of the tube, and therefore performs the second rotation operation to rotate the tube to the right by a preset angle N so that the midpoint directly faces the industrial camera.
Conversely, if the terminal device detects that the area of the left region is smaller than that of the right region (as in the rightmost image of fig. 13), it determines that the midpoint of the non-label area is on the right side of the tube, and therefore performs the second rotation operation to rotate the tube to the left by the preset angle N so that the midpoint directly faces the industrial camera.
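The left/right comparison in steps i21 to i24 reduces to a few lines. In this sketch, `nonlabel_mask` (our name) is assumed to be a boolean matrix block that is True where the non-label region is visible, and M is the centre column found earlier:

```python
import numpy as np

def rotation_direction(nonlabel_mask: np.ndarray, M: int) -> str:
    """Split the non-label mask at centre column M and compare the pixel
    counts of the two halves to decide which way to rotate the tube."""
    left = int(nonlabel_mask[:, :M].sum())    # area of the left region
    right = int(nonlabel_mask[:, M:].sum())   # area of the right region
    if left > right:
        return "rotate right"   # gap midpoint is left of centre
    if right > left:
        return "rotate left"    # gap midpoint is right of centre
    return "centred"            # midpoint already faces the camera
```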
It should be noted that, in this embodiment, the preset angle is calculated based on the diameter of the test tube and an angle at a label gap of the test tube in a real-time test tube image currently acquired by the industrial camera, where the angle at the label gap is calculated based on a label area of a label on the test tube and a perimeter of the test tube.
Specifically, referring to the application scenario shown in fig. 15, when the terminal device controls the test tube to rotate left or right by a preset angle N, based on the diameter d and the circumference C of the test tube collected in advance and the area size M of the label or the barcode adhered to the test tube, the preset angle N is calculated in real time according to the following formula 1:
wherein the angle of the non-label area of the test tube in the real-time test tube image is calculated by the following equation 2:
further, in a possible embodiment, the method for analyzing a serum sample based on a tube image according to the present invention may further include:
step B, outputting a preset second alarm prompt for the test tube after executing a second rotation operation for the test tube;
It should be noted that, in this embodiment, the preset second alarm prompt is a prompt for informing the user that the label or barcode affixed to the test tube does not meet the sample detection requirement.
In this embodiment, after the terminal device performs the second rotation operation to rotate the test tube by the preset angle, if the new non-label area of the test tube counted in the new real-time test tube image is not 0 but is still smaller than the area threshold, the terminal device outputs the preset second alarm prompt for the test tube to inform the user that the label or barcode affixed to the test tube does not meet the sample detection requirement.
Further, in a possible embodiment, the step B may include:
step B1, determining a new non-label area of the test tube after performing a second rotation operation on the test tube;
step B2, detecting the size relation between the area of the new non-label area and the area threshold value, and outputting a preset second alarm prompt aiming at the test tube when detecting that the area of the new non-label area is smaller than the area threshold value.
Referring to the application flow shown in fig. 7, in this embodiment, after the terminal device performs the rotation operation to rotate the test tube by the preset angle N, the process of step S10 is repeated to count the area of the new non-label region. The terminal device then detects the size relationship between the new non-label area and the area threshold; if the non-label area is still smaller than the area threshold, the terminal device determines that the current test tube has an unsatisfactory non-label area (as shown in fig. 14, the left side shows a test tube with a satisfactorily affixed label or barcode, and the right side a test tube with an unsatisfactorily affixed one), and immediately outputs the preset second alarm prompt for the test tube to inform the user that the label or barcode on the test tube does not meet the sample detection requirement.
In this embodiment, in the process of performing image acquisition on a test tube, the terminal device first counts, for the test tube to be imaged, the non-label area not covered by a label or barcode, by determining the number of pixel values of that region in the real-time test tube image acquired by the industrial camera. After counting the area of the non-label region in the current real-time test tube image, the terminal device detects whether the area is 0; when it is 0, the terminal device immediately rotates the test tube by 180° and then performs image acquisition. When the area is not 0, the terminal device further detects the size relationship between the area and the preset area threshold, obtaining a detection result that the area of the non-label region is greater than the area threshold or that it is smaller than the area threshold; based on this detection result, the terminal device either performs image acquisition on the test tube directly, or first performs the second rotation operation to rotate the test tube by the preset angle and then performs image acquisition.
Compared with the existing way of performing image acquisition on a test tube, the present application determines the non-label area of the test tube in the real-time image, and then acquires the image after performing the first rotation operation according to the non-label area, or after performing the second rotation operation according to the size relationship between the non-label area and the preset area threshold. In this way, a test tube image that meets the requirement of the device for sample detection can be acquired with only 1 to 3 shots, which greatly shortens the image acquisition time and avoids the storage overhead of capturing a large amount of image data; moreover, based on the rotation operations, an optimal image with the non-label area of the test tube facing the camera can be acquired accurately, effectively improving the overall acquisition efficiency of the test tube image.
In addition, referring to fig. 26, an embodiment of the present invention further provides a serum quality identification device based on a neural network, and the serum quality identification device based on the neural network includes:
the device comprises a test tube image acquisition module 10, a blood serum sample collection module and a blood serum sample collection module, wherein the test tube image acquisition module is used for acquiring an image of a test tube containing a blood serum sample;
and the serum quality identification module 20 is configured to input the test tube image into a preset neural network model, so that the neural network model outputs a serum quality identification result for the serum sample, where the neural network model is obtained by performing convolutional neural network model training on the test tube image.
Preferably, the serum quality identification device based on the neural network further comprises:
the model training module is used for carrying out convolutional neural network model training through multi-exposure test tube images;
a model training module comprising:
an extraction unit for extracting a matrix area belonging to the tube non-label area from the multi-exposed tube image;
the first convolution unit is used for inputting the matrix area into a preset first convolution module to perform first convolution neural network model training, and acquiring a characteristic diagram output by the first convolution module after performing first convolution neural network model training on the matrix area;
and the second convolution unit is used for inputting the stacked characteristic graphs into a preset second convolution module for second convolution neural network model training so as to obtain a neural network model for performing serum quality identification on the serum sample.
Preferably, the first convolution module and the second convolution module each comprise a plurality of convolution layers, and in both the first convolution module and the second convolution module one pooling layer is connected after every two convolution layers, except for the last two convolution layers;
the convolution layers in the first convolution module comprise a plurality of first convolution layers with the step length of 1 and a plurality of second convolution layers with the step length of 2, and in every two first convolution layers, the first convolution layer of which the output end is not connected with the pooling layer is connected with one second convolution layer;
the number of convolution kernels of two first convolution layers at the end of the first convolution module is smaller than the number of convolution kernels of other first convolution layers and the second convolution layer.
Preferably, the second convolution module is connected to the full connection layer and the logistic regression layer at the end, and the second convolution unit is further configured to:
stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and acquiring a new feature map output by the second convolution module after the feature maps are processed on the basis of the plurality of convolution layers and the pooling layer;
inputting the new feature map into the full-connection layer for feature classification to obtain a quality category of serum, wherein the quality category comprises: normal, hemolysis, lipemia and jaundice;
inputting the new feature map into the logistic regression layer to calculate a probability value for each quality category, which is used to determine the corresponding quality grade when the quality category is hemolysis, lipemia, or jaundice.
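The logistic regression layer described here is, in effect, a softmax over the four quality categories named in the text; a minimal sketch (function name ours), mapping the fully-connected layer's scores to probabilities:

```python
import math

QUALITY_CLASSES = ["normal", "hemolysis", "lipemia", "jaundice"]

def class_probabilities(logits):
    """Softmax over the four quality classes: each probability lies in
    (0, 1) and the four values sum to 1."""
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {c: e / total for c, e in zip(QUALITY_CLASSES, exps)}
```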
Preferably, the model training module of the neural network-based serum quality identification apparatus of the present invention is further configured to assign a weight to the feature map, and after multiplying the feature map by the assigned weight, perform the step of stacking the feature map and inputting the stacked feature map into a preset second convolution module.
Preferably, the extracting unit is further used for performing serum sample analysis on the multi-exposure test tube image to calculate the highest position of the serum liquid level and the lowest position of the serum liquid level of the serum sample contained in the test tube; and extracting a matrix area with a preset size from the test tube non-label area in the multi-exposure test tube image according to the highest position of the serum liquid level and the lowest position of the serum liquid level.
Preferably, the extracting unit is further configured to determine a serum image area from the non-label area of the test tube according to the highest position of the serum liquid level and the lowest position of the serum liquid level;
and intercepting the matrix region from the serum image region according to the preset size, wherein the preset size is smaller than the size of the serum image region.
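The interception of a fixed-size matrix region from the serum image area can be sketched as below; the crop is centred inside the region, which is our assumption since the text does not specify the placement:

```python
import numpy as np

def crop_serum_matrix(image: np.ndarray, top: int, bottom: int,
                      out_h: int, out_w: int) -> np.ndarray:
    """Cut an out_h x out_w matrix region centred inside the serum image
    area, which spans rows top..bottom (highest to lowest serum level).
    The preset size must be smaller than the serum region."""
    region = image[top:bottom]
    assert out_h <= region.shape[0] and out_w <= region.shape[1]
    r0 = (region.shape[0] - out_h) // 2   # centre the crop vertically
    c0 = (region.shape[1] - out_w) // 2   # and horizontally
    return region[r0:r0 + out_h, c0:c0 + out_w]
```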
When each functional module of the serum quality identification device based on the neural network provided in this embodiment operates, the steps of the serum quality identification method based on the neural network described above are implemented, and are not described herein again.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a neural network-based serum quality identification program is stored on the computer-readable storage medium, and when the neural network-based serum quality identification program is executed by a processor, the steps of the neural network-based serum quality identification method described above are implemented.
For the specific implementation of the computer-readable storage medium of the present invention, reference may be made to the embodiments of the neural network-based serum quality identification method described above, which are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A serum quality identification method based on a neural network is characterized by comprising the following steps:
acquiring an image of a test tube containing a serum sample to obtain a test tube image;
inputting the test tube image into a preset neural network model so that the neural network model can output a serum quality identification result aiming at the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on the test tube image.
2. The neural network-based serum quality recognition method according to claim 1, wherein the method is characterized in that convolutional neural network model training is performed through multi-exposure test tube images;
the serum quality identification method based on the neural network further comprises the following steps:
acquiring multi-exposure test tube images, and extracting a matrix area belonging to the non-label area of the test tube from the multi-exposure test tube images;
inputting the matrix area into a preset first convolution module to perform first convolution neural network model training, and acquiring a characteristic diagram output by the first convolution module after performing first convolution neural network model training on the matrix area;
and stacking the characteristic graphs, inputting the stacked characteristic graphs into a preset second convolution module, and performing second convolution neural network model training to obtain a neural network model for performing serum quality identification on the serum sample.
3. The neural network-based serum quality identification method of claim 2, wherein the first convolution module and the second convolution module each comprise a plurality of convolution layers, and in both the first convolution module and the second convolution module one pooling layer is connected after every two convolution layers, except for the last two convolution layers;
the convolution layers in the first convolution module comprise a plurality of first convolution layers with the step length of 1 and a plurality of second convolution layers with the step length of 2, and in every two first convolution layers, the first convolution layer of which the output end is not connected with the pooling layer is connected with one second convolution layer;
the number of convolution kernels of two first convolution layers at the end of the first convolution module is smaller than the number of convolution kernels of other first convolution layers and the second convolution layer.
4. The neural network-based serum quality identification method according to claim 2 or 3, wherein a full connection layer and a logistic regression layer are connected to the end of the second convolution module, and the step of inputting the stacked feature maps into the preset second convolution module for second convolutional neural network model training comprises:
stacking the feature maps, inputting the stacked feature maps into a preset second convolution module, and acquiring a new feature map output by the second convolution module after the feature maps are processed on the basis of the plurality of convolution layers and the pooling layer;
inputting the new feature map into the full-connection layer for feature classification to obtain a quality category of serum, wherein the quality category comprises: normal, hemolysis, lipemia and jaundice;
inputting the new feature map into the logistic regression layer to calculate a probability value for each quality category, which is used to determine the corresponding quality grade when the quality category is hemolysis, lipemia, or jaundice.
5. The neural network-based serum quality identification method according to claim 2 or 3, further comprising:
assigning weights to the feature maps and multiplying each feature map by its assigned weight, before performing the step of inputting the stacked feature maps into the preset second convolution module.
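The weighting step of claim 5 can be sketched as an elementwise multiplication of each per-exposure feature map by its assigned weight before stacking along the channel axis. The map shapes and weight values here are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Three hypothetical single-exposure feature maps (8x8 each).
feature_maps = [np.ones((8, 8)) * v for v in (1.0, 2.0, 3.0)]
weights = [0.5, 0.3, 0.2]  # assigned per-exposure weights (hypothetical)

# Multiply each map by its weight, then stack for the second module.
stacked = np.stack([w * fm for w, fm in zip(weights, feature_maps)], axis=0)
print(stacked.shape)  # -> (3, 8, 8)
```

The stacked array, with one weighted channel per exposure, is what the second convolution module would receive.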
6. The neural network-based serum quality identification method according to claim 2, wherein the step of extracting a matrix region belonging to the non-label area of the test tube from the multi-exposure test tube image comprises:
performing serum sample analysis on the multi-exposure test tube image to calculate the highest position and the lowest position of the serum liquid level of the serum sample contained in the test tube;
and extracting a matrix region of a preset size from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level.
7. The neural network-based serum quality identification method according to claim 6, wherein the step of extracting a matrix region of a preset size from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level comprises:
determining a serum image area from the non-label area of the test tube according to the highest position and the lowest position of the serum liquid level;
and cropping the matrix region of the preset size from the serum image area, wherein the preset size is smaller than the size of the serum image area.
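The cropping in claims 6 and 7 can be sketched as taking a fixed-size window inside the serum image area bounded by the highest and lowest liquid-level rows. The 32×32 preset size, the centering strategy, and the mock image dimensions are all assumptions for illustration; the patent only requires that the preset size be smaller than the serum image area.

```python
import numpy as np

def crop_matrix_region(image, top_row, bottom_row, size=(32, 32)):
    """Crop a preset-size matrix region from between the serum level rows."""
    h, w = size
    assert bottom_row - top_row >= h, "preset size exceeds serum image area"
    row0 = top_row + ((bottom_row - top_row) - h) // 2  # center vertically
    col0 = (image.shape[1] - w) // 2                    # center horizontally
    return image[row0:row0 + h, col0:col0 + w]

tube = np.zeros((480, 120))  # mock grayscale tube image (rows x cols)
region = crop_matrix_region(tube, top_row=100, bottom_row=300)
print(region.shape)  # -> (32, 32)
```

One such fixed-size region per exposure gives the first convolution module inputs of a uniform shape regardless of how much serum the tube contains.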
8. A neural network-based serum quality identification apparatus, comprising:
a test tube image acquisition module, configured to acquire an image of a test tube containing a serum sample;
and a serum quality identification module, configured to input the test tube image into a preset neural network model so that the neural network model outputs a serum quality identification result for the serum sample, wherein the neural network model is obtained by performing convolutional neural network model training on test tube images.
9. A terminal device, characterized in that the terminal device comprises: a memory, a processor and a neural network-based serum quality identification program stored on the memory and executable on the processor, the neural network-based serum quality identification program when executed by the processor implementing the steps of the neural network-based serum quality identification method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a neural network-based serum quality identification program, which when executed by a processor, implements the steps of the neural network-based serum quality identification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111680320.2A CN114332058A (en) | 2021-12-30 | 2021-12-30 | Serum quality identification method, device, equipment and medium based on neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332058A true CN114332058A (en) | 2022-04-12 |
Family
ID=81023316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111680320.2A Pending CN114332058A (en) | 2021-12-30 | 2021-12-30 | Serum quality identification method, device, equipment and medium based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114332058A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114878844A (en) * | 2022-05-20 | 2022-08-09 | 上海捷程医学科技有限公司 | Method, system and equipment for automatically detecting quality of centrifuged blood sample |
CN116402671A (en) * | 2023-06-08 | 2023-07-07 | 北京万象创造科技有限公司 | Sample coding image processing method for automatic coding system |
CN116402671B (en) * | 2023-06-08 | 2023-08-15 | 北京万象创造科技有限公司 | Sample coding image processing method for automatic coding system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110060237B (en) | Fault detection method, device, equipment and system | |
CN112115818B (en) | Mask wearing identification method | |
US6137899A (en) | Apparatus for the identification of free-lying cells | |
JP6791864B2 (en) | Barcode tag detection in side view sample tube images for laboratory automation | |
US9684958B2 (en) | Image processing device, program, image processing method, computer-readable medium, and image processing system | |
KR20190043135A (en) | Systems and methods for classifying biological particles | |
WO2014087689A1 (en) | Image processing device, image processing system, and program | |
CN114332058A (en) | Serum quality identification method, device, equipment and medium based on neural network | |
CN112070711A (en) | Analysis method of micro-droplets in micro-droplet image detection method | |
CN111242899A (en) | Image-based flaw detection method and computer-readable storage medium | |
CN111340831A (en) | Point cloud edge detection method and device | |
CN113658174A (en) | Microkaryotic image detection method based on deep learning and image processing algorithm | |
Meimban et al. | Blood cells counting using python opencv | |
KR20190114241A (en) | Apparatus for algae classification and cell countion based on deep learning and method for thereof | |
CN113255766B (en) | Image classification method, device, equipment and storage medium | |
US20160283821A1 (en) | Image processing method and system for extracting distorted circular image elements | |
KR102220574B1 (en) | Method, apparatus and computer program for calculating quality score threshold for filtering image data | |
Sankaran et al. | Quantitation of Malarial parasitemia in Giemsa stained thin blood smears using Six Sigma threshold as preprocessor | |
WO2023034441A1 (en) | Imaging test strips | |
CN113506266B (en) | Method, device, equipment and storage medium for detecting greasy tongue coating | |
CN115672778A (en) | Intelligent visual recognition system | |
US10146042B2 (en) | Image processing apparatus, storage medium, and image processing method | |
CN113505784A (en) | Automatic nail annotation analysis method and device, electronic equipment and storage medium | |
CN114332059A (en) | Method, device, equipment and medium for analyzing serum sample based on test tube image | |
CN114339046B (en) | Image acquisition method, device, equipment and medium based on automatic rotation test tube |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||