CN112907544A - Machine learning-based liquid dung character recognition method and system and handheld intelligent device - Google Patents
- Publication number
- CN112907544A (application CN202110206359.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- liquid dung
- character
- liquid
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012—Biomedical image inspection
- G06F18/2411—Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods for neural networks
- G06T5/70
- G06T7/13—Edge detection
- G06V10/40—Extraction of image or video features
- G06T2207/10068—Endoscopic image
- G06T2207/30028—Colon; Small intestine
Abstract
The invention discloses a machine learning-based method for recognizing the character of liquid dung. The method comprises: acquiring a liquid dung image to be recognized; segmenting the image with a multi-threshold edge-detection method and extracting the liquid dung region image; extracting image features from the region image with a preset convolutional neural network model to generate liquid dung trait features; and judging the trait features with a matching model that stores a liquid dung trait database, wherein, if the trait features do not match the database, a result indicating that intestinal cleanliness is qualified is output.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a machine learning-based method and system for recognizing the character of liquid dung, and to a handheld intelligent device.
Background
At present, before undergoing gastrointestinal endoscopy, a patient must complete intestinal preparation: the liquid dung discharged after taking a cathartic must be clear and free of fecal residue. The patient first checks the discharge against a standard chart, and medical staff then check it again; only when the staff judge the intestinal cleanliness to be good can the endoscopy proceed. This procedure imposes a heavy workload on medical staff, and when a patient misjudges his or her own intestinal cleanliness, both doctor and patient must repeat the preparation. This reduces the staff's work efficiency and delays the patient's enteroscopy.
Most existing automatic liquid dung inspection methods extract features with traditional machine learning and recognize them by linear regression or by one-by-one SVM comparison against a sample chart. Such methods recognize poorly and may not extract features completely. Moreover, the feature extraction usually focuses on a single aspect of the picture, for example only the color or only the character of the image, so the relationships among the parts of the image are lost and the actual condition of the liquid dung cannot be judged.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a machine learning-based method for recognizing the character of liquid dung that extracts the trait features effectively and comprehensively, supports accurate recognition, and can be completed independently by the patient at home or in a ward. This reduces the workload of medical staff, reduces the patient's back-and-forth trips, and saves preparation time before gastrointestinal endoscopy.
To solve this technical problem, the invention discloses a machine learning-based method for recognizing the character of liquid dung, comprising the following steps: acquiring a liquid dung image to be recognized; segmenting the image with a multi-threshold edge-detection method and extracting the liquid dung region image; extracting image features from the region image with a preset convolutional neural network model to generate liquid dung trait features; and judging the trait features with a matching model that stores a liquid dung trait database, and, if the trait features do not match the database, outputting a result indicating that intestinal cleanliness is qualified.
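The four claimed steps can be sketched as a small pipeline. This is a minimal illustration under stated assumptions, not the patent's implementation: the function parameters and the two result strings are placeholders, since the patent only specifies that a prompting result is output.

```python
from typing import Callable
import numpy as np

# Placeholder result messages: the patent only says a prompting result is output.
QUALIFIED = "intestinal cleanliness qualified"
CONTINUE_PREP = "continue intestinal preparation"

def recognize_liquid_dung(image: np.ndarray,
                          segment: Callable,           # multi-threshold edge segmentation
                          extract_features: Callable,  # preset CNN ensemble
                          matches_database: Callable) -> str:
    """One pass of the claimed method: segment, extract, match, judge."""
    region = segment(image)            # liquid dung region image
    traits = extract_features(region)  # liquid dung trait features
    if matches_database(traits):
        # a match means the sample resembles a stored "unclean" trait image
        return CONTINUE_PREP
    return QUALIFIED
```

The three callables would be implemented by the segmentation, feature-extraction, and judging modules described in the second aspect below.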
In some embodiments, segmenting the liquid dung image to be recognized with the multi-threshold edge-detection method comprises: smoothing the image with a Gaussian function; calculating gradient values of the image in all directions; filtering out non-maximum gradient values in each direction; collecting edge information with a set high threshold, and generating a contour map from a set low threshold together with that edge information; and segmenting the image according to the contour map.
In some embodiments, the convolutional neural network model comprises several convolutional neural networks with different structures and different initializations, and the preset model is generated as follows: acquiring standard charts of liquid dung traits to build an image training set and a sample image test set; training each convolutional neural network on these sets; and averaging the results of the trained networks to obtain the preset convolutional neural network model.
In some embodiments, judging the liquid dung trait features with the matching model comprises: comparing the trait features one by one, via a perceptual hash algorithm, with the images stored in the liquid dung trait database.
In some embodiments, if the liquid dung trait features do not match the liquid dung trait database, the method further comprises: judging the validity of the trait features with the preset convolutional neural network model, and, if the trait features are valid, outputting a result prompting preparation for the intestinal examination.
In some embodiments, the method further comprises: acquiring a manual judgment result, the manual judgment result comprising an extended image set; and expanding the image training set with the extended image set and training the convolutional neural network model with the expanded training set.
In some embodiments, the manual judgment result further comprises the liquid dung image to be recognized, and after the manual judgment result is acquired the method further comprises: updating the matching model storing the liquid dung trait database according to that image; and judging the liquid dung trait features with the updated matching model.
According to a second aspect of the present invention, there is provided a machine learning-based liquid dung character recognition system, comprising: an image acquisition module for acquiring a liquid dung image to be recognized; a recognition module for segmenting the image with a multi-threshold edge-detection method and extracting the liquid dung region image; a feature extraction module for extracting image features from the region image with a preset convolutional neural network model to generate liquid dung trait features; and a judging module for judging the trait features with a matching model that stores a liquid dung trait database and outputting, if the trait features do not match the database, a result indicating that intestinal cleanliness is qualified.
In some embodiments, the system further comprises: and the effectiveness judgment module is used for judging the effectiveness of the liquid dung character characteristic through a preset convolutional neural network model, and if the liquid dung character characteristic is effective, outputting a result for prompting intestinal examination preparation.
According to a third aspect of the present invention, there is provided a handheld smart device comprising: the shooting module is used for generating a liquid dung image to be identified through shooting; the processing module is used for processing the liquid dung image by using the liquid dung character recognition method based on machine learning to generate a result for prompting that the cleanliness of the intestinal tract is qualified; and the display module is used for displaying the result for prompting the qualification of the cleanliness of the intestinal tract.
Compared with the prior art, the invention has the beneficial effects that:
by implementing the method, a liquid dung image can be acquired, complete liquid dung features can be obtained through image recognition with a combination of high and low thresholds, and a result indicating intestinal cleanliness can be judged and output by a convolutional neural network model trained on a continuously updated image set. The preparation before gastrointestinal endoscopy can therefore be completed independently by the patient, which reduces the workload of medical staff, reduces the patient's back-and-forth trips, and saves preparation time.
Drawings
FIG. 1 is a schematic flow chart of a machine learning-based liquid dung character recognition method disclosed in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another machine learning-based liquid dung character recognition method disclosed in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a machine learning-based liquid dung character recognition system disclosed in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a handheld intelligent device disclosed in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a machine learning-based liquid dung character recognition interaction device disclosed in an embodiment of the present invention.
Detailed Description
For better understanding and implementation, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
At present, before gastrointestinal endoscopy, patients must prepare their intestines: after taking a cathartic, the patient (or medical staff) must verify that the discharged liquid dung is clear and free of fecal residue, and the gastrointestinal examination can proceed only when intestinal cleanliness is judged to be good. The current pre-endoscopy preparation steps in hospital are as follows:
1) the patient defecates after taking the laxative;
2) before the endoscopy, intestinal cleanliness is judged: the patient first compares the discharge with a standard chart, and if it looks good, medical staff then manually confirm that the liquid dung is clear and free of fecal residue before the patient undergoes the endoscopy;
3) if the intestinal cleanliness is judged to be poor, the patient needs to repeat the steps until the intestinal cleanliness is good.
These preparation steps ensure good intestinal cleanliness, but they increase the workload of medical staff. In particular, when a patient misjudges his or her own intestinal cleanliness, doctor and patient must repeat the preparation, which reduces the staff's work efficiency, forces the patient to make repeated trips, and delays the enteroscopy.
Most existing medical stool image recognition systems extract features with traditional machine learning. Given a set of training examples, each labeled as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, i.e. a non-probabilistic binary linear classifier, and recognition is performed by comparing against sample charts one by one. Moreover, most prior feature extraction focuses on a single aspect of the picture: for example, only the image color is extracted and analyzed against the sample chart, or only the liquid dung character is extracted and analyzed. Such methods capture only partial features of the picture and lose the relationships among its parts.
The embodiments of the invention disclose a machine learning-based liquid dung character recognition method and system that acquire a liquid dung image, obtain complete liquid dung features through image recognition with a combination of high and low thresholds, and judge and output a result indicating intestinal cleanliness with a convolutional neural network model trained on a continuously updated image set. The preparation before gastrointestinal endoscopy can therefore be completed independently by the patient, which reduces the workload of medical staff, reduces the patient's back-and-forth trips, and saves preparation time.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a machine learning-based liquid dung character recognition method according to an embodiment of the present invention. The method can be applied to any intelligent system with a shooting function; the embodiment of the invention does not limit which intelligent system is used. As shown in fig. 1, the method may include the following operations:
101. Acquire the liquid dung image to be recognized.
To facilitate self-examination, the liquid dung image can be acquired by photographing the liquid dung with a handheld intelligent device after the patient begins the intestinal cleanliness self-check or has judged the cleanliness to be good. The handheld intelligent device is any device with a photographing function; the invention is not limited in this respect.
102. Segment the liquid dung image to be recognized with the multi-threshold edge-detection method and extract the liquid dung region image.
The traditional edge segmentation method proceeds as follows:
First, the liquid dung image to be recognized is smoothed with a Gaussian function, which reduces image noise.
Second, gradient values of the image are calculated in all directions: a Sobel operator computes the change in gray value, yielding the gradient of the image in each direction.
Next, non-maximum gradient values in each direction are filtered out, because Gaussian filtering may have widened the image edges. Filtering the non-maxima removes points that do not belong to an edge, so that edges are as close to one pixel wide as possible: if a pixel belongs to an image edge, its gradient value is the maximum along the gradient direction; otherwise it is not an edge pixel and its gray value is set to 0.
Finally, edges are detected with upper and lower thresholds: pixels whose gradient lies between the two thresholds are detected and treated as edges. With a stricter setting, i.e. when both thresholds are high, image noise is suppressed and a clean contour map is obtained, but real contours are suppressed as well, leaving the contour incomplete. With a looser setting, i.e. lower thresholds, a complete contour is obtained, but the image contains more noise and the edges cannot be resolved well.
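The four stages above amount to Canny-style dual-threshold edge detection. A minimal NumPy sketch follows; the kernel size, sigma, the simplified four-direction non-maximum suppression, and the threshold values are illustrative choices, not taken from the patent:

```python
import numpy as np

def _convolve2d(img, kernel):
    """Naive same-size 2-D convolution with edge-replicating padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def canny_edges(img, low, high, sigma=1.0):
    """Dual-threshold edge detection: smooth, gradient, suppress, hysteresis."""
    # 1) Gaussian smoothing to reduce noise
    ax = np.arange(5) - 2
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()
    smoothed = _convolve2d(np.asarray(img, dtype=float), kernel)
    # 2) Sobel gradients and their magnitude/direction
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = _convolve2d(smoothed, sx)
    gy = _convolve2d(smoothed, sx.T)
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # 3) non-maximum suppression along the quantized gradient direction
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            d = min(offsets, key=lambda q: min(abs(a - q), 180 - abs(a - q)))
            di, dj = offsets[d]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                nms[i, j] = mag[i, j]
    # 4) hysteresis: strong pixels seed edges, weak pixels join if connected
    strong = nms >= high
    weak = nms >= low
    edges = strong.copy()
    while True:
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :]
        grown[:-1, :] |= edges[1:, :]
        grown[:, 1:] |= edges[:, :-1]
        grown[:, :-1] |= edges[:, 1:]
        grown &= weak
        grown |= edges
        if (grown == edges).all():
            return edges
        edges = grown
```

Running `canny_edges` twice, once with a strict and once with a loose threshold pair, yields the two contour maps that the improved step below combines.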
The present application therefore improves the final step of the traditional edge segmentation method: edge information of the liquid dung image to be recognized is collected with the set high threshold, the contour map is generated from the set low threshold together with that edge information, and the image is then segmented according to the contour map. Concretely:
and calculating edge boundaries Xmin, Xmax, Ymin and Ymax in four directions of up, down, left and right for the contour map obtained by the preset high threshold value. The contour in the low threshold value within the boundary is complemented with the resulting boundary and the following formula.
The complemented contour map is computed accordingly, and collecting its edges yields a well-segmented image.
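The patent's formulas for the complement step are not reproduced in this text, so the operation can only be sketched under an assumption: here, low-threshold edges are admitted wherever they fall inside the bounding box (Xmin..Xmax, Ymin..Ymax) of the high-threshold contour.

```python
import numpy as np

def complement_contours(high_edges: np.ndarray, low_edges: np.ndarray) -> np.ndarray:
    """Combine a clean high-threshold contour with low-threshold detail.

    high_edges / low_edges: boolean edge maps from a strict and a loose
    threshold respectively. The edge boundaries Xmin/Xmax/Ymin/Ymax come
    from the high-threshold map, and low-threshold contours inside that
    box complement the result (an assumed rule, not the patent's formula).
    """
    ys, xs = np.nonzero(high_edges)
    if ys.size == 0:  # no reliable contour found
        return high_edges.copy()
    ymin, ymax, xmin, xmax = ys.min(), ys.max(), xs.min(), xs.max()
    out = high_edges.copy()
    out[ymin:ymax + 1, xmin:xmax + 1] |= low_edges[ymin:ymax + 1, xmin:xmax + 1]
    return out
```

This keeps the noise suppression of the high threshold outside the object while recovering the completeness of the low threshold inside it.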
103. Extract image features from the liquid dung region image with the preset convolutional neural network model to generate the liquid dung trait features.
The convolutional neural network model comprises several convolutional neural networks with different structures and different initializations; optional backbone templates include, but are not limited to, VGG16, VGG19, SENet and GoogLeNet. A convolutional neural network is a feedforward neural network whose artificial neurons respond to units within a local receptive field, which gives it excellent performance on large-scale image processing. It consists of one or more convolutional layers and a fully connected layer on top, together with associated weights and pooling layers. This structure lets the network exploit the two-dimensional structure of the input data. Common machine learning methods such as an SVM, XGBoost (a boosting algorithm based on gradient-boosted decision trees) or a single convolutional network can also classify and recognize, but not as well as the ensemble-learning-based convolutional neural network provided by the present application. The preset model is generated as follows: standard charts of liquid dung traits are acquired to build an image training set and a sample image test set; each convolutional neural network is trained on these sets; and the results of the trained networks are averaged to form the preset convolutional neural network model. The machine-trained convolutional networks can then extract comprehensive features.
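The averaging of several differently structured, differently initialized networks can be illustrated with tiny stand-in classifiers; a real system would use trained VGG16/VGG19/SENet/GoogLeNet backbones, and all sizes below are placeholders:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TinyClassifier:
    """Stand-in for one trained CNN with its own structure/initialization."""
    def __init__(self, n_features, n_classes, seed):
        rng = np.random.default_rng(seed)  # different initialization per member
        self.W = rng.normal(size=(n_features, n_classes))

    def predict_proba(self, x):
        return softmax(x @ self.W)

def ensemble_predict(models, x):
    """Average the per-model class probabilities, as the preset model does."""
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return probs.argmax(axis=-1), probs

models = [TinyClassifier(n_features=8, n_classes=2, seed=s) for s in range(4)]
x = np.ones((3, 8))  # 3 dummy feature vectors
labels, probs = ensemble_predict(models, x)
```

Averaging the members' outputs is what distinguishes the preset model from a single network: disagreements among differently initialized members are smoothed out.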
104. Judge the liquid dung trait features with the matching model that stores the liquid dung trait database, and, if the trait features do not match the database, output a result indicating that intestinal cleanliness is qualified.
Concretely: the acquired liquid dung trait features are compared one by one, via a perceptual hash algorithm, with the images stored in the liquid dung trait database. If they match, a result is output directly to prompt the patient to continue intestinal preparation, i.e. to prepare again and re-run the image analysis, until the features no longer match the database. Note that at this step only rejected (matched) images produce a prompt to continue; if the image is qualified (i.e. it matches no image in the database), a result indicating that intestinal cleanliness is qualified is output and can be displayed to the patient as a qualification mark.
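The one-by-one comparison in step 104 can be sketched with the simple average-hash variant of perceptual hashing; the patent does not specify which perceptual hash or which distance threshold it uses, so both are illustrative assumptions here.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """64-bit average hash of a grayscale image.

    The image is block-mean downsampled to hash_size x hash_size
    (edges are cropped so the side lengths divide evenly), and each
    bit records whether its block is brighter than the mean.
    """
    h, w = img.shape
    hh, ww = (h // hash_size) * hash_size, (w // hash_size) * hash_size
    crop = img[:hh, :ww].astype(float)
    small = crop.reshape(hash_size, hh // hash_size,
                         hash_size, ww // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def is_match(img_a: np.ndarray, img_b: np.ndarray, max_distance: int = 5) -> bool:
    """Small Hamming distance between hashes means the images match."""
    return int(np.sum(average_hash(img_a) != average_hash(img_b))) <= max_distance
```

A database lookup is then just `any(is_match(query, stored) for stored in database)`; a match triggers the "continue preparation" prompt.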
Further, as a preferred embodiment, after the liquid dung trait features fail to match the liquid dung trait database, the method further comprises: judging the validity of the trait features with the preset convolutional neural network model, that is, judging whether the features were obtained on the basis of the model's training; if the features are valid, a result prompting preparation for the intestinal examination is output.
With the method of this embodiment, a liquid dung image is acquired, complete liquid dung features are obtained through image recognition with a combination of high and low thresholds, and a result indicating intestinal cleanliness is judged and output by a convolutional neural network model trained on a continuously updated image set. The pre-endoscopy preparation can therefore be completed independently by the patient, which reduces the workload of medical staff, reduces the patient's back-and-forth trips, and saves preparation time.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of a machine learning-based liquid dung character recognition method according to an embodiment of the present invention. The method can be applied to any intelligent system with a shooting function; the embodiment of the invention does not limit which intelligent system is used. As shown in fig. 2, the method may include the following operations:
After steps 101-104 of embodiment one have produced the result prompting preparation for the bowel examination (see the steps above, not repeated here), the following steps are added to increase the flexibility of the overall method:
201. Acquire a manual judgment result, the manual judgment result comprising an extended image set. After the patient has completed the preparation above and a result indicating qualified intestinal detection has been output, medical staff may manually judge that the output is wrong; the misjudged images are then stored as an extended image set for adjusting and retraining the neural network model.
202. Expand the image training set with the extended image set, and train the convolutional neural network model with the expanded image training set.
The manual judgment result further includes the liquid dung image to be identified. After the manual judgment result is acquired, the method further includes: updating the matching model in which the liquid dung character database is stored according to the liquid dung image to be identified; and judging the liquid dung character characteristics with the updated matching model. That is, the stored extended image set of images manually found to be misjudged is collected into the image training set and the sample image test set of the convolutional neural network model, so that the ensemble can be periodically retrained and dynamically adjusted. An existing static recognition system puts pre-collected pictures directly into use after the feature extraction model has been trained; when recognition errors occur in a practical environment, they cannot be handled quickly, and the model is not retrained until errors have accumulated, because retraining is relatively expensive and cannot be performed for every single error. Such a system therefore lacks the ability to adjust the whole model dynamically, which reduces both the user experience and the accuracy of the model.
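The dynamic-adjustment loop described above can be sketched as follows; the class name, the periodic-retraining trigger, and the string image identifiers are illustrative assumptions rather than details given in the application:

```python
# Sketch: misjudged images flagged by medical staff flow into an
# extended set, which is periodically merged into the training data.
# Incrementing retrain_count stands in for an actual training run.

class DynamicTrainingSet:
    def __init__(self, base_images, retrain_every=3):
        self.train_set = list(base_images)   # (image, label) pairs
        self.extended = []                   # manually flagged errors
        self.retrain_every = retrain_every   # assumed periodic trigger
        self.retrain_count = 0

    def report_error(self, image, correct_label):
        """Store an image whose output was manually judged wrong,
        together with its corrected label, for the next retraining."""
        self.extended.append((image, correct_label))
        if len(self.extended) >= self.retrain_every:
            self._retrain()

    def _retrain(self):
        # Merge the extended set into the training set and clear it.
        self.train_set.extend(self.extended)
        self.extended.clear()
        self.retrain_count += 1              # stand-in for real training

store = DynamicTrainingSet(base_images=[("img0", 1)], retrain_every=2)
store.report_error("img1", 0)
store.report_error("img2", 1)   # second error triggers a retraining
```

After two reported errors the sketch has retrained once and absorbed both images into the training set, mirroring the periodic ensemble learning the paragraph describes.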
According to this method, the liquid dung image can be acquired, a complete liquid dung region can be extracted by combining high and low thresholds in the image recognition stage, and a result prompting the intestinal cleanliness can then be judged and output by a convolutional neural network model trained on a continuously updated image set. The patient can therefore complete the preparation before a gastrointestinal endoscopy independently, which reduces the workload of medical personnel, reduces the number of trips the patient must make back and forth, and saves preparation time. In addition, the model and the database can be updated in real time according to the manual judgment results, so that the data used for detection is provided dynamically, which overcomes the problem in the prior art that recognition errors encountered in a practical environment cannot be handled quickly.
Embodiment Three
Referring to fig. 3, fig. 3 is a schematic view of a machine learning-based liquid dung character recognition system according to an embodiment of the present invention. The machine learning-based liquid dung character recognition system includes:
and the image acquisition module 31 is used for acquiring a liquid dung image to be identified.
In order to facilitate self-examination by the patient, the liquid dung image may specifically be acquired by photographing the liquid dung with a handheld intelligent device after the patient starts the intestinal cleanliness self-examination or judges that the intestinal cleanliness appears good, wherein the handheld intelligent device is any device with a photographing function; the present invention does not limit the device.
The identification module 32 is configured to segment the liquid dung image to be identified based on a multi-threshold edge detection method, and to extract a liquid dung area image from the liquid dung image.
The traditional edge segmentation method includes the following specific implementation steps:
Firstly, the liquid dung image to be identified is smoothed with a Gaussian function; smoothing the image with a Gaussian function reduces the noise in the image.
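As a minimal illustration of this smoothing step, the sketch below applies a one-dimensional [1, 2, 1]/4 kernel, a common discrete approximation of a small Gaussian, to a single scan line; the kernel size and pixel values are illustrative assumptions:

```python
# Smooth the interior of one scan line with a [1, 2, 1]/4 kernel,
# attenuating single-pixel noise while preserving broad structure.

def gaussian_smooth_1d(row):
    out = list(row)                      # border pixels left unchanged
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + 2 * row[i] + row[i + 1]) / 4
    return out

noisy = [10, 10, 90, 10, 10]             # one-pixel noise spike at index 2
smooth = gaussian_smooth_1d(noisy)       # spike attenuated to 50.0
```

In two dimensions the same kernel is applied along both axes (the Gaussian is separable), which is what reduces the noise before gradients are computed.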
Secondly, the gradient values of the liquid dung image to be identified in all directions are calculated: the change of the gray value of the image is computed with the Sobel operator to obtain the gradient values in all directions.
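This gradient computation can be sketched in pure Python with the standard 3x3 Sobel kernels; the image patch is toy data, not an actual liquid dung image:

```python
# Convolve an interior pixel with the horizontal and vertical Sobel
# kernels and combine the two responses into a gradient magnitude.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, r, c):
    """Gradient magnitude at interior pixel (r, c)."""
    gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    return (gx * gx + gy * gy) ** 0.5

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
mag = sobel_magnitude(img, 1, 1)   # large response on the edge
```

A strong response appears where the gray value changes sharply, and a flat region yields zero, which is exactly the gradient map the following thresholding steps consume.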
After that, non-maximum values of the gradient values in the respective directions are filtered out, because the edges of the image may have been widened during the Gaussian filtering. Filtering the non-maximum gradient values removes points that do not belong to an image edge, so that each edge is as close to 1 pixel wide as possible: if a pixel belongs to an image edge, its gradient value is the maximum along the gradient direction; otherwise it is not an image edge and its gray value is set to 0.
Finally, edges are detected using upper and lower thresholds. In this step, an upper threshold and a lower threshold are set, and pixels whose gradient lies between the two are detected and taken as edge pixels. However, with a stricter threshold setting, that is, when the upper and lower thresholds are both high, image noise is suppressed and a clear contour map is obtained, but genuine contours are suppressed as well, so the contours are incomplete. With a looser threshold setting, that is, when the upper and lower thresholds are both low, a complete contour can be obtained, but there is more noise in the image and the edges cannot be resolved well.
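As a minimal illustration of this double-threshold (hysteresis) step, the pure-Python sketch below classifies the pixels of a toy gradient-magnitude map; the grid values and thresholds are illustrative assumptions, not data from the application:

```python
# Pixels at or above the high threshold are strong edges; pixels between
# the two thresholds are weak and are kept only if they connect, possibly
# through other weak pixels, to a strong pixel; the rest are suppressed.

def hysteresis_edges(grad, low, high):
    rows, cols = len(grad), len(grad[0])
    strong = {(r, c) for r in range(rows) for c in range(cols)
              if grad[r][c] >= high}
    edges = set(strong)
    frontier = list(strong)
    while frontier:                       # grow edges from strong seeds
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in edges
                        and grad[nr][nc] >= low):
                    edges.add((nr, nc))
                    frontier.append((nr, nc))
    return edges

grad = [
    [0, 10,  0, 0, 30],   # 30 is weak and isolated -> suppressed
    [0, 60, 40, 0,  0],   # 60 is strong; 40 is weak but touches it
    [0,  0,  0, 0,  0],
]
edges = hysteresis_edges(grad, low=20, high=50)
```

The example shows both failure modes described above: a single high threshold would lose the weak-but-connected pixel (incomplete contour), while a single low threshold would also admit the isolated noise pixel.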
Therefore, the present application improves the final step of the traditional edge segmentation method: the edge information of the liquid dung image to be identified is collected with a set high threshold, a contour map is generated from a set low threshold together with the edge information, and the liquid dung image to be identified is then segmented according to the contour map. The concrete implementation is as follows:
For the contour map obtained with the preset high threshold, the edge boundaries Xmin, Xmax, Ymin and Ymax in the four directions (up, down, left and right) are calculated. The contour obtained with the low threshold is then completed within the resulting boundary according to the following formula.
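The boundary computation can be sketched as follows. Since the completion formula itself is not reproduced in this text, the merge rule used here (keeping low-threshold contour points that fall inside the high-threshold bounding box) is an assumed stand-in, not the application's actual formula:

```python
# High-threshold contour: clean but incomplete. Low-threshold contour:
# complete but noisy. Bound the former, then admit the latter's points
# only inside that bounding box.

def contour_bounds(points):
    """Edge boundaries Xmin, Xmax, Ymin, Ymax of a contour."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), max(xs), min(ys), max(ys)

def complete_contour(high_contour, low_contour):
    xmin, xmax, ymin, ymax = contour_bounds(high_contour)
    inside = {(x, y) for x, y in low_contour
              if xmin <= x <= xmax and ymin <= y <= ymax}
    return set(high_contour) | inside

high = [(2, 2), (2, 6), (6, 2), (6, 6)]   # sparse but clean contour
low = [(2, 4), (4, 2), (9, 9)]            # complete but noisy
merged = complete_contour(high, low)       # (9, 9) noise stays outside
```

This captures the stated intent: the high threshold localizes the region, and the low threshold fills in the missing contour detail inside it while noise outside the boundary is discarded.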
And the feature extraction module 33 is configured to extract image features of the liquid dung area image according to a preset convolutional neural network model to generate the liquid dung character characteristics.
The convolutional neural network model includes a plurality of convolutional neural network models with different structures and different initializations; the optional convolutional model templates include, but are not limited to, VGG16, VGG19, SENet and GoogLeNet. A convolutional neural network is a feedforward neural network whose artificial neurons respond to part of the surrounding units within their receptive field, and it performs excellently on large-scale image processing. A convolutional neural network consists of one or more convolutional layers and a fully-connected layer on top, and also includes associated weights and pooling layers. This structure enables the network to exploit the two-dimensional structure of the input data. Common machine learning methods in the prior art, such as an SVM, XGBoost (a boosting algorithm based on GBDT) or a single convolutional neural network, can also achieve classification and recognition, but not as well as the ensemble-learning-based convolutional neural network provided by the present application. The preset convolutional neural network model is generated as follows: firstly, standard maps of the liquid dung character are acquired to generate an image training set and a sample image test set; the convolutional neural network models are trained on the image training set and the sample image test set; and finally the results of the trained convolutional neural network models are averaged to generate the preset convolutional neural network model. Comprehensive features can thus be extracted by the machine-trained convolutional neural networks.
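The averaging of the ensemble results described above can be sketched as follows; the probability vectors are hypothetical model outputs, not real network predictions:

```python
# Average the class-probability outputs of several differently
# structured / initialized CNNs and take the argmax of the mean.

def ensemble_predict(prob_outputs):
    n_models = len(prob_outputs)
    n_classes = len(prob_outputs[0])
    avg = [sum(p[c] for p in prob_outputs) / n_models
           for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Hypothetical softmax outputs of four base models (e.g. VGG16, VGG19,
# SENet, GoogLeNet) over the classes [unqualified, qualified]:
outputs = [
    [0.30, 0.70],
    [0.45, 0.55],
    [0.20, 0.80],
    [0.40, 0.60],
]
label, avg = ensemble_predict(outputs)
```

Averaging over models with different structures and initializations smooths out the idiosyncratic errors of any single network, which is the stated advantage over a single CNN or a classical classifier.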
And the judging module 34 is configured to judge the liquid dung character characteristics through a matching model in which a liquid dung character database is stored, and to output a result prompting that the intestinal cleanliness is qualified if the liquid dung character characteristics do not match the liquid dung character database. The acquired liquid dung character characteristics are compared one by one, through a perceptual hash algorithm, with the images stored in the liquid dung character database. If an image matches, a result is output directly to prompt the patient to continue the intestinal preparation, that is, to perform the intestinal preparation again and repeat the image analysis, until the image no longer matches the liquid dung character database. Note that in this step only unqualified (matched) images produce such a prompt; if the image is qualified (that is, it matches none of the images in the database), a result prompting that the intestinal cleanliness is qualified is output, which can be displayed to the patient as a qualified mark.
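The one-by-one perceptual-hash comparison can be sketched with a toy average hash, one simple member of the perceptual-hash family; the hash size, pixel values and distance threshold below are illustrative assumptions:

```python
# Hash each small grayscale grid by thresholding at its mean, then
# compare hashes by Hamming distance; a small distance is a match.

def average_hash(gray):
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def matches(img, db_images, max_distance=2):
    """Compare the query against each stored database image one by
    one, as the module does (distance threshold assumed)."""
    query = average_hash(img)
    return any(hamming(query, average_hash(d)) <= max_distance
               for d in db_images)

query = [[200, 40], [40, 200]]
db = [[[210, 35], [50, 190]],     # near-duplicate of the query
      [[10, 250], [250, 10]]]     # inverted pattern
```

Because the hash summarizes coarse structure, a near-duplicate of a stored unqualified image still matches despite small pixel differences, while a structurally different (qualified) image does not.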
And the validity judging module 35 is configured to judge the validity of the liquid dung character characteristics through the preset convolutional neural network model and, if the liquid dung character characteristics are valid, to output a result prompting that the intestinal examination preparation is complete. That is, after the liquid dung character characteristics have been judged through the matching model in which the liquid dung character database is stored, and if the liquid dung character characteristics do not match the liquid dung character database, the method further includes: judging the validity of the liquid dung character characteristics through the preset convolutional neural network model, namely judging whether the liquid dung character characteristics are consistent with the training basis of the convolutional neural network model, and outputting a result prompting that the intestinal examination preparation is complete if the liquid dung character characteristics are valid.
According to the system provided by this embodiment, a liquid dung image can be acquired, a complete liquid dung region can be extracted by combining high and low thresholds in the image recognition stage, and a result prompting the intestinal cleanliness can then be judged and output by a convolutional neural network model trained on a continuously updated image set. The patient can therefore complete the preparation before a gastrointestinal endoscopy independently, which reduces the workload of medical personnel, reduces the number of trips the patient must make back and forth, and saves preparation time before the endoscopy.
Embodiment Four
Referring to fig. 4, fig. 4 is a schematic view of a handheld intelligent device according to an embodiment of the present invention. The handheld intelligent device includes:
and the photographing module 41 is used for generating a liquid dung image to be identified through photographing. The shooting can be realized through a camera lens or a camera module bound on a mobile terminal such as a mobile phone, and the invention does not limit the shooting.
And the processing module 42 is used for processing the liquid dung image with the machine learning-based liquid dung character recognition method to generate a result prompting that the intestinal cleanliness is qualified. The processing module may be implemented as a central processing unit programmed to execute the machine learning-based liquid dung character recognition method; for the method itself, reference may be made to embodiment one or embodiment two above, which is not repeated here.
And the display module 43 is used for displaying the result prompting that the intestinal cleanliness is qualified. The display module can be implemented as a display screen or a display device that is connected to the processing module 42 and has a display function; when the result prompting that the intestinal cleanliness is qualified is received, the display module 43 is triggered to display it.
According to the handheld intelligent device provided by this embodiment, a liquid dung image can be acquired, a complete liquid dung region can be extracted by combining high and low thresholds in the image recognition stage, and a result prompting the intestinal cleanliness can then be judged and output by a convolutional neural network model trained on a continuously updated image set. The patient can therefore complete the preparation before a gastrointestinal endoscopy independently through the handheld intelligent device, which reduces the workload of medical personnel, reduces the number of trips the patient must make back and forth, and saves preparation time before the endoscopy.
Embodiment Five
Referring to fig. 5, fig. 5 is a schematic structural diagram of a machine learning-based liquid dung character recognition interaction device according to an embodiment of the present invention. The device depicted in fig. 5 can be applied to an intelligent system with a shooting function; the embodiment of the present invention does not limit the system to which the interaction device is applied. As shown in fig. 5, the device may include:
a memory 601 in which executable program code is stored;
a processor 602 coupled to a memory 601;
the processor 602 calls the executable program code stored in the memory 601 to perform the machine learning-based liquid dung character recognition method described in embodiment one.
Embodiment Six
The embodiment of the invention discloses a computer-readable storage medium for storing a computer program for electronic data exchange, wherein the computer program enables a computer to execute the machine learning-based liquid dung character recognition method described in embodiment one.
Embodiment Seven
The embodiment of the invention discloses a computer program product, which comprises a non-transitory computer-readable storage medium storing a computer program, wherein the computer program is operable to make a computer execute the machine learning-based liquid dung character recognition method described in embodiment one or embodiment two.
The above-described embodiments are only illustrative, and the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a magnetic tape memory, or any other medium which can be used to carry or store data and which can be read by a computer.
Finally, it should be noted that the machine learning-based liquid dung character recognition method and system disclosed in the embodiments of the present invention are only preferred embodiments of the present invention, used only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A machine learning-based liquid dung character recognition method, characterized in that the method comprises the following steps:
acquiring a liquid dung image to be identified;
segmenting the liquid dung image to be identified based on a multi-threshold edge detection method, and extracting a liquid dung area image of the liquid dung image;
extracting the image characteristics of the liquid dung area image according to a preset convolutional neural network model to generate liquid dung character characteristics;
and judging the liquid dung character characteristics through a matching model stored with a liquid dung character database, and if the liquid dung character characteristics are not matched with the liquid dung character database, outputting a result for prompting that the cleanliness of the intestinal tract is qualified.
2. The machine learning-based liquid dung character recognition method of claim 1, wherein the segmenting the liquid dung image to be identified based on the multi-threshold edge detection method comprises:
smoothing the liquid dung image to be identified by using a Gaussian function;
calculating gradient values of the liquid dung image to be identified in all directions;
filtering the non-maximum values of the gradient values of the respective directions;
collecting edge information of the liquid dung image to be identified through a set high threshold value, and generating a contour map through a set low threshold value and the edge information;
and segmenting the liquid dung image to be identified according to the contour map.
3. The machine learning-based liquid dung character recognition method according to claim 2, wherein the convolutional neural network model comprises a plurality of convolutional neural network models of different structures and different initializations, and the preset convolutional neural network model is generated by a method comprising:
acquiring a standard map of the liquid dung character to generate an image training set and a sample image testing set;
training a convolutional neural network model through the image training set and the sample image testing set to generate a convolutional neural network model;
and averaging the results of the convolutional neural network models generated by training to generate a preset convolutional neural network model.
4. The machine learning-based liquid dung character recognition method of claim 3, wherein the judging the liquid dung character characteristics through a matching model stored with a liquid dung character database comprises:
and comparing the liquid dung character characteristics with the images stored in the liquid dung character database one by one through a perceptual hash algorithm.
5. The machine learning-based liquid dung character recognition method according to any one of claims 1 to 4, wherein, after the judging the liquid dung character characteristics through the matching model stored with the liquid dung character database, if the liquid dung character characteristics are not matched with the liquid dung character database, the method further comprises:
and judging the effectiveness of the liquid dung character characteristic through a preset convolutional neural network model, and if the liquid dung character characteristic is effective, outputting a result for prompting preparation of intestinal examination.
6. The machine learning-based liquid dung character recognition method of claim 5, further comprising:
acquiring a manual judgment result, wherein the manual judgment result comprises an extended image set;
and expanding the image training set according to the expanded image set, and training a convolutional neural network model by using the expanded image training set.
7. The machine learning-based liquid dung character recognition method according to claim 6, wherein the manual judgment result further comprises a liquid dung image to be identified, and after the acquiring the manual judgment result, the method further comprises:
updating a matching model stored with a liquid dung character database according to the liquid dung image to be identified;
and judging the character characteristics of the liquid dung by using the updated matching model.
8. A machine learning-based liquid dung character recognition system, the system comprising:
the image acquisition module is used for acquiring a liquid dung image to be identified;
the identification module is used for segmenting the liquid dung image to be identified based on a multi-threshold edge detection method and extracting a liquid dung area image of the liquid dung image;
the characteristic extraction module is used for extracting the image characteristics of the liquid dung area image according to a preset convolutional neural network model to generate liquid dung character characteristics;
and the judging module is used for judging the liquid dung character characteristics through a matching model stored with a liquid dung character database, and outputting a result for prompting that the cleanliness of the intestinal tract is qualified if the liquid dung character characteristics are not matched with the liquid dung character database.
9. The machine learning-based liquid dung character recognition system of claim 8, the system further comprising:
and the effectiveness judgment module is used for judging the effectiveness of the liquid dung character characteristic through a preset convolutional neural network model, and if the liquid dung character characteristic is effective, outputting a result for prompting intestinal examination preparation.
10. A handheld smart device, the handheld smart device comprising:
the shooting module is used for generating a liquid dung image to be identified through shooting;
a processing module, which is used for processing the liquid dung image by using the liquid dung character recognition method based on machine learning according to any one of claims 1-7 to generate a result for prompting that the cleanliness of the intestinal tract is qualified;
and the display module is used for displaying the result for prompting the qualification of the cleanliness of the intestinal tract.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110206359.4A CN112907544A (en) | 2021-02-24 | 2021-02-24 | Machine learning-based liquid dung character recognition method and system and handheld intelligent device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112907544A true CN112907544A (en) | 2021-06-04 |
Family
ID=76106909
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023074292A1 (en) * | 2021-10-28 | 2023-05-04 | Necプラットフォームズ株式会社 | Excrement analysis device, excrement analysis method, pre-colonoscopy state confirmation device, state confirmation system, state confirmation method, and non-temporary computer-readable medium |
JP7424651B2 (en) | 2021-10-28 | 2024-01-30 | Necプラットフォームズ株式会社 | Excrement analysis device, excrement analysis method, and program |
CN114511558A (en) * | 2022-04-18 | 2022-05-17 | 武汉楚精灵医疗科技有限公司 | Method and device for detecting cleanliness of intestinal tract |
CN117351317A (en) * | 2023-10-25 | 2024-01-05 | 中国人民解放军总医院第二医学中心 | Automatic identification method and system for last stool character picture |
CN117351317B (en) * | 2023-10-25 | 2024-04-09 | 中国人民解放军总医院第二医学中心 | Automatic identification method and system for last stool character picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||