CN113470030B - Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment - Google Patents
- Publication number
- CN113470030B (application CN202111033610.8A)
- Authority
- CN
- China
- Prior art keywords
- cleanliness
- sample
- tissue image
- image
- rounding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
Abstract
The disclosure relates to a method and a device for determining the cleanliness of a tissue cavity, a readable medium, and an electronic device, in the technical field of image processing. The method comprises: first acquiring a tissue image captured by an endoscope, and then determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, the initial cleanliness being a floating-point value. Finally, the initial cleanliness is rounded according to the target rounding mode to obtain the cleanliness of the tissue image, which is an integer. Because the recognition model determines both the floating-point initial cleanliness and the rounding mode suited to the tissue image, rounding the initial cleanliness with that mode yields the cleanliness of the tissue image with improved accuracy.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for determining cleanliness of a tissue cavity, a readable medium, and an electronic device.
Background
An endoscope carries components such as an optical lens, an image sensor, and a light source, and can be introduced into the human body so that a doctor can directly observe internal conditions; it is therefore widely used in medicine. To ensure accurate endoscopy results, the cleanliness inside the tissue cavity needs to be assessed: if the cleanliness is too low, tissue preparation is insufficient, which can cause small polyps or adenomas to be missed, or even cause the examination to fail and have to be repeated. Accurately identifying the cleanliness of the tissue therefore helps ensure the effectiveness and accuracy of the endoscopy. The endoscope may be, for example, an enteroscope or a gastroscope. For enteroscopy, the examined tissue is the intestinal tract and the cleanliness of the intestinal cavity needs to be identified; for gastroscopy, the examined tissue is the esophagus or the stomach and the cleanliness of the esophageal or gastric cavity needs to be identified.
However, the cleanliness of the tissue cavity is usually judged by professionals from the actual condition of the tissue during the withdrawal stage after the endoscopy is finished. This places high demands on the professionals' experience and skill, introduces subjectivity, and makes it difficult to guarantee that the cleanliness of the tissue cavity is identified accurately.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a method of determining cleanliness of a tissue cavity, the method comprising:
acquiring a tissue image acquired by an endoscope;
determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is a floating point type;
and rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is an integer.
In a second aspect, the present disclosure provides an apparatus for determining cleanliness of a tissue cavity, the apparatus comprising:
the acquisition module is used for acquiring a tissue image acquired by the endoscope;
the identification module is used for determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained identification model, wherein the initial cleanliness is a floating point type;
and the rounding module is used for rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is an integer.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the method of the first aspect of the present disclosure.
According to the technical scheme, a tissue image acquired by an endoscope is first obtained, and then a floating-point initial cleanliness and a target rounding mode are determined according to the tissue image and a pre-trained recognition model. Finally, the initial cleanliness is rounded according to the target rounding mode to obtain the cleanliness of the tissue image, which is an integer. Because the recognition model determines both the floating-point initial cleanliness and the rounding mode suited to the tissue image, rounding the initial cleanliness with that mode yields the cleanliness of the tissue image with improved accuracy.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow chart illustrating a method of determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating another method of determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a recognition model in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating training a recognition model in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating another method of determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 6 is a flow chart illustrating another method of determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 7 is a flow diagram illustrating a method of training a classification model according to an exemplary embodiment;
FIG. 8 is a block diagram illustrating an apparatus for determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating another device for determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating another device for determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating another device for determining cleanliness of a tissue cavity in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
FIG. 1 is a flow chart illustrating a method for determining cleanliness of a tissue cavity, as shown in FIG. 1, according to an exemplary embodiment, the method comprising the steps of:
Step 101, acquiring a tissue image acquired by an endoscope.

For example, during an endoscopic examination the endoscope may continuously capture images inside the tissue at a preset acquisition cycle, and the tissue image in this embodiment may be the image captured by the endoscope at the current time, or an image captured at any earlier time. That is, the tissue image may be captured while the endoscope advances into the tissue (endoscope insertion) or while it is withdrawn (endoscope withdrawal); the present disclosure does not limit this. Further, after the tissue image is obtained, it may be preprocessed, which can be understood as augmenting the data contained in the tissue image. The preprocessing may include: random affine transformation; random adjustment of brightness, contrast, saturation, and hue; random erasing of some pixels; flipping (including left-right flipping, up-down flipping, rotation, etc.); and resizing, so that the resulting preprocessed tissue image has a fixed size (e.g., 224 × 224).
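As a rough illustration of this kind of preprocessing (the patent does not specify an implementation, so the function below, its particular augmentation choices, and the nearest-neighbor resize are all assumptions), a minimal NumPy sketch might look like:

```python
import numpy as np

def preprocess(image: np.ndarray, out_size: int = 224, rng=None) -> np.ndarray:
    """Toy augmentation pipeline: random flip, random erasing, nearest-neighbor resize."""
    rng = rng or np.random.default_rng(0)
    # Random left-right flip with probability 0.5
    if rng.random() < 0.5:
        image = image[:, ::-1]
    # Randomly erase a small rectangular patch (set its pixels to 0)
    h, w = image.shape[:2]
    eh, ew = h // 8, w // 8
    y, x = rng.integers(0, h - eh), rng.integers(0, w - ew)
    image = image.copy()
    image[y:y + eh, x:x + ew] = 0
    # Nearest-neighbor resize to out_size x out_size
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    return image[ys][:, xs]

img = np.full((480, 640, 3), 128, dtype=np.uint8)
out = preprocess(img)
print(out.shape)  # (224, 224, 3)
```

A production pipeline would more likely use a library such as torchvision's transforms for the affine, color-jitter, and erasing steps.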
Step 102, determining an initial cleanliness and a target rounding mode according to the tissue image and the pre-trained recognition model, wherein the initial cleanliness is a floating-point value.
For example, the preprocessed tissue image may be input into the pre-trained recognition model, so that the recognition model recognizes the tissue image and outputs the floating-point initial cleanliness and the target rounding mode. Specifically, the recognition model can determine the match probability of the tissue image with each of a plurality of cleanliness types and then determine the initial cleanliness from those match probabilities; the initial cleanliness is a floating-point value, i.e., usually not an integer. The cleanliness types indicate the degree of cleanliness of the tissue image. Taking an enteroscope as the endoscope and an intestinal image as the tissue image, the cleanliness types may be the four grades of the Boston Bowel Preparation Scale (BBPS): type 0, corresponding to "the whole segment of intestinal mucosa cannot be observed due to solid or liquid feces that cannot be removed"; type 1, corresponding to "part of the intestinal mucosa cannot be observed due to stains, turbid liquid, and residual feces"; type 2, corresponding to "the intestinal mucosa is observed well, but a small amount of stains, turbid liquid, and feces remain"; and type 3, corresponding to "the intestinal mucosa is observed well, with essentially no stains, turbid liquid, or feces remaining". Further, the recognition model can also determine the match probability of the tissue image with each of a plurality of rounding modes and then determine the target rounding mode from those match probabilities. The rounding modes may include, for example, rounding up (e.g., the ceil function) and rounding down (e.g., the floor function).
The recognition model can be trained on a large number of pre-collected training images and the cleanliness labels corresponding to those training images. The recognition model may be, for example, a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory network), or the Encoder of a Transformer (for example, a Vision Transformer); the disclosure is not limited in this respect.
Step 103, rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is an integer.
For example, after the initial cleanliness and the target rounding mode output by the recognition model are obtained, the initial cleanliness is rounded according to the target rounding mode to obtain the integer cleanliness of the tissue image. If the target rounding mode is rounding up, the initial cleanliness is rounded up and used as the cleanliness of the tissue image; if the target rounding mode is rounding down, the initial cleanliness is rounded down. For example, if the initial cleanliness is 2.8, the cleanliness is 3 when the target rounding mode is rounding up, and 2 when it is rounding down. Compared with randomly selecting a rounding mode for the floating-point cleanliness, which introduces a random error and reduces accuracy, using the recognition model to learn from the tissue image which rounding mode suits it effectively improves the robustness and accuracy of the resulting cleanliness. Furthermore, because the tissue image can be captured by the endoscope at any time, this embodiment can determine the current cleanliness of the tissue cavity in real time rather than only during endoscope withdrawal, and the next operation of the endoscope can be decided promptly according to that cleanliness, avoiding problems such as ineffective insertion and repeated examination.
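The rounding step itself is straightforward. A minimal sketch, assuming the two modes named above map to Python's math.ceil and math.floor (the mode names "up" and "down" are illustrative labels, not from the patent):

```python
import math

# Map each rounding-mode label to its rounding function.
ROUNDING_FNS = {"up": math.ceil, "down": math.floor}

def apply_rounding(initial_cleanliness: float, target_mode: str) -> int:
    """Round the floating-point initial cleanliness using the selected mode."""
    return ROUNDING_FNS[target_mode](initial_cleanliness)

print(apply_rounding(2.8, "up"))    # 3
print(apply_rounding(2.8, "down"))  # 2
```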
It should be noted that, the endoscope described in the embodiment of the present disclosure may be, for example, an enteroscope or a gastroscope, and if the endoscope is an enteroscope, the tissue image is an intestinal tract image, and the tissue cavity is an intestinal cavity, so that the cleanliness of the intestinal cavity is determined in the embodiment. If the endoscope is a gastroscope, the tissue image may be an esophageal image, a stomach image or a duodenal image, and correspondingly, the tissue cavity may be an esophageal cavity, an intra-gastric cavity or a duodenal cavity, and then the cleanliness of the esophageal cavity, the intra-gastric cavity or the duodenal cavity is determined in this embodiment. The endoscope may also be used to acquire images of other tissues having cavities to determine the cleanliness of the tissue cavity in the present disclosure, which is not specifically limited by the present disclosure.
In summary, the present disclosure first obtains a tissue image acquired by an endoscope, and then determines an initial cleanliness and a target rounding mode of a floating point type according to the tissue image and a pre-trained recognition model. And finally, rounding the initial cleanliness according to a target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is integer. According to the method and the device, the floating point type initial cleanliness and the target rounding mode suitable for the tissue image are determined through the recognition model, so that the initial cleanliness is rounded by the target rounding mode to obtain the cleanliness of the tissue image, and the accuracy of the cleanliness can be improved.
FIG. 2 is a flow chart illustrating another method for determining cleanliness of a tissue cavity, according to an exemplary embodiment. As shown in FIG. 2, and with the recognition model shown in FIG. 3, the recognition model comprises: a feature extraction submodel, a cleanliness submodel, and a rounding submodel. Specifically, the feature extraction submodel may be, for example, the Encoder of a Vision Transformer, or another structure capable of extracting image features; the disclosure is not limited in this respect. The cleanliness submodel may be, for example, two linear layers (which may be understood as fully connected layers) cascaded through a ReLU nonlinearity, and the rounding submodel may be, for example, a single linear layer; other structures are also possible, and the disclosure does not specifically limit them.
Accordingly, the implementation of step 102 may include:
Step 1021, inputting the tissue image into the feature extraction submodel to obtain image features output by the feature extraction submodel.

For example, the tissue image is first input into the feature extraction submodel to obtain image features, output by the feature extraction submodel, that characterize the tissue image. The process of extracting image features is described in detail below, taking the Encoder of a Vision Transformer as the structure of the feature extraction submodel.
The input tissue image is first divided into a plurality of sub-images of equal size (each denoted a patch). For example, if the input tissue image is 224 × 224 and each sub-image is 16 × 16, the tissue image is divided into 196 sub-images. Each sub-image is first flattened by a Linear Projection layer to obtain an image vector (denoted patch embedding) corresponding to the sub-image; this image vector represents the sub-image. Further, a position vector (denoted position embedding) indicating the position of the sub-image within the tissue image may also be generated, with the same size as the patch embedding. Note that the position embedding can be randomly initialized, and the Encoder learns a representation of the corresponding sub-image's position in the tissue image. Thereafter, a token corresponding to each sub-image can be generated from the sub-image's image vector and position vector; specifically, the token can be obtained by splicing (which may be understood as concatenating) the sub-image's image vector and position vector.
Further, after the token corresponding to each sub-image is obtained, a token corresponding to the tissue image as a whole may be generated. For example, an image vector and a position vector may be randomly generated and spliced to serve as the token corresponding to the tissue image.
Then, the token corresponding to each sub-image and the token corresponding to the tissue image can be input into the Encoder. The Encoder generates a local encoding vector for each sub-image from that sub-image's token, and simultaneously generates a global encoding vector for the tissue image from the tokens of all the sub-images. The local encoding vector can be understood as a learned vector that represents the corresponding sub-image, and the global encoding vector as a learned vector that represents the whole tissue image. Finally, the global encoding vector may be used as the output of the feature extraction submodel, i.e., the image features. Alternatively, the global encoding vector and the local encoding vectors may be spliced together as the output, so that the image features capture both global and local information.
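The patch-splitting and token construction described above can be sketched as follows. This is a toy NumPy illustration, not the patent's implementation: the projection matrix, embedding size d_model, and random initializations are assumptions, and the splicing of image and position vectors follows the description above (a standard ViT instead adds the position embedding).

```python
import numpy as np

def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    # Split an (H, W, C) image into non-overlapping patch x patch sub-images,
    # each flattened into a vector.
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    blocks = image.reshape(rows, patch, cols, patch, c).swapaxes(1, 2)
    return blocks.reshape(rows * cols, patch * patch * c)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
patches = patchify(img)            # 196 sub-images, each a 16*16*3 = 768 vector

# Linear projection of each flattened sub-image -> patch embedding, then a
# randomly initialized position vector is spliced onto each, as described above.
d_model = 64
W = rng.normal(size=(patches.shape[1], d_model))
patch_emb = patches @ W                                  # (196, 64)
pos_emb = rng.normal(size=patch_emb.shape)               # position embeddings
tokens = np.concatenate([patch_emb, pos_emb], axis=1)    # (196, 128)

# A separately generated token stands in for the whole tissue image.
image_token = np.concatenate([rng.normal(size=(1, d_model)),
                              rng.normal(size=(1, d_model))], axis=1)
seq = np.concatenate([image_token, tokens], axis=0)      # Encoder input
print(patches.shape, seq.shape)  # (196, 768) (197, 128)
```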
Step 1022, inputting the image features into the cleanliness submodel and the rounding submodel respectively to obtain a cleanliness vector output by the cleanliness submodel and a rounding vector output by the rounding submodel.
Step 1023, determining the initial cleanliness according to the cleanliness vector, and determining the target rounding mode according to the rounding vector.
For example, the image features may be input into the cleanliness submodel and the rounding submodel respectively, to obtain the cleanliness vector output by the cleanliness submodel and the rounding vector output by the rounding submodel. The dimension of the cleanliness vector equals the number of cleanliness types. For example, if the tissue image is an intestinal image (i.e., the endoscope is an enteroscope), the BBPS defines four cleanliness types, so the cleanliness vector may have dimension 1 × 4, each dimension corresponding to one cleanliness type. Similarly, the dimension of the rounding vector output by the rounding submodel equals the number of rounding modes; for example, if the rounding modes are rounding up and rounding down, the rounding vector may have dimension 1 × 2, each dimension corresponding to one rounding mode. Finally, the initial cleanliness is determined from the cleanliness vector, and the target rounding mode from the rounding vector.
In one implementation, the manner of determining the initial cleanliness in step 1023 may include:
step 1) determining the matching probability of the tissue image and various cleanliness types according to the cleanliness vector.
And 2) determining the initial cleanliness according to the weight corresponding to each cleanliness type and the matching probability of the tissue image and the cleanliness types.
For example, the cleanliness vector can be processed by using a Softmax function to obtain matching probabilities of the tissue image and the multiple cleanliness types, and then the multiple matching probabilities are weighted and summed according to the weight corresponding to each cleanliness type and the matching probabilities of the tissue image and the multiple cleanliness types to obtain the initial cleanliness. In the case where the tissue image is an intestinal image (i.e., the endoscope is an enteroscope), the weight corresponding to each cleanliness type can be determined according to the score of the BBPS. Specifically, the initial cleanliness can be determined by formula one:
S = Σ_{i=0}^{N-1} a_i · p_i(x),   p_i(x) = exp(f_i(x)) / Σ_{j=0}^{N-1} exp(f_j(x))

where S denotes the initial cleanliness, N denotes the number of cleanliness types, a_i denotes the weight corresponding to the i-th cleanliness type, p_i(x) denotes the match probability of the tissue image with the i-th cleanliness type (which can be understood as the output of the Softmax function), f_i(x) denotes the value of the i-th dimension of the cleanliness vector, and x denotes the image features. Taking an intestinal image (i.e., an enteroscope) with weights determined according to the BBPS as an example, the weight for cleanliness type 0 is 0, for type 1 is 1, for type 2 is 2, and for type 3 is 3, so N = 4 and a_i = i.
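Formula one amounts to a softmax followed by a weighted sum. A small illustrative sketch, in which the cleanliness-vector values are hypothetical and the weights a_i = i follow the BBPS example above:

```python
import numpy as np

def softmax(v: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over a 1-D vector.
    e = np.exp(v - v.max())
    return e / e.sum()

def initial_cleanliness(cleanliness_vector: np.ndarray,
                        weights: np.ndarray) -> float:
    # Formula one: weighted sum of per-type match probabilities.
    p = softmax(cleanliness_vector)
    return float(np.dot(weights, p))

f = np.array([0.1, 0.3, 2.0, 1.2])      # hypothetical cleanliness vector f(x)
S = initial_cleanliness(f, weights=np.arange(4))  # a_i = i for types 0..3
print(S)  # a float between 0 and 3, close to 2 for this input
```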
The manner of determining the target rounding mode in step 1023 may include:
and 3) determining the matching probability of the tissue image and a plurality of rounding modes according to the rounding vector.
And 4) determining a target rounding mode in the multiple rounding modes according to the matching probability of the tissue image and the multiple rounding modes.
For example, the rounding vector may also be processed by using a Softmax function to obtain matching probabilities of the tissue image and the multiple rounding modes, and then, the rounding mode with the largest matching probability is selected as the target rounding mode from the matching probabilities of the tissue image and the multiple rounding modes. Specifically, the matching probability of the tissue image with a plurality of rounding modes can be determined by a formula two:
q_j(x) = exp(g_j(x)) / Σ_{k=0}^{M-1} exp(g_k(x))

where M denotes the number of rounding modes, q_j(x) denotes the match probability of the tissue image with the j-th rounding mode, g_j(x) denotes the value of the j-th dimension of the rounding vector, and x denotes the image features. Taking rounding modes consisting of rounding up and rounding down as an example, M = 2.
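Formula two is likewise a softmax, with the target rounding mode taken as the most probable one. A minimal sketch (the mode labels and the input values are hypothetical):

```python
import numpy as np

def target_rounding_mode(rounding_vector: np.ndarray,
                         modes=("up", "down")) -> str:
    # Formula two: softmax over the rounding vector, then pick the mode
    # with the largest match probability.
    e = np.exp(rounding_vector - rounding_vector.max())
    q = e / e.sum()
    return modes[int(np.argmax(q))]

print(target_rounding_mode(np.array([0.4, 1.1])))   # "down"
print(target_rounding_mode(np.array([2.0, -1.0])))  # "up"
```

Since argmax is invariant under softmax, the probabilities matter only if downstream logic needs them; the explicit softmax mirrors the formula.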
FIG. 4 is a flow diagram illustrating training a recognition model according to an exemplary embodiment, where the recognition model is trained by:
step A, obtaining a first sample input set and a first sample output set, wherein the first sample input set comprises: a plurality of first sample inputs, each first sample input comprising a sample tissue image, the set of first sample outputs comprising a first sample output corresponding to each first sample input, each first sample output comprising a true cleanliness of the corresponding sample tissue image.
And B, taking the first sample input set as the input of the recognition model, and taking the first sample output set as the output of the recognition model so as to train the recognition model.
The loss of the recognition model is determined according to the cleanliness loss and the rounding loss, wherein the cleanliness loss is determined according to the output of the cleanliness submodel and the first sample output set, and the rounding loss is determined according to the output of the rounding submodel and the first sample output set.
For example, when training the recognition model, a first sample input set and a first sample output set for training the recognition model need to be obtained first. The first sample input set includes a plurality of first sample inputs, each of which may be a sample tissue image, such as a tissue image previously acquired during an endoscopic examination. The first sample output set includes a first sample output corresponding to each first sample input, and each first sample output includes the true cleanliness of the corresponding sample tissue image. The true cleanliness indicates the degree of cleanliness of the sample tissue image. Taking the endoscope being an enteroscope and the sample tissue image being a sample intestinal tract image as an example, the true cleanliness can be divided into four types according to BBPS: 0 points corresponds to "the entire intestinal mucosa cannot be observed due to solid and liquid feces that cannot be removed", 1 point corresponds to "part of the intestinal mucosa cannot be observed due to stains, turbid liquid, and residual feces", 2 points corresponds to "the intestinal mucosa is observed well, but a small amount of stains, turbid liquid, and feces remains", and 3 points corresponds to "the intestinal mucosa is observed well, with basically no stains, turbid liquid, or feces remaining".
When the recognition model is trained, the first sample input set can be used as the input of the recognition model, and the first sample output set can then be used as the output of the recognition model to train the recognition model, so that when the first sample input set is input, the output of the recognition model can match the first sample output set. For example, a loss function of the recognition model may be determined from the output of the recognition model and the first sample output set, and the parameters of the neurons in the recognition model, such as the weights (English: Weight) and biases (English: Bias) of the neurons, may be modified using a back-propagation algorithm with the goal of reducing the loss function. The above steps are repeated until the loss function satisfies a preset condition, for example, the loss function is smaller than a preset loss threshold, so as to achieve the purpose of training the recognition model.
Specifically, the loss of the recognition model can be divided into two parts: the cleanliness loss and the rounding loss. The cleanliness loss is determined according to the output of the cleanliness submodel and the first sample output set, and the rounding loss is determined according to the output of the rounding submodel and the first sample output set.
The loss of the recognition model can be determined by equation three:
L = L1 + γ·L2    (Formula Three)

Wherein, L represents the loss of the recognition model, L1 represents the rounding loss, L2 represents the cleanliness loss, and γ represents the weighting parameter of the cleanliness loss, which may be set to 0.5, for example.
Further, the loss of cleanliness can be determined by equation four:
L2 = |f − y2|² = l²    (Formula Four)

Wherein, L2 represents the cleanliness loss, f represents the output of the cleanliness submodel, y2 represents the true cleanliness included in the first sample output, and l = f − y2.
The rounding loss can be determined by formula five, a cross-entropy loss function (English: Cross Entropy Loss):

L1 = −Σ_{j=1}^{M} y1,j·log(gj)    (Formula Five)

Wherein, L1 represents the rounding loss, M represents the number of rounding modes, y1,j indicates whether the rounding mode corresponding to the true cleanliness included in the first sample output is the j-th rounding mode, and gj represents the value of the j-th dimension in the rounding vector output by the rounding submodel.
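Formulas three to five can be combined into a single loss sketch (the values are illustrative, and applying Softmax to the rounding vector inside the loss is an assumption about where the probabilities come from):

```python
import math

def recognition_loss(f, y2, g, y1, gamma=0.5):
    # Formula four: squared error of the cleanliness output against the label.
    l2 = (f - y2) ** 2
    # Formula five: cross entropy between the Softmax of the rounding
    # vector g and the one-hot rounding label y1.
    m = max(g)
    exps = [math.exp(v - m) for v in g]
    total = sum(exps)
    probs = [e / total for e in exps]
    l1 = -sum(y * math.log(p) for y, p in zip(y1, probs))
    # Formula three: total loss with cleanliness weighting gamma.
    return l1 + gamma * l2

# Illustrative values: cleanliness output 2.3 vs. label 2.0, and a rounding
# vector with no preference between the two modes.
loss = recognition_loss(2.3, 2.0, g=[0.0, 0.0], y1=[1.0, 0.0])
```

With a uniform rounding vector the cross-entropy term equals log 2, so the total here is log 2 plus half the squared cleanliness error.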
Further, the initial learning rate for training the recognition model may be set to 5e-2, the batch size (Batch size) to 128, the optimizer to SGD, the number of epochs (Epoch) to 60, the learning-rate decay (Decay) to 0.1, and the size of the sample tissue images to 224 × 224.
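These hyperparameters could be collected into a configuration mapping, for example (the key names are illustrative, not from the patent):

```python
# Hypothetical training configuration mirroring the values listed above.
train_config = {
    "initial_learning_rate": 5e-2,
    "batch_size": 128,
    "optimizer": "SGD",
    "epochs": 60,
    "decay": 0.1,
    "input_size": (224, 224),
}
```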
In one implementation, each sample tissue image includes a plurality of cleanliness labels, the true cleanliness of the sample tissue image is determined according to the cleanliness labels of the sample tissue image, and the consistency of the sample tissue image is determined according to the number of cleanliness labels, among the cleanliness labels of the sample tissue image, that match the true cleanliness. The first sample output also includes the consistency of the corresponding sample tissue image.
Accordingly, the cleanliness loss is determined from the output of the cleanliness submodel and the true cleanliness and consistency included in each first sample output.
For example, taking the tissue image as an intestinal tract image (i.e., the endoscope is an enteroscope), it can be seen from the BBPS scoring standard that the cleanliness is actually determined according to the area ratio of the intestinal mucosa to stains, turbid liquid, and residual feces, so professionals are easily influenced by subjectivity when labeling sample tissue images. Thus, when training the recognition model, each first sample input included in the first sample input set may be labeled by a plurality of professionals (e.g., physicians with more than 5 years of practice experience), and after labeling, each sample tissue image includes a plurality of cleanliness labels. The true cleanliness and consistency of each sample tissue image can then be determined from the cleanliness labels of that sample tissue image.
Specifically, the true cleanliness may be determined based on the number of identical cleanliness labels among the plurality of cleanliness labels. For example, if a sample tissue image includes K cleanliness labels and more than K/2 of them are 2 points, then the true cleanliness of the sample tissue image can be determined to be 2 points. As another example, if a sample tissue image includes K cleanliness labels and no more than K/2 of them are identical, the sample tissue image can be deleted from the first sample input set, i.e., discarded. Therefore, the influence of subjectivity on the true cleanliness can be reduced, and the stability of the recognition model training is ensured.
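The majority-vote rule above can be sketched as follows (the function name is illustrative):

```python
from collections import Counter

def true_cleanliness_and_consistency(labels):
    # Majority vote over the K cleanliness labels of one sample tissue image.
    # Returns (true cleanliness, consistency D) when more than K/2 labels
    # agree, and None when the sample image should be discarded.
    k = len(labels)
    label, count = Counter(labels).most_common(1)[0]
    if count <= k / 2:
        return None
    return label, count

# 5 professionals labelled the same sample image (illustrative values):
result = true_cleanliness_and_consistency([3, 3, 3, 2, 3])
```

Here four of the five labels are 3 points, so the true cleanliness is 3 and the consistency D is 4; a label set with no strict majority returns None and the image is dropped from the first sample input set.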
The consistency can be determined according to the number of cleanliness labels, among the plurality of cleanliness labels of the sample tissue image, that match the true cleanliness. For example, if a sample tissue image includes K cleanliness labels, of which D (D > K/2) cleanliness labels are 3 points, then the true cleanliness of the sample tissue image is 3 points and its consistency is D. The consistency indicates how easily a sample tissue image can be distinguished: a higher consistency indicates the sample tissue image is easier to distinguish, and a lower consistency indicates it is harder to distinguish. Further, in the first sample output set, each first sample output may include, in addition to the true cleanliness of the corresponding sample tissue image, the consistency of the corresponding sample tissue image.
Accordingly, the cleanliness loss can be determined from the output of the cleanliness submodel and the true cleanliness and consistency included in each first sample output. Specifically, the cleanliness loss can be determined by formula six:

L2 = (D/K)·l², if |l| ≤ t;    L2 = (D/K)·(α·|l| + β), if |l| > t    (Formula Six)

Wherein, L2 represents the cleanliness loss, t represents a preset threshold, l = f − y2, α represents a preset control coefficient, which may be set to 0.1, for example, β represents a preset bias coefficient used to ensure that the two branches give the same result when |l| = t, which may be set to 0.2, for example, D represents the consistency included in the first sample output, and K represents the number of cleanliness labels of each sample tissue image. Taking 5 cleanliness labels per sample tissue image as an example, the possible consistency values are 3, 4, and 5.
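One possible reading of the consistency-weighted cleanliness loss can be sketched as follows; the piecewise form scaled by D/K is a hypothetical reconstruction consistent with the symbols described above, not the patent's exact formula:

```python
def consistency_weighted_loss(f, y2, D, K, t=0.5, alpha=0.1, beta=0.2):
    # Hypothetical reconstruction of formula six: a squared-error branch
    # below the threshold t and a slower-growing linear branch above it,
    # both scaled by the consistency ratio D/K. With the default values,
    # beta makes the two branches agree at |l| = t (t**2 == alpha*t + beta).
    l = abs(f - y2)
    base = l ** 2 if l <= t else alpha * l + beta
    return (D / K) * base
```

Weighting by D/K means that samples whose labels the professionals agreed on more strongly contribute more to the loss, which is one way the consistency could temper label subjectivity during training.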
The consistency of the sample tissue images is introduced into the loss of the cleanliness, so that the influence of subjectivity on the training of the recognition model can be reduced, and the stability and the accuracy of the recognition model are improved.
FIG. 5 is a flow chart illustrating another method for determining cleanliness of a tissue cavity according to an exemplary embodiment, as shown in FIG. 5, before step 102, the method may further comprise:
and 104, classifying the tissue images by using a pre-trained classification model to determine the target type of the tissue images.
Accordingly, the implementation manner of step 102 may be:
and if the target type indicates that the quality of the tissue image meets the preset condition, determining the initial cleanliness and the target rounding mode according to the tissue image and the identification model.
For example, a tissue image acquired by the endoscope may be input into a pre-trained classification model so that the classification model classifies the tissue image, and the output of the classification model is the target type of the tissue image. The target types may include: a first type, used to indicate that the quality of the tissue image satisfies a preset condition, representing that the quality of the tissue image is high, and a second type, used to indicate that the quality of the tissue image does not satisfy the preset condition, representing that the quality of the tissue image is poor. The classification model is used to identify the type of an input image and can be trained on a large number of pre-collected training images and the type labels corresponding to the training images. The classification model may be, for example, a CNN or an LSTM, or the Encoder in a Transformer (e.g., a Vision Transformer), and the disclosure is not limited thereto. When the endoscope is an enteroscope and the tissue image is an intestinal tract image, the preset condition may include: the enteroscope is not blocked when the intestinal tract image is collected, the distance between the enteroscope and the intestinal wall is larger than a preset distance threshold when the intestinal tract image is collected, the exposure of the intestinal tract image is smaller than a preset exposure threshold, the blurriness of the intestinal tract image is smaller than a preset blurriness threshold, the intestinal tract in the intestinal tract image is not adhered, and so on. For example, if the enteroscope is blocked by sewage, the enteroscope is too close to the intestinal wall, the intestinal tract image is overexposed, the intestinal tract image is too blurred, or the intestinal tract is adhered, the quality of the intestinal tract image does not satisfy the preset condition.
Accordingly, when the target type indicates that the quality of the tissue image meets the preset condition, the tissue image can be input into the recognition model, so that the recognition model determines the initial cleanliness and the target rounding mode. That is, when it is determined that the quality of the tissue image is high, the tissue image is identified again. In case the target type indicates that the quality of the tissue image does not meet the preset condition, the tissue image may be discarded directly. Further, the images acquired by the endoscope in the next acquisition cycle can be selected, and the above steps are repeatedly executed to determine the cleanliness of the tissue cavity.
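The gated flow described above, classify first and only then recognize, can be sketched as follows; `classify` and `recognize` are hypothetical stand-ins for the trained classification and recognition models:

```python
import math

def determine_cleanliness(tissue_image, classify, recognize):
    # Only images whose target type indicates sufficient quality reach the
    # recognition model; poor-quality frames are discarded directly.
    if classify(tissue_image) != "quality_ok":
        return None  # discard and wait for the next acquisition cycle
    initial, mode = recognize(tissue_image)
    # Round the floating-point initial cleanliness with the target mode.
    return math.ceil(initial) if mode == "up" else math.floor(initial)

# Stub models for illustration only:
ok = determine_cleanliness("frame", lambda x: "quality_ok", lambda x: (1.8, "up"))
bad = determine_cleanliness("frame", lambda x: "low_quality", lambda x: (1.8, "up"))
```

The quality gate keeps occluded, overexposed, or blurred frames from skewing the cleanliness estimate, at the cost of waiting for the next acquisition cycle when a frame is rejected.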
Fig. 6 is a flowchart illustrating another method for determining cleanliness of a tissue cavity according to an exemplary embodiment, and as shown in fig. 6, the implementation of step 104 may include:
And step 1044, inputting the global coded vector and the plurality of local coded vectors into a classification layer to obtain a target type output by the classification layer.
For example, the classification model may include an encoder and a classification layer, and may further include a linear projection layer. The encoder may be the Encoder in a Vision Transformer, the classification layer may be an MLP Head (Multi-Layer Perceptron Head), and the linear projection layer can be understood as a fully connected layer.
The tissue image may first be preprocessed to augment the data it contains; the preprocessing may include random affine transformation, random adjustment of brightness, contrast, saturation, and chromaticity, size transformation, and the like, and the resulting preprocessed tissue image may be an image of a predetermined size (e.g., 224 × 224). The preprocessed tissue image may then be divided into a plurality of sub-images of equal size (each of which may be denoted as a patch); for example, if the preprocessed tissue image is 224 × 224 and each sub-image is 16 × 16, 196 sub-images are obtained.
Then, each sub-image may be first flattened by using the linear projection layer to obtain an image vector (which may be denoted as patch embedding) corresponding to the sub-image, and the image vector may represent the sub-image. Further, a position vector (which may be represented as position embedding) indicating a position of the sub-image in the pre-processed tissue image may also be generated, wherein a size of the position embedding is the same as a size of the patch embedding. It should be noted that the position embedding can be randomly generated, and the encoder can learn the representation of the position of the corresponding sub-image in the tissue image. Thereafter, a token (which may be denoted as token) corresponding to each sub-image may be generated according to the image vector and the position vector of the sub-image. Specifically, the token corresponding to each sub-image may be obtained by splicing an image vector and a position vector of the sub-image.
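The patch splitting and token construction described above can be sketched with plain lists (the function names are illustrative, and the vectors here stand in for the learned embeddings):

```python
def count_patches(height, width, patch_size):
    # A 224 x 224 preprocessed tissue image with 16 x 16 patches
    # yields 196 equally sized sub-images.
    assert height % patch_size == 0 and width % patch_size == 0
    return (height // patch_size) * (width // patch_size)

def make_token(patch_embedding, position_embedding):
    # A token is the splice (concatenation) of a patch embedding and a
    # position embedding of the same size.
    assert len(patch_embedding) == len(position_embedding)
    return patch_embedding + position_embedding

n = count_patches(224, 224, 16)
token = make_token([0.1, 0.2], [0.3, 0.4])
```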
Further, after the token corresponding to each sub-image is obtained, a token corresponding to the tissue image may be generated. For example, an image vector and a position vector may be randomly generated and spliced to serve as the token corresponding to the tissue image.
Then, the token corresponding to each sub-image and the token corresponding to the tissue image can be input into the encoder; the encoder can generate a local encoding vector corresponding to each sub-image according to the token corresponding to that sub-image, and at the same time can generate a global encoding vector corresponding to the tissue image according to the tokens corresponding to all the sub-images. A local encoding vector can be understood as a vector, learned by the encoder, that can represent the corresponding sub-image, and the global encoding vector can be understood as a vector, learned by the encoder, that can represent the whole tissue image.
Finally, the global encoding vector and the plurality of local encoding vectors may be input to the classification layer, and the output of the classification layer is the target type. Specifically, the global encoding vector and the plurality of local encoding vectors may be spliced to obtain a comprehensive encoding vector, which is input to the classification layer; the classification layer may determine the matching probabilities between the tissue image and the plurality of types according to the comprehensive encoding vector, and finally the type with the largest matching probability is taken as the target type. Because the input of the classification layer includes the global encoding vector and each local encoding vector, the characteristics of the whole tissue image and of each sub-image are integrated, that is, both global and local information are considered, which can effectively improve the classification accuracy of the classification model.
FIG. 7 is a flowchart illustrating a method for training a classification model according to an exemplary embodiment, where the classification model is trained by the following method, as shown in FIG. 7:
step C, obtaining a second sample input set and a second sample output set, the second sample input set comprising: a plurality of second sample inputs, each second sample input comprising a sample tissue image, the set of second sample outputs comprising a second sample output corresponding to each second sample input, each second sample output comprising a true type of the corresponding sample tissue image.
And D, taking the second sample input set as the input of the classification model, and taking the second sample output set as the output of the classification model so as to train the classification model.
For example, when training the classification model, a second sample input set and a second sample output set for training the classification model need to be obtained first. The second sample input set comprises a plurality of second sample inputs, each of which may be a sample tissue image, which may be, for example, a tissue image previously acquired when performing an endoscopy. The second sample output set includes a second sample output corresponding to each second sample input, each second sample output includes a true type of the corresponding sample tissue image, and the true type may include: the first type is used for indicating that the quality of the tissue image meets a preset condition, and the second type is used for indicating that the quality of the tissue image does not meet the preset condition.
When the classification model is trained, the second sample input set may be used as the input of the classification model, and the second sample output set may then be used as the output of the classification model to train the classification model, so that when the second sample input set is input, the output of the classification model can match the second sample output set. For example, taking the difference (or mean square error) between the output of the classification model and the second sample output set as the loss function of the classification model, the parameters of the neurons in the classification model, such as the weights and biases of the neurons, may be modified using a back-propagation algorithm with the goal of reducing the loss function. The above steps are repeated until the loss function satisfies a preset condition, for example, the loss function is smaller than a preset loss threshold, so as to achieve the purpose of training the classification model. Specifically, the loss function of the classification model can be as shown in formula seven (i.e., a cross-entropy loss function):

L_class = −Σ_{q=1}^{F} s_q·log(ŝ_q)    (Formula Seven)

Wherein, L_class represents the loss function of the classification model, ŝ_q represents the output of the classification model (which can be understood as the matching probability of the sample tissue image and the q-th type), s_q represents the matching probability of the true type of the sample tissue image and the q-th type, and F represents the number of true types. Taking the true types including a first type and a second type as an example, where the first type indicates that the quality of the tissue image satisfies the preset condition and the second type indicates that the quality of the tissue image does not satisfy the preset condition, F = 2.
In summary, the present disclosure first obtains a tissue image acquired by an endoscope, and then determines an initial cleanliness and a target rounding mode of a floating point type according to the tissue image and a pre-trained recognition model. And finally, rounding the initial cleanliness according to a target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is integer. According to the method and the device, the floating point type initial cleanliness and the target rounding mode suitable for the tissue image are determined through the recognition model, so that the initial cleanliness is rounded by the target rounding mode to obtain the cleanliness of the tissue image, and the accuracy of the cleanliness can be improved.
Fig. 8 is a block diagram illustrating an apparatus for determining cleanliness of a tissue cavity according to an exemplary embodiment, and as shown in fig. 8, the apparatus 200 may include:
an acquisition module 201, configured to acquire a tissue image acquired by an endoscope.
The identification module 202 is configured to determine an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained identification model, where the initial cleanliness is a floating point type.
And the rounding module 203 is used for rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is integer.
Fig. 9 is a block diagram illustrating another apparatus for determining cleanliness of a tissue cavity according to an exemplary embodiment. As shown in fig. 9, the identification model includes: a feature extraction submodel, a cleanliness submodel, and a rounding submodel.
Accordingly, the identification module 202 may include:
the feature extraction sub-module 2021 is configured to input the tissue image into the feature extraction sub-model to obtain an image feature output by the feature extraction sub-model and used for representing the tissue image.
And the processing submodule 2022 is configured to input the image features into the cleanliness submodel and the rounding submodel, respectively, to obtain a cleanliness vector output by the cleanliness submodel and a rounding vector output by the rounding submodel.
The determining submodule 2023 is configured to determine an initial cleanliness according to the cleanliness vector, and determine a target rounding mode according to the rounding vector.
In one implementation, the determining submodule 2023 may be configured to perform the following steps:
step 1) determining the matching probability of the tissue image and various cleanliness types according to the cleanliness vector.
And 2) determining the initial cleanliness according to the weight corresponding to each cleanliness type and the matching probability of the tissue image and the cleanliness types.
And 3) determining the matching probability of the tissue image and a plurality of rounding modes according to the rounding vector.
And 4) determining a target rounding mode in the multiple rounding modes according to the matching probability of the tissue image and the multiple rounding modes.
In another implementation, the recognition model is trained by:
step A, obtaining a first sample input set and a first sample output set, wherein the first sample input set comprises: a plurality of first sample inputs, each first sample input comprising a sample tissue image, the set of first sample outputs comprising a first sample output corresponding to each first sample input, each first sample output comprising a true cleanliness of the corresponding sample tissue image.
And B, taking the first sample input set as the input of the recognition model, and taking the first sample output set as the output of the recognition model so as to train the recognition model.
The loss of the recognition model is determined according to the cleanliness loss and the rounding loss, wherein the cleanliness loss is determined according to the output of the cleanliness submodel and the first sample output set, and the rounding loss is determined according to the output of the rounding submodel and the first sample output set.
In yet another implementation, each sample tissue image includes a plurality of cleanliness labels, the true cleanliness of the sample tissue image is determined according to the cleanliness labels of the sample tissue image, and the consistency of the sample tissue image is determined according to the number of cleanliness labels, among the cleanliness labels of the sample tissue image, that match the true cleanliness. The first sample output also includes the consistency of the corresponding sample tissue image.
Accordingly, the cleanliness loss is determined from the output of the cleanliness submodel and the true cleanliness and consistency included in each first sample output.
Fig. 10 is a block diagram illustrating another apparatus for determining cleanliness of a tissue cavity according to an exemplary embodiment, and as shown in fig. 10, the apparatus 200 further includes:
the classification module 204 is configured to classify the tissue image by using a pre-trained classification model before determining the initial cleanliness and the target rounding mode according to the tissue image and the pre-trained recognition model, so as to determine the target type of the tissue image.
Accordingly, the identification module 202 may be configured to determine the initial cleanliness and the target rounding mode according to the tissue image and the identification model if the target type indicates that the quality of the tissue image satisfies the preset condition.
Fig. 11 is a block diagram illustrating another apparatus for determining cleanliness of a tissue cavity according to an exemplary embodiment, and as shown in fig. 11, the classification module 204 may include:
the preprocessing submodule 2041 is configured to preprocess the tissue image, and divide the preprocessed tissue image into a plurality of sub-images with equal size.
The token determining submodule 2042 is configured to determine a token corresponding to each sub-image according to the image vector corresponding to each sub-image and the position vector corresponding to the sub-image, where the position vector is used to indicate a position of the sub-image in the preprocessed tissue image.
The encoding submodule 2043 is configured to input the token corresponding to each sub-image and the token corresponding to the tissue image into the encoder, so as to obtain a local encoding vector corresponding to each sub-image and a global encoding vector corresponding to the tissue image.
The classification submodule 2044 is configured to input the global encoding vector and the plurality of local encoding vectors into the classification layer, so as to obtain a target type output by the classification layer.
In one implementation, the classification model is trained by:
step C, obtaining a second sample input set and a second sample output set, the second sample input set comprising: a plurality of second sample inputs, each second sample input comprising a sample tissue image, the set of second sample outputs comprising a second sample output corresponding to each second sample input, each second sample output comprising a true type of the corresponding sample tissue image.
And D, taking the second sample input set as the input of the classification model, and taking the second sample output set as the output of the classification model so as to train the classification model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, the present disclosure first obtains a tissue image acquired by an endoscope, and then determines an initial cleanliness and a target rounding mode of a floating point type according to the tissue image and a pre-trained recognition model. And finally, rounding the initial cleanliness according to a target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is integer. According to the method and the device, the floating point type initial cleanliness and the target rounding mode suitable for the tissue image are determined through the recognition model, so that the initial cleanliness is rounded by the target rounding mode to obtain the cleanliness of the tissue image, and the accuracy of the cleanliness can be improved.
Referring now to fig. 12, a schematic structural diagram of an electronic device (e.g., an execution subject, which may be a terminal device or a server in the above embodiments) 300 suitable for implementing an embodiment of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 12 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the terminal devices and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a tissue image acquired by an endoscope; determine an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is of a floating-point type; and round the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is of an integer type.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself; for example, the acquisition module may also be described as a "module for acquiring a tissue image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a method of determining cleanliness of a tissue cavity, in accordance with one or more embodiments of the present disclosure, comprising: acquiring a tissue image acquired by an endoscope; determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is of a floating-point type; and rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is of an integer type.
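The two-stage flow of Example 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the recognition model is stubbed out with a callable, and the set of rounding modes (`floor`, `ceil`, `round`) is an assumption, since the disclosure does not enumerate them.

```python
import math

def apply_rounding(value: float, mode: str) -> int:
    """Round a floating-point cleanliness score using the chosen mode.
    The mode names are hypothetical; the disclosure does not list them."""
    if mode == "floor":
        return math.floor(value)
    if mode == "ceil":
        return math.ceil(value)
    return round(value)  # Python's round() uses round-half-to-even

def determine_cleanliness(tissue_image, recognition_model) -> int:
    """Run the recognition model, then round its floating-point output."""
    initial_cleanliness, target_mode = recognition_model(tissue_image)
    return apply_rounding(initial_cleanliness, target_mode)

# Stub callable standing in for the pre-trained recognition model.
stub_model = lambda image: (2.3, "ceil")
print(determine_cleanliness(None, stub_model))  # 3
```

The point of predicting the rounding mode per image, rather than always rounding half up, is that the model can nudge borderline scores toward the safer integer grade.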
Example 2 provides the method of example 1, in accordance with one or more embodiments of the present disclosure, the recognition model comprising: a feature extraction submodel, a cleanliness submodel, and a rounding submodel; determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model comprises: inputting the tissue image into the feature extraction submodel to obtain image features which are output by the feature extraction submodel and used for representing the tissue image; respectively inputting the image features into the cleanliness submodel and the rounding submodel to obtain a cleanliness vector output by the cleanliness submodel and a rounding vector output by the rounding submodel; and determining the initial cleanliness according to the cleanliness vector, and determining the target rounding mode according to the rounding vector.
Example 3 provides the method of example 2, in accordance with one or more embodiments of the present disclosure, wherein determining the initial cleanliness according to the cleanliness vector comprises: determining the matching probability of the tissue image with a plurality of cleanliness types according to the cleanliness vector; and determining the initial cleanliness according to the weight corresponding to each cleanliness type and the matching probability of the tissue image with the plurality of cleanliness types. Determining the target rounding mode according to the rounding vector comprises: determining the matching probability of the tissue image with a plurality of rounding modes according to the rounding vector; and determining the target rounding mode among the plurality of rounding modes according to the matching probability of the tissue image with the plurality of rounding modes.
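Examples 2 and 3 together describe a two-headed readout: the cleanliness vector yields per-type matching probabilities whose weighted sum gives the floating-point initial cleanliness, while the rounding vector picks the most probable rounding mode. A hedged sketch follows; the softmax normalization, the specific type weights, and the three mode names are illustrative assumptions not stated in the disclosure.

```python
import math

def softmax(logits):
    """Turn raw scores into matching probabilities (assumed normalization)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical setup: four cleanliness types weighted 0..3, three rounding modes.
CLEANLINESS_WEIGHTS = [0.0, 1.0, 2.0, 3.0]
ROUNDING_MODES = ["floor", "round", "ceil"]

def initial_cleanliness(cleanliness_vector):
    """Probability-weighted sum over cleanliness types (a float, not a class)."""
    probs = softmax(cleanliness_vector)
    return sum(w * p for w, p in zip(CLEANLINESS_WEIGHTS, probs))

def target_rounding_mode(rounding_vector):
    """The rounding mode with the highest matching probability."""
    probs = softmax(rounding_vector)
    return ROUNDING_MODES[probs.index(max(probs))]

print(round(initial_cleanliness([0.0, 0.0, 0.0, 0.0]), 2))  # 1.5 (uniform probs)
print(target_rounding_mode([0.1, 2.0, 0.3]))                # round
```

The weighted sum is what makes the initial cleanliness a floating-point value between the discrete grades, which is why a separate rounding decision is needed at all.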
Example 4 provides the method of example 2, the recognition model being trained in the following manner, in accordance with one or more embodiments of the present disclosure: obtaining a first sample input set and a first sample output set, the first sample input set comprising a plurality of first sample inputs, each of the first sample inputs comprising a sample tissue image, and the first sample output set comprising a first sample output corresponding to each of the first sample inputs, each of the first sample outputs comprising a true cleanliness of the corresponding sample tissue image; using the first sample input set as the input of the recognition model and the first sample output set as the output of the recognition model to train the recognition model; and determining the loss of the recognition model according to the loss of cleanliness and the rounding loss, wherein the loss of cleanliness is determined according to the output of the cleanliness submodel and the first sample output set, and the rounding loss is determined according to the output of the rounding submodel and the first sample output set.
Example 5 provides the method of example 4, in accordance with one or more embodiments of the present disclosure, wherein each of the sample tissue images includes a plurality of cleanliness labels, the true cleanliness of the sample tissue image is determined from the cleanliness labels of the sample tissue image, and the degree of conformity of the sample tissue image is determined from the number of cleanliness labels, among the plurality of cleanliness labels of the sample tissue image, that match the true cleanliness; the first sample output further comprises the degree of conformity of the corresponding sample tissue image; and the loss of cleanliness is determined from the output of the cleanliness submodel and the true cleanliness and degree of conformity included in each of the first sample outputs.
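The combined training objective of Examples 4 and 5 might be sketched as below. The disclosure does not specify the loss functions or how conformity enters the formula; cross-entropy and a simple multiplicative conformity weight are assumptions for illustration only.

```python
import math

def cross_entropy(probs, target_index):
    """Negative log-likelihood of the true class (assumed loss form)."""
    return -math.log(probs[target_index])

def recognition_loss(cleanliness_probs, true_cleanliness_idx, conformity,
                     rounding_probs, true_rounding_idx, alpha=1.0):
    """Total loss = conformity-weighted cleanliness loss + rounding loss.
    The multiplicative conformity weight and the alpha balance factor are
    illustrative assumptions, not taken from the disclosure."""
    cleanliness_loss = conformity * cross_entropy(cleanliness_probs, true_cleanliness_idx)
    rounding_loss = cross_entropy(rounding_probs, true_rounding_idx)
    return cleanliness_loss + alpha * rounding_loss

# A sample on which annotators fully agree (conformity = 1.0):
loss = recognition_loss([0.7, 0.2, 0.1], 0, 1.0, [0.6, 0.4], 0)
```

Under this weighting, images whose annotators disagree (low conformity) contribute less to the cleanliness head's gradient, which is one plausible reading of why the conformity degree is stored alongside the true cleanliness.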
Example 6 provides the method of example 1, in accordance with one or more embodiments of the present disclosure, further comprising, prior to determining the initial cleanliness and the target rounding mode according to the tissue image and the pre-trained recognition model: classifying the tissue image by using a pre-trained classification model to determine a target type of the tissue image; wherein determining the initial cleanliness and the target rounding mode according to the tissue image and the pre-trained recognition model comprises: if the target type indicates that the quality of the tissue image meets a preset condition, determining the initial cleanliness and the target rounding mode according to the tissue image and the recognition model.
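The quality gate of Example 6 amounts to a short-circuit before the recognition model runs. A sketch, with hypothetical callables standing in for the two pre-trained models and an assumed "valid" label for the preset quality condition:

```python
def assess_cleanliness(tissue_image, classify, recognize):
    """Run cleanliness recognition only when image quality is adequate.
    `classify` and `recognize` are hypothetical callables standing in for
    the pre-trained classification and recognition models."""
    target_type = classify(tissue_image)
    if target_type != "valid":   # assumed label for the preset quality condition
        return None              # skip low-quality frames entirely
    initial_cleanliness, target_mode = recognize(tissue_image)
    return initial_cleanliness, target_mode

# A blurred frame is rejected before the recognition model ever runs:
print(assess_cleanliness("frame", lambda img: "blurred", lambda img: (2.3, "ceil")))  # None
```

Gating this way keeps unscoreable frames (blur, bubbles, occlusion) from dragging the per-image cleanliness estimates down.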
Example 7 provides the method of example 6, in accordance with one or more embodiments of the present disclosure, the classification model comprising an encoder and a classification layer, wherein classifying the tissue image using a pre-trained classification model to determine a target type of the tissue image comprises: preprocessing the tissue image, and dividing the preprocessed tissue image into a plurality of sub-images of equal size; determining tokens corresponding to the sub-images according to the image vectors corresponding to the sub-images and the position vectors corresponding to the sub-images, wherein the position vectors are used for indicating the positions of the sub-images in the preprocessed tissue image; inputting the token corresponding to each sub-image and the token corresponding to the tissue image into the encoder to obtain a local encoding vector corresponding to each sub-image and a global encoding vector corresponding to the tissue image; and inputting the global encoding vector and the plurality of local encoding vectors into the classification layer to obtain the target type output by the classification layer.
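The classification model of Example 7 follows a Vision-Transformer-style recipe: split the preprocessed image into equal-size patches, form a token for each patch from its image vector plus a position vector, and feed the patch tokens along with a global image token through the encoder. The patch-splitting step can be illustrated as follows; the toy 4x4 "image" of integers stands in for real pixel data, and the helper name is ours.

```python
def split_into_patches(image, patch_size):
    """Split an H x W image (list of rows) into equal-size square patches,
    in row-major order. Assumes H and W are divisible by patch_size."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            patch = [row[left:left + patch_size] for row in image[top:top + patch_size]]
            patches.append(patch)
    return patches

# Toy 4x4 "image": pixel value = row * 4 + column.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = split_into_patches(image, 2)
print(len(patches))   # 4
print(patches[0])     # [[0, 1], [4, 5]]
```

Each patch would then be flattened into its image vector and combined with a learned position vector to form a token; the encoder returns one local encoding vector per patch and one global encoding vector for the whole image, and the classification layer reads all of them to produce the target type.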
Example 8 provides the method of example 7, the classification model being trained in the following manner, in accordance with one or more embodiments of the present disclosure: obtaining a second sample input set and a second sample output set, the second sample input set comprising a plurality of second sample inputs, each of the second sample inputs comprising a sample tissue image, and the second sample output set comprising a second sample output corresponding to each of the second sample inputs, each of the second sample outputs comprising a true type of the corresponding sample tissue image; and taking the second sample input set as the input of the classification model and the second sample output set as the output of the classification model, so as to train the classification model.
Example 9 provides an apparatus for determining cleanliness of a tissue cavity, according to one or more embodiments of the present disclosure, comprising: an acquisition module for acquiring a tissue image acquired by an endoscope; an identification module for determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is of a floating-point type; and a rounding module for rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, the cleanliness being of an integer type.
Example 10 provides a computer-readable medium having stored thereon a computer program that, when executed by a processing device, implements the steps of the methods of examples 1-8, in accordance with one or more embodiments of the present disclosure.
Example 11 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the methods of examples 1 to 8.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure; for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Claims (10)
1. A method of determining the cleanliness of a tissue cavity, the method comprising:
acquiring a tissue image acquired by an endoscope;
determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is of a floating-point type;
rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, wherein the cleanliness is of an integer type;
the recognition model includes: a feature extraction submodel, a cleanliness submodel, and a rounding submodel; and determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model comprises:
inputting the tissue image into the feature extraction submodel to obtain image features which are output by the feature extraction submodel and used for representing the tissue image;
respectively inputting the image features into the cleanliness submodel and the rounding submodel to obtain a cleanliness vector output by the cleanliness submodel and a rounding vector output by the rounding submodel;
and determining the initial cleanliness according to the cleanliness vector, and determining the target rounding mode according to the rounding vector.
2. The method of claim 1, wherein said determining the initial cleanliness from the cleanliness vector comprises:
determining the matching probability of the tissue image and a plurality of cleanliness types according to the cleanliness vector;
determining the initial cleanliness according to the weight corresponding to each cleanliness type and the matching probability of the tissue image and the cleanliness types;
the determining the target rounding mode according to the rounding vector comprises:
determining the matching probability of the tissue image and a plurality of rounding modes according to the rounding vector;
and determining the target rounding mode in the plurality of rounding modes according to the matching probability of the tissue image and the plurality of rounding modes.
3. The method of claim 1, wherein the recognition model is trained by:
obtaining a first sample input set and a first sample output set, the first sample input set comprising: a plurality of first sample inputs, each of said first sample inputs comprising a sample tissue image, said first sample output set comprising a first sample output corresponding to each of said first sample inputs, each of said first sample outputs comprising a true cleanliness of the corresponding sample tissue image;
using the first sample input set as the input of the recognition model and the first sample output set as the output of the recognition model to train the recognition model;
and determining the loss of the recognition model according to the loss of cleanliness and the rounding loss, wherein the loss of cleanliness is determined according to the output of the cleanliness submodel and the first sample output set, and the rounding loss is determined according to the output of the rounding submodel and the first sample output set.
4. The method of claim 3, wherein each of the sample tissue images includes a plurality of cleanliness labels, the true cleanliness of the sample tissue image is determined from the cleanliness labels of the sample tissue image, and the degree of conformity of the sample tissue image is determined from the number of cleanliness labels, among the plurality of cleanliness labels of the sample tissue image, that match the true cleanliness; the first sample output further comprises the degree of conformity of the corresponding sample tissue image;
the loss of cleanliness is determined from the output of the cleanliness submodel and the true cleanliness and degree of conformity included in each of the first sample outputs.
5. The method of claim 1, wherein prior to said determining an initial cleanliness and a target rounding mode according to said tissue image and a pre-trained recognition model, said method further comprises:
classifying the tissue image by using a pre-trained classification model to determine a target type of the tissue image;
determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, comprising:
and if the target type indicates that the quality of the tissue image meets a preset condition, determining the initial cleanliness and the target rounding mode according to the tissue image and the identification model.
6. The method of claim 5, wherein the classification model comprises an encoder and a classification layer, and wherein the classifying the tissue image by using a pre-trained classification model to determine a target type of the tissue image comprises:
preprocessing the tissue image, and dividing the preprocessed tissue image into a plurality of sub-images with equal sizes;
determining tokens corresponding to the sub-images according to the image vectors corresponding to the sub-images and the position vectors corresponding to the sub-images, wherein the position vectors are used for indicating the positions of the sub-images in the preprocessed tissue images;
inputting the token corresponding to each sub-image and the token corresponding to the tissue image into the encoder to obtain a local encoding vector corresponding to each sub-image and a global encoding vector corresponding to the tissue image;
and inputting the global encoding vector and the plurality of local encoding vectors into the classification layer to obtain the target type output by the classification layer.
7. The method of claim 6, wherein the classification model is trained by:
obtaining a second input set of samples and a second output set of samples, the second input set of samples comprising: a plurality of second sample inputs, each of the second sample inputs comprising a sample tissue image, the set of second sample outputs comprising a second sample output corresponding to each of the second sample inputs, each of the second sample outputs comprising a true type of the corresponding sample tissue image;
and taking the second sample input set as the input of the classification model, and taking the second sample output set as the output of the classification model, so as to train the classification model.
8. An apparatus for determining the cleanliness of a tissue cavity, the apparatus comprising:
the acquisition module is used for acquiring a tissue image acquired by the endoscope;
the identification module is used for determining an initial cleanliness and a target rounding mode according to the tissue image and a pre-trained recognition model, wherein the initial cleanliness is of a floating-point type;
the rounding module is used for rounding the initial cleanliness according to the target rounding mode to obtain the cleanliness of the tissue image, the cleanliness being of an integer type;
the recognition model includes: a feature extraction submodel, a cleanliness submodel, and a rounding submodel; the identification module comprises:
the feature extraction submodule is used for inputting the tissue image into the feature extraction submodel to obtain image features which are output by the feature extraction submodel and used for representing the tissue image;
the processing submodule is used for respectively inputting the image features into the cleanliness submodel and the rounding submodel to obtain a cleanliness vector output by the cleanliness submodel and a rounding vector output by the rounding submodel;
and the determining submodule is used for determining the initial cleanliness according to the cleanliness vector and determining the target rounding mode according to the rounding vector.
9. A computer-readable medium, on which a computer program is stored, characterized in that the program, when executed by a processing device, carries out the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111033610.8A CN113470030B (en) | 2021-09-03 | 2021-09-03 | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment |
PCT/CN2022/114259 WO2023030097A1 (en) | 2021-09-03 | 2022-08-23 | Method and apparatus for determining cleanliness of tissue cavity, and readable medium and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111033610.8A CN113470030B (en) | 2021-09-03 | 2021-09-03 | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113470030A CN113470030A (en) | 2021-10-01 |
CN113470030B true CN113470030B (en) | 2021-11-23 |
Family
ID=77867368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111033610.8A Active CN113470030B (en) | 2021-09-03 | 2021-09-03 | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113470030B (en) |
WO (1) | WO2023030097A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113470030B (en) * | 2021-09-03 | 2021-11-23 | 北京字节跳动网络技术有限公司 | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment |
CN113487609B (en) * | 2021-09-06 | 2021-12-07 | 北京字节跳动网络技术有限公司 | Tissue cavity positioning method and device, readable medium and electronic equipment |
CN113658178B (en) * | 2021-10-14 | 2022-01-25 | 北京字节跳动网络技术有限公司 | Tissue image identification method and device, readable medium and electronic equipment |
CN114332019B (en) * | 2021-12-29 | 2023-07-04 | 小荷医疗器械(海南)有限公司 | Endoscopic image detection assistance system, method, medium, and electronic device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295715A (en) * | 2016-08-22 | 2017-01-04 | 电子科技大学 | A kind of leucorrhea cleannes automatic classification method based on BP neural network classifier |
CN111127426A (en) * | 2019-12-23 | 2020-05-08 | 山东大学齐鲁医院 | Gastric mucosa cleanliness evaluation method and system based on deep learning |
CN111932532A (en) * | 2020-09-21 | 2020-11-13 | 安翰科技(武汉)股份有限公司 | Method for evaluating capsule endoscope without reference image, electronic device, and medium |
CN112686162A (en) * | 2020-12-31 | 2021-04-20 | 北京每日优鲜电子商务有限公司 | Method, device, equipment and storage medium for detecting clean state of warehouse environment |
CN113240042A (en) * | 2021-06-01 | 2021-08-10 | 平安科技(深圳)有限公司 | Image classification preprocessing method, image classification preprocessing device, image classification equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10482313B2 (en) * | 2015-09-30 | 2019-11-19 | Siemens Healthcare Gmbh | Method and system for classification of endoscopic images using deep decision networks |
CN110266914B (en) * | 2019-07-23 | 2021-08-24 | 北京小米移动软件有限公司 | Image shooting method, device and computer readable storage medium |
CN113012162A (en) * | 2021-03-08 | 2021-06-22 | 重庆金山医疗器械有限公司 | Method and device for detecting cleanliness of endoscopy examination area and related equipment |
CN113470030B (en) * | 2021-09-03 | 2021-11-23 | 北京字节跳动网络技术有限公司 | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment |
- 2021-09-03: CN CN202111033610.8A patent/CN113470030B/en, status: active
- 2022-08-23: WO PCT/CN2022/114259 patent/WO2023030097A1/en, status: unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295715A (en) * | 2016-08-22 | 2017-01-04 | 电子科技大学 | A kind of leucorrhea cleannes automatic classification method based on BP neural network classifier |
CN111127426A (en) * | 2019-12-23 | 2020-05-08 | 山东大学齐鲁医院 | Gastric mucosa cleanliness evaluation method and system based on deep learning |
CN111932532A (en) * | 2020-09-21 | 2020-11-13 | 安翰科技(武汉)股份有限公司 | Method for evaluating capsule endoscope without reference image, electronic device, and medium |
CN112686162A (en) * | 2020-12-31 | 2021-04-20 | 北京每日优鲜电子商务有限公司 | Method, device, equipment and storage medium for detecting clean state of warehouse environment |
CN113240042A (en) * | 2021-06-01 | 2021-08-10 | 平安科技(深圳)有限公司 | Image classification preprocessing method, image classification preprocessing device, image classification equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113470030A (en) | 2021-10-01 |
WO2023030097A1 (en) | 2023-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113470030B (en) | Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment | |
US11915415B2 (en) | Image processing method and apparatus, computer-readable medium, and electronic device | |
CN113487609B (en) | Tissue cavity positioning method and device, readable medium and electronic equipment | |
CN113487608B (en) | Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus | |
CN113658178B (en) | Tissue image identification method and device, readable medium and electronic equipment | |
CN110689025A (en) | Image recognition method, device and system, and endoscope image recognition method and device | |
CN113470031B (en) | Polyp classification method, model training method and related device | |
WO2023029741A1 (en) | Tissue cavity locating method and apparatus for endoscope, medium and device | |
CN113469295B (en) | Training method for generating model, polyp recognition method, device, medium, and apparatus | |
CN113496512B (en) | Tissue cavity positioning method, device, medium and equipment for endoscope | |
CN114782760B (en) | Stomach disease picture classification system based on multitask learning | |
WO2023125008A1 (en) | Artificial intelligence-based endoscope image processing method and apparatus, medium and device | |
CN111325709A (en) | Wireless capsule endoscope image detection system and detection method | |
CN114399465A (en) | Benign and malignant ulcer identification method and system | |
CN114863124A (en) | Model training method, polyp detection method, corresponding apparatus, medium, and device | |
CN114429458A (en) | Endoscope image processing method and device, readable medium and electronic equipment | |
CN112884702B (en) | Polyp identification system and method based on endoscope image | |
CN114332080B (en) | Tissue cavity positioning method and device, readable medium and electronic equipment | |
WO2023185497A1 (en) | Tissue image recognition method and apparatus, and readable medium and electronic device | |
CN114937178B (en) | Multi-modality-based image classification method and device, readable medium and electronic equipment | |
CN113470026B (en) | Polyp recognition method, device, medium, and apparatus | |
CN116434287A (en) | Face image detection method and device, electronic equipment and storage medium | |
CN116704593A (en) | Predictive model training method, apparatus, electronic device, and computer-readable medium | |
CN112991266A (en) | Semantic segmentation method and system for small sample medical image | |
CN118658045B (en) | AI-based upper gastrointestinal tract hemorrhage recognition system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
Application publication date: 20211001
Assignee: Xiaohe medical instrument (Hainan) Co.,Ltd.
Assignor: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
Contract record no.: X2021990000694
Denomination of invention: Method, device, readable medium and electronic equipment for determining cleanliness of tissue cavity
License type: Common License
Record date: 20211117