CN118096772A - Anatomical part recognition system, control method, medium, equipment and terminal - Google Patents
- Publication number
- CN118096772A (application CN202410528101.XA)
- Authority
- CN
- China
- Prior art keywords
- anatomical
- endoscopic image
- information
- anatomical part
- dissected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
- G06F16/252—Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention belongs to the technical field of image processing and discloses an anatomical part recognition system, a control method, a medium, a device and a terminal. An endoscopic image is acquired by an endoscopic image diagnosis device for the nose and throat, and intelligent image analysis captures screenshots and extracts the video segments related to anatomical parts. The system performs multi-image analysis and obtains information on different anatomical parts through artificial-intelligence analysis and calculation. A real-time database is established, and the anatomical part information is compared and checked against a historical database: if it is clearly inconsistent with the actual data, the method returns to the anatomical part information calculation step and recalculates; if the calculated information accords with the actual situation, the data are recorded and the anatomical part information is output. If no valid output is produced after several attempts, the case is reported to a manual checking output port for manual review, and if the AI has misjudged, the relevant data are recorded in a learning database. The invention realizes anatomical part recognition through AI technology, thereby improving the recognition accuracy of the AI.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an anatomical part recognition system, a control method, a medium, equipment and a terminal.
Background
Currently, the conventional way to identify an anatomical structure in a medical image is to use a trained anatomical structure detection model. Specifically, such a model can be obtained by training on an anatomical structure labeling standard together with image data in which anatomical structures have been labeled; the training may use a convolutional neural network, or the task may be addressed by constructing a conventional algorithmic B-spline model. However, the recognition results of anatomical structure detection models trained with the prior art are subject to error, which can lead to adverse consequences.
Through the above analysis, the problem and defect of the prior art is that the recognition results of anatomical structure detection models trained by existing techniques are error-prone and can therefore lead to adverse consequences.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an anatomical part recognition system, a control method, a medium, equipment and a terminal.
The invention is realized as a control method of an anatomical part recognition system, comprising the following steps: obtaining an endoscopic image through an endoscopic image diagnosis device for the nose and throat, capturing screenshots through intelligent image analysis, and extracting the video segments related to anatomical parts; performing multi-image analysis and obtaining different anatomical part information through artificial-intelligence analysis and calculation; establishing a real-time database, comparing and checking the anatomical part information against a historical database, and, if the calculated anatomical part information is clearly inconsistent with the actual data, returning to the anatomical part information calculation step and recalculating; if the calculated anatomical part information accords with the actual situation, recording the data and outputting the anatomical part information; and, if no valid output is produced after several attempts, reporting to a manual checking output port for manual review and, if the AI has misjudged, recording the relevant data in the learning database.
Further, the anatomical part recognition system control method includes the following steps:
Step one, the system acquires an endoscopic image with an endoscopic image diagnosis device for the nose and throat through an endoscopic image acquisition module, captures screenshots through intelligent image analysis, and extracts the video segments related to anatomical parts;
Step two, an anatomical part recognition model construction module constructs, trains and optimizes an anatomical part recognition model from the endoscopic images acquired by the endoscopic image diagnosis device for the nose and throat;
Step three, an anatomical part recognition module inputs the extracted video segments related to anatomical parts into the pre-trained anatomical part recognition model and calculates different anatomical part information;
Step four, a comparison checking analysis module establishes a real-time database and subjects the calculated anatomical part information and a historical database to comparison checking analysis to determine the final anatomical part information.
Further, the training process of the anatomical part recognition model in step two includes:
obtaining endoscopic image samples of the nasopharynx and larynx, and marking the position information of the target anatomical site in each nasopharyngeal and laryngeal endoscopic image;
generating the thermodynamic diagram corresponding to each endoscopic image sample of the target anatomical site, where the pixel values in the thermodynamic diagram represent the probability that each pixel belongs to the target anatomical site;
inputting the nasopharyngeal and laryngeal endoscopic image samples into a U-shaped neural network, so that the network estimates, for each pixel in a sample, the probability that the pixel belongs to the target anatomical site;
comparing the recognition result of the U-shaped neural network with the thermodynamic diagram corresponding to the sample, and correcting the operating parameters of the network according to the comparison result;
sequentially obtaining the next batch of nasopharyngeal and laryngeal endoscopic image samples and repeating the correction steps until the difference between the recognition result of the U-shaped neural network and the thermodynamic diagram corresponding to the samples is smaller than a set difference threshold, whereupon the trained U-shaped neural network is used as the anatomical part recognition model.
Further, generating the thermodynamic diagram corresponding to an endoscopic image sample of the target anatomical site comprises:
generating a center-decaying thermodynamic diagram centered on the pixel at the annotated position of the anatomical site;
and binarizing the center-decaying thermodynamic diagram to generate the thermodynamic diagram of the target anatomical site.
Further, obtaining different anatomical part information with the anatomical part recognition model in step three includes:
inputting the extracted video segment related to the anatomical part into the anatomical part recognition model to obtain the label range of the anatomical site to be detected;
determining the corresponding section-position range of the final anatomical part category in the nasopharyngeal endoscopic image sample using the site label range corresponding to the final anatomical part;
and looking up a preset dictionary with the final anatomical part category over the corresponding section-position range in the nasopharyngeal endoscopic image sample to obtain the final anatomical part information.
Further, a first correspondence exists between the anatomical site labels and the anatomical part information, and the preset dictionary contains a second correspondence between the anatomical site labels and the anatomical part information.
Another object of the present invention is to provide an anatomical part recognition system applying the above control method, the anatomical part recognition system comprising:
an endoscopic image acquisition module for acquiring an endoscopic image through an endoscopic image diagnosis device for the nose and throat, capturing screenshots through intelligent image analysis, and extracting the video segments related to anatomical parts;
an anatomical part recognition model construction module for constructing, training and optimizing an anatomical part recognition model from the endoscopic images acquired by the endoscopic image diagnosis device for the nose and throat;
an anatomical part recognition module for inputting the extracted video segments related to anatomical parts into the pre-trained anatomical part recognition model and calculating different anatomical part information;
and a comparison checking analysis module for establishing a real-time database, comparing the calculated anatomical part information with the historical database, and determining the final anatomical part information.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the anatomical region identification system control method.
It is a further object of the present invention to provide a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the anatomical region recognition system control method.
Another object of the present invention is to provide an information data processing terminal for implementing the anatomical region recognition system.
In combination with the technical scheme and the technical problems to be solved, the claimed technical scheme has the following advantages and positive effects:
First, with regard to the technical problems in the prior art and the difficulty of solving them: the technical problems solved by the present invention are analyzed in detail, in close combination with the claimed technical scheme and the results and data obtained during research and development, and the technical effects brought about after solving these problems are creative. Specifically:
The invention obtains an endoscopic image (photo/video) through an endoscopic image diagnosis device for the nose and throat and captures the video segments related to anatomical parts using intelligent image analysis; different anatomical part information is obtained through artificial-intelligence calculation, and a real-time database is established; the result is compared with the historical database to check the analysis result, and, if it clearly differs from the actual data, the method returns to the artificial-intelligence calculation step and recalculates; if the data accord with the actual situation, they are recorded and output; if output is still impossible after several repetitions, the case is reported to a manual checking output port for manual review, and, if the AI has misjudged, the relevant data are recorded in a learning database to improve the AI's precision.
Second, considering the technical scheme as a whole or from the product perspective, the claimed technical scheme has the following technical effects and advantages:
The control method of the anatomical part recognition system provided by the invention is based on an endoscopic image diagnosis device for the nose and throat; AI recognition of the endoscopic images acquires guiding information, thereby realizing the recognition of anatomical parts.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for controlling an anatomical region recognition system according to an embodiment of the invention;
FIG. 2 is a flowchart of a training method for an anatomical region recognition model provided by an embodiment of the invention;
FIG. 3 is a flow chart of a method for obtaining different anatomical region information using an anatomical region identification model according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In view of the problems existing in the prior art, the present invention provides an anatomical region recognition system, a control method, a medium, a device and a terminal, and the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method for controlling an anatomical region recognition system according to an embodiment of the present invention includes the following steps:
S101, the system acquires an endoscopic image with an endoscopic image diagnosis device for the nose and throat through an endoscopic image acquisition module, captures screenshots through intelligent image analysis, and extracts the video segments related to anatomical parts;
S102, an anatomical part recognition model construction module constructs, trains and optimizes an anatomical part recognition model from the endoscopic images acquired by the endoscopic image diagnosis device for the nose and throat;
S103, an anatomical part recognition module inputs the extracted video segments related to anatomical parts into the pre-trained anatomical part recognition model and calculates different anatomical part information;
S104, a comparison checking analysis module establishes a real-time database and subjects the calculated anatomical part information and a historical database to comparison checking analysis to determine the final anatomical part information.
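The four steps above can be sketched as a minimal control flow. All function names, the stub model and the example data below are illustrative placeholders, not the patent's actual implementation:

```python
# Minimal sketch of the S101-S104 flow; names and stubs are illustrative only.
def acquire_segments(video_frames):
    """S101 stub: keep only the frames/segments related to anatomical sites."""
    return [f for f in video_frames if f is not None]

def recognize(segment, model):
    """S103: apply the pre-trained recognition model to one segment."""
    return model(segment)

def check_against_history(info, history):
    """S104: accept a result only if it is consistent with the historical database."""
    return info in history

history = {"nasal cavity", "throat"}                      # toy historical database
model = lambda seg: "nasal cavity" if seg == "frame_a" else "unknown"

segments = acquire_segments(["frame_a", None, "frame_b"])
results = [recognize(s, model) for s in segments]
final = [r for r in results if check_against_history(r, history)]
```

Results that fail the historical check are the ones that would trigger recalculation or, eventually, manual review in the full method.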
As shown in fig. 2, the training process of the anatomical part recognition model in step S102 according to the embodiment of the present invention includes:
S201, obtaining endoscopic image samples of the nasopharynx and larynx, and marking the position information of the target anatomical site in each nasopharyngeal and laryngeal endoscopic image;
S202, generating the thermodynamic diagram corresponding to each endoscopic image sample of the target anatomical site, where the pixel values in the thermodynamic diagram represent the probability that each pixel belongs to the target anatomical site;
S203, inputting the nasopharyngeal and laryngeal endoscopic image samples into a U-shaped neural network, so that the network estimates, for each pixel in a sample, the probability that the pixel belongs to the target anatomical site;
S204, comparing the recognition result of the U-shaped neural network with the thermodynamic diagram corresponding to the sample, and correcting the operating parameters of the network according to the comparison result;
S205, sequentially obtaining the next batch of nasopharyngeal and laryngeal endoscopic image samples and repeating the correction steps until the difference between the recognition result of the U-shaped neural network and the thermodynamic diagram corresponding to the samples is smaller than a set difference threshold, whereupon the trained U-shaped neural network is used as the anatomical part recognition model.
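The batch-wise correct-until-threshold loop of S201-S205 can be illustrated with a deliberately tiny stand-in for the network (a single scalar "prediction" corrected toward a labeled target). The learning rate, threshold and toy model are assumptions for illustration only:

```python
def difference(pred, target):
    """Stand-in for comparing network output with the labeled thermodynamic diagram."""
    return abs(pred - target)

def train_until_threshold(target, lr=0.5, threshold=0.05, max_batches=100):
    """Repeat the compare-and-correct step over batches until the
    difference falls below the set threshold (S205's stopping rule)."""
    w = 0.0                              # toy "network output"
    for batch in range(max_batches):
        d = difference(w, target)        # S204: compare with the labeled diagram
        if d < threshold:                # S205: stop once below the threshold
            return w, batch
        w += lr * (target - w)           # S204: correct the operating parameters
    return w, max_batches

w, n_batches = train_until_threshold(target=1.0)
```

With these toy values the difference halves each batch, so the loop stops after five corrections.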
Generating the thermodynamic diagram corresponding to an endoscopic image sample of the target anatomical site, as provided by the embodiment of the invention, comprises:
generating a center-decaying thermodynamic diagram centered on the pixel at the annotated position of the anatomical site;
and binarizing the center-decaying thermodynamic diagram to generate the thermodynamic diagram of the target anatomical site.
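These two steps can be sketched as follows. The patent does not specify the decay function, so the Gaussian form, its sigma, and the 0.5 binarization threshold below are illustrative assumptions:

```python
import math

def center_decay_heatmap(h, w, cy, cx, sigma=8.0):
    """Heatmap whose values decay with distance from the annotated center pixel
    (Gaussian decay assumed; value 1.0 at the center)."""
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def binarize(heatmap, threshold=0.5):
    """Binarize the center-decaying heatmap into the target-site diagram."""
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]

hm = center_decay_heatmap(64, 64, cy=32, cx=32)
target = binarize(hm)
```

The binarized map is 1 in a disc around the annotated site and 0 elsewhere, matching the per-pixel membership targets used in training.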
As shown in fig. 3, in step S103 provided by the embodiment of the present invention, obtaining different anatomical part information with the anatomical part recognition model includes:
S301, inputting the extracted video segment related to the anatomical part into the anatomical part recognition model to obtain the label range of the anatomical site to be detected;
S302, determining the corresponding section-position range of the final anatomical part category in the nasopharyngeal and laryngeal endoscopic image sample using the site label range corresponding to the final anatomical part;
S303, looking up a preset dictionary with the final anatomical part category over the corresponding position range in the nasopharyngeal and laryngeal endoscopic image sample to obtain the final anatomical part information.
A first correspondence exists between the anatomical site labels and the anatomical part information, and the preset dictionary contains a second correspondence between the anatomical site labels and the anatomical part information.
The anatomy part recognition system provided by the embodiment of the invention comprises:
The endoscopic image acquisition module is used for acquiring an endoscopic image through endoscopic image diagnosis equipment of nose and throat, analyzing screenshot through an intelligent image and extracting video segments related to anatomical parts;
The anatomy part recognition model construction module is used for constructing, training and optimizing an anatomy part recognition model according to an endoscopic image acquired by the endoscopic image diagnosis equipment of the nose and the throat;
The anatomy part recognition module is used for inputting the extracted video segments related to the anatomy part into a pre-trained anatomy part recognition model, and calculating to obtain different anatomy part information;
And the comparison checking analysis module is used for establishing a real-time database, comparing the calculated anatomic part information with the historical database, and determining the final anatomic part information.
An application embodiment of the invention provides a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the anatomical region recognition system control method.
An application embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the anatomical region identification system control method.
The application embodiment of the invention provides an information data processing terminal which is used for realizing the anatomic part recognition system.
The working principle of the control method of the anatomical part recognition system provided by the embodiment of the invention is as follows:
First, an endoscopic image is acquired by an endoscopic image acquisition module using an endoscopic image diagnosis apparatus for nose and throat. This step is the basis of the system operation, and through the endoscopic image diagnosis equipment, the doctor can clearly observe the internal condition of the nose and throat parts of the patient. The module then also captures the acquired images and extracts video segments associated with the anatomical site by intelligent image analysis techniques. These video segments contain important information about the anatomical site of interest to the physician.
Next, the anatomical part recognition model construction module constructs, trains and optimizes the anatomical part recognition model from the previously acquired endoscopic images. This model is the core of the system; through extensive learning and training it becomes able to identify different anatomical parts in the images.
The anatomical region recognition module then inputs the extracted video segments related to the anatomical region into a pre-trained anatomical region recognition model. The model analyzes the video segments and calculates different anatomical region information. Such information may include location, size, shape, etc. of the site, which is critical to the accurate diagnosis of the physician.
After the anatomical part information is obtained, the system enters the comparison checking analysis module. This module establishes a real-time database and compares the calculated anatomical part information with a historical database for analysis. This step ensures the accuracy of the calculation result. If the calculated anatomical part information is clearly inconsistent with the actual data, the system returns to the anatomical part information calculation step and recalculates. This cycle continues until anatomical part information corresponding to the actual situation is obtained.
Once the actual anatomy information is obtained, the system records the data and outputs it to the physician. From this information, the physician can make an accurate diagnosis. However, if the system cannot output the anatomical part information according with the actual situation after repeated calculation, the system will report the problem to the manual checking output port for manual review by the doctor. Such a design is to ensure the reliability of the system and to avoid influencing the diagnostic result due to system errors.
Finally, if the doctor finds that the system has an AI misjudgment, the relevant data can be recorded into the learning database. These data will be used to further train and optimize the anatomy recognition model, thereby improving the recognition accuracy and stability of the system.
The control method of the anatomical part recognition system thus works as a cyclic process: by continuously acquiring images, extracting information, calculating, analyzing and verifying the output, it ultimately provides doctors with accurate and reliable anatomical part information and assists them in making accurate diagnoses.
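The record-or-escalate behavior described above can be sketched as follows. The retry limit and function names are illustrative, since the patent does not fix a specific number of attempts:

```python
def recognize_with_review(compute, matches_history, max_retries=3):
    """Recompute until the result matches the historical database; after
    max_retries failures, escalate to the manual checking output port."""
    for _ in range(max_retries):
        info = compute()
        if matches_history(info):
            return info, "auto"           # record data and output
    return None, "manual_review"          # report for manual review by a doctor

attempts = iter(["wrong", "wrong", "throat"])
ok_result, ok_route = recognize_with_review(lambda: next(attempts),
                                            lambda x: x == "throat")

attempts2 = iter(["a", "b", "c"])
bad_result, bad_route = recognize_with_review(lambda: next(attempts2),
                                              lambda x: x == "throat")
```

Cases routed to "manual_review" are the ones whose data would be recorded in the learning database if the AI is found to have misjudged.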
The embodiment of the invention adopts a U-shaped neural network (U-Net) to analyze images of anatomical parts. The U-Net structure includes a contracting path (for capturing context information) and a symmetric expanding path (for precise localization). Each step in the contracting path consists of two 3x3 convolutional layers, each followed by a ReLU activation function, and a 2x2 max pooling operation. Each step in the expanding path includes an up-sampling operation followed by a 2x2 convolution ("up-convolution") and then two 3x3 convolutions, each of which is also followed by a ReLU activation.
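As a sanity check on this architecture, the spatial size of the feature maps can be traced through the contracting and expanding paths. The sketch below assumes the unpadded (valid) 3x3 convolutions and four resolution levels of the original U-Net design; the text does not state padding, depth, or input size, so the 572-pixel input is an illustrative value:

```python
def unet_output_size(n, depth=4):
    """Trace the spatial size through a U-Net with unpadded 3x3 convolutions."""
    def two_convs(m):
        return m - 4                 # two 3x3 valid convolutions, each -2 pixels
    for _ in range(depth):           # contracting path
        n = two_convs(n)
        n = n // 2                   # 2x2 max pooling halves the size
    n = two_convs(n)                 # bottleneck convolutions
    for _ in range(depth):           # expanding path
        n = n * 2                    # 2x2 up-convolution doubles the size
        n = two_convs(n)             # two 3x3 convolutions after concatenation
    return n

out = unet_output_size(572)          # 572x572 input -> 388x388 output map
```

With same-padded convolutions (a common modern variant) the output would instead keep the input resolution.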
The optimization scheme provided by the embodiment of the invention comprises the following steps:
Setting a difference threshold value: the difference threshold was set to 0.05 based on previous test and experimental data. This means that when the mean square error between the thermodynamic diagram output by the U-shaped neural network and the labeled thermodynamic diagram is less than 0.05, the training of the model is considered to reach satisfactory accuracy, and the training process can be ended.
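The stopping criterion can be written directly as a mean-squared-error check between the predicted and labeled thermodynamic diagrams (flattened to lists here for brevity; the example values are illustrative):

```python
def mse(pred, target):
    """Mean squared error between predicted and labeled heatmap pixels."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def training_converged(pred, target, threshold=0.05):
    """True once the heatmap MSE drops below the set difference threshold."""
    return mse(pred, target) < threshold

done = training_converged([0.9, 0.1, 0.8], [1.0, 0.0, 1.0])   # MSE = 0.02
```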
The optimization scheme provided by the embodiment of the invention comprises the following steps:
U-shaped neural network: batch normalization is used after each convolutional layer to speed up network training and reduce internal covariate shift.
The parameter correction method: the network weights are updated using the back-propagation algorithm with the Adam optimizer. The learning rate is initially set to 0.001 and is adjusted during training according to the validation-set loss.
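A sketch of the training setup described above, assuming PyTorch: the small `nn.Sequential` model merely stands in for the U-shaped network, and `ReduceLROnPlateau` is one common way of adjusting the learning rate from the validation loss (the embodiment does not name a specific schedule):

```python
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the U-shaped network
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),                       # batch normalization after the convolution
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # initial learning rate
# lower the learning rate when the monitored loss stops improving
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=2)

loss_fn = nn.MSELoss()                       # heat-map regression loss
x, y = torch.rand(2, 3, 16, 16), torch.rand(2, 1, 16, 16)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                              # back-propagation
optimizer.step()                             # Adam weight update
scheduler.step(loss.item())                  # in practice, pass the validation loss here
```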
The optimization scheme provided by the embodiment of the invention comprises the following steps:
First correspondence: each anatomical part is assigned a unique label, e.g., nasal cavity → 1 and throat → 2. These labels are used directly to generate high-probability regions for the corresponding locations in the thermodynamic diagram.
Second correspondence: the preset dictionary contains a mapping from labels to specific anatomical names, e.g., {1: "nasal cavity", 2: "throat"}. This allows the specific name of the anatomical part to be derived directly from the label, facilitating interpretation and use of the final output.
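The two correspondences can be sketched as plain dictionaries; the labels follow the nasal cavity / throat example above, and `site_name` is an illustrative helper:

```python
# first correspondence: each anatomical part gets a unique integer label
SITE_LABELS = {"nasal cavity": 1, "throat": 2}

# second correspondence: the preset dictionary maps labels back to names
PRESET_DICT = {1: "nasal cavity", 2: "throat"}

def site_name(label):
    """Resolve a predicted label to its anatomical part name."""
    return PRESET_DICT[label]
```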
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of the two. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor, or by specially designed hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, provided for example on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices; by software executed by various types of processors; or by a combination of the above hardware circuitry and software, such as firmware.
The foregoing describes merely specific embodiments of the present invention, and the scope of the invention is not limited thereto; any modifications, equivalent substitutions, improvements, and alternatives made by those skilled in the art within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (8)
1. An anatomical part recognition system control method, characterized in that the anatomical part recognition system control method comprises: obtaining an endoscopic image through a nasal and laryngeal endoscopic image diagnosis device, capturing screenshots through intelligent image analysis, and extracting video segments related to anatomical parts; performing multi-image analysis and obtaining different anatomical part information through artificial intelligence analysis and calculation; establishing a real-time database and comparing the calculated anatomical part information against a historical database for checking analysis: if the calculated anatomical part information is obviously inconsistent with the actual data, returning to the anatomical part information calculation step and recalculating; if the calculated anatomical part information accords with the actual situation, recording the data and outputting the anatomical part information; if no output can be produced after multiple repetitions, reporting to a manual-check output port for manual review, and if an AI misjudgment is found, recording the relevant data in a learning database;
the anatomical part recognition system to which the anatomical part recognition system control method is applied comprises:
The endoscopic image acquisition module, used for acquiring an endoscopic image through the nasal and laryngeal endoscopic image diagnosis device, capturing screenshots through intelligent image analysis, and extracting video segments related to anatomical parts;
The anatomical part recognition model construction module, used for constructing, training, and optimizing an anatomical part recognition model according to the endoscopic images acquired by the nasal and laryngeal endoscopic image diagnosis device;
The anatomical part recognition module, used for inputting the extracted video segments related to anatomical parts into the pre-trained anatomical part recognition model and calculating different anatomical part information;
And the comparison checking analysis module, used for establishing a real-time database, comparing the calculated anatomical part information with the historical database, and determining the final anatomical part information.
2. The anatomical part recognition system control method according to claim 1, characterized in that the anatomical part recognition system control method comprises the following steps:
Step one, the system acquires an endoscopic image using the nasal and laryngeal endoscopic image diagnosis device through the endoscopic image acquisition module, captures screenshots through intelligent image analysis, and extracts video segments related to anatomical parts;
Step two, the anatomical part recognition model construction module constructs, trains, and optimizes an anatomical part recognition model according to the endoscopic images acquired by the nasal and laryngeal endoscopic image diagnosis device;
Step three, the anatomical part recognition module inputs the extracted video segments related to anatomical parts into the pre-trained anatomical part recognition model and calculates different anatomical part information;
Step four, the comparison checking analysis module establishes a real-time database and performs comparison checking analysis between the calculated anatomical part information and the historical database to determine the final anatomical part information.
3. The method of claim 2, wherein the training of the anatomical part recognition model in step two comprises:
obtaining endoscopic image samples of the nasopharynx and larynx, and marking position information of the part to be dissected in each endoscopic image;
generating a thermodynamic diagram corresponding to each endoscopic image sample of the part to be dissected, wherein the pixel values in the thermodynamic diagram represent the probability that each pixel belongs to the part to be dissected;
inputting the endoscopic image samples of the nasopharynx and larynx into a U-shaped neural network, so that the U-shaped neural network identifies the probability that each pixel point in each sample belongs to the part to be dissected;
comparing the identification result of the U-shaped neural network with the thermodynamic diagram corresponding to the endoscopic image sample, and correcting the operating parameters of the U-shaped neural network according to the comparison result;
obtaining the next batch of endoscopic image samples of the nasopharynx and larynx in turn and repeating the correction steps until the difference between the identification result of the U-shaped neural network and the thermodynamic diagrams corresponding to the endoscopic image samples is smaller than the set difference threshold; the trained U-shaped neural network then serves as the anatomical part recognition model.
4. The anatomical part recognition system control method according to claim 3, wherein generating the thermodynamic diagram corresponding to the endoscopic image sample of the part to be dissected comprises:
generating a thermodynamic diagram that decays from the center, taking the pixel point at the position of the part to be dissected as the center;
and carrying out binarization processing on the centrally decaying thermodynamic diagram to generate the thermodynamic diagram of the part to be dissected.
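The two steps above can be sketched as follows; a Gaussian fall-off is assumed for the central decay, which the claim does not mandate, and the helper names are hypothetical:

```python
import math

def center_decay_heatmap(h, w, cy, cx, sigma=2.0):
    """Heat map decaying from the annotated center pixel (cy, cx).

    A Gaussian fall-off is assumed here; the claim only requires that
    the values decay away from the center.
    """
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def binarize(heatmap, threshold=0.5):
    """Binarize the decaying heat map into the to-be-dissected-part mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]
```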
5. The method of claim 2, wherein obtaining different anatomical part information using the anatomical part recognition model in step three comprises:
inputting the extracted video segments related to the anatomical part into the anatomical part recognition model to obtain the label range of the part to be dissected;
determining, from the label range of the part to be dissected corresponding to the final anatomical part, the corresponding layer position range of the final anatomical part category in the endoscopic image sample of the nasopharynx;
and searching a preset dictionary within the corresponding layer position range in the endoscopic image sample of the nasopharynx using the final anatomical part category, to obtain the final anatomical part information.
6. The method of claim 5, wherein a first correspondence exists between the anatomical parts and the labels of the parts to be dissected, and the preset dictionary contains a second correspondence between the labels of the parts to be dissected and the anatomical part information.
7. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the anatomical region identification system control method as claimed in any one of claims 1 to 6.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the anatomical region identification system control method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410528101.XA CN118096772A (en) | 2024-04-29 | 2024-04-29 | Anatomical part recognition system, control method, medium, equipment and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118096772A true CN118096772A (en) | 2024-05-28 |
Family
ID=91165657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410528101.XA Pending CN118096772A (en) | 2024-04-29 | 2024-04-29 | Anatomical part recognition system, control method, medium, equipment and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118096772A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180308235A1 (en) * | 2017-04-21 | 2018-10-25 | Ankon Technologies Co., Ltd. | SYSTEM AND METHOD FOR PREPROCESSING CAPSULE ENDOSCOPIC IMAGE |
CN110136106A (en) * | 2019-05-06 | 2019-08-16 | 腾讯科技(深圳)有限公司 | Recognition methods, system, equipment and the endoscopic images system of medical endoscope image |
CN111353978A (en) * | 2020-02-26 | 2020-06-30 | 合肥凯碧尔高新技术有限公司 | Method and device for identifying cardiac anatomical structure |
CN112766314A (en) * | 2020-12-31 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Anatomical structure recognition method, electronic device, and storage medium |
CN114299072A (en) * | 2022-03-11 | 2022-04-08 | 四川大学华西医院 | Artificial intelligence-based anatomy variation identification prompting method and system |
US20230206435A1 (en) * | 2021-12-24 | 2023-06-29 | Infinitt Healthcare Co., Ltd. | Artificial intelligence-based gastroscopy diagnosis supporting system and method for improving gastrointestinal disease detection rate |
CN116392109A (en) * | 2023-03-27 | 2023-07-07 | 西安交通大学医学院第一附属医院 | Gland body size measurement system, control method, medium, equipment and terminal |
Non-Patent Citations (1)
Title |
---|
LI Kaixuan et al.: "Algorithm design and implementation of a capsule endoscopic image recognition system" (胶囊内窥图像识别系统算法设计及实现), Journal of Southern Medical University (南方医科大学学报), vol. 32, no. 7, 30 June 2012 (2012-06-30), pages 948 - 951 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111160367B (en) | Image classification method, apparatus, computer device, and readable storage medium | |
CN109886928A (en) | A kind of target cell labeling method, device, storage medium and terminal device | |
CN111626177A (en) | PCB element identification method and device | |
CN112509661B (en) | Methods, computing devices, and media for identifying physical examination reports | |
CN113688817A (en) | Instrument identification method and system for automatic inspection | |
CN116434266A (en) | Automatic extraction and analysis method for data information of medical examination list | |
CN114494215A (en) | Transformer-based thyroid nodule detection method | |
CN113763348A (en) | Image quality determination method and device, electronic equipment and storage medium | |
CN112233128A (en) | Image segmentation method, model training method, device, medium, and electronic device | |
CN113780145A (en) | Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium | |
CN114596440A (en) | Semantic segmentation model generation method and device, electronic equipment and storage medium | |
CN115082659A (en) | Image annotation method and device, electronic equipment and storage medium | |
CN118098558A (en) | X-ray image diagnosis method and device based on dialogue function and electronic equipment | |
CN113344873A (en) | Blood vessel segmentation method, device and computer readable medium | |
CN116580801A (en) | Ultrasonic inspection method based on large language model | |
CN112766314A (en) | Anatomical structure recognition method, electronic device, and storage medium | |
CN118096772A (en) | Anatomical part recognition system, control method, medium, equipment and terminal | |
CN114972263B (en) | Real-time ultrasonic image follicle measurement method and system based on intelligent picture segmentation | |
CN116091522A (en) | Medical image segmentation method, device, equipment and readable storage medium | |
CN113920088B (en) | Radius density detection method, system and device based on deep learning | |
CN115294576A (en) | Data processing method and device based on artificial intelligence, computer equipment and medium | |
CN113408356A (en) | Pedestrian re-identification method, device and equipment based on deep learning and storage medium | |
CN114283114A (en) | Image processing method, device, equipment and storage medium | |
CN113256625A (en) | Electronic equipment and recognition device | |
CN111860100A (en) | Pedestrian number determination method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||