CN113554641B - Pediatric pharyngeal image acquisition method and device - Google Patents

Pediatric pharyngeal image acquisition method and device

Info

Publication number
CN113554641B
Authority
CN
China
Prior art keywords
information
image
result
pharyngeal
partition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110876510.5A
Other languages
Chinese (zh)
Other versions
CN113554641A (en)
Inventor
杨政
沈志英
钱雪莹
沈春明
钱冬梅
万荣
周艳
夏珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Shengze Hospital
Original Assignee
Jiangsu Shengze Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Shengze Hospital
Priority to CN202110876510.5A
Publication of CN113554641A
Application granted
Publication of CN113554641B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a pediatric pharyngeal image acquisition method and device, wherein the method comprises the following steps: obtaining first user information; acquiring pharyngeal image information of the first user with a first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; obtaining first partition historical image information from big data according to the first partition result information; inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first marked image set. This solves the technical problem that pharyngeal image processing in the prior art is not sufficiently accurate.

Description

Pediatric pharyngeal image acquisition method and device
Technical Field
The invention relates to the technical field of image recognition, in particular to a pediatric pharyngeal image acquisition method and device.
Background
In recent years, as machine learning research has deepened and the concept of intelligent healthcare has been introduced, the combination of artificial intelligence and medicine has become a research hotspot for new modes of medical and health services, with extensive practice in fields such as image recognition, disease model prediction, and biological big data mining.
Because lesions of the ear, nose and throat often arise in deep, hidden cavities, endoscopy and imaging studies are needed to assist diagnosis; this area has therefore gradually attracted artificial-intelligence research. In laryngology, the color and texture of the throat are evaluated to judge throat health and assist diagnosis.
However, in implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problem:
in the prior art, symptom clustering is performed on the texture and color of the collected pharyngeal image, the image is classified into possible pathological causes, and a reference is provided for diagnosis; because this image processing is too simple and one-sided, its accuracy is insufficient.
Disclosure of Invention
The embodiment of the application provides a pediatric pharyngeal image acquisition method and device, which solve the technical problem in the prior art that symptom clustering of the texture and color of the collected pharyngeal image, classification into possible pathological causes, and provision of a diagnostic reference are too simple and one-sided, so that accuracy is insufficient. The technical effect achieved is that the image is partitioned by gradient, the information of each partition is compared with historically similar symptoms, a diagnosis result is then obtained through intelligent model evaluation and matched with the pharyngeal image, and the resulting processed image set provides more intelligent and accurate reference information for auxiliary diagnosis.
In view of the above problems, the present application provides a method and an apparatus for acquiring images of a pediatric pharynx.
In a first aspect, an embodiment of the present application provides a pediatric pharyngeal image acquisition method, wherein the method is applied to medical goggles that include a first supplementary lighting module and a first image acquisition module, and comprises: obtaining first user information; acquiring pharyngeal image information of the first user with the first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; obtaining first partition historical image information from big data according to the first partition result information; inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first set of labeled images.
In another aspect, an embodiment of the present application provides a pediatric pharyngeal image acquisition device, wherein the device includes: a first obtaining unit configured to obtain first user information; a second obtaining unit configured to obtain pharyngeal image information of the first user according to a first image acquisition module; a third obtaining unit configured to preprocess the first pharyngeal image information to obtain first preprocessed image information; a first sending unit configured to upload the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; a first partitioning unit configured to perform gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; a fourth obtaining unit configured to obtain first partition historical image information from big data according to the first partition result information; a fifth obtaining unit configured to input the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and a sixth obtaining unit configured to label the pharyngeal image information of the first user using the first diagnosis evaluation result to obtain a first set of labeled images.
In a third aspect, an embodiment of the present application provides a pediatric pharyngeal image acquisition system, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the technical solution of obtaining first user information; acquiring pharyngeal image information of the first user with the first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; obtaining first partition historical image information from big data according to the first partition result information; inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first marked image set achieves the technical effect that the image is partitioned by gradient, each partition is compared with historically similar symptoms, a diagnosis result is then obtained through intelligent model evaluation and matched with the pharyngeal image, and the resulting processed image set provides more intelligent and accurate reference information for auxiliary diagnosis.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application more clearly understood, it can be implemented according to the content of the description; and to make the above and other objects, features, and advantages of the present application more clearly understandable, the detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flow chart of a pediatric pharyngeal image acquisition method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of another pediatric pharyngeal image acquisition method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a pediatric pharyngeal image acquisition device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a third obtaining unit 13, a first sending unit 14, a first partitioning unit 15, a fourth obtaining unit 16, a fifth obtaining unit 17, a sixth obtaining unit 18, an electronic device 300, a memory 301, a processor 302, a communication interface 303, and a bus architecture 304.
Detailed Description
The embodiment of the application provides a pediatric pharyngeal image acquisition method and device, which solve the technical problem in the prior art that symptom clustering of the texture and color of the collected pharyngeal image, classification into possible pathological causes, and provision of a diagnostic reference are too simple and one-sided, so that accuracy is insufficient. The technical effect achieved is that the image is partitioned by gradient, the information of each partition is compared with historically similar symptoms, a diagnosis result is then obtained through intelligent model evaluation and matched with the pharyngeal image, and the resulting processed image set provides more intelligent and accurate reference information for auxiliary diagnosis.
Summary of the application
In recent years, as machine learning research has deepened and the concept of intelligent healthcare has been introduced, the combination of artificial intelligence and medicine has become a research hotspot for new modes of medical and health services, with extensive practice in fields such as image recognition, disease model prediction, and biological big data mining. Because lesions of the ear, nose and throat often arise in deep, hidden cavities, endoscopy and imaging studies are needed to assist diagnosis; this area has therefore gradually attracted artificial-intelligence research, and in laryngology the color and texture of the throat are evaluated to judge throat health and assist diagnosis. However, in the prior art, symptom clustering is performed on the texture and color of the collected pharyngeal image, the image is classified into possible pathological causes, and a reference is provided for diagnosis; because this image processing is too simple and one-sided, its accuracy is insufficient.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a method for acquiring images of a pharyngeal portion of a pediatrics, wherein the method is applied to medical goggles, and the first medical goggles comprise a first light supplement module and a first image acquisition module and comprise the following steps: obtaining first user information; acquiring pharyngeal image information of the first user according to the first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partition on the first preprocessed image information according to the first partition instruction information to obtain first partition result information; according to the first partition result information, combining big data to obtain first partition historical image information; inputting the first segmentation result information and the first segmentation historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; labeling the first user pharyngeal image information using the first diagnostic evaluation, obtaining a first set of labeled images.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in Fig. 1, an embodiment of the present application provides a pediatric pharyngeal image acquisition method, wherein the method is applied to medical goggles that include a first supplementary lighting module and a first image acquisition module, and comprises:
S100: obtaining first user information;
S200: acquiring pharyngeal image information of the first user according to the first image acquisition module;
specifically, the first user information refers to throat patient information for a visit in otolaryngological department, including but not limited to: information such as age, sex, throat medical history, past examination data, etc. of the patient; the preferred acquiring mode of the first user information is as follows: the first goggles are in communication connection with a patient information platform of a hospital, when the first user is checked, the first goggles perform face recognition matching on the first user by means of the face features of the first user through the first image acquisition module, and the first user information is called. The first image acquisition module refers to a management module for acquiring an original image of the first user, and an intelligent high-definition miniature camera is optionally used; the pharyngeal image information of the first user refers to data obtained by acquiring images of the pharyngeal of the first user after identifying an examination instruction of a doctor. Preferably, the acquisition position is adjusted through the movement of the first goggles, and the high-definition images are acquired through the coordination and matching of the first light supplement module and the first image acquisition module, wherein the first light supplement module refers to a management module capable of performing auxiliary polishing according to the actual image shooting condition. By calling the first user information and acquiring the first user pharyngeal original image information, a complete information basis is provided for further processing of the subsequent images.
S300: preprocessing the first pharyngeal image information to obtain first preprocessed image information;
further, based on the preprocessing the first pharyngeal image information, obtaining first preprocessed image information, step S300 further includes:
S310: obtaining a first preprocessing standard according to the first pharyngeal image analysis platform;
S320: comparing the first pharyngeal image information with the first preprocessing standard one by one, and deleting the image information which does not meet the first preprocessing standard to obtain a first comparison result;
S330: sorting the first comparison result according to acquisition time to obtain a first preprocessed image.
Specifically, the first preprocessed image refers to the image set obtained after the first goggles perform dimension-reduction screening on the pharyngeal image information of the first user, once the original pharyngeal information of the first user has been acquired by the first image acquisition module. Further, the first preprocessing standard refers to set standard data for filtering the pharyngeal image information of the first user; for example, and without limitation, the first preprocessing standard may be set as a minimum image resolution for pharyngeal image recognition, and may be adjusted according to the actual application scenario. The first comparison result refers to the result obtained by comparing the first pharyngeal image information with the first preprocessing standard, deleting the first pharyngeal images that do not meet the standard, and supplementing image information until an image set is obtained that meets the first preprocessing standard and can represent the basic pharyngeal information of the first user. Furthermore, the image information in the first comparison result is preferably sorted by shooting time to obtain the first preprocessed image. Preprocessing the pharyngeal image information of the first user reduces the redundancy of the image information and improves efficiency.
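A minimal sketch of steps S310 to S330 follows, assuming each frame carries its resolution and acquisition timestamp; the resolution threshold values are illustrative, not values taken from the patent.

```python
# Minimal sketch of S310-S330: drop frames below the preprocessing standard,
# then sort the remainder by acquisition time. Threshold values are assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Frame:
    pixels: object          # e.g. a numpy array holding the image data
    width: int
    height: int
    acquired_at: datetime


def preprocess(frames: List[Frame], min_width: int = 1280, min_height: int = 720) -> List[Frame]:
    """Apply the first preprocessing standard, then order frames by acquisition time."""
    kept = [f for f in frames if f.width >= min_width and f.height >= min_height]
    return sorted(kept, key=lambda f: f.acquired_at)
```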
S400: uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction;
S500: performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information;
specifically, the first pharyngeal influence analysis platform refers to an image processing platform that reprocesses the first pre-processed image; the first segmentation instruction refers to an instruction signal for processing the first pre-processed image, which is obtained by uploading the obtained first pre-processed image to the first pharyngeal image analysis platform after the first goggles stop pre-processing the first user eye image. Further, the first segmentation result information indicates that the first pharyngeal image analysis platform performs segmentation on the first preprocessed image according to the first segmentation instruction, and the preferred segmentation mode is to perform region segmentation according to the texture, color and appearance of the first preprocessed image to obtain a plurality of regions with similar texture, color and appearance. And according to the first partition result, the area where the abnormal texture, the chromaticity and the morphology are located can be quickly identified.
S600: according to the first partition result information, combining big data to obtain first partition historical image information;
specifically, the first-partition historical image information refers to storing image information of historical patients similar to the color, texture and morphology of each partition in the first partition result in combination with big data, and storing the image information corresponding to each partition to obtain an image information set. By collecting the historical image information of the first partition, the historical image information of the first partition can be further compared with the information in the first partition result, abnormal partitions can be rapidly distinguished, the diagnosis result of the corresponding historical patient can be obtained, and an information reference basis can be provided for pharyngeal disease diagnosis of the first user.
S700: inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result;
S800: labeling the first user pharyngeal image information using the first diagnosis evaluation result to obtain a first set of labeled images.
Specifically, the first diagnosis evaluation result is the evaluation of the first user's pharyngeal condition obtained through intelligent model analysis of the first partition result information and the first partition historical image information. The first diagnosis evaluation model is built on a neural network model and has its characteristics. An artificial neural network is an abstract mathematical model, proposed and developed on the basis of modern neuroscience, that aims to reflect the structure and function of the human brain. A neural network is an operational model formed by many interconnected nodes (or neurons); each node represents a specific output function called an activation function, and each connection between two nodes carries a weight for the signal passing through it, which is equivalent to the memory of the artificial neural network. The output of the network is an expression of a logical strategy determined by the way the network is connected. The first diagnosis evaluation model built on a neural network can output accurate first diagnosis evaluation result information, so it has stronger analysis and calculation capability and achieves an accurate and efficient technical effect.
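As an illustration only, a small feed-forward network of the general kind alluded to here could look like the sketch below; the layer sizes, feature encoding, and class count are assumptions and not taken from the patent.

```python
# Illustrative sketch of a neural-network evaluation model: partition features and
# matched historical features are concatenated and mapped to diagnostic classes.
import torch
import torch.nn as nn


class DiagnosisEvaluationNet(nn.Module):
    def __init__(self, feature_dim: int = 256, num_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim * 2, 128),  # partition features + historical features
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),       # hypothetical diagnostic evaluation classes
        )

    def forward(self, partition_feat: torch.Tensor, history_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([partition_feat, history_feat], dim=-1))
```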
Further, the first marked image set refers to the image set obtained by marking, in the pharyngeal image information of the first user, the abnormal partition information corresponding to the first diagnosis evaluation result information and the first partition result information, so as to represent the pharyngeal disease information of the first user. Processing the acquired original images to obtain the processed first marked image set, which represents the pharyngeal disease information of the first user, provides an information basis for efficient diagnosis of the first user's pharyngeal disease.
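A minimal sketch of the labeling step S800, under the assumption that a mark simply attaches the diagnosis text and the abnormal partition identifiers to each kept frame; the real marking format is not specified in the patent.

```python
# Sketch of S800 under assumptions: attach the diagnosis and abnormal partition ids
# to each frame to form the first marked image set.
def label_images(frames, diagnosis: str, abnormal_partitions):
    labeled = []
    for frame in frames:
        labeled.append({
            "frame": frame,
            "diagnosis": diagnosis,
            "abnormal_partitions": list(abnormal_partitions),
        })
    return labeled  # first marked image set
```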
Further, with regard to performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain the first partition result information, step S500 further includes:
S510: performing feature extraction on the first preprocessed image to obtain first feature information, wherein the first feature information comprises first color feature information and first texture feature information;
S520: clustering and analyzing the first preprocessed image according to the first color feature information and the first texture feature information to obtain a first clustering result;
S530: obtaining the first partition result information according to the first clustering result.
Specifically, the first feature information refers to data obtained by performing feature extraction on the first preprocessed image information. Preferably, feature extraction is performed with a feature extraction model trained on a convolutional neural network; convolution can serve as a feature extractor in machine learning, so the extracted feature information is concentrated and representative, yielding the convolutional features of the first preprocessed image information. A convolutional neural network is a type of neural network with excellent recognition performance for feature extraction, particularly image feature extraction, so the obtained first feature information has high accuracy. The first color feature and the first texture feature refer to the color feature and the texture feature of the first preprocessed image information; in laryngology, abnormal conditions of the laryngopharynx are judged by analyzing the color and texture of images acquired by a laryngoscope. Further, the first clustering result refers to dividing the first preprocessed image, according to the first color feature information and the first texture feature information, into small regions with clear differences in texture and chromaticity. Furthermore, it is preferable to store the region images divided from the first preprocessed image according to the first clustering result and use them as the partitions of the first user's pharyngeal image information, that is, the first partition result information; through the first partition result information, abnormal pharyngeal positions can be located more accurately and efficiently.
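A minimal sketch of S510 to S530 under simplifying assumptions: HSV color statistics stand in for the color features, local intensity variance stands in for the texture features, and k-means stands in for whatever clustering the platform actually uses.

```python
# Sketch of S510-S530 under assumptions: block-wise colour/texture features followed
# by k-means clustering to produce a partition map of the preprocessed image.
import cv2
import numpy as np
from sklearn.cluster import KMeans


def gradient_partition(image_bgr: np.ndarray, block: int = 16, n_regions: int = 4) -> np.ndarray:
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hsv_patch = hsv[y:y + block, x:x + block].reshape(-1, 3)
            gray_patch = gray[y:y + block, x:x + block]
            colour = hsv_patch.mean(axis=0)            # stand-in colour feature
            texture = np.array([gray_patch.std()])     # crude texture feature
            feats.append(np.concatenate([colour, texture]))
            coords.append((y, x))
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(np.array(feats))
    partition = np.zeros((h, w), dtype=np.int32)
    for (y, x), lab in zip(coords, labels):
        partition[y:y + block, x:x + block] = lab      # partition result map
    return partition
```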
Further, the method further includes step S900:
S910: extracting spatial features of the first preprocessed image to obtain second feature information, wherein the second feature information comprises first morphological feature information;
S920: obtaining a first correction instruction according to the first morphological feature information;
s930: and correcting the first partition result information according to the first correction instruction to obtain second partition result information.
Specifically, the second feature information refers to the result of spatial feature extraction, that is, extraction of morphological features, performed on the first preprocessed information. Preferably, the extraction is performed with a feature extraction model trained on a convolutional neural network; convolution can serve as a feature extractor in machine learning, so the extracted feature information is concentrated and representative, yielding the convolutional features of the first preprocessed image information. A convolutional neural network is a type of neural network with excellent recognition performance for feature extraction, particularly image feature extraction, so the obtained second feature information has high accuracy. The first morphological feature information refers to information representing the pharyngeal morphology of the first user; image information observed with an ordinary laryngoscope is mainly assessed by chromaticity and texture and cannot evaluate the pharyngeal features spatially, whereas the first morphological feature information allows the first user's pharyngeal condition to be analyzed and evaluated more intuitively. The first correction instruction refers to the evaluation of the first user's pharyngeal health, made according to the first morphological information in combination with a knowledge base of human pathology and physiology. The second partition result information refers to the result obtained by correcting or supplementing the first partition result according to the first correction instruction. The first morphological feature information enables the first user's pharyngeal disease condition to be analyzed and evaluated more intuitively, and the second partition result obtained by correcting the first partition result represents the first user's pharyngeal disease information more comprehensively.
Further, before performing spatial feature extraction on the first preprocessed image, as shown in fig. 2, step S910 includes:
S911: according to the first image acquisition module, first position information of the first preprocessed image is obtained, wherein the first position information comprises first acquisition angle information, first acquisition height information and first acquisition horizontal position information;
S912: constructing spatial position information of the first preprocessed image by using the first acquisition angle information, the first acquisition height information and the first acquisition horizontal position information to obtain a first arrangement instruction;
S913: performing spatial arrangement on the first preprocessed image information according to the first arrangement instruction and the first spatial position information to obtain first spatial arrangement information;
S914: inputting the first spatial arrangement information into the first modeling system to obtain first pharyngeal modeling image information.
Specifically, the first position information refers to information representing the position data of the first preprocessed image, obtained from the shooting angle, shooting height, and other information of the first image acquisition module; the first acquisition angle information, the first acquisition height information, and the first acquisition horizontal position information are contained in the first position information and are matched to the first preprocessed image. Further, the first spatial position information refers to the spatial coordinate information of the first preprocessed image constructed from its acquisition angle information, acquisition height information, and acquisition horizontal position information; the first arrangement instruction is the control signal issued, once the first spatial position information has been constructed for the first preprocessed image, to arrange the first preprocessed image according to that spatial position information. Further, the first spatial arrangement information refers to the result obtained by arranging the first preprocessed images according to the first arrangement instruction and their corresponding first spatial positions. Furthermore, the first modeling system is an intelligent automatic modeling system that can infer complete spatial image information of the first user's pharynx from the sparse first spatial arrangement information and the first user's pharyngeal characteristic data. When the first user belongs to a group for whom acquiring pharyngeal images with a laryngoscope is unsuitable, such as children, the elderly, or patients whose pharynx is severely inflamed, the first modeling system can be used to infer the first user's pharynx and thus obtain image information representing the first user's pharyngeal data, namely the first pharyngeal modeling image information. The first pharyngeal modeling image information provides a more accurate information basis for the spatial feature extraction of the first user.
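The sketch below illustrates S911 to S914 under stated assumptions: each frame's acquisition angle, height, and horizontal offset are mapped to a simple coordinate, frames are ordered along the capture path, and a hypothetical modeling backend receives the arranged sequence; the coordinate mapping and the `reconstruct` call are assumptions.

```python
# Sketch of S911-S914 under assumptions: build spatial positions, arrange frames,
# then hand the arranged sequence to a hypothetical first modeling system.
import math
from typing import List, Tuple


def spatial_position(angle_deg: float, height: float, horizontal: float) -> Tuple[float, float, float]:
    """Map (acquisition angle, height, horizontal offset) to a simple 3-D coordinate."""
    rad = math.radians(angle_deg)
    return (horizontal * math.cos(rad), horizontal * math.sin(rad), height)


def arrange_frames(frames: List[dict]) -> List[dict]:
    """Attach a spatial position to each frame and order frames along the capture path."""
    for f in frames:
        f["position"] = spatial_position(f["angle"], f["height"], f["horizontal"])
    # stand-in for the first arrangement instruction: order by angle, then by height
    return sorted(frames, key=lambda f: (f["angle"], f["height"]))


def build_pharyngeal_model(modeling_system, frames: List[dict]):
    """Hypothetical call into the first modeling system with the arranged frames."""
    return modeling_system.reconstruct(arrange_frames(frames))
```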
Further, with regard to obtaining the first partition historical image information from big data according to the first partition result information, step S600 includes:
S610: retrieving in big data according to the first user information to obtain first historical image information;
S620: matching the first historical image information with the first partition result to obtain a first matching result, wherein the first historical image information and the first partition result in the first matching result are in one-to-one correspondence;
S630: constructing a plurality of first historical image partition databases according to the first matching result information;
S640: performing a traversal operation on the first historical image partition databases according to the first feature information to obtain a first screening result;
S650: taking the first screening result as the first partition historical image information.
Specifically, the first historical image information refers to the pharyngeal examination image information and corresponding diagnosis result information of historical patients similar to the first user, obtained by searching a big data platform according to the first user's basic information and the first preprocessed image information. Further, the first matching result refers to the result of matching the first historical image information with each partition of the first partition result information and storing each partition together with its corresponding first historical image information. Furthermore, the first historical image partition databases refer to the information stored according to the first matching result: the first preprocessed image and the first historical image information corresponding to each partition in the first partition result are stored as one database, so a plurality of partitions yields a plurality of databases. The first screening result refers to a comparison traversal of the first historical image partition databases according to the first feature information, preferably keeping textures and chromaticities that meet preset requirements and deleting those that do not, where the preset requirements can be set independently by a doctor. Finally, the screened first historical image partition databases are used as the first partition historical image information.
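A minimal sketch of S610 to S650 follows, under assumptions: the "big data" retrieval is reduced to a lookup callable, similarity is a cosine score over feature vectors, and the screening threshold stands in for the doctor-set preset requirement.

```python
# Sketch of S610-S650 under assumptions: retrieve historical records, build one
# database per partition, and keep only records whose features pass a threshold.
import numpy as np
from typing import Callable, Dict, List


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def partition_history(
    user_info: dict,
    partition_feats: Dict[int, np.ndarray],        # partition id -> feature vector
    retrieve: Callable[[dict], List[dict]],         # hypothetical big-data retrieval
    threshold: float = 0.8,                         # stand-in for the preset requirement
) -> Dict[int, List[dict]]:
    history = retrieve(user_info)                   # first historical image information
    databases: Dict[int, List[dict]] = {pid: [] for pid in partition_feats}
    for record in history:                          # build per-partition databases
        for pid, feat in partition_feats.items():
            if cosine(record["features"], feat) >= threshold:   # traversal + screening
                databases[pid].append(record)
    return databases                                # first partition historical image information
```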
Further, with regard to inputting the first partition result information and the first partition historical image information into the first diagnosis evaluation model to obtain the first diagnosis evaluation result, step S700 includes:
S710: inputting the first partition result information and the first partition historical image information into the first diagnosis evaluation model;
S720: the first diagnosis evaluation model is obtained by training multiple groups of input data to a convergence state, wherein each group of data in the multiple groups of input data comprises the first partition result information, the first partition historical image information and identification information for identifying the pharyngeal condition of the first user;
S730: obtaining output information of the first diagnosis evaluation model, the output information including the first diagnosis evaluation result.
Specifically, the first diagnosis evaluation model is a neural network model in machine learning; a neural network reflects many essential features of human brain function and is a highly complex nonlinear dynamic learning system. It can perform continuous self-training and learning on training data, where each group in the multiple groups of training data includes the first partition result information, the first partition historical image information, and identification information identifying the pharyngeal condition of the first user. The first diagnosis evaluation model continuously corrects itself, and when its output reaches a preset accuracy rate or convergence state, the supervised learning process ends. Through data training, the first diagnosis evaluation model can process input data more accurately, so the output first diagnosis evaluation result is more accurate, achieving the technical effect of accurate data information acquisition and a more intelligent evaluation result.
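A hedged sketch of the supervised training loop described in S710 to S730: groups of (partition features, historical partition features, identified pharyngeal condition) are used until the loss stabilizes, which here stands in for the convergence state. The dataset format, loss, optimizer, and hyperparameters are assumptions; the model refers to the earlier illustrative network.

```python
# Sketch of training the illustrative evaluation network to a convergence state.
# Loader items are assumed to be (partition_feat, history_feat, label) tensors.
import torch
import torch.nn as nn


def train_to_convergence(model: nn.Module, loader, max_epochs: int = 100, tol: float = 1e-4) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    prev_loss = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for partition_feat, history_feat, label in loader:
            optimizer.zero_grad()
            logits = model(partition_feat, history_feat)
            loss = criterion(logits, label)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if abs(prev_loss - total) < tol:   # treat a stable epoch loss as convergence
            break
        prev_loss = total
    return model
```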
To sum up, the pediatric pharyngeal image acquisition method and device provided by the embodiment of the application have the following technical effects:
1. The technical solution of obtaining first user information; acquiring pharyngeal image information of the first user with the first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; obtaining first partition historical image information from big data according to the first partition result information; inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first marked image set achieves the technical effect that the image is partitioned by gradient, each partition is compared with historically similar symptoms, a diagnosis result is then obtained through intelligent model evaluation and matched with the pharyngeal image, and the resulting processed image set provides more intelligent and accurate reference information for auxiliary diagnosis.
2. The pharyngeal disease condition of the first user can be analyzed and evaluated more intuitively through the first morphological feature information, and the second partition result obtained by correcting the first partition result represents the pharyngeal disease information of the first user more comprehensively.
Example two
Based on the same inventive concept as the method for acquiring the pediatric pharyngeal image in the foregoing embodiment, as shown in fig. 3, an embodiment of the present application provides a device for acquiring the pediatric pharyngeal image, wherein the device includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain first user information;
a second obtaining unit 12, wherein the second obtaining unit 12 is configured to obtain pharyngeal image information of the first user according to a first image acquisition module;
a third obtaining unit 13, where the third obtaining unit 13 is configured to perform preprocessing on the first pharyngeal image information to obtain first preprocessed image information;
a first sending unit 14, where the first sending unit 14 is configured to upload the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction;
a first partitioning unit 15, where the first partitioning unit 15 is configured to perform gradient partitioning on the first preprocessed image information according to the first partitioning instruction information, so as to obtain first partitioning result information;
a fourth obtaining unit 16, where the fourth obtaining unit 16 is configured to obtain first partition historical image information from big data according to the first partition result information;
a fifth obtaining unit 17, where the fifth obtaining unit 17 is configured to input the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result;
a sixth obtaining unit 18, said sixth obtaining unit 18 being configured to label said first user pharyngeal image information using said first diagnostic evaluation result, obtaining a first set of labeled images.
Further, the apparatus further comprises:
a seventh obtaining unit, configured to obtain a first preprocessing standard according to the first pharyngeal image analysis platform;
the first comparison unit is used for comparing the first pharyngeal image information with the first preprocessing standard one by one, and deleting the image information which does not accord with the first preprocessing standard to obtain a first comparison result;
and the first sequencing unit is used for sequencing the first comparison result according to the acquisition time to obtain a first preprocessed image.
Further, the apparatus further comprises:
a first extraction unit, configured to perform feature extraction on the first preprocessed image to obtain first feature information, where the first feature information includes first color feature information and first texture feature information;
the first clustering unit is used for clustering and analyzing the first preprocessed image according to the first color characteristic information and the first texture characteristic information to obtain a first clustering result;
an eighth obtaining unit, configured to obtain the first partition result information according to the first clustering result.
Further, the apparatus further comprises:
the second extraction unit is used for extracting spatial features of the first preprocessed image to obtain second feature information, wherein the second feature information comprises first morphological feature information;
a ninth obtaining unit, configured to obtain a first correction instruction according to the first morphological feature information;
a tenth obtaining unit, configured to correct the first partition result information according to the first correction instruction to obtain second partition result information.
Further, the apparatus further comprises:
an eleventh obtaining unit, configured to obtain first position information of the first preprocessed image according to the first image acquisition module, where the first position information includes first acquisition angle information, first acquisition height information, and first acquisition horizontal position information.
The first construction unit is used for constructing spatial position information of the first preprocessed image by utilizing the first acquisition angle information, the first acquisition height information and the first acquisition horizontal position information to obtain a first arrangement instruction;
a twelfth obtaining unit, configured to perform spatial arrangement on the first preprocessed image information according to the first arrangement instruction and the first spatial position information, to obtain first spatial arrangement information;
a thirteenth obtaining unit, configured to input the first spatial arrangement information into the first modeling system, and obtain first pharyngeal modeling image information.
Further, the apparatus further comprises:
a fourteenth obtaining unit, configured to obtain first history image information by retrieving from big data according to the first user information;
a first matching unit, configured to match the first historical image information with the first partition result to obtain a first matching result, where the first historical image information and the first partition result in the first matching result are in one-to-one correspondence;
a second construction unit, configured to construct a plurality of first historical image partition databases according to the first matching result information;
the first traversal unit is used for performing traversal operation on the first historical image partition database according to the first characteristic information to obtain a first screening result;
a first setting unit, configured to take the first screening result as the first partition historical image information.
Further, the apparatus further comprises:
a first input unit for inputting the first partition result information and the first partition historical image information into the first diagnosis evaluation model;
a first training unit, configured to train the first diagnosis evaluation model to a convergence state through multiple groups of input data, where each group of data in the multiple groups of input data includes the first partition result information, the first partition historical image information, and identification information for identifying the pharyngeal condition of the first user;
a first output unit to obtain output information of the first diagnostic evaluation model, the output information including the first diagnostic evaluation result.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to Fig. 4.
Based on the same inventive concept as the pediatric pharyngeal image acquisition method in the foregoing embodiments, an embodiment of the present application further provides a pediatric pharyngeal image acquisition system, including: a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the method of any one of the first aspect.
The electronic device 300 includes: processor 302, communication interface 303, memory 301. Optionally, the electronic device 300 may also include a bus architecture 304. Wherein, the communication interface 303, the processor 302 and the memory 301 may be connected to each other through a bus architecture 304; the bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication interface 303 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), a wired access network, and the like.
The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through the bus architecture 304. The memory may also be integrated with the processor.
The memory 301 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 302 to execute. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement the pediatric pharyngeal image acquisition method provided by the above-mentioned embodiments of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
The embodiment of the application provides a pediatric pharyngeal image acquisition method, wherein the method is applied to medical goggles that comprise a first supplementary lighting module and a first image acquisition module, and comprises the following steps: obtaining first user information; acquiring pharyngeal image information of the first user with the first image acquisition module; preprocessing the first pharyngeal image information to obtain first preprocessed image information; uploading the first preprocessed image information to a first pharyngeal image analysis platform to obtain a first partition instruction; performing gradient partitioning on the first preprocessed image information according to the first partition instruction to obtain first partition result information; obtaining first partition historical image information from big data according to the first partition result information; inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result; and labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first marked image set. This achieves the technical effect that the image is partitioned by gradient, each partition is compared with historically similar symptoms, a diagnosis result is then obtained through intelligent model evaluation and matched with the pharyngeal image, and the resulting processed image set provides more intelligent and accurate reference information for auxiliary diagnosis.
Those of ordinary skill in the art will understand that: the various numbers of the first, second, etc. mentioned in this application are only used for the convenience of description and are not used to limit the scope of the embodiments of this application, nor to indicate the order of precedence. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations.

Claims (8)

1. A pediatric pharyngeal image acquisition method, wherein the method is applied to medical goggles comprising a first supplementary lighting module and a first image acquisition module, and the method comprises:
obtaining first user information;
acquiring pharyngeal image information of a first user according to the first image acquisition module;
preprocessing the pharyngeal image information of the first user to obtain first preprocessed image information;
uploading the first preprocessed image information to a first user pharyngeal image analysis platform to obtain a first partition instruction;
performing gradient partition on the first preprocessed image information according to the first partition instruction to obtain first partition result information;
obtaining first partition historical image information according to the first partition result information in combination with big data;
inputting the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result;
labeling the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first labeled image set;
wherein the obtaining of the first partition historical image information according to the first partition result information in combination with big data comprises:
retrieving from big data according to the first user information to obtain first historical image information;
matching the first historical image information with the first partition result information to obtain a first matching result, wherein the first historical image information and the first partition result information in the first matching result are in one-to-one correspondence;
constructing a plurality of first historical image partition databases according to the first matching result;
performing a traversal operation on the first historical image partition databases according to first feature information to obtain a first screening result;
and taking the first screening result as the first partition historical image information.
2. The method of claim 1, wherein the preprocessing of the pharyngeal image information of the first user to obtain the first preprocessed image information comprises:
obtaining a first preprocessing standard according to the first user pharyngeal image analysis platform;
comparing the pharyngeal image information of the first user with the first preprocessing standard one by one, and deleting image information which does not conform to the first preprocessing standard to obtain a first comparison result;
and sorting the first comparison result according to acquisition time to obtain the first preprocessed image.
3. The method of claim 1, wherein the performing of gradient partition on the first preprocessed image information according to the first partition instruction to obtain the first partition result information comprises:
performing feature extraction on the first preprocessed image to obtain first feature information, wherein the first feature information comprises first color feature information and first texture feature information;
performing cluster analysis on the first preprocessed image according to the first color feature information and the first texture feature information to obtain a first clustering result;
and obtaining the first partition result information according to the first clustering result.
4. The method of claim 3, wherein the method further comprises:
extracting spatial features of the first preprocessed image to obtain second feature information, wherein the second feature information comprises first appearance feature information;
obtaining a first correction instruction according to the first appearance feature information;
and correcting the first partition result information according to the first correction instruction to obtain second partition result information.
5. The method of claim 4, wherein, before the extracting of the spatial features of the first preprocessed image, the method comprises:
acquiring first position information of the first preprocessed image according to the first image acquisition module, wherein the first position information comprises first acquisition angle information, first acquisition height information, and first acquisition horizontal position information;
constructing spatial position information of the first preprocessed image by using the first acquisition angle information, the first acquisition height information and the first acquisition horizontal position information to obtain a first arrangement instruction;
performing spatial arrangement on the first preprocessed image information according to the first arrangement instruction and the spatial position information to obtain first spatial arrangement information;
and inputting the first spatial arrangement information into a first modeling system to obtain pharyngeal modeling image information of a first user.
6. The method of claim 1, wherein the inputting of the first partition result information and the first partition historical image information into the first diagnosis evaluation model to obtain the first diagnosis evaluation result comprises:
inputting the first partition result information and the first partition historical image information into the first diagnosis evaluation model;
wherein the first diagnosis evaluation model is obtained by training with a plurality of groups of input data until a convergence state is reached, and each group of data in the plurality of groups of input data comprises the first partition result information, the first partition historical image information, and identification information for identifying the pharyngeal condition of the first user;
obtaining output information of the first diagnostic evaluation model, the output information including the first diagnostic evaluation result.
7. A pediatric pharyngeal image acquisition device, wherein the device comprises:
a first obtaining unit, configured to obtain first user information;
a second obtaining unit, configured to obtain pharyngeal image information of the first user according to the first image acquisition module;
a third obtaining unit, configured to preprocess the pharyngeal image information of the first user to obtain first preprocessed image information;
a first sending unit, configured to upload the first preprocessed image information to a first user pharyngeal image analysis platform to obtain a first partition instruction;
a first partitioning unit, configured to perform gradient partition on the first preprocessed image information according to the first partition instruction to obtain first partition result information;
a fourth obtaining unit, configured to obtain first partition historical image information according to the first partition result information in combination with big data;
a fifth obtaining unit, configured to input the first partition result information and the first partition historical image information into a first diagnosis evaluation model to obtain a first diagnosis evaluation result;
a sixth obtaining unit, configured to label the pharyngeal image information of the first user with the first diagnosis evaluation result to obtain a first labeled image set;
a fourteenth obtaining unit, configured to obtain first historical image information by retrieving from big data according to the first user information;
a first matching unit, configured to match the first historical image information with the first partition result information to obtain a first matching result, wherein the first historical image information and the first partition result information in the first matching result are in one-to-one correspondence;
a second construction unit, configured to construct a plurality of first historical image partition databases according to the first matching result;
a first traversal unit, configured to perform a traversal operation on the first historical image partition databases according to first feature information to obtain a first screening result;
a first setting unit, configured to take the first screening result as the first partition historical image information.
8. A pediatric pharyngeal image acquisition system, comprising: a processor coupled with a memory, the memory being configured to store a program that, when executed by the processor, causes the system to perform the method of any one of claims 1 to 6.
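For readers who want a concrete picture of the cluster-based gradient partitioning recited in claim 3 (per-pixel colour and texture features extracted from the preprocessed image and then clustered into a partition map), the following is a short, hedged sketch. The feature definition (RGB plus a local-variance texture cue), the choice of k-means, and all parameter values are assumptions made for illustration; the claims do not prescribe a specific feature set or clustering algorithm.

```python
"""Hedged sketch of cluster-based gradient partitioning: per-pixel colour and
texture features are clustered and the label image is taken as the partition
result. Feature choices and parameters are illustrative assumptions."""
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans


def colour_texture_features(rgb: np.ndarray, window: int = 7) -> np.ndarray:
    """Stack normalized RGB values with a local-variance texture cue for every pixel."""
    gray = rgb.mean(axis=2)
    local_mean = uniform_filter(gray, size=window)
    local_var = uniform_filter(gray ** 2, size=window) - local_mean ** 2
    features = np.dstack([rgb, local_var[..., None]])          # shape (H, W, 4)
    return features.reshape(-1, features.shape[-1])            # one row per pixel


def gradient_partition(rgb: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster the per-pixel features; the reshaped label array is the partition map."""
    feats = colour_texture_features(rgb.astype(float) / 255.0)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(rgb.shape[:2])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)   # stand-in pharyngeal frame
    partition_map = gradient_partition(frame)
    print("regions found:", np.unique(partition_map))
```

In practice the resulting cluster labels could then be mapped to the anatomical regions used by the analysis platform before being compared against the historical partition images and fed to the diagnosis evaluation model.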
CN202110876510.5A 2021-07-30 2021-07-30 Pediatric pharyngeal image acquisition method and device Active CN113554641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876510.5A CN113554641B (en) 2021-07-30 2021-07-30 Pediatric pharyngeal image acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876510.5A CN113554641B (en) 2021-07-30 2021-07-30 Pediatric pharyngeal image acquisition method and device

Publications (2)

Publication Number Publication Date
CN113554641A CN113554641A (en) 2021-10-26
CN113554641B true CN113554641B (en) 2022-04-12

Family

ID=78105131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876510.5A Active CN113554641B (en) 2021-07-30 2021-07-30 Pediatric pharyngeal image acquisition method and device

Country Status (1)

Country Link
CN (1) CN113554641B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037677B (en) * 2021-11-05 2022-07-26 安徽宇呈数据技术有限公司 Can insert precious portable map acquisition equipment that charges

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9208556B2 (en) * 2010-11-26 2015-12-08 Quantitative Insights, Inc. Method, system, software and medium for advanced intelligent image analysis and display of medical images and information
US20200058125A1 (en) * 2018-08-14 2020-02-20 Mikhail Teverovskiy Comparative cancer survival models to assist physicians to choose optimal treatment
CN111428737B (en) * 2020-04-01 2024-01-19 南方科技大学 Instance retrieval method, device, server and storage medium for ophthalmic image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106029000A (en) * 2014-02-21 2016-10-12 阿克伦大学 Imaging and display system for guiding medical interventions
CN105550651A (en) * 2015-12-14 2016-05-04 中国科学院深圳先进技术研究院 Method and system for automatically analyzing panoramic image of digital pathological section
CN109154978A (en) * 2016-05-13 2019-01-04 巴斯夫欧洲公司 System and method for detecting plant disease
CN110215340A (en) * 2019-06-11 2019-09-10 重庆市中医院 A kind of dedicated goggles of the department of stomatology
CN111588345A (en) * 2020-06-18 2020-08-28 歌尔科技有限公司 Eye disease detection method, AR glasses and readable storage medium
CN112950601A (en) * 2021-03-11 2021-06-11 成都微识医疗设备有限公司 Method, system and storage medium for screening pictures for esophageal cancer model training

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Medical Image Processing Based on Mathematical Morphology; Feng Jingong; China Excellent Master's Theses Full-text Database, Information Science and Technology Series; 2009-12-15 (No. 12); pp. 1-60 *

Also Published As

Publication number Publication date
CN113554641A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN110909780B (en) Image recognition model training and image recognition method, device and system
CN109886273B (en) CMR image segmentation and classification system
WO2020151536A1 (en) Brain image segmentation method, apparatus, network device and storage medium
WO2022057078A1 (en) Real-time colonoscopy image segmentation method and device based on ensemble and knowledge distillation
WO2023077603A1 (en) Prediction system, method and apparatus for abnormal brain connectivity, and readable storage medium
US11138726B2 (en) Method, client, server and system for detecting tongue image, and tongue imager
CN109544518B (en) Method and system applied to bone maturity assessment
CN110689025A (en) Image recognition method, device and system, and endoscope image recognition method and device
CN111915584A (en) Focus follow-up assessment method and system based on CT (computed tomography) image
Zhang et al. Dual encoder fusion u-net (defu-net) for cross-manufacturer chest x-ray segmentation
CN111738302A (en) System for classifying and diagnosing Alzheimer disease based on multi-modal data
WO2022242392A1 (en) Blood vessel image classification processing method and apparatus, and device and storage medium
Li et al. Natural tongue physique identification using hybrid deep learning methods
US20240112329A1 (en) Distinguishing a Disease State from a Non-Disease State in an Image
CN113554641B (en) Pediatric pharyngeal image acquisition method and device
Manikandan et al. Segmentation and Detection of Pneumothorax using Deep Learning
CN111681247A (en) Lung lobe and lung segment segmentation model training method and device
CN111540467A (en) Schizophrenia classification identification method, operation control device and medical equipment
CN117237351B (en) Ultrasonic image analysis method and related device
CN113822323A (en) Brain scanning image identification processing method, device, equipment and storage medium
CN115115570A (en) Medical image analysis method and apparatus, computer device, and storage medium
Viscaino et al. Computer-aided ear diagnosis system based on CNN-LSTM hybrid learning framework for video otoscopy examination
CN110110750B (en) Original picture classification method and device
CN111667469A (en) Lung disease classification method, device and equipment
CN113077894A (en) System, method, apparatus and medium for skin diagnosis based on graph convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant