CN114565611A - Medical information acquisition method and related equipment - Google Patents

Medical information acquisition method and related equipment

Info

Publication number
CN114565611A
CN114565611A (application CN202210455932.XA; granted as CN114565611B)
Authority
CN
China
Prior art keywords
abnormal
image
local image
type grade
medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210455932.XA
Other languages
Chinese (zh)
Other versions
CN114565611B (en)
Inventor
于红刚 (Yu Honggang)
张丽辉 (Zhang Lihui)
卢姿桦 (Lu Zihua)
姚理文 (Yao Liwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210455932.XA priority Critical patent/CN114565611B/en
Publication of CN114565611A publication Critical patent/CN114565611A/en
Application granted granted Critical
Publication of CN114565611B publication Critical patent/CN114565611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Endoscopes (AREA)

Abstract

Embodiments of the present application provide a medical information acquisition method and related equipment. The medical information acquisition method includes: preprocessing a digestive tract endoscope video of a target object to obtain a preprocessed standard image and an abnormal local image; determining abnormal type grade information corresponding to the abnormal local image according to the abnormal local image; determining a background feature corresponding to the region in which the abnormal local image is located according to the preprocessed standard image; and obtaining medical information of the target object according to the background feature corresponding to the region in which the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, the medical information being used to assist the standardized recording of endoscopy content. The technical solution of the embodiments addresses the limitation that medical information in the prior art covers only a local region and supports only a coarse judgment, and achieves a more accurate and more comprehensive assessment of the overall environment.

Description

Medical information acquisition method and related equipment
Technical Field
The application relates to the technical field of computers and communication, and in particular to a medical information acquisition method and related equipment.
Background
An electronic digestive endoscope allows direct observation of the condition of the digestive tract mucosa, and, as an important means of detecting digestive tract diseases, a detailed endoscope record has become an important part of endoscopy. However, existing endoscope records are highly heterogeneous and difficult to standardize. Systems currently applied to gastrointestinal endoscope equipment can only roughly classify the common abnormal conditions they discover and struggle to identify and record findings comprehensively. That is, medical information in the prior art covers only a local region and supports only a coarse judgment; the overall environment of the digestive tract cannot be assessed and recorded accurately and comprehensively.
Disclosure of Invention
Embodiments of the present application provide a medical information acquisition method and related equipment, which can overcome, at least to a certain extent, the problem that medical information in the prior art covers only a local region, supports only a coarse judgment, and offers no accurate and comprehensive assessment of the overall environment.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a medical information acquisition method including: preprocessing a digestive tract endoscope video of a target object to obtain a preprocessed standard image and an abnormal local image; determining abnormal type grade information corresponding to the abnormal local image according to the abnormal local image; determining a background feature corresponding to the region in which the abnormal local image is located according to the preprocessed standard image; and obtaining medical information of the target object according to the background feature corresponding to the region in which the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, the medical information being used to assist the standardized recording of endoscopy content.
In an embodiment of the present application, preprocessing the digestive tract endoscope video of the target object to obtain the preprocessed standard image and the abnormal local image specifically includes: splitting the digestive tract endoscope video to obtain original digestive tract endoscope images; correcting the original digestive tract endoscope images to obtain the preprocessed standard image; determining the position of an abnormality according to the preprocessed standard image; and cropping the abnormal local image from the original digestive tract endoscope image according to the position of the abnormality.
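As a concrete illustration of the four sub-steps above, the following NumPy-only sketch implements them end to end. All function names, the nearest-neighbour resize, and the brightness-threshold "detector" are illustrative stand-ins, not from the patent; in particular, the patent's abnormality detection model is a trained model, not a threshold rule.

```python
import numpy as np

def split_video(video):
    """Split the video, given as an array of shape (T, H, W, 3), into frames."""
    return list(video)

def correct_frame(frame, target=(720, 1280)):
    """Crop left/right to the target aspect ratio, then subsample.

    Nearest-neighbour index sampling stands in for a proper resize so the
    sketch has no dependency beyond NumPy.
    """
    th, tw = target
    h, w = frame.shape[:2]
    new_w = h * tw // th                         # width at the target aspect ratio
    off = (w - new_w) // 2                       # trim both sides equally
    cropped = frame[:, off:off + new_w]
    rows = np.arange(th) * h // th               # nearest-neighbour row indices
    cols = np.arange(tw) * new_w // tw           # nearest-neighbour column indices
    return cropped[rows][:, cols]

def locate_abnormality(std_frame):
    """Stand-in detector: a bounding box (y0, y1, x0, x1), or None if clean."""
    mask = std_frame[..., 0] > 200               # toy rule, not a real model
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

def preprocess(video):
    """Return (preprocessed standard frames, abnormal local images)."""
    standard, locals_ = [], []
    for frame in split_video(video):
        std = correct_frame(frame)
        standard.append(std)
        box = locate_abnormality(std)
        if box is not None:
            y0, y1, x0, x1 = box
            # The patent crops from the *original* frame; cropping the standard
            # frame here keeps the coordinate mapping out of the sketch.
            locals_.append(std[y0:y1, x0:x1])
    return standard, locals_
```

For a 1080-pixel-high frame the crop width works out to 1920, matching the numeric example given later in the description.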
In an embodiment of the present application, determining the abnormal type grade corresponding to the abnormal local image according to the abnormal local image specifically includes: sequentially inputting the abnormal local image into a plurality of abnormal feature recognition models, each abnormal feature recognition model outputting whether the corresponding abnormal feature is present in the abnormal local image; fitting the abnormality type corresponding to the abnormal local image according to the abnormal features present in it; and determining the abnormal type grade corresponding to the abnormal local image according to the abnormal features present in the abnormal local image and the abnormality type corresponding to it.
In an embodiment of the present application, the training method of an abnormal feature recognition model is as follows: acquire a sample set of abnormal local images that contain the corresponding abnormal feature and abnormal local images that do not, each sample being labeled in advance as containing the corresponding abnormal feature or not; input the data of each sample into the abnormal feature recognition model to obtain its judgment of whether the corresponding abnormal feature is present; if, for some sample, the judgment output by the model is inconsistent with the sample's pre-assigned label, adjust the coefficients of the abnormal feature recognition model until the two are consistent; training is finished when, for the data of every sample input into the abnormal feature recognition model, the judgment obtained is consistent with the sample's pre-assigned label.
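The "adjust the coefficients until every judgment matches its pre-assigned label" procedure above can be sketched with a perceptron standing in for the (unspecified) abnormal feature recognition model. The feature vectors and labels below are made-up toy data, not endoscopic features, and the stopping rule mirrors the patent's "train until all judgments agree" criterion.

```python
import numpy as np

def train_until_consistent(samples, labels, max_epochs=1000):
    """Return weights w such that sign(w @ [x, 1]) matches every label.

    Only terminates when the data is separable, matching the patent's
    stopping rule of training until all judgments agree with the labels.
    """
    x = np.hstack([samples, np.ones((len(samples), 1))])   # append bias term
    y = np.where(np.asarray(labels), 1.0, -1.0)            # True -> +1, False -> -1
    w = np.zeros(x.shape[1])
    for _ in range(max_epochs):
        consistent = True
        for xi, yi in zip(x, y):
            if yi * (w @ xi) <= 0:      # model's judgment disagrees with label
                w += yi * xi            # adjust the coefficients
                consistent = False
        if consistent:                  # every judgment matches: training done
            return w
    raise RuntimeError("did not converge; data may not be separable")

def predict(w, sample):
    """Model's judgment: does this sample contain the abnormal feature?"""
    return float(np.append(sample, 1.0) @ w) > 0
```

In practice the recognition model would be a trained image classifier; the loop above only illustrates the label-consistency stopping criterion.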
In an embodiment of the present application, determining the abnormal type grade corresponding to the abnormal local image according to the abnormal features present in it and the abnormality type corresponding to it specifically includes: inputting the abnormal features of the abnormal local image and the abnormality type corresponding to it into a type grade judgment model, the type grade judgment model outputting the corresponding abnormal type grade.
In an embodiment of the present application, the training method of the type grade judgment model is as follows: acquire a medical atlas sample set, each medical atlas sample comprising its abnormal features and abnormality type and being labeled in advance with the corresponding abnormal type grade information; input the data of each medical atlas sample into the type grade judgment model to obtain the abnormal type grade information it outputs; if, for some medical atlas sample, the abnormal type grade obtained is inconsistent with the pre-assigned abnormal type grade information, adjust the coefficients of the type grade judgment model until the two are consistent; training is finished when, for the data of every sample input into the type grade judgment model, the abnormal type grade obtained is consistent with the pre-assigned abnormal type grade information.
In an embodiment of the present application, determining, according to the preprocessed standard image, the background feature corresponding to the region in which the abnormal local image is located specifically includes: inputting the preprocessed standard image into a region discrimination model, the region discrimination model outputting the corresponding region and a region overview image for each region; and inputting the region overview image into a background feature recognition model, the background feature recognition model outputting the background feature corresponding to the region overview image.
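A minimal sketch of this two-stage flow follows. The region labels and both "models" are stand-ins (simple rules over mean brightness); the patent does not disclose the model architectures, the set of regions, or how overview images are formed, so the pixel-wise mean used here is only an assumption.

```python
import numpy as np

def region_discrimination(std_frames):
    """Stage 1: assign each standard frame to a region and build one
    overview image per region (here: the pixel-wise mean of its frames)."""
    groups = {}
    for f in std_frames:
        region = "region_a" if f.mean() < 128 else "region_b"  # toy rule
        groups.setdefault(region, []).append(f)
    return {r: np.mean(fs, axis=0) for r, fs in groups.items()}

def background_feature(overview):
    """Stage 2: extract a background feature vector from an overview image
    (here: the per-channel mean, a stand-in for a learned feature)."""
    return overview.reshape(-1, overview.shape[-1]).mean(axis=0)

def region_background_features(std_frames):
    """Chain the two stages: region -> overview -> background feature."""
    overviews = region_discrimination(std_frames)
    return {r: background_feature(o) for r, o in overviews.items()}
```

The point of the structure is that the background feature is computed per region rather than per frame, so a local abnormality can later be reported against the overall appearance of the region it sits in.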
According to an aspect of an embodiment of the present application, there is provided a medical information acquisition system including: a preprocessing module for preprocessing the digestive tract endoscope video of the target object to obtain a preprocessed standard image and an abnormal local image; an anomaly detection module for determining the abnormal type grade information corresponding to the abnormal local image according to the abnormal local image; a region detection module for determining the background feature corresponding to the region in which the abnormal local image is located according to the preprocessed standard image; and a medical information module for obtaining the medical information of the target object according to the background feature corresponding to the region in which the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, the medical information being used to assist the standardized recording of endoscopy content.
In an embodiment of the present application, the preprocessing module specifically includes: a splitting submodule for splitting the digestive tract endoscope video to obtain original digestive tract endoscope images; a correction submodule for correcting the original digestive tract endoscope images to obtain the preprocessed standard image; a positioning submodule for determining the position of an abnormality according to the preprocessed standard image; and a screenshot submodule for cropping the abnormal local image from the original digestive tract endoscope image according to the position of the abnormality.
In an embodiment of the present application, the anomaly detection module specifically includes: a judgment submodule for sequentially inputting the abnormal local image into a plurality of abnormal feature recognition models, each abnormal feature recognition model outputting whether the corresponding abnormal feature is present in the abnormal local image; a fitting submodule for fitting the abnormality type corresponding to the abnormal local image according to the abnormal features present in it; and a description submodule for determining the abnormal type grade corresponding to the abnormal local image according to the abnormal features present in the abnormal local image and the abnormality type corresponding to it.
In an embodiment of the present application, the training method of an abnormal feature recognition model is as follows: acquire a sample set of abnormal local images that contain the corresponding abnormal feature and abnormal local images that do not, each sample being labeled in advance as containing the corresponding abnormal feature or not; input the data of each sample into the abnormal feature recognition model to obtain its judgment of whether the corresponding abnormal feature is present; if, for some sample, the judgment output by the model is inconsistent with the sample's pre-assigned label, adjust the coefficients of the abnormal feature recognition model until the two are consistent; training is finished when, for the data of every sample, the judgment obtained is consistent with the sample's pre-assigned label.
In an embodiment of the application, the description submodule is specifically configured to perform the following step: input the abnormal features of the abnormal local image and the abnormality type corresponding to it into a type grade judgment model, the type grade judgment model outputting the corresponding abnormal type grade.
In an embodiment of the present application, the training method of the type grade judgment model is as follows: acquire a medical atlas sample set, each sample being labeled in advance with the corresponding abnormal type grade; input the data of each sample into the type grade judgment model to obtain the abnormal type grade it outputs; if, for some sample, the abnormal type grade obtained is inconsistent with the sample's pre-assigned abnormal type grade, adjust the coefficients of the type grade judgment model until the two are consistent; training is finished when, for the data of every sample, the abnormal type grade obtained is consistent with the pre-assigned abnormal type grade.
In an embodiment of the present application, the region detection module specifically includes: a marking submodule for inputting the preprocessed standard image into a region discrimination model, the region discrimination model outputting the corresponding region and a region overview image for each region; and an extraction submodule for inputting the region overview image into a background feature recognition model, the background feature recognition model outputting the background feature corresponding to the region overview image.
According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, which, when executed by a processor, implements a medical information acquisition method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the medical information acquisition method as described in the above embodiments.
In the technical solutions provided by some embodiments of the application, the overall background feature of the region in which the abnormality is located and the abnormal type grade of the local abnormality are obtained by separately recognizing and processing the preprocessed standard image and the abnormal local image, and combining the two yields medical information that is both wide in coverage and accurate, thereby solving the problem that medical information in the prior art covers only a local region and supports only a coarse judgment rather than an accurate and comprehensive assessment of the overall environment.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
Fig. 2 schematically shows a flow chart of a medical information acquisition method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a specific implementation of step S100 in the method for acquiring medical information according to the corresponding embodiment in fig. 2.
Fig. 4 is a flowchart illustrating a specific implementation of step S200 in the method for acquiring medical information according to the corresponding embodiment in fig. 2.
Fig. 5 is a flowchart illustrating a specific implementation of step S300 in the method for acquiring medical information according to the corresponding embodiment in fig. 2.
Fig. 6 schematically shows a block diagram of a medical information acquisition system according to an embodiment of the present application.
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the embodiments of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types such as wired communication links, wireless communication links, and the like.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user may use a terminal device to interact with the server 105 over the network 104 to receive or send messages and the like. The server 105 may be a server that provides various services. For example, a user uploads a digestive tract endoscope video to the server 105 using a terminal device; the server 105 preprocesses the video to obtain a preprocessed standard image and an abnormal local image, determines the abnormal type grade corresponding to the abnormal local image according to the abnormal local image, determines the background feature corresponding to the region in which the abnormal local image is located according to the preprocessed standard image, and finally obtains the medical information according to the background feature and the abnormal type grade.
It should be noted that the medical information acquisition method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the medical information acquisition system is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the scheme of medical information acquisition provided by the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 shows a flow diagram of a medical information acquisition method according to an embodiment of the present application, which may be performed by a server, which may be the server shown in fig. 1. Referring to fig. 2, the medical information acquisition method at least includes:
and S100, preprocessing the gastrointestinal endoscope video of the target object to obtain a preprocessed standard image and an abnormal local image.
And S200, determining the abnormal type grade corresponding to the abnormal local image according to the abnormal local image.
And step S300, determining the background characteristics corresponding to the area where the abnormal local image is located according to the preprocessing standard image.
And S400, obtaining medical information of the target object according to the background characteristics corresponding to the area where the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, wherein the medical information is used for assisting in standardized recording of endoscopy content.
In the embodiment of the application, the digestive tract endoscope video is first preprocessed and converted into standard images meeting certain specification requirements, namely the preprocessed standard images; abnormalities are then searched for, located, and cropped to obtain the abnormal local image; the abnormal type grade corresponding to the abnormal local image is determined according to the abnormal local image; the background feature corresponding to the region in which the abnormal local image is located is then determined according to the preprocessed standard image; and finally, the medical information of the target object is obtained according to the background feature and the abnormal type grade information, the medical information being used to assist the standardized recording of endoscopy content.
In the embodiment of the present application, an abnormality refers to a condition clearly distinguishable from a normal digestive tract endoscope video, for example, a position whose color or shape differs from the normal appearance, or a position where mucosal damage is present.
Step S200 and step S300 may be performed concurrently or sequentially; this is not limited in the present disclosure.
According to the embodiments of the application, the overall background feature of the region in which the abnormality is located and the abnormal type grade of the local abnormality are obtained by separately recognizing and processing the preprocessed standard image and the abnormal local image, and combining the two yields medical information that is both wide in coverage and accurate, thereby solving the problem that medical information in the prior art can only judge a local region coarsely and cannot assess the overall environment accurately and comprehensively.
In step S100, the preprocessed standard image may be obtained by standardizing the frames obtained by splitting the digestive tract endoscope video, and the abnormal local image may be obtained by cropping the image of the abnormal region from the digestive tract endoscope video.
Specifically, in some embodiments, an implementation of step S100 is shown in fig. 3, which details step S100 of the medical information acquisition method according to the embodiment corresponding to fig. 2. As shown in fig. 3, step S100 may include the following steps:
and step S110, splitting the digestive tract endoscope video to obtain an original digestive tract endoscope image.
And step S120, correcting the original image of the digestive tract endoscope to obtain the preprocessing standard image.
And step S130, determining the position of the abnormality according to the preprocessing standard image.
And S140, intercepting the abnormal local image on the original image of the digestive tract endoscope according to the abnormal position.
In this embodiment, the digestive tract endoscope video is first split into a plurality of consecutive frames of original digestive tract endoscope images; the original images are then corrected according to the specification requirements to become preprocessed standard images; the position of the abnormality is determined according to the preprocessed standard image; and finally, the abnormal local image is cropped from the corresponding original digestive tract endoscope image according to the position of the abnormality.
In step S110, the digestive tract endoscope video is split into frames, each frame being one original digestive tract endoscope image.
In step S120, the original digestive tract endoscope image is corrected by scaling, cropping, and padding it according to the specification requirements (such as size and pixel count) of the preprocessed standard image.
For example, in one embodiment, the pixel size of the original digestive tract endoscope image is 2048 × 1080, while the specification of the preprocessed standard image requires a pixel size of 1280 × 720. The left and right sides of the original image can be cropped to trim it to 1920 × 1080, and the image is then compressed proportionally to 1280 × 720 to form a preprocessed standard image meeting the requirement.
In another embodiment, with the same 2048 × 1080 original image and the same 1280 × 720 requirement, borders can instead be added to the top and bottom of the original image; to keep the original image centered, each border has a pixel size of 2048 × 36, giving a 2048 × 1152 image, which is then compressed proportionally to 1280 × 720 to form a preprocessed standard image meeting the requirement.
Compared with the previous embodiment, this approach does not crop the original digestive tract endoscope image, so no information is lost at the image edges and abnormalities near the edges can still be identified.
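As an illustrative, non-limiting sketch, the two correction strategies above (center cropping versus border padding, each followed by proportional scaling) could be implemented as follows. The function names and the nearest-neighbour scaling are our own assumptions; a production system would more likely use a library resampler such as OpenCV.

```python
import numpy as np

def scale_nearest(img, out_h, out_w):
    """Proportionally rescale an H x W x C image with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess_by_cropping(img, out_w=1280, out_h=720):
    """Trim the left/right sides to the target aspect ratio, then scale down."""
    h, w = img.shape[:2]
    target_w = h * out_w // out_h          # e.g. 1080 * 1280 / 720 = 1920
    left = (w - target_w) // 2
    cropped = img[:, left:left + target_w]
    return scale_nearest(cropped, out_h, out_w)

def preprocess_by_padding(img, out_w=1280, out_h=720):
    """Add equal top/bottom borders so no edge information is lost, then scale."""
    h, w = img.shape[:2]
    target_h = w * out_h // out_w          # e.g. 2048 * 720 / 1280 = 1152
    pad = (target_h - h) // 2              # 36 rows per border for a 2048 x 1080 frame
    padded = np.pad(img, ((pad, pad), (0, 0), (0, 0)), mode="constant")
    return scale_nearest(padded, out_h, out_w)
```

Both paths produce a 1280 × 720 image from a 2048 × 1080 frame, matching the pixel arithmetic in the two embodiments above.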
In step S130, the abnormality position is determined by inputting the preprocessed standard image into an abnormality detection model, which locates the abnormality.

Specifically, the preprocessed standard image is input into the abnormality detection model; for an image without an abnormality the model outputs no abnormality, and for an image with an abnormality the model outputs the position of the abnormality.
The training method of the anomaly detection model comprises the following steps:
acquiring a sample set of local images, some containing abnormalities and some not, wherein each abnormality-free sample is labeled in advance as containing no abnormality, and each abnormal local image is labeled in advance with the coordinates of its abnormality;

inputting the data of each sample into the abnormality detection model to obtain the model's judgment of whether the sample contains an abnormality and, if so, the abnormality coordinates;

if the judgment obtained after a sample is input into the abnormality detection model is inconsistent with the sample's pre-assigned label, adjusting the model's coefficients until the judgment is consistent with the label;

and when the judgments obtained for all samples are consistent with their pre-assigned labels, the training is finished.
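The "adjust until consistent" loop described above can be pictured with a deliberately simple stand-in: a perceptron whose coefficients are nudged whenever its judgment disagrees with a sample's pre-assigned label. This is only a sketch of the training idea on synthetic data; the patent does not specify the detection model, which in practice would be a neural network.

```python
import numpy as np

# Synthetic "pre-calibrated" samples: 2-D points labeled by which side of a
# line they fall on, kept away from the boundary so training can converge.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 2))
X = raw[np.abs(raw.sum(axis=1)) > 0.5][:40]
y = (X.sum(axis=1) > 0).astype(int)   # the pre-assigned calibration

w = np.zeros(2)
b = 0.0
for _ in range(500):                  # repeated passes over the sample set
    mistakes = 0
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)    # the model's current judgment
        if pred != yi:                # inconsistent with the calibration
            w += (yi - pred) * xi     # adjust the coefficients
            b += (yi - pred)
            mistakes += 1
    if mistakes == 0:                 # all judgments consistent: training done
        break

preds = (X @ w + b > 0).astype(int)
```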
In step S140, after the abnormality position has been determined, the abnormal local image is obtained by cropping the image at the abnormality position from the original digestive tract endoscope image corresponding to the preprocessed standard image, that is, the original image from which the preprocessed standard image was derived. The preprocessed standard image and its corresponding original digestive tract endoscope image carry association marks linking them.
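One way to realize this correspondence, assuming the border-padding preprocessing variant described earlier and abnormality positions expressed as (x1, y1, x2, y2) pixel boxes, is to invert the scale-and-pad transform before cropping from the original frame. All names and the box format here are illustrative assumptions:

```python
def box_to_original(box, orig_w=2048, orig_h=1080, std_w=1280, std_h=720):
    """Map a box detected on the padded-and-scaled preprocessed standard
    image back onto the original endoscope frame."""
    scale = orig_w / std_w                 # 2048 / 1280 = 1.6
    pad = (std_h * scale - orig_h) / 2     # the 36-pixel top/bottom border
    x1, y1, x2, y2 = box
    # undo the proportional scaling, then the vertical padding
    ox1, ox2 = x1 * scale, x2 * scale
    oy1, oy2 = y1 * scale - pad, y2 * scale - pad
    # clamp in case the box touches a padded border
    clamp = lambda v, hi: max(0.0, min(v, hi))
    return (clamp(ox1, orig_w), clamp(oy1, orig_h),
            clamp(ox2, orig_w), clamp(oy2, orig_h))
```

A box covering the whole usable area of the 1280 × 720 standard image maps back to the full 2048 × 1080 original frame.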
In step S200, the determination may be made by combining various images or indices according to characteristics of the abnormal region in the abnormal local image, such as its color, shape, and pattern, or by a machine learning model; the abnormal type grade finally generated is more precise and detailed than in the prior art.
Specifically, in some embodiments, reference may be made to fig. 4 for a specific implementation of step S200. Fig. 4 is a detailed description of step S200 in the medical information acquisition method according to the corresponding embodiment shown in fig. 2, in which step S200 may include the following steps:
step S210, sequentially inputting the abnormal local image into a plurality of abnormal feature recognition models, and outputting, by each abnormal feature recognition model, whether the abnormal local image has a corresponding abnormal feature.
Step S220, fitting the abnormal type corresponding to the abnormal local image according to the abnormal features present in the abnormal local image.

Step S230, determining the abnormal type grade corresponding to the abnormal local image according to the abnormal features present in the abnormal local image and the abnormal type corresponding to it.
In this embodiment, the abnormalities in the abnormal local image are identified by abnormal feature recognition models: different abnormal features are identified by different recognition models, each model corresponding to one abnormal feature and judging whether that feature is present in the abnormal local image. The abnormal type corresponding to the abnormal local image is then fitted from the abnormal features found in it. Finally, the abnormal features and the abnormal type are combined to obtain the abnormal type grade corresponding to the abnormal local image.

In step S210, abnormal feature identification and extraction proceed as follows: the same abnormal local image is input into the different abnormal feature recognition models, each of which recognizes one abnormal feature. For example, the model for recognizing fur only judges whether the abnormal local image contains the fur feature, and the model for recognizing hemorrhage only judges whether it contains the hemorrhage feature. In this step, the abnormal features contained in the abnormal local image can therefore be determined from the judgments of the individual recognition models. The more recognition models the image is input into, the more comprehensive the determined set of abnormal features, and the more comprehensive and accurate the subsequently generated abnormal type grade and the final medical information.
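A minimal sketch of this one-model-per-feature arrangement, with stub recognizers standing in for trained networks (the feature names and the dictionary interface are our own assumptions; real models would consume image tensors):

```python
def detect_features(abnormal_image, recognizers):
    """Run every single-feature recognition model on the same abnormal local
    image; the features present are the union of the positive answers."""
    return {name for name, model in recognizers.items() if model(abnormal_image)}

# Stub models, each answering one yes/no question about the image:
recognizers = {
    "fur":            lambda img: img.get("has_fur", False),
    "hemorrhage":     lambda img: img.get("has_hemorrhage", False),
    "mucosal_defect": lambda img: img.get("has_defect", False),
}

features = detect_features({"has_hemorrhage": True, "has_defect": True}, recognizers)
```

Adding a recognizer to the dictionary extends the set of detectable features without touching the others, which mirrors the text's point that more models yield a more comprehensive feature set.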
The training method of the abnormal feature recognition model comprises: acquiring a sample set of abnormal local images that do and do not contain the corresponding abnormal feature, wherein each sample is labeled in advance as containing or not containing that feature; inputting the data of each sample into the abnormal feature recognition model to obtain the model's judgment of whether the corresponding abnormal feature is present; if the judgment obtained after a sample is input into the model is inconsistent with the sample's pre-assigned label, adjusting the model's coefficients until the judgment is consistent with the label; and when the judgments obtained for all samples are consistent with their pre-assigned labels, the training is finished.
For example, if an abnormal local image contains the mucosal defect feature and the hemorrhage feature but not the fur feature, the training is verified when the model for recognizing fur outputs that the fur feature is absent, the model for recognizing hemorrhage outputs that the hemorrhage feature is present, and the model for recognizing mucosal defects outputs that the mucosal defect feature is present.
In step S220, the corresponding abnormal type can be matched according to the abnormal features determined in step S210 to be present in the abnormal local image and those determined to be absent. The more abnormal feature recognition models used in step S210, the more accurate the abnormal type determined in this step.
In step S230, the abnormal type grade is formed by combining the abnormal features with the abnormal type. It may be generated either by directly querying a medical atlas according to the abnormal type and the abnormal features, or by a machine learning model trained on the medical atlas.
Specifically, in some embodiments, step S230 may include the following steps:
The abnormal features present in the abnormal local image and the abnormal type corresponding to the abnormal local image are input into a type grade judgment model, which outputs the corresponding abnormal type grade.

In this embodiment, the abnormal type grade is generated as follows: the type grade judgment model is trained on the medical atlas, so that it can accurately generate the corresponding abnormal type grade from the abnormal type and the abnormal features.

The training method of the type grade judgment model comprises: acquiring a medical atlas sample set, wherein each medical atlas sample contains its abnormal features and abnormal type, and each sample is labeled in advance with the corresponding abnormal type grade information; inputting the data of each sample into the type grade judgment model to obtain the abnormal type grade information it outputs; if the abnormal type grade obtained after a sample is input into the model is inconsistent with the sample's pre-assigned abnormal type grade information, adjusting the model's coefficients until they are consistent; and when the abnormal type grades obtained for all samples are consistent with their pre-assigned abnormal type grade information, the training is finished.

In this embodiment, the medical atlas serves as the training sample for the type grade judgment model; the atlas records the abnormal type and abnormal features it contains together with a calibrated abnormal type grade. During training, the model determines an abnormal type grade from the abnormal type and features recorded in a sample and compares it with the calibrated grade, and the model is adjusted continuously until it converges.
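The atlas-lookup alternative mentioned in step S230 can be pictured as a table keyed by abnormal type and feature set. The entries below are invented purely for illustration and carry no clinical meaning:

```python
# Hypothetical medical-atlas lookup table standing in for the trained
# type grade judgment model: (abnormal type, feature set) -> grade.
ATLAS = {
    ("ulcer", frozenset({"fur", "mucosal_defect"})): "A1",
    ("ulcer", frozenset({"mucosal_defect", "hemorrhage"})): "A2",
}

def grade_of(abnormal_type, features):
    """Return the abnormal type grade recorded in the atlas, if any."""
    return ATLAS.get((abnormal_type, frozenset(features)))
```

A trained judgment model generalizes beyond such a table to feature combinations the atlas does not record verbatim, which is presumably why the patent offers both options.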
In the embodiment of the present application, in addition to the microscopic detection of local abnormalities in step S200, step S300 also macroscopically examines the region where each abnormality is located and determines its macroscopic background features, so as to analyze the digestive tract endoscope video more fully.
Specifically, in some embodiments, reference may be made to fig. 5 for a specific implementation of step S300. Fig. 5 is a detailed description of step S300 in the medical information acquisition method according to the corresponding embodiment shown in fig. 2, wherein step S300 may include the following steps:
step S310, inputting the preprocessed standard image into a region discrimination model, which outputs the corresponding region and a region overview image for each region;

step S320, inputting the region overview image into a background feature recognition model, which outputs the background features corresponding to the region overview image.
In this embodiment, the macroscopic detection of the region where an abnormality is located first identifies the region corresponding to each preprocessed standard image, and then screens out, from the preprocessed standard images corresponding to each region, one image that may contain all of the region's background information as that region's overview image. Background features are then extracted and analyzed from the region overview image, so that a systematic comprehensive judgment can be made by combining them with the abnormal type grades of the abnormalities in the region to obtain the final medical information.
The region in this embodiment refers to the alimentary canal region divided according to medical standards, such as the esophagus, the cardia, the posterior wall of the antrum, the duodenal bulb, and the like.
In step S310, the preprocessed standard image is identified by a region discrimination model so as to identify its region accurately.
The training method of the region discrimination model comprises: acquiring a preprocessed standard image sample set, wherein each sample is labeled in advance with its corresponding region; inputting the data of each sample into the region discrimination model to obtain the region name it outputs; if the region name obtained after a sample is input into the model is inconsistent with the sample's pre-assigned region name, adjusting the model's coefficients until they are consistent; and when the region names obtained for all samples are consistent with their pre-assigned region names, the training is finished.
After step S310, each preprocessed standard image is marked with its corresponding region, and one region overview image representing each region is screened out. Each abnormal local image can then be associated with a region overview image through the region of the preprocessed standard image from which it was derived, which makes it convenient in step S400 to combine the macroscopic background with the microscopic abnormalities to obtain comprehensive and accurate medical information.
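One plausible way to screen an overview image per region, assuming each frame carries a predicted region label and some coverage score (the scoring criterion is our assumption; the patent only requires that the overview image contain the region's background information):

```python
def pick_overviews(frames):
    """frames: list of dicts with 'region', 'coverage', and 'frame_id' keys.
    Keep, per region, the frame with the highest coverage score."""
    best = {}
    for f in frames:
        cur = best.get(f["region"])
        if cur is None or f["coverage"] > cur["coverage"]:
            best[f["region"]] = f
    return {region: f["frame_id"] for region, f in best.items()}

overviews = pick_overviews([
    {"region": "antrum", "coverage": 0.4, "frame_id": 10},
    {"region": "antrum", "coverage": 0.9, "frame_id": 17},
    {"region": "cardia", "coverage": 0.7, "frame_id": 3},
])
```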
In step S320, the background features are identified and extracted by a background feature recognition model, which may analyze the color, contour, and shape of each part of the region overview image to extract the relevant background features. Among others, the background features may include normal, redness, intestinal metaplasia, and atrophy.
The training method of the background feature recognition model comprises: acquiring a region overview image sample set, wherein each sample is labeled in advance with its corresponding background features; inputting the data of each sample into the background feature recognition model to obtain the background features it outputs; if the background features obtained after a sample is input into the model are inconsistent with the sample's pre-assigned background features, adjusting the model's coefficients until they are consistent; and when the background features obtained for all samples are consistent with their pre-assigned background features, the training is finished.
After step S320, the background features serving as the macroscopic features of the region are obtained, and can be combined with the abnormal features of the corresponding region to obtain more comprehensive medical information.
In step S400, comprehensive and accurate medical information can be obtained by comprehensively analyzing the background features of each region together with the abnormal type grades of its abnormalities; the medical information thus includes not only microscopic information about the abnormalities in a region but also macroscopic information about the region's background, and is therefore more comprehensive and accurate intermediate information.
The microscopic information in the embodiments of the present application is local information about an abnormality, such as local elevation, depression, redness, flushing, bleeding, and white fur, together with the corresponding abnormal type.
In the embodiments of the present application, the macroscopic information is background information about the whole region, such as the region's size, its activity state, and its color state.
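Combining the macroscopic background features with the microscopic abnormal type grades into one record, as step S400 describes, might look like the following sketch (the field names are illustrative assumptions):

```python
def build_medical_info(backgrounds, anomalies):
    """backgrounds: {region: background feature};
    anomalies: list of (region, abnormal type grade) pairs.
    Merge both into one per-region medical-information record."""
    info = {r: {"background": bg, "abnormalities": []}
            for r, bg in backgrounds.items()}
    for region, grade in anomalies:
        # a region may carry abnormalities even without a known background
        info.setdefault(region, {"background": None, "abnormalities": []})
        info[region]["abnormalities"].append(grade)
    return info

record = build_medical_info(
    {"antrum": "atrophy", "cardia": "normal"},
    [("antrum", "ulcer-A1")],
)
```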
Embodiments of the apparatus of the present application are described below, which may be used to perform the medical information acquisition method in the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the medical information obtaining method described above in the present application.
Fig. 6 shows a block diagram of a medical information acquisition system according to an embodiment of the present application.
Referring to fig. 6, a medical information acquisition system 900 according to an embodiment of the present application includes: the preprocessing module 910 is configured to preprocess an endoscope video of a digestive tract of a target object to obtain a preprocessed standard image and an abnormal local image; an anomaly detection module 920, configured to determine, according to the abnormal local image, anomaly type level information corresponding to the abnormal local image; a region detection module 930, configured to determine, according to the pre-processing standard image, a background feature corresponding to a region where the abnormal local image is located; the medical information module 940 is configured to obtain medical information of the target object according to the background features corresponding to the region where the abnormal local image is located and the abnormal type level information corresponding to the abnormal local image, where the medical information is used to assist in standardized recording of endoscopy examination content.
In an embodiment of the present application, the preprocessing module specifically includes: the splitting submodule is used for splitting the digestive tract endoscope video to obtain an original digestive tract endoscope image; the correction submodule is used for correcting the original image of the digestive tract endoscope to obtain the preprocessing standard image; the positioning submodule is used for determining the position of an abnormality according to the preprocessing standard image; and the screenshot submodule is used for intercepting the abnormal local image on the original image of the digestive tract endoscope according to the position of the abnormality.
In an embodiment of the present application, the anomaly detection module specifically includes: the judging submodule is used for sequentially inputting the abnormal local image into a plurality of abnormal feature recognition models, each of which outputs whether the abnormal local image has the corresponding abnormal feature; the fitting submodule is used for fitting the abnormal type corresponding to the abnormal local image according to the abnormal features present in it; and the description submodule is used for determining the abnormal type grade corresponding to the abnormal local image according to the abnormal features present in it and its abnormal type.
In an embodiment of the present application, the training method of the abnormal feature recognition model is: acquiring a sample set of abnormal local images that do and do not contain the corresponding abnormal feature, wherein each sample is labeled in advance as containing or not containing that feature; inputting the data of each sample into the abnormal feature recognition model to obtain the model's judgment of whether the corresponding abnormal feature is present; if the judgment obtained after a sample is input into the model is inconsistent with the sample's pre-assigned label, adjusting the model's coefficients until the judgment is consistent with the label; and when the judgments obtained for all samples are consistent with their pre-assigned labels, the training is finished.
In an embodiment of the application, the description submodule is specifically configured to perform the following step: inputting the abnormal features present in the abnormal local image and the abnormal type corresponding to the abnormal local image into a type grade judgment model, which outputs the corresponding abnormal type grade.
In an embodiment of the present application, the training method of the type grade judgment model is: acquiring a medical atlas sample set, wherein each sample is labeled in advance with its corresponding abnormal type grade; inputting the data of each sample into the type grade judgment model to obtain the abnormal type grade it outputs; if the abnormal type grade obtained after a sample is input into the model is inconsistent with the sample's pre-assigned abnormal type grade, adjusting the model's coefficients until they are consistent; and when the abnormal type grades obtained for all samples are consistent with their pre-assigned abnormal type grades, the training is finished.
In an embodiment of the present application, the region detection module specifically includes: the marking submodule is used for inputting the preprocessed standard image into a region discrimination model, which outputs the corresponding region and a region overview image for each region; and the extraction submodule is used for inputting the region overview image into a background feature recognition model, which outputs the background features corresponding to the region overview image.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system of the electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system includes a Central Processing Unit (CPU) 1801, which can perform various appropriate actions and processes, such as executing the method described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data necessary for system operation are also stored. The CPU 1801, ROM 1802, and RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to bus 1804.
The following components are connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output section 1807 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1808 including a hard disk and the like; and a communication section 1809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. A driver 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1810 as necessary, so that a computer program read out therefrom is mounted in the storage portion 1808 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable media 1811. The computer program executes various functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 1801.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A medical information acquisition method characterized by comprising:
preprocessing a digestive tract endoscope video of a target object to obtain a preprocessed standard image and an abnormal local image;
determining abnormal type grade information corresponding to the abnormal local image according to the abnormal local image;
determining a background feature corresponding to the region where the abnormal local image is located according to the preprocessed standard image;
and obtaining medical information of the target object according to the background feature corresponding to the region where the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, wherein the medical information is used for assisting standardized recording of endoscopic examination content.
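The four steps recited in claim 1 can be illustrated with a minimal sketch. All function names, the brightness-threshold "anomaly" rule, and the placeholder region/grade labels below are assumptions for illustration only, not the claimed models:

```python
# A minimal, hypothetical sketch of the four steps of claim 1. The toy
# brightness rule and all labels are illustrative assumptions, not the
# patent's actual recognition models.
from typing import Dict, List, Tuple

Frame = List[List[int]]  # a grayscale frame as rows of pixel intensities

def preprocess_video(frames: List[Frame]) -> Tuple[List[Frame], List[Frame]]:
    """Step 1: obtain preprocessed standard images and abnormal local images."""
    standard = frames  # real correction (distortion, color balance) omitted
    abnormal = [f for f in frames if max(max(row) for row in f) > 200]
    return standard, abnormal

def abnormal_type_grade(crop: Frame) -> str:
    """Step 2: stand-in for the feature recognition and grade judgment models."""
    peak = max(max(row) for row in crop)
    return "ulcer/grade-II" if peak > 230 else "erosion/grade-I"

def background_feature(standard: List[Frame]) -> str:
    """Step 3: stand-in for the region discrimination / background models."""
    return "gastric antrum, smooth mucosa"  # illustrative placeholder

def medical_info(frames: List[Frame]) -> List[Dict[str, str]]:
    """Step 4: combine background features and grades into a structured record."""
    standard, abnormal = preprocess_video(frames)
    background = background_feature(standard)
    return [{"region": background, "finding": abnormal_type_grade(c)}
            for c in abnormal]

# Two normal frames and one bright "abnormal" frame
video = [[[10, 20], [30, 40]], [[50, 60], [70, 80]], [[210, 240], [5, 5]]]
report = medical_info(video)
```

Here only the third frame exceeds the toy threshold, so a single structured record (region plus graded finding) is produced, matching the shape of information claim 1 assembles.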
2. The medical information acquisition method according to claim 1, wherein the determining, according to the abnormal local image, abnormal type grade information corresponding to the abnormal local image specifically comprises:
sequentially inputting the abnormal local image into a plurality of abnormal feature recognition models, each abnormal feature recognition model outputting whether the abnormal local image has the corresponding abnormal feature;
fitting the abnormal type corresponding to the abnormal local image according to the abnormal features present in the abnormal local image;
and determining the abnormal type grade information corresponding to the abnormal local image according to the abnormal features present in the abnormal local image and the abnormal type corresponding to the abnormal local image.
3. The medical information acquisition method according to claim 2, wherein the determining, according to the abnormal features present in the abnormal local image and the abnormal type corresponding to the abnormal local image, the abnormal type grade information corresponding to the abnormal local image specifically comprises:
inputting the abnormal features present in the abnormal local image and the abnormal type corresponding to the abnormal local image into a type grade judgment model, the type grade judgment model outputting the corresponding abnormal type grade information.
4. The medical information acquisition method according to claim 3, wherein the training method of the type grade judgment model is:
acquiring a medical atlas sample set, wherein each medical atlas sample comprises its abnormal features and abnormal type, and each medical atlas sample is calibrated in advance with corresponding abnormal type grade information;
inputting the data of each medical atlas sample into the type grade judgment model respectively to obtain the abnormal type grade information output by the type grade judgment model;
if the abnormal type grade obtained after the data of a medical atlas sample is input into the type grade judgment model is inconsistent with the abnormal type grade information calibrated in advance for that sample, adjusting the coefficients of the type grade judgment model until the obtained abnormal type grade is consistent with the abnormal type grade information calibrated in advance for the sample;
and finishing the training when, for the data of all the medical atlas samples input into the type grade judgment model, the obtained abnormal type grades are consistent with the abnormal type grade information calibrated in advance for the medical atlas samples.
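Claim 4 describes an error-driven training loop: each calibrated sample is fed to the model, the model's coefficients are adjusted whenever its output disagrees with the pre-calibrated grade, and training ends once every sample's output matches its calibration. A perceptron-style update is one simple, assumed way to realize such coefficient adjustment; the claim itself does not fix a specific update rule:

```python
# Hedged sketch of claim 4's training scheme: adjust coefficients on each
# disagreement, stop when all samples agree. The perceptron update is an
# illustrative choice, not the patent's specified algorithm.
from typing import List, Tuple

def train_grade_model(samples: List[Tuple[List[float], int]],
                      max_epochs: int = 100) -> List[float]:
    """samples: (feature vector, grade label in {0, 1}); returns coefficients."""
    n = len(samples[0][0])
    w = [0.0] * (n + 1)  # model coefficients; last entry is the bias
    for _ in range(max_epochs):
        mismatches = 0
        for x, y in samples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            pred = 1 if score > 0 else 0
            if pred != y:                   # inconsistent with the calibration
                step = 1 if y == 1 else -1  # adjust the coefficients
                w = [wi + step * xi for wi, xi in zip(w, x)] + [w[-1] + step]
                mismatches += 1
        if mismatches == 0:                 # all samples consistent: finish
            return w
    return w
```

For linearly separable calibrations the loop terminates with coefficients that reproduce every pre-calibrated grade, which is exactly the stopping condition the claim states.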
5. The medical information acquisition method according to claim 2, wherein the training method of the abnormal feature recognition model is:
acquiring a sample set of abnormal local images containing the corresponding abnormal feature and abnormal local images not containing the corresponding abnormal feature, wherein each sample is calibrated in advance as containing or not containing the corresponding abnormal feature;
inputting the data of each abnormal local image sample into the abnormal feature recognition model respectively to obtain the judgment result, output by the abnormal feature recognition model, of whether the sample contains the corresponding abnormal feature;
if the judgment result obtained after the data of an abnormal local image sample is input into the abnormal feature recognition model is inconsistent with the calibration made in advance for that sample, adjusting the coefficients of the abnormal feature recognition model until the judgment result is consistent with the calibration made in advance for the sample;
and finishing the training when, for the data of all the abnormal local image samples input into the abnormal feature recognition model, the obtained judgment results are consistent with the calibrations made in advance for the abnormal local image samples.
6. The medical information acquisition method according to claim 1, wherein the preprocessing a digestive tract endoscope video of the target object to obtain a preprocessed standard image and an abnormal local image comprises:
splitting the digestive tract endoscope video to obtain original digestive tract endoscope images;
correcting the original digestive tract endoscope image to obtain the preprocessed standard image;
determining the position of the anomaly according to the preprocessed standard image;
and intercepting the abnormal local image from the original digestive tract endoscope image according to the position of the anomaly.
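The preprocessing chain of claim 6 (split the video into frames, correct each frame into a standard image, locate the anomaly, intercept the abnormal local image) might be sketched as follows. The intensity threshold standing in for the localization step is an assumption; the claim leaves that model unspecified:

```python
# Illustrative sketch of claim 6's preprocessing, with plain lists of pixel
# rows as frames. The threshold-based anomaly localization is assumed.
from typing import List, Tuple

Frame = List[List[int]]

def split_video(video: List[Frame]) -> List[Frame]:
    """Split the endoscope video into original frames (already frames here)."""
    return list(video)

def correct(frame: Frame) -> Frame:
    """Toy 'correction': clamp intensities to [0, 255] as the standard image."""
    return [[min(255, max(0, p)) for p in row] for row in frame]

def locate_anomaly(std: Frame, thresh: int = 200) -> Tuple[int, int, int, int]:
    """Return a bounding box (r0, r1, c0, c1) around above-threshold pixels."""
    rows = [r for r, row in enumerate(std) if any(p > thresh for p in row)]
    cols = [c for row in std for c, p in enumerate(row) if p > thresh]
    if not rows:
        return (0, 0, 0, 0)
    return (min(rows), max(rows) + 1, min(cols), max(cols) + 1)

def crop(original: Frame, box: Tuple[int, int, int, int]) -> Frame:
    """Intercept the abnormal local image from the original frame."""
    r0, r1, c0, c1 = box
    return [row[c0:c1] for row in original[r0:r1]]
```

Note that, as in the claim, localization runs on the corrected standard image while the crop is taken from the original frame, so the abnormal local image keeps the original pixel values.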
7. The medical information acquisition method according to claim 1, wherein the determining, according to the preprocessed standard image, the background feature corresponding to the region where the abnormal local image is located specifically comprises:
inputting the preprocessed standard image into a region discrimination model, the region discrimination model outputting the corresponding region and a region overview image corresponding to each region;
and inputting the region overview image into a background feature recognition model, the background feature recognition model outputting the background feature corresponding to the region overview image.
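Claim 7's two-stage pipeline (region discrimination, then background feature recognition on each region's overview image) can be sketched with toy rules. The anatomical region names and the intensity-based stand-in rules are assumptions, not the claimed models:

```python
# Minimal sketch of claim 7: assign each standard image to a region, keep
# one overview image per region, then read a background feature off it.
# Both "models" are toy rules for illustration.
from typing import Dict, List

Frame = List[List[int]]

REGIONS = ["esophagus", "gastric body", "gastric antrum"]  # illustrative

def discriminate_region(std: Frame) -> str:
    """Toy stand-in for the region discrimination model (mean-intensity bins)."""
    flat = [p for row in std for p in row]
    mean = sum(flat) / len(flat)
    return REGIONS[min(2, int(mean // 86))]

def overview_images(frames: List[Frame]) -> Dict[str, Frame]:
    """Keep one overview image per region (the first frame seen for it)."""
    views: Dict[str, Frame] = {}
    for f in frames:
        views.setdefault(discriminate_region(f), f)
    return views

def background_feature(view: Frame) -> str:
    """Toy stand-in for the background feature recognition model."""
    flat = [p for row in view for p in row]
    return "smooth mucosa" if max(flat) - min(flat) < 50 else "granular mucosa"
```

The dictionary of overview images mirrors the claim's "region overview image corresponding to each region", with the background feature then computed per overview image.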
8. A medical information acquisition system characterized by comprising:
the preprocessing module is used for preprocessing the digestive tract endoscope video of the target object to obtain a preprocessed standard image and an abnormal local image;
the anomaly detection module is used for determining abnormal type grade information corresponding to the abnormal local image according to the abnormal local image;
the region detection module is used for determining the background feature corresponding to the region where the abnormal local image is located according to the preprocessed standard image;
and the medical information module is used for obtaining the medical information of the target object according to the background feature corresponding to the region where the abnormal local image is located and the abnormal type grade information corresponding to the abnormal local image, wherein the medical information is used for assisting standardized recording of endoscopic examination content.
9. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the medical information acquisition method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the medical information acquisition method according to any one of claims 1 to 7.
CN202210455932.XA 2022-04-28 2022-04-28 Medical information acquisition method and related equipment Active CN114565611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455932.XA CN114565611B (en) 2022-04-28 2022-04-28 Medical information acquisition method and related equipment


Publications (2)

Publication Number Publication Date
CN114565611A true CN114565611A (en) 2022-05-31
CN114565611B CN114565611B (en) 2022-07-19

Family

ID=81720776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210455932.XA Active CN114565611B (en) 2022-04-28 2022-04-28 Medical information acquisition method and related equipment

Country Status (1)

Country Link
CN (1) CN114565611B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912247A (en) * 2023-09-13 2023-10-20 威海市博华医疗设备有限公司 Medical image processing method and device, storage medium and electronic equipment

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722735A * 2012-05-24 2012-10-10 Southwest Jiaotong University Endoscopic image lesion detection method based on fusion of global and local features
CN108292366A * 2015-09-10 2018-07-17 Magentiq Eye Ltd System and method for detecting suspect tissue regions in an endoscopic procedure
CN108695001A * 2018-07-16 2018-10-23 Renmin Hospital of Wuhan University (Hubei General Hospital) Deep learning-based auxiliary system and method for cancer lesion horizon prediction
CN108937871A * 2018-07-16 2018-12-07 Renmin Hospital of Wuhan University (Hubei General Hospital) Digestive tract micro-optical coherence tomography image analysis system and method
CN109410196A * 2018-10-24 2019-03-01 Northeastern University Cervical cancer tissue pathological image diagnosis method based on Poisson annular conditional random field
CN109636796A * 2018-12-19 2019-04-16 Zhongshan Ophthalmic Center, Sun Yat-sen University Artificial intelligence eye image analysis method, server and system
US20190297276A1 * 2018-03-20 2019-09-26 EndoVigilant, LLC Endoscopy Video Feature Enhancement Platform
CN110473192A * 2019-04-10 2019-11-19 Tencent Healthcare (Shenzhen) Co., Ltd. Digestive endoscope image recognition model training and recognition method, apparatus and system
CN110600122A * 2019-08-23 2019-12-20 Tencent Healthcare (Shenzhen) Co., Ltd. Digestive tract image processing method and device and medical system
CN111524093A * 2020-03-23 2020-08-11 Zhongrun Puda (Shiyan) Big Data Center Co., Ltd. Intelligent screening method and system for abnormal tongue images
CN111932520A * 2018-08-31 2020-11-13 Shanghai United Imaging Intelligence Co., Ltd. Medical image display method, viewing device and computer device
US20210153808A1 * 2018-06-22 2021-05-27 Ai Medical Service Inc. Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ
US11024031B1 * 2020-02-13 2021-06-01 Olympus Corporation System and method for diagnosing severity of gastric cancer
US20210174923A1 * 2019-12-06 2021-06-10 Ankon Technologies Co., Ltd Method, device and medium for structuring capsule endoscopy report text
US20210256701A1 * 2020-02-13 2021-08-19 Olympus Corporation System and method for diagnosing severity of gastritis
CN113643291A * 2021-10-14 2021-11-12 Wuhan University Method and device for determining esophagus marker infiltration depth grade and readable storage medium
CN113642537A * 2021-10-14 2021-11-12 Wuhan University Medical image recognition method and device, computer equipment and storage medium
US20220031227A1 * 2018-10-02 2022-02-03 Industry Academic Cooperation Foundation, Hallym University Device and method for diagnosing gastric lesion through deep learning of gastroendoscopic images
CN114022936A * 2021-11-10 2022-02-08 Guangdong Jinhaoyang Technology Holding Co., Ltd. Method for marking background of marked image, method and device for identifying skin problem


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIANLIAN WU ET AL.: "A deep neural network improves endoscopic detection of early gastric cancer without blind spots", Endoscopy *
PEDRO GUIMARAES ET AL.: "Deep learning based detection of gastric precancerous conditions", Endoscopy News *
DU HONGLIU ET AL.: "Deep learning-based auxiliary recognition system for multiple gastric mucosal lesions under endoscopy", Journal of Lanzhou University *



Similar Documents

Publication Publication Date Title
CN111369545A (en) Edge defect detection method, device, model, equipment and readable storage medium
CN110705583A (en) Cell detection model training method and device, computer equipment and storage medium
CN112634203B (en) Image detection method, electronic device, and computer-readable storage medium
CN109886928A Target cell labeling method and device, storage medium and terminal device
WO2022242392A1 (en) Blood vessel image classification processing method and apparatus, and device and storage medium
US20230417700A1 (en) Automated analysis of analytical gels and blots
CN114565611B (en) Medical information acquisition method and related equipment
CN110969616B (en) Method and device for evaluating oocyte quality
WO2021082433A1 (en) Digital pathological image quality control method and apparatus
CN114549390A (en) Circuit board detection method, electronic device and storage medium
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
CN112036295A (en) Bill image processing method, bill image processing device, storage medium and electronic device
CN111882544B (en) Medical image display method and related device based on artificial intelligence
CN114219754A (en) Thyroid-related eye disease identification method and device based on eye CT image
CN113469944A (en) Product quality inspection method and device and electronic equipment
CN112967224A (en) Electronic circuit board detection system, method and medium based on artificial intelligence
CN115294505A (en) Risk object detection and model training method and device and electronic equipment
CN116091522A (en) Medical image segmentation method, device, equipment and readable storage medium
CN112308062B (en) Medical image access number identification method in complex background image
CN116152168A (en) Medical lung image lesion classification method and classification device
CN115601546A (en) Instance segmentation model training method and device and readable medium
CN115272055A (en) Chromosome image analysis method based on knowledge representation
CN111935480B (en) Detection method for image acquisition device and related device
CN111985423A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium
CN113592771B (en) Image segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant