CN113610840A - Image processing method, image processing device, storage medium and electronic equipment


Info

Publication number
CN113610840A
CN113610840A (application number CN202110991058A; granted as CN113610840B)
Authority
CN
China
Prior art keywords
image
physiological
target
determining
target area
Prior art date
Legal status
Granted
Application number
CN202110991058.7A
Other languages
Chinese (zh)
Other versions
CN113610840B (en)
Inventor
肖月庭 (Xiao Yueting)
阳光 (Yang Guang)
郑超 (Zheng Chao)
Current Assignee
Shukun Shenzhen Intelligent Network Technology Co., Ltd.
Original Assignee
Shukun Beijing Network Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shukun Beijing Network Technology Co., Ltd.
Priority to CN202110991058.7A
Publication of CN113610840A
Application granted
Publication of CN113610840B
Legal status: Active

Classifications

    • G06T 7/0012 - Biomedical image inspection
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06T 7/11 - Region-based segmentation
    • G06T 7/181 - Segmentation; edge detection involving edge growing or edge linking
    • G06T 7/187 - Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G16H 30/20 - ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G06T 2207/20081 - Training; learning
    • G06T 2207/30048 - Heart; cardiac


Abstract

The embodiments of this application disclose an image processing method, an image processing apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring a target area image, and determining a physiological image corresponding to the target area image; acquiring a target physiological image within the physiological image; optimizing the target physiological image according to the target area image to obtain an optimized physiological image; and identifying the optimized physiological image to obtain an identification image corresponding to a target physiological tissue. Because the target area image undergoes this series of processing and optimization, the resulting optimized image is easier to identify, so the identification image corresponding to the target physiological tissue can be located in it more accurately, which improves the accuracy of identifying the target physiological tissue in the image.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
Owing to the complexity of human physiological tissue structure, a medical image may contain a great deal of complex information. In the prior art, in order to analyze medical images quickly, extraction and identification are performed by a network alone; however, the identification precision is low, and the medical image cannot be analyzed accurately.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and electronic equipment. The image processing method can improve the accuracy of identifying the target physiological tissues in the image.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a target area image, and determining a physiological image corresponding to the target area image;
acquiring a target physiological image in the physiological image;
optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring a target area image and determining a physiological image corresponding to the target area image;
the second acquisition module is used for acquiring a target physiological image in the physiological image;
the optimization module is used for optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and the identification module is used for identifying the optimized physiological image so as to obtain an identification image corresponding to the target physiological tissue.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, where the computer program is suitable for being loaded by a processor to execute steps in an image processing method provided in an embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores a computer program, and the processor executes steps in an image processing method provided in an embodiment of the present application by calling the computer program stored in the memory.
In the embodiments of this application, a target area image is acquired and a physiological image corresponding to it is determined; a target physiological image is then acquired within the physiological image; the target physiological image is optimized according to the target area image to obtain an optimized physiological image; and finally the optimized physiological image is identified to obtain an identification image corresponding to the target physiological tissue. Because the target area image undergoes this series of processing and optimization, the resulting optimized image is easier to identify, so the identification image corresponding to the target physiological tissue can be located in it more accurately, which improves the accuracy of identifying the target physiological tissue in the image.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is a second flowchart of the image processing method according to the embodiment of the present application.
Fig. 3 is a third flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a first structure of an image processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a second structure of an image processing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the related art, owing to the complexity of human physiological tissue structure, a medical image contains a great deal of complex information. If the medical image is identified only by a network model in order to extract the image corresponding to the target physiological part, errors may occur in the extracted image. For example, if a medical image is fed directly to a network model, the biological membrane on an internal organ of the human body may not be identified accurately, which may ultimately lead to serious medical accidents.
In order to solve the above technical problem, embodiments of the present application provide an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method can improve the accuracy of identifying the target physiological tissues in the image. The following are detailed below.
110. Acquire a target area image, and determine a physiological image corresponding to the target area image.
The electronic device may first acquire an initial image, which is a medical image; for example, the medical image may be acquired by computed tomography (CT), helical CT, X-ray, positron emission tomography (PET), fluoroscopy, ultrasound, magnetic resonance (MR) imaging, and the like. The medical image may include physiological structures such as blood vessels, bones, and organs.
In some embodiments, after acquiring the initial image, the electronic device may acquire a target area image within the initial image; the target area image may be the image within an area annotated and selected by a physician. For example, after a medical image is acquired, a doctor may select a target area in the medical image with a mouse, an electronic brush, or the like, and the image within the delineated area is the target area image.
In some embodiments, while the doctor selects the target area, the electronic device may identify the coordinates of the drawn strokes so as to determine the drawn trajectory in the initial image. After the doctor completes the delineation of the target area, the electronic device determines the area enclosed by the drawn strokes as the target area, and determines the image corresponding to the target area as the target area image.
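As a minimal sketch of this delineation step (the stroke format and the point-in-polygon test are illustrative assumptions, not the patent's prescribed method), the area enclosed by the drawn strokes can be turned into a binary mask with a ray-casting test:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the closed stroke polygon?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def target_area_mask(stroke, width, height):
    """Binary mask of the area enclosed by the doctor's drawn stroke."""
    return [[point_in_polygon((x, y), stroke) for x in range(width)]
            for y in range(height)]

stroke = [(1, 1), (6, 1), (6, 6), (1, 6)]   # a toy closed stroke
mask = target_area_mask(stroke, 8, 8)
```

The mask then selects the target area image out of the initial image.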
In a clinical setting, the target region image drawn by the doctor is often an image of the lesion site to be examined: the region where the lesion is located can be roughly determined from the doctor's medical experience, and the target region image is then analyzed.
After determining the target area image, the electronic device may determine a physiological image corresponding to the target area image.
In some embodiments, the electronic device may acquire the image feature information corresponding to the target area image, and then determine the corresponding physiological image in the target area image according to that image feature information. The image feature information includes shape information, contrast information, gradient information, CT value information, and the like.
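A toy sketch of such feature extraction might look as follows (the statistics chosen here are illustrative stand-ins; the patent does not specify how each feature is computed):

```python
def image_features(img):
    """Toy feature extractor for a 2-D image given as a list of rows.

    mean_ct  : mean intensity, a stand-in for CT value information
    contrast : max - min intensity, a crude global contrast measure
    shape    : image dimensions, a stand-in for shape information
    """
    flat = [v for row in img for v in row]
    return {
        "mean_ct": sum(flat) / len(flat),
        "contrast": max(flat) - min(flat),
        "shape": (len(img), len(img[0])),
    }

features = image_features([[0, 10], [10, 20]])
```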
In some embodiments, the electronic device may determine the physiological structure corresponding to the target area image, and then determine the physiological image in the target area image according to the physiological structure and the image feature information. The electronic device may determine the position of the target area image in the initial image, and then determine the physiological structure corresponding to the target area image according to that position. For example, if the physiological structure corresponding to the determined position is a heart, the physiological image in the target area image is determined according to the image of the whole heart and the image feature information of the target area image.
In some embodiments, the electronic device may determine a contour of the physiological structure, then enlarge an image corresponding to the physiological structure within the contour based on the contour to obtain a comparison image, and finally determine a physiological image corresponding to the target region image according to the comparison image and the image feature information.
For example, when the physiological structure corresponding to the target region image is determined to be a heart, the electronic device may determine the outline of the heart, and then enlarge the image corresponding to the heart within the outline with the outline as a reference, so as to obtain a comparison image; relative to the original heart image, the comparison image has a larger area and clearer image details.
The physiological image corresponding to the target region image is then determined from the comparison image and the image feature information of the target region image. For example, the comparison image is compared against image feature information such as shape information, contrast information, gradient information, and CT value information; if the target region image is determined to be consistent with the image of the protective membrane in the comparison image, the protective membrane image of the heart is determined to be the physiological image corresponding to the target region image.
In some embodiments, the electronic device may further determine sub-comparison images corresponding to respective physiological objects in the comparison images, and then compare the target area image with the respective sub-comparison images according to the image feature information to determine a physiological image corresponding to the target area image.
For example, after obtaining the comparison image corresponding to the heart, the comparison image may be segmented into a plurality of sub-regions, and the image of each sub-region is a sub-comparison image. Or dividing the comparison image into multiple layers of images, and determining the image of each layer as a sub-comparison image.
The image feature information of the target area image is then compared with each sub-comparison image; if a certain sub-comparison image is determined to match the target area image, that sub-comparison image is determined as the physiological image corresponding to the target area image. For example, if sub-comparison image A corresponds to an image of a left-ventricle blood vessel and sub-comparison image B is an image of the protective membrane, and the image feature information of the target area image shows that the target area image matches sub-comparison image B, the physiological image corresponding to the target area image is determined to be the image of the protective membrane.
120. Acquire a target physiological image within the physiological image.
In some embodiments, the electronic device may determine a physiological tissue type corresponding to the physiological image, and then determine a corresponding preset algorithm model according to the physiological tissue type. And finally, inputting the physiological image into a preset algorithm model for image extraction so as to extract a target physiological image from the physiological image.
For example, if the physiological image is an image of a protective membrane, the physiological tissue type corresponding to the physiological image is the biological membrane type. The preset algorithm model corresponding to the biological membrane type is then determined, and the physiological image is input into that model. The preset algorithm model extracts from the physiological image, that is, from the image of the protective membrane, a pericardium image; the pericardium image is thus the target physiological image.
In some embodiments, the preset algorithm model may extract a plurality of target physiological images from one physiological image. For example, if the physiological image is the protective membrane image of a heart, extracting from it with the preset algorithm model may yield both a pericardium image and an epicardium image, in which case both are target physiological images.
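The tissue-type-to-model dispatch described above can be sketched roughly as follows (the model registry, the type names, and the stand-in model functions are all hypothetical; real entries would be trained segmentation networks):

```python
# Hypothetical per-tissue "preset algorithm models"; in practice each
# entry would be a trained network, not a simple function.
def membrane_model(image):
    # Pretend to extract the pericardium and epicardium layers.
    return {"pericardium": image, "epicardium": image}

def vessel_model(image):
    return {"vessels": image}

PRESET_MODELS = {
    "biological membrane": membrane_model,
    "blood vessel": vessel_model,
}

def extract_target_physiological_images(physiological_image, tissue_type):
    """Pick the preset model for the tissue type and run the extraction."""
    model = PRESET_MODELS[tissue_type]
    return model(physiological_image)

result = extract_target_physiological_images([[1, 2]], "biological membrane")
```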
130. Optimize the target physiological image according to the target area image to obtain an optimized physiological image.
In some cases, the boundary of the target physiological image extracted by the preset algorithm model may have missing or broken segments. For example, if the target physiological image is a blood vessel image, part of the boundary of the blood vessel image may be missing; the boundary of the target physiological image then needs to be optimized to repair the missing and broken segments.
In some embodiments, the electronic device may determine the boundary of the hand-drawn region as the boundary information of the target region image, and then optimize the boundary of the target physiological image according to the boundary information, thereby obtaining an optimized image.
Specifically, the electronic device may determine a target optimization region image in the target physiological image, then determine a target boundary corresponding to the target optimization region image in the boundary information and an image to be connected corresponding to the target boundary, and then connect the image to be connected with the target optimization region image to obtain the optimized physiological image.
For example, the electronic device determines a target optimization boundary, i.e., a boundary with a missing or broken portion, in the target physiological image, and then determines the image of the region where the target optimization boundary is located as the target optimization region image; the missing or broken portion therefore lies within the target optimization region image. It will be appreciated that the boundary of the target optimization region image comprises the target optimization boundary.
The electronic device may determine a target optimization boundary in the target optimization area image, then determine a target boundary corresponding to the target optimization boundary in the target area image, and then determine an image of an area where the target boundary is located as an image to be connected, where the boundary of the image to be connected includes the target boundary. And then connecting the image to be connected with the target optimization image so as to optimize the boundary of the target physiological image.
In an embodiment, the target optimization region image includes the target optimization boundary and the image to be connected includes the target boundary. The target optimization boundary has a missing or broken part; connecting the target boundary with the target optimization boundary makes up for that missing or broken part, so that the boundary of the target physiological image no longer has missing or broken segments, yielding the final optimized physiological image.
In an embodiment, the electronic device may also determine a target optimization boundary with a missing or broken part in the target physiological image, crop the region where the target optimization boundary is located to obtain a cropped image, find a replacement image corresponding to the cropped image in the target region image, and connect the replacement image at the position of the cropped image in the target physiological image. Replacing the cropped image in this way optimizes the target physiological image and yields an optimized physiological image without missing or broken parts.
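A minimal sketch of this repair step (representing a boundary as an aligned list of points, with missing segments marked None, is an illustrative assumption of this sketch, not the patent's data format):

```python
def repair_boundary(extracted, reference):
    """Fill missing (None) points of the extracted boundary with the
    corresponding points of the hand-drawn reference boundary.

    Both boundaries are aligned lists of (x, y) points; the alignment
    itself is assumed to be given.
    """
    return [ref if pt is None else pt
            for pt, ref in zip(extracted, reference)]

extracted = [(0, 0), (1, 0), None, None, (4, 0)]     # broken segment
reference = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 0)]  # hand-drawn boundary
repaired = repair_boundary(extracted, reference)
```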
In some embodiments, the electronic device may first identify the physiological tissue in the target physiological image to determine whether the physiological tissue is complete. If the physiological tissue is incomplete or partially missing, the target physiological image may be optimized through the target region image: for example, the target physiological tissue image corresponding to the missing physiological tissue is determined in the target region image, and the missing physiological tissue in the target physiological image is then completed with that target physiological tissue image, so as to obtain a more complete optimized image.
In some embodiments, the electronic device may further optimize the target physiological image through the target area image upon identifying diseased or suspected diseased tissue in the target physiological image. For example, the diseased or suspected diseased tissue is compared with the corresponding tissue image in the target region image; if the comparison shows that a lesion position exists in the target physiological image, the area near the lesion position is determined as a complete lesion region, so that the region corresponding to the complete diseased tissue in the target physiological image is determined.
140. Identify the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
In some embodiments, after obtaining the optimized physiological image, the optimized physiological image may be input into a preset recognition model, and a recognition result is obtained. And finally, determining an identification image corresponding to the target physiological tissue according to the identification result, wherein the target physiological tissue comprises the physiological tissue of the focus part.
For example, the optimized physiological image is an optimized pericardium image, the optimized pericardium image can be input into a preset identification model, and the preset identification model can actively identify a focus region on the pericardium and extract an identification image corresponding to the focus region.
In addition, after the optimized pericardium image is input into the preset identification model, the preset identification model can also extract the image of an area selected by the doctor on the optimized physiological image, and use it as the identification image corresponding to the target physiological tissue.
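A toy sketch of the final extraction (the mask-based representation of the recognition result is an assumption of this sketch; a real preset identification model would be a trained network producing the lesion mask):

```python
def extract_identification_image(image, lesion_mask, background=0):
    """Keep only the pixels the recognition model flagged as lesion;
    everything else is set to a background value."""
    return [[px if flagged else background
             for px, flagged in zip(row, mask_row)]
            for row, mask_row in zip(image, lesion_mask)]

image = [[5, 6], [7, 8]]
mask = [[True, False], [False, True]]   # hypothetical recognition result
identification = extract_identification_image(image, mask)
```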
In the embodiments of this application, a target area image is acquired and a physiological image corresponding to it is determined; a target physiological image is then acquired within the physiological image; the target physiological image is optimized according to the target area image to obtain an optimized physiological image; and finally the optimized physiological image is identified to obtain an identification image corresponding to the target physiological tissue. Because the target area image undergoes this series of processing and optimization, the resulting optimized image is easier to identify, so the identification image corresponding to the target physiological tissue can be located in it more accurately, which improves the accuracy of identifying the target physiological tissue in the image.
For a more detailed description of the image processing method provided in the embodiment of the present application, please refer to fig. 2, where fig. 2 is a second flowchart of the image processing method provided in the embodiment of the present application. The image processing method may include the steps of:
201. Determine a hand-drawn area in the initial image, and determine the image within the hand-drawn area as the target area image.
The initial image is a medical image. A doctor can circle a hand-drawn region in the medical image with a brush, and the image within the hand-drawn region is then determined as the target area image; the target area image includes the boundary of the hand-drawn region.
202. Determine the physiological structure corresponding to the target area image, and determine the contour of the physiological structure.
In some embodiments, the position of the target area image in the initial image may be determined, for example, the position is the position of the heart, and the physiological structure corresponding to the target area image is the heart, and at this time, the electronic device may acquire the outline of the heart.
203. Enlarge the image corresponding to the physiological structure within the contour, with the contour as a reference, to obtain a comparison image.
In some embodiments, the image within the contour is enlarged proportionally with the contour of the physiological structure as a reference. For example, the image within the contour is expanded outward to form an expanded-region image, which can be understood as an enlarged version of the image within the contour. The enlargement stops once the expanded-region image can cover the hand-drawn area; the expanded-region image obtained at that point is the comparison image.
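A rough sketch of this proportional enlargement (scaling the contour about its centroid and the bounding-box coverage test are illustrative choices, not the patent's prescribed procedure):

```python
def bbox(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def covers(outer, inner):
    ox0, oy0, ox1, oy1 = outer
    ix0, iy0, ix1, iy1 = inner
    return ox0 <= ix0 and oy0 <= iy0 and ox1 >= ix1 and oy1 >= iy1

def enlarge_until_covers(contour, hand_drawn_bbox, step=1.1, max_iter=100):
    """Scale the contour about its centroid until its bounding box
    covers the hand-drawn area (proportional enlargement)."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    scale = 1.0
    for _ in range(max_iter):
        scaled = [(cx + (x - cx) * scale, cy + (y - cy) * scale)
                  for x, y in contour]
        if covers(bbox(scaled), hand_drawn_bbox):
            return scaled, scale
        scale *= step
    raise ValueError("could not cover the hand-drawn area")

contour = [(4, 4), (6, 4), (6, 6), (4, 6)]        # toy heart contour
enlarged, scale = enlarge_until_covers(contour, (3, 3, 7, 7))
```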
204. Determine the sub-comparison images corresponding to the physiological objects in the comparison image.
In some embodiments, the comparison image includes a plurality of physiological objects. For example, if the comparison image is an enlarged heart image, it includes different physiological objects such as blood vessels and biological membranes, and each physiological object corresponds to one sub-comparison image.
For another example, the comparison image includes a plurality of layer images, and each layer image may be determined as a sub-comparison image.
205. Compare the target area image with each sub-comparison image according to the image feature information of the target area image, and determine the physiological image corresponding to the target area image.
The image feature information of the target region image includes shape information, contrast information, gradient information, CT value information, and the like.
In some embodiments, the electronic device may match the shape of each sub-comparison image according to the shape information to obtain a first matching degree corresponding to each sub-comparison image, and then determine the sub-comparison image with the highest first matching degree as the physiological image corresponding to the target region image.
For example, if sub-comparison image A is an image of a blood vessel, the shape of the hand-drawn target region image may be compared with the shape of the blood vessel image to obtain a first matching degree; if the first matching degree of the target region image with respect to sub-comparison image A is the highest among the first matching degrees for all sub-comparison images, the physiological image corresponding to the target region image is determined to be sub-comparison image A.
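One simple way to realize such a matching degree (intersection-over-union of binary shape masks is an illustrative choice; the patent does not fix a formula for the first matching degree):

```python
def matching_degree(mask_a, mask_b):
    """Intersection-over-union of two binary masks as a matching degree."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b
            union += a or b
    return inter / union if union else 0.0

def best_match(target_mask, sub_masks):
    """Name of the sub-comparison image with the highest matching degree."""
    return max(sub_masks,
               key=lambda name: matching_degree(target_mask, sub_masks[name]))

target = [[1, 1], [0, 0]]
subs = {"A (vessel)": [[1, 1], [0, 1]],
        "B (membrane)": [[0, 0], [1, 1]]}
```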
In some embodiments, the electronic device may further perform matching on the contrast of each sub-comparison image according to the contrast information to obtain a second matching degree corresponding to each sub-comparison image, and determine the sub-comparison image with the highest second matching degree as the physiological image corresponding to the target region image.
For example, the electronic device may first obtain the contrast information of the target area image and of each sub-comparison image, and then match the contrast information of the target area image against that of each sub-comparison image to obtain a second matching degree corresponding to each sub-comparison image. If the second matching degree corresponding to the sub-comparison image A is the highest among all second matching degrees, the sub-comparison image A is determined to be the physiological image corresponding to the target area image. The contrast information includes a contrast value, which may be computed with an algorithm such as the Canny edge operator.
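One way to realize such a second matching degree — a simplified sketch, not necessarily the patent's method — is to compare a single contrast value per image (RMS contrast here, rather than the Canny-based value mentioned above) and treat the smallest difference as the best match:

```python
def rms_contrast(pixels):
    # Root-mean-square contrast of a flattened grayscale image
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

def best_contrast_match(region_pixels, candidates):
    # candidates: mapping of sub-comparison image name -> flattened pixels;
    # the sub-comparison image whose contrast is closest to the target
    # region's contrast is taken as having the highest second matching degree
    target = rms_contrast(region_pixels)
    return min(candidates, key=lambda name: abs(rms_contrast(candidates[name]) - target))
```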
In some embodiments, the electronic device may determine the gradient of each sub-comparison image, and determine the sub-comparison image whose gradient is the same as the gradient of the target region image as the physiological image corresponding to the target region image. Here, the gradient of an image corresponds to the difference between two neighboring pixels.
For example, the electronic device may calculate the gradient of the target region image through an image gradient algorithm such as a Sobel operator and a Prewitt operator, then may also calculate the gradient of each sub-comparison image through a gradient algorithm, then compare the gradient of the target region image with the gradient of each sub-comparison image, and if the gradient of the sub-comparison image a is the same as the gradient of the target region image, determine the sub-comparison image a as the physiological image corresponding to the target region image.
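A minimal Sobel-based sketch of this step (a hypothetical helper; a real implementation would normally call an optimized library routine) reduces each image to a mean gradient magnitude that can then be compared:

```python
def sobel_magnitude(img):
    # Mean Sobel gradient magnitude over the interior pixels of a 2-D
    # grayscale image given as a list of rows
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            total += (gx * gx + gy * gy) ** 0.5
            count += 1
    return total / count if count else 0.0
```

The sub-comparison image whose mean magnitude equals (or, more robustly, is closest to) that of the target region image would then be selected.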
In some embodiments, the electronic device may determine a CT value of each sub-comparison image, and then determine the sub-comparison image in each sub-comparison image with the same CT value as the CT value of the target region image as the physiological image corresponding to the target region image.
For example, when a CT scanner scans a human body, each region of the acquired initial image has a corresponding CT value. The electronic device may determine the CT value of the target area image, determine the CT value of each sub-comparison image, and then compare the two. If the CT value of the sub-comparison image A is the same as that of the target area image, the sub-comparison image A is determined to be the physiological image corresponding to the target area image.
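As a sketch under common DICOM conventions (the rescale slope and intercept below are illustrative defaults, not values from the patent), raw scanner values can be mapped to Hounsfield units and mean CT values compared within a tolerance:

```python
def mean_ct_value(raw_pixels, slope=1.0, intercept=-1024.0):
    # Convert raw scanner values to Hounsfield units (CT values) via the
    # rescale slope/intercept, then average them
    return sum(p * slope + intercept for p in raw_pixels) / len(raw_pixels)

def match_by_ct(region_pixels, candidates, tolerance=5.0):
    # Return the first sub-comparison image whose mean CT value matches the
    # target region image's within the tolerance, or None if nothing matches
    target = mean_ct_value(region_pixels)
    for name, pixels in candidates.items():
        if abs(mean_ct_value(pixels) - target) <= tolerance:
            return name
    return None
```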
It should be noted that the above methods for determining the physiological image corresponding to the target area image are only examples, and the physiological image corresponding to the target area image may also be obtained in other ways.
206. A target physiological image within the physiological image is acquired.
In some embodiments, the electronic device may determine the physiological tissue type corresponding to the physiological image, and then determine the corresponding preset algorithm model according to that physiological tissue type. Finally, the physiological image is input into the preset algorithm model for image extraction, so that the target physiological image is extracted from the physiological image.
For example, if the physiological image is an image of a protective membrane, the physiological tissue type corresponding to the physiological image is the biological membrane type. The preset algorithm model corresponding to the biological membrane type is then determined, and the physiological image is input into that model. The preset algorithm model performs extraction on the physiological image, that is, extracts the image of the protective membrane, and obtains a pericardium image; this pericardium image is the target physiological image.
It should be noted that the preset algorithm model may be obtained by training a network model such as a U-net or a V-net. During training, different training sample images may be adopted for different human body structures: for bones, the model may be trained with bone tissue images; for biological membranes, the model may be trained with biological membrane images. Preset algorithm models corresponding to different human body structures are thereby obtained.
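The dispatch from physiological tissue type to preset algorithm model can be sketched as a simple registry (the names and the callable-model interface below are assumptions for illustration; the actual models would be trained U-net/V-net networks):

```python
MODEL_REGISTRY = {}  # physiological tissue type -> preset algorithm model

def register_model(tissue_type, model):
    # model: any callable mapping a physiological image to a target physiological image
    MODEL_REGISTRY[tissue_type] = model

def extract_target_image(physiological_image, tissue_type):
    # Look up the preset model for the tissue type and run the extraction
    model = MODEL_REGISTRY.get(tissue_type)
    if model is None:
        raise KeyError("no preset model for tissue type: " + tissue_type)
    return model(physiological_image)
```

In use, a model trained for each human body structure is registered once and then selected automatically whenever an image of that tissue type arrives.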
207. Determine the boundary information of the target area image according to the boundary of the hand-drawn area.
In some embodiments, the electronic device can determine the coordinate information of the hand-drawn handwriting in the hand-drawn area and then determine this coordinate information as the boundary information of the target area image.
208. Optimize the boundary of the target physiological image according to the boundary information to obtain an optimized physiological image.
Step 208 may be executed in the following manner; refer specifically to fig. 3, which is a third flowchart of the image processing method according to the embodiment of the present application.
301. Determine a target optimization boundary where a missing or broken portion exists in the target physiological image.
In some cases, the target physiological image may contain missing or broken portions; a target optimization boundary is determined at the locations where these defects exist.
302. Determine the image of the region where the target optimization boundary is located as the target optimization region image.
The electronic device determines a target optimization boundary with a missing or broken portion in the target physiological image, and then determines the image of the region where the target optimization boundary is located as the target optimization region image; the target optimization region image therefore contains the missing or broken portion. It will be appreciated that the boundary of the target optimization region image includes the target optimization boundary.
303. Determine, in the boundary information, the target boundary corresponding to the target optimization region image, and the image to be connected corresponding to that target boundary.
In some embodiments, the electronic device may determine a target optimization boundary in the target optimization area image, then determine a corresponding target boundary of the target optimization boundary in the target area image, and then determine an image of an area where the target boundary is located as an image to be connected, where the boundary of the image to be connected includes the target boundary.
304. Connect the image to be connected with the target optimization region image to obtain the optimized physiological image.
In an embodiment, the target optimization region image includes the target optimization boundary, the image to be connected includes the target boundary, and the target optimization boundary has a broken or missing part. Connecting the image to be connected with the target optimization region image joins the target boundary to the target optimization boundary, so that the broken or missing part of the target optimization boundary is compensated. The boundary of the target physiological image then no longer contains missing or broken parts, and the final optimized physiological image is obtained.
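One elementary operation behind such boundary compensation — a simplified sketch, since the patent connects whole region images rather than individual pixels — is bridging a break between two boundary endpoints by linear interpolation:

```python
def bridge_gap(p1, p2):
    # Return the integer pixel coordinates that fill the straight-line gap
    # between two break endpoints p1 and p2 (endpoints themselves excluded)
    (x1, y1), (x2, y2) = p1, p2
    steps = max(abs(x2 - x1), abs(y2 - y1))
    return [(round(x1 + (x2 - x1) * t / steps),
             round(y1 + (y2 - y1) * t / steps))
            for t in range(1, steps)]
```

Adding the returned pixels to the boundary closes the break; a full implementation would locate the break endpoints from the hand-drawn boundary information first.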
With continued reference to fig. 2: 209. Identify the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
In some embodiments, after the optimized physiological image is obtained, it may be input into a preset recognition model to obtain a recognition result. Finally, the identification image corresponding to the target physiological tissue is determined according to the recognition result, where the target physiological tissue includes the physiological tissue of the lesion.
For example, if the optimized physiological image is an optimized pericardium image, it can be input into a preset identification model, which can actively identify the lesion region on the pericardium and extract the identification image corresponding to that lesion region.
In addition, when the optimized pericardium image is input into the preset identification model, the model can also extract the image of a region selected by the doctor on the optimized physiological image as the physiological image corresponding to the target physiological tissue. For example, after the pericardium image is extracted, the doctor clicks on a lesion area on the pericardium and the image of the lesion is automatically extracted.
In the embodiment of the application, a hand-drawn area in an initial image is determined, an image in the hand-drawn area is determined as a target area image, a physiological structure corresponding to the target area image is determined, a contour of the physiological structure is determined, and an image corresponding to the physiological structure in the contour is enlarged by taking the contour as a reference, so that a comparison image is obtained.
Then, the sub-comparison images corresponding to the physiological objects in the comparison image are determined, the target area image is compared with each sub-comparison image according to the image feature information of the target area image, the physiological image corresponding to the target area image is determined, and the target physiological image within the physiological image is acquired. Finally, the boundary information of the target region image is determined according to the boundary of the hand-drawn region, the boundary of the target physiological image is optimized according to the boundary information to obtain an optimized physiological image, and the optimized physiological image is identified to obtain the identification image corresponding to the target physiological tissue.
In this way, the identification image corresponding to the target physiological tissue is obtained from the target area image. Because the target area image is the region selected by the doctor, combining it with the subsequent processing of the target area image improves the accuracy of the final identification of the target physiological tissue.
Correspondingly, the embodiment of the application also provides an image processing device which can be used for executing the image processing method provided by the embodiment of the application. Referring to fig. 4, fig. 4 is a schematic diagram of a first structure of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 400 includes: a first obtaining module 410, a second obtaining module 420, an optimizing module 430, and an identifying module 440.
The first obtaining module 410 is configured to obtain a target area image and determine a physiological image corresponding to the target area image.
The first obtaining module 410 is specifically configured to determine a hand-drawn region in the initial image, and determine an image in the hand-drawn region as the target region image.
The first obtaining module 410 includes a first determining sub-module 411 and a second determining sub-module 412, please refer to fig. 5 specifically, and fig. 5 is a second schematic structural diagram of the image processing apparatus according to the embodiment of the present application.
The first determining submodule 411 is configured to determine image feature information corresponding to the target area image.
A second determining sub-module 412 for determining the physiological image according to the image feature information.
The second determining submodule 412 is specifically configured to determine a physiological structure corresponding to the target region image; determining the physiological image according to the physiological structure and the image characteristic information.
The second determining submodule 412 is specifically configured to determine a contour of the physiological structure;
enlarging an image corresponding to the physiological structure in the contour by taking the contour as a reference so as to obtain a comparison image;
and determining a physiological image corresponding to the target area image according to the comparison image and the image characteristic information.
The second determining sub-module 412 is specifically configured to determine sub-comparison images corresponding to respective physiological objects in the comparison images;
and comparing the target area image with each sub-comparison image according to the image characteristic information to determine a physiological image corresponding to the target area image.
In some embodiments, the image feature information includes shape information, and the second determining sub-module 412 is specifically configured to perform matching with the shape of each sub-comparison image according to the shape information to obtain a first matching degree corresponding to each sub-comparison image; and determining the sub-comparison image with the highest first matching degree as the physiological image corresponding to the target area image.
In some embodiments, the image feature information includes contrast information, and the second determining sub-module 412 is specifically configured to perform matching on the contrast of each sub-comparison image according to the contrast information to obtain a second matching degree corresponding to each sub-comparison image; and determining the sub-comparison image with the highest second matching degree as the physiological image corresponding to the target area image.
In some embodiments, the image feature information includes gradient information, and the second determining sub-module 412 is specifically configured to determine a gradient of each of the sub-comparison images; and determining the sub-comparison image with the gradient which is the same as that of the target area image in each sub-comparison image as the physiological image corresponding to the target area image.
In some embodiments, the image feature information includes a CT value, and the second determining sub-module 412 is specifically configured to determine the CT value of each of the sub-comparison images; and determining the sub-comparison image with the CT value being the same as that of the target area image in each sub-comparison image as the physiological image corresponding to the target area image.
A second acquiring module 420, configured to acquire a target physiological image in the physiological image.
A second obtaining module 420, configured to determine a corresponding preset algorithm model according to a physiological tissue type corresponding to the physiological image; and inputting the physiological image into a preset algorithm model for image extraction so as to extract the target physiological image from the physiological image.
And an optimizing module 430, configured to optimize the target physiological image according to the target region image to obtain an optimized physiological image.
An optimization module 430, specifically configured to determine a boundary of the hand-drawn region as boundary information of the target region image; and optimizing the boundary of the target physiological image according to the boundary information to obtain an optimized physiological image.
An identifying module 440, configured to identify the optimized physiological image to obtain an identified image corresponding to the target physiological tissue.
The recognition module 440 is specifically configured to input the optimized physiological image into a preset recognition model to obtain a recognition result; and determining an identification image corresponding to the target physiological tissue according to the identification result, wherein the target physiological tissue comprises the physiological tissue of the focus part.
In the embodiment of the application, a physiological image corresponding to a target area image is determined by acquiring the target area image; then acquiring a target physiological image in the physiological image; optimizing the target physiological image according to the target area image to obtain an optimized physiological image; and finally, identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue. Through a series of processing optimization on the target area image, the obtained optimized image is easier to identify, so that the identification image corresponding to the target physiological tissue is more accurately identified in the optimized image, and the accuracy of identifying the target physiological tissue in the image is improved.
Accordingly, as shown in fig. 6, an electronic device 500 may include an input unit 501, a display unit 502, a memory 503 including one or more computer-readable storage media, a processor 504 having one or more processing cores, and a power supply 505. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device; it may include more or fewer components than those shown, some components may be combined, or the components may be arranged differently. Wherein:
The input unit 501 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 501 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 504, and can also receive and execute commands sent by the processor 504. In addition, the touch-sensitive surface may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. Besides the touch-sensitive surface, the input unit 501 may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 502 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 502 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 504 to determine the type of touch event, and then the processor 504 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 6 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The memory 503 may be used to store software programs and modules, and the processor 504 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 503. The memory 503 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data or a phonebook), and the like. Further, the memory 503 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 503 may also include a memory controller to provide the processor 504 and the input unit 501 with access to the memory 503.
The processor 504 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 503 and calling data stored in the memory 503, thereby performing overall monitoring of the electronic device. Optionally, processor 504 may include one or more processing cores; preferably, the processor 504 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 504.
The electronic device also includes a power supply 505 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 504 via a power management system that manages charging, discharging, and power consumption. The power supply 505 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 504 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 503 according to the following instructions, and the processor 504 runs the application program stored in the memory 503, so as to implement various functions:
acquiring a target area image, and determining a physiological image corresponding to the target area image;
acquiring a target physiological image in the physiological image;
optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the image processing methods provided by the embodiments of the present application. For example, the instructions may perform the steps of:
acquiring a target area image, and determining a physiological image corresponding to the target area image;
acquiring a target physiological image in the physiological image;
optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
The above operations are described in detail in the foregoing embodiments and are not repeated here.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image processing method provided in the embodiments of the present application, beneficial effects that can be achieved by any image processing method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (20)

1. An image processing method, comprising:
acquiring a target area image, and determining a physiological image corresponding to the target area image;
acquiring a target physiological image in the physiological image;
optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue.
2. The image processing method according to claim 1, wherein the determining the physiological image corresponding to the target area image comprises:
determining image characteristic information corresponding to the target area image;
and determining the physiological image according to the image characteristic information.
3. The image processing method of claim 2, wherein prior to said determining the physiological image from the image feature information, the method further comprises:
determining a physiological structure corresponding to the target area image;
the determining the physiological image according to the image feature information comprises:
determining the physiological image according to the physiological structure and the image characteristic information.
4. The image processing method of claim 3, wherein said determining the physiological image from the physiological structure and the image feature information comprises:
determining a contour of the physiological structure;
enlarging an image corresponding to the physiological structure in the contour by taking the contour as a reference so as to obtain a comparison image;
and determining a physiological image corresponding to the target area image according to the comparison image and the image characteristic information.
5. The image processing method according to claim 4, wherein the determining the physiological image corresponding to the target area image according to the comparison image and the image feature information includes:
determining sub-comparison images corresponding to all physiological objects in the comparison images;
and comparing the target area image with each sub-comparison image according to the image characteristic information to determine a physiological image corresponding to the target area image.
6. The image processing method according to claim 5, wherein the image feature information includes shape information, and the comparing the target area image with each of the sub-comparison images according to the image feature information to determine the physiological image corresponding to the target area image includes:
matching the shape information with the shape of each sub-comparison image to obtain a first matching degree corresponding to each sub-comparison image;
and determining the sub-comparison image with the highest first matching degree as the physiological image corresponding to the target area image.
7. The image processing method according to claim 5, wherein the image feature information includes contrast information, and the comparing the target area image with each of the sub-comparison images according to the image feature information to determine the physiological image corresponding to the target area image includes:
matching the contrast of each sub-comparison image according to the contrast information to obtain a second matching degree corresponding to each sub-comparison image;
and determining the sub-comparison image with the highest second matching degree as the physiological image corresponding to the target area image.
8. The image processing method according to claim 5, wherein the image feature information includes gradient information, and the comparing the target region image with each of the sub-comparison images according to the image feature information to determine the physiological image corresponding to the target region image includes:
determining the gradient of each sub-comparison image;
and determining the sub-comparison image with the gradient which is the same as that of the target area image in each sub-comparison image as the physiological image corresponding to the target area image.
9. The image processing method according to claim 5, wherein the image feature information includes CT value information, and the comparing the target region image with each of the sub-comparison images according to the image feature information to determine the physiological image corresponding to the target region image includes:
determining the CT value of each sub-comparison image;
and determining the sub-comparison image with the CT value being the same as that of the target area image in each sub-comparison image as the physiological image corresponding to the target area image.
10. The image processing method according to claim 1, wherein the acquiring the target area image comprises:
and determining a hand-drawn area in the initial image, and determining an image in the hand-drawn area as the target area image.
11. The image processing method according to claim 10, wherein the optimizing the target physiological image according to the target region image to obtain an optimized physiological image comprises:
determining the boundary of the hand-drawn area as the boundary information of the target area image;
and optimizing the boundary of the target physiological image according to the boundary information to obtain an optimized physiological image.
12. The image processing method according to claim 11, wherein the optimizing the boundary of the target physiological image according to the boundary information to obtain an optimized physiological image comprises:
determining a target optimized region image in the target physiological image;
determining, in the boundary information, a target boundary corresponding to the target optimized region image and an image to be connected corresponding to the target boundary;
and connecting the image to be connected with the target optimized region image to obtain the optimized physiological image.
13. The image processing method of claim 12, wherein the determining a target optimized region image in the target physiological image comprises:
determining a target optimization boundary in the target physiological image where a missing portion or a break exists;
and determining the image of the region where the target optimization boundary is located as the target optimized region image.
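Claims 11 to 13 repair a missing portion or break in the extracted structure by connecting an "image to be connected" at the target boundary. One minimal, hedged sketch of the connection step: bridging a detected break in a binary boundary mask with a straight segment between its two endpoints (endpoint detection is assumed done upstream; the names are illustrative):

```python
import numpy as np

def bridge_boundary_gap(boundary, p0, p1):
    """Fill a straight line of pixels between endpoints p0 and p1
    (row, col) of a break in a binary boundary mask, returning a
    connected copy of the mask."""
    out = boundary.copy()
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    out[rows, cols] = 1
    return out
```

A production system would more plausibly paste vessel fragments recovered from the hand-drawn region rather than draw straight segments, but the "connect across the target boundary" idea is the same.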
14. The image processing method according to any one of claims 1 to 13, wherein the acquiring a target physiological image within the physiological image comprises:
determining a corresponding preset algorithm model according to the physiological tissue type corresponding to the physiological image;
and inputting the physiological image into a preset algorithm model for image extraction so as to extract the target physiological image from the physiological image.
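Claim 14 selects a preset algorithm model keyed by physiological tissue type. The dispatch itself is a simple registry lookup; a sketch with hypothetical names (the patent does not specify how models are stored or keyed):

```python
def select_model(tissue_type, registry):
    """Look up the preset extraction model for a tissue type.
    `registry` maps tissue-type names to model callables."""
    try:
        return registry[tissue_type]
    except KeyError:
        raise ValueError(f"no preset model for tissue type {tissue_type!r}")
```

Usage would be `select_model("coronary", registry)(physiological_image)` to extract the target physiological image.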
15. The image processing method according to any one of claims 1 to 13, wherein the identifying the optimized physiological image to obtain an identification image corresponding to the target physiological tissue comprises:
inputting the optimized physiological image into a preset recognition model to obtain a recognition result;
and determining an identification image corresponding to the target physiological tissue according to the identification result, wherein the target physiological tissue comprises the physiological tissue of the focus part.
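Claim 15 turns the recognition model's output into an identification image of the lesion tissue. Assuming the preset recognition model emits a per-pixel probability map (an assumption; the patent does not specify the model's output format), the final step can be as simple as thresholding:

```python
import numpy as np

def identify_lesion(prob_map, threshold=0.5):
    """Convert a per-pixel lesion probability map into a binary
    identification image (1 = target physiological tissue)."""
    return (prob_map >= threshold).astype(np.uint8)
```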
16. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring a target area image and determining a physiological image corresponding to the target area image;
the second acquisition module is used for acquiring a target physiological image in the physiological image;
the optimization module is used for optimizing the target physiological image according to the target area image to obtain an optimized physiological image;
and the identification module is used for identifying the optimized physiological image so as to obtain an identification image corresponding to the target physiological tissue.
17. The image processing apparatus according to claim 16, wherein the first acquisition module includes:
the first determining submodule is used for determining image characteristic information corresponding to the target area image;
and the second determining submodule is used for determining the physiological image according to the image characteristic information.
18. The image processing apparatus of claim 16, wherein the first obtaining module is configured to:
determining a hand-drawn area in the initial image, and determining the image within the hand-drawn area as the target area image.
19. A computer-readable storage medium, in which a computer program is stored, the computer program being adapted to be loaded by a processor to perform the steps in the image processing method according to any one of claims 1 to 15.
20. An electronic device, characterized in that the electronic device comprises a memory storing a computer program and a processor, the processor performing the steps in the image processing method according to any one of claims 1 to 15 by calling the computer program stored in the memory.
CN202110991058.7A 2021-08-26 2021-08-26 Image processing method, image processing device, storage medium and electronic equipment Active CN113610840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110991058.7A CN113610840B (en) 2021-08-26 2021-08-26 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110991058.7A CN113610840B (en) 2021-08-26 2021-08-26 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113610840A true CN113610840A (en) 2021-11-05
CN113610840B CN113610840B (en) 2022-05-10

Family

ID=78309440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110991058.7A Active CN113610840B (en) 2021-08-26 2021-08-26 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113610840B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170201676A1 (en) * 2016-01-08 2017-07-13 Samsung Electronics Co., Ltd. Image processing apparatus and control method thereof
CN110610181A (en) * 2019-09-06 2019-12-24 腾讯科技(深圳)有限公司 Medical image identification method and device, electronic equipment and storage medium
CN112435263A (en) * 2020-10-30 2021-03-02 苏州瑞派宁科技有限公司 Medical image segmentation method, device, equipment, system and computer storage medium
CN112508010A (en) * 2020-11-30 2021-03-16 广州金域医学检验中心有限公司 Method, system, device and medium for identifying digital pathological section target area


Also Published As

Publication number Publication date
CN113610840B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
US10383602B2 (en) Apparatus and method for visualizing anatomical elements in a medical image
US10603134B2 (en) Touchless advanced image processing and visualization
US10362941B2 (en) Method and apparatus for performing registration of medical images
CN107980148B (en) System and method for fusing images to account for motion compensation
US10796498B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable medium
KR20140055152A (en) Apparatus and method for aiding lesion diagnosis
US10748282B2 (en) Image processing system, apparatus, method and storage medium
JP2022548237A (en) Interactive Endoscopy for Intraoperative Virtual Annotation in VATS and Minimally Invasive Surgery
CN111145160A (en) Method, device, server and medium for determining coronary artery branch where calcified area is located
US10290099B2 (en) Image processing device and image processing method
US20080285831A1 (en) Automatically updating a geometric model
US10699424B2 (en) Image processing apparatus, image processing method, and non-transitory computer readable medium with generation of deformed images
CN113610840B (en) Image processing method, image processing device, storage medium and electronic equipment
CN114612461A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113689355B (en) Image processing method, image processing device, storage medium and computer equipment
US20210192717A1 (en) Systems and methods for identifying atheromatous plaques in medical images
CN114463323B (en) Focal region identification method and device, electronic equipment and storage medium
US20240087304A1 (en) System for medical data analysis
JP2010075330A (en) Medical image processor and program
EP4356837A1 (en) Medical image diagnosis system, medical image diagnosis system evaluation method, and program
CN114092426A (en) Image association method and device, electronic equipment and storage medium
KR20220161990A (en) Method and apparatus for quantifying a size of a tissue of interest of a sick animal using an X-ray image thereof
CN117476185A (en) Medical image display method, device, electronic equipment and storage medium
KR20210073041A (en) Method for combined artificial intelligence segmentation of object searched on multiple axises and apparatus thereof
JP2011250812A (en) Medical image processing apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230112

Address after: 518026 Rongchao Economic and Trade Center A308-D9, No. 4028, Jintian Road, Fuzhong Community, Lianhua Street, Futian District, Shenzhen, Guangdong Province

Patentee after: Shukun (Shenzhen) Intelligent Network Technology Co.,Ltd.

Address before: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Patentee before: Shukun (Beijing) Network Technology Co.,Ltd.