CN112102929A - Medical image labeling method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112102929A
CN112102929A
Authority
CN
China
Prior art keywords
image
area
medical image
original training
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010956468.3A
Other languages
Chinese (zh)
Inventor
王冠
彭成宝
邱文旭
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd filed Critical Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202010956468.3A
Publication of CN112102929A
Legal status: Pending

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a medical image labeling method, device, storage medium and electronic device, including: performing region segmentation on a medical image to obtain a plurality of image regions; calculating the feature space distance between each pair of adjacent image regions according to the region image features of each image region; merging image regions whose feature space distance is smaller than a preset distance threshold into the same region; and receiving a point selection instruction and labeling information input by a user, and labeling the region containing the pixel selected by the point selection instruction according to the labeling information. In this way, automatic blocking and edge recognition of medical images can be realized, the complexity of medical image labeling is greatly reduced while the labeling precision is improved, and the shortage of accurately labeled medical images is alleviated.

Description

Medical image labeling method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computers, and in particular, to a medical image labeling method, device, storage medium, and electronic device.
Background
With the rapid development of medical imaging technology, medical images have been widely and deeply applied in clinical medicine. In the conventional mode of diagnosis and treatment based on medical images, a physician reads and interprets the medical image data and then makes a diagnosis and treatment judgment. This mode is inefficient and varies greatly between individuals: relying on personal experience, physicians are prone to missed diagnoses and misdiagnoses, and fatigue from prolonged image reading further reduces reading accuracy.
With the rise of artificial intelligence, a machine can screen and judge the image data in advance, mark the key suspicious regions, and then send the images to physicians for diagnosis and treatment, which greatly reduces the physicians' workload while producing comprehensive, stable, and efficient results. Artificial intelligence therefore has important application prospects in the field of medical imaging. However, most artificial-intelligence training depends on large amounts of labeled data, while existing labeled medical data are scarce and labeling is time-consuming and labor-intensive; this is a problem that urgently needs to be solved in current medical imaging.
Disclosure of Invention
The invention aims to provide a medical image labeling method, a medical image labeling device, a storage medium and electronic equipment, which can reduce the complexity of medical image labeling, improve the labeling precision and solve the problem of insufficient number of medical images with accurate labeling.
In order to achieve the above object, the present disclosure provides a medical image annotation method, the method comprising:
performing region segmentation on the medical image to obtain a plurality of image regions;
calculating the characteristic space distance between each adjacent image area according to the area image characteristics of each image area;
merging the image areas with the characteristic space distance smaller than a preset distance threshold into the same area;
receiving a point selection instruction and marking information input by a user, and marking an area where a pixel selected by the point selection instruction is located according to the marking information.
Optionally, before the step of calculating the feature space distance between each adjacent image region according to the region image feature of each image region, the method further includes:
performing feature extraction on the medical image according to a pre-trained feature extraction model to obtain pixel point image features of each pixel point in the medical image;
and regarding each image area, taking the average value of the pixel point image characteristics of all pixel points included in the image area as the area image characteristics of the image area.
Optionally, the feature extraction model is trained by:
transforming the original training sample through an image transformation module to obtain a target training sample;
and taking the target training sample as the input of the feature extraction model, taking the original training sample as the target output of the feature extraction model, and training the feature extraction model.
Optionally, the image transformation module is configured to perform at least one of a non-linear transformation, a local pixel reorganization, an outward filling, and an inward filling on the original training sample.
Optionally, in a case that the image transformation module is configured to perform a non-linear transformation on the original training sample, the transforming, by the image transformation module, the original training sample includes:
performing monotonic nonlinear transformation on the brightness characteristics in the original training sample according to a preset transformation function;
in a case that the image transformation module is used for outward padding or inward padding of the original training samples, the transforming, by the image transformation module, the original training samples comprises:
and filling the original training samples outwards or inwards to enable the ratio of the filled area to the total image area of the original training samples to be smaller than a preset proportion threshold value.
Optionally, before the step of merging the image regions with the feature space distance smaller than the preset distance threshold into the same region, the method further includes:
sorting the characteristic space distances between every two adjacent image areas according to the distance;
and taking the feature space distance at a preset sequence in the sequence as the preset distance threshold.
Optionally, the region segmenting the medical image comprises:
and carrying out region segmentation on the medical image according to a superpixel segmentation algorithm.
The present disclosure also provides a medical image annotation apparatus, the apparatus comprising:
the segmentation module is used for carrying out region segmentation on the medical image to obtain a plurality of image regions;
the calculation module is used for calculating the characteristic space distance between each two adjacent image areas according to the area image characteristics of each image area;
the merging module is used for merging the image areas with the characteristic space distance smaller than a preset distance threshold into the same area;
and the marking module is used for receiving a point selection instruction and marking information input by a user and marking the area where the pixel selected by the point selection instruction is located according to the marking information.
Optionally, before the calculating module calculates the feature space distance between each adjacent image region according to the region image feature of each image region, the apparatus further includes:
the feature extraction module is used for extracting features of the medical image according to a pre-trained feature extraction model so as to obtain pixel point image features of each pixel point in the medical image;
a feature determining module, configured to, for each image region, use an average value of the pixel point image features of all pixel points included in the image region as a region image feature of the image region.
Optionally, the feature extraction model is trained by:
transforming the original training sample through an image transformation module to obtain a target training sample;
and taking the target training sample as the input of the feature extraction model, taking the original training sample as the target output of the feature extraction model, and training the feature extraction model.
Optionally, the image transformation module is configured to perform at least one of a non-linear transformation, a local pixel reorganization, an outward filling, and an inward filling on the original training sample.
Optionally, in a case that the image transformation module is configured to perform a non-linear transformation on the original training sample, the transforming, by the image transformation module, the original training sample includes:
performing monotonic nonlinear transformation on the brightness characteristics in the original training sample according to a preset transformation function;
in a case that the image transformation module is used for outward padding or inward padding of the original training samples, the transforming, by the image transformation module, the original training samples comprises:
and filling the original training samples outwards or inwards to enable the ratio of the filled area to the total image area of the original training samples to be smaller than a preset proportion threshold value.
Optionally, before the merging module merges the image regions with the feature space distance smaller than the preset distance threshold into the same region, the apparatus further includes:
the sorting module is used for sorting the characteristic space distances between the adjacent image areas according to the distance;
a threshold determination module, configured to use the feature space distance located at a preset order in the sorting as the preset distance threshold.
Optionally, the segmentation module is further configured to: and carrying out region segmentation on the medical image according to a superpixel segmentation algorithm.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
The present disclosure also provides an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method described above.
Through the above technical scheme, after the medical image is preliminarily segmented, the image features are reused to further merge the segmentation result, realizing automatic blocking and edge recognition of the medical image. The tedious edge-tracing operation originally required to trace the boundary of a confirmed region when labeling a medical image is replaced by the user simply selecting any pixel point in any image region: the boundary of the region containing that pixel point is judged automatically, and the region is labeled according to the label data input by the user. This greatly reduces the complexity of medical image labeling while improving the labeling precision, and alleviates the shortage of accurately labeled medical images.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a medical image annotation method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a medical image annotation method according to yet another exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a training method of a feature extraction module in a medical image labeling method according to still another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a medical image annotation method according to yet another exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating a structure of a medical image labeling apparatus according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating a structure of a medical image labeling apparatus according to still another exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flowchart illustrating a medical image annotation method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method includes steps 101 to 104.
In step 101, a medical image is subjected to region segmentation to obtain a plurality of image regions.
The method for performing region segmentation on the medical image may be any method capable of performing region segmentation, such as a superpixel segmentation algorithm or a neural network segmentation model.
When the region of the medical image is segmented by the superpixel segmentation algorithm, the superpixel segmentation algorithm may include the following:
in the first step, the initial pixel size of the super-pixel is set to be N. If the medical image size is w × H, there are a total of about (w × H)/N superpixel seed points. Each super-pixel seed point is evenly distributed in the image, and the distance S between adjacent super-pixel seed points is approximately
Figure BDA0002678763570000061
Second, the seed points are reselected within 3 x 3 neighborhoods of the superpixel seed points. And calculating the gradient values of all pixel points in the neighborhood, and moving the super-pixel seed point to the place with the minimum gradient value of the pixel points in the neighborhood.
In the third step, the distance D between each superpixel seed point and the other pixel points in its 2S × 2S neighborhood is calculated by the following formulas:

d_c = l_j − l_i

d_s = √( (x_j − x_i)² + (y_j − y_i)² )

D = √( (d_c / m)² + (d_s / S)² )

where l_j and l_i are the luminance values of the pixel point and the current superpixel seed point, (x_j, y_j) and (x_i, y_i) are their coordinates, d_c denotes the luminance distance, d_s denotes the spatial distance, m is a parameter balancing the luminance space against the distance space (preferably, m = 40), and S is the distance between adjacent superpixel seed points.
In the fourth step, since each pixel point may be searched by several superpixel seed points, each pixel point has a distance D to each of the surrounding seed points, and the superpixel seed point c corresponding to the minimum distance is taken as the clustering center of that pixel point.
In the fifth step, the position of each superpixel seed point is updated to the centroid c = (c_x, c_y) of the pixel points clustered to it, as shown by the following expressions, and the assignment and update process above is repeated 10 times:

c_x = (1 / |C|) · Σ_{(x, y) ∈ C} x,   c_y = (1 / |C|) · Σ_{(x, y) ∈ C} y,

where C is the set of pixel points whose clustering center is that seed point.
and sixthly, enhancing connectivity. The following defects may occur through the iterative optimization: the situation that multiple connection situations occur, the super-pixel is undersized, a single super-pixel is cut into a plurality of discontinuous super-pixels and the like can be solved by enhancing the connection, and the discontinuous super-pixels and the undersized super-pixels are redistributed to adjacent super-pixels until the connection and the size of all super-pixels can meet the requirements.
The superpixels in the medical image obtained through the above steps are also image areas in the medical image.
In addition, when the distance between each superpixel seed point and the pixel points in its neighborhood is calculated in the third step, l_j and l_i may also be the color information of the two points, in which case d_c represents a color distance.
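The superpixel procedure above can be sketched in a simplified numpy form. This sketch is an illustration, not the patent's implementation: it searches all seed points rather than restricting to 2S × 2S neighborhoods, and it omits the gradient-based seed adjustment (step two) and the connectivity enhancement (step six); all function and parameter names are assumptions.

```python
import numpy as np

def simple_slic(image, n_init=64, m=40.0, n_iter=10):
    """Simplified SLIC-style superpixel sketch for a grayscale image.

    n_init: initial pixels per superpixel (N in the text), so seed
    spacing S ~ sqrt(N); m: luminance/space balance weight (m = 40).
    """
    h, w = image.shape
    S = int(np.sqrt(n_init))                       # seed spacing ~ sqrt(N)
    ys, xs = np.meshgrid(np.arange(S // 2, h, S),
                         np.arange(S // 2, w, S), indexing="ij")
    seeds = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    labels = np.zeros((h, w), dtype=int)
    for _ in range(n_iter):
        # assignment step: D = sqrt((d_c/m)^2 + (d_s/S)^2) to every seed,
        # each pixel keeps the seed with the minimum D as its cluster center
        best = np.full((h, w), np.inf)
        for k, (cy, cx) in enumerate(seeds):
            d_c = np.abs(image - image[int(cy), int(cx)])   # luminance distance
            d_s = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)  # spatial distance
            D = np.sqrt((d_c / m) ** 2 + (d_s / S) ** 2)
            mask = D < best
            best[mask] = D[mask]
            labels[mask] = k
        # update step: move each seed to the centroid of its cluster
        for k in range(len(seeds)):
            pts = np.argwhere(labels == k)
            if len(pts):
                seeds[k] = pts.mean(axis=0)
    return labels
```

Each connected label in the returned map corresponds to one superpixel, i.e. one image area of the medical image.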
In step 102, a feature space distance between each adjacent image area is calculated according to the area image features of each image area. The region image features may be any features such as color features, texture features, shape features, spatial relationship features, and the like of each image region, and the method for acquiring and describing the region image features may be any method as long as the region image features can represent information of the image region. The feature space distance between adjacent image regions can be calculated by any distance solving method such as euclidean distance or manhattan distance.
In step 103, the image regions with the characteristic space distance smaller than the preset distance threshold are merged into the same region.
The preset distance threshold may be a preset fixed distance threshold.
If the feature space distance between any two image regions is smaller than the preset distance threshold, the two image regions are merged into the same region. If the feature space distance between either of these two regions and a third image region is also smaller than the preset distance threshold, the third region is merged with them as well, yielding a merged new image region that contains the three original image regions.
For example, if the image area a is traversed, and other image areas adjacent to the image area a are determined, the feature space distances between image area B, image area C and image area a are all less than the preset distance threshold, then image areas A, B, C are all merged into the same image area, the image area B in the merged new image area may then be traversed, calculating the distance between the new image area B and the new image area B, if the characteristic space distance between the image area B and the image area except the image area A and the image area C is smaller than the preset distance threshold, if so, the image areas are also merged into the new image area, and then sequentially traversing the non-traversed image areas in the new image area one by one, so that all the image areas with the characteristic space distance smaller than the preset distance threshold can be respectively combined into the same image area.
In step 104, a point selection instruction and labeling information input by a user are received, and according to the labeling information, a region where a pixel selected by the point selection instruction is located is labeled.
After the medical image is divided into a plurality of image areas according to its image features in steps 101 to 103, each image area represents an area of the medical image with similar image features. Therefore, after a point selection instruction input by the user is received, the image area selected by the user can be determined from the pixel selected by the instruction together with the previously determined division of the medical image into image areas, and that image area can then be labeled according to the labeling information input by the user. The user is not required to perform a complicated boundary determination on the image area to be labeled before labeling it.
Through the above technical scheme, after the medical image is preliminarily segmented, the image features are reused to further merge the segmentation result, realizing automatic blocking and edge recognition of the medical image. The tedious edge-tracing operation originally required to trace the boundary of a confirmed region when labeling a medical image is replaced by the user simply selecting any pixel point in any image region: the boundary of the region containing that pixel point is judged automatically, and the region is labeled according to the label data input by the user. This greatly reduces the complexity of medical image labeling while improving the labeling precision, and alleviates the shortage of accurately labeled medical images.
Fig. 2 is a flowchart illustrating a medical image annotation method according to still another exemplary embodiment of the present disclosure, as shown in fig. 2, the method further includes step 201 and step 202.
In step 201, feature extraction is performed on the medical image according to a pre-trained feature extraction model to obtain pixel point image features of each pixel point in the medical image.
In step 202, for each of the image regions, an average value of the pixel point image features of all pixel points included in the image region is used as a region image feature of the image region.
That is, the regional image characteristics of each image region determined in step 101 may be determined according to the pixel image characteristics of all pixels included in each image region.
As shown in step 202, the region image feature of each image region may be determined as the average value of the pixel point image features of all pixel points in that region; alternatively, a weighted average or the median of those pixel point image features may be used as the region image feature.
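A minimal sketch of pooling per-pixel features into region features, assuming a per-pixel feature map from the extraction model and a label map from the segmentation step; the function name and the `reduce` parameter are illustrative:

```python
import numpy as np

def region_features(pixel_features, labels, reduce="mean"):
    """pixel_features: (H, W, d) per-pixel image features;
    labels: (H, W) image-region index per pixel.
    Returns {region_id: (d,) region image feature}."""
    out = {}
    for r in np.unique(labels):
        feats = pixel_features[labels == r]        # (n_pixels_in_region, d)
        if reduce == "mean":                        # average, as in step 202
            out[r] = feats.mean(axis=0)
        elif reduce == "median":                    # alternative from the text
            out[r] = np.median(feats, axis=0)
    return out
```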
The feature extraction model for obtaining the pixel point image features of each pixel point in the medical image can be a neural network model, such as U-Net or V-Net, and the extraction of the pixel point image features of each pixel point in the medical image can be realized through training.
The training of the feature extraction model may be to directly input the labeled training samples into the feature model to train the network, or may also be to train the feature extraction model according to the method shown in fig. 3, including step 301 and step 302.
In step 301, the original training sample is transformed by an image transformation module to obtain a target training sample.
In step 302, the target training sample is used as the input of the feature extraction model, and the original training sample is used as the target output of the feature extraction model, so as to train the feature extraction model.
The image transformation module may transform the original training sample by one or more methods, for example one or more of nonlinear transformation, local pixel reorganization, outward filling, and inward filling. The nonlinear transformation and the local pixel reorganization are distortion-based image transformation methods, while the outward filling and the inward filling are rendering-based image transformation methods, so the trained feature extraction model can learn different image features from each.
When the image transformation module transforms the original training sample by using a nonlinear transformation, the image transformation module may perform a monotonic nonlinear transformation on the luminance characteristics in the original training sample according to a preset transformation function, where the preset transformation function may be a bezier function as shown below:
B(t) = (1 − t)³ P₀ + 3(1 − t)² t P₁ + 3(1 − t) t² P₂ + t³ P₃,  t ∈ [0, 1],

where t is the luminance of the original training sample, B(t) is the luminance of the transformed training sample, and P₀, P₁, P₂, P₃ are random parameters. In addition, the image transformation module may also perform a monotonic nonlinear transformation on other image features in the original training sample, such as color features, according to the preset transformation function.
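The Bezier-based luminance transform can be sketched as follows, assuming luminance values already normalized to [0, 1]; the ordering condition on the control parameters that keeps B(t) monotonic is an assumption added for illustration, and all names are illustrative:

```python
import numpy as np

def bezier_luminance_transform(image, p0, p1, p2, p3):
    """Monotonic nonlinear luminance transform via a cubic Bezier curve.

    image: luminances in [0, 1]; p0..p3: random control parameters.
    Assumption: choosing p0 <= p1 <= p2 <= p3 keeps B monotonically
    increasing, so relative brightness ordering is preserved.
    """
    t = np.clip(image, 0.0, 1.0)
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)
```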
Under the condition that the image transformation module uses outward filling or inward filling when transforming the original training sample, the ratio of the filled area when the image transformation module performs outward filling or inward filling on the original training sample to the total image area of the original training sample is smaller than a preset proportion threshold. The preset proportion threshold value is preferably 25%.
In one possible embodiment, if the image transformation module applies multiple image transformation methods to the original training sample, the methods may be applied in order, with each subsequent method operating on the transformed training sample output by the previous one.
In one possible implementation, the loss function used in training the feature extraction model may be a Mean Square Error (MSE) loss function.
By transforming the original training sample, extracting features from the transformed target training sample, and training the feature extraction model on the loss between the original training sample and the model output, the feature extraction model can be trained even when few labeled medical images are available to serve as training samples.
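The transformation pipeline can be sketched as below. The concrete local-pixel-shuffle and inward-fill implementations, patch sizes, and all names are illustrative assumptions; the 25% area bound and the sequential composition of transforms come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_pixel_shuffle(img, n_patches=5, size=4):
    """Distortion-based transform: shuffle pixels inside small random patches."""
    out = img.copy()
    h, w = img.shape
    for _ in range(n_patches):
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        patch = out[y:y + size, x:x + size].ravel()  # copy of the patch
        rng.shuffle(patch)
        out[y:y + size, x:x + size] = patch.reshape(size, size)
    return out

def inward_fill(img, max_ratio=0.25, size=8):
    """Rendering-based transform: overwrite an interior block, keeping the
    filled area below max_ratio of the total image area (25% per the text)."""
    out = img.copy()
    h, w = img.shape
    assert size * size <= max_ratio * h * w
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out[y:y + size, x:x + size] = rng.random((size, size))
    return out

def compose(img, transforms):
    """Apply transforms in order; each later transform operates on the
    output of the previous one, as described above."""
    for t in transforms:
        img = t(img)
    return img
```

The composed output serves as the target training sample fed to the feature extraction model, with the untouched original sample as the target output.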
Fig. 4 is a flowchart illustrating a medical image annotation method according to yet another exemplary embodiment of the present disclosure. As shown in fig. 4, the method further includes step 401 and step 402.
In step 401, the feature space distances between adjacent image regions are sorted according to distance size.
In step 402, the feature space distance in the sequence at a preset order is taken as the preset distance threshold.
For example, for any image region S_i, the set of image regions adjacent to it may be:

R_i = {S_a, S_b, ..., S_m},

and the feature space distance between the image region S_i and an adjacent region, for example S_a, can be expressed as:

D_ia = L2(f_i, f_a),

where L2 denotes the Euclidean distance, f_i denotes the region image feature of S_i, and f_a denotes the region image feature of S_a.

The feature space distances between S_i and each of its adjacent image regions can then be collected into the following list 1:

D_i = {D_ia, D_ib, ..., D_im},

and the feature space distances between all pairs of adjacent image regions may then form, for example, the following list 2:

D = {D_1u, ..., D_ia, D_ib, ..., D_im, ..., D_tv},
The subscripts a, b, m, u, t, v may be determined according to the number of the plurality of image regions obtained by segmenting the medical image and the value of the feature space distance between the respective image regions.
In the case of obtaining the above list 2, all the feature space distances in list 2 are sorted by value, and the feature space distance at a preset order in the sorting is selected as the preset distance threshold. For example, if list 2 contains 100 feature space distances, they may be sorted in ascending order and the distance at the 40% position, that is, the 40th in the sorting, taken as the preset distance threshold.
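A minimal sketch of this adaptive threshold selection, assuming the flattened list of adjacent-region distances (list 2); the function and parameter names are illustrative:

```python
import numpy as np

def adaptive_threshold(distances, order_fraction=0.4):
    """Pick the merge threshold per image: sort all adjacent-region
    feature space distances and take the value at a preset order,
    e.g. the 40th of 100 when order_fraction = 0.4."""
    d = np.sort(np.asarray(distances))
    idx = max(0, int(len(d) * order_fraction) - 1)  # 40% of 100 -> 40th value
    return d[idx]
```

Setting `order_fraction=0.5` gives roughly the median variant, and because the threshold is computed from each image's own distance list, every medical image to be labeled gets its own threshold.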
In a possible embodiment, the preset distance threshold may also be a median value in the ranking.
Through the above technical scheme, before the divided image regions are merged, the preset distance threshold used for merging can be determined from the distance relations among the image regions actually contained in the medical image. A preset distance threshold can thus be determined separately for each medical image to be labeled, so that the image regions in each medical image are divided more accurately and the labeling precision of the medical image is further ensured.
Fig. 5 is a block diagram illustrating a structure of a medical image labeling apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: a segmentation module 10, configured to perform region segmentation on a medical image to obtain a plurality of image regions; a calculating module 20, configured to calculate a feature space distance between each adjacent image area according to the area image feature of each image area; a merging module 30, configured to merge the image regions with the feature space distance smaller than a preset distance threshold into a same region; and the marking module 40 is configured to receive a point selection instruction and marking information input by a user, and mark an area where a pixel selected by the point selection instruction is located according to the marking information.
With this technical scheme, after the medical image is preliminarily segmented, the image features are reused to further merge the segmentation result, realizing automatic segmentation and edge identification of the medical image and greatly reducing the complexity of medical image labeling. The laborious edge-tracing operation along a confirmed region boundary in conventional medical image labeling is replaced by the user selecting any pixel point in any image region: the boundary of the region where the pixel point is located is determined automatically, and that region is labeled according to the label data input by the user. Labeling precision is thereby improved while the labeling complexity of the medical image is reduced, alleviating the shortage of accurately labeled medical images.
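The merge-then-click-label flow above can be sketched minimally as follows. The union-find bookkeeping, the adjacency/distance data layout, and the "lesion" label are illustrative assumptions; the disclosure does not prescribe a particular data structure.

```python
# Sketch: merge adjacent regions whose feature space distance is below
# the preset threshold, then label a whole merged region from one click.

class RegionMerger:
    def __init__(self, n_regions):
        self.parent = list(range(n_regions))  # union-find over regions

    def find(self, r):
        while self.parent[r] != r:
            self.parent[r] = self.parent[self.parent[r]]  # path halving
            r = self.parent[r]
        return r

    def merge_close_regions(self, adjacent_pairs, distance, threshold):
        for a, b in adjacent_pairs:
            if distance[(a, b)] < threshold:
                self.parent[self.find(a)] = self.find(b)

def label_clicked_region(merger, region_of_pixel, clicked_pixel, label, labels):
    # The user clicks any pixel; the whole merged region it belongs
    # to receives the annotation information.
    labels[merger.find(region_of_pixel[clicked_pixel])] = label

merger = RegionMerger(4)
merger.merge_close_regions([(0, 1), (1, 2), (2, 3)],
                           {(0, 1): 0.2, (1, 2): 0.9, (2, 3): 0.1},
                           threshold=0.5)
labels = {}
region_of_pixel = {(5, 7): 0}          # toy pixel -> initial region map
label_clicked_region(merger, region_of_pixel, (5, 7), "lesion", labels)
print(labels)  # -> {1: 'lesion'}: one merged region labeled by a single click
```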
Fig. 6 is a block diagram illustrating the structure of a medical image labeling apparatus according to still another exemplary embodiment of the present disclosure. As shown in fig. 6, before the calculating module 20 calculates the feature space distance between each pair of adjacent image regions according to the region image features of each image region, the apparatus further includes: a feature extraction module 50, configured to perform feature extraction on the medical image according to a pre-trained feature extraction model to obtain the pixel point image feature of each pixel point in the medical image; and a feature determining module 60, configured to, for each image region, use the average value of the pixel point image features of all pixel points included in the image region as the region image feature of that image region.
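The averaging step performed by the feature determining module can be sketched as below; the array shapes and the 8-dimensional feature size are illustrative assumptions.

```python
import numpy as np

# Sketch: the region image feature is the mean of the pixel point image
# features of all pixels belonging to that region.

def region_features(pixel_features, region_map, n_regions):
    """pixel_features: (H, W, C) per-pixel features from the extractor;
    region_map: (H, W) region index per pixel.
    Returns an (n_regions, C) array of region image features."""
    feats = np.zeros((n_regions, pixel_features.shape[-1]))
    for r in range(n_regions):
        feats[r] = pixel_features[region_map == r].mean(axis=0)
    return feats

pixel_features = np.random.rand(4, 4, 8)
region_map = np.zeros((4, 4), dtype=int)
region_map[:, 2:] = 1                      # two toy regions
print(region_features(pixel_features, region_map, 2).shape)  # -> (2, 8)
```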
In one possible embodiment, the feature extraction model is trained by: transforming the original training sample through an image transformation module to obtain a target training sample; and taking the target training sample as the input of the feature extraction model, taking the original training sample as the target output of the feature extraction model, and training the feature extraction model.
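The self-supervised scheme above (transformed sample as input, original sample as reconstruction target) can be sketched as follows. The tiny convolutional network, the squaring transform, and the MSE loss are illustrative assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

# Sketch: train a stand-in feature extraction model to reconstruct the
# original training sample from its transformed version.

model = nn.Sequential(                      # stand-in feature extractor
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

original = torch.rand(4, 1, 32, 32)         # toy original training samples
transformed = original ** 2.0               # stand-in image transformation

for _ in range(2):                          # a couple of toy steps
    optimizer.zero_grad()
    reconstruction = model(transformed)     # input: transformed sample
    loss = loss_fn(reconstruction, original)  # target: original sample
    loss.backward()
    optimizer.step()
```

Because the target is the untransformed image, the model is pushed to learn features that capture image content rather than the particular distortion.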
In one possible embodiment, the image transformation module is configured to perform at least one of a nonlinear transformation, a local pixel reorganization, an outward padding, and an inward padding on the original training sample.
In a possible implementation, in a case that the image transformation module is configured to perform a nonlinear transformation on the original training sample, transforming the original training sample through the image transformation module includes: performing a monotonic nonlinear transformation on the brightness features of the original training sample according to a preset transformation function. In a case that the image transformation module is configured to pad the original training sample outward or inward, transforming the original training sample through the image transformation module includes: padding the original training sample outward or inward so that the ratio of the padded area to the total image area of the original training sample is smaller than a preset proportion threshold.
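Two of these transformations can be sketched as follows. Using a gamma curve as the monotonic function and a 0.25 area-ratio bound are illustrative assumptions; the disclosure only requires monotonicity and a padded-area ratio below the preset proportion threshold.

```python
import numpy as np

# Sketches of the monotonic nonlinear brightness transform and an
# inward-fill transform with a bounded filled-area ratio.

def nonlinear_brightness(image, gamma=0.7):
    """Monotonic nonlinear transform of intensities in [0, 1]."""
    return image ** gamma  # strictly increasing, order-preserving

def inward_fill(image, fill_value=0.0, max_area_ratio=0.25, rng=None):
    """Overwrite a random patch whose area stays below the ratio bound."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    side = max_area_ratio ** 0.5            # patch side fraction so that
    ph = int(h * side) - 1                  # (ph * pw) / (h * w) is
    pw = int(w * side) - 1                  # strictly below max_area_ratio
    y = rng.integers(0, h - ph)
    x = rng.integers(0, w - pw)
    out = image.copy()
    out[y:y + ph, x:x + pw] = fill_value
    return out

image = np.random.default_rng(1).random((32, 32))
bright = nonlinear_brightness(image)
filled = inward_fill(image)
```

Monotonicity keeps the relative ordering of intensities, so anatomy remains recognizable while absolute brightness statistics change.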
In a possible implementation, as shown in fig. 6, before the merging module 30 merges the image regions whose feature space distance is smaller than the preset distance threshold into the same region, the apparatus further includes: a sorting module 70, configured to sort the feature space distances between each pair of adjacent image regions by value; and a threshold determining module 80, configured to use the feature space distance at a preset position in the sorted order as the preset distance threshold.
In a possible implementation, the segmentation module 10 is further configured to perform region segmentation on the medical image according to a superpixel segmentation algorithm.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps in the medical image annotation method. The memory 702 is used to store various types of data to support operation at the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the medical image labeling methods described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the medical image annotation method described above is also provided. For example, the computer readable storage medium may be the above-mentioned memory 702 comprising program instructions executable by the processor 701 of the electronic device 700 to perform the above-mentioned medical image annotation method.
Fig. 8 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. For example, the electronic device 800 may be provided as a server. Referring to fig. 8, an electronic device 800 includes a processor 822, which may be one or more in number, and a memory 832 for storing computer programs executable by the processor 822. The computer programs stored in memory 832 may include one or more modules that each correspond to a set of instructions. Further, the processor 822 may be configured to execute the computer program to perform the medical image annotation method described above.
Additionally, the electronic device 800 may also include a power component 826 and a communication component 850. The power component 826 may be configured to perform power management of the electronic device 800, and the communication component 850 may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 800. The electronic device 800 may also include input/output (I/O) interfaces 858. The electronic device 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, and so on.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the medical image annotation method described above is also provided. For example, the computer readable storage medium may be the memory 832 comprising program instructions executable by the processor 822 of the electronic device 800 to perform the medical image annotation method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the medical image annotation method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A method of medical image annotation, the method comprising:
performing region segmentation on the medical image to obtain a plurality of image regions;
calculating the characteristic space distance between each adjacent image area according to the area image characteristics of each image area;
merging the image areas with the characteristic space distance smaller than a preset distance threshold into the same area;
receiving a point selection instruction and marking information input by a user, and marking an area where a pixel selected by the point selection instruction is located according to the marking information.
2. The method of claim 1, wherein prior to the step of calculating a feature space distance between each adjacent image region based on the region image features of each of the image regions, the method further comprises:
performing feature extraction on the medical image according to a pre-trained feature extraction model to obtain pixel point image features of each pixel point in the medical image;
and regarding each image area, taking the average value of the pixel point image characteristics of all pixel points included in the image area as the area image characteristics of the image area.
3. The method of claim 2, wherein the feature extraction model is trained by:
transforming the original training sample through an image transformation module to obtain a target training sample;
and taking the target training sample as the input of the feature extraction model, taking the original training sample as the target output of the feature extraction model, and training the feature extraction model.
4. The method of claim 3, wherein the image transformation module is configured to perform at least one of a non-linear transformation, a local pixel reorganization, an outward padding, and an inward padding on the original training samples.
5. The method of claim 4, wherein in a case where the image transformation module is used to transform the original training samples non-linearly, the transforming the original training samples by the image transformation module comprises:
performing monotonic nonlinear transformation on the brightness characteristics in the original training sample according to a preset transformation function;
in a case that the image transformation module is used for outward padding or inward padding of the original training samples, the transforming, by the image transformation module, the original training samples comprises:
and filling the original training samples outwards or inwards to enable the ratio of the filled area to the total image area of the original training samples to be smaller than a preset proportion threshold value.
6. The method according to claim 1, wherein before the step of merging the image regions with the feature space distance smaller than a preset distance threshold into the same region, the method further comprises:
sorting the characteristic space distances between every two adjacent image areas according to the distance;
and taking the feature space distance at a preset sequence in the sequence as the preset distance threshold.
7. The method of claim 1, wherein the region segmenting the medical image comprises:
and carrying out region segmentation on the medical image according to a superpixel segmentation algorithm.
8. A medical image annotation apparatus, characterized in that the apparatus comprises:
the segmentation module is used for carrying out region segmentation on the medical image to obtain a plurality of image regions;
the calculation module is used for calculating the characteristic space distance between each two adjacent image areas according to the area image characteristics of each image area;
the merging module is used for merging the image areas with the characteristic space distance smaller than a preset distance threshold into the same area;
and the marking module is used for receiving a point selection instruction and marking information input by a user and marking the area where the pixel selected by the point selection instruction is located according to the marking information.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202010956468.3A 2020-09-11 2020-09-11 Medical image labeling method and device, storage medium and electronic equipment Pending CN112102929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010956468.3A CN112102929A (en) 2020-09-11 2020-09-11 Medical image labeling method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010956468.3A CN112102929A (en) 2020-09-11 2020-09-11 Medical image labeling method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112102929A true CN112102929A (en) 2020-12-18

Family

ID=73752096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010956468.3A Pending CN112102929A (en) 2020-09-11 2020-09-11 Medical image labeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112102929A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926677A (en) * 2021-03-24 2021-06-08 中国医学科学院医学信息研究所 Information labeling method, device and system for medical image data
CN113469972A (en) * 2021-06-30 2021-10-01 沈阳东软智能医疗科技研究院有限公司 Method, device, storage medium and electronic equipment for labeling medical slice image
CN113838061A (en) * 2021-07-28 2021-12-24 中科云谷科技有限公司 Method and device for image annotation and storage medium
CN114881917A (en) * 2022-03-17 2022-08-09 深圳大学 Thrombolytic curative effect prediction method based on self-supervision and semantic segmentation and related device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102737379A (en) * 2012-06-07 2012-10-17 中山大学 Captive test (CT) image partitioning method based on adaptive learning
CN107481248A (en) * 2017-07-28 2017-12-15 桂林电子科技大学 A kind of extracting method of salient region of image
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN107886512A (en) * 2016-09-29 2018-04-06 法乐第(北京)网络科技有限公司 A kind of method for determining training sample
CN109145921A (en) * 2018-08-29 2019-01-04 江南大学 A kind of image partition method based on improved intuitionistic fuzzy C mean cluster
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion
CN110570352A (en) * 2019-08-26 2019-12-13 腾讯科技(深圳)有限公司 image labeling method, device and system and cell labeling method
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN110689548A (en) * 2019-09-29 2020-01-14 浪潮电子信息产业股份有限公司 Medical image segmentation method, device, equipment and readable storage medium
CN111161275A (en) * 2018-11-08 2020-05-15 腾讯科技(深圳)有限公司 Method and device for segmenting target object in medical image and electronic equipment
CN111369576A (en) * 2020-05-28 2020-07-03 腾讯科技(深圳)有限公司 Training method of image segmentation model, image segmentation method, device and equipment
CN111563442A (en) * 2020-04-29 2020-08-21 上海交通大学 Slam method and system for fusing point cloud and camera image data based on laser radar
CN111598900A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image region segmentation model training method, segmentation method and device

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102737379A (en) * 2012-06-07 2012-10-17 中山大学 Captive test (CT) image partitioning method based on adaptive learning
CN107886512A (en) * 2016-09-29 2018-04-06 法乐第(北京)网络科技有限公司 A kind of method for determining training sample
CN107481248A (en) * 2017-07-28 2017-12-15 桂林电子科技大学 A kind of extracting method of salient region of image
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN109145921A (en) * 2018-08-29 2019-01-04 江南大学 A kind of image partition method based on improved intuitionistic fuzzy C mean cluster
CN111161275A (en) * 2018-11-08 2020-05-15 腾讯科技(深圳)有限公司 Method and device for segmenting target object in medical image and electronic equipment
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion
CN110570352A (en) * 2019-08-26 2019-12-13 腾讯科技(深圳)有限公司 image labeling method, device and system and cell labeling method
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN110689548A (en) * 2019-09-29 2020-01-14 浪潮电子信息产业股份有限公司 Medical image segmentation method, device, equipment and readable storage medium
CN111563442A (en) * 2020-04-29 2020-08-21 上海交通大学 Slam method and system for fusing point cloud and camera image data based on laser radar
CN111598900A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image region segmentation model training method, segmentation method and device
CN111369576A (en) * 2020-05-28 2020-07-03 腾讯科技(深圳)有限公司 Training method of image segmentation model, image segmentation method, device and equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
棉花糖灬: "Zhou Zongwei — Notes on a Report on 3D Transfer Learning", page 1, Retrieved from the Internet <URL:https://blog.csdn.net/zuzhiang/article/details/103114414> *
SHEN, Nan: "Experimental Research on Preprocessing and Segmentation Algorithms for Biomedical Images", Journal of Huaibei Vocational and Technical College, no. 04
"Image Pattern Recognition", 31 May 2020, Xidian University Press, page 60, second-to-last paragraph
ZHENG, Haiyan: "Pre-trained Models for 3D Medical Images", page 94, Retrieved from the Internet <URL:http://www.360doc.com/content/20/0505/12/32196507_910332499.shtml> *
QIAN, Junren et al.: "Simple Linear Iterative Clustering Superpixel Segmentation of Color Images Based on Color and Spatial Information", Journal of Computer Applications, vol. 37, no. 2, page 151
MA, Zirui: "Research on Medical Image Segmentation Methods Based on Mathematical Morphology", Computer and Information Technology, no. 02

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926677A (en) * 2021-03-24 2021-06-08 中国医学科学院医学信息研究所 Information labeling method, device and system for medical image data
CN112926677B (en) * 2021-03-24 2024-02-02 中国医学科学院医学信息研究所 Information labeling method, device and system for medical image data
CN113469972A (en) * 2021-06-30 2021-10-01 沈阳东软智能医疗科技研究院有限公司 Method, device, storage medium and electronic equipment for labeling medical slice image
CN113469972B (en) * 2021-06-30 2024-04-23 沈阳东软智能医疗科技研究院有限公司 Method and device for labeling medical slice image, storage medium and electronic equipment
CN113838061A (en) * 2021-07-28 2021-12-24 中科云谷科技有限公司 Method and device for image annotation and storage medium
CN114881917A (en) * 2022-03-17 2022-08-09 深圳大学 Thrombolytic curative effect prediction method based on self-supervision and semantic segmentation and related device

Similar Documents

Publication Publication Date Title
CN110874594B (en) Human body appearance damage detection method and related equipment based on semantic segmentation network
CN109389129B (en) Image processing method, electronic device and storage medium
CN108765278B (en) Image processing method, mobile terminal and computer readable storage medium
CN112102929A (en) Medical image labeling method and device, storage medium and electronic equipment
JP7026826B2 (en) Image processing methods, electronic devices and storage media
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
KR101640998B1 (en) Image processing apparatus and image processing method
US11586863B2 (en) Image classification method and device
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN109583345B (en) Road recognition method, device, computer device and computer readable storage medium
JP5226119B2 (en) Method and system for dividing objects based on hybrid labels
WO2019197021A1 (en) Device and method for instance-level segmentation of an image
CN112489143A (en) Color identification method, device, equipment and storage medium
CN112529913A (en) Image segmentation model training method, image processing method and device
WO2021027152A1 (en) Image synthesis method based on conditional generative adversarial network, and related device
CN113838061A (en) Method and device for image annotation and storage medium
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN113411550B (en) Video coloring method, device, equipment and storage medium
US11869125B2 (en) Generating composite images with objects from different times
CN113763370A (en) Digital pathological image processing method and device, electronic equipment and storage medium
CN116363374B (en) Image semantic segmentation network continuous learning method, system, equipment and storage medium
CN112906517A (en) Self-supervision power law distribution crowd counting method and device and electronic equipment
CN113837236B (en) Method and device for identifying target object in image, terminal equipment and storage medium
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
WO2023272495A1 (en) Badging method and apparatus, badge detection model update method and system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination