CN112418263A - Medical image lesion segmentation and labeling method and system

Medical image lesion segmentation and labeling method and system

Info

Publication number
CN112418263A
Authority
CN
China
Prior art keywords
lesion
medical image
labeling
results
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011079647.XA
Other languages
Chinese (zh)
Inventor
杨志文
王欣
黄烨霖
姚轩
贺婉佶
熊健皓
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202011079647.XA
Publication of CN112418263A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

An embodiment of the invention provides a method and system for segmenting and labeling lesions in medical images, and relates to the technical field of medical image annotation. The method comprises the following steps: distributing medical images to annotators according to annotation applications sent by the annotators, so that the annotators perform lesion edge segmentation annotation and lesion category annotation on the distributed medical images, each medical image being distributed to at least two annotators; when the annotation results uploaded by all annotators assigned to one medical image have been received, judging whether all the received annotation results of that image meet a preset consistency requirement; when all the annotation results of the medical image meet the preset consistency requirement, storing them, and otherwise returning all the annotation results of the image to each annotator. By cross-validating the segmentation annotations of multiple annotators, the invention ensures the accuracy and consistency of medical image lesion segmentation annotation results.

Description

Medical image lesion segmentation and labeling method and system
Technical Field
The invention relates to the technical field of medical image annotation, and in particular to a medical image lesion segmentation and labeling method and system.
Background
In recent years, machine learning techniques have been widely applied in the medical field, and machine learning represented by deep learning has been widely used in medical imaging in particular. For certain single diseases, a fully trained deep learning model can reach, or even exceed, the classification and recognition performance of an ordinary outpatient doctor. However, this does not exploit the full potential of deep learning. Current deep learning techniques cover classification, detection, segmentation and other subfields. A segmentation algorithm classifies an image at the pixel level, so it can provide not only coarse-grained classification results but also accurate quantitative results, which is more valuable for evaluating the severity of a lesion in a medical image. A human doctor can only give a subjective severity assessment, which is neither as objective nor as accurate as an algorithm.
The high performance of a deep learning model cannot be achieved without the support of a large amount of accurately labeled training data, especially for pixel-level segmentation data. In the traditional approach, professionally trained doctors must label images on specific, isolated equipment, and the workflow of the whole labeling system is time-consuming and labor-intensive. Medical image segmentation labeling requires accurately delineating the edge of each lesion. Unlike ordinary natural images, lesion edges in medical images are interpreted differently by different doctors, and individual doctors may even disagree on whether a lesion exists at all. Segmentation annotation data produced by a single doctor therefore contains considerable noise and has low accuracy, which adversely affects the segmentation model.
Disclosure of Invention
The invention provides a medical image lesion segmentation and labeling method and system, which solve the problems in the prior art of high noise and low accuracy in medical image lesion segmentation labeled by a single doctor.
In a first aspect of the present invention, a medical image lesion segmentation and labeling method applied to a server is provided, comprising:
distributing medical images to annotators according to annotation applications sent by annotators with registered accounts, so that the annotators perform lesion edge segmentation annotation and lesion category annotation on the distributed medical images, wherein each medical image is distributed to at least two annotators;
receiving the annotation results of the medical images uploaded by the annotators, and, when the annotation results uploaded by all annotators assigned to one medical image have been received, judging whether all the received annotation results of that medical image meet a preset consistency requirement;
when all the annotation results of the medical image meet the preset consistency requirement, storing all the annotation results of the medical image; otherwise, returning all the annotation results of the medical image to each annotator.
Preferably, the step of judging whether all the annotation results of the medical image meet the preset consistency requirement includes:
judging, according to the lesion category annotations, whether the sets of lesion categories contained in the medical image as indicated by the annotation results are all the same;
when any lesion category indicated by any annotation result differs from the other annotation results, determining that the annotation results do not meet the preset consistency requirement;
when the lesion categories indicated by all annotation results are the same, determining, according to the lesion edge segmentation annotations, the region range of each category of lesion indicated by each annotation result, and judging whether all the annotation results meet the preset consistency requirement according to the region range of each category of lesion indicated by each annotation result.
Preferably, the step of judging whether all the annotation results meet the preset consistency requirement according to the region range of each category of lesion indicated by each annotation result includes:
obtaining, for each lesion category, the intersection region and the union region formed between the region ranges indicated by the different annotation results, and obtaining the ratio of the intersection region range parameter to the union region range parameter for that category, wherein the range parameter includes an area;
judging whether, for each lesion category, the ratio of the intersection region range parameter to the union region range parameter is greater than or equal to the threshold corresponding to that category, each lesion category corresponding to its own threshold;
when the ratio of the intersection region range parameter to the union region range parameter of any lesion category is smaller than the threshold corresponding to that category, determining that the annotation results do not meet the preset consistency requirement; otherwise, determining that all the annotation results meet the preset consistency requirement.
Preferably, when all the annotation results of the medical image are returned to each annotator, the method further includes:
sending, to each annotator, the ratio of the intersection region range parameter to the union region range parameter of each lesion category together with the corresponding threshold of each category, and prompting each annotator with the lesion categories whose ratio is smaller than the threshold.
Preferably, the method further includes:
receiving threshold adjustment information fed back by an annotator, and adjusting the threshold corresponding to each lesion category according to the threshold adjustment information.
Preferably, after all the annotation results of the medical image are returned to each annotator, the method further includes:
receiving an annotation result of the medical image uploaded again by an annotator, or receiving a request for expert arbitration sent by an annotator;
when the request for expert arbitration is received, submitting all the annotation results of the medical image to an arbitration expert, so that the arbitration expert determines the correct annotation result according to all the annotation results of the medical image;
when the annotation result of the medical image uploaded by the arbitration expert is received, storing the annotation result uploaded by the arbitration expert.
Preferably, when the request for expert arbitration is received, the method further includes:
deleting the medical image that has been submitted for expert arbitration from the accounts of all annotators.
Preferably, the step of distributing medical images to an annotator according to the annotation application sent by the annotator with a registered account includes:
distributing, from the medical image data that has not yet been annotated, a corresponding number of medical images to the annotator according to the number of annotations indicated in the annotation application.
In a second aspect of the present invention, a medical image lesion segmentation and labeling system applied to a server is also provided, comprising:
an allocation module, configured to distribute medical images to annotators according to annotation applications sent by annotators with registered accounts, so that the annotators segment the lesions in the distributed medical images and then annotate the lesion edge and lesion category of each lesion, wherein each medical image is distributed to at least two annotators;
a judging module, configured to receive the annotation results of the medical images uploaded by the annotators and, when the annotation results uploaded by all annotators assigned to one medical image have been received, judge whether the annotation results of all annotators meet a preset consistency requirement;
a storage module, configured to store the annotation results of all annotators when they meet the preset consistency requirement, and otherwise return the annotation results of all annotators to each annotator.
In a third aspect of the present invention, a medical image lesion segmentation and labeling method applied to an electronic device is also provided, comprising:
when an annotation application input by an annotator is received, sending the annotation application to a server and receiving the medical images distributed by the server according to the annotation application, wherein each medical image is distributed to at least two annotators;
taking one or more regions selected by the annotator on a medical image as lesion regions;
performing lesion edge segmentation annotation on each lesion region according to an edge segmentation algorithm or a delineation operation input by the annotator, and receiving the lesion category label selected by the annotator for each lesion region, to obtain the annotation result of the medical image;
after a submission request input by the annotator is received, uploading the annotation result of the medical image to the server, so that, when the server has received the annotation results uploaded by all annotators assigned to one medical image, it judges whether the annotation results of all annotators meet a preset consistency requirement, stores the annotation results of all annotators when they do, and otherwise returns the annotation results of all annotators to each annotator.
Preferably, the step of performing lesion edge segmentation annotation on each lesion region according to an edge segmentation algorithm includes:
displaying a plurality of pre-stored edge segmentation algorithms, and pre-segmenting and annotating the edge of each lesion region according to the edge segmentation algorithm selected by the annotator for that region;
when a drag operation by the annotator on the edge of any lesion region is received, adjusting the edge pre-segmentation annotation of that lesion region according to the drag operation;
when a parameter adjustment operation by the annotator on the edge segmentation algorithm of any lesion region is received, adjusting the edge pre-segmentation annotation of that lesion region according to the adjusted parameters and the edge segmentation algorithm of that region.
Preferably, the step of pre-segmenting and annotating the edge of each lesion region according to the edge segmentation algorithm selected by the annotator for each lesion region includes:
when the annotator selects multiple edge segmentation algorithms for any lesion region, pre-segmenting the edge of that lesion region with each of the selected algorithms to obtain multiple edge pre-segmentation results;
pre-segmenting and annotating the edge of the lesion region according to the result selected by the annotator from the multiple edge pre-segmentation results.
Preferably, the method further includes:
receiving and displaying all the annotation results of the medical image returned by the server;
when an adjustment operation by the annotator on the previously submitted annotation result of the medical image is received, uploading the adjusted annotation result of the medical image to the server;
when a request for expert arbitration input by the annotator is received, uploading the request for expert arbitration to the server.
In a fourth aspect of the present invention, a medical image lesion segmentation and labeling system applied to an electronic device is also provided, comprising:
a first transceiver module, configured to send an annotation application input by an annotator to a server when the application is received, and to receive the medical images distributed by the server according to the annotation application, wherein each medical image is distributed to at least two annotators;
a lesion selection module, configured to take one or more regions selected by the annotator on a medical image as lesion regions;
a lesion annotation module, configured to perform edge segmentation annotation on each lesion region according to an edge segmentation algorithm or a delineation operation input by the annotator, and to receive the lesion category label selected by the annotator for each lesion region, to obtain the annotation result of the medical image;
a second transceiver module, configured to upload the annotation result of the medical image to the server after a submission request input by the annotator is received, so that, when the server has received the annotation results uploaded by all annotators assigned to one medical image, it judges whether the annotation results of all annotators meet a preset consistency requirement, stores the annotation results of all annotators when they do, and otherwise returns the annotation results of all annotators to each annotator.
Compared with the prior art, the invention has the following advantages:
In the embodiment of the invention, medical images are first distributed to annotators according to annotation applications sent by annotators with registered accounts, so that the annotators perform lesion edge segmentation annotation and lesion category annotation on the distributed medical images, each medical image being distributed to at least two annotators. The annotation results of the medical images uploaded by the annotators are then received, and, when the annotation results uploaded by all annotators assigned to one medical image have been received, it is judged whether all the received annotation results of that medical image meet a preset consistency requirement. When all the annotation results of the medical image meet the preset consistency requirement, they are stored; otherwise, all the annotation results of the medical image are returned to each annotator. By cross-validating the segmentation annotations of multiple annotators in this way, the accuracy and consistency of medical image lesion segmentation annotation results are ensured, and the data noise produced by a single doctor's annotation is avoided.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features and advantages of the present invention may be more readily appreciated, embodiments of the invention are described below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly described below.
Fig. 1 is a schematic flowchart of a medical image lesion segmentation labeling method applied to a server according to an embodiment of the present invention;
fig. 2 is a schematic view of a lesion segmentation and labeling in a medical image according to an embodiment of the present invention;
fig. 3 is another flowchart of a medical image lesion segmentation labeling method applied to a server according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a medical image lesion segmentation labeling system applied to a server according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a server according to an embodiment of the present invention.
Fig. 6 is a flowchart illustrating a medical image lesion segmentation and labeling method applied to an electronic device according to an embodiment of the present invention;
FIG. 7 is a block diagram illustrating a medical image lesion segmentation and labeling system applied to an electronic device according to an embodiment of the present invention;
fig. 8 is a schematic block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a schematic flow chart of a medical image lesion segmentation and labeling method according to an embodiment of the present invention, and referring to fig. 1, the medical image lesion segmentation and labeling method is applied to a server, and includes:
step 101: distributing medical images to the annotators according to annotation applications sent by the annotators with the registered accounts, and enabling the annotators to carry out focus edge segmentation annotation and focus category annotation on the distributed medical images; wherein one medical image is assigned to at least two annotators.
Here, the annotator can submit an annotation application on any networked electronic device through a registered account of the annotator, and the server distributes the medical image to the annotator according to the annotation application after receiving the annotation application.
Specifically, the step of distributing medical images to an annotator according to the annotation application sent by the annotator with a registered account may include:
distributing, from the medical image data that has not yet been annotated, a corresponding number of medical images to the annotator according to the number of annotations indicated in the annotation application.
Here, the annotator can set the number of medical images to annotate when submitting the annotation application, according to his or her own schedule, and the system background automatically distributes a corresponding number of images to the annotator from the un-annotated medical image data, according to the completion status of the remaining annotation tasks.
Each medical image is distributed to at least two annotators, so that accuracy is ensured through multi-annotator labeling. In a specific application, the server can also receive, from a system administrator, the number of annotators that jointly label each image in each batch of annotation tasks; the administrator can set this number flexibly according to the requirements of the task, and 2-3 annotators per image is usually best.
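By way of illustration only, the following minimal sketch shows one way the allocation rule described above could be implemented; the function name and the data structures (a map from each image to the set of annotators already assigned to it) are assumptions for illustration and are not part of the original disclosure.

```python
# A minimal sketch (assumed data structures) of the allocation rule: the
# annotator receives the number of images requested in the annotation
# application, drawn from images that still need annotators, and every image
# ends up with `annotators_per_image` annotators (set per batch, usually 2-3).
def allocate_images(unfinished_images, assignments, annotator_id,
                    requested_count, annotators_per_image=2):
    """unfinished_images: iterable of image ids still needing annotators.
    assignments: dict mapping image_id -> set of annotator ids already assigned."""
    allocated = []
    for image_id in unfinished_images:
        if len(allocated) == requested_count:
            break
        assigned = assignments.setdefault(image_id, set())
        # Skip images this annotator already holds, and images that already
        # have the required number of annotators.
        if annotator_id in assigned or len(assigned) >= annotators_per_image:
            continue
        assigned.add(annotator_id)
        allocated.append(image_id)
    return allocated
```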
In the embodiment of the invention, after receiving the distributed medical image, an annotator can select the approximate region of each lesion he or she identifies on the image, perform lesion edge segmentation annotation on each selected lesion region according to an edge segmentation algorithm or a delineation operation input by the annotator, and select a lesion category label for each lesion region (the category annotation can be implemented as a tag). The annotation result of the medical image is then submitted to the server for difference analysis. For example, the annotator may select any edge segmentation algorithm integrated in the system to pre-segment and annotate the edge of a selected lesion region, and then fine-tune the pre-segmentation annotation by dragging its edge or by adjusting the parameters of the pre-segmentation algorithm, to obtain the final lesion edge annotation result. The implementations of the medical image lesion segmentation and labeling method on the electronic device side described below can all be applied to this embodiment, and are not repeated here.
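For illustration, the following minimal sketch shows how a client might offer several pre-stored edge segmentation algorithms for pre-segmenting a selected lesion region and expose their parameters for adjustment. The specific algorithms (Otsu thresholding and a morphological Chan-Vese active contour from scikit-image), the function name and the parameter names are assumptions for illustration only; the patent does not prescribe particular algorithms.

```python
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.segmentation import morphological_chan_vese

def presegment_lesion(image, box, algorithm="otsu", **params):
    """Pre-segment the lesion inside box = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = box
    roi = image[r0:r1, c0:c1].astype(float)

    if algorithm == "otsu":
        # Global threshold inside the selected region; `sigma` is an
        # adjustable smoothing parameter exposed to the annotator.
        smoothed = gaussian(roi, sigma=params.get("sigma", 1.0))
        roi_mask = smoothed > threshold_otsu(smoothed)
    elif algorithm == "chan_vese":
        # Morphological active contour; the iteration count is adjustable.
        roi_mask = morphological_chan_vese(roi, params.get("iterations", 100))
        roi_mask = roi_mask.astype(bool)
    else:
        raise ValueError(f"unknown edge segmentation algorithm: {algorithm}")

    # Paste the region result back into a full-size mask for display/editing.
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[r0:r1, c0:c1] = roi_mask
    return mask
```

Re-running the function with different parameter values corresponds to the parameter adjustment operation described above, while drag operations on the displayed contour would edit the resulting mask directly.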
Step 102: receiving the annotation results of the medical images uploaded by the annotators, and, when the annotation results uploaded by all annotators assigned to one medical image have been received, judging whether all the received annotation results of that medical image meet the preset consistency requirement.
An annotator finishes annotating all the lesion edges in the medical image, selects a corresponding lesion category label for each lesion, and then uploads the annotation result to the server. When the server has received the annotation results uploaded by all annotators assigned to one medical image, it automatically performs a difference analysis, i.e. judges whether all the annotation results meet the consistency requirement, thereby cross-validating the multi-annotator annotations.
Step 103: when all the annotation results of the medical image meet the preset consistency requirement, storing all the annotation results of the medical image; otherwise, returning all the annotation results of the medical image to each annotator.
When all the annotation results of the medical image meet the consistency requirement, they are stored, so that cross-validation of the multi-annotator segmentation results guarantees the accuracy and consistency of the stored annotations. Since the annotation results of all annotators are then highly consistent, the result of any single annotator can be added to the model training set, or the results of all annotators can be added to the training set as a form of data augmentation.
When the annotation results of the medical image do not meet the consistency requirement, the server returns all of them to each annotator for reference and analysis. Because the server feeds back the difference analysis result dynamically and in real time, annotators can revise their annotations promptly, which increases the interactivity of the annotation process and improves annotation efficiency.
The annotation results may be stored, for example and without limitation, by saving all the annotation results of the medical image in a database.
According to the medical image lesion segmentation and labeling method of the embodiment of the invention, cross-validation of multiple annotators' segmentation results ensures the accuracy and consistency of lesion segmentation annotation results and avoids the data noise produced by a single doctor's annotation. By dynamically feeding back the difference analysis result in real time, annotators can revise their annotation results promptly, which increases the interactivity of the annotation process and improves annotation efficiency.
Preferably, in step 102, the step of judging whether all the annotation results of the medical image meet the preset consistency requirement includes:
Step 1021: judging, according to the lesion category annotations, whether the sets of lesion categories contained in the medical image as indicated by the annotation results are all the same.
Here, the lesion category annotation may be implemented in the form of a tag. Whether all annotation results indicate the same set of lesion categories for the medical image is judged in order to ensure that every annotator agrees on which lesion categories are present in the image.
Step 1022: when any lesion category indicated by any annotation result differs from the other annotation results, determining that the annotation results do not meet the preset consistency requirement.
When any lesion category indicated by the annotation result of any annotator differs from the other annotation results, the consistency requirement is judged not to be met, and all the annotation results of the medical image are returned to each annotator, so that the annotators can perform difference analysis and re-evaluate their own annotations.
For example, suppose a medical image is assigned to three annotators. If the annotation results of the first and second annotators contain three lesion categories a, b and c, while the result of the third annotator contains only categories a and b, the server returns the segmentation annotation results of all three annotators to each annotator's terminal, so that the annotators can compare them and evaluate their own results.
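As a minimal illustrative sketch (assuming each annotation result is a list of lesions carrying a "category" field, which is an assumption and not part of the original disclosure), the category check of steps 1021 and 1022 amounts to comparing the sets of lesion category labels reported by the annotators:

```python
# The annotation results agree on categories only if every annotator reports
# exactly the same set of lesion category labels for the image.
def categories_consistent(annotation_results):
    category_sets = [
        {lesion["category"] for lesion in result["lesions"]}
        for result in annotation_results
    ]
    return all(s == category_sets[0] for s in category_sets)
```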
Step 1023: when the lesion categories indicated by all the annotation results are the same, determining, according to the lesion edge segmentation annotations, the region range of each category of lesion indicated by each annotation result, and judging whether all the annotation results meet the preset consistency requirement according to the region range of each category of lesion indicated by each annotation result.
Here, when the lesion categories indicated by all annotation results are the same, i.e. every annotator agrees on which lesion categories are present in the image, the difference analysis continues with the lesion region ranges. First, the region range of each category of lesion indicated by each annotation result is determined from the lesion edge segmentation annotations; then, whether all the annotation results meet the consistency requirement is judged from these region ranges.
Preferably, in step 1023, the step of judging whether all the annotation results meet the preset consistency requirement according to the region range of each category of lesion indicated by each annotation result includes:
Step 10231: obtaining, for each lesion category, the intersection region and the union region formed between the region ranges indicated by the different annotation results, and obtaining the ratio of the intersection region range parameter to the union region range parameter for that category, wherein the range parameter includes an area.
Here, for each lesion category, the intersection region and the union region formed between the region ranges indicated by the different annotation results are obtained first, and the ratio of the intersection region range parameter to the union region range parameter is then calculated; that is, an IoU (Intersection over Union) analysis is performed for each lesion category. For convenience, this ratio is hereinafter referred to as the IoU of the lesion category. The range parameter used in the embodiment of the invention may include, but is not limited to, the area.
Step 10232: judging whether the IoU of each lesion category, i.e. the ratio of its intersection region range parameter to its union region range parameter, is greater than or equal to the threshold corresponding to that category, each lesion category corresponding to its own threshold.
Here, the IoU of each lesion category is compared with the threshold corresponding to that category. If the IoU of a category is greater than or equal to its threshold, that category meets the consistency requirement; otherwise it does not. Step 10232 is repeated for the remaining lesion categories until every category has been judged.
The threshold corresponding to each lesion category can be preset, and a different threshold can be set for each category.
Step 10233: when the IoU of any lesion category is smaller than the threshold corresponding to that category, determining that the annotation results do not meet the preset consistency requirement; otherwise, determining that all the annotation results meet the preset consistency requirement.
Here, if the IoU of any lesion category is smaller than its threshold, the annotation results of the medical image do not meet the region-range consistency requirement, and the server returns everyone's annotation results to each annotator's terminal for difference analysis.
If the IoU of every lesion category is greater than or equal to its corresponding threshold, all the annotation results of the medical image meet the consistency requirement, and the server can record the annotation results of all annotators into the database. Since the results are then highly consistent, the result of any single annotator can be added to the model training set, or the results of all annotators can be added to the training set as a form of data augmentation.
The consistency judgment of the embodiment of the invention is illustrated below, again taking a medical image annotated by three annotators as an example.
As shown in fig. 2, assume that A, B and C denote the region ranges obtained when the first, second and third annotators, respectively, segment and annotate a lesion of category X on a medical image, and that the range parameter used is the area.
First, from the region ranges A, B and C of the category-X lesion indicated by the annotation results, the intersection region and the union region formed between the region ranges of the different annotation results are obtained: the intersection of A, B and C is denoted I, and their union is denoted U. The union region U is not marked separately in the figure; it is the sum of all the hatched regions. In the figure, ab+I denotes the overlapping region of the first and second annotators, ac+I the overlapping region of the first and third annotators, and bc+I the overlapping region of the second and third annotators.
The IoU of the category-X lesion, i.e. the ratio of the area of the intersection region to the area of the union region, is then obtained. Let S_A, S_B and S_C denote the areas of the lesion region ranges A, B and C annotated by the three annotators, S_I the area of the intersection region I (the region where all three annotations overlap), S_{ab+I} the area of the overlapping region of the first and second annotators, S_{ac+I} the area of the overlapping region of the first and third annotators, and S_{bc+I} the area of the overlapping region of the second and third annotators. The area S_U of the union region of the three annotation results is first obtained by the following formula:
S_U = S_A + S_B + S_C - S_{ab+I} - S_{ac+I} - S_{bc+I} + S_I
The IoU of the three annotation results is then obtained by the following formula:
IoU = S_I / S_U
After the IoU of the category-X lesion is obtained, it is compared with the threshold corresponding to that category; if it is greater than or equal to the threshold, the category meets the requirement. The above steps are repeated to calculate the IoU of the other categories until every lesion category has been judged. If all categories meet the requirement, the three annotation results of the medical image meet the consistency requirement and the server records the results of all annotators into the database; otherwise, the annotation results do not meet the consistency requirement and the server returns the results of all annotators to each annotator's terminal for reference and analysis.
According to the method provided by the embodiment of the invention, performing an IoU analysis on every lesion category across all annotators ensures the accuracy and consistency of the annotation results.
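For illustration, the per-category IoU check of steps 10231-10233 can be sketched as follows; the mask-based input format (one binary mask per annotator and lesion category) and the function name are assumptions for illustration and not part of the original disclosure.

```python
import numpy as np

# `masks_by_category` maps each lesion category to a list of binary masks,
# one per annotator, covering that category's regions.  The intersection is
# the region marked by every annotator and the union is the region marked by
# any annotator, matching the three-annotator example above.
def region_consistency(masks_by_category, thresholds):
    failed = {}
    ious = {}
    for category, masks in masks_by_category.items():
        intersection = np.logical_and.reduce(masks)
        union = np.logical_or.reduce(masks)
        union_area = union.sum()
        iou = intersection.sum() / union_area if union_area else 1.0
        ious[category] = iou
        if iou < thresholds[category]:
            failed[category] = iou
    # Consistent only if no category fell below its threshold; the IoU values
    # and the failing categories are returned so they can be sent back to the
    # annotators together with the thresholds (step 104).
    return len(failed) == 0, ious, failed
```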
Preferably, when step 103 returns all the annotation results of the medical image to each annotator, the method further includes:
Step 104: sending, to each annotator, the IoU of each lesion category (the ratio of the intersection region range parameter to the union region range parameter) together with the corresponding threshold of each category, and prompting each annotator with the lesion categories whose IoU is smaller than the threshold.
In other words, when the annotation results of the medical image do not meet the consistency requirement, everyone's annotation results are returned to each annotator, together with the actual IoU calculated for every lesion category and the preset thresholds, and the lesion categories that fail the consistency requirement are pointed out for the annotators' reference and analysis.
Preferably, the method further includes:
Step 105: receiving threshold adjustment information fed back by an annotator, and adjusting the threshold corresponding to each lesion category according to the threshold adjustment information.
Here, the higher the IoU threshold is set, the more consistent the lesion segmentation annotation results are, but the harder the annotation task becomes and the lower the annotation efficiency is. This step therefore supports dynamically adjusting the IoU threshold of each lesion category according to the threshold adjustment information fed back by annotators during annotation, so that a flexible, dynamic balance between annotation accuracy and annotation efficiency is maintained.
Preferably, after step 103 returns all the annotation results of the medical image to each annotator, the method further includes:
Step 106: receiving an annotation result of the medical image uploaded again by an annotator, or receiving a request for expert arbitration sent by an annotator.
After all annotators of a medical image have finished their independent annotations, the server automatically analyzes them and returns results that fail the consistency check to all annotator terminals, and each annotator re-judges the image based on the other annotators' results and the difference analysis returned by the background.
If an annotator agrees with the other annotators' results, he or she can adjust his or her own annotation and upload it again; the server then continues the background difference processing and analyzes the consistency of all the results, recording them into the database if they meet the requirement and returning them to each annotator if they do not. Because the server dynamically feeds back the difference analysis result in real time after every modification, annotators can adjust their annotation strategies in real time, which makes their work easier and improves annotation efficiency.
If an annotator does not agree with the other annotators' opinions, the medical image can be submitted for expert arbitration. The role of arbitration expert is usually filled by an experienced expert in the field. Handling disputed annotation results through dedicated expert arbitration guarantees that the annotation results finally recorded into the database are high-precision data.
Step 107: when the request for expert arbitration is received, submitting all the annotation results of the medical image to an arbitration expert, so that the arbitration expert determines the correct annotation result according to all the annotation results of the medical image.
In this step, the IoU of each lesion category and the corresponding threshold of each category can also be sent to the arbitration expert, and the lesion categories whose IoU is smaller than the threshold are pointed out to the arbitration expert.
The arbitration expert can thus see the results of all annotators and the system's IoU difference analysis for each lesion category. The expert can select the annotation result that he or she considers closest to correct and fine-tune the lesion edges on that basis, or re-segment and re-annotate the lesions, and then upload the correct annotation result to the server.
The way the arbitration expert fine-tunes lesion edges or re-segments and re-annotates lesions is the same as in the embodiment of the medical image lesion segmentation and labeling method on the electronic device side: for example, the expert can select any edge segmentation algorithm integrated in the system to pre-segment and annotate the edge of a selected lesion region, and then fine-tune the pre-segmentation annotation by dragging its edge or adjusting the parameters of the pre-segmentation algorithm, to obtain the final lesion edge annotation result. The implementations of the method on the electronic device side described below can all be applied to this embodiment, and are not repeated here.
Step 108: when the annotation result of the medical image uploaded by the arbitration expert is received, storing the annotation result uploaded by the arbitration expert.
The annotation result uploaded by the arbitration expert is stored as the final, correct result, for example by entering it into the database. Handling disputed annotation results through dedicated expert arbitration guarantees that the annotation results finally entered into the database are high-precision data.
Preferably, when the request for expert arbitration is received in step 107, the method further includes:
Step 109: deleting the medical image that has been submitted for expert arbitration from the accounts of all annotators.
Once a medical image has been submitted for expert arbitration, the other annotators can no longer see it, which avoids unnecessary duplicate work and saves space.
A specific application flow of the medical image lesion segmentation and labeling method according to the embodiment of the present invention is illustrated as follows.
As shown in fig. 3, the medical image lesion segmentation labeling method according to the embodiment of the present invention includes:
S1 task application: an annotator submits an application request including the desired number of annotations, and the server distributes a corresponding number of medical images to the annotator according to the application request.
S2 lesion annotation: the annotator performs lesion edge segmentation annotation and lesion category annotation on the distributed medical images and submits the annotation results.
For the specific annotation procedure, reference may be made to the embodiment of the medical image lesion segmentation and labeling method on the electronic device side described below, which can be applied to this embodiment.
S3 category analysis: the server receives the annotation results uploaded by all annotators assigned to one medical image and judges whether the lesion categories indicated by all annotation results are the same; if so, the flow proceeds to S4 IoU analysis, and if not, to S5 difference processing.
S4 IoU analysis: the server obtains, for each lesion category, the IoU, i.e. the area ratio of the intersection region to the union region formed between the region ranges indicated by the different annotation results, and compares it with the threshold corresponding to that category; if the IoU of any category is smaller than its threshold, the flow proceeds to S5 difference processing, and if the IoU of every category is greater than or equal to its threshold, to S7 data storage.
S5 difference processing: all the annotation results, the IoU of each lesion category and the corresponding thresholds are returned to each annotator; if the annotator agrees with the other annotators' results, the flow proceeds to S51, and otherwise to S52.
S51 modification: the annotator modifies his or her own annotation result and submits it to the server, and the flow returns to S3 category analysis.
S52 arbitration submission: the annotator submits an expert arbitration request, the server submits all the annotation results of the medical image to an arbitration expert, and the flow proceeds to S6 expert arbitration.
S6 expert arbitration: the arbitration expert synthesizes all the annotation results to give a final result, and the flow proceeds to S7 data storage.
S7 data storage: all the annotation results, or the arbitration expert's result, are entered into the database.
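For illustration only, the server-side flow S3-S7 can be sketched as follows once all annotators of one image have submitted their results; the helper functions (categories_consistent and region_consistency as sketched above, and build_masks_by_category, return_to_annotators and db, which are assumed here) are illustrative and not part of the original disclosure.

```python
# A minimal sketch (assumed helper functions) tying together S3-S7.
def process_submissions(image_id, results, thresholds, db):
    # S3: category analysis.
    if not categories_consistent(results):
        return return_to_annotators(image_id, results, reason="categories differ")
    # S4: IoU analysis per lesion category.
    consistent, ious, failed = region_consistency(
        build_masks_by_category(results), thresholds)
    if consistent:
        db.save(image_id, results)          # S7: data storage
        return "stored"
    # S5: difference processing - annotators may revise and resubmit (S51),
    # which re-enters this function, or request arbitration (S52/S6).
    return return_to_annotators(image_id, results, ious=ious,
                                failed=failed, thresholds=thresholds)
```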
According to the medical image lesion segmentation and labeling method of the embodiment of the invention, cross-validation of multiple annotators' segmentation results ensures the accuracy and consistency of lesion segmentation annotation results and avoids the data noise produced by a single doctor's annotation. The server side and the client side together realize real-time online cross-annotation by multiple annotators, so an annotator can annotate at any convenient time and place through a networked electronic device. The server dynamically feeds back the difference analysis result in real time after every submission, so annotators can revise their annotation results in real time, which increases the interactivity of the annotation process and improves annotation efficiency. In addition, dedicated expert arbitration is designed to handle disputed annotation results, guaranteeing that the annotation results finally recorded into the database are highly accurate data.
Referring to fig. 4, an embodiment of the present invention further provides a medical image lesion segmentation and labeling system 400, applied to a server, including:
an allocation module 401, configured to distribute medical images to an annotator according to the annotation application sent by the annotator with a registered account, so that the annotator segments the lesions in the distributed medical images and then annotates the lesion edge and lesion category of each lesion, wherein each medical image is distributed to at least two annotators;
a judging module 402, configured to receive the annotation results of the medical images uploaded by the annotators and, when the annotation results uploaded by all annotators assigned to one medical image have been received, judge whether the annotation results of all annotators meet a preset consistency requirement;
a storage module 403, configured to store the annotation results of all annotators when they meet the preset consistency requirement, and otherwise return the annotation results of all annotators to each annotator.
By cross-validating the segmentation results of multiple annotators, the medical image lesion segmentation and labeling system 400 of the embodiment of the invention ensures the accuracy and consistency of lesion segmentation annotation results and avoids the data noise produced by a single doctor's annotation. By dynamically feeding back the difference analysis result in real time, annotators can revise their annotation results promptly, which increases the interactivity of the annotation process and improves annotation efficiency.
Preferably, the judging module 402 includes:
a first judging submodule, configured to judge, according to the lesion category annotations, whether the sets of lesion categories contained in the medical image as indicated by the annotation results are all the same;
a first determining submodule, configured to determine that the annotation results do not meet the preset consistency requirement when any lesion category indicated by any annotation result differs from the other annotation results;
a second judging submodule, configured to, when the lesion categories indicated by all annotation results are the same, determine, according to the lesion edge segmentation annotations, the region range of each category of lesion indicated by each annotation result, and judge whether all annotation results meet the preset consistency requirement according to the region range of each category of lesion indicated by each annotation result.
Preferably, the second judging submodule includes:
a first obtaining unit, configured to obtain, for each lesion category, the intersection region and the union region formed between the region ranges indicated by the different annotation results, and to obtain the ratio of the intersection region range parameter to the union region range parameter for that category, wherein the range parameter includes an area;
a first judging unit, configured to judge whether, for each lesion category, the ratio of the intersection region range parameter to the union region range parameter is greater than or equal to the threshold corresponding to that category, each lesion category corresponding to its own threshold;
a first determining unit, configured to determine that the annotation results do not meet the preset consistency requirement when the ratio of the intersection region range parameter to the union region range parameter of any lesion category is smaller than the threshold corresponding to that category, and otherwise to determine that all the annotation results meet the preset consistency requirement.
Preferably, the system further includes:
a sending module, configured to send, to each annotator, the ratio of the intersection region range parameter to the union region range parameter of each lesion category together with the corresponding threshold, and to prompt each annotator with the lesion categories whose ratio is smaller than the threshold.
Preferably, the system further includes:
a threshold adjustment module, configured to receive threshold adjustment information fed back by an annotator and to adjust the threshold corresponding to each lesion category according to the threshold adjustment information.
Preferably, the system further includes:
a first receiving module, configured to receive an annotation result of the medical image uploaded again by an annotator, or to receive a request for expert arbitration sent by an annotator;
an arbitration submission module, configured to submit all the annotation results of the medical image to an arbitration expert when the request for expert arbitration is received, so that the arbitration expert determines the correct annotation result according to all the annotation results of the medical image;
a first uploading module, configured to store the annotation result uploaded by the arbitration expert when the annotation result of the medical image uploaded by the arbitration expert is received.
Preferably, the system further includes:
a deletion module, configured to delete the medical image that has been submitted for expert arbitration from the accounts of all annotators.
Preferably, the allocation module 401 includes:
a distribution submodule, configured to distribute, from the medical image data that has not yet been annotated, a corresponding number of medical images to the annotator.
By cross-validating the segmentation results of multiple annotators, the medical image lesion segmentation and labeling system 400 of the embodiment of the invention ensures the accuracy and consistency of lesion segmentation annotation results and avoids the data noise produced by a single doctor's annotation. The server side and the client side together realize real-time online cross-annotation by multiple annotators, so an annotator can annotate at any convenient time and place through a networked electronic device. The server dynamically feeds back the difference analysis result in real time after every submission, so annotators can revise their annotation results in real time, which increases the interactivity of the annotation process and improves annotation efficiency. In addition, dedicated expert arbitration is designed to handle disputed annotation results, guaranteeing that the annotation results finally recorded into the database are highly accurate data.
Since the above system embodiment is basically similar to the method embodiment, reference may be made to the description of the method embodiment for the relevant details.
The embodiment of the invention also provides a server. As shown in fig. 5, the server comprises a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504.
The memory 503 stores a computer program.
When the processor 501 is configured to execute the program stored in the memory 503, the following steps are implemented:
distributing medical images to annotators according to annotation applications sent by annotators with registered accounts, so that the annotators perform lesion edge segmentation annotation and lesion category annotation on the distributed medical images, wherein each medical image is distributed to at least two annotators;
receiving the annotation results of the medical images uploaded by the annotators, and, when the annotation results uploaded by all annotators assigned to one medical image have been received, judging whether all the received annotation results of that medical image meet a preset consistency requirement;
when all the annotation results of the medical image meet the preset consistency requirement, storing all the annotation results of the medical image; otherwise, returning all the annotation results of the medical image to each annotator.
The communication bus mentioned in the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the server and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the medical image lesion segmentation labeling method applied to the server side described in the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions, which when run on a computer, causes the computer to execute the medical image lesion segmentation labeling method applied to the server side as described in the above embodiments.
Referring to fig. 6, an embodiment of the present invention further provides a medical image lesion segmentation labeling method applied to an electronic device, including:
step 601: when receiving an annotation application input by an annotator, sending the annotation application to a server, and receiving a medical image distributed by the server according to the annotation application; wherein one medical image is assigned to at least two annotators.
Here, the annotator can submit an annotation application on any networked electronic device through a registered account of the annotator, and the server distributes the medical image to the annotator according to the annotation application after receiving the annotation application.
Specifically, the annotation application sent by the annotator includes the number of annotations, and the server allocates a corresponding number of medical images to the annotator from the unannotated medical image data according to the number of annotations indicated in the annotation application.
In this way, the annotator can set the number of medical images to be annotated when submitting the annotation application according to his or her own schedule, and the system background automatically allocates the corresponding number of medical images to the annotator from the unannotated medical image data according to the progress of the remaining annotation tasks.
Each medical image is allocated to at least two annotators, so that accuracy is ensured through multi-person annotation. In a specific application, a system administrator can set, for each batch of tasks, the number of annotators that jointly label the same image; this number can be set flexibly according to task requirements, and 2 to 3 annotators per image are generally optimal.
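The background allocation described above can be realized in many ways. Purely as a minimal illustrative sketch (the function and field names below are assumptions for illustration and are not prescribed by this application), one simple allocation policy in Python might look like this:

```python
from collections import defaultdict

def allocate_images(unannotated_ids, requested_counts, annotators_per_image=2):
    """Assign each unannotated image to `annotators_per_image` distinct annotators.

    unannotated_ids      : list of image ids that still need annotation
    requested_counts     : dict {annotator_id: number of images requested in the application}
    annotators_per_image : how many annotators must label the same image (>= 2)

    Returns {annotator_id: [assigned image ids]}.  Illustrative only: a real server
    would also track partially completed tasks and persist the assignments.
    """
    remaining = dict(requested_counts)        # remaining capacity of each annotator
    assignments = defaultdict(list)
    for image_id in unannotated_ids:
        # pick the annotators with the most remaining capacity for this image
        candidates = sorted(remaining, key=remaining.get, reverse=True)[:annotators_per_image]
        if len(candidates) < annotators_per_image or remaining[candidates[-1]] <= 0:
            break                             # not enough capacity left to satisfy the >= 2 rule
        for annotator in candidates:
            assignments[annotator].append(image_id)
            remaining[annotator] -= 1
    return dict(assignments)

# e.g. two images and three annotators who requested 1, 2 and 2 images respectively
print(allocate_images(["img_001", "img_002"], {"drA": 1, "drB": 2, "drC": 2}))
```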
Step 602: taking one or more regions selected by the annotator on the medical image as lesion regions.
Here, the approximate lesion region selected by the annotator on the networked electronic device may be a rectangular box, an arbitrary polygonal region, or an arbitrarily shaped region delineated with a brush.
Step 603: performing lesion edge segmentation annotation on each lesion region according to an edge segmentation algorithm or a drawing operation input by the annotator, and receiving the lesion category label selected by the annotator for each lesion region, to obtain the annotation result of the medical image.
Here, the annotator can use an edge segmentation algorithm to segment and label the lesion edge of the selected region, or, when the segmentation result obtained by the algorithm is unsatisfactory, manually input a drawing operation to segment and label the lesion edge; the annotator then selects a lesion category label for each lesion region. The lesion category can be annotated by means of a tag, but is not limited thereto.
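The annotation result produced in this step essentially bundles, for each lesion region, the segmented boundary and the selected lesion category. As a minimal sketch of one possible payload format for uploading such a result, assuming polygon boundaries and illustrative category names (neither of which is prescribed by this application):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LesionAnnotation:
    category: str                          # lesion category chosen by the annotator
    boundary: List[Tuple[int, int]]        # polygon vertices (x, y) of the segmented lesion edge
    algorithm: str = "manual"              # edge segmentation algorithm used, or "manual" drawing

@dataclass
class ImageAnnotationResult:
    image_id: str
    annotator_id: str
    lesions: List[LesionAnnotation] = field(default_factory=list)

# hypothetical payload for one medical image with two annotated lesion regions
result = ImageAnnotationResult(
    image_id="img_001",
    annotator_id="annotator_A",
    lesions=[
        LesionAnnotation("hemorrhage", [(10, 12), (30, 14), (28, 40)], algorithm="watershed"),
        LesionAnnotation("exudate", [(80, 90), (95, 92), (93, 110)]),
    ],
)
```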
Step 604: after receiving a submission request input by the annotator, uploading the annotation result of the medical image to the server, so that when the server has received the annotation results of the medical image uploaded by all the annotators assigned to that image, it judges whether all the annotation results meet the preset consistency requirement; when they do, all the annotation results are stored, otherwise all the annotation results are returned to each annotator.
After the annotator uploads the annotation result of the medical image to the server, the server automatically analyzes, once it has received the annotation results uploaded by all the annotators assigned to that medical image, whether all the annotation results meet the consistency requirement, thereby performing cross-validation of multi-person annotation. Only when all the annotation results of the medical image meet the consistency requirement are they stored, so that the accuracy and consistency of the annotation results are ensured through cross-validation of the multi-person segmentation and annotation results.
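The consistency analysis compares, for each lesion category, the intersection and the union of the region ranges marked by the different annotators against a per-category threshold (see claims 3 to 5 below). Claim 2 additionally requires that the sets of lesion categories indicated by the different results be identical before the area check applies; that preliminary step is omitted in the minimal sketch below, which assumes the regions are available as binary masks of identical shape:

```python
import numpy as np

def masks_consistent(masks_by_category, thresholds):
    """Check whether annotations from several annotators agree closely enough.

    masks_by_category: dict {category: [bool mask from annotator 1, annotator 2, ...]},
                       all masks of identical shape
    thresholds:        dict {category: minimum acceptable intersection/union area ratio}

    Returns (is_consistent, ratios) where ratios maps each category to its
    intersection-over-union area ratio across all annotators.
    """
    ratios = {}
    for category, masks in masks_by_category.items():
        intersection = np.logical_and.reduce(masks)   # region marked by every annotator
        union = np.logical_or.reduce(masks)           # region marked by at least one annotator
        union_area = union.sum()
        # if no annotator marked this category at all, the ratio is trivially 1
        ratios[category] = intersection.sum() / union_area if union_area else 1.0
    is_consistent = all(ratios[c] >= thresholds.get(c, 0.5) for c in ratios)
    return is_consistent, ratios
```

If the check fails, the ratios and the per-category thresholds can be returned to the annotators together with the annotation results, which matches the difference feedback described in this application.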
According to the medical image lesion segmentation and annotation method described above, the accuracy and consistency of the lesion segmentation and annotation results are ensured through cross-validation of the segmentation and annotation results of multiple annotators, and the data noise produced by single-doctor annotation is avoided. The server side and the client side together support real-time online cross annotation by multiple annotators, so an annotator can work at any convenient time and place through a networked electronic device, which improves convenience. By dynamically feeding back the difference analysis result in real time, the annotator can revise the annotation result promptly, which increases the interactivity of the annotation process and improves annotation efficiency.
Preferably, in the step 603, the step of performing lesion edge segmentation and labeling on each lesion region according to an edge segmentation algorithm includes:
step 6031: displaying a plurality of prestored edge segmentation algorithms, and respectively carrying out focus edge pre-segmentation marking on each focus area according to the edge segmentation algorithm selected by a marker for each focus area.
Here, all underlying edge segmentation algorithms such as canny algorithm, region growing algorithm, watershed algorithm, threshold segmentation algorithm, graph segmentation algorithm, neural network segmentation algorithm, etc. may be integrated in the system in advance. Interfaces may also be reserved to support future new image segmentation algorithms.
In this step, the annotator can select any edge segmentation algorithm integrated by the system to perform lesion edge pre-segmentation annotation on the selected lesion area. At the moment, a annotator can select corresponding segmentation algorithms and parameters to form preliminary pre-segmentation annotation according to different types of focuses in different regions, so that the segmentation annotation efficiency is greatly improved.
Of course, if the annotator is not satisfied with the pre-segmentation labeling result obtained by the edge segmentation algorithm, the lesion edge can be manually drawn, and the system can perform lesion edge pre-segmentation labeling on each lesion according to the drawing operation of the annotator.
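Purely as an illustrative sketch of how such a library of interchangeable pre-segmentation algorithms could be exposed behind one interface (scikit-image is an assumed implementation choice, not one specified by this application), the dispatcher below runs a chosen algorithm on a selected region of interest and returns a binary mask:

```python
import numpy as np
from skimage import feature, filters, segmentation

def pre_segment(roi, algorithm, **params):
    """Run one of several interchangeable pre-segmentation algorithms on a
    grayscale region of interest (2-D float array) and return a binary mask."""
    if algorithm == "canny":
        # edge map; params may include sigma, low_threshold, high_threshold
        return feature.canny(roi, **params)
    if algorithm == "threshold":
        # global thresholding; the threshold itself is an adjustable parameter
        thresh = params.get("threshold", filters.threshold_otsu(roi))
        return roi > thresh
    if algorithm == "region_growing":
        # flood fill from a seed point with an intensity tolerance
        seed = params.get("seed", (roi.shape[0] // 2, roi.shape[1] // 2))
        return segmentation.flood(roi, seed, tolerance=params.get("tolerance", 0.1))
    if algorithm == "watershed":
        # marker seeds are an adjustable parameter; keep the region containing the ROI centre
        labels = segmentation.watershed(filters.sobel(roi), markers=params.get("markers"))
        center = (roi.shape[0] // 2, roi.shape[1] // 2)
        return labels == labels[center]
    raise ValueError(f"unknown algorithm: {algorithm}")

# toy region of interest: a bright disk on a dark background
yy, xx = np.mgrid[:64, :64]
roi = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)

# run several algorithms on the same selected lesion region so the annotator
# can compare the candidate boundaries and keep the one that fits best
candidates = {name: pre_segment(roi, name) for name in ("canny", "threshold", "region_growing")}
```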
Preferably, in the step 6031, the step of performing lesion edge pre-segmentation annotation on each lesion region according to the edge segmentation algorithm selected by the annotator for each lesion region includes:
when the annotator selects multiple edge segmentation algorithms for any lesion region, performing lesion edge pre-segmentation on that region with each of the selected algorithms to obtain a plurality of lesion edge pre-segmentation results; and performing lesion edge pre-segmentation annotation on the lesion region according to the result the annotator selects from the plurality of pre-segmentation results.
In this way, the annotator can run different edge pre-segmentation algorithms, compare their effects, and then choose the pre-segmentation result that, in the annotator's judgment, best fits the lesion edge as the pre-segmentation annotation of the lesion in the selected region, so that the most satisfactory result is kept and accuracy is improved.
Step 6032: when a dragging operation by the annotator on the edge of any lesion region is received, adjusting the lesion edge pre-segmentation annotation of that region according to the dragging operation.
Here, starting from the rough lesion boundary given by the pre-segmentation annotation, the annotator can drag the edge into whatever shape he or she considers to best fit the lesion.
Step 6033: when a parameter adjustment operation by the annotator on the edge segmentation algorithm of any lesion region is received, adjusting the lesion edge pre-segmentation annotation of that region according to the adjusted parameters and the edge segmentation algorithm of the region.
The method of the embodiment of the invention also allows the annotator to adjust the parameters of the different pre-segmentation algorithms to achieve the best segmentation effect. The parameters of an edge segmentation algorithm may include, but are not limited to, the threshold of a threshold segmentation algorithm, the marker parameters of a watershed algorithm, the initial seed selection of a region growing algorithm, and the like.
In this way, the annotator can fine-tune the lesion edge pre-segmentation annotation through a dragging operation or by adjusting the parameters of the edge segmentation algorithm; these diversified implementation modes make the annotator's work more convenient while fully guaranteeing annotation accuracy and improving annotation efficiency.
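Continuing the hypothetical `pre_segment` sketch above (and reusing its `roi`), the parameter adjustment of step 6033 simply amounts to re-running the chosen algorithm on the same region with the annotator's new parameter values, for example:

```python
# the annotator is not satisfied with the default threshold, so the client
# re-runs the same algorithm on the same lesion region with an adjusted value
mask_default  = pre_segment(roi, "threshold")                   # Otsu threshold by default
mask_adjusted = pre_segment(roi, "threshold", threshold=0.35)   # user-tuned threshold

# likewise, the seed of a region-growing pre-segmentation can be moved and the
# tolerance tightened until the boundary fits the lesion edge
mask_refined = pre_segment(roi, "region_growing", seed=(20, 31), tolerance=0.05)
```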
Preferably, the method further comprises:
step 605: receiving and displaying all labeling results of the medical images returned by the server;
step 606: when receiving an adjustment operation by the annotator on the submitted annotation result of the medical image, uploading the adjusted annotation result of the medical image to the server;
step 607: when a request to submit for expert arbitration input by the annotator is received, uploading the request to the server.
Here, after all the annotators of a medical image have finished their independent annotations, the server automatically analyzes them and returns any results that fail the consistency requirement to all annotator terminals, and each annotator then re-evaluates the image according to the annotation results of the other annotators and the difference analysis results returned by the background.
If the annotator agrees with the results of the other annotators, the annotator can adjust his or her own annotation result and upload it again; the server then continues the background difference processing and analyzes the consistency of all results, recording them into the database if they meet the requirement and returning them to each annotator otherwise. The server dynamically feeds back the difference analysis result in real time after each modification, so the annotator can adjust the annotation strategy promptly, which facilitates the annotator's work and improves annotation efficiency.
If the annotator does not agree with the opinion of any other annotator, the medical image can be submitted for expert arbitration. The arbitration expert is usually an experienced specialist in the field. Disputed annotation results are thus handled through dedicated expert arbitration, ensuring that the annotation results finally recorded into the database are high-precision data.
Specifically, after receiving the annotation results of the medical image, the arbitration expert selects the annotator's result that is closest to what the expert considers correct, fine-tunes the lesion boundary on that basis or performs the lesion segmentation annotation anew, and uploads the correct annotation result to the server.
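Purely as an illustrative aid, and not something prescribed by this application, the candidate annotations forwarded to the arbitration expert could be ranked by their mutual agreement, so that the expert can quickly locate the result closest to the consensus before fine-tuning it:

```python
import numpy as np

def rank_by_agreement(masks):
    """Order annotator masks by their mean intersection-over-union with the others.

    masks: dict {annotator_id: binary numpy mask}.  Returns annotator ids sorted
    from most to least agreement; illustrative only, since the arbitration expert
    remains free to pick any result or to re-segment the lesion from scratch.
    """
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    scores = {
        name: np.mean([iou(mask, other) for other_name, other in masks.items() if other_name != name])
        for name, mask in masks.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```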
For the manner in which the arbitration expert fine-tunes the lesion boundary or re-performs the lesion segmentation annotation, reference may be made to the related description above, which is not repeated here.
According to the medical image lesion segmentation and annotation method described above, the accuracy and consistency of the lesion segmentation and annotation results are ensured through cross-validation of the segmentation and annotation results of multiple annotators, and the data noise produced by single-doctor annotation is avoided. The server side and the client side together support real-time online cross annotation by multiple annotators, so an annotator can work at any convenient time and place through a networked electronic device. The server dynamically feeds back the difference analysis result in real time after each submission, so the annotator can revise the annotation result promptly, which increases the interactivity of the annotation process and improves annotation efficiency. Meanwhile, a dedicated expert arbitration mechanism handles disputed annotation results, ensuring that the annotation results finally recorded into the database are highly accurate.
Referring to fig. 7, an embodiment of the present invention further provides a medical image lesion segmentation labeling system 700 applied to an electronic device, including:
the first transceiver module 701 is configured to send an annotation application to a server when receiving the annotation application input by an annotator, and receive a medical image distributed by the server according to the annotation application;
a lesion selection module 702, configured to use one or more regions selected by the annotator on the medical image as lesion regions;
a lesion labeling module 703, configured to perform lesion edge segmentation annotation on each lesion region according to an edge segmentation algorithm or a drawing operation input by the annotator, and receive the lesion category label selected by the annotator for each lesion region, to obtain the annotation result of the medical image;
the second transceiver module 704 is configured to upload the annotation result of the medical image to a server after receiving the submission request input by the annotator.
The medical image lesion segmentation and annotation system 700 of the embodiment of the invention ensures the accuracy and consistency of lesion segmentation and annotation results and avoids the data noise produced by single-doctor annotation by cross-validating the segmentation and annotation results of multiple annotators. The server side and the client side together support real-time online cross annotation by multiple annotators, so an annotator can work at any convenient time and place through a networked electronic device, which improves convenience. By dynamically feeding back the difference analysis result in real time, the annotator can revise the annotation result promptly, which increases the interactivity of the annotation process and improves annotation efficiency.
Preferably, the lesion labeling module 703 includes:
the pre-segmentation labeling submodule is used for displaying a plurality of pre-stored edge segmentation algorithms and performing lesion edge pre-segmentation annotation on each lesion region according to the edge segmentation algorithm selected by the annotator for that region;
the dragging adjustment submodule is used for adjusting the lesion edge pre-segmentation annotation of a lesion region according to a dragging operation when the dragging operation by the annotator on the edge of that region is received;
and the parameter adjustment submodule is used for adjusting the lesion edge pre-segmentation annotation of a lesion region according to the adjusted parameters and the edge segmentation algorithm of that region when a parameter adjustment operation by the annotator on the edge segmentation algorithm of the region is received.
Preferably, the pre-segmentation labeling sub-module comprises:
the pre-segmentation unit is used for performing lesion edge pre-segmentation on a lesion region with each of the multiple edge segmentation algorithms selected by the annotator for that region, so as to obtain a plurality of lesion edge pre-segmentation results;
and the labeling unit is used for performing lesion edge pre-segmentation annotation on the lesion region according to the result selected by the annotator from the plurality of lesion edge pre-segmentation results.
Preferably, the system further comprises:
the display module is used for receiving and displaying all the marking results of the medical images returned by the server;
the second uploading module is used for uploading the adjusted annotation result of the medical image to the server when an adjustment operation by the annotator on the submitted annotation result of the medical image is received;
and the third uploading module is used for uploading a request to submit for expert arbitration to the server when the request input by the annotator is received.
The medical image lesion segmentation and annotation system 700 of the embodiment of the invention ensures the accuracy and consistency of lesion segmentation and annotation results and avoids the data noise produced by single-doctor annotation by cross-validating the segmentation and annotation results of multiple annotators. The server side and the client side together support real-time online cross annotation by multiple annotators, so an annotator can work at any convenient time and place through a networked electronic device. The server dynamically feeds back the difference analysis result in real time after each submission, so the annotator can revise the annotation result promptly, which increases the interactivity of the annotation process and improves annotation efficiency. Meanwhile, a dedicated expert arbitration mechanism handles disputed annotation results, ensuring that the annotation results finally recorded into the database are highly accurate.
For the above system embodiment, since it is basically similar to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points.
The embodiment of the invention also provides an electronic device, which may be a mobile terminal. As shown in fig. 8, the electronic device comprises a processor 801, a communication interface 802, a memory 803 and a communication bus 804, wherein the processor 801, the communication interface 802 and the memory 803 communicate with each other through the communication bus 804.
A memory 803 for storing a computer program.
The processor 801 is configured to implement the following steps when executing the program stored in the memory 803:
when receiving an annotation application input by an annotator, sending the annotation application to a server, and receiving a medical image distributed by the server according to the annotation application; one medical image is distributed to at least two annotators;
taking one or more regions selected by the annotator on the medical image as lesion regions;
performing lesion edge segmentation annotation on each lesion region according to an edge segmentation algorithm or a drawing operation input by the annotator, and receiving the lesion category label selected by the annotator for each lesion region, to obtain the annotation result of the medical image;
after receiving a submission request input by the annotator, uploading the annotation result of the medical image to the server, so that when the server has received the annotation results of the medical image uploaded by all the annotators assigned to that image, it judges whether all the annotation results meet the preset consistency requirement; when they do, all the annotation results are stored, otherwise all the annotation results are returned to each annotator.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the medical image lesion segmentation labeling method on the electronic device side described in the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions, which when run on a computer, causes the computer to execute the medical image lesion segmentation labeling method on the electronic device side described in the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (14)

1. A medical image focus segmentation labeling method is applied to a server and is characterized by comprising the following steps:
distributing medical images to the annotators according to annotation applications sent by the annotators with the registered accounts, and enabling the annotators to carry out focus edge segmentation annotation and focus category annotation on the distributed medical images; one medical image is distributed to at least two annotators;
receiving the labeling results of the medical images uploaded by the annotators, and judging whether all the received labeling results of the medical images meet the preset consistency requirement or not when the labeling results of the medical images uploaded by all the annotators distributed for one medical image are received;
and when all the labeling results of the medical image meet the preset consistency requirement, storing all the labeling results of the medical image, otherwise, returning all the labeling results of the medical image to each annotator.
2. The method for lesion segmentation and annotation of medical images according to claim 1, wherein the step of determining whether all annotation results of the medical image satisfy a predetermined consistency requirement comprises:
judging whether all the lesion types contained in the medical image indicated by each marking result are the same or not according to lesion type marking;
when any focus category contained in the medical image indicated by any marking result is different from other marking results, determining that all marking results do not meet the preset consistency requirement;
when all the lesion types contained in the medical image indicated by each marking result are the same, marking is segmented according to lesion edges, the range of each lesion region indicated by each marking result is determined, and whether all the marking results meet the preset consistency requirement or not is judged according to the range of each lesion region indicated by each marking result.
3. The method for segmentation labeling of a lesion according to claim 2, wherein the step of determining whether all labeling results satisfy the preset requirement for consistency according to the region range of each type of lesion indicated by each labeling result comprises:
acquiring an intersection region and a union region formed by each type of focus between region ranges indicated by different labeling results according to each type of focus region range indicated by each labeling result, and acquiring the ratio of intersection region range parameters to union region range parameters of each type of focus; wherein the range parameter comprises an area;
judging whether the ratio of the intersection region range parameter to the union region range parameter of each type of focus is greater than or equal to a threshold value corresponding to the type of focus; wherein each type of focus corresponds to a threshold value;
and when the ratio of the intersection region range parameter to the union region range parameter of any category of the focuses is smaller than the threshold corresponding to the focus, determining that all the labeling results do not meet the preset consistency requirement, and otherwise, determining that all the labeling results meet the preset consistency requirement.
4. The method for lesion segmentation and annotation of medical images according to claim 3, wherein when all annotation results of the medical image are returned to each annotator, further comprising:
sending the ratio of the intersection region range parameter and the union region range parameter of each type of focus and the corresponding threshold value of each type of focus to each annotator, and prompting the focus type of which the ratio of the intersection region range parameter and the union region range parameter is smaller than the threshold value to each annotator.
5. The method for lesion segmentation labeling in medical images as set forth in claim 3, further comprising:
and receiving threshold adjustment information fed back by a marker, and adjusting the threshold corresponding to each type of focus according to the threshold adjustment information.
6. The method of claim 1, wherein after all labeling results of the medical image are returned to each of the annotators, the method further comprises:
receiving the annotation result of the medical image uploaded by the annotator again, or receiving an arbitration request of a submission expert sent by the annotator;
when the arbitration request of the submitting expert is received, submitting all the labeling results of the medical image to an arbitration expert so that the arbitration expert determines a correct labeling result according to all the labeling results of the medical image;
and when the annotation result of the medical image uploaded by the arbitration expert is received, storing the annotation result uploaded by the arbitration expert.
7. The method according to claim 6, further comprising, when receiving the request for submitting expert arbitration:
the medical images that have been submitted for expert arbitration are deleted from all annotator accounts.
8. The method for lesion segmentation annotation of medical images according to claim 1, wherein the step of assigning the medical image to the annotator according to the annotation application sent by the annotator having the registered account number comprises:
and distributing a corresponding number of medical images for the annotators in the medical image data without annotation according to the number of the annotations indicated in the annotation application.
9. A medical image lesion segmentation labeling system is applied to a server and is characterized by comprising:
the allocation module is used for allocating medical images to the annotators according to the annotation application sent by the annotators with the registered account numbers, so that the annotators mark the focus edges and the focus categories for each focus after performing focus segmentation on the allocated medical images; one medical image is distributed to at least two annotators;
the judging module is used for receiving the marking results of the medical images uploaded by the markers, and judging whether the marking results of all the markers meet the preset consistency requirement or not when the marking results of the medical images uploaded by all the markers distributed for one medical image are received;
and the storage module is used for storing the labeling results of all the markers when the labeling results of all the markers meet the preset consistency requirement, and otherwise, returning the labeling results of all the markers to each marker.
10. A medical image focus segmentation labeling method is applied to electronic equipment and is characterized by comprising the following steps:
when receiving an annotation application input by an annotator, sending the annotation application to a server, and receiving a medical image distributed by the server according to the annotation application; one medical image is distributed to at least two annotators;
one or more regions selected by a marker on the medical image are used as focus regions;
respectively carrying out focus edge segmentation and labeling on each focus region according to an edge segmentation algorithm or a drawing operation input by a marker, and receiving focus category labels selected by the marker for each focus region to obtain a labeling result of the medical image;
after receiving a submission request input by a marker, uploading the marking results of the medical images to a server, so that when the server receives the marking results of the medical images uploaded by all markers distributed for one medical image, the server judges whether the marking results of all markers meet the preset consistency requirement, when the marking results of all markers meet the preset consistency requirement, the marking results of all markers are stored, otherwise, the marking results of all markers are returned to each marker.
11. The method for lesion segmentation and annotation of claim 10, wherein the step of performing lesion edge segmentation and annotation on each lesion region according to the edge segmentation algorithm comprises:
displaying a plurality of prestored edge segmentation algorithms, and respectively carrying out focus edge pre-segmentation marking on each focus area according to the edge segmentation algorithm selected by a marker for each focus area;
when the dragging operation of a marker on the edge of any focus area is received, adjusting the focus edge pre-segmentation marking of the focus area according to the dragging operation;
when receiving the parameter adjustment operation of the annotator on the edge segmentation algorithm of any focus area, adjusting the focus edge pre-segmentation annotation of the focus area according to the adjusted parameters and the edge segmentation algorithm of the focus area.
12. The method for lesion segmentation and annotation of claim 11, wherein the step of pre-segmentation and annotation of lesion edges for each lesion region according to the edge segmentation algorithm selected by the annotator for each lesion region comprises:
when a marker selects various edge segmentation algorithms for any focus area, respectively performing focus edge pre-segmentation on the focus area according to the selected various edge segmentation algorithms to obtain a plurality of focus edge pre-segmentation results;
and according to the result selected by the annotator in the pre-segmentation result of the plurality of focus edges, pre-segmentation labeling the focus edges of the focus area.
13. The medical image lesion segmentation labeling method of claim 10, further comprising:
receiving and displaying all labeling results of the medical images returned by the server;
when receiving the adjustment operation of the annotating result of the submitted medical image by the annotator, uploading the annotation result of the medical image adjusted by the annotator to a server;
and when a submission expert arbitration request input by a annotator is received, uploading the submission expert arbitration request to a server.
14. A medical image lesion segmentation labeling system is applied to electronic equipment and is characterized by comprising:
the first transceiver module is used for sending the annotation application to a server and receiving the medical image distributed by the server according to the annotation application when receiving the annotation application input by an annotator; one medical image is distributed to at least two annotators;
the focus selection module is used for taking one or more regions selected by the annotator on the medical image as focus regions;
the focus marking module is used for respectively carrying out edge segmentation marking on each focus area according to an edge segmentation algorithm or a drawing operation input by a marker, receiving focus category marking selected by the marker for each focus area and obtaining a marking result of the medical image;
and the second transceiver module is used for uploading the labeling results of the medical images to the server after receiving the submission request input by the annotator, so that when the server receives the labeling results of the medical images uploaded by all the annotators distributed for one medical image, the server judges whether the labeling results of all the annotators meet the preset consistency requirement, and when the labeling results of all the annotators meet the preset consistency requirement, the labeling results of all the annotators are stored, otherwise, the labeling results of all the annotators are returned to each annotator.
CN202011079647.XA 2020-10-10 2020-10-10 Medical image focus segmentation and labeling method and system Pending CN112418263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011079647.XA CN112418263A (en) 2020-10-10 2020-10-10 Medical image focus segmentation and labeling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011079647.XA CN112418263A (en) 2020-10-10 2020-10-10 Medical image focus segmentation and labeling method and system

Publications (1)

Publication Number Publication Date
CN112418263A true CN112418263A (en) 2021-02-26

Family

ID=74854398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011079647.XA Pending CN112418263A (en) 2020-10-10 2020-10-10 Medical image focus segmentation and labeling method and system

Country Status (1)

Country Link
CN (1) CN112418263A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409280A (en) * 2021-06-24 2021-09-17 青岛海信医疗设备股份有限公司 Medical image processing method, labeling method and electronic equipment
CN113409953A (en) * 2021-06-21 2021-09-17 数坤(北京)网络科技股份有限公司 Information processing method, information processing apparatus, storage medium, and electronic device
CN113642416A (en) * 2021-07-20 2021-11-12 武汉光庭信息技术股份有限公司 Test cloud platform for AI (Artificial intelligence) annotation and AI annotation test method
CN113764077A (en) * 2021-07-27 2021-12-07 上海思路迪生物医学科技有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN114463323A (en) * 2022-02-22 2022-05-10 数坤(北京)网络科技股份有限公司 Focal region identification method and device, electronic equipment and storage medium
CN114764812A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Focal region segmentation device

Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868670A (en) * 2011-07-08 2013-01-09 北京亿赞普网络技术有限公司 Unified registration and logon system as well as registration and logon method for mobile user
CN104462738A (en) * 2013-09-24 2015-03-25 西门子公司 Method, device and system for labeling medical images
WO2017101142A1 (en) * 2015-12-17 2017-06-22 安宁 Medical image labelling method and system
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN108197658A (en) * 2018-01-11 2018-06-22 阿里巴巴集团控股有限公司 Image labeling information processing method, device, server and system
CN108461129A (en) * 2018-03-05 2018-08-28 余夏夏 A kind of medical image mask method, device and user terminal based on image authentication
CN109035187A (en) * 2018-07-10 2018-12-18 杭州依图医疗技术有限公司 A kind of mask method and device of medical image
US20190073447A1 (en) * 2017-09-06 2019-03-07 International Business Machines Corporation Iterative semi-automatic annotation for workload reduction in medical image labeling
CN109446370A (en) * 2018-10-26 2019-03-08 广州金域医学检验中心有限公司 Pathology mask method and device, the computer readable storage medium of medical image
CN109493325A (en) * 2018-10-23 2019-03-19 清华大学 Tumor Heterogeneity analysis system based on CT images
CN109558770A (en) * 2017-09-26 2019-04-02 纵目科技(上海)股份有限公司 True value mask method
CN110211087A (en) * 2019-01-28 2019-09-06 南通大学 The semi-automatic diabetic eyeground pathological changes mask method that can share
CN110288017A (en) * 2019-06-21 2019-09-27 河北数云堂智能科技有限公司 High-precision cascade object detection method and device based on dynamic structure optimization
CN110378232A (en) * 2019-06-20 2019-10-25 陕西师范大学 The examination hall examinee position rapid detection method of improved SSD dual network
CN110503705A (en) * 2019-08-29 2019-11-26 上海鹰瞳医疗科技有限公司 Image labeling method and equipment
CN110600122A (en) * 2019-08-23 2019-12-20 腾讯医疗健康(深圳)有限公司 Digestive tract image processing method and device and medical system
US10546049B2 (en) * 2013-09-25 2020-01-28 Heartflow, Inc. Systems and methods for validating and correcting automated medical image annotations
CN110796201A (en) * 2019-10-31 2020-02-14 深圳前海达闼云端智能科技有限公司 Method for correcting label frame, electronic equipment and storage medium
CN110880169A (en) * 2019-10-16 2020-03-13 平安科技(深圳)有限公司 Method, device, computer system and readable storage medium for marking focus area
CN110909195A (en) * 2019-10-12 2020-03-24 平安科技(深圳)有限公司 Picture labeling method and device based on block chain, storage medium and server
CN110969105A (en) * 2019-11-22 2020-04-07 清华大学深圳国际研究生院 Human body posture estimation method
CN110991486A (en) * 2019-11-07 2020-04-10 北京邮电大学 Method and device for controlling quality of multi-person collaborative image annotation
CN110993064A (en) * 2019-11-05 2020-04-10 北京邮电大学 Deep learning-oriented medical image labeling method and device
CN111028224A (en) * 2019-12-12 2020-04-17 广西医准智能科技有限公司 Data labeling method, model training device, image processing method, image processing device and storage medium
CN111062390A (en) * 2019-12-18 2020-04-24 北京推想科技有限公司 Region-of-interest labeling method, device, equipment and storage medium
CN111402226A (en) * 2020-03-13 2020-07-10 浙江工业大学 Surface defect detection method based on cascade convolution neural network
CN111507405A (en) * 2020-04-17 2020-08-07 北京百度网讯科技有限公司 Picture labeling method and device, electronic equipment and computer readable storage medium
CN111680689A (en) * 2020-08-11 2020-09-18 武汉精立电子技术有限公司 Target detection method, system and storage medium based on deep learning
CN111709966A (en) * 2020-06-23 2020-09-25 上海鹰瞳医疗科技有限公司 Fundus image segmentation model training method and device
CN111723225A (en) * 2020-05-09 2020-09-29 江苏丰华联合科技有限公司 Image data annotation method

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868670A (en) * 2011-07-08 2013-01-09 北京亿赞普网络技术有限公司 Unified registration and logon system as well as registration and logon method for mobile user
CN104462738A (en) * 2013-09-24 2015-03-25 西门子公司 Method, device and system for labeling medical images
US10546049B2 (en) * 2013-09-25 2020-01-28 Heartflow, Inc. Systems and methods for validating and correcting automated medical image annotations
WO2017101142A1 (en) * 2015-12-17 2017-06-22 安宁 Medical image labelling method and system
CN108463814A (en) * 2015-12-17 2018-08-28 北京安宁福祉科技有限公司 A kind of medical image mask method and system
US20190073447A1 (en) * 2017-09-06 2019-03-07 International Business Machines Corporation Iterative semi-automatic annotation for workload reduction in medical image labeling
CN109558770A (en) * 2017-09-26 2019-04-02 纵目科技(上海)股份有限公司 True value mask method
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN108197658A (en) * 2018-01-11 2018-06-22 阿里巴巴集团控股有限公司 Image labeling information processing method, device, server and system
CN108461129A (en) * 2018-03-05 2018-08-28 余夏夏 A kind of medical image mask method, device and user terminal based on image authentication
CN109035187A (en) * 2018-07-10 2018-12-18 杭州依图医疗技术有限公司 A kind of mask method and device of medical image
CN109493325A (en) * 2018-10-23 2019-03-19 清华大学 Tumor Heterogeneity analysis system based on CT images
CN109446370A (en) * 2018-10-26 2019-03-08 广州金域医学检验中心有限公司 Pathology mask method and device, the computer readable storage medium of medical image
CN110211087A (en) * 2019-01-28 2019-09-06 南通大学 The semi-automatic diabetic eyeground pathological changes mask method that can share
CN110378232A (en) * 2019-06-20 2019-10-25 陕西师范大学 The examination hall examinee position rapid detection method of improved SSD dual network
CN110288017A (en) * 2019-06-21 2019-09-27 河北数云堂智能科技有限公司 High-precision cascade object detection method and device based on dynamic structure optimization
CN110600122A (en) * 2019-08-23 2019-12-20 腾讯医疗健康(深圳)有限公司 Digestive tract image processing method and device and medical system
CN110503705A (en) * 2019-08-29 2019-11-26 上海鹰瞳医疗科技有限公司 Image labeling method and equipment
CN110909195A (en) * 2019-10-12 2020-03-24 平安科技(深圳)有限公司 Picture labeling method and device based on block chain, storage medium and server
CN110880169A (en) * 2019-10-16 2020-03-13 平安科技(深圳)有限公司 Method, device, computer system and readable storage medium for marking focus area
CN110796201A (en) * 2019-10-31 2020-02-14 深圳前海达闼云端智能科技有限公司 Method for correcting label frame, electronic equipment and storage medium
CN110993064A (en) * 2019-11-05 2020-04-10 北京邮电大学 Deep learning-oriented medical image labeling method and device
CN110991486A (en) * 2019-11-07 2020-04-10 北京邮电大学 Method and device for controlling quality of multi-person collaborative image annotation
CN110969105A (en) * 2019-11-22 2020-04-07 清华大学深圳国际研究生院 Human body posture estimation method
CN111028224A (en) * 2019-12-12 2020-04-17 广西医准智能科技有限公司 Data labeling method, model training device, image processing method, image processing device and storage medium
CN111062390A (en) * 2019-12-18 2020-04-24 北京推想科技有限公司 Region-of-interest labeling method, device, equipment and storage medium
CN111402226A (en) * 2020-03-13 2020-07-10 浙江工业大学 Surface defect detection method based on cascade convolution neural network
CN111507405A (en) * 2020-04-17 2020-08-07 北京百度网讯科技有限公司 Picture labeling method and device, electronic equipment and computer readable storage medium
CN111723225A (en) * 2020-05-09 2020-09-29 江苏丰华联合科技有限公司 Image data annotation method
CN111709966A (en) * 2020-06-23 2020-09-25 上海鹰瞳医疗科技有限公司 Fundus image segmentation model training method and device
CN111680689A (en) * 2020-08-11 2020-09-18 武汉精立电子技术有限公司 Target detection method, system and storage medium based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
闵秋莎 et al.: "Research on Medical Image Compression Algorithms and Applications", 31 May 2018, Central China Normal University Press *
陈峙宇 et al.: "A Crowdsourcing-based Image Annotation System", Computer and Modernization *
韩冬 et al.: "Research and Application of Artificial Intelligence in Medical Imaging", Big Data *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409953A (en) * 2021-06-21 2021-09-17 数坤(北京)网络科技股份有限公司 Information processing method, information processing apparatus, storage medium, and electronic device
CN113409280A (en) * 2021-06-24 2021-09-17 青岛海信医疗设备股份有限公司 Medical image processing method, labeling method and electronic equipment
CN113409280B (en) * 2021-06-24 2022-08-02 青岛海信医疗设备股份有限公司 Medical image processing method, labeling method and electronic equipment
CN113642416A (en) * 2021-07-20 2021-11-12 武汉光庭信息技术股份有限公司 Test cloud platform for AI (Artificial intelligence) annotation and AI annotation test method
CN113764077A (en) * 2021-07-27 2021-12-07 上海思路迪生物医学科技有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN113764077B (en) * 2021-07-27 2024-04-19 上海思路迪生物医学科技有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN114463323A (en) * 2022-02-22 2022-05-10 数坤(北京)网络科技股份有限公司 Focal region identification method and device, electronic equipment and storage medium
CN114463323B (en) * 2022-02-22 2023-09-08 数坤(上海)医疗科技有限公司 Focal region identification method and device, electronic equipment and storage medium
CN114764812A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Focal region segmentation device
CN114764812B (en) * 2022-03-14 2024-08-02 什维新智医疗科技(上海)有限公司 Focal region segmentation device

Similar Documents

Publication Publication Date Title
CN112418263A (en) Medical image focus segmentation and labeling method and system
US11348249B2 (en) Training method for image semantic segmentation model and server
US7639890B2 (en) Automatic significant image generation based on image characteristics
US11323577B2 (en) Image processing device for creating an album
CN108564102A (en) Image clustering evaluation of result method and apparatus
CN111986785B (en) Medical image labeling method, device, equipment and storage medium
CN112102230B (en) Ultrasonic section identification method, system, computer device and storage medium
CN109345201A (en) Human Resources Management Method, device, electronic equipment and storage medium
CN115393351B (en) Method and device for judging cornea immune state based on Langerhans cells
CN110110257B (en) Data processing method and system, computer system and computer readable medium
CN111414930B (en) Deep learning model training method and device, electronic equipment and storage medium
CN112560957A (en) Neural network training and detecting method, device and equipment
CN113130052A (en) Doctor recommendation method, doctor recommendation device, terminal equipment and storage medium
CN114419378B (en) Image classification method and device, electronic equipment and medium
US9538920B2 (en) Standalone annotations of axial-view spine images
CN115954101A (en) Health degree management system and management method based on AI tongue diagnosis image processing
CN115601473A (en) Printed matter typesetting system and method based on intelligent recognition
CN111275699A (en) Medical image processing method, device, equipment and storage medium
CN108052918A (en) A kind of person's handwriting Compare System and method
CN106960133A (en) A kind of disease forecasting method and device
CN108647986B (en) Target user determination method and device and electronic equipment
CN112445846A (en) Medical item identification method, device, equipment and computer readable storage medium
CN112183603A (en) Pox type recognition model training method and related device
CN113590937B (en) Hotel searching and information management method and device, electronic equipment and storage medium
CN114529892A (en) Card information detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210715

Address after: 100048 room 21, 4th floor, building 2, national defense science and Technology Park, Haidian District, Beijing

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Applicant after: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 200233 room 01, 8th floor, building 1, No. 180, Yizhou Road, Xuhui District, Shanghai

Applicant before: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right