CN114297429A - Semi-automatic labeling system and method for medical image label - Google Patents

Semi-automatic labeling system and method for medical image label

Info

Publication number
CN114297429A
Authority
CN
China
Prior art keywords
labeling
label
medical image
labels
medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210164256.0A
Other languages
Chinese (zh)
Inventor
冯逢
杨矫云
安宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anhe Welfare Technology Co ltd
Original Assignee
Beijing Anhe Welfare Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anhe Welfare Technology Co ltd filed Critical Beijing Anhe Welfare Technology Co ltd
Publication of CN114297429A publication Critical patent/CN114297429A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/166 - Editing, e.g. inserting or deleting
    • G06F40/169 - Annotation, e.g. comment data or footnotes
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to a semi-automatic labeling system and method for medical image labels. A manual label input unit is used by a labeling operator to manually input an accurate label based on at least two labels and/or related sentences in the system. A distribution unit distributes the unlabeled medical images to at least two labeling terminal operators, and the at least two operators complete the labeling independently of each other. A data record of the matching relation between each medical image and its matched labels is stored in the annotated content server unit. After a medical image to be labeled has been preliminarily labeled, the annotated content server unit generates a dynamically changing label queue based on the labeling content of public labeling personnel, and labels whose position is smaller than a sequence threshold are added to the label ontology library after confirmation by an expert. The order of the labels in the label queue changes with their labeling weight, and the labeling capability value of the public labeling personnel changes with the order of the labels in the label queue, so that knowledge related to the medical picture can be rediscovered and newly discovered information can be updated.

Description

Semi-automatic labeling system and method for medical image label
This application is a divisional application of the invention patent application No. 201580084598.X, entitled "Medical image labeling method and system", filed on December 24, 2015.
Technical Field
The invention relates to the technical field of image processing, in particular to a semi-automatic labeling system and method for medical image labels.
Background
To date, medical and academic institutions around the world have accumulated a very large number of medical images of many kinds, and managing, retrieving and reusing these images has long been a problem.
Currently, Content-Based Image Retrieval (CBIR) is a commonly used solution for image retrieval. CBIR searches by comparing the visual features of an image with the visual features of the search criterion (e.g., a picture) entered by the user. However, because of the "semantic gap" between low-level visual features and the high-level semantics of human visual cognition, CBIR search results are often unsatisfactory in the medical field. Therefore, image retrieval still mainly relies on text information attached to the image, which makes text label annotation of images very important.
At present, image text labels are produced mainly by manual labeling or purely automatic labeling. Manual labeling is inefficient and expensive, depends entirely on the professional knowledge of the labeling operators, and its quality cannot be guaranteed over long labeling sessions. Automatic labeling is efficient, but no label recommendation method currently exists that can fully guarantee quality.
Chinese patent CN104462738A discloses a method for labeling medical images, which comprises: dividing a set of unlabeled medical images into at least two subsets of unlabeled medical images; allocating the at least two unlabeled medical image subsets to at least two labeling terminals so that each labeling terminal labels the medical images in the subset allocated to it; and receiving the labeling information uploaded by each labeling terminal. Although that patent enables users to collaboratively label medical images at any time and place, purely manual labeling depends entirely on the professional knowledge of the labeling operator, the quality of the labels cannot be guaranteed, and manual labeling is slow and inefficient. Therefore, the market needs a semi-automatic medical image labeling method that improves both the quality and the efficiency of labeling.
No prior-art solution discloses adjusting the recommendation order of labels based on changes in their labeling weights.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a medical image labeling method which is characterized by comprising the following steps:
recommending, to at least two annotation terminals, at least two selection labels for an annotation operator to choose from, based on the matching values between the keywords of the medical image to be annotated and the labels in the label ontology library;
marking medical images to be marked independently by marking operators of at least two marking terminals based on the selected labels;
and confirming the labeling label of the medical image to be labeled based on the comparison result of the labeling contents of at least two labeling operators.
According to a preferred embodiment, the method further comprises:
and allocating the medical image to be labeled to at least two labeling operators for independent labeling, and confirming the labeling result based on the intersection of the labels labeled by the at least two labeling operators.
According to a preferred embodiment, the method further comprises:
and allocating the medical image to be labeled to at least two labeling operators for independent labeling, and fusing the weights of the labels labeled by the at least two labeling operators to obtain a confirmed labeling result.
According to a preferred embodiment, the method further comprises:
allocating medical images to be labeled to at least two labeling operators for independent labeling, comparing labeling labels of the at least two labeling operators, respectively displaying labeling results with large difference to the at least two labeling operators, and negotiating and confirming the labeling results by the at least two labeling operators.
According to a preferred embodiment, the method further comprises:
based on medical images derived from an authoritative journal and a book, sentences related to the medical images are automatically found out from the full text in the authoritative journal and the book by using a full text index, so that at least two labels selected by a labeling operator are generated based on the sentences.
According to a preferred embodiment, the method further comprises:
the label is displayed on the labeling terminal equipment of the labeling operator in a selectable button mode.
According to a preferred embodiment, the method further comprises:
the label is displayed on the terminal device of the annotation operator in a manner combined with the medical image related statement.
According to a preferred embodiment, the method further comprises:
and sorting the matching values of the keywords and the labels, and selecting at least two labels larger than the matching threshold value as recommended selection labels.
According to a preferred embodiment, the method further comprises:
generating a dynamically changing label queue based on the labeling content of public labeling personnel, and adding labels smaller than a sequence threshold to the label ontology library after they are confirmed by experts; wherein:
the order of the labels in the label queue changes with their labeling weight,
the labeling ability value of the public labeling personnel changes with the order of the labels in the label queue, and
the number of labels added to the label ontology library correspondingly increases the labeling ability value of the corresponding public labeling personnel.
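A minimal sketch of such a dynamic label queue is given below; the weighting rule, the ability-value adjustment per rank position and the sequence threshold of 3 are illustrative assumptions only:

```python
# Illustrative sketch of the dynamic label queue: labels proposed by public
# annotators are ordered by an accumulated labeling weight; labels whose rank
# is smaller than the sequence threshold become candidates for expert review,
# and the ability value of the contributing annotators moves with the rank change.
# All numeric values here are assumptions.

from dataclasses import dataclass, field

@dataclass
class QueuedLabel:
    name: str
    contributors: list[str]
    weight: float = 0.0

@dataclass
class LabelQueue:
    labels: list[QueuedLabel] = field(default_factory=list)
    ability: dict[str, float] = field(default_factory=dict)
    sequence_threshold: int = 3          # ranks below this index go to expert review

    def add_annotation(self, label: str, annotator: str, annotator_weight: float):
        old_rank = self._rank(label)
        entry = next((l for l in self.labels if l.name == label), None)
        if entry is None:
            entry = QueuedLabel(label, [annotator])
            self.labels.append(entry)
        elif annotator not in entry.contributors:
            entry.contributors.append(annotator)
        entry.weight += annotator_weight
        self.labels.sort(key=lambda l: l.weight, reverse=True)
        new_rank = self._rank(label)
        # Ability value moves with the label's rank: forward -> increase, backward -> decrease.
        if old_rank is not None:
            delta = 0.1 * (old_rank - new_rank)
            for person in entry.contributors:
                self.ability[person] = self.ability.get(person, 1.0) + delta

    def candidates_for_expert(self) -> list[str]:
        return [l.name for i, l in enumerate(self.labels) if i < self.sequence_threshold]

    def _rank(self, label: str):
        for i, l in enumerate(self.labels):
            if l.name == label:
                return i
        return None
```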
A medical image annotation system is characterized by comprising an importing unit for importing medical images, a first storage server for storing the medical images and keywords thereof, a second storage server for storing labels, a matching unit, a distribution unit for distributing the medical images to be annotated and corresponding selection labels to at least two annotation terminals, a confirmation unit for confirming the annotation results of at least two annotation operators, and at least two annotation terminals;
the importing unit imports and stores the medical image to be annotated to the first storage server;
the matching unit extracts the keywords of the medical image to be annotated stored in the first storage server and at least one label stored in the second storage server for matching,
the distribution unit recommends at least two selection labels which can be selected by an annotation operator to at least two annotation terminals based on the matching value of the keywords of the medical image to be annotated and the labels in the label body library;
the confirming unit confirms the labeling label of the medical image to be labeled based on the comparison result of the labeling contents of at least two labeling operators.
According to an independent aspect of the invention, the invention discloses a multi-person collaborative semi-automatic medical image annotation method which is characterized in that an annotation operation is executed by distributing a medical image to be annotated to at least two annotation terminals, the annotation operation is completed based on at least two labels recommended by a multi-person collaborative semi-automatic medical image system and capable of being selected by annotation operators of the annotation terminals, and the multi-person collaborative semi-automatic medical image system fuses or compares annotation results independently completed by the annotation operators of the at least two annotation terminals to determine the annotation label of the medical image to be annotated.
According to a preferred embodiment, the multi-person collaborative semi-automatic medical image system determines the labeling label of the medical image to be labeled by the following method:
the labeling label of the medical image to be labeled is the intersection of the labeling results that the multi-person collaborative semi-automatic medical image system receives from the at least two labeling terminals; for a medical image to be annotated whose intersection is empty, the system resends the medical image to the at least two annotation terminals to perform the annotation operation again, until the system can determine the annotation label of the medical image to be annotated; or
And for the medical image to be annotated with empty intersection, the multi-person collaborative semi-automatic medical image system simultaneously displays the annotation results of the at least two annotation terminals to the annotation operators of the at least two annotation terminals, and the annotation label of the medical image to be annotated is determined after negotiation by the at least two annotation operators.
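A minimal sketch of this intersection-based confirmation loop, with assumed dispatch() and negotiate() interfaces standing in for the real terminal interaction:

```python
# Sketch of intersection-based confirmation (assumed control flow, not the
# patent's exact implementation): the confirmed labels are the intersection of
# the independent results; on an empty intersection the image is either
# re-dispatched or finally sent to the annotators for negotiation.

def confirm_by_intersection(results: list[set[str]]) -> set[str]:
    return set.intersection(*results) if results else set()

def label_image(dispatch, negotiate, max_rounds: int = 3) -> set[str]:
    """dispatch() returns the independent label sets from >= 2 terminals;
    negotiate(results) returns the labels agreed on after discussion."""
    results = []
    for _ in range(max_rounds):
        results = dispatch()
        confirmed = confirm_by_intersection(results)
        if confirmed:
            return confirmed
    # Fall back to showing both results and letting the operators negotiate.
    return negotiate(results)
```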
According to a preferred embodiment, the multi-person collaborative semi-automatic medical image system determines the labeling label of the medical image to be labeled by the following method:
the multi-person collaborative semi-automatic medical image system compares the labeling results of the at least two labeling terminals; where the labels differ, the system displays the labeling results of the at least two labeling terminals to the labeling operators of both terminals simultaneously, and the annotation label of the medical image to be annotated is determined after negotiation between the at least two labeling operators.
According to a preferred embodiment, the multi-person collaborative semi-automatic medical image system recommends at least two selectable labels for the labeling operators of the at least two labeling terminals by:
for medical images taken from authoritative journals and books, the multi-person collaborative semi-automatic medical image system automatically finds sentences related to the medical image from the full text of those journals and books using a full-text index, generates at least two labels selectable by the labeling operator from the related sentences, and displays the labels on the labeling operator's terminal as selectable buttons and/or together with the related sentences.
According to a preferred embodiment, keywords are extracted from the automatically found sentences related to the medical images, the keywords are matched with the labels in the label ontology library, and at least two labels which can be selected by a labeling operator are generated according to the matching degree of the keywords and the labels.
According to a preferred embodiment, the labeling operators of the at least two labeling terminals mark the boundary of the region of interest in the medical image to be labeled based on the selected label and the sentence related to the label.
According to a preferred embodiment, the allocation mode of the multi-person collaborative semi-automatic medical image annotation system for allocating the medical image to be annotated to at least two annotation terminals is as follows:
distributing the medical image to be labeled to the at least two labeling terminals based on the priority order of the preset labeling terminals; or
Distributing the medical image to be labeled to the at least two labeling terminals based on the terminal processing capacity sequence of the at least two labeling terminals; or
And distributing the medical image to be labeled to the at least two labeling terminals based on a load balancing principle.
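The three allocation modes above may be sketched as follows; the data structures and the round-robin balancing rule are illustrative assumptions:

```python
# Sketch of the three allocation strategies named above (priority order,
# terminal processing capacity, and load balancing).

def allocate_by_priority(images, terminals):
    """terminals: list of (terminal_id, capacity) sorted by descending priority."""
    plan, queue = {}, list(images)
    for terminal_id, capacity in terminals:
        plan[terminal_id], queue = queue[:capacity], queue[capacity:]
    return plan

def allocate_by_capacity(images, capacities):
    """capacities: dict terminal_id -> number of images it can process."""
    order = sorted(capacities, key=capacities.get, reverse=True)
    return allocate_by_priority(images, [(t, capacities[t]) for t in order])

def allocate_balanced(images, terminal_ids):
    """Round-robin assignment so every terminal receives a similar load."""
    plan = {t: [] for t in terminal_ids}
    for i, image in enumerate(images):
        plan[terminal_ids[i % len(terminal_ids)]].append(image)
    return plan
```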
According to a preferred embodiment, the medical image is stored on a first server, the label matched with the medical image is stored on a second server, and the matching relation between the medical image and the label is stored on the second server as a data record; when the user extracts the labeled medical image, the first server and the second server respectively send data in parallel, and the user locally matches the label with the medical image according to the data record from the second server and displays the label locally.
According to a preferred embodiment, the tagging terminal is a feature phone, a smart phone, a palmtop computer, a personal computer, a tablet computer or a personal digital assistant.
According to a preferred embodiment, the method comprises the steps of:
the multi-person collaborative semi-automatic medical image annotation system distributes the medical image to be annotated to at least two annotation terminals to execute annotation operation;
automatically finding out sentences related to the medical images from the full texts in the authoritative periodicals and the books by using a full-text index based on the medical images from the authoritative periodicals and the books by the multi-person collaborative semi-automatic medical image system so as to generate at least two labels which can be selected by an annotation operator based on the related sentences, and displaying the labels to the at least two annotation terminals by the system;
the labeling operators of the at least two labeling terminals independently complete the labeling of the medical image to be labeled based on the label recommended by the system;
and determining the labeling label of the medical image to be labeled by adopting a fusion or comparison mode for the labeling results independently completed by the labeling operators of the at least two labeling terminals.
According to another independent aspect of the invention, the invention discloses a multi-person collaborative semi-automatic medical image annotation system which is characterized in that the system matches labels and/or articles in a label ontology library unit based on the text of a medical image, and automatically recommends at least two matched labels and/or related sentences to at least two annotation terminals.
According to a preferred embodiment, the tag ontology library unit comprises a tagged tag unit and a medical journal and book data unit, when an operator of the tagging terminal imports an unmarked image into the system, the operator searches the tagged tag unit and/or the medical journal and book data unit according to image text information, and generates at least two tags and/or related sentences based on the matching scores of the search result.
According to a preferred embodiment, the at least two generated labels are displayed on the operator's annotation terminal in the form of selectable buttons.
According to a preferred embodiment, the generated at least two labels are displayed on the labeling terminal of the operator in combination with the related sentence.
According to a preferred embodiment, the system further comprises a manual input label unit for manually inputting an accurate label by the operator based on the at least two labels and/or the related sentence.
According to a preferred embodiment, the system further comprises a distribution unit and a comparison unit, wherein the distribution unit is used for distributing the unmarked medical images to at least two marking terminal operators, and the at least two operators respectively and independently complete marking; the comparison unit is used for comparing and analyzing the labeling results of the at least two operators, and if the comparison results are displayed differently for the same medical image, the comparison unit sends the labeling results of the at least two operators to the at least two operators simultaneously after the comparison and analysis, and the at least two operators negotiate to determine an accurate label.
According to a preferred embodiment, the system further comprises a high-speed remote server unit and an annotated content server unit, wherein the medical image is stored in the high-speed remote server unit, the label matched with the medical image is stored in the annotated content server unit, and the matching relation data record between the medical image and the label matched with the medical image is stored in the annotated content server unit.
According to a preferred embodiment, after receiving the command of extracting the labeled medical image, the high-speed remote server unit and the labeled content server unit respectively send out related data from different positions and display the related data on a labeling terminal, and an operator matches the label with the medical image and displays the matched label on the labeling terminal according to the matching relation data record from the labeled content server.
According to a preferred embodiment, the annotation content server unit has an encryption system.
According to a preferred embodiment, the system further comprises an import and export unit for importing unlabeled medical images and for exporting labeled medical images into local files.
According to yet another independent aspect of the invention, the invention discloses a medical image labeling method, which is characterized by comprising the following steps:
responding to a request of at least one labeling terminal, and extracting keywords of the description information of the medical image to be labeled;
matching the keywords with tags in at least one tag ontology library;
individually allocating unmarked medical images to at least one marking terminal;
recommending at least one selection label to the labeling operator of the corresponding labeling terminal based on the matching value between the keyword and the at least one label;
retrieving sentences associated with the medical images and/or the selection labels in a full-text retrieval mode and marking and displaying the sentences to a labeling operator;
recording the labeling information of at least one labeling operator and counting intersection labels of the same medical image.
According to a preferred embodiment, the medical image to be annotated is stored in a medical image database and is divided into at least two unlabelled subsets of medical images according to their description information, each subset comprising at least one medical image to be annotated.
According to a preferred embodiment, the subset of unlabelled medical images is partitioned according to the descriptive information of the medical images to be labeled, based on the biological anatomical structure or the biological physiological system or a combination of the biological anatomical structure and the biological physiological system.
According to a preferred embodiment, the medical image database comprises an unlabelled medical image database and a labeled medical image database, the labeled medical image database is divided into at least two labeled medical image subsets based on the label or labeling information of the labeled images, and each subset comprises at least one medical image to be labeled.
According to a preferred embodiment, the subset of labeled medical images is partitioned according to labeling or labeling information of the labeled medical images and based on the biological anatomy and/or the biological physiological system.
According to a preferred embodiment, the selected tags include a selected tag generated by matching a keyword with tags in at least one tag ontology library and medical images derived from an authoritative journal and books, and sentences related to the keyword and the medical images are automatically found out from full texts in the authoritative journal and the books by using a full-text index, so that at least two tags which can be selected by a labeling operator are generated.
According to a preferred embodiment, the generated at least two labels which can be selected by the labeling operator are displayed on the labeling terminal of the labeling operator in a selectable button form, meanwhile, the sentences related to the labels are displayed on the labeling terminal of the labeling operator, and the labeling operator marks the boundary of the interest area in the medical image to be labeled based on the selected labels and the sentences related to the labels.
According to a preferred embodiment, the method further comprises: actively sending the unmarked medical image to at least one marking terminal, wherein the unmarked medical image comprises a medical image to be marked and sent to at least two marking operators at the same time, and the at least two marking operators respectively and independently complete marking.
According to a preferred embodiment, the labels which are respectively and independently completed by the at least two labeling operators are compared; if the comparison result has a large difference, the labeling contents of the two parties are displayed to the two persons at the same time, and the two parties negotiate to determine the most accurate labeling label.
A visualization device of a medical system is characterized by comprising an image display part, an image analysis part and an image labeling part, wherein the visualization device is connected with a medical image labeling system,
the visualization equipment is used as an image annotation terminal and sends an image annotation request to the image annotation system;
the image annotation system matches the keywords of the annotation request with the labels in at least one label body library and individually allocates medical images to be annotated to the visualization equipment;
the image annotation system recommends at least one selected label to an annotation operator of the visualization equipment based on the matching value of the keyword of the annotation request and the at least one label;
the image annotation system retrieves sentences associated with the selected labels in a full-text retrieval mode through the visualization equipment and displays the sentences to an annotation operator in a marking mode;
the annotation operator finishes the annotation of the medical image to be annotated in the image annotation part based on the selection label displayed by the image display part and the sentence associated with the selection label, and the image annotation part sends annotation content to the image annotation system;
the image analysis part records the labeling information of at least one labeling operator and counts intersection labels of the same medical image.
The invention also provides a semi-automatic labeling system for medical image labels, which at least comprises a label ontology library unit for storing labels and/or articles and related sentences related to the medical image, a manual label input unit, a distribution unit and an annotated content server unit, wherein the label ontology library unit comprises a labeled label unit and a medical journal and book data unit; the manual label input unit is used by a labeling operator to manually input an accurate label based on at least two labels and/or related sentences; the distribution unit is used to distribute the unlabeled medical images to at least two labeling terminal operators, and the at least two operators complete the labeling independently of each other; a data record of the matching relation between each medical image and its matched labels is stored in the annotated content server unit; after the medical image to be labeled has been initially labeled, the annotated content server unit generates a dynamically changing label queue based on the labeling content of public labeling personnel, and labels smaller than a sequence threshold are added to the label ontology library after being confirmed by an expert; the order of the labels in the label queue changes with their labeling weight, and the labeling capability value of the public labeling personnel changes with the order of the labels in the label queue.
Preferably, the annotation weight is obtained by weighting the annotation content based on the qualification and annotation history of the public annotation staff.
Preferably, the annotated content server unit evaluates the labeling ability value of the corresponding public labeling personnel based on how a label's position in the label queue changes; if a label keeps moving forward in the queue, the labeling ability value of the public labeling personnel who proposed it increases; if a label keeps moving backward, their labeling ability value decreases.
Preferably, after a label is incorporated into the label ontology library, the labeling ability value of the public labeling personnel corresponding to that label increases, with the increment set by an administrator; the more labels a public labeling person has incorporated into the label ontology library, the more his or her labeling ability value increases.
Preferably, the medical image labeling system further comprises a comparison unit, wherein the comparison unit is used for comparing and analyzing the labeling results of at least two operators, if the comparison results of the same medical image are displayed differently, the comparison unit sends the labeling results of the at least two operators to the at least two operators simultaneously after the comparison and analysis, and the at least two operators negotiate to determine an accurate label.
Preferably, the annotation operator completes annotation of the medical image to be annotated based on the selection tag displayed by the image display part and the sentence associated with the selection tag, and sends the annotation to the image annotation system.
Preferably, after the operator of the labeling terminal imports the unlabeled image into the system, the tag ontology library unit retrieves the labeled tag unit and/or the medical journal and book data unit according to the image text information, and generates at least two tags and/or related sentences based on the matching scores of the retrieval result.
The invention also provides a semi-automatic labeling method of medical image labels, which is characterized by at least comprising the following steps: storing the labeled tag unit, the medical journal and the book data unit; manually inputting an accurate label by a labeling operator based on at least two labels and/or related sentences; distributing the unmarked medical images to at least two marking terminal operators, and respectively and independently finishing marking by the at least two operators; recording and storing matching relation data between the medical image and the label matched with the medical image;
after the medical image to be labeled is initially labeled, generating a dynamically changing label queue based on the labeling content of public labeling personnel, and adding labels smaller than a sequence threshold to the label ontology library after they are confirmed by experts; wherein the order of the labels in the label queue changes with their labeling weight, and the labeling capability value of the public labeling personnel changes with the order of the labels in the label queue.
Preferably, the method further comprises: evaluating the labeling ability value of the corresponding public labeling personnel based on how a label's position in the label queue changes; if a label keeps moving forward in the queue, the labeling ability value of the public labeling personnel who proposed it increases; if a label keeps moving backward, their labeling ability value decreases.
The invention has the beneficial technical effects that:
firstly, the invention can automatically recommend labels for users and support a plurality of users to work cooperatively, which is the most important function.
Secondly, the invention provides a user management function, and an administrator can easily manage the information of the annotation user by using a management tool and distribute the images to be annotated to the annotators.
In addition, the invention provides data import and export functions. Annotators can log in and upload their own pictures, and the administrator is responsible for distributing them. Besides storing data in the system database, users can export specified data to local files in both csv and xml formats.
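For illustration, a minimal sketch of exporting annotation records to csv and xml local files; the record fields and file names are assumptions:

```python
# Minimal sketch of exporting annotation records to local csv and xml files.
import csv
import xml.etree.ElementTree as ET

records = [
    {"image_id": "img_001", "label": "left ventricular hypertrophy", "annotator": "A"},
    {"image_id": "img_002", "label": "brain tumor", "annotator": "B"},
]

# CSV export
with open("annotations.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["image_id", "label", "annotator"])
    writer.writeheader()
    writer.writerows(records)

# XML export
root = ET.Element("annotations")
for rec in records:
    item = ET.SubElement(root, "annotation")
    for key, value in rec.items():
        ET.SubElement(item, key).text = value
ET.ElementTree(root).write("annotations.xml", encoding="utf-8", xml_declaration=True)
```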
Finally, because the pictures all come from articles, the invention also supports viewing the articles related to an image, automatically finding the sentences related to its labels and highlighting them.
Drawings
FIG. 1 is a schematic diagram of a medical image labeling method according to a preferred embodiment of the present invention;
FIG. 2 is a diagram illustrating a multi-user collaborative semi-automatic medical image annotation method according to the present invention;
FIG. 3 is a schematic diagram of another medical image annotation method preferred by the present invention;
FIG. 4 is a schematic diagram of a medical image annotation system in accordance with a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of a multi-person collaborative semi-automatic medical image annotation system of the present invention;
FIG. 6 is a schematic view of a medical system visualization device of the present invention; and
FIG. 7 is a schematic diagram of the architecture of the medical image annotation system of the present invention.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
In the present invention, a medical picture is also referred to as a medical image, and refers to a picture or an image obtained by non-invasively obtaining internal tissues of an animal body, a human body, or a part of a human body for medical treatment or medical research.
Example one
This embodiment provides a medical image annotation method comprising: recommending at least two selection labels to at least two annotation terminals based on the matching values between the keywords of the medical image to be annotated and the labels in the label ontology library, and having the annotation operators of the at least two annotation terminals each independently annotate the medical image to be annotated based on the selection labels they choose. Alternatively, the matching values between the keywords and the at least one label are sorted, at least two labels are selected according to a certain rule and sent to the corresponding labeling terminals, where they are displayed as selection labels that the labeling operator can choose.
As shown in fig. 1, at least one annotation operator inputs an annotation request at least one annotation terminal. Responding to the labeling requirements of the labeling operators, and distributing at least one medical image to be labeled to at least two labeling terminals, so that the labeling operators can independently label at the labeling terminals. Preferably, the medical images to be annotated are allocated to at least two annotation terminals based on a preset annotation terminal priority order. Or, the subset of the non-labeled medical images is distributed to at least two labeling terminals based on the terminal processing capacity sequence of the labeling terminals. Or, the subset of the unlabeled medical images is distributed to at least two labeling terminals based on the load balancing principle. Or, simultaneously, the unmarked medical image subsets are distributed to at least two marked terminals based on the marked terminal priority order, the terminal processing capacity order and the load balancing principle.
According to a preferred embodiment, the subsets of unlabeled medical images are assigned to the labeling terminals based on a preset labeling terminal priority order. For example, suppose there are three labeling terminals, namely labeling terminal 1, labeling terminal 2 and labeling terminal 3, and each labeling terminal can process 10 medical images. Labeling terminal 1 has the highest priority, labeling terminal 2 the second, and labeling terminal 3 the lowest. Assume there are two subsets of unlabeled medical images to be allocated, namely unlabeled medical image subset 1 and subset 2, each containing 10 images. Then the task allocation result may be: the 10 images in unlabeled medical image subset 1 (or subset 2) are sent to labeling terminal 1, the 10 images in the remaining unlabeled subset are sent to labeling terminal 2, and no unlabeled images are sent to labeling terminal 3.
According to a preferred embodiment, the subsets of unlabeled medical images are assigned to the labeling terminals based on the order of their terminal processing capacities. For example, assume there are three labeling terminals, namely labeling terminal 1, labeling terminal 2 and labeling terminal 3. Labeling terminal 1 can process 10 medical images, labeling terminal 2 can process 10, and labeling terminal 3 can process 5. Assume there are two subsets of unlabeled medical images to be allocated, namely unlabeled medical image subset 1 and subset 2, each containing 10 images. Then the task allocation result may be: the 10 images in unlabeled medical image subset 1 (or subset 2) are sent to labeling terminal 1, the 10 images in the remaining unlabeled subset are sent to labeling terminal 2, and no unlabeled images are sent to labeling terminal 3.
Keywords are extracted from the descriptive text of the medical image to be labeled and matched with at least one label in the label ontology library. The matching value between each keyword and the at least one label is recorded. At least two labels whose matching values exceed a preset matching threshold are recommended to the corresponding labeling terminals for the labeling operators to select. The labels are displayed on the labeling operator's terminal as selectable buttons, or together with the sentences related to the medical image.
Alternatively, sentences related to medical images are automatically found from the full text in the authoritative journal and book by using the full text index based on the medical images derived from the authoritative journal and book. And generating a label for the marking operator to select from the related statement, and selecting the label by the marking operator for marking.
After at least two labeling operators label the same medical image, comparing the labeling contents of the at least two labeling operators. And confirming the labeling result based on the intersection of the at least two labeling contents.
And if the difference of the labeling contents of at least two labeling operators is large, respectively displaying the labeling contents or the selected labels of other labeling operators to the labeling operators at the labeling terminal. And requesting the annotation operator to annotate the medical image to be annotated again. Or, under the condition that the difference of the labeling contents of at least two labeling operators is large, establishing communication connection or instant communication connection for at least two labeling operators labeling the same medical image. And at least two labeling operators confirm the final labeling label in a consultation mode.
Or after the at least two labeling operators label the same medical image, confirming the labeling result based on the weight fusion of the labels selected by the at least two labeling operators. Thereby obtaining the final labeling label.
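A minimal sketch of this weight-fusion confirmation, assuming per-label weights chosen by each operator and a fused acceptance threshold (both assumptions):

```python
# Sketch of confirming a label set by fusing the weights of the labels chosen
# by the independent annotators (the fusion rule and threshold are assumptions).

def fuse_labels(annotations: dict[str, dict[str, float]],
                annotator_weights: dict[str, float],
                accept_threshold: float = 1.0) -> list[str]:
    """annotations: annotator -> {label: label weight chosen by that annotator}."""
    fused: dict[str, float] = {}
    for annotator, labels in annotations.items():
        for label, weight in labels.items():
            fused[label] = fused.get(label, 0.0) + weight * annotator_weights.get(annotator, 1.0)
    return [label for label, score in sorted(fused.items(), key=lambda kv: -kv[1])
            if score >= accept_threshold]

result = fuse_labels(
    {"operator_1": {"cardiomegaly": 0.8, "pleural effusion": 0.4},
     "operator_2": {"cardiomegaly": 0.9}},
    annotator_weights={"operator_1": 1.0, "operator_2": 1.2},
)
print(result)  # cardiomegaly passes the threshold; pleural effusion does not
```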
Example two
The embodiment provides a medical image annotation method which is characterized by comprising the steps of recommending at least two selection labels to at least two annotation terminals based on matching values of keywords of a medical image to be annotated and labels in a label body library, and enabling annotation operators of the at least two annotation terminals to respectively and independently annotate the medical image to be annotated based on the selection labels of the annotation operators.
Firstly, a large number of labeled medical images, related keywords and labeled labels are classified and stored as samples. And respectively establishing mapping relations among the labeled medical images, the keywords and the labels, and storing the mapping relations in a database.
Then, the image similarity of each labeled medical image and the medical image to be labeled in the database is calculated respectively. The image similarity mainly calculates the similarity of the image contents of the two pictures to obtain an image similarity value, and the higher the image similarity value is, the more similar the contents of the two pictures are. The image similarity can be calculated by using visual features of the two pictures, and the visual features can be specifically color RGB (Red Green Blue, three primary colors) features, texture features, histogram features, SIFT (Scale-invariant feature transform) features and the like.
Labeled medical images whose image similarity with the medical image to be labeled is greater than a first threshold are selected to form a picture group. The first threshold is a preset similarity value; in particular, it may be set by the annotation operator. The higher the first threshold is set, the more similar the labeled images found in the database are to the image to be labeled, but the fewer labeled images are found.
The labels corresponding to each labeled image in the medical image group are extracted to form a label phrase, and the corresponding keywords form a keyword group. The keywords and labels are extracted according to the mapping relations of the labeled images: if a labeled image stored in the database carries a label, the keywords of its descriptive text are identified first and the label is then extracted.
At least one label in the label phrase is output as a selection label for the picture to be labeled. Since the label phrase may contain many labels while the user may only want a preset number of them, this can be implemented as follows: judge whether the number of labels in the label phrase is greater than a third threshold; if so, output a preset number of labels from the label phrase as selection labels for the picture to be labeled, where the preset number is less than or equal to the third threshold. Specifically, the preset number and the third threshold are label counts set by the labeling operator.
And displaying the corresponding at least one label on the labeling terminal in the form of a selectable button.
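The retrieval pipeline of this embodiment may be sketched as follows; the grey-level histogram similarity stands in for any of the visual features mentioned above (RGB, texture, SIFT), and the thresholds are illustrative:

```python
# Sketch: compute an image similarity between the image to be labeled and each
# labeled image, keep those above the first threshold, collect their labels,
# and cap the output at a preset number.

import numpy as np

def histogram_similarity(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Similarity in [0, 1] based on intersection of normalized intensity histograms."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    h_a, h_b = h_a / h_a.sum(), h_b / h_b.sum()
    return float(np.minimum(h_a, h_b).sum())

def recommend_from_similar(query: np.ndarray,
                           labeled: dict[str, tuple[np.ndarray, list[str]]],
                           first_threshold: float = 0.7,
                           preset_number: int = 5) -> list[str]:
    """labeled: image_id -> (pixel array, labels of that image)."""
    group = [labels for img, labels in labeled.values()
             if histogram_similarity(query, img) > first_threshold]
    label_phrase = [lbl for labels in group for lbl in labels]
    # Keep unique labels, most frequent first, capped at the preset number.
    ordered = sorted(set(label_phrase), key=label_phrase.count, reverse=True)
    return ordered[:preset_number]
```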
According to a preferred embodiment, the keywords are extracted from the descriptive text of the medical image to be labeled as follows: the text is segmented into at least one word segment, and the semantic content and semantic type of each segment are obtained. The semantic content is the meaning carried by the segment, and the semantic type is the type of that semantic information, for example the segment's part of speech or the concept it denotes.
The segments are then screened against the corresponding keyword group according to their semantic content and semantic type, so as to filter out the keywords related to the medical image to be labeled. A labeled medical image that is highly similar to the medical image to be labeled corresponds to several keywords; the keyword in the keyword group whose meaning is most similar to the words in the descriptive text is selected as a keyword of the medical image to be labeled. Semantic similarity is computed between two words to obtain a semantic similarity value; the higher the value, the more similar the meanings of the two words. The second threshold may specifically be a semantic similarity value preset by the user. At least one label is then obtained from the mapping relation between the keywords and the label phrase.
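A minimal sketch of this keyword-extraction step; a difflib string similarity stands in for the semantic similarity measure, which the method leaves open, and the stop-word list is an assumption:

```python
# Sketch: segment the descriptive text, filter segments by a stop-word list
# (standing in for semantic-type filtering), and keep the candidate keywords
# whose best similarity to any segment exceeds the second threshold.

from difflib import SequenceMatcher

STOP_WORDS = {"the", "a", "an", "of", "in", "with", "showing", "image"}

def segment(text: str) -> list[str]:
    return [w.strip(".,;:").lower() for w in text.split() if w.strip(".,;:")]

def extract_keywords(description: str,
                     candidate_keywords: list[str],
                     second_threshold: float = 0.6) -> list[str]:
    tokens = [t for t in segment(description) if t not in STOP_WORDS]
    keywords = []
    for candidate in candidate_keywords:
        best = max((SequenceMatcher(None, candidate, t).ratio() for t in tokens),
                   default=0.0)
        if best >= second_threshold:
            keywords.append(candidate)
    return keywords

print(extract_keywords("MRI image showing hypertrophy of the left ventricle",
                       ["ventricle", "hypertrophy", "aorta"]))
```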
Similarly, the label may be a related statement related to the medical image to be annotated. The related sentences are obtained by full-text indexing of the authoritative periodicals or books. If the labeled medical images are from the authoritative periodicals and the books, establishing the articles of the authoritative periodicals and the books as related sentence groups. For the descriptive text of the medical image to be marked, sentences related to the medical image are automatically found out from the full text in the journal and the book by using the full text index as recommended labels. The related sentences are displayed on a labeling terminal of a labeling operator in a selectable button mode.
EXAMPLE III
The embodiment provides a multi-person collaborative semi-automatic medical image annotation method. The medical image to be annotated is distributed to at least two annotation terminals to execute annotation operation, the annotation operation is completed based on at least two labels which are recommended by a multi-person collaborative semi-automatic medical image system and can be selected by annotation operators of the annotation terminals, and the multi-person collaborative semi-automatic medical image system fuses or compares annotation results which are independently completed by the annotation operators of the at least two annotation terminals to determine the annotation label of the medical image to be annotated.
As shown in fig. 2, the annotation operator sends an annotation request at the annotation terminal. And responding to the annotation request of the annotation terminal, and distributing at least one medical image to be annotated to at least two annotation terminals. The distribution mode comprises the following steps: allocating the medical image to be annotated to at least two annotation terminals of an annotation operator based on a preset priority sequence of the annotation terminals; or distributing the medical image to be labeled to the at least two labeling terminals based on the terminal processing capacity sequence of the at least two labeling terminals; or distributing the medical image to be annotated to at least two annotation terminals based on a load balancing principle.
Automatically finding out sentences related to the medical images from the full texts in the journal and the book by using a full text index based on the medical images derived from the authoritative journal and the book so as to generate at least two labels which can be selected by a labeling operator based on the related sentences, and displaying the labels on the labeling terminal of the labeling operator in the form of selectable buttons and/or displaying the labels on the labeling terminal of the labeling operator in the form of combination with the sentences related to the labeling operator.
Or extracting keywords from the automatically found sentences related to the medical images, matching the keywords with the labels in the label body library, and generating at least two labels for the labeling operators to select according to the matching degree of the keywords and the labels.
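A minimal sketch of generating candidate labels from journal full texts; a plain inverted index stands in for the full-text index, and the article text and keywords are illustrative:

```python
# Sketch: split the article into sentences, index them, retrieve the sentences
# that mention the image's keywords, and surface those sentences as the basis
# for selectable labels.

import re
from collections import defaultdict

def build_sentence_index(article: str):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]
    index = defaultdict(set)                     # term -> sentence positions
    for pos, sentence in enumerate(sentences):
        for term in re.findall(r"[a-z]+", sentence.lower()):
            index[term].add(pos)
    return sentences, index

def related_sentences(article: str, keywords: list[str]) -> list[str]:
    sentences, index = build_sentence_index(article)
    hits = set()
    for kw in keywords:
        hits |= index.get(kw.lower(), set())
    return [sentences[i] for i in sorted(hits)]

article = ("Figure 2 shows marked left ventricular hypertrophy. "
           "The patient also presented with mild pleural effusion.")
print(related_sentences(article, ["hypertrophy", "effusion"]))
```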
The labeling label of the medical image to be labeled is an intersection of the labeling results of the at least two labeling terminals obtained by the multi-person collaborative semi-automatic medical image system. And for the medical image to be annotated with empty intersection, the multi-person collaborative semi-automatic medical image system resends the medical image to be annotated to at least two annotation terminals to execute annotation operation until the multi-person collaborative semi-automatic medical image system determines the annotation label of the medical image to be annotated.
Or, for the medical image to be annotated with empty intersection, the multi-person collaborative semi-automatic medical image system simultaneously displays the annotation results of the at least two annotation terminals to the annotation operators of the at least two annotation terminals, and the at least two annotation operators determine the annotation label of the medical image to be annotated after negotiation.
Or comparing the labeling results of the at least two labeling terminals by the multi-user collaborative semi-automatic medical image system, comparing the labeling with the difference, simultaneously displaying the labeling results of the at least two labeling terminals to the labeling operators of the at least two labeling terminals by the multi-user collaborative semi-automatic medical image system, and determining the labeling label of the medical image to be labeled after negotiation by the at least two labeling operators.
And marking the boundary of the region of interest (ROI) in the medical image to be marked by marking operators of the at least two marking terminals based on the selected label and the sentence related to the label.
According to a preferred embodiment, the medical image is stored on a first server. The tags that match the medical image are stored at the second server. The matching relationship between the medical image and the tag is stored as a data record on the second server. When the user extracts the labeled medical image, the first server and the second server respectively send data in parallel, and the user locally matches the label with the medical image according to the data record from the second server and displays the label locally.
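A minimal sketch of this two-server retrieval and local matching; the fetch functions are placeholder interfaces, not a concrete server API:

```python
# Sketch: images and labels are fetched in parallel from separate servers and
# joined locally using the matching-relation data records from the second server.

from concurrent.futures import ThreadPoolExecutor

def fetch_images(image_ids):          # would call the first (image) server
    return {i: f"<pixel data of {i}>" for i in image_ids}

def fetch_label_records(image_ids):   # would call the second (label) server
    return [{"image_id": i, "label": "example label"} for i in image_ids]

def load_labeled_images(image_ids):
    with ThreadPoolExecutor(max_workers=2) as pool:
        images_future = pool.submit(fetch_images, image_ids)
        records_future = pool.submit(fetch_label_records, image_ids)
        images, records = images_future.result(), records_future.result()
    # Local matching: attach each label to its image using the data records.
    labeled = {i: {"image": img, "labels": []} for i, img in images.items()}
    for record in records:
        labeled[record["image_id"]]["labels"].append(record["label"])
    return labeled

print(load_labeled_images(["img_001", "img_002"]))
```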
Example four
The embodiment provides a medical image annotation method. The method comprises the following steps:
as shown in fig. 3, in response to a request from at least one annotation terminal, keywords of the description information of the medical image to be annotated are extracted. And matching the keywords with the labels in at least one label body library. And individually distributing the unmarked medical image to at least one marking terminal. And recommending at least one selected label to the labeling operator of the corresponding labeling terminal based on the matching value of the keyword and the at least one label. And retrieving sentences associated with the medical images and/or the selection labels in a full-text retrieval mode and marking and displaying the sentences to the labeling operator. Recording the labeling information of at least one labeling operator and counting the intersection of the same medical image.
Specifically, the medical image labeling method comprises the following steps:
S01: And responding to the request of at least one labeling terminal, and extracting the keywords of the description information of the medical image to be labeled.
And at least one annotation operator sends an annotation request at an annotation terminal. And responding to the request of at least one labeling terminal, and extracting keywords in the description information of the medical image to be labeled. The medical image to be marked is attached with description characters, and the description characters contain key words. And extracting keywords in the description words.
S02: and matching the keywords with the labels in at least one label body library.
S03: and recommending at least one selected label to the labeling operator of the corresponding labeling terminal based on the matching value of the keyword and the at least one label.
And matching the keywords with the labels in at least one label body library and calculating a matching value. And recommending at least one selection label for the marking operator to select to the marking operator of the corresponding marking terminal according to the sequence of the matching values.
Actively sending the unmarked medical image to at least one marking terminal, wherein the unmarked medical image to be marked is sent to at least two marking operators simultaneously. And at least two marking operators respectively and independently complete marking. And simultaneously displaying the selection label selected by the annotation operator and the corresponding medical image to be annotated on the annotation terminal. Meanwhile, a manual label input field is displayed on the label terminal. When the label operator is not satisfied with the displayed selection label, the manual label can be input in the manual label input field.
S04: and retrieving sentences associated with the medical images and/or the selection labels in a full-text retrieval mode and marking and displaying the sentences to the labeling operator.
Based on medical images derived from authoritative periodicals and books, sentences related to the medical images are automatically found out from the full texts in the authoritative periodicals and books by using a full-text index, so that at least two labels which can be selected by a labeling operator are generated based on the sentences.
S05: recording the annotation information of at least one annotation operator and computing the intersection of the labels given to the same medical image.
The annotation information of the at least one annotation operator is recorded, the intersection of the labels assigned by the annotation operators to the same medical image is computed, and the labels in this intersection are taken as the final labels of the medical image to be annotated.
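A minimal sketch of step S05, assuming each annotator's result is represented as a set of label strings:

```python
# The final labels are the intersection of the label sets submitted independently
# by the annotators for the same image.
def final_labels(annotations: list[set[str]]) -> set[str]:
    if not annotations:
        return set()
    result = set(annotations[0])
    for labels in annotations[1:]:
        result &= labels
    return result

print(final_labels([{"left ventricle", "hypertrophy"}, {"hypertrophy", "aorta"}]))
# -> {'hypertrophy'}
```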
According to a preferred embodiment, the medical images to be annotated are stored in a medical image database and divided, according to their description information, into at least two subsets of unlabeled medical images, each subset including at least one medical image to be annotated.
The medical image database stores the set of unlabeled medical images and the set of labeled medical images. The set of labeled medical images contains medical images that have already been labeled; the set of unlabeled medical images contains medical images that have not yet been labeled. The medical image database may have a centralized or a distributed structure, and its storage capacity can be expanded as the number of medical images grows.
Upon receipt of a task of annotating the set of unlabeled medical images in the medical image database, the set of unlabeled medical images can be divided into a plurality of (at least two) subsets of unlabeled medical images, each of which may include one or more medical images.
According to a preferred embodiment, the subsets of unlabeled medical images are partitioned based on biological (e.g. human) anatomy. For example, the set of unlabeled medical images may be divided into subsets for the brain, chest, heart, abdomen, upper limbs, lower limbs, and so on.
According to a preferred embodiment, the subsets of unlabeled medical images are divided according to the structure of the biological physiological systems. For example, the set of unlabeled medical images is divided into image subsets for the digestive system, nervous system, motor system, endocrine system, urinary system, reproductive system, circulatory system, respiratory system, immune system, and so on.
In fact, the set of unlabeled medical images can be subdivided at multiple levels of detail. For example, brain images may be divided into image subsets such as the central core, the limbic system, and the cerebral cortex.
According to a preferred embodiment, the cardiac images are further divided into image subsets for the aorta, left atrium, left ventricle, right atrium, right ventricle, and so on.
According to a preferred embodiment, each medical image subset may be assigned an identifier in order to facilitate subsequent combination of the medical image subsets. All medical images in the same medical image subset share the same identifier.
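As a hedged illustration of this partitioning, the sketch below assigns each image to an anatomy-based subset and gives every image in the same subset a shared identifier. The anatomy keywords and the identifier scheme are assumptions introduced for the example.

```python
# Illustrative sketch: assign an image to an anatomy-based subset via its description;
# all images in the same subset share the subset identifier.
ANATOMY_SUBSETS = {
    "brain": "SUB-BRAIN",
    "heart": "SUB-HEART",
    "abdomen": "SUB-ABDOMEN",
}

def assign_subset(description: str) -> str:
    text = description.lower()
    for keyword, subset_id in ANATOMY_SUBSETS.items():
        if keyword in text:
            return subset_id
    return "SUB-OTHER"

print(assign_subset("T1-weighted brain MRI, axial view"))  # -> SUB-BRAIN
```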
According to a preferred embodiment, the database of annotated medical images is divided into at least two subsets of annotated medical images based on the labels or annotation information of the annotated images, each subset comprising at least one annotated medical image.
According to a preferred embodiment, the subsets of labeled medical images are partitioned based on the labels or annotation information of the labeled medical images and/or based on the biological anatomy and/or the biological physiological system.
Example five
This embodiment provides a medical image annotation system. As shown in fig. 4, the annotation system includes an importing unit for importing medical images, a first storage server for storing the medical images, a second storage server for storing labels and/or sentences related to the medical images, a matching unit, an assigning unit for assigning the medical images to be annotated and the corresponding selection labels to at least two annotation terminals, a confirming unit for confirming the at least two annotation results, and at least two annotation terminals.
The importing unit imports the medical images generated by the visual medical imaging device and stores them on the first storage server.
The first storage server divides the medical images into at least two medical image subsets and stores the medical images and their keyword information in a classified manner.
The matching unit extracts the keywords of the medical image to be annotated stored on the first storage server and the at least one label and/or image-related sentence stored on the second storage server, performs the matching and the calculation of the matching value locally, and distributes the at least one label and/or image-related sentence that satisfies the conditions to the corresponding annotation terminal as selection labels.
The assigning unit assigns the medical image to be annotated to at least two annotation terminals based on a preset annotation terminal priority order, or based on the order of terminal processing capability, or following a load-balancing principle.
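Both assignment policies can be sketched briefly; the data shapes and the choice of two copies per image are assumptions for illustration, not the claimed implementation.

```python
# Illustrative assignment policies: preset terminal priority order vs. least-loaded terminals.
import heapq

def allocate_by_priority(image_id: str, terminals_by_priority: list[str], copies: int = 2):
    return [(image_id, t) for t in terminals_by_priority[:copies]]

def allocate_by_load(image_id: str, load_per_terminal: dict[str, int], copies: int = 2):
    # pick the `copies` terminals with the smallest current load, then update their load
    chosen = heapq.nsmallest(copies, load_per_terminal, key=load_per_terminal.get)
    for t in chosen:
        load_per_terminal[t] += 1
    return [(image_id, t) for t in chosen]
```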
The confirming unit confirms the labeling result of the medical image to be labeled based on the labeling contents of at least two labeling operators.
Example six
This embodiment provides a multi-person collaborative semi-automatic medical image annotation system. As shown in fig. 5, the system comprises at least a label ontology library unit for storing labels and/or articles and related sentences associated with medical images, a manual label input unit, a distribution unit, a comparison unit, a high-speed remote server unit, and an annotation content server unit.
The multi-person collaborative semi-automatic medical image annotation system matches the text of the medical image against the labels and/or articles in the label ontology library unit and automatically recommends the at least two matched labels and/or related sentences to at least two annotation terminals.
The label ontology library unit comprises a labeled label unit and a medical journal and book data unit. When an annotation operator at an annotation terminal imports an unlabeled image into the system, the labeled label unit and/or the medical journal and book data unit are searched according to the image's text information, and at least two labels and/or related sentences are generated based on the matching scores of the search results. The generated labels are displayed on the annotation operator's terminal as selectable buttons, or displayed together with the related sentences.
The manual label input unit allows an annotation operator to manually input an accurate label based on the at least two labels and/or related sentences.
The distribution unit distributes the unlabeled medical images to at least two annotation terminal operators, who each complete the labeling independently. The comparison unit compares and analyzes the annotation results of the at least two operators; if, for the same medical image, the results differ, the comparison unit sends both annotation results to both operators simultaneously, and the operators negotiate to determine an accurate label.
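A minimal sketch of the comparison step, assuming each result is a set of label strings; the status strings and the returned dictionary layout are assumptions for the example.

```python
# Illustrative comparison: accept identical results, otherwise send both results back
# to both annotators for negotiation.
def compare(image_id: str, result_a: set[str], result_b: set[str]) -> dict:
    if result_a == result_b:
        return {"image_id": image_id, "status": "accepted", "labels": result_a}
    return {
        "image_id": image_id,
        "status": "needs_negotiation",
        "send_back_to_annotators": [result_a, result_b],
        "disagreement": result_a ^ result_b,  # labels chosen by only one annotator
    }
```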
According to a preferred embodiment, the medical image is stored in the high-speed remote server unit, while the labels that match the medical image and the data records describing the matching relationship between the medical image and its labels are stored in the annotation content server unit. Upon receiving a command to retrieve a labeled medical image, the high-speed remote server unit and the annotation content server unit each send the relevant data from their different locations to the annotation terminal, and the annotation operator matches the labels to the medical image according to the matching relation data records from the annotation content server and displays the result on the annotation terminal. The annotation content server unit is protected by an encryption system.
According to a preferred embodiment, the multi-person collaborative semi-automatic medical image annotation system further comprises an import and export unit for importing unlabeled medical images and for exporting labeled medical images to generate local files.
Example seven
This embodiment provides a medical system visualization device. As shown in fig. 6, the visualization device includes an imaging section, an image presentation section, an image analysis section, and an image labeling section. The image labeling section performs the image labeling based on the images generated by the imaging section and on the division by biological anatomical structure or biological physiological system.
At the same time, the visualization device acts as an image annotation terminal and sends an image annotation request to the image annotation system.
The image annotation system matches the keywords derived from the annotation request against the labels in at least one label ontology library and individually assigns the medical image to be annotated to the visualization device. The imaging section converts the image information sent by the medical image annotation system into a medical image and presents it to the annotation operator through the image presentation section.
The image annotation system recommends at least one selection label to the visualization device based on the matching value between the keywords of the annotation request and the at least one label.
Alternatively, the image annotation system retrieves the sentences associated with the selection labels by full-text retrieval and displays them, highlighted, to the annotation operator through the visualization device.
The annotation operator completes the annotation of the medical image to be annotated based on the selection labels displayed in the image presentation section and on the sentences associated with those labels, and sends the annotation to the image annotation system.
The image analysis section can process and analyze the annotation content produced by the image labeling section, in combination with the labeled images held in the image annotation system.
The image analysis section receives, over the network, the medical image stored on a high-speed remote server, together with the annotation content matching the medical image and the matching relation code between the annotation content and the medical image, which are stored on a separate annotation content server.
Based on the received medical image, the matching annotation content, and the matching relation code, the image labeling section matches the medical image with its labels, records the annotation information of at least one annotation operator, and computes the intersection of the labels given to the same medical image.
As shown in fig. 7, in the medical image annotation system of the present invention, the mitgger adopts a B/S (browser/server) architecture. The backend is developed with Python + Django, follows the standard Django MVC framework, and exposes a uniform HTTP interface; all front-end functions are completed by calling the HTTP interfaces provided by the backend server. The backend follows a layered, modular architecture that divides the system into several layers, each consisting of a number of modules.
The Http Interface layer encapsulates all functions as interfaces for use by the front end. The Request Auth layer is responsible for checking the identity of the requesting user; most services can be accessed only by authenticated users. The Network Service layer carries out each functional process. The Tag Recommendation layer encapsulates an ontology-library-based tag recommendation engine. The Data Storage layer is implemented on top of the Django Model and Elasticsearch interfaces and is responsible for data storage management.
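Since the patent only names the stack (Python + Django, uniform HTTP interface, Elasticsearch storage), the view below is a hedged sketch of what one such backend endpoint could look like. The URL parameters, the recommend() helper from the ranking sketch above, and the load_ontology_labels() data-access helper are all assumptions, not the actual code of the system.

```python
# Minimal Django view sketch for a tag-recommendation HTTP interface.
from django.http import JsonResponse, HttpResponseForbidden

def recommend_tags(request):
    # "Request Auth" layer: only authenticated users may call the service
    if not request.user.is_authenticated:
        return HttpResponseForbidden("login required")
    keywords = request.GET.getlist("kw")  # e.g. ?kw=hippocampus&kw=atrophy
    # "Tag Recommendation" layer: rank ontology labels against the request keywords
    tags = recommend(keywords, load_ontology_labels())  # assumed helpers, see sketches above
    return JsonResponse({"keywords": keywords, "recommended": tags})
```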
Example eight
This embodiment is an improvement over any of the preceding embodiments.
After a medical picture has been labeled, rediscovered and newly discovered knowledge related to the picture is incorporated by crowdsourcing. Once the medical image to be annotated has received its initial labels, the annotation information is opened to public users. A public user who registers personal information in the medical image annotation system becomes a public annotator and may annotate the medical image. The annotation content of public annotators is stored and added to the label queue of the corresponding medical picture. The labels in the label queue comprise the preliminary labels and the labels produced by the public annotators. The annotation content is weighted based on each public annotator's qualifications and annotation history. Under the influence of these weights, the labels contributed by the public annotators form a dynamically ordered label queue. When a label contributed by public annotators moves to the front of the queue and its rank becomes smaller than a preset sequence threshold, the label is reviewed and confirmed by an expert and then added to the label ontology library. In this way, the present invention incorporates new knowledge about medical pictures into the label ontology library, so that the library is continuously updated. For example, if the preset sequence threshold is 5, the first five labels in the queue are added to the label ontology library after being confirmed and verified by an expert.
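A hedged sketch of this crowdsourced queue follows: each public annotator's vote is weighted by that annotator's ability value, the queue is ordered by accumulated weight, and the labels ranked within the preset threshold are handed to an expert for confirmation. The weighting rule, the class shape, and the default threshold of 5 are illustrative assumptions.

```python
# Illustrative weighted label queue for crowdsourced annotation.
from collections import defaultdict

class LabelQueue:
    def __init__(self, rank_threshold: int = 5):
        self.weights = defaultdict(float)
        self.rank_threshold = rank_threshold

    def vote(self, label: str, annotator_ability: float) -> None:
        # Each public annotator's vote contributes its ability value as weight.
        self.weights[label] += annotator_ability

    def ordered(self) -> list[str]:
        # Labels ordered by accumulated weight, highest first.
        return sorted(self.weights, key=self.weights.get, reverse=True)

    def candidates_for_expert_review(self) -> list[str]:
        # Labels whose rank is within the threshold (rank 1 .. rank_threshold).
        return self.ordered()[: self.rank_threshold]
```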
According to a preferred embodiment, the labeling ability value of a public annotator is evaluated based on how that annotator's labels move within the label queue: if the labels keep moving forward in the order, the annotator's labeling ability value increases; if they keep moving backward, it decreases.
According to a preferred embodiment, the labeling ability value of a public annotator who has at least one label accepted into the label ontology library is raised accordingly. After a label is accepted into the label ontology library, the labeling ability value of the public annotator who contributed it is increased by a score set by the administrator; the more labels an annotator has accepted into the label ontology library, the more that annotator's labeling ability value increases.
The method steps of the various embodiments of the invention may be combined with one another. In any embodiment of the present invention, the annotation terminal is any entity with computing capability, such as a feature phone, a smartphone, a palmtop computer, a personal computer, a tablet computer, or a personal digital assistant. The annotation terminal also has a network communication function, so that it can receive the medical images to be annotated provided by the medical image annotation system over the network and return the annotation information to the system; the annotation operator can also view the labeled medical images through the annotation terminal. Preferably, the annotation functionality may be implemented by installing a plug-in in the browser of the smart processing device, and the browser may include, for example, Internet Explorer, Firefox, Safari, Opera, Google Chrome, GreenBrowser, and the like. The annotation terminals can be distributed across different geographical areas. The embodiments of the present invention are not limited to these browsers and may be applied to any application (App) that can display files from a web server or an archive system and let the user interact with them, whether a commonly used browser or another application program with a web-browsing function. The term medical image as used in the present invention also encompasses medical pictures.
It should be noted that the above-mentioned embodiments are exemplary, and those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that fall within the scope of the present disclosure and of the invention. It should also be understood that the present specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A semi-automatic labeling system for medical image labels is characterized by at least comprising a label ontology library unit for storing labels and/or articles/related sentences related to medical images, a manual input label unit, a distribution unit and a labeling content server unit,
the label ontology library unit comprises a labeled label unit and a medical journal and book data unit;
the manual input label unit is used by a labeling operator to manually input an accurate label based on at least two labels and/or related sentences;
the distribution unit is used for distributing the unlabeled medical images to at least two labeling terminal operators, and the at least two operators each complete the labeling independently;
the data records of the matching relationship between the medical image and its matching labels are stored in the labeling content server unit; after the medical image to be labeled has received its initial labels, the labeling content server unit generates a dynamically changing label queue based on the labeling content of public labeling personnel, and labels whose rank in the queue is smaller than a sequence threshold are added to the label ontology library after being confirmed by an expert; the order of the labels in the label queue changes with the labeling weight, and the labeling ability value of the public labeling personnel changes based on the order of their labels in the label queue.
2. The system for semi-automatic labeling of medical image labels according to claim 1, wherein the labeling weight is obtained by weighting the labeling content based on the qualification and labeling history of the public labeling personnel.
3. The semi-automatic labeling system for medical image labels according to claim 1 or 2, wherein the labeling content server unit evaluates the labeling ability value of the corresponding public labeling personnel based on the dynamic change of their labels in the label queue;
if the order of the labels keeps changing forward, the labeling ability value of the public labeling personnel increases;
if the order of the labels keeps changing backward, the labeling ability value of the public labeling personnel decreases.
4. The semi-automatic labeling system for medical image labels according to any one of claims 1 to 3, wherein, after a label is accepted into the label ontology library, the labeling ability value of the public labeling personnel corresponding to that label is increased, and the amount of the increase is set by an administrator;
the more labels of a public labeling person are accepted into the label ontology library, the more that person's labeling ability value is increased.
5. The system for semi-automatic labeling of medical image labels according to any one of claims 1 to 4, further comprising a comparison unit,
the comparison unit is used for comparing and analyzing the labeling results of at least two operators; if, for the same medical image, the comparison shows a difference, the comparison unit sends the labeling results of the at least two operators to both operators simultaneously, and the at least two operators negotiate to determine an accurate label.
6. The semi-automatic labeling system for medical image labels according to any one of claims 1 to 5, wherein a labeling operator finishes labeling the medical image to be labeled based on the selection label displayed by the image display part and the sentence associated with the selection label, and sends the label to the image labeling system.
7. The system for semi-automatic labeling of medical image labels according to any one of claims 1 to 6,
after an operator at the labeling terminal imports an unlabeled image into the system, the label ontology library unit retrieves results from the labeled label unit and/or the medical journal and book data unit according to the image text information and generates at least two labels and/or related sentences based on the matching scores of the retrieval results.
8. A method for semi-automatic labeling of medical image labels, the method comprising at least: storing a labeled label unit and a medical journal and book data unit;
manually inputting an accurate label by a labeling operator based on at least two labels and/or related sentences;
distributing the unlabeled medical images to at least two labeling terminal operators, the at least two operators each completing the labeling independently;
recording and storing matching relation data between the medical image and the label matched with the medical image;
after the medical image to be labeled has received its initial labels, generating a dynamically changing label queue based on the labeling content of public labeling personnel, and adding labels whose rank in the queue is smaller than a sequence threshold to the label ontology library after confirmation by an expert; the order of the labels in the label queue changes with the labeling weight, and the labeling ability value of the public labeling personnel changes based on the order of their labels in the label queue.
9. The method for semi-automatic labeling of medical image labels according to claim 8, further comprising:
after the medical image to be labeled has received its initial labels, generating a dynamically changing label queue based on the labeling content of public labeling personnel, and adding labels whose rank in the queue is smaller than a sequence threshold to the label ontology library after confirmation by an expert; the order of the labels in the label queue changes with the labeling weight, and the labeling ability value of the public labeling personnel changes based on the order of their labels in the label queue.
10. The method for semi-automatic labeling of medical image labels according to claim 8 or 9, further comprising:
evaluating the labeling ability value of the corresponding public labeling personnel based on the dynamic change of their labels in the label queue;
if the order of the labels keeps changing forward, the labeling ability value of the public labeling personnel increases;
if the order of the labels keeps changing backward, the labeling ability value of the public labeling personnel decreases.
CN202210164256.0A 2015-12-17 2015-12-24 Semi-automatic labeling system and method for medical image label Pending CN114297429A (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
CN2015109379215 2015-12-17
CN201510937921 2015-12-17
CN201510937922 2015-12-17
CN201510937925 2015-12-17
CN2015109379249 2015-12-17
CN2015109379253 2015-12-17
CN201510937922X 2015-12-17
CN201510937924 2015-12-17
CN201580084598.XA CN108463814B (en) 2015-12-17 2015-12-24 Medical image labeling method and system
PCT/CN2015/098710 WO2017101142A1 (en) 2015-12-17 2015-12-24 Medical image labelling method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201580084598.XA Division CN108463814B (en) 2015-12-17 2015-12-24 Medical image labeling method and system

Publications (1)

Publication Number Publication Date
CN114297429A true CN114297429A (en) 2022-04-08

Family

ID=59055552

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201580084598.XA Active CN108463814B (en) 2015-12-17 2015-12-24 Medical image labeling method and system
CN202210115900.5A Pending CN114398511A (en) 2015-12-17 2015-12-24 Medical system visualization equipment and label labeling method thereof
CN202210164256.0A Pending CN114297429A (en) 2015-12-17 2015-12-24 Semi-automatic labeling system and method for medical image label

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201580084598.XA Active CN108463814B (en) 2015-12-17 2015-12-24 Medical image labeling method and system
CN202210115900.5A Pending CN114398511A (en) 2015-12-17 2015-12-24 Medical system visualization equipment and label labeling method thereof

Country Status (3)

Country Link
CN (3) CN108463814B (en)
DE (1) DE212015000240U1 (en)
WO (1) WO2017101142A1 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516005A (en) * 2017-07-14 2017-12-26 上海交通大学 A kind of method and system of digital pathological image mark
US10671896B2 (en) * 2017-12-04 2020-06-02 International Business Machines Corporation Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
CN110752015A (en) * 2018-07-24 2020-02-04 由昉信息科技(上海)有限公司 Intelligent classification and marking system and method applied to medical field
US20200065706A1 (en) * 2018-08-24 2020-02-27 Htc Corporation Method for verifying training data, training system, and computer program product
US11094411B2 (en) 2018-10-26 2021-08-17 Guangzhou Kingmed Center for Clinical Laboratory Co., Ltd. Methods and devices for pathologically labeling medical images, methods and devices for issuing reports based on medical images, and computer-readable storage media
CN109461147B (en) * 2018-10-26 2020-05-19 广州金域医学检验中心有限公司 Pathological labeling method and device applied to FOV picture of mobile terminal
CN109446370A (en) * 2018-10-26 2019-03-08 广州金域医学检验中心有限公司 Pathology mask method and device, the computer readable storage medium of medical image
CN109461495B (en) * 2018-11-01 2023-04-14 腾讯科技(深圳)有限公司 Medical image recognition method, model training method and server
CN109686423A (en) * 2018-11-06 2019-04-26 众安信息技术服务有限公司 A kind of medical imaging mask method and system
CN115345819A (en) * 2018-11-15 2022-11-15 首都医科大学附属北京友谊医院 Gastric cancer image recognition system, device and application thereof
CN109544526B (en) * 2018-11-15 2022-04-26 首都医科大学附属北京友谊医院 Image recognition system, device and method for chronic atrophic gastritis
CN109523535B (en) * 2018-11-15 2023-11-17 首都医科大学附属北京友谊医院 Pretreatment method of lesion image
CN109584218A (en) * 2018-11-15 2019-04-05 首都医科大学附属北京友谊医院 A kind of construction method of gastric cancer image recognition model and its application
CN109670530A (en) * 2018-11-15 2019-04-23 首都医科大学附属北京友谊医院 A kind of construction method of atrophic gastritis image recognition model and its application
CN110096480A (en) * 2019-03-28 2019-08-06 厦门快商通信息咨询有限公司 A kind of text marking system, method and storage medium
CN110222709B (en) * 2019-04-29 2022-01-25 上海暖哇科技有限公司 Multi-label intelligent marking method and system
CN110796011B (en) * 2019-09-29 2022-04-12 湖北工程学院 Rice ear recognition method, system, device and medium based on deep learning
CN110751629A (en) * 2019-09-29 2020-02-04 中国科学院深圳先进技术研究院 Myocardial image analysis device and equipment
CN111062255A (en) * 2019-11-18 2020-04-24 苏州智加科技有限公司 Three-dimensional point cloud labeling method, device, equipment and storage medium
CN112951353A (en) * 2019-11-26 2021-06-11 广州知汇云科技有限公司 Medical record labeling platform and operation method thereof
US11501165B2 (en) 2020-03-04 2022-11-15 International Business Machines Corporation Contrastive neural network training in an active learning environment
CN111340131B (en) * 2020-03-09 2023-07-14 北京字节跳动网络技术有限公司 Image labeling method and device, readable medium and electronic equipment
CN111784284B (en) * 2020-06-15 2023-09-22 杭州思柏信息技术有限公司 Cervical image multi-person collaborative tag cloud service system and cloud service method
CN112418263A (en) * 2020-10-10 2021-02-26 上海鹰瞳医疗科技有限公司 Medical image focus segmentation and labeling method and system
CN112989087B (en) * 2021-01-26 2023-01-31 腾讯科技(深圳)有限公司 Image processing method, device and computer readable storage medium
CN112860416A (en) * 2021-04-25 2021-05-28 城云科技(中国)有限公司 Annotating task assignment strategy method and device
CN113326890B (en) * 2021-06-17 2023-07-28 北京百度网讯科技有限公司 Labeling data processing method, related device and computer program product
CN113592981B (en) * 2021-07-01 2022-10-11 北京百度网讯科技有限公司 Picture labeling method and device, electronic equipment and storage medium
CN113571162B (en) * 2021-07-19 2024-02-06 蓝网科技股份有限公司 Method, device and system for realizing multi-user collaborative operation medical image
CN115795076B (en) * 2023-01-09 2023-07-14 北京阿丘科技有限公司 Cross-labeling method, device, equipment and storage medium for image data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101765843A (en) * 2007-08-01 2010-06-30 皇家飞利浦电子股份有限公司 Accessing medical image databases using medically relevant terms
US20120269436A1 (en) * 2011-04-20 2012-10-25 Xerox Corporation Learning structured prediction models for interactive image labeling
US20140129981A1 (en) * 2011-06-21 2014-05-08 Telefonaktiebolaget L M Ericsson (Publ) Electronic Device and Method for Handling Tags
CN104462738A (en) * 2013-09-24 2015-03-25 西门子公司 Method, device and system for labeling medical images
CN105094760A (en) * 2014-04-28 2015-11-25 小米科技有限责任公司 Picture marking method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7593967B2 (en) * 2002-11-27 2009-09-22 Amirsys, Inc. Electronic clinical reference and education system and method of use
JPWO2007119615A1 (en) * 2006-04-14 2009-08-27 コニカミノルタエムジー株式会社 Medical image display apparatus and program
US8065313B2 (en) * 2006-07-24 2011-11-22 Google Inc. Method and apparatus for automatically annotating images
US8600771B2 (en) * 2007-11-21 2013-12-03 General Electric Company Systems and methods for generating a teaching file message
CN102306298B (en) * 2011-07-19 2012-12-12 北京航空航天大学 Wiki-based dynamic evolution method of image classification system
JP2015505384A (en) * 2011-11-08 2015-02-19 ヴィディノティ エスアーVidinoti Sa Image annotation method and system
US9747600B2 (en) * 2012-03-30 2017-08-29 United States Postal Service Item status tracking
CN104239359B (en) * 2013-06-24 2017-09-01 富士通株式会社 Based on multi-modal image labeling device and method
CN104572735B (en) * 2013-10-23 2018-02-23 华为技术有限公司 A kind of picture mark words recommending method and device
CN104809113B (en) * 2014-01-23 2019-08-09 腾讯科技(深圳)有限公司 The display methods and device of webpage information
CN104021222A (en) * 2014-06-26 2014-09-03 深圳信息职业技术学院 Labeling algorithm for biomedical image based on invisible dirichlet model
CN105118068B (en) * 2015-09-29 2017-12-05 常熟理工学院 Medical image automatic marking method under a kind of condition of small sample

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101765843A (en) * 2007-08-01 2010-06-30 皇家飞利浦电子股份有限公司 Accessing medical image databases using medically relevant terms
US20120269436A1 (en) * 2011-04-20 2012-10-25 Xerox Corporation Learning structured prediction models for interactive image labeling
US20140129981A1 (en) * 2011-06-21 2014-05-08 Telefonaktiebolaget L M Ericsson (Publ) Electronic Device and Method for Handling Tags
CN104462738A (en) * 2013-09-24 2015-03-25 西门子公司 Method, device and system for labeling medical images
CN105094760A (en) * 2014-04-28 2015-11-25 小米科技有限责任公司 Picture marking method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Qiaojin et al.: "Semi-automatic image annotation sample generation method based on object tracking", Informatization Research (信息化研究), 20 October 2015 (2015-10-20), pages 23 - 27 *

Also Published As

Publication number Publication date
DE212015000240U1 (en) 2017-05-24
CN108463814B (en) 2022-02-18
CN108463814A (en) 2018-08-28
CN114398511A (en) 2022-04-26
DE212015000240U8 (en) 2017-07-13
WO2017101142A1 (en) 2017-06-22

Similar Documents

Publication Publication Date Title
CN108463814B (en) Medical image labeling method and system
Long et al. Content-based image retrieval in medicine: retrospective assessment, state of the art, and future directions
KR101618735B1 (en) Method and apparatus to incorporate automatic face recognition in digital image collections
Liu et al. Tiara: Interactive, topic-based visual text summarization and analysis
EP2433234B1 (en) Retrieving and viewing medical images
Müller et al. Benefits of content-based visual data access in radiology
Kalpathy-Cramer et al. Overview of the CLEF 2011 Medical Image Classification and Retrieval Tasks.
US8731308B2 (en) Interactive image selection method
CN108496200A (en) Pre-processing image data
Liu et al. Associating textual features with visual ones to improve affective image classification
Jiménez–del–Toro et al. Overview of the VISCERAL retrieval benchmark 2015
JP2002259410A (en) Object classification and management method, object classification and management system, object classification and management program and recording medium
US20040167800A1 (en) Methods and systems for searching, displaying, and managing medical teaching cases in a medical teaching case database
CN117235362A (en) Analysis system based on big data of wisdom text travel
Faruque et al. Teaching & Learning System for Diagnostic Imaging-Phase I: X-Ray Image Analysis & Retrieval
CN112286879B (en) Metadata-based data asset construction method and device
Othman et al. Categorizing color appearances of image scenes based on human color perception for image retrieval
Pinho et al. Extensible architecture for multimodal information retrieval in medical imaging archives
Paredes et al. A probabilistic model for user relevance feedback on image retrieval
Müller et al. The medGIFT project on medical image retrieval
Deselaers Image retrieval, object recognition, and discriminative models
Spanier et al. Medical case-based retrieval of patient records using the RadLex hierarchical lexicon
Abirami et al. CONTENT BASED IMAGE RETRIEVAL TECHNIQUES FOR RETRIEVAL OF MEDICAL IMAGES FROM LARGE MEDICAL DATASETS–A SURVEY
Jeong et al. Automatic image annotation using affective vocabularies: Attribute-based learning approach
Martin et al. A multimedia application for location-based semantic retrieval of tattoos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination