CN111383744B - Medical microscopic image annotation information processing method and system and image analysis equipment - Google Patents


Info

Publication number: CN111383744B
Application number: CN202010481715.9A
Authority: CN (China)
Prior art keywords: information, labeling, image, events, annotation
Legal status: Active (the status listed is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN111383744A
Inventors: 罗琳, 石雪迎, 郭丽梅, 倪庆雯
Current and original assignees: Beijing Institute Of Collaborative Innovation; Peking University (the listed assignees may be inaccurate)
Application filed by Beijing Institute Of Collaborative Innovation and Peking University
Priority to CN202010481715.9A
Publication of CN111383744A; application granted; publication of CN111383744B

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/51: Indexing; Data structures therefor; Storage structures
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866: Retrieval using manually generated information, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Library & Information Science (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of medical image analysis and discloses a method, a system, and an image analysis device for processing annotation information of medical microscopic images. The method comprises: step S1: acquiring a plurality of pieces of annotation information while a user browses a medical microscopic image; step S2: extracting the features of each piece of annotation information and associating each feature with its corresponding annotation information; step S3: integrating at least two pieces of annotation information into an event according to the features. The invention provides physicians and researchers with a more convenient method, system, and image analysis device for processing annotation information of medical microscopic images.

Description

Medical microscopic image annotation information processing method and system and image analysis equipment
Technical Field
The present invention relates to the field of medical image analysis, and in particular to a method and a system for processing annotation information of medical microscopic images, and to an image analysis device.
Background
Microscopes are commonly used in medicine to identify, analyze, and diagnose diseased tissue or cells. Pathological diagnosis is the gold standard of clinical diagnosis, but pathological features are complex and lesions with similar morphology are often difficult to distinguish; a physician must repeatedly switch magnification and field of view under the microscope, so slide-reading efficiency and accuracy are low. With the rapid development of artificial intelligence, neural network algorithms can increasingly imitate the human visual system by automatically learning experiential features and performing intelligent recognition on images. Intelligent pathological diagnosis replaces the physician's under-the-microscope slide reading with automatic sampling, searching, recognition, classification, and other operations on the image, so that judgments on a large number of cases can be made quickly and physicians' diagnostic efficiency can be improved.
Neural network algorithms require a large number of annotated digital medical microscopic images as training input, but such images are very scarce, for the following reasons. First, a digital medical microscopic image is a high-definition digital image of the original slide, obtained by digitally capturing and stitching a traditional slide through an optical magnification and scanning system. Its high definition and large size make it unwieldy to transmit, which increases the time cost of manual annotation. Second, annotation of pathological images can only be completed by pathologists with professional judgment and experience. At present there are only about 10,000 pathologists in China, many of whom lack sufficient experience, and by the standard of the Guidelines for Clinical Department Construction and Management (trial implementation) the shortfall exceeds 20,000; heavy diagnostic workloads make it difficult for pathologists to complete large numbers of annotation tasks.
In the prior art, samples are typically annotated with software in which the physician manually adds annotation outlines and labels. However, interpretation of medical images is often based on the physician's subjective experience: out of due caution, a physician will usually choose to annotate certain clearly diseased regions, while for uncertain regions he or she will often choose not to add an annotation for fear of misreading. In addition, the physician's visual attention shifts while browsing a large, high-definition digital microscopic image, and these shifts of attention carry the logical context before and after each annotation. The existing annotation method therefore keeps only the annotation lines and labels the physician adds, ignoring a large amount of valuable information the physician leaves unconsciously while interpreting the medical microscopic image, so the physician's interpretation process cannot be fully reconstructed.
Moreover, adding annotation lines and labels differs from how a physician performs routine pathological diagnosis: when annotating an image, the physician must manually add the lines and labels while browsing. This tedious operation not only increases the physician's workload but also interrupts the diagnostic process, greatly reducing annotation efficiency.
Another, video-based annotation method can fully preserve the physician's interpretation process without disturbing the physician's browsing. However, the retained video is unstructured annotation data that cannot be directly queried or edited, and therefore cannot be used directly as neural network input; researchers who obtain such data must spend considerable time on post-processing.
Therefore, there is an urgent need for a method, a system, and an image analysis device for processing annotation information of medical microscopic images that overcome the above drawbacks.
Disclosure of Invention
The technical problem to be solved by the present invention is addressed by a method for processing annotation information of a medical microscopic image, the method comprising the following steps:
step S1: acquiring a plurality of different types of annotation information while a user browses a medical microscopic image;
step S2: extracting the features of each piece of annotation information, and associating each feature with its corresponding annotation information;
step S3: integrating the different types of annotation information into an event according to the features.
In the above annotation information processing method, step S1 comprises:
step S11: acquiring and identifying the identity information of the current user;
step S12: acquiring the annotation information of the current user's browsing operations;
step S13: acquiring the annotation information of the current user's annotation operations.
In the above method, the annotation information of the user's browsing operations includes image pose information, and the annotation information of the user's annotation operations includes at least one of circle-selection information, tag information, distance information, recording information, and track information.
In the above method, the features include at least one of a username, a position, a distance value, a time, text, and audio.
In the above annotation information processing method, step S3 comprises:
step S31: sorting the annotation information by username;
step S32: arranging all annotation information of the same username by time;
step S33: judging the relatedness of adjacent pieces of annotation information;
step S34: integrating related annotation information into an event.
The above annotation information processing method further comprises step S4: integrating at least two consecutive events occurring within a continuous period of time to form a logically associated event.
In the above annotation information processing method, step S4 comprises:
step S41: arranging the events within the continuous period by time;
step S42: judging the correlation of adjacent events;
step S43: integrating the correlated events to form the logically associated event.
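For illustration only (the function name, record layout, and correlation window are assumptions, not part of the patent), steps S41 to S43 amount to a time-ordered pass that merges adjacent events whose gap falls within some correlation window:

```python
# Hypothetical sketch of step S4: events -> logically associated events.
# Two adjacent events are treated as "correlated" (step S42) when the gap
# between them is at most `max_gap` seconds; this criterion is an assumption.

def integrate_events(events, max_gap=3.0):
    """events: list of dicts with 'start' and 'end' times in seconds."""
    ordered = sorted(events, key=lambda e: e["start"])   # S41: arrange by time
    groups = []
    for ev in ordered:
        # S42: judge correlation with the previous event in time order
        if groups and ev["start"] - groups[-1][-1]["end"] <= max_gap:
            groups[-1].append(ev)                        # S43: integrate
        else:
            groups.append([ev])                          # start a new logical event
    return groups

logical = integrate_events([
    {"start": 0.0, "end": 2.0},
    {"start": 3.5, "end": 5.0},    # within 3 s of the previous event
    {"start": 20.0, "end": 21.0},  # far apart in time
])
```

Here the first two events merge into one logically associated event and the third stands alone.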
The present invention also provides an annotation information processing system for medical microscopic images, comprising:
an acquisition unit, which acquires a plurality of pieces of annotation information while a user browses a medical microscopic image;
an extraction unit, which extracts the features of each piece of annotation information and stores each feature in association with its corresponding annotation information;
and an integration unit, which integrates the annotation information into an event according to the features.
In the above annotation information processing system, the acquisition unit comprises:
an identification module, which identifies the identity information of the current user;
a first acquisition module, which acquires the annotation information of the current user's browsing operations;
and a second acquisition module, which acquires the annotation information of the current user's annotation operations.
In the above system, the annotation information of the user's browsing operations includes image pose information, and the annotation information of the user's annotation operations includes at least one of circle-selection information, distance information, tag information, recording information, and track information.
In the above system, the features include at least one of a username, a position, a distance value, a time, text, and audio.
In the above annotation information processing system, the integration unit comprises:
an information sorting module, which sorts the annotation information by username;
a first information arrangement module, which arranges all annotation information of the same username by time;
a first judgment module, which judges the relatedness of adjacent pieces of annotation information;
and a first integration module, which integrates related annotation information into an event.
The above annotation information processing system further comprises an association unit, which integrates at least two consecutive events occurring within a continuous period of time to form a logically associated event.
In the above annotation information processing system, the association unit comprises:
a second information arrangement module, which arranges the events within the continuous period by time;
a second judgment module, which judges the correlation of adjacent events;
and a second integration module, which integrates the correlated events to form the logically associated event.
The annotation information processing system further comprises a human-computer interaction unit, which queries and presents at least one of the annotation information and its features, the events, and the logically associated events according to different query strategies.
The present invention also provides an image analysis device, comprising:
an image annotation apparatus, which annotates a medical microscopic image to obtain a plurality of pieces of annotation information;
and the annotation information processing system according to any of the above, which processes the plurality of pieces of annotation information.
Compared with the prior art, the present invention has the following effects: it provides physicians and researchers with a more convenient method and system for processing annotation information of medical microscopic images. While a physician browses and annotates medical microscopic images, it improves annotation efficiency and at the same time captures more of the structured annotation information the physician leaves unconsciously, reconstructs the physician's interpretation of the medical microscopic image, and analyzes and presents the physician's diagnostic reasoning, thereby meeting the image-sample requirements of training tasks for different neural network algorithms.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a method for processing annotation information according to the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S1 in FIG. 1;
FIG. 3 is a flowchart illustrating the substeps of step S3 in FIG. 1;
FIG. 4 is a flowchart illustrating the substeps of step S4 in FIG. 1;
FIG. 5 is a schematic diagram of the annotation information;
FIG. 6 is a schematic diagram of a tag information processing system according to the present invention.
Reference numerals:
acquisition unit 11
Identification module 111
First acquisition module 112
Second acquisition module 113
Extraction unit 12
Integration unit 13
Information arrangement module 131
First information arrangement module 132
First judging module 133
First integration module 134
Association unit 14
Second information arrangement module 141
Second judging module 142
Second integration module 143
Human-computer interaction unit 15
A storage unit 16.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
References to "a plurality" herein include "two" and "more than two".
The aim of the invention is to record interpretation-related annotation information while disturbing the physician's interpretation of the medical microscopic image as little as possible, and to be able to reconstruct the annotation process; the recorded annotation information is structured information that can be directly queried and edited and can be used directly for training a neural network.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for processing annotation information according to the present invention. As shown in fig. 1, the method for processing label information of the present invention includes:
Step S1: acquiring a plurality of pieces of annotation information while a user browses a medical microscopic image.
Referring to fig. 2, fig. 2 is a flowchart illustrating a sub-step of step S1 in fig. 1. As shown in fig. 2, the step S1 includes:
Step S11: acquiring and identifying the identity information of the current user.
The annotation information processing method is intended to compare the annotation and diagnostic reasoning of multiple users browsing medical microscopic images, so each user is assigned identity identification information before use, and annotation information can be acquired only after the current user's identity information has been verified. In this embodiment, when the user's identity information passes verification and annotation begins, the start time of annotation is recorded; the time may be expressed as year, month, day, hour, minute, and second.
Step S12: acquiring marking information of current user browsing operation;
step S13: and acquiring the marking information of the marking operation of the current user.
Specifically, when annotating a case image, a pathologist browses according to his or her own diagnostic reasoning; the browsing operations include at least one of moving the field of view, increasing the magnification, and decreasing the magnification, and the pathologist adds circle selections, tags, distance measurements, recordings, and tracks as needed. A medical microscopic image may be a high-definition image on the order of billions of pixels, and one image may contain annotation operations of various types in unlimited number. Added recordings and tags include descriptions of phenomena, conclusions derived by inference, and other relevant explanations.
Referring to fig. 5, fig. 5 is a schematic diagram of the annotation information. The annotation information added by the user is divided into the following six types: image pose information belongs to the annotation information of browsing operations, while circle-selection information, distance information, tag information, recording information, and track information belong to the annotation information of annotation operations.
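As a rough sketch (all class and field names here are assumptions, not the patent's data format), the six types can be modelled as records that already carry the shared username and time features extracted later in step S2:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # a coordinate in the maximum-magnification image

@dataclass
class Annotation:
    # features shared by all six annotation types (see step S2)
    username: str
    time: float   # seconds relative to the start of annotation
    kind: str     # "pose" | "circle" | "distance" | "tag" | "recording" | "track"

@dataclass
class CircleSelection(Annotation):
    # vertices of the circled region (rectangle, polygon, or irregular shape)
    vertices: List[Point] = field(default_factory=list)

@dataclass
class Distance(Annotation):
    # the two measurement points and the measured distance between them
    endpoints: Tuple[Point, Point] = ((0.0, 0.0), (0.0, 0.0))
    value: float = 0.0

a = CircleSelection(username="dr_li", time=12.5, kind="circle",
                    vertices=[(0, 0), (100, 0), (100, 80), (0, 80)])
```

The remaining types (pose, tag, recording, track) would extend `Annotation` in the same way.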
The specific content of each type of annotation information in this embodiment is described below:
1) Image pose information refers to the instruction information issued by the user to browse the image; the instruction information includes one of moving the field of view, increasing the magnification, and decreasing the magnification.
2) Circle-selection information refers to information obtained when the user frames or circles a region in the current field of view. The shape may be regular, such as a rectangular frame or a polygon, or it may be irregular.
In another embodiment of the present invention, circle-selection information may also be produced by automatically identifying and displaying a suspected lesion area in the current field of view: after the user confirms it, the suspected lesion area is circled; or the area is circled by default and, if the user rejects it, its display is cancelled. A suspected lesion area is an area for which the user must further judge whether a lesion is present.
3) Distance information refers to the length obtained when the user measures the distance between two points in the current field of view. The distance information may be the distance between two points within a single region designated by the user, or the distance between two regions, i.e., from a point designated by the user in one region to a point designated by the user in another region.
4) Tag information refers to a tag the user adds to a circled region, or, after a tag has been defined, to one or more regions the user selects for that tag. A tag may be chosen from provided medical terms or entered as other text. The content of a tag may include a description of the phenomenon in the currently circled region, an inferred conclusion, and other relevant explanations.
In another embodiment of the present invention, tag information may also be produced by automatically identifying and displaying a phenomenon-description or conclusion-description tag for the currently circled region: the tag is added to the region after the user confirms it; or the tag is added by default and its display is cancelled if the user rejects it.
5) Recording information refers to the user's interpretation of the current field of view or region given directly by voice.
In this embodiment, the automatic trigger condition for acquiring recording information is as follows: recording starts when the detected sound intensity is greater than 1 dB for longer than 1 s, and ends when the silence lasts longer than 2 s. The invention is not limited thereto; in other embodiments the duration and the silence time may also be given as time ranges.
In another embodiment of the present invention, the manual trigger condition for acquiring recording information is a trigger signal issued by the user.
6) Track information refers to the movement track the user leaves by sliding over the medical microscopic image with an auxiliary device while browsing it. In this embodiment the auxiliary device is preferably a mouse; in other embodiments it may also be a stylus, a gesture, or the like, and the invention is not limited thereto.
In this embodiment, the automatic trigger condition for acquiring track information is the start of a recording, but the invention is not limited thereto.
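The automatic recording trigger described in 5) above (start when the sound intensity exceeds 1 dB for more than 1 s, stop after more than 2 s of silence) can be sketched as a small state machine over sampled intensity values. This is an illustrative assumption about implementation, not the patent's code:

```python
def segment_recordings(samples, rate=10, start_db=1.0, min_voice=1.0, min_silence=2.0):
    """samples: per-tick sound intensities in dB; rate: ticks per second.
    Returns (start_tick, end_tick) pairs following the embodiment's rule:
    start when intensity > 1 dB for more than 1 s, stop after more than 2 s of silence."""
    voice_need, silence_need = int(min_voice * rate), int(min_silence * rate)
    segments, voiced, silent, start = [], 0, 0, None
    for i, db in enumerate(samples):
        if db > start_db:
            voiced, silent = voiced + 1, 0
            if start is None and voiced > voice_need:
                start = i - voiced + 1            # segment begins where the voice began
        else:
            silent, voiced = silent + 1, 0
            if start is not None and silent > silence_need:
                segments.append((start, i - silent + 1))  # end at the first silent tick
                start = None
    if start is not None:                          # recording still open at end of input
        segments.append((start, len(samples)))
    return segments

# 1.5 s of voice followed by 2.5 s of silence, sampled at 10 ticks/s
segs = segment_recordings([5.0] * 15 + [0.0] * 25)
```

With these inputs a single segment is detected, spanning the voiced ticks.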
The annotation information processing method thus supports not only traditional circle selection, tagging, and distance measurement, but also records browsing operations and automatically captures voice and mouse tracks. The former captures the shifts of the physician's attention while browsing; the latter lets the physician leave voice annotations in scenarios that are hard to annotate or deliberately left unannotated, such as regions whose outlines are too complex to circle conveniently, or uncertain phenomena that are awkward to label in writing. Because automatically capturing voice and mouse tracks requires no deliberate, complex annotation operations, the physician's diagnostic reasoning stays more coherent, annotation efficiency improves, and more annotation information is acquired.
Step S2: extracting the features of each piece of annotation information, and associating each feature with its corresponding annotation information.
The features include at least one of a username, a position, a distance value, a time, text, and audio.
Specifically, in this step feature extraction is performed according to the type of annotation information, as described below:
1) For image pose information: the username, the position of the current field of view (i.e., the magnification and the viewport coordinates), and the time are acquired. In this embodiment, the viewport coordinates are the coordinates of the viewport's four vertices, expressed in the coordinate frame of the maximum-magnification image, and the time is relative to the start of annotation; as a preferred embodiment the time is accurate to 10 μs, but the invention is not limited thereto.
In other embodiments, the viewport coordinates may instead be the center coordinates of the viewport, and the position of the current field of view may be given by the viewport's center coordinates together with its length and width.
It should be noted that coordinates in the present invention are not limited to the coordinate frame of the maximum-magnification image; they may be based on an image at another magnification, for example the minimum-magnification image, and may also be normalized coordinates.
2) For circle-selection information: the username, the position of the circled region (i.e., all vertex coordinates of the circled region), and the time are acquired; the vertex coordinates are in the coordinate frame of the maximum-magnification image, and the time is relative to the start of annotation.
3) For distance information: the username, the coordinates of the two measurement points, the distance value between them, and the time are acquired; the coordinates are in the coordinate frame of the maximum-magnification image, and the time is relative to the start of annotation.
4) For tag information: the username, the text of the added tag, the position of the tagged object (i.e., all vertex coordinates of the circled region), and the time are acquired; the vertex coordinates and the time have the same meaning as in 2).
5) For recording information: the username, the position of the current field of view (i.e., the magnification and the coordinates of the viewport's four vertices), the audio, the text converted from the recording, and the time are acquired; the time is relative to the start of annotation.
6) For track information: the username, the coordinates of the track points sampled at a fixed interval, and the time are acquired. In this embodiment the fixed interval is 50 ms as a preferred embodiment, but the invention is not limited thereto. The automatic trigger condition for acquiring track information is the start of a recording, though the invention is not limited thereto; the time has the same meaning as in 5).
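As an illustrative sketch (function names assumed), the coordinate and time conventions used by the feature extraction above amount to mapping every coordinate into the maximum-magnification frame and storing times relative to the start of annotation:

```python
def to_max_magnification(point, current_mag, max_mag):
    """Map a coordinate captured at `current_mag` onto the maximum-magnification
    image, so features from different zoom levels share one coordinate frame."""
    scale = max_mag / current_mag
    return (point[0] * scale, point[1] * scale)

def relative_time(event_epoch, start_epoch):
    """Time feature: seconds elapsed since annotation began; the embodiment
    stores this to 10 microsecond precision (1e-5 s)."""
    return round(event_epoch - start_epoch, 5)

# a vertex captured in a 10x view, mapped into the 40x (maximum) frame
pt = to_max_magnification((120.0, 80.0), current_mag=10, max_mag=40)
t = relative_time(1000.000123, 1000.0)
```

The same two helpers would apply uniformly to pose, circle-selection, distance, tag, recording, and track features.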
The annotation information processing method therefore captures not only the results of the physician's interpretation of the medical microscopic image, but also the time feature of each browsing and annotation operation. From the time features of the annotation information, the temporal relationship between browsing and annotation operations can be obtained, and the physician's interpretation of the medical microscopic image can be reconstructed.
Step S3: integrating at least two pieces of annotation information into an event according to the features.
Further, referring to fig. 3, fig. 3 is a flowchart illustrating a sub-step of step S3 in fig. 1. As shown in fig. 3, the step S3 includes:
step S31: sorting the annotation information by username;
step S32: arranging all annotation information of the same username by time;
step S33: judging the relatedness of adjacent pieces of annotation information;
step S34: integrating related annotation information into an event.
Specifically, the pieces of annotation information are first classified by username, so that annotation information with the same username forms one class. Next, all annotation information of the same username is arranged in order; in this embodiment it is preferably arranged by time, but the invention is not limited thereto. Then the relatedness of each pair of adjacent pieces of annotation information is judged; in this embodiment the position (coordinates) of the information is used as the basis for judging relatedness, and adjacent pieces of annotation information at the same position are judged to be related. Finally, related annotation information is integrated into an event.
For example, suppose five pieces of annotation information of the same class, A, B, C, D, and E, have been arranged by time. First A and B are compared; if A is related to B, then B and C are compared. If B is not related to C, A and B are integrated into one event; if B is related to C, then C and D are compared, and if C is not related to D, A, B, and C are integrated into one event, and so on until all related annotation information has been integrated.
Although the above discloses one basis for judging relevance, the invention is not limited thereto. In another embodiment of the invention, the time of the annotation information may serve as the basis for judgment, where the time includes the start time and/or the end time; in another embodiment, both the position and the time of the annotation information serve as the basis; in yet another embodiment, the relevance expressed in the annotator's voice or text description serves as the basis.
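The grouping in steps S31–S34 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record layout (dicts with `user`, `time`, `position` keys) is an assumption, and the relevance test is passed in as a predicate so that the position-based, time-based, or combined bases described above can all be plugged in.

```python
def integrate_events(annotations, is_related):
    """Group annotation records into events (steps S31-S34).

    annotations: list of dicts with at least 'user', 'time', 'position'.
    is_related: predicate on two adjacent records (e.g. same position).
    """
    # S31: classify the annotation information by user name.
    by_user = {}
    for a in annotations:
        by_user.setdefault(a["user"], []).append(a)

    events = []
    for user_items in by_user.values():
        # S32: arrange one user's annotation information by time.
        user_items.sort(key=lambda a: a["time"])
        # S33/S34: judge adjacent relevance, merge related runs into events.
        current = [user_items[0]]
        for a in user_items[1:]:
            if is_related(current[-1], a):
                current.append(a)
            else:
                events.append(current)
                current = [a]
        events.append(current)
    return events
```

With the embodiment's position-based test (`lambda a, b: a["position"] == b["position"]`), the A–E walk-through above falls out directly: adjacent records at the same coordinates merge into one event, and the first unrelated record starts the next event.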
Specifically, any event may include, but is not limited to:
1) One recording together with track information: the user is considered to be using voice to explain a typical phenomenon or an uncertain (suspected) phenomenon in a certain area; alternatively, the user may be unable to indicate a boundary clearly with the mouse, or several positions may lie in the same field of view. The recording information and track information, including all their features extracted in step S2 (information types 5) and 6) above), are therefore integrated into the same event.
2) A label added, followed by one or more circle selections carrying that label: the user is considered to be labeling one or more occurrences of a given phenomenon in the image collectively. The label information, the circle-selection information, and the image posture information during selection, including all their features extracted in step S2 (information types 1), 2) and 4) above), are therefore integrated into the same event.
3) A label added, followed by one or more circle selections carrying that label, with recording information and track information within a specified time before or after the operation: the user is considered to be labeling one or more occurrences of a given phenomenon collectively, and the recording further explains the label-related information. In this embodiment, the specified time can be set to 3 s. The label information, circle-selection information, recording information, track information, and the image posture information during selection or recording, including all their features, are therefore integrated into the same event.
4) A label added after a circle selection: the user is considered to be labeling the current typical phenomenon. The circle-selection information and label information, including all their features, are therefore integrated into the same event.
5) A label added after a circle selection, with recording information and track information within a specified time before or after the operation: the user is considered to be labeling the current typical phenomenon, and the recording further explains the label-related information. In this embodiment, the specified time can be set to 3 s. The label information, circle-selection information, recording information, and track information, including all their features, are therefore integrated into the same event.
6) A distance measurement followed by a recording within a specified time after the operation: the user is considered to be measuring the distance between two areas or the size of an area, and the recording further explains the distance information. In this embodiment, the specified time can be set to 3 s. The distance information and recording information, including all their features, are therefore integrated into the same event.
Therefore, the annotation information processing method can organize the doctor's browsing and annotation operations according to the doctor's interpretation process of the medical microscopic image, integrating annotation operations that serve an auxiliary descriptive function into the same event. This facilitates further analysis of the doctor's diagnostic reasoning and helps researchers and other annotating users who obtain the annotated images understand the annotation information.
It should be noted that although the invention is illustrated with a specific time (for example, the specified time may be set to 3 s), the invention is not limited thereto; in other embodiments, the time may also be a time range.
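The "specified time" rule in patterns 3), 5) and 6) above can be sketched as a simple window test: a recording (and its track information) is folded into an event when it falls within the window around the labeling or measuring operation. The field names and the default 3 s window are illustrative assumptions drawn from this embodiment.

```python
WINDOW = 3.0  # the embodiment's specified time, in seconds


def within_window(op_time, rec_time, window=WINDOW):
    """True if rec_time falls in [op_time - window, op_time + window]."""
    return abs(rec_time - op_time) <= window


def merge_recordings(event, recordings, window=WINDOW):
    """Return a copy of the event extended with nearby recordings.

    event: dict with at least a 'time' key (the operation time).
    recordings: list of dicts with at least a 'time' key.
    """
    merged = dict(event)
    merged["recordings"] = [
        r for r in recordings if within_window(event["time"], r["time"], window)
    ]
    return merged
```

Widening `window`, or replacing the scalar with a `(before, after)` pair, realizes the "time range" variant mentioned above.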
Step S4: integrating at least two consecutive events occurring within a continuous time to form a logically associated event.
Further, referring to fig. 4, fig. 4 is a flowchart illustrating a sub-step of step S4 in fig. 1. As shown in fig. 4, step S4 includes:
step S41: arranging the events according to time;
step S42: judging the correlation of the adjacent events;
step S43: associating the adjacent and related events to form the logically associated event. Logically associated events are events having a causal relationship in the doctor's process of browsing the medical microscopic image, illustrated as follows:
1) Within a continuous time, across two or more consecutive events, if the user does not move the field of view but only increases the magnification, the low-magnification field is considered helpful for annotating the high-magnification field, and the events before and after this process (including the events during it) are logically associated events.
2) Within a continuous time, across two or more consecutive events, if the user only moves the field of view without changing the magnification, the user is considered to be searching for other lesion areas with a similar phenomenon, and the events before and after this process (including the events during it) are logically associated events.
It should be noted that, in this embodiment, the magnification and field of view used while browsing the medical microscopic image serve as the basis for judgment, but the invention is not limited thereto.
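Steps S41–S43 can be sketched with the two judgment rules above. The event fields (`time`, `field`, `magnification`) are illustrative assumptions; only the zoom-only / pan-only tests come from the embodiment.

```python
def logically_associated(e1, e2):
    """Adjacent-event correlation test (step S42).

    Rule 1): same field of view, magnification changed (zoom only).
    Rule 2): same magnification, field of view moved (pan only).
    """
    zoom_only = (e1["field"] == e2["field"]
                 and e1["magnification"] != e2["magnification"])
    pan_only = (e1["magnification"] == e2["magnification"]
                and e1["field"] != e2["field"])
    return zoom_only or pan_only


def link_events(events):
    """S41-S43: time-order events, then chain adjacent associated ones."""
    # S41: arrange the events according to time.
    events = sorted(events, key=lambda e: e["time"])
    # S42/S43: judge adjacent correlation and chain associated runs.
    groups, current = [], [events[0]]
    for e in events[1:]:
        if logically_associated(current[-1], e):
            current.append(e)
        else:
            groups.append(current)
            current = [e]
    groups.append(current)
    return groups
```

A zoom from 10x to 40x on the same field, followed by a pan at 40x, chains into one logically associated group; an event that changes both field and magnification starts a new group.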
Still further, the annotation information processing method of the present invention further includes step S5: storing the annotation information, the features, the events, and the logically associated events associatively in real time. The associations among all corresponding related information are thereby preserved, so that the information can be queried and retrieved mutually in use.
Specifically, the features obtained from the different types of annotation information are stored associatively in a repository, and all events and associated events in the user's annotation process retain the associations among all corresponding related information, guaranteeing mutual query and retrieval in use. In this embodiment, all information except the audio files is stored as XML files, but the invention is not limited thereto.
It should be noted that the associatively stored content of the present invention includes:
1. the features and their corresponding annotation information;
2. the events that integrate the annotation information according to the features, each event containing all its information types and their features;
3. the logically associated events that integrate events within a continuous time.
A specific example is as follows:
the storage directory is as follows:
Medical microscopic image (name)
    User name
        Record 1 (begin time)
            Event list
            Image posture (time, position)
            Circle selection and label (time, position, text)
            Recording (time, position, audio file name, text (audio-to-text))
            Audio files
            Track information (time, position)

The event list takes the form:

    Event 1: start time, image posture (position)
        time    circle selection 1
        time    label 1
        time    recording 1
        Associated events: event x, event x
    Event 2:

The image posture file takes the form:

    time 1    image posture position 1
    time 2    image posture position 2

The other files take the same form.
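The XML storage described above can be sketched with the standard library. The element and attribute names below informally mirror the storage directory and are assumptions; the embodiment only requires that annotation information, features, events, and logically associated events remain mutually queryable.

```python
import xml.etree.ElementTree as ET


def event_to_xml(event):
    """Serialize one event (dict) to an XML string.

    event: {'id', 'start', 'items': [{'type', 'time', 'position'?, 'text'?}],
            'associated': [event ids]}  -- an assumed, illustrative layout.
    """
    root = ET.Element("event", id=str(event["id"]), start=str(event["start"]))
    for item in event["items"]:
        # One child element per piece of annotation information in the event.
        child = ET.SubElement(root, item["type"], time=str(item["time"]))
        if "position" in item:
            child.set("position", ",".join(map(str, item["position"])))
        if "text" in item:
            child.text = item["text"]
    # Keep the association to other events, so logically associated
    # events can be followed from the stored file.
    assoc = ET.SubElement(root, "associated_events")
    assoc.text = ",".join(str(i) for i in event.get("associated", []))
    return ET.tostring(root, encoding="unicode")
```

Because every child element keeps its time and position attributes, the stored file supports the mutual query and retrieval required by step S5 (by position, by time, or by text).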
Therefore, the annotation information processing method can analyze and organize the doctor's diagnostic reasoning according to the doctor's interpretation process of the medical microscopic image, and associatively store events having causal or contrastive relationships. The invention provides not only the doctor's annotation result but also the doctor's annotation process and its surrounding logical relationships, which helps researchers obtaining the annotated images deepen their understanding of the medical microscopic image and supports the training of neural networks. Among different annotating users, the method can also serve for doctors to review and exchange diagnostic reasoning with each other, and for senior physicians to demonstrate and teach diagnostic reasoning to junior physicians, playing a role in inter-user communication, reference, or teaching.
In another embodiment of the present invention, the annotation information processing method further includes step S6: presenting at least one of the annotation information and its features, the events, and the logically associated events according to different query strategies.
Specifically, step S6 includes the following:
1. According to a user instruction, query the related annotation information and its features using different query strategies:
A. based on position, the annotation information and features at the same position can be queried;
B. based on user name and time, all annotation information and features of a given user can be queried;
C. based on text, the annotation information and features whose text contains a given keyword can be queried.
For example:
1) The interpretation of medical micrographs often relies on the doctor's subjective experience, and different doctors may hold different views on the same picture. When multi-user comparison is needed, strategy A can be used;
2) Because medical micrographs are large, the magnification and field of view are variable. When interpreting a medical micrograph, the doctor integrates information from previously browsed fields of view with the currently browsed field to adjust the next field or add annotations, finally reaching a diagnostic conclusion. When the doctor's before-and-after logic and diagnostic reasoning during annotation need to be reconstructed, strategy B can be used;
3) Pathological features are complex and morphological phenomena numerous, so a doctor may use many labels or recordings in one medical micrograph to classify and explain different types of phenomena. When picture material for a certain phenomenon needs to be obtained quickly, strategy C can be used.
Therefore, the invention has strong applicability and can meet various requirements in practical use.
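The three query strategies can be sketched directly over the stored records. The record layout (dicts with `user`, `time`, `position`, `text` keys) is an illustrative assumption standing in for the XML repository.

```python
def query_by_position(records, position):
    """Strategy A: all annotation information (any user) at one position."""
    return [r for r in records if r["position"] == position]


def query_by_user(records, user, start=None, end=None):
    """Strategy B: one user's annotation information, time-ordered."""
    hits = [r for r in records
            if r["user"] == user
            and (start is None or r["time"] >= start)
            and (end is None or r["time"] <= end)]
    return sorted(hits, key=lambda r: r["time"])


def query_by_keyword(records, keyword):
    """Strategy C: annotation information whose text contains a keyword."""
    return [r for r in records if keyword in r.get("text", "")]
```

Strategy A supports the multi-user comparison in example 1), strategy B reconstructs one doctor's time-ordered reasoning as in example 2), and strategy C pulls all material mentioning a phenomenon as in example 3).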
2. Present the related annotation information and its features, or the events, or the logically associated events in different ways according to the different query strategies:
A. in the user interface, the annotation information and features of several annotating users in the same field of view can be presented side by side, including circle-selected and added labels, distances, text converted from audio, corresponding track information, and the like;
B. in the user interface, a given user's annotation events are presented in chronological order; in this embodiment, the events before and after them and the logically associated events can also be presented;
C. in the user interface, labels or voice containing a given keyword are presented together with the corresponding area or field of view;
D. combinations of A, B, and C above.
Therefore, the invention can present an annotating user's annotation results, process, and reasoning on the medical microscopic image according to different query strategies, thereby meeting the requirements that training tasks of different neural network algorithms impose on image samples.
Specifically, first, the change process of the image posture while the user browses the medical microscopic image is acquired, including the coordinate position of the field of view and the corresponding events; this covers information obtained by the medical microscopic image annotation method as well as related information obtained from other channels. Then the time, field-of-view position, and image information are analyzed, and a neural network model is trained on them. Finally, based on the trained neural network model, several applications become possible, such as predicting the position of the next likely field of view after a given field has been previewed — that is, predicting the positions and probabilities of all possible fields of view after jumping to a given field according to an instruction.
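As a minimal stand-in for the neural network model described above (whose architecture the text does not fix), the next-view prediction can be illustrated with a first-order transition-frequency model fitted to observed browsing sequences. This sketch is purely illustrative: it ignores time and image content and uses only the sequence of field-of-view positions.

```python
from collections import Counter, defaultdict


class NextViewModel:
    """Predicts candidate next fields of view with probabilities."""

    def __init__(self):
        # transitions[current_view][next_view] = observed count
        self.transitions = defaultdict(Counter)

    def fit(self, sessions):
        """sessions: iterable of per-annotator view-position sequences."""
        for seq in sessions:
            for cur, nxt in zip(seq, seq[1:]):
                self.transitions[cur][nxt] += 1

    def predict(self, current_view):
        """All observed next views after current_view, with probabilities."""
        counts = self.transitions[current_view]
        total = sum(counts.values())
        return {v: c / total for v, c in counts.items()} if total else {}
```

Replacing this frequency table with a trained network that also conditions on time and image features recovers the application described in the paragraph above.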
Referring to FIG. 6, FIG. 6 is a schematic diagram of a labeled information processing system according to the present invention. As shown in fig. 6, the annotation information processing system of the present invention includes:
the acquisition unit 11 is used for acquiring a plurality of marking information when a user browses the medical microscopic image;
the extracting unit 12 is configured to extract a feature of each piece of label information, and associate the feature with the label information corresponding to the feature;
and an integration unit 13 for integrating the label information into an event according to the characteristics.
Wherein the acquisition unit 11 comprises:
the identification module 111 identifies identity information of a current user;
the first acquisition module 112 is used for acquiring the marking information of the current user browsing operation;
the second collecting module 113 obtains the labeling information of the current user labeling operation.
The annotation information of the current user browsing operation comprises: image pose information; the labeling information of the current user labeling operation comprises: at least one of circling information, distance information, tag information, recording information, and track information.
Wherein the features include: at least one of a username, a location, a distance value, a time, a text, an audio.
Wherein the integration unit 13 includes:
the information sorting module 131 sorts the labeling information according to the user name;
the first information arrangement module 132 is configured to arrange all the labeled information of the same user name according to time;
the first judging module 133, judging the relevance of the adjacent labeling information;
the first integration module 134 integrates the related annotation information into an event.
Further, the annotation information processing system further comprises an association unit 14, and the association unit 14 integrates at least two consecutive events occurring within a consecutive time to form a logical association event.
Wherein the associating unit 14 comprises:
a second information arrangement module 141 that arranges the events according to time;
a second determining module 142, configured to determine a correlation between the adjacent events;
and the second integration module 143 associates the related events to form the logically associated events.
Still further, the annotation information processing system further comprises a human-computer interaction unit 15, through which a user is presented with the annotation information and its features, or the events, or the logically associated events.
In this embodiment, the annotation information processing system further includes a storage unit 16 configured to store the annotation information and its features, the events, and the logically associated events; according to a query instruction, the related annotation information and its features, or the events, or the logically associated events are queried through the human-computer interaction unit 15 using different query strategies, and the query results are presented.
The image analysis apparatus of the present invention includes:
the image labeling device is used for labeling the image so as to output a plurality of labeling information;
the tag information processing system described above processes a plurality of pieces of tag information.
Wherein the image annotation device is one of a tablet computer, a desktop computer, a notebook computer, and a microscope device, the microscope device comprising a microscope, a recording unit for collecting recording information, a display unit for image enhancement, and the like.
It should be noted that the annotation information processing system and/or the image analysis device of the present invention are also applicable to doctor-teaching scenarios.
In conclusion, during a doctor's browsing and annotation of a medical microscopic image, the invention can improve the doctor's annotation efficiency while capturing more structured annotation information left unconsciously by the doctor, reconstruct the doctor's interpretation process of the medical microscopic image, and analyze and present the doctor's diagnostic reasoning, thereby meeting the requirements that training tasks of different neural network algorithms impose on image samples.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. A method for processing labeling information of a medical microscopic image is characterized by comprising the following steps:
step S1: acquiring a plurality of marking information when a user browses a medical microscopic image;
step S2: extracting the characteristics of each piece of labeled information, and associating the characteristics with the labeled information corresponding to the characteristics;
step S3: integrating at least two pieces of the labeling information into an event according to the characteristics;
the step S3 includes:
step S31: sorting the labeling information according to the user name;
step S32: arranging all the marking information of the same user name according to time;
step S33: judging the relevance of the adjacent labeling information;
step S34: and integrating the related labeling information into an event.
2. The annotation information processing method according to claim 1, wherein said step S1 includes:
step S11: acquiring and identifying identity information of a current user;
step S12: acquiring marking information of current user browsing operation;
step S13: and acquiring the marking information of the marking operation of the current user.
3. The annotation information processing method of claim 2, wherein the annotation information of the current user browsing operation includes: image pose information; the labeling information of the current user labeling operation comprises: at least one of circle selection information, tag information, distance information, recording information, and track information.
4. The annotation information processing method of any one of claims 1 to 3, wherein the characteristic comprises: at least one of a username, a location, a distance value, a time, text, and audio.
5. The annotation information processing method according to claim 1, further comprising step S4: integrating at least two consecutive events occurring within a continuous time to form a logically associated event.
6. The annotation information processing method according to claim 5, wherein said step S4 includes:
step S41: ranking the events according to time;
step S42: judging the correlation of the adjacent events;
step S43: and integrating the related events to form the logic association event.
7. An annotation information processing system for medical microscopic images, comprising:
the acquisition unit is used for acquiring a plurality of marking information when a user browses the medical microscopic image;
the extracting unit is used for extracting the characteristics of each piece of labeling information and storing the characteristics and the corresponding labeling information in a correlation manner;
the integration unit integrates the label information into an event according to the characteristics;
the integration unit includes:
the information sorting module sorts the marked information according to the user name;
the first information arrangement module is used for arranging all the marking information of the same user name according to time;
the first judgment module is used for judging the relevance of the adjacent marking information;
and the first integration module integrates the related labeling information into an event.
8. The annotation information processing system of claim 7, wherein the acquisition unit comprises:
the identification module is used for identifying the identity information of the current user;
the first acquisition module is used for acquiring the marking information of the current user browsing operation;
and the second acquisition module is used for acquiring the marking information of the marking operation of the current user.
9. The annotation information processing system of claim 8, wherein the annotation information of the current user browsing operation comprises: image pose information; the labeling information of the current user labeling operation comprises: at least one of circling information, distance information, tag information, recording information, and track information.
10. The annotation information processing system of any one of claims 7-9, wherein the features comprise: at least one of a username, a location, a distance value, a time, text, and audio.
11. The annotation information processing system of claim 7, further comprising an association unit that integrates at least two consecutive events that occur within consecutive times to form a logically associated event.
12. The annotation information processing system of claim 11, wherein the associating unit comprises:
the second information arrangement module is used for arranging the events according to time;
the second judgment module is used for judging the correlation of the adjacent events;
and the second integration module integrates the related events to form the logic association event.
13. The annotation information processing system of claim 12, further comprising a human-computer interaction unit that queries and presents at least one of annotation information and its characteristics, events, and logically-associated events according to different query strategies.
14. An image analysis apparatus, characterized by comprising:
the image labeling device is used for labeling the image so as to output a plurality of labeling information;
the annotation information processing system of any one of claims 7 to 13, which processes a plurality of the annotation information.
CN202010481715.9A 2020-06-01 2020-06-01 Medical microscopic image annotation information processing method and system and image analysis equipment Active CN111383744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010481715.9A CN111383744B (en) 2020-06-01 2020-06-01 Medical microscopic image annotation information processing method and system and image analysis equipment


Publications (2)

Publication Number Publication Date
CN111383744A CN111383744A (en) 2020-07-07
CN111383744B (en) 2020-10-16

Family

ID=71220438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010481715.9A Active CN111383744B (en) 2020-06-01 2020-06-01 Medical microscopic image annotation information processing method and system and image analysis equipment

Country Status (1)

Country Link
CN (1) CN111383744B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7962428B2 (en) * 2006-11-30 2011-06-14 Siemens Medical Solutions Usa, Inc. System and method for joint optimization of cascaded classifiers for computer aided detection
CN102254195A (en) * 2011-07-25 2011-11-23 广州市道真生物科技有限公司 Training set generation method
CN105184303B (en) * 2015-04-23 2019-08-09 南京邮电大学 A kind of image labeling method based on multi-modal deep learning
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN110931095A (en) * 2018-09-19 2020-03-27 北京赛迈特锐医疗科技有限公司 System and method based on DICOM image annotation and structured report association


Similar Documents

Publication Publication Date Title
AU2020200835B2 (en) System and method for reviewing and analyzing cytological specimens
CN110993064B (en) Deep learning-oriented medical image labeling method and device
Kurzhals et al. Visual analytics for mobile eye tracking
US7925070B2 (en) Method for displaying virtual slide and terminal device for displaying virtual slide
CA2367330C (en) System and method for inputting, retrieving, organizing and analyzing data
JP2011154687A (en) Method and apparatus for navigating image data set, and program
CN1726496A (en) System and method for annotating multi-modal characteristics in multimedia documents
CN112927776A (en) Artificial intelligence automatic interpretation system for medical inspection report
CN110867228B (en) Intelligent information grabbing and evaluating method and system for wound severity of wound inpatient
CN113485555B (en) Medical image film reading method, electronic equipment and storage medium
Chiang et al. Quick browsing and retrieval for surveillance videos
CN111383744B (en) Medical microscopic image annotation information processing method and system and image analysis equipment
Nagy et al. Interactive visual pattern recognition
JP4578135B2 (en) Specimen image display method and specimen image display program
Elias Enhancing User Interaction with Business Intelligence Dashboards
JP4785347B2 (en) Specimen image display method and specimen image display program
CN114416664A (en) Information display method, information display device, electronic apparatus, and readable storage medium
JP5048038B2 (en) Blood cell classification result display method and blood cell classification result display program
KR20160125599A (en) Apparatus and methodology for an emotion event extraction and an emotion sketch based retrieval
CN115618837B (en) Laboratory instrument data acquisition and analysis method and system
Zhu et al. Application of visual saliency in the background image cutting for layout design
Monteiro et al. Applied medical informatics in the detection and counting of erythrocytes and leukocytes through an image segmentation algorithm
Obeso et al. Annotations of Mexican bullfighting videos for semantic index
CN115809638A (en) Method, system, device and storage medium for outputting document content
CN116719460A (en) Important content interaction method and device, terminal, handwriting suite and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant