CN113297836A - Image report label evaluation method and device, computer equipment and storage medium


Info

Publication number
CN113297836A
CN113297836A
Authority
CN
China
Prior art keywords
target text
position coordinate
text
label
organ
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110588164.0A
Other languages
Chinese (zh)
Inventor
胡任之 (Hu Renzhi)
马秋英 (Ma Qiuying)
刘青青 (Liu Qingqing)
谢博 (Xie Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Good Diagnosis Shanghai Information Technology Co., Ltd.
Original Assignee
Good Diagnosis Shanghai Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Good Diagnosis Shanghai Information Technology Co., Ltd.
Priority to CN202110588164.0A
Publication of CN113297836A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided herein are image report label evaluation methods, apparatus, computer devices, and storage media, wherein the methods comprise: acquiring an image report to be evaluated and a label thereof, wherein the label is extracted from the image report in advance; analyzing the image report to obtain a descriptive text and a conclusive text; analyzing the label to obtain an organ keyword and an attribute keyword; acquiring a first target text from the descriptive text and a second target text from the conclusive text according to the organ keyword and the attribute keyword; and judging whether the label matches the image report according to the first target text and the second target text. The method and device eliminate the prior-art need to manually review whether the label of an image report is accurate, and improve the accuracy and efficiency of label evaluation.

Description

Image report label evaluation method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of data analysis, in particular to an image report label evaluation method, an image report label evaluation device, computer equipment and a storage medium.
Background
Efficient use of big data technology to fully mine the value of medical and health big data is of great significance to research on disease diagnosis and treatment, prognosis, basic science, and the like. However, most medical texts are stored in Portable Document Format (PDF); that is, the data they contain is unstructured, which makes extraction difficult and limits the analysis and application of the data. Medical image reports have a large data volume and a relatively regular structure, which makes them amenable to structuring. In the prior art, information in an image report is extracted and converted into structured, usable labels by methods such as rule-based extraction, natural language semantic analysis, and text annotation. However, evaluating the accuracy of the extracted labels still relies mainly on manual review, which is time-consuming and labor-intensive, and rarely yields objective, uniform results. Since label accuracy directly affects subsequent data applications, a label accuracy evaluation method is urgently needed.
Disclosure of Invention
In view of the foregoing problems in the prior art, it is an object of the present invention to provide an image report label evaluation method, apparatus, computer device and storage medium, so as to solve the time-consuming, labor-intensive and inaccurate manual review of image report labels required in the prior art.
In order to solve the technical problems, the specific technical scheme is as follows:
in one aspect, provided herein is a method for evaluating image report tags, comprising:
acquiring an image report to be evaluated and a label thereof, wherein the label is extracted from the image report in advance;
analyzing the image report to obtain a descriptive text and a conclusive text;
analyzing the label to obtain an organ keyword and an attribute keyword;
acquiring a first target text from the descriptive text and a second target text from the conclusive text according to the organ keywords and the attribute keywords;
and judging whether the label is matched with the image report or not according to the first target text and the second target text.
Specifically, the obtaining a first target text from the descriptive text and a second target text from the conclusive text according to the organ keyword and the attribute keyword includes:
respectively acquiring a first position coordinate and a second position coordinate of the organ keyword and the attribute keyword in the descriptive text;
acquiring the first target text according to the first position coordinate and the second position coordinate;
respectively obtaining a third position coordinate and a fourth position coordinate of the organ keyword and the attribute keyword in the conclusive text;
and acquiring the second target text according to the third position coordinate and the fourth position coordinate.
Specifically, the obtaining the first target text according to the first position coordinate and the second position coordinate includes:
judging whether the first position coordinate and the second position coordinate are unique or not;
if the first position coordinate and the second position coordinate are unique, acquiring the first position coordinate, the second position coordinate and a text between the first position coordinate and the second position coordinate as a first target text;
and if the number of the first position coordinates and/or the second position coordinates is larger than or equal to two, acquiring the forward distance between each first position coordinate and each second position coordinate, and selecting a group of first position coordinates and second position coordinates with the smallest forward distance and a text between the first position coordinates and the second position coordinates as a first target text.
Further, before the determining whether the first position coordinate is unique, the method further includes:
judging whether the first position coordinate is empty or not;
if the first position coordinate is empty, judging whether other organ keywords exist in the descriptive text or not;
if other organ keywords exist, determining that the label does not match the image report;
and if not, supplementing the organ keyword to the head of the descriptive text.
Further, before the determining whether the first position coordinate is empty, the method further includes:
judging whether the second position coordinate is empty or not;
and if the second position coordinate is null, determining that the first target text is null.
Further, the obtaining the first target text according to the first position coordinate and the second position coordinate is:
judging whether a punctuation mark exists between the first position coordinate and the second position coordinate;
and if a punctuation mark exists, acquiring the text between the punctuation mark and the second position coordinate as the first target text.
Specifically, the obtaining the second target text according to the third position coordinate and the fourth position coordinate includes:
judging whether the fourth position coordinate is empty or not;
if the fourth position coordinate is null, determining that the second target text is null;
if the fourth position coordinate is not null, judging whether the third position coordinate is null or not;
if the third position coordinate is not null, acquiring the third position coordinate, the fourth position coordinate and a text between the third position coordinate and the fourth position coordinate as a second target text;
and if the third position coordinate is null, supplementing the organ keywords into the head of the conclusive text.
Specifically, the determining whether the tag matches the image report according to the first target text and the second target text includes:
if the second target text is empty, judging whether the first target text is empty;
if the first target text is empty, determining that the label is not matched with the image report;
if the first target text is not empty, judging whether other organ keywords and negative words exist in the first target text;
if no other organ keywords exist and no negative words exist, judging that the label is matched with the image report;
and if other organ keywords and/or negative words exist, judging that the label does not match with the image report.
Further, the method further comprises:
if the second target text is not empty, judging whether other organ keywords exist in the second target text;
if no other organ keywords exist in the second target text, judging whether a negative word exists in the second target text;
if a negative word exists in the second target text, judging that the label is not matched with the image report;
and if no negative word exists in the second target text, judging that the label is matched with the image report.
Further, the method further comprises:
if the second target text contains other organ keywords, judging whether the first target text is empty;
if the first target text is empty, determining that the label is not matched with the image report;
if the first target text is not empty, judging whether other organ keywords and negative words exist in the first target text;
if no other organ keywords exist in the first target text and no negative words exist in the first target text, judging that the label is matched with the image report;
and if other organ keywords and/or negative words exist, judging that the label does not match with the image report.
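The matching logic laid out above can be condensed into a small decision function. The sketch below is purely illustrative and not part of the claimed method: the function name, the `other_organs` and `negatives` lists, and the word-level containment test are all assumptions, and English tokenization stands in for the Chinese-text keyword detection the patent actually operates on.

```python
import re

def _contains(text: str, words: list[str]) -> bool:
    # Word-level test; the patent operates on Chinese report text, where
    # keyword detection would differ. English tokens are used for illustration.
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return any(w in tokens for w in words)

def judge_match(first_target: str, second_target: str,
                other_organs: list[str], negatives: list[str]) -> bool:
    def supports(text: str) -> bool:
        # A target text supports the label only if it is non-empty and
        # contains neither another organ keyword nor a negative word.
        return bool(text) and not _contains(text, other_organs) \
                          and not _contains(text, negatives)

    if second_target:
        if not _contains(second_target, other_organs):
            # No other organ in the conclusion: the label matches unless negated.
            return not _contains(second_target, negatives)
        # Conclusion concerns another organ: fall back to the descriptive text.
        return supports(first_target)
    # Empty conclusion: rely on the descriptive text alone.
    return supports(first_target)

# Example for a hypothetical "thyroid nodule" label:
print(judge_match("", "no thyroid nodule seen", ["mediastinum"], ["no"]))  # False
```

Note that every branch terminates in a definite match/no-match decision, mirroring the exhaustive case analysis of the disclosure.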
In another aspect, an image report tag evaluation apparatus is provided herein, including:
a first acquisition module, configured to acquire an image report to be evaluated and a label thereof, wherein the label is extracted from the image report in advance;
the first analysis module is used for analyzing the image report to obtain a descriptive text and a conclusive text;
the second analysis module is used for analyzing the label to obtain an organ keyword and an attribute keyword;
a second obtaining module, configured to obtain a first target text from the descriptive text and a second target text from the conclusive text according to the organ keyword and the attribute keyword;
and the judging module is used for judging whether the label is matched with the image report or not according to the first target text and the second target text.
In another aspect, a computer device is also provided herein, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method provided in the above technical solution is implemented.
In another aspect, a computer storage medium is provided, and the computer storage medium stores a computer program, and the computer program is executed by a processor to implement the method provided in the above technical solution.
By adopting the above technical solution, the image report label evaluation method, apparatus, computer device and storage medium provided by the invention parse the label into an organ keyword and an attribute keyword, match them against the descriptive text and the conclusive text respectively, and thereby obtain a first target text and a second target text for judging whether the label matches; this not only removes the dependence on manual review for judging whether the label of an image report is accurate, but also improves the accuracy and efficiency of label evaluation.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart illustrating an image report label evaluation method provided in an embodiment of the present disclosure;
FIG. 2 is a first schematic diagram of obtaining a first target text based on a first position coordinate and a second position coordinate;
FIG. 3 is a second schematic diagram of obtaining a first target text based on a first position coordinate and a second position coordinate;
FIG. 4 is a schematic diagram of a first process for determining whether the tag matches the image report;
FIG. 5 is a second flowchart illustrating the process of determining whether the tag matches the image report;
fig. 6 is a schematic structural diagram illustrating an image report tag evaluation apparatus according to an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of a computer device provided in an embodiment herein.
Description of the symbols of the drawings:
10. a first acquisition module;
20. a first parsing module;
30. a second parsing module;
40. a second acquisition module;
50. a judgment module;
702. a computer device;
704. a processor;
706. a memory;
708. a drive mechanism;
710. an input/output module;
712. an input device;
714. an output device;
716. a presentation device;
718. a graphical user interface;
720. a network interface;
722. a communication link;
724. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments herein described are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In the prior art, information in an image report is extracted as a label by methods such as rule-based extraction, natural language semantic analysis, and text annotation, so as to classify and analyze the image report. However, whether the extracted label is accurate still mainly depends on manual review, which is time-consuming and labor-intensive, and objective, uniform results are difficult to obtain. To solve these problems, embodiments of the present disclosure provide an image report label evaluation method that can automatically evaluate the accuracy of an image report label, greatly saving the time and cost of manual review. Fig. 1 is a schematic diagram of the steps of the method; the present disclosure provides the method steps as described in the embodiments or flowcharts, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. An actual system or apparatus product may execute sequentially or in parallel according to the method shown in the embodiments or the figures. Specifically, as shown in fig. 1, the method may include:
s110: acquiring an image report to be evaluated and a label thereof, wherein the label is extracted from the image report in advance;
in the embodiment of the present specification, the image report to be evaluated is preferably a medical image report, and due to the relative rules of the structure and layout of the medical image report, the attempt of evaluating the accuracy of the label is facilitated. Of course, the image report may be in other fields. The image report can be directly called from the existing medical image database, the label is extracted from the image report in advance by methods such as rule extraction, natural language semantic analysis, text annotation and the like, and the extraction method and the extraction rule of the label are not specifically limited in the embodiment of the description.
S120: analyzing the image report to obtain a descriptive text and a conclusive text;
It should be noted that the text part of a medical image report generally includes three components. The first part is the detailed information of the image examination, such as: the examination site (e.g., chest, lung, liver, etc.) and examination modality (e.g., ultrasound, CT, etc.), patient information (e.g., the patient's name, sex, age, etc.), examination time, and report number (the number of each image report is unique). The second part is the descriptive text, i.e., the visual findings: a detailed textual record made by the imaging technician of the information observed in the image. The third part is the conclusive text: the diagnosis given to the patient by the attending physician according to the descriptive text. Besides the text part, a medical image report may sometimes include a picture part, i.e., images such as ultrasound or CT scans.
S130: analyzing the label to obtain an organ keyword and an attribute keyword;
it should be noted that, in the embodiments of the present specification, the organ keyword may be a name of an organ such as thyroid, kidney, liver, lung, and the like; the attribute keywords are words with diagnostic conclusiveness, e.g., nodules, cysts, etc.; that is, the label may be "thyroid nodule", "renal cyst", or the like, and may be considered as a result of diagnosis of the organ nodules involved. Those skilled in the art can classify the image report according to the label, and can also use the classified image report to perform machine learning, etc. In the embodiment of the specification, different parts or different organs may have the same attribute key, for example, a breast nodule, a lung nodule, a thyroid nodule, and the like. When extracting labels from the image, a uniform medical attribute label set can be adopted for labeling.
S140: acquiring a first target text from the descriptive text and a second target text from the conclusive text according to the organ keywords and the attribute keywords;
S150: and judging whether the label is matched with the image report or not according to the first target text and the second target text. That is, whether the label is accurate relative to the image report is judged according to the first target text and the second target text.
The embodiment of the present specification provides a method for evaluating an image report label, which creatively provides a way to judge and evaluate whether an extracted label is accurate: the label is parsed into an organ keyword and an attribute keyword to respectively obtain a first target text and a second target text, and the accuracy of the label is then judged from these two target texts. This makes up for the lack of judgment and evaluation of label accuracy in the prior art, and solves the low efficiency and reliability caused by relying on manual review to determine whether a label is accurate.
Specifically, in the embodiment of the present specification, step S140: obtaining a first target text from the descriptive text and a second target text from the conclusive text according to the organ keywords and the attribute keywords, further comprising:
s210: respectively acquiring a first position coordinate and a second position coordinate of the organ keyword and the attribute keyword in the descriptive text;
in the embodiment of the present specification, the first position coordinate refers to the arrangement order of the organ keywords in all the characters of the descriptive text (i.e., the organ keywords are arranged from top to bottom and from left to right according to the reading habit, and the organ keywords are arranged in the first place in all the characters of the whole piece of the descriptive text). Of course, the first position coordinate may also be that the organ keyword is located in the several rows and several columns in the descriptive text, i.e. may also be represented by means of abscissa and ordinate. The meaning of the second position coordinate may be referred to the first position coordinate.
S220: acquiring the first target text according to the first position coordinate and the second position coordinate;
s230: respectively obtaining a third position coordinate and a fourth position coordinate of the organ keyword and the attribute keyword in the conclusive text; the meaning of the third position coordinate and the fourth position coordinate may refer to the first position coordinate, and is not described herein again.
S240: and acquiring the second target text according to the third position coordinate and the fourth position coordinate.
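Steps S210 to S240 reduce to locating every occurrence of each keyword in a text. The following sketch is an illustration, not the claimed implementation; the function name and the zero-based character indexing are assumptions chosen to be consistent with the example coordinates given later in the description (an empty list corresponds to an "empty" position coordinate):

```python
def find_positions(text: str, keyword: str) -> list[int]:
    """Return every zero-based character index at which keyword starts in text,
    i.e. the 'position coordinates' of steps S210 and S230; an empty list
    corresponds to an empty position coordinate."""
    positions, start = [], 0
    while (index := text.find(keyword, start)) != -1:
        positions.append(index)
        start = index + 1
    return positions

desc = "thyroid normal in size; no obvious nodule seen in the thyroid"
print(find_positions(desc, "thyroid"))  # [0, 54]
```

The same helper serves both the descriptive text (first and second coordinates) and the conclusive text (third and fourth coordinates).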
In the embodiment of the present specification, step S220: obtaining the first target text according to the first position coordinate and the second position information coordinate, which specifically includes:
s310: judging whether the first position coordinate and the second position coordinate are unique or not;
s320: and if the first position coordinate and the second position coordinate are unique, acquiring the first position coordinate, the second position coordinate and a text between the first position coordinate and the second position coordinate as a first target text.
Illustratively, as shown in fig. 2, the descriptive text in the image report is: the thyroid gland has normal size and shape, smooth surface, complete envelope, medium echo inside, thin, dense and evenly distributed light spots and no obvious nodule.
The organ keywords are: thyroid gland; the attribute keywords are: a nodule.
The first position coordinate is [0] (note that the first character of the descriptive text is at position 0) and the second position coordinate is [44] (the coordinates refer to character positions in the original Chinese text). Since the first position coordinate and the second position coordinate are both unique, the first position coordinate, the second position coordinate and the text between them are acquired as the first target text, namely: the thyroid gland has normal size and shape, smooth surface, complete envelope, medium echo inside, thin, dense and evenly distributed light spots and no obvious nodule. It should be noted that the second position coordinate [44] is actually the coordinate of the first character of the attribute keyword, but the entire attribute keyword is included when the first target text is acquired.
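When both coordinates are unique, the first target text is simply the slice from the organ keyword through the end of the attribute keyword. A minimal sketch of this case, under the same zero-based indexing (the English sample text is an illustrative stand-in for the Chinese report, and the function name is an assumption):

```python
def slice_target(text: str, organ_kw: str, attr_kw: str) -> str:
    i = text.find(organ_kw)   # first position coordinate
    j = text.find(attr_kw)    # second position coordinate
    if i == -1 or j == -1:
        return ""             # a coordinate is empty
    # j points at the attribute keyword's first character, but the whole
    # keyword is included in the target text, as noted above.
    return text[i:j + len(attr_kw)]

desc = "thyroid normal in size and shape, no obvious nodule"
print(slice_target(desc, "thyroid", "nodule"))  # whole span through "nodule"
```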
S330: if the number of the first position coordinates and/or the second position coordinates is greater than or equal to two, acquiring the forward distance between each first position coordinate and each second position coordinate, and selecting the pair of first and second position coordinates with the smallest forward distance, together with the text between them, as the first target text. The forward distance refers to the relative position relationship of a pair of organ and attribute keywords in which the first position coordinate (the organ keyword) precedes the second position coordinate (the attribute keyword).
As shown in fig. 3, for example, if the descriptive text in the image report is: the thyroid has normal size and shape, smooth surface, complete envelope, medium echo inside, fine, weak and dense light spots, uniform distribution and no obvious nodule in the thyroid.
The second position coordinate is [47]; the first position coordinates are [0, 40], that is, the first position coordinate is not unique. The forward distance between each first position coordinate and the second position coordinate is therefore obtained; the organ keyword at first position coordinate [40] has the smallest forward distance to the attribute keyword, so it is selected to obtain the first target text, which is finally: no obvious nodules were seen in the thyroid. This avoids acquiring multiple first target texts. Since the descriptive text is a detailed record of all the information the imaging technician observes in the image, it is sometimes very lengthy. Determining the first target text from the relative positions of the first and second position coordinates therefore yields a more concise first target text, which makes the subsequent judgment of whether the label is accurate more convenient.
In the embodiment of the present specification, the relative position relationship between each first position coordinate and each second position coordinate may also be represented by their difference, whose absolute value represents the relative distance. Since the descriptive text of a medical image report follows a fairly regular language pattern, in some possible embodiments the forward distance between each organ keyword and the attribute keyword can be calculated: for the descriptive text above, the forward distances between the first position coordinates and the second position coordinate of the attribute keyword are 0 - 47 = -47 and 40 - 47 = -7, and the first target text is obtained by taking the minimum forward distance, i.e., the largest negative number, -7. It should be noted that there may also be more than one second position coordinate for the attribute keyword, in which case the relative distance between each first position coordinate and each second position coordinate is obtained to determine the first target text.
The image report tag evaluation method provided by the embodiment of the specification can meet the requirement of obtaining the first target text in the image report under various conditions, not only avoids the obtaining confusion of the first target text when a plurality of first position coordinates exist, but also can simplify the first target text, and facilitates subsequent tag evaluation operation.
In some preferred embodiments, in step S310: before determining whether the first location coordinate is unique, the method further comprises:
s410: judging whether the first position coordinate is empty or not; i.e. to determine whether the organ keyword is present in the descriptive text.
S420: if the first position coordinate is empty, judging whether other organ keywords exist in the descriptive text or not; that is, when the label is "thyroid nodule", it is determined whether there are other organ keywords than "thyroid" in the descriptive text.
S430: if other organ keywords exist, determining that the label does not match the image report;
for example, if the descriptive text of the chest CT image report is: one nodule is visible at the mediastinum. The label is a lung nodule, i.e. the organ keywords are: a lung; the attribute keywords are: a nodule.
Then there is no organ keyword "lung" in the descriptive text, but another organ keyword, "mediastinum", is present, so the label "lung nodule" is judged not to match the chest CT image report. That is, the finding described in the image report actually pertains to the mediastinum. Under this premise, the label can be judged erroneous directly, without the operation of acquiring the second target text and the subsequent judgment and analysis steps, which improves judgment efficiency.
S440: if not, the organ keywords are filled in the head of the descriptive text.
For example, suppose the descriptive text of a renal ultrasound image report is: one cyst is visible on the left. The label is renal cyst. No organ keyword appears in the descriptive text; since the organ keyword "kidney" is absent, neither the first position coordinate nor the first target text can be obtained. The organ keyword is therefore supplemented into the head of the descriptive text, which is updated to: one cyst is visible on the left side of the kidney.
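A minimal sketch of this head-supplementing step (the helper name is illustrative, not from the patent):

```python
def ensure_organ_keyword(text: str, organ: str) -> str:
    """Prepend the organ keyword when it is absent from the descriptive
    text, so that a first position coordinate can then be obtained."""
    if organ in text:
        return text
    return organ + " " + text   # the organ keyword now starts at position 0
```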
In some preferred embodiments, before step S410 (determining whether the first position coordinate is empty), the method further comprises:
S510: determining whether the second position coordinate is empty;
S520: if the second position coordinate is empty, determining that the first target text is empty; that is, when the attribute keyword does not appear in the descriptive text, no first target text is acquired. In the embodiments of the present specification, whether the descriptive text contains the attribute keyword may be checked first, to decide whether the first target text is empty, before checking whether it contains the organ keyword; this improves the efficiency of acquiring the first target text.
In some preferred embodiments, acquiring the first target text according to the first position coordinate and the second position coordinate may also be performed as follows:
judging whether a punctuation mark exists between the first position coordinate and the second position coordinate;
and if a punctuation mark exists, acquiring the text between the punctuation mark and the second position coordinate as the first target text.
It should be noted that, when more than one punctuation mark lies between the first position coordinate and the second position coordinate, the punctuation mark closest to the second position coordinate is selected, and the text between it and the second position coordinate is taken as the first target text.
For example, suppose the descriptive text of an image report is: the thyroid glands on both sides are full in shape, the volume is increased, the isthmus is about 5 mm thick, the capsule is complete, the parenchymal echo is thickened, the distribution is uneven, and multiple nodules are visible. The label of the image report is: thyroid nodule.
Then the first position coordinate is [2], the second position coordinate is [44], and several punctuation marks lie between them. The text between the comma at position coordinate [38] and the second position coordinate is selected as the first target text; that is, the first target text is shortened from the original "the thyroid glands on both sides are full in shape, the volume is increased, the isthmus is about 5 mm thick, the capsule is complete, the parenchymal echo is thickened, the distribution is uneven, and multiple nodules are visible" to "multiple nodules are visible". Once the organ keyword has been located, the clause containing the attribute keyword can be regarded as the diagnostic description of that organ, so shortening the first target text in this way facilitates the subsequent evaluation of the label's accuracy.
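The punctuation-based shortening can be sketched as follows, assuming the position coordinates are character indices (the helper name and the restriction to commas and semicolons are assumptions, not from the patent):

```python
def shorten_first_target(text, organ_pos, attr_pos, attr_len, marks=",;"):
    """Keep only the clause between the punctuation mark closest to the
    attribute keyword and the attribute keyword itself."""
    span = text[organ_pos:attr_pos]
    cut = max(span.rfind(m) for m in marks)   # -1 when no mark is present
    start = organ_pos if cut == -1 else organ_pos + cut + 1
    return text[start:attr_pos + attr_len].strip()

report = "thyroid shape full, capsule intact, multiple nodules"
shortened = shorten_first_target(report, 0, report.index("nodules"), len("nodules"))
# shortened == "multiple nodules"
```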
In the embodiments of the present specification, step S240 (acquiring the second target text according to the third position coordinate and the fourth position coordinate) may further comprise:
S610: determining whether the fourth position coordinate is empty;
S620: if the fourth position coordinate is empty, determining that the second target text is empty; that is, if the attribute keyword is not found in the conclusive text, the second target text is empty.
S630: if the fourth position coordinate is not empty, determining whether the third position coordinate is empty;
S640: if the third position coordinate is not empty, acquiring the third position coordinate, the fourth position coordinate, and the text between them as the second target text. Specifically, a non-empty third position coordinate may be unique or not; in both cases the second target text is acquired in the same way as the first target text for the corresponding number of first position coordinates, and the details are not repeated here.
S650: if the third position coordinate is empty, supplementing the organ keyword into the head of the conclusive text. After the organ keyword is added, its third position coordinate in the conclusive text is set to [0], and the second target text is acquired from the third and fourth position coordinates. When the organ keyword does not appear in the conclusive text, the content of the conclusive text can be regarded as the conclusion about that organ's findings, so adding the organ keyword facilitates extraction of the second target text.
For example, suppose the conclusive text of an image report is: diffuse echo abnormality of the thyroid. The label is: thyroid nodule; the organ keyword is: thyroid; the attribute keyword is: nodule.
The third position coordinate is [0]; the fourth position coordinate is empty. That is, the attribute keyword does not appear in the conclusive text, so the second target text is empty.
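Steps S610 to S650 can be sketched in a few lines (a simplified sketch that takes the first occurrence of each keyword; the function name is illustrative, and the multi-occurrence forward-distance handling described earlier is omitted):

```python
def second_target_text(conclusive, organ, attr):
    """Return the second target text, or None when it is empty."""
    a = conclusive.find(attr)
    if a == -1:                          # S620: attribute keyword absent
        return None
    o = conclusive.find(organ)
    if o == -1:                          # S650: prepend the organ keyword
        conclusive = organ + " " + conclusive
        o, a = 0, conclusive.find(attr)
    return conclusive[o:a + len(attr)]   # S640: organ-to-attribute span
```

For the thyroid example above, `second_target_text("diffuse echo abnormality of thyroid", "thyroid", "nodule")` returns None, i.e. the second target text is empty.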
Further, as shown in fig. 4, step S150 (determining whether the label matches the image report according to the first target text and the second target text) further comprises:
S710: if the second target text is empty, determining whether the first target text is empty;
S720: if the first target text is empty, determining that the label does not match the image report;
that is, if the first target text and the second target text of an image report are both empty, it can be determined that the pre-extracted label does not match the image report (i.e., the label is inaccurate).
S730: if the first target text is not empty, further determining whether other organ keywords or negative words exist in the first target text;
S740: if no other organ keywords and no negative words exist, determining that the label matches the image report.
In the embodiments of the present specification, a negative word is a word or phrase expressing negation, such as "none", "not seen", "not detected", or "no".
Illustratively, suppose an image report whose descriptive text is: the thyroid glands on both sides are full in shape, the volume is increased, the isthmus is about 5 mm thick, the capsule is complete, the parenchymal echo is thickened, the distribution is uneven, and multiple nodules are visible. The conclusive text is: no obvious abnormality observed. The label is: thyroid nodule.
From the foregoing, the first target text is: multiple nodules are visible; and the second target text is empty.
The first target text contains neither other organ keywords nor negative words, so the label "thyroid nodule" is a correct label for this image report.
S750: if other organ keywords and/or negative words exist, determining that the label does not match the image report.
Illustratively, suppose the descriptive text of an image report is: the thyroid is normal in size and shape, its surface is smooth, the capsule is complete, the internal echo is medium with fine, weak, dense, evenly distributed light spots, and no obvious nodule is seen in the thyroid. The conclusive text is: diffuse echo abnormality of the thyroid. The label is: thyroid nodule.
Then, as discussed above, the first target text is: no obvious nodule is seen in the thyroid; and the second target text is empty.
Although the first target text contains no other organ keywords, it contains the negative word "not seen", so the label "thyroid nodule" is not correct for this image report.
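The negative-word check in the examples above amounts to simple lexicon matching (a sketch; the lexicon below is a hypothetical English stand-in for the terms listed in the text):

```python
NEGATIVE_WORDS = ("no ", "not ", "none", "unseen", "not seen", "undetected")

def has_negation(text: str) -> bool:
    """True when the target text contains a word expressing negation."""
    t = text.lower()
    return any(w in t for w in NEGATIVE_WORDS)
```

In practice a real implementation would also need to handle scope (a negation applying to one clause only), which this sketch ignores.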
The steps above determine the accuracy of the label only for the cases in which the second target text is empty. The image report label evaluation method provided by the embodiments of the present specification further comprises:
S810: if the second target text is not empty, determining whether other organ keywords exist in the second target text;
S820: if no other organ keywords exist in the second target text, determining whether a negative word exists in the second target text;
S821: if a negative word exists in the second target text, determining that the label does not match the image report;
S822: if no negative word exists in the second target text, determining that the label matches the image report.
Further, as shown in fig. 5, the method further comprises:
S830: if other organ keywords exist in the second target text, preferably determining whether a positive word expressing affirmative meaning, such as "both visible", exists in the second target text;
S831: if such a positive word exists, determining that the label matches the image report.
For example, suppose the label of an image report is: pulmonary nodule, and the second target text is: nodules are both visible in the lung and the mediastinum. Although the second target text contains, besides the organ keyword "lung", the other organ keyword "mediastinum", it also contains the positive word "both visible", so the label is determined to match the image report.
S840: otherwise, determining whether the first target text is empty;
S841: if the first target text is empty, determining that the label does not match the image report;
S850: if the first target text is not empty, determining whether other organ keywords or negative words exist in the first target text;
S851: if neither other organ keywords nor negative words exist in the first target text, determining that the label matches the image report;
S852: if other organ keywords and/or negative words exist, determining that the label does not match the image report.
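Putting steps S710 through S852 together, the whole match decision can be sketched as a single function (a sketch with hypothetical negation and positive-word lexicons; `None` stands for an empty target text):

```python
NEG = ("no ", "not ", "unseen")          # hypothetical negation lexicon
POS = ("both visible", "all visible")    # hypothetical positive-word lexicon

def label_matches(first_tt, second_tt, other_organs):
    """Decision flow of steps S710-S852."""
    def negated(t):
        return any(w in t.lower() for w in NEG)

    def clean(tt):                       # no other organ keyword, no negation
        return not any(o in tt for o in other_organs) and not negated(tt)

    if second_tt is None:                                   # S710-S750
        return first_tt is not None and clean(first_tt)
    if not any(o in second_tt for o in other_organs):       # S810-S822
        return not negated(second_tt)
    if any(p in second_tt for p in POS):                    # S830-S831
        return True
    return first_tt is not None and clean(first_tt)         # S840-S852
```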
Preferably, after step S150 (determining whether the label matches the image report according to the first target text and the second target text), the method further comprises:
eliminating the labels judged not to match the image report, and generating correct labels from the first target text and the second target text to replace the erroneous ones.
In addition, the recall rate and accuracy of the labels can be evaluated: a certain number of image reports and their labels are drawn (labels of various types may be drawn), label accuracy is verified by combining rule checking with a machine-learning algorithm, and the results are evaluated in terms of precision and recall. The results of two batches of sampled evaluation are shown in the table below.
[Table: results of the two-batch sampled evaluation, with columns A, B, and C for each batch]
Wherein, column A is the number of erroneous labels found by the existing manual review;
column B is the number of erroneous labels reported by the image report label evaluation method provided by the embodiments of the present specification;
column C is the number of labels in column B confirmed to be genuinely erroneous (that is, flagged labels that are also among the errors in column A);
precision: C / B × 100% for each batch;
recall: C / A × 100% for each batch;
original label accuracy: (1 - A / total number of labels) × 100%, i.e., the proportion of labels found correct by manual review among all labels of the batch;
new label accuracy: (1 - (A - C) / (total number of labels - B)) × 100%, i.e., after the labels judged erroneous by the evaluation method provided herein are removed (these include both genuinely erroneous labels and some actually accurate ones), the proportion of genuinely correct labels among those remaining in the batch.
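The four formulas above can be written out directly (a sketch; the example numbers in the comment are made up, since the table values are not reproduced here):

```python
def evaluate_batch(a, b, c, total):
    """a: erroneous labels found by manual review; b: erroneous labels
    flagged by the method; c: flagged labels that are truly erroneous;
    total: number of labels in the batch."""
    precision = c / b * 100
    recall = c / a * 100
    original_accuracy = (1 - a / total) * 100
    new_accuracy = (1 - (a - c) / (total - b)) * 100
    return precision, recall, original_accuracy, new_accuracy

# e.g. a hypothetical batch of 1000 labels with a=50, b=48, c=45 gives
# precision 93.75, recall 90.0, original accuracy 95.0, new accuracy ~99.5
```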
as can be seen from the data in the table, the method for evaluating tags in an image report provided in the embodiment of the present disclosure can significantly improve the accuracy of tag evaluation.
It should be noted that, in the image report label evaluation method provided by the embodiments of the present specification, before the descriptive text and the conclusive text are obtained through parsing, the method further comprises preprocessing the image report text. The preprocessing includes converting unformatted elements of the image report into formatted elements, removing repeated content, and removing spaces, line breaks, and the like. It also includes generating a synonym table for the organ keywords and a synonym table for the attribute keywords. The descriptive text, the conclusive text, and the organ and attribute keyword synonym tables may further be tabulated for recording and statistical analysis.
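A minimal sketch of such preprocessing (assumed behaviour, since the patent names the operations but gives no algorithm; the sentence-level de-duplication shown here is one plausible reading of removing repeated content):

```python
import re

def preprocess(report_text: str) -> str:
    """Collapse whitespace and line breaks, then drop repeated sentences."""
    text = re.sub(r"[ \t\r\n]+", " ", report_text).strip()
    seen, kept = set(), []
    for sent in filter(None, (s.strip() for s in text.split("."))):
        if sent not in seen:            # keep only the first occurrence
            seen.add(sent)
            kept.append(sent)
    return ". ".join(kept) + ("." if kept else "")
```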
In summary, the image report label evaluation method provided by the embodiments of the present disclosure addresses the prior-art reliance on manual review to check whether the labels of image reports are accurate, and improves both the reliability and the efficiency of evaluating label accuracy.
As shown in fig. 6, an embodiment of the present disclosure further provides an image report tag evaluation apparatus, including:
the first acquisition module 10 is configured to acquire an image report to be evaluated and a tag thereof, where the tag is extracted from the image report in advance;
a first parsing module 20, configured to parse the image report to obtain a descriptive text and a conclusive text;
a second parsing module 30, configured to parse the tag to obtain an organ keyword and an attribute keyword;
a second obtaining module 40, configured to obtain, according to the organ keyword and the attribute keyword, a first target text from the descriptive text and a second target text from the conclusive text;
and the judging module 50 is configured to judge whether the label matches the image report according to the first target text and the second target text.
The advantages achieved by the device provided by the embodiment of the specification are consistent with those achieved by the method, and are not described in detail herein.
As shown in fig. 7, the computer device 702 provided by the embodiments herein may include one or more processors 704, such as one or more central processing units (CPUs), each of which may implement one or more hardware threads. The computer device 702 may also include any memory 706 for storing any kind of information, such as code, settings, and data. For example, and without limitation, the memory 706 may include any one or more of the following, in any combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, and so on. More generally, any memory may use any technology to store information, may provide volatile or non-volatile retention of information, and may represent a fixed or removable component of the computer device 702. When the processor 704 executes associated instructions stored in any memory or combination of memories, the computer device 702 can perform any of the operations of those instructions. The computer device 702 also includes one or more drive mechanisms 708, such as a hard disk drive mechanism or an optical disk drive mechanism, for interacting with any memory.
The computer device 702 can also include an input/output module 710 (I/O) for receiving various inputs (via an input device 712) and for providing various outputs (via an output device 714). One particular output mechanism may include a presentation device 716 and an associated graphical user interface (GUI) 718. In other embodiments, the input/output module 710 (I/O), input device 712, and output device 714 may be omitted, for example when the computer device serves only as a node in a network. The computer device 702 can also include one or more network interfaces 720 for exchanging data with other devices via one or more communication links 722. One or more communication buses 724 couple the above-described components together.
Communication link 722 may be implemented in any manner, such as over a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communication link 722 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
Corresponding to the method in fig. 1, the embodiments herein also provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the above-described method.
Embodiments herein also provide computer readable instructions, wherein a program therein causes a processor to perform the method as shown in fig. 1 when the instructions are executed by the processor.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" is only one kind of association relation describing an associated object, meaning that three kinds of relations may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional units in the embodiments herein may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments of this document are explained herein using specific examples, which are presented only to aid in understanding the methods and their core concepts; meanwhile, for the general technical personnel in the field, according to the idea of this document, there may be changes in the concrete implementation and the application scope, in summary, this description should not be understood as the limitation of this document.

Claims (13)

1. A method for evaluating a label of an image report, comprising:
acquiring an image report to be evaluated and a label thereof, wherein the label is extracted from the image report in advance;
analyzing the image report to obtain a descriptive text and a conclusive text;
analyzing the label to obtain an organ keyword and an attribute keyword;
acquiring a first target text from the descriptive text and a second target text from the conclusive text according to the organ keywords and the attribute keywords;
and judging whether the label is matched with the image report or not according to the first target text and the second target text.
2. The method of claim 1, wherein obtaining a first target text from the descriptive text and a second target text from the conclusive text according to the organ keywords and the attribute keywords comprises:
respectively acquiring a first position coordinate and a second position coordinate of the organ keyword and the attribute keyword in the descriptive text;
acquiring the first target text according to the first position coordinate and the second position coordinate;
respectively obtaining a third position coordinate and a fourth position coordinate of the organ keyword and the attribute keyword in the conclusive text;
and acquiring the second target text according to the third position coordinate and the fourth position coordinate.
3. The method of claim 2, wherein obtaining the first target text according to the first position coordinate and the second position coordinate comprises:
judging whether the first position coordinate and the second position coordinate are unique or not;
if the first position coordinate and the second position coordinate are unique, acquiring the first position coordinate, the second position coordinate and a text between the first position coordinate and the second position coordinate as a first target text;
if the number of the first position coordinates and/or the second position coordinates is larger than or equal to two, acquiring the forward distance between each first position coordinate and each second position coordinate, selecting a group of first position coordinates and second position coordinates with the smallest forward distance, and taking the text between the first position coordinates and the second position coordinates as a first target text.
4. The method of claim 3, wherein prior to determining whether the first location coordinate is unique, further comprising:
judging whether the first position coordinate is empty or not;
if the first position coordinate is empty, judging whether other organ keywords exist in the descriptive text or not;
if other organ keywords exist, judging that the label does not match the image report;
if not, supplementing the organ keyword into the head of the descriptive text.
5. The method of claim 4, wherein prior to determining whether the first location coordinate is empty, the method further comprises:
judging whether the second position coordinate is empty or not;
and if the second position coordinate is null, determining that the first target text is null.
6. The method of claim 3, wherein obtaining the first target text according to the first position coordinate and the second position coordinate is further performed by:
judging whether a punctuation mark exists between the first position coordinate and the second position coordinate;
and if a punctuation mark exists, acquiring a text between the punctuation mark and the second position coordinate as the first target text.
7. The method of claim 2, wherein obtaining the second target text according to the third position coordinate and the fourth position coordinate comprises:
judging whether the fourth position coordinate is empty or not;
if the fourth position coordinate is null, determining that the second target text is null;
if the fourth position coordinate is not null, judging whether the third position coordinate is null or not;
if the third position coordinate is not null, acquiring the third position coordinate, the fourth position coordinate and a text between the third position coordinate and the fourth position coordinate as a second target text;
and if the third position coordinate is null, supplementing the organ keywords into the head of the conclusive text.
8. The method of claim 1, wherein determining whether the tag matches the visual report based on the first target text and the second target text comprises:
if the second target text is empty, judging whether the first target text is empty;
if the first target text is empty, determining that the label is not matched with the image report;
if the first target text is not empty, judging whether other organ keywords and negative words exist in the first target text;
if no other organ keywords exist and no negative words exist, judging that the label is matched with the image report;
and if other organ keywords and/or negative words exist, judging that the label does not match with the image report.
9. The method of claim 8, further comprising:
if the second target text is not empty, judging whether other organ keywords exist in the second target text;
if no other organ keywords exist in the second target text, judging whether a negative word exists in the second target text;
if a negative word exists in the second target text, judging that the label is not matched with the image report;
and if no negative word exists in the second target text, judging that the label is matched with the image report.
10. The method of claim 9, further comprising:
if the second target text contains other organ keywords, judging whether the first target text is empty;
if the first target text is empty, determining that the label is not matched with the image report;
if the first target text is not empty, judging whether other organ keywords and negative words exist in the first target text;
if no other organ keywords exist in the first target text and no negative words exist in the first target text, judging that the label is matched with the image report;
and if other organ keywords and/or negative words exist, judging that the label does not match with the image report.
11. An image report tag evaluation apparatus, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an image report to be evaluated and a label thereof, and the label is extracted from the image report in advance;
the first analysis module is used for analyzing the image report to obtain a descriptive text and a conclusive text;
the second analysis module is used for analyzing the label to obtain an organ keyword and an attribute keyword;
a second obtaining module, configured to obtain a first target text from the descriptive text and a second target text from the conclusive text according to the organ keyword and the attribute keyword;
and the judging module is used for judging whether the label is matched with the image report or not according to the first target text and the second target text.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 10.
CN202110588164.0A 2021-05-28 2021-05-28 Image report label evaluation method and device, computer equipment and storage medium Pending CN113297836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110588164.0A CN113297836A (en) 2021-05-28 2021-05-28 Image report label evaluation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110588164.0A CN113297836A (en) 2021-05-28 2021-05-28 Image report label evaluation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113297836A true CN113297836A (en) 2021-08-24

Family

ID=77325744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110588164.0A Pending CN113297836A (en) 2021-05-28 2021-05-28 Image report label evaluation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113297836A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034204A (en) * 2022-05-12 2022-09-09 浙江大学 Method for generating structured medical text, computer device, storage medium and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160335493A1 (en) * 2015-05-15 2016-11-17 Jichuan Zheng Method, apparatus, and non-transitory computer-readable storage medium for matching text to images
CN107273657A (en) * 2017-05-15 2017-10-20 慧影医疗科技(北京)有限公司 The generation method and storage device of diagnostic imaging picture and text report
CN110097329A (en) * 2019-03-16 2019-08-06 平安科技(深圳)有限公司 Signal auditing method, device, equipment and computer readable storage medium
CN110364236A (en) * 2019-07-22 2019-10-22 卫宁健康科技集团股份有限公司 Intelligent follow-up method, system, equipment and the storage medium of irradiation image report
CN110706815A (en) * 2019-11-26 2020-01-17 北京推想科技有限公司 Evaluation method and device of image report and electronic equipment
CN111640480A (en) * 2020-05-21 2020-09-08 上海联影智能医疗科技有限公司 Medical report generation method, computer device, and storage medium
CN112541066A (en) * 2020-12-11 2021-03-23 清华大学 Text-structured-based medical and technical report detection method and related equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TIAN, Chiyuan; CHEN, Dehua; WANG, Mei; LE, Jiajin: "A Structured Processing Method for Pathology Reports Based on Dependency Syntax Parsing", Journal of Computer Research and Development (计算机研究与发展), no. 12, 15 December 2016 (2016-12-15) *


Similar Documents

Publication Publication Date Title
CN108475538B (en) Structured discovery objects for integrating third party applications in an image interpretation workflow
US20220044812A1 (en) Automated generation of structured patient data record
US10628476B2 (en) Information processing apparatus, information processing method, information processing system, and storage medium
CN109478419B (en) Automatic identification of salient discovery codes in structured and narrative reports
CN113243033B (en) Integrated diagnostic system and method
JP2017509946A (en) Context-dependent medical data entry system
CN115516571A (en) Imaging research report generation system
CN110827941A (en) Electronic medical record information correction method and system
WO2021073157A1 (en) Image management display method and apparatus, computer device, and storage medium
CN111090641A (en) Data processing method and device, electronic equipment and storage medium
US20180032676A1 (en) Method and system for context-sensitive assessment of clinical findings
EP3440577A1 (en) Automated contextual determination of icd code relevance for ranking and efficient consumption
CN112541066A (en) Text-structuring-based medical technical report detection method and related device
US10235360B2 (en) Generation of pictorial reporting diagrams of lesions in anatomical structures
CN113297836A (en) Image report label evaluation method and device, computer equipment and storage medium
US20230125321A1 (en) User-guided structured document modeling
US20140195266A1 (en) Techniques to improve accuracy of a medical report relating to medical imaging study
WO2023274599A1 (en) Methods and systems for automated follow-up reading of medical image data
US20140195267A1 (en) Techniques to improve accuracy of a medical report relating to a medical imaging study
CN117633209A (en) Method and system for patient information summary
Burse et al. Facilitating SNOMED-CT Template Creation by Targeting Stopwords.
US20200118660A1 (en) Summarization of clinical documents with end points thereof
WO2023083647A1 (en) Systems and methods for extracting diagnostic and resolution procedures from heterogenous information sources
CN114998660A (en) Global optimal matching-based clinical diagnosis report multimedia display method and device
CN113241153A (en) Image loading method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination